Week 10 – Post Response

Reading Bret Victor’s “A Brief Rant on the Future of Interaction Design” feels like a wake-up call. His critique of modern interfaces—what he calls “Pictures Under Glass”—is undeniable. We have taken the incredible dexterity of the human hand, capable of thousands of distinct movements, and reduced it to a single, numb motion: the swipe.

Victor argues that the future should be about “Dynamic Media”—screens that can physically morph, allowing us to feel buttons and textures. While I agree with his diagnosis of the problem (we are disconnected from the physical world), I disagree with his solution. I don’t think the future is a morphing screen.

In my opinion, smart glasses are the next big thing.

The End of the Swipe

Victor’s main gripe is that touchscreens blind our hands. We have to look at the screen to know where the buttons are because we can’t feel them.

Smart glasses solve this, but not by adding texture. They solve it by freeing our hands entirely. With the advanced hand-tracking and depth sensors we are seeing in emerging tech, the “swipe” becomes obsolete. Instead, we can return to the real-life gestures Victor mourns the loss of.

If I want to turn a dial, I can mime turning a dial in the air. If I want to grab an object, I just grab it. The interface isn’t trapped under a sheet of glass anymore; it is overlaid onto the real world. We can use our full range of motor skills to manipulate digital objects as if they were physical ones.

24/7 Access, Zero Friction

The other massive advantage of the smart-glasses form factor is integration. Victor worries that interfaces are separating us from our environment. But if the interface is a pair of glasses that look like normal eyewear, the digital world becomes a seamless layer on top of the physical one.

We could have access to the digital world 24/7, completely unnoticeable to the people around us. No more looking down at a phone, hunching over, or disengaging from a conversation to check a notification. The technology recedes into the background, becoming a true extension of the senses rather than a “tool” you have to pick up and hold.

The Futility of Prediction

However, reading through Victor’s “Responses” page reminds me that we should be humble with our predictions.

Victor wrote his rant in 2011, convinced that the iPad was just a transitional phase like black-and-white photography. Yet, over a decade later, we are still swiping on glass.

When we look back at how people in the 1900s predicted the year 2000, they got almost everything wrong. They imagined flying firefighters and personal blimps, but they completely missed the internet, microchips, and AI. We tend to predict the future by exaggerating the present—Victor predicted better physical buttons because he liked physical tools. I am predicting smart glasses because I like visual overlays.

Ultimately, nobody knows what the “next big thing” actually is until it arrives. We can analyze and debate, but in the end, we just have to wait and see.

Week 9 – Simon Says

Description

For this assignment, I created a “Simon Says” style memory game. The goal of the game is to memorize a sequence of colors displayed on the main RGB LED and repeat the pattern back using four push buttons.

To fulfill the assignment requirements, I incorporated both digital and analog controls to affect the game’s behavior:

  1. Digital Inputs (Switches): Four push buttons act as the game controller. These are used to start the game and input the color sequence.

  2. Analog Input (Sensor): A potentiometer is used as a difficulty selector. Before the game starts, the Arduino reads its analog value to determine the speed of the flashes and the length of the sequence.

  3. Outputs:

    • RGB LED: Displays the randomized game sequence.

    • Feedback LEDs: Two separate LEDs (Green and Red) indicate if the player won or lost the round.

Schematic

Here is the hand-drawn wiring diagram for the circuit. It details the connections for the RGB LED (Pins 11-13), the Feedback LEDs (Pins 9-10), the Buttons (Pins 2-5), and the Potentiometer (Pin A1).

Logic & Interaction

The system waits for the yellow button to be pressed to start. Once triggered, the Arduino reads the potentiometer.

  • If the potentiometer is turned one way, the game is “Easy” (slower flashes, shorter sequence).

  • If turned the other way, the game becomes “Hard” (rapid flashes, longer sequence).

The Arduino then generates a random pattern and displays it on the RGB LED. The player must press the buttons in the correct order. If they succeed, the dedicated green feedback LED flashes; if they make a mistake, the red feedback LED flashes.

Gameplay Video

Code

Below is the Arduino code used for this project. 

// RGB LED Pins
const int RGB_RED_PIN = 13;
const int RGB_GREEN_PIN = 12;
const int RGB_BLUE_PIN = 11;

// Feedback LED Pins
const int FEEDBACK_RED_PIN = 9;
const int FEEDBACK_GREEN_PIN = 10;

// Button Pins
const int BUTTON_YELLOW_PIN = 2;
const int BUTTON_BLUE_PIN = 3;
const int BUTTON_GREEN_PIN = 4;
const int BUTTON_RED_PIN = 5;

// Potentiometer Pin
const int POT_PIN = A1;

// Game Settings
const int BASE_SEQUENCE_LENGTH = 3;
const int MAX_SEQUENCE_LENGTH = 12;
const int BASE_LIGHT_DISPLAY_TIME = 1000;
const int FAST_LIGHT_DISPLAY_TIME = 100;
const int PAUSE_BETWEEN_LIGHTS = 50;
const int FEEDBACK_BLINK_TIME = 200;
const int FEEDBACK_BLINK_COUNT = 3;

// Game State
int gameSequence[MAX_SEQUENCE_LENGTH];
int currentSequenceLength = BASE_SEQUENCE_LENGTH;
int currentDisplayTime = BASE_LIGHT_DISPLAY_TIME;

void setup() {
  // Initialize serial communication
  Serial.begin(9600);
  Serial.println("Simon Game Started!");

  // Configure LED pins
  pinMode(RGB_RED_PIN, OUTPUT);
  pinMode(RGB_GREEN_PIN, OUTPUT);
  pinMode(RGB_BLUE_PIN, OUTPUT);
  pinMode(FEEDBACK_RED_PIN, OUTPUT);
  pinMode(FEEDBACK_GREEN_PIN, OUTPUT);

  // Configure button pins
  pinMode(BUTTON_YELLOW_PIN, INPUT_PULLUP);
  pinMode(BUTTON_BLUE_PIN, INPUT_PULLUP);
  pinMode(BUTTON_GREEN_PIN, INPUT_PULLUP);
  pinMode(BUTTON_RED_PIN, INPUT_PULLUP);

  // Reset LEDs
  turnAllRGBOff();
  digitalWrite(FEEDBACK_RED_PIN, LOW);
  digitalWrite(FEEDBACK_GREEN_PIN, LOW);

  // Seed random generator
  randomSeed(analogRead(A0));
  
  Serial.println("Waiting for start...");
}

void loop() {
  // Wait for start button 
  while (digitalRead(BUTTON_YELLOW_PIN) == HIGH) {
    digitalWrite(FEEDBACK_RED_PIN, HIGH);
    digitalWrite(FEEDBACK_GREEN_PIN, LOW);
    delay(150);
    digitalWrite(FEEDBACK_RED_PIN, LOW);
    digitalWrite(FEEDBACK_GREEN_PIN, HIGH);
    delay(150);
    digitalWrite(FEEDBACK_GREEN_PIN, LOW);
    delay(100);
  }
  
  digitalWrite(FEEDBACK_RED_PIN, LOW);
  digitalWrite(FEEDBACK_GREEN_PIN, LOW);

  Serial.println("Game Starting...");
  delay(200);
  
  // Update difficulty based on potentiometer
  updateGamePace();
  
  // Generate and display sequence
  generateSequence();
  printSequence();
  displaySequence();

  // Process player input
  bool correct = getUserInput();

  // Provide result feedback
  if (correct) {
    Serial.println("Correct!");
    feedbackBlink(FEEDBACK_GREEN_PIN, FEEDBACK_BLINK_COUNT, FEEDBACK_BLINK_TIME);
  } else {
    Serial.println("Incorrect!");
    feedbackBlink(FEEDBACK_RED_PIN, FEEDBACK_BLINK_COUNT, FEEDBACK_BLINK_TIME);
  }
}

// Adjust sequence length and speed based on potentiometer value
void updateGamePace() {
  int potValue = analogRead(POT_PIN);

  currentSequenceLength = map(potValue, 0, 1023, BASE_SEQUENCE_LENGTH, MAX_SEQUENCE_LENGTH);
  currentSequenceLength = constrain(currentSequenceLength, BASE_SEQUENCE_LENGTH, MAX_SEQUENCE_LENGTH);

  currentDisplayTime = map(potValue, 0, 1023, BASE_LIGHT_DISPLAY_TIME, FAST_LIGHT_DISPLAY_TIME);
  currentDisplayTime = constrain(currentDisplayTime, FAST_LIGHT_DISPLAY_TIME, BASE_LIGHT_DISPLAY_TIME);
}

// Fill sequence array with random colors
void generateSequence() {
  for (int i = 0; i < currentSequenceLength; i++) {
    gameSequence[i] = random(4);
  }
}

// Output current sequence to serial monitor for debugging
void printSequence() {
  Serial.print("Sequence: [");
  for (int i = 0; i < currentSequenceLength; i++) {
    Serial.print(gameSequence[i]);
    if (i < currentSequenceLength - 1) Serial.print(", ");
  }
  Serial.println("]");
}

// Play back the sequence on the RGB LED
void displaySequence() {
  for (int i = 0; i < currentSequenceLength; i++) {
    switch (gameSequence[i]) {
      case 0: turnOnRGBRed(); break;
      case 1: turnOnRGBGreen(); break;
      case 2: turnOnRGBBlue(); break;
      case 3: turnOnRGBYellow(); break;
    }
    delay(currentDisplayTime);
    turnAllRGBOff();
    delay(PAUSE_BETWEEN_LIGHTS);
  }
}

// Capture player input and verify against sequence
bool getUserInput() {
  int inputCount = 0;

  while (inputCount < currentSequenceLength) {
    int pressedButton = readButtons();
    
    if (pressedButton != -1) {
      // Visual feedback for button press
      switch (pressedButton) {
        case 0: turnOnRGBRed(); break;
        case 1: turnOnRGBGreen(); break;
        case 2: turnOnRGBBlue(); break;
        case 3: turnOnRGBYellow(); break;
      }
      delay(100);
      turnAllRGBOff();
      delay(50);

      // Check against expected sequence
      if (pressedButton != gameSequence[inputCount]) {
        return false;
      }
      
      inputCount++;
      
      // Wait for button release, then pause briefly to ride out contact bounce
      while(digitalRead(BUTTON_RED_PIN) == LOW || digitalRead(BUTTON_GREEN_PIN) == LOW ||
            digitalRead(BUTTON_BLUE_PIN) == LOW || digitalRead(BUTTON_YELLOW_PIN) == LOW);
      delay(20);
    }
  }
  return true;
}

// Return index of pressed button or -1 if none
int readButtons() {
  if (digitalRead(BUTTON_RED_PIN) == LOW) return 0;
  if (digitalRead(BUTTON_GREEN_PIN) == LOW) return 1;
  if (digitalRead(BUTTON_BLUE_PIN) == LOW) return 2;
  if (digitalRead(BUTTON_YELLOW_PIN) == LOW) return 3;
  return -1;
}

// Blink specified LED for feedback
void feedbackBlink(int pin, int count, int blinkTime) {
  for (int i = 0; i < count; i++) {
    digitalWrite(pin, HIGH);
    delay(blinkTime);
    digitalWrite(pin, LOW);
    delay(blinkTime);
  }
}

// --- RGB Control Helpers ---

void turnAllRGBOff() {
  digitalWrite(RGB_RED_PIN, LOW);
  digitalWrite(RGB_GREEN_PIN, LOW);
  digitalWrite(RGB_BLUE_PIN, LOW);
}

void turnOnRGBRed() {
  turnAllRGBOff();
  digitalWrite(RGB_RED_PIN, HIGH);
}

void turnOnRGBGreen() {
  turnAllRGBOff();
  digitalWrite(RGB_GREEN_PIN, HIGH);
}

void turnOnRGBBlue() {
  turnAllRGBOff();
  digitalWrite(RGB_BLUE_PIN, HIGH);
}

void turnOnRGBYellow() {
  turnAllRGBOff();
  digitalWrite(RGB_RED_PIN, HIGH);
  digitalWrite(RGB_GREEN_PIN, HIGH);
}

 

Week 9 – Post Response

Good Design Needs No Explanation

This week we read “Physical Computing’s Greatest Hits and Misses” and “Making Interactive Art: Set the Stage, Then Shut Up and Listen.” These readings made me think about how we interact with objects.

In my opinion, the main mark of a brilliant design is that it is intuitive. It should be unnoticeable. It should not require an instruction manual. I am a self-proclaimed kinesthetic learner, which means I learn by doing. I despise manuals. When I get something new, I like to jump in headfirst and discover how it works.

The reading on “Greatest Hits and Misses” talks about reliability and clarity. However, modern tech companies often ignore this. They prioritize simple, sleek designs that look futuristic, and that often sacrifices intuitiveness.

For example, you might have to double-tap the back of a phone to take a screenshot, or triple-tap to turn on the flashlight. These are hidden secrets, not clear controls.

I have a personal example of this. I used to go into the settings menu on my phone every time I wanted to turn on noise cancellation on my AirPods. Eventually, someone pointed out that I can just hold the stem of the AirPod to change the mode. I felt silly, but I shouldn’t have. The design gave me no clue that the feature existed.

Of course, there are exceptions. An airplane cockpit is very complex, with hundreds of buttons. It can afford that complexity because knowledge is a prerequisite: you have to go to flight school to fly a plane.

But for everyday items or art, we should not need flight school.

This connects to the second reading, “Set the Stage, Then Shut Up and Listen.” This text argues that an artist should not stand next to their work and explain it.

If you let a user interact with your art without context, and they do it wrong, that is important data. It does not mean the user is stupid. It means the design is flawed or unintuitive.

As a designer, you have to set the stage. You have to give clues through the shape and feel of the object. If you have to write a manual for your art installation, you have failed to make a connection. Good interaction is about letting the user figure it out with their own hands.

Week 8 – Post Response

Design is More Than Function: Why Feeling Right Matters

When we think about technology, it’s easy to focus only on what it does. But good design isn’t just about function; it’s about how it feels. The readings “Emotion & Design: Attractive Things Work Better” and “Her Code Got Humans on the Moon” reminded me that emotion, perception, and clever design tricks shape how we interact with technology every day.

Take something as simple as the volume bar on your phone. On paper, 50% should be half the sound. But our ears don’t perceive sound linearly; they perceive it roughly logarithmically. This means a volume slider isn’t just a slider; it’s an emotional experience. Apple and Google realized this and adjusted the curves so that the volume feels smooth and natural to our ears. YouTube, by contrast, leaves most of the audible change in the last 10% of the slider, which makes it feel unpredictable and frustrating.
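As a rough illustration (my own sketch, not how Apple or Google actually implement their sliders), a perceptually even volume control maps the slider position to gain in decibels instead of scaling the amplitude directly:

#include <cmath>
#include <cstdio>

// Hypothetical illustration: map a slider position (0.0-1.0) to an amplitude gain.
// A linear mapping returns the position directly; the perceptual mapping spaces
// the steps evenly in decibels, which is closer to how loudness is actually heard.
double linearGain(double slider) {
  return slider;                          // 0.5 -> half the amplitude, but not "half as loud"
}

double perceptualGain(double slider, double rangeDb = 60.0) {
  if (slider <= 0.0) return 0.0;          // mute at the bottom of the slider
  double db = (slider - 1.0) * rangeDb;   // 0.0 -> -60 dB, 1.0 -> 0 dB
  return std::pow(10.0, db / 20.0);       // convert decibels back to an amplitude factor
}

int main() {
  for (double s = 0.0; s <= 1.0; s += 0.25) {
    std::printf("slider %.2f  linear %.3f  perceptual %.3f\n",
                s, linearGain(s), perceptualGain(s));
  }
}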

Some designs don’t actually change the outcome but give the illusion of control. Elevators are a perfect example. About 80% of “close door” buttons don’t do anything; the doors close automatically either way, but the button makes users feel empowered. That illusion of control creates a better experience, even if it’s technically unnecessary. This is exactly the kind of trick designers use to shape our perception and emotions.

Another example from the transcript is the mirrors in the Nintendo Switch Joy-Cons. The solution was low-tech and simple: a small mirror to reflect the LED lights. It’s deceptively simple but brilliantly effective, and it shows that design isn’t always about complexity; it’s about solving the human problem in a way that feels right.

Even everyday software hides these design decisions. Progress bars, icons, and animations are not just decorative; they are carefully crafted to keep users calm, informed, and engaged. Google tested 50 shades of blue to see which one users clicked most, and Apple uses subtle motion blur to make screens feel smoother. These small touches are invisible when done well, but they make a huge difference in the user experience.

Week 5 – Post Response

Computer vision works in a very different way from how people see. We read situations without thinking. We notice feelings, intent, and small details that are not written anywhere. A computer has none of this. It only receives a grid of values and waits for someone to tell it what to look for. Nothing has meaning until a developer builds rules or trains a model to reach a goal.

There are simple methods that can guide the system. Finding outlines helps the computer separate one shape from another. Tracking color ranges can point it toward a moving object. Contours make it easier to follow form. With the right setup and some training, the computer can get close to noticing what matters in a scene, even if it never truly understands it.

This matters a lot in interactive art. Vision gives the work a way to notice the audience and respond. A camera can capture movement, distance, and presence at the same time, which makes it one of the strongest sensors an artist can use. At the same time, people take in art through many senses. We see, listen, and feel. Computer vision tries to narrow that gap by giving machines a way to observe instead of just wait. This opens new creative space, but it also raises questions about how much the artwork should watch and how that changes the experience for the viewer.

Week 3 – Post Response

Rethinking Interactivity

After reading Chapter 1 of The Art of Interactive Design by Chris Crawford, I started to see how we often talk about interactivity as if it is all or nothing. You either have it or you do not. In my opinion, it is not that simple. There are levels to it.

Crawford draws a clear line between reaction and interaction. He says that a reactive system only answers a user action with a single output. An interactive system listens, responds, and listens again. It becomes a loop between the user and the system. I agree with this idea, but I still think even simple reactions sit on the same scale, only at the very bottom.

A fridge light turning on when the door opens is a good example. It changes based on what the user does, so it is not completely passive. Still, it does not take in new information after that first moment. There is no ongoing exchange. It is interactive only in a very weak sense.

What separates the two is the direction of the flow. A reactive system goes from input to output and then stops. An interactive system keeps the conversation going. It listens to the user, gives feedback, and then listens again. The more complex the loop becomes, the higher the interactivity feels.

Thinking of interactivity as a spectrum makes design more interesting. It helps us see why some systems feel alive while others feel flat. It also shows that even simple actions can be part of a larger idea when we look at them in context.

Week 2 – Post Response

Chance Operations and Ordered Randomness

Watching Casey Reas talk about chance operations changes how we see art. It is not full control. It is not total randomness. It is something in between.

Determinism is boring because everything is already decided. There is no surprise. Chaos is also boring because nothing has meaning when anything can happen.

Ordered randomness is different. The artist sets simple rules. The system creates the result. The final work is not chosen by hand. It grows on its own. It feels natural and alive.

I think the best balance is when the user knows how their input affects the system, but not the exact result. If the output is predictable with total certainty, the art becomes flat. On the other hand, if the output is completely random, it feels like TV noise. There is nothing to hold on to.

We see this balance in nature. Small rules create big patterns. Clouds form. Birds move in groups. No one controls every part, yet it still makes sense.

Chance operations show that we do not need to choose between control and chaos. We can guide the process without forcing it. When we create space for the unexpected, the outcome becomes more interesting.

Week 12: Commit to Final Proposal

For my final project, I will create a physically interactive Tamagotchi. It is a mini robot creature that the user can pat, touch, spin, and talk to, but instead of being cute and grateful, it responds in unexpectedly pessimistic and slightly hostile ways. The project combines physical interaction through Arduino sensors with a character interface and dialogue system in p5. The concept is inspired by classic Tamagotchi toys, small handheld digital pets that demand constant attention and reward care with affection. In contrast, my project imagines a near-future world in which artificial beings no longer need, or even want, human caretaking. This pet has a personality shaped by human environmental destruction and techno-optimism gone wrong, and it is deeply unimpressed by humans.

Physically, the project will take the form of a small creature-like object mounted on a potentiometer so that the whole robot can be spun like a tiny rotating idol. The main interactions happen through touch and rotation. The user can pat or press the creature via a force sensor embedded in its body, and they can spin it to face different directions using the potentiometer as a rotation input. Inside the body, the creature uses NeoPixels to show changing emotions through light patterns, while on the computer a p5 interface displays a larger animated avatar of the creature and shows its dialogue text.

Unlike typical virtual pets that reward attention with affection and gratitude, this creature is intentionally negative and resistant. When the user pats it, it might snap: “Go away, you silly little human.” When the user spins it so that it faces away, it might respond: “Correct. I prefer not to see you.” If the user keeps spinning it quickly, the creature may complain: “Dizzy. This is abuse, not affection.” When the robot is left facing a corner with its back turned to the user, it may mutter: “Finally. A view without humans in it.” The rotation angle therefore becomes a key part of the interaction design. Different angular positions correspond to different stances or modes of the creature, and those modes drive both the NeoPixel emotion effects on the physical object and the dialogue responses on the p5 screen.

On the Arduino side, the project relies on two main inputs that are sent to p5 over serial communication. The first is a force sensor used as a pat or squeeze input. The Arduino continuously reads the analog value from the FSR, maps the raw reading to a smaller range such as 0 to 10 representing pat intensity, and sends this information to p5 in the form of tagged serial messages like “PAT:<value>”. A reading of “PAT:0” would mean no touch, while something like “PAT:9” would correspond to an aggressive squeeze. The second input is the potentiometer that encodes the robot’s rotation angle. The creature is physically attached to the shaft of the potentiometer so that when the user spins the creature, they are directly rotating the pot. The Arduino reads the analog value from the potentiometer, originally in the range 0 to 1023, and maps it either to a normalized angle between 0 and 359 degrees or to a set of discrete orientation zones. For example, Zone 0 can represent facing the user, Zone 1 slightly turned to the left, Zone 2 slightly turned to the right, and Zone 3 completely turned away with its back to the user. The Arduino then sends periodic messages to p5 such as “ANGLE:<value>” for the continuous angle or “ZONE:<id>” for the discrete orientation. As a stretch feature, the Arduino can also compare the current angle with the previous reading to estimate spin speed and send additional messages such as “SPIN:FAST” or “SPIN:SLOW” if there is enough time to implement this.
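As a rough sketch of this input side (the pin choices, zone boundaries, and update rate below are placeholder assumptions, not final values), the Arduino loop could look something like this:

// Sketch of the Arduino input side: read the FSR and the potentiometer and send
// tagged messages ("PAT:<value>", "ANGLE:<value>", "ZONE:<id>") to p5 over serial.
// Pin numbers and zone boundaries are placeholders.
const int FSR_PIN = A0;   // force sensor for the pat/squeeze input
const int POT_PIN = A1;   // potentiometer attached to the creature's rotation

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Pat intensity: raw 0-1023 reading mapped down to 0-10
  int fsrRaw = analogRead(FSR_PIN);
  int patIntensity = map(fsrRaw, 0, 1023, 0, 10);
  Serial.print("PAT:");
  Serial.println(patIntensity);

  // Rotation: raw 0-1023 reading mapped to 0-359 degrees
  int potRaw = analogRead(POT_PIN);
  int angle = map(potRaw, 0, 1023, 0, 359);
  Serial.print("ANGLE:");
  Serial.println(angle);

  // Discrete orientation zones
  int zone;
  if (angle < 45 || angle >= 315)  zone = 0;   // facing the user
  else if (angle < 135)            zone = 1;   // turned slightly left
  else if (angle >= 225)           zone = 2;   // turned slightly right
  else                             zone = 3;   // back turned to the user
  Serial.print("ZONE:");
  Serial.println(zone);

  delay(100);   // roughly ten updates per second
}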

The Arduino is also in charge of several outputs, primarily the NeoPixels that visualize the creature’s emotional state. The NeoPixels are used to display different moods and orientations through color and animation patterns. The Arduino listens for commands coming from p5, such as “MOOD:ANGRY”, “MOOD:BORED”, “MOOD:AMUSED” or “MOOD:DISGUSTED”, and possibly additional tags like “DIR:FRONT”, “DIR:LEFT”, “DIR:RIGHT” and “DIR:BACK” that encode the direction it should appear to be facing. For each combination of mood and orientation, the Arduino selects a specific pattern from a small internal lookup table of NeoPixel animations. For instance, when the creature is facing the user and annoyed, the LEDs might show sharp, high-contrast flashing patterns. When it is turned away, the colors might become dim and cold to signal that it is ignoring the user. When the user spins it quickly, it might display chaotic, flickering lights to suggest dizziness and disturbance. In this way, the Arduino acts as the body-level controller that turns high-level mood messages from p5 into concrete light and motion behaviors on the physical pet.
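A minimal sketch of that command-parsing idea, assuming the Adafruit_NeoPixel library and placeholder pin and pixel counts (the real lookup table of animations will be richer than this), might look like:

#include <Adafruit_NeoPixel.h>

// Placeholder hardware setup: 8 NeoPixels on pin 6.
const int NEOPIXEL_PIN = 6;
const int NUM_PIXELS = 8;
Adafruit_NeoPixel pixels(NUM_PIXELS, NEOPIXEL_PIN, NEO_GRB + NEO_KHZ800);

String currentMood = "BORED";   // last mood label received from p5

void setup() {
  Serial.begin(9600);
  pixels.begin();
  pixels.show();                // start with all pixels off
}

void loop() {
  // Read newline-terminated commands from p5, e.g. "MOOD:ANGRY" or "DIR:BACK"
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    line.trim();
    if (line.startsWith("MOOD:")) {
      currentMood = line.substring(5);
    }
    // A "DIR:" branch would adjust which pixels are lit or how bright they are.
  }

  // Tiny stand-in for the mood-to-pattern lookup table
  uint32_t color;
  if (currentMood == "ANGRY")          color = pixels.Color(255, 0, 0);    // sharp red
  else if (currentMood == "AMUSED")    color = pixels.Color(0, 255, 80);   // bright green
  else if (currentMood == "DISGUSTED") color = pixels.Color(120, 0, 120);  // murky purple
  else                                 color = pixels.Color(20, 20, 40);   // dim, cold "bored"

  for (int i = 0; i < NUM_PIXELS; i++) {
    pixels.setPixelColor(i, color);
  }
  pixels.show();
}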

On the p5 side, the program handles visual behavior, dialogue, and integration of the serial data coming from Arduino. The main visual element is a two-dimensional avatar of the creature whose orientation mirrors the potentiometer readings. When Arduino reports that the creature is in Zone 0, facing the user, the avatar will be drawn facing forward. When it reports Zone 1 or Zone 2, the avatar will turn slightly left or right. When it reports Zone 3, the avatar will show its back or a dismissive side profile. Background layers or subtle interface elements can reinforce the sense of orientation, for example by using a spotlight effect when the creature faces the user, and a shadowy or desaturated background when it turns away.

The p5 sketch keeps track of several state variables. It records the current orientation zone or angle reported by Arduino, the most recent pat intensity from the “PAT:<value>” messages, and the time since the last interaction to detect whether the user is ignoring the creature or constantly bothering it. Based on these values, p5 chooses a mood state such as “Annoyed”, “Dizzy”, “Dismissive” or “Begrudgingly Attentive”. That mood state determines the avatar’s expression, including eyes, mouth shape, and posture, as well as background color or small motion effects like shaking or pulsing. Whenever the mood changes, p5 also sends the corresponding mood label back to the Arduino, for example “MOOD:DISMISSIVE”, so the NeoPixels can stay synchronized with the on-screen visuals.

Dialogue and personality are deeply connected to rotation and touch. p5 interprets the angle or orientation zone in semantic terms. When the creature is facing the user in Zone 0, it selects lines that complain about being watched, such as “Why are you staring? I do not perform on command.” When it is turned slightly away in Zones 1 or 2, it may comment on the user’s persistence with lines like “You still there? Unfortunately, yes.” When it is turned completely away in Zone 3, it chooses more extreme dismissive phrases such as “This direction is better. Less human.” If the system detects fast spinning, it can draw from a set of dizzy or abused responses like “Stop spinning me. I am not a fidget toy.”

Beyond instantaneous input, p5 maintains some simple memory over time. It tracks how often and how strongly the creature has been patted in the recent past and how often the user has spun it back and forth between zones. By combining rotation data with touch data, the system can generate interaction-dependent responses. For example, if the user keeps forcing the creature to face them by repeatedly moving it back into Zone 0 after it has “chosen” to be in Zone 3, the creature can complain about humans forcing attention with lines such as “You keep dragging me back. Typical human behavior.” If the user spins it away and then leaves it alone for a while, the system can trigger more subtle, relieved comments like “Finally. A horizon without you.”

The dialogue itself will at minimum be based on prewritten arrays of lines for each mood and orientation combination. p5 will maintain collections such as “linesFacingAnnoyed”, “linesBackTurned” or “linesDizzy” and will choose one line depending on the current mood, orientation zone, and a bit of randomness, to avoid sounding too repetitive. As a stretch goal, the project may integrate an AI API into p5. In that case, p5 would send a short prompt that includes the current mood, orientation description (such as “facing”, “back turned”, “spun fast”), and a brief summary of recent interactions. It would then receive a generated line of dialogue and optionally wrap or filter it to ensure it remains safe, in character, and consistent with the theme. In both the base and stretch versions, the personality remains negative, sarcastic, and skeptical of humans, reflecting a world where artificial beings are not necessarily grateful for their existence or their relationship with their creators.

Week 12: Final Project Proposal

My final project will be a short game/experience inspired by the wish lanterns in Disney’s Tangled and will use the artwork I did all the way back in week 3:

I want users to be able to send out their own wishes. Before they can enter their wish, they must complete a short challenge. The challenge is a physical reaction-time game: four lanterns (each with an LED inside) are paired with four corresponding push buttons. The lanterns light up randomly at a fast pace, and the user must quickly press the button for the lantern that lights up, essentially “whacking” it. Their successful hits are tracked and displayed in p5, with a sketch similar to the one above as the background.

Once the user reaches 10 successful hits, a text box appears in the p5 interface (as already present in the sketch above) where they can type their wish. That wish is then displayed on an Arduino screen, all the lanterns light up together, and the Tangled music plays to complete the experience. After this, the p5 screen gives the user the option to restart the experience all over again.

The Arduino handles all the feedback related to the physical input: it flashes the LEDs in a random, fast-paced manner and detects the corresponding button presses by the user. Every time the button for a lit LED is pressed in time, the Arduino sends a signal to p5 to increase the user’s score by 1.
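A rough Arduino-side sketch of one round (the pins, the timing window, and the exact “HIT” message name are placeholders I still need to finalize) could be:

// Sketch of one reaction-time round: light a random lantern LED, wait a short
// window for the matching button, and report a successful hit to p5 over serial.
// Pins, timings, and the "HIT" message name are placeholders.
const int LED_PINS[4] = {8, 9, 10, 11};
const int BUTTON_PINS[4] = {2, 3, 4, 5};
const unsigned long HIT_WINDOW_MS = 800;   // how long each lantern stays lit

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 4; i++) {
    pinMode(LED_PINS[i], OUTPUT);
    pinMode(BUTTON_PINS[i], INPUT_PULLUP);
  }
  randomSeed(analogRead(A0));
}

void loop() {
  int target = random(4);                  // pick a lantern at random
  digitalWrite(LED_PINS[target], HIGH);

  bool hit = false;
  unsigned long start = millis();
  while (millis() - start < HIT_WINDOW_MS) {
    if (digitalRead(BUTTON_PINS[target]) == LOW) {   // correct button pressed in time
      hit = true;
      break;
    }
  }
  digitalWrite(LED_PINS[target], LOW);

  if (hit) {
    Serial.println("HIT");                 // p5 adds 1 to the score on this message
  }
  delay(random(200, 600));                 // short random pause before the next lantern
}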

On the p5 side, the sketch handles the game logic, tracks the score, maintains the aesthetics, provides the textbox to enter the wish, and sends the wish back to the Arduino. It also gives instructions at the start of the experience to the user and gives them an option to restart at the end.

To get started, I tried to implement the reaction-time game logic with score tracking on a small scale using just my own Arduino, a breadboard, and 4 LEDs. It mostly works, but the LEDs light up too slowly (please also ignore the interface for now):

Week 12 – Final Project Documentation

For my final project, I’m creating a physically interactive memory-sequence game centered around the metaphor of “recovering a fragmented memory.” The game uses four large LED pushbuttons wired to an Arduino, each with a built-in LED that flashes as part of a color sequence. The player must watch and memorize the flashing sequence and then repeat it correctly by pressing the matching buttons in order. With each successfully completed level, a blurry or pixelated image on the screen becomes clearer, symbolizing memory restoration. If the user gets the sequence wrong, the image distorts or glitches, as if the memory is slipping away. Only after completing all levels does the fully restored image appear.

The Arduino handles all sensing and feedback related to physical input: it detects button presses using INPUT_PULLUP, flashes the LEDs during each round (based on input from p5), and sends messages to p5 whenever the player presses a button. Each button press is communicated over serial with a simple string like “BUTTON:0”, “BUTTON:1”, etc. p5 receives these signals, checks them against the correct sequence, and determines whether to progress the game, update the image clarity, or apply a glitch effect. In the other direction, p5 sends commands to the Arduino to flash specific LEDs by sending numbers (0-3) over serial that correspond to the button LEDs.
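A minimal sketch of that serial exchange (pin numbers are placeholders, and the real code will add debouncing and the rest of the game handshake) might be:

// Sketch of the Arduino side of the protocol: report presses as "BUTTON:<n>"
// and flash an LED when p5 sends a digit 0-3. Pin numbers are placeholders.
const int BUTTON_PINS[4] = {2, 3, 4, 5};
const int LED_PINS[4] = {8, 9, 10, 11};
bool wasPressed[4] = {false, false, false, false};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 4; i++) {
    pinMode(BUTTON_PINS[i], INPUT_PULLUP);   // pressed = LOW
    pinMode(LED_PINS[i], OUTPUT);
  }
}

void loop() {
  // Report each new press exactly once
  for (int i = 0; i < 4; i++) {
    bool pressed = (digitalRead(BUTTON_PINS[i]) == LOW);
    if (pressed && !wasPressed[i]) {
      Serial.print("BUTTON:");
      Serial.println(i);
    }
    wasPressed[i] = pressed;
  }

  // Flash the LED whose index p5 sends as a digit character '0'-'3'
  if (Serial.available()) {
    char c = Serial.read();
    if (c >= '0' && c <= '3') {
      int led = c - '0';
      digitalWrite(LED_PINS[led], HIGH);
      delay(300);
      digitalWrite(LED_PINS[led], LOW);
    }
  }
  delay(10);   // light pacing between reads
}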

On the p5 side, the sketch manages all game logic, sequence generation, visual feedback, and memory visualization. It starts with a low-resolution or blurred image and gradually resolves the image as the user completes levels. The sketch also gives instructions to the user and visual cues about success or failure. This layered system allows for a compelling interaction that blends precise physical input with expressive visual output.

I’ve successfully soldered wires to one of the large LED pushbuttons and tested it with the Arduino using the internal pull-up setup. The button press registers correctly, and the built-in LED lights up when triggered from code. This confirms that the wiring and logic are working as intended.
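That test can be as small as the following sketch (pins assumed; the button’s built-in LED is driven from a separate output pin):

// Minimal single-button test with the internal pull-up (assumed pins): the
// button reads LOW when pressed, and its built-in LED lights while it is held.
const int BUTTON_PIN = 2;
const int LED_PIN = 8;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);
  digitalWrite(LED_PIN, pressed ? HIGH : LOW);
}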

Next, I’ll repeat the soldering and wiring process for the remaining three buttons, ensuring each is connected to a unique input and output pin. I’ve also laser-cut the top panel of the box, which has four holes precisely sized to mount the pushbuttons. This will keep the layout organized and user-friendly for gameplay. Once all buttons are mounted and connected, I’ll move on to integrating all four into the Arduino code and begin syncing with the visual side in p5.js.

Laser Cutting Video: