Final Project Documentation

Concept

Memory Dial is a desk object that listens to the room and turns what it senses into a living visual field.
Three inputs drive everything:
1. Light from a photoresistor
2. Proximity from an ultrasonic sensor
3. Sound from the laptop microphone
These values animate a layered visualization in p5.js. Over time, the system also writes short poems that describe how the room behaved during the last interval.
The project is both physical and digital: the Arduino gathers the signals, and the browser visualizes the “aura” of the space.

Process

Hardware

The electronics sit inside or beside the desk object.

  • Photoresistor (LDR): measures brightness in the room
  • HC-SR04 Ultrasonic Sensor: detects how close a person stands
  • (Piezo sensor is present but not used in the final logic)
  • Arduino Uno: reads the sensors and streams data via Web Serial

Arduino prints values in this format:

lightValue,distanceValue,piezoValue
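
On the p5.js side, one line of this stream could be split apart roughly like this (a sketch, not the project’s exact code; port.readUntil matches the serial API used elsewhere in this documentation, and currentLight/currentDist are the names the smoothing code below expects):

let line = port.readUntil("\n");
if (line.length > 0) {
  let [lightValue, distanceValue, piezoValue] = trim(line).split(",").map(Number);
  currentLight = lightValue;
  currentDist  = distanceValue;  // piezoValue arrives but is unused in the final logic
}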

Software Flow

The browser-based sketch (p5.js) opens the serial port and receives the sensor stream.
Then it does five main jobs:

1. Signal Smoothing

Raw sensor data is jittery.
To make the animation feel organic, I smooth everything with linear interpolation, which prevents flickering and gives the visuals a “breathing” quality.

smoothLight    = lerp(smoothLight, currentLight, 0.18);
smoothDist     = lerp(smoothDist,  currentDist,  0.18);
smoothMicLevel = lerp(smoothMicLevel, rawMicLevel, 0.2);

2. Normalization

Each value is mapped into a range between 0 and ~1 so it can drive animation:

  • lightNorm: 0 = very dim, 1 = very bright
  • presenceNorm: 0 = far away, 1 = very close
  • micNorm: 0 = quiet, 1 = loud

These norms control different visual layers.
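
As a concrete sketch, the mapping could look like the following (the raw ranges here, a 10-bit ADC span for the LDR, roughly 10 to 200 cm for the ultrasonic sensor, and ~0.3 peak amplitude for the mic, are my assumptions, not measured calibration values):

// Assumed raw ranges; constrain() keeps each value in [0, 1]
lightNorm    = constrain(map(smoothLight, 0, 1023, 0, 1), 0, 1);    // LDR via 10-bit ADC
presenceNorm = constrain(map(smoothDist, 200, 10, 0, 1), 0, 1);     // nearer -> higher
micNorm      = constrain(map(smoothMicLevel, 0, 0.3, 0, 1), 0, 1);  // p5.sound amplitude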

Visual System: What Each Sensor Controls

The sketch builds a three-layer aura that reacts in real time.

1. Light → Affective Basefield (Background)
  • Low light = blue / cool hues
  • Medium = muted purples
  • Bright room = warm oranges

The background is made of hundreds of fading circles with slow noise-driven texture (“veins”), giving it a soft atmospheric motion.
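
A minimal sketch of that hue logic, blending between three anchor colors with lerpColor (the exact palette is my guess, not the project’s):

let coolC = color(40, 60, 160);    // dim room: cool blue
let midC  = color(110, 70, 140);   // medium: muted purple
let warmC = color(230, 140, 60);   // bright room: warm orange
let base = lightNorm < 0.5
  ? lerpColor(coolC, midC, lightNorm * 2)
  : lerpColor(midC, warmC, (lightNorm - 0.5) * 2);
background(base);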

2. Distance → Flowfield and Moving Rings

When someone approaches:

  • Flowfield particles move faster and glow more
  • Radial rings become thicker and more defined
  • Blobs orbiting the center grow and multiply

Distance affects the energy level of the whole system.

3. Sound → Breathing Halo (Distorted Circles)

Microphone amplitude controls:

  • Speed of breathing
  • Amount of wobble
  • Thickness and brightness of the halo

Louder sound → faster breathing, more distortion
Quiet room → slow, smooth pulse

This halo behaves like the room’s heartbeat.
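
One way this could be driven in p5.js (a sketch under assumed constants, not the project’s actual halo code):

let breath = sin(frameCount * (0.02 + micNorm * 0.08));  // louder room -> faster pulse
let wobble = noise(frameCount * 0.05) * micNorm * 30;    // louder room -> more distortion
noFill();
stroke(255, 180 + micNorm * 75);                         // brighter when loud
strokeWeight(2 + micNorm * 6);                           // thicker when loud
circle(width / 2, height / 2, 240 + breath * 50 + wobble);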

4. Timed Sparks

Every few seconds, spark particles are released from the center:

  • They drift, swirl, and fade over ~10 seconds
  • Create a subtle “memory trace” effect

This gives the visuals a slow-time texture.
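
A rough sketch of the release-and-fade cycle (the 4-second interval and the variable names are assumptions; the ~10-second lifetime comes from the description above):

if (millis() - lastSparkTime > 4000) {            // release a burst every few seconds
  for (let i = 0; i < 12; i++) {
    sparks.push({ x: width / 2, y: height / 2,
                  vx: random(-1, 1), vy: random(-1, 1),
                  born: millis() });
  }
  lastSparkTime = millis();
}
sparks = sparks.filter(s => millis() - s.born < 10000);   // cull after ~10 s
for (let s of sparks) {
  s.x += s.vx;
  s.y += s.vy;
  fill(255, map(millis() - s.born, 0, 10000, 255, 0));    // fade with age
  circle(s.x, s.y, 3);
}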

Poem System: Long-Duration Feedback

While the visual reacts instantly, a second system keeps track of behaviour over longer periods.

Every frame, the sketch adds to running totals:

lightNorm
presenceNorm
micNorm

After a set interval (currently short for testing), the program:

  1. Computes average light, distance, and sound
  2. Categorizes the ambience into a mood (quietDark, quietBright, busyClose, mixed)
  3. Selects a matching four-line poem from a text file (a mix of well-known poems and personal favorites)
  4. Fades the poem in and out on top of the animation (see the sketch below)
  5. Resets the history buffer for the next cycle

This creates a rhythm:
instant animation → slow reflection → instant animation.
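
The fade in step 4 could be as simple as this (the timing values and variable names are mine, not the project’s):

let age = millis() - poemShownAt;                           // set when the poem is picked
let alpha = 255;
if (age < 2000)      alpha = map(age, 0, 2000, 0, 255);     // fade in over 2 s
else if (age > 8000) alpha = map(age, 8000, 10000, 255, 0); // fade out over the last 2 s
fill(255, alpha);
textAlign(CENTER, CENTER);
text(currentPoem, width / 2, height / 2);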

(Picture 2: Cinematic Photography)

Interaction Summary

When people walk by:

  • Their presence wakes the field
  • Their voice or music changes the breathing
  • Lighting conditions tint the entire atmosphere

After a long stretch of time, the system “writes” a short poem capturing the mood of the environment.

The desk object becomes a quiet companion that listens, reacts, and eventually reflects.

What the Viewer Actually Experiences

  • They see the aura pulse when sound changes
  • They see the energy rise when someone approaches
  • They feel the ambience shift when lights dim or brighten
  • They get a poem that summarizes the vibe of the past hour

The project sits between functional sensing and artistic ambience.
It doesn’t demand interaction; it responds to whatever happens.

(Video clip 1: close-up shot of breathing)

Schematic Diagram

What I am proud of:

One part I’m especially happy with is the poem selection system, because it isn’t just random text appearing every few minutes. It observes the room quietly, stores a rolling history of the ambience, and then chooses a poem that matches the emotional “weather” of the last cycle.

The logic only works because the sketch accumulates sensor values over time, instead of reacting to single spikes. For example, this block keeps a running history of the room’s brightness, proximity, and sound:

histLightSum += lightNorm; 
histDistSum += presenceNorm; 
histMicSum += breathStrength; 
histSampleCount++;

Then, when it’s time to display a poem, the system computes averages and interprets them like a mood:

let avgLight = histLightSum / histSampleCount;
let avgDist  = histDistSum / histSampleCount;
let avgMic   = histMicSum / histSampleCount;

if (avgMic < 0.20 && avgLight < 0.30 && avgDist < 0.55) {
  moodKey = "quietDark";
} else if (avgMic < 0.20 && avgLight > 0.65) {
  moodKey = "quietBright";
} else if (avgMic > 0.45 && avgDist > 0.55) {
  moodKey = "busyClose";
} else {
  moodKey = "mixed";
}

Future Improvements

I want to expand the Memory Dial so it becomes a small personal companion. Ideas include:

  • Logging poems over time into a journal page
  • Adding touch sensing so the object reacts when someone taps the table
  • Allowing the user to “bookmark” moments when the aura looks special
  • Using AI to rewrite or remix poems based on ambience trends
  • Creating a physical enclosure for the sensors so it feels like a finished artifact

Week 13 – User Testing

For this week’s user test, I asked a couple of acquaintances to interact with the Memory Dial without explaining anything upfront. I wanted to see how much the system communicates on its own and whether the visuals give any sense of “cause and effect” without my narration.

The first reaction was always the same: “It looks pretty, but what does it do?”
That told me something immediately – people understood they were looking at a reactive visual piece, but they couldn’t tell what influenced what. They waved their hands, stepped closer, changed the room lighting with their phone flash, spoke near the laptop mic, and slowly started noticing patterns.

Once they figured out these relationships, the piece became much more interesting to them. The visuals were described as “alive,” “breathing,” and “calming.” But almost every tester needed a little hint about the sensors before they fully understood the interaction.

Because of this, I later added a few lines on the landing page briefly explaining what the room is controlling.

After that addition, testers understood the interaction far faster.

One more observation: the timed poetry reveal was a pleasant surprise for everyone, but since the demo interval was short, the poem sometimes popped up before they had fully understood the visuals. They liked it, but it felt “mysterious without warning.”

Link to video

What worked well

The aura system made sense once users noticed the patterns.
People liked how the three layers moved differently. The breathing halo reacting to sound was the most immediately understood and got the most comments. Testers also said the overall motion felt calm instead of distracting.

The poem reveal was everyone’s favorite moment.
Even though it startled a few people, the transition and fade-in were described as “soft” and “cinematic.” Users said it made the piece feel thoughtful and “less like a tech demo and more like an experience.”

The visuals feel cohesive.
No one felt overwhelmed by the amount of motion, which was a relief because this system has a lot going on.

Room for Improvement

People needed guidance early on.
Without context, the sensors were basically invisible. Users tried to understand the visuals by guessing instead of experimenting. A tiny amount of text solved the issue, which shows the landing page actually matters.

Poem timing needs a cue.
Since users didn’t know a poem would appear, the transition felt abrupt during testing. A small hint like “The room is writing…” or a subtle dimming before the text appears might help lead the eye.

Distance-based motion was harder to notice.
Unlike sound or light, distance required more exaggerated motion to register. Some testers stayed seated and only waved their hands in front of the ultrasonic sensor, so the change in the visuals wasn’t entirely comprehensible from where they sat. Making the distance response more dramatic is probably what matters most here.

Final Thoughts

The Memory Dial works without instructions, but only after a short moment of confusion. Once people learn what the sensors represent, the piece becomes engaging and even meditative. It encourages slow interaction rather than quick action, and that resonated with users.

This test showed me that the concept is strong, but the system benefits from gentle onboarding, even something as simple as a few lines on the landing screen. With those adjustments, the Memory Dial becomes easier to understand and more enjoyable for first-time viewers.

Week 14: Final Project Documentation

Make a Wish ✨

Concept

My final project is a Disney’s Tangled-themed reaction-time game. A major focus in Tangled is the floating lanterns that appear every year on Rapunzel’s birthday. These lanterns are also popularly known as “wish” lanterns, and I wanted to mix these two ideas to create my game. The overall flow is that the user catches enough lanterns to earn a wish, and can then type and send their wish out into the universe with the other lanterns.

Video Documentation

Interaction Design

Once the user starts the game, the lanterns in front of them start to light up and turn off rapidly, and lively music, specifically the Kingdom Dance track from the film, plays in the background. The user needs to press a lit-up lantern’s corresponding button to “catch” it. Every time they catch a lantern, their score (displayed on the p5.js screen) goes up. Once they’ve reached a score of 20, the lanterns turn off and I See the Light begins to play in the background. The user can input their wish on the screen and press ‘Enter’ to send it up with the floating wish lanterns. The physical lanterns blink pleasantly as the wish floats upwards. Once it is out of sight, the light and music stop and the experience finishes. The user then gets the option to restart if they wish.

Arduino

Code:

// 4 lanterns (1 pin per lantern, 4 LEDs in parallel) + 4 buttons

// LED pins - one pin controls all 4 LEDs per lantern
const int lantern1 = 2;
const int lantern2 = 5;
const int lantern3 = 8;
const int lantern4 = 11;

// Button pins (using internal pullup resistors)
const int button1 = A0;
const int button2 = A2;
const int button3 = A3;
const int button4 = A4;

// track lit up lanterns
bool lanternActive[4] = {false, false, false, false};

// Button state tracking for debouncing
bool lastButtonState[4] = {HIGH, HIGH, HIGH, HIGH};   // last stable reading
unsigned long lastDebounceTime[4] = {0, 0, 0, 0};
const unsigned long debounceDelay = 50;   // 50ms debounce duration

void setup() {
  Serial.begin(9600);
  
  // Initialize LED pins
  pinMode(lantern1, OUTPUT);
  pinMode(lantern2, OUTPUT);
  pinMode(lantern3, OUTPUT);
  pinMode(lantern4, OUTPUT);
  
  // Initialize button pins with internal pullup
  pinMode(button1, INPUT_PULLUP);
  pinMode(button2, INPUT_PULLUP);
  pinMode(button3, INPUT_PULLUP);
  pinMode(button4, INPUT_PULLUP);
  
  // Turn off all LEDs initially
  turnOffAllLanterns();
  
  // Wait for serial to stabilize
  delay(1000);
  
  // debugging
  Serial.println("READY");
  Serial.println("Button test: Press each button");
  Serial.flush();
}

void loop() {
  // Check for commands from p5.js
  if(Serial.available() > 0) {
    String command = Serial.readStringUntil('\n');
    command.trim();
    
    if(command.startsWith("ON:")) {
      int lanternNum = command.substring(3).toInt();
      turnOnLantern(lanternNum);
    }
    else if(command.startsWith("OFF:")) {
      int lanternNum = command.substring(4).toInt();
      turnOffLantern(lanternNum);
    }
    else if(command == "ALLON") {
      turnOnAllLanterns();
    }
    else if(command == "ALLOFF") {
      turnOffAllLanterns();
    }
  }
  
  // Check buttons with debouncing
  checkButton(0, button1);
  checkButton(1, button2);
  checkButton(2, button3);
  checkButton(3, button4);
}

// Button check function
void checkButton(int buttonIndex, int buttonPin) {
  bool reading = digitalRead(buttonPin);
  
  if(reading != lastButtonState[buttonIndex]) {
    // Accept a falling edge (press) only if the line has been stable for
    // at least debounceDelay since the last change; this filters out bounce.
    if(reading == LOW && (millis() - lastDebounceTime[buttonIndex]) > debounceDelay) {
      Serial.print("BTN:");
      Serial.println(buttonIndex + 1);
      Serial.flush(); // data is sent immediately
    }
    lastDebounceTime[buttonIndex] = millis(); // restart the debounce timer
  }
  
  lastButtonState[buttonIndex] = reading;
}

// LED Control Functions

void turnOnLantern(int lanternNum) {
  lanternActive[lanternNum - 1] = true;
  
  switch(lanternNum) {
    case 1:
      digitalWrite(lantern1, HIGH);
      break;
    case 2:
      digitalWrite(lantern2, HIGH);
      break;
    case 3:
      digitalWrite(lantern3, HIGH);
      break;
    case 4:
      digitalWrite(lantern4, HIGH);
      break;
  }
}

void turnOffLantern(int lanternNum) {
  lanternActive[lanternNum - 1] = false;
  
  switch(lanternNum) {
    case 1:
      digitalWrite(lantern1, LOW);
      break;
    case 2:
      digitalWrite(lantern2, LOW);
      break;
    case 3:
      digitalWrite(lantern3, LOW);
      break;
    case 4:
      digitalWrite(lantern4, LOW);
      break;
  }
}

void turnOnAllLanterns() {
  for(int i = 0; i < 4; i++) {
    lanternActive[i] = true;
  }
  
  digitalWrite(lantern1, HIGH);
  digitalWrite(lantern2, HIGH);
  digitalWrite(lantern3, HIGH);
  digitalWrite(lantern4, HIGH);
}

void turnOffAllLanterns() {
  for(int i = 0; i < 4; i++) {
    lanternActive[i] = false;
  }
  
  digitalWrite(lantern1, LOW);
  digitalWrite(lantern2, LOW);
  digitalWrite(lantern3, LOW);
  digitalWrite(lantern4, LOW);
}

Description:

This Arduino sketch controls four LED-based lanterns and four physical push buttons while communicating with the p5.js application through the Serial connection. Each lantern consists of four LEDs connected in parallel and is controlled by a single digital output pin (2, 5, 8, and 11), while the buttons are connected to analog pins (A0, A2, A3, and A4) and use internal pull-up resistors, meaning they read HIGH by default and LOW when pressed.

The program tracks the active state of each lantern using an array and also monitors button activity with additional arrays that store the last button state and the last debounce time, applying a 50-millisecond debounce delay to prevent false or repeated presses caused by mechanical noise.

Inside the main loop, the Arduino continuously listens for serial commands sent from p5.js, such as instructions to turn individual lanterns on or off using formatted messages like “ON:1” or “OFF:3”, as well as commands that control all lanterns at once using “ALLON” and “ALLOFF”. At the same time, the loop constantly checks each physical button using a dedicated debouncing function that detects state changes, filters out signal noise, and immediately sends a message like “BTN:1” through the Serial port when a valid press occurs so that the visual system can respond instantly.

Schematic

P5.js

(you can enter fullscreen mode by double-clicking on the canvas in p5.js)

Description:

The p5.js sketch controls the full digital side of the interactive experience by handling the visuals, sound, gameplay logic, and communication with the Arduino. It manages multiple game states including the start screen, instructions, active gameplay, and the final wish sequence. The program sends commands to the Arduino to activate and deactivate physical lantern LEDs while simultaneously listening for incoming button press data through the Web Serial API. Animated lanterns are continuously spawned and float upward across the screen, a live score is tracked, and background music changes based on the game state. Once the user successfully collects enough lanterns, a text input appears to capture and animate their written wish before the experience resets.

Arduino + p5.js communication

The communication between the Arduino and p5.js is handled through serial data exchange. The Arduino continuously listens for text-based commands sent from p5.js, such as “ON:1”, “OFF:3”, “ALLON”, and “ALLOFF”, which control the physical LED lanterns. At the same time, the Arduino sends messages like “BTN:1” whenever a physical button is pressed. These messages are read and interpreted by p5.js to update the game state, score, and visuals. This two-way communication allows the physical hardware and the digital game to stay perfectly synchronized.
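
A hedged sketch of what the p5.js side of this exchange might look like (the handler name catchLantern is mine, not from the project):

let line = port.readUntil("\n");
if (line.length > 0) {
  line = trim(line);
  if (line.startsWith("BTN:")) {
    let btn = int(line.substring(4));  // 1..4
    catchLantern(btn);                 // compare against the currently lit lantern
  }
}
// commands the Arduino sketch understands:
port.write("ON:2\n");    // light lantern 2
port.write("ALLOFF\n");  // turn every lantern off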

What I’m most proud of

I think what I’m most proud of is the physical components and setup of this game, as the software side is not too complex. I got to use a lot of new tools, such as 3D printing for the lanterns and laser cutting for the button panel box, which I may not have used otherwise. Moreover, I worked through several issues: the yellow LED light (the initial plan) was barely visible, so I switched to blue, and the mess of wires going into the Arduino was an eyesore, so I added a cardboard base under the lanterns to hide them. Just yesterday, I found that one of the buttons in my panel box was no longer working, and I had to desolder it to free the wires and replace it with a new button. I’m also still very proud of the castle that’s part of the visuals in p5.js, as I made every single shape of that silhouette through trial and error with the shape coordinates.

How this was made

Media sources:
The starting page’s background image came from this website: wallup.net
The font Homemade Apple came from fonts.google.com
The background music was from the YouTube videos linked previously in the post

The game concept, code, and writeup were done by me, with AI being used for debugging the p5.js code. For example, it helped me debug the connectSerial() and readSerial() functions when they weren’t working as I still was not fully clear on how they connected and functioned, and it’s also how I found out about the windowResized() function.

// Keeps the layout responsive when the browser window is resized
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  // Reposition input box
  if (wishInput) {
    wishInput.position(windowWidth/2 - 180, windowHeight - 100);
    // Keeps the wish input centered at the bottom of the screen
  }
  if (restartButton && experienceFinished) {
    restartButton.position(windowWidth / 2 - 110, windowHeight / 2 - 25);
    // Keeps the restart button centered on the screen after resizing
  }
}

My grammar and sentence phrasing in general in this writeup was occasionally polished by Grammarly to help make it more clear and concise.

Future Improvement

I think there are many things I would like to improve in the future:
1) Make a full platform (similar to the one for the buttons) for the lanterns to sit on under which I can hide the Arduino and the full wire.
2) Create a mechanism that allows the player to adjust the game speed, maybe through a potentiometer.
3) Use a brighter light source, such as NeoPixels, instead of plain LEDs so the lanterns shine brighter.
4) On the software side, the user can still click anywhere on the screen to create their own lantern that begins floating upwards, but currently there is no corresponding reaction in the physical setup. It would be interesting to add one for every lantern spawned by clicking.

Week 14 – Final Project

Inspired by parking-lot detection systems, my project is a small re-creation of that idea mixed with some fun elements to entertain users. The idea is simple: a system that detects whether a room is occupied and shows the number of people inside. This system could be used in the Bahara rooms. These rooms have no windows, and there is no practical way of checking whether a room is in use without actually opening the doors. Hence, campus security has to periodically check whether there are people in the room, as well as for possible accidents. This system can surface that otherwise hidden information.
Link to p5js: https://editor.p5js.org/yj2082/full/CUheQmfX3
Link to Arduino: https://github.com/yongjejeon/Detection-Ardunio-p5js

How this was made (Arduino Explanation):
This system uses two infrared distance sensors and one NeoPixel ring. The two distance sensors are placed side by side to determine whether a person entered or exited the room: if sensor 1 triggers before sensor 2, the person entered, and vice versa. There is additional logic for double-checking whether a detection is real. For example, a user might move too slowly or stop halfway; these cases shouldn’t be counted, so the code first registers a potential IN or potential OUT and only confirms it if the second sensor triggers before a timeout expires. A boolean variable called armed is set to true when nothing is detected. The direction logic is a small state machine with three states: IDLE, S1_ACTIVE, and S2_ACTIVE. In IDLE, a single-sensor detection marks a potential IN or OUT and moves to S1_ACTIVE or S2_ACTIVE. Both of those states do something similar: they check whether the other sensor triggers before the timeout. If it doesn’t, the machine returns to IDLE; if it does, the IN or OUT is confirmed.

// arming logic
      if (s1 || s2) {             // either sensor sees something
        lastNotClearTime = now;
      } else {                    // both clear: re-arm once enough time has passed
        if (!armed && (now - lastNotClearTime > CLEAR_DELAY)) {
          armed = true;
        }
      }
      //direction
      switch (state) {

        case IDLE:  //only if armed is true, it reacts. 
          if (armed) {
            if (s1 && !s2) { //sensor 1 sees but sensor 2 does not
              state = S1_ACTIVE;    //potential IN
              stateStart = now;
            } else if (s2 && !s1) { //other case
              state = S2_ACTIVE;    //potential OUT
              stateStart = now;
            }
          }
          break;

        case S1_ACTIVE:     //potential IN
          if (s2) {       //if we see s2 before time out,
            // Sequence: S1 then S2 -> IN
            peopleCount++;
            if (peopleCount < 0) peopleCount = 0;
            updateRing();
            lastEvent = 1;   // IN
            armed = false;
            lastNotClearTime = now;
            state = IDLE;
          } else if (!s1 || (now - stateStart > TIMEOUT)) {   //if time out then idle
            state = IDLE;
          }
          break;

        case S2_ACTIVE:
          if (s1) { //same idea for sensor 2
            peopleCount--;
            if (peopleCount < 0) peopleCount = 0;
            updateRing();
            lastEvent = -1;  // OUT
            armed = false;
            lastNotClearTime = now;
            state = IDLE;
          } else if (!s2 || (now - stateStart > TIMEOUT)) {
            state = IDLE;
          }
          break;
      }

The NeoPixel ring, on the other hand, provides one simple piece of information: whether the room is occupied. If no one is in the room (the count is 0), it displays green. If there is at least one person in the room, it displays red.

//updating led based on count if room is occupied its red if not its green
void updateRing() {
  ring.clear();
  if (peopleCount > 0) {
    for (int i = 0; i < NUM_LEDS; i++) {
      ring.setPixelColor(i, ring.Color(150, 0, 0));
    }
  } else {
    for (int i = 0; i < NUM_LEDS; i++) {
      ring.setPixelColor(i, ring.Color(0, 150, 0));
    }
  }
  ring.show();
}


The Arduino sends two pieces of information: the number of people, and whether the last event was an IN or an OUT.

      Serial.print(peopleCount);
      Serial.print(',');
      Serial.println(lastEvent);

How this was made (p5.js)

There are two kinds of messages exchanged between p5.js and the Arduino. p5.js receives the people count and whether a person entered or exited.

if (port.opened()) {
    let data = port.readUntil("\n"); // read a line from the Arduino
    if (data.length > 0) {           // if read, split the data by comma
      let parts = split(trim(data), ",");
      if (parts.length === 2) {
        let pc = int(parts[0]);
        if (!isNaN(pc) && pc >= 0) {
          peopleCount = pc;          // receiving count
        }

        let ev = int(parts[1]);
        if (!isNaN(ev)) {
          lastEvent = ev;            // checking if entered or exited
        }
      }
    }
  }

In the other direction, p5.js sends a reset command to the Arduino to reset the people counter when 'r' is pressed. This is the helper function for that.

function resetCount() {
  // Reset on the p5 side
  peopleCount = 0;
  lastPeopleCount = 0;
  statusMessage = "Manual reset to 0.";
  showGifForCount(0);
  console.log("People count manually reset to 0 in p5.");

  //Reset arduino
  if (port.opened()) {
    port.write("RESET\n");
    console.log("Sent RESET command to Arduino.");
  }
}

This is the main logic of the p5.js side. It stores a meme for each situation in a dictionary: the key is the number of people, and the value is a list of image addresses. I’m proud of how I included memes to bring humor into my project. Without them, it would just be a detection system, which is cool by itself, but there would be nothing to show at the showcase beyond detecting Yoshi. The memes add a layer of excitement and give users a reason to try the project more than once, to find out which meme comes up next.

let gifGroups = {
  0: [
    "https://media.tenor.com/hrisiYKNn6UAAAAj/you-may-now-enter-kourtlyn-wiggins.gif",
    "https://media1.tenor.com/m/IZF4HViktvgAAAAd/abbott-elementary-come-on-in.gif",
    "https://media1.tenor.com/m/ZGJod50ebXIAAAAd/you-want-to-come-in-invitation.gif"
  ],
  1: [
    "https://media.tenor.com/eIoZmG3L4fYAAAAi/yoshi-yoshi-tv.gif", 
    "https://media1.tenor.com/m/lANYAosZI4AAAAAd/yoshi-mario.gif" 
  ],
  2: [
    "https://media1.tenor.com/m/8Mt2eEPPSg4AAAAd/happy-birthday-dance.gif"  
  ],
  3: [
    "https://media1.tenor.com/m/cMvelryh5BAAAAAd/car.gif",
    "https://media1.tenor.com/m/ovq2B-ML6I4AAAAd/guys-hugging.gif"
  ],
  4: [
    "https://media1.tenor.com/m/ROTEC3I3vkQAAAAd/despicable-me.gif",
    "https://media1.tenor.com/m/onl3-G1xIGEAAAAd/walk-cross.gif"
  ],
  5: [
    "https://media1.tenor.com/m/K3shTb7Ow-MAAAAd/johnny-depp-movie.gif",
    "https://media1.tenor.com/m/iLYNgJj42gEAAAAd/dwight-the-office.gif",
    "https://media1.tenor.com/m/ywI3ImfzsvYAAAAd/nicolas-cage-who-are-you-people.gif",
    "https://media1.tenor.com/m/ZBuCuZ4Ms-oAAAAd/where-did-all-of-these-people-come-from-patrick.gif"
  ],
  6: [
    "https://media1.tenor.com/m/27Atub3mjoMAAAAd/jordan-stop-it.gif"
  ],
  7: [ 
    "https://media1.tenor.com/m/fTXGp5PtzscAAAAd/yoshi-luigi.gif"
  ],
  8: [ // 8 or more
    "https://media.tenor.com/uaqJICjtx4QAAAAM/that%27s-it-enough.gif"
  ]
};

For the final project display, I created a small doorway to test the system in action. Using the laser cutter, I cut out a doorway large enough for my Yoshi doll to walk through. I also used 3D printers to print two things: a case for the infrared sensors and a case for the Arduino board. I made these myself, and although it was confusing at first, I adapted quickly.

Since these sensors were not from the Arduino kit, I had to solder wires to the sensors as well as to the NeoPixel ring. Thankfully, I got used to soldering towards the end.

Schematic

Usage of AI
I received help from AI when displaying the memes.

function showGifForCount(count) {
  // Remove previous gif element if any
  if (gifElement) {
    gifElement.remove();
    gifElement = null;
  }

  let url = pickGifForPeopleCount(count);
  if (!url) return;

  // Create and show the new gif element (a plausible completion; the
  // original post truncates the function here)
  gifElement = createImg(url, "meme");
  gifElement.position(width / 2 - 150, height / 2 - 100);
}

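Since pickGifForPeopleCount() itself isn’t shown in the post, here is a guess at what it might look like, based on the dictionary above (clamping counts of 8 or more to the “8” group follows the comment in gifGroups):

function pickGifForPeopleCount(count) {
  let key = min(count, 8);            // the "8" group covers 8 or more people
  let group = gifGroups[key];
  if (!group || group.length === 0) return null;
  return random(group);               // p5's random() picks one element of an array
}
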
The memes were not made by me. They were all saved from a website called tenor.com.

Future Improvement
One major improvement is the cable management. In the current prototype, the wiring between the sensors, microcontroller, and LED ring is exposed, which makes the setup look unfinished and less polished. A future version could incorporate a more integrated system and internal routing channels to hide the wires and create a cleaner presentation.

Shahram Chaudhry – Final Project Documentation

Concept

For my midterm project, I created a mind palace experience where users could choose to get rid of memories. For my final project, I wanted to continue exploring the theme of memory, but take it in the opposite direction. This time, users can’t choose whether they lose a memory or not; instead, it depends on their ability to remember.

In Faded, LEDs light up in a sequence, and the user has to repeat the sequence correctly. A blurred image on the screen represents a memory: if the user recalls the pattern correctly, the image becomes clear. If they make a mistake, the memory is lost, and the image fades.

It’s a slightly ironic twist: to hold on to a memory, you need a good memory. The project reflects how we remember or forget.

Pictures/Videos

Final Interaction Video

How does the implementation work?

Interaction Design

  • The experience begins with a landing screen and an instructions page.
  • A “Connect to Arduino” button helps with easy serial connection.
  • The system is designed to enter fullscreen on mouse click, but also includes a failsafe:
    • If the user presses the spacebar before going fullscreen, it first triggers fullscreen mode.
    • On the next spacebar press, the game starts. (For the showcase, the experience will remain in fullscreen from the start.)
  • On starting the game, a blurred image (memory) is displayed on screen.
  • A random sequence of 4 LEDs flashes one by one.
  • The player must repeat the sequence using the corresponding physical buttons.
  • If the sequence is correct:
    • The blurred image becomes clear, representing a remembered memory.
  • If the player makes a mistake:
    • The memory is lost, and a “Game Over” message is shown as the memory fades away.

Arduino Code:

int buttonPins[4] = {2, 3, 4, 5};
int ledPins[4]    = {8, 9, 10, 11};
int lastState[4] = {HIGH, HIGH, HIGH, HIGH};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 4; i++) {
    pinMode(buttonPins[i], INPUT_PULLUP);
    pinMode(ledPins[i], OUTPUT);
    digitalWrite(ledPins[i], LOW);
  }
}

void loop() {
  if (Serial.available()) {
    String input = Serial.readStringUntil('\n');
    input.trim();
    if (input == "-1") {
      // Turn off ALL LEDs
      for (int i = 0; i < 4; i++) {
        digitalWrite(ledPins[i], LOW);
      }
    } 
    else {
      int index = input.toInt();
      if (index >= 0 && index < 4) {
        digitalWrite(ledPins[index], HIGH); // Turn on specific LED
      }
    }
  }

  for (int i = 0; i < 4; i++) {
    int reading = digitalRead(buttonPins[i]);
    // Because the buttons use INPUT_PULLUP, the pin reads HIGH when unpressed and LOW when pressed. So when button is pressed, state changes from HIGH to LOW.
    if (lastState[i] == HIGH && reading == LOW) {
      // Send button info to p5.js
      Serial.print("BUTTON:");
      Serial.println(i);
      // light up LED when that button is pressed
      digitalWrite(ledPins[i], HIGH);
      delay(120);
      digitalWrite(ledPins[i], LOW);
    }
    lastState[i] = reading; 
  }
}

Schematic:

p5.js Code and Description

The p5.js sketch is the core of the project, managing visuals, game flow, and Arduino communication. It controls screen transitions using a currentScreen variable, loads and blurs memory images dynamically, and generates a random LED sequence that is shown with proper timing using millis() and flags. The sketch receives button input from the Arduino via serial and checks it against the correct sequence to determine success or failure. It also sends LED control commands back to the Arduino and provides audio feedback for correct or incorrect inputs, creating a cohesive and interactive experience.

View p5.JS code

Arduino and p5.js Communication

  • p5.js to Arduino:
    • Sends LED index (“0\n” to “3\n”) to flash a specific LED.
    • Sends “-1\n” to turn all LEDs off.
  • Arduino to p5.js:
    • On button press, Arduino sends BUTTON:x (e.g., BUTTON:2) to identify which button was pressed.
    • p5.js parses this input and compares it to the expected sequence.
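
A hedged p5.js-side sketch of this protocol (sequence, inputIndex, and gameOver are my names, not the project’s):

let msg = port.readUntil("\n");
if (msg.length > 0 && trim(msg).startsWith("BUTTON:")) {
  let pressed = int(trim(msg).substring(7));
  if (pressed === sequence[inputIndex]) {
    inputIndex++;        // correct so far; the memory stays intact
  } else {
    gameOver();          // wrong press: the memory fades
  }
}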

Aspects I’m proud of

I’m someone who isn’t very handy and had never soldered before, so being able to solder everything, no matter how long it took, was something I’m genuinely proud of. I’m also proud of the concept itself. It’s simple but carries a deeper, emotional meaning about memory and loss. Even though the technical side isn’t overly complex, I intentionally kept it that way to make the experience more approachable and enjoyable. I had fun building it, and I hope it is rewarding to see others engage with it without feeling overwhelmed by complicated interactions.

How this was made:

Initially, I used images from the internet, but I found it difficult to find ones that were nostalgic, aesthetic, and suitable for fullscreen display. Eventually, I turned to AI image generation, and while I’m often skeptical about its accuracy, this time I was genuinely pleased with the results. The generated images conveyed the intended atmosphere and worked well with the game’s visual design.

On the coding side, one of my main struggles was implementing the showSequence() function, which controls the timing of LED flashes. I initially attempted to manage it using simple flags, but the logic became unreliable. With help from ChatGPT, I learned to use millis() and a lastTime variable to precisely track time intervals. This made the LED sequence much more consistent and readable.
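
For reference, the non-blocking pattern described here could look roughly like this (the variable names and the 800 ms step are assumptions, not the project’s exact code):

function showSequence() {
  if (millis() - lastTime > 800) {             // one step per 800 ms
    lastTime = millis();
    if (seqIndex < sequence.length) {
      port.write(sequence[seqIndex] + "\n");   // flash the next LED
      seqIndex++;
    } else {
      port.write("-1\n");                      // all off; wait for player input
      showingSequence = false;
    }
  }
}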

Another area where AI support was valuable was in blurring images. I had initially applied a filter(BLUR, 10) directly to the image, but it unintentionally blurred the entire canvas. ChatGPT suggested using a separate blurLayer graphics buffer to isolate and control the blur effect. Additionally, I was manually creating each button’s design and behavior across different screens, which led to a lot of repetitive code. On AI’s suggestion, I created a reusable drawButton() function, which significantly simplified the interface logic and made the code cleaner.
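
The blurLayer idea in miniature (a sketch assuming memoryImg is the loaded memory image): blur an offscreen buffer once, then draw whichever version the game state calls for.

let blurLayer = createGraphics(width, height);   // offscreen buffer
blurLayer.image(memoryImg, 0, 0, width, height);
blurLayer.filter(BLUR, 10);                      // only the buffer gets blurred
image(blurLayer, 0, 0);                          // draw the blurred memory
// on a correct sequence, draw the clear image instead:
// image(memoryImg, 0, 0, width, height);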

AI also helped me understand Arduino’s serial communication functions more clearly. I learned the difference between readStringUntil() and readBytesUntil(), and how readStringUntil('\n') could be used to parse complete lines of input from the serial port, from this example: Arduino documentation.

Future Improvements

To enhance the experience, I’d like to implement multiple levels with increasing difficulty, where the LED sequences become longer or faster as the player progresses. This would add a sense of progression. Additionally, I’d like to add subtle background audio that evolves based on the player’s performance, making the experience more immersive. 

 

Week 13 – Shahram Chaudhry – User Testing

For this week’s usability test, I had one person try my interactive memory game without giving them any instructions at all. I wanted to see if the game made sense on its own and whether they could figure out what to do just by watching and interacting with it.

Right away, they understood that it was some kind of memory or reaction game. As soon as the LEDs started lighting up in a sequence, they naturally tried to repeat it using the buttons. They did not need the main written instructions to figure out what the goal of the game was.

There was a bit of confusion, though. Because of the slight delay that sometimes happens between Arduino and p5.js, they were not always sure if the game had registered their button press or how long they needed to wait before the next LED would turn on. They mentioned that the timing threw them off a little and that they were not sure whether the game was lagging or if the delay was intentional.

Link for user testing video.

What worked well

The buttons were very intuitive. The tester said it was easy to understand that you just press them and see what happens. Even though they skipped the main instructions, the messages that appear during the game, like “Watch the sequence” or “Press the buttons now,” were enough to guide them. The overall interaction felt simple and understandable.

Areas that could be improved

The biggest issue was the timing between the LEDs and the user inputs. The delay sometimes made the game feel slower than it should be, and the tester wasn’t sure when to focus on the LEDs and when to start pressing buttons. This came from the communication speed between p5.js and the Arduino, but from a user’s point of view it just felt like a pause with no explanation.

A simple fix here would be adding a small message before the game starts that says something like “The LEDs will light up in a pattern. Watch carefully. There may be a short delay before you can start pressing.” That way the player knows what to expect.

Another thing I noticed is that when the player presses the spacebar to start, the LED sequence begins almost right away. For someone who did not read the instructions, this means they have to quickly switch attention from the screen to the LED buttons with no warm up. Adding even a two second “Get ready…” screen would help the user settle in before the actual sequence begins.

Final thoughts

Even with the timing issues, the game was still very easy for the tester to understand. The layout is simple, the buttons are satisfying to press, and the idea of repeating a sequence feels familiar. The small delay caused some hesitation, but overall the interaction was still clear enough for someone to figure out without being told what to do.



Final Project Documentation

 Concept

For my final project, I wanted to build an installation that captures a piece of home: the experience of walking around a stupa in Nepal and spinning the prayer wheels. Growing up, I always loved the peaceful repetition of spinning each wheel, even before I fully understood their spiritual meaning. Over time, I learned that prayer wheels (Mani wheels) are believed to spread blessings and compassion every time they turn.

My goal was to translate that ritual into an interactive digital-physical artwork. The project consists of two main parts:

A physical prayer wheel that plays a sacred “Om” sound when spun.

A digitally illustrated stupa in p5.js, where touching different architectural sections on screen lights up the corresponding parts of the physical stupa.

Together, the two elements let the user explore both the ritualistic and symbolic aspects of the stupa.

How the Implementation Works
Interaction Design

The experience begins with the user spinning the prayer wheel. Instead of powering the wheel with a motor, I use the motor as a generator: when the wheel spins, it produces a small voltage. That voltage is read by the Arduino as a signal.

At the same time, the stupa illustration in p5.js acts like an interactive map. When the user touches different regions of the stupa (pinnacle, dome, Buddha’s eyes, mandala, etc.) in p5.js, p5.js sends a signal to the Arduino to light up those parts in physical space.

The design relies on:

Discovery: Users figure out what is touch-sensitive by interacting.

Cultural symbolism: Each part of the stupa has meaning, and the lighting reveals that visually.

Multi-modal feedback: Sound (prayer wheel), light (stupa), and animation (p5.js).

 

Arduino Code:

My Arduino reads four analog voltages from the prayer wheel motor and also listens for letters sent from p5.js. Each letter corresponds to a part of the stupa that should light up.

// ----- Analog input pins -----
const int volt2 = A2;   // Sensor or voltage input on A2
const int volt3 = A3;   // Sensor or voltage input on A3
const int volt4 = A4;   // Sensor or voltage input on A4
const int volt5 = A5;   // Sensor or voltage input on A5

// ----- Digital output pins (LEDs / relays / indicators) -----
const int pinnacle = 2;
const int thirteen = 3;
const int eye1 = 4;
const int eye2 = 5;
const int dome = 6;
const int mandala = 7;
const int flag = 8;

void setup() {
  // Configure analog input pins
  pinMode(volt2, INPUT);
  pinMode(volt3, INPUT);
  pinMode(volt4, INPUT);
  pinMode(volt5, INPUT);

  // Configure all digital output pins
  pinMode(pinnacle, OUTPUT);
  pinMode(thirteen, OUTPUT);
  pinMode(eye1, OUTPUT);
  pinMode(eye2, OUTPUT);
  pinMode(dome, OUTPUT);
  pinMode(mandala, OUTPUT);
  pinMode(flag, OUTPUT);

  // Start serial communication
  Serial.begin(9600);
}

void loop() {

  // ----- Read all analog inputs -----
  int a2 = analogRead(volt2);
  int a3 = analogRead(volt3);
  int a4 = analogRead(volt4);
  int a5 = analogRead(volt5);

  // ----- Send readings to the Serial Monitor (comma-separated) -----
  Serial.print(a2);
  Serial.print(",");
  Serial.print(a3);
  Serial.print(",");
  Serial.print(a4);
  Serial.print(",");
  Serial.print(a5);
  Serial.print("\n");

  delay(50); // Small delay for stable output



  // ----- Handle incoming serial commands -----
  while (Serial.available() > 0) {

    char message = Serial.read();   // Read one character command

    allOff();   // Always reset all outputs first



    // ----- Activate specific outputs based on incoming character -----
    if (message == 'p') {
      digitalWrite(pinnacle, HIGH);     // Turn on pinnacle
    }
    else if (message == 't') {
      digitalWrite(thirteen, HIGH);     // Turn on thirteen
    }
    else if (message == 'e') {          // "e" turns on both eyes
      digitalWrite(eye1, HIGH);
      digitalWrite(eye2, HIGH);
    }
    else if (message == 'd') {
      digitalWrite(dome, HIGH);         // Dome on
    }
    else if (message == 'm') {
      digitalWrite(mandala, HIGH);      // Mandala on
    }
    else if (message == 'f') {
      digitalWrite(flag, HIGH);         // Flag on
    }
    // Any other character is ignored
  }
}



// ----- Helper function: turn ALL outputs OFF -----
void allOff() {
  digitalWrite(pinnacle, LOW);
  digitalWrite(thirteen, LOW);
  digitalWrite(eye1, LOW);
  digitalWrite(eye2, LOW);
  digitalWrite(dome, LOW);
  digitalWrite(mandala, LOW);
  digitalWrite(flag, LOW);
}

Schematic of the Circuit

p5.js Code Description

The p5.js sketch does these things:

Draws multiple screens (welcome screen → prayer wheel → stupa)

Listens for sensor values sent from Arduino

Plays an “Om” sound only when the wheel spins

Sends letters (‘p’, ‘t’, ‘e’, ‘d’, ‘m’) back to Arduino to activate lights

Handles all on-screen interactions through mouse clicks

code:

// -------------------------------------------------------------
// GLOBAL VARIABLES
// -------------------------------------------------------------

let port;                 // Serial port object for Arduino communication
let button;               // Connect button
let open = false;         // Tracks whether the port is open
let trimvalue;            // Parsed Arduino sensor values
let screen = 1;           // Screen state controller
let sentCommand = false;  // Tracks if Arduino command is already sent
let soundPlaying = false; // Prevents OM sound from retriggering too fast


// -------------------------------------------------------------
// ASSET LOADING (Audio + Images)
// -------------------------------------------------------------
function preload() {

  /////////music////////
  om_sound = loadSound('om.mp3'); // sound from https://pixabay.com/music/search/om/

  // UI Images (all from canva.com as cited)
  welcomescreen = loadImage("startpage.png"); // image from canva.com
  screen2i = loadImage("screen2i.png");       // image from canva.com
  screen3i = loadImage("screen3i.png");       // image from canva.com
  screenpi = loadImage("screenpi.png");       // image from canva.com
  screenmi = loadImage("screenmi.png");       // image from canva.com
  screendi = loadImage("screendi.png");       // image from canva.com
  screenei = loadImage("screenei.png");       // image from canva.com
  screenti = loadImage("screenti.png");       // image from canva.com
}


// -------------------------------------------------------------
// SETUP FUNCTION — Runs once
// -------------------------------------------------------------
function setup() {

  createCanvas(400, 400);

  // Create serial port object (p5.js → Arduino communication bridge)
  port = createSerial();

  // Create connect button
  button = createButton("Connect to Arduino");
  button.position(width / 2 - 50, height / 2);
  button.mousePressed(openArduino); // Attach handler
}


// -------------------------------------------------------------
// OPEN ARDUINO SERIAL PORT
// -------------------------------------------------------------
function openArduino() {

  // If port is not already open, open it
  if (!port.opened()) {

    port.open(9600);   // Must match Arduino baud rate
    open = true;       // Mark port as open
    button.remove();   // Hide button after connecting
  }
}


// -------------------------------------------------------------
// MAIN DRAW LOOP — Runs continuously
// -------------------------------------------------------------
function draw() {

  // Only run UI + sound + sensor logic after port is open
  if (open == true) {

    // ---------------------------------------------------------
    // Screen Navigation
    // ---------------------------------------------------------
    if (screen == 1) {
      welcomescreenf();  // Start page
    }
    else if (screen == 2) {
      screen2f();
    }
    else if (screen == 3) {
      screen3f();
    }
    else if (screen == 4) {
      screenpf();
    }
    else if (screen == 7) {
      screend();
    }
    else if (screen == 8) {
      screenm();
    }

    // ---------------------------------------------------------
    // Read serial input (Arduino → p5.js)
    // ---------------------------------------------------------
    let value = port.readUntil("\n");  // Read full sensor line
    port.clear();                  // Clear leftover buffer

    trimvalue = value.trim().split(",");  
    console.log(trimvalue);        // Print array of sensor values


    // ---------------------------------------------------------
    // SOUND TRIGGER LOGIC — OM sound plays when any sensor > 0
    // ---------------------------------------------------------
    if (!soundPlaying) {

      if (
        parseInt(trimvalue[0]) > 0 ||
        parseInt(trimvalue[1]) > 0 ||
        parseInt(trimvalue[2]) > 0 ||
        parseInt(trimvalue[3]) > 0
      ) {
        soundPlaying = true;  // Prevents double-trigger

        om_sound.play();      // Play OM sound

        // Reset lock after sound finishes
        om_sound.onended(() => {
          soundPlaying = false;
        });
      }
    }
  }

  // If port is closed → pause sound
  else {
    om_sound.pause();
  }
}


// -------------------------------------------------------------
// WELCOME SCREEN
// -------------------------------------------------------------
function welcomescreenf() {
  image(welcomescreen, 0, 0, 400, 400);
}


// -------------------------------------------------------------
// MOUSE-PRESSED HANDLER FOR SCREEN NAVIGATION + ARDUINO COMMANDS
// -------------------------------------------------------------
function mousePressed() {

  // ---------------- Screen 1 → Screen 2 -----------------
  if (screen == 1 &&
      mouseX >= 135 && mouseX <= 263 &&
      mouseY >= 354 && mouseY <= 371) {

    screen2f();
  }

  // ---------------- Screen 2 → Screen 3 -----------------
  else if (screen == 2 &&
           mouseX >= 120 && mouseX <= 346 &&
           mouseY >= 192 && mouseY <= 366) {

    screen3f();
  }

  // ---------------- Screen 3 Interactive Hotspots -----------------
  else if (screen == 3) {

    // Pinnacle (Top)
    if (mouseInside(192, 211, 117, 144)) {
      screenpf();  // Arduino: 'p'
    }

    // Thirteen tiers
    else if (mouseInside(185, 225, 147, 178)) {
      screent();   // Arduino: 't'
    }

    // Eyes
    else if (mouseInside(183, 244, 183, 195)) {
      screene();   // Arduino: 'e'
    }

    // Dome
    else if (mouseInside(124, 289, 194, 233)) {
      screend();   // Arduino: 'd'
    }

    // Mandala
    else if (mouseInside(0, 400, 240, 286)) {
      screen = 8;
      screenm();   // Arduino: 'm'
    }
  }

  // ---------------- Back Buttons for All Detail Screens -----------------

  else if (screen == 4 && mouseInside(148, 240, 339, 355)) goBackToMain();
  else if (screen == 5 && mouseInside(126, 274, 302, 325)) goBackToMain();
  else if (screen == 6 && mouseInside(122, 260, 302, 326)) goBackToMain();
  else if (screen == 7 && mouseInside(129, 274, 305, 329)) goBackToMain();
  else if (screen == 8 && mouseInside(115, 259, 304, 325)) goBackToMain();
}


// -------------------------------------------------------------
// HELPERS
// -------------------------------------------------------------

// Reusable function for BACK NAVIGATION
function goBackToMain() {
  port.write(' ');  // Sends "turn everything OFF" to Arduino
  screen = 3;
  screen3f();
}

// Check if mouse is inside a bounding box
function mouseInside(x1, x2, y1, y2) {
  return mouseX >= x1 && mouseX <= x2 &&
         mouseY >= y1 && mouseY <= y2;
}


// -------------------------------------------------------------
// SCREEN FUNCTIONS + ARDUINO COMMANDS
// -------------------------------------------------------------

function screen2f() {
  image(screen2i, 0, 0, 400, 400);
  screen = 2;
}

function screen3f() {
  image(screen3i, 0, 0, 400, 400);
  screen = 3;
}

function screenpf() {
  image(screenpi, 0, 0, 400, 400);
  port.write('p');  // Send “pinnacle”
  screen = 4;
}

function screent() {
  image(screenti, 0, 0, 400, 400);
  port.write('t');
  screen = 5;
}

function screene() {
  image(screenei, 0, 0, 400, 400);
  port.write('e');
  screen = 6;
}

function screend() {
  image(screendi, 0, 0, 400, 400);
  port.write('d');
  screen = 7;
}

function screenm() {
  image(screenmi, 0, 0, 400, 400);
  port.write('m');
  screen = 8;
}

p5js screen:

Full Screen p5js code:

// -------------------------------------------------------------
// GLOBAL VARIABLES
// -------------------------------------------------------------

let port;                 // Serial port object for Arduino communication
let button;               // Connect button
let open = false;         // Tracks whether the port is open
let trimvalue;            // Parsed Arduino sensor values
let screen = 1;           // Screen state controller
let sentCommand = false;  // Tracks if Arduino command is already sent
let soundPlaying = false; // Prevents OM sound from retriggering too fast


// -------------------------------------------------------------
// ASSET LOADING (Audio + Images)
// -------------------------------------------------------------
function preload() {

  /////////music////////
  om_sound = loadSound('om.mp3'); // sound from https://pixabay.com/music/search/om/

  // UI Images (all from canva.com as cited)
  welcomescreen = loadImage("startpage.png"); // image from canva.com
  screen2i = loadImage("screen2i.png");       // image from canva.com
  screen3i = loadImage("screen3i.png");       // image from canva.com
  screenpi = loadImage("screenpi.png");       // image from canva.com
  screenmi = loadImage("screenmi.png");       // image from canva.com
  screendi = loadImage("screendi.png");       // image from canva.com
  screenei = loadImage("screenei.png");       // image from canva.com
  screenti = loadImage("screenti.png");       // image from canva.com
}

// -------------------------------------------------------------
// SETUP — now starts in window size but ready for fullscreen
// -------------------------------------------------------------
function setup() {
  createCanvas(400, 400);           // your original size
  // If you open the sketch in a new tab it will start fullscreen-ready

  port = createSerial();

  button = createButton("Connect to Arduino");
  button.position(width / 2 - 50, height / 2);
  button.mousePressed(openArduino);
}

// -------------------------------------------------------------
// MAKE CANVAS ALWAYS FILL THE SCREEN (even after fullscreen)
// -------------------------------------------------------------
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

// -------------------------------------------------------------
// PRESS "F" → TOGGLE FULLSCREEN
// -------------------------------------------------------------
function keyPressed() {
  if (key === 'f' || key === 'F') {
    let fs = fullscreen();
    fullscreen(!fs);
  }
}

// -------------------------------------------------------------
// MAIN DRAW LOOP
// -------------------------------------------------------------
function draw() {
  background(0);

  if (open == true) {

    // ---------------------------------------------------------
    // Screen Navigation (your original images, now scaled)
    // ---------------------------------------------------------
    if (screen == 1) {
      image(welcomescreen, 0, 0, width, height);
    }
    else if (screen == 2) {
      image(screen2i, 0, 0, width, height);
    }
    else if (screen == 3) {
      image(screen3i, 0, 0, width, height);
    }
    else if (screen == 4) {
      image(screenpi, 0, 0, width, height);
    }
    else if (screen == 5) {
      image(screenti, 0, 0, width, height);
    }
    else if (screen == 6) {
      image(screenei, 0, 0, width, height);
    }
    else if (screen == 7) {
      image(screendi, 0, 0, width, height);
    }
    else if (screen == 8) {
      image(screenmi, 0, 0, width, height);
    }

    // ---------------------------------------------------------
    // Read serial input
    // ---------------------------------------------------------
    let value = port.readUntil("\n");
    port.clear();

    if (value != "") {
      trimvalue = value.trim().split(",");  
      console.log(trimvalue);

      // ---------------------------------------------------------
      // SOUND TRIGGER LOGIC
      // ---------------------------------------------------------
      if (!soundPlaying) {
        if (
          parseInt(trimvalue[0]) > 0 ||
          parseInt(trimvalue[1]) > 0 ||
          parseInt(trimvalue[2]) > 0 ||
          parseInt(trimvalue[3]) > 0
        ) {
          soundPlaying = true;
          om_sound.play();

          om_sound.onended(() => {
            soundPlaying = false;
          });
        }
      }
    }
  }
  else {
    om_sound.pause();
  }
}

// -------------------------------------------------------------
// MOUSE-PRESSED HANDLER — scaled coordinates for fullscreen
// -------------------------------------------------------------
function mousePressed() {

  if (!open) return;

  // Scale mouse coordinates to original 400×400 layout
  let mx = mouseX * 400 / width;
  let my = mouseY * 400 / height;

  // ---------------------------------------------------------
  // SCREEN 1 → Start Button
  // ---------------------------------------------------------
  if (screen == 1) {
    if (mx >= 135 && mx <= 263 && my >= 354 && my <= 371) {
      screen = 2;
    }
  }

  // ---------------------------------------------------------
  // SCREEN 2 → Next Button
  // ---------------------------------------------------------
  else if (screen == 2) {
    if (mx >= 120 && mx <= 346 && my >= 192 && my <= 366) {
      screen = 3;
    }
  }

  // ---------------------------------------------------------
  // SCREEN 3 → 5 Interactive Zones
  // ---------------------------------------------------------
  else if (screen == 3) {

    // PF
    if (mx >= 192 && mx <= 211 && my >= 117 && my <= 144) { 
      screenpf(); 
    }

    // T
    else if (mx >= 185 && mx <= 225 && my >= 147 && my <= 178) { 
      screent(); 
    }

    // E
    else if (mx >= 183 && mx <= 244 && my >= 183 && my <= 195) { 
      screene(); 
    }

    // D
    else if (mx >= 124 && mx <= 289 && my >= 194 && my <= 233) { 
      screend(); 
    }

    // M
    else if (mx >= 0 && mx <= 400 && my >= 240 && my <= 286) { 
      screenm(); 
    }
  }

  // ---------------------------------------------------------
  // BACK BUTTONS FOR SCREENS 4–8
  // ---------------------------------------------------------
  else if (screen == 4) {
    if (mx >= 148 && mx <= 240 && my >= 339 && my <= 355) goBackToMain();
  }

  else if (screen == 5) {
    if (mx >= 126 && mx <= 274 && my >= 302 && my <= 325) goBackToMain();
  }

  else if (screen == 6) {
    if (mx >= 122 && mx <= 260 && my >= 302 && my <= 326) goBackToMain();
  }

  else if (screen == 7) {
    if (mx >= 129 && mx <= 274 && my >= 305 && my <= 329) goBackToMain();
  }

  else if (screen == 8) {
    if (mx >= 115 && mx <= 259 && my >= 304 && my <= 325) goBackToMain();
  }
}

// -------------------------------------------------------------
// HELPERS
// -------------------------------------------------------------
function goBackToMain() {
  port.write(' ');  // turn everything off
  screen = 3;
}

// -------------------------------------------------------------
// SCREEN FUNCTIONS + ARDUINO COMMANDS
// -------------------------------------------------------------
function screenpf() {
  port.write('p');
  screen = 4;
}
function screent() {
  port.write('t');
  screen = 5;
}
function screene() {
  port.write('e');
  screen = 6;
}
function screend() {
  port.write('d');
  screen = 7;
}
function screenm() {
  port.write('m');
  screen = 8;
}

// -------------------------------------------------------------
// OPEN ARDUINO SERIAL PORT
// -------------------------------------------------------------
function openArduino() {
  if (!port.opened()) {
    port.open(9600);
    open = true;
    button.remove();
  }
}

 

Arduino and p5.js Communication

Flow:

User spins the wheel → the spinning motor generates a voltage (dynamo effect)

Arduino reads values → prints as CSV string

p5.js reads the CSV → detects movement → plays sound

User clicks a part of the stupa in p5.js → p5 sends a letter

Arduino receives the letter → lights the corresponding LEDs

This loop creates a tight physical and digital connection.

What I’m Proud Of

One of the things I’m most proud of in this project is the way I used a motor as a sensor. In class, we mostly learned how to drive a motor: how to make it spin, how to control its speed, how to power it. But we never talked much about using a motor in reverse, as a generator. The idea actually came from something I learned back in high school: when you spin a motor manually, it becomes a dynamo and creates a small voltage. Remembering that old concept and realizing I could apply it here felt like a huge breakthrough. Instead of attaching extra sensors or complicated hardware, I turned the motor itself into the perfect input device for the prayer wheel. It made the interaction feel more authentic, and it made me feel resourceful, like I had made something out of almost nothing.
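
Since the p5.js side only needs to know “is the wheel moving,” the detection can stay simple. The draw() loop above fires whenever any CSV field is above zero; a slightly more defensive variant, sketched here, smooths the dynamo reading first so electrical noise can’t start the “Om” sound (the field index, threshold, and smoothing factor are illustrative assumptions, not my exact values):

let smoothVolt = 0;             // smoothed dynamo reading
const SPIN_THRESHOLD = 30;      // assumed ADC threshold; tune per motor

function wheelIsSpinning(csvFields) {
  let raw = parseInt(csvFields[0]);          // assume the dynamo value is field 0
  if (isNaN(raw)) return false;
  smoothVolt = lerp(smoothVolt, raw, 0.3);   // low-pass filter the jittery signal
  return smoothVolt > SPIN_THRESHOLD;        // true only for a sustained spin
}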

I’m also proud that the project became more than just a technical assignment. A lot of Arduino + p5.js demos end up being simple lights or sliders, but I wanted my project to feel culturally grounded and emotionally meaningful. Recreating the experience of spinning a prayer wheel and interacting with a stupa allowed me to share a part of Nepalese culture in a way that felt personal. It wasn’t just “here’s a sensor and an LED”; it was a small spiritual journey for the user. The moment the “Om” sound plays when the wheel turns, it feels like the installation is breathing with you.

Finally, I’m proud of creating a fully two-way communication system between Arduino and p5.js. At the beginning of the semester, I struggled even to understand how serial communication worked. But in this project, the Arduino and p5.js are constantly talking to each other. Arduino sends sensor data to p5.js, p5.js analyzes it, then sends back precise commands to control the lights on the physical stupa. This feedback loop makes the experience feel alive and responsive. Building this system made me feel like I actually understand how physical computing and digital interaction can merge into one continuous experience.

Overall, the project pushed me technically, creatively, and culturally. It’s the first time I felt like I wasn’t just completing a class assignment. I was creating something that feels like mine.

How This Was Made

I used several tools throughout this project:

Arduino UNO for sensing and controlling LEDs

p5.js for the interactive visuals and sound

Adobe Illustrator / Canva for drawing the stupa

A small DC motor as a dynamo sensor

WordPress to document the process

Generative AI (ChatGPT) to help debug my Arduino code and explain concepts more clearly

The write-up for this project came together very organically. Instead of sitting down and trying to write a perfect report in one go, I started by brainstorming everything in my head and dumping ideas onto paper. I wrote down fragments of thoughts, sketches of memories about stupas and prayer wheels, notes about how the interactions should feel, and even some quick diagrams. It was messy at first, but that process helped me understand what parts of the project mattered the most to me.

From there, I organized the ideas into sections—concept, interaction, technical breakdown, cultural meaning, challenges, and future improvements. I rewrote and refined them little by little. Some parts came from real experiences I’ve had at stupas in Nepal, and others came from experimenting with the Arduino and p5.js until something clicked. Once I had the raw content, I shaped it into the final narrative you see here.

 

Areas for future improvement:

I also want to add more lights and more detailed lighting zones on the stupa. At the moment, the LEDs represent the main sections (like the pinnacle, dome, eyes, mandala, etc.), but the lighting could be much richer. I imagine having multiple LEDs in each section, maybe even different colors or subtle animations (like pulsing or fading) to show the sacredness and energy of that part of the structure. More lights would not only make the physical model more visually striking, but also help guide the user’s attention and make the mapping between touch and light feel clearer.

Lastly, I’d like to include more educational content about stupa symbolism. Right now, the project hints at the meanings (like the dome representing the world or the eyes representing wisdom), but I could go deeper. For example, when a section lights up, a short description could appear explaining its spiritual role, history, or connection to Buddhist philosophy. This would turn the installation not just into an interactive artwork, but also a small learning experience about Nepali and Himalayan culture.

Final working video:

Google Drive link


Week 13: User testing

Most users were able to understand the basic concept of interacting with the prayer wheel and the stupa. They could figure out that touching different parts of the stupa would cause the lights to turn on and trigger audio responses. However, the main area of confusion was which specific areas of the stupa were touch-sensitive. Users often touched the wrong parts and then tried to adjust their behavior based on trial and error.

Even though the mapping between each touch point and the animation/light feedback did make sense to them after discovery, the initial uncertainty slowed down the interaction. Some users expressed that they “felt like they knew better,” meaning they expected the interactive areas to match their own understanding of how a stupa is structured, rather than how my sensor layout was designed.

What Worked Well & What Could Be Improved

The prayer wheel interaction worked especially well. Users intuitively understood that the wheel was meant to be spun, and the connection between spinning motion and sound was clear. The feedback loop felt natural and satisfying.

The lighting on the stupa, however, could be improved. While the concept of touching different architectural parts (pinnacle, dome, mandala, etc.) was meaningful, the technical responsiveness wasn’t always consistent. This inconsistency made some users second-guess whether they were interacting correctly or whether the system was malfunctioning. Improving sensor sensitivity or adding clearer visual affordances would help eliminate this confusion.

Areas I Felt the Need to Explain

The two aspects that required the most explanation were:

The cultural concept of the prayer wheel
Some users were not familiar with what a prayer wheel is or how it traditionally functions. This lack of background knowledge made the interaction less immediately intuitive for them.

Which specific parts of the stupa activate the LEDs
The stupa layout visually makes sense, but it wasn’t obvious to users where to touch. They needed guidance on how the different zones mapped to the lighting changes.

To make these areas clearer in the future, I could incorporate visual cues, such as subtle highlights, icons, or glowing outlines that indicate interactive regions. Alternatively, I could add a brief onboarding animation that demonstrates the interaction without relying on text.

User testing video: Google Drive

Week 13 – Prototyping and User Testing

I had a grand, academic ambition when I started my final project. My midterm, Pitchy Bird, had successfully used vocal pitch to control a character. For the final, I wanted to escalate the complexity to gamify the Implicit Association Test (IAT). The concept was Flappy IAT, a game where players navigated by physically “bumping” a controller while simultaneously tilting it left or right to categorize words (e.g., “Good” vs. “Bad”) displayed on pipes.

It sounded brilliant on paper; in practice, it became a lesson in how an ambitious outlook (conceptual AND technical) can suffocate user experience.

User Testing

My game’s initial prompt for the user was: “Bump to fly, Tilt Left for Green/Good, Tilt Right for Red/Bad.”

There was immediate confusion when I described it to people, and again when I later handed them a prototype without instructions. The cognitive load was simply too high: keeping the bird afloat WHILE processing semantic text on moving pipes. This became clear immediately in user testing, even with a keyboard-simulated prototype.

Even when users understood the control logic, “Bump” didn’t play well physically. Users weren’t sure whether they should tap the device or shake it. The motion was really a “Wave”: a sudden vertical movement, regardless of angle. I decided to change the internal and external naming from BUMP to WAVE, which made the game sound nicer and the action more intuitive.

Technical Challenges

Behind the scenes, the technical implementation was really a “battlefield”. I was using an Arduino with an MMA8452Q accelerometer sending data to p5.js via Web Serial.

Calibration Nightmare

In the first iteration, I assumed the controller would calibrate itself and have some error tolerance. It had none. Hardcoded thresholds (e.g., expecting 1.0 g on the Z-axis at rest) failed the moment a user held the device at a comfortable, natural angle.

I had to write a manual calibration routine calibrateRestPosition() that runs at startup. It takes 20 samples to establish a “zero” baseline for the user’s specific grip, allowing the game to detect relative changes rather than absolute ones.
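
Whether that routine lives on the Arduino or in p5.js, the logic is the same; here is a minimal JavaScript sketch of the idea (calibrateRestPosition() is the real routine name, everything else is illustrative):

let restZ = 0;           // “zero” baseline for the user’s grip
let calibSamples = [];   // raw Z-axis readings collected at startup

function calibrateRestPosition(rawZ) {
  calibSamples.push(rawZ);
  if (calibSamples.length === 20) {
    // average 20 samples so one shaky reading can’t skew the baseline
    restZ = calibSamples.reduce((a, b) => a + b, 0) / calibSamples.length;
  }
}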

Sensitivity Tuning

Initially, the bird was sluggish. Users were waving frantically with no response, or triggering two responses as they waved up AND down. The threshold was set to 1.0f (requiring 1 g of force above gravity) and was bidirectional. I lowered it to 0.5f to make the “Wave” detection feel snappy and responsive, and made it one-directional, so a downward motion “bumps” the bird up.
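
Continuing the sketch above, the tuned detection ends up looking roughly like this (the cooldown length and the sign of the jerk are assumptions; both depend on sensor orientation and feel):

const WAVE_THRESHOLD = 0.5;   // lowered from 1.0, which was far too stiff
let lastWaveFrame = -60;      // debounce bookkeeping

function detectWave(rawZ) {
  let delta = rawZ - restZ;   // relative to the calibrated grip, not absolute g
  // one-directional: only the downward jerk counts, so the up-swing of the
  // same wave can’t trigger a second flap
  if (delta < -WAVE_THRESHOLD && frameCount - lastWaveFrame > 15) {
    lastWaveFrame = frameCount;
    return true;
  }
  return false;
}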

Killing the IAT

By this point, I had built elaborate logic for stimuli arrays, category pairs (like “Female Doctor”), and complex CSV logging for reaction times. But when I looked honestly at the codebase and the user experience, the game wasn’t fun. It was stressful and buggy.

The IAT logic was causing errors in the drawing loop, and the “Game Over” screen was breaking because the state management between the IAT trials and the physics engine was out of sync. Users were dying because of unfamiliarity and a buggy interface.

I commented out the entire IAT part.

  • I removed the word overlays and the tilt logic.
  • I optimized the Arduino code to stop listening for X/Y tilt axes entirely to save processing power.
  • I refactored the name from Flappy IAT to Wavy Bird.

Refining Core Experience

Once I removed the “noise” of the psychological test, I could focus on making the actual game better.

  • The Tutorial: I noticed users could die instantly upon starting. I introduced a “Tutorial Mode” where collisions don’t kill you but reset the bird’s position.
  • The “Game Over” Loop: A persistent bug caused the screen to go blank upon death instead of showing the score. I fixed the state transition so that handleCollision correctly triggers the Game Over screen, allowing a restart without refreshing the page (roughly as sketched below).
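
A rough sketch of the fixed handler (handleCollision is the real function; the state and bird variables are illustrative names):

let tutorialMode = true;        // illustrative flag set by Tutorial Mode
let gameState = "PLAYING";      // illustrative state variable
let birdY = 200, birdVelocity = 0;

function handleCollision() {
  if (tutorialMode) {
    birdY = height / 2;         // tutorial: reset the bird instead of killing it
    birdVelocity = 0;
    return;
  }
  gameState = "GAME_OVER";      // the missing transition that used to blank the screen
}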

Conclusion

The project I ended up with is smaller than the one I proposed, but it was a worthwhile trade-off, particularly while managing three final projects this semester. By cutting the IAT feature, I could focus, turning the game from a broken research tool into a polished physical game.

If one has to explain the controls for five minutes before someone can play, the design is too complex. Sometimes, we just need to wave.

Week 14 – Final Project Documentation

Concept and Inspiration:

The concept for my project was to create a playful and intuitive physical DJ booth where anyone, regardless of technical experience, could press buttons and instantly feel like a DJ. My inspiration came from real DJ booths, toy soundboards, and the desire to blend physical interaction with coding and visuals. I wanted something energetic, colorful, and fun that made people smile the second they pressed a button. The design reflects my aesthetic as an IM student: bright colors, cute graphics, and interactive storytelling through sound and visuals.

 

How the Implementation Works:

The project uses an Arduino Uno connected to five physical switches (four push buttons and one toggle switch). Each button corresponds to a unique sound effect, while the toggle controls a looping background beat. The Arduino sends signals over Web Serial to p5.js, which handles audio playback, a spinning animated record, spark effects, and screen navigation. Pressing a button triggers communication from the Arduino to the browser, and the p5.js sketch reacts immediately with sound, visuals, and color changes.

Interaction Design:

The interaction design is intentionally simple and intuitive. Users begin with a welcome screen, then view instructions, and finally arrive at the DJ booth environment. During user testing, people instantly understood what to do: if you see a button, you press it. The toggle switch clearly communicates “main beat,” while the colored push buttons invite exploration. The interface includes a spinning record that flashes in the color of the button pressed, along with spark effects that reinforce the beat visually. Everything is designed so that the user feels like they are performing music live, without needing instructions.

Description of the Arduino Code:

The Arduino code uses five digital input pins, each connected to a button or toggle switch. Inputs use pinMode(pin, INPUT_PULLUP) so the board can detect when each switch is grounded. When a button is pressed, the Arduino sends a single letter over Serial (e.g., 'B' for blue, 'R' for red). For the toggle switch that controls the looping beat, it sends 'L' when switched on and 'l' when switched off. This keeps response times fast and lag minimal. Every press is immediately transmitted to the p5.js sketch.

// DJ Booth Final Project 
const int BTN_YELLOW_PIN = 2;   // meow.mp3
const int BTN_BLUE_PIN   = 3;   // funkydrums.mp3
const int BTN_RED_PIN    = 4;   // snare.mp3
const int BTN_GREEN_PIN  = 5;   // scratch.mp3
const int TOGGLE_PIN     = 6;   // loopbeat.mp3

// Track previous states
int lastYellowState = HIGH;
int lastBlueState   = HIGH;
int lastRedState    = HIGH;
int lastGreenState  = HIGH;
int lastToggleState = HIGH;

void setup() {
  Serial.begin(9600);

  pinMode(BTN_YELLOW_PIN, INPUT_PULLUP);
  pinMode(BTN_BLUE_PIN,   INPUT_PULLUP);
  pinMode(BTN_RED_PIN,    INPUT_PULLUP);
  pinMode(BTN_GREEN_PIN,  INPUT_PULLUP);
  pinMode(TOGGLE_PIN,     INPUT_PULLUP);

  Serial.println("DJ buttons ready");
}

void loop() {
  //Read all current states
  int yellowState = digitalRead(BTN_YELLOW_PIN);
  int blueState   = digitalRead(BTN_BLUE_PIN);
  int redState    = digitalRead(BTN_RED_PIN);
  int greenState  = digitalRead(BTN_GREEN_PIN);
  int toggleState = digitalRead(TOGGLE_PIN);

  // Momentary buttons:
  if (yellowState == LOW && lastYellowState == HIGH) {
    Serial.println("Y");
  }
  if (blueState == LOW && lastBlueState == HIGH) {
    Serial.println("B");
  }
  if (redState == LOW && lastRedState == HIGH) {
    Serial.println("R");
  }
  if (greenState == LOW && lastGreenState == HIGH) {
    Serial.println("G");
  }

  // Toggle switch: 
  if (toggleState != lastToggleState) {
    if (toggleState == LOW) {
      Serial.println("L");   // loop ON
    } else {
      Serial.println("l");   // loop OFF
    }
  }

  // Update previous states
  lastYellowState = yellowState;
  lastBlueState   = blueState;
  lastRedState    = redState;
  lastGreenState  = greenState;
  lastToggleState = toggleState;

  delay(10);
}

Explanation of the Schematic:

  • One leg of every button goes to GND
  • The other leg goes to a digital pin on the Arduino (Yellow→D2, Blue→D3, Red→D4, Green→D5, Toggle→D6, matching the pin constants in the code above).
  • Because `INPUT_PULLUP` is used, each button reads HIGH when untouched and LOW when pressed.
  • Thanks to the internal pull-ups, I didn’t need any external resistors, which saved a lot of time in wiring.
  • All grounds share a common GND rail.
  • The toggle switch behaves the same as a button but stays latched, which makes it perfect for a looping beat.

Description of the p5.js Code:

The p5.js code controls the full experience: screen navigation, graphics, audio playback, animations, and effects. It includes three screens: welcome, instructions, and play. For sound, p5.js loads separate “.mp3” files for each button. When a button is pressed, the code plays the sound and triggers a color flash and particle explosion around the spinning record. When the toggle is on, a purple base color is applied to the record and a looping track plays continuously. The code also handles resizing, UI buttons, custom fonts, background images, and smooth animation effects.

Communication Between Arduino and p5.js:

Communication is handled using Web Serial, which allows the browser to read serial data from the Arduino. When the user clicks “Connect to Arduino,” the browser opens a serial port. Every button press sends a letter from the Arduino, which p5.js reads asynchronously inside handleSerialCommand(). Depending on the letter received, p5.js plays a sound, updates visuals, and triggers effects. This real-time communication creates a seamless physical-to-digital interaction.
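
In practice, handleSerialCommand() is essentially a dispatch on the incoming letter. A trimmed sketch of the idea (the letters match the Arduino code above, spawnSparks() is shown later on this page, and the sound variables are illustrative names for p5.SoundFile objects loaded in preload()):

function handleSerialCommand(cmd) {
  if      (cmd === 'Y') { meowSound.play();    spawnSparks(color(255, 220, 0)); }
  else if (cmd === 'B') { drumsSound.play();   spawnSparks(color(0, 120, 255)); }
  else if (cmd === 'R') { snareSound.play();   spawnSparks(color(255, 60, 60)); }
  else if (cmd === 'G') { scratchSound.play(); spawnSparks(color(0, 200, 90)); }
  else if (cmd === 'L') { loopBeat.loop(); }   // toggle on: start the looping beat
  else if (cmd === 'l') { loopBeat.stop(); }   // toggle off
}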

Aspects of the Project I’m Proud Of:

I’m especially proud of how smooth and responsive the experience feels. The combination of physical buttons, instant audio playback, color flashes, and spark effects makes the project feel alive. The UI design using a custom background and logo gives everything a polished, cohesive aesthetic. I’m also proud of the user testing results, people were genuinely excited to play with it and immediately understood how it worked. I loved how tactile the buttons felt and how the overall design turned out. I’m genuinely very proud that I got the serial connection to work quite easily because that was the part I had least practice with, so it was quite intimidating at the start.

How This Was Made (Tools + Process + AI Disclosure):

This project was created using an Arduino Uno, momentary push buttons, a toggle switch, a breadboard, and p5.js for visuals and audio. I designed the project through several stages: wiring and prototyping, interface design, audio testing, and finally the integration of everything over the serial connection.

For the visuals, the logo and all background illustrations were generated using ChatGPT’s image generation, then edited slightly in Procreate and integrated into the p5.js sketch. This allowed the look of the project to match my playful, colorful DJ booth theme.

I also used ChatGPT when I hit my major roadblock with more advanced p5.js visuals. One moment where I relied heavily on AI help was when I wanted the spinning record to generate spark effects every time a button was pressed. I couldn’t find any tutorials or examples online that matched what I wanted. The spark idea was really important for the UI experience because it gives the user instant visual feedback that their button press produced a beat.

ChatGPT helped me write and understand the logic. These are the exact lines of code that were produced with AI assistance, which I then tweaked slightly:

function spawnSparks(col) {
  for (let i = 0; i < 25; i++) {
    let angle = random(TWO_PI);
    let speed = random(2, 6);
    sparks.push({
      x: 0,
      y: 0,
      vx: cos(angle) * speed,
      vy: sin(angle) * speed,
      age: 0,
      life: random(20, 40),
      col: col
    });
  }
}

function updateAndDrawSparks() {
  for (let i = sparks.length - 1; i >= 0; i--) {
    let s = sparks[i];
    s.age++;
    s.x += s.vx;
    s.y += s.vy;
    s.vx *= 0.95;
    s.vy *= 0.95;

    let alpha = map(s.age, 0, s.life, 255, 0);

    if (s.age > s.life || alpha <= 0) {
      sparks.splice(i, 1);
      continue;
    }

    noStroke();
    fill(red(s.col), green(s.col), blue(s.col), alpha);
    ellipse(s.x, s.y, 8, 8);
  }
}

These functions generate a small explosion of colored sparks around the record every time the user presses a button. Without ChatGPT, I wouldn’t have figured it out, and I might not have included it at all, but it became a core part of the experience. All of the actual board design, sound integration, wiring, physical construction, and final design choices were done by me; AI supported me when I was genuinely stuck and unable to find online resources.
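
One design note: the sparks spawn at (0, 0), which only makes sense because draw() presumably translates the origin to the record’s center before drawing them, roughly like this (recordX and recordY are illustrative names):

// inside draw(), before drawing the record and its sparks
push();
translate(recordX, recordY);   // move the origin to the spinning record
updateAndDrawSparks();         // sparks then render relative to the record
pop();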

The physical box for my DJ Booth actually started as an old shoebox that I spray-painted and transformed into the final enclosure. I measured each button carefully and cut out holes so they would fit snugly without wobbling, which made the whole build feel more secure and polished. It was also my first time soldering, which was definitely intimidating at first, holding the iron, melting the solder, and checking if it actually was functional with the multimeter was a learning curve. But once I printed and attached the buttons to the board design on top, everything finally looked cohesive and intentional.

Before building it physically, I sketched the layout in Procreate so I could plan the shape and spacing of the buttons; that helped me visualize how the final booth should feel. For the code, I worked in clear, manageable chunks because I wanted the experience to stay minimal (just three pages) but still feel smooth, refined, and complete. I started with the basics, getting the Arduino buttons to send the correct serial messages, then moved on to the p5.js screens, audio, and visuals. After that, I added the more advanced elements: the sparks, the color changes, and the spinning record. Breaking the work up this way kept me organized, let the hardware and UI come together cleanly, and made the entire project feel polished and intentional.

 

Credits/Inspiration:

  • The logo and backgrounds were generated with ChatGPT
  • The board design was made on Canva
  • Sound: free sound-effect libraries, specifically www.freesound.org
  • Inspiration: I was inspired by the Pioneer DJ All-In-One systems (https://www.pioneerdj.com/en/product/all-in-one-system/), especially their intuitive layouts and the way each button or dial has a clear purpose. I loved how professional DJ booths combine simplicity with powerful control, and I wanted to translate that concept into a playful, beginner-friendly version using Arduino and p5.js. My project recreates the feeling of pressing real DJ buttons and triggering beats, but in a simplified, interactive form that anyone can instantly understand.
  • I was also inspired by arcade-style soundboards, specifically https://www.myinstants.com/en/search/?name=arcade. This website really captures the essence of what I wanted my DJ booth to sound and feel like.
  • Special thanks to Prof. Mang for always answering my questions in class 🙂

Challenges & How I Overcame Them:

Debugging this project took much longer than I expected. At one point, nothing was working: the serial messages weren’t being read correctly, the sounds weren’t triggering, and the interface wasn’t behaving the way it should. I kept trying to fix everything at once, which only made the process more overwhelming. Eventually, I realized the best thing I could do was take a break and step away from the code. When I came back with a fresh pair of eyes, it was so much easier to spot the mistakes I had been missing. Small things like typos, out-of-place brackets, or mismatched variable names were suddenly obvious. That moment taught me how important rest is when troubleshooting; sometimes clarity only comes after giving yourself space to breathe.

Future Improvements:

In the future, I would love to add a recording feature that captures the user’s live performance and lets them play it back. I could also add a volume knob using a potentiometer, though during prototyping I realized it cluttered the interface and confused users. Additional ideas include adding more sound banks, using RGB LEDs for visual feedback, and building a more polished physical enclosure to resemble a small DJ mixer.

IM Showcase Highlights:

My Interactive Media showcase went really well, people were genuinely excited about my DJ booth, and it felt like everyone wanted a turn to play. What made me happiest was seeing how quickly they understood the user interface without me needing to explain much; the controls felt intuitive, so they could jump straight into the experience and focus on having fun. Watching friends and classmates interact with something I built from scratch was honestly so rewarding, and it made all the time spent designing, testing, and refining the project feel completely worth it. Overall, the showcase experience was both exciting and validating, and it left me feeling proud of how the booth brought people together in such an engaging way.