Shahram Chaudhry – Final Project Documentation

Concept

For my midterm project, I created a mind palace experience where users could choose to get rid of memories. For my final project, I wanted to continue exploring the theme of memory, but take it in the opposite direction. This time, users can’t choose whether they lose a memory or not; instead, it depends on their ability to remember.

In Faded, LEDs light up in a sequence, and the user has to repeat the sequence correctly. A blurred image on the screen represents a memory: if the user recalls the pattern correctly, the image becomes clear. If they make a mistake, the memory is lost, and the image fades.

It’s a slightly ironic twist: to hold on to a memory, you need a good memory. The project is a small reflection on how we remember, and how easily we forget.

Pictures/Videos

Final Interaction Video

How does the implementation work?

Interaction Design

  • The experience begins with a landing screen and an instructions page.
  • A “Connect to Arduino” button makes establishing the serial connection easy.
  • The system is designed to enter fullscreen on mouse click, but also includes a failsafe (a minimal sketch of this screen flow follows the list):
    • If the user presses the spacebar before going fullscreen, it first triggers fullscreen mode.
    • On the next spacebar press, the game starts. (For the showcase, the experience will remain in fullscreen from the start.)
  • On starting the game, a blurred image (memory) is displayed on screen.
  • A random sequence of 4 LEDs flashes one by one.
  • The player must repeat the sequence using the corresponding physical buttons.
  • If the sequence is correct:
    • The blurred image becomes clear, representing a remembered memory.
  • If the player makes a mistake:
    • The memory is lost, and a “Game Over” message is shown as the image fades away.
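
To make the flow above concrete, here is a minimal sketch of how the screen transitions and the fullscreen failsafe could be structured. The state names and the currentScreen variable are illustrative, not necessarily the exact ones used in the final sketch.

let currentScreen = "landing"; // "landing" -> "instructions" -> "game" -> "gameover"

function setup() {
  createCanvas(windowWidth, windowHeight);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(0);
  fill(255);
  if (currentScreen === "landing") {
    text("Faded – click to continue", width / 2, height / 2);
  } else if (currentScreen === "instructions") {
    text("Press SPACE to start", width / 2, height / 2);
  }
  // the game and game-over screens are drawn here in the real sketch
}

function mousePressed() {
  if (currentScreen === "landing") {
    fullscreen(true);              // enter fullscreen on the first click
    currentScreen = "instructions";
  }
}

function keyPressed() {
  if (key === " ") {
    if (!fullscreen()) {
      fullscreen(true);            // failsafe: the first spacebar press only enters fullscreen
    } else if (currentScreen === "instructions") {
      currentScreen = "game";      // the next press actually starts the game
    }
  }
}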

Arduino Code:

int buttonPins[4] = {2, 3, 4, 5};
int ledPins[4]    = {8, 9, 10, 11};
int lastState[4] = {HIGH, HIGH, HIGH, HIGH};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 4; i++) {
    pinMode(buttonPins[i], INPUT_PULLUP);
    pinMode(ledPins[i], OUTPUT);
    digitalWrite(ledPins[i], LOW);
  }
}

void loop() {
  if (Serial.available()) {
    String input = Serial.readStringUntil('\n');
    input.trim();
    if (input == "-1") {
      // Turn off ALL LEDs
      for (int i = 0; i < 4; i++) {
        digitalWrite(ledPins[i], LOW);
      }
    } 
    else {
      int index = input.toInt();
      if (index >= 0 && index < 4) {
        digitalWrite(ledPins[index], HIGH); // Turn on specific LED
      }
    }
  }

  for (int i = 0; i < 4; i++) {
    int reading = digitalRead(buttonPins[i]);
    // Because the buttons use INPUT_PULLUP, the pin reads HIGH when unpressed and LOW when pressed. So when button is pressed, state changes from HIGH to LOW.
    if (lastState[i] == HIGH && reading == LOW) {
      // Send button info to p5.js
      Serial.print("BUTTON:");
      Serial.println(i);
      // light up LED when that button is pressed
      digitalWrite(ledPins[i], HIGH);
      delay(120);
      digitalWrite(ledPins[i], LOW);
    }
    lastState[i] = reading; 
  }
}

Schematic:

p5.js Code and Description

The p5.js sketch is the core of the project: it manages the visuals, game flow, and Arduino communication. It controls screen transitions with a currentScreen variable, loads and blurs the memory images dynamically, and generates a random LED sequence whose timing is driven by millis() and a few flags. The sketch receives button input from the Arduino over serial and checks it against the correct sequence to determine success or failure. It also sends LED control commands back to the Arduino and plays audio feedback for correct or incorrect inputs, creating a cohesive, interactive experience.
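
To give a sense of the core game logic, here is a simplified sketch of how the random sequence can be generated and checked against the player’s button presses. The function and variable names are assumptions for illustration; revealImage() and fadeImage() are placeholders standing in for the actual blur and fade handling.

let sequence = [];      // e.g. [2, 0, 3, 1] – LED indices 0 to 3
let playerIndex = 0;    // how far into the sequence the player has gotten
let gameOver = false;

function setup() {
  createCanvas(400, 400);
  newSequence();
}

function newSequence(len = 4) {
  sequence = [];
  for (let i = 0; i < len; i++) {
    sequence.push(floor(random(4)));   // pick a random LED index 0-3
  }
  playerIndex = 0;
  gameOver = false;
}

// called whenever the Arduino reports a button press (index 0-3)
function handleButton(buttonIndex) {
  if (gameOver) return;
  if (buttonIndex === sequence[playerIndex]) {
    playerIndex++;
    if (playerIndex === sequence.length) {
      revealImage();   // whole sequence repeated correctly: the memory is restored
    }
  } else {
    gameOver = true;
    fadeImage();       // wrong button: the memory is lost
  }
}

function revealImage() { /* clear the blur in the real sketch */ }
function fadeImage()   { /* fade the memory out in the real sketch */ }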

View p5.js code

Arduino and p5.js Communication

  • p5.js to Arduino:
    • Sends LED index (“0\n” to “3\n”) to flash a specific LED.
    • Sends “-1\n” to turn all LEDs off.
  • Arduino to p5.js:
    • On button press, Arduino sends BUTTON:x (e.g., BUTTON:2) to identify which button was pressed.
    • p5.js parses this input and compares it to the expected sequence (a small sketch of this protocol follows below).
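
Here is what this protocol can look like on the p5.js side, assuming the same serial library (createSerial, readUntil, etc.) used in the Week 11 exercises further down; the helper names flashLED and allLEDsOff are illustrative.

let port;

function setup() {
  createCanvas(400, 400);
  port = createSerial();
}

function flashLED(index) {
  if (port.opened()) port.write(index + "\n");   // "0\n" to "3\n"
}

function allLEDsOff() {
  if (port.opened()) port.write("-1\n");
}

function draw() {
  background(220);
  // read complete lines from the Arduino and look for "BUTTON:x"
  let line = port.readUntil("\n");
  if (line.length > 0) {
    line = line.trim();
    if (line.startsWith("BUTTON:")) {
      let buttonIndex = int(line.split(":")[1]);   // e.g. "BUTTON:2" -> 2
      // compare buttonIndex against the expected sequence here
    }
  }
}

function mousePressed() {
  if (!port.opened()) port.open("Arduino", 9600);
}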

Aspects I’m proud of?

I’m someone who isn’t very handy and had never soldered before, so being able to solder everything, no matter how long it took, is something I’m genuinely proud of. I’m also proud of the concept itself: it’s simple, but it carries a deeper, emotional meaning about memory and loss. The technical side isn’t overly complex, and I intentionally kept it that way to make the experience more approachable and enjoyable. I had fun building it, and I hope others find it rewarding to engage with, without feeling overwhelmed by complicated interactions.

How this was made:

Initially, I used images from the internet, but I found it difficult to find ones that were nostalgic, aesthetic, and suitable for fullscreen display. Eventually, I turned to AI image generation, and while I’m often skeptical about its accuracy, this time I was genuinely pleased with the results. The generated images conveyed the intended atmosphere and worked well with the game’s visual design.

On the coding side, one of my main struggles was implementing the showSequence() function, which controls the timing of LED flashes. I initially attempted to manage it using simple flags, but the logic became unreliable. With help from ChatGPT, I learned to use millis() and a lastTime variable to precisely track time intervals. This made the LED sequence much more consistent and readable.
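
As an illustration of that approach, here is a stripped-down version of the millis()-based timing. The exact names (lastTime, showingSequence, the on/off durations) are assumptions based on the description above, not the final code.

let sequence = [2, 0, 3, 1];   // example LED indices
let sequenceIndex = 0;
let showingSequence = true;
let ledOn = false;
let lastTime = 0;
const ON_TIME = 600;    // ms the LED stays lit
const OFF_TIME = 300;   // ms of gap between flashes

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  showSequence();
}

function showSequence() {
  if (!showingSequence) return;
  let now = millis();
  if (ledOn && now - lastTime > ON_TIME) {
    // turn the current LED off (send "-1\n" to the Arduino in the real sketch)
    ledOn = false;
    lastTime = now;
    sequenceIndex++;
    if (sequenceIndex >= sequence.length) {
      showingSequence = false;   // hand control over to the player
    }
  } else if (!ledOn && now - lastTime > OFF_TIME) {
    // flash the next LED (send sequence[sequenceIndex] to the Arduino in the real sketch)
    ledOn = true;
    lastTime = now;
  }
}

Because nothing blocks, draw() keeps running and the rest of the interface stays responsive while the sequence plays.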

Another area where AI support was valuable was in blurring images. I had initially applied a filter(BLUR, 10) directly to the image, but it unintentionally blurred the entire canvas. ChatGPT suggested using a separate blurLayer graphics buffer to isolate and control the blur effect. Additionally, I was manually creating each button’s design and behavior across different screens, which led to a lot of repetitive code. On AI’s suggestion, I created a reusable drawButton() function, which significantly simplified the interface logic and made the code cleaner.
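
Below is a hedged sketch of both ideas, assuming an off-screen createGraphics() buffer for the blur and a simple hover-aware drawButton() helper. The image file name and the specific numbers are placeholders, not the actual assets or values.

let memoryImg;
let blurLayer;

function preload() {
  memoryImg = loadImage("memory.jpg");   // hypothetical file name
}

function setup() {
  createCanvas(800, 600);
  blurLayer = createGraphics(width, height);
  blurLayer.image(memoryImg, 0, 0, width, height);
  blurLayer.filter(BLUR, 10);            // blur only this buffer, not the main canvas
}

function draw() {
  background(0);
  image(blurLayer, 0, 0);                // the canvas itself stays sharp
  drawButton("Start", width / 2 - 75, height - 80, 150, 50);
}

// reusable button: rounded rectangle with centered text,
// returns whether the mouse is currently over it
function drawButton(label, x, y, w, h) {
  let hovered = mouseX > x && mouseX < x + w && mouseY > y && mouseY < y + h;
  fill(hovered ? 200 : 255);
  rect(x, y, w, h, 10);
  fill(0);
  textAlign(CENTER, CENTER);
  text(label, x + w / 2, y + h / 2);
  return hovered;
}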

AI also helped me understand Arduino’s serial communication functions more clearly. I learned the difference between readStringUntil() and readBytesUntil(), and how readStringUntil('\n') can be used to parse complete lines of input from the serial port, based on this example: Arduino documentation.

Future Improvements

To enhance the experience, I’d like to implement multiple levels with increasing difficulty, where the LED sequences become longer or faster as the player progresses. This would add a sense of progression. Additionally, I’d like to add subtle background audio that evolves based on the player’s performance, making the experience more immersive. 

 

Week 13 – Shahram Chaudhry – User Testing

For this week’s usability test, I had one person try my interactive memory game without giving them any instructions at all. I wanted to see if the game made sense on its own and whether they could figure out what to do just by watching and interacting with it.

Right away, they understood that it was some kind of memory or reaction game. As soon as the LEDs started lighting up in a sequence, they naturally tried to repeat it using the buttons. They did not need the main written instructions to figure out what the goal of the game was.

There was a bit of confusion, though. Because of the slight delay that sometimes happens between Arduino and p5.js, they were not always sure if the game had registered their button press or how long they needed to wait before the next LED would turn on. They mentioned that the timing threw them off a little and that they were not sure whether the game was lagging or if the delay was intentional.

Link for user testing video.

What worked well

The buttons were very intuitive. The tester said it was easy to understand that you just press them and see what happens. Even though they skipped the main instructions, the messages that appear during the game, like “Watch the sequence” or “Press the buttons now,” were enough to guide them. The overall interaction felt simple and understandable.

Areas that could be improved

The biggest issue was the timing between the LEDs and the user inputs. The delay sometimes made the game feel slower than it should be, and the tester wasn’t sure when to focus on the LEDs and when to start pressing buttons. This came from the communication speed between p5.js and the Arduino, but from a user’s point of view it just felt like a pause with no explanation.

A simple fix here would be adding a small message before the game starts that says something like “The LEDs will light up in a pattern. Watch carefully. There may be a short delay before you can start pressing.” That way the player knows what to expect.

Another thing I noticed is that when the player presses the spacebar to start, the LED sequence begins almost right away. For someone who did not read the instructions, this means they have to quickly switch attention from the screen to the LED buttons with no warm-up. Adding even a two-second “Get ready…” screen would help the user settle in before the actual sequence begins.
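
Both suggestions could be combined into a small millis()-based buffer state, something like the sketch below (names, wording, and timings are illustrative).

let gameState = "ready";   // "ready" -> "playing"
let readyStart = 0;
const READY_TIME = 2000;   // two-second warm-up

function setup() {
  createCanvas(600, 400);
  textAlign(CENTER, CENTER);
  textSize(20);
  readyStart = millis();
}

function draw() {
  background(0);
  fill(255);
  if (gameState === "ready") {
    text("The LEDs will light up in a pattern. Watch carefully...", width / 2, height / 2);
    if (millis() - readyStart > READY_TIME) {
      gameState = "playing";   // only now start flashing the sequence
    }
  } else {
    text("Watch the sequence", width / 2, height / 2);
  }
}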

Final thoughts

Even with the timing issues, the game was still very easy for the tester to understand. The layout is simple, the buttons are satisfying to press, and the idea of repeating a sequence feels familiar. The small delay caused some hesitation, but overall the interaction was still clear enough for someone to figure out without being told what to do.



Week 12 – Final Project Documentation

For my final project, I’m creating a physically interactive memory-sequence game centered around the metaphor of “recovering a fragmented memory.” The game uses four large LED pushbuttons wired to an Arduino, each with a built-in LED that flashes as part of a color sequence. The player must watch and memorize the flashing sequence and then repeat it correctly by pressing the matching buttons in order. With each successfully completed level, a blurry or pixelated image on the screen becomes clearer, symbolizing memory restoration. If the user gets the sequence wrong, the image distorts or glitches, as if the memory is slipping away. Only after completing all levels does the fully restored image appear.

The Arduino handles all sensing and feedback related to physical input: it detects button presses using INPUT_PULLUP, flashes the LEDs during each round (based on input from P5), and sends messages to P5 whenever the player presses a button. Each button press is communicated over serial with a simple string like “BUTTON:0”, “BUTTON:1”, etc. P5 receives these signals, checks them against the correct sequence, and determines whether to progress the game, update the image clarity, or apply a glitch effect. On the flip side, P5 sends commands to Arduino to flash specific LEDs by sending numbers (0-3) over serial that correspond to the button LEDs.

On the P5 side, the sketch manages all game logic, sequence generation, visual feedback, and memory visualization. It starts with a low-resolution or blurred image and gradually resolves the image as the user completes levels. The sketch also gives instructions to the user and visual cues about success or failure. This layered system allows for a compelling interaction that blends precise physical input with expressive visual output.
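
One possible way to tie level progress to image clarity is sketched below: the blur radius shrinks each time a level is completed. The file name, level count, and blur values are assumptions for illustration, not the final values.

let memoryImg;
let blurred;
let level = 0;
const TOTAL_LEVELS = 3;

function preload() {
  memoryImg = loadImage("memory.jpg");   // hypothetical file name
}

function setup() {
  createCanvas(800, 600);
  reblur();
}

function reblur() {
  if (blurred) blurred.remove();         // discard the previous buffer
  blurred = createGraphics(width, height);
  blurred.image(memoryImg, 0, 0, width, height);
  let blurAmount = map(level, 0, TOTAL_LEVELS, 10, 0);  // 10 = fully blurred, 0 = clear
  if (blurAmount > 0) blurred.filter(BLUR, blurAmount);
}

function draw() {
  background(0);
  image(blurred, 0, 0);
}

function keyPressed() {
  // stand-in for "level completed": each press sharpens the memory a little
  if (level < TOTAL_LEVELS) {
    level++;
    reblur();
  }
}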

 I’ve successfully soldered one of the large LED pushbuttons with its wires and tested it using the Arduino with the internal pull-up setup. The button press registers correctly, and the built-in LED lights up when triggered from code. This confirms that the wiring and logic are working as intended.

Next, I’ll repeat the soldering and wiring process for the remaining three buttons, ensuring each is connected to a unique input and output pin. I’ve also laser-cut the top panel of the box, which has four holes precisely sized to mount the pushbuttons. This will keep the layout organized and user-friendly for gameplay. Once all buttons are mounted and connected, I’ll move on to integrating all four into the Arduino code and begin syncing with the visual side in p5.js.

Laser Cutting Video:

 

 

Shahram Chaudhry – Final Project Brainstorm

I don’t know why I’m so obsessed with memories, even my midterm project was memory-themed. I guess that’s what happens when you don’t get to major in neuroscience but end up majoring in computer science instead.

For my final project, I want to create a physically interactive memory-sequence game that plays with the idea of “recovering a forgotten memory.” I’ve always liked memory games, and I thought it would be interesting to turn that mechanic into a metaphor: every correct sequence helps restore a blurry image on the screen, as if the player is trying to remember something long lost.

The physical side of the project is intentionally minimal. I’m planning to use four LEDs (different colours) paired with four corresponding buttons, wired to an Arduino. The Arduino will flash sequences of LEDs, starting easy and growing in complexity, and the user has to repeat them by pressing the buttons in the same order. When the user gets a sequence right, the p5 interface will respond instantly by revealing more detail in the image, e.g. by decreasing the blur. If they get it wrong, the image becomes more distorted, symbolizing the memory slipping further away. Only if the player successfully completes all three levels does the final clear image appear. Otherwise, the memory remains lost. I’m also considering having a different image each game, so even if the user replays the game, they can’t recover a memory they “failed”, reinforcing the idea that some memories can be lost forever. (Life is unfair, I know.)

On the p5 side, I want to focus on smooth feedback and atmosphere. The screen will always show the partially recovered image, and p5 will handle visualization, sound feedback (a buzzer for a wrong sequence), tracking correctness, and the level progression. The project feels manageable for my current skill level, but I think it is still creative and expressive.

Shahram Chaudhry – Week 11 – Production

Task 1

I used a flex sensor to control the horizontal movement of the circle on the screen. When the sensor is bent, the circle moves horizontally. 

P5JS Code:

let port;
let sensorValue = 0;

function setup() {
  createCanvas(600, 300);
  port = createSerial();
  let used = usedSerialPorts();
  if (used.length > 0) {
    port.open(used[0], 9600);
  }
}

function draw() {
  background(240);
  let str = port.readUntil("\n");
  if (str.length > 0) {
    sensorValue = int(str);
  }
  let x = map(sensorValue, 430, 500, 0, width);
  fill(100, 150, 255);
  ellipse(x, height / 2, 50, 50);
  fill(0);
  text("Sensor: " + sensorValue, 10, height - 20);
}

function mousePressed() {
  if (!port.opened()) {
    port.open("Arduino", 9600);
  }
}

Arduino Code:

void setup() {
  Serial.begin(9600); 
}

void loop() {
  int sensorValue = analogRead(A0);  
  Serial.println(sensorValue);       
  delay(20);                         
}

Schematic:

Video

 

Task 2:

Moving the mouse left or right on the screen changes how bright the LED connected to the Arduino glows. As we move the mouse toward the left, the LED gets dimmer, and as we move it to the right, it gets brighter.

P5js Code:

let port;

function setup() {
  createCanvas(600, 600);
  port = createSerial();

  let usedPorts = usedSerialPorts();
  if (usedPorts.length > 0) {
    port.open(usedPorts[0], 9600);
  }
}

function draw() {
  background(220);
  if (port.opened()) {
    let brightness = map(mouseX, 0, width, 0, 255);
    port.write(int(brightness) + "\n");
    textSize(50);
    text("Brightness: " + int(brightness), 50 , height - 20);
  } else {
    text("Click to connect", 10, height - 20);
  }
}

function mousePressed() {
  if (!port.opened()) {
    port.open("Arduino", 9600);  
  }
}

Arduino Code:

int brightness = 0; 
int input = 0; 
void setup() { 
  Serial.begin(9600); 
  pinMode(9, OUTPUT); 
} 
void loop() { 
  if (Serial.available() > 0) { 
    input = Serial.parseInt(); 
    brightness = constrain(input, 0, 255);  // p5.js already sends a 0-255 value, so just clamp it
    analogWrite(9, brightness); 
  } 
}
Schematic

Video

 

 

Task 3:

I modified the gravity wind example so that an LED lights up briefly each time the ball hits the ground and bounces back. The wind that pushes the ball left or right is controlled by the potentiometer.

P5js Code: 

let port;
let sensorValue = 0;
let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let onGroundLastFrame = false;

function setup() {
  createCanvas(640, 360);
  noFill();
  port = createSerial();
  let usedPorts = usedSerialPorts();
  if (usedPorts.length > 0){
    port.open(usedPorts[0], 9600);
  }
  position = createVector(width/2, 0);
  velocity = createVector(0,0); 
  acceleration = createVector(0,0);
  gravity = createVector(0, 0.5*mass);
  wind = createVector(0,0);
}

function draw() {
  background(255);
  let incoming = port.readUntil("\n");
  if (incoming.length > 0) {
    sensorValue = int(incoming);  
  }
  wind.x = map(sensorValue, 0, 1023, -1, 1);

  applyForce(wind);
  applyForce(gravity);
  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity);
  acceleration.mult(0);
  ellipse(position.x,position.y,mass,mass);
  
  let onGroundNow = (position.y > height - mass / 2);

  if (onGroundNow && !onGroundLastFrame) {
    port.write("1\n"); // bounce, flash led
  } else {
    port.write("0\n");
  }
  if (onGroundNow) {
    velocity.y *= -0.9;
    position.y = height - mass / 2;
  }
  onGroundLastFrame = onGroundNow;
}

function applyForce(force){
  // Newton's 2nd law: F = M * A
  // or A = F / M
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}

function mousePressed() {
  if (!port.opened()) {
    port.open("Arduino", 9600);
  }
}

Arduino Code:

int ledPin = 2;

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int sensor = analogRead(A0);
  Serial.println(sensor);
  if (Serial.available() > 0) {
    int bouncesig = Serial.parseInt();
    if (bouncesig == 1) {
      digitalWrite(ledPin, HIGH);
      delay(50);
      digitalWrite(ledPin, LOW);
    }
  }
  delay(10); 
}

Schematic

 

Video

 

 

Shahram Chaudhry – Week 11 – Reading Response

This reading challenged my assumptions about the relationship between disability and design. I’ve always seen assistive technologies as specialized tools adapted from mainstream innovations, but the suggestion that disability itself can inspire new directions in design was a refreshing and thought-provoking shift. It reframes disability not as a limitation but as a perspective, a source of insight that can benefit everyone. This reversal of influence invites us to see people with disabilities not only as users but also as collaborators in the design process.

One idea I found interesting was the coexistence of two design ideologies: one rooted in problem-solving and respecting constraints, and another that playfully challenges them. This balance resonates with many everyday experiences. Take elevators, for instance. Originally intended to solve mobility issues, they’ve become an expected convenience in buildings of all kinds. They represent an engineering mindset that respects accessibility constraints, yet their widespread use has also changed how we think about them. Elevators are now also aesthetic spaces. Designers often include mirrors, polished metal, ambient lighting, or even music, not because they help with accessibility, but because they enhance the experience.

The reading also discusses how the push for discretion in assistive devices, like smaller hearing aids, can ironically reinforce stigma. Trying to hide disability implies shame or abnormality, which is counterproductive. 

The example of glasses was especially meaningful to me. I’ve seen firsthand how they function both as corrective devices and as fashion statements. Growing up, my siblings and parents all wore glasses, and I actually used to wear non-prescription glasses as an accessory, but now that I have poor sight, I don’t like wearing glasses. I guess it’s human nature to want the freedom to choose, especially when it comes to fashion, which is all about personal expression. Now that I have to wear glasses out of necessity rather than choice, it feels different. I don’t like that the decision has been taken out of my hands.

The reading opened up broader conversations about involving fashion designers and aesthetic thinking in assistive technology. Why shouldn’t prosthetics be beautiful? Why not design hearing aids that are meant to be seen, not hidden?

I also agree with the idea that we must be cautious about making technology overly complex in the name of accessibility; sometimes simplicity serves a broader audience better. I liked the example of the iPod. Its tactile minimalist interface made it accessible to more users, including those with visual impairments. I think if I had owned an iPod back when it was still popular, I would have really enjoyed the experience, especially because I’m quite indecisive. Having the order of songs chosen for me would’ve taken away the pressure of deciding what to play next, making listening feel more effortless and enjoyable.

Shahram Chaudhry – Week 10 – Reading Response

Reading A Brief Rant on the Future of Interaction Design honestly made me think how much I take my hands for granted. The author talks about how our hands are designed to feel and manipulate things and it really clicked for me when he compared using a touchscreen to “pictures under glass.”  That idea stuck with me because it perfectly captures what phones and tablets feel like: flat, smooth, and completely disconnected from the sense of touch. I never thought about it before, but it’s true , we don’t really “feel” anything when we use them. It’s weird to realize that the same hands that can tie shoelaces or shape clay are now mostly used to swipe on a flat screen.

The part that resonated most with me was his point that technology should adapt to us, not the other way around, especially when he says that technology can change but human nature cannot change as much. I see it all the time: people getting frustrated because a phone gesture doesn’t work or because an app “expects” you to use it a certain way. It’s like we’re the ones bending to fit the machine, instead of the machine fitting how we naturally act. Also, with older objects like books or physical instruments, there’s something really satisfying about physically turning a page or pressing real keys. You just feel connected to what you’re doing. With touchscreens, everything feels the same no matter what you’re interacting with.

One part I didn’t completely agree with was how strongly he dismissed “Pictures Under Glass.” I get his point, but I think those devices have opened up creativity in new ways,  like drawing on an iPad or making music digitally. It’s not tactile in the same sense, but it’s still expressive.

The follow-up also made me think about how disconnected technology can make us: sitting all day, barely moving, everything done with a single finger. I guess I am a little scared of a future where we become immobile because there’s too much automation, and we don’t need to do much anymore. I guess we shouldn’t be losing touch entirely with the world around us (pun intended).

Week 10 – Shahram Chaudhry – Musical Instrument

 

Concept

For this week’s assignment, Khatira and I made a small musical instrument using an Arduino. We decided to create a pressure-sensitive drum pad that lets you play different drum sounds depending on how hard you press it.

The main part of the project is a force-sensitive resistor (FSR) that acts as the drum pad. When you press on it, the Arduino reads how much pressure you applied and plays a sound through a small buzzer. The harder you hit it, the longer the sound lasts, kind of like playing a real drum.

We also added a button that lets you switch between three drum sounds: a kick, a snare, and a hi-hat. So pressing the pad feels interactive, and you can change the type of drum as you play. It’s a really simple setup, but it was fun to experiment with.

 

Schematic

Video Demo

 

const int FSR_PIN = A0;        
const int BUTTON_PIN = 2;     
const int PIEZO_PIN = 8;     

// Drum sounds 
int kickDrum = 80;             // low drum
int snareDrum = 200;           // mid drum
int hiHat = 1000;              // high drum

int currentDrum = 0;         
int lastButtonState = HIGH;    

void setup() {
  pinMode(BUTTON_PIN, INPUT);  
  pinMode(PIEZO_PIN, OUTPUT);
  Serial.begin(9600); 
}

void loop() {
  int pressure = analogRead(FSR_PIN);
  if (pressure > 20) {
    int duration = map(pressure, 10, 1023, 10, 200);
    // Play the drum sound
    if (currentDrum == 0) {
      tone(PIEZO_PIN, kickDrum, duration);
    } else if (currentDrum == 1) {
      tone(PIEZO_PIN, snareDrum, duration);
    } else {
      tone(PIEZO_PIN, hiHat, duration);
    }
    delay(50);  
  }
  int buttonState = digitalRead(BUTTON_PIN);
  //if button was just pressed, we need to change drum sound
  if (buttonState == LOW && lastButtonState == HIGH) {
    currentDrum = currentDrum + 1;
    if (currentDrum > 2) {
      currentDrum = 0;
    }
    delay(200); 
  }
  lastButtonState = buttonState;  // Store button state 
}

Future Improvements

For future improvements, we’d like to add a potentiometer to control the sound more precisely, allowing the player to adjust tone or volume in real time while drumming. We could also include LEDs that light up based on which drum sound is active and how hard the pad is hit. These additions would make the drum pad feel more dynamic and visually engaging.

Week 9 – Reading Response – Shahram Chaudhry

Physical Computing’s Greatest Hits (and misses)

I think my main takeaway from this reading is that it’s easy to create interactions, but what’s more important is for those interactions to be meaningful. I liked how he pointed out that waving your hand over a sensor “has little meaning by itself.” As creative coders, I think that’s such an important reminder. It’s easy to get caught up in cool tech and forget the why behind our interactions. If the action doesn’t hold some significance, it ends up feeling more like a CS demo than an expressive piece (no shade, I’m a CS major myself).

The part about video mirrors also really resonated. I totally agree. They’re super visually engaging (who doesn’t love staring at themselves?), but there’s not much to do. It reminded me of our early class discussions about high vs low interaction. Just because something responds doesn’t mean it creates depth. And also I think mirrors are often more reactive than interactive.

I loved the section about interactive pets, especially since I’m not really into real animals. The idea of a cuddly robot pet that behaves like a dog but doesn’t shed or poop? Count me in.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

I found this reading really refreshing because it reframes the role of the artist in interactive work in a way that feels freeing. The title, Set the Stage, Then Shut Up and Listen, is blunt, but it hits hard. It’s such a shift from the traditional idea of art being about your expression, your vision, and instead saying, “Hey, I’ve built the framework, now let them do something with it.” That resonated with me and also reminded me of the advice Professor Mang gave us when presenting our midterm projects: let the audience interact with our projects without any additional instructions or explanations. While it can get frustrating if the audience doesn’t interact the way we expected, I think that’s where we actually find room for improvement in our work. Also, if they don’t interact in the way we expected them to, maybe it’s because we didn’t design with them in mind.

I agree with the idea that interactive work is the beginning of a conversation rather than the whole message. It means not trying to force people into a specific reaction, but creating space for them to explore and find their own meaning. That kind of openness can be scary, but it’s also really exciting.

I also really liked the part about using space and affordances thoughtfully. Like, if something has a handle, we naturally want to grab it. That kind of design isn’t just about aesthetics, it’s about instinct and behavior. As someone making interactive things, I think the key takeaway for me is the shift in mindset, moving away from a rigid, outcome-driven approach where I expect the audience to engage in a specific way, and instead embracing curiosity about how they actually interact. It’s less about forcing a response and more about observing what they do, learning from it, and letting that shape the work.


Week 9 – Shahram Chaudhry – The Emotional Snap

You know those days when you’re going about your day, composed, unfazed, maybe a little affected by how things are going: a compliment here lifts your mood a bit, an awkward text there dims it slightly. Nothing dramatic. That’s me, most of the time. Collected. Measured. But then someone says something, or does something, and boom, something inside flips. And I get triggered (only a good squash session can fix that).

That’s the idea behind this project, the emotional snap, that flips from calm to intensity. The potentiometer controls a blue LED, which I chose because blue is often associated with calmness (or at least that’s the association I have). The idea is: when things are calm, you’re still feeling things, but softly. You turn the dial, and the blue LED glows brighter or dimmer depending on how strongly you’re feeling. It’s gradual, and ever-changing, just like most of our emotional states.

But then there’s the toggle switch. 

When flipped UP, it triggers a red LED, one that doesn’t fade in or out. It’s either ON or OFF. That red LED represents those intense moments of anger, panic, etc. The contrast here is what makes the circuit special. On one hand, you have the blue LED, whose brightness gently flows with the potentiometer, like your emotional depth shifting over time. On the other, the red LED is binary, triggered by the switch, like someone pushing a very specific emotional button.

So this project is a metaphor for the way we, as humans, respond to the world around us.

The code for it:

int potPin = A0;      
int switchPin = 2;     
int redLED = 9;      
int blueLED = 10;    

void setup() {
  pinMode(redLED, OUTPUT);
  pinMode(blueLED, OUTPUT);
  pinMode(switchPin, INPUT);   
  Serial.begin(9600);
}

void loop() {
  int potValue = analogRead(potPin);
  int brightness = map(potValue, 0, 1023, 0, 255);
  int switchPosition = digitalRead(switchPin);
  if (switchPosition == HIGH) {
    //Red LED ON 
    digitalWrite(redLED, HIGH);
    analogWrite(blueLED, 0);
  } else {
    // Blue LED with analog control
    digitalWrite(redLED, LOW);
    analogWrite(blueLED, brightness);
  }
}

The schematic design:

Video Demo:

IMG_0611