Week 14 — Final Project

Concept Description

My project is a physical digital simulator that showcases the UAE across three eras: the past, the present, and the imagined future. The idea came from listening to my grandfather’s stories about how he used to live, and how different life is now. Seeing how quickly the UAE developed made me wonder how the future will look. I wanted to create an experience where people can explore this progression visually by interacting with a real physical device.

The simulator lets users switch between the three eras using physical buttons, and then cycle through multiple images for each era. A potentiometer controls the transition between morning and night, allowing people to view each scene in two different lighting conditions. Overall, the goal of my concept is to let users “travel through time” and explore how the UAE evolved and how it might continue to evolve.

How the Implementation Works

The project works through a simple but effective communication between Arduino and p5.js:

• The Arduino has three buttons (Past, Present, Future) and a potentiometer.

• When the user presses a button, Arduino sends data to p5.js identifying the era and which image should appear.

• When the user turns the potentiometer, Arduino sends a number from 0–1023, which p5.js interprets as morning vs. night.

• p5.js displays the correct image from a set of 18 total images (3 eras × 3 photos × 2 lighting versions).

• Everything is controlled physically; the user doesn’t interact with the laptop at all after connecting.

I intentionally kept the interaction simple so it would be easy for younger users (including my younger brother) to understand instantly.
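The era/image/lighting mapping described above can be sketched as a small lookup function. The file-name pattern below is a hypothetical placeholder for illustration, not my actual asset names:

```javascript
// Pick one of the 18 images from the era (0–2), the image index (0–2),
// and the raw pot reading (0–1023). The midpoint threshold and the
// file-name pattern are illustrative placeholders.
function pickImage(era, imgIndex, timeVal) {
  const eras = ["past", "present", "future"];
  const lighting = timeVal < 512 ? "morning" : "night"; // pot below midpoint = morning
  return `${eras[era]}_${imgIndex}_${lighting}.png`;
}

// e.g. pickImage(0, 1, 100) -> "past_1_morning.png"
```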


Description of Interaction Design

The interaction is entirely physical and designed to be intuitive:

• Three buttons, each labeled clearly: Past, Present, Future.

• Pressing a button cycles through three images per era.

• The potentiometer smoothly switches the scene from morning to night.

• No touchscreen interaction; the laptop only displays the images.

My goal was to make the mapping extremely obvious. Every person who tested the project understood the basic interaction immediately because the controls directly match the results on the screen. The only part that took a few seconds to discover was that each button can be pressed multiple times to cycle through all images, but users figured it out naturally by experimenting.

Description of Arduino Code (with link/summary)

The Arduino code is fairly simple. It:

• Reads the state of three buttons using INPUT_PULLUP

• Reads a potentiometer value (0–1023)

• Tracks which era is active

• Tracks how many times the user pressed each button (to rotate through 3 images)

// pins
const int pastBtn    = 2;   // Button 1: Past UAE
const int presentBtn = 3;   // Button 2: Present UAE
const int futureBtn  = 4;   // Button 3: Future UAE
const int potPin     = A0;  // Potentiometer: day/night

// variables
int era = 0;       // 0 = past, 1 = present, 2 = future
int imgIndex = 0;  // 0, 1, 2

bool pastPrev    = HIGH;
bool presentPrev = HIGH;
bool futurePrev  = HIGH;

void setup() {
  Serial.begin(9600);

  pinMode(pastBtn, INPUT_PULLUP);
  pinMode(presentBtn, INPUT_PULLUP);
  pinMode(futureBtn, INPUT_PULLUP);
}

void loop() {
  bool pastState    = digitalRead(pastBtn);
  bool presentState = digitalRead(presentBtn);
  bool futureState  = digitalRead(futureBtn);

  if (pastPrev == HIGH && pastState == LOW) {
    era = 0;
    imgIndex = (imgIndex + 1) % 3;
    sendData();
    delay(200);
  }

  if (presentPrev == HIGH && presentState == LOW) {
    era = 1;
    imgIndex = (imgIndex + 1) % 3;
    sendData();
    delay(200);
  }

  if (futurePrev == HIGH && futureState == LOW) {
    era = 2;
    imgIndex = (imgIndex + 1) % 3;
    sendData();
    delay(200);
  }

  pastPrev    = pastState;
  presentPrev = presentState;
  futurePrev  = futureState;

  // periodic potentiometer update (throttled to every 200 ms)
  static unsigned long lastSend = 0;
  if (millis() - lastSend > 200) {
    sendData();
    lastSend = millis();
  }
}
// send era, image index, and pot value as one CSV line
void sendData() {
  int timeVal = analogRead(potPin); // 0–1023

  Serial.print(era);
  Serial.print(",");
  Serial.print(imgIndex);
  Serial.print(",");
  Serial.println(timeVal);  
}

Description of p5.js Code

The p5.js code handles:

• Displaying all 18 images

• Fading transitions between images

• Scaling images to full screen

• Playing different audio for each era

• Reading serial data from the Arduino

• Switching between three states:

Connect screen

Intro screen

Simulator screen

The images are the main content: 18 files in total (6 per era). They were made by taking real photos of the UAE and using generative AI tools to convert them into cartoon versions. p5.js simply loads these files and displays them according to the physical input.
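The fading between the morning and night versions can be reduced to a mapping from the pot reading to the night image's opacity. A minimal sketch of that mapping, assuming a simple linear ramp:

```javascript
// Map the pot reading (0–1023) to an alpha value (0–255) for the night
// image, which p5.js would draw over the morning image using tint().
function nightAlpha(timeVal) {
  const a = (timeVal / 1023) * 255; // like map(timeVal, 0, 1023, 0, 255)
  return Math.round(Math.min(255, Math.max(0, a)));
}

// In draw(): image(morningImg, 0, 0); tint(255, nightAlpha(timeVal)); image(nightImg, 0, 0);
```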

Communication Between Arduino and p5.js

The communication uses Web Serial:

1. The user clicks once to connect.

2. The browser opens a Serial Port window.

3. After selecting the Arduino, p5.js starts receiving comma-separated lines of text (era, image index, pot value), for example “1,2,845”.

4. p5.js splits the line into:

• era (0 = past, 1 = present, 2 = future)

• imageIndex (0, 1, or 2)

• timeVal (0–1023, used for day/night)
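The splitting step above is a simple comma split; a defensive sketch that also rejects malformed lines (which can appear when the serial stream is interrupted mid-line):

```javascript
// Parse one serial line of the form "era,imageIndex,timeVal", matching the
// Arduino's sendData(); returns null for lines that don't fit the format.
function parseLine(line) {
  const parts = line.trim().split(",");
  if (parts.length !== 3) return null;
  const [era, imageIndex, timeVal] = parts.map(Number);
  if ([era, imageIndex, timeVal].some(Number.isNaN)) return null;
  return { era, imageIndex, timeVal };
}
```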

Every change on the physical device immediately updates the display on screen.

It feels similar to using a game controller or a Joy-Con: everything is physical, and the screen responds instantly.

What I’m Proud of

I am most proud of how clean and professional the final project looks.

You can’t see any of the wiring; I hid everything neatly inside the cardboard housing. The labeling, colors, and layout make the experience very user-friendly. I’m also proud that people were able to figure it out without me saying anything. When I stepped back and just observed, I realized the design communicated itself very clearly, which was exactly my goal.

Looking back at the entire process, I’m genuinely proud of how much I accomplished and how much I learned along the way. At first, organizing all the images felt extremely tedious because I had so many files: nine base images, each with a morning and a night version, for 18 files in total. I also made a small mistake in the naming of the files, and that one mistake made the whole program stop working. I kept getting errors and couldn’t figure out why. I had to go through each image name one by one, and because the names were long and similar, it was hard to spot the issue. It took me a very long time to fix something that seemed so small, but once I finally found the mistake and everything started working again, it felt very rewarding.

I’m also incredibly proud of the physical construction, especially the soldering. This was my first time ever soldering, and it honestly took me one full hour just to solder the first button. The wires kept slipping, the solder didn’t stick properly, and I felt like I was never going to get it. But after doing it over and over, I suddenly got the hang of it, and by the end I was soldering each button in about five minutes. Learning a skill like that felt like a big milestone: a real hands-on skill I had never tried before in my life.

In the end, the project came together in a way that made me really proud. The wiring is completely hidden, the design is clean and professional-looking, and people were able to interact with it without any instructions. Seeing the final result made all the tedious moments worth it, and it also made me feel more confident in both my coding and physical building abilities.

How This Was Made

I built the physical simulator using:

  • Cardboard and printed graphics
  • Buttons and a potentiometer
  • Metal wires (which I soldered for the first time; it took me one full hour to finish my first button!)
  • Arduino and jumper wires

Use of Generative AI

I used AI for visual styling. I first found real photos of the UAE (past, present, and future concept images) and then used AI tools to convert them into cartoon-style illustrations. This helped give the project a consistent artistic style.

I also used AI to help me debug an issue in my p5.js code. I sent it the error message, and it suggested that one of my file names probably didn’t match the name in the code. It was right: when naming one of my images I had accidentally capitalized a letter that was lowercase in the code, so the sketch wasn’t running.

Design

I used Canva to design the visual elements.

Code Writing & Design

Most of the code is simple enough that I was able to write it myself, but I watched a few YouTube videos to help me understand specific parts, such as Web Serial and Arduino button logic.

Sound source

https://pixabay.com/sound-effects/search/mp3/

Areas for Future Improvement

In the future, I would like to:

• Add more images per era to make the experience richer

• Include more interactive controls, not just day/night

• Maybe add animated elements like moving clouds or cars

• Improve the instruction screen so that users immediately know they can press each button multiple times

• Add richer audio or voice narration explaining the history of the UAE

Week 13 — User Testing


1. Are they able to figure it out? Where do they get confused and why? Do they understand the mapping between the controls and the experience?

When I tested my project with different people, I noticed that the overall interaction was very easy for them to understand. The three buttons were clearly labeled “Past,” “Present,” and “Future,” and the potentiometer naturally felt like a control for changing between morning and night. I designed the layout to be very straightforward on purpose because I wanted even younger users, like my younger brother, to be able to use it without help.

When I let my friend try it without any instructions, she was able to figure out the basic interaction immediately. She understood the mapping between the labeled buttons and the changes on the screen. The design helped guide her because I placed each label directly under the button, and the screen clearly showed the UAE environment changing based on the chosen era.

The only part that took her a bit longer to discover was that each button could be pressed multiple times to cycle through three different images. She eventually figured it out on her own by experimenting and playing with it. Most of my friends had the same experience: they understood the main controls right away, but it took some time to realize the buttons could be pressed repeatedly to see more images. Even though it wasn’t immediately obvious, they still learned it naturally without any instructions from me, which made me feel confident that the interaction was intuitive.

2. What parts of the experience are working well? What areas could be improved?

Overall, the system worked exactly the way I intended. The clear labels, simple design, and straightforward interaction made the experience smooth for almost everyone who tested it. People enjoyed seeing the UAE change across the past, present, and future, and the morning-to-night transition using the potentiometer felt very natural.

However, one area I think I could improve is adding a small instruction guide or a simple on-screen hint. Even though most people figured it out, some took longer to realize they could press each button multiple times to explore all the images. A very small, minimal instruction (like “Press again to see more”) could make the experience clearer from the very beginning.

Other than that, the core interaction and design felt strong and easy to understand.

3. What parts of your project did you feel the need to explain? How could you make these areas clearer to first-time users?

At first, I thought I needed to explain everything, especially the fact that there are multiple images per era. But when I stepped back and watched people interact with it without saying anything, I realized that they figured it out on their own. The project ended up being much more self-explanatory than I expected, and most of the clarity came from the very clean design and labeling.

The only part that consistently required a moment of discovery was the “multiple press” feature. To make that clearer for first-time users, I could add a small visual cue or a short line of text somewhere on the screen that hints that the user should “press to cycle through images.” This would make the experience smoother for absolutely everyone, even if they don’t experiment as much.

But overall, user testing showed me that the project communicates itself pretty naturally, and I didn’t really have to explain much, which was exactly the kind of interaction experience I wanted to create.

Final Project Concept

Project Concept

For my final project, I’m creating a gesture-based vocabulary learning system (more like a game, basically). The idea came from noticing how flashcard apps never really stick for me because they’re so passive, and I wanted to create something where your body is actually involved in the learning process. Hover your left hand to see just a hard definition, challenging you to guess the word. When curiosity gets the better of you, hover your right hand to reveal the answer. A quick swipe over both sensors is like saying “got it” and moves to the next word, while holding both hands still gives you an example sentence to see the word in context. The interface tracks which words trip you up and brings them back more often, while words you nail consistently take a break. It’s spaced repetition, but you’re physically interacting with it rather than just clicking buttons.

Whether it’s GRE prep, LSAT terms, fancy academic writing words, or a custom list of interesting words you’ve encountered and want to remember, the system visualizes your progress over time so you can actually see your vocabulary growing.
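The spaced-repetition behavior could be sketched as a simple Leitner scheme, where a correct guess promotes a word into a less-frequent box and a miss sends it back to the start. The box count and review gaps below are placeholder numbers, not my final tuning:

```javascript
// Leitner-style review: each word carries a box index; higher boxes are
// reviewed less often. REVIEW_GAP holds sessions-between-reviews per box.
const REVIEW_GAP = [1, 3, 7]; // placeholder gaps for boxes 0, 1, 2

function review(word, correct) {
  // promote on a correct answer (capped at the top box), demote to 0 on a miss
  word.box = correct ? Math.min(word.box + 1, REVIEW_GAP.length - 1) : 0;
  word.nextReview = word.lastSeen + REVIEW_GAP[word.box];
  return word;
}
```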

Arduino Design

Both photoresistors constantly feed readings to the Arduino, which compares them against threshold values to detect when your hands are hovering. The key is making the gesture detection smooth rather than jumpy, so I’m building in some debouncing logic that prevents flickering when your hand is at the edge of the sensor range. The timing matters too because the Arduino needs to distinguish between a quick swipe that says “next word” and deliberately holding both hands there to request an example sentence. I’m planning to add more to this but this is it for now.
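The debouncing piece could look something like this: the raw photoresistor reading has to stay on one side of the threshold for a short stable window before the reported "hovering" state flips, which prevents flicker at the edge of the sensor range. The threshold and timing values are placeholders I would tune against real readings:

```javascript
// Debounced hover detector: the candidate state must hold for `stableMs`
// before the reported state changes.
class DebouncedSensor {
  constructor(threshold = 500, stableMs = 50) {
    this.threshold = threshold;   // placeholder light threshold
    this.stableMs = stableMs;     // placeholder settle time
    this.state = false;           // reported "hand hovering" state
    this.candidate = false;
    this.candidateSince = 0;
  }
  update(raw, nowMs) {
    const covered = raw < this.threshold; // hand blocks light -> low reading
    if (covered !== this.candidate) {
      this.candidate = covered;           // start timing a possible flip
      this.candidateSince = nowMs;
    } else if (covered !== this.state && nowMs - this.candidateSince >= this.stableMs) {
      this.state = covered;               // flip only after the stable window
    }
    return this.state;
  }
}
```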

P5.js Program Design

Large, readable text dominates the screen. When you hover your hand over a sensor, there’s subtle visual feedback confirming the system detected you, so you’re never wondering if it’s working. The progress dashboard lives off to the side, quietly showing you how many words you’ve learned today and your overall mastery percentage. The “Word of the Day” feature adds a nice ritual to opening the system. The system saves your progress using local storage, so your learning history persists between sessions. Over time, you build up this visualization of your vocabulary growth that’s genuinely satisfying to watch. You can see which word sets you’ve conquered, which ones still need work, and how consistent you’ve been with your practice. It’s the kind of feedback that makes you want to keep going.
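The local-storage persistence mentioned above could be sketched like this. The key name is a placeholder, and the storage object is passed in so the same logic works with `window.localStorage` in the browser:

```javascript
// Persist and restore progress through any localStorage-like object
// (anything with getItem/setItem). In the real sketch this would be
// window.localStorage; the key name is a placeholder.
const STORAGE_KEY = "vocabProgress";

function saveProgress(storage, progress) {
  storage.setItem(STORAGE_KEY, JSON.stringify(progress));
}

function loadProgress(storage) {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : { learnedToday: 0, mastery: {} }; // fresh start
}
```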

Implementation Progress

I’m starting with getting the Arduino sensors working reliably because everything else depends on that foundation. First step is just wiring up the photoresistors and watching the raw values in the Serial Monitor to understand what I’m working with. Once I know what “hand hovering” actually looks like in terms of sensor readings, I can write the gesture detection code with appropriate thresholds. After the gestures feel solid and responsive when I test them by hand, I’ll set up the serial connection to P5 and make sure the commands are flowing through correctly.  Then it’s on to building the P5 interface, starting simple with a hard-coded list of maybe ten words that cycle through in response to Arduino commands. Once that basic interaction loop works, I’ll layer in the vocabulary database, spaced repetition algorithm, and progress tracking features. The final polish phase is where I’ll add the Word of the Day, multiple vocabulary sets, and any visual refinements that make it feel complete. The goal is to have something functional quickly, then make it delightful.

Week 12 — Final Project Proposal

Finalised Concept for the Project

Plant Care Station is an interactive, game-like system that combines physical sensors through Arduino with a browser-based visual interface through p5.js.
The project teaches users about plant care (light, water, soil, and touch) through playful interactions, animations, glowing suns, and confetti celebrations.

My project uses sensor data from the Arduino (four light sensors, capacitive touch sensors made from foil, and a soil moisture sensor), which is visualized in p5.js as a growing, living plant environment. As the user helps the plant receive enough light, fertilizer, and moisture, the interface responds through movement, glowing suns, page transitions, and celebratory effects.

Arduino to P5:

  • 4 Light Sensors: Once light hits the sensors at a specific threshold, the p5 interface shows the user which sensors need more light. Once the light threshold is met, confetti is thrown and the user can move to the next page.
  • Capacitive Touch Sensors: I used aluminium foil as a capacitive touch sensor. The foil is stuck to the scissors so that every time the reading goes from NO-CONTACT to CONTACT, I count that as one leaf cut.
  • Capacitive Touch/Pressure Sensors: Once the fertilizer is placed in the soil, we confirm that on the p5 screen and allow the user to move to the next page.
  • Soil Moisture Sensor: Once the plant is watered to a specific threshold, we notify the user on the p5 screen.
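The leaf-cut counting relies on edge detection rather than raw contact, so one long touch registers as exactly one cut. A sketch of that logic:

```javascript
// Count only the NO-CONTACT -> CONTACT transition (rising edge), so a
// continuous touch increments the count once, not on every reading.
function makeLeafCounter() {
  let prevContact = false;
  let cuts = 0;
  return function onReading(contact) {
    if (contact && !prevContact) cuts++; // rising edge only
    prevContact = contact;
    return cuts;
  };
}

const count = makeLeafCounter();
// readings: off, on, on, off, on -> two rising edges -> 2 cuts
```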

P5 to Arduino:

  • Once all the steps are complete, p5 will send a signal to the Arduino, and the colourful LEDs will start flashing to celebrate the completion.

 

Reading Response Week 11

I’ve always been fascinated by the ways design can alter our everyday experiences, but this reading made me realize how deeply it can also impact dignity and independence. Design Meets Disability argues that assistive technologies aren’t just medical tools; they’re cultural objects that can express identity and empower people. That idea immediately reminded me of when I first discovered the Be My Eyes app.

The app enables people with visual impairments to call volunteers, open their phone camera, and request assistance with tasks such as locating items in the fridge or reading labels. I’ll never forget one call I had: the person asked me to help identify items in their kitchen, and while we were talking, he told me a story about how he once cooked an entire meal for his family using the app to double-check ingredients and instructions. I was amazed, not just by his resourcefulness but by how technology became a bridge for independence and creativity.

Reflecting on that experience alongside the reading, I realized how much design can influence confidence and joy. When assistive tools are thoughtfully designed, they don’t just solve problems; they open doors to new possibilities. Be My Eyes is a perfect example of inclusive design, empowering people by turning what might seem like a barrier into an opportunity for connection and creativity. My takeaway is that disability should never be viewed as a deficit in design, but rather as an opportunity to rethink and expand what technology can do for everyone.

Production Week 11

For this assignment, I worked with Bigo to connect p5.js with an Arduino. We completed three exercises to practice sending data back and forth between the physical and digital worlds.

Part 1: One Sensor to p5.js

In the first exercise, we used a potentiometer connected to the Arduino. This sensor controlled the horizontal position of a circle on the computer screen. The Arduino read the potentiometer value and sent it to p5.js, which then moved the circle left or right based on that input.

Schematic

Arduino Code

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read analog value and send it as a line of text
  int sensorValue = analogRead(A0);
  Serial.println(sensorValue);
  delay(20); 
}

p5.js Code

let port;
let connectBtn;
let ballX = 0;
let sensorVal = 0;

function setup() {
  createCanvas(600, 400);
  background(50);

  port = createSerial();

  // Open the port automatically if used before
  let usedPorts = usedSerialPorts();
  if (usedPorts.length > 0) {
    port.open(usedPorts[0], 9600);
  }

  connectBtn = createButton('Connect to Arduino');
  connectBtn.position(10, 10);
  connectBtn.mousePressed(connectBtnClick);
}

function draw() {
  // Check if port is open
  if (port.available() > 0) {
    let data = port.readUntil("\n");
    
    if (data.length > 0) {
      // Update value
      sensorVal = Number(data.trim()); 
    }
  }

  background(255);
  
  // Map sensor val to canvas width
  ballX = map(sensorVal, 0, 1023, 25, width - 25);
  
  // Draw ball
  fill(0, 255, 100);
  noStroke();
  ellipse(ballX, height / 2, 50, 50);
}

function connectBtnClick() {
  if (!port.opened()) {
    port.open('Arduino', 9600);
  } else {
    port.close();
  }
}

Part 2: p5.js to LED Brightness

For the second part, we reversed the direction of the data. Instead of sending information from the Arduino to p5.js, we sent it from p5.js back to the Arduino. The ball on the screen acted like a virtual light bulb; dragging it upward made the physical LED brighten, while dragging it downward caused the LED to dim.

Arduino Code

C++

void setup() {
  Serial.begin(9600);
  pinMode(9, OUTPUT); // PWM pin
}

void loop() {
  if (Serial.available() > 0) {
    String input = Serial.readStringUntil('\n');
    int brightness = input.toInt();    // convert str to int
    brightness = constrain(brightness, 0, 255);    // just in case data is weird
    analogWrite(9, brightness);
  }
}

p5.js Code

JavaScript

let port;
let connectBtn;

// Ball variables
let ballX = 300;
let ballY = 200;
let ballSize = 50;
let isDragging = false; 

// Data variables
let brightness = 0;
let lastSent = -1; 

function setup() {
  createCanvas(600, 400);
  
  port = createSerial();
  
  let usedPorts = usedSerialPorts();
  if (usedPorts.length > 0) {
    port.open(usedPorts[0], 9600);
  }

  connectBtn = createButton('Connect to Arduino');
  connectBtn.position(10, 10);
  connectBtn.mousePressed(connectBtnClick);
}

function draw() {
  background(50);

  // ball logic
  if (isDragging) {
    ballX = mouseX;
    ballY = mouseY;
    
    // Keep ball inside canvas
    ballY = constrain(ballY, 0, height);
    ballX = constrain(ballX, 0, width);
  }

  // map brightness to y pos
  brightness = floor(map(ballY, 0, height, 255, 0));

  // send data
  if (port.opened() && brightness !== lastSent) {
    port.write(String(brightness) + "\n");
    lastSent = brightness;
  }
  //draw ball
  noStroke();
  fill(brightness, brightness, 0); 
  ellipse(ballX, ballY, ballSize);
  stroke(255);
  line(ballX, 0, ballX, ballY);

}

// --- MOUSE INTERACTION FUNCTIONS ---
function mousePressed() {
  // check if mouse is inside the ball
  let d = dist(mouseX, mouseY, ballX, ballY);
  if (d < ballSize / 2) {
    isDragging = true;
  }
}

function mouseReleased() {
  // stop dragging when mouse is let go
  isDragging = false;
}

function connectBtnClick() {
  if (!port.opened()) {
    port.open('Arduino', 9600);
  } else {
    port.close();
  }
}

Part 3: Gravity Wind and Bi-directional Communication

For the final exercise, we brought all the concepts together using the gravity-and-wind example. We modified the code to add two new features.

First, a potentiometer was used to control the wind speed in real-time. Second, we programmed the Arduino so that whenever the ball hit the ground, an LED would turn on.

This part took some troubleshooting. I had to filter out very small bounces because the LED kept flickering while the ball rolled along the floor. Once that was fixed, I also added a visual arrow on the screen to show the current wind direction and intensity.

Arduino Code

void setup() {
  Serial.begin(9600);
  pinMode(9, OUTPUT); // LED on Pin 9
}

void loop() {
  // read & send pot value
  int potValue = analogRead(A0);
  Serial.println(potValue);


  if (Serial.available() > 0) {
    char inChar = Serial.read();
    
    // check for blink command
    if (inChar == 'B') {
      digitalWrite(9, HIGH);
      delay(50); 
      digitalWrite(9, LOW);
    }
  }
  
  delay(15);
}

p5.js Code

let port;
let connectBtn;

let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let sensorVal = 512; 

function setup() {
  createCanvas(640, 360);
  noFill();
  
  // Physics Setup
  position = createVector(width/2, 0);
  velocity = createVector(0,0);
  acceleration = createVector(0,0);
  gravity = createVector(0, 0.5*mass);
  wind = createVector(0,0);

  // Serial Setup
  port = createSerial();
  let usedPorts = usedSerialPorts();
  if (usedPorts.length > 0) {
    port.open(usedPorts[0], 9600);
  }
  
  connectBtn = createButton('Connect to Arduino');
  connectBtn.position(10, 10);
  connectBtn.mousePressed(connectBtnClick);
}

function draw() {
  background(255);
  
  // read for wind
  if (port.available() > 0) {
    let data = port.readUntil("\n");
    if (data.length > 0) {
      sensorVal = Number(data.trim());
    }
  }
  
  // map wind
  let windX = map(sensorVal, 0, 1023, -0.8, 0.8);
  wind.set(windX, 0);

  // apply physics
  applyForce(wind);
  applyForce(gravity);
  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity);
  acceleration.mult(0);
  
  // draw
  fill(0);
  ellipse(position.x, position.y, mass, mass);
  drawWindIndicator(windX);

  // detect bounce
  if (position.y > height - mass/2) {
      velocity.y *= -0.9; 
      position.y = height - mass/2;
      
      // send blink command
      if (abs(velocity.y) > 1 && port.opened()) {
        port.write('B');
      }
  }
  
  // collision detection
  if (position.x > width - mass/2) {
    position.x = width - mass/2;
    velocity.x *= -0.9;
  } else if (position.x < mass/2) {
    position.x = mass/2;
    velocity.x *= -0.9;
  }
}

function applyForce(force){
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}

function connectBtnClick() {
  if (!port.opened()) {
    port.open('Arduino', 9600);
  } else {
    port.close();
  }
}

// helper to visualize the wind
function drawWindIndicator(w) {
  push();
  translate(width/2, 50);
  fill(150);
  noStroke();
  text("Wind Force", -30, -20);
  stroke(0);
  strokeWeight(3);
  line(0, 0, w * 100, 0); 
  fill(255, 0, 0);
  noStroke();
  if (w > 0.05) triangle(w*100, 0, w*100-10, -5, w*100-10, 5); // Right Arrow
  if (w < -0.05) triangle(w*100, 0, w*100+10, -5, w*100+10, 5); // Left Arrow
  pop();
}

function keyPressed(){
  // reset ball
  if (key==' '){
    mass=random(15,80);
    position.y=-mass;
    position.x = width/2;
    velocity.mult(0);
  }
}

Video Demonstration

Reading Response Week 10

I still remember the first time I used VR. I was so amazed by the experience that I didn’t touch my phone for the whole day. It felt completely different from the usual screen interactions I was used to; suddenly, I was moving, reaching, and using my body in ways that made the technology feel alive. Reading A Brief Rant on the Future of Interaction Design reminded me of that moment, because the author argues that our visions of the future are too focused on “Pictures Under Glass,” flat screens that limit the richness of human interaction.

The rant makes a strong case that our hands and bodies are capable of far more expressive actions than just tapping and swiping. The follow-up responses clarify that the point wasn’t to offer a neat solution, but to spark research into dynamic, tactile interfaces that embrace our physicality. I completely agree with this perspective, as VR demonstrates the power of technology when it engages the entire body. It’s entertaining, immersive, and feels closer to what interaction design should be.

At the same time, I know it would be hard to design everything this way. Not every task needs full-body interaction, and sometimes the simplicity of a phone screen is enough. But I do think it’s doable to push more technologies in that direction, blending practicality with embodied experiences. My main takeaway is that the future of interaction design shouldn’t settle for prettier screens; it should aim for interfaces that make us feel connected to our bodies and the environments around us. VR proves that this is possible, and even if it’s challenging to apply everywhere, it’s a direction worth pursuing.

Production Week 10

Project: The Arduino DJ Console

Assignment Description

For this assignment, we were tasked with designing and building a musical instrument using Arduino. The requirements were simple: the project had to include at least one digital sensor (such as a switch) and one analog sensor.

What We Built

Working with Bigo, we created a DJ-style sound console powered by an Arduino. This instrument enables users to experiment with sound by adjusting pitch and speed, providing a substantial amount of room for creativity.

Our setup included:

  • Two Potentiometers (Analog Sensors):
    One knob changes the pitch of the note, and the other controls the duration (speed).

  • One Button (Digital Sensor):
    This serves as a mute button, instantly silencing the sound.

  • Piezo Buzzer:
    The component responsible for producing the tones.

Schematic

The circuit uses analog pins A0 and A5 for the two knobs and digital pin 13 for the pushbutton.
The buzzer is connected to pin 9.

Video Demonstration

Code

We wrote code to read the sensors and play notes from a C Major scale. Here is the source code for the project:

C++

// Control a buzzer with two knobs and a button

// Define hardware pins
const int piezoPin = 9;
const int pitchPotPin = A0;
const int durationPotPin = A5;
const int buttonPin = 13;

// List of frequencies for C Major scale
int notes[] = {262, 294, 330, 349, 392, 440, 494, 523};

void setup() {
  // Start data connection to computer
  Serial.begin(9600);
  Serial.println("Instrument Ready! Note Stepping Enabled.");

  // Set pin modes
  pinMode(piezoPin, OUTPUT);
  pinMode(buttonPin, INPUT_PULLUP);
}

void loop() {
  // Check if the button is pressed
  int buttonState = digitalRead(buttonPin);

  // Mute sound if button is held down
  if (buttonState == LOW) {
    noTone(piezoPin);
  } else {
    // Read values from both knobs
    int pitchValue = analogRead(pitchPotPin);
    int durationValue = analogRead(durationPotPin);

    // Convert pitch knob value to a note index from 0 to 7
    int noteIndex = map(pitchValue, 0, 1023, 0, 7);

    // Select the frequency from the list
    int frequency = notes[noteIndex];

    // Convert duration knob value to time in milliseconds
    int noteDuration = map(durationValue, 0, 1023, 50, 500);

    // Play the sound
    tone(piezoPin, frequency, noteDuration);

    // Show information on the screen
    Serial.print("Note Index: ");
    Serial.print(noteIndex);
    Serial.print(" | Frequency: ");
    Serial.print(frequency);
    Serial.print(" Hz | Duration: ");
    Serial.print(noteDuration);
    Serial.println(" ms");

    // Wait for the note to finish
    delay(noteDuration + 50);
  }
}

Reading Response Week 9

What really stood out to me across these two readings was how much creativity in physical computing and interactive art depends on participation. In Physical Computing’s Greatest Hits (and Misses), I was struck by how many projects keep reappearing: theremin-like instruments, drum gloves, video mirrors, and interactive paintings. At first, it almost feels repetitive, but the point is that each version can still surprise us. The same theme can be reinvented in ways that feel fresh, because the interaction itself is what makes it unique. It’s less about inventing something completely new every time, and more about how the design invites people to play, explore, and discover.

That idea connected with my own love of music. I’ve always been fascinated by how instruments themselves are designed to invite interaction. Even something as simple as a guitar feels like it’s guiding you: its strings and frets practically tell you how to play, and once you start experimenting, you realize how much freedom you have to create your own sound. Reading about theremin-like instruments and drum gloves reminded me of that same feeling: the design doesn’t just produce music, it encourages you to participate, to experiment, and to find joy in the process.

Then, in Making Interactive Art: Set the Stage, Then Shut Up and Listen, the focus shifts to the artist’s role. Instead of dictating meaning, the artist’s job is to create an environment where the audience can respond and interpret for themselves. I liked the idea that interactive art is more like a performance than a finished statement; the audience completes the work through their actions. That perspective really changes how I think about design. It’s not about control, but about setting up the right conditions for discovery.

Taken together, both readings made me realize that physical computing and interactive art thrive on openness. Whether it’s a recurring project idea or a carefully staged environment, the real magic happens when people bring their own curiosity and interpretation to the table. Good design doesn’t just show us something, it gives us space to participate, and that’s what makes the experience meaningful.

Production Week 9

I designed a simple interactive game using LEDs, a potentiometer, and a digital switch. The setup has two rows of four LEDs, arranged as color-matched pairs. The potentiometer controls the start point: as I rotate it, the highlighted position shifts across the pairs, cycling through all four.

When I press the digital switch, one of the two lit LEDs starts moving quickly along its row. Pressing the button again stops it. If the moving LED stops exactly on the position matching the fixed LED in the other row, a green verdict LED turns on to show success. If not, a red verdict LED lights up, and the game resets to the start point.

It’s a fun, simple matching game that combines timing, control, and basic electronics.

Code:

// — Pin Definitions —
// LED Rows (G, Y, R, B)
const int fixedRowPins[] = {2, 4, 6, 8};
const int cyclingRowPins[] = {3, 5, 7, 9};

// RGB Feedback LED Pins
const int RGB_RED_PIN = 11;
const int RGB_GREEN_PIN = 12;
const int RGB_BLUE_PIN = 13;

// Input Pins
const int POT_PIN = A1;
const int BUTTON_PIN = 10;

// — Game Logic Variables —
// Game State: false = waiting/potentiometer mode, true = cycling/active mode
bool gameIsActive = false;

// Stores the color index (0-3) when the game starts
int targetColorIndex = 0;

// Stores the current color index (0-3) of the cycling light
int currentCyclingIndex = 0;

// — Timing Variables for Non-Blocking Cycling —
unsigned long previousCycleMillis = 0;
const int CYCLE_SPEED_MS = 80; // How fast the light cycles (lower is faster)
const int VERDICT_DISPLAY_MS = 2000; // How long to show Red/Green verdict

void setup() {
  Serial.begin(9600); // For debugging

  // Set all 8 game LED pins to OUTPUT
  for (int i = 0; i < 4; i++) {
    pinMode(fixedRowPins[i], OUTPUT);
    pinMode(cyclingRowPins[i], OUTPUT);
  }

  // Set RGB LED pins to OUTPUT
  pinMode(RGB_RED_PIN, OUTPUT);
  pinMode(RGB_GREEN_PIN, OUTPUT);
  pinMode(RGB_BLUE_PIN, OUTPUT);

  // Set button pin with an internal pull-up resistor
  // The pin will be HIGH when not pressed and LOW when pressed
  pinMode(BUTTON_PIN, INPUT_PULLUP);
}

void loop() {
  // Read the button state
  bool buttonPressed = (digitalRead(BUTTON_PIN) == LOW);

  // — Main Game Logic: Two States —

  // STATE 1: Game is NOT active. Control LEDs with the potentiometer.
  if (!gameIsActive) {
    handlePotentiometer(); // Light up LED pair based on pot

    // Check if the button is pressed to START the game
    if (buttonPressed) {
      Serial.println("Button pressed! Starting game...");
      // Lock in the current color from the potentiometer
      targetColorIndex = getPotIndex();

      gameIsActive = true; // Switch to the active game state
      turnRgbOff(); // Ensure verdict light is off

      // Wait for the button to be released to avoid misfires
      while (digitalRead(BUTTON_PIN) == LOW);
      delay(50); // Simple debounce
    }
  }
  // STATE 2: Game IS active. Cycle the lights and wait for the player's timing.
  else {
    cycleLights(); // Handle the light cycling logic

    // Check if the button is pressed to STOP the cycle and check the verdict
    if (buttonPressed) {
      Serial.println("Timing button pressed!");
      // The currentCyclingIndex is the player's selection
      checkVerdict();

      gameIsActive = false; // Game is over, switch back to potentiometer mode

      // Wait for button release
      while (digitalRead(BUTTON_PIN) == LOW);
      delay(50); // Simple debounce
    }
  }
}

// — Helper Functions —

// Reads potentiometer and lights up the corresponding LED pair
void handlePotentiometer() {
  int potIndex = getPotIndex();
  lightLedPair(potIndex);
}

// Reads the potentiometer and maps its value to an index from 0 to 3
int getPotIndex() {
  int potValue = analogRead(POT_PIN); // Reads value from 0-1023
  // Map the 0-1023 range to four even sub-ranges, then clamp to 0-3
  // (mapping straight onto 0-3 would make index 3 reachable only at 1023)
  int index = constrain(map(potValue, 0, 1023, 0, 4), 0, 3);
  return index;
}

// Lights one pair of LEDs based on an index (0-3)
void lightLedPair(int index) {
  turnAllGameLedsOff();
  digitalWrite(fixedRowPins[index], HIGH);
  digitalWrite(cyclingRowPins[index], HIGH);
}

// Manages the fast, non-blocking cycling of the second row of LEDs
void cycleLights() {
  // This uses millis() to avoid delay() so we can still read the button
  unsigned long currentMillis = millis();

  if (currentMillis - previousCycleMillis >= CYCLE_SPEED_MS) {
    previousCycleMillis = currentMillis; // Reset the timer

    // Move to the next LED in the cycle
    currentCyclingIndex++;
    if (currentCyclingIndex > 3) {
      currentCyclingIndex = 0; // Loop back to the start
    }

    // Update the lights
    turnAllGameLedsOff();
    digitalWrite(fixedRowPins[targetColorIndex], HIGH); // Keep fixed LED on
    digitalWrite(cyclingRowPins[currentCyclingIndex], HIGH); // Light the current cycling LED
  }
}

// Checks if the player's timing was correct and shows the verdict
void checkVerdict() {
  if (currentCyclingIndex == targetColorIndex) {
    Serial.println("Verdict: CORRECT!");
    showVerdict(true); // Show green light
  } else {
    Serial.println("Verdict: WRONG!");
    showVerdict(false); // Show red light
  }
}

// Lights up the RGB LED Green for correct, Red for incorrect
void showVerdict(bool isCorrect) {
  turnAllGameLedsOff(); // Turn off game lights to focus on verdict
  if (isCorrect) {
    // Light RGB GREEN
    digitalWrite(RGB_RED_PIN, LOW);
    digitalWrite(RGB_GREEN_PIN, HIGH);
    digitalWrite(RGB_BLUE_PIN, LOW);
  } else {
    // Light RGB RED
    digitalWrite(RGB_RED_PIN, HIGH);
    digitalWrite(RGB_GREEN_PIN, LOW);
    digitalWrite(RGB_BLUE_PIN, LOW);
  }
  delay(VERDICT_DISPLAY_MS); // Hold the verdict light
  turnRgbOff(); // Turn off the verdict light before returning to the game
}

// — Utility Functions —

// Turns off all 8 game LEDs
void turnAllGameLedsOff() {
  for (int i = 0; i < 4; i++) {
    digitalWrite(fixedRowPins[i], LOW);
    digitalWrite(cyclingRowPins[i], LOW);
  }
}

// Turns off the RGB LED
void turnRgbOff() {
  digitalWrite(RGB_RED_PIN, LOW);
  digitalWrite(RGB_GREEN_PIN, LOW);
  digitalWrite(RGB_BLUE_PIN, LOW);
}