Week 11 – Production Assignment (Air Piano)

Concept

The “Air Piano” is a musical instrument controlled by the distance of a hand from a sensor. I have always been fascinated by distance sensors, which use the reflection of sound waves to calculate how far away an object is, so I used the HC-SR04 sensor to detect distance as the input for this instrument. I was also inspired by the tone knob on electric guitars, which was discussed in class, so I integrated one into my work. A potentiometer serves as my tone knob, and its input is used to switch among three sound modes: Piano, Sci-Fi and Bass. Different colored LEDs show the user which mode is on: red for Piano, green for Sci-Fi and blue for Bass. The mode determines the scale of frequencies produced by the buzzer. The code behind this project maps the distance of an object from the sensor to a value from 0 to 7 (8 values), and each of these values corresponds to a specific note.
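The distance-to-note mapping described above is essentially one Arduino map() call. A minimal standalone version (mapRange mirrors Arduino's integer map() formula; distanceToNote is an illustrative helper, not a function from the sketch):

```cpp
#include <assert.h>

// Arduino-style map(): linearly rescales x from one integer range to
// another, using the same integer math as the built-in map().
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// A distance of 0–40 cm becomes a note index of 0–7, which selects one
// of the eight frequencies in whichever mode array is active.
int distanceToNote(int distanceCm) {
  return (int)mapRange(distanceCm, 0, 40, 0, 7);
}
```

Because the math is integer division, each note covers roughly a 5 cm band of distance.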

Sketch

Image

Code

// A0 - Potentiometer
// 2 - Blue
// 4 - Green
// 7 - Red
// 12 - Buzzer 

const int trigPin = 10;
const int echoPin = 9;

// Mode 1 - Piano Scale
int mode1[8] = {262, 294, 330, 349, 392, 440, 494, 523};

// Mode 2 - Sci-Fi
int mode2[8] = {600, 750, 900, 1100, 1400, 1800, 2300, 3000};

// Mode 3 - Higher Bass / Bright Low Mode
int mode3[8] = {350, 420, 500, 600, 720, 850, 1000, 1200};

float duration;
int distance;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(2, OUTPUT);
  pinMode(4, OUTPUT);
  pinMode(7, OUTPUT);
  pinMode(12, OUTPUT);

  Serial.begin(9600);
  digitalWrite(2, HIGH);
  digitalWrite(4, HIGH);
  digitalWrite(7, HIGH);
  delay(1000);

  digitalWrite(2, LOW);
  digitalWrite(4, LOW);
  digitalWrite(7, LOW);
}

void loop() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  int mode;

  int pm = analogRead(A0);

  duration = pulseIn(echoPin, HIGH);
  distance = (duration*0.0343)/2;

  Serial.print("Distance: ");
  Serial.println(distance);
  delay(100);

  if (pm >= 700) {
    mode = 0;
    digitalWrite(7, HIGH);
    digitalWrite(2, LOW);
    digitalWrite(4, LOW);
  } else if (pm >= 350) {
    mode = 1;
    digitalWrite(4, HIGH);
    digitalWrite(7, LOW);
    digitalWrite(2, LOW);
  } else {
    mode = 2;
    digitalWrite(2, HIGH);
    digitalWrite(4, LOW);
    digitalWrite(7, LOW);
  }

  if (distance >= 0 && distance <= 40) {

    int noteIndex = map(distance, 0, 40, 0, 7);
    int freq;

    if (mode == 0) freq = mode1[noteIndex];
    else if (mode == 1) freq = mode2[noteIndex];
    else freq = mode3[noteIndex];

    tone(12, freq);
  }
  else {
    noTone(12);
  }

  //delay(50);
}

 

 

How it was made

This work was made by first drawing a sketch of the circuit. The inputs are the resistance value of the potentiometer and the distance from the distance sensor. I had to watch a video and use the Arduino website to figure out how to configure the distance sensor. The outputs of the circuit are three LEDs and a buzzer. The LEDs were each connected to a digital pin, a resistor and ground. The buzzer was connected to a digital pin and ground. For the code, I used ChatGPT to generate the arrays of frequencies corresponding to the modes. Finally, I connected all the wires and components according to the sketch.
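The potentiometer-to-mode logic from the code above can be summarized as a small helper (modeFor is illustrative; the 700 and 350 thresholds are the ones used in the sketch, splitting the 0–1023 range into three bands):

```cpp
#include <assert.h>

// Mode selection from the raw 0–1023 potentiometer reading.
// 0 = Piano (red LED), 1 = Sci-Fi (green LED), 2 = Bass (blue LED).
int modeFor(int pot) {
  if (pot >= 700) return 0; // top band: Piano
  if (pot >= 350) return 1; // middle band: Sci-Fi
  return 2;                 // bottom band: Bass
}
```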

Reflection

Being someone with no knowledge of music, I thought this work would be very difficult for me, but after I figured out the input and output of the program, it went quite smoothly. To improve the project, a more sensitive distance sensor could be used to get cleaner readings, and more modes with different frequency sets could be added to create more musical effects.

Week 11 Assignment – Zere

Concept

The concept of my project is to create a light-controlled musical instrument using Arduino. Instead of buttons or keys, the player controls pitch by covering or exposing a photoresistor to light, like waving your hand over it. A pushbutton switches between a low octave and a high octave, making the instrument feel expressive and playful. I wanted the interaction to feel unusual and fun, more like a theremin than a traditional instrument.

const int LDR_PIN = A0;
const int BTN_PIN = 2;
const int BUZZ_PIN = 8;

void setup() {
  pinMode(BTN_PIN, INPUT_PULLUP);
}

void loop() {
  int lightVal = analogRead(LDR_PIN);
  int btnState = digitalRead(BTN_PIN);

  int freq;
  if (btnState == LOW) {
    freq = map(lightVal, 0, 1023, 500, 2000);
  } else {
    freq = map(lightVal, 0, 1023, 100, 500);
  }

  tone(BUZZ_PIN, freq);
  delay(10);
}

I started by placing a photoresistor on the breadboard with a 10kΩ resistor forming a voltage divider, which lets the Arduino read changing light levels through A0. At first the buzzer played the same tone constantly; later I discovered through the serial monitor that A0 was reading 1023 the whole time, which meant the resistor was not properly connected to the photoresistor. Once I fixed that, the pitch started responding to light.
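A rough sketch of why the divider behaves this way, assuming the fixed 10kΩ resistor sits on the ground side and A0 taps the midpoint (dividerReading is a hypothetical helper, not code from the project):

```cpp
#include <assert.h>

// Expected 10-bit ADC reading for an LDR/fixed-resistor voltage divider
// powered from 5V, with the fixed resistor on the ground side.
// rLdr and rFixed are in ohms.
int dividerReading(float rLdr, float rFixed) {
  float vOut = 5.0f * rFixed / (rLdr + rFixed); // voltage at the A0 tap
  return (int)(vOut / 5.0f * 1023.0f + 0.5f);   // scale to 0–1023
}
```

With this wiring, a dark (high-resistance) LDR pulls the reading toward 0 and bright light pushes it toward 1023; with the fixed resistor missing entirely there is no divider at all, which is why A0 sat at a constant value.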

What I’m proud of

The part I’m most proud of is figuring out the button wiring. This was incredibly frustrating, as I spent a long time confused about which legs of the button connect internally, which pins on the Arduino were digital versus analog, and why the signal kept reading the wrong values. I watched a YouTube tutorial to better understand the process; it was quite challenging, to be honest. https://www.youtube.com/watch?v=gj-H_agfd6U

What I can do better next time

Next time, I want to map the light values to specific musical notes rather than continuous frequencies, so it sounds more like a real scale than a smooth slide between pitches. I could also add an LED that lights up when the button is pressed for visual feedback, or use multiple photoresistors to create distinct pitch-control zones. I could also arrange the wires more neatly next time.

This is the video of me testing the sound: IMG_2491

 

Week 11 Assignment

Concept

The concept of my project is to create a simple electronic musical instrument using Arduino. I wanted to turn basic components like a button and a potentiometer into something interactive and expressive. The button works like a “play key,” while the potentiometer controls the pitch of the sound. By combining these elements, the user can actively “play” the instrument instead of just hearing a fixed sound. I also added a second button to switch between low and high pitch modes, which makes the instrument feel more dynamic and closer to a real musical device.

How I Made This

int playButtonPin = 2;
int modeButtonPin = 3;
int buzzerPin = 8;
int ledPin = 6;
int potPin = A0;

int lowNotes[]  = {262, 294, 330, 349, 392, 440, 494, 523};   // C4 to C5
int highNotes[] = {523, 587, 659, 698, 784, 880, 988, 1047};  // C5 to C6

bool highMode = false;
int lastModeButtonState = LOW;

void setup() {
  pinMode(playButtonPin, INPUT);
  pinMode(modeButtonPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int playButtonState = digitalRead(playButtonPin);
  int modeButtonState = digitalRead(modeButtonPin);
  int potValue = analogRead(potPin);

  int noteIndex = map(potValue, 0, 1023, 0, 7);

  // Detect a press of the second button: toggle the mode
  if (modeButtonState == HIGH && lastModeButtonState == LOW) {
    highMode = !highMode;
    delay(200); // simple debounce
  }
  lastModeButtonState = modeButtonState;

  int frequency;
  if (highMode) {
    frequency = highNotes[noteIndex];
  } else {
    frequency = lowNotes[noteIndex];
  }

  if (playButtonState == HIGH) {
    tone(buzzerPin, frequency);
    digitalWrite(ledPin, HIGH);
  } else {
    noTone(buzzerPin);
    digitalWrite(ledPin, LOW);
  }

  Serial.print("Mode: ");
  if (highMode) {
    Serial.print("HIGH");
  } else {
    Serial.print("LOW");
  }

  Serial.print("  Note index: ");
  Serial.print(noteIndex);
  Serial.print("  Frequency: ");
  Serial.println(frequency);

  delay(10);
}

 

I started by building a very basic circuit with a buzzer, a button, and a potentiometer. At first, the buzzer only made continuous sounds, which did not feel like music. Then I modified the code so that the potentiometer controls discrete notes instead of continuous frequencies. This made the sound more like a real scale (Do, Re, Mi).

After that, I added a second button to switch between two sets of notes (low and high pitch). This required both wiring changes and updating the code logic to detect button presses and toggle modes. I also added an LED that lights up when the instrument is being played, which gives visual feedback and makes the interaction clearer.

Throughout the process, I tested each part step by step instead of building everything at once. This helped me identify problems more easily and understand how each component works.

What I’m Proud Of

The part I’m most proud of is how I figured out the mistakes with the buttons. At the beginning, my buttons did not work correctly at all. Sometimes the sound would play randomly, and sometimes pressing the button did nothing. I realized later that I misunderstood how the button pins (1A, 2A, etc.) are connected internally, and I also forgot to use the resistor properly.

After debugging, I learned that the button needs to cross the middle gap on the breadboard and that I must connect a pull-down resistor to stabilize the signal. Fixing this problem made everything work correctly, and it was a moment where I felt I really understood the circuit instead of just following instructions. This experience helped me become more confident in troubleshooting hardware problems.

What I Can Do Better Next Time

Next time, I think I can improve both the design and the interaction of my project. Right now, the instrument is still quite simple, and the sound is limited to basic tones. I could improve this by adding more buttons to create a small keyboard or by programming simple melodies.

I also want to make the interaction more intuitive. For example, I could use a light sensor to control pitch instead of a potentiometer, which would make the instrument feel more creative and less mechanical. Another improvement would be organizing the wiring more clearly, because my current circuit is a bit messy and hard to read.

Overall, this project helped me understand both coding and circuit design better, but I think there is still a lot of space to make it more expressive and closer to a real musical instrument.

Reading reflection

After reading both the essay and the responses, one clear idea is the difference between what interaction design could be vs what it is now. Bret Victor wants computers to help people think and understand, not just click buttons. This made me reflect that many of my own projects are still very basic—more like reactions, not real thinking tools.

Another important thought is about making things visible. He argues that systems should show how they work instead of hiding everything. This connects to learning. When I can see changes directly, I understand faster. So interaction design is not only about design, but also about how people learn.

However, the responses made me question his ideas. Some people say his vision is too idealistic and hard to apply in real life. Real systems have limits, and not all users want to explore deeply. Sometimes people just want things to be fast and simple. So his ideas may work better for learning tools, not everyday apps.

It also made me think about the role of the designer. Instead of controlling everything, the designer creates a system where users explore by themselves. This is similar to interactive art, but it also means less control over the final experience.

Overall, these readings made me see interaction design as more than coding. It is about how people think, understand, and interact with systems, but also about balancing ideal ideas with real-world limits.

Week 11 – Serial Communication

Repository

Repository

Exercise 1

1. Overview

In this exercise, we explored serial communication between Arduino and p5.js. The main objective was to use a single sensor on Arduino and translate its input into visual movement in p5 — specifically controlling an ellipse moving horizontally across the screen. A potentiometer connected to pin A1 served as the analog input, and no data was sent back from p5 to Arduino, making this a one-directional communication exercise.

2. Concept

The core idea was to establish the simplest possible link between a physical input and a digital visual. By turning a potentiometer, the user directly moves an ellipse across a canvas in the browser in real time. This introduced us to the fundamental pipeline of physical interaction: sensor reads a value, Arduino maps and sends it, p5 receives and translates it into something visible on screen. The simplicity of the setup made it easy to trace the data flow end-to-end and understand what was happening at each stage.

3. Process and Methods
    • We began by following the example demonstrated in class and gradually adapted it to better understand the data flow. On the Arduino side, the potentiometer on pin A1 returns a raw analog value between 0 and 1023. We mapped this to a smaller range of 0–255 using map() before sending it through Serial.println(). This made the value easier to work with on the p5 side without losing meaningful range.
    • For p5.js, we used the p5.webserial library following the structure introduced in class. The serial connection is opened manually through a button, which triggers the browser’s port picker dialog. Inside draw(), port.readUntil("\n") reads each incoming line, trim() strips the newline character, and int() converts the string into a usable number. That number is then mapped to the canvas width using map(), which drives the ellipse’s horizontal position. The ellipse stays fixed on the vertical centre of the canvas at all times.
4. Technical Details
    • The Arduino maps the raw potentiometer value before sending it to reduce the range to something the p5 side can easily stretch across the canvas:
// ── READ SENSOR ──
// analogRead returns 0–1023 based on the voltage at A1.
// At 0V (GND side) → 0. At 5V → 1023.
int potentiometer = analogRead(A1);

// ── MAP TO SMALLER RANGE ──
// Compress 0–1023 to 0–255 so p5 can easily map it onto the canvas width.
int mappedPotValue = map(potentiometer, 0, 1023, 0, 255);

// ── SEND TO p5 ──
// Serial.println() sends the number as a string followed by a newline character '\n'.
// p5 uses that newline to know where one value ends and the next begins.
Serial.println(mappedPotValue);
    • On the p5 side, the incoming string is cleaned and converted before being mapped to a screen position:
let str = port.readUntil("\n");

if (str.length > 0) {
  // trim() removes the trailing '\n' (and any spaces).
  // int() converts the cleaned string into a number.
  let val = int(trim(str));

  // ── MAP VALUE TO CANVAS ──
  // Arduino sends 0–255. We stretch that range across
  // the full canvas width so the ellipse covers the
  // entire screen as the pot is turned.
  x = map(val, 0, 255, 0, width);
}
    • This two-step mapping — first on Arduino from 0-1023 to 0-255, then in p5 from 0-255 to 0-canvas width — demonstrates how data can be progressively transformed as it passes between systems.
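The two-step mapping can be condensed into one testable helper, shown here in C++ for both halves (mapRange mirrors Arduino's integer map(); the 600 px canvas width and the potToScreenX name are illustrative):

```cpp
#include <assert.h>

// Same integer formula as Arduino's built-in map().
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Step 1 (Arduino side): compress the raw 0–1023 reading to 0–255.
// Step 2 (p5 side): stretch the received 0–255 value across the canvas.
int potToScreenX(int raw, int canvasWidth) {
  int sent = (int)mapRange(raw, 0, 1023, 0, 255);
  return (int)mapRange(sent, 0, 255, 0, canvasWidth);
}
```

The endpoints survive both steps exactly; intermediate values lose a little precision to integer division, which is invisible at canvas resolution.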
5. Reflection

This exercise gave us a concrete understanding of how physical input can directly influence digital visuals in real time. The most valuable part was seeing the full pipeline in action: turning the potentiometer caused an immediate, visible response on screen, which made the abstract idea of serial communication feel tangible. It also highlighted the importance of consistent data formatting — the newline character that Serial.println() appends is what makes port.readUntil("\n") work reliably on the p5 side. If we were to continue developing this, we would explore using different sensors such as FSR or ultrasonic distance sensors, add smoothing to reduce noise, and expand the visuals to control more than one element.

Exercise 2

1. Overview

In this exercise, we reversed the direction of communication — from p5.js to Arduino. The objective was to control the brightness of a physical LED using mouse movement in the browser. p5 continuously sends a brightness value based on the mouse’s horizontal position, and Arduino uses that value to dim or brighten the LED through PWM. No data is sent back from Arduino to p5.

2. Concept

Where Exercise 1 used hardware to control software, this exercise flipped that relationship. The browser became the controller and the LED became the output. Moving the mouse across the screen is a familiar, intuitive gesture, and seeing that gesture reflected immediately in a physical light made the connection between the two systems feel direct and satisfying. This exercise also introduced us to the handshake pattern, which is necessary when Arduino needs to wait for p5 to be ready before the communication loop can begin.

3. Process and Methods
    • We kept the overall structure close to what was demonstrated in class and simplified the communication to a single value per message. On the p5 side, mouseX is mapped to a range of 0–255 using map() and constrained with constrain() to prevent out-of-range values. This number is sent to Arduino as a string followed by a newline character using port.write().
    • On the Arduino side, the sketch begins with a handshake loop that repeatedly sends "0" and blinks the built-in LED until p5 responds. Once connected, Serial.parseInt() reads the incoming integer from the serial buffer. After confirming the message ends with '\n', analogWrite() applies the value to the LED on pin 5. Because analogWrite() requires a PWM-capable pin, the LED must be connected to one of the pins marked with a tilde (~) on the Arduino board — in our case, pin 5.
4. Technical Details
    • The p5 sketch maps mouse position to brightness and sends it continuously while the port is open:
// ── CALCULATE BRIGHTNESS ──
// Map the mouse's horizontal position across the canvas
// to a brightness value in the range 0–255.
// This matches the range that analogWrite() accepts on Arduino.
let brightness = int(map(mouseX, 0, width, 0, 255));

// ── CONSTRAIN ──
// Clamp the value so it never exceeds 0–255,
// even if the mouse moves outside the canvas.
brightness = constrain(brightness, 0, 255);
// ── SEND TO ARDUINO ──
// Only write if the port is open (i.e., user has connected).
// We append '\n' so Arduino's Serial.read() can detect
// the end of the message after parseInt() runs.
if (port.opened()) {
  port.write(brightness + "\n");
}
    • Arduino reads the value and applies it to the LED:
// ── READ BRIGHTNESS ──
// parseInt() reads digits from the buffer and returns them as an integer. 
// It stops at any non-digit character (including the newline).
int brightness = Serial.parseInt();

// ── CONFIRM COMPLETE MESSAGE ──
// After parseInt(), the next character should be '\n'.
// This confirms we received a full message and not a partial one, preventing corrupted values.
if (Serial.read() == '\n') {

  // ── SET LED BRIGHTNESS ──
  // analogWrite(pin, 0–255) uses PWM to control brightness.
  // 0 = fully off, 255 = fully on, values in between = dimmed.
  analogWrite(ledPin, brightness);
}
    • The '\n' check after parseInt() is important — it confirms that a complete message was received before acting on the value, which prevents the LED from responding to corrupted or partial data.
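The parse-then-confirm pattern can be modeled off-device as a small sketch (parseBrightness is a hypothetical helper; strtol stands in for Serial.parseInt(), and the trailing-character check mirrors the Serial.read() == '\n' test):

```cpp
#include <assert.h>
#include <stdlib.h>

// Parse the leading integer from a newline-terminated message and only
// accept it if the newline is actually present. Returns -1 for a
// partial message, modeling "wait for the rest of the bytes".
int parseBrightness(const char *msg) {
  char *end;
  long value = strtol(msg, &end, 10); // consumes digits, stops at '\n'
  if (*end != '\n') return -1;        // no terminator yet: incomplete
  return (int)value;
}
```

A buffer holding only "12" is rejected until the '\n' arrives, which is exactly what protects the LED from acting on a truncated "128" that was really the front half of "1283...".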
5. Reflection

This exercise made the communication feel more interactive than Exercise 01, because the browser was no longer just a display — it was actively sending instructions to hardware. The main issue we encountered was that the LED did not dim smoothly at first. After checking the wiring, we found the LED was connected to a non-PWM pin, which meant analogWrite() had no effect. Moving it to pin 5 resolved this immediately. This was a useful reminder that the physical wiring must match the assumptions in the code. If we were to continue, we would replace mouse control with a more interesting interaction such as key presses or dragging, add multiple LEDs with independent brightness values, and eventually expand into bi-directional communication.

Exercise 3

1. Overview

This exercise brought together everything from the previous two by implementing full bi-directional communication between Arduino and p5.js. A joystick connected to Arduino controls the wind force in a physics simulation, while p5 sends a signal back to Arduino to light up an LED every time the ball bounces. Data flows in both directions simultaneously.

2. Concept

The gravity and wind sketch provided a compelling context for bi-directional communication because it has two clearly distinct interactions that naturally map to each side: the joystick controls something in the simulation, and something in the simulation triggers a response on the hardware. Rather than the user controlling a parameter directly, the LED reacts to an event — a bounce — which made the physical and digital feel genuinely connected rather than just linked. Replacing keyboard input with a physical joystick also made the experience more immersive, since it gave the user a tangible way to influence the simulation.

3. Process and Methods
    • We started from the gravity and wind example provided in class and kept the simulation structure mostly unchanged. The main modifications were replacing keyboard-based wind control with joystick input, and adding bounce detection that communicates back to Arduino.
    • On the Arduino side, the joystick’s horizontal axis is read from pin A0 using analogRead() and sent to p5 via Serial.println() after every incoming message. On the p5 side, the value is read with port.readUntil("\n"), trimmed, converted to an integer, and mapped from 0-1023 to a wind force range of -1 to 1. This value is applied to the wind vector each frame, so tilting the joystick left or right pushes the ball accordingly.
    • For the return signal, we added bounce detection inside draw(). Each frame, a variable bounced is initialised to 0. If the ball hits the floor with a downward velocity greater than 2, it is counted as a real bounce and bounced is set to 1. This value is sent back to Arduino using port.write(). Arduino reads it with Serial.parseInt() and calls digitalWrite() to turn the LED on or off.
4. Technical Details
    • The bounce detection uses a velocity threshold to distinguish real impacts from the small residual movements that occur when the ball settles on the ground:
  // ── BOUNCE DETECTION ──
  let bounced = 0;                    // assume no bounce this frame
  let floorY = height - mass / 2;    // y where ball touches the floor

  if (position.y > floorY) {
    position.y = floorY;   // prevent ball from going below floor

    // ── VELOCITY THRESHOLD ──
    // Only count it as a real bounce if the ball hits with enough
    // downward speed (> 2). This filters out the tiny movements when the
    // ball is nearly at rest, which would otherwise make the LED flicker continuously.
    if (velocity.y > 2) {
      velocity.y *= -0.9;   // reverse and reduce (energy loss on impact)
      bounced = 1;           // signal a real bounce to Arduino
    } else {
      velocity.y = 0;        // ball has come to rest — stop it completely
    }
  }

  // ── SEND BOUNCE STATE TO ARDUINO ──
  // Sends 1 if a real bounce happened this frame, 0 otherwise.
  // Arduino uses this to briefly light the LED on each bounce.
  // '\n' is appended so Arduino can confirm end of message.
  if (port.opened()) {
    port.write(bounced + "\n");
  }
}
    • On the Arduino side, the received state is applied directly to the LED:
// ── READ LED STATE FROM p5 ──
// p5 sends '1' when a bounce is detected, '0' otherwise.
// parseInt() extracts the integer from the incoming string.
int ledState = Serial.parseInt();

// ── CONFIRM COMPLETE MESSAGE ──
// Check for the newline that p5 appends with port.write().
// This prevents acting on partial or corrupted values.
if (Serial.read() == '\n') {

  // ── CONTROL LED ──
  // ledState is 1 (HIGH) when p5 detects a bounce → LED on.
  // ledState is 0 (LOW) otherwise → LED off.
  digitalWrite(ledPin, ledState);
}
    • The joystick read and send happens inside the same while (Serial.available()) block, directly after the LED state is processed. This keeps the two directions of communication tightly coupled within each communication cycle.
5. Reflection

This was the most technically demanding exercise of the three, and also the most rewarding. The biggest challenge was the LED flickering continuously when the ball came to rest on the ground. Even though the ball appeared stationary, the physics simulation kept generating tiny downward movements due to gravity, which the system kept interpreting as bounces. Introducing the velocity threshold velocity.y > 2 solved this cleanly and also taught us something broader: physics simulations produce continuous, noisy output, and meaningful events often need an explicit threshold or condition to be extracted from that noise. If we were to continue, we would add a dead zone to the joystick centre to reduce drift, use both joystick axes for more complex motion, and add more LEDs to represent different simulation events.
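As an illustration of the joystick dead zone proposed above (not part of the current sketch; joystickToWind and the dead-zone width are hypothetical):

```cpp
#include <assert.h>

// Map a raw 0–1023 joystick reading to a wind force of roughly -1..1,
// clamping readings near the centre (512 ± deadZone) to exactly 0 so
// that a resting joystick doesn't push the ball sideways.
float joystickToWind(int raw, int deadZone) {
  int centered = raw - 512;
  if (centered > -deadZone && centered < deadZone) return 0.0f;
  return centered / 512.0f;
}
```

This is the same thresholding idea as the bounce-velocity check: a noisy continuous signal only becomes a meaningful input once values below an explicit threshold are ignored.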

6. Resources

Week 11 — Reading Response

What struck me most about Pullin’s argument is the way he reframes concealment — not as a neutral design default, but as a value judgment quietly embedded in every flesh-toned hearing aid and skin-matching prosthetic. I’d always assumed that “discreet” design for disability was simply considerate, a way of respecting the user’s desire to blend in. Pullin destabilizes that assumption by asking who decided blending in was the goal in the first place. The eyeglasses example is effective precisely because it’s already resolved: nobody today apologizes for visible frames or tinted lenses, and the question of why hearing aids can’t occupy the same cultural space is genuinely difficult to answer without exposing some discomfort about how disability is perceived. What this raises for me is whether the concealment instinct in design is a response to user needs or a projection of the designer’s own unease — and whether those two things are even distinguishable in practice. If a designer has never lived with a hearing aid, their intuition about “what users want” is shaped by imagining the stigma from the outside, which may be a very different thing from what someone actually navigating that stigma would choose.
The tension I keep returning to is the one between designing for expression and designing for choice — and I’m not sure Pullin fully resolves it. His most compelling cases, like Aimee Mullins’ carved wooden legs, work because they belong to a specific person with a specific relationship to visibility and performance. But what interests me is the middle ground he gestures at: the user who neither wants to pass as non-disabled nor be conscripted into a narrative of disability-as-spectacle they didn’t ask for. That unresolved space feels like the honest core of the book, and it connects for me to broader questions in interaction design about whether personalization is a solution or a way of deferring a harder design decision. Giving users options is good, but the options still frame what’s possible — and right now, as Pullin shows, the frame has been set almost entirely by clinical priorities and the discomfort of people who aren’t disabled. That framing shapes perception in ways I hadn’t fully considered before reading this.

Week_11_Assignment

Schematics

Video Demo

P5 code

https://editor.p5js.org/JingyiChen/sketches/8ggtbgHZV

I made changes to the code to include the serial communication components and also adjust the LED logic slightly so it would fit the bounce better. The code for serial communication was all adapted from the week 11 example 2 bidirectional communication example.

// Original bounce code; it doesn't work well if I put the ledState-deciding if/else in here.
if (position.y > height - mass / 2) {
  velocity.y *= -0.9; // A little dampening when hitting the bottom
  position.y = height - mass / 2;
}
// So I added another if statement to individually decide whether the LED should light up according to the ball's y position.
// The first line creates a 15-pixel range near the bottom line for the LED to light up, to prevent the glitch-like blinks that would happen with the original bounce code.
if (position.y > height - mass / 2 - 15) {
  // Use the absolute value of the velocity so the ball triggers the LED both on the way down and on the way up
  if (abs(velocity.y) > 2) {
    ledState = 1;
  } else {
    ledState = 0;
  }
} else {
  // If the ball is not within the 15-pixel range
  ledState = 0;
}

This code snippet is where I added some logic to make the LED perform more in sync with the bounce. Originally I added the ledState change directly inside if (position.y > height - mass / 2) {}, which resulted in a bit of a glitchy-looking blink. I think this might have to do with the very small time frame of the on signal and the time it takes for the information to be communicated between p5 and Arduino. So I added another if statement to turn the LED on when the ball is within a 15-pixel range above the bottom line, and I also checked the velocity so the LED does not stay on when the ball is barely bouncing or stationary. This makes the LED a lot more stable, and it lights up correctly on every bounce.
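The decision described above can be factored into a pure function, rewritten here in C++ so it can be tested in isolation (ledFor and its parameters are illustrative, not names from the sketch):

```cpp
#include <assert.h>

// LED is on only when the ball is within 15 px of the floor AND moving
// faster than the threshold in either direction (|velY| > 2).
int ledFor(float posY, float velY, float height, float mass) {
  float windowTop = height - mass / 2.0f - 15.0f;
  int fastEnough = (velY > 2.0f || velY < -2.0f);
  return (posY > windowTop && fastEnough) ? 1 : 0;
}
```

The two conditions correspond to the two fixes in the post: the 15-pixel window stretches the on signal long enough to survive the serial round trip, and the velocity check keeps the LED off once the ball settles.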

Arduino code

https://github.com/JingyiChen-jc12771/Intro-to-IM/blob/8f5bdbc0282acbce4a7edea2334f1305c493216c/W11_01_serial_simple_potentiometer.ino

The Arduino code is also adapted from the week 11 example 2 bidirectional communication example. I changed the code very slightly so it would reflect the wiring of having only one input from the potentiometer.

Difficulties and areas for improvement

The biggest difficulty was the glitchy blink when I added the blink code to the original code. I spent some time trying to figure out why, but because I couldn't find anything wrong with the p5 or Arduino code, I had to guess it was timing differences. It would have been better if I had been able to uncover the root cause of the problem and figure out how to solve it in the original code.

 

 

Week 11- Reading Response

A Brief Rant on the Future of Interaction Design

The author emphasizes that human capabilities are important when thinking about designing the future. I have to agree with this part, especially when he mentioned that humans have hundreds of degrees of freedom, which reminds me of my robotics class—our professor highlighted that robotics takes a lot from human nature, like the robotic 3 degrees of freedom (DOF) arm. So, when designing things, we can make use of human capabilities.

But this is not what the author meant; he wanted designs to make use of our already existing capabilities to interact with them. As he mentioned, a hammer’s grip is meant for a human hand. However, when it comes to the future of interactive design and technology, I have to disagree that interacting with it should require using my full human body, because not everyone is able-bodied. Technology has to be accessible to everyone—isn’t that why we have it? To provide access to more things in a more effective, optimized way?

Making mobile phone calls rather than using a phone booth, reading or listening to books and articles anywhere, anytime—before having interactive design at my fingertips, I would not be able to type on a computer; I would have to go to a bookstore and have a professional typist type this reading response out. Sometimes, it takes writing things out to realize how truly blessed we are to have these types of things right at our fingertips.

Then, we can adapt this technology to be used by everyone. I also want to mention that I agree that using the full immersive experience with the human body is much more entertaining and fun. So, it depends on the end-use product or idea. I do not think the author is biased, but I think the author should cover different cases where this is not necessary or makes things harder for some people. I do not think the author changed my mind, but it opened me up to more ideas and thoughts on how to truly design good products that try to include everyone. An example of an incredibly inclusive interactive design is the Meta Quest 3 and 3S VR headset, which has an option to play while seated, adjust the distance between the eyes, and even add a glasses prescription. If you do not want to pay extra or share it with family members, it also has space to fit your own glasses.

I had a few questions throughout, but I eventually answered them myself by writing my thoughts out, such as: “Technology has to be accessible to everyone—isn’t that why we have it? To provide access to more things in a more effective, optimized way?” Another question I had was: isn’t a good designer someone who considers different cases to make the product or design as effective as possible?

 

A follow-up article

I thought the author might cover some of the questions and respond to thoughts similar to mine, but he did not. He seemed to express some dislike for “waving hands in the air” when it comes to manipulating things, because you cannot feel what you are manipulating. From an improvement point of view, I agree that this would be beneficial. However, I believe that if researchers were to receive funding for it, it would mainly come from the medical field, to help people with loss of sensation, such as from neuropathy, stroke, or spinal cord injury.

There are mainly two types of haptic gloves; which type depends entirely on whether the goal is therapeutic improvement (relearning sensation) or sensory substitution (using technology to mimic touch), which I believe could later be used in games. It reminds me of audiobooks, which were initially made to assist people with hearing or reading difficulties in accessing information from books, but nowadays are used by a much larger audience: busy parents, people with demanding work schedules, kinesthetic learners, and many more.

A lot of the time, these types of research efforts end up helping a larger group of people than initially predicted. I believe that designers should make good use of what we have, while researchers should continue to expand on what we need and what we already know.

 

Arduino/Tinkercad (2)

Create Your Rhythm

Concept:
During class, when Professor Mang was explaining the homework for Tuesday, my younger brother was actually sitting next to me watching nursery rhymes (to be specific, he was watching Mary Had a Little Lamb), which I found funny because I then started to think about the project and its different elements. I also remember having a small toy piano (I had both the one that looked like a real piano and the one that looked like a toy), so I started to wonder if I could create the keys, make my own music, and maybe try recreating the sounds of Mary Had a Little Lamb.
I was able to create the song! And not just Mary Had a Little Lamb; I also experimented with the notes to create my own music! (3-2-1-2 (Ma-ry had a) 3-3-3 (lit-tle lamb) 2-2-2 (lit-tle lamb) 3-3-3 (lit-tle lamb) 3-2-1-2 (Ma-ry had a) 3-3-3-3 (lit-tle lamb whose) 2-2-3-2 (fleece was white as) 1 (snow)!!!!)

Hand-drawn schematic:

How this was made:
I used the tips from Professor Mang; he also advised us to use a ruler, which is something I didn't do last time. Honestly, compared to my first schematic, this one is so much easier to read; my first one was not that organized. I also always like doing rough schematics as I go, and then a final neat, organized one so I can easily look back at it.

Image: (Link)

How this was made:

First, I duplicated my last project because I think it's easier to edit than to start brand new. I removed the LED lights and added the piezo, connecting its positive side (h7) to pin ~9 with a green wire. I used pin ~9 because, as we learned in class, it is a PWM pin that can handle the pulses, which in my case is needed to create different sound frequencies. Then I connected the negative side of the piezo (h2) to (-) with a black wire to GND. After I made sure GND was connected to (-) and 5V to (+), I was able to focus on the buttons and add 4 different ones (Button 1 plays Note C, Button 2 plays Note D, Button 3 plays Note E, and Button 4 plays Note F). I placed the buttons in a row across the middle of the board with their legs in columns g and e. Each button has one side connected to the negative rail with a black wire and the other side connected to its specific digital pin (4, 5, 6, or 7) with a green wire. I decided to line them up this way to create something similar to a real piano keyboard. For the last part, I just moved the potentiometer because I needed more space for the four buttons; otherwise it keeps a similar position to my last project.

Code:

// C++ code
//
void setup() {
  pinMode(9, OUTPUT); // the piezo speaker
  
  // setting up all the buttons
  pinMode(4, INPUT_PULLUP); //Button 1=Note C
  pinMode(5, INPUT_PULLUP); //Button 2=Note D
  pinMode(6, INPUT_PULLUP); //Button 3=Note E
  pinMode(7, INPUT_PULLUP); //Button 4=Note F
}

void loop() {
  int sensor = analogRead(A0);
  int multi = map(sensor, 0, 1023, 1, 4); // turn the dial reading (0-1023) into a pitch multiplier (1-4)

  //when button is pressed and the dial is changed then the notes are multiplied=higher pitch
  if (digitalRead(4) == LOW) {
    tone(9, 262 * multi); //C=262
  } 
  else if (digitalRead(5) == LOW) {
    tone(9, 294 * multi); //D=294
  } 
  else if (digitalRead(6) == LOW) {
    tone(9, 330 * multi); //E=330
  } 
  else if (digitalRead(7) == LOW) {
    tone(9, 349 * multi); //F=349
  } 
  else {
    noTone(9); //when nothing is pressed there is no sound
  }

  delay(10); // 10 ms delay
}

 

How this was made:

For my code, in setup() I first set pin 9 as an output because that's for the piezo, my speaker. Then I set up pins 4, 5, 6, and 7 for my buttons using INPUT_PULLUP, so I wouldn't have to add any extra resistors or wires on the breadboard, since I can use the microcontroller's built-in pull-up resistors instead. In the loop, I used analogRead(A0) to check the dial first; because the dial gives a number from 0 to 1023, I used the map() function (which I learned from the Arduino docs) to change that number into a small number from 1 to 4. I called this variable multi (short for multiply), which lets me raise the pitch just by turning the dial. For the music, I used the if/else structure, which I am familiar with from p5.js, and because I wanted to create keys, I followed the musical notes, so button 1 = 262 Hz, a C note, and so on (of course, I checked the note frequencies to do this). As the dial increases, the frequencies also increase by multiplication. I basically ended the code with else noTone(9), so when no button is pressed there is no sound coming from the piezo, and, like always, a 10 ms delay at the bottom just to keep the loop from running too fast and making the sound glitchy.

Resources:

Reflection and future improvements:
Honestly, I’m super proud of what I created. Like always, the first time is confusing, complicated, and annoying, but comparing when I first did those tutorials to now, creating my own rhythm, I’m so happy with how this turned out! I was actually playing it out loud for my siblings too. Overall, I think creating something in different formats while keeping it creative, personal, and fun is always something I look forward to doing. When it comes to the piano and my personal connection to it, honestly I think younger Moza would be so proud and amazed to see that I actually made those keys and notes using code and Arduino (Tinkercad), because that’s just so cool! For future improvements, I think I did really well (fun fact: I actually created two projects: the first was simple, but for this one, I pushed myself to see what else I could do!) In the future, I want to experiment even more and keep pushing my limits. 🙂

Reading Week 11

This reading connects immersion to interactivity because Bret Victor argues that real interaction should go beyond just touching a screen: “pictures under glass” like phones, tablets, and flat screens reduce human interaction to simple sliding and tapping, while ignoring how our hands actually work through touching, gripping, holding, and movement. I found this really interesting because it made me think about how much we rely on technology that removes physical experience instead of improving it. It also reminded me of the movie WALL-E, where humans stop moving, stop engaging with the world, and become fully dependent on screens and machines. This made me question whether we are slowly doing the same thing in real life by ignoring our own human capabilities.

I actually made a project for my Understanding Interactive Media class based on this reading, and I called it Felt. My project focuses on immersion and accessibility: while the article asks us to question how technology limits human capabilities, screens and technology can also create access for people with disabilities. This made me ask: how can we design interaction in a way that creates both immersion and inclusion? Felt is built on the idea that immersion is not only visual; it is physical, going beyond vision and our reliance on it. (We see this in many different installations, like teamLab.) I chose to focus on blind people, who already experience space differently because they rely more on sound, touch, memory, and balance. For sighted people, vision takes over, making us forget how much the rest of the body can do.