Week 10 Production (Ling and Mariam)

Conceptualization

For this week’s assignment, the requirement is to “make a musical instrument” using at least one digital sensor (switch) and at least one analog sensor (photoresistor, potentiometer, or distance-measuring sensor). I had worked with a photoresistor last week, and a potentiometer isn’t aesthetic enough for a musical instrument in my view, so I looked into the toolbox and proposed the ultrasonic sensor, in other words, a distance-measuring sensor. It works perfectly as a tool to measure the distance between the sensor and any solid object in real time, and we were thinking of mapping this distance to pitches so that whenever people move their hands in front of the sensor, the sound the buzzer makes changes accordingly.

After implementing this simple step, I was thinking about two problems: first, how could I integrate a switch into this system, and second, is there any way to alter the tempo (another important factor in making music) of the sound the buzzer makes? To resolve the first problem, I decided to add a system on/off switch, since it’s only logical to have a switch that controls the entire system. To resolve the second problem, I proposed adding a switch to control the tempo, since after the first implementation the buzzer would just make a continuous sound, and it would be hard to add any tempo element to the instrument.

We decided there could be two ways to control the tempo:

  1. The switch would record manual interaction and play the sound exactly as it was manually inputted.
  2. The switch would record how many times people push the button within a 5-second window (starting from the first push) and calculate the tempo as 1000 ms divided by the number of pushes.

We finally decided to go with the second option, as it provides a steadier tempo; purely manual input would otherwise be mathematically unstable.

So the final output (the video demonstration and the schematic) is shown below.

Video Demonstration

Schematic

Code Highlight:

In my opinion, there are two interesting parts of this program: first, how the pitch is controlled (essentially a mapping program), and second, the program that controls the tempo (the logic of the tempo control is described above).

duration = pulseIn(echoPin, HIGH);
distance = (duration * 0.0343) / 2.0;  // in cm

// Check if distance is valid and in a usable range
if (distance > 0 && distance < 200) {
  float d = distance;

  // Limit distances for stable mapping
  if (d < 5)  d = 5;
  if (d > 50) d = 50;

  // Map distance to frequency (closer = higher pitch)
  int frequency = map((int)d, 5, 50, 2000, 200);
  // ... frequency is then played on the buzzer
}

This code shows how the pitch is correlated with the distance in the program.

duration = pulseIn(echoPin, HIGH);

This measures how long the echo signal stayed HIGH. This is the time for the sound to travel to the obstacle and back.

distance = (duration * 0.0343) / 2;

The duration (in microseconds) is multiplied by the speed of sound in air (0.0343 cm/µs). It’s divided by 2 because the sound traveled to the object and back, but we only want the one-way distance.
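To make this arithmetic concrete, here is a minimal desktop sketch of the same conversion, with the math isolated in a helper function (`pulseToDistanceCm` is my own hypothetical name, not part of the project code):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper isolating the distance math for illustration.
// duration is the echo pulse width in microseconds.
float pulseToDistanceCm(unsigned long duration) {
    // Sound travels ~0.0343 cm per microsecond in air at room temperature;
    // divide by 2 because the pulse covers the round trip to the obstacle.
    return (duration * 0.0343f) / 2.0f;
}
```

A pulse of about 583 µs therefore corresponds to roughly 10 cm.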

// Detect a press (falling edge: HIGH -> LOW)
if (tempoButtonState == LOW && lastTempoButtonState == HIGH) {
  if (!tempoSettingActive) {
    // First press: start 5-second capture window
    tempoSettingActive = true;
    tempoStartTime = millis();
    tempoPressCount = 1;  // Count this first press
    Serial.println("Tempo setting started: press multiple times within 5 seconds.");
  } else {
    // Additional presses inside active window
    tempoPressCount++;
  }

  delay(40); // debounce for tempo button
}

This is the tempo-controlling program; the comments in the code explain each step.
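The snippet only counts the presses; turning the count into a beat interval once the 5-second window closes can be sketched like this (a reconstruction of the logic described earlier, not the exact project code, and `beatIntervalMs` is a hypothetical name):

```cpp
#include <cassert>

// Once the 5-second window has elapsed, the tempo is derived from the
// press count: 1000 ms divided by the number of pushes, as described above.
unsigned long beatIntervalMs(unsigned int pressCount) {
    if (pressCount == 0) return 1000;  // assumed fallback: one beat per second
    return 1000UL / pressCount;
}
```

Four presses in the window would give a 250 ms interval between beats.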

The entire code is completed with assistance from GPT.

Reflection:

For future improvements, I want to work with a buzzer that produces better sound than this one, but the experience of working with this buzzer will transfer to future projects.

 

Week 9: Analog and Digital Sensor

Concept:

For this project, I decided to use one digital sensor and one analog sensor to control two LEDs. For the digital part, I connected a push button to a digital pin to control one LED — pressing the button turns the light on, and releasing it turns it off. For the analog part, I used a potentiometer connected to an analog input to control the brightness of a second LED through an analog output. As I turned the knob, the LED smoothly adjusted its brightness, demonstrating how analog signals can vary continuously instead of just turning on or off. This setup reflects the difference between digital and analog control and how both can work together in an interactive circuit.

Video Demonstration:

https://drive.google.com/file/d/1_urM0vEA__Piz7zGG_TUHDnuhy5QYUgQ/view?usp=drive_link

Circuit Illustration:

Code Highlight:

// Define pin numbers
int ledPin1 = 11;   // LED controlled by potentiometer (analog LED)
int ledPin2 = 2;    // LED controlled by button (digital LED)
int buttonPin = 7;  // push button input

void setup() {
  // Set pin modes
  pinMode(ledPin1, OUTPUT);     // pin 11 used for PWM (brightness control)
  pinMode(ledPin2, OUTPUT);     // pin 2 for digital on/off LED
  pinMode(buttonPin, INPUT);    // button as input (external resistor required)
  
  Serial.begin(9600);           // start serial monitor for debugging
}

void loop() {
  // Read the potentiometer (analog sensor)
  int sensorValue = analogRead(A1);     // reads values from 0–1023
  Serial.println(sensorValue);          // print the reading to serial monitor

  // Control the brightness of LED on pin 11
  // analogWrite expects a range 0–255, so divide by 4 to scale down
  analogWrite(ledPin1, sensorValue / 4);
  delay(30);                            // small delay to stabilize readings

  // Read the push button (digital sensor)
  int buttonState = digitalRead(buttonPin);   // reads HIGH or LOW

  // Control the digital LED based on button state
  if (buttonState == HIGH) {     // if button is pressed (connected to 5V)
    digitalWrite(ledPin2, HIGH); // turn LED on
  }
  else {                         // if button is not pressed (LOW)
    digitalWrite(ledPin2, LOW);  // turn LED off
  }
}

Github Link:

https://github.com/JaydenAkpalu/Intro-to-IM/blob/f43175106ae171c88a5c7db0410a56dc965127af/Week9_AnalogAndDigitalSensor.ino

Reflections & Future Improvements:

Working on this project helped me really understand the difference between digital and analog sensors and how they can control things like LEDs. I saw that a digital sensor, like a push button, only has two states — on or off — which makes it easy to control an LED directly. On the other hand, an analog sensor, like a potentiometer, can give a range of values, letting me gradually adjust the brightness of an LED. It was really satisfying to see how these two types of input behave so differently in a circuit.

If I were to improve the project, I’d try adding more interactive elements. For example, I could use multiple buttons or potentiometers to control more LEDs, or even combine the potentiometer with a light sensor so the LED responds to changes in the environment automatically. I’d also like to experiment with different LED effects, like fading, blinking patterns, or color changes, to make it more dynamic and fun. Overall, this project gave me a better understanding of how simple inputs can be used creatively to make interactive circuits.

Week 10 – A DIY Musical Instrument (The Ultrasonic Air Piano)

Concept:

The idea behind my instrument was to create a hands-free musical device that transforms invisible gestures into sound. My goal was to design something playful yet technical; a device that reacts to both motion and touch. By combining distance and pressure, the instrument invites intuitive exploration: the closer your hand gets, the higher the note, while pressing the sensor triggers the sound. It merges tactile interaction with sound, making music a physical experience.

 

Method & Materials:

This project was my first time working with two types of sensors on the Arduino Uno:

  • Analog Sensor: Ultrasonic sensor (HC-SR04) to measure distance.
  • Digital Switch: Force Sensitive Resistor (FSR) to detect pressure (read on an analog pin, but treated as a simple pressed/not-pressed switch).
  • Output: Piezo buzzer to produce sound.

I connected the ultrasonic sensor to pins 10 (trig) and 11 (echo), the FSR to analog pin A0, and the buzzer to pin 12.
Each note from the C major scale (C–D–E–F–G–A–B) was assigned to a specific distance range, coded in an array:

int notes[7] = {261, 294, 329, 349, 392, 440, 494};

The system reads distance in real time:

  • When the FSR is pressed and your hand is between 0–50 cm of the sensor, the buzzer plays a tone corresponding to that range.
  • If no pressure is detected or the hand moves out of range, the sound stops.
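The two rules above can be sketched as a single lookup function (my reconstruction under the stated 0–50 cm assumption; `noteForDistance` is a hypothetical name, not from the project code):

```cpp
#include <cassert>

int notes[7] = {261, 294, 329, 349, 392, 440, 494}; // C D E F G A B

// Map a distance reading to one of the seven notes; out-of-range
// readings return 0, meaning silence. Each note covers ~7.1 cm.
int noteForDistance(float distanceCm) {
    if (distanceCm < 0.0f || distanceCm > 50.0f) return 0;
    int index = (int)(distanceCm / (50.0f / 7.0f));
    if (index > 6) index = 6;  // clamp the far edge of the last band
    return notes[index];
}
```

With this banding, a hand right at the sensor plays C (261 Hz) and one near 50 cm plays B (494 Hz).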

Process:

At first, it took time to understand how analog vs. digital inputs work and how to read them simultaneously. I researched how to use pulseIn() for the ultrasonic sensor and experimented with mapping values using the map() function.
To visualize the notes, I placed colored paper strips at different distances (each representing one note of the scale).

Throughout the process, I learned:

  • The importance of wiring correctly (e.g., ensuring the FSR forms a voltage divider).
  • How combining two sensors can create more expressive interaction.

Schematic:

Code:

This code combines input from two sensors, an ultrasonic sensor and a force-sensitive resistor (FSR), to generate musical notes through a piezo buzzer. The ultrasonic sensor continuously measures the distance of my hand, while the FSR detects when pressure is applied. When both conditions are met (hand within 50 cm and FSR pressed), the code maps the distance value to a specific note in the C major scale (C, D, E, F, G, A, B). Each distance range corresponds to a different pitch, allowing me to “play” melodies in the air.

The code I’m most proud of is the single line that transforms the project from a simple sensor experiment into a musical instrument. It defines the C major scale, turning numerical frequency values into recognizable notes. I love that such a short line of code gives the device its expressive character; it bridges logic and creativity, translating distance data into melody. It’s the heart of the project, where sound and interaction truly come together.

// --- Define musical notes (C major scale) ---
int notes[7] = {261, 294, 329, 349, 392, 440, 494}; // C D E F G A B

Result:

The final prototype acts like an invisible piano: you play by waving your hand in front of the sensor and pressing lightly on the FSR. Each distance triggers a different musical note. The colored papers made it easier to perform intentionally and visually mark pitch changes.

In the demonstration video, the tones respond smoothly to my gestures, transforming simple components into an expressive interface.

Challenges:

One of the main challenges I faced was understanding how each pin on the ultrasonic sensor worked. At first, I didn’t realize that every pin had a specific purpose, like trig for sending signals and echo for receiving them, so it took me a while to fully grasp how data was actually being measured. I also struggled with wiring the circuit, often making small mistakes that stopped the whole setup from working. Drawing out the schematic was another time-consuming part since there were many components to connect and label correctly. Finally, the coding process was challenging because it was my first time using several of these elements, and I had to learn through trial and error how to make the sensors and buzzer communicate smoothly.

Inspiration + Tools that helped me:

https://projecthub.arduino.cc/theriveroars/simple-hand-controlled-instrument-372bfc

https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

Reflection:

This project taught me how code, sensors, and sound can merge into interactive art. The challenge was balancing sensitivity: sometimes the ultrasonic readings fluctuated, and it took some fine tuning. But once it worked, it felt rewarding to hear the instrument. It also made me realize how music can be built from logic, how creative coding allows emotional expression through electronics. I see this as the beginning of exploring computational instruments that combine art and technology.

Reflection- Week 9

When I read Physical Computing’s Greatest Hits and Misses and Making Interactive Art: Set the Stage, Then Shut Up and Listen, I started to think more deeply about what it really means to make something interactive. The first reading talked about how many beginner projects in physical computing repeat the same ideas, like using a sensor to make lights blink or to trigger sound. At first, I felt a little unsure because my own project also used simple tools like a light sensor and a button. But as I continued reading, I understood the real message: it’s okay to build something that has been done before, as long as I make it my own and give it a purpose. That made me feel more confident about my project. It reminded me that creativity doesn’t always mean doing something completely new, but doing something familiar in a meaningful or personal way.

The second reading focused on how interactive art should let people explore freely. It said that once we build something, we should “set the stage” and then step back, allowing others to interact with it in their own way. I really liked this idea because it made me think differently about my project. When I pressed the button or covered the light sensor, I realized that I was not just testing the circuit, I was actually engaging with it and discovering what it could do.

Both readings made me see that physical computing is not just about coding or wiring but it’s about creating an experience. It’s about giving people something they can explore and learn from on their own.

Analog Sensor

Concept

For this project, I used one analog sensor and one digital sensor (switch) to control two LED lights.

The analog sensor I used was a photoresistor (light sensor). It changes how much electricity passes through it depending on how bright the light in the room is. The Arduino reads this change and adjusts the brightness of one LED: when it’s dark, the LED gets brighter, and when it’s bright, the LED becomes dimmer.

For the digital sensor, I used a pushbutton connected to a digital pin. When I press the button, it turns the second LED on or off.

To make it different from what we did in class, I added a “night light” feature. When the photoresistor detects that the room is very dark, the button-controlled LED automatically turns on, like a small night light. When the light comes back, the button goes back to working normally.

This made my project more interactive and closer to how real sensors are used in everyday devices.

Schematic of my circuit
It shows the Arduino connected to:

  • A photoresistor and 10 kΩ resistor forming a voltage divider to read light levels.

  • A pushbutton connected to a digital pin.

  • Two LEDs, one controlled by the light sensor and the other controlled by the button.

Final Results

When I tested the circuit:

  • The first LED smoothly changed its brightness depending on how much light the photoresistor sensed.

  • The second LED turned on and off with the button as expected.

  • When the room got dark, the second LED automatically turned on, working like a night light.

It was a simple but satisfying project, and the extra feature made it stand out from the class example.

Video: video-url

Arduino Code

Part of Code I am proud of

void loop() {
  // --- Read photoresistor ---
  int lightValue = analogRead(lightPin); // 0–1023
  int brightness = map(lightValue, 0, 1023, 255, 0);
  analogWrite(ledAnalog, brightness);

  // --- Button toggle ---
  if (digitalRead(buttonPin) == LOW) {
    ledState = !ledState;
    delay(200);
  }

  // --- Night light feature ---
  if (lightValue < 300) { // If it's dark, auto turn on LED
    digitalWrite(ledDigital, HIGH);
  } else {
    digitalWrite(ledDigital, ledState ? HIGH : LOW);
  }

  // --- Print readings ---
  Serial.print("Light: ");
  Serial.print(lightValue);
  Serial.print(" | Brightness: ");
  Serial.print(brightness);
  Serial.print(" | LED State: ");
  Serial.println(ledState ? "ON" : "OFF");

  delay(200);
}

Github url: Github

Challenges and Further Improvements

While I was able to make both the analog and digital sensors work, I struggled a bit with arranging all the wires and resistors neatly on the breadboard. It took a few tries to get everything connected correctly.

I also had to test different threshold numbers for the night light feature to decide when the LED should automatically turn on. Once I found the right value, it worked well.

For my next project, I want to try using other kinds of sensors, like sound or temperature sensors, and make the circuit respond in new ways. I’ll also practice reading the code line by line to understand how each part works better before adding new features.

Week 9 – Two-Degree Safety

My Concept

I decided to build a “two-degree safety guardrail.” The logic is based on two separate actions:

  1. Idle State: A red LED is ON (digital HIGH).
  2. “Armed” State: A button is pressed. This turns the red LED OFF (digital LOW).
  3. “Active” State: While the button is held, an FSR (force-sensing resistor) is pressed, which controls the brightness of a green LED (my analog output).

Challenge

My challenge was getting the red LED to be ON by default and turn OFF only when the button was pressed. I tried to build a hardware-only pull-up or bypass circuit for this but struggled to get it working reliably on the breadboard.

So, I shifted that logic into Arduino code adapted from a template in the IDE.

// constants won't change. They're used here to set pin numbers:
const int buttonPin = 2;  // the number of the pushbutton pin
const int ledPin = 13;    // the number of the LED pin

// variables will change:
int buttonState = 0;  // variable for reading the pushbutton status

void setup() {
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
  // initialize the pushbutton pin as an input:
  pinMode(buttonPin, INPUT);
}

void loop() {
  // read the state of the pushbutton:
  buttonState = digitalRead(buttonPin);

  // with pull-up wiring, the pin idles HIGH and reads LOW when pressed:
  if (buttonState == LOW) {
    // button pressed: turn the red LED off ("armed" state)
    digitalWrite(ledPin, LOW);
  } else {
    // button not pressed: keep the red LED on (idle state)
    digitalWrite(ledPin, HIGH);
  }
}

The demo was shot with the Arduino implementation.

Schematic

However, I later figured out the pull-up logic, and was able to implement a hardware-only solution. This schematic was a result of the updated circuit.

Week 9: Analog sensor and Digital sensor

Concept

For this project, the task was to use one analog sensor and one digital sensor (switch) to control at least two LEDs. For the digital sensor, I connected a button to a digital pin to control one LED. For the analog sensor, I used the analog input and output to control the second LED, varying it depending on how much brightness the light sensor received.

Circuit Illustration

Figure 1

Final Results


VIDEO

Arduino code
void loop() {
  // Read the photoresistor and print the raw value (0–1023)
  int sensorValue = analogRead(A3);
  Serial.println(sensorValue);

  // Set the brightness of pin 11:
  // clamp the reading to the useful 500–900 range, then scale it to 0–255
  sensorValue = constrain(sensorValue, 500, 900);
  int brightness = map(sensorValue, 500, 900, 0, 255);
  analogWrite(11, brightness);
  delay(30);

  // Read the button and switch the second LED
  int buttonState = digitalRead(A2);
  if (buttonState == HIGH) {
    digitalWrite(13, LOW);
  } else {
    digitalWrite(13, HIGH);
  }
}

Although it isn’t the most creative or unique code, I was happy to find that my initial code made both LED lights light up; after that, it was just a matter of adding the sensorValue and brightness logic so the LED responding to the light sensor showed a greater difference in brightness.
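For reference, the two Arduino helpers used in the sketch behave like the following desktop stand-ins (`constrainVal` and `mapVal` are my own names for illustration; Arduino’s `constrain()` and `map()` compute the same thing on long values):

```cpp
#include <cassert>

// Desktop stand-ins for Arduino's constrain() and map(), showing what the
// two lines in the sketch compute (integer math, so results truncate).
long constrainVal(long x, long lo, long hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}
long mapVal(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
```

A mid-range reading of 700, for instance, is clamped to [500, 900] and mapped to a PWM brightness of 127.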

Github

https://github.com/imh9299-sudo/Intro-to-IM.git

Challenges and Further Improvements

While I was able to successfully complete the assignment and create both digital and analog responses, I wish I could have found a more creative way of obtaining these responses. Moreover, I struggled to put the code together so both sensors would function simultaneously, as well as arranging the position of all the jumper wires, the resistors, and the LED lights themselves.  For my next project, I will seek help from the Lab assistants, and I will pay more attention to all the information on the slides we are provided in class to make sure I fully grasp the foundation behind every code and understand what each tool does. I would also like to explore the other tools found in the lab to make different responses in comparison to the tools we already have in our Arduino kit.

Week 10 – Reading Response (A Brief Rant on the Future of Interaction Design)

One passage that really stayed with me from Bret Victor’s A Brief Rant on the Future of Interaction Design is his statement that screens are “pictures under glass.” That phrase hit me because it’s so ordinary yet so revealing; every day I touch my phone dozens of times, yet I never actually feel anything back. Victor’s argument that we’ve limited human interaction to tapping on cold glass made me realize how passive our so-called “interactive” technologies have become. I started thinking about how my creativity, whether sketching or coding, always feels richer when my hands are physically involved; pressing, folding, shaping. It made me question: why did we let convenience replace tactility? What would technology look like if it honored the intelligence of our hands instead of reducing them to cursors?

In the Responses section, I was fascinated by how defensive many readers became, as if Victor’s critique was anti-progress. But what I sensed in his tone was care, not nostalgia; a desire to expand our sense of what interaction can mean. This really reminded me of  Refik Anadol’s Machine Hallucinations, a piece I’m analyzing for another course, where data transforms into movement, color, and emotion. Anadol’s work feels like the future Victor imagines: one where technology engages the body and senses, not just the eyes.

These readings challenged my old assumption that the “best” design is the smoothest and most frictionless. Victor helped me see friction as meaningful; it’s how we feel our way through the world. I now think of design less as creating perfect efficiency and more as crafting moments of connection between body, mind, and machine. The essay left me wondering whether the future of interaction design depends not on faster touchscreens, but on rediscovering touch itself; real, textured, imperfect, human touch.

Ultimately, I completely agree with Victor’s message. His critique felt refreshing, almost like a wake-up call to slow down and rethink what “innovation” actually means. I liked how he exposed the emptiness behind shiny new interfaces and instead celebrated the physical, human side of design. Even though his tone was mainly critical, I didn’t find it negative; I found it hopeful. It made me appreciate the kind of design that makes people feel connected, not just technologically advanced.

Week 9 – Reading Response

Interactive art should be designed as a conversation (or workshop seminar) instead of a speech. The clearest takeaway from the two pieces is that affordances and arrangements matter more than artist statements. If the system communicates its possibilities through handles, hints, and constraints, the audience can complete the work through action. When artists over-script, they collapse the range of possible meanings and behaviors into one narrow path.

Physical computing’s “greatest hits” list is useful because it exposes where interaction often stalls. Video mirrors and mechanical pixels are beautiful, but they rarely push beyond “move, see response.” Gloves, floor pads, and utility controllers introduce a structured gesture vocabulary that people already know, which shortens the learning curve. Across categories: where gesture has meaning, interaction retains depth; where mappings are shallow, novelty fades quickly.

These pieces prompt two design questions. First, what minimal cues will help a participant discover the interaction without text? Second, what state changes will keep the system from settling into a single loop? In practice, that means compositional choices, such as discrete modes, cumulative effects, and recoverable errors.

For attention sensing, presence is not engagement. Designers should look for signals that correlate with intent, then treat them probabilistically. Use the audience’s behavior as feedback to adjust affordances, not narratives. If the work does not evolve under interaction, you likely built a display, not a performance.

Week 10 Reading Response

A Brief Rant on the Future of Interaction Design

This article is extremely poorly written. It spends time criticizing Microsoft’s future-vision video but ignores several important factors that are decisive in the world of business.

First, the author ignores how cost control plays a huge role in designing products. Though the author spent tons of words describing how future technology sacrifices the tactile richness of working with hands, he/she did not consider the cost and the technological development stage of tactile materials. There are materials that provide this kind of tactile richness, but what is the cost of producing them? What are the technical difficulties of achieving the intended effect when we integrate this technology into our day-to-day phones?

Second, the author definitely has some illusions regarding the idea that “People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.” This video was created by Microsoft, a multibillion-dollar internet services company, and as far as I know, nobody votes in the process of choosing what vision the video will present. I don’t know of any public voting process that decides which research gets funded, and with current AI development, people rarely have the chance to choose their careers while balancing their satisfaction with the jobs. I don’t know how the author came up with these words, but I think he/she probably lived in a delusional, fictional world.

A follow-up article

The first line of the follow-up article dismantles all the concerns I wrote about the first article: “Yes, that’s why I called it a rant, not an essay.” The author is treating the article as science fiction, and in that sense, all the words he produced make sense. He specifically defines his article as a rant that tries to draw people’s attention to how little design thinking goes into how we control our devices. However, I disagree with his opinion regarding brain interfaces. I believe a brain interface, if possible, would be the most important invention in human history. Many of the horrible decisions and actions in human history are due to deficiencies in the brain’s processing power; if there were a way to connect our brains to computers and hugely improve memory and computation speed, I believe it would give us a chance to build a better society.