Week 10 – Musical Instrument

Concept

We thought a lot about what kind of instrument we wanted to make and decided to stray away from the classic, well-known ones like the guitar, piano, and drums. Instead, we chose to recreate an instrument many people probably haven’t heard of: the Otamatone.

The instrument works by the user pressing the long upper part of the body in different positions while squeezing the “mouth” on both sides so that it opens. The instrument then produces a sound, making it look like the character is singing.

Our design

To recreate this, we decided to use photoresistors along the upper part of the body. The resistors would detect when the user puts their hand over them and select a note, but the sound wouldn’t be heard until the user pressed the “cheeks” of the character, which had force sensors to detect the strength of the press.

Here is a photo of the board and the photoresistors. We also added a button which, if held, gives the sound a vibrato effect while playing. The final design of our Otamatone looks like this:

Code highlight

The code for this instrument wasn’t that complicated. The hardest part was finding the frequency values for all the notes, but we found those online.

// Multi-note LDR1: when this photoresistor is covered (its reading
// drops below 500), pick a pitch from the combined force-sensor pressure
if (ldrVal1 < 500 && duration > 10) {
  if (totalPressure < 256) {
    activeNote = 440; // A4
  } else if (totalPressure < 512) {
    activeNote = 523; // C5
  } else if (totalPressure < 768) {
    activeNote = 659; // E5
  } else {
    activeNote = 784; // G5
  }
  Serial.print("LDR1 note: "); Serial.println(activeNote);
}

This is the code for one of the photoresistors. As you can see, it combines the resistor’s reading with the total pressure detected by the force sensors and selects a tone based on both. The code for the other photoresistors is similar and not too complicated to understand.
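For a fuller picture, here is a minimal sketch of how that snippet could sit inside the whole loop. The pin numbers, the cheek-pressure threshold, and the vibrato implementation are illustrative assumptions rather than our exact wiring, and it leaves out the duration tracking the real code uses:

const int ldr1Pin = A0;      // photoresistor on the neck (assumed pin)
const int fsr1Pin = A2;      // force sensor, left cheek (assumed pin)
const int fsr2Pin = A3;      // force sensor, right cheek (assumed pin)
const int vibratoPin = 2;    // vibrato button (assumed pin)
const int buzzerPin = 8;     // piezo buzzer in the mouth (assumed pin)

int activeNote = 0;

void setup() {
  pinMode(vibratoPin, INPUT_PULLUP);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  int ldrVal1 = analogRead(ldr1Pin);
  int totalPressure = analogRead(fsr1Pin) + analogRead(fsr2Pin);

  // A covered LDR selects the note, exactly as in the snippet above
  if (ldrVal1 < 500) {
    if (totalPressure < 256)      activeNote = 440; // A4
    else if (totalPressure < 512) activeNote = 523; // C5
    else if (totalPressure < 768) activeNote = 659; // E5
    else                          activeNote = 784; // G5
  } else {
    activeNote = 0;
  }

  // No sound until the cheeks are actually squeezed
  if (activeNote > 0 && totalPressure > 50) {
    int note = activeNote;
    if (digitalRead(vibratoPin) == LOW) {     // held button adds vibrato
      note += (millis() / 50 % 2) ? 15 : -15; // wobble +/- 15 Hz
    }
    tone(buzzerPin, note);
  } else {
    noTone(buzzerPin);
  }
}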

Challenges and future improvement

The biggest challenge for this project was, surprisingly, getting the buzzer inside the “mouth” of the instrument. Making two holes in the back of the “head” was very hard, and even though we tried to do it by hand, it proved impossible without a drill, which in the end, after many attempts, broke through the inside of the ball just enough for a jumper cable to pass. The idea was to stick the breadboard to the back of the head, recreating the original instrument while keeping the alligator clips out of the mouth, with the buzzer’s leads poking out through the holes. Due to the time constraints this sadly wasn’t possible, but I hope we will be able to add it in the future. As for future improvements, I would like to clean up the wires a bit and add a breadboard to the back of the head. Overall, we are happy with the final result, we had a lot of fun working on this project, and we learned a lot!


Week 10: Reading

A Brief Rant on the Future of Interaction Design

“A tool addresses human needs by amplifying human capabilities.” – this quote made me think of how I’ve always thought of tools as things that help us get stuff done, but I never really considered how they’re supposed to work with what we’re already capable of.

The author talks about how a good tool fits both sides: what we can do and what we want to do. But he mentions that most tech today is designed without really thinking about what people can physically do with their bodies, especially their hands. We’ve kind of reduced all interaction down to tapping on a screen, and we’ve just accepted that as a regular thing. But when we try building something physical, there’s a sense of control that I never feel when I’m just doing something digitally.

I also thought about how this idea connects to creativity. So many creative tools, like musical instruments or painting tools, are great because they respond to human movement, but a lot of digital tools don’t really do that.

Responses: A Brief Rant on the Future of Interaction Design

Reading through the responses made me notice how many people immediately started to defend current technology, saying things like “We just need better gestures!” I really liked how the author responded: not defensively, but by clarifying that he’s not against touchscreens or gestures. His point isn’t that modern tools are useless, but that they’re incomplete.

There was a line in the article about the two-year-old who can use an iPad but can’t tie his shoes, which made me think of how we’ve made digital interfaces so simple that even toddlers can swipe around. However, it also suggests that we don’t place much value on developing actual physical skills.

Another moment that stood out was when the author said, “We’ve almost given up on the body already.” He points to how much of our lives is spent sitting: at work, during leisure, and even while commuting. As a result, we’ve had to create artificial forms of physical activity just to stay healthy. This all emphasized how our environment and the tools we design are not supporting our physical well-being.

Overall, these responses helped reinforce the author’s main argument: that really effective tools should be designed to serve not just our cognitive abilities, but our physical ones too.

Week 10: Group Musical Instrument

Our Concept

For our musical instrument, we were inspired by DJ sets and how DJs interact with their equipment during live performances. We wanted to bring that experience into a smaller format, so we built a mini DJ controller using an Arduino board.

Our DJ set includes:

  • A potentiometer to control the volume, just like DJs do when they’re mixing tracks

  • LEDs that react to the sound changes, creating a responsive visual effect

  • A piezo buzzer that produces the sound itself

Set-up

Code
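Since tone() on a piezo has no volume parameter, one common approach is to wire the potentiometer in series with the buzzer so it attenuates the sound in hardware, and have the sketch read the same knob to drive the LEDs. Here is a minimal sketch along those lines; the pin numbers and the test melody are placeholder assumptions, not our exact build:

// Minimal DJ-set sketch (assumed wiring: pot wiper on A0, piezo on
// pin 9, LEDs on PWM pins 5 and 6; the pot also sits in series with
// the piezo to set its loudness in hardware)
const int potPin = A0;
const int buzzerPin = 9;
const int ledPins[] = {5, 6};

int melody[] = {262, 294, 330, 349, 392, 440, 494, 523}; // C major scale
int noteIndex = 0;

void setup() {
  pinMode(buzzerPin, OUTPUT);
  pinMode(ledPins[0], OUTPUT);
  pinMode(ledPins[1], OUTPUT);
}

void loop() {
  // LED brightness mirrors the volume knob, so the lights react
  // to the same control the DJ is riding
  int brightness = map(analogRead(potPin), 0, 1023, 0, 255);

  int led = ledPins[noteIndex % 2];   // alternate LEDs per note
  tone(buzzerPin, melody[noteIndex]);
  analogWrite(led, brightness);
  delay(250);
  noTone(buzzerPin);
  analogWrite(led, 0);
  delay(50);

  noteIndex = (noteIndex + 1) % 8;    // step through the scale
}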

GitHub

A Brief Rant on the Future of Interaction Design

Reading “A Brief Rant on the Future of Interaction Design” genuinely made me pause and reconsider the direction of our relationship with technology. The author’s central argument (that today’s touchscreens, while innovative, are ultimately constraining) really struck a chord with me. I often marvel at how intuitive my phone is, yet I can’t help but notice how it reduces all my interactions to tapping and swiping on a flat surface. The analogy to early black-and-white cameras is particularly effective; just as those cameras were revolutionary yet obviously incomplete, so too are our current devices. The reference to Matti Bergström’s research about the richness of tactile sensation in our fingertips is compelling evidence that we are sacrificing a significant aspect of our cognitive and sensory development for convenience.

However, I found myself questioning the author’s rather dismissive stance on voice interfaces and gesture controls. While I agree that voice commands can’t replace the hands-on nature of artistic or spatial work, I think the author underestimates their value. Personally, I find voice assistants incredibly useful for multitasking or quick information retrieval: tasks where speed and efficiency matter more than depth of interaction. The author’s point about deep understanding requiring active exploration is well taken, but I believe there’s space for a variety of input methods, each suited to different contexts.

The text also made me reflect on the broader consequences of how we use technology. The idea that we are “giving up on the body” by designing tools that encourage us to be sedentary is quite thought-provoking. I hadn’t previously considered how interface design could contribute to a less physically engaged lifestyle. This perspective encourages me to be more mindful of how I use technology, and to seek out more physically interactive experiences where possible.

In summary, this rant has prompted me to think more deeply about what I want from the future of interaction design. While I appreciate the accessibility and simplicity of modern devices, I agree that we shouldn’t settle for tools that limit our physical and intellectual potential. The text serves as a powerful reminder that technology should enhance our full range of human abilities, not just cater to convenience.

Week 10 – Reading Response

A Brief Rant on the Future of Interaction Design + Follow-up

The first minute of the Microsoft video envisioning the future seemed really cool to me, but as it went on, the video kept repeating the same ideas over and over again. It felt like our future was limited to one or two movements. The rant definitely opened my eyes to the subconscious abilities our hands possess. Maneuvers and motions we’ve been doing since we were young have become so natural that we don’t realize just how powerful this sense is. The rant and the responses to comments about it reminded me of the movie WALL-E, where, in the distant future, humans become so reliant on screens that they can no longer use their bodies. Living life through a screen when we’re blessed with so many degrees of motion immobilizes you; we see it even now as people get more glued to screens and are constantly sitting or lying down. I do wonder, though, what some potential solutions to this “Pictures Under Glass” future would be. I’m thinking about somehow incorporating textures, weight, and 3D objects, because the main problems mentioned were that our hands can sense and manipulate things through touch, while a 2D glass screen avoids all of that. Or maybe centering designs around actions we can perform, like flipping pages, pinching, twisting, and squishing. Maybe even taking inspiration from bigger actions like planting flowers, steering and feeling the torque of the wheel, or feeling water and how it sways under the force of your hands.

Readings

The text “A Brief Rant on the Future of Interaction Design” takes a critical look at modern digital interfaces and points out how they often force users to adapt to strict, outdated design rules instead of the other way around. The author argues that interfaces should be more in tune with natural human thought processes, which could lead to more flexible and easier-to-use systems. For example, the text challenges the reliance on traditional metaphors in design that can limit how people interact with technology, suggesting that a rethinking of these strategies would better serve everyone.

In the responses, various designers and thinkers share their own views on what works and what doesn’t in today’s interaction design. Many contributors agree that sticking to rigid structures can suppress innovation and user engagement, while others offer practical examples from their work. One common point is that when interfaces are redesigned to be more intuitive, it often results in smoother and more productive user experiences, showing that a change in approach can have positive real-world benefits.

Overall, both readings encourage a move toward interaction design that feels more natural and accommodating to users. The discussion emphasizes the importance of creating technology that adapts to how people actually think and work, rather than forcing users to learn and conform to outdated digital patterns. This friendly call for change makes it clear that smarter design is not just a theoretical goal; it can lead to improvements in everyday technology that benefit us all.

Week 10 (Musical Instrument & Reading)

With YEVA SYNTH V1.0, I wanted to create a device that felt fun to play with, responded instantly to human input, and was built from the ground up using just an Arduino Uno, a few buttons, LEDs, and some imagination.

After sketching a few interface ideas, I settled on a layout using two buttons to trigger different sound effects, a potentiometer to switch between modes, and two small LCD screens—one for control feedback and one for visual flair. The FX selector (an analog potentiometer) lets the user scroll between different sound modes like “Laser,” “Melody,” “Wobble,” “Echo,” and more. Pressing a button instantly triggers the selected effect through a piezo buzzer. One LCD shows the current FX name, while the second displays an animated visualizer that bounces in response to sound activity. The LEDs tied to the Arduino’s analog pins light up during sound playback, giving a simple but satisfying burst of light that makes the synth feel alive.

Building it was both straightforward and occasionally frustrating. Wiring two LCDs in parallel required careful pin management to avoid conflicts, and the Arduino Uno’s limited number of usable pins meant I had to repurpose analog pins as digital outputs. The buzzer was a challenge at first because some FX didn’t make any audible sound until I discovered I had to hardcode appropriate pitch and modulation values and remove interrupt logic that was prematurely cutting playback short.

One major success was making the sound effects interruptible and responsive. Early versions of the code would lock the device into one sound effect until it finished, but I rewrote the logic to allow button spamming so users can mash buttons and get immediate feedback, making the instrument feel more playful.
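One way to get that behavior (a simplified sketch reusing the buttonPressed() helper from the full code below, not a verbatim excerpt) is to poll the buttons inside each effect loop and bail out early:

// Inside an effect loop: abort the tone as soon as either button is
// pressed again, so the next press can take over immediately
for (int i = 1000; i > 200; i -= 10) {
  if (buttonPressed(button1) || buttonPressed(button2)) break;
  tone(buzzerPin, i);
  delay(10);
}
noTone(buzzerPin);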

Of course, there are limitations. The piezo buzzer is not exactly a high-fidelity speaker, and while it’s great for beeps and bleeps, it can’t produce anything resembling full-range audio. I also wanted the visualizer to respond to actual audio signal amplitude, but without analog audio input or FFT analysis, I had to simulate that based on pitch values and FX activity. That said, the effect is convincing enough to match the synth’s character. Another improvement would be to allow the synth to send commands to a computer so that real sound files could be played through the laptop’s speakers instead of the buzzer. I’ve already prototyped this using a Python script listening over serial.

#include <LiquidCrystal.h>

LiquidCrystal lcd1(12, 11, 5, 4, 3, 2);
LiquidCrystal lcd2(8, 7, 6, A4, A3, A2);

// Pins
const int fxSelector = A5;
const int button1 = 9;
const int button2 = 10;
const int buzzerPin = 13;
const int led1 = A0;
const int led2 = A1;

// FX Setup
int pitch = 440;  // A4
int mod   = 50;
int fxIndex = 0;
const int NUM_FX = 8;

String fxNames[NUM_FX] = {"Laser", "Melody", "Alarm", "Jump", "Sweep", "Wobble", "Echo", "Random"};

void setup() {
  lcd1.begin(16, 2);
  lcd2.begin(16, 2);
  pinMode(button1, INPUT_PULLUP);
  pinMode(button2, INPUT_PULLUP);
  pinMode(buzzerPin, OUTPUT);
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);

  Serial.begin(9600); // Debug

  lcd1.setCursor(0, 0);
  lcd1.print("YEVA SYNTH V1.0");
  lcd2.setCursor(0, 0);
  lcd2.print("MAKE SOME NOISE");
  delay(1500);
  lcd1.clear();
  lcd2.clear();

  randomSeed(analogRead(A5)); // seed from the pot; A3 is an LCD data line and reads a fixed value
}

void loop() {
  // map() only returns NUM_FX-1 at exactly 1023, so widen the output range
  // and constrain to give the last FX ("Random") a fair slice of the dial
  fxIndex = constrain(map(analogRead(fxSelector), 0, 1023, 0, NUM_FX), 0, NUM_FX - 1);

  lcd1.setCursor(0, 0);
  lcd1.print("FX: ");
  lcd1.print(fxNames[fxIndex]);
  lcd1.print("        ");

  lcd1.setCursor(0, 1);
  lcd1.print("Pitch:");
  lcd1.print(pitch);
  lcd1.print(" M:");
  lcd1.print(mod);
  lcd1.print("  ");

  if (buttonPressed(button1)) {
    triggerFX(fxIndex);
  }

  if (buttonPressed(button2)) {
    triggerAltFX();
  }

  drawVisualizer(0);
}

bool buttonPressed(int pin) {
  if (digitalRead(pin) == LOW) {
    delay(10); // debounce
    return digitalRead(pin) == LOW;
  }
  return false;
}

void showFXIcon(int index) {
  lcd2.setCursor(0, 0);
  lcd2.print("FX: ");
  // Stock HD44780 character LCDs can't render multi-byte UTF-8 glyphs
  // (they print garbage bytes), so the icons stick to plain ASCII
  switch (index) {
    case 0: lcd2.print(">>>>"); break; // Laser
    case 1: lcd2.print("~o~");  break; // Melody
    case 2: lcd2.print("!!");   break; // Alarm
    case 3: lcd2.print("/^");   break; // Jump
    case 4: lcd2.print("/\\");  break; // Sweep
    case 5: lcd2.print("~~");   break; // Wobble
    case 6: lcd2.print("<>");   break; // Echo
    case 7: lcd2.print("??");   break; // Random
  }
}

void drawVisualizer(int level) {
  lcd2.setCursor(0, 1);
  int bars = map(level, 0, 1023, 0, 16);
  for (int i = 0; i < 16; i++) {
    if (i < bars) lcd2.write(byte(255));
    else lcd2.print(" ");
  }
}

void triggerFX(int index) {
  lcd2.clear();
  showFXIcon(index);
  digitalWrite(led1, HIGH);
  Serial.println("Triggering FX: " + fxNames[index]);

  if (index == 7) {
    int randFX = random(0, NUM_FX - 1); // picks 0-6: any FX except "Random" itself
    triggerFX(randFX);
    return;
  }

  switch (index) {
    case 0: // Laser
      for (int i = 1000; i > 200; i -= (10 + mod / 20)) {
        tone(buzzerPin, i);
        drawVisualizer(i);
        delay(10);
      }
      break;

    case 1: { // Melody
      int notes[] = {262, 294, 330, 392, 440, 494, 523};
      for (int i = 0; i < 7; i++) {
        digitalWrite(led2, HIGH);
        tone(buzzerPin, notes[i] + mod);
        drawVisualizer(notes[i]);
        delay(200);
        digitalWrite(led2, LOW);
        delay(50);
      }
      break;
    }

    case 2: // Alarm
      for (int i = 0; i < 5; i++) {
        tone(buzzerPin, 400 + mod);
        drawVisualizer(600);
        delay(150);
        noTone(buzzerPin);
        delay(100);
      }
      break;

    case 3: // Jump
      tone(buzzerPin, pitch + 200);
      drawVisualizer(800);
      delay(150);
      break;

    case 4: // Sweep
      for (int i = pitch - mod; i <= pitch + mod; i += 5) {
        tone(buzzerPin, i);
        drawVisualizer(i);
        delay(5);
      }
      for (int i = pitch + mod; i >= pitch - mod; i -= 5) {
        tone(buzzerPin, i);
        drawVisualizer(i);
        delay(5);
      }
      break;

    case 5: // Wobble
      for (int i = 0; i < 15; i++) {
        int wob = (i % 2 == 0) ? pitch + mod : pitch - mod;
        tone(buzzerPin, wob);
        drawVisualizer(wob);
        delay(80);
      }
      break;

    case 6: // Echo
      int echoDelay = 200;
      for (int i = 0; i < 5; i++) {
        int toneFreq = pitch - i * 20;
        tone(buzzerPin, toneFreq);
        drawVisualizer(toneFreq);
        delay(echoDelay);
        noTone(buzzerPin);
        delay(echoDelay / 2);
        echoDelay -= 30;
      }
      break;
  }

  noTone(buzzerPin);
  digitalWrite(led1, LOW);
  drawVisualizer(0);
}

void triggerAltFX() {
  lcd2.clear();
  lcd2.setCursor(0, 0);
  lcd2.print("FX: BLIP");

  for (int i = 0; i < 3; i++) {
    tone(buzzerPin, 600 + mod);
    digitalWrite(led2, HIGH);
    drawVisualizer(600);
    delay(100);
    noTone(buzzerPin);
    digitalWrite(led2, LOW);
    delay(100);
  }

  drawVisualizer(0);
}

Reading response

A Brief Rant on the Future of Interaction Design by Bret Victor made me rethink how we use technology today. He argues that all these futuristic concept videos we see where everything is controlled by touchscreens or voice commands are actually super boring. Not because they’re unrealistic, but because they’re unimaginative. We’re just slightly upgrading what already exists instead of rethinking how we interact with tech in the first place.

Victor’s main point is that our current interfaces like the iPad might feel revolutionary now, but they’re still pretty limited. Everything is flat, behind glass, and designed for a single finger. It works, sure, but it’s kind of like if all literature was written at a Dr. Seuss level: accessible, but not exactly fulfilling for a fully grown adult. He’s asking, “why aren’t we building tools that take advantage of the full range of human abilities—our hands, our spatial awareness, our sense of touch?”

What I found really interesting is that he’s not anti-technology. He actually says the iPad is good for now, kind of like how black-and-white film was great in the early 1900s, but eventually color took over because people realized something was missing. He’s trying to get people, especially researchers and funders, to realize what might be missing in today’s tech and explore new directions, like dynamic tactile interfaces or haptic environments.

He also talks about how voice and gesture controls aren’t the answer either. Voice is fine for simple commands, but it doesn’t help if you want to build something or deeply explore a system. Same with waving your hands in the air. It’s cool in theory, but weird and disorienting in practice, especially without any physical feedback. His whole point is that we learn and create best when we can physically engage with things.

One thing that really stuck with me is this quote he includes from a neuroscientist about how important our fingers are for brain development. Like, if kids grow up only using touchscreens and never really using their hands, they miss out on a whole layer of understanding (physically and conceptually). That spoke to me. It’s not just about functionality, it’s about how tech shapes the way we think and grow.

So yeah, it’s not a rant in the sense of being angry for no reason. It’s more like a wake-up call. He’s saying, “We can do better. We should do better.” And honestly, I agree.

Week 10 – Reading responses

A Brief Rant on the Future of Interaction Design

The article “A Brief Rant on the Future of Interaction Design” criticizes the “Pictures Under Glass” approach in interaction design. The author emphasizes the importance of tactile, hands-on engagement, which is frequently disregarded in favor of visual interfaces. He asserts that touch is essential to our interactions with the world, highlighting the rich tactile sensations we experience from common items such as books or glasses of water, and underlining the variety of hand motions required for activities like opening a jar. He analyzes the drawbacks of touchscreens, which primarily offer a flat, sliding motion that fails to reflect the depth of physical interaction we typically experience. Although he recognizes the visual energy of digital interfaces, Victor proposes that present technologies, such as the iPad, might not have lasting potential unless they develop to utilize more of our physical abilities. This made me think about how often we overlook tactile feedback in our daily activities and how digital interfaces, although visually engaging, frequently fail to deliver that deep physical interaction.

Week 10: Music Instrument

Concept

For this assignment, Maliha and I made an interactive light-sensitive sound device using an Arduino Uno, a photoresistor (LDR), a pushbutton, an LED, and a piezo speaker. When the button is pressed, the Arduino reads the surrounding light level through the LDR and maps that value to a specific sound frequency. The speaker then emits a tone depending on the brightness or darkness—darker settings yield higher-pitched tones, and brighter settings yield lower-pitched tones. Meanwhile, the LED lights up to signal that the system is actively reading and responding. This project taught us how sensors, inputs, and outputs combine to build responsive circuits.


Code Highlights

const int ldrPin = A0;         // LDR connected to analog pin A0
const int buttonPin = 2;       // Button connected to digital pin 2
const int speakerPin = 9;      // Speaker connected to digital pin 9
const int ledPin = 13;         // LED connected to pin 13

// Dramatically different frequencies (non-musical)
int notes[] = {100, 300, 600, 900, 1200, 2000, 3000};

void setup() {
  pinMode(buttonPin, INPUT);         // Button logic: HIGH when pressed
  pinMode(speakerPin, OUTPUT);     
  pinMode(ledPin, OUTPUT);         
  Serial.begin(9600);              
}

void loop() {
  int buttonState = digitalRead(buttonPin); // Read the button

  if (buttonState == HIGH) {
    int lightLevel = analogRead(ldrPin);         // Read LDR
    int noteIndex = map(lightLevel, 0, 1023, 6, 0); // Bright = low note
    noteIndex = constrain(noteIndex, 0, 6);      // Keep within range
    int frequency = notes[noteIndex];            // Pick frequency

    tone(speakerPin, frequency);                 // Play note
    digitalWrite(ledPin, HIGH);                  // LED on

    Serial.print("Light: ");
    Serial.print(lightLevel);
    Serial.print(" | Frequency: ");
    Serial.println(frequency);
  } else {
    noTone(speakerPin);            // No sound
    digitalWrite(ledPin, LOW);     // LED off
  }

  delay(100);
}


Video Demonstration

Challenges

One of the problems we faced was getting accurate light readings from the photoresistor since small changes in lighting at times caused big frequency jumps. We also had trouble keeping the wiring on the breadboard tidy and making sure each device was correctly connected to power and ground. Debugging the circuit and double-checking the connections fixed the issues and taught us about how analog inputs and digital outputs work together.
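One fix worth trying next time is smoothing the light reading before mapping it to a note. A minimal sketch of the idea (an illustration, not the code we actually ran):

// Average 8 quick samples so a single noisy reading can't jump the note
int smoothedRead(int pin) {
  long sum = 0;
  for (int i = 0; i < 8; i++) {
    sum += analogRead(pin);
    delay(2);
  }
  return sum / 8;
}

Calling smoothedRead(ldrPin) in place of analogRead(ldrPin) in the loop would steady the note index without changing anything else.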

Week 10 — Reading Response

“A Brief Rant on the Future of Interaction Design” made several points and rebuttals to responses that I resonated with. First, the rant itself reminded me of a user-testing gig I did back when I was in New York last semester during my study away. Although I don’t think I can disclose many details, it was vaguely about swipe mechanisms that would move screens based on thumb-and-tap gestures I performed in the air while wearing a watch. Although I think it’s slightly different from the tech shown in Microsoft’s video, since it involves tactile elements, it still encapsulates a similar sentiment of swipes essentially being the future of human interaction with tech. Before reading this article, I had never truly considered the idea that our “future” is something we actively choose. This opening thought lingered with me and prompted deeper reflection. I realized that my perspective on innovation was inherently capitalistic: I had always viewed technological advancements as merely profit-driven responses to market demands. I’m drawn to the idea that we, as individuals within society, are the ones who shape market demand. It’s empowering to think about the influence we hold, and it makes me want to reclaim more autonomy in how I see my role. I’d like to challenge myself to think bigger—to strive to be, or at least attempt to become, more of a pioneer in shaping the world around me.

Furthermore, his responses to the comments on his rant were very coherent. I agree that you don’t need to propose an alternative to point out that something is wrong—doing otherwise risks complacency with the status quo and undermines critical thinking. This ties into a broader issue I’ve noticed: the way technology, particularly AI, is shaping our cognitive abilities. For instance, the quote about children mastering iPads but struggling with shoelaces, or his point that it’s like understanding “The Cat in the Hat” but not “Hamlet,” highlights how our tools are often designed for simplicity rather than depth. While accessibility is important, oversimplifying tools for mass appeal can lead to shorter attention spans and a decline in critical thinking. This echoes his point in the article: tools are meant to amplify human abilities, yet the trend of dumbing them down risks doing the opposite—handicapping us rather than empowering us.