Assignment 8 – Piano

Concept
For this assignment, Fatima and I created a mini piano keyboard using an Arduino, with seven notes (C, D, E, F, G, A, B). Each note is played with its own button, and the pitch can be adjusted with the potentiometer. Depending on its position, the switch either shifts the sound higher or keeps it at the original pitch.

Each button corresponds to a specific frequency for one note. When you press a button, the Arduino sends a signal to the buzzer to play that frequency. The blue switch works as a pitch modifier. This combination creates a mini keyboard where you can experiment with simple melodies and play around with tone changes.
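As a rough sketch of that logic (the pin numbers and base frequencies below are assumptions for illustration, not our exact wiring; the full code is linked under GitHub below), the core of the program looks something like this:

// Minimal sketch of the keyboard logic (pins and frequencies are assumptions, not our exact wiring)
const int buttonPins[7] = {2, 3, 4, 5, 6, 7, 8};               // one button per note, C through B
const int noteFreqs[7]  = {262, 294, 330, 349, 392, 440, 494}; // base frequencies for C4–B4
const int buzzerPin = 9;
const int switchPin = 10;  // blue pitch-modifier switch
const int potPin    = A0;  // potentiometer for pitch adjustment

void setup() {
  for (int i = 0; i < 7; i++) pinMode(buttonPins[i], INPUT);   // external pull-down resistors assumed
  pinMode(switchPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  float pitchScale = 1.0 + analogRead(potPin) / 1023.0;  // knob scales the pitch between 1x and 2x
  bool shiftUp = (digitalRead(switchPin) == HIGH);        // switch decides whether to shift higher

  bool anyPressed = false;
  for (int i = 0; i < 7; i++) {
    if (digitalRead(buttonPins[i]) == HIGH) {
      int freq = noteFreqs[i] * pitchScale;
      if (shiftUp) freq *= 2;        // shift the note up an octave when the switch is flipped
      tone(buzzerPin, freq);
      anyPressed = true;
      break;                         // only one note plays at a time
    }
  }
  if (!anyPressed) noTone(buzzerPin);  // silence the buzzer when nothing is pressed
}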

Schematic

Digital Arduino

Demo Video

Reflection
To improve the project, we could’ve added more interactivity by including volume control, perhaps with an additional potentiometer. Another idea is to incorporate visual feedback, like an LED or an LCD display, to indicate which note or pitch is being played. These improvements would make the keyboard more versatile and closer to a real musical instrument.

GitHub

WEEK 10 – Radio

CONCEPT:

While brainstorming project ideas with Noura, we thought about how a radio works and decided it would be fun to make a simple version ourselves. Our goal was to create an Arduino “radio” that lets you switch between different songs, similar to tuning a real radio. We used a knob as the channel switch, allowing us to choose between three different songs that we got from GitHub and the exercises we did in class. Each channel has its own song, and turning the knob instantly switches to the next song, giving it a real radio-like feel.

We also added a button that acts as a power switch: pressing it starts the radio, and pressing it again turns it off. Finally, we made sure that any song stops playing immediately when the channel changes, so you don’t have to wait for a song to finish before switching to a new one.
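To make that behavior concrete, here is a minimal sketch of just the channel-switching logic (the pin numbers and the placeholder playMelody() helper are assumptions, not our exact code; the full project is linked under CODE below):

// Sketch of the channel-switching logic only (pins and placeholder "songs" are assumptions)
const int knobPin   = A0;   // channel-selection knob
const int buzzerPin = 8;    // piezo buzzer

int currentChannel = -1;    // -1 = no song started yet
bool radioState = true;     // in the real sketch the power button toggles this (see HIGHLIGHT)

void playMelody(int channel) {
  // Placeholder: one steady tone per channel stands in for the real songs
  const int channelTones[3] = {262, 330, 392};
  tone(buzzerPin, channelTones[channel]);
}

void setup() {
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  if (!radioState) {                 // radio "off": stay silent
    noTone(buzzerPin);
    currentChannel = -1;
    return;
  }

  // Map the knob's 0-1023 reading onto three channels (0, 1, 2)
  int channel = map(analogRead(knobPin), 0, 1023, 0, 2);

  if (channel != currentChannel) {
    noTone(buzzerPin);               // cut whatever is playing immediately
    currentChannel = channel;
    playMelody(currentChannel);      // start the newly selected channel's song
  }
}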

HOW IT WORKS:

SETUP:

 

HIGHLIGHT:

The part Noura and I are most proud of is getting the button to work smoothly with the debounce feature. At first, the button would trigger multiple times with a single press, turning the radio on and off too quickly. By adding a debounce function, we made sure the button only registers one press at a time, making it much more reliable. A former student in IM (Shereena) helped us understand how debounce works and guided us in fixing this issue, explaining how it makes the button’s response stable and accurate.

Here’s a snippet of the debounce code we used:

// Variables for debounce
const int buttonPin = 2;            // power-button pin (pin number assumed for this snippet)
bool radioState = false;            // whether the radio is currently on
int buttonState = LOW;              // Current (debounced) state of the button
int lastButtonState = LOW;          // Previous raw reading of the button
unsigned long lastDebounceTime = 0; // Last time the raw reading changed
const unsigned long debounceDelay = 50; // Debounce time in milliseconds

void setup() {
  pinMode(buttonPin, INPUT);        // external pull-down resistor assumed
}

void loop() {
  int reading = digitalRead(buttonPin);

  // If the raw reading changed, reset the debounce timer
  if (reading != lastButtonState) {
    lastDebounceTime = millis();
  }

  // Only accept the reading once it has been stable for debounceDelay
  if ((millis() - lastDebounceTime) > debounceDelay) {
    if (reading != buttonState) {
      buttonState = reading;
      if (buttonState == HIGH) {
        radioState = !radioState;  // Toggle radio on/off
      }
    }
  }

  lastButtonState = reading;
}

This debounce logic prevents accidental multiple triggers, making the button interaction much smoother. We’re also proud of how the radio switches songs instantly when we turn the knob, which makes it feel like tuning a real radio.

REFLECTION:

Working on this project with Noura was a great experience, as we got to share ideas and build on what we had learned in class. One of our main struggles was making the button work without triggering multiple times, which led us to use debounce for stability. While adding debounce solved the problem, in the future we’d like to explore other ways to troubleshoot and fix issues like this rather than relying on debugging alone.

For future improvements, we’d like to add more interactive features, such as volume control with an additional knob and possibly a small speaker for clearer sound. We could also include more “channels” with various songs or sound effects, giving users a richer experience. Another idea is to add an LED indicator that lights up when the radio is on and changes brightness with the volume, making the design even more engaging. These changes would make our project feel more like a real radio and enhance the overall experience.

CODE:

https://github.com/nouraalhosani/Intro-to-IM/blob/426c7d58639035c7822a4508f2e62dab34db0695/Radio.ino

 

Reading Reflection – Week #10

The idea of developing immersive technology that goes beyond the limits of the 2D screen has been on my mind for a while, so I got very invested in the topic of Bret Victor’s rant. Interaction design was an ambiguous term back in 2011, when the text was published, and it often remains misunderstood now. How can we call “pictures under glass” truly interactive if we do not engage with them physically?

Haptics are one way to push the boundaries of human-computer interaction further. When our fingers get some sort of response, even a slight vibration, it makes our experience with gadgets more engaging on a cognitive level. Here is an example from my life: my phone is always in silent mode, so when I type something, I do not hear the artificial sound of keys clicking. At some point I stopped understanding whether I am actually typing – so I turned on the “haptic response” feature. This makes the keys vibrate slightly every time I press on them, which creates a more realistic experience of using a physical keyboard.

Nonetheless, I agree with Victor that interactivity can be pushed even beyond haptics. At the same time, it is still difficult to come up with a clear solution, if there is one. Reading Victor’s ideas almost 15 years later was interesting, considering the latest inventions in AI which, to some extent, contradict his points about the use of voice: it turns out voice can be used to generate something beyond linguistics.

READING WEEK – 10

Reading the rant made me realize how much we’ve settled for touchscreens as our main way of interacting with technology. Sure, they’re easy to use, but I think they’re holding us back from something much richer. Kids today are glued to screens, swiping and tapping all day, and I feel like it’s making them “finger-blind.” They’re losing the chance to understand the world through real touch. Instead of feeling textures and weights, they’re just seeing everything through glass. In my opinion, that’s robbing them of a huge part of learning and growing.

And it’s not just kids. Even for adults, I feel like we’re missing out on deeper connections with tech. Take virtual reality games. Everyone loves the visuals, but games like Beat Saber feel empty to me without any sense of touch or resistance. It’s a hollow experience, one that relies only on what we see, not what we feel. I think this disconnect leaves us with flashy visuals but no real engagement.

Beat Saber – VR

I also think that the obsession with brain-computer interfaces is just as disappointing. Sure, it sounds futuristic, but I think it’s actually a step back. The idea of controlling things with thoughts alone ignores the importance of our bodies. We’re already sitting at screens too much, and this just pushes us deeper into that immobility. I believe technology should encourage movement and real interaction, not trap us in a still, disconnected state.

“Pictures Under Glass” sums up the issue perfectly. It’s like we’re numbing one of our most powerful senses for convenience. When I pick up a book or a glass, my hands get all kinds of feedback: weight, texture, and balance. But with a screen? Nothing. Even something as simple as making a sandwich taps into a level of interaction that a screen can’t touch. In my view, designers need to stop treating screens as the final answer and start creating tech that actually respects our bodies’ abilities.

Reading Reflection – Week 10

A Brief Rant on the Future of Interaction Design:

This was a really nice read! I enjoyed the rant-style writing, which was new to me and also very visually telling of what the author was ranting about. I definitely agreed with his point at the start, and that agreement only solidified as I read through the examples he pointed out, where everyday activities seem so simple yet are actually complex; we just do them out of instinct.

The fact that we have different types of fundamental grips for different purposes is so interesting! It really put hand manipulation into perspective, because there are only so many things you can do with a touchscreen, and all of them come down to the same sliding or tapping on a glassy surface.

Another really interesting point he made was that we often think vision carries most of the load, but only because we’re not consciously thinking about the significance of touch. This example in particular really stood out to me and made me think twice about how important it is for our hands to feel things and understand weight and texture as well.

Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.

Although I do agree that “pictures under glass” technology limits our physical interaction and connection with the devices, as well as the power and capabilities of our hands, I think it’s still crucial to consider how easy and accessible digital devices can make things. Yes, we wouldn’t be able to tell how far into a book we are if it’s digital instead of physical, but it could save the burden of carrying 4 thick textbooks with you throughout the school day, or make it easier for people to access international books that aren’t physically sold in their country. There are pros and cons to either side of things, and it would be revolutionary if we could find a way to somehow combine human capabilities and glassy technology by making it multi-sensory.

Responses to the Rant:

Reading people’s responses to the author’s rant, as well as the author’s replies to those responses, was quite fascinating, and the way he explained and defended himself was refreshingly straight to the point. I enjoyed how he was a little sarcastic in his initial response and then followed up with concrete examples to further defend his case, such as:

Yeah! iPhone bad! iPad bad!

No! iPad good! For now! In 1900, Eastman’s new Kodak camera was good! The film was black-and-white, but everyone loved it anyway. It was genuinely revolutionary! But it was also obvious that something was missing, and research continued. Practical color film was released thirty years later, and gradually took over.

Reading Reflection #7

A Brief Rant on the Future of Interaction Design

I believe that the author’s frustrations with the current trends and the future of interaction design are completely justified. It is undeniable that many developers now seek to make things digital just for the sake of being digital, confining everything behind a glass panel for no substantive reason. In this pursuit of digitalization, we often forget that humans are in fact very dependent on tactility to interact with the world around us; to reduce everything to swipes on a flat, glassy screen is to gradually reduce our cognitive ability to respond to different forms of input from our senses.

Of course, this is not to say that touch screens are devil’s spawn and must be retired, but to suggest that perhaps efforts in technological advancement are too focused on applications of touch screens and it would be much more worthwhile to explore how other human needs can be aided by other forms of interactive design.

Follow-up

It’s quite surprising how many of the responses to the rant seemed to misunderstand the author’s point. It’s unreasonable to expect the author to singlehandedly provide a solution to the problem, which seems to be what many readers expected him to do. The point of the rant was to point out a problem and hopefully direct future research to solve that problem.

I think the analogy in the last section of the article sums up the author’s argument perfectly: “A child can’t understand Hamlet, but can understand Cat In The Hat. Yet, it’s Shakespeare, not Dr. Seuss, who is the centerpiece of our culture’s literature…A tool for adults should take full advantage of the adult capabilities of both mind and body. Tools that are dumbed down for children’s minds or children’s bodies are called ‘toys’.” Ultimately, it is important to keep in mind that useful, well-designed tools make things easier while helping us develop our existing skills, instead of simply dumbing things down and imposing homogeneity on every activity in the name of digitalization.

Week 10: Musical Instrument (with Elizabeth and Linus)

Concept

For this assignment, we were inspired by the toy pianos that we’ve all played with at least a few times as children. These toys were often quite limited, as they only had enough space to accommodate the keys for a single octave. We decided to create a miniature piano that could play the notes from C to A, with a knob that can be turned to change octaves.

Mini 8-key electronic toy piano (image: Amazon.ae)

Setup and Code

We set up a row of buttons that each play a note when pressed, a potentiometer that changes the octave, and a switch that shifts the notes to their respective sharps.

We created arrays that store the frequencies of the natural notes across seven octaves, and did the same for the sharp notes. The potentiometer’s reading is mapped to a range of 0–6 and determines which octave the notes are played in. When the switch is LOW, the natural notes are played; when it is HIGH, the sharps are played.
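A condensed sketch of that note-selection logic, shown here for the C key only (the pin numbers and frequency values are illustrative assumptions; see the full code linked below), might look like this:

// Condensed version of the note-selection logic (pins and values are illustrative)
const int potPin = A0, switchPin = 12, buzzerPin = 8;

// Frequencies of C across octaves 1-7; the other notes follow the same pattern
const int cNatural[7] = {33, 65, 131, 262, 523, 1047, 2093};
const int cSharp[7]   = {35, 69, 139, 277, 554, 1109, 2217};

void setup() {
  pinMode(switchPin, INPUT);
  pinMode(2, INPUT);           // button for C (the other note buttons are set up the same way)
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  // Map the potentiometer's 0-1023 range onto octave indices 0-6
  int octave = map(analogRead(potPin), 0, 1023, 0, 6);

  // Switch LOW = natural notes, HIGH = the corresponding sharps
  bool playSharp = (digitalRead(switchPin) == HIGH);

  if (digitalRead(2) == HIGH) {                       // C button pressed
    tone(buzzerPin, playSharp ? cSharp[octave] : cNatural[octave]);
  } else {
    noTone(buzzerPin);   // stop the buzzer when no button is pressed
  }
}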

To prevent the notes from playing continuously, the noTone() function stops the buzzer from making any sound when no button is pressed.

Full code on GitHub

if (buttonState7 == LOW && buttonState6 == LOW && buttonState5 == LOW && 
    buttonState4 == LOW && buttonState3 == LOW && buttonState2 == LOW) {
  noTone(8);  // Stop any tone on pin 8
}

Demo

Reflection

Overall, this group project was a success, as we managed to recreate our inspiration using Arduino. We were really satisfied with how we implemented the natural notes as well as the sharp notes. A few struggles we faced were that the wires and resistors were really crowded on our small breadboards, making it a little difficult to reach the buttons. Ideally, we would want to get bigger breadboards in the future so that there would be more room for interacting with the product. Additionally, the breadboard was just a little too small for us to add the 7th natural note, so we could only fit 6 and are missing a B note.

In the future, we would like to add two differently colored LEDs that switch on and off along with the natural or sharp notes. This way, users would know whether they are currently playing natural or sharp notes, as not everyone is familiar with musical notes in that sense.

 

Assignment 10: Musical Instrument – Piano

Concept:

For this assignment, we were inspired by the toy pianos that we’ve all played with at least a few times as children. These toys were often quite limited, as they only had enough space to accommodate the keys for a single octave. We decided to create a miniature piano that could play the notes from C to A, with a knob that can be turned to change octaves.

Toy Piano

Setup and Code:

We set up a row of buttons that each play a note when pressed, a potentiometer that changes the octave, and a switch that shifts the notes to their respective sharps.

We created arrays that store the frequencies of the natural notes across seven octaves, and did the same for the sharp notes. The potentiometer’s reading is mapped to a range of 0–6 and determines which octave the notes are played in. When the switch is LOW, the natural notes are played; when it is HIGH, the sharps are played.
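As a quick illustration of just the octave mapping (the pin and the frequency table here are assumptions, not our exact values), a stripped-down test sketch could print the frequency the C key would use as the knob turns:

// Stripped-down test of the octave mapping only (values are illustrative)
const int cNotes[7] = {33, 65, 131, 262, 523, 1047, 2093};  // C1 through C7

void setup() {
  Serial.begin(9600);
}

void loop() {
  int octave = map(analogRead(A0), 0, 1023, 0, 6);  // knob position -> octave index 0-6
  Serial.println(cNotes[octave]);                   // frequency C would play at in this octave
  delay(200);
}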

To prevent the notes from playing continuously, the noTone() function stops the buzzer from making any sound when no button is pressed.

GitHub Code

if (buttonState7 == LOW && buttonState6 == LOW && buttonState5 == LOW && 
    buttonState4 == LOW && buttonState3 == LOW && buttonState2 == LOW) {
  noTone(8);  // Stop any tone on pin 8
}

 

Demo:

 

Reflection:

Overall, this group project was a success, as we managed to recreate our inspiration using Arduino. We were really satisfied with how we implemented the natural notes as well as the sharp notes. A few struggles we faced were that the wires and resistors were really crowded on our small breadboards, making it a little difficult to reach the buttons. Ideally, we would want to get bigger breadboards in the future so that there would be more room for interacting with the product. Additionally, the breadboard was just a little too small for us to add the 7th natural note, so we could only fit 6 and are missing a B note.

In the future, we would like to add two differently colored LEDs that switch on and off along with the natural or sharp notes. This way, users would know whether they are currently playing natural or sharp notes, as not everyone is familiar with musical notes in that sense.

Week 10 – Reading Reflection

I found Bret Victor’s A Brief Rant on the Future of Interaction Design really interesting because of how it critiques the touchscreen-focused “pictures under glass” model that limits our physical interaction with technology. It definitely changed my perspective on the direction technology is taking. I used to think touchscreens were revolutionary and easy to use, but now I find myself asking why we are willing to settle for such limited interaction, devoid of the tactile feedback that has been essential to human experience for thousands of years. Shouldn’t we be designing technology that truly connects with the way we physically engage with the world?

I was also interested in Victor’s thoughts about the stagnation of interaction design. Instead of visionary advancements, we get small incremental changes that feel innovative but don’t really use the full potential of human capabilities. For example, I often find myself reading on my Kindle or jotting down notes on my iPad for the sake of convenience, even though I’m sacrificing the tactile feel of a physical book or the natural flow of writing on paper. This makes me further question: Are we sacrificing sensory richness for convenience? What would it take for the tech industry to prioritize deeper, more meaningful interactions over merely efficient or visually impressive ones? His argument has led me to reevaluate my own ideas about technology and wonder whether our devices could one day feel like a natural extension of our bodies, rather than just tools for quick tasks.

Week 10 | Reading Response

The author’s “Brief Rant on the Future of Interaction Design” and the follow-up response focus on how much we lose by relying so heavily on touchscreens. I find the argument pretty convincing, especially the critique of “Pictures Under Glass,” which, as the author puts it, restricts the real power of our hands. The author backs this up with simple but relatable examples, like the feel of turning a page or lifting a glass, where you can actually feel the weight, texture, and resistance. With touchscreens, on the other hand, all we get is a flat, “glassy” feeling, which strips away any real tactile feedback and creates a disconnect between our natural capabilities and the digital world.

In the follow-up response, the author clarifies that the original piece wasn’t meant to prescribe solutions. This distinction feels important because, often, when we discuss technological limitations we jump to solutions without fully understanding the problem. By reframing interaction design around human capabilities, the author challenges us to think about interfaces that go beyond visual appeal and engage more fully with our innate physical skills.  I agree with their argument that future interfaces should not only acknowledge but amplify these natural interactions rather than reduce them to simplified swipes and taps. I still think touch screens can be beneficial and can offer other tactile sensations and experiences. Just because it’s different doesn’t mean it’s less interactive. Take the example the author gives about turning a page; sure, turning a page on a flat screen isn’t exactly the same as flipping through a real book. But the way a digital page moves, the sound effect, the visual cues.  It still feels interactive and immersive in its own way. I don’t see one as being better than the other; they’re just different experiences that each bring something unique.