Week 10: Instrument

Concept

Instrument: piano.

Team members: Fady John (fje3683) and Victor Alves Gomes Nadu (va2269).

For our instrument project, we decided to create a piano that plays different tones of the musical scale depending on how far someone stands from the sensor. To build the instrument on the breadboard, we used an ultrasonic sensor, LEDs, resistors, and jumper wires, creating an interactive and dynamic musical experience. The ultrasonic sensor measures the distance, allowing for an intuitive interaction with the piano: as the user moves closer or farther away, the musical output changes, providing a unique take on the instrument. The LEDs add a visual element and make the distance-based musical scale easier to follow, since they light up with the notes being played.

Code

In the code, we defined the specific pins used for each LED and provided the notes in the musical scale in the form of an array.

// notes in the musicalScale:
int musicalScale[] = {
  262, 294, 330, 349, 392, 440, 493, 523
};

int ledPins[] = {
  9, 12, 11, 10, 9, 12, 11, 10
};

// ultrasonic sensor pins (pin numbers assumed; not shown in our snippet)
const int trigPin = 6;
const int echoPin = 7;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  for (int i = 0; i < 8; i++) {
    pinMode(ledPins[i], OUTPUT);  // configure the note LEDs as outputs
  }
}
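The snippet above leaves out the loop; the sketch below shows one way the distance reading could drive the notes and LEDs, assuming the pins defined above plus a buzzer on pin 3 (the buzzer pin, the 40 cm playing range, and the timings here are placeholders, not the exact values from our build).

const int buzzerPin = 3;  // assumed pin

void loop() {
  // pulse the trigger and time the echo to measure distance in cm
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  int distance = duration * 0.034 / 2;

  for (int i = 0; i < 8; i++) {
    digitalWrite(ledPins[i], LOW);  // clear all note LEDs
  }

  if (distance < 40) {  // assumed 40 cm playing range
    int note = constrain(map(distance, 0, 40, 0, 7), 0, 7);
    tone(buzzerPin, musicalScale[note]);  // play the note for this distance
    digitalWrite(ledPins[note], HIGH);    // light the LED for that note
  } else {
    noTone(buzzerPin);  // silence when nothing is in range
  }
  delay(50);
}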

 

Schematic

Reflections and improvements

Initially, we planned to create drums, but we ended up settling on a piano since it was a more feasible and practical instrument. As for improvements, the distance sensor is currently not very accurate, which is something that could be worked on. Other than that, we managed to create something fun, and we are proud of our work.

Week 10 – Group Assignment (Jason)

Video: https://youtu.be/d6cVEQlEJnk

For our group assignment, we weren’t sure how a sensor could be used to generate musical notes, so we first brainstormed an instrument on which we could base this assignment: the accordion. To replicate the keys, we decided to use switches (buttons), and for the “bellows” we chose the flex sensor. We planned out our schematic first, mapping out the switches, then the resistors, then the connecting analog wires. However, when we actually delved into constructing the breadboard and subsequently the coding, we ran into a few problems that required us to improvise. We realized through the serial monitor that the output generated by the flex sensor was incredibly jittery and inconsistent; it was hard to make out a consistent value range. Hence, we decided to use an alternative sensor: the photoresistor. Ultimately, our approach was predominantly improvisation. Once we had resolved our main code, we decided on a whim to use the LCD to show “:)” or “:(” under specific conditions. This required some research on how the LCD is connected to the breadboard and the code used to display characters: https://docs.arduino.cc/learn/electronics/lcd-displays

HOW IT WORKS:
Three buttons play different notes. The tone of each note varies according to the photoresistor: the more light it receives, the higher the note played. Subsequently, the LCD shows “:)” when the tone frequency, determined by the analog reading from the photoresistor, is higher than the previous frequency; otherwise, it displays “:(”. This comparison is specific to each color (red, green, blue), and the frequency values are mapped accordingly.

Schematic:

Arduino Code:

// include the library code:
#include <LiquidCrystal.h>

// initialize the library by associating any needed LCD interface pin
// with the arduino pin number it is connected to
const int rs = 12, en = 11, d4 = 5, d5 = 4, d6 = 3, d7 = 2;
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);

int buzzerPin = 8;
int redpin = A0;
int greenpin = A1;
int bluepin = A2;
int phopin = A3;  // photoresistor input
float prev = 0;   // previous tone frequency, for the LCD comparison

void setup() {
  // put your setup code here, to run once:
  pinMode(buzzerPin, OUTPUT);
  pinMode(redpin, INPUT);
  pinMode(greenpin, INPUT);
  pinMode(bluepin, INPUT);
  pinMode(phopin, INPUT);  // photoresistor
  lcd.begin(16, 2);
  Serial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly:
  int redState = digitalRead(redpin);
  int greenState = digitalRead(greenpin);
  int blueState = digitalRead(bluepin);
  int flexState = analogRead(phopin);  // photoresistor reading (observed range roughly 350 to 1000)
  float redvariance = 130.8 + map(flexState, 350, 1050, 0, 130.8);
  float greenvariance = 261.6 + map(flexState, 350, 1050, 0, 261.6);
  float bluevariance = 523.2 + map(flexState, 350, 1050, 0, 523.2);
  if (redState == HIGH) {
    tone(buzzerPin, redvariance);
    if (higherThanPrev(prev, redvariance)) {
      lcd.print(":)");
    } else {
      lcd.print(":(");
    }
    prev = redvariance;
    delay(100);
  } else if (greenState == HIGH) {
    tone(buzzerPin, greenvariance);
    if (higherThanPrev(prev, greenvariance)) {
      lcd.print(":)");
    } else {
      lcd.print(":(");
    }
    prev = greenvariance;
    delay(100);
  } else if (blueState == HIGH) {
    tone(buzzerPin, bluevariance);
    if (higherThanPrev(prev, bluevariance)) {
      lcd.print(":)");
    } else {
      lcd.print(":(");
    }
    prev = bluevariance;
    delay(100);
  } else {
    noTone(buzzerPin);
  }
  lcd.clear();
}

bool higherThanPrev(float prev, float now) {
  return prev < now;
}

Overall we were quite pleased with the end result, even more so with our LCD addition. However, we felt as though it was difficult to classify our project as a musical instrument since, despite the complex interplay between analog/digital sensors, the sounds produced were less “musical” in a sense. Furthermore, we realized that whilst the tone of the sound produced by the blue switch was very clearly affected by the amount of light perceived by the photoresistor, this was not the case for the red switch; it was much harder to distinguish a change in its tone. We believe this is because the red button signals the C note in the 3rd octave, and the blue one signals the C note in the 5th octave. Since the frequency of a note N semitones up is calculated with the formula “Freq = note x 2^(N/12)”, the changes in frequency mapped to the notes become more significant as octaves increase. For future improvements, especially with regards to its musicality, perhaps we could have each button play a series of notes, so that the 3 switches would produce a complete tune. Rather than mapping ranges for the photoresistor, we could compare against a specific (more than/less than) threshold value. For example, in a dark room, the complete tune would be played in a lower key, whilst in a bright room, the tune would play in a higher key.
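To see why numerically, the size of a single semitone step can be computed from the formula above; the helper below is just an illustration, not part of our project code.

#include <math.h>

// Freq(N) = note * 2^(N/12), so one semitone up adds note * (2^(1/12) - 1) Hz
float semitoneStep(float base) {
  return base * (pow(2, 1.0 / 12.0) - 1);
}

// semitoneStep(130.8) is about 7.8 Hz  (C3, the red button)
// semitoneStep(523.2) is about 31.1 Hz (C5, the blue button)
// The same one-semitone step is roughly four times wider at C5, which is why
// the photoresistor's effect is much easier to hear on the blue switch.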

Week 9 – Analog Input & Output

Video: https://youtu.be/JsG16pEle1I

For this assignment, I decided to use a photoresistor as my analog sensor. I wanted to draw a comparison between our mentality during day and night, i.e., to represent alertness and calmness through LED lighting patterns. When the photoresistor sensor value (psv) is between 20 and 150, the yellow LED blinks rapidly. If the psv is at 0, the blue LED blinks slowly. The button applies these blinking patterns to different LEDs, meaning that when the button is pressed, a psv between 20 and 150 makes the blue LED blink rapidly, whilst a psv of 0 makes the yellow LED blink slowly. Through this assignment, I was able to grasp a stronger sense of the Arduino syntax but, most importantly, I became more capable of working the breadboard to make correct connections. Furthermore, I think I’ve cemented my ability to read schematics, which has enabled me to make such connections on the breadboard without taking a glimpse at the Fritzing breadboard layout, which I’m really pleased about. Whilst I still struggled with the coding aspect, I really enjoyed and appreciated this assignment.

Schematic:

Arduino code snippet:

if (photoState >= 20 && photoState <= 150) {
  flickerLight(currentLed);  //flickerLight(yellowPin)
  digitalWrite(currentLed == yellowPin ? bluePin : yellowPin, LOW);
  // 1) currentLed == yellowPin => checks if value is yellowPin
  // 2) ? bluePin : yellowPin => if 1st condition == true, turn off bluePin, otherwise turn off yellowPin
} else if (photoState <= 10) {
  digitalWrite(currentLed, LOW);
  flickerDark(currentLed == yellowPin ? bluePin : yellowPin);
}

if (buttonState == HIGH) {  // if button is pressed, toggles value of "currentLed" between 'yellowPin' & 'bluePin'
  currentLed = (currentLed == yellowPin) ? bluePin : yellowPin;
  delay(200);
}
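The snippet calls flickerLight() and flickerDark(), which are not shown; a plausible minimal version is below, assuming they simply blink the given pin quickly or slowly (the delay values are guesses, not the original timings).

// rapid blink for the "alert" daytime pattern
void flickerLight(int pin) {
  digitalWrite(pin, HIGH);
  delay(100);
  digitalWrite(pin, LOW);
  delay(100);
}

// slow blink for the "calm" nighttime pattern
void flickerDark(int pin) {
  digitalWrite(pin, HIGH);
  delay(500);
  digitalWrite(pin, LOW);
  delay(500);
}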

I had the hardest time implementing this change: having the button apply the blinking patterns to different LEDs. Through very extensive research I learned about the conditional (ternary) operator. I realised, after hours of experimenting and researching, that I needed to create a variable to keep track of the LED state: “currentLed”. By combining this with the conditional (ternary) operator, I was able to create a mechanism where, if the button was pressed, flickerLight/flickerDark would be applied to the opposite LED pin. Whilst it was definitely a pain in the sense that I struggled with this specific implementation for hours, I was very pleased with the end result. With regards to future improvements/projects, I’d like to create a mood light. I could use the tricolour LED, a switch, and a trimpot: the trimpot could control the colour displayed, and the switch, as in this assignment, could affect the flicker pattern displayed.
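As a rough starting point for that mood-light idea, the trimpot could sweep the tricolour LED through a simple red-to-green-to-blue fade (every pin choice and the fade mapping below are hypothetical):

const int redPin = 9, greenPin = 10, bluePin = 11;  // PWM pins (assumed)
const int trimpotPin = A0;                          // trimpot wiper (assumed)

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

void loop() {
  // sweep the trimpot's 0-1023 range across a red -> green -> blue fade
  int val = analogRead(trimpotPin);
  if (val < 512) {
    analogWrite(redPin, map(val, 0, 511, 255, 0));
    analogWrite(greenPin, map(val, 0, 511, 0, 255));
    analogWrite(bluePin, 0);
  } else {
    analogWrite(redPin, 0);
    analogWrite(greenPin, map(val, 512, 1023, 255, 0));
    analogWrite(bluePin, map(val, 512, 1023, 0, 255));
  }
}

The switch could then toggle between this steady fade and a flickering version, reusing the currentLed-style toggle from this assignment.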

 

Reading Reflections – Week 10!

“The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well.” I think this reading made so much more sense after the previous reading, Physical Computing’s Greatest Hits (and Misses). After seeing pieces like video mirrors and the body-as-cursor, or after actively using facial recognition on phones, where our body is at command and not just our fingers, I definitely think the future of interaction goes way beyond the capacities of our hands.

For the first in-class assignment we had in the physical computing part of the class, where we had to make a creative switch that did not use hands, it was very hard to think of something to make. This highlights the challenge: we need to break away from established norms and think beyond the limitations of familiar interaction methods.

A question that stuck with me was: how do we manipulate things? The way I perceive his explanation is: how do we manipulate them in a way that is intuitive? Will the dynamic mediums Bret Victor talks about be intuitive right from the start, or will they take years of trials, feedback, and usage to finally be called familiar and usable by ‘common sense’? I also appreciate the follow-up article with his responses to the comments. His reply to the brain-interfaces comment is quite nice and reassuring in a world where people believe automating all the work is far more efficient since humans are much more prone to making errors. I also like the insight on the psychological and cognitive aspects, but I’m not fully clear on the Matti Bergstrom quote about finger blindness.

My takeaway from this reading is that our interaction with digital media and devices should not have consequences for our physical bodies, and that the idea of everything we do being mediated by a computer might not be a good one.

Reading Reflection – Week 10

A Brief Rant on the Future of Interaction Design

The reading on future interfaces was mind-blowing! It was like someone suddenly turned on a light in a room I’d been sitting in for ages, and everything made sense in a completely new way.

The author of this rant, wow, they had been in the thick of designing future interfaces. Real prototypes, not just concepts. That’s a level of hands-on experience few people get. It’s like being backstage at a magic show and seeing how the tricks are really done.

Their beef with the video wasn’t about nitpicking interactions but about the vision itself. It wasn’t bold enough, not enough of a leap from the mess we’re dealing with in the present. And I get that! Visions should inspire, not just be a ho-hum “yeah, that’ll do” kind of thing.

But what really hit me was the talk about our hands. Hands are amazing! I mean, we use them constantly, but do we ever really think about how intricate they are? They’re like the Swiss Army knives of our bodies. Feeling things, manipulating objects—our hands are our interface with the world.

The idea of “Pictures Under Glass” really got to me. It’s like we’re willingly giving up the richness of touch for the sake of a fancy visual. The comparison to tying shoelaces with closed eyes hit home. We’re prioritizing sight over touch, but in reality, touch is the real MVP in how we interact with things.

The rant about the gestures we use with our fingers blew my mind. The fact that we switch between different grips without even thinking about it—opening a jar, for instance—showed how intuitive our interactions really are. Our hands are built for a three-dimensional world, for manipulating objects in ways that no other creature can. It’s like a superpower we take for granted every day!

And the call to action at the end was so powerful. The future isn’t predetermined; it’s a choice. It’s up to us to push for better interfaces, ones that harness the full potential of what our bodies can do. Why settle for a single finger when we have this incredible body that can do so much more?

What do I think about the Follow up

It’s like the author just dropped a knowledge bomb and left me reeling with thoughts and arguments ricocheting around my head.

The responses they received were a mix of understanding and misconceptions. The author wasn’t seeking to solve the problem outright; they wanted to spark curiosity and inspire researchers to delve into unexplored territories. The idea was to lay down the issue and hope it would catch the eye of the right people who could initiate the necessary research. That’s a pretty bold move!

The analogy about technology evolution using Kodak’s camera was spot on. The iPad, like the black-and-white camera, is groundbreaking, but it’s clear something’s missing. The push should be towards a dynamic tactile medium, not just a flat, glassy screen with minimal haptic feedback.

Their take on voice interfaces was refreshing. While acknowledging the importance of voice, especially for certain tasks, they stressed the limitations when it comes to creation and deep understanding. Explorable environments, where you can physically manipulate things, seem like the real deal.

The exploration of gestural interfaces was intriguing. From discrete abstract gestures to waving hands in the air, each had its pros and cons, but none seemed to fully harness the potential of our hands and bodies in a three-dimensional world.

The part about brain interfaces hit hard. Why are we trying to bypass our bodies altogether? It’s like saying our bodies are inadequate for the digital age. It’s a bold reminder to adapt technology to suit our natural capabilities rather than forcing ourselves to adapt to it.

The quote about fingertips and their importance for development resonated deeply. It’s like saying if we don’t use certain faculties, we lose them. The comparison to limiting literature to Dr. Seuss for adults is both humorous and thought-provoking.

And the clever redirect about the length of the rant with the book recommendation at the end was a nice touch!

Week 10: Make a musical instrument

For our assignment, Nafiha and I drew inspiration from a synthesizer and a sampler to create our own musical instrument. Our instrument incorporates three buttons, a piezo buzzer, a potentiometer, and a bunch of wires and resistors. It is designed such that each button triggers a distinct melody, and by adjusting the potentiometer, the pitch is modified, consequently altering the played melodies.

Video:

Link: https://drive.google.com/file/d/1zvd5qZeavfn0oTLdWGMqWOIxTLay6gbp/view?usp=sharing

Code:

const int switch1Pin = 12;
const int switch2Pin = 8;
const int switch3Pin = 7;
const int potentiometerPin = A0;
const int buzzerPin = 3;

int currentMelody[8];  // array to store the current melody
int melodyIndex = 0;   // keep track of the current note in the melody
int isPlaying = 0;     // flag to indicate whether a melody has been played

// melodies for each button
int melody1[] = {262, 330, 392, 523, 392, 330, 262, 196};  // melody for switch 1
int melody2[] = {330, 392, 523, 392, 330, 262, 196, 262};  // melody for switch 2
int melody3[] = {392, 523, 659, 523, 392, 330, 262, 330};  // melody for switch 3

void setup() {
  pinMode(switch1Pin, INPUT_PULLUP);
  pinMode(switch2Pin, INPUT_PULLUP);
  pinMode(switch3Pin, INPUT_PULLUP);
  pinMode(potentiometerPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  //potentiometer value for pitch control
  int pitch = analogRead(potentiometerPin);

  //if switch 1 is pressed
  if (digitalRead(switch1Pin) == HIGH && !isPlaying) {
    playMelody(melody1, pitch);
  }

  //if switch 2 is pressed
  if (digitalRead(switch2Pin) == HIGH && !isPlaying) {
    playMelody(melody2, pitch);
  }

  //if switch 3 is pressed
  if (digitalRead(switch3Pin) == HIGH && !isPlaying) {
    playMelody(melody3, pitch);
  }

  //check if any switch is pressed and a melody is currently playing
  if ((digitalRead(switch1Pin) == HIGH || digitalRead(switch2Pin) == HIGH || digitalRead(switch3Pin) == HIGH) && isPlaying) {
    noTone(buzzerPin);//stop playing the melody
    isPlaying = 0;//set the flag to indicate no melody is playing
  }
}

void playMelody(int melody[], int pitch) {
  // map the potentiometer reading to a pitch multiplier (roughly 0.5x to 2x)
  float pitchFactor = map(pitch, 0, 1023, 50, 200) / 100.0;

  // copy the melody to the currentMelody array
  memcpy(currentMelody, melody, sizeof(currentMelody));

  // play each note in the melody, with its frequency scaled by the potentiometer
  for (int i = 0; i < sizeof(currentMelody) / sizeof(currentMelody[0]); i++) {
    tone(buzzerPin, (unsigned int)(currentMelody[i] * pitchFactor), 200);
    delay(250);
    noTone(buzzerPin);
  }

  // set the flag to indicate a melody has been played
  isPlaying = 1;
}

In terms of improving our instrument, one potential feature could be incorporating additional sound effects through the use of the potentiometer. However, overall, working on this assignment was really fun, and we’re pretty pleased with the outcome.

Week 10: Reading Response

In his reading, Bret Victor discussed a vision of the future, sharing his observation about the central role of hands in interactions with future technology, or, as he defined them, tools. In the physical world, hands serve various activities and possess numerous senses: through just a touch, or by lifting an object, we can tell so much about it, making hands our primary means of understanding the world. However, in technological contexts, especially when we envision the future, hands primarily slide across glass screens; we can only sense the glass screen and manipulate the displayed content. His main point emphasizes the need to consider other interactions beyond hands: we also have our bodies! We should start considering other forms of interaction.

As he further discussed the two main functions of our hands—feeling and manipulation—I began to wonder: If we excessively use interactions solely via the glass screen, limiting our interaction with the real world and physical objects, would we lose some of our capabilities? He defined a tool as addressing human needs by amplifying human capabilities. If this were to happen, technology would become less of a tool.

Later, he responded to some of the readers’ comments, and one response indeed answered my question. Victor cites a neuroscientist’s quote about the development of “finger-blindness” caused by underusing the hands in childhood, and it is even worse than ordinary blindness: a blind person cannot locate things, but a finger-blind person cannot understand the meaning and value of things. What if, just what if, in the far, far future, we come to use only sliding or tapping on a screen and start to develop this kind of “finger-blindness”? Though it might seem far off, we’re already experiencing some changes. For instance, some people have become so accustomed to typing that they feel less comfortable writing with pen and paper. What if we no longer remember what it feels like to turn the page of a book?

What he suggests is that as we choose the future, we also choose to shape it through our actions. For that, he suggests thinking of types of interaction beyond just our hands, aiming for a “dynamic medium that we can see, feel, and manipulate.” I do think such interactions might come at a cost, considering that every kind of technology has its strengths and weaknesses. Take VR, for example: indeed, we can see things in 3D, but it remains a visual illusion in which we cannot physically touch and feel the things around us. What I am suggesting aligns with what he pointed out: we need to start thinking about diverse types of interactivity to balance the excessive use of any one.

Week 10 – Musical Instrument

Concept

I’ve been trying to learn the piano for the past year, so when we had to make a musical instrument I definitely knew I wanted to make something tangentially related to the piano.

Another thing I wanted to do with this week’s assignment was to add more ‘design wrapping’ to it. I liked what the professor said about Week 9’s projects: sometimes a well-wrapped simple project is much nicer than a messy, technically impressive one. So before even designing what I wanted to do, I thought about the presentation of materials and how I could use them to enhance the project.

I thought of something like Nintendo’s cardboard piano, and it sounded like it would be fun to implement. However, I looked at the documentation for Arduino tones and learned that only one tone can be played at a time, even with multiple piezo buzzers. I didn’t like the idea of creating a piano that can only play one note at a time, plus I thought this idea may have been overdone: creating a piano as a musical instrument for Arduino.

[Image: Nintendo’s cardboard piano for the Switch]

I still opted to create something piano-like, but instead of using touch as input, I decided to use the ultrasonic detector, which makes it a theremin. This turns the one-tone-at-a-time limit into a feature rather than a bug. However, during implementation I realized the ultrasonic detector isn’t very precise, so I wasn’t able to add the black keys to the piano without sacrificing accuracy. One more problem that came to mind when using the ultrasonic detector to determine which note to play was: “how often do I play the note?” To address this, I added a potentiometer that sets the tempo of the notes, so the theremin can play one note every 400–1000 ms depending on the position of the potentiometer. I realized I should also add some visual feedback, as it might otherwise be difficult to set the desired value, so I added an LED that blinks at the same rate the theremin is playing. If the LED is blinking at the same pace you want the song to be played, then you’re good!
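A minimal sketch of that tempo mechanism, assuming the potentiometer on A0, the indicator LED on pin 13, and the buzzer on pin 8 (all pin choices are placeholders), with millis() keeping the blink and the notes in sync:

const int potPin = A0;    // tempo potentiometer (pin assumed)
const int ledPin = 13;    // tempo-indicator LED (pin assumed)
const int buzzerPin = 8;  // piezo buzzer (pin assumed)

unsigned long lastNote = 0;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // map the pot onto the 400-1000 ms note interval described above
  int interval = map(analogRead(potPin), 0, 1023, 400, 1000);

  // play a note (and blink the LED) once per interval
  if (millis() - lastNote >= (unsigned long)interval) {
    lastNote = millis();
    digitalWrite(ledPin, HIGH);  // the LED blinks at the rate notes are played
    tone(buzzerPin, 440, 100);   // placeholder pitch; the real sketch derives
                                 // the frequency from the ultrasonic reading
    delay(100);
    digitalWrite(ledPin, LOW);
  }
}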

Video

Week 10 – Reading Reflection

Pictures Under Glass requires more than just your hands; it requires your eyes too. You can’t navigate Pictures Under Glass without looking at where your fingers are, as there is no touch feedback from the screen. With analog buttons, you can ‘feel’ your way around an interface. For example, many modern cars only have a mounted tablet as the entertainment system, and sometimes when I’m driving and want to skip a song, there is no safe way to do so without momentarily taking my eyes off the road to find the ‘>’ next-song button on the tablet. In older cars with analog buttons, I don’t have to look to know where the button is; I can just remember that the next-song button is second from the right, and with one hand on the wheel and eyes on the road, I can interact with the car’s entertainment system.

I also found it interesting that in his follow-up he brought up the hypothetical scenario of “Monet saying to his canvas, ‘Give me some water lilies. Make ’em impressionistic.’” Today, with the existence of generative AI and advanced natural language processing, it seems the world of user interfaces has doubled down on the direction of Pictures Under Glass, further reducing the number of middlemen between the brain’s intent and the wanted outcome. What I mean is that hands and physical movements are, in some part, a means to an end: to open a jar, we use our hands because we can’t simply open it with our brains. People spend years in art school learning to draw well, but generative art can accomplish in a single sentence and a few minutes what would take an untrained artist years of training. ChatGPT further reduces the effort needed for humans to interface with the world: in the past, one would have to visit a library to read about a specific subject; then came the internet, where one could search Google for the information they need; now, with ChatGPT, you can simply ask it whatever you need. The effort needed to accomplish anything is reduced, and I’m not sure if this is a good or bad thing. On this, the author says that “if they [brain interfaces] bypass the body, then we’ve just created a future where people can and will spend their lives completely immobile. Why do you want this future? Why would this be a good thing?”

Reading Reflection: Week 10

After reading the author’s rant on today’s tech scene and the follow-up article, I found myself nodding in agreement. It’s as if they put into words what many of us have been feeling: that our tech progress feels more like cautious steps than the significant leaps we had hoped for. The call for a more forward-thinking approach isn’t just criticism; it’s a plea for innovation that genuinely enhances how we engage with the world.

The author’s observation regarding our bodies adapting to technology, rather than the other way around, resonates with the changing nature of our interactions with modern devices. For example, in our engagements with smartphones and virtual reality, the absence of tactile richness becomes apparent. With smartphones, our interactions are primarily confined to tapping and swiping on flat screens, lacking the tangible feedback that our hands and fingers are accustomed to in the physical world. This shift towards touchscreens, while undoubtedly convenient, introduces a disconnect between our tactile senses and the digital interfaces we navigate daily. Similarly, in the realm of virtual reality, immersion often falls short due to a lack of genuine tactile experiences. While VR technology can visually transport us to breathtaking landscapes and scenarios, the tactile feedback remains limited, hindering the full potential of our hands in interacting with this virtual realm. The conceptualization of virtual environments replicating real-world actions and fully leveraging the capabilities of our hands and bodies presents an intriguing prospect. Imagine not just seeing but feeling the virtual world – a paradigm shift that challenges the conventional boundaries of human-computer interaction.

As I contemplated these notions, I couldn’t help but reflect on the potential future resembling the video and how to prevent technology from becoming devoid of feelings and feedback. This isn’t to say that we lack feedback from current technology, but it prompts us to question whether we are achieving the best possible feedback. In essence, this exploration not only echoes the author’s insightful observations but also introduces a layer of personal introspection regarding the implications and responsibilities tied to our technological trajectory. It serves as a call to action, urging us to actively contribute to shaping a future where technology seamlessly aligns with our human experiences.