Week 10 – Reading Response Post

This week’s reading about the future of interaction design made me stop and think. It talked about how designs confined under glass aren’t great for the future of how we interact with stuff. The author had some strong thoughts about a video showcasing future interactions. They were skeptical about the interactions presented in the video because they had experience working with real prototypes, not just computer-generated animations. But surprisingly, that wasn’t the main issue for them. What bothered them the most was that the video didn’t offer anything truly groundbreaking from an interaction standpoint. They felt it was just a small step forward from what’s already there, which, according to them, isn’t that great to begin with. They stressed how crucial it is to have visionary ideas that truly revolutionize how we interact with technology and the world around us. It got me pondering, but honestly, I don’t fully buy into that idea.

Sure, our bodies have a ton of complex ways we handle things, how we touch, hold, and interact with everything around us. But saying that designs restricted to “pictures under glass” are all bad? I’m not on board with that. Take something as simple as a PDF file versus printing out a reading. That PDF might be called “numb” in terms of design, but let’s be real, it’s way easier to handle and interact with than dealing with a printed paper. It’s about usability and convenience, isn’t it? If it’s not convenient or easy to use, is it even really interaction?

I believe interaction goes beyond just physically touching something. It’s about how easy and helpful it is to use. Some things will always need to be tangible. There’s magic in touching, feeling textures, estimating weights, and seeing how things respond to us. Like a couch that adjusts slightly to fit you but still does its job of being a comfy place to sit. That’s something you can’t replicate behind glass.

I think it’s crucial to know what can be behind glass and what can’t. Some folks might prioritize convenience in the future, but there are things you just can’t replicate virtually. I mean, you can’t virtually brush your teeth, right?

For me, I don’t see the connection or agree with that rant about interaction design. Maybe it’s just me, though. Everyone’s got their take on things, and that’s cool.

Week 10 – Musical Instrument

For this assignment, we wanted to make a hovering keyboard. We used an ultrasonic distance sensor and mapped specific distance ranges to specific notes: as the user moves their hand through the different ranges, different notes play. We also added a button to turn the instrument off completely, in addition to implementing a maximum range beyond which the instrument doesn’t produce any sound.

Video:

#include "pitches.h"
// defines pin numbers
const int trigPin = A0;
const int echoPin = A1;
const int speakerPin = 8;
const int pushButton = A2;
// defines variables
long duration;
int distance;
void setup() {
  pinMode(trigPin, OUTPUT);  // Sets the trigPin as an Output
  pinMode(echoPin, INPUT);   // Sets the echoPin as an Input
  pinMode(pushButton, INPUT);  // Sets the push button pin as an Input
  Serial.begin(9600);  // Starts the serial communication
  
}
void loop() {
  int buttonState = digitalRead(pushButton);
  // print out the state of the button:
  Serial.println(buttonState);
  delay(1);  // delay in between reads for stability
  // Clears the trigPin
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  // Sets the trigPin to HIGH for 10 microseconds
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  // Reads the echoPin, returns the sound wave travel time in microseconds
  duration = pulseIn(echoPin, HIGH);
  // Calculating the distance: sound travels ~0.034 cm per microsecond, and the echo covers the path twice, hence the division by 2
  distance = duration * 0.034 / 2;
  // Prints the distance on the Serial Monitor
  Serial.print("Distance: ");
  Serial.println(distance);
  // Playing outside distance range or the instrument is turned off
  if (distance >= 40 || buttonState == 1) {
    noTone(speakerPin);
  }
  // Play C4 in first 10cm
  else if (distance >= 0 && distance <= 10) {
    tone(speakerPin, NOTE_C4, 1000 / 4);
  }
  // Play G4 in next 10cm
  else if (distance >= 11 && distance <= 20) {
    tone(speakerPin, NOTE_G4, 1000 / 4);
  }
  // Play A4 in next 10cm
  else if (distance >= 21 && distance <= 30) {
    tone(speakerPin, NOTE_A4, 1000 / 4);
  }
  // Play F4 in next 10cm
  else if (distance >= 31 && distance <= 40) {
    tone(speakerPin, NOTE_F4, 1000 / 4);
  }
}

Future Applications:

I think for future applications, the number of keys could be expanded to 12, with buttons to move up or down an octave, so it could become a complete hovering piano with every key found on a keyboard; a rough sketch of that idea follows below.
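A sketch of what that expansion might look like, reusing the same sensor and speaker pins but assuming hypothetical octave-button pins, 5 cm bands per key, and a plain frequency table (none of this is from the current build):

// Hypothetical extension: 12 distance bands (5 cm each) mapped to one chromatic octave,
// with two buttons shifting the octave up or down. The button pins are assumptions.
const int trigPin = A0;
const int echoPin = A1;
const int speakerPin = 8;
const int octaveUpPin = 2;    // assumed wired with a pull-down, so a press reads HIGH
const int octaveDownPin = 3;  // assumed wired with a pull-down, so a press reads HIGH

// Frequencies for C4..B4; doubling or halving a frequency shifts it by one octave.
const float baseNotes[12] = {261.63, 277.18, 293.66, 311.13, 329.63, 349.23,
                             369.99, 392.00, 415.30, 440.00, 466.16, 493.88};
int octaveShift = 0;  // ranges from -2 to +2

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(octaveUpPin, INPUT);
  pinMode(octaveDownPin, INPUT);
}

void loop() {
  // Octave buttons (a real build would debounce these or wait for release)
  if (digitalRead(octaveUpPin) == HIGH && octaveShift < 2) octaveShift++;
  if (digitalRead(octaveDownPin) == HIGH && octaveShift > -2) octaveShift--;

  // Same trigger/echo sequence as the main sketch above
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  int distance = duration * 0.034 / 2;

  if (distance >= 60) {
    noTone(speakerPin);  // out of range: silence
  } else {
    int key = constrain(distance / 5, 0, 11);            // 12 bands of 5 cm each
    float freq = baseNotes[key] * pow(2, octaveShift);   // shift by whole octaves
    tone(speakerPin, (unsigned int)freq, 1000 / 4);
  }
  delay(100);
}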

Week 10 Assignment: 4 Tabs Mini Piano

Concept:

In this week’s assignment, I decided to create a piano on a much smaller scale. As the title suggests, it is composed of 4 tabs. The motivation is to make a piano that can be carried anywhere and played easily. The piano has multiple different tones, which can be adjusted with a potentiometer. While there are digital applications where people can easily play piano, this week’s text stressed the importance of touch and expression when performing tasks compared to just tapping a flat screen, and I realized how a physical piano can bring forth a better experience than a digital one.

Prototype:

https://youtube.com/shorts/p7VoEhPxomk

Code:

#include "pitches.h"

//setting up various pins, numkeys is the number of keys on the piano, TABPins are the digital pins
const int numKeys = 4;                
const int TABPins[numKeys] = {2, 3, 4, 5}; 
const int pushButton = 6;
const int speakerPin = 7;
const int potentiometerPin = A0;      // potentiometer that adjusts the pitch (pin assumed to be A0)

// variable to store the last pressed tab
int lastTAB = -1;
int pitch;
// flag for whether the system is on or off
bool systemEnabled = false;


void setup() {
  //set each buttonPin as input 
  for (int i = 0; i < numKeys; i++) {
    pinMode(TABPins[i],  INPUT);
  }
  //set the button as input 
  pinMode(pushButton, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Check the state of the push button
  int buttonState = digitalRead(pushButton);

  // Toggle the system on/off when the button is pressed
  if (buttonState == LOW) {
    delay(50); 
    //when the button is pressed, invert the systemEnabled variable
    if (digitalRead(pushButton) == LOW) {
      systemEnabled = !systemEnabled;

      // If the system is now enabled, reset the lastTAB variable
      if (systemEnabled) {
        lastTAB = -1;
      }

      // Wait for the button to be released
      while (digitalRead(pushButton) == LOW) {
        delay(10);
      }
    }
  }

  // If the system is enabled, read the potentiometer value and play notes
  if (systemEnabled) {
    int potValue = analogRead(potentiometerPin);
    // Map potentiometer value to a pitch range
    pitch = map(potValue, 0, 1023, 200, 4000);  

    for (int i = 0; i < numKeys; i++) {
      int TABValue = digitalRead(TABPins[i]);

      // Play a note if the TAB is pressed and it's not the same TAB as the last one
      if (TABValue == LOW && i != lastTAB) {
        // note value built from the lowest note plus a pitch offset that changes according to the potentiometer
        int note = NOTE_B0 + pitch + i * 100;  
        // output the speaker with that note value for 1 second
        tone(speakerPin, note, 1000);
        delay(100);  

        // Update the lastTAB variable
        lastTAB = i;
      }
    }
  }
}

Reflection:

This was a fun hands-on exercise. The aspect of the project that took the most time was building the prototype that replicates the piano. The hardest part, on the other hand, was probably setting up the push button to flag the system as on or off, since I had to add multiple delays to prevent various errors. As for future improvements, more tabs could be added to the piano to make it easier to produce multiple notes in one go. Also, the overall look of the piano could probably be much better than this.

 

Week 10 – Musical Instrument

Concept

I’ve been trying to learn the piano for the past year, so when we had to make a musical instrument I definitely knew I wanted to make something tangentially related to the piano.

Another thing I wanted to do with this week’s assignment was to add more ‘design wrapping’ to it. I liked what the professor said about Week 9’s projects — sometimes a well-wrapped simple project is much nicer than a messy, technically impressive one. So before even deciding what I wanted to do, I thought about the presence of materials and how I could use them to enhance the project.

I thought of something like Nintendo’s cardboard piano, and it sounded like it would be fun to implement. However, I looked at the documentation for Arduino’s tone() function and learned that only one tone can be played at a time, even with multiple piezo buzzers. I didn’t like the idea of creating a piano that can only play one note at a time, plus I thought the idea of building a piano as an Arduino musical instrument may have been overdone.

Nintendo's cardboard piano for Switch is the most exciting gear of 2018

I still opted to create something piano-like, but instead of using touch as input, I decided to use the ultrasonic detector, which makes it a theremin. This turns the one-tone-at-a-time limit into a feature rather than a bug. However, during implementation I realized the ultrasonic detector isn’t very precise, so I wasn’t able to add the black keys to the piano without sacrificing accuracy. One more problem that came to mind when using the ultrasonic detector to determine which note to play was: “how often do I play the note?” To address this, I added a potentiometer that sets the tempo of the notes, so the theremin plays one note every 400–1000 ms depending on the position of the potentiometer. I realized I should also add some visual feedback, as it might otherwise be difficult to set the desired value, so I added an LED that blinks at the same rate the theremin is playing. If the LED is blinking at the pace you want the song to be played, then you’re good!
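A minimal sketch of that logic might look like this (the pin numbers, the C-major note table, and the 5 cm-per-key bands are assumptions; the actual wiring in the project may differ):

// Minimal sketch of the theremin logic described above. Pin numbers, the note
// table, and the 5 cm band per key are assumptions, not the project's actual wiring.
const int trigPin = 9;
const int echoPin = 10;
const int speakerPin = 8;
const int potPin = A0;
const int ledPin = 13;

const int notes[7] = {262, 294, 330, 349, 392, 440, 494};  // C4..B4, white keys only

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // Read the ultrasonic sensor
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  int distance = duration * 0.034 / 2;  // cm

  // Potentiometer sets the tempo: one note every 400-1000 ms
  int interval = map(analogRead(potPin), 0, 1023, 400, 1000);

  // Play one note per cycle if a hand is in range
  if (distance > 0 && distance < 35) {
    int key = constrain(distance / 5, 0, 6);  // 5 cm per white key
    tone(speakerPin, notes[key], interval / 2);
  }

  // LED blinks at the same rate the notes are played, as visual tempo feedback
  digitalWrite(ledPin, HIGH);
  delay(interval / 2);
  digitalWrite(ledPin, LOW);
  delay(interval / 2);
}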

Video

Week 10 – Reading Reflection

Pictures Under Glass requires more than just your hands — it requires your eyes too. You can’t navigate Pictures Under Glass without looking at where your fingers are, because there is no touch feedback from the screen. With analog buttons, you can ‘feel’ your way around an interface. For example, many modern cars only have a mounted tablet as the entertainment system, and sometimes when I’m driving and want to skip a song, there is no safe way to do so without momentarily taking my eyes off the road to look at the tablet and see where the ‘>’ next-song button is. In older cars with analog buttons, I don’t have to look to know where the button is; I can just remember that the next-song button is the second from the right, and with one hand on the wheel and eyes on the road, I can interact with the car’s entertainment system.

I also found it interesting that in his follow-up he brought up the hypothetical scenario of “Monet saying to his canvas, ‘Give me some water lilies. Make ’em impressionistic.’” Today, with the existence of generative AI and advanced natural language processing, it seems that the world of user interfaces has doubled down on the direction of Pictures Under Glass, further reducing the number of middlemen between the intent of the brain and the wanted outcome. What I mean by this is that hands and physical movements are, in part, a means to an end. To open a jar, we use our hands because we can’t simply use our brain to open it. People spend years in art school learning how to draw well, but generative art can accomplish, with a single sentence and a few minutes, what would take an untrained artist years of training to achieve. ChatGPT further reduces the effort needed by humans to interface with the world — in the past, one would have to visit a library to read about a specific subject. Then came the internet, and using devices connected to it, one could search Google for the information they need. Now, with ChatGPT, you can simply ask it about whatever you need. The effort needed to accomplish anything is reduced, and I’m not sure if this is a good or bad thing. On this, the author says that “if they [brain interfaces] bypass the body, then we’ve just created a future where people can and will spend their lives completely immobile. Why do you want this future? Why would this be a good thing?”

Reading Reflection: Week 10

After reading the author’s rant on today’s tech scene and the follow-up article, I found myself nodding in agreement. It’s as if they put into words what many of us have been feeling – that our tech progress feels more like cautious steps instead of the significant leaps we had hoped for. The call for a more forward-thinking approach isn’t just criticism; it’s a plea for innovation that genuinely enhances how we engage with the world.

The author’s observation regarding our bodies adapting to technology, rather than the other way around, resonates with the changing nature of our interactions with modern devices. For example, in our engagements with smartphones and virtual reality, the absence of tactile richness becomes apparent. With smartphones, our interactions are primarily confined to tapping and swiping on flat screens, lacking the tangible feedback that our hands and fingers are accustomed to in the physical world. This shift towards touchscreens, while undoubtedly convenient, introduces a disconnect between our tactile senses and the digital interfaces we navigate daily. Similarly, in the realm of virtual reality, immersion often falls short due to a lack of genuine tactile experiences. While VR technology can visually transport us to breathtaking landscapes and scenarios, the tactile feedback remains limited, hindering the full potential of our hands in interacting with this virtual realm. The conceptualization of virtual environments replicating real-world actions and fully leveraging the capabilities of our hands and bodies presents an intriguing prospect. Imagine not just seeing but feeling the virtual world – a paradigm shift that challenges the conventional boundaries of human-computer interaction.

As I contemplated these notions, I couldn’t help but reflect on the potential future resembling the video and how to prevent technology from becoming devoid of feelings and feedback. This isn’t to say that we lack feedback from current technology, but it prompts us to question whether we are achieving the best possible feedback. In essence, this exploration not only echoes the author’s insightful observations but also introduces a layer of personal introspection regarding the implications and responsibilities tied to our technological trajectory. It serves as a call to action, urging us to actively contribute to shaping a future where technology seamlessly aligns with our human experiences.

Week 10 Assignment (Mariam & Mudi)

Mariam & Mudi’s Magical Musical Instrument.

Concept :

For our musical instrument, we decided to craft an innovative instrument using an ultrasonic sensor, a button, and a buzzer. To kick off the musical vibes, just gently hold down the button. Now, here’s where it gets interesting: when you wave your hand in front of the ultrasonic sensor at varying distances, it unveils a different array of notes!

int trig = 10;
int echo = 11;
int buttonPin;
long duration;
long distance;

void setup() {
  pinMode(echo, INPUT);
  pinMode(trig, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  digitalWrite(trig, LOW);  // triggers on/off and then reads data
  delayMicroseconds(2);
  digitalWrite(trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig, LOW);
  duration = pulseIn(echo, HIGH);
  distance = (duration / 2) * .0344;  // 344 m/s = speed of sound; converting the round trip into cm

  int notes[7] = {261, 294, 329, 349, 392, 440, 494};  // Putting several notes in an array
  // mid C,  D,   E,   F,   G,   A,   B

  buttonPin = analogRead(A0);

  if (distance < 0 || distance > 50 || buttonPin < 100) {  // if not pressed and nothing in front
    noTone(12);  // don't play music
  }
  else if (buttonPin > 100) {  // if pressed
    int sound = map(distance, 0, 50, 0, 6);  // map distance to the array of notes
    tone(12, notes[sound]);  // call a certain note depending on distance
  }
}

 

Challenges:

I wouldn’t call this one a challenge but more of a hiccup, really: we found ourselves repeatedly unplugging and replugging the components due to connectivity issues, and the Arduino kept giving errors.

 

Neil Leach Alien Intelligence – BONUS

Prof. Neil did quite a good job in shedding light on the stark differences between artificial and human smarts. From ChatGPT’s unexpected brilliance to AlphaGo Zero’s mind-boggling moves, he painted a picture of AI’s vastness, urging us to hit the brakes on development. It’s not just cool tech; it’s a bit scary. Neil’s vibe warned us: we might be steering into unknown territory, where AI, per Geoffrey Hinton, starts thinking for itself. The lecture left us with a quirky truth—aliens aren’t zipping down from space; they’re brewing up in labs. AI’s the new-age extraterrestrial, and we better handle it with care. However, it did stir up some thoughts I had.

While the talk was interesting, it felt like a déjà vu of AI basics, and I craved more than the usual rundown. I couldn’t shake the feeling that it might have missed the mark for our crowd: he covered many AI basics, but it felt like old news for most of us. I was hoping for more profound insights or a deeper dive into the future.

Week 10: Reading Response

“A Brief Rant on the Future of Interaction Design” and the follow-up article:

At the beginning of the reading, right after I watched the video, I was skeptical of the point the author is trying to make. However, as I read through the entire article, I started nodding my head and agreeing with everything the author was saying. Coincidentally, right before I read the article, I was practicing the violin. Because I hadn’t touched the violin in almost 6 months, my hands were “out of shape,” and I had forgotten to warm up before I started playing. So obviously, after a few minutes, my hands started cramping. In the past 11 years of my life, this was the longest break I have taken from playing the violin, so I felt that my hands were communicating with me through the pain and reflecting my unconscious sadness that I have abandoned one of my favorite hobbies for that long. Reading the article made me aware of the emotions we feel when we use our hands to do basic things, from brushing our teeth to playing the violin. I realize our hands are not just a body part we use to do things but rather a part of who we are—an important sense that makes us more aware of ourselves and our emotions somehow. I realize that replacing the use of our hands with a touch of a button or using voice control like Siri or Alexa affects our emotions and makes us numb to the feelings we naturally have.

After I finished reading the article, I asked myself when this article was written, and to my surprise, I found out that it was written 12 years ago. I started thinking back to 2011 and realized that when this article was written, the iPhone 4s had just been released, and the iPad was only out for a year. I wonder what the author thinks now. Unfortunately, the point the author was trying to make has become the reality we live every day. We all carry small glass screens in our pockets that control every single aspect of our lives. And yes, most children are now glued to iPads instead of playing with toys and running around their neighborhood playgrounds. In the response the author wrote, he mentioned that there are ultimately two functionalities that are changing: creating and understanding. He mentions that we cannot create with the use of technology because technology lacks creativity and emotion, but it can be an aid to help us understand stuff. This got me thinking about AI and how people are using ChatGPT to create, understand, and basically everything else. How can we control the advancement of technology so that it doesn’t take over our lives and turn us into lifeless robots? Will our future really look like the video? How can we stop it before it’s too late?

reading reflection: week 10

First, I should make a point of saying that I have always believed writing to be conversational. When you write something, you open a conversation that remains unfinished business until you’ve heard back from your readers. I understand that this is not how a lot of people see the position of a writer, but it’s just my view, and I’ve lived by it (that’s why I’ve emailed all kinds of authors, from John Green to Noam Chomsky). So, when I opened the second link under this week’s readings, I was pleasantly surprised to see an author continuing the conversation with his readers, in his humorous but not really condescending tone, which I appreciated very much.

Either way, maybe the author won me over with his jokes, but I feel inclined to agree more with him than with his critics. I have long harbored a slight distaste for VR/AR technologies, but I could never put a finger on where it was coming from. The author’s “rant” offered the words I was looking for: this obsession with the Pictures Under Glass genre of technology feels like a disservice to human glory. These technologies simply do not engage the full potential for interaction that humans possess, and by being such limited creations, they fall short of being tools that “fit the person”. It’s like giving a lumberjack a butter knife to chop down a tree. The lumberjack will probably just do his job with the knife because that’s all he has, but the creators in his society owe it to him to start thinking in the general direction of a chainsaw instead of how to make the butter knife more aerodynamic. Simply because the lumberjack is capable of so much more.

I can’t trace how we ended up here, though. There are paintings from the year 1900 predicting what life in the 21st century would look like. Just a few generations ago, people were expecting so much more than thin LCDs or holographic interaction panels. But somehow, someone along the way popularized these so-called Pictures Under Glass technologies, and now we’re in this strange limbo.


There’s this short film from 1967, called 1999 A.D., which shows the life of a family in the future. It predicted many of the things that we have today: cell phones, email, and online shopping. However, these technologies are depicted in very different forms than how we know them today. For example, to reply to an electronic mail, the father has to handwrite his response on a machine with a glossy surface to write on. When the mother wants to go shopping, she turns some knobs on her “fingertip shopping” machine and is tuned in to a real retailer, where a camera automatically scans across items for sale. These predictions are fun to look at now, with the knowledge that specialized, bulky machines for emailing or remote shopping aren’t the most convenient or realistic way to go. Still, it just goes to show that at some point in history, humans did have the vision to create more dynamic mediums of interaction “that we can see, feel, and manipulate,” but it seems that we have since gone astray.