Reading Reflections – week 10!

“The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well.” I think this reading made so much sense after the previous one, Physical Computing’s Greatest Hits (and Misses). After seeing pieces like video mirrors and the body-as-cursor, or after actively using facial recognition on phones, where our body does the commanding and not just our fingers, I definitely think the future of interaction goes way beyond the capacities of our hands.

For the first in-class assignment in the physical computing part of the class, where we had to make a creative switch that did not use hands, it was very hard to think of something to make. This really highlights the challenge: we need to break away from established norms and think beyond the limitations of familiar interaction methods.

A question that stuck with me was: how do we manipulate things? The way I perceive his explanation is: how do we manipulate them so that they are intuitive? Will the dynamic mediums Bret Victor talks about be intuitive right from the start, or will they take years of trials, feedback, and usage to finally be called familiar and usable by ‘common sense’? I also appreciated the follow-up article with his responses to the comments. His reply to the brain-interfaces comment is quite nice and reassuring in a world where people believe automating all the work is far more efficient since humans are more prone to making errors. I also liked the insight on the psychological and cognitive aspects, though I am not fully clear on the Matti Bergstrom quote about finger blindness.

My takeaway from this reading would be that our interaction with digital media and devices should not come at a cost to our physical bodies, and that the idea of having everything we do mediated by a computer might not be a good one.

Reading reflection week 10

A Brief Rant on the Future of Interaction Design

The reading on future interfaces was mind-blowing! It was like someone suddenly turned on a light in a room I’d been sitting in for ages, and suddenly, everything made sense in a completely new way.

The author of this rant, wow, they had been in the thick of designing future interfaces. Real prototypes, not just concepts. That’s a level of hands-on experience few people get. It’s like being backstage at a magic show and seeing how the tricks are really done.

Their beef with the video wasn’t about nitpicking interactions but about the vision itself. It wasn’t bold enough, not enough of a leap from the mess we’re dealing with in the present. And I get that! Visions should inspire, not just be a ho-hum “yeah, that’ll do” kind of thing.

But what really hit me was the talk about our hands. Hands are amazing! I mean, we use them constantly, but do we ever really think about how intricate they are? They’re like the Swiss Army knives of our bodies. Feeling things, manipulating objects—our hands are our interface with the world.

The idea of “Pictures Under Glass” really got to me. It’s like we’re willingly giving up the richness of touch for the sake of a fancy visual. The comparison of tying shoelaces with your eyes closed (doable) versus with numb fingers (nearly impossible) hit home. We’re prioritizing sight over touch, but in reality, touch is the real MVP in how we interact with things.

The rant about the gestures we use with our fingers blew my mind. The fact that we switch between different grips without even thinking about it—opening a jar, for instance—showed how intuitive our interactions really are. Our hands are built for a three-dimensional world, for manipulating objects in ways that no other creature can. It’s like a superpower we take for granted every day!

And the call to action at the end was so powerful. The future isn’t predetermined; it’s a choice. It’s up to us to push for better interfaces, ones that harness the full potential of what our bodies can do. Why settle for a single finger when we have this incredible body that can do so much more?

What I think about the follow-up

It’s like the author just dropped a knowledge bomb and left me reeling with thoughts and arguments ricocheting around my head.

The responses they received were a mix of understanding and misconceptions. The author wasn’t seeking to solve the problem outright; they wanted to spark curiosity and inspire researchers to delve into unexplored territories. The idea was to lay down the issue and hope it would catch the eye of the right people who could initiate the necessary research. That’s a pretty bold move!

The analogy about technology evolution using Kodak’s camera was spot on. The iPad, like the black-and-white camera, is groundbreaking, but it’s clear something’s missing. The push should be towards a dynamic tactile medium, not just a flat, glassy screen with minimal haptic feedback.

Their take on voice interfaces was refreshing. While acknowledging the importance of voice, especially for certain tasks, they stressed the limitations when it comes to creation and deep understanding. Explorable environments, where you can physically manipulate things, seem like the real deal.

The exploration of gestural interfaces was intriguing. From discrete abstract gestures to waving hands in the air, each had its pros and cons, but none seemed to fully harness the potential of our hands and bodies in a three-dimensional world.

The part about brain interfaces hit hard. Why are we trying to bypass our bodies altogether? It’s like saying our bodies are inadequate for the digital age. It’s a bold reminder to adapt technology to suit our natural capabilities rather than forcing ourselves to adapt to it.

The quote about fingertips and their importance for development resonated deeply. It’s like saying if we don’t use certain faculties, we lose them. The comparison to limiting literature to Dr. Seuss for adults is both humorous and thought-provoking.

And the clever redirect about the length of the rant with the book recommendation at the end was a nice touch!

Week 10: Make a musical instrument

For our assignment, Nafiha and I drew inspiration from a synthesizer and a sampler to create our own musical instrument. Our instrument incorporates three buttons, a piezo buzzer, a potentiometer, and a bunch of wires and resistors. It is designed such that each button triggers a distinct melody, and by adjusting the potentiometer, the pitch is modified, consequently altering the played melodies.

Video:

Link: https://drive.google.com/file/d/1zvd5qZeavfn0oTLdWGMqWOIxTLay6gbp/view?usp=sharing

Code:

const int switch1Pin = 12;
const int switch2Pin = 8;
const int switch3Pin = 7;
const int potentiometerPin = A0;
const int buzzerPin = 3;

int currentMelody[8];  // array to store the current melody
int melodyIndex = 0;   // index of the current note in the melody
int isPlaying = 0;     // flag indicating whether a melody is currently playing

//melodies for each button
int melody1[] = {262, 330, 392, 523, 392, 330, 262, 196};//melody for switch 1
int melody2[] = {330, 392, 523, 392, 330, 262, 196, 262};//melody for switch 2
int melody3[] = {392, 523, 659, 523, 392, 330, 262, 330};//melody for switch 3

void setup() {
  pinMode(switch1Pin, INPUT_PULLUP);
  pinMode(switch2Pin, INPUT_PULLUP);
  pinMode(switch3Pin, INPUT_PULLUP);
  pinMode(potentiometerPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  //potentiometer value for pitch control
  int pitch = analogRead(potentiometerPin);

  // if switch 1 is pressed (pressed reads LOW, since the pins use INPUT_PULLUP)
  if (digitalRead(switch1Pin) == LOW && !isPlaying) {
    playMelody(melody1, pitch);
  }

  // if switch 2 is pressed
  if (digitalRead(switch2Pin) == LOW && !isPlaying) {
    playMelody(melody2, pitch);
  }

  // if switch 3 is pressed
  if (digitalRead(switch3Pin) == LOW && !isPlaying) {
    playMelody(melody3, pitch);
  }

  // if any switch is pressed while a melody is flagged as playing, stop it
  if ((digitalRead(switch1Pin) == LOW || digitalRead(switch2Pin) == LOW || digitalRead(switch3Pin) == LOW) && isPlaying) {
    noTone(buzzerPin);  // stop playing the melody
    isPlaying = 0;      // clear the flag so a melody can play again
  }
}

void playMelody(int melody[], int pitch) {
  // map the potentiometer reading (0-1023) to a pitch offset in Hz,
  // so turning the knob shifts every note of the melody up or down
  int pitchOffset = map(pitch, 0, 1023, -100, 100);

  // copy the melody into the currentMelody array
  memcpy(currentMelody, melody, sizeof(currentMelody));

  // play each note in the melody, shifted by the pitch offset
  for (int i = 0; i < sizeof(currentMelody) / sizeof(currentMelody[0]); i++) {
    tone(buzzerPin, currentMelody[i] + pitchOffset);  // tone()'s second argument is the frequency in Hz
    delay(250);
    noTone(buzzerPin);
  }

  // set the flag to indicate a melody has just been played
  isPlaying = 1;
}

In terms of improving our instrument, one potential feature could be incorporating additional sound effects through the use of the potentiometer. However, overall, working on this assignment was really fun, and we’re pretty pleased with the outcome.
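One way that could work, as a rough sketch (the vibrato mapping below is our own illustrative assumption; the pin names match the sketch above), is to use the potentiometer to set the depth of a simple vibrato, wobbling the note’s frequency while it plays:

// possible vibrato effect driven by the potentiometer (illustrative sketch)
void playNoteWithVibrato(int baseFreq, int durationMs) {
  // the pot sets the vibrato depth, from 0 Hz (off) up to +/-20 Hz
  int depth = map(analogRead(potentiometerPin), 0, 1023, 0, 20);

  // step through the note in 25 ms slices, alternating above and below the base frequency
  for (int t = 0; t < durationMs; t += 25) {
    int offset = ((t / 25) % 2 == 0) ? depth : -depth;
    tone(buzzerPin, baseFreq + offset);
    delay(25);
  }
  noTone(buzzerPin);
}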

Week 10: Reading Response

In this reading, Bret Victor discusses a vision of the future, sharing his observation about the central role of hands in our interactions with future technology, or, as he defines them, tools. In the physical world, hands serve countless activities and are rich with senses. Through just a touch, or by lifting an object, we can tell so much about it, making hands our primary means of understanding the world. In technological contexts, however, especially when we envision the future, hands primarily slide across glass screens: we can only sense the glass and manipulate the displayed content. His main point emphasizes the need to consider forms of interaction beyond the hands — we also have our bodies!

As he further discussed the two main functions of our hands—feeling and manipulation—I began to wonder: If we excessively use interactions solely via the glass screen, limiting our interaction with the real world and physical objects, would we lose some of our capabilities? He defined a tool as addressing human needs by amplifying human capabilities. If this were to happen, technology would become less of a tool.

Later, he responded to some of the readers’ comments, and one response indeed answered my question. Victor cites a neuroscientist’s remark about the “finger-blindness” caused by underusing the hands in childhood. It is arguably even worse than ordinary blindness: a blind person cannot locate things, but a finger-blind person cannot understand their meaning and value. What if, just what if, in the far, far future, we come to use only sliding and tapping on a screen and start to develop this kind of “finger-blindness”? Though it might seem far off, we are already experiencing some changes. For instance, some people have become so accustomed to typing that they feel less comfortable writing with pen and paper. What if we no longer remember what it feels like to turn the page of a book?

What he suggests is that in choosing the future, we also choose to shape it through our actions. For that, he proposes thinking of types of interaction beyond just our hands, aiming for a “dynamic medium that we can see, feel, and manipulate.” I do think such interactions might be somewhat expensive, considering that every kind of technology has its strengths and weaknesses. Take VR, for example — we can indeed see things in 3D, but it remains a visual illusion in which we cannot physically touch and feel the things around us. What I am suggesting aligns with what he pointed out: we need to start thinking about diverse types of interactivity to balance the excessive use of any single one.

Week 10 – Musical Instrument

Concept

I’ve been trying to learn the piano for the past year, so when we had to make a musical instrument I definitely knew I wanted to make something tangentially related to the piano.

Another thing I wanted to do with this week’s assignment was to add more ‘design wrapping’ to it. I liked what the professor said about Week 9’s projects — sometimes a well-wrapped simple project is much nicer than a messy, technically impressive one — so before even designing what I wanted to do, I thought about the presentation of materials and how I could use them to enhance the project.

I thought of something like Nintendo’s cardboard piano, and it sounded like it would be fun to implement. However, I looked at the documentation for Arduino’s tone() function and learned that only one tone can be played at a time, even with multiple piezo buzzers (illustrated in the snippet after the image below). I didn’t like the idea of creating a piano that can only play one note at a time, plus I thought the idea of building a piano as an Arduino instrument may have been overdone.

[Image: Nintendo’s cardboard piano for the Switch]
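To make that limitation concrete: per the Arduino documentation, if a tone is already playing on one pin, a call to tone() on a different pin has no effect until the first tone is stopped. A minimal illustration (the pin numbers here are arbitrary assumptions):

// minimal demo of the one-tone-at-a-time limit of tone()
const int buzzerA = 3;
const int buzzerB = 5;

void setup() {
  tone(buzzerA, 262);        // starts a 262 Hz tone on buzzerA
  tone(buzzerB, 330);        // no effect: a tone is already playing on another pin
  delay(1000);
  noTone(buzzerA);           // free the timer
  tone(buzzerB, 330, 1000);  // now this tone plays, for one second
}

void loop() {}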

I still opted to create something piano-like, but instead of using touch as input, I decided to use an ultrasonic sensor, which makes it a theremin. This turns the one-tone-at-a-time limit into a feature rather than a bug. However, during implementation I realized the ultrasonic sensor isn’t very precise, so I wasn’t able to add the black keys to the piano without sacrificing accuracy. One more problem that came to mind when using the ultrasonic sensor to determine which note to play was: how often do I play the note? To address this, I added a potentiometer that sets the tempo of the notes, so the theremin can play one note every 400-1000 ms depending on the position of the potentiometer. I realized I should also add some visual feedback, as it might otherwise be difficult to set the desired value, so I added an LED that blinks at the same rate the theremin is playing. If the LED is blinking at the pace you want the song played, you’re good! A rough sketch of the logic is below.
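Roughly, the logic looks like this (the pin numbers and the distance-to-note mapping below are illustrative assumptions, not the exact values from my build):

// rough sketch of the ultrasonic theremin logic
const int trigPin = 9;    // ultrasonic sensor trigger
const int echoPin = 10;   // ultrasonic sensor echo
const int potPin = A0;    // tempo potentiometer
const int ledPin = 13;    // tempo feedback LED
const int buzzerPin = 3;  // piezo buzzer

// the white keys of one octave, C4 to B4
const int notes[] = {262, 294, 330, 349, 392, 440, 494};

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

long readDistanceCm() {
  // send a 10 microsecond trigger pulse and time the echo
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);  // give up after 30 ms
  return duration / 58;                           // microseconds to centimeters
}

void loop() {
  // the potentiometer sets the tempo: one note every 400-1000 ms
  int tempo = map(analogRead(potPin), 0, 1023, 400, 1000);

  long distance = readDistanceCm();
  if (distance > 0 && distance < 35) {
    // each 5 cm band of hand distance selects one of the seven white keys
    int index = constrain((int)(distance / 5), 0, 6);
    tone(buzzerPin, notes[index], tempo / 2);
  }

  // blink the LED at the same rate the notes are played
  digitalWrite(ledPin, HIGH);
  delay(tempo / 2);
  digitalWrite(ledPin, LOW);
  delay(tempo / 2);
}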

Video

Week 10 – Reading Reflection

Pictures Under Glass requires more than just your hands — it requires your eyes too. You can’t navigate Pictures Under Glass without looking at where your fingers are, as there is no touch feedback from the screen. With analog buttons, you can ‘feel’ your way around an interface. For example, many modern cars only have a mounted tablet as the entertainment system, and sometimes when I’m driving and want to skip a song, there is no safe way to do so without momentarily taking my eyes off the road to find the ‘>’ next-song button on the tablet. In older cars with analog buttons, I don’t have to look to know where the button is; I can just remember that the next-song button is second from the right, and with one hand on the wheel and eyes on the road, I can interact with the car’s entertainment system.

I also found it interesting that in his follow-up he brought up the hypothetical scenario of Monet saying to his canvas, “Give me some water lilies. Make ’em impressionistic.” Today, with the existence of generative AI and advanced natural language processing, it seems the world of user interfaces has doubled down on the direction of Pictures Under Glass, further reducing the number of middlemen between the brain’s intent and the wanted outcome. What I mean is that hands and physical movements are, in part, a means to an end. To open a jar, we use our hands because we can’t simply use our brain to open it. People spend years in art school learning to draw well, but generative art can accomplish with a single sentence and a few minutes what would take an untrained artist years of training. ChatGPT further reduces the effort needed for humans to interface with the world — in the past, one would have to visit a library to read about a specific subject. Then came the internet, and one could search Google for the information needed. Now with ChatGPT, you can simply ask it whatever you need. The effort needed to accomplish anything is reduced, and I’m not sure if this is a good or bad thing. On this, the author says that “if they [brain interfaces] bypass the body, then we’ve just created a future where people can and will spend their lives completely immobile. Why do you want this future? Why would this be a good thing?”

Reading Reflection: Week 10

After reading the author’s rant on today’s tech scene and the follow up article, I found myself nodding in agreement. It’s as if they put into words what many of us have been feeling – that our tech progress feels more like cautious steps instead of the significant leaps we had hoped for. The call for a more forward-thinking approach isn’t just criticism; it’s a plea for innovation that genuinely enhances how we engage with the world.

The author’s observation regarding our bodies adapting to technology, rather than the other way around, resonates with the changing nature of our interactions with modern devices. For example, in our engagements with smartphones and virtual reality, the absence of tactile richness becomes apparent. With smartphones, our interactions are primarily confined to tapping and swiping on flat screens, lacking the tangible feedback that our hands and fingers are accustomed to in the physical world. This shift towards touchscreens, while undoubtedly convenient, introduces a disconnect between our tactile senses and the digital interfaces we navigate daily. Similarly, in the realm of virtual reality, immersion often falls short due to a lack of genuine tactile experiences. While VR technology can visually transport us to breathtaking landscapes and scenarios, the tactile feedback remains limited, hindering the full potential of our hands in interacting with this virtual realm. The conceptualization of virtual environments replicating real-world actions and fully leveraging the capabilities of our hands and bodies presents an intriguing prospect. Imagine not just seeing but feeling the virtual world – a paradigm shift that challenges the conventional boundaries of human-computer interaction.

As I contemplated these notions, I couldn’t help but reflect on the potential future resembling the video and how to prevent technology from becoming devoid of feelings and feedback. This isn’t to say that we lack feedback from current technology, but it prompts us to question whether we are achieving the best possible feedback. In essence, this exploration not only echoes the author’s insightful observations but also introduces a layer of personal introspection regarding the implications and responsibilities tied to our technological trajectory. It serves as a call to action, urging us to actively contribute to shaping a future where technology seamlessly aligns with our human experiences.

Week 10: Reading Response

“A Brief Rant on the Future of Interaction Design” and the follow-up article:

At the beginning of the reading, right after I watched the video, I was skeptical of the point the author was trying to make. However, as I read through the entire article, I started nodding my head and agreeing with everything the author was saying. Coincidentally, right before I read the article, I was practicing the violin. Because I hadn’t touched the violin in almost six months, my hands were “out of shape,” and I had forgotten to warm up before I started playing. So obviously, after a few minutes, my hands started cramping. In the past 11 years of my life, this was the longest break I had taken from playing the violin, so I felt that my hands were communicating with me through the pain, reflecting my unconscious sadness that I had abandoned one of my favorite hobbies for that long. Reading the article made me aware of the emotions we feel when we use our hands to do basic things, from brushing our teeth to playing the violin. I realize our hands are not just a body part we use to do things but rather a part of who we are — an important sense that makes us more aware of ourselves and our emotions somehow. I realize that replacing the use of our hands with the touch of a button, or with voice control like Siri or Alexa, affects our emotions and makes us numb to the feelings we naturally have.

After I finished reading the article, I asked myself when it was written, and to my surprise, I found out that it was written 12 years ago. I started thinking back to 2011 and realized that when this article was written, the iPhone 4s had just been released, and the iPad had only been out for a year. I wonder what the author thinks now. Unfortunately, the point the author was trying to make has become the reality we live every day. We all carry small glass screens in our pockets that control every single aspect of our lives. And yes, most children are now glued to iPads instead of playing with toys and running around their neighborhood playgrounds. In the response the author wrote, he mentioned that there are ultimately two functionalities that are changing: creating and understanding. He mentions that we cannot create with the use of technology because technology lacks creativity and emotion, but it can be an aid to help us understand things. This got me thinking about AI and how people are using ChatGPT to create, understand, and do basically everything else. How can we control the advancement of technology so that it doesn’t take over our lives and turn us into lifeless robots? Will our future really look like the video? How can we stop it before it’s too late?

reading reflection: week 10

First, I should make a point of saying that I have always believed writing to be conversational. When you write something, you open a conversation that remains unfinished business until you’ve heard back from your readers. I understand that this is not how a lot of people see the position of a writer, but it’s just my view, and I’ve lived by it (that’s why I’ve emailed all kinds of authors, from John Green to Noam Chomsky). So, when I opened the second link under this week’s readings, I was pleasantly surprised to see an author continuing the conversation with his readers, in a humorous but not really condescending tone, which I appreciated very much.

Either way, maybe the author won me over with his jokes, but I feel inclined to agree more with him than with his critics. I have long harbored a slight distaste for VR/AR technologies, but I could never put a finger on where it was coming from. The author’s “rant” offered the words I was looking for: this obsession with the Pictures Under Glass genre of technology feels like a disservice to human glory. These technologies simply do not engage the full potential for interaction that humans possess, and by being such limited creations, they fall short of being tools that “fit the person.” It’s like giving a lumberjack a butter knife to chop down a tree. The lumberjack will probably just do his job with the knife because that’s all he has, but the creators in his society owe it to him to start thinking in the general direction of a chainsaw instead of how to make the butter knife more aerodynamic. Simply because the lumberjack is capable of so much more.

I can’t trace how we ended up here, though. There are paintings from the year 1900 predicting what life in the 21st century would look like. Just a few generations ago, people were expecting so much more than thin LCDs or holographic interaction panels. But somehow, someone along the way popularized these so-called Pictures Under Glass technologies, and now we’re in this strange limbo.


There’s a short film from 1967, called 1999 A.D., which shows the life of a family in the future. It predicted many of the things we have today: cell phones, email, and online shopping. However, these technologies are depicted in very different forms from how we know them now. For example, to reply to an electronic mail, the father has to handwrite his response on a machine with a glossy writing surface. When the mother wants to go shopping, she turns some knobs on her “fingertip shopping” machine and is tuned into a real retailer, where a camera automatically scans across items for sale. These predictions are fun to look at now, with the knowledge that specialized, bulky machines for emailing or remote shopping aren’t the most convenient or realistic way to go. Still, it goes to show that at some point in history, humans did have the vision to create more dynamic mediums of interaction “that we can see, feel, and manipulate,” but it seems we have since gone astray.

Week 10 Reading

After reading this article (or rant), I am amazed by the author’s vision. It was not until now that I realized the explicit difference between current production tools and the tools we used in the past. It is not that the tools of the past are better than the tools we use now, but personally, I always felt something was missing in today’s tools. Now I know the thing that was missing was feedback.

I am not saying that the tools we use now do not have feedback. On the contrary, they have plenty of ways to provide it: ringtones, vibrations, different kinds of displays, and so on. However, all of that feedback shares one problem: it does not represent an aspect of the action performed. Our brains have to process what the response or feedback means, and that is what I feel is missing in current tools. If I use a hammer, when I hit the nail it pushes back with a force, so I know I hit something. The meaning of this feedback is completely intuitive. In this way, traditional tools are easier to learn and more intuitive.

However, I remain doubtful of the claim that the more we use our hands, the better. Yes, it is true that our fingers have an incredibly rich and expressive repertoire, and we improvise from it constantly without the slightest thought. But having that repertoire does not mean we need to use all of it. Whether a tool is better or not should be determined not by how much of our hands’ repertoire it uses, but by how intuitive it is to use. Therefore, even though I agree with the author that the iPad or phones are not perfect tools, I remain doubtful that the tools of the future will use that many hand functions.