Week 10: Reading Response

“A Brief Rant on the Future of Interaction Design” and the follow-up article:

At the beginning of the reading, right after I watched the video, I was skeptical of the point the author was trying to make. However, as I read through the entire article, I started nodding my head and agreeing with everything the author was saying. Coincidentally, right before I read the article, I had been practicing the violin. Because I hadn’t touched the violin in almost six months, my hands were “out of shape,” and I had forgotten to warm up before I started playing. So, unsurprisingly, after a few minutes my hands started cramping. In the past 11 years of my life, this was the longest break I had taken from playing the violin, so I felt that my hands were communicating with me through the pain, reflecting my unconscious sadness at having abandoned one of my favorite hobbies for that long. Reading the article made me aware of the emotions we feel when we use our hands to do basic things, from brushing our teeth to playing the violin. I realize our hands are not just a body part we use to do things but a part of who we are: an important sense that somehow makes us more aware of ourselves and our emotions. I realize that replacing the use of our hands with the touch of a button, or with voice control like Siri or Alexa, affects our emotions and makes us numb to the feelings we naturally have.

After I finished reading the article, I asked myself when it was written, and to my surprise, I found out that it was written 12 years ago. I started thinking back to 2011 and realized that when this article was written, the iPhone 4S had just been released, and the iPad had only been out for a year. I wonder what the author thinks now. Unfortunately, the point the author was trying to make has become the reality we live every day. We all carry small glass screens in our pockets that control every single aspect of our lives. And yes, most children are now glued to iPads instead of playing with toys and running around their neighborhood playgrounds. In the response the author wrote, he mentioned that there are ultimately two functionalities that are changing: creating and understanding. He mentions that we cannot create with technology because technology lacks creativity and emotion, but that it can be an aid to help us understand things. This got me thinking about AI and how people are using ChatGPT to create, to understand, and for basically everything else. How can we control the advancement of technology so that it doesn’t take over our lives and turn us into lifeless robots? Will our future really look like the video? How can we stop it before it’s too late?

Reading Reflection: Week 10

First, I should make a point of saying that I have always believed writing to be conversational. When you write something, you open a conversation that remains unfinished business until you’ve heard back from your readers. I understand that this is not how a lot of people see the position of a writer, but it’s just my view, and I’ve lived by it (that’s why I’ve emailed all kinds of authors, from John Green to Noam Chomsky). So, when I opened the second link under this week’s readings, I was pleasantly surprised to see an author continuing the conversation with his readers, in his humorous but not really condescending tone, which I appreciated very much.

Either way, maybe the author won me over with his jokes, but I feel inclined to agree more with him than with his critics. I have long harbored a slight distaste for VR/AR technologies, but I could never put a finger on where it was coming from. The author’s “rant” offered the words I was looking for: this obsession with the Pictures Under Glass genre of technology feels like a disservice to human glory. These technologies simply do not engage the full potential for interaction that humans possess, and by being such limited creations, they fall short of being tools that “fit the person.” It’s like giving a lumberjack a butter knife to chop down a tree. The lumberjack will probably just do his job with the knife because that’s all he has, but the creators in his society owe it to him to start thinking in the general direction of a chainsaw instead of how to make the butter knife more aerodynamic, simply because the lumberjack is capable of so much more.

I can’t trace how we ended up here, though. There are paintings from the year 1900 predicting what life in the 21st century would look like. Just a few generations ago, people were expecting so much more than thin LCDs or holographic interaction panels. But somehow, someone along the way popularized these so-called Pictures Under Glass technologies, and now we’re in this strange limbo.


There’s this short film from 1967, called 1999 A.D., which shows the life of a family in the future. It predicted many of the things that we have today: cell phones, email, and online shopping. However, these technologies are depicted in very different forms from how we know them today. For example, to reply to an electronic mail, the father has to handwrite his response on a machine with a glossy surface to write on. When the mother wants to go shopping, she turns some knobs on her “fingertip shopping” machine and is tuned into a real retailer, where a camera automatically scans across items for sale. These predictions are fun to look at now, with the knowledge that specialized, bulky machines for emailing or remote shopping aren’t the most convenient or realistic way to go. Still, it just goes to show that at some point in history, humans did have the vision to create more dynamic media of interaction “that we can see, feel, and manipulate,” but it seems that we have since gone astray.

Week 10 Reading

After reading this article (or rant), I am amazed by the author’s vision. It was not until now that I realized the explicit difference between current production tools and the tools we used in the past. It is not that the tools of the past are better than the tools we use now, but personally, I always felt something was missing in today’s tools. Now I know the thing that was missing was feedback.

I am not saying that the tools we use now do not have feedback. On the contrary, they have plenty of ways to provide it: ringtones, vibrations, different kinds of displays, and so on. However, all of that feedback shares one problem: it does not represent an aspect of the action performed. Our brains have to process what the response or the feedback means, and that is what I feel is missing in current tools. If I use a hammer, when I hit the nail it pushes back with a force, so I know I hit something. The meaning of this feedback is completely intuitive. In this way, traditional tools are easier to learn and more intuitive.

However, I remain doubtful of the claim that the more we use our hands, the better. Yes, it is true that our fingers have an incredibly rich and expressive repertoire, and we improvise from it constantly without the slightest thought. However, just because we have that repertoire does not mean we need to use all of it. I think whether a tool is better or not should be determined not by how much of our hands’ repertoire it uses, but by how intuitive it is for us to use. Therefore, even though I agree with the author that iPads and phones are not perfect tools, I remain doubtful that the tools of the future will use that many hand functions.

Week 9 – Reading Response

Greatest Hits (and Misses)
The reading went over how different materials were used in different projects. I think there are two ways of approaching a project: come up with the idea first and then look at available technologies or methods, or look at available technologies or methods first and think about what you can do with them.

I wouldn’t say either approach is superior to the other. However, I do want to say that knowing what’s available is helpful for thinking outside the box, remixing, and creating your project. In this sense, it was interesting to see how different technologies were used. Some were crazier than others, but regardless, they all made it onto my reference list.

Making Interactive Art
I think the reading points out well how certain performers or artists tend to directly guide the audience to think a specific way or come to a specific conclusion. I think it’s easy to fall into this pitfall as an interactive media creator, as there’s a limit to the types and cases of interaction we can anticipate as creators, which can lead us to limit the conclusions and emotions our project creates.

I think this can work either against you or in your favour. It is true that we, as creators, should try not to have a set conclusion and simply ‘educate’ our audience about it. At the same time, however, interactive media holds the power to guide people to experience certain things and, in return, think in a certain way. I think this is a strength not many forms of media have. For instance, a painting may go through many interpretations by different individuals. While I agree the charm lies within this characteristic, it is also very true that the painter may fail to deliver the message with the correct context and depth.

Week 9 – Production

Concept

When I thought about what I could do with 2 LEDs, I again thought about the colours. Looking at buttons and red and blue LED lights, I thought of an old-style game that used to be played on primary school sports day: “Blue Flag, Red Flag.”

How you play the game:
There’s one referee who gives orders like…
– Hold up red flag
– Don’t hold up red flag and hold up blue flag
– etc.
The game makes much more sense and has more thrill in Korean; the grammatical word order of English kind of gives everything away.

Anyways, I wanted to create a game control for this game.

Materials I Used:
– Arduino Uno
– 2 LEDs (red & blue)
– 4 resistors (330 ohm)
– 2 push button switches
– breadboard
– jumper wires

Video: (I just realized the top part is a little cut off, but the subtitles at the top of the video are examples of the orders)

Code:

const int redButton = 3;    // push button read as a digital input
const int blueButton = A1;  // push button read through an analog pin
const int redLight = 6;
const int blueLight = 5;

void setup() {
  Serial.begin(9600);
  pinMode(redLight, OUTPUT);
  pinMode(blueLight, OUTPUT);
  pinMode(redButton, INPUT);
}

void loop() {
  // read both buttons each cycle
  int redState = digitalRead(redButton);
  int blueState = analogRead(blueButton);

  // digital pin: any non-zero reading means the button is pressed
  if (redState != 0)
  {
    Serial.println("red high");
    digitalWrite(redLight, HIGH);
    delay(100);
    digitalWrite(redLight, LOW);
  }
  // analog pin: treat a reading above 500 (out of 0-1023) as a press
  if (blueState > 500)
  {
    Serial.println("blue high");
    digitalWrite(blueLight, HIGH);
    delay(100);
    digitalWrite(blueLight, LOW);
  }
}


The code is very simple: it just reads the buttons and turns on the LED lights. It is notable that the two button presses are read differently (one as digital input, one as analog input), so the corresponding if statements differ slightly.
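A minimal sketch of the two press-detection rules described above, pulled out as plain functions (the 500 cutoff comes from the code; the function names are my own, not part of the original sketch):

```cpp
// A digital pin reads as 0 or 1, so any non-zero value means "pressed".
// An analog pin reads 0-1023, so a press is a reading above a threshold.
const int ANALOG_PRESS_THRESHOLD = 500;  // same cutoff as the sketch above

bool digitalPressed(int state) { return state != 0; }

bool analogPressed(int reading) { return reading > ANALOG_PRESS_THRESHOLD; }
```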


Week 9: Digital and Analog

Concept 

I tried to create an interactive traffic signal by integrating a push button and a potentiometer connected to two LEDs. Pressing the push button (the digital input) illuminates the red light, while the potentiometer, serving as the analog input, controls the brightness of the yellow light. This dual-input system presented a significant challenge, as initially I could only manage the two inputs separately.
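The brightness control described here typically boils down to mapping the potentiometer’s 0–1023 analog reading onto the 0–255 PWM range. A minimal sketch of that mapping, mirroring Arduino’s built-in map() (the pin names in the comments are placeholders, not the actual wiring):

```cpp
// Linearly map x from [inMin, inMax] to [outMin, outMax],
// as Arduino's map() does (integer arithmetic, result truncated).
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// In loop() this would look roughly like:
//   int raw = analogRead(POT_PIN);                          // 0..1023
//   analogWrite(YELLOW_PIN, mapRange(raw, 0, 1023, 0, 255));
//   digitalWrite(RED_PIN, digitalRead(BUTTON_PIN));         // button -> red LED
```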

Reflection: 

Coding proved to be an inspiring task, particularly in merging the analog and digital components seamlessly. Despite the initial difficulty, this project served as a valuable learning experience, providing insights into the intricacies of combining different inputs and coding a cohesive interactive system. Looking ahead, the incorporation of the third traffic-light color, green, looms as the next exciting phase, promising further growth and mastery of interactive design.

Materials used: 

  1. Arduino board 
  2. Potentiometer 
  3. 2 LEDs (red and yellow)
  4. Resistors 
  5. Tactile push-button switch
  6. Breadboard 
  7. Jumper wires

Video Links

Week 9 Arduino

Concept

For this assignment, I used the ultrasonic sensor to make something like a parking sensor. If the distance is above a certain threshold, the LED displays green. As the distance drops below the threshold, the LED gradually turns red over a range of 20 centimeters; beyond that, it turns completely red. There are two switches used as digital inputs. One is the system’s overall switch, which shuts the entire system off when open. Once it’s closed, a blue LED lights up, indicating the system is functioning. The other button, when pressed, records the distance the sensor currently reads and stores it as the new threshold.
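The green-to-red fade described above can be captured in one small function. This is a sketch of the idea under my own assumptions (a linear fade, with red intensity expressed as a PWM value), not the author’s actual code:

```cpp
// Red-channel intensity (0-255) for the parking indicator:
// 0 (all green) at or beyond the threshold, rising linearly to 255
// (all red) as the measured distance closes a 20 cm band below it.
const int FADE_BAND_CM = 20;

int redIntensity(int distanceCm, int thresholdCm) {
  if (distanceCm >= thresholdCm) return 0;   // far enough: stay green
  int below = thresholdCm - distanceCm;      // how far inside the band
  if (below >= FADE_BAND_CM) return 255;     // closer than the band: all red
  return below * 255 / FADE_BAND_CM;         // linear fade in between
}

// The green channel would be the complement, e.g. 255 - redIntensity(d, t),
// and the stored threshold t is updated whenever the second button is pressed.
```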

Video Example:

Week 9: Reading Response

Artists often provide detailed interpretations and instructions alongside their interactive pieces, potentially limiting participants’ freedom to interpret and engage. The author suggests that interactive art should be seen as a conversation starter, where the created environment or instrument is a catalyst for participants to take meaningful actions and derive their own understanding of the artist’s expression. This approach emphasizes the interactive nature of the art form and encourages a more open-ended and dynamic engagement between the artist, the work, and the audience. It also helps the artist interact with the audience and gather new perspectives and ideas that could feed into new artworks. Giving the audience space and freedom can genuinely change their perspective on the artwork.

The author also describes listening as the next crucial step in creating interactive art. Artists are encouraged to observe and interpret how individuals engage with the designed elements, noting emotional responses, comprehension levels, and overall reactions. The variety of responses, including excitement, indifference, or sharing newfound knowledge, is part of an ongoing conversation between the artist and the audience. More than perfecting an interactive artwork, the performance phase is significant because it involves the real-time, unpredictable responses of participants. During this phase, the artist witnesses how individuals interpret and engage with the interactive elements. It is a live, evolving dialogue between the artist’s creation and the audience’s reactions, and it allows for a deeper understanding of the diverse ways people interact with and derive meaning from the artwork.

Week 9: Reading Response

I believe that both of these readings highlight two important facets of physical computing.

Physical Computing’s Greatest Hits (and Misses) is an informative piece that aims to categorize (and explain) some of the most notable genres of physical computing projects. In the preamble to the piece, the author states that although certain themes recur, that doesn’t mean the projects using those ideas are unoriginal. In fact, they go on to assert that these forms of interaction allow for a lot of innovation and creativity. In my opinion, this is incredibly important, as it reminds us of a key principle in physical computing: the sensory input need not aim to be unique, but what we do with that input should.

Making Interactive Art: Set the Stage, Then Shut Up and Listen highlights a second key component of physical computing. Physical computing is inherently an interactive paradigm and necessitates user-computer interaction. It might be tempting to provide clear instructions or an elaborate backstory for the project, but part of the beauty of interactive artworks is the limitless ways one can play around with them. Letting the user explore the project themselves is the only way to truly realize the potential of a physical computing project.