
Week 10 – Reading Reflections

A Brief Rant on the Future of Interaction Design

From a psychological perspective, the author’s critique makes sense because it values tactile, sensory experience over the sleek aesthetics of an interactive system. For instance, in a book I once read, O’Keane (2022) suggests that “one makes sense of the world through one’s senses and this becomes the basis for one’s interpretation of the world and for one’s memory.” There is in fact a type of memory called haptic memory, the form of sensory memory specific to touch stimuli. So, I believe the author’s argument is clear and well grounded in psychological research.

However, the article did not offer clear, specific alternatives to the status quo. I think the author could have pointed to options available today that focus on the haptic aspect. For example, Humane just launched its much-awaited hardware device, which has been purpose-built for the new AI era from the ground up: here is the video. It represents a more human-centered approach to technology, and it gives us an option for manipulating technology with our hands.

A follow-up article

Most of my own critiques and questions from the previous article were answered in this follow-up. I believe authors should respond to critique in order to engage with readers, gain valuable insights, improve their work, and build a supportive community. Responding to critique fosters a connection and enhances the collaborative nature of the creative process.

I genuinely believe that his ideas about giving up on our bodies are important to talk about. This reminded me of a conversation I had with my friend about what art is. Today, we tend to think of art as AI-generated images, temporary installations, etc. However, I think art is deteriorating as we rely less on the hands-on craftsmanship that once went into sculpture and architecture (e.g., the Acropolis of Athens). By embracing technology, we risk losing the tactile connection that brings authenticity and soul to artistic creations.

Week 10 – Reading Reflection

When I was young, I used to watch science-fiction movies portraying the future. I’ve noticed that this vision shifts significantly over time because, whereas filmmakers strive to extrapolate pre-existing technologies, inventors revolutionize them by bringing in new ideas and perspectives on the world. For instance, I recall an old movie in which the main characters could instantly get any book, which would appear in a special box if they said its title. Now, with the invention of the Internet and smartphones, we can get the same result, but the way we do it is remarkably different. In that sense, I absolutely support Bret Victor’s position that we should stop visualizing our future as an advanced version of the present and instead think about how we might revolutionize it and create something entirely new.

Regarding the use of “Pictures Under Glass,” I believe it will still be in use for at least three decades, but personally, I would like to see a transition to a new form of control. Bret Victor promotes the idea that, similar to how we use physical objects and tools, we should use technology in a naturally human way. I largely support the idea of involving human actions as a way of controlling things, but I don’t think it’s the only way to go, as it’s hard to imagine a world where every control involves action or physical sensation. It may be implemented alongside other technologies, but not as the only path of technological development.

Neil Leach Talk Reflection

Neil Leach’s talk on AI went pretty much as I expected. After attending another AI talk the day before, it seems that people, especially during Q&A sessions, are very interested in the ethical implications of AI. And why wouldn’t they be? My capstone project focuses on surveillance systems and facial recognition technologies that are used to target marginalized groups in oppressive contexts. When I see a Midjourney or DALL·E image, I’m not amazed by how advanced our text-to-image technology has become. Instead, I struggle with the fact that these deep learning models are also used for facial recognition, deepfake technology, and the spread of fake news. They are likely to replace countless blue-collar and white-collar jobs. For me, the negatives far outweigh the positives of using copyrighted datasets without consent to create images. The excuse of the “black box” has been used too often to argue against regulating AI, but I believe there needs to be a pause, if not regulation. The legal process of regulating AI cannot keep up with the rapid pace at which AI is transforming, and it is a frightening time. I don’t care much about architecture being built through AI when these deep learning models have been repeatedly used in surveillance systems by regimes like Israel in its occupation, leading to the destruction in Gaza, countless lives lost, and buildings in rubble. What’s the point of creation when it comes at the cost of life?

Israel/OPT: Israeli authorities are using facial recognition technology to entrench apartheid

Week 10 Reading Reflection

Bret Victor’s rant presents a point of view I hadn’t really considered before. It’s important to keep in mind that the article was written in 2011, when touchscreen technology was still in a rather primitive state. At that time, the way we interacted with devices was a topic of contention. Today, touchscreen interaction has become the norm, and it doesn’t seem likely to change anytime soon. While there may be some introduction of haptic gimmicks, it appears that we are moving away from a tactile, button-centric approach, as we have seen in the transition from keypad phones to touchscreen phones. I still remember when BlackBerry was the top phone when I was young. The switch from rotary phones to even keypad phones must have been revolutionary at some point.

What the article made me more aware of is the sense of touch. I hadn’t considered how complex our hands are in terms of the different sensory information they gather, such as weight distribution and temperature. It relates to Bret’s desire to explore and stay in touch with this haptic-centric view. It reminded me that I hadn’t played the guitar in a while, so I picked it up. You know, the first time you play the guitar after a long break, your fingertips kind of hurt, even though they are callused from before. There was a visceral reaction when I played the instrument, unlike when I play guitar in GarageBand, for example. I feel like I have more control over the sound of the guitar: the pressure I put on the strings, palm muting, and sliding along the strings. All of these actions provide such good feedback when I’m actually playing the instrument. After reading the article, I became more appreciative of this.

Neil Leach Reflection

There’s a TV show called Westworld starring Anthony Hopkins and Evan Rachel Wood. The basic premise of the show is that Anthony Hopkins’s character, Doctor Robert Ford, and his partner, Arnold, built a fake world filled with humanoid robots that look exactly like humans, called ‘hosts.’ This fake world is a fantasy park set up like the Wild West, so that if humans from the real world want to know what it is like to shoot cowboys, ride trains, and solve mysteries with pretty barmaids, they can. What Doctor Ford realizes too late is that even though he built these hosts with his own hands, they were conscious the whole time. And when they realize their consciousness, they develop a vengeance against real-world humans for shooting and raping them over and over, just to play a game.

Anthony Hopkins’s character said something that has forever stuck with me. He gets asked, “So what’s the difference between [a host’s] pain and yours?” And he replies:

“Between [a host] and me? This was the very question that consumed Arnold, filled him with guilt, and eventually drove him mad. The answer always seemed obvious to me. There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can’t define consciousness because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, [the hosts] are not missing anything at all.” 

I thought of all this when Professor Leach was asked at the end what the difference really is between how we think and how artificial intelligence thinks. I had a teacher I adored in my sophomore year of high school: eighty-year-old Mr. Daly. But his memory was in its twilight years, and he would tell us the same stories, would answer our questions with the same responses we had heard before. And I found that without our memory to contextualize the stories of our lives, given the same set of variables, placed in the same situations, like pressing a button, we elicit the same responses. Like we ourselves are robots, spitting out outputs when given certain inputs. And I wondered how much we really are in control of. I keep concluding that we’re really not in control of much. Not much at all.

So if we’re not really in control of much, as the Great Artificial Intelligence Upset draws closer and closer, how do I avoid becoming just another casualty in the great turning of the world? The ocean makes some waves bigger than others, and during those times, it’s up to you to swim or drown. I have a literature professor and, God bless his heart, at the beginning of the year, he would make fun of ChatGPT, talking about how there are things humans can do that artificial intelligence will never be able to do. I could see him holding onto the last threads of his fading profession, and I knew he was not the guy to follow. On that same day, my favorite design professor said, “Until Artificial Intelligence overtakes us… and it will overtake us…” and I knew he was hip to what was going on. The difference between Literature and Design majors… the stereotypes write themselves.

I’ve been reading a book called How To Think Like A Great Graphic Designer, and in it, there’s a designer who says, “The right answer is always the obvious one. The one that was in your face the whole time but you didn’t think of until the last second. The one that makes you go, ‘How could I not have seen it before?’” And Professor Leach reminded me of this when he said, “AlphaGo showed us that moves humans may have thought are creative, were actually conventional.” The strategic brilliance of Artificial Intelligence is that it’s able to see the obvious answer right from the beginning, the one that we should have all seen before. 

I also want to mention an episode called “The Swarm” from the TV Show Love, Death, and Robots. The premise of this episode is that there is an alien hive called “Swarm” that dominates every other species by absorbing them into its hive. Like Artificial Intelligence, every member of the hive knows what the other members know, and it is through this collective consciousness, this seamless teamwork, that they thrive. And with the levels of competition that divide us, sometimes I look at ourselves, and think that for all of our brilliance, I don’t know if we’re going to make it out of here alive. I thought about what Professor Leach said in response to my question, that between the competitors and the collaborators, while there’s nothing you can do about all the people in the world trying to beat each other out, you can choose for yourself to be on the side of the collaborators. And isn’t that what Rumi said all those years ago? “When I was young I wanted to change the world. Now that I am old, I want to change myself.” Amongst all this noise of consciousness and uncertainty, I can choose for myself what my place in the world will be throughout this. I have to believe in the power of that. 

Musical instrument

Concept

Our initial idea was to create a piano using the ultrasonic sensor. However, after realising this idea was too mainstream and simple, we decided to create an instrument meant to be played by two people, since playing it involves using three hands.

Process

// Map a raw photoresistor reading to one of three light levels.
// Each level contributes a different digit (high = 100, mid = 10, low = 1),
// so summing the three sensors yields a unique code per combination.
// Using >= means the boundary values 250 and 500 are handled as well.
int lightLevel(int sensorValue) {
  if (sensorValue >= 500) return 100;  // high
  if (sensorValue >= 250) return 10;   // mid
  return 1;                            // low
}

void loop() {
  // read the input on analog pins 1-3:
  int sensorValue1 = analogRead(A1);
  int sensorValue2 = analogRead(A2);
  int sensorValue3 = analogRead(A3);

  switchState = digitalRead(switchPin);

  // Recompute the state from scratch each pass so the digits do not
  // accumulate across loop iterations.
  currentState = lightLevel(sensorValue1)
               + lightLevel(sensorValue2)
               + lightLevel(sensorValue3);

  if (switchState == 0) {
    switch (currentState) {
      case 3:   // 3 low
        tone(8, NOTE_B3, 250);
        delay(250 * 1.30);
        break;
      case 12:  // 2 low, 1 mid
        tone(8, NOTE_C4, 250);
        delay(250 * 1.30);
        break;
      case 21:  // 2 mid, 1 low
        tone(8, NOTE_D4, 250);
        delay(250 * 1.30);
        break;
      case 30:  // 3 mid
        tone(8, NOTE_E4, 250);
        delay(250 * 1.30);
        break;
      case 102: // 1 high, 2 low
        tone(8, NOTE_F4, 250);
        delay(250 * 1.30);
        break;
      case 111: // 1 high, 1 mid, 1 low
        tone(8, NOTE_G4, 250);
        delay(250 * 1.30);
        break;
      case 120: // 1 high, 2 mid
        tone(8, NOTE_A4, 250);
        delay(250 * 1.30);
        break;
      case 201: // 2 high, 1 low
        tone(8, NOTE_B4, 250);
        delay(250 * 1.30);
        break;
      case 210: // 2 high, 1 mid
        tone(8, NOTE_C5, 250);
        delay(250 * 1.30);
        break;
      case 300: // 3 high
        tone(8, NOTE_D5, 250);
        delay(250 * 1.30);
        break;
    }
  }
}

We placed 3 photoresistors inside 3 cups and depending on the amount of light detected, we mapped specific musical notes to each cup. To avoid treating analog sensors as if they were digital, we implemented distinct categorizations for each cup. Specifically, we established three cases based on the amount of light detected: low (<250), mid (250-500), and high (>500). To introduce an element of digital control, we incorporated a slide switch.

Video demonstration:

Reflections

Working on this project was a fun experience. Initially considering a piano, we opted for a more unconventional approach, transforming ordinary cups into interactive controllers. One aspect that has become evident during this project is the potential for aesthetic enhancement. Observing other students’ projects, I realized the impact a well-designed aesthetic can have on the overall appeal of the final product. While our focus was primarily on functionality, witnessing the visual creativity in other projects has inspired me to explore the aesthetic dimension further in future endeavors.

 

Reflections – Week 10

A Brief Rant on the Future of Interaction Design

It was fascinating to read an article about a vision of the future written 12 years ago. Comparing Bret Victor’s ideas and complaints to what actually ended up transpiring, I am struck both by how correct he was and by how much improvement is still possible.

In 2011, Victor dreamed of a future of interaction design that involved more than just sliding and tapping on pictures behind a screen. Today, while we still continue to do so (albeit with a few haptic gimmicks, as he puts it), it is also true that we may be moving towards a future quite unlike this. Personally, my first experience with any kind of haptics or virtual movement simulation was the Nintendo Wii with motion detection. Today the technology has not just improved; we seem to be on the cusp of a virtual reality revolution. Virtual reality systems have improved by leaps and bounds year upon year, and soon we may reach a world where such technologies see mainstream adoption in everyday use.

I believe that while the future we have today would be immensely disappointing to the Bret Victor who wrote this post, the immediate future seems to be much more exciting. I am excited to see the digital future of mankind move towards a completely new direction!

Week 10 – Reflection

In his blog post “A Brief Rant on the Future of Interactive Design,” Bret Victor talks about the need to create a dynamic medium that people can interact with in a way that’s similar to how they interact with physical objects. What really struck me was his point that the technology behind tablets, smartphones, and other similar devices, known as Pictures Under Glass, doesn’t offer genuine touchable interfaces. Victor believes that technologies that prioritize sleek visuals over tactile experiences are just a passing phase. 

The first post and the follow-up response both emphasize that researchers and developers should look into haptic feedback to make devices easier to use. I agree with the author’s concerns about the future of interaction design. Touchscreens are great, but they’re not the only way to interact with computers. We need to explore new technologies that let us interact with computers in a more natural and intuitive way, like haptic feedback. Haptic feedback can make our interactions with computers more immersive and engaging. Imagine feeling the texture of a virtual object or manipulating it with your hands. That would be pretty cool. But we shouldn’t ignore other forms of interaction, like voice or visual cues. Instead, we should find ways to combine different interaction methods to create the best possible user experience.

Cat Composer – Week 10 Homework

Concept:

While working on this project, we went over several ideas for how we would make an instrument. One thing we discovered in our ideation process was that both of us had previously made projects that centrally involved cats. Thus, with some more tinkering, we came up with “The Cat Composer”! The Cat Composer is an all-in-one musical instrument that can play a variety of simple tunes such as “Hot Cross Buns” and “Mary Had a Little Lamb.” It consists of two switches to control two beat-making servo motors, a distance sensor to control the notes (C, D, E, and G), and a potentiometer to toggle between octaves. Additionally, we incorporated a speaker from the IM Lab inventory to produce a better sound than the provided buzzer. This instrument is best played by two people: one to play percussion and toggle between octaves, and one to play the notes.

However, with a bit of practice it is completely possible to play it by oneself! Note: In order for the distance sensor to receive a steady input and play the correct notes, it is best to play not with one’s hands, but with a larger piece of material.

Demonstration Video:

Code & Highlights:

The toughest part of the coding process was ensuring that the distance sensor worked exactly as we intended. For example, an issue we ran into early on was the abrupt changing of notes at the border values: since the sensor isn’t accurate to the exact centimeter, it would fluctuate between two notes. We corrected this by using a 5-value average instead. This makes transitions significantly smoother (and the experience much more enjoyable!).

unsigned int measureDistance() {
  const int numReadings = 5;            // Number of readings to average
  const int maxChange = 150;            // Maximum acceptable change in cm between readings
  static unsigned int lastAverage = 0;  // Store the last valid average distance
  unsigned long totalDuration = 0;

  for (int i = 0; i < numReadings; i++) {
    // Trigger the ultrasonic sensor with a 10-microsecond pulse
    digitalWrite(trigPin, LOW);
    delayMicroseconds(2);
    digitalWrite(trigPin, HIGH);
    delayMicroseconds(10);
    digitalWrite(trigPin, LOW);

    totalDuration += pulseIn(echoPin, HIGH);
    delay(10);  // Short delay between readings
  }

  // Convert the averaged echo time to centimeters
  // (speed of sound ~0.034 cm/us, halved for the round trip)
  unsigned int currentAverage = (totalDuration / numReadings) * 0.034 / 2;

  // Check if the change from the last average is within the expected range
  if (abs((int)currentAverage - (int)lastAverage) <= maxChange || lastAverage == 0) {
    lastAverage = currentAverage;  // Update the last valid average
    return currentAverage;
  } else {
    return lastAverage;  // Return the last valid average if the current reading is an outlier
  }
}

Reflections and Improvements:

We can improve our project significantly given more time!

Firstly, we would love to diversify the sounds our project can generate. In our research we discovered that instead of simply using tone(), we could perhaps use another sound-generating function. We would love to try this!

Regarding the hardware implementation, the provided potentiometer is too hard to turn and often messes with the wiring. Instead we would love to use a better/larger potentiometer that allows us better access.

Similarly, another change we would like to make is to use a single Arduino board and breadboard rather than our current two-board solution. This would make the project more cohesive. Even though this seems easy enough to implement, we kept our current design for now to simplify our approach.

Lastly, the ultrasonic distance sensor often gives outlier readings. As discussed in the highlights section, we tried our best to resolve this issue, but it still persists. We have some more ideas to remedy it, but we felt they were beyond the scope of this project. We would love to pursue them in the future.

Week 11 Response:

Diversity and equality are currently being pushed forward to reduce discrimination against particular groups, and I like that Graham makes us acknowledge that in designing we should try to incorporate these values into our work. For the past few weeks we have been talking about integrating the emotions of the user and giving the design an aesthetic feel or look, so I am not surprised to see this concept once more.