Talk Reflection

 

Reflection

Prof Neil did an amazing job connecting the dots and drawing a fuller picture of AI from his perspective. He started with a brief introduction to AI and its emergence, then shared insights into how AI functions, simplifying it with visual aids and examples that made the fuller picture easier to grasp.

The introduction, though aimed at those unfamiliar with AI, felt overly long and unengaging. Given that the audience consisted of university students well versed in AI and its applications, spending more than half the lecture on this basic introduction seemed unnecessary for their level of knowledge and daily use of AI.

Geoffrey Hinton’s video, on the other hand, was incredibly insightful, particularly regarding the concept of AI being smarter than humans because of its ability to share information. This idea resonated with me and truly made sense. It stands out as the most compelling argument I’ve encountered because it highlights how AI’s capacity to accumulate knowledge differs significantly from ours: instead of the lengthy process of teaching each generation from scratch and hoping for marginal contributions over decades, AI continuously accumulates and builds upon existing knowledge, making it vastly smarter in comparison.

week 10 – group assignment (Jason)

Video: https://youtu.be/d6cVEQlEJnk

For our group assignment, we weren’t sure how a sensor could be used to generate musical notes, so we first brainstormed an instrument on which we could base the assignment: the accordion. To replicate the keys we decided to use switches (buttons), and for the “bellows” we chose a flex sensor. We planned out our schematic first, mapping out the switches and resistors, then the connecting analog wires. However, when we actually delved into building the breadboard and then the coding, we ran into a few problems that forced us to improvise. Through the serial monitor, we realized that the output from the flex sensor was incredibly jittery and inconsistent; it was hard to make out a stable value range. Hence we switched to an alternative sensor: the photoresistor. Ultimately our approach was predominantly improvisation. Once we had resolved our main code, we decided on a whim to use the LCD to show “:)” or “:(” under specific conditions. This required some research on how the LCD is connected to the breadboard, and the code used to display characters: https://docs.arduino.cc/learn/electronics/lcd-displays
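As a side note, the serial check that exposed the jitter looked something like this (a minimal standalone sketch; the analog pin is an assumption):

// Print the flex sensor's raw readings so the jitter is visible
// in the Serial Monitor (the analog pin here is an assumption).
const int sensorPin = A3;

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(sensorPin));
  delay(50);
}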

HOW IT WORKS:
Three buttons play different notes. The tone of each note varies with the photoresistor: the more light it receives, the higher the note played. The LCD shows “:)” when the current tone frequency, determined by the analog reading from the photoresistor, is higher than the previous frequency; otherwise it displays “:(”. This comparison is specific to each color (red, green, blue), and the frequency values are mapped accordingly.

Schematic:

Arduino Code:

// include the library code:
#include <LiquidCrystal.h>

// initialize the library by associating any needed LCD interface pin
// with the arduino pin number it is connected to
const int rs = 12, en = 11, d4 = 5, d5 = 4, d6 = 3, d7 = 2;
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);

int buzzerPin = 8;
int redpin = A0;
int greenpin = A1;
int bluepin = A2;
int phopin = A3;  // photoresistor
float prev = 0;   // previous tone frequency, used for the LCD comparison

void setup() {
  // put your setup code here, to run once:
  pinMode(buzzerPin, OUTPUT);
  pinMode(redpin, INPUT);
  pinMode(greenpin, INPUT);
  pinMode(bluepin, INPUT);
  pinMode(phopin, INPUT);  // photoresistor
  lcd.begin(16, 2);
  Serial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly:
  int redState = digitalRead(redpin);
  int greenState = digitalRead(greenpin);
  int blueState = digitalRead(bluepin);
  int lightState = analogRead(phopin);  // photoresistor reading, roughly 350 to 1050
  float redvariance = 130.8 + map(lightState, 350, 1050, 0, 130.8);
  float greenvariance = 261.6 + map(lightState, 350, 1050, 0, 261.6);
  float bluevariance = 523.2 + map(lightState, 350, 1050, 0, 523.2);
  if (redState == HIGH) {
    tone(buzzerPin, redvariance);
    if (higherThanPrev(prev, redvariance)) {
      lcd.print(":)");
    } else {
      lcd.print(":(");
    }
    prev = redvariance;
    delay(100);
  } else if (greenState == HIGH) {
    tone(buzzerPin, greenvariance);
    if (higherThanPrev(prev, greenvariance)) {
      lcd.print(":)");
    } else {
      lcd.print(":(");
    }
    prev = greenvariance;
    delay(100);
  } else if (blueState == HIGH) {
    tone(buzzerPin, bluevariance);
    if (higherThanPrev(prev, bluevariance)) {
      lcd.print(":)");
    } else {
      lcd.print(":(");
    }
    prev = bluevariance;
    delay(100);
  } else {
    noTone(buzzerPin);
  }
  lcd.clear();  // clear each pass so the previous face doesn't linger
}

bool higherThanPrev(float prev, float now) {
  return prev < now;
}

Overall we were quite pleased with the end result, even more so with our LCD addition. However, we felt it was difficult to classify our project as a musical instrument since, despite the complex interplay between analog and digital sensors, the sounds produced were less “musical” in a sense. Furthermore, we realized that whilst the tone produced by the blue switch was very clearly affected by the amount of light perceived by the photoresistor, this was not the case for the red switch: it was much harder to distinguish a change in the tone. We believe this is because the red button signals the C note in the 3rd octave, while the blue one signals the C note in the 5th octave. Since note frequencies follow the formula “Freq = note x 2^(N/12)”, the changes in frequency mapped to the notes become more significant as the octave increases. For future improvements, especially with regard to musicality, perhaps we could have each button play a series of notes, so the three switches would produce a complete tune. Rather than mapping ranges for the photoresistor, we could compare against a specific threshold value: for example, in a dark room the complete tune would play in a lower key, whilst in a bright room it would play in a higher key.
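A quick back-of-the-envelope check makes this concrete (a standalone sketch, not part of our project code): one semitone multiplies a note’s frequency by 2^(1/12) ≈ 1.0595, so the same step is a far larger jump in Hz at C5 than at C3.

// Why the photoresistor's effect is more audible on higher notes:
// one semitone up multiplies a frequency by 2^(1/12), about 1.0595.
void setup() {
  Serial.begin(9600);
  float semitone = pow(2.0, 1.0 / 12.0);
  Serial.println(130.8 * (semitone - 1));  // C3 -> C#3: ~7.8 Hz step
  Serial.println(523.2 * (semitone - 1));  // C5 -> C#5: ~31.1 Hz step
}

void loop() {}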

week 9 – analog input & output

Video: https://youtu.be/JsG16pEle1I

For this assignment, I decided to use a photoresistor as my analog sensor. I wanted to draw a comparison between our mentality during day and night, i.e. to represent alertness and calmness through LED lighting patterns. When the photoresistor sensor value (psv) is between 20 and 150, the yellow LED blinks rapidly; if the psv is at 0, the blue LED blinks slowly. The button applies these blinking patterns to the opposite LEDs, meaning that when the button is pressed, a psv between 20 and 150 makes the blue LED blink rapidly, whilst a psv of 0 makes the yellow LED blink slowly. Through this assignment I got a stronger grasp of the Arduino syntax but, most importantly, I became more capable of working the breadboard to make correct connections. Furthermore, I think I’ve cemented my ability to read schematics, which has enabled me to make the connections on the breadboard without glancing at the Fritzing breadboard layout, which I’m really pleased about. Whilst I still struggled with the coding aspect, I really enjoyed and appreciated this assignment.

Schematic:

Arduino code snippet:

if (photoState >= 20 && photoState <= 150) {
  flickerLight(currentLed);  //flickerLight(yellowPin)
  digitalWrite(currentLed == yellowPin ? bluePin : yellowPin, LOW);
  // 1) currentLed == yellowPin => checks if value is yellowPin
  // 2) ? bluePin : yellowPin => if 1st condition == true, turn off bluePin, otherwise turn off yellowPin
} else if (photoState <= 10) {
  digitalWrite(currentLed, LOW);
  flickerDark(currentLed == yellowPin ? bluePin : yellowPin);
}

if (buttonState == HIGH) {  // if button is pressed, toggles value of "currentLed" between 'yellowPin' & 'bluePin'
  currentLed = (currentLed == yellowPin) ? bluePin : yellowPin;
  delay(200);
}
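For reference, flickerLight and flickerDark aren’t shown in the snippet above; a minimal sketch of how they might be written, assuming the blink speed is the only difference between them (the exact timings are guesses):

// Hypothetical helpers: rapid blink for the "alert" daytime state,
// slow blink for the "calm" nighttime state (timings are assumptions).
void flickerLight(int pin) {
  digitalWrite(pin, HIGH);
  delay(100);
  digitalWrite(pin, LOW);
  delay(100);
}

void flickerDark(int pin) {
  digitalWrite(pin, HIGH);
  delay(500);
  digitalWrite(pin, LOW);
  delay(500);
}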

I had the hardest time implementing this change: making the button apply the blinking patterns to different LEDs. Through very extensive research I learned about the conditional (ternary) operator. After hours of experimenting and researching, I realised I needed to create a variable to keep track of the LED state: “currentLed”. By combining this with the ternary operator, I was able to create a mechanism where, if the button is pressed, flickerLight/flickerDark is correspondingly applied to the opposite LED pin. Whilst it was definitely a pain in the sense that I struggled with this specific implementation for hours, I was very pleased with the end result. With regards to future improvements/projects, I’d like to create a mood light. I could use the tricolour LED, a switch and a trimpot: the trimpot could control the colour displayed and the switch, as in this assignment, could affect the flicker pattern displayed.

 

Music Box (Khaleeqa and Rujul Week 10 Assignment)

Concept:

Rujul and I collaborated on our Week 10 assignment, centered on crafting a musical instrument. Our initial concept revolved around utilizing the ultrasonic sensor and piezo speaker to create a responsive system, triggering sounds based on proximity. However, we expanded the idea by incorporating a servo motor to introduce a physical dimension to our instrument. The final concept evolved into a dynamic music box that not only moved but also emitted sound. The servo motor controlled the music box’s rotation, while the ultrasonic sensor detected proximity, prompting different tunes to play. To reset the music box, we integrated a button to return the servo motor to its initial position.

Modelling and Schematic:

Prototype:

Code:

Our prototype took shape seamlessly, and the coding aspect proved relatively straightforward. One particularly intriguing segment involved establishing thresholds for distinct melodies based on the object’s position within the carousel.

void checkDistanceAndSound() {
  long duration;  // pulseIn() returns microseconds, which can overflow an int
  int distance;

  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  duration = pulseIn(ECHO_PIN, HIGH);
  distance = (duration * 0.0343) / 2;

  // Play different melodies for different distance ranges
  if (distance > 0 && distance <= 5 && !soundProduced) {
    playMelody1();
    soundProduced = true;
    delay(2000); // Adjust delay as needed to prevent continuous sound
  } else if (distance > 5 && distance <= 10 && !soundProduced) {
    playMelody2();
    soundProduced = true;
    delay(2000); // Adjust delay as needed to prevent continuous sound
  } else if (distance > 10 && distance <= 20 && !soundProduced) {
    playMelody3();
    soundProduced = true;
    delay(2000); // Adjust delay as needed to prevent continuous sound
  } else {
    noTone(SPEAKER_PIN);
    soundProduced = false;
  }
}

void playMelody1() {
  int melody[] = {262, 294, 330, 349, 392, 440, 494, 523}; // First melody
  int noteDurations[] = {250, 250, 250, 250, 250, 250, 250, 250};

  for (int i = 0; i < 8; i++) {
    tone(SPEAKER_PIN, melody[i], noteDurations[i]);
    delay(noteDurations[i]);
  }
}

void playMelody2() {
  int melody[] = {392, 440, 494, 523, 587, 659, 698, 784}; // Second melody
  int noteDurations[] = {200, 200, 200, 200, 200, 200, 200, 200};

  for (int i = 0; i < 8; i++) {
    tone(SPEAKER_PIN, melody[i], noteDurations[i]);
    delay(noteDurations[i]);
  }
}

void playMelody3() {
  int melody[] = {330, 330, 392, 392, 440, 440, 330, 330}; // Third melody
  int noteDurations[] = {500, 500, 500, 500, 500, 500, 1000, 500};

  for (int i = 0; i < 8; i++) {
    tone(SPEAKER_PIN, melody[i], noteDurations[i]);
    delay(noteDurations[i]);
  }
}
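The servo rotation and reset button from our concept aren’t in the snippet above; a minimal sketch of how that part might look (the pin numbers, sweep speed, and angles are assumptions):

#include <Servo.h>

Servo carousel;                  // servo that rotates the music box
const int SERVO_PIN = 9;         // assumed pin
const int RESET_BUTTON_PIN = 4;  // assumed pin
int angle = 0;

void setup() {
  carousel.attach(SERVO_PIN);
  pinMode(RESET_BUTTON_PIN, INPUT_PULLUP);
}

void loop() {
  if (digitalRead(RESET_BUTTON_PIN) == LOW) {
    angle = 0;                   // button pressed: return to the start position
  } else if (angle < 180) {
    angle++;                     // otherwise keep rotating slowly
  }
  carousel.write(angle);
  delay(15);
}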

Reflection:

Reflecting on our project, I believe enhancing it with LEDs synchronized to specific sounds could offer a more comprehensive understanding of the ultrasonic sensor’s functionality. Each LED would correspond to a particular sound, demonstrating the sensor’s operations more visibly. Additionally, considering a larger-scale implementation for increased interactivity might further elevate the project’s engagement potential.

Reading Reflections – week 10!

“The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well.” I think this reading made so much sense after the previous reading, Physical Computing’s Greatest Hits (and Misses). After seeing pieces like video mirrors and the body-as-cursor, or after actively using facial recognition on phones, where our body is at command and not just our fingers, I definitely think the future of interaction goes way beyond the capacities of our hands.

For the first in-class assignment we had in the physical computing part of the class, where we had to make a creative switch that did not use hands, it was very hard to think of something to make. This highlights the challenge: we need to break away from established norms and think beyond the limitations of familiar interaction methods.

A question that stuck with me was: how do we manipulate things? The way I perceive his explanation is: how do we shape them so that they are intuitive? Will the dynamic mediums Bret Victor talks about be intuitive right from the start, or will they take years of trials, feedback, and usage to finally be called familiar and usable by “common sense”? I also appreciate the follow-up article with his responses to the comments. His reply to the brain-interfaces comment is quite nice and reassuring in a world where people believe automating all the work is far more efficient since humans are far more prone to making errors. I also like the insight on the psychological and cognitive aspects, though I’m not fully clear on the Matti Bergström quote about finger blindness.

My takeaway from this reading is that our interaction with digital media and devices should not have consequences for our physical bodies, and that the idea of everything we do being mediated by a computer might not be a good one.

Reading reflection week 10

A Brief Rant on the Future of Interaction Design

The reading on future interfaces was mind-blowing! It was like someone suddenly turned on a light in a room I’d been sitting in for ages, and suddenly, everything made sense in a completely new way.

The author of this rant, wow, they had been in the thick of designing future interfaces. Real prototypes, not just concepts. That’s a level of hands-on experience few people get. It’s like being backstage at a magic show and seeing how the tricks are really done.

Their beef with the video wasn’t about nitpicking interactions but about the vision itself. It wasn’t bold enough, not enough of a leap from the mess we’re dealing with in the present. And I get that! Visions should inspire, not just be a ho-hum “yeah, that’ll do” kind of thing.

But what really hit me was the talk about our hands. Hands are amazing! I mean, we use them constantly, but do we ever really think about how intricate they are? They’re like the Swiss Army knives of our bodies. Feeling things, manipulating objects—our hands are our interface with the world.

The idea of “Pictures Under Glass” really got to me. It’s like we’re willingly giving up the richness of touch for the sake of a fancy visual. The comparison to tying shoelaces with closed eyes hit home. We’re prioritizing sight over touch, but in reality, touch is the real MVP in how we interact with things.

The rant about the gestures we use with our fingers blew my mind. The fact that we switch between different grips without even thinking about it—opening a jar, for instance—showed how intuitive our interactions really are. Our hands are built for a three-dimensional world, for manipulating objects in ways that no other creature can. It’s like a superpower we take for granted every day!

And the call to action at the end was so powerful. The future isn’t predetermined; it’s a choice. It’s up to us to push for better interfaces, ones that harness the full potential of what our bodies can do. Why settle for a single finger when we have this incredible body that can do so much more?

What do I think about the Follow up

It’s like the author just dropped a knowledge bomb and left me reeling with thoughts and arguments ricocheting around my head.

The responses they received were a mix of understanding and misconceptions. The author wasn’t seeking to solve the problem outright; they wanted to spark curiosity and inspire researchers to delve into unexplored territories. The idea was to lay down the issue and hope it would catch the eye of the right people who could initiate the necessary research. That’s a pretty bold move!

The analogy about technology evolution using Kodak’s camera was spot on. The iPad, like the black-and-white camera, is groundbreaking, but it’s clear something’s missing. The push should be towards a dynamic tactile medium, not just a flat, glassy screen with minimal haptic feedback.

Their take on voice interfaces was refreshing. While acknowledging the importance of voice, especially for certain tasks, they stressed the limitations when it comes to creation and deep understanding. Explorable environments, where you can physically manipulate things, seem like the real deal.

The exploration of gestural interfaces was intriguing. From discrete abstract gestures to waving hands in the air, each had its pros and cons, but none seemed to fully harness the potential of our hands and bodies in a three-dimensional world.

The part about brain interfaces hit hard. Why are we trying to bypass our bodies altogether? It’s like saying our bodies are inadequate for the digital age. It’s a bold reminder to adapt technology to suit our natural capabilities rather than forcing ourselves to adapt to it.

The quote about fingertips and their importance for development resonated deeply. It’s like saying if we don’t use certain faculties, we lose them. The comparison to limiting literature to Dr. Seuss for adults is both humorous and thought-provoking.

And the clever redirect about the length of the rant with the book recommendation at the end was a nice touch!

Week 10: Make a musical instrument

For our assignment, Nafiha and I drew inspiration from a synthesizer and a sampler to create our own musical instrument. Our instrument incorporates three buttons, a piezo buzzer, a potentiometer, and a bunch of wires and resistors. It is designed such that each button triggers a distinct melody, and by adjusting the potentiometer, the pitch is modified, consequently altering the played melodies.

Video:

Link: https://drive.google.com/file/d/1zvd5qZeavfn0oTLdWGMqWOIxTLay6gbp/view?usp=sharing

Code:

const int switch1Pin = 12;
const int switch2Pin = 8;
const int switch3Pin = 7;
const int potentiometerPin = A0;
const int buzzerPin = 3;

int currentMelody[8];  //array to store the current melody
int melodyIndex = 0;   //keep track of the current note in the melody
int isPlaying = 0;     //flag to indicate whether a melody is currently playing

//melodies for each button
int melody1[] = {262, 330, 392, 523, 392, 330, 262, 196};//melody for switch 1
int melody2[] = {330, 392, 523, 392, 330, 262, 196, 262};//melody for switch 2
int melody3[] = {392, 523, 659, 523, 392, 330, 262, 330};//melody for switch 3

void setup() {
  pinMode(switch1Pin, INPUT_PULLUP);
  pinMode(switch2Pin, INPUT_PULLUP);
  pinMode(switch3Pin, INPUT_PULLUP);
  pinMode(potentiometerPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  //potentiometer value for pitch control
  int pitch = analogRead(potentiometerPin);

  //if switch 1 is pressed (with INPUT_PULLUP, a pressed button reads LOW)
  if (digitalRead(switch1Pin) == LOW && !isPlaying) {
    playMelody(melody1, pitch);
  }

  //if switch 2 is pressed
  if (digitalRead(switch2Pin) == LOW && !isPlaying) {
    playMelody(melody2, pitch);
  }

  //if switch 3 is pressed
  if (digitalRead(switch3Pin) == LOW && !isPlaying) {
    playMelody(melody3, pitch);
  }

  //if any switch is pressed while a melody is flagged as playing, stop it
  if ((digitalRead(switch1Pin) == LOW || digitalRead(switch2Pin) == LOW || digitalRead(switch3Pin) == LOW) && isPlaying) {
    noTone(buzzerPin);//stop playing the melody
    isPlaying = 0;//clear the flag
  }
}

void playMelody(int melody[], int pitch) {
  //map the potentiometer reading to a frequency offset (in Hz) for pitch control
  int adjustedPitch = map(pitch, 0, 1023, 0, 255);

  //copy the melody to the currentMelody array
  memcpy(currentMelody, melody, sizeof(currentMelody));

  //play each note in the melody, shifted up by the potentiometer offset
  for (int i = 0; i < sizeof(currentMelody) / sizeof(currentMelody[0]); i++) {
    tone(buzzerPin, currentMelody[i] + adjustedPitch);
    delay(250);
    noTone(buzzerPin);
  }

  //set the flag to indicate a melody is currently playing
  isPlaying = 1;
}

In terms of improving our instrument, one potential feature could be incorporating additional sound effects through the use of the potentiometer. However, overall, working on this assignment was really fun, and we’re pretty pleased with the outcome.
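One way that potentiometer-selected effect could look (purely a hypothetical sketch, not part of our build; it reuses potentiometerPin and buzzerPin from the code above): the lower half of the pot’s range plays notes plainly, while the upper half adds a simple vibrato.

//hypothetical effect selector: lower half of the pot plays a note plainly,
//upper half adds a simple vibrato by wobbling the frequency
void playNote(int freq) {
  if (analogRead(potentiometerPin) < 512) {
    tone(buzzerPin, freq);
    delay(250);
  } else {
    for (int i = 0; i < 5; i++) {  //five wobbles per note, ~250 ms total
      tone(buzzerPin, freq + 10);
      delay(25);
      tone(buzzerPin, freq - 10);
      delay(25);
    }
  }
  noTone(buzzerPin);
}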

Week 10: Reading Response

In this reading, Bret Victor discusses the Vision of the Future, sharing his observation about the central role of hands in interactions with future technology, or, as he defines them, tools. Hands in the physical world serve various activities and possess numerous senses: through just a touch, or by lifting objects, we can tell so much about them, making hands our primary means of understanding the world. In technological contexts, however, especially when we envision the future, hands primarily slide across glass screens; we can only sense the glass screen and manipulate the displayed content. His main point emphasizes the need to consider interactions beyond hands: we also have our bodies! We should start considering other forms of interaction.

As he further discussed the two main functions of our hands—feeling and manipulation—I began to wonder: If we excessively use interactions solely via the glass screen, limiting our interaction with the real world and physical objects, would we lose some of our capabilities? He defined a tool as addressing human needs by amplifying human capabilities. If this were to happen, technology would become less of a tool.

Later, he responded to some of the readers’ comments, and one response indeed answered my question: Victor cites a neuroscientist’s quote about the “finger-blindness” caused by underusing the hands in childhood. It is even worse than ordinary blindness: a blind person cannot locate things, but a finger-blind person cannot understand the meaning and value of things. What if, just what if, in the far, far future we come to use only sliding and tapping on a screen and start to develop this kind of “finger-blindness”? Though it might seem far off, we’re already experiencing some changes. For instance, some people have become so accustomed to typing that they feel less comfortable writing with pen and paper. What if we no longer remember what it feels like to turn the page of a book?

What he suggests is that as we choose the future, we also choose to shape it through our actions. For that, he proposes thinking of different types of interactions beyond just using our hands, aiming for a “dynamic medium that we can see, feel, and manipulate.” I do think such interactions might be somewhat expensive, considering that every kind of technology has its strengths and weaknesses. Take VR, for example: we can indeed see things in 3D, but it remains a visual illusion in which we cannot physically touch and feel the things around us. What I am suggesting aligns with what he pointed out: we need to start thinking about diverse types of interactivity to balance the excessive use of any one.

Saiki and Elora Week 10 Assignment: RaveBot

 

Concept:

Our concept was to create an industrial DJ bot called the RaveBot. Using an ultrasonic sensor, we aimed to create a metallic, grungy music box with a theremin-like instrument. Given its rave theme, we wanted to incorporate a strobing light effect to enhance the energetic vibe. For the theremin sound, we chose a haunting, eerie techno sound, similar to the actual theremin.

Prototype and Modelling:

Code:

const int PIN_RED = 3;    //red LED on pin 3
const int PIN_GREEN = 5;  //green LED on pin 5
const int PIN_BLUE = 6;   //blue LED on pin 6

//variables to hold our color intensities and direction
//and define some initial "random" values to seed it

int red = 254;
int green = 1;
int blue = 127;
int red_direction = -1;
int green_direction = 1;
int blue_direction = -1;
int buttonState = 0;  // variable for reading the pushbutton status


/* This function "Set Color" will set the color of the LED
   rather than doing it over and over in the loop above. */
void setColor(int R, int G, int B) {
  analogWrite(PIN_RED, R);
  analogWrite(PIN_GREEN, G);
  analogWrite(PIN_BLUE, B);
}


#include <Ultrasonic.h>
#include <toneAC.h>

Ultrasonic ultrasonic(12, 13);

#define TONE_VOLUME 5  // speaker volume for toneAC (higher = louder)
int distance;

void setup() {
  pinMode(7, INPUT_PULLUP);

  Serial.begin(9600);
  //set all three pins to output mode
  pinMode(PIN_RED, OUTPUT);
  pinMode(PIN_GREEN, OUTPUT);
  pinMode(PIN_BLUE, OUTPUT);
}

void loop() {
  buttonState = digitalRead(7);

  red = red + red_direction;  //changing values of LEDs
  green = green + green_direction;
  blue = blue + blue_direction;
  if (buttonState == LOW) {

    //reverse direction for each color when it reaches the end of its range
    if (red >= 255 || red <= 0) {
      red_direction = red_direction * -1;
    }
    if (green >= 255 || green <= 0) {
      green_direction = green_direction * -1;
    }
    if (blue >= 255 || blue <= 0) {
      blue_direction = blue_direction * -1;
    }
    setColor(random(red), random(green), random(blue));

  } else if (buttonState == HIGH) {
    analogWrite(PIN_RED, 10);
    analogWrite(PIN_GREEN, 251);
    analogWrite(PIN_BLUE, 100);
  }

  distance = ultrasonic.read();  //reads the distance from the sensor
  if (distance < 120)            //range is about 120 cm
  {
    int freq = 1500 - distance * 10;  //calculates a corresponding frequency
    toneAC(freq, TONE_VOLUME);        //plays the sound (toneAC outputs on pins 9 and 10)

    Serial.println(distance);  //just for monitoring if required
  } else {
    noToneAC();  //out of range
  }
}

 

Reflection:

We kept improvising on the theme we wanted to go for, and it turned out great for us. One thing we would have definitely liked is to have a better sounding or louder speaker. Currently, due to the power being split across multiple buttons, LEDs, and the speaker, the full range of volume is lost, especially once we close the box. However, we were really happy with the design of the box. We colored it black and spray painted chrome silver on it to give it an industrial look, which we thought a ravebot deserves. All in all, it was a super fun experience.

Week 10 – Reading Response

In this reading, Bret Victor identifies a problem that is widespread in current visionary designs for future interaction. He believes that these visions ignore human capabilities, specifically omitting the millions of things our hands can do. The “Pictures Under Glass” vision of future interaction is, according to Bret, a step backwards for technology and interaction.

I think Bret makes a valid point that future interaction designs do not use the full potential of the human body. I can imagine that once we have designs that let us interact with daily objects using our full capabilities, many things that are currently unimaginable will become possible and accessible. However, as the question goes in the follow-up article, Bret does not provide a solution, and he simply can’t give one. Even while reading this article, I nodded in agreement with Bret’s argument, but I couldn’t personally imagine what his ideal world would look like. Will we be interacting with physical objects then? That is obviously not progress in terms of interaction and technology. But again, I am in line with Bret and truly look forward to the day when our full capabilities can be engaged by interaction designs.

This article makes me think of a recent invention I saw online. It was like a physical embodiment of Siri. This physical virtual assistant can be interacted with in different modes and is capable of handling daily affairs for its users. It can be activated via voice control or tapping and is powered by AI. Will this be the future, given that it does make our lives easier?

Link to this machine introduction: https://www.youtube.com/watch?v=5fQHAZWOUuY