Week 11’s Brief Reading Ran- Sorry, Response | A Brief Rant on the Future of Interaction Design

The Rant first:

Before I start dissecting, let me just put it out there that I agree with everything he’s saying here. Now, we proceed.

"A tool addresses human needs by amplifying human capabilities. A tool converts what we can do into what we want to do."

Always good to start with definitions everyone knows before diving in. He’s right that we hear about our tools and our needs again and again. But what makes a tool interesting? What makes one tool capable of replacing another? Maybe it’s that it goes beyond the boxes we had drawn around our human capabilities for that specific task or item. The way my brain describes the core argument of the main article (in my notes) is,

I’ve never read an article that talks about the functions of hands in this much detail before. Also, could we come up with ways to interact with things using other body parts too? (That’s a tangent, so I’ll leave it there.) I really liked how he points out that despite the insane number of nerve endings in our hands, we still go with everyone’s favorite, Pictures Under Glass. This was also super cool:

How do people just think of this? When I scroll with two fingers, my fingers curve, but when I scroll with four, they start flattening. Depending on what you play on the guitar, you can change how your fingers bend without even realizing it (barre chords vs. non-barre chords, for example).

I also liked when he talked about Alan Kay and the iPad. He “chased that carrot through decades of groundbreaking research,” decades! If we can spend that long making an iPad with our lovely Pictures Under Glass, surely we can spend some time finding other ways to interact with our hands with technology.

What I found interesting was that he did what good media criticism does: he noticed the assumed thing nobody questions. I might have noticed it too, but I wouldn’t have gone all the way to actually test the theory further.

Now… The follow-up. (Since when did ranting need justification?)

  • It’s funny how people say that he didn’t offer a solution. Come up with your own solution then? Sometimes, speaking things out in the void can also end up making change. (For example, we’re reading this, and we’re thinking about what he said, and we can choose to follow his belief and try and do something different.)
  • The second argument is good because it builds on the idea that we can take something good that already exists and make it better. It doesn’t make the original bad… you just add functions that can remove problems that currently exist, or simply make it easier to use.
  • “My child can’t tie his shoelaces, but can use the iPad.” Well.
  • He also rebutted my thought of waving hands in the air. Your hands think they’re somewhere different from where the computer thinks they are. No thank you.

What I got from this was that, when I design things, I should remember there are many different ways we can interact with the things around us. If my work only talks to eyes and fingers, I’m wasting the rest of the human body. I wonder how I could apply that to a video game played worldwide. How long do we think it will take before we actually live the lifestyle he proposes?

Week 11 – Reading Reflection

A Brief Rant on the Future of Interaction Design

I really liked this text, and I strongly agree with the author. The fact that a lot of people envision our future of interaction and technology as just super-powerful phones and laptops isn’t really encouraging. Even now we have so many technologies and innovative interactive things that saying the superior device of the future is just a phone really isn’t right.

Even now people use a lot of motion-, body-, and voice-driven technology. For instance, scrolling using your head: if I recall correctly, you bow your head up and down to control the screen. Of course, it’s not the most creative, and obviously not the best way to interact, but it is still more interesting than just tapping. Voice input is also really crazy: by speaking commands we can control devices, even ones as simple as smart speakers. This just shows that there are a lot of ways to interact besides simple “tap here, tap there”.

I also find the author’s point on touch and physical response really interesting. It’s true that the senses we have in our hands are something we shouldn’t ignore, since they allow for so many ways to interact and so much new technology and art. However, I find it hard to imagine what exactly we can build with these sensations that would be as “useful” or widespread as smartphones. Maybe that is why the author talks about the future and not the present.

This part about hands made me remember some technologies from Professor Eid’s lab once again. As I wrote in last week’s reading response, they have a device that triggers vibrations on the user’s fingertips when they touch an object in VR.

They also had a really cool technology that I think can be expanded a lot and that fits the author’s idea perfectly: there was a kind of handle, and an app where you can choose a texture, for instance, a hard jelly. The handle controls a ball that you see on the screen: as you move the handle, the ball moves too. And the thing is, the handle was also “mimicking” the texture: when you try to push the ball through the jelly, you feel resistance and even a “bouncy” sensation, and when it finally breaks through — lightness and zero resistance. I find it SO COOL, and the fact that it’s done with only one handle is mind-blowing. I think if it’s possible to expand this technology so that the object control depends on the hands, passing these sensations to the hands, it would be exactly what the author of the text was describing.


*This is a short video I filmed of me using this device so you can see how it works

Week 11 – Musical Device

Concept

I really liked the Ultrasonic Distance Sensor, and I really love the idea of using the outer environment and motion capturing. First, I wanted just to make a device controlled by buttons/potentiometer, but then the idea of using something less obvious came to me. I thought that trying to play sound without touching anything can be really interesting. I decided to use Distance Sensor and Photoresistor for this device.

The musical device is pretty simple: the photoresistor has a threshold of 950 (roughly the reading it gives if you point a flashlight right at it), and if it receives light above this value, the device plays; otherwise it stays silent. The distance sensor converts distance into frequency: the farther the object is from it, the lower the frequency that is played.

Code

The code is pretty simple. It assigns global variables and has some local variables assigned in loop() (like the distance and the frequency). The frequency output by the buzzer is determined by the distance. I used distance = duration * 0.0343 / 2; to convert the echo duration from the ultrasonic sensor into centimeters, and then freq = map((int)distance, 5, 200, 800, 200); to map distances from 5 to 200 cm onto frequencies from 800 down to 200 Hz.

There’s a small block at the beginning of my loop() that pulses the trigger pin low and then high, which makes the sensor take a fresh distance reading each time through the loop so the buzzer’s frequency can keep changing.

int lightVal = 0;
bool lightOn = false;

int trigPin = 6;
int echoPin = 5;
long duration;
float distance;
int freq;

int soundPin = 8;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Trigger the ultrasonic sensor: a clean LOW, then a 10 µs HIGH pulse
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  duration = pulseIn(echoPin, HIGH);
  // Calculate distance in cm
  distance = duration * 0.0343 / 2;
  // Map 5–200 cm onto 800–200 Hz: closer means higher pitch
  freq = map((int)distance, 5, 200, 800, 200);

  lightVal = analogRead(A0);
  Serial.println(lightVal);
  lightOn = lightVal > 950;

  if (lightOn) {
    tone(soundPin, freq);
  } else {
    noTone(soundPin);
  }
}

I was mainly referencing tutorials on the Arduino website, like this one for the distance sensor and this one for the photoresistor, to figure out how to make them work.

Schematic & Preview

The schematic of the device looks like this (I tried my best to draw it correctly):

This is how it looks in real life:

And this is how it works:

Reflection

I really like how it turned out. My main goal was to use Arduino components we haven’t worked with in class, so I achieved that objective. I also like that I can actually “play” this instrument without even touching it – I think it’s pretty cool.

For further improvement, I believe I can make the device more “usable”, because right now pointing a flashlight into the middle of a bunch of wires doesn’t feel great. I also think I can work on the short delays the device has: right now, if I flicker the light quickly, it doesn’t catch it being turned off.

Preliminary Concept for Final project

For my final project, I plan to create a physically interactive game where the player helps recover a corrupted digital signal of a story using their feet. The idea is inspired by games like Piano Tiles and Just Dance, but instead of simply playing for points, the user is actively uncovering a hidden message through their performance. By stepping on floor pads in time with visual cues (I’m thinking something similar to Piano Tiles combined with the Just Dance mat), the player will gradually restore a distorted audio/visual track, which turns the experience into a kind of mystery game:

 

I will use Arduino to capture physical input and p5 to handle the visual and audio output. On the floor, there will be 4 pads made from materials like cardboard or foam with conductive layers inside (or, if there is a better sensor for this idea, I will use it). Each pad will act as a button connected to the Arduino: when someone steps on a pad, the circuit is completed, and the Arduino sends that input to p5 through serial communication.

On the screen, p5 will display rhythm-game visuals with vertical lanes and falling tiles, like Piano Tiles. The user must step on the right pad at the correct time to match the falling tiles (which I’m thinking will be color-coded to match the floor pads, so it’s more obvious). The visuals will initially appear distorted, with glitch effects, static, and broken text or images representing a broken signal, alongside the tiles. As the player successfully hits notes on time, the system will respond immediately by reducing the distortion, sharpening the visuals, and revealing fragments of audio or text from a story. I want it to gradually form a short narrative, such as a corrupted voicemail or a partial conversation. If the user misses notes, the distortion stays, or I might make it temporarily increase.

The game basically listens through the Arduino sensors, thinks by processing the timing and accuracy of the input in p5, and responds through changes in the visuals and sound. The player will then be paying attention and adjusting their movements based on the feedback they receive, especially when they start revealing more of the story.

I prompted Gemini to create a picture of the game, so you can get a vision of what I am trying to achieve:

 

COMPOSER OF THE OPERA – Final Project Proposal

Concept

For my final project, I had an idea inspired by the musical/film The Phantom of the Opera, minus the obviously really weird, creepy part. The idea is that you are a composer at an opera, and you have to compose at least a good amount of the music while jump scares happen in the game.

I will be using my Arduino to create and connect the composer’s batons to my code, mapping the batons’ position to certain notes, volume, pitch, or tempo (I’m still unsure exactly what I want to do with this, since it’s very complex), so that when you move the sticks, their position translates into musical notes.

I will use p5 to code the design and the UI of the game and translate motion of the batons to actual events in the game.

I’m still unsure if I want to keep the jump scare component or have it so that you just try to compose the music, but for now this is the main concept I have in mind.

Visuals

 

Reading Reflection – Week 11

Reading Design Meets Disability made me think differently about how society treats disability devices. One part that stayed with me says that trying to hide these devices can show “a lack of self confidence that can communicate an implied shame.” This made sense to me because I believe disability devices should be shown proudly. There is nothing to hide, and people should feel free to embrace what makes them unique. The reading helped me understand that hiding something can sometimes send a negative message, even if people do not mean it that way.

Another idea that surprised me was how design for disability can inspire mainstream design. The text explains how the Eames leg splint influenced famous furniture. I never expected a medical object to shape everyday design. This showed me that disability centered design can be creative and important, not only functional.

The reading also changed how I think about the look of medical devices. When Aimee Mullins chooses “off the chart glamorous” prosthetics, it shows how fashion can help people feel confident. If I had to wear a hearing aid or prosthetic, I would want it to be unique and fashionable too. Something that stands out in a good way and can influence others to do the same. This is why I think fashion designers, especially designer brands, should help create these devices. It would make people feel included and inspire many other small businesses and brands globally.

Overall, the reading taught me that disability and design do not need to focus on hiding. They can focus on expression, creativity and how each person shows their own personality with something so unique.

week 9: mood lamps

For this assignment, I tried to create two lighting options using one analog sensor, one digital sensor, and two LEDs. One option acts like a simple lamp that turns on and off, while the other acts more like a mood lamp whose brightness can be adjusted. I used the potentiometer as the analog input and the button as the digital input. I used the yellow LED as the on and off indicator for the system, and I used the blue LED as the mood light. I liked this idea because it was simple, but it still made the difference between digital and analog input and output very clear.

Week9

The way it works is that the button turns the system on and off, which the yellow LED shows. When the system is on, the blue LED changes brightness based on the potentiometer. That means the yellow LED works in a digital way, because it is either fully on or fully off, while the blue LED works in an analog way because its brightness changes gradually. In that sense, the project feels like choosing between two lighting options: a basic on and off lamp, and a second lamp that can be adjusted.

Arduino File on GitHub

For the code, I used one variable to store the button state, another to store the last button state, and a variable called systemOn to remember whether the lamp is on or off. I used INPUT_PULLUP for the button, so the button normally reads HIGH and becomes LOW when pressed. Then I used an if statement to check if the button had just been pressed, and if it had, I changed the system state. After that, I used analogRead() to get the potentiometer value, and map() to convert that value from 0 to 1023 into a brightness range of 0 to 255. That brightness value is then sent to the blue LED using analogWrite().

The part of the code I am most proud of is where the potentiometer reading gets turned into brightness for the blue LED, while the button still controls the overall on and off state of the lamp. I like this part because it made the project feel more intentional. The blue LED does not just turn on randomly. It only works when the system is on, and then its brightness changes based on the potentiometer. That made the whole lamp feel more organized and easier to understand.

int buttonState = digitalRead(buttonPin); // HIGH when released (INPUT_PULLUP), LOW when pressed

if (buttonState == LOW && lastButtonState == HIGH) {
  // falling edge: the button was just pressed, so toggle the lamp
  if (systemOn == 0) {
    systemOn = 1;
  } else {
    systemOn = 0;
  }
  delay(200); // crude debounce
}
lastButtonState = buttonState;

int sensorValue = analogRead(A0);

if (systemOn == 1) {
  digitalWrite(yellowLed, HIGH); // digital indicator: fully on
  int brightness = map(sensorValue, 0, 1023, 0, 255);
  analogWrite(blueLed, brightness); // analog mood light
} else {
  digitalWrite(yellowLed, LOW);
  analogWrite(blueLed, 0);
}

I am proud of this part because it brings the whole project together. The button controls the digital state of the system, and the potentiometer controls the analog brightness of the blue LED. I think this made the assignment much easier to understand in a practical way, because it clearly separated the role of the digital input and the analog input.

One thing I liked about this assignment is that it made the difference clear. The yellow LED is either fully on or fully off, while the blue LED can be dim, medium, or bright depending on the potentiometer. If I had more time, I would probably make the mood lamp more visually expressive, maybe by adding another color.

 

Week 11 Reading Reflection

Design Meets Disability

The first thing I thought of when reading the text was when the Chinese jewelry brand YVMIN designed prosthetics for the Chinese model Xiao Yang. They were honestly like no other prosthetics I had ever seen; attached below are only some of the designs:

This text, coupled with my memory of the model, made me think about how inclusivity, design, and functionality should be combined in a product. A lot of the time, people with disabilities face problems with self-esteem and self-image, so incorporating incredible designs feels to me like an important aspect of building any product. Especially when you look at how glasses are seen as fashionable items; I even remember hearing about people faking not being able to read the letters at the doctor’s office just to get glasses.

Nonetheless, I feel that a product without all three elements is incomplete. A lot of the time you see elements incorporated into a product for the sake of aesthetics that just end up making the product much harder to use. Or sometimes the design just looks ugly altogether.

This reading was really eye-opening and made me reflect on how I should incorporate all three elements into my designs to make them most efficient and accessible.

Also, yes, I forgot to mention: including fashion in products made for disabilities is brilliant, whether it’s in glasses, hearing aids, prosthetics, or bionics.

Week 10: Digital (Slide Switch) and Analog (Sound Sensor)

Concept

I wanted to make a lighting system where sound controls the brightness. For my digital sensor, I used a slide switch to turn the LEDs on and off. For my analog sensor, I used a sound sensor: every time it detects a sound, the LEDs get dimmer. I’m quite interested in sound-activated lights, and even though one didn’t come with the kit, I still wanted to give it a try, so I got a KY-037 from Amazon.

Full Code | Video Demo | Schematics

Code that I’m proud of

int soundValue = analogRead(soundPin);
int change = abs(soundValue - 512);

if (change > threshold) {
  brightness = brightness - 50; // drop brightness by 50 each sound
  Serial.println("Lowering brightness.."); // debug to make sure sound is going through and brightness lowering
  delay(200);
}

I’m proud of this part because I had to actually understand what the sound sensor was giving me. I thought it would just tell me loud or quiet (HIGH/LOW), but it outputs a number between 0 and 1023, and in silence it sits around 512 (about the middle of that range). I also added the Serial.println myself because I had no idea if claps were even registering, so I wanted to confirm it was working in the Serial Monitor before trusting the LEDs.

How this was made

I started with the slide switch wired to pin 2 with a 10kΩ pull-down resistor to GND; the resistor keeps the pin from floating and giving random readings when the switch is open. When it reads LOW, both LEDs turn off and brightness resets to 255 (maximum) so it starts fresh every time. I had to learn how the sound sensor worked and wire it myself. It outputs a continuous number between 0 and 1023 representing the volume, sitting around 512 (mid-range) in silence. Every loop, the code reads that value, subtracts 512, and takes the absolute value to get the amplitude. If that crosses my threshold of 70, the brightness drops by 50. I had a problem with sensitivity at first, as it kept triggering on background noise or missing claps, until I found out the module has a small dial you adjust with a screwdriver. I also added a Serial.println debug line so I could confirm in the Serial Monitor that claps were actually registering before trusting the LEDs. The two LEDs on pins 9 and 10 each have a 330Ω resistor to GND and receive the brightness value through analogWrite using PWM.

Reflection & Future Improvements

This was quite a challenging assignment because I insisted on using a sound sensor. Even though the sensor was new to me and not something we covered in class, I was able to apply a lot of the analog concepts we already learned (things like analogRead, analogWrite, and PWM), which translated over and made it easier to grasp. I went back to the class notes and a few online tutorials (referenced below) to piece it together. If I were to keep going, I’d add more brightness steps so the dimming feels smoother, and I’d revisit an earlier idea where the two LEDs go in opposite directions, one dimming while the other brightens, driven by the same live sound. Nevertheless, I’m happy with my output and how it turned out.

References

https://github.com/liffiton/Arduino-Cheat-Sheet

https://docs.arduino.cc/language-reference/

Arduino Sound Sensor: Control an LED with Sound

Reading Reflection – Week 11

Design Meets Disability

This was an interesting read, especially the idea of discretion in disability design. I honestly realized I never really questioned it before, I just kind of assumed that making things invisible or less noticeable was automatically better, but the reading made me realize it is actually much more about culture and how people feel about themselves. The example of the Eames leg splint was interesting to me because I did not expect something medical to be described as a beautiful design. The idea that a medical object could actually lead to iconic furniture made me rethink the direction of influence in design. It does not just go from mainstream to disability, but the other way around, too.

The idea of discretion vs. fashion also really changed how I think about assistive devices. I have definitely grown up seeing and hearing about things like hearing aids or prosthetics as something you need to hide, so the idea that invisibility might actually reinforce shame really stuck with me. The comparison with glasses made a lot of sense, because I have never thought of glasses as medical; I see them as normal or even stylish. Which makes me wonder why other devices have not gone through the same shift yet.

I also found the discussion about simplicity really relatable, especially the iPod example. I have definitely experienced how too many features in a product can actually make it harder to use. The example of simple radios for people with dementia made me think about how inclusion is not just physical, but also about how easily something can be understood without stress. Overall, this reading has me thinking about how much design depends on who is involved in the process; it needs input from mainstream designers and artists, because disability design does not have to be separate or special. It can actually shape what good design looks like for everyone.