Week 10 – Musical Instrument

Concept

I’ve been trying to learn the piano for the past year, so when we had to make a musical instrument I definitely knew I wanted to make something tangentially related to the piano.

Another thing I wanted to do with this week’s assignment was to add more ‘design wrapping’ to it. I liked what the professor said about Week 9’s projects — sometimes a well-wrapped, simple project is much nicer than a messy, technically impressive one. So before even designing what I wanted to make, I thought about the presence of materials and how I could use them to enhance the project.

I thought of something like Nintendo’s cardboard piano, and it sounded like it would be fun to implement. However, I looked at the documentation for Arduino’s tone() function and learned that only one tone can be played at a time, even with multiple piezo buzzers. I didn’t like the idea of creating a piano that can only play one note at a time, plus I thought the idea of building a piano as an Arduino instrument may have been overdone.

I still opted to create something piano-like, but instead of using touch as input, I decided to use an ultrasonic distance sensor, which makes it a theremin. This treats the one-tone-at-a-time limit as a feature rather than a bug. During implementation, however, I realized the ultrasonic sensor isn’t very precise, so I couldn’t add the black keys to the piano without sacrificing accuracy. One more problem that came to mind when using the ultrasonic sensor to choose the note was: how often do I play the note? To address this, I added a potentiometer that sets the tempo, so the theremin plays one note every 400ms–1000ms depending on the position of the potentiometer. I realized I should also add some visual feedback, as it might otherwise be difficult to set the desired value, so I added an LED that blinks at the same rate the theremin is playing. If the LED is blinking at the pace you want the song to be played, then you’re good!
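The tempo logic can be sketched with the same arithmetic Arduino’s map() uses. This is a portable illustration rather than the actual project code; the 0–1023 range is the standard Arduino analogRead() range, and the 400–1000ms bounds come from the description above:

```cpp
// Scale a potentiometer reading (0-1023, as returned by Arduino's
// analogRead) to a note interval in milliseconds (400-1000 ms).
// The integer arithmetic mirrors Arduino's map(potValue, 0, 1023, 400, 1000).
long mapTempo(long potValue) {
    return 400 + (potValue * (1000 - 400)) / 1023;
}
```

In the sketch itself, one would presumably recompute this interval every loop and, whenever millis() crosses the next interval boundary, play the current note and toggle the indicator LED so it blinks at the same rate the notes are played.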

Video

Week 10 – Reading Reflection

Pictures Under Glass requires more than just your hands — it requires your eyes too. You can’t navigate Pictures Under Glass without looking at where your fingers are, as there is no touch feedback from the screen. With analog buttons, you can ‘feel’ your way around an interface. For example, many modern cars have only a mounted tablet as the entertainment system, and sometimes when I’m driving and want to skip a song, there is no safe way to do so without momentarily taking my eyes off the road to find the ‘>’ next-song button on the tablet. In older cars with analog buttons, I don’t have to look to know where the button is; I can just remember that the next-song button is second from the right, and with one hand on the wheel and eyes on the road, I can interact with the car entertainment system.

I also found it interesting that in his follow-up he brought up the hypothetical scenario of “Monet saying to his canvas, ‘Give me some water lilies. Make ’em impressionistic.’” Today, with the existence of generative AI and advanced natural language processing, it seems that the world of user interfaces has doubled down on the direction of Pictures Under Glass, further reducing the number of middlemen between the intent of the brain and the wanted outcome. What I mean by this is that hands and physical movements are, in part, a means to an end. To open a jar, we use our hands because we can’t simply use our brains to open it. People spend years in art school learning to draw well, but generative art can accomplish in a single sentence and a few minutes what would take an untrained artist years of training. ChatGPT further reduces the effort needed by humans to interface with the world — in the past, one would have to visit a library to read about a specific subject. Then came the internet, and with a connected device one could search Google for the information one needed. Now, with ChatGPT, you can simply ask it about whatever you need. The effort needed to accomplish anything is reduced, and I’m not sure if this is a good or bad thing. On this, the author says that “if they [brain interfaces] bypass the body, then we’ve just created a future where people can and will spend their lives completely immobile. Why do you want this future? Why would this be a good thing?”

Reading Reflection: Week 10

After reading the author’s rant on today’s tech scene and the follow-up article, I found myself nodding in agreement. It’s as if they put into words what many of us have been feeling – that our tech progress feels more like cautious steps than the significant leaps we had hoped for. The call for a more forward-thinking approach isn’t just criticism; it’s a plea for innovation that genuinely enhances how we engage with the world.

The author’s observation regarding our bodies adapting to technology, rather than the other way around, resonates with the changing nature of our interactions with modern devices. For example, in our engagements with smartphones and virtual reality, the absence of tactile richness becomes apparent. With smartphones, our interactions are primarily confined to tapping and swiping on flat screens, lacking the tangible feedback that our hands and fingers are accustomed to in the physical world. This shift towards touchscreens, while undoubtedly convenient, introduces a disconnect between our tactile senses and the digital interfaces we navigate daily. Similarly, in the realm of virtual reality, immersion often falls short due to a lack of genuine tactile experiences. While VR technology can visually transport us to breathtaking landscapes and scenarios, the tactile feedback remains limited, hindering the full potential of our hands in interacting with this virtual realm. The conceptualization of virtual environments replicating real-world actions and fully leveraging the capabilities of our hands and bodies presents an intriguing prospect. Imagine not just seeing but feeling the virtual world – a paradigm shift that challenges the conventional boundaries of human-computer interaction.

As I contemplated these notions, I couldn’t help but reflect on the potential future resembling the video and how to prevent technology from becoming devoid of feelings and feedback. This isn’t to say that we lack feedback from current technology, but it prompts us to question whether we are achieving the best possible feedback. In essence, this exploration not only echoes the author’s insightful observations but also introduces a layer of personal introspection regarding the implications and responsibilities tied to our technological trajectory. It serves as a call to action, urging us to actively contribute to shaping a future where technology seamlessly aligns with our human experiences.

Week 10 Assignment (Mariam & Mudi)

Mariam & Mudi’s Magical Musical Instrument.

Concept:

For our musical instrument, we decided to craft an innovative instrument using an ultrasonic sensor, a button, and a buzzer. To kick off the musical vibes, just gently hold down the button. Now, here’s where it gets interesting: when you wave your hand in front of the ultrasonic sensor at varying distances, it plays a different note from an array of notes!

int trig = 10;
int echo = 11;
int buttonValue; // reading from the button's analog pin
long duration;
long distance;

void setup() {
  pinMode(echo, INPUT);
  pinMode(trig, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Pulse the trigger pin, then read the echo duration
  digitalWrite(trig, LOW);
  delayMicroseconds(2);
  digitalWrite(trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig, LOW);

  duration = pulseIn(echo, HIGH);
  distance = (duration / 2) * 0.0344; // 344 m/s = speed of sound; converting to cm

  int notes[7] = {261, 294, 329, 349, 392, 440, 494}; // middle C, D, E, F, G, A, B

  buttonValue = analogRead(A0);

  if (distance < 0 || distance > 50 || buttonValue < 100) { // not pressed, or hand out of range
    noTone(12); // don't play anything
  }
  else { // pressed and hand in range
    int sound = map(distance, 0, 50, 0, 6); // map distance to an index into the notes array
    tone(12, notes[sound]); // play the note for this distance
  }
}
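One thing worth noting about map() in the sketch above: it does not clamp its output, so the code relies on the earlier `distance > 50` check to keep the index inside the seven-element notes array. Below is a portable sketch of the same mapping with an explicit clamp folded in (the function name is ours, not part of the original code; the arithmetic mirrors Arduino’s map() and constrain()):

```cpp
// Map a distance in cm (0-50) to an index into a 7-note array, using the
// same integer arithmetic as Arduino's map(d, 0, 50, 0, 6), then clamp to
// valid indices the way Arduino's constrain(idx, 0, 6) would.
int noteIndex(long distanceCm) {
    long idx = (distanceCm * 6) / 50;
    if (idx < 0) idx = 0; // readings below range
    if (idx > 6) idx = 6; // readings beyond 50 cm
    return (int)idx;
}
```

With the clamp in place, a stray sensor reading can never index past the end of the array, even if the range check is removed later.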

Challenges:

I wouldn’t call this one a challenge so much as a hiccup: we found ourselves repeatedly unplugging and replugging the wires due to connectivity issues, and the Arduino kept giving errors.

Neil Leach Alien Intelligence – BONUS

Prof. Neil did quite a good job in shedding light on the stark differences between artificial and human smarts. From ChatGPT’s unexpected brilliance to AlphaGo Zero’s mind-boggling moves, he painted a picture of AI’s vastness, urging us to hit the brakes on development. It’s not just cool tech; it’s a bit scary. Neil’s vibe warned us: we might be steering into unknown territory, where AI, per Geoffrey Hinton, starts thinking for itself. The lecture left us with a quirky truth—aliens aren’t zipping down from space; they’re brewing up in labs. AI’s the new-age extraterrestrial, and we better handle it with care. However, it did stir up some thoughts I had.

While the talk was interesting, it felt like déjà vu of AI basics, and I craved more than the usual rundown. While I found it intriguing, I couldn’t shake the feeling that it might have missed the mark for our crowd: he covered many AI fundamentals, but it felt like old news to most of us. I was hoping for more profound insights or a deeper dive into the future.

Week 10: Reading Response

“A Brief Rant on the Future of Interaction Design” and the follow-up article:

At the beginning of the reading, right after I watched the video, I was skeptical of the point the author is trying to make. However, as I read through the entire article, I started nodding my head and agreeing with everything the author was saying. Coincidentally, right before I read the article, I was practicing the violin. Because I hadn’t touched the violin in almost 6 months, my hands were “out of shape,” and I had forgotten to warm up before I started playing. So obviously, after a few minutes, my hands started cramping. In the past 11 years of my life, this was the longest break I have taken from playing the violin, so I felt that my hands were communicating with me through the pain and reflecting my unconscious sadness that I have abandoned one of my favorite hobbies for that long. Reading the article made me aware of the emotions we feel when we use our hands to do basic things, from brushing our teeth to playing the violin. I realize our hands are not just a body part we use to do things but rather a part of who we are—an important sense that makes us more aware of ourselves and our emotions somehow. I realize that replacing the use of our hands with a touch of a button or using voice control like Siri or Alexa affects our emotions and makes us numb to the feelings we naturally have.

After I finished reading the article, I asked myself when it was written, and to my surprise, I found out that it was written 12 years ago. I started thinking back to 2011 and realized that when this article was written, the iPhone 4s had just been released, and the iPad had been out for only a year. I wonder what the author thinks now. Unfortunately, the point the author was trying to make has become the reality we live every day. We all carry small glass screens in our pockets that control every single aspect of our lives. And yes, most children are now glued to iPads instead of playing with toys and running around their neighborhood playgrounds. In the response the author wrote, he mentioned that there are ultimately two functionalities that are changing: creating and understanding. He mentions that we cannot create with technology because technology lacks creativity and emotion, but it can be an aid that helps us understand things. This got me thinking about AI and how people are using ChatGPT to create, to understand, and for basically everything else. How can we control the advancement of technology so that it doesn’t take over our lives and turn us into lifeless robots? Will our future really look like the video? How can we stop it before it’s too late?

reading reflection: week 10

First, I should make a point of saying that I have always believed writing to be conversational. When you write something, you open a conversation that remains unfinished business until you’ve heard back from your readers. I understand that this is not how a lot of people see the position of a writer, but it’s just my view, and I’ve lived by it (that’s why I’ve emailed all kinds of authors, from John Green to Noam Chomsky). So, when I opened the second link under this week’s readings, I was pleasantly surprised to see an author continuing the conversation with his readers, in his humorous but not-really-condescending tone, which I appreciated very much.

Either way, maybe the author won me over with his jokes, but I feel inclined to agree more with him than with his critics. I have long harbored a slight distaste for VR/AR technologies, but I could never put a finger on where it was coming from. The author’s “rant” offered the words I was looking for: this obsession with the Pictures Under Glass genre of technology feels like a disservice to human glory. These devices simply do not engage the full potential for interaction that humans possess, and by being such limited creations, they fall short of being tools that “fit the person”. It’s like giving a lumberjack a butter knife to chop down a tree. The lumberjack will probably just do his job with the knife because that’s all he has, but the creators in his society owe it to him to start thinking in the general direction of a chainsaw instead of how to make the butter knife more aerodynamic. Simply because the lumberjack is capable of so much more.

I can’t trace how we ended up here, though. There are paintings from the year 1900 predicting what life in the 21st century would look like. Just a few generations ago, people were expecting so much more than thin LCDs or holographic interaction panels. But somehow, someone along the way popularized these so-called Pictures Under Glass technologies, and now we’re in this strange limbo.


There’s this short film from 1967, called 1999 A.D., which shows the life of a family in the future. It predicted many of the things that we have today: cell phones, email, and online shopping. However, these technologies are depicted in very different forms than how we know them today. For example, to reply to an electronic mail, the father has to handwrite his response on a machine with a glossy surface to write on. When the mother wants to go shopping, she turns some knobs on her “fingertip shopping” machine and is tuned into a real retailer, where a camera automatically scans across items for sale. These predictions are now fun to look at, with the knowledge that specialized, bulky machines for emailing or remote shopping aren’t the most convenient or realistic way to go. Still, it just goes to show that at some point in history, humans did have the vision to create more dynamic mediums of interaction “that we can see, feel, and manipulate,” but it seems that we have since gone astray.

Week 10 Reading

After reading this article (or rant), I am amazed by the author’s vision. It was not until now that I realized the explicit difference between current production tools and the tools we used in the past. It is not that the older tools are better than the ones we use now, but personally, I always thought something was missing in today’s tools. Now I know the thing that was missing was feedback.

I am not saying that the tools we use now have no feedback. On the contrary, they have plenty of ways to provide it: ringtones, vibrations, different kinds of displays, and so on. However, all of that feedback shares one problem: it does not represent an aspect of the action performed. Our brains have to process what the response or feedback means, and that is what I feel is missing in current tools. If I use a hammer, when I hit the nail it pushes back against me, so I know I hit something. The meaning of this feedback is completely intuitive. In this way, traditional tools are easier to learn and more intuitive.

However, I remain doubtful of the claim that the more we use our hands, the better. Yes, it is true that our fingers have an incredibly rich and expressive repertoire, and we improvise from it constantly without the slightest thought. However, just because we have that repertoire does not mean we need to use all of it. I think whether a tool is better should not be determined by how much of our hands’ repertoire it uses, but by how intuitive it is for us. So even though I agree with the author that the iPad or the phone is not a perfect tool, I remain doubtful that the tools of the future will use that many hand functions.

Week 10: Reading Reflection

These articles reminded me of something my dad always used to say: “The things that separate us from every other animal are our tongues and thumbs. They can’t speak to each other like we do. They can’t hold hammers like we do.” Try going a day without using your thumbs and you’ll realize how incapacitated we’d be without them. Thanks to our tongues and thumbs, we’ve penned symphonies and raised skyscrapers. Hallelujah.

I’ve also been reading this book called Steal Like An Artist. Here’s an excerpt that parallels the reading:

“While I love my computer, I think computers have robbed us of the feeling that we’re actually making things. Instead, we’re just typing keys and clicking mouse buttons…artist Stanley Donwood, who’s made all the album artwork for the band Radiohead, says computers are alienating because they put a sheet of glass between you and whatever is happening…Just watch someone at their computer. They’re so still, so immobile. You don’t need a scientific study (of which there are a few) to tell you that sitting in front of a computer all day is killing you, and killing your work. We need to move, to feel like we’re making something with our bodies, not just our heads. Work that only comes from the head isn’t any good. Watch a great musician play a show. Watch a great leader give a speech. You’ll see what I mean. You need to find a way to bring your body into your work. Our nerves aren’t a one-way street—our bodies can tell our brains as much as our brains tell our bodies. You know that phrase, “going through the motions”? That’s what’s so great about creative work: If we just start going through the motions, if we strum a guitar, or shove sticky notes around a conference table, or start kneading clay, the motion kickstarts our brain into thinking.”

Every generation has a distinct zeitgeist. And while I don’t think this is our only characteristic, I believe we suffer from apathy. We are an apathetic generation. And I attribute a lot of that to the time we spend on our phones. In 2016, a study found that the average person in the UK scrolls 5 miles on their phone! And that number has definitely only increased since. We spend all day absorbing information about life, but never actually live ourselves. Even when we’re off our phones, we think in the paradigms of the online-world, and bring that into our real-life interactions and conversations. It’s like using our technology the way we do has inserted a real glass wall into our lives. A lot of people feel constantly disassociated from themselves. And I think how we use technology today has something to do with that. We watch so many movies and TV shows but have lost sight of living the movie ourselves. Not watching ourselves through an audience’s eyes.

It’s like the reading said:

“We share the blood of cavemen who pushed spears into mammoths and drew pictures of them in the living room.” I was talking to my dad about this and he said, “Right? That was what it was all for? So we could jerk off to Instagram reels today.” And we had a laugh, but operating behind glass screens so much, we lose sight of who we really are as magical, living humans. My dad always says the two things to feeling the real magic of life again are sweat and breath. Sweat and breath. We can’t lose that if we’re going to keep our souls intact.

That’s another thing I remembered reading these articles: the disassociation I experience when involved in VR installations. Because I can see all of these incredible things happening, but I stumble out, wanting to feel. Wanting to touch this exciting world around me. Wanting to feel this new ground beneath my feet. But I don’t, and it’s incredibly disconcerting. I think as a culture, we’ve inured ourselves to this. But I agree with the author, it can’t be that way forever. And if we’re going to make real art and real devices that amplify our capabilities of LIVING, something’s gonna have to give.

Week 10 Reading Response

“A Brief Rant on the Future of Interaction Design” reading:

The author of “A Brief Rant on the Future of Interaction Design” discusses one of the most important aspects that is not talked about much in our current age: how the world is shifting from using our hands to feel things and perform tasks to doing everything with just a fingertip. We have gone from relying on the sensations we get through our hands to tapping fingers on a flat screen to move around different applications and tasks. Why this shift, though? Mainly human laziness. Humans are lazy, and there’s no denying that. We have tried every means possible to automate previously tedious tasks, and what did that cost us? It cost us the tool that is most necessary to us, and without which we wouldn’t have come this far. Of course we still use our hands, and we have them completely intact, but we aren’t using them to their full potential. Hands, as the author said, have very dense sensors on them, which means we can use them to feel what’s around us: nature, objects, and so on. With technological advancements, we are moving toward a world with much less use of the hands, and ultimately we are being disconnected from our everyday world. In art and interactive media, interacting with users is a great thing, and one the industry mainly focuses on, but much more interaction and experience can be gained through physical objects instead of devices, or as the author called them, “Pictures Under Glass”.

This article misses one point, though: how the interactive arts are slowly progressing in this digitalized world. While technological advancements are leading more individuals toward a less sensory-felt future, the interactive arts do a great job of maintaining and bringing back the solid element of touch and the expressions felt through it. Many interactive art projects nowadays incorporate the use of hands to produce an aesthetic picture, sound, and so on. This is because these creators have realized the power of hands; through hands-on physical projects, users can connect with the art piece on a much deeper level. This means that humans are still capable of using their hands and feeling nature; it’s just that this time, it’s in a very new form.

In the author’s follow-up to the public criticism, he answers many of the comments left by the audience. I liked how he addressed the question “My child can’t tie his shoelaces, but can use the iPad,” and I would like to add to his response. Compare a world where one child, call them child A, swipes left and right on an iPad to play games, with a world where another child, call them child B, plays with ordinary physical toys. Child A would gain more knowledge than child B, since they have access to a vast number of games and probably some educational content. One thing child A will miss greatly, though, is the reflexes, the feelings, and the overall growth of the body. Child A’s mind will grow greatly, but it cannot surpass child B’s bodily growth. By bodily growth, I do not mean height or physical features, but rather the sense of touch, balance and getting used to one’s body, understanding and feeling different elements, and learning what is harmful and what is not. Experience makes a person much stronger, and they learn much faster from it than from just consuming information. Child B will get hurt from time to time playing physical games and doing other activities that require movement, but at least their body will get used to it, and they will learn to handle dangerous situations much more effectively than child A, who is sitting in one place swiping left and right. In the long run, child A will suffer a lot in the real world, since their foundations and reflexes are much weaker than child B’s. This is why we should not discard the important tool of using our hands to feel what’s around us, learning what is dangerous and what is not through experience rather than through textbooks.