Talk Reflection

Reflection

Prof. Neil did an amazing job connecting the dots and drawing a fuller picture of AI from his perspective. He started with a brief introduction to AI and its emergence, then shared insights into how AI functions, simplifying it with visual aids and worked examples that made the big picture easier to grasp.

The introduction, though aimed at those unfamiliar with AI, felt overly long and unengaging. Given that the audience consisted of university students well versed in AI and its applications, spending more than half the lecture on the basics seemed unnecessary and poorly matched to their level of knowledge and daily use of AI.

Geoffrey Hinton’s video, on the other hand, was incredibly insightful, particularly the idea that AI can become smarter than humans because of its ability to share information. This idea resonated with me and stands out as the most compelling argument I’ve encountered, because it highlights how AI’s capacity to accumulate knowledge differs significantly from ours. Instead of the lengthy process of teaching each generation from scratch and hoping for marginal contributions over decades, AI continuously accumulates and builds on existing knowledge, which makes it vastly smarter by comparison.

Music Box (Khaleeqa and Rujul Week 10 Assignment)

Concept:

Rujul and I collaborated on our Week 10 assignment, centered on crafting a musical instrument. Our initial concept revolved around utilizing the ultrasonic sensor and piezo speaker to create a responsive system, triggering sounds based on proximity. However, we expanded the idea by incorporating a servo motor to introduce a physical dimension to our instrument. The final concept evolved into a dynamic music box that not only moved but also emitted sound. The servo motor controlled the music box’s rotation, while the ultrasonic sensor detected proximity, prompting different tunes to play. To reset the music box, we integrated a button to return the servo motor to its initial position.
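To make the mechanical half of the concept concrete, here is a minimal sketch of the servo rotation and reset-button logic, with the caveat that the pin numbers and names are illustrative assumptions rather than our exact wiring:

#include <Servo.h>

// Illustrative pins -- assumptions, not our exact wiring
const int SERVO_PIN = 9;
const int RESET_BUTTON_PIN = 2;

Servo carousel;
int angle = 0;
int step = 1;

void setup() {
  carousel.attach(SERVO_PIN);
  pinMode(RESET_BUTTON_PIN, INPUT_PULLUP);  // pressed = LOW
}

void loop() {
  if (digitalRead(RESET_BUTTON_PIN) == LOW) {
    angle = 0;  // button pressed: snap back to the initial position
    step = 1;
  } else {
    angle += step;  // otherwise keep the carousel sweeping
    if (angle >= 180 || angle <= 0) {
      step = -step;  // reverse direction at either end of the servo's range
    }
  }
  carousel.write(angle);
  delay(15);  // pacing for a smooth movement
}

The distance sensing and melody selection ran alongside this; that part is in the code section below.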

Modelling and Schematic:

[Images: modelling sketches and circuit schematic]
Prototype:

[Image: prototype photo (IMG_4312)]

Code:

Our prototype took shape seamlessly, and the coding aspect proved relatively straightforward. One particularly intriguing segment involved establishing thresholds for distinct melodies based on the object’s position within the carousel.

void checkDistanceAndSound() {
  long duration;  // pulseIn() returns microseconds, which can overflow an int
  int distance;

  // Send a 10-microsecond trigger pulse to start a reading
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  duration = pulseIn(ECHO_PIN, HIGH);
  distance = (duration * 0.0343) / 2;  // microseconds to cm; halved for the round trip

  // Play different melodies for different distance ranges
  if (distance > 0 && distance <= 5 && !soundProduced) {
    playMelody1();
    soundProduced = true;
    delay(2000); // Adjust delay as needed to prevent continuous sound
  } else if (distance > 5 && distance <= 10 && !soundProduced) {
    playMelody2();
    soundProduced = true;
    delay(2000); // Adjust delay as needed to prevent continuous sound
  } else if (distance > 10 && distance <= 20 && !soundProduced) {
    playMelody3();
    soundProduced = true;
    delay(2000); // Adjust delay as needed to prevent continuous sound
  } else {
    noTone(SPEAKER_PIN);
    soundProduced = false;
  }
}

void playMelody1() {
  int melody[] = {262, 294, 330, 349, 392, 440, 494, 523}; // First melody: an ascending C major scale (C4 to C5)
  int noteDurations[] = {250, 250, 250, 250, 250, 250, 250, 250};

  for (int i = 0; i < 8; i++) {
    tone(SPEAKER_PIN, melody[i], noteDurations[i]);
    delay(noteDurations[i]);
  }
}

void playMelody2() {
  int melody[] = {392, 440, 494, 523, 587, 659, 698, 784}; // Second melody
  int noteDurations[] = {200, 200, 200, 200, 200, 200, 200, 200};

  for (int i = 0; i < 8; i++) {
    tone(SPEAKER_PIN, melody[i], noteDurations[i]);
    delay(noteDurations[i]);
  }
}

void playMelody3() {
  int melody[] = {330, 330, 392, 392, 440, 440, 330, 330}; // Third melody
  int noteDurations[] = {500, 500, 500, 500, 500, 500, 1000, 500};

  for (int i = 0; i < 8; i++) {
    tone(SPEAKER_PIN, melody[i], noteDurations[i]);
    delay(noteDurations[i]);
  }
}

Reflection:

Reflecting on our project, I believe enhancing it with LEDs synchronized to specific sounds could offer a more comprehensive understanding of the ultrasonic sensor’s functionality. Each LED would correspond to a particular sound, demonstrating the sensor’s operations more visibly. Additionally, considering a larger-scale implementation for increased interactivity might further elevate the project’s engagement potential.
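As a rough sketch of that LED idea, assuming one LED per melody on hypothetical pins (they would also need pinMode(..., OUTPUT) calls in setup()):

// Hypothetical LED pins, one per melody -- not part of our actual build
const int LED_PINS[3] = {8, 9, 10};

// Light only the LED that matches the melody about to play
void showMelodyLED(int melodyIndex) {
  for (int i = 0; i < 3; i++) {
    digitalWrite(LED_PINS[i], i == melodyIndex ? HIGH : LOW);
  }
}

Each playMelody call would then be preceded by the matching showMelodyLED call, so the lit LED always reflects which distance band the sensor detected.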

Saiki and Elora Week 10 Assignment: RaveBot

Concept:

Our concept was to create an industrial DJ bot called the RaveBot. Using an ultrasonic sensor, we aimed to create a metallic, grungy music box with a theremin-like instrument. Given its rave theme, we wanted to incorporate a strobing light effect to enhance the energetic vibe. For the theremin sound, we chose a haunted, eerie techno sound, similar to that of an actual theremin.

Prototype and Modelling:

Code:

#include <Ultrasonic.h>  // ultrasonic sensor library
#include <toneAC.h>      // louder tone output; drives pins 9 and 10 on an Uno-class board

const int PIN_RED = 3;    // Red LED on pin 3
const int PIN_GREEN = 5;  // Green LED on pin 5
const int PIN_BLUE = 6;   // Blue LED on pin 6

//variables to hold our color intensities and direction
//and define some initial "random" values to seed it

int red = 254;
int green = 1;
int blue = 127;
int red_direction = -1;
int green_direction = 1;
int blue_direction = -1;
int buttonState = 0;  // variable for reading the pushbutton status


/* This function "setColor" sets the color of the LED
   rather than doing it over and over in loop(). */
void setColor(int R, int G, int B) {
  analogWrite(PIN_RED, R);
  analogWrite(PIN_GREEN, G);
  analogWrite(PIN_BLUE, B);
}


Ultrasonic ultrasonic(12, 13);  // trigger on pin 12, echo on pin 13

#define TONE_VOLUME 5  // volume range 1-20
int distance;

void setup() {
  pinMode(7, INPUT_PULLUP);  // button on pin 7; reads LOW when pressed

  Serial.begin(9600);
  //set all three pins to output mode
  pinMode(PIN_RED, OUTPUT);
  pinMode(PIN_GREEN, OUTPUT);
  pinMode(PIN_BLUE, OUTPUT);
}

void loop() {
  buttonState = digitalRead(7);

  red = red + red_direction;  //changing values of LEDs
  green = green + green_direction;
  blue = blue + blue_direction;
  if (buttonState == LOW) {

    // Flip direction when a channel hits its limit; the large multipliers make
    // the values jump around erratically, feeding the random() strobe effect
    if (red >= 255 || red <= 0) {
      red_direction = red_direction * -100;
    }
    if (green >= 255 || green <= 0) {
      green_direction = green_direction * -10;
    }
    if (blue >= 255 || blue <= 0) {
      blue_direction = blue_direction * -150;
    }
    setColor(random(red), random(green), random(blue));

  } else if (buttonState == HIGH) {
    analogWrite(PIN_RED, 10);
    analogWrite(PIN_GREEN, 251);
    analogWrite(PIN_BLUE, 100);
  }

  distance = ultrasonic.read();  //reads the distance from the sensor
  if (distance < 120)            //range is about 120 cm
  {
    int freq = 1500 - distance * 10;  //calculates a corresponding frequency (closer hand = higher pitch)
    toneAC(freq, TONE_VOLUME);        //plays the sound! (toneAC outputs on pins 9 and 10)

    Serial.println(distance);  //just for monitoring if required
  } else {
    noToneAC();  //out of range
  }
}

 

Reflection:

We kept improvising on the theme we wanted to go for, and it turned out great for us. One thing we would definitely have liked is a better-sounding or louder speaker. Currently, with the power split across multiple buttons, LEDs, and the speaker, the full range of volume is lost, especially once we close the box. However, we were really happy with the design of the box: we colored it black and spray-painted it chrome silver to give it the industrial look we thought a RaveBot deserves. All in all, it was a super fun experience.

Week 10- Reading Response Post

This week’s reading about the future of interaction design made me stop and think. It talked about how designs confined under glass aren’t great for the future of how we interact with stuff. The author had some strong thoughts about a video showcasing future interactions. They were skeptical about the interactions presented in the video because they had experience working with real prototypes, not just computer-generated animations. But surprisingly, that wasn’t the main issue for them. What bothered them the most was that the video didn’t offer anything truly groundbreaking from an interaction standpoint. They felt it was just a small step forward from what’s already there, which, according to them, isn’t that great to begin with. They stress how crucial it is to have visionary ideas that truly revolutionize how we interact with technology and the world around us. It got me pondering, but honestly, I don’t fully buy into that idea.

Sure, our bodies have a ton of complex ways we handle things, how we touch, hold, and interact with everything around us. But saying that designs restricted to “pictures under glass” are all bad? I’m not on board with that. Take something as simple as a PDF file versus printing out a reading. That PDF might be called “numb” in terms of design, but let’s be real, it’s way easier to handle and interact with than dealing with a printed paper. It’s about usability and convenience, isn’t it? If it’s not convenient or easy to use, is it even really interaction?

I believe interaction goes beyond just physically touching something. It’s about how easy and helpful it is to use. Some things will always need to be tangible. There’s magic in touching, feeling textures, estimating weights, and seeing how things respond to us. Like a couch that adjusts slightly to fit you but still does its job of being a comfy place to sit. That’s something you can’t replicate behind glass.

I think it’s crucial to know what can be behind glass and what can’t. Some folks might prioritize convenience in the future, but there are things you just can’t replicate virtually. I mean, you can’t virtually brush your teeth, right?

For me, I don’t see the connection or agree with that rant about interaction design. Maybe it’s just me, though. Everyone’s got their take on things, and that’s cool.

Week 10 Assignment: 4 Tabs Mini Piano

Concept:

In this week’s assignment, I decided to create a piano on a much smaller scale. As the title suggests, it is composed of 4 tabs. The motivation is to make a piano that can be carried anywhere and played easily. The piano has multiple different tones, which can be adjusted with a potentiometer. While there are digital applications where people can easily play piano, this week’s text emphasized the importance of touch and expression when performing tasks, compared to just tapping a flat screen, and I realized how a physical piano can bring forth a better experience than a digital one.

Prototype:

https://youtube.com/shorts/p7VoEhPxomk

Code:

#include "pitches.h"

//setting up various pins, numkeys is the number of keys on the piano, TABPins are the digital pins
const int numKeys = 4;
const int TABPins[numKeys] = {2, 3, 4, 5};
const int pushButton = 6;
const int speakerPin = 7;
const int potentiometerPin = A0;  // added so the sketch compiles; A0 is an assumed wiring

//variable to store the last pressed tab in it
int lastTAB = -1;                  
int pitch;                         
//variable for flagging the system as open or closed
bool systemEnabled = false;


void setup() {
  //set each buttonPin as input 
  for (int i = 0; i < numKeys; i++) {
    pinMode(TABPins[i],  INPUT);
  }
  //set the button as input 
  pinMode(pushButton, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Check the state of the push button
  int buttonState = digitalRead(pushButton);

  // Toggle the system on/off when the button is pressed
  if (buttonState == LOW) {
    delay(50);  // debounce before re-checking the button
    //when the button is pressed, invert the systemEnabled variable
    if (digitalRead(pushButton) == LOW) {
      systemEnabled = !systemEnabled;

      // If the system is now enabled, reset the lastTAB variable
      if (systemEnabled) {
        lastTAB = -1;
      }

      // Wait for the button to be released
      while (digitalRead(pushButton) == LOW) {
        delay(10);
      }
    }
  }

  // If the system is enabled, read the potentiometer value and play notes
  if (systemEnabled) {
    int potValue = analogRead(potentiometerPin);
    // Map potentiometer value to a pitch range
    pitch = map(potValue, 0, 1023, 200, 4000);  

    for (int i = 0; i < numKeys; i++) {
      int TABValue = digitalRead(TABPins[i]);

      // Play a note if the TAB is pressed and it's not the same TAB as the last one
      if (TABValue == LOW && i != lastTAB) {
        // note value: the lowest note (NOTE_B0) plus the potentiometer-driven pitch offset, stepped up per tab
        int note = NOTE_B0 + pitch + i * 100;
        // output the speaker with that note value for 1 second
        tone(speakerPin, note, 1000);
        delay(100);  

        // Update the lastTAB variable
        lastTAB = i;
      }
    }
  }
}

Reflection:

This was a fun hands-on exercise. The part that took the most time was building the prototype that replicates the piano. The hardest part, on the other hand, was probably setting up the push button to flag the system as open or closed, since I had to add multiple delays to prevent various errors. As for future improvements, more tabs could be added to the piano to make it possible to produce more notes in one go. Also, the overall look of the piano could be much better than this.
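In hindsight, a non-blocking, millis()-based debounce might have avoided some of those stacked delays. A minimal sketch of that pattern, as an alternative rather than what I actually used:

// Lockout-style debounce: ignore state changes that arrive within DEBOUNCE_MS
// of the last accepted change. Returns true on a debounced press.
const unsigned long DEBOUNCE_MS = 50;
unsigned long lastChange = 0;
int lastReading = HIGH;

bool toggledOnPress(int pin, bool &state) {
  int reading = digitalRead(pin);
  if (reading != lastReading && millis() - lastChange > DEBOUNCE_MS) {
    lastChange = millis();
    lastReading = reading;
    if (reading == LOW) {  // falling edge = button press
      state = !state;
      return true;
    }
  }
  return false;
}

In loop(), a single call like toggledOnPress(pushButton, systemEnabled) would replace the nested delays and the wait-for-release loop.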

 

Week 10 Assignment (Mariam & Mudi)

Mariam & Mudi’s Magical Musical Instrument.

Concept :

For our musical instrument, we decided to craft an innovative instrument using an ultrasonic sensor, a button, and a buzzer. To kick off the musical vibes, just gently hold down the button. Now, here’s where it gets interesting: when you wave your hand in front of the ultrasonic sensor at varying distances, it unveils a different array of notes!

int trig = 10;
int echo = 11;
int buttonPin;
long duration;
long distance;

void setup() {
  pinMode(echo, INPUT);
  pinMode(trig, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Trigger the sensor on/off and then read the echo
  digitalWrite(trig, LOW);
  delayMicroseconds(2);
  digitalWrite(trig, HIGH);
  delayMicroseconds(10);
  digitalWrite(trig, LOW);

  duration = pulseIn(echo, HIGH);
  distance = (duration / 2) * .0344; // 344 m/s = speed of sound; converting to cm

  int notes[7] = {261, 294, 329, 349, 392, 440, 494}; // several notes in an array: middle C, D, E, F, G, A, B

  buttonPin = analogRead(A0); // the button is read as an analog value on A0

  if (distance < 0 || distance > 50 || buttonPin < 100) { // not pressed, or nothing in range
    noTone(12); // don't play music
  }
  else if (buttonPin > 100) { // pressed
    int sound = map(distance, 0, 50, 0, 6); // map distance to the array of notes
    tone(12, notes[sound]); // play a certain note depending on distance
  }
}

 

Challenges:

I wouldn’t call this one a challenge so much as a hiccup: we found ourselves repeatedly unplugging and replugging the components due to connectivity issues, and the Arduino kept giving errors.

 

Neil Leach Alien Intelligence – BONUS

Prof. Neil did quite a good job in shedding light on the stark differences between artificial and human smarts. From ChatGPT’s unexpected brilliance to AlphaGo Zero’s mind-boggling moves, he painted a picture of AI’s vastness, urging us to hit the brakes on development. It’s not just cool tech; it’s a bit scary. Neil’s vibe warned us: we might be steering into unknown territory, where AI, per Geoffrey Hinton, starts thinking for itself. The lecture left us with a quirky truth—aliens aren’t zipping down from space; they’re brewing up in labs. AI’s the new-age extraterrestrial, and we better handle it with care. However, it did stir up some thoughts I had.

While the talk was interesting, it felt like déjà vu of AI basics, and I craved more than the usual rundown. Intriguing as it was, I couldn’t shake the feeling that it missed the mark for our crowd: he covered many AI basics, but it felt like old news to most of us. I was hoping for more profound insights or a deeper dive into the future.

Week 10: Reading Reflection

These articles reminded me of something my dad always used to say: “The things that separate us from every other animal are our tongues and thumbs. They can’t speak to each other like we do. They can’t hold hammers like we do.” Try going a day without using your thumbs and you’ll realize how incapacitated we’d be without them. Thanks to our tongues and thumbs, we’ve penned symphonies and raised skyscrapers. Hallelujah.

I’ve also been reading this book called Steal Like An Artist. Here’s an excerpt that parallels the reading:

“While I love my computer, I think computers have robbed us of the feeling that we’re actually making things. Instead, we’re just typing keys and clicking mouse buttons…artist Stanley Donwood, who’s made all the album artwork for the band Radiohead, says computers are alienating because they put a sheet of glass between you and whatever is happening…Just watch someone at their computer. They’re so still, so immobile. You don’t need a scientific study (of which there are a few) to tell you that sitting in front of a computer all day is killing you, and killing your work. We need to move, to feel like we’re making something with our bodies, not just our heads. Work that only comes from the head isn’t any good. Watch a great musician play a show. Watch a great leader give a speech. You’ll see what I mean. You need to find a way to bring your body into your work. Our nerves aren’t a one-way street—our bodies can tell our brains as much as our brains tell our bodies. You know that phrase, “going through the motions”? That’s what’s so great about creative work: If we just start going through the motions, if we strum a guitar, or shove sticky notes around a conference table, or start kneading clay, the motion kickstarts our brain into thinking.”

Every generation has a distinct zeitgeist. And while I don’t think this is our only characteristic, I believe we suffer from apathy. We are an apathetic generation. And I attribute a lot of that to the time we spend on our phones. In 2016, a study found that the average person in the UK scrolls 5 miles on their phone! And that number has definitely only increased since. We spend all day absorbing information about life, but never actually live ourselves. Even when we’re off our phones, we think in the paradigms of the online-world, and bring that into our real-life interactions and conversations. It’s like using our technology the way we do has inserted a real glass wall into our lives. A lot of people feel constantly disassociated from themselves. And I think how we use technology today has something to do with that. We watch so many movies and TV shows but have lost sight of living the movie ourselves. Not watching ourselves through an audience’s eyes.

It’s like the reading said:

“We share the blood of cavemen who pushed spears into mammoths and drew pictures of them in the living room.” I was talking to my dad about this and he said, “Right? That was what it was all for? So we could jerk off to Instagram reels today.” And we had a laugh, but operating behind glass screens so much, we lose sight of who we really are as magical, living humans. My dad always says the two things to feeling the real magic of life again are sweat and breath. Sweat and breath. We can’t lose that if we’re going to keep our souls intact.

That’s another thing I remembered reading these articles: the disassociation I experience when involved in VR installations. I can see all of these incredible things happening, but I stumble out, wanting to feel. Wanting to touch this exciting world around me. Wanting to feel this new ground beneath my feet. But I don’t, and it’s incredibly disconcerting. I think as a culture, we’ve inured ourselves to this. But I agree with the author: it can’t be that way forever. And if we’re going to make real art and real devices that amplify our capabilities of LIVING, something’s gonna have to give.

Week 10 Reading Response

“A Brief Rant on the Future of Interaction Design” reading:

The author of the article “A Brief Rant on the Future of Interaction Design” discusses one of the most important aspects that is not talked about enough in our current age: how the world is shifting from using our hands to feel things and perform tasks to doing everything with a fingertip. We have gone from relying on the sensations we get through our hands to merely touching fingers to a flat screen to move around different applications and tasks. Why this shift, though? Mainly human laziness. Humans are lazy, and there’s no denying that. We have tried every means possible to automate previously tedious tasks, and what did that cost us? It cost us the tool that is most necessary to us, without which we wouldn’t have come this far. Of course we still have our hands, completely intact, but we aren’t using them to their full potential.

Hands, as the author says, are covered in very dense sensors, which means we can use them best to feel what’s around us: nature, objects, and so on. With technological advancements, we are moving toward a world with much less use of the hands, and ultimately we are being disconnected from our everyday world. In art and interactive media, interacting with users is a great thing that the industry mainly focuses on, but much more interaction and experience can be gained through physical objects instead of devices, or as the author calls it, “pictures under glass.”

The article misses one point, though: how interactive art is slowly pushing back in this digitalized world. While technological advancements lead more individuals toward a less sensory future, interactive art does a great job of maintaining and bringing back the solid element of touch and the expression felt through it. Many interactive art projects nowadays incorporate the use of hands to produce an aesthetic picture, sound, and so on. These creators have realized the power of hands, and by building them into physical projects, users can connect on a much deeper level with the art piece. Humans are still capable of using their hands to feel the world around them; it’s just that this time, it’s in a very new form.

In the author’s follow-up to the public criticism, he answers many of the comments left by readers. I liked his response to the question “My child can’t tie his shoelaces, but can use the iPad,” and I would like to add to it. Compare a world where a child, call them child A, swipes left and right on an iPad to play games with one where child B plays with normal, physical toys. Child A may gain more knowledge than child B, with access to a vast number of games and probably some educational content. What child A misses, though, are the reflexes, the feelings, and the overall growth of the body. By body growth I don’t mean height or physical features, but the sense of touch, bodily balance and getting used to one’s own body, understanding and feeling different materials, and learning what’s harmful and what’s not.

Experience makes a person much stronger, and we learn much faster from it than from just consuming information. Child B will get hurt from time to time playing physical games and doing other activities that require movement, but at least their body will get used to it, and they will learn to handle dangerous situations far more effectively than child A, who sits in one place swiping left and right. In the long run, child A will suffer in the real world, since their foundations and reflexes are much weaker than child B’s. That is why we should not discard the important tool of using our hands to feel what’s around us, rather than learning what’s dangerous only through textbooks.

Week 9 Reading Reflection

This week’s reading has given me a lot to think about regarding the position of interactive artwork. In a way, I agree with Tigoe that interactive artworks are more like performances. The artist sets up a stage for the interactors, who essentially become the performers in this theater. This reminds me of another class I am taking on installation art, where the most successful interactive pieces do not explicitly explain what the piece is about. Instead, they focus on evoking sensory reactions and exploring them in depth, aiming to elicit an emotional response from the performer or “viewers” that prompts further contemplation of the interaction and its significance. Even Andrew Schneider’s piece, though it may seem like a fixed narrative from a distance, offers different interactions in each group setting, which I find more rewarding than a singular interpretation of paintings in a museum.

The reading on the greatest hits and misses adds another layer to this. Even a seemingly simple and commonly used interaction, such as an LED lighting up when approached, has the potential for further development. It is not an unoriginal idea if the context in which this system is set up provides a feeling that is more contextualized and open to interpretation, which I find appealing. I kinda wanna make a more contextualized theremin now if that’s possible.
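Just to sketch that theremin thought out, a bare-bones version could look something like this; the pins and ranges are placeholder guesses, not a finished design:

// Bare-bones theremin: pitch follows hand distance.
// Pin numbers and ranges are placeholders.
const int TRIG_PIN = 10;
const int ECHO_PIN = 11;
const int SPEAKER_PIN = 12;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Trigger a reading and convert the echo time to centimeters
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long distance = pulseIn(ECHO_PIN, HIGH) * 0.0343 / 2;

  if (distance > 0 && distance < 50) {
    tone(SPEAKER_PIN, map(distance, 0, 50, 2000, 200));  // closer hand = higher pitch
  } else {
    noTone(SPEAKER_PIN);  // hand out of range
  }
  delay(30);
}

The “contextualized” part would come from the enclosure and the setting rather than the circuit, which stays this simple.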