Week 11 – Reading response

This article was really interesting in its discussion of physical implements and how they affect the user. It goes back to the recurring theme of designing interactive systems where the user and user feedback should be at the center of the design. The idea of "pictures under glass", in my opinion, creates a pseudo-realistic feeling where everything is meant to imitate actual objects rather than give the user the actual feeling. New design principles are becoming more and more virtual, which makes me question how involved humans will be with tools and implements in the coming period; we may actually lose touch with what we now deem reality. The reading itself was largely self-explanatory, so I do not have much to add, but it was a timely reminder to keep in touch with physical things when designing interactive systems.

Week 11 – Production Assignment (Air Piano)

Concept

The “Air Piano” is a musical instrument controlled by the distance of the player’s hand from a sensor. I have always been fascinated by distance sensors, which use the reflection of sound waves to calculate the distance of an object. In this work I used the HC-SR04 sensor to read distance as the input for the instrument. I was also inspired by the tone knob on electric guitars, which was discussed in class, so I integrated it into my work: a potentiometer serves as my tone knob, and its input switches among 3 sound modes: Piano mode, Sci-Fi mode and Bass mode. Different colored LEDs show the user which mode is on: Red for Piano, Green for Sci-Fi and Blue for Bass. The mode determines the scale of frequencies produced by the buzzer. The core of the code maps the distance of an object from the sensor to a value from 0 to 7 (8 values), and each value corresponds to a specific note.
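For clarity, the distance-to-note mapping described above can be sketched outside the Arduino code, here in plain JavaScript; `noteIndexFor` is a hypothetical helper mirroring Arduino's integer `map()`, and the frequency table is the Piano-mode array from the sketch below.

```javascript
// Piano-mode frequencies (C major scale, C4 to C5), as in the Arduino sketch.
const mode1 = [262, 294, 330, 349, 392, 440, 494, 523];

// Mirror of Arduino's map(): rescale a distance of 0–40 cm to an index 0–7.
function noteIndexFor(distanceCm) {
  const idx = Math.floor((distanceCm - 0) * (7 - 0) / (40 - 0));
  return Math.min(Math.max(idx, 0), 7); // clamp to a valid index
}

// A hand 20 cm away selects the note at index 3 (349 Hz).
const freq = mode1[noteIndexFor(20)];
```

This also makes it easy to see that each note occupies a band of roughly 5 cm in front of the sensor.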

Sketch

Image

Code

// A0 - Potentiometer
// 2 - Blue
// 4 - Green
// 7 - Red
// 12 - Buzzer 

const int trigPin = 10;
const int echoPin = 9;

// Mode 1 - Piano Scale
int mode1[8] = {262, 294, 330, 349, 392, 440, 494, 523};

// Mode 2 - Sci-Fi
int mode2[8] = {600, 750, 900, 1100, 1400, 1800, 2300, 3000};

// Mode 3 - Higher Bass / Bright Low Mode
int mode3[8] = {350, 420, 500, 600, 720, 850, 1000, 1200};

float duration;
int distance;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(2, OUTPUT);
  pinMode(4, OUTPUT);
  pinMode(7, OUTPUT);
  pinMode(12, OUTPUT);

  Serial.begin(9600);
  digitalWrite(2, HIGH);
  digitalWrite(4, HIGH);
  digitalWrite(7, HIGH);
  delay(1000);

  digitalWrite(2, LOW);
  digitalWrite(4, LOW);
  digitalWrite(7, LOW);
}

void loop() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  int mode;

  int pm = analogRead(A0);

  duration = pulseIn(echoPin, HIGH);
  distance = (duration*0.0343)/2;

  Serial.print("Distance: ");
  Serial.println(distance);
  delay(100);

  if (pm >= 700) {
    mode = 0;
    digitalWrite(7, HIGH);
    digitalWrite(2, LOW);
    digitalWrite(4, LOW);
  } else if (pm >= 350) {
    mode = 1;
    digitalWrite(4, HIGH);
    digitalWrite(7, LOW);
    digitalWrite(2, LOW);
  } else {
    mode = 2;
    digitalWrite(2, HIGH);
    digitalWrite(4, LOW);
    digitalWrite(7, LOW);
  }

  if (distance >= 0 && distance <= 40) {

    int noteIndex = map(distance, 0, 40, 0, 7);
    int freq = 0;  // default so freq is always initialized

    if (mode == 0) freq = mode1[noteIndex];
    if (mode == 1) freq = mode2[noteIndex];
    if (mode == 2) freq = mode3[noteIndex];

    tone(12, freq);
  }
  else {
    noTone(12);
  }

  //delay(50);
}


How it was made

This work was made by first drawing the sketch of the circuit. The inputs are the resistance value of the potentiometer and the distance from the distance sensor. I had to watch a video and use the Arduino website to figure out how to configure the distance sensor. The outputs of the circuit are 3 LEDs and a buzzer. Each LED was connected to a digital pin, a resistor and ground, and the buzzer was connected to a digital pin and ground. In the code for the instrument, I used ChatGPT to generate the arrays of frequencies corresponding to the modes. Finally, I connected all the wires and components according to the sketch.
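The distance calculation behind the sensor configuration can be checked on its own: the HC-SR04 reports an echo pulse width in microseconds, sound travels at roughly 0.0343 cm per microsecond, and the result is halved because the pulse covers the round trip. A small JavaScript sketch of that formula (the function name is mine):

```javascript
// Convert an HC-SR04 echo pulse width (microseconds) to distance (cm).
// Sound travels ~0.0343 cm/us; divide by 2 for the out-and-back trip.
function pulseToCm(durationMicros) {
  return (durationMicros * 0.0343) / 2;
}
```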

Reflection

Being someone with no knowledge of music, I thought this work would be very difficult for me, but after I figured out the input and output of the program, it went quite smoothly. To improve the project, a more sensitive distance sensor could be used to improve the readings, and more modes with different frequency scales could be added to create more musical effects.

Week 10 – Reading response

The Making Interactive Art text challenged my view of art. I used to see art as an aggregation of different elements and ideas that allow an artist to tell a story. I assumed interactive art was the same, where although the user takes part in the art, they are an actor in the artist’s play. This text challenges that notion and tells me that the artist must only set the stage and allow the user to explore. I like this idea, but my question becomes: what are the bounds of exploration, and if there are bounds, does the artist not still control the narrative? Take, for instance, open-world games. There are many things you can do in an open world, but not everything is allowed: there are specific characters one can interact with, and they are aligned with the central theme of the game’s story line. As I explore interactive media more, I look forward to finding the line between user exploration and artist control in interactive art.

In the Physical Computing’s Greatest Hits text, we see various examples of physical computing art. What specifically catches my eye are the direct bodily feedback machines, such as the gloves and the cursor devices. In my opinion they give the user more sense of control and make them more immersed in the art than objects that just measure inputs from the environment, such as the field of grass. This also raises a question about the measure of interactivity: can one say one artwork is more interactive than another based on the feedback of its control parameters, and can this affect the judgement and interpretation of the work?

Week 10 – Production Assignment

Concept

The concept of this project is to create a system that can make light of any colour using RGB values. In the last class we learnt about analogue and digital input and output. Using these concepts, I created a circuit that uses analogue input to control the brightness of a red, a green and a blue LED to obtain a specific colour, just as we could pick specific colours when coding in p5.js.

A photoresistor, a potentiometer and a push switch are used to control the RGB values of the LEDs, and the lights were physically merged together to give the impression that they were producing one light of a specified colour.

Sketch

Code

// A0 = Potentiometer input
// A1 = Photoresistor input
// A2 = Push switch input
// 3 = Blue LED to resistor to GND
// 5 = Green LED to resistor to GND
// 7 = Red LED to resistor to GND

void setup() {
  pinMode(3, OUTPUT);
  pinMode(5, OUTPUT);
  pinMode(7, OUTPUT);

  // flash all LEDs
  digitalWrite(3, HIGH);
  digitalWrite(5, HIGH);
  digitalWrite(7, HIGH);
  delay(1000);

  digitalWrite(3, LOW);
  digitalWrite(5, LOW);
  digitalWrite(7, LOW);
}

void loop() {
  int potentiometer = analogRead(A0);
  int photoresistor = analogRead(A1);
  int pSwitch = digitalRead(A2);
  int pm = map(potentiometer, 0, 1023, 0, 255);
  int pr = map(photoresistor, 0, 1023, 155, 255);
  analogWrite(3, pr);
  analogWrite(5, pm);
  digitalWrite(7, pSwitch);
}

How it was made

A potentiometer, a photoresistor and a push switch were connected to the 5V terminal. The input of each of these devices was read on pins A0, A1 and A2. LEDs were connected to pins 3, 5 and 7. The inputs from the potentiometer, photoresistor and push switch can be tuned to obtain the desired effect.
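The tuning mentioned above comes down to rescaling each 10-bit analog reading (0–1023) into an 8-bit PWM value (0–255); note that in the code the photoresistor is mapped into 155–255, so its LED never goes fully dark. A JavaScript sketch of that rescaling (`rescale` is my stand-in for Arduino's `map()`):

```javascript
// Rescale a value from one range to another, like Arduino's map()
// (integer result, no clamping).
function rescale(value, inMin, inMax, outMin, outMax) {
  return Math.floor((value - inMin) * (outMax - outMin) / (inMax - inMin) + outMin);
}

const pm = rescale(512, 0, 1023, 0, 255);   // potentiometer -> green brightness
const pr = rescale(0, 0, 1023, 155, 255);   // dark photoresistor -> floor of 155
```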

Physical Circuit

Digital Circuit

Reflection

This work was really fun to create. I enjoyed playing around with analog inputs and outputs. Possible future improvements include adding code for flashing effects to show different colours and states, more analog inputs to create different physical effects, and more LEDs for more visual effects.

Week 9 – Reading response

Her Code Got Humans On The Moon

The story of Margaret Hamilton was really inspiring. I was especially intrigued by her resilience: even though she had a little daughter, she brought her to the lab so she could keep doing her work. Her story about leaving a party to make a correction is something I can relate to. I recall countless times I have been at social events when a new idea or a realization about something I am working on just pops into my head. I may not be enthusiastic enough about my work to leave the event, but I really admire her for that.

This reading highlighted the difference between a computer and a human when it comes to making mistakes. NASA trusted their astronauts not to make a mistake and to be “perfect”, but as Hamilton’s instincts proved, one can make mistakes irrespective of the training one has received, so she designed a way out even though her superiors said such an error would never happen. This is something to take into account when designing future projects: they should be based on the assumption that the user is not perfect and can make mistakes, so fail-safes or a way out should be designed to ensure the program or device works as it is supposed to.

Looking back at the Artemis II mission launched a few days ago, I can only imagine the sophisticated software and computing used in that mission. If Hamilton and her team were able to design a whole software system with such limited memory and storage and still succeed, I wonder what will be designed for the current mission with all the computing power and artificial intelligence available to the team. I am sure it will also contain groundbreaking innovations that can usher humanity into a new age, just as Hamilton’s “software” did.


Norman, “Emotion & Design: Attractive Things Work Better”

The reading spoke about something that usually goes unspoken: the design element of objects. These designs, if done right, can make a user overlook all the flaws of a product or completely change the idea the user has of it. Take, for instance, popular brands like Apple. Although there is no substantial change in the yearly iPhones they release, when they change the colour of the phones or just position the camera in a different way, they are able to appeal to the customer’s sense of beauty and ride on this to sell their products.

Reflecting on this reading, I go back to the first half of the semester, when I used to design programs just for the sake of working. I did not pay much attention to the beauty and aesthetics of the program, which is the first encounter a user will have with my project. A boring interface can push a user away even though the logic or idea behind the program is amazing. This can also be seen in the flashy ads and flashy designs companies make for their products. These are all strategies to reel customers in, focus their attention on exactly what the designers want them to focus on, and make them overlook all the possible flaws of the program.

Midterm Project – SignSprint

Sign Sprint

Concept

SignSprint is a game based on computer vision that recognizes 7 different hand gestures: Thumbs Up, Thumbs Down, Victory, Pointing Up, Closed Fist, I Love You and Open Palm. The game runs on a gesture recognition machine learning model by Google AI for Developers through MediaPipe Studio. The model can be tried out here.

The whole concept of the game is to make as many signs as possible within a specified time period. An array of the possible hand gestures is created, one gesture is randomly displayed at a time, and the user is meant to make the hand gesture corresponding to the one being displayed. The user’s score is recorded and displayed at the end of the designated time period. The ML model uses hand measurements to accurately estimate and detect a hand gesture. A validation condition checks whether the detected gesture exactly matches the target gesture, and only then does the target gesture change. The model has also been set to detect one hand at a time, so using multiple hands will cause the gesture not to be detected.
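The validation condition described above can be sketched in isolation: a gesture scores only when its name exactly matches the target and no post-match cooldown is active. The variable names follow the game sketch; `isMatch` itself is a hypothetical helper extracted for illustration.

```javascript
// Score only when the detected gesture exactly matches the target
// and we are not in the cooldown that follows a previous match.
function isMatch(detectedName, targetName, matchCooldown) {
  return detectedName === targetName && !matchCooldown;
}
```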

The main function sets up the machine learning model, detects gestures and randomly selects a target gesture.
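The random target selection step amounts to picking one entry from the gesture array. The sketch below is a standalone version of that idea; the array contents and the `pickNewTarget` signature are illustrative, not the exact code from the game.

```javascript
// A plain array of gesture objects, as the game stores them.
const gestures = [
  { name: "Thumb_Up", emoji: "👍" },
  { name: "Victory", emoji: "✌️" },
  { name: "Open_Palm", emoji: "🖐️" },
];

// Pick a random target gesture from the array.
function pickNewTarget(list) {
  return list[Math.floor(Math.random() * list.length)];
}
```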


Code I am proud of

function drawGame() {
  background(0);
  image(video, 0, 0, width, height);
  
  // Timing game
  let elapsed = (millis() - startTime)/1000;
  if (elapsed >= gameTime) {
    gamestate = "end";
  }
  
  // Gesture detected & scoring
  let detectedName;
  let detectedEmoji;
  
  if (results && results.gestures && results.gestures.length > 0) {
    detectedName = results.gestures[0][0].categoryName;
    detectedEmoji = gestureMap[detectedName];
    
    if (targetGesture.name == detectedName && !matchCooldown) {
      score++;
      // sound for feedback
      matchCooldown = true;
      correct.play();
      pickNewTarget();
    }
  }
  
  // Target Emoji
  if (targetGesture) {
    textFont("OpenSans");
    textAlign(CENTER, TOP);
    textSize(70);
    text(targetGesture.emoji, width/2, 30);
  }
  
  // Score
  textFont(font);
  fill(255);
  textAlign(RIGHT, TOP);
  textSize(30);
  textFont("OpenSans");
  text("⭐", width-55, height-45);
  textFont(font);
  text(score, width-20, height-45);
  
  // Time remaining
  textSize(40);
  text(ceil(gameTime - elapsed), width-20, 20);
}

The code I am proud of is the drawGame function, which contains the bulk of the game mechanics. It first shows the camera output of the program, shows the target gesture as an emoji, detects the gesture of the player through the camera and checks if it is the same as the required gesture. If the detected and target gestures are the same, it increases the score and creates a new target. The function also displays the time left and the current score on the screen. Finally, the function has a condition that automatically switches the game to the end screen when the specified time has elapsed.

How it was made

The game mainly runs on the hand gesture machine learning model stated above. The biggest challenge in making this game was importing the gesture recognition model into p5.js. I used Claude AI to help with this process. With its help, I was able to modify the HTML file and create functions to import the right model into the p5.js file, which enables the game to run. Claude AI was also used in making the setupMediaPipe and detectGesture() functions so the game could run the scoring system.

The game code was mostly composed of if conditions and booleans for the game logic. The start and end screen backgrounds were fully generated by Gemini AI, and the sounds for the game, the theme sound and the correct-match sound, were obtained from Freesound.

Reflection

This was a really fun game to create. I got to explore the concepts dealt with in class, gained a greater understanding of decision structures and learned how to import models into p5.js. A possible improvement is increasing the number of hands that can be in the game, and the gesture recognition could be further developed to remotely control other computer devices, generate art and express creativity. I see this project as a stepping stone to explore my interest in computer vision and its possible applications in interactive media, and I am excited to see how I can blend this knowledge and skill with physical computing.

Week 5 – Midterm Progress Report

Concept

I will be creating an interactive game called “SignSprint”. It is a typing-like game that uses American Sign Language (ASL) as input instead of the regular keyboard. The user is supposed to make signs corresponding to ASL letters to match letters and words displayed on the screen. The user has a specified time to completely spell the displayed words using ASL, and at the end of that time, the user’s accuracy and speed are displayed. There will also be a leaderboard of the fastest users.

The inspiration behind this game is a class I am currently taking on Disabilities Studies where I am required to learn some ASL. After learning about computer vision in class and playing with a few models, I decided to combine these two ideas and create a game out of it. I believe creating this game will be a great exercise to learn ASL and have fun while doing so.

Design

This game will be built using a machine learning computer vision model to identify the alphabets of ASL as input to play the game. The game will consist of 3 main screens:

Start screen: This will be the first screen and the main menu. It will contain a guide on how to play the game, a button that opens a page showing a full list of the ASL signs, a button that opens a trial mode for users to practice their ASL in a non-competitive environment, and finally a button that opens the competitive game mode where the user will be timed and expected to make the appropriate signs.

Game screen: There are two main game screens. The first is the trial screen, where a user practices their ASL with no time limit; the user can go on as long as they want, and exiting this screen takes them back to the start screen. The other is the test screen, where a user has 60 seconds to get as many letters correct as they can. The screen will automatically close after the 60 seconds and lead to the end page.

End page: This will display the leaderboard of the fastest players and contain a button or instruction to restart the game.

Frightening part of the code

The frightening part of the code is getting the machine learning model to identify the signs and interpret them as input for the game. The model has to be quick and responsive to keep the overall user experience good.

To reduce this risk, I am exploring the ml5.js library to find an appropriate model for getting the user input.

Week 5 – Reading Reflection

It was interesting to learn about how computers actually see, and what stood out for me were the various methods a computer can employ to see and then make decisions or create art. The selection of a computer vision technique adds complexity to interactive works and alters how one can interact with the work. The right technique must also be selected to minimize errors and ensure consistency in the art, as some techniques are known not to perform well in certain conditions.

One possible application of this is how an interactive artwork involving computer vision can be placed strategically in an art exhibition to accentuate or improve the vision of the work. Carefully selected pieces can be placed around the work to generate the needed contrast, brightness or effects for the computer vision, just as the white Foamcore was used for the LimboTime game.

The use of surveillance to generate art was also something worth taking a look at. Are there any privacy restrictions or laws protecting the identities of the people in these forms of art, and how are their privacies protected? The work Suicide Box by the Bureau of Inverse Technology makes me question whether artists actually have the right to use data or information like this to create a piece of work. It gives me the impression that they are making amusement out of tragedy, and I am left with the question: how do they respect the dignity of those who jumped off the bridge?

Week 4 – Digital Oracle

Concept

This work acts as an oracle by generating wise sayings. Predefined times, elements, actions and destinations are stored in the program and randomly paired up to create the oracle. The work employs the concept of lists to store the times, elements, actions and destinations, and explores text in p5.js by employing textAlign, textSize and textFont to create the desired output.
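The random pairing described above can be sketched with plain arrays. The array contents here are placeholders, and `randomFrom` stands in for p5.js's `random(array)`:

```javascript
// Placeholder fragments; the real sketch uses ChatGPT-generated phrases.
const times = ["At dawn", "Under the full moon"];
const elements = ["the river", "the flame"];
const actions = ["whispers to", "awakens"];
const destinations = ["the wanderer", "the silent hill"];

// Stand-in for p5.js random(array): pick one element at random.
function randomFrom(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Pair one entry from each list to form a single oracle line.
function generateOracle() {
  return `${randomFrom(times)}, ${randomFrom(elements)} ${randomFrom(actions)} ${randomFrom(destinations)}.`;
}
```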

A background animation was created for the program using translucent circles that randomly move around the screen. This was done using classes and arrays. To keep the design visually appealing, when a ball goes off the screen it reappears on the opposite side and continues moving along the same trajectory.
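The wrap-around behavior described above can be sketched as a small helper: when a ball's coordinate passes an edge, it re-enters from the opposite side while its velocity is untouched (the function name is mine, not from the sketch).

```javascript
// Wrap a coordinate so an object leaving one edge re-enters from the other.
function wrap(position, size) {
  if (position < 0) return position + size;
  if (position >= size) return position - size;
  return position;
}

// e.g. each frame: ball.x = wrap(ball.x + ball.vx, width);
```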

An oracle fades and a new one appears; this is done by changing the transparency of the text each frame. When the text becomes fully transparent, it is deleted and a new one is formed and displayed. When you click on the screen, the program also switches between light and dark mode, using the mouseClicked callback function and an if statement.

Code I’m proud of

for (let i = 0; i < currentLines.length; i++) {
  text(currentLines[i], width/2, startY + (i*spacing));
}

// Fading
textAlpha += fadeSpeed;

if (textAlpha < 0) {
  currentLines.length = 0;
  generateNewPoem();
  fadeSpeed = 2;
}

if (textAlpha > 400) {
  fadeSpeed = -2;
}

This is the code that generates the oracle and causes the fading effect. A for loop prints out the randomly generated text in a vertical format, and the if statements gradually increase the opacity of the text, then make it transparent until it eventually disappears.

Embedded Sketch

How it was made

The texts in the various arrays that are randomly selected were generated with ChatGPT, with the goal of creating more natural and consistent text.

Reflection

This was a good exercise in applying my knowledge of text manipulation, arrays, classes and other concepts learnt in lectures. The work can be improved by building on the background animations and changing the colors the text is printed in, to create a more visually pleasing effect.

Week 4 – Reading reflection

The reading shifted my perspective on design from focusing on the functionality of the product, which is the norm, to focusing on the user of whatever we are designing. I realized that designing an object or tool for the public requires consultation with other parties, and we should invest more effort into how the user interacts with the product than into the product design itself. The reading raises a concern, though: can this focus on the user’s interactions lead to a compromise in the quality or functionality of the work produced, and how do we find the right balance between functionality and interactivity?