Final Project: Emotionally Reactive Room

IM Showcase Gallery:

Concept

For the Final Project of Introduction to Interactive Media, I was presented with the challenging task of connecting software (p5.js) and hardware (Arduino). To achieve this, I decided to make a room that promotes sustainability and also reacts to human emotions.

Motivation

The project was inspired by my roommate, an active member of our campus’s sustainability student interest group. She constantly urges me to be more mindful of energy usage, especially my tendency to leave lights on unnecessarily. Moreover, she pointed out that I could be more emotionally present. That got me thinking: why not design a room that not only conserves energy but also tunes into human emotions? To achieve this, I incorporated a feature where both the music and lighting adapt to the mood of the person inside, creating an environment that feels sentient and responsive.

P5.js Part

In the p5.js segment of my project, I began by integrating the camera setup. I utilized ml5.js along with face-api.js for emotion detection. Face-api.js is particularly well-suited for this task due to its ability to analyze specific facial points to ascertain emotions. The library offers a range of emotions including neutral, happy, angry, sad, disgusted, surprised, and fearful. However, for my project’s scope, I’m focusing on neutral, happy, sad, disgusted, and surprised.

I designed the system to handle the analysis for just one individual at a time. Although the camera captures all faces within its view, it processes emotions only for the first face detected. To guide the user, I placed text on the top left of the canvas that displays the detected emotion and its corresponding probability percentage. This not only informs the user about the types of expressions to test but also enhances the interactive aspect of the project.

To make the experience more engaging, I created specific graphics for each emotion using Canva. These graphics dynamically appear and float around the canvas as the detected emotion changes. Additionally, I’ve incorporated adaptive music into the p5.js environment; the music alters based on the detected emotion, thus varying the room’s ambiance to match the user’s current emotional expression. I also added a fullscreen feature that activates when the user presses ‘f’, allowing both the canvas and video to fill the entire screen.

Graphics Used

Happy: Sad: Surprised: Disgusted: Neutral:

Arduino Part

For the Arduino component of my project, I’ve integrated three RGB LEDs and three pressure sensors. Each pressure sensor is linked to a corresponding LED, such that pressing a sensor activates its associated LED.

In p5.js, I am analyzing five expressions and converting the detection results into binary values, represented as either 1 or 0. These values are then transmitted to the Arduino. Based on the received data, if an expression is represented by a ‘1’, the corresponding RGB LED changes its color to match the detected expression.
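
Concretely, each serial frame is a single comma-separated line of the five flags, in the order used by mapExpressions() later in the code (neutral, happy, sad, disgusted, surprised). A few example frames:

// One line sent from p5.js to the Arduino per handshake cycle,
// in the order: neutral, happy, sad, disgusted, surprised
// "0,1,0,0,0\n"  -> happy detected
// "1,0,0,0,0\n"  -> neutral detected
// "0,0,0,1,0\n"  -> disgusted detected
let sendToArduino = expressionValues.join(",") + "\n"; // equivalent to the concatenation used in readSerial()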

User Testing:

Hardware and its Pictures:

This prototype of my room features a study table, a bed, and a sofa, each equipped with pressure sensors. Above, there are three LED chandeliers hanging from the ceiling.

How it Works:

Here’s how it works:  Firstly, we initiate serial communication between the Arduino and the p5.js script. The music and camera activate simultaneously with the start of the p5.js sketch. Based on the expressions detected by the camera, the graphics on the display and the music will dynamically change to match your current mood.

When you press a pressure sensor on the bed, chair, or sofa, the RGB LED positioned above that sensor lights up. Then, as your facial expressions change, the color of the LED changes accordingly: happiness triggers a green light, neutrality a blue light, sadness a red light, disgust a yellow light, and surprise a pink light. This creates a responsive environment that visually and audibly reflects your emotions.

Code:

Here’s the logic for activating the LED based on the sensor value and changing its color based on the expression detected in the p5.js sketch.

// Light up the corresponding LED only
if (sensor1Value < 30 ) {
  digitalWrite(redLED1, Sad || disgusted || surprised);
  digitalWrite(greenLED1, Happy || disgusted);
  digitalWrite(blueLED1, Neutral || surprised);
} else if (sensor2Value < 40) {
  digitalWrite(redLED2, Sad || disgusted || surprised);
  digitalWrite(greenLED2, Happy || disgusted);
  digitalWrite(blueLED2, Neutral || surprised);
} else if (sensor3Value < 40) {
  digitalWrite(redLED3, Sad || disgusted || surprised);
  digitalWrite(greenLED3, Happy || disgusted);
  digitalWrite(blueLED3, Neutral || surprised);
}
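
Reading the three digitalWrite() calls per LED together makes the color mapping clearer: each expression flag is 0 or 1, and OR-ing the flags per channel produces the colors described above.

// Per-channel logic (identical for all three LEDs):
//   red   = Sad || disgusted || surprised
//   green = Happy || disgusted
//   blue  = Neutral || surprised
//
// Resulting color for each single detected expression:
//   Happy     -> green only      (green)
//   Neutral   -> blue only       (blue)
//   Sad       -> red only        (red)
//   disgusted -> red + green     (yellow)
//   surprised -> red + blue      (pink)
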
P5.js Display

Surprised: Disgusted: Happy: Sad: Neutral:

Demonstration:

Part of the Project that I take the most pride in:

The part I’m most proud of is how I mapped the expression values to 0 and 1, based on the percentage of the emotion detected, and then stored them in an array. This simplification made it easier to send binary values to the Arduino. However, figuring out this code took some time, as I initially tried storing emotions and their associated values in a dictionary, which didn’t work.

// maps detected expressions to a set of predefined categories and assigns a binary value based on a threshold

function mapExpressions(expressions) {
    const expressionOrder = ['neutral', 'happy', 'sad', 'disgusted', 'surprised'];
    let expressionValues = [];

    expressionOrder.forEach(expression => {
//       if the value detected is more than 50%, make it 1; otherwise 0
        let value = expressions[expression] > 0.5 ? 1 : 0; 
        expressionValues.push(value);
    });

    return expressionValues;
}

Difficulties and Future Improvements:

The most challenging aspect of this project was establishing the serial communication between p5.js and Arduino, which took a solid two days of trial and error. Despite trying numerous approaches, nothing seemed to work until I created a duplicate file, which then functioned flawlessly without errors. Another significant challenge was the coding aspect. Although the code itself was not particularly complex, integrating music and graphics with the face-api was time-consuming, necessitating updates to the HTML file.

Additionally, I encountered difficulties with the pressure sensors. Initially, I used piezo sensors, but they produced inconsistent readings. I switched to force sensors, which provided more reliable results, although they required recalibration every five minutes, adding another layer of complexity to the project. I wrote two additional Arduino scripts to calibrate the sensors, which allowed me to run the Serial Monitor and check the pressure sensor values.
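
The calibration sketches themselves were simple; a minimal version along these lines (assuming the same A2–A4 sensor pins as the main program) just streams the raw readings to the Serial Monitor so the thresholds in the main sketch (e.g. < 30 and < 40) can be tuned:

// Minimal calibration sketch (assumes the same sensor pins, A2-A4, as the main program).
// Open the Serial Monitor at 9600 baud, note the resting vs. pressed readings,
// and adjust the thresholds in the main sketch accordingly.
int sensor1 = A2;
int sensor2 = A3;
int sensor3 = A4;

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print(analogRead(sensor1));
  Serial.print(',');
  Serial.print(analogRead(sensor2));
  Serial.print(',');
  Serial.println(analogRead(sensor3));
  delay(200);
}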

For future improvements, I would consider investing in better pressure sensors. Additionally, instead of relying on my computer’s speakers, I’d like to integrate an external speaker directly connected to the Arduino. This setup would enhance the overall functionality and user experience.

Code:

P5.js:
// Initializing variables and arrays

let faceapi;
let detections = [];
let expressionValues=[];
let video;
let canvas;
let happyEmojis=[];
let vx;
let vy;
let songs = [];
let currentSongIndex = -1; 

// loading graphics and music

function preload() {
  
    // Load the 23 emotion graphics (1.png ... 23.png)
    for (let i = 0; i < 23; i++) {
        happyEmojis[i] = loadImage((i + 1) + '.png');
    }
  
    songs[0] = loadSound('song1.mp3');
    songs[1] = loadSound('song2.mp3');
    songs[2] = loadSound('song3.mp3');
    songs[3] = loadSound('song4.mp3');
    songs[4] = loadSound('song5.mp3');
}

// Setting up the canvas and video settings
function setup() {
    canvas = createCanvas(windowWidth, windowHeight);
    canvas.id('canvas');
    video = createCapture(VIDEO);
    video.size(windowWidth, windowHeight);
    video.id('video');
  
//   initializes the face detection
  const faceOptions = {
    withLandmarks: true,
    withExpressions: true,
    withDescriptors: true,
    minConfidence: 0.5
  };

  //initialize the model: 
  faceapi = ml5.faceApi(video, faceOptions, faceReady);
  
  image1 = new Emoji(happyEmojis[0],random(0,width-250),0,1,1);
  image2 = new Emoji(happyEmojis[1],random(0,width-250),0,0.5,1);
  image3 = new Emoji(happyEmojis[2],random(0,width-250),0,0.5,1);
  image4 = new Emoji(happyEmojis[3],random(0,width-250),0,1,1.5);
  image5 = new Emoji(happyEmojis[4],random(0,width-250),0,1,0.5);
  image6 = new Emoji(happyEmojis[5],random(0,width-250),0,1,1);
  image7 = new Emoji(happyEmojis[6],random(0,width-250),0,1,1.5);
  image8 = new Emoji(happyEmojis[7],random(0,width-250),0,1,0.5);
  image9 = new Emoji(happyEmojis[8],random(0,width-250),0,2,1);
  image10 = new Emoji(happyEmojis[9],random(0,width-250),0,1,1.5);
  image11 = new Emoji(happyEmojis[10],random(0,width-250),0,1,0.5);
  image12 = new Emoji(happyEmojis[11],random(0,width-250),0,1,1.5);
  image13 = new Emoji(happyEmojis[12],random(0,width-250),0,2,1);
  image14= new Emoji(happyEmojis[13],random(0,width-250),0,1,2);
  image15= new Emoji(happyEmojis[14],random(0,width-250),0,1,1.5);
  image16= new Emoji(happyEmojis[15],random(0,width-250),0,1,1.5);
  image17 = new Emoji(happyEmojis[16],random(0,width-250),0,1,1);
  image18 = new Emoji(happyEmojis[17],random(0,width-250),0,1,1);
  image19 = new Emoji(happyEmojis[18],random(0,width-250),0,1,1.5);
  image20 = new Emoji(happyEmojis[19],random(0,width-250),0,1,0.5);
  image21 = new Emoji(happyEmojis[20],random(0,width-250),0,1,1.5);
  image22 = new Emoji(happyEmojis[21],random(0,width-250),0,1,0.5);
  image23 = new Emoji(happyEmojis[22],random(0,width-250),0,1,0.5);
}

// adjust canvas and video size when window is resized
function windowResized() {
    
    resizeCanvas(windowWidth, windowHeight);
    video.size(windowWidth, windowHeight);
}


function draw(){

  clear();
//   drawing expressions and graphics on the screen based on the detected emotion
  drawExpressions(detections, 20, 20, 14);
  if (expressionValues.length > 1 && expressionValues[1] === 1) { // happy
    image1.display();
    image1.update();
    image2.display();
    image2.update();
    image3.display();
    image3.update();
    image4.display();
    image4.update();
    image5.display();
    image5.update();
    
  
  }
  if (expressionValues.length > 1 && expressionValues[4] === 1) { // surprised
    image11.display();
    image11.update();
    image12.display();
    image12.update();
    image13.display();
    image13.update();
    image14.display();
    image14.update();
    
  
  }
  if (expressionValues.length > 1 && expressionValues[3] === 1) { // disgusted
    image15.display();
    image15.update();
    image16.display();
    image16.update();
    image17.display();
    image17.update();
    image18.display();
    image18.update();
    image23.display();
    image23.update();
    // playSong(2);
  }
  if (expressionValues.length > 1 && expressionValues[2] === 1) { // sad
    image7.display();
    image7.update();
    image8.display();
    image8.update();
    image9.display();
    image9.update();
    image10.display();
    image10.update();
    
  }
  if (expressionValues.length > 1 && expressionValues[0] === 1) { // neutral
    image6.display();
    image6.update();
    image19.display();
    image19.update();
    image20.display();
    image20.update();
    image21.display();
    image21.update();
    image22.display();
    image22.update();
    
  }

//   playing songs based on the emotion detected
  if (expressionValues.length > 1 && expressionValues[1] === 1) {
    
    playSong(3);
  } else if (expressionValues.length > 1 && expressionValues[4] === 1) {
    // play song 1
    playSong(0);
  } else if (expressionValues.length > 1 && expressionValues[3] === 1) {
    // play song 2
    playSong(1);
  } else if (expressionValues.length > 1 && expressionValues[2] === 1) {
    // play song 3
    playSong(2);
  } else if (expressionValues.length > 1 && expressionValues[0] === 1) {
    // play song 4
    playSong(4);
  }
  
}

function playSong(index) {
  //stop any currently playing song
  for (let i = 0; i < songs.length; i++) {
    if (i !== index && songs[i].isPlaying()) {
      songs[i].stop();
    }
  }

  // play the selected song
  if (!songs[index].isPlaying()) {
    songs[index].play();
  }
}

// class to handle the graphics
class Emoji {
    constructor(img,x,y,vx, vy) {
        this.img = img;
        this.x = x;
      this.y = y;
        this.vx = vx;
        this.vy = vy;
    }
    
    update() {
        this.x += this.vx;
        this.y += this.vy;
        // check for canvas boundaries
        if (this.x < -130 || this.x > width -200) this.vx *= -1;
        if (this.y < -110 || this.y > height -150) this.vy *= -1;
    }
// display the graphics
    display() {
        image(this.img, this.x, this.y, 500, 500);
    }
}

function keyTyped() {
  // $$$ For some reason on Chrome/Mac you may have to press f twice to toggle. Works correctly on Firefox/Mac
  if (key === 'f') {
    toggleFullscreen();
  }
}

// Toggle fullscreen state. Must be called in response
// to a user event (i.e. keyboard, mouse click)
function toggleFullscreen() {
  let fs = fullscreen(); // Get the current state
  fullscreen(!fs); // Flip it!
}

// Start detecting faces
function faceReady() {
  faceapi.detect(gotFaces);
}

// Got faces
function gotFaces(error, result) {
  if (error) {
    console.log(error);
    return;
  }
// now all the data is in detections
  detections = result; 
  
// make the background transparent
  clear();
  

  storeExpressions(detections); 
  faceapi.detect(gotFaces);
}

// maps detected expressions to a set of predefined categories and assigns a binary value based on a threshold

function mapExpressions(expressions) {
    const expressionOrder = ['neutral', 'happy', 'sad', 'disgusted', 'surprised'];
    let expressionValues = [];

    expressionOrder.forEach(expression => {
//       if the value detected is more than 50%, make it 1; otherwise 0
        let value = expressions[expression] > 0.5 ? 1 : 0; 
        expressionValues.push(value);
    });

    return expressionValues;
}

// store expressions 
function storeExpressions(detections) {
    if (detections.length > 0) {
//   for the first person in the list, map expressions
        let expressions = detections[0].expressions;
        expressionValues = mapExpressions(expressions);
        // console.log(expressionValues);

        
    }
}

// draws the percentage of each detected emotion in the top-left corner of the canvas
function drawExpressions(detections, x, y, textYSpace){
  if(detections.length > 0){
    
    let {neutral, happy, angry, sad, disgusted, surprised, fearful} = detections[0].expressions;
    textFont('Helvetica Neue');
    textSize(14);
    noStroke();
    fill(44, 169, 225);
// uses nf(value, left, right) to format numbers
    text("Neutral:       " + nf(neutral*100, 2, 2)+"%", x, y);
    text("Happiness: " + nf(happy*100, 2, 2)+"%", x, y+textYSpace);
    text("Sad:            "+ nf(sad*100, 2, 2)+"%", x, y+textYSpace*2);
    text("Disgusted: " + nf(disgusted*100, 2, 2)+"%", x, y+textYSpace*3);
    text("Surprised:  " + nf(surprised*100, 2, 2)+"%", x, y+textYSpace*4);
  
  }else{
    text("Neutral: ", x, y);
    text("Happiness: ", x, y + textYSpace);
    text("Sad: ", x, y + textYSpace*2);
    text("Disgusted: ", x, y + textYSpace*3);
    text("Surprised: ", x, y + textYSpace*4);
  }
}

function keyPressed() {
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
}

// This function will be called by the web-serial library
// with each new *line* of data. The serial library reads
// the data until the newline and then gives it to us through
// this callback function
function readSerial(data) {
  ////////////////////////////////////
  //READ FROM ARDUINO HERE
  ////////////////////////////////////
  console.log(expressionValues);
  if (data != null) {
    

    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
    let sendToArduino = expressionValues[0] + "," + expressionValues[1] + "," + expressionValues[2] + ","  + expressionValues[3] + ","  + expressionValues[4] + "\n";
    writeSerial(sendToArduino);
  }
}
Arduino:
// Define LED pin constants
int redLED1 = 12;   // First RGB LED, red pin
int greenLED1 = 11; // First RGB LED, green pin
int blueLED1 = 10;  // First RGB LED, blue pin

int redLED2 = 9;    // Second RGB LED, red pin
int greenLED2 = 8;  // Second RGB LED, green pin
int blueLED2 = 7;   // Second RGB LED, blue pin

int redLED3 = 6;    // Third RGB LED, red pin
int greenLED3 = 5;  // Third RGB LED, green pin
int blueLED3 = 4;   // Third RGB LED, blue pin

// Define sensor pin constants
int sensor1 = A2;  // First sensor
int sensor2 = A3;  // Second sensor
int sensor3 = A4;  // Third sensor

void setup() {
  Serial.begin(9600);
  pinMode(LED_BUILTIN, OUTPUT);

  // Set LED pins to output mode
  pinMode(redLED1, OUTPUT);
  pinMode(greenLED1, OUTPUT);
  pinMode(blueLED1, OUTPUT);

  pinMode(redLED2, OUTPUT);
  pinMode(greenLED2, OUTPUT);
  pinMode(blueLED2, OUTPUT);

  pinMode(redLED3, OUTPUT);
  pinMode(greenLED3, OUTPUT);
  pinMode(blueLED3, OUTPUT);

  // Start the handshake
  while (Serial.available() <= 0) {
    digitalWrite(LED_BUILTIN, HIGH); // on/blink while waiting for serial data
    Serial.println("0,0"); // send a starting message
    delay(300);            // wait 1/3 second
    digitalWrite(LED_BUILTIN, LOW);
    delay(50);
  }
}

void loop() {
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH); // LED on while receiving data
    int sensor1Value = analogRead(sensor1);  // Read first sensor
    int sensor2Value = analogRead(sensor2);  // Read second sensor
    int sensor3Value = analogRead(sensor3);  // Read third sensor

    int Neutral = Serial.parseInt();
    int Happy = Serial.parseInt();
    int Sad = Serial.parseInt();
    int disgusted = Serial.parseInt();
    int surprised = Serial.parseInt();

    if (Serial.read() == '\n') {
      // Reset all LEDs
      digitalWrite(redLED1, LOW);
      digitalWrite(greenLED1, LOW);
      digitalWrite(blueLED1, LOW);
      digitalWrite(redLED2, LOW);
      digitalWrite(greenLED2, LOW);
      digitalWrite(blueLED2, LOW);
      digitalWrite(redLED3, LOW);
      digitalWrite(greenLED3, LOW);
      digitalWrite(blueLED3, LOW);

      // Light up the corresponding LED only
      if (sensor1Value < 30 ) {
        digitalWrite(redLED1, Sad || disgusted || surprised);
        digitalWrite(greenLED1, Happy || disgusted);
        digitalWrite(blueLED1, Neutral || surprised);
      } else if (sensor2Value < 40) {
        digitalWrite(redLED2, Sad || disgusted || surprised);
        digitalWrite(greenLED2, Happy || disgusted);
        digitalWrite(blueLED2, Neutral || surprised);
      } else if (sensor3Value < 40) {
        digitalWrite(redLED3, Sad || disgusted || surprised);
        digitalWrite(greenLED3, Happy || disgusted);
        digitalWrite(blueLED3, Neutral || surprised);
      }

      Serial.print(sensor1Value < 30);
      Serial.print(',');
      Serial.print(sensor2Value < 40);
      Serial.print(',');
      Serial.println(sensor3Value < 40);
    }
  }
  // Optional: Use built-in LED to indicate the system is running
  digitalWrite(LED_BUILTIN, HIGH);
  delay(50);
  digitalWrite(LED_BUILTIN, LOW);
  delay(300);
}

 

Final Project Idea: Emotion-Based Room

For my final project, I am planning to develop an “Emotion-Based Room.” This innovative setup will utilize a webcam to capture the facial expressions of a user, employing machine learning models to recognize various emotions such as happiness, sadness, surprise, etc. I will use the ml5.js and face-api.js libraries to implement the emotion detection feature.

Based on the detected emotions, the room’s environment will dynamically adjust to enhance the user’s experience. Using p5.js, the system will control the playback of music to match the user’s current mood. Additionally, an Arduino system will regulate the room’s lighting to complement the emotional atmosphere.

The physical space will be modeled with essential furniture such as a sofa, bed, study table, and bathroom. I plan to incorporate custom-made pressure sensors on these furniture items to further personalize the room’s response. For example, if a person sits on the sofa, the system will interpret this as a mood for studying and adjust the lighting to a more focused setting. Similarly, lying on the bed will trigger the lights to turn off and the music to cease, facilitating a conducive environment for sleep.

This project aims to create a responsive living space that adapts not only to the expressed emotions of its occupant but also to their physical interactions within the room.

Arduino + p5js in class exercises (Luke and Gunjan)

For our first exercise, we used a light sensor as the analog input value for the movement of the ellipse on the p5js screen. The rVal from Arduino is used to plot the x coordinate of the ellipse.

Demo:

Circuit:

Code:

P5Js:

//Exercise 1 P5js Code
let ellipseX; // Variable to store ellipse's x-coordinate
function setup() {
  createCanvas(600, 400); // Create a canvas of 600x400 pixels
  ellipseX = width / 2; // Set initial x-coordinate of ellipse to middle of the screen
  noStroke(); // No stroke for the ellipse
}
function draw() {
  
  background(220); // Refresh background on each frame
  fill(0, 0, 255); // Set fill color to blue
  ellipse(ellipseX, height / 2, 50, 50); // Draw ellipse at current x-coordinate and middle of the screen
  
  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);}
}
function keyPressed() {
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }   
}
// this callback function
function readSerial(data) {
  ////////////////////////////////////
  //READ FROM ARDUINO HERE
  ////////////////////////////////////
  if (data != null) {
    // make sure there is actually a message
    // split the message
   
    let fromArduino = split(trim(data), ",");
    // if the right length, then proceed
    if (fromArduino.length == 1) {
      // only store values here
      // do everything with those values in the main draw loop
//values from light sensor roughly ranged between 0 and 500,000 so map them between 0 and width of the screen
//use the mapped value as x-coordinate of the ellipse
      ellipseX = map(data,0,500000, 0,width);
    }
    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
   let sendToArduino = fromArduino + "\n";
   writeSerial(sendToArduino);
  }
}

In future implementations, we may want the ball to move in a smoother fashion; we could add code to smooth out the path of the ball’s movement. Still, using the photosensor to control the ball reinforced our knowledge of coding in Arduino and of sending data from Arduino to p5.
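
Only the p5.js side is shown here. For reference, a minimal, hypothetical Arduino sketch for this kind of one-value exchange could look like the following (the pin is assumed, and this is a simplified version rather than our exact code; as the comment in readSerial notes, our actual readings ranged up to roughly 500,000, so the map() bounds must match whatever the Arduino really sends):

// Hypothetical Arduino side for Exercise 1 (simplified; pin assumed).
// Sends one light-sensor reading per line; p5.js maps it to the ellipse's x coordinate.
const int lightPin = A0;   // assumed photosensor pin

void setup() {
  Serial.begin(9600);
  // Handshake: keep announcing until p5.js replies
  while (Serial.available() <= 0) {
    Serial.println(analogRead(lightPin));
    delay(300);
  }
}

void loop() {
  while (Serial.available()) {
    Serial.read();                        // consume the line echoed back by p5.js
  }
  Serial.println(analogRead(lightPin));   // one reading per line
  delay(20);
}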

Exercise 2:

https://drive.google.com/file/d/18aedgmMdG_DP-2rTXQYxR1kvSSRr9CIY/view

Circuit:

P5js code:

let rVal = 0;
let alpha = 255;
let left = 0; // True (1) if mouse is being clicked on left side of screen
let right = 0; // True (1) if mouse is being clicked on right side of screen
let slider;
function setup() {
  createCanvas(640, 480);
  textSize(18);
  slider = createSlider(0,255,0);
  slider.position(50,50);
}
function draw() {
  // one value from Arduino controls the background's red color
  background('white');
  // if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  // } else {
  //   text("Connected", 20, 30);
  //   // Print the current values
    // text('rVal = ' + str(rVal), 20, 50);
  //   text('alpha = ' + str(alpha), 20, 70);
  // }
  // click on one side of the screen, one LED will light up
  // click on the other side, the other LED will light up
  // if (mouseIsPressed) {
  //   if (mouseX <= width / 2) {
  //     left = 1;
  //   } else {
  //     right = 1;
  //   }
  // } else {
  //   left = right = 0;
  // }
}
function keyPressed() {
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
}
// This function will be called by the web-serial library
// with each new *line* of data. The serial library reads
// the data until the newline and then gives it to us through
// this callback function
function readSerial(data) {
  ////////////////////////////////////
  //READ FROM ARDUINO HERE
  ////////////////////////////////////
  if (data != null) {
    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
    let sendToArduino = slider.value() + "\n";
    console.log(slider.value());
    writeSerial(sendToArduino);
    
    }
}

In terms of future improvements for this one, we may want to control multiple LEDs with a single slider, or add more sliders to control several LEDs. Still, the exercise helped us think through writing the code for transferring data to the Arduino.
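
The Arduino side isn't reproduced here; a minimal, hypothetical sketch for this exercise could receive the slider value once per line and use it as the LED brightness (the PWM pin is assumed):

// Hypothetical Arduino side for Exercise 2 (a sketch of the idea, not necessarily our exact code).
const int ledPin = 9;  // assumed: any PWM-capable pin

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
  // Handshake: keep announcing until p5.js replies with a slider value
  while (Serial.available() <= 0) {
    Serial.println("0");
    delay(300);
  }
}

void loop() {
  while (Serial.available()) {
    int brightness = Serial.parseInt();     // slider value (0-255) sent by p5.js
    if (Serial.read() == '\n') {
      analogWrite(ledPin, constrain(brightness, 0, 255));
      Serial.println(brightness);           // reply so p5.js sends the next value
    }
  }
}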

Exercise 3:

Circuit:

p5js code:

let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let rVal ;
function setup() {
  createCanvas(640, 360);
  noFill();
  position = createVector(width/2, 0);
  velocity = createVector(0,0);
  acceleration = createVector(0,0);
  gravity = createVector(0, 0.5*mass);
  wind = createVector(0,0);
}
function draw() {
  background(255);
  applyForce(wind);
  applyForce(gravity);
  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity);
  acceleration.mult(0);
  fill("blue");
  ellipse(position.x,position.y,mass,mass);
  if (position.y > height-mass/2) {
      velocity.y *= -0.9;  // A little dampening when hitting the bottom
      position.y = height-mass/2;
    }
  if (position.y == height - mass/2) {
      right = 1;
  } else {
      right = 0;
  }
  if (rVal > 522) {
    wind.x = 1;
} else if (rVal < 502) {
    wind.x = -1;
} else {
    wind.x = 0;
}
  
  // if (!serialActive) {
  //   text("Press Space Bar to select Serial Port", 20, 30);
  // } else {
  //   text("Connected", 20, 30);
  //   // Print the current values
  //   text('rVal = ' + str(rVal), 20, 50);
  //   text('alpha = ' + str(alpha), 20, 70);
  // }
}
function applyForce(force){
  // Newton's 2nd law: F = M * A
  // or A = F / M
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}
// if (position.y == height - mass/2) {
    
//     //   left = 1;
//     // } else {
//       right = 1;
//     // }
//   } else {
//     // left = right = 0;
//   right = 0;
//   }
function keyPressed(){
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
  // if (keyCode==LEFT_ARROW){
  //   wind.x=-1;
  // }
  // if (keyCode==RIGHT_ARROW){
  //   wind.x=1;
  // }
  if (keyCode==ENTER){
    // mass=random(15,80);
    position.y=-mass;
    velocity.mult(0);
  }
}
function readSerial(data) {
  ////////////////////////////////////
  //READ FROM ARDUINO HERE
  ////////////////////////////////////
  if (data != null) {
    // make sure there is actually a message
    // split the message
    rVal=data;
    print(rVal);
    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
    let sendToArduino = right + "\n";
    writeSerial(sendToArduino);
  }
}

Challenges and improvements: we had no problem making the LED blink every time the ball touches the ground. But we did face some initial challenges in controlling the wind with the potentiometer. The ball kept moving to the left and we couldn’t figure out how to control it no matter what we tried. In the end, we figured out that the issue lay in the mapping and in the snippets of code we send from p5 to the Arduino. For improvements, we want to increase the speed of the ball as controlled by the potentiometer; at the moment, it moves rather slowly.
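
As with the other exercises, only the p5.js side is shown above; a hypothetical Arduino sketch along these lines would send the potentiometer reading for the wind and light the LED while p5.js reports the ball is on the ground (pins are assumed):

// Hypothetical Arduino side for Exercise 3 (a sketch of the idea, not necessarily our exact code).
const int potPin = A0;   // assumed potentiometer pin (wind control)
const int ledPin = 2;    // assumed LED pin (lights when the ball touches the ground)

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
  // Handshake: keep announcing until p5.js replies
  while (Serial.available() <= 0) {
    Serial.println(analogRead(potPin));
    delay(300);
  }
}

void loop() {
  while (Serial.available()) {
    int onGround = Serial.parseInt();       // "right" flag sent by p5.js
    if (Serial.read() == '\n') {
      digitalWrite(ledPin, onGround ? HIGH : LOW);
      Serial.println(analogRead(potPin));   // send the next wind reading (0-1023)
    }
  }
}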

Week 11: Reading Reflection

Drawing upon the insightful critique presented in Bret Victor’s “A Brief Rant on the Future of Interaction Design,” I find myself resonating deeply with the underlying message of reimagining our interaction paradigms. Victor’s compelling argument against the myopic vision of future interfaces primarily reliant on fingertip interactions challenges us to broaden our conceptual horizons. Reflecting on the assertions made, it becomes evident that while touchscreen technology represents a significant advancement, it barely scratches the surface of our interactive potential.

Victor’s emphasis on utilizing the entire hand—not just the fingertips—invites us to explore the rich possibilities of tactile and gestural inputs. This approach not only enhances the depth of human-computer interaction but also aligns with ergonomic principles that advocate for natural, strain-free movements. The focus on whole-hand interaction could lead to more intuitive and physically engaging interfaces that leverage the full spectrum of human dexterity and sensory feedback.

Moreover, the notion of universal design emerges as a crucial consideration. By aspiring to create interfaces that are accessible to all, including those with different abilities, designers can cultivate inclusivity. This idea challenges us to think beyond the conventional target audience and design products that cater to a diverse user base without compromising on functionality or aesthetic appeal. Such an approach not only broadens the market reach but also reinforces the social responsibility of design.

In envisioning the future of interaction design, we should indeed consider a return to the basics, as suggested in the readings. The fundamental act of hands manipulating physical objects has shaped human tool use throughout history; thus, incorporating this intrinsic aspect of human behavior into modern technology could revolutionize how we interact with digital environments. It is not merely about enhancing existing technology but redefining what interaction entails in the digital age, moving from passive touchscreen gestures to dynamic, multi-dimensional engagements.

In summary, while advanced technologies like 3D cameras and AI-driven interfaces continue to push the boundaries, the essence of interaction design should remain grounded in the natural human experience. Emphasizing the full potential of our hands not only respects our biological heritage but also opens up a panorama of possibilities that could redefine the future landscape of technology, making it more intuitive, inclusive, and fundamentally human-centric. This perspective not only aligns with Victor’s critique but also propels it forward, suggesting a paradigm where technology complements rather than constrains human capabilities.

Assignment 7: Automatic Musical Doorbell

Concept

For the 7th assignment in our Introduction to Interactive Media course, we were tasked with creating a musical instrument that incorporates at least one digital and one analog sensor. After brainstorming various ideas, we decided to design an interactive project that combines functionality with musical creativity. Our concept is an “Automatic Musical Doorbell”.

Our concept envisions a scenario where, as a person approaches and comes within 20 centimeters of the distance sensor, the system is triggered to play a melodious doorbell tune. This immediate and musical response not only announces the presence of someone at the door but also enhances the interaction with a harmonious greeting.

Implementation

In the development of our interactive doorbell project, we aimed to create a distinctive auditory experience that went beyond the typical doorbell chime. To achieve this, we programmed the piezo buzzer to emit a unique melody rather than a standard tone. This melody was carefully chosen to be pleasant and welcoming, enhancing the interactive aspect of our device.

To further enrich the sonic experience, we introduced physical elements that function as percussive instruments. Specifically, we utilized a steel bottle and a glass cup, both positioned to serve as drums. These items were chosen for their resonant qualities and the distinct sounds they produce when struck.

The action of striking these improvised drums is controlled by a servo motor, to which a drumstick is attached. As the servo motor rotates in response to the proximity detected by the distance sensor, the attached drumstick swings. It hits the steel bottle on one side and the glass cup on the other.  The combination of this sound with the melody from the buzzer results in a complex musical note that is triggered by the presence of a person within the sensor’s 20 cm range.

The entire system is designed to be responsive: as soon as the ultrasonic distance sensor detects something within its range, both the servo motor and the buzzer are activated, collaboratively producing a musical output. This not only signals the presence of a visitor but does so in an unexpectedly musical way, turning a simple functional alert into an engaging and enjoyable experience.

Demo Video

A demo video, in detail, can be seen here:

Pictures of Circuit

Difficulties and Limitations

One of the significant challenges we faced during the implementation of our project was selecting the appropriate materials for the drum components. The process of determining the ideal materials involved extensive trial and error. We experimented with various types of cups and materials for the drumstick to achieve the desired sound quality and durability.

Looking ahead, for future iterations of the project, we aim to explore a broader range of frequencies for the buzzer, allowing for more diverse musical expressions. Additionally, adjusting the angles and speeds of the servo motor holds potential for creating a more dynamic and visually appealing interaction. These enhancements will not only improve the functionality of our device but also enrich the user experience, making each interaction with the doorbell a unique and enjoyable moment.

Code

#include <Servo.h>

// Constants for the pins
const int buzzerPin = 4;
const int servoPin = A5;
const int triggerPin = 6;
const int echoPin = 5;

// Create a Servo object
Servo myServo;

// Variables for the distance measurement
long duration;
int distance;

void setup() {
  // Set the buzzer pin as output
  pinMode(buzzerPin, OUTPUT);
  // Initialize the servo
  myServo.attach(servoPin);
  // Set the ultrasonic sensor pins
  pinMode(triggerPin, OUTPUT);
  pinMode(echoPin, INPUT);
  // Begin serial communication (for debugging purposes)
  Serial.begin(9600);
}

void loop() {
  // Measure the distance from the ultrasonic sensor
  digitalWrite(triggerPin, LOW);
  delayMicroseconds(2);
  digitalWrite(triggerPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(triggerPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = duration * 0.034 / 2; 

  // Check if something is within the desired range
  if (distance < 20) { 
    // Rotate servo to 60 degrees
    myServo.write(60);
    // delay(10);
    // myServo.write(50);
    delay(100); 
    // Rotate servo back to 0 degrees
    myServo.write(0);
    delay(250); 

    // Play a melody
    playMelody();
  } else {
    // Keep the servo at 0 degrees when not activated
    myServo.write(0);
    // No sound
    noTone(buzzerPin);
  }


  // Serial.print("Distance: ");
  // Serial.println(distance);
  
  delay(100); 
}

void playMelody() {
  // Note frequencies, e.g., C4, D4, E4, F4
  int melody[] = {262, 294, 330, 349};
  int noteDurations[] = {200, 200, 200, 200};  // duration of each note in milliseconds

  for (int thisNote = 0; thisNote < 4; thisNote++) {
    // To calculate the note duration, take one second divided by the note type.
    //e.g., quarter note = 1000 / 4, eighth note = 1000/8, etc.
    int noteDuration = noteDurations[thisNote];
    tone(buzzerPin, melody[thisNote], noteDuration);

    // To distinguish the notes, set a minimum time between them.
    // The note's duration + 30% seems to work well:
    int pauseBetweenNotes = noteDuration * 1.30;
    delay(pauseBetweenNotes);

    // Stop the tone playing:
    noTone(buzzerPin);
  }
}

 

 

Week 9-Analog Input & Output

Concept

As part of the 9th Assignment of Intro to IM, we were tasked with the objective of reading input from at least one analog sensor, and at least one digital sensor (switch). This data would then be used to control at least two LEDs, one in a digital fashion and the other in an analog fashion, in some creative way.

The core idea of my project is to create an interactive light experience using a combination of analog and digital inputs to control an RGB LED in a creative and visually appealing manner. Initially, the RGB LED displays a steady red color, which gradually transitions through a spectrum of colors, creating a calming and dynamic visual effect. This smooth color transition continues indefinitely, creating a mesmerizing ambient display.

The interaction becomes more engaging with the introduction of a user-controlled digital button and a light sensor. The button, when pressed, shifts the mode of the LED from its continuous color transition to a responsive blinking state. In this state, the color of the LED’s blinking is determined by the ambient light level measured by the sensor. Specifically (a minimal sketch of this logic follows the list):

  • Below 300 units: The LED blinks in red, indicating low light conditions.
  • Between 300 and 900 units: The LED blinks in blue, corresponding to moderate light levels.
  • Above 900 units: The LED blinks in green, signifying high light intensity.
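
A minimal Arduino sketch of just this threshold logic might look like the following (pin numbers are assumed for illustration; the button-driven mode switch and the continuous color fade are omitted):

// Hypothetical sketch of the blink-state logic only (assumed pins; button/mode handling omitted).
const int lightPin = A0;                            // assumed light sensor pin
const int redPin = 9, greenPin = 10, bluePin = 11;  // assumed RGB LED pins

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

// Flash the RGB LED once in the given color
void blinkColor(int r, int g, int b) {
  analogWrite(redPin, r);
  analogWrite(greenPin, g);
  analogWrite(bluePin, b);
  delay(200);
  analogWrite(redPin, 0);
  analogWrite(greenPin, 0);
  analogWrite(bluePin, 0);
  delay(200);
}

void loop() {
  int light = analogRead(lightPin);   // roughly 0-1023 on an Uno
  if (light < 300) {
    blinkColor(255, 0, 0);            // low light: blink red
  } else if (light <= 900) {
    blinkColor(0, 0, 255);            // moderate light: blink blue
  } else {
    blinkColor(0, 255, 0);            // bright light: blink green
  }
}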

Implementation

Pictures of Circuit

With this, I produced the following circuit:

Demo Video

A demo video, in detail, can be seen here:

 

Difficulties & Improvements

While the concept of the color transition was straightforward, implementing it in Arduino presented significant challenges. Determining the correct method for transitioning from one color to another smoothly, without abrupt changes, was particularly tricky. This aspect of the project required careful tweaking of the code to ensure that the color shifts were visually pleasing.

void smoothColorTransition() {
  // Fade colors smoothly
  if (redValue > 0 && blueValue == 0) {
    redValue -= step;
    greenValue += step;
  }
  if (greenValue > 0 && redValue == 0) {
    greenValue -= step;
    blueValue += step;
  }
  if (blueValue > 0 && greenValue == 0) {
    blueValue -= step;
    redValue += step;
  }
}

Despite these challenges, I thoroughly enjoyed this part of the project. It was rewarding to see the seamless color transitions come to life. Looking ahead, I plan to expand this project by adding an additional LED and another button to enhance interactivity and complexity, making the experience even more engaging.

 

Concept:

The concept behind my Unusual Switch project is to transform a regular room door into an innovative electrical switch. This unconventional approach leverages the movement of the door to control the on and off state of an LED light.

Demonstration:

Circuit:

Execution:

I set up a simple circuit with a resistor and an LED, starting with a ground wire connected to the LED, which then connects to a resistor. To integrate the door into this circuit, I attached one wire wrapped in aluminum foil to the door frame and another, connected to 5V, similarly wrapped and placed on the door. A long strip of aluminum foil on the door’s moving part completes the circuit when the door is closed, lighting up the LED. Open the door, and the circuit breaks, turning off the LED. It’s a neat little setup that turns the action of opening and closing a door into an on-off switch for the light.

Week 8 Response

The exploration of Margaret Hamilton’s contributions in “Her Code Got Humans on the Moon—And Invented Software Itself” and Donald A. Norman’s insights in “Emotion and Design” illuminate two profound yet interconnected realms of human ingenuity. Hamilton’s journey as a trailblazer in software engineering not only showcases the monumental role of software in space exploration but also reflects the broader implications of pioneering in technology. Her work, at a time when the term “software engineering” was not even coined, exemplifies the essence of innovation—venturing into the unknown with a blend of expertise, foresight, and courage. The narrative of Hamilton and her team meticulously crafting the software that would guide humans to the moon underlines the criticality of software in complex systems, a lesson that transcends industries and eras.

On the flip side, Norman’s discourse on the interplay between emotion and design serves as a philosophical counterpart to Hamilton’s empirical achievements. While Hamilton’s story underscores the technical precision and rigor necessary in engineering, Norman’s analysis delves into the subjective experience of interacting with technology. His argument that aesthetics and usability are not mutually exclusive but rather complementary forces offers a nuanced understanding of human interaction with technological products. Norman’s assertion that “attractive things work better” because they engender positive emotional responses, does not detract from usability but rather enhances it by acknowledging the human element in design.

In synthesizing insights from both readings, a critical reflection emerges on the symbiotic relationship between form and function, theory and practice, emotion and utility. Hamilton’s pioneering work in software engineering not only advanced our capabilities in space exploration but also laid the groundwork for considering how software interfaces—the very nexus between human and machine—need to be designed with both functionality and user experience in mind. This parallels Norman’s emphasis on the importance of emotional resonance in design, suggesting that the most effective designs are those that harmoniously blend aesthetic appeal with functional efficiency.

Furthermore, both narratives prompt a reflection on the role of diversity and interdisciplinary approaches in innovation. Hamilton, an outlier in her field, and Norman, bridging psychology and design, exemplify how bringing diverse perspectives to bear on challenges can lead to breakthrough solutions. This raises pertinent questions about how current and future innovations can benefit from integrating insights from seemingly disparate fields, fostering an environment where diversity of thought is not just encouraged but seen as essential to problem-solving.

Midterm Project: Greenhaven

Inspiration

I grew up in a village without a park, a place where families, friends, and pets gather to enjoy nature and each other’s company. To experience the joy of a park, I had to travel to nearby villages or cities. This sparked my love for parks and inspired my midterm project: creating a virtual park. My goal is to recreate the sense of community and fun found in a park, from children playing on swings to people walking their dogs and old friends catching up. This project is my way of bringing the park experience to those who, like me, have always admired it from afar.

Concept

My project begins with a start screen that warmly welcomes users to the virtual park experience, featuring a “Start” button that leads them into the park scene upon interaction. The journey into the virtual park commences with a meticulously designed background, setting the stage for the vibrant park life that unfolds.

Central to the park are two swings, each with figures seated and in motion, bringing a dynamic element of playfulness. These swings, animated to sway, add a lively atmosphere to the park scene. Adding to the ambiance is a bird, animated using a sprite sheet to simulate realistic motion. A click from the user sends the bird flying off the canvas, illustrating the interactive nature of this digital environment.

Music enriches the park experience through a jukebox feature. Clicking on the jukebox transitions the scene from the park to a Radio view, magnifying the jukebox and revealing its controls. The jukebox offers three buttons: one to stop the music, providing users control over the ambiance, and two others to navigate through the playlist, enhancing the immersive experience with auditory choices.

The park’s inhabitants include a sprite-sheet-based man, whom users can direct around the park. Movement is thoughtfully restricted, and the character is scaled dynamically to create a depth effect, simulating closer or further distance from the viewer as he moves vertically across the screen. This effect adds a layer of realism to the virtual exploration. A girl performing yoga serves as another point of interest. Her form transitions every 2 seconds.
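
As a rough illustration (with assumed names and numbers, not the project’s exact code), the depth scaling and the timed pose change can be sketched in p5.js like this:

// Rough sketch of the two effects described above (assumed names, not the project's code).

// Depth effect: scale the character according to his vertical position on the canvas,
// so he appears smaller (farther away) near the top and larger (closer) near the bottom.
function drawMan(manImg, x, y) {
  let s = map(y, height * 0.5, height, 0.5, 1.2);
  image(manImg, x, y, manImg.width * s, manImg.height * s);
}

// Timed pose change: switch the yoga frame every 2 seconds.
function drawYogaGirl(yogaFrames, x, y) {
  let frameIndex = floor(millis() / 2000) % yogaFrames.length;
  image(yogaFrames[frameIndex], x, y);
}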

Full Screen Canvas:

https://editor.p5js.org/gg2713/full/9wcrgqffm

Code I’m Proud of:

This code is my favorite part of the project because it encapsulates a significant learning curve and represents a turning point in the development process. Mastering the use of the lerp function for scaling the swing wasn’t straightforward; it took a considerable amount of time and experimentation to get right. However, once I figured it out, everything else seemed to fall into place more easily.

 

// Oscillate the scale between 0.5 and 1.2 to simulate forward and backward motion
  this.swingScale = lerp(0.4, 0.7, (sin(this.angle) + 1) / 2);
  this.angle += this.angleVelocity;
}

display() {
  // Swing frame
  stroke(139, 69, 19); 
  strokeWeight(10); 
// Left side of the frame
  line(this.x -50, this.y - this.frameHeight, this.x -50, this.y); 
  // Right side of the frame
  line(this.x + 50, this.y - this.frameHeight, this.x+ 50, this.y); 
// Top of the frame
  line(this.x -50, this.y - this.frameHeight, this.x +50, this.y - this.frameHeight); 

// Start a new drawing state
  push(); 
// Translate to the position of the swing seat
  translate(this.x, this.y-15); 
// Apply scale transformation
  scale(this.swingScale); 
  fill(160, 82, 45); 
// Seat
  rect(0, 0, this.width, this.height); 
  pop(); 
  
// Swing ropes 
  stroke(160, 82, 45); 
// Scale the stroke weight to simulate depth
  strokeWeight(2 * this.swingScale); 
// Left rope
  line(this.x-20, this.y - this.frameHeight, this.x -20 * this.swingScale, this.y - this.height/2-10 * this.swingScale); 
 // Right rope
  line(this.x + this.width/2, this.y - this.frameHeight, this.x + this.width/2 * this.swingScale, this.y-10 - this.height/2 * this.swingScale);
  if (this.figure) {
    this.figure.display();
  }
}

Another piece of code that I take great pride in is the function for handling mouse interactions, specifically the logic that enables switching between scenes. Although the code now appears straightforward, reaching this level of simplicity and effectiveness was a journey marked by significant effort and learning. Here’s the code:

function mousePressed() {
  
  if (scene === 'start') {
    // Check if the Start button is clicked
    if (mouseX >= width / 2 - 50 && mouseX <= width / 2 + 50 &&
        mouseY >= height / 2+180 && mouseY <= height / 2 + 250) {
      scene = 'park'; // Change the scene to the park
    }
  } else if (scene === 'park') {
    radio.click(mouseX, mouseY);
    // Check if the bird is clicked
    if (mouseX >= bird.x && mouseX <= bird.x + 64 && mouseY >= bird.y && mouseY <= bird.y + 64) {
      bird.fly(); // Make the bird fly
    }
  } else if (scene === 'radio') {
    if (mouseX >= 10 && mouseX <= 110 && mouseY >= height - 60 && mouseY <= height - 10) {
      scene = 'park'; 
      radio.isClicked = false;
    }
  
    if (mouseX > 523 && mouseX < 578 && mouseY > 372 && mouseY < 407) {
      radio.stopSong(); 
    }
    

    if (mouseX > 611 && mouseX <667 &&
        mouseY > 372 && mouseY < 407) {
      radio.changeSong();
    }
  }
  
}

Key Challenges Faced 

  1. Swing Depth Effect: One of the initial challenges was to create a depth effect for the swings. My first approach involved altering the coordinates, which did not yield the desired outcome. Eventually, the use of the lerp function for scaling the swings provided the solution.
  2. Scene Transitions: Transitioning between scenes (start to park, park to radio) was difficult. It took numerous trials to manage application states effectively for smooth transitions.
  3. Jukebox Functionality: The jukebox component was a multifaceted challenge. Initially, assembling its structure and design was time-intensive, requiring attention to detail to accurately represent its features. Beyond its appearance, integrating interactive functions, such as changing songs and stopping playback through radio buttons, added another layer of complexity.
  4. Animating a Running Girl: I wanted to incorporate a running girl into the park. However, I was not able to achieve smooth transitions between frames; the frames for the running motion changed too abruptly, leading to an unnatural appearance.

Future Aspects:

The journey of crafting the virtual park has been incredibly fulfilling, yet I see so much more potential for it. Right now, the park feels a bit too quiet for my liking; it’s missing the hustle and bustle of daily park life. I’m dreaming of adding animations of families spreading out picnic blankets, kids chasing each other around, and elderly couples enjoying quiet conversations on benches. These little touches, I believe, will truly breathe life into the park.

One of my initial ambitions was to animate a girl running through the park. Sadly, getting her movement just right turned out to be trickier than I expected, and I had to set that idea aside. But, it’s still something I’m passionate about tackling. Seeing the project come to life has been incredibly rewarding, and it’s only fueled my desire to dive back in and add even more depth and interaction to this virtual space. The feedback has been heartening, and honestly, I can’t wait to get started on bringing these new ideas to life. The virtual park project isn’t just a task for me; it’s a passion project that I’m excited to develop further.

REFERENCES:

https://intro.nyuadim.com/author/mk7592/

 

Week 6 Reading Response

Diving into the article about computer vision for artists and designers felt like opening a door to a room where art meets science in the most fascinating way. I’ve always thought of computer vision as something you’d find in sci-fi movies or high-tech security systems, not something that could be part of creating art. The idea that artists are using this technology to make interactive pieces where the artwork changes based on how people move around it is pretty cool. It’s like the art is alive, reacting to us just as much as we react to it.

Reading about projects like Videoplace and Sorting Daemon really got me thinking. It’s one thing to look at a painting on a wall, but it’s something entirely different to be part of the art itself. The thought that my movements could influence an art piece, creating something unique each time, is both exciting and a bit mind-blowing. It’s not just about observing; it’s about participating, and that changes the whole experience of what art can be.

The technical side of things, like how computer vision can track movements and interpret them into visual changes in an art piece, was a bit complex but also intriguing. I didn’t realize so much went into making these installations work. It’s not just about having a good idea for an art piece; it’s also about solving problems, like making sure the lighting is right so the cameras can see what they need to. This made me appreciate the art even more, knowing the blend of creativity and technical skill that goes into it.

However, what stuck with me the most was thinking about the bigger picture—like what it means for a machine to watch us and then create art based on that. It’s a bit like the machine is an artist too, interpreting our actions in its own digital way. And while it’s amazing, it also raises questions about privacy and how much we’re comfortable with machines “seeing.”

Overall, the article was a peek into a future where art and technology blend in ways that make the audience a part of the artwork. It’s a reminder that creativity knows no bounds, especially when artists start playing around with tech to see what new forms of expression they can discover. It makes me wonder what other surprises the art world has in store as technology keeps advancing.