Final Project | Some ideas

I’ve always wanted to create something related to gaze tracking—mapping the intangible act of looking into something tangible that showcases the power of staring. It’s about turning private intention into public expression.

Idea 1:
Develop a human body figure in p5.js and use p5 to detect and record gaze (eyeball) movements as the user observes the figure. Then, create a physical representation of the body figure and embed LEDs in different sections. p5.js would send gaze data to an Arduino, causing the LEDs to blink in the areas the user’s gaze travels to. Simultaneously, a speaker could play audio, creating an interactive experience that visualizes and vocalizes the gaze.
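As a very rough sketch of the p5 side (the region grid and message format are placeholders, and gazeX/gazeY would come from whatever eye-tracking library I end up using), the gaze position could be quantized into body regions and sent as an index for the Arduino to map onto an LED:

// Hypothetical helper: quantize the gaze into a 2x3 grid over the body figure
// and send the region index over serial, one line per reading.
// serialActive and writeSerial come from the serial helpers used in the exercises below.
function sendGazeRegion(gazeX, gazeY) {
  let col = floor(map(gazeX, 0, width, 0, 2));  // 0 or 1
  let row = floor(map(gazeY, 0, height, 0, 3)); // 0, 1, or 2
  let region = constrain(row * 2 + col, 0, 5);  // region index 0..5
  if (serialActive) writeSerial(region + "\n"); // Arduino blinks LED #region
}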

Idea 2:
This concept shifts from exploring the male gaze and fetishization to a more practical application: an eye-blink sensor for preventing drowsy driving. My cousin got us into a car accident last year because he fell asleep while driving, which inspired this idea. Although less creative, it’s highly practical. The system would include a speaker that plays a reminder whenever the user closes their eyes, encouraging them to stay alert. This device would be designed to be portable, so it can easily attach to any pair of glasses.
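A first sketch of the alert logic might look like this (the sensor type, pins, threshold, and signal polarity are all assumptions to be tested):

// Hypothetical wiring: IR eye-blink sensor on pin 2, piezo speaker on pin 8
const int blinkSensorPin = 2;
const int speakerPin = 8;
const unsigned long drowsyThreshold = 1500; // ms of closed eyes before the alert

unsigned long closedSince = 0;

void setup() {
  pinMode(blinkSensorPin, INPUT);
  pinMode(speakerPin, OUTPUT);
}

void loop() {
  // polarity depends on the sensor: assume HIGH = eyes closed
  bool eyesClosed = digitalRead(blinkSensorPin) == HIGH;
  if (eyesClosed) {
    if (closedSince == 0) closedSince = millis();
    else if (millis() - closedSince > drowsyThreshold) {
      tone(speakerPin, 1000, 500); // play the reminder tone
    }
  } else {
    closedSince = 0; // eyes open: reset the timer
    noTone(speakerPin);
  }
}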

Assignment 8 | 3 exercises of Arduino & P5 (Jheel & Linda)

EXERCISE 01: ARDUINO TO P5 COMMUNICATION 

Make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on Arduino is controlled by p5.

The ball moves to the left when it’s dark; to the right when it’s bright.

  • Please find the demonstration video here

Exercise 1

  • Please find the p5.js sketch here 
let port, reader, writer;
let serialActive = false;
// declare the stream helpers up front so they aren't implicit globals
let textDecoder, textEncoder, readableStreamClosed, writableStreamClosed;

async function getPort(baud = 9600) {
  let port = await navigator.serial.requestPort();
  await port.open({ baudRate: baud });

  // create read & write streams
  textDecoder = new TextDecoderStream();
  textEncoder = new TextEncoderStream();
  readableStreamClosed = port.readable.pipeTo(textDecoder.writable);
  writableStreamClosed = textEncoder.readable.pipeTo(port.writable);

  reader = textDecoder.readable
    .pipeThrough(new TransformStream(new LineBreakTransformer()))
    .getReader();
  writer = textEncoder.writable.getWriter();

  return { port, reader, writer };
}

class LineBreakTransformer {
  constructor() {
    this.chunks = "";
  }

  transform(chunk, controller) {
    this.chunks += chunk;
    const lines = this.chunks.split("\r\n");
    this.chunks = lines.pop();
    lines.forEach((line) => controller.enqueue(line));
  }

  flush(controller) {
    controller.enqueue(this.chunks);
  }
}

async function setUpSerial() {
  noLoop();
  ({ port, reader, writer } = await getPort());
  serialActive = true;
  runSerial();
  loop();
}

async function runSerial() {
  try {
    while (true) {
      const { value, done } = await reader.read();
      if (done) {
        reader.releaseLock();
        break;
      }
      readSerial(value);
    }
  } catch (e) {
    console.error(e);
  }
}

let rVal = 0; // Value from photoresistor

function setup() {
  createCanvas(640, 480);
  textSize(18);
}

function draw() {
  background(245, 245, 200);
  fill(0);

  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);
    text('Photoresistor Value = ' + str(rVal), 20, 50);

    // Map the sensor value to the x-position of the ellipse
    let xPos = map(rVal, 0, 1023, 0, width);
    fill(100, 123, 158);
    ellipse(xPos, height / 2, 50, 50);
  }
}

function keyPressed() {
  if (key == " ") {
    setUpSerial();
  }
}

function readSerial(data) {
  if (data != null) {
    let fromArduino = trim(data); // Remove any whitespace
    rVal = int(fromArduino); // Convert the string to an integer
  }
}
void setup() {
  Serial.begin(9600); // match the baud rate used on the p5 side
}

void loop() {
  // read the photoresistor and send one ASCII line per reading,
  // so p5's line-based parser receives clean values
  int sensorValue = analogRead(A2);
  Serial.println(sensorValue);
  delay(10); // brief pause so the serial buffer isn't flooded
}

 

EXERCISE 02: P5 TO ARDUINO COMMUNICATION 

Make something that controls the LED brightness from p5.

  • LED brightness changes according to the slider.

  • Please find the demonstration video here

Exercise 2

  • Please find the p5.js sketch here 
// Relies on the serial helpers from Exercise 1 (getPort, setUpSerial, runSerial)
let brightness = 0;
let slider;

function setup() {
  createCanvas(400, 200);
  //make the slider
  slider = createSlider(0, 255, 127); 
  slider.position(10, 10);
  slider.style('width', '300px');

  let serialButton = createButton("Connect to Arduino");
  serialButton.position(10, 50);
  serialButton.mousePressed(setUpSerial);
}

// Troubleshooting: log anything the Arduino sends back
function readSerial(data) {
  console.log("Received data:", data); // Log the received data to the console
}

function draw() {
  background(220);
  brightness = slider.value(); 
  fill(0);
  textSize(16);
  text(`LED Brightness: ${brightness}`, 10, 100);

  // Send brightness value to Arduino via Serial
  if (serialActive) {
    writeSerial(brightness + "\n"); // Append a newline character
  }
}

// Helper: write a string through the text-encoder writer created in getPort()
async function writeSerial(msg) {
  if (writer) await writer.write(msg);
}
int ledPin = 9; // PWM pin connected to LED

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    // parseInt reads the ASCII number sent by p5; the trailing
    // newline marks the end of each value
    int bright = Serial.parseInt();
    if (Serial.read() == '\n') {
      analogWrite(ledPin, bright); // 0-255 maps directly to the PWM duty cycle
    }
  }
}

EXERCISE 03: BI-DIRECTIONAL COMMUNICATION

Every time the ball bounces, an LED lights up and then turns off, and you can control the wind with one analog sensor.

Please find the demonstration video here

Please find the p5.js sketch below

let port, reader, writer;
let serialActive = false;

async function getPort(baud = 9600) {
  let port = await navigator.serial.requestPort();
  await port.open({ baudRate: baud });

  // create read & write streams
  textDecoder = new TextDecoderStream();
  textEncoder = new TextEncoderStream();
  readableStreamClosed = port.readable.pipeTo(textDecoder.writable);
  writableStreamClosed = textEncoder.readable.pipeTo(port.writable);

  reader = textDecoder.readable
    .pipeThrough(new TransformStream(new LineBreakTransformer()))
    .getReader();
  writer = textEncoder.writable.getWriter();

  return { port, reader, writer };
}

class LineBreakTransformer {
  constructor() {
    this.chunks = "";
  }

  transform(chunk, controller) {
    this.chunks += chunk;
    const lines = this.chunks.split("\r\n");
    this.chunks = lines.pop();
    lines.forEach((line) => controller.enqueue(line));
  }

  flush(controller) {
    controller.enqueue(this.chunks);
  }
}

// The working setupSerial() is defined further down, next to the sketch code;
// it opens the port and creates the reader and writer directly.

async function runSerial() {
  try {
    while (true) {
      const { value, done } = await reader.read();
      if (done) {
        reader.releaseLock();
        break;
      }
      readSerial(value);
    }
  } catch (e) {
    console.error(e);
  }
}


////////////////////////////////////////////////////////////

let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let windSpeed = 512; // mid-range default so the wind stays neutral until the first reading arrives


function setup() {
  createCanvas(640, 360);
  noFill();
  position = createVector(width / 2, 0);
  velocity = createVector(0, 0);
  acceleration = createVector(0, 0);
  gravity = createVector(0, 0.5 * mass);
  wind = createVector(0, 0);
  textSize(16);
}

function draw() {
  background(0);
  
  if (serialActive){
    wind.x = map(windSpeed, 0, 1023, -1, 1);
  }
  applyForce(wind);
  applyForce(gravity);
  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity); 
  acceleration.mult(0);
  ellipse(position.x, position.y, mass, mass);
  console.log("Wind Speed: " + windSpeed); // For debugging

  if (position.y > height - mass / 2) {
    velocity.y *= -0.9; // A little dampening when hitting the bottom
    position.y = height - mass / 2;

    // Send signal to Arduino on bounce
    if (serialActive) {
      sendBounceSignal();
    }
  }

  if (!serialActive) {
    fill(255);
    text("Press SPACE to connect to Serial Port", 20, 30);
  } else {
    fill(0, 255, 0);
    text("Connected to Serial Port", 20, 30);
  }
}

function applyForce(force) {
  // Newton's 2nd law: F = M * A or A = F / M
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}

function keyPressed() {
  if (keyCode == LEFT_ARROW) {
    wind.x = -1;
  }
  if (keyCode == RIGHT_ARROW) {
    wind.x = 1;
  }
  if (key == " ") {
    if (!serialActive) {
      setupSerial();
    } else {
      mass = 50;
      position.y = -mass;
      velocity.mult(0);
    }
  } 
}

async function setupSerial() {
  try {
    noLoop();
    // ({ port, reader, writer } = await getPort());
    
    port = await navigator.serial.requestPort();
    await port.open({ baudRate: 9600 });
    
    
    // Create a TextDecoderStream to decode incoming bytes to text
    const textDecoder = new TextDecoderStream();
    const readableStreamClosed = port.readable.pipeTo(textDecoder.writable);

    // Create the reader to read from the decoded text stream
    reader = textDecoder.readable
    .pipeThrough(new TransformStream(new LineBreakTransformer())) // Optional: split data by lines
    .getReader();
    
    writer = port.writable.getWriter();
    serialActive = true;
    // Start reading data after successfully opening the port
    runSerial();
    loop();
  } catch (err) {
    console.error("Serial connection failed:", err);
    serialActive = false;
  }
}

function readSerial(data) {
  if (data != null) {
    let fromArduino = trim(data); // Remove any whitespace
    if (fromArduino !== "") {
      windSpeed = int(fromArduino); // Convert the string to an integer
    }
  }
}


async function sendBounceSignal() {
  try {
    if (writer) {
      await writer.write(new TextEncoder().encode("bounce\n"));
    }
  } catch (err) {
    console.error("Failed to send bounce signal:", err);
  }
}

Please find the Arduino sketch below
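This is a minimal sketch of the exchange, with assumed wiring (LED on pin 9, a potentiometer on A0 as the wind sensor): it streams the analog reading as one ASCII line per loop for the wind, and flashes the LED whenever it receives the "bounce" message from p5.

// Assumed wiring: LED on pin 9, potentiometer (wind sensor) on A0
const int ledPin = 9;
const int sensorPin = A0;

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // stream the analog reading as one ASCII line; p5 maps it to wind
  Serial.println(analogRead(sensorPin));

  // if p5 sent a line, check whether it is the bounce signal
  if (Serial.available() > 0) {
    String msg = Serial.readStringUntil('\n');
    msg.trim();
    if (msg == "bounce") {
      digitalWrite(ledPin, HIGH); // flash the LED on each bounce
      delay(100);
      digitalWrite(ledPin, LOW);
    }
  }
  delay(10);
}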


 

Assignment 7 – Music Box! (by Jheel and Linda)

For this project, Jheel came up with this really cool idea of a music box. The basic concept is that the music box will play music when exposed to light and pause when it’s dark. To achieve this, we used a photoresistor as the analog sensor to detect light and a button as the digital sensor for manual control over music playback. 

Please find the demonstration here

Components and Setup

To achieve the soothing effect of a music box, we also adjusted the appearance and the speaker.

  • Adafruit Arduino Music Maker Shield: This allows us to play music files from an SD card, controlled via the Arduino. By downloading the corresponding library, we ensured smooth playback.
  • Speaker Attachment: We initially tried a buzzer, but it produced a harsh, mechanical sound that didn’t match our desired soothing effect. We switched to a consumable speaker from the IM lab, which provided a richer and more pleasant sound. We connected the speaker to the Music Maker Shield for optimal sound quality.
  • Circuit with Photoresistor: We programmed the photoresistor to detect light levels, pausing music playback when it’s dark and resuming when light is detected (see the sketch after this list).
  • Push Button: We added a push button to switch between multiple music files, allowing users to vary their listening experience.
  • Prototyping Shield and Battery: To make the setup compact enough to fit inside the box, we transitioned from a breadboard to a prototyping shield. Adding a battery also made the setup portable, enhancing usability.
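The pause/resume part of that logic is small. Below is a sketch of just the light-based control, assuming the standard Music Maker Shield pin layout, a photoresistor on A0, and a threshold that would need calibrating for the room:

#include <SPI.h>
#include <SD.h>
#include <Adafruit_VS1053.h>

// Standard Music Maker Shield pins: reset, CS, DCS, DREQ, card CS
Adafruit_VS1053_FilePlayer musicPlayer(-1, 7, 6, 3, 4);

const int lightPin = A0;        // photoresistor divider (assumed pin)
const int lightThreshold = 300; // calibrate for the room

void setup() {
  musicPlayer.begin();
  SD.begin(4);
  musicPlayer.setVolume(20, 20); // lower numbers are louder
  musicPlayer.useInterrupt(VS1053_FILEPLAYER_PIN_INT); // play in the background
  musicPlayer.startPlayingFile("/track001.mp3"); // assumed filename
}

void loop() {
  // pause in the dark, resume in the light
  musicPlayer.pausePlaying(analogRead(lightPin) < lightThreshold);
  delay(100);
}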

Code

Please find the Github Link here

Challenges

One major challenge was achieving the desired sound quality. As tested in class, the initial buzzer didn’t deliver the calm, melodic experience we wanted, so switching to a consumable speaker improved the overall audio quality. There was also a wiring and compatibility issue with the speaker, which we fixed by adjusting the breadboard.

Since traditional music boxes are often crafted from wood, we used our recent woodworking skills to create an elegant wooden enclosure. We sourced templates for the box online and used a laser cutter (of course, with the help of lab assistants) to make a precise, polished exterior.

After a few rounds of trial and error with different templates, we arrived at a design that worked well for housing the components. To personalize it further, we designed an engraving for the top using Adobe Illustrator and laser-cut the design, adding a refined aesthetic touch.

After laser-cutting the box and engraving the design, we assembled all the components inside. The transition from breadboard to prototyping shield and the compact battery setup allowed everything to fit neatly within the box.

Reading reflection: Hands but more than hands

I think it’s fascinating to reflect on how much progress we’ve made in terms of integrating hand-detection technology into everyday devices, which feel quite seamless and intuitive. Hands were and still are the primary means of interaction across so many devices. It’s almost second nature to use gestures to navigate or manipulate digital spaces, and hand-based interactions are practically embedded into our muscle memory. I remember being impressed by Apple’s raise to wake function, where the device responds to being picked up or set down. This interaction, though subtle, demonstrates the extent to which technology can now “read” our intentions through physical movement alone. It creates a natural, almost invisible interaction layer that feels practical in the sense that it anticipates the user’s needs without requiring a button press. 

Also, I love how he just writes responses to the comments. Looking back at the early days of blogging, it’s amazing to see how it created a platform for public discourse and enabled personal voices to connect. Blogging sparked what was perhaps the first truly global online community, which also reminds me of the earlier discussion of how designers should stand in the shoes of users when building interactions. Many of the interactive and social principles of that era laid the groundwork for today’s principles of interactive art.

Assignment 7 | Close your eyes when the flash is on

Concept

For this assignment, I am imitating the interaction between a camera’s shutter release and the human eye’s reaction to a camera flash. The button represents the shutter release, while the LDR simulates ambient light or flash. The LEDs mimic how the human eye might react to sudden changes in light.

When the button (shutter release) is pressed, it activates the LEDs: the digital LED turns on fully, and the analog LED adjusts its brightness based on the light level detected by the LDR. If the surrounding light level is high, the LED immediately dims, simulating the eye’s reaction to bright flashes.

Implementation

Code

https://github.com/LindaD1234/Intro-To-IM/blob/039cec5b83542a4788a6a4ae24d2adfc22e446ca/Assignment%207

One new thing I discovered is INPUT_PULLUP, which makes it easy to work with buttons and switches without needing an external resistor.
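Below is a minimal sketch of the whole behavior (the pin choices are my assumptions here; the actual code is in the GitHub link above). With INPUT_PULLUP, the pin reads HIGH when the button is open and LOW when pressed, so no external resistor is needed:

const int buttonPin = 2;     // shutter release, wired between pin 2 and GND
const int digitalLedPin = 8; // fully on while the button is held
const int analogLedPin = 9;  // PWM pin, dims with ambient light
const int ldrPin = A0;       // light sensor

void setup() {
  pinMode(buttonPin, INPUT_PULLUP); // internal pull-up resistor, LOW = pressed
  pinMode(digitalLedPin, OUTPUT);
  pinMode(analogLedPin, OUTPUT);
}

void loop() {
  if (digitalRead(buttonPin) == LOW) { // shutter pressed
    digitalWrite(digitalLedPin, HIGH);
    // brighter surroundings -> dimmer LED, like an eye reacting to a flash
    int brightness = map(analogRead(ldrPin), 0, 1023, 255, 0);
    analogWrite(analogLedPin, brightness);
  } else {
    digitalWrite(digitalLedPin, LOW);
    analogWrite(analogLedPin, 0);
  }
}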

Reading reflection: Intentions

These examples are great. With digital and interactive art becoming more accessible, it’s easy to fall into the trap of relying on technology without fully considering the clarity of the message. This relates to the other reading, which emphasizes that when form becomes overly important, it can overshadow the experience, resulting in a lack of genuine interaction. I believe having a clear intention is essential. These examples work because they are all straightforward, with the philosophy of each piece rooted in interaction. For instance, Christopher Paretti’s SpeedDial is about the relationship between action and reaction to changes in sound. Similarly, other pieces that incorporate sensors, such as cameras and temperature or light sensors, prompt the user to make small inputs, which in turn produce subtle outputs. The simplicity of these interactions is very effective.

While I agree with limiting instructions in interactive art, I also think it can constrain the message being conveyed. Interactive art isn’t like a full play or novel, where the audience is guided through an experience that evolves alongside the characters. In those formats, audiences are almost manipulated by the characters’ emotions, similar to directing an actor. However, in a play, actors follow a defined structure, and that structure—together with the performance—conveys the underlying philosophy. Interactive art, on the other hand, is constrained by the duration and setting of the experience, which might require more intricate design to fully express its philosophy.

Reading Response #: Affect & Behavior & Emotion & The Person Who Got Humans to the Moon

Positive affect is critical in accomplishing challenging tasks, highlighting the importance of considering emotions in design. When people are under stress, design functions differently and should adapt to diverse emotional states. This reminds me of our earlier discussions on the adaptability of design, which should accommodate various user groups. Previously, we focused on factors like age, gender, and culture, but reflecting on this, I see how essential it is to consider potential emotional responses & their changes as well.

I also thought of Hamilton’s story in the other article—how might we apply similar design principles in that context? To what extent does a designer or coder’s background shape their work, and how do they balance ‘beauty’ with usability, especially when functionality is crucial to safety? Additionally, how does the visual appeal of a design impact its emotional effect on users?

Returning to Margaret Hamilton’s story, beyond her remarkable achievements in breaking gender barriers in a male-dominated field, her dedication to error handling is very inspiring. Her example speaks to something beyond just design, affect, and beauty. It’s about resilience and the vital role of precision in high-stakes contexts.

Midterm project | Ear Puzzle Experience

Interaction & Page design

Each page is a separate function named displayGamePage_. Users interact with the sketch through the mouse and the keyboard:

  1. Click the canvas to enter the game stage;
  2. Press M to go back to the main page;
  3. Press the right arrow to enter the next game page;
  4. Press F to enter full screen.

Small design details, such as making the cursor icon a mouse, the choice of font and background music, and the poetry about ears at the beginning and the end, strengthen the idea of deconstruction in the Ear Puzzle. Putting familiar yet unfamiliar objects in a space where users view them from an unusual perspective allows them to reflect on their own relationship with those objects.

Link to full screen.

Realization & Difficulties

The most difficult part of the code was, as I expected, checking the WIN CONDITION.

In the initial sketch below, the page only triggered the win condition by chance.

I thought the issue was in the rotation logic. However, I tested the same rotation logic and it worked fine in the final version. The major reason was likely that I tried to cut all the images in one function in the main js sketch: even when the user won, the next page failed to refresh.

function startNewGame() {
  let imageIndex = gameIndex - 1; // initially gameIndex = 1
  let numPiecesOptions = [4]; // number of pieces per image
  // NOTE: with only one entry here, any imageIndex > 0 makes numPieces
  // undefined, which is part of why later pages misbehaved
  numPieces = numPiecesOptions[imageIndex];

  let img = images[imageIndex]; // the source image for this round
  pieces = [];
  correctRotation = 0;

  // size of each piece in the source image vs. on the canvas
  let pieceWidth = img.width / 2;
  let pieceHeight = img.height / 2;
  let scaledPieceWidth = width / 2;
  let scaledPieceHeight = height / 2;

  // create puzzle pieces with random rotations
  for (let x = 0; x < sqrt(numPieces); x++) {
    for (let y = 0; y < sqrt(numPieces); y++) {
      let imgSection = img.get(x * pieceWidth, y * pieceHeight, pieceWidth, pieceHeight);
      let scaledX = x * scaledPieceWidth;
      let scaledY = y * scaledPieceHeight;
      let piece = new PuzzlePiece(scaledX, scaledY, scaledPieceWidth, scaledPieceHeight, imgSection);
      pieces.push(piece);
    }
  }
}

Traumatized by the chaos, I decided to break down every variable so that they wouldn’t overwrite each other. Initially, I was using the square root of the number of puzzle pieces to determine the cut. For the new (and final) implementation, I decided to use explicit numbers.

I also gave up on making it a win/lose situation, meaning that users won’t enter the next page automatically when they get the rotation right. The game changes into an experience, and users have to press the right arrow to enter the next page. Honestly, I don’t think this changes the concept of my game, since users are still able to play with the rotation, and their eyes will tell them whether it’s correct or not.

Some small issues remain, such as not being able to add the sound effect I want and not being able to avoid collisions between the image and the text; these are things I want to fix in the future. Though full screen stretches the image, I still think it’s important to have, because I changed the piece into an experience.

 

Midterm Progress | Ear puzzle

Concept

What are the major facial features? Forehead, eyes, nose, mouth, lips, chin, and skin. We interact with them every day by looking in the mirror. One facial feature that is essential but always visually ignored is: EARS. This project aims to decontextualize one’s relationship with their body parts by putting them in an unfamiliar context.

The game will allow users to piece a jigsaw puzzle together. They will be able to click on the 4 to 10 segmented image pieces on the canvas to rotate them; the number of pieces is randomly generated. When all pieces are rotated into positions that match the original image, the user scores and enters the next page. If they can’t finish, there will be a refresh button.

Images

Ideally I’d take the ear pictures myself, but I also found some on Pinterest.

I also found this website that can pixelate the picture to make the ears look more odd.


Implementation

There are two directions in which I could design the game. For option 1, users can only rotate the puzzle pieces, not move them. For option 2, users have the freedom to both move and rotate the pieces. We will see which one works better.

Option 1: rotate

Option 2: rotate and move

Below are some functions that I think I need; a rough skeleton follows the list.

Preload

  • load image
  • Load sound

Rotation

  • When mousepressed, rotate

Win condition

  • Check position
  • Check correct angle
  • Display text
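A rough skeleton of these pieces might look like the following (the PuzzlePiece class with its contains() and display() methods, and the file names, are placeholders for the eventual implementation):

let img, song;
let pieces = [];
let won = false;

function preload() {
  img = loadImage("ear.png");    // assumed filename
  song = loadSound("music.mp3"); // assumed filename (needs p5.sound)
}

function mousePressed() {
  for (let piece of pieces) {
    if (piece.contains(mouseX, mouseY)) {
      piece.rotation = (piece.rotation + 90) % 360; // rotate 90 degrees per click
    }
  }
  won = checkWin();
}

function checkWin() {
  // win when every piece is back at its original angle
  return pieces.every((p) => p.rotation === 0);
}

function draw() {
  background(255);
  for (let piece of pieces) piece.display();
  if (won) text("You win!", width / 2, height / 2); // display text on win
}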

Possible implementation & challenges:

Users could upload their own pictures of ears, which would be pixelated and randomly shown as one of the puzzle images.

Reading response #4: computer vision

Computer vision has strong analytical capabilities, but only with proper programming. For example, in LimboTime (the example given on the website), the computer interprets the human body as black pixels, with the head identified as the vertex of the pixel group. This expands the range of input compared to simpler p5 sketches, where input is mostly limited to the keyboard and mouse. It also highlights the key difference between computer vision and human vision: computer vision relies on data and algorithms, while human vision relies on context and experience, which follows a similar logic but is much more subjective.
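As a toy version of the LimboTime idea (the darkness threshold and sampling step here are arbitrary assumptions), a p5 sketch can scan the webcam frame from the top down and treat the first dark pixel it finds as the "head":

let cam;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
}

function draw() {
  image(cam, 0, 0);
  cam.loadPixels();
  // scan top-to-bottom for the first pixel darker than the threshold
  for (let y = 0; y < cam.height; y += 4) {
    for (let x = 0; x < cam.width; x += 4) {
      let i = 4 * (y * cam.width + x);
      let avg = (cam.pixels[i] + cam.pixels[i + 1] + cam.pixels[i + 2]) / 3;
      if (avg < 40) { // assumed darkness threshold
        fill(255, 0, 0);
        circle(x, y, 20); // mark the "head"
        return;
      }
    }
  }
}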

Optimizing physical conditions, like using reflective materials or controlled environments, can improve computer vision’s accuracy. But there are broader implications, especially regarding surveillance and privacy. I’m always surprised by how quickly we’ve adapted to technologies like Instagram filters; it’s now expected that the screen will show something when you interact with it. This also makes me wonder how computer vision might shape human vision, not just physically but in how we perceive ourselves. In terms of surveillance, the privacy intrusion may not be as extreme as sci-fi films suggest, but it does subtly alter how we see and present ourselves.