Analog Sensor

Concept

For this project, I used one analog sensor and one digital sensor (switch) to control two LED lights.

The analog sensor I used was a photoresistor (light sensor). Its resistance changes depending on how bright the room is, which changes the voltage the Arduino reads. The Arduino uses this reading to adjust the brightness of one LED: when it’s dark, the LED gets brighter, and when it’s bright, the LED becomes dimmer.

For the digital sensor, I used a pushbutton connected to a digital pin. When I press the button, it turns the second LED on or off.

To make it different from what we did in class, I added a “night light” feature. When the photoresistor detects that the room is very dark, the button-controlled LED automatically turns on, like a small night light. When the light comes back, the button goes back to working normally.

This made my project more interactive and closer to how real sensors are used in everyday devices.

Schematic of My Circuit

It shows the Arduino connected to:

  • A photoresistor and 10 kΩ resistor forming a voltage divider to read light levels (see the note after this list).

  • A pushbutton connected to a digital pin.

  • Two LEDs, one controlled by the light sensor and the other controlled by the button.
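A quick note on the voltage divider: assuming the photoresistor sits between 5 V and the analog pin with the 10 kΩ resistor going to ground (my assumption about the orientation; swapping them just flips the reading), the voltage the Arduino samples is roughly

Vout = 5 V × 10 kΩ / (10 kΩ + R_photo)

So in a bright room R_photo drops, Vout rises, and analogRead() returns a higher number; in the dark the reading falls toward 0, which is what the night-light threshold in the code checks for.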

Final Results

When I tested the circuit:

  • The first LED smoothly changed its brightness depending on how much light the photoresistor sensed.

  • The second LED turned on and off with the button as expected.

  • When the room got dark, the second LED automatically turned on, working like a night light.

It was a simple but satisfying project, and the extra feature made it stand out from the class example.

Video: video-url

Arduino Code

Part of the Code I Am Proud Of

void loop() {
  // --- Read photoresistor ---
  int lightValue = analogRead(lightPin); // 0–1023
  int brightness = map(lightValue, 0, 1023, 255, 0);
  analogWrite(ledAnalog, brightness);

  // --- Button toggle ---
  if (digitalRead(buttonPin) == LOW) {
    ledState = !ledState;
    delay(200);
  }

  // --- Night light feature ---
  if (lightValue < 300) { // If it's dark, auto turn on LED
    digitalWrite(ledDigital, HIGH);
  } else {
    digitalWrite(ledDigital, ledState ? HIGH : LOW);
  }

  // --- Print readings ---
  Serial.print("Light: ");
  Serial.print(lightValue);
  Serial.print(" | Brightness: ");
  Serial.print(brightness);
  Serial.print(" | LED State: ");
  Serial.println(ledState ? "ON" : "OFF");

  delay(200);
}
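For context, here is a minimal sketch of the declarations and setup() that this loop assumes. The pin numbers are placeholders I’m using for illustration, and I’m assuming INPUT_PULLUP since the loop treats LOW as pressed; the real circuit could equally use an external resistor.

const int lightPin = A0;    // photoresistor voltage divider output
const int buttonPin = 2;    // pushbutton
const int ledAnalog = 9;    // brightness LED (needs a PWM pin for analogWrite)
const int ledDigital = 8;   // button / night-light LED

bool ledState = false;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP); // pressed button reads LOW, matching the check in loop()
  pinMode(ledAnalog, OUTPUT);
  pinMode(ledDigital, OUTPUT);
  Serial.begin(9600);
}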

GitHub URL: GitHub

Challenges and Further Improvements

While I was able to make both the analog and digital sensors work, I struggled a bit with arranging all the wires and resistors neatly on the breadboard. It took a few tries to get everything connected correctly.

I also had to test different threshold numbers for the night light feature to decide when the LED should automatically turn on. Once I found the right value, it worked well.

For my next project, I want to try using other kinds of sensors, like sound or temperature sensors, and make the circuit respond in new ways. I’ll also practice reading the code line by line to understand how each part works better before adding new features.

Week 9 – Reading Reflection

Physical Computing’s Greatest Hits (and Misses)

Reading Physical Computing’s Greatest Hits (and Misses) really made me reflect on the role of human presence and movement in interactive design. The article explores recurring motifs in physical computing, such as theremin-style sensors, body-as-cursor interfaces, tilty tables, and mechanical pixels, and evaluates which ideas succeed and which fall short. What struck me most was the idea that interaction alone is not meaningful unless it is framed with intention or context.  I found this insight particularly relevant because gestures, motion, and bodily engagement only carry meaning when integrated into a space or narrative. The article also emphasizes that even commonly used ideas can be made fresh through variation and creativity.

The discussion of emotionally responsive projects, like “remote hugs”, also inspired me to think about the potential of physical computing to create connection and presence. It made me consider designing experiences where participants’ actions are not only triggers for a response but also carriers of meaning, emotion, or narrative. I found myself imagining interactive installations or performance spaces where movement, gesture, and proximity could communicate emotion or tell a story, giving participants a sense of agency and contribution. Overall, the article reinforced the importance of centering human input and intention over technical complexity. It motivated me to experiment more boldly with interactive media, blending technology, space, and human engagement in ways that feel purposeful, immersive, and emotionally resonant.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

When I was reading Making Interactive Art: Set the Stage, Then Shut Up and Listen, I noticed a perspective that treats the audience or participant as a co-creator, where the creator’s role is to design opportunities for exploration and discovery. The article encouraged setting the stage, providing affordances, and then stepping back to let participants engage on their own terms. This concept resonated deeply with me because I often feel the need to over-explain or control how people interact with my work, whether in interactive media projects, installations, or themed environments. Learning to trust participants’ curiosity and creativity is both challenging and exciting, and it made me rethink how I approach design: sometimes the most compelling experiences arise when the creator resists guiding every step and instead observes how others explore, interpret, and respond.

I also liked the idea of “listening” to participant interaction. Observing how people engage, adapt, or even misuse an interactive installation can reveal insights the creator never intended, and these discoveries can guide future iterations. This connects to my interests in performance and immersive storytelling because, in both cases, the audience’s reactions shape the experience. It also made me reflect on how I design spaces and experiences outside of class projects, including themed parties or interactive setups, where I can experiment with encouraging participation rather than prescribing behavior. The article inspired me to embrace unpredictability, co-creation, and emergent experiences, reminding me that interaction is not just about technology or novelty; it is about creating a dynamic relationship between the participant, the space, and the narrative. Now, I want to apply this mindset to my projects, designing experiences where participants’ actions carry weight and meaning, and where discovery becomes a central part of engagement.

Week 8 – Unusual Switch

For my project, I created an Arduino switch that activates through physical contact, specifically, a hug. Instead of using hands, I built a simple “hug switch” using two pieces of aluminum foil connected to the Arduino. One piece was taped onto my sleeve, and the other onto a plush toy sitting on my chair. When I hugged the toy, the foil pieces touched and completed the circuit, turning on an LED on the breadboard.

This setup used digitalRead() to detect when the circuit closed, lighting up the LED as a visual indicator. It’s a very basic circuit (just two foil pads, a resistor, and an LED), but it demonstrated how a simple physical contact can be turned into a digital input. I liked how small physical gestures could translate into electronic signals. The process reminded me how interaction design can make technology feel more human, even with something as simple as a hug that lights up a tiny LED.
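For anyone curious, a minimal version of the code for a contact switch like this might look like the sketch below. I’m assuming the two foil pads bridge a digital pin to ground and that the pin uses INPUT_PULLUP; my actual build used an external resistor, so the exact wiring may differ slightly.

const int hugPin = 2;   // one foil pad here, the other to GND (assumed wiring)
const int ledPin = 8;

void setup() {
  pinMode(hugPin, INPUT_PULLUP); // pin reads LOW when the foil pads touch
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (digitalRead(hugPin) == LOW) {  // hug detected: pads are touching
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}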

Schematic Diagram

Week 8 – Unusual Switch

For this recent assignment I decided to build a breath-activated switch.

 

How It Works

The concept is straightforward. I took a flexible tube and placed a small metal ball inside it. At one end of the tube, I positioned two disconnected wires from my circuit. When I blow into the open end of the tube, the force of my breath pushes the metal ball along the tube until it makes contact with both wires simultaneously.

This contact completes the circuit, and—voila!—a connected blue LED lights up to signal that the switch is “on.” When I stop blowing, the ball rolls back, the circuit breaks, and the light turns off. It’s a hands-free switch that’s both fun and functional.

The Build and Components

As you can see in the photo, the setup is quite simple. Here’s what I used:

  • An Arduino as a power source.

  • A breadboard for easy prototyping.

  • One blue LED.

  • Resistors to protect the LED.

  • Jumper wires to connect everything.

  • A flexible plastic tube.

  • A small metal ball.

The circuit itself is a basic LED setup connected to the development board. The magic is all in the custom-built switch. The tube, the ball, and the carefully placed wires are what make this an “unusual” solution that perfectly met the project’s requirements.

This was a fantastic exercise in creative problem-solving and a great way to apply basic circuit principles in a new and interesting way. It proves that with a little ingenuity, you don’t always need a traditional button or switch to make things work.

Video Demo

IMG_8195

Week 8 – Unusual Switch

Concept

For my project, I used the HC-SR04 ultrasonic sensor, which measures distance by sending out a sound wave and timing how long it takes for the echo to bounce back. I used it to detect how close a person’s body is to the sensor. When the person is far (but not too far), a yellow LED lights up. When they get close, a red LED turns on instead. I chose these two colors to mimic a childhood game where the closer you got to a hidden object, the ‘warmer’ you were, so red represents ‘hot,’ and yellow means ‘warm.’

Here’s my wiring and a video of my circuit in action 🙂
https://drive.google.com/drive/folders/1kgAL550ryRCarylolh-Xjpr2KqJABRaU?usp=drive_link

and here’s my GitHub repository

 Code I’m Proud Of

long readDistance() {
  // Pull the trigger pin low first for a clean start
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  // Send a 10 µs pulse to trigger the measurement
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  // Time how long the echo pin stays HIGH
  long duration = pulseIn(echoPin, HIGH);
  // Convert to cm: speed of sound ≈ 343 m/s → 0.034 cm/µs,
  // divided by 2 because the pulse travels out and back
  return duration * 0.034 / 2;
}

The part of my code I’m most proud of is the readDistance() function. Through it, I learned how the ultrasonic sensor actually works, sending a pulse, waiting for the echo, and then calculating distance using the speed of sound. I followed a YouTube tutorial to understand the basics, and then used ChatGPT to help debug the issues I ran into. I even got to use some of my physics knowledge to convert time into distance, which made it extra fun since it reminded me of things I had learned before.
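To show how readDistance() drives the two LEDs, here’s a rough sketch of the “warm/hot” logic described in the concept above. The pin numbers and distance thresholds are placeholder values for illustration, not necessarily the ones from my final code.

const int trigPin = 9;     // HC-SR04 trigger (placeholder pin)
const int echoPin = 10;    // HC-SR04 echo (placeholder pin)
const int yellowLed = 5;   // "warm"
const int redLed = 6;      // "hot"

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(yellowLed, OUTPUT);
  pinMode(redLed, OUTPUT);
}

void loop() {
  long distance = readDistance(); // distance in cm

  if (distance < 20) {            // close: red = "hot"
    digitalWrite(redLed, HIGH);
    digitalWrite(yellowLed, LOW);
  } else if (distance < 60) {     // far but not too far: yellow = "warm"
    digitalWrite(redLed, LOW);
    digitalWrite(yellowLed, HIGH);
  } else {                        // out of range: both off
    digitalWrite(redLed, LOW);
    digitalWrite(yellowLed, LOW);
  }

  delay(100);
}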

Further Improvements

Sometimes the sensor glitches a bit, and I suspect it’s because of my wiring. The HC-SR04 usually needs female-to-male jumper wires to connect properly to the Arduino, but I had to improvise with what I had. Using the Serial Monitor really helped me check if the sensor readings were accurate, but I’d like to clean up my circuit and test again to make it more stable. With proper connections, I think the readings would be much smoother and more consistent.

Another improvement I’d like to try is turning this setup into a Morse code interpreter. Instead of just showing colors for ‘close’ and ‘far,’ I could make the distance readings represent dots and dashes, and then have an LCD screen display the translated message. It would make the project more interactive and add a creative twist while still keeping the hands-free concept. I think it’d be really satisfying to build something that turns simple movements into an actual form of communication.

MIDTERM – Bad Trip

 

 

Welcome to my midterm project, Neo-Euphoria Visions, an interactive audiovisual artwork created with p5.js. This project is an exploration into surreal, psychedelic self-portraiture, heavily inspired by the distinct visual language and emotional tone of the HBO series Euphoria. It uses your webcam to pull you into a hallucinatory world that reacts to your presence, blurring the lines between the viewer and the viewed.

The experience is a multi-layered trip. It begins with a simple invitation before slowly transforming your reality. Your image is recast in a cold, UV-and-pink color palette while a motion trail ghosts your every move. A glowing aura emanates from your silhouette, and hand-doodled stars twinkle into existence around you. The piece is designed to be both beautiful and unsettling, contrasting the cold, trippy visuals with organic, hot-red tears that bleed from your eyes. With a dynamic soundtrack and automated visual shifts, the goal is to create a mesmerizing, ever-changing, and deeply personal digital hallucination.


Live Sketch

You can experience Neo-Euphoria Visions live in your browser by clicking the link below.


Screenshots

How It Works & What I’m Proud Of

This project is built on a foundation of p5.js, but its soul lies in the integration of two powerful libraries: ml5.js for computer vision and GLSL for high-performance graphics shaders. The entire visual output, from the colors to the background effects, is rendered in a single, complex fragment shader that runs on the GPU. This was a critical technical decision that allows for multiple layers of real-time effects without the performance lag that would come from CPU-based pixel manipulation.

The core mechanic involves layering several computer vision and graphical processes. First, ml5.js BodyPix creates a segmentation mask of the user, which is fed into the shader. This mask allows me to separate the scene into three distinct layers: the background, a glowing “aura” directly behind the user, and the user themselves. The shader then applies different artistic effects to each layer. Simultaneously, ml5.js FaceApi tracks facial landmarks to determine the precise location of the user’s eyes. This data is used by a custom Tear class in p5.js, which draws organic, flowing tears on a transparent overlay canvas, making them appear attached to the face. I’m particularly proud of the logic that makes the tears “follow” the eyes smoothly by interpolating their position between frames, which prevents the jittery tracking that can often occur.

JavaScript

// A snippet from the Tear class showing the smooth position update
updatePosition(newX, newY) {
    if (this.path.length === 0) return;
    let head = this.path[0];
    
    // Use lerp to smoothly move the tear's origin towards the new eye position.
    // This prevents jittering if the face detection is noisy.
    let targetPos = createVector(newX, newY);
    let smoothedPos = p5.Vector.lerp(head, targetPos, 0.3);
    let delta = p5.Vector.sub(smoothedPos, head);

    for (let p of this.path) {
        p.add(delta);
    }
}

One of the best technical decisions was implementing a temporal smoothing feedback loop for the BodyPix mask. The raw mask from the model can be noisy and flicker between frames, creating harsh, blocky edges. By blending each new mask with the previous frame’s mask, the silhouette becomes much more stable and organic, which was essential for the “glowing aura” effect to work properly. Finally, the automated, timed switching between three distinct color palettes gives the project a life of its own, making the experience unpredictable and unique for every viewing.
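Before the shader snippet below, here’s a simplified, stand-alone sketch of that temporal-smoothing idea in p5.js. This is my own stripped-down illustration (the function name and blend amount are assumptions), not the project’s exact code.

JavaScript

let maskBuffer; // persistent p5.Graphics holding the smoothed mask

function smoothMask(newMask) {
  if (!maskBuffer) {
    // First frame: just copy the raw mask into the buffer.
    maskBuffer = createGraphics(newMask.width, newMask.height);
    maskBuffer.image(newMask, 0, 0);
    return maskBuffer;
  }
  // Draw the new mask at roughly 30% opacity over the previous result,
  // so most of last frame's silhouette is kept and flicker averages out.
  maskBuffer.push();
  maskBuffer.tint(255, 77); // 77/255 ≈ 0.3 alpha
  maskBuffer.image(newMask, 0, 0);
  maskBuffer.pop();
  return maskBuffer; // feed this blended mask to the shader instead of the raw one
}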

GLSL

// A snippet from the fragment shader showing the palette switching logic
void main() {
    // ...
    vec3 personPalette_phase;
    vec3 auraPalette_phase;

    if (u_activePalette == 1) { // Palette 1: Haze & Fire
        personPalette_phase = vec3(0.0, 0.1, 0.2); // UV
        auraPalette_phase = vec3(0.1, 0.15, 0.20); // Yellow/Orange
    } else if (u_activePalette == 2) { // Palette 2: Electric Pink/Cyan
        personPalette_phase = vec3(0.6, 0.7, 0.8); // Deep UV
        auraPalette_phase = vec3(0.5, 0.6, 0.7); // Pink/Cyan
    } else { // Palette 3: Cold UV
        personPalette_phase = vec3(0.5, 0.6, 0.7); // Deepest UV
        auraPalette_phase = vec3(0.8, 0.9, 1.0); // Electric Blue/Violet
    }
    // ...
}

[Screenshot showing the warmer “Haze & Fire” palette in action.]

Problems and Areas for Improvement

The single biggest challenge I encountered during development was a series of stability issues related to the ml5.js library. I initially ran into persistent “… is not a function” errors, which, after extensive debugging, I discovered were caused by a major version update (v1.0.0) that had deprecated the FaceApi model I was using. The solution was to lock the project to a specific older version (v0.12.2) in the index.html file. This was a crucial lesson in the importance of managing dependencies in web development.

Even after fixing the versioning, I faced a “race condition” where both FaceApi and BodyPix would try to initialize at the same time, causing one or both to fail silently. This resulted in features like the aura and glitch-zoom not working at all. I resolved this by re-architecting the setup process to “chain” the initializations: BodyPix only begins loading after FaceApi has confirmed it is ready. This made the entire application dramatically more reliable. For future improvements, I would love to make the background effects more diverse and audio-reactive. Having the stars pulse or the colors shift in time with the bass of the music would add another layer of immersion. I could also explore using hand-tracking via HandPose to allow the user to “paint” or interact with the stars in the background, making the experience even more creative and personal.
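To make the “chained” setup concrete, here’s a simplified sketch of the idea, assuming ml5 v0.12.x and callback names of my own choosing: BodyPix only starts loading once FaceApi has reported that it’s ready.

JavaScript

let video, faceapi, bodypix;

function initModels(capture) {
  video = capture;
  // Start FaceApi first...
  faceapi = ml5.faceApi(video, { withLandmarks: true, withDescriptors: false }, onFaceApiReady);
}

function onFaceApiReady() {
  console.log('FaceApi ready, now loading BodyPix...');
  // ...and only then kick off BodyPix, so the two never initialize at the same time.
  bodypix = ml5.bodyPix(video, onBodyPixReady);
}

function onBodyPixReady() {
  console.log('BodyPix ready; all models loaded.');
}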

 

Midterm Project

Concept

Color Gate Challenge is a fast and colorful reaction game where the player controls a glowing ball that changes colors to pass through matching gates. The idea came from color-matching and reflex games I used to play, but I wanted to create something that feels more modern and bright, with glowing effects and smooth motion.

The goal of the game is to move the ball through falling gates without hitting the walls. Each gate has a color, and the player must change their ball to the same color to pass safely. If you pass through with the wrong color or crash into the barrier, the game ends.

My Final Game:

Code Snippet and Parts I Am Proud Of:

One of the parts I am most proud of is how I control the player, the gates, and the full game system.
Each part of the game (player, gate, and main game) is built as a class, which made it easier to add new features later.

The player’s color can change, and the game checks if it matches the gate’s color before letting the player pass. This made the logic clear and fun.

if (this.player.checkCollision(gate)) {
  this.gameOver();
  return;
}

This simple check controls the whole challenge of the game.
If the player touches the wrong color or hits the gate walls, the game ends immediately.

I also added a color preview system that shows the next few gate colors, so the player can plan ahead.
It uses small color dots on the screen to help the player see which color to switch to next.

this.upcomingGates.forEach((gate, index) => {
  const dot = document.createElement('div');
  dot.className = 'color-dot';
  dot.style.background = this.getColorString(this.player.colors[gate.color]);
  colorPreview.appendChild(dot);
});

Another part I am proud of is how the speed control works. The player can press keys to make the gates fall faster or slower, and there is a live bar that shows the speed level. This made the game more interactive and customizable.
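As an illustration of that speed control, here’s a rough p5.js sketch of the idea. The keys, limits, and bar layout are my own assumptions, not the exact game code.

let gateSpeed = 3; // how fast gates fall, adjustable by the player

function keyPressed() {
  if (keyCode === UP_ARROW) {
    gateSpeed = min(gateSpeed + 0.5, 10); // speed up, capped at 10
  } else if (keyCode === DOWN_ARROW) {
    gateSpeed = max(gateSpeed - 0.5, 1);  // slow down, floor of 1
  }
}

function drawSpeedBar() {
  const barWidth = 150;
  const filled = map(gateSpeed, 1, 10, 0, barWidth); // live bar tracks current speed
  noStroke();
  fill(80);
  rect(20, 20, barWidth, 10);  // bar background
  fill(0, 255, 150);
  rect(20, 20, filled, 10);    // filled portion = speed level
}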

Problems and Future Improvements

At first, the game was too easy because the gates were falling too slowly, and it didn’t feel challenging. I changed the gate speed and added random colors to make it more unpredictable and exciting.

Another problem was keeping everything in the right position when resizing the window. I had to fix the player’s size and position every time the screen changed, using the windowResized() function.
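Here’s a minimal sketch of that resize handling, assuming the player is kept near the bottom-center of the canvas; the proportions are placeholders, not my exact numbers.

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  // Re-derive anything that depends on the canvas size
  // (player here stands for the game's player object).
  player.radius = min(width, height) * 0.03;
  player.x = width / 2;
  player.y = height * 0.85;
}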

In the future, I want to:

  • Add special gates that move sideways

  • Add sound effects for color switches and collisions

  • Add power-ups that give the player a shield or slow motion

  • Create a moving space background for more depth

  • Add more visual effects like explosions and particle trails

Midterm Project – Music Vent

What is Music Vent?

 

So I created this music visualizer called Music Vent, and the whole idea came from thinking about how we use music when we’re feeling emotional – especially when we’re sad or need to vent. You know how sometimes you just want to put on some music and let it all out? That’s exactly what this project is about.

 

The point of Music Vent is to create an immersive experience for music listening, especially for those moments when you want to vent through sad music. But here’s the thing – while you’re going through those emotions, there are these cute, colorful elements that somehow bring you ease and comfort.

 

The Concept Behind It:

 

I wanted to capture this duality that happens when we listen to music emotionally. On one hand, you have these really comforting, almost therapeutic elements:

 

  • Flying radio-cloud birds: These little radios attached to clouds that float across the screen in the most adorable way. They’re like digital companions that keep you company while you’re listening.

  • A beautiful galaxy background: I created this artistic Milky Way galaxy with twinkling stars and colorful dust clouds that creates this peaceful, cosmic atmosphere.

  • Soft colors and smooth animations: Everything flows gently and uses calming colors that make you feel at ease.

 

But then on the other hand, you have the more intense, cathartic elements:

 

  • Beat-responsive visualizations: These are the NCS-style spectrum analyzers (those green bar graphs you see in the middle) that react aggressively to the music’s beats. They can feel a bit disruptive to the peaceful vibe, but that’s intentional – they represent the raw emotional energy you’re releasing.

 

How I Built It

 

The Technical Setup

 

I built this using p5.js and JavaScript, and I tried to keep the code organized using classes so it wouldn’t become a complete mess. Here’s basically how it’s structured:

 

class MusicVisualizerApp {
  constructor() {
    this.audioManager = new AudioManager();
    this.visualManager = new VisualizationManager();
    this.uiManager = new UIManager();
  }
}

I have separate managers for handling the audio, the visuals, and the user interface. This way, if I want to change how the audio analysis works, I don’t have to mess with the visual code.

 

The Audio Analysis Part

 

This was probably the trickiest part. I needed the system to actually “understand” the music and respond to it emotionally. So I created this mood detection algorithm:

 

JavaScript

class MoodProfile {
  analyzeMood() {
    const avgEnergy = this.average(this.analysisBuffer.map(d => d.energy));
    const avgBass = this.average(this.analysisBuffer.map(d => d.frequencyBands.bass));
    const avgHigh = this.average(this.analysisBuffer.map(d => d.frequencyBands.high));
    // Calculate emotional characteristics
    this.currentMood.energy = Math.min(avgEnergy * 2, 1.0);
    this.currentMood.danceability = Math.min((avgBass + this.currentMood.energy) * 0.8, 1.0);
    this.currentMood.valence = Math.min((avgHigh + avgCentroid) * 0.9, 1.0);
  }
}

 

Basically, the system listens to the music and analyzes different frequency bands – like how much bass there is, how much high-frequency content, the overall energy level. Then it tries to figure out the “mood” of the song and adapts the visuals accordingly.

 

The cool thing is that it can detect beats in real-time and make the black hole effect happen right when the beat hits. I spent way too much time getting the beat detection algorithm right!
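For anyone curious, here’s a simplified, self-contained version of energy-based beat detection with p5.sound. It’s not my exact algorithm, just the core idea: flag a beat when the current bass energy jumps well above its recent average.

JavaScript

let fft;
let energyHistory = [];

function setupBeatDetection() {
  fft = new p5.FFT(); // analyzes the sketch's master audio output
}

function detectBeat() {
  fft.analyze();
  const bass = fft.getEnergy('bass'); // 0–255
  energyHistory.push(bass);
  if (energyHistory.length > 43) energyHistory.shift(); // keep ~0.7 s of history at 60 fps

  const avg = energyHistory.reduce((a, b) => a + b, 0) / energyHistory.length;
  return bass > avg * 1.4 && bass > 100; // a spike well above the recent average
}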

 

Creating the Galaxy Background

 

I wanted something that felt cosmic and peaceful, so I created this Milky Way galaxy effect. It has about 500 twinkling stars, colorful dust clouds, and these spiral arms that slowly rotate. But here’s the cool part – when a beat hits in the music, the whole galaxy gets sucked into a black hole!

 

JavaScript

// When beats are detected, everything spirals inward
if (beatDetected) {
  this.targetBlackHoleIntensity = 1.0;
  // Stars and particles get pulled toward the center
}

 

The black hole effect was inspired by how intense emotions can feel like they’re pulling everything into them. When the beat drops, you see this dramatic transformation where all the peaceful elements get drawn into this swirling vortex with orange and purple colors.

 

The Flying Radio-Cloud Birds

 

This was probably my favorite part to code. I took inspiration from a radio drawing I had made before and turned it into these little geometric radios that fly around attached to fluffy clouds. They spawn randomly from either side of the screen and just float across peacefully.

 

JavaScript

class RadioCloudBird {
  constructor(x, y, direction = 1) {
    this.cloudColor = random(['white', 'lightblue', 'pink', 'purple']);
    this.radioColor = random(['brown', 'black', 'silver', 'gold']);
    this.bobSpeed = random(0.02, 0.05); // Makes them bob gently
  }
}

 

Each radio is drawn using basic geometric shapes – rectangles for the body, circles for the speakers and knobs, lines for the antenna. I had to figure out how to scale everything properly so they’d look right when flying around, but once I got it working, they became these adorable little companions that make the whole experience feel less lonely.
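To give a sense of what that looks like in code, here’s a simplified drawRadio() built only from p5.js primitives; the proportions and colors are placeholder guesses rather than my actual values.

JavaScript

function drawRadio(x, y, s) {
  push();
  translate(x, y);
  scale(s);
  stroke(0);
  fill(139, 90, 43);
  rect(-30, -20, 60, 40, 6); // radio body with rounded corners
  fill(40);
  ellipse(-14, 0, 20, 20);   // speaker
  fill(220);
  ellipse(14, -6, 8, 8);     // tuning knob
  ellipse(14, 8, 8, 8);      // volume knob
  line(20, -20, 32, -38);    // antenna
  pop();
}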

 

What I Learned and Challenges I Faced

 

Making Everything Feel Smooth

 

One thing I really focused on was making sure all the animations felt organic and not jarring. I used a lot of interpolation to smooth out the transitions:

 

JavaScript

// Instead of sudden changes, everything gradually transitions
this.values.bass = lerp(this.values.bass, newBassValue, 0.1);
this.values.energy = lerp(this.values.energy, newEnergyValue, 0.1);

 

This makes the whole experience feel more natural and less like you’re watching a computer program.

 

A Small Touch: Conversation Detection

 

I also added this feature where, if the system detects you’re talking (through the microphone), it automatically lowers the music volume. I included this interactivity feature because it’s something I always wished existed in group music-listening software. As someone who used to listen to music bots on Discord a lot, I always found it annoying to manually lower or mute the bot whenever I wanted to talk to my friends while listening. That was actually the initial inspiration for this project, but then I came up with the concept behind this visualizing experience and focused more on it.
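Here’s a rough sketch of how that ducking can be done with p5.sound; the microphone threshold and volume levels are guesses on my part, not the project’s tuned values.

JavaScript

let mic, song; // song is a loaded p5.SoundFile

function startMic() {
  mic = new p5.AudioIn();
  mic.start(); // asks the browser for microphone permission
}

function duckMusicIfTalking() {
  const level = mic.getLevel(); // 0.0–1.0 amplitude from the mic
  if (level > 0.05) {
    song.setVolume(0.2, 0.3); // someone is talking: fade the music down quickly
  } else {
    song.setVolume(1.0, 0.8); // quiet again: fade the music back up
  }
}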

Here is the project on p5, have fun experiencing it!

 

Reading Reflection – The Design of Everyday Things

Honestly, the biggest thing for me in this chapter was just how validating it felt. I can’t count how many times I’ve pushed a door that was clearly a ‘pull’ and felt stupid for a second. The whole idea of the “Norman Door” made so much sense, especially since I see it constantly on campus. There are all these big doors with these handles, the kind of signifier that just screams ‘pull me.’ But half the time, they’re actually push doors. What’s even more confusing is that you can go to another building, see the exact same handle, and that one will actually be a pull. It’s like the design is actively working against you, making it impossible to learn or build a consistent mental model.

And it’s not just doors. My friend’s shower is another perfect example. It’s one of those single, modern-looking knobs with absolutely no signifiers: no red or blue dots, no icons, nothing to tell you which way to turn for hot or cold. Every time I use it, it’s this huge guessing game where I have to turn it on and then quickly jump back to avoid getting blasted with freezing or scalding water. It’s a design that creates a genuinely stressful experience out of something that should be simple.

Now I can’t stop noticing this stuff everywhere. It’s made me realize that good design is basically invisible. It just works, and you don’t even think about it. Bad design, on the other hand, is loud and frustrating. So yeah, I guess my main takeaway is that I’m going to start blaming the objects around me a lot more, and myself a lot less.