Week 11 – Serial Communication

Concept:

The idea behind this project was to create a bi-directional feedback loop between the physical world and a virtual simulation. Instead of treating Arduino and p5.js as separate tools, I wanted them to behave like two parts of the same system, constantly communicating with each other.
The interaction is centered around cause and consequence. A simple physical action, turning a potentiometer, does not directly place or move objects on screen. Instead, it influences a virtual force, wind, that acts on two simulated balls. When those virtual objects collide, the system responds physically by changing the brightness of a real LED connected to the Arduino.

Method & Materials:

This project combines one analog sensor, a virtual physics simulation, and a physical output using serial communication between Arduino and p5.js.

Hardware:
  • Arduino Uno
  • 10k potentiometer (analog sensor)
  • LED
  • 220–330 Ω resistor
  • Breadboard and jumper wires
Software:
  • Arduino IDE
  • p5.js with Web Serial

The potentiometer is wired as a voltage divider between 5V and GND, with the middle pin connected to analog pin A0. The LED is connected to digital pin 9 through a resistor and grounded on the other side.
The Arduino continuously reads the potentiometer value (0–1023) and sends it to p5.js over serial. p5.js interprets this value as a wind force that affects two balls in a physics simulation. When the balls collide, p5.js sends a command back to the Arduino, which responds by increasing the LED’s brightness.

Process:

I approached this project by breaking it into three conceptual stages rather than thinking of it as separate exercises.

Link to Video Demonstration

1. Physical Input → Virtual Force

The potentiometer provides a continuous analog input. The Arduino reads this input as a raw number and sends it to p5.js without assigning it any meaning. In p5.js, this value is mapped to a horizontal wind force rather than a position. This distinction is important: instead of directly placing the balls, the potentiometer influences how they move over time. This makes the interaction feel physical and dynamic, closer to real-world motion than to simple cursor control.
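A minimal p5.js sketch of that mapping, assuming each ball keeps pos, vel, and acc as p5.Vector objects (the force, gravity, and drag constants here are illustrative, not the exact values in my code):

// assumes each ball is an object like { pos, vel, acc, r } built from p5.Vector
function applyWind(ball, potValue) {
  let wind = map(Number(potValue), 0, 1023, -0.3, 0.3); // knob → horizontal force
  ball.acc.x += wind;   // wind nudges velocity over time, not position
  ball.acc.y += 0.15;   // constant downward gravity
  ball.vel.add(ball.acc);
  ball.vel.mult(0.99);  // mild drag so motion eventually settles
  ball.pos.add(ball.vel);
  ball.acc.mult(0);
}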

2. The Virtual Event (Decision-Making in Software)

The physics simulation exists entirely inside p5.js. Gravity, wind, velocity, and drag are calculated every frame. The software also monitors the relationship between the two balls. When they touch, p5.js detects a collision event. The Arduino has no awareness of this virtual world; it does not know what balls, gravity, or collisions are. All interpretation and decision-making happen in software.
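The collision check itself reduces to a distance comparison; a sketch of the idea, assuming each ball stores a position vector pos and a radius r:

function ballsCollide(a, b) {
  // the balls touch when the gap between their centers
  // is smaller than the sum of their radii
  return dist(a.pos.x, a.pos.y, b.pos.x, b.pos.y) < a.r + b.r;
}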

3. Virtual Event → Physical Consequence

When a collision occurs, p5.js sends a short command back to the Arduino. Under normal conditions, p5.js continuously tells the Arduino to keep the LED dim, so there is always a subtle sign that the system is running. When a collision is detected, p5.js sends a specific command that causes the Arduino to flash the LED at full brightness for a brief moment. This turns an invisible, virtual event into a tangible physical response.
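On the p5.js side this boils down to writing one character per frame. The snippet below sketches that idea, where sendToArduino() stands in for whatever Web Serial write helper the sketch uses, and the characters 'C' and 'N' are placeholder commands:

// called once per frame after the physics update
function reportToArduino(collided) {
  if (collided) {
    sendToArduino('C');   // collision happened: ask for a bright flash
  } else {
    sendToArduino('N');   // normal frame: keep the LED dim
  }
}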

Schematic:

The schematic shows the potentiometer connected as a voltage divider to analog pin A0 and the LED connected to digital pin 9 through a resistor. The Arduino communicates with the computer over USB using serial communication, allowing p5.js to both receive sensor data and send control commands back to the Arduino.

Code:

The part of the code I am most proud of is the event-based communication triggered by the collision. A single character sent from p5.js is enough to cause a visible physical reaction, showing how minimal signals can carry meaningful information when the system is designed carefully.
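On the Arduino side, the whole exchange stays very small. The sketch below is a simplified version of that logic; the command characters and brightness levels are assumptions matching the p5.js placeholders above, not my exact code:

const int potPin = A0;
const int ledPin = 9;   // PWM pin for the LED

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // stream the raw potentiometer reading (0–1023) to p5.js
  Serial.println(analogRead(potPin));

  // react to whatever single-character command came back
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    if (cmd == 'C') {
      analogWrite(ledPin, 255);   // collision: full brightness
    } else if (cmd == 'N') {
      analogWrite(ledPin, 20);    // idle: stay dim so the system feels alive
    }
  }

  delay(20);   // keep the serial stream at a readable pace
}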

Link to Code [Contains Arduino code in file arduino.txt]

Result:

The final system behaves as a closed feedback loop. Turning the potentiometer changes the wind, which alters how the two balls move on screen. When the balls collide, the LED connected to the Arduino flashes brightly, translating a virtual interaction into a physical signal.

In the demonstration video, it is clear that the system is responsive in both directions: physical input affects the simulation, and events inside the simulation produce immediate physical feedback. The interaction feels cohesive rather than fragmented across hardware and software.

Reflection:

This project helped me understand interaction design as a system rather than a collection of isolated components. The Arduino and p5.js each have clearly defined roles: hardware acts as the body, sensing and responding physically, while software acts as the mind, handling physics, logic, and decisions.

Week 10 – Sound, Sensor, Mapping

Concept:

The idea behind my instrument was to create a simple, tactile musical device that translates deliberate physical control into sound. I wanted to explore a grounded interaction where turning a knob directly shapes pitch and pressing a button activates the sound. The instrument encourages slow, intentional exploration: rotating the potentiometer continuously changes the note, while the button acts as a gate that turns the sound on and off.

Link to Video Demonstration

Method & Materials:

  • Analog Sensor: 10k potentiometer used to control pitch
  • Digital Switch: Tactile button used as an on/off trigger
  • Output: Piezo buzzer to produce sound

The potentiometer was connected between 5V and GND, with its middle pin wired to analog pin A0. The button was connected to digital pin 2 and GND, using the Arduino’s internal pullup resistor. The piezo buzzer was connected to digital pin 9 and GND.

As the potentiometer is rotated, the Arduino reads a continuous range of values from 0 to 1023. These values are mapped to a frequency range that controls the pitch of the sound produced by the piezo buzzer.

Process:

The potentiometer provides a smooth range of values, while the button only has two states, pressed or not pressed. I experimented with reading both inputs simultaneously and learned how to use the map() function to translate raw sensor data into meaningful sound frequencies.

I also explored how using the internal pullup resistor simplifies wiring, reducing the number of external components needed. Testing different frequency ranges helped me find values that were audible without being too harsh.

Schematic:

(schematic drawing img)

The schematic shows the potentiometer wired as a voltage divider to analog pin A0, the button connected to digital pin 2 using INPUT_PULLUP, and the piezo buzzer connected to pin 9 as the sound output.

Code:

This project uses input from two sensors, a potentiometer and a button, to generate sound through a piezo buzzer. The potentiometer continuously controls pitch, while the button determines whether sound is produced. When the button is pressed, the Arduino reads the potentiometer value and maps it to a frequency range, producing a tone. When the button is released, the sound stops.
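A stripped-down version of that behavior looks roughly like this; the 200–2000 Hz range is an example, not necessarily the final values I settled on:

const int potPin = A0;
const int buttonPin = 2;
const int buzzerPin = 9;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);   // button reads LOW while pressed
}

void loop() {
  if (digitalRead(buttonPin) == LOW) {
    int potValue = analogRead(potPin);              // 0–1023
    int freq = map(potValue, 0, 1023, 200, 2000);   // knob position → pitch
    tone(buzzerPin, freq);
  } else {
    noTone(buzzerPin);   // releasing the button gates the sound off
  }
}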

The part of the code I am most proud of is the line that maps the potentiometer’s analog values into a usable frequency range for the buzzer.

Result:

The final prototype behaves like a simple knob-based synthesizer. Turning the potentiometer smoothly changes the pitch, while pressing the button activates the sound. The interaction feels direct and intentional, allowing the user to clearly hear the relationship between physical input and sound output.

In the demonstration video, the pitch responds immediately to changes in the knob position, showing how basic electronic components can be combined to form a functional musical interface.

Reflection:

This project helped me understand how sound, sensors, and code come together in interactive systems. Working with both analog and digital inputs clarified how different types of control shape user experience. Even with very few components, the instrument feels expressive and responsive. This exercise showed me how computational logic can be translated into sensory feedback, and how small design decisions, like mapping and thresholds, strongly influence interaction. It serves as a foundation for thinking about more complex computational instruments in the future.

Final Project Documentation

Concept

Memory Dial is a desk object that listens to the room and turns what it senses into a living visual field.
Three inputs drive everything:
1. Light from a photoresistor
2. Proximity from an ultrasonic sensor
3. Sound from the laptop microphone
These values animate a layered visualization in p5.js. Over time, the system also writes short poems that describe how the room behaved during the last interval.
The project is both physical and digital: the Arduino gathers the signals, and the browser visualizes the “aura” of the space.

Process

Hardware

The electronics sit inside or beside the desk object.

  • Photoresistor (LDR): measures brightness in the room
  • HC-SR04 Ultrasonic Sensor: detects how close a person stands
  • (Piezo sensor is present but not used in the final logic)
  • Arduino Uno: reads the sensors and streams data via Web Serial

Arduino prints values in this format:

lightValue,distanceValue,piezoValue
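Producing that line on the Arduino is a comma-separated print each time through loop(); the sketch below shows the idea, with the pin assignments as assumptions rather than the actual wiring:

const int ldrPin = A0;     // photoresistor voltage divider
const int piezoPin = A1;   // piezo (read and sent, even if unused later)
const int trigPin = 7;     // HC-SR04 trigger
const int echoPin = 8;     // HC-SR04 echo

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  int lightValue = analogRead(ldrPin);

  // fire a 10 µs trigger pulse and time the echo to estimate distance in cm
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);   // 30 ms timeout
  int distanceValue = duration * 0.034 / 2;

  int piezoValue = analogRead(piezoPin);

  // one comma-separated line per reading, matching the format above
  Serial.print(lightValue);    Serial.print(",");
  Serial.print(distanceValue); Serial.print(",");
  Serial.println(piezoValue);

  delay(50);
}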

Software Flow

The browser-based sketch (p5.js) opens the serial port and receives the sensor stream.
Then it does five main jobs:

1. Signal Smoothing

Raw sensor data is jittery.
To make the animation feel organic, I smooth everything, which ultimately prevents flickering and gives the visuals a “breathing” quality.

smoothLight = lerp(smoothLight, currentLight, 0.18);
smoothDist  = lerp(smoothDist,  currentDist,  0.18);
smoothMicLevel = lerp(smoothMicLevel, rawMicLevel, 0.2);

2. Normalization

Each value is mapped into a range between 0 and ~1 so it can drive animation:

  • lightNorm: 0 = very dim, 1 = very bright
  • presenceNorm: 0 = far away, 1 = very close
  • micNorm: 0 = quiet, 1 = loud

These norms control different visual layers.
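In code, each norm is a constrained map() from the smoothed sensor value; the input ranges below are rough guesses for a typical room rather than calibrated numbers:

// brighter room → higher lightNorm (range depends on the divider orientation)
lightNorm = constrain(map(smoothLight, 50, 900, 0, 1), 0, 1);

// closer person → higher presenceNorm (HC-SR04 distance in cm, reversed)
presenceNorm = constrain(map(smoothDist, 200, 20, 0, 1), 0, 1);

// p5.sound amplitude already sits roughly in 0–1, so just clamp it
micNorm = constrain(smoothMicLevel, 0, 1);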

Visual System: What Each Sensor Controls

The sketch builds a three-layer aura that reacts in real time.

1. Light → Affective Basefield (Background)
  • Low light = blue / cool hues
  • Medium = muted purples
  • Bright room = warm oranges

The background is made of hundreds of fading circles with slow noise-driven texture (“veins”), giving it a soft atmospheric motion.
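The cool-to-warm shift is essentially a two-stage lerpColor() driven by lightNorm; a simplified sketch of the palette logic, with illustrative RGB values:

// pick the base tint: blue in the dark, muted purple in between, orange when bright
let cool = color(30, 60, 140);
let mid  = color(90, 60, 120);
let warm = color(230, 120, 40);

let baseTint = lightNorm < 0.5
  ? lerpColor(cool, mid, lightNorm * 2)
  : lerpColor(mid, warm, (lightNorm - 0.5) * 2);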

2. Distance → Flowfield and Moving Rings

When someone approaches:

  • Flowfield particles move faster and glow more
  • Radial rings become thicker and more defined
  • Blobs orbiting the center grow and multiply

Distance affects the energy level of the whole system.
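In practice this mostly means presenceNorm scaling a handful of parameters at once, along these lines (the ranges are placeholders):

// more presence → faster, brighter particles and more orbiting blobs
let particleSpeed = lerp(0.5, 3.0, presenceNorm);
let glowAlpha     = lerp(40, 180, presenceNorm);
let blobCount     = floor(lerp(3, 12, presenceNorm));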

3. Sound → Breathing Halo (Distorted Circles)

Microphone amplitude controls:

  • Speed of breathing
  • Amount of wobble
  • Thickness and brightness of the halo

Louder sound → faster breathing, more distortion
Quiet room → slow, smooth pulse

This halo behaves like the room’s heartbeat.
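The halo is drawn as a noisy circle whose pulse speed, wobble, and stroke all follow micNorm; a condensed sketch of that drawing code (run inside draw(), with illustrative constants):

let breathSpeed = lerp(0.5, 3.0, micNorm);                   // louder → faster breathing
let baseR = 120 + 30 * sin(frameCount * 0.02 * breathSpeed); // the slow pulse

noFill();
stroke(255, lerp(60, 200, micNorm));      // brighter halo when the room is loud
strokeWeight(lerp(1, 4, micNorm));
beginShape();
for (let a = 0; a < TWO_PI; a += 0.1) {
  let wobble = micNorm * 25 * noise(cos(a) + 1, sin(a) + 1, frameCount * 0.01);
  let r = baseR + wobble;
  vertex(width / 2 + r * cos(a), height / 2 + r * sin(a));
}
endShape(CLOSE);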

4. Timed Sparks

Every few seconds, spark particles are released from the center:

  • They drift, swirl, and fade over ~10 seconds
  • Create a subtle “memory trace” effect

This gives the visuals a slow-time texture.
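The sparks come from a simple timer plus a per-particle lifetime; roughly, assuming a global let sparks = []; and a 60 fps sketch:

// release a small burst from the center every ~5 seconds
if (frameCount % 300 === 0) {
  for (let i = 0; i < 20; i++) {
    sparks.push({
      pos: createVector(width / 2, height / 2),
      vel: p5.Vector.random2D().mult(random(0.3, 1.2)),
      life: 600   // ~10 seconds of frames
    });
  }
}

// drift, swirl slightly, fade, and retire
for (let i = sparks.length - 1; i >= 0; i--) {
  let s = sparks[i];
  s.vel.rotate(0.01);
  s.pos.add(s.vel);
  s.life--;
  stroke(255, map(s.life, 0, 600, 0, 120));
  point(s.pos.x, s.pos.y);
  if (s.life <= 0) sparks.splice(i, 1);
}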

Poem System: Long-Duration Feedback

While the visual reacts instantly, a second system keeps track of behaviour over longer periods.

Every frame, the sketch adds to running totals:

lightNorm
presenceNorm
micNorm

After a set interval (currently short for testing), the program:

  1. Computes average light, distance, and sound
  2. Categorizes the ambience into a mood (quietDark, quietBright, busyClose, mixed)
  3. Selects a matching four-line poem from a text file (a mix of well-known poems and my own favorites)
  4. Fades the poem in and out on top of the animation
  5. Resets the history buffer for the next cycle

This creates a rhythm:
instant animation → slow reflection → instant animation.
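The cycle itself is a timer check in draw(). The snippet below sketches the selection-and-reset step, with names like poemInterval, poems, and classifyMood() standing in for the real ones (the mood averaging itself is shown further down in "What I am proud of"):

// once per interval: summarize the history, pick a poem, start over
if (millis() - lastPoemTime > poemInterval) {
  let moodKey = classifyMood();            // averages the history into a mood
  currentPoem = random(poems[moodKey]);    // poems grouped by mood from the text file
  poemAlpha = 0;                           // fade the text in from transparent

  // reset the rolling history for the next cycle
  histLightSum = histDistSum = histMicSum = 0;
  histSampleCount = 0;
  lastPoemTime = millis();
}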

(Picture 2: Cinematic Photography)

Interaction Summary

When people walk by:

  • Their presence wakes the field
  • Their voice or music changes the breathing
  • Lighting conditions tint the entire atmosphere

After a long stretch of time, the system “writes” a short poem capturing the mood of the environment.

The desk object becomes a quiet companion that listens, reacts, and eventually reflects.

What the Viewer Actually Experiences:

  • They see the aura pulse when sound changes
  • They see the energy rise when someone approaches
  • They feel the ambience shift when lights dim or brighten
  • They get a poem that summarizes the vibe of the past hour

The project sits between functional sensing and artistic ambience.
It doesn’t demand interaction; it responds to whatever happens.

(Video clip 1: close-up shot of breathing)

Schematic Diagram

What I am proud of:

One part I’m especially happy with is the poem selection system, because it isn’t just random text appearing every few minutes. It observes the room quietly, stores a rolling history of the ambience, and then chooses a poem that matches the emotional “weather” of the last cycle.

The logic only works because the sketch accumulates sensor values over time, instead of reacting to single spikes. For example, this block keeps a running history of the room’s brightness, proximity, and sound:

histLightSum += lightNorm; 
histDistSum += presenceNorm; 
histMicSum += breathStrength; 
histSampleCount++;

Then, when it’s time to display a poem, the system computes averages and interprets them like a mood:

let avgLight = histLightSum / histSampleCount;
let avgDist = histDistSum / histSampleCount;
let avgMic = histMicSum / histSampleCount;

if (avgMic < 0.20 && avgLight < 0.30 && avgDist < 0.55) {
  moodKey = "quietDark";
} else if (avgMic < 0.20 && avgLight > 0.65) {
  moodKey = "quietBright";
} else if (avgMic > 0.45 && avgDist > 0.55) {
  moodKey = "busyClose";
} else {
  moodKey = "mixed";
}

Future Improvements

I want to expand the Memory Dial so it becomes a small personal companion. Ideas include:

  • Logging poems over time into a journal page
  • Adding touch sensing so the object reacts when someone taps the table
  • Allowing the user to “bookmark” moments when the aura looks special
  • Using AI to rewrite or remix poems based on ambience trends
  • Creating a physical enclosure for the sensors so it feels like a finished artifact

Week 13 – User Testing

For this week’s user test, I asked a couple of acquaintances to interact with the Memory Dial without explaining anything upfront. I wanted to see how much the system communicates on its own and whether the visuals give any sense of “cause and effect” without my narration.

The first reaction was always the same: “It looks pretty, but what does it do?”
That told me something immediately: people understood they were looking at a reactive visual piece, but they couldn’t tell what influenced what. They waved their hands, stepped closer, changed the lighting with their phone flashlights, spoke near the laptop mic, and slowly started noticing patterns.

Once they figured out these relationships, the piece became much more interesting to them. The visuals were described as “alive,” “breathing,” and “calming.” But almost every tester needed a little hint about the sensors before they fully understood the interaction.

Because of this, I later added a few lines on the landing page briefly explaining what the room is controlling.

After that addition, testers understood the interaction far faster.

One more observation: the timed poetry reveal was a pleasant surprise for everyone, but since the demo interval was short, the poem sometimes popped up before they had fully understood the visuals. They liked it, but it felt “mysterious without warning.”

Link to video

What worked well

The aura system made sense once users noticed the patterns.
People liked how the three layers moved differently. The breathing halo reacting to sound was the most immediately understood and got the most comments. Testers also said the overall motion felt calm instead of distracting.

The poem reveal was everyone’s favorite moment.
Even though it startled a few people, the transition and fade-in were described as “soft” and “cinematic.” Users said it made the piece feel thoughtful and “less like a tech demo and more like an experience.”

The visuals feel cohesive.
No one felt overwhelmed by the amount of motion, which was a relief because this system has a lot going on.

Room for Improvement

People needed guidance early on.
Without context, the sensors were basically invisible. Users tried to understand the visuals by guessing instead of experimenting. A tiny amount of text solved the issue, which shows the landing page actually matters.

Poem timing needs a cue.
Since users didn’t know a poem would appear, the transition felt abrupt during testing. A small hint like “The room is writing…” or a subtle dimming before the text appears might help lead the eye.

Distance-based motion was harder to notice.
Unlike sound or light, distance required more exaggerated movement to read clearly. Most testers stayed seated and simply waved a hand in front of the ultrasonic sensor rather than walking toward the piece. In the end that close-range gesture is probably the interaction that matters most, since the change in the visuals is hard to perceive from far away anyway.

Final Thoughts

The Memory Dial works without instructions, but only after a short moment of confusion. Once people learn what the sensors represent, the piece becomes engaging and even meditative. It encourages slow interaction rather than quick action, and that resonated with users.

This test showed me that the concept is strong, but the system benefits from gentle onboarding, even something as simple as a few lines on the landing screen. With those adjustments, the Memory Dial becomes easier to understand and more enjoyable for first-time viewers.

Week 12 – Final Project Proposal

Finalized concept

For my final project, I’m building something that sits somewhere between a desk object, a quiet companion, and a tiny archivist of the room it lives in. The idea came from a simple thought: most days pass without leaving much behind, and by the time I get to the end of the night, the whole thing feels like a blur. I’m not someone who journals daily, but I like the idea of having some kind of trace of the day even if it’s abstract, incomplete, or not linguistic at all.

So the artefact is basically a small desk object that listens to the atmosphere of the space throughout the day, and later turns those ambient shifts into a soft, formless visual cloud in p5.js. It’s not interested in what the user did, only in how the day felt. It just records the vibe of a day, no eavesdropping or surveillance (and definitely no productivity tracking). I want the final thing to feel almost poetic, like the object is quietly paying attention on the side while I’m working, and at the end of the day it shows me its version of the memory.

What the Arduino will do

The Arduino will handle all the sensor-stuff during the day. I’m using:

  • a photoresistor to capture light changes,
  • an ultrasonic sensor to sense presence/absence near the desk,
  • a piezo to detect general sound/vibration spikes.

Arduino will collect these readings over time and send them to p5.js through serial. I’m keeping the Arduino’s job simple: sense → store → transmit.
I’ll also have a small physical trigger (most likely a button or dial) that the user presses at the end of the day to “reveal” the visual memory.

What p5.js will do

p5.js will take the day’s data and transform it into an atmospheric, slow-moving cloud. I’m aiming for visuals that sit in between abstract art and environmental “weather.” Light translates to color gradients, presence to density, and sound to softness or sharpness of the shape. I’m also considering a very light ml5.js layer just to classify general movement energy, so the cloud feels a bit more alive.
[Communication is mostly Arduino to p5, but later I might also send a message back to Arduino so the object can react in a small way when the memory is generated]

Early progress

So far, I’ve been sketching a few versions of the cloud visualization to figure out what feels “alive” without being overwhelming. Physically, I’ll keep the build minimal – something that looks more like a desk artefact than a tech box.
I’m hoping the final result feels calm, personal, and a little bit poetic, not a gadget trying to do too much, but an object that simply notices what the day was like and gives it a shape.

Week 11 – Reading Reflection

Pullin’s argument challenges the lazy division between design as aesthetic indulgence and disability as pure function. He dismantles the notion that assistive devices must be invisible or “normalized,” suggesting instead that disability could (and should) be a space for expressive, creative design. That was a refreshing perspective. The hearing aid, the wheelchair, the prosthetic are cultural objects that communicate identity (not just medical equipment).

Yet, what I appreciate most in Pullin’s framing is how he exposes the moral vanity of “inclusive design.” The impulse to hide difference in the name of inclusion often erases individuality altogether. Still, I wonder whether his optimism about designers embracing disability aesthetics underestimates the market’s conservatism; we live in a world where even fashion struggles to tolerate imperfection. The essay makes me question whether good design serves comfort or visibility, and whether true accessibility might require celebrating discomfort, making difference not something to be hidden, but worn, literally, in style.

Week 10 – Reading Reflection

Bret Victor’s “A Brief Rant on the Future of Interaction Design” reads less like a complaint and more like a plea for imagination. His frustration with “pictures under glass” isn’t really about touchscreens but about how easily we mistake convenience for progress. Victor’s argument that our hands are not just pointers but thinking tools is relatable. I’d never thought of a touchscreen as “numb,” but he’s right: sliding a finger across glass is nothing like twisting, folding, or shaping an object. He’s asking designers to respect the body’s intelligence, the millions of years of evolution that made us good at feeling, gripping, and sensing the world. I agree with him almost entirely. What unsettles me is how quickly we’ve accepted numbness as normal. We’ve trained ourselves to think that friction is a flaw when it comes to UX.

The Responses piece adds an interesting layer of humility. Victor doesn’t pretend to have the solution. He admits he was ranting, not designing one. Still, his answers to critics are telling. He pushes back against gimmicks like voice control and “waving your hands in the air,” arguing that real interaction should involve the whole body and the tactile richness of touch. I found myself nodding at his line about computers needing to adapt to our bodies, not the other way around. That’s such a simple reversal, yet it cuts right through decades of design laziness. When he compares touchscreen culture to “restricting all literature to Dr. Seuss’s vocabulary,” it’s funny, but it also nails the deeper loss: we’re settling for tools built for children, not adults.

If there’s one thing I’d question, it’s Victor’s nostalgia for physicality. I agree that touch and movement matter, but I also think the human imagination adapts. The digital world is training new forms of dexterity which are mostly mental than physical. Coding, multitasking, navigating layered interfaces – these, too, are forms of “touch,” just less visible. Maybe the future of interaction design isn’t about replacing glass with holographic clay, but about balancing sensory depth with cognitive range. Victor’s rant reminds me that design should evolve with both the hand and the mind.

Week 9 – Analog + Digital

Concept

When driving at night, high-beam headlights often blind oncoming drivers. I wanted to explore a simple and affordable system that encourages responsible use of headlights, something that detects oncoming glare and helps the driver automatically or manually switch to low-beam mode. This prototype uses a photoresistor to sense incoming light and a button to simulate the driver’s manual response. Two LEDs represent the vehicle’s headlight (green) and alert indicator (red).

Mechanism

  • Ambient calibration: when the Arduino starts, it reads the surrounding light for about 1.5 seconds and sets that as a baseline for normal nighttime brightness. This matters in the real world because ambient light varies from place to place and over time.
  • Glare detection: if the light level rises significantly above that baseline, similar to when another vehicle’s high beams hit, the red LED begins to blink as a warning.
  • Manual response: when the button is pressed, the green “headlight” LED dims to a predefined low-beam brightness, showing that the driver has dipped the lights.
  • Return to normal: releasing the button restores full brightness once the road ahead is clear.

Everything happens automatically in a simple loop, using only one analog sensor (photoresistor) and one digital input (button).
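A condensed sketch of that loop is below; pin choices, the glare threshold, and the low-beam level are assumptions, and the exponential smoothing mentioned in the code highlight is left out here for brevity:

const int ldrPin = A0;
const int buttonPin = 2;
const int headlightPin = 9;       // green "headlight" LED (PWM)
const int alertPin = 10;          // red alert LED
const float GLARE_FACTOR = 1.6;   // how far above baseline counts as glare

float baseline = 0;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(headlightPin, OUTPUT);
  pinMode(alertPin, OUTPUT);

  // ~1.5 s of ambient calibration: average the surroundings at startup
  long sum = 0;
  for (int i = 0; i < 30; i++) { sum += analogRead(ldrPin); delay(50); }
  baseline = sum / 30.0;
}

void loop() {
  int light = analogRead(ldrPin);

  // glare detection: blink the alert LED when light jumps well above baseline
  if (light > baseline * GLARE_FACTOR) {
    digitalWrite(alertPin, (millis() / 250) % 2);   // simple blink without delay()
  } else {
    digitalWrite(alertPin, LOW);
  }

  // manual response: holding the button dips the headlight to low beam
  if (digitalRead(buttonPin) == LOW) {
    analogWrite(headlightPin, 60);    // predefined low-beam brightness
  } else {
    analogWrite(headlightPin, 255);   // full brightness when the road is clear
  }

  delay(10);
}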

Link to video demonstration

Schematic Diagram

Code highlight

The most interesting part for me was sensing ambient light accurately.
I learned how to smooth noisy analog readings using an exponential moving average,

ema = ALPHA * raw + (1 - ALPHA) * ema;

and how to calibrate a baseline from Arduino documentation and community projects.
This taught me how real-world sensors fluctuate constantly and why filtering is crucial before making decisions based on their data.

Use Cases:

  • A bike or e-scooter headlight that adjusts automatically when another vehicle approaches.
  • A low-cost training aid for driving schools, showing how to react responsibly to high beams.
  • A foundation for smart vehicle lighting projects that aim to reduce glare and improve road safety.

Further Enhancement

1. Replace the LEDs with an actual headlight driver or relay for real-world testing.
2. Use a camera or focused photodiode to distinguish streetlights from oncoming cars.
3. Add automatic dimming without manual input, fully autonomous dipping.
4. Integrate with IoT dashboards to log light data and analyze driving environments.
5. Use a microcontroller with more PWM channels to control multiple lamps independently.

Conclusion

This project started as a simple assignment, but it quickly became a meaningful experiment in safety-oriented design.
By combining analog sensing, human interaction, and basic control logic, I built a small system that mimics how real vehicles manage headlight behavior. It reminded me that even modest hardware, guided by thoughtful logic, can address real-world problems.

Week 9 – Reading Reflection

Tom Igoe’s “Making Interactive Art: Set the Stage, Then Shut Up and Listen” advocates for a shift in the role of the artist in interactive work: the artist must stop controlling the experience and instead facilitate a conversation with the audience. Igoe’s key argument is that interactivity only truly begins when the artist relinquishes interpretation and allows the audience to complete the piece. This concept challenges the creator’s ego and promotes a more humble, open-ended form of art. While I appreciate the insight, I find Igoe’s emphasis on “shut up and listen” a bit idealistic. In practice, many audiences need some level of guidance to fully engage with an interactive installation. Too much ambiguity can easily lead to frustration. Nonetheless, his metaphor of the artist as ‘a director—staging a conversation rather than dictating a lecture’ resonates strongly. At its core, this advice serves as a reminder that interaction requires mutual respect: artists must listen as much as they create.

In “Physical Computing’s Greatest Hits (and Misses),” Igoe reflects on recurring projects in interactive art, such as theremins, drum gloves, and video mirrors, and contemplates why they keep resurfacing in new forms. He doesn’t dismiss these repetitive ideas; rather, he sees their evolution as evidence of growing refinement and deeper understanding. Igoe suggests that repetition is a form of progress, not stagnation. In physical computing, each reimagining of a “classic” project offers new possibilities, whether it’s smarter sensors, more intuitive designs, or deeper contextual relevance. Igoe also rejects the notion that novelty for its own sake is the ultimate goal, calling attention to the often-overlooked value in revisiting older concepts. This stance challenges the modern fixation on innovation for innovation’s sake, emphasizing that novelty must be paired with genuine engagement and a willingness to learn from the past, not just chase aesthetics or trends.

Taken together, both essays stress the importance of humility in interactive art. Whether talking about listening to the audience or refining established ideas, Igoe places the artist’s role not in the creation of definitive, controlled experiences but in the facilitation of dialogue and discovery. The act of interacting with art, according to Igoe, is an ongoing process that requires responsiveness and openness. The artist’s task is to create the conditions that encourage curiosity, rather than rigidly scripting the conversation. In the end, good interactive art is about paying attention to what the interaction itself reveals and adjusting accordingly, facilitating a space where discovery is as important as design.

Week 8 – Unusual Switch

For my project, I created an Arduino switch that activates through physical contact, specifically, a hug. Instead of using hands, I built a simple “hug switch” using two pieces of aluminum foil connected to the Arduino. One piece was taped onto my sleeve, and the other onto a plush toy sitting on my chair. When I hugged the toy, the foil pieces touched and completed the circuit, turning on an LED on the breadboard.

This setup used digitalRead() to detect when the circuit closed, lighting up the LED as a visual indicator. It’s a very basic circuit, just two foil pads, a resistor, and an LED, but it demonstrated how the human body can act as a conductor to trigger digital inputs. I liked how small physical gestures could translate into electronic signals. The process reminded me how interaction design can make technology feel more human, even with something as simple as a hug that lights up a tiny LED.
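The logic behind it fits in a few lines. This is a rough reconstruction rather than the exact sketch, shown here with the internal pull-up for the foil contact (one of several ways to wire it):

const int foilPin = 2;    // one foil pad to pin 2, the other pad to GND
const int ledPin = 13;

void setup() {
  pinMode(foilPin, INPUT_PULLUP);   // pin stays HIGH until the pads touch
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // a hug presses the two foil pieces together and pulls the pin LOW
  if (digitalRead(foilPin) == LOW) {
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}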

Schematic Diagram