Week 9: Reading Response

Physical Computing’s Greatest Hits (and misses)

I often feel like nothing is original anymore. Every time I come up with an idea, I search it and find five people who have already done it, sometimes in more impressive ways. That can be discouraging. It makes me wonder what the point is if everything has already been made. But reading Tom Igoe’s piece helped shift that mindset. He talks about the “greatest hits” of physical computing — projects like musical gloves or motion-controlled sounds — not as clichés, but as classic forms that people keep coming back to. These ideas repeat because they are approachable, fun, and full of room for variation.

What I appreciated most was the reminder that repetition doesn’t cancel out creativity. A musical glove might not be new, but the way I make it, the story I tell through it, and how I design the experience can still feel personal. Igoe encouraged adding a twist, and that made me realize I do not have to be original in concept, but in execution.

I also liked his point about meaningful gestures. A motion that triggers a sound might technically work, but if the movement feels random or doesn’t make sense in the context, the interaction loses impact. That made me think more critically about how I design user input. I want people to feel like what they do matters, and that their actions are met with responses that feel natural and thoughtful. That, to me, is the real magic of interaction.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

Tom Igoe’s post made me realize I often over-explain my work. I worry people won’t get it unless I guide them, but he makes a strong case for stepping back. In interactive art, it’s not just about what I make. It’s about what the audience does with it.

I liked how he compared it to setting a stage. I provide the space and tools, but the audience brings it to life. That means accepting unexpected interpretations and trusting the piece to speak for itself. I think good design should be guidance enough. If the design is clear and intentional, it should naturally lead the audience through the experience without me having to explain everything.

Moving forward, I want to create work that invites exploration without over-directing. That kind of openness feels more honest and more meaningful.

Week 8: Reading Response

Emotion & Design: Attractive things work better

This week’s reading made me laugh a little because it called me out directly. Norman’s idea that attractive things work better really stuck with me, but not because I think they literally work better. It’s because they make us feel better about them. That feeling changes how we treat the object. Case in point: this pink-bow mug I saw online. I would buy it instantly just because it’s cute. But if I actually tried drinking from it, I know the bow handle would probably poke me in the eye. And yet, I still want it.

Cute pink bow mug

I would 100% buy this, and it would 100% poke my eye.

It reminded me of how we collect things like stickers or fancy stationery just to admire them and never actually use them. Sometimes, function becomes secondary when something looks good enough. Norman makes the case that beauty improves usability by creating positive emotions, but I think this also raises a bigger question. How far are we willing to let go of functionality just to have something pretty? And when does that stop being design and start becoming decoration? It’s something I want to think about more in my own work. I still want my interactive projects to function well, but maybe it’s okay to prioritize joy and visual pleasure too. Sometimes, looking at something beautiful is the function.

Her Code Got Humans on the Moon

Reading about Margaret Hamilton reminded me why I love creative tech in the first place. She wasn’t just writing code. She was building the foundation of what software could even be. What really stood out to me wasn’t only that she helped put people on the moon, but that she did it at a time when software engineering wasn’t even considered real engineering. She coined the term herself because nobody else was giving it the weight it deserved. That says a lot about the kind of vision she had. She wasn’t just part of the system. She was defining it.

What I found especially inspiring was her mindset around error handling. She didn’t assume the user would always follow instructions perfectly. She designed with failure in mind, and made sure the code could still function under pressure or human error. That’s a mindset I want to carry into my own work, especially when building interactive projects. Not everything needs to be perfect, but it should be ready for the unexpected. That’s not just smart coding, it’s thoughtful design. The user might not always know what to do, but the system should be kind enough to keep going.

Week 5: Reading Response

Seal and puppy lookalikes


When I look at this photo of a seal and a puppy that somehow look like long-lost twins, my human brain gets the joke instantly. I can tell they’re two different animals, and I also get why they’re being compared. There’s context, humor, and visual nuance involved. But for a computer, that kind of recognition isn’t simple. Computer vision doesn’t work like human vision. We interpret meaning and emotion, while a computer just sees pixels, shapes, and patterns. Golan Levin’s essay really drove that home. Computers are not seeing the world, they’re processing data through whatever narrow lens we’ve given them.

To help computers understand what we want them to track, we use things like face detection, color tracking, optical flow, and trained models. These tools help narrow the field and make the computer’s “guess” more accurate. But still, it’s guessing. A puppy that looks like a seal might completely throw it off if the system wasn’t trained on edge cases like this. That’s part of what makes working with computer vision both fascinating and fragile.

In interactive art, computer vision opens up exciting possibilities. We can create responsive environments, playful installations, and performances that react to motion and presence. But the same tools are also used in surveillance and monitoring. There’s a tension between creativity and control that we can’t ignore. As an artist, I think it’s important to design with awareness. Just because the system can track someone doesn’t mean it should. I want to create interactions that feel intentional and thoughtful, not invasive. At the end of the day, I want the system to respond with care, not just accuracy.

Week 4: Data Visualization

For this week’s production assignment on data visualization and generative artwork, I wanted to capture a very real and very dramatic part of my daily life: my cat Pieni’s obsession with food. Even when she has just been fed, she still manages to ask for more like she hasn’t eaten in centuries.

So I thought, why not visualize her lies?

Concept:

This project is a simple, animated bar chart that compares two things across a typical day:

  • How many times I actually fed Pieni

  • How many times she asked for food anyway

How It Works

Each time slot (from 8AM to 8PM) has two bars:

  • A light blue bar representing the single time I fed her (yep, I did my part).

  • A dark blue bar that pulses with animation to show how many times she pretended to be starving during that same period.

I added a slight pulsing animation to the begging bars to reflect how annoyingly persistent and dramatic her pleas are—even when her bowl is full.

Challenges & Decisions

While this project wasn’t technically hard, the main challenge was design clarity. I didn’t want it to look like a boring spreadsheet. I wanted it to be:

  • Aesthetic and cat-meme friendly

  • Easy to understand at a glance

  • Somewhat interactive (through animation)

I spent a bit of time tweaking:

  • The color scheme (cool blues)

  • Bar spacing and layout

  • Centering and visual alignment

  • The legend—cleanly placed in the top-right instead of labeling every bar

What I Learned

This week helped reinforce how storytelling and humor can make even simple data visualizations fun and engaging. It also helped me practice:

  • Pulse animation with sin() for movement

  • Using clean design principles in p5.js

  • Balancing simplicity with personality
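As a concrete sketch, the pulse could be a small sin()-based offset on each begging bar’s drawn height, driven by p5’s frameCount. The function name, amplitude, and speed below are illustrative guesses, not the actual sketch code:

```javascript
// Illustrative pulse for the "begging" bars: the drawn height
// oscillates around the data value using sin(). amplitude and
// speed are placeholder values to tune by eye.
function pulsedHeight(baseHeight, frameCount, amplitude = 0.08, speed = 0.1) {
  return baseHeight * (1 + amplitude * Math.sin(frameCount * speed));
}
```

In the draw loop, the dark blue bars would use something like `pulsedHeight(value, frameCount)` while the feeding bars stay static.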

Week 4: Reading Response

After reading Norman, I kept thinking about how often I’ve felt genuinely embarrassed, not because I did something wrong, but because something was so badly designed that it made me look like I didn’t know what I was doing. I’ve blamed myself so many times for design failures, but Norman makes it clear that it’s not me, it’s the object. One thing that still annoys me is the sink setup at Dubai Airport. The soap, water, and dryer are all built into one sleek bar, with no sign telling you which part does what. You just keep waving your hands around and hope something responds. Sometimes the dryer blasts air when you’re trying to get soap, or nothing works at all. To make things worse, some mirrors have Dyson hand dryers built in, others have tissues hidden somewhere off to the side, and there’s no way to know without ducking and peeking like a crazy person. Norman’s point about discoverability and signifiers felt especially real here. One simple label could fix all of it.

In my interactive media work, I’m starting to think more about how people approach what I build. Norman’s ideas about system image and mental models stuck with me. If someone doesn’t know what they’re supposed to do when they see my sketch, I’ve already failed as a designer. In my own work, I try to make interactive elements obvious and responsive. If something is clickable, it should look like it. If something changes, the feedback should be clear. The goal is to make users feel confident and in control, not confused or hesitant. Good design doesn’t need to explain itself. It should just make sense.

Week 3: Reading Response on What Makes Interaction Strong?

After going through this week’s reading, I realized how often the word “interactive” gets thrown around. Chris Crawford makes a sharp distinction between reaction and interaction. Just because something responds doesn’t mean it’s truly interactive. For him, strong interaction happens only when a system “listens, thinks, and responds thoughtfully,” like a real conversation. He emphasizes that true interaction requires both sides to be active participants. That stuck with me, especially since we often label anything that reacts to input as interactive, even when it’s really just one-sided. I liked how Crawford stripped the term down to something clear. It’s not about bells and whistles, it’s about actual communication.

Looking back at my own p5.js sketches, I was intentional about making them gamified because I wanted them to feel interactive, not just look interactive. I wanted them to look more like a game than a GIF because only by interacting with an artwork do you really get to sense what went behind it and what it stands for. I love the effect of something actually happening because of a user’s input. It gives users a sense of presence, like they’re not just observing but actively shaping what unfolds. That moment of response makes people feel like they’re part of the piece, not just pressing buttons on the outside. It’s rewarding for both the user and the creator, and it’s the kind of experience I want to keep building on. To me, interactivity is a back-and-forth communication.

Week 2: Reading Reflection on Casey Reas’ Talk

When I first watched Casey Reas’ talk on chance operations at the beginning of the semester, I wasn’t entirely sure how to feel. Honestly, I was just ready to absorb whatever perspectives were being offered through different narratives. I’ve always been someone who likes control when I’m creating: knowing what’s going to happen, having a plan, getting things “right.” So the idea of building randomness into a project felt a little chaotic, maybe even risky. But Reas broke that tension down. He walked us through how he uses systems, chance, and instruction-based art to remove the artist’s ego from the process and let the artwork evolve in ways he couldn’t fully predict. The way he referenced John Cage and connected those ideas to computational art made it all click: randomness doesn’t mean lack of intent, it just shifts where the intent is. Reas isn’t just throwing things into the void and hoping for the best, he’s setting up a structure where randomness can still move freely. That resonated with me. It’s not about losing control entirely, it’s about creating a space where unexpected things can happen without everything falling apart. That made the idea of randomness feel a lot less intimidating and a lot more useful.

Since I’m writing this reflection a bit later in the semester, I’ve had more time to think about my own relationship to randomness—and honestly, I like randomness I can control. Total chaos just ends up looking like a muddy mess. I prefer when randomness happens within boundaries, where the outcome is still cohesive and intentional, even if it’s unpredictable. That’s the balance I’m drawn to: letting go a little, but not so much that the work loses meaning or direction. It’s about creating space for surprise, but still being able to call the final result your own.

Week 12 – Finalized Idea

Concept:

My project explores the fascinating intersection between physical interaction and emergent systems through a digital flocking simulation. Inspired by Craig Reynolds’ “Boids” algorithm, I’m creating an interactive experience where users can manipulate a flock of virtual entities using both hand gestures and physical controls. The goal is to create an intuitive interface that allows people to “conduct” the movement of the flock, experiencing how simple rules create complex, mesmerizing patterns.

The simulation displays a collection of geometric shapes (triangles, circles, squares, and stars) that move according to three core flocking behaviors: separation, alignment, and cohesion. Users can influence these behaviors through hand gestures detected by a webcam and physical controls connected to an Arduino.

Arduino Integration Design

The Arduino component of my project will create a tangible interface for controlling specific aspects of the flocking simulation:

  1. Potentiometer Input:
    • Function: Controls the movement speed of all entities in the flock
    • Implementation: Analog reading from potentiometer (0-1023)
    • Communication: Raw values sent to P5 via serial communication
    • P5 Action: Values mapped to speed multiplier (0.5x to 5x normal speed)
  2. Button 1 – “Add” Button:
    • Function: Adds new entities to the simulation
    • Implementation: Digital input with debouncing
    • Communication: Sends “ADD” text command when pressed
    • P5 Action: Creates 5 new boids at random positions
  3. Button 2 – “Remove” Button:
    • Function: Removes entities from the simulation
    • Implementation: Digital input with debouncing
    • Communication: Sends “REMOVE” text command when pressed
    • P5 Action: Removes 5 random boids from the simulation
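The potentiometer mapping in item 1 can be sketched as a small helper on the P5 side. This is a sketch under the stated ranges (0-1023 in, 0.5x-5x out), with an illustrative function name:

```javascript
// Map a raw potentiometer reading (0-1023) to the flock's speed
// multiplier (0.5x to 5x), clamping out-of-range serial values.
function potToSpeed(raw) {
  const clamped = Math.max(0, Math.min(1023, raw));
  return 0.5 + (clamped / 1023) * (5 - 0.5);
}
```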

The Arduino code will continuously monitor these inputs and send the appropriate data through serial communication at 9600 baud. I plan to implement debouncing for the buttons to ensure clean signals and reliable operation.
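The debouncing can be modeled as a small state machine: a reading only counts once it has held steady for a settle time. The real version would live in the Arduino loop using millis(); here it is sketched in JavaScript for consistency with the P5 side, with an assumed 50 ms settle time:

```javascript
// Sketch of button debouncing as a state machine. Returns true on a
// stable press edge, false on a stable release edge, null otherwise.
// The 50 ms delay is an assumption to tune for the actual buttons.
function makeDebouncer(delayMs = 50) {
  let lastStable = false;   // last debounced state
  let lastReading = false;  // last raw reading
  let lastChange = 0;       // time of last raw transition
  return function update(reading, nowMs) {
    if (reading !== lastReading) {
      lastChange = nowMs;   // raw signal moved; restart the settle timer
      lastReading = reading;
    }
    if (nowMs - lastChange >= delayMs && reading !== lastStable) {
      lastStable = reading; // held steady long enough: accept the edge
      return reading;
    }
    return null;            // still bouncing, or no change
  };
}
```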

P5.js Implementation Design

The P5.js sketch handles the core simulation and multiple input streams:

  1. Flocking Algorithm:
    • Three steering behaviors: separation (avoidance), alignment (velocity matching), cohesion (position averaging)
    • Adjustable weights for each behavior to change flock characteristics
    • Four visual representations: triangles (default), circles, squares, and stars
  2. Hand Gesture Recognition:
    • Uses ML5.js with HandPose model for real-time hand tracking
    • Left hand controls shape selection:
      • Index finger + thumb pinch: Triangle shape
      • Middle finger + thumb pinch: Circle shape
      • Ring finger + thumb pinch: Square shape
      • Pinky finger + thumb pinch: Star shape
    • Right hand controls flocking parameters:
      • Middle finger + thumb pinch: Increases separation force
      • Ring finger + thumb pinch: Increases cohesion force
      • Pinky finger + thumb pinch: Increases alignment force
  3. Serial Communication with Arduino:
    • Receives and processes three types of data:
      • Analog potentiometer values to control speed
      • “ADD” command to add boids
      • “REMOVE” command to remove boids
    • Provides visual indicator of connection status
  4. User Interface:
    • Visual feedback showing connection status, boid count, and potentiometer value
    • Dynamic gradient background that subtly responds to potentiometer input
    • Click-to-connect functionality for Arduino communication
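The three steering behaviors all follow the same pattern: compute a desired direction, then clamp it to a maximum steering force. As one example, cohesion (position averaging) might look like the following; the names and the force limit are illustrative, not the project’s actual code:

```javascript
// Minimal sketch of the cohesion behavior from Reynolds' Boids:
// steer toward the average position of the neighbors, clamped to a
// maximum steering force. Boids and neighbors are {x, y} points.
function cohesion(boid, neighbors, maxForce = 0.05) {
  if (neighbors.length === 0) return { x: 0, y: 0 };
  let cx = 0, cy = 0;
  for (const n of neighbors) { cx += n.x; cy += n.y; }
  cx /= neighbors.length;               // local center of mass
  cy /= neighbors.length;
  let dx = cx - boid.x, dy = cy - boid.y;
  const mag = Math.hypot(dx, dy);
  if (mag > maxForce) {                 // clamp the steering vector
    dx = (dx / mag) * maxForce;
    dy = (dy / mag) * maxForce;
  }
  return { x: dx, y: dy };
}
```

Separation and alignment would have the same shape, averaging repulsion vectors and neighbor velocities respectively, each scaled by its adjustable weight.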

Current Progress

So far, I’ve implemented the core flocking algorithm in P5.js and set up the hand tracking system using ML5.js. The boids respond correctly to the three steering behaviors, and I can now switch between different visual representations.

I’ve also established the serial communication framework between P5.js and Arduino using the p5.webserial.js library. The system can detect previously used serial ports and automatically reconnect when the page loads.
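On top of that framework, the three message types described earlier (raw potentiometer values plus the “ADD” and “REMOVE” commands) could be dispatched by a handler along these lines; the state shape and field names are hypothetical:

```javascript
// Illustrative dispatcher for incoming serial lines. Text commands
// adjust the boid count in steps of 5 (matching the design above);
// anything numeric is treated as a raw potentiometer reading.
function handleSerialLine(line, state) {
  const msg = line.trim();
  if (msg === "ADD") {
    state.boidCount += 5;
  } else if (msg === "REMOVE") {
    state.boidCount = Math.max(0, state.boidCount - 5);
  } else {
    const raw = parseInt(msg, 10);
    if (!Number.isNaN(raw)) {
      state.potValue = Math.max(0, Math.min(1023, raw)); // clamp to ADC range
    }
  }
  return state;
}
```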

For the hand gesture recognition, I’ve successfully implemented the basic detection of pinch gestures between the thumb and different fingers. The system can now identify which hand is which (left vs. right) and apply different actions accordingly.
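The pinch detection itself reduces to a distance check between two HandPose-style keypoints (thumb tip vs. fingertip, each an {x, y} point). The 30-pixel threshold here is an assumption that would need tuning per camera distance:

```javascript
// Hedged sketch: a "pinch" is when the thumb tip and a fingertip
// are within a pixel threshold of each other.
function isPinch(thumbTip, fingerTip, threshold = 30) {
  const d = Math.hypot(thumbTip.x - fingerTip.x, thumbTip.y - fingerTip.y);
  return d < threshold;
}
```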

Next steps include:

  1. Finalizing the Arduino circuit with the potentiometer and two buttons
  2. Implementing proper debouncing for the buttons
  3. Refining the hand gesture detection to be more reliable
  4. Adjusting the flocking parameters for a more visually pleasing result
  5. Adding more visual feedback and possibly sound responses

The most challenging aspect so far has been getting the hand detection to work reliably, especially distinguishing between left and right hands consistently. I’m still working on improving this aspect of the project.

I believe this project has exciting potential not just as a technical demonstration, but as an exploration of how we can create intuitive interfaces for interacting with complex systems. By bridging physical controls and gesture recognition, I hope to create an engaging experience that allows users to develop an intuitive feel for how emergent behaviors arise from simple rules.

Week 12: Finalized Concept

Finalized concept for the project:

My final project concept is inspired by a popular game called Piano Tiles. My idea is to create a sturdy, fully functional four-key piano connected to an Arduino. Users will be able to physically play the game on this piano, while the gameplay is displayed on a laptop screen and recreated in p5.js, with some differences such as a life power-up.

Design and description of what your Arduino program will do with each input and output and what it will send to and/or receive from P5

My Arduino program will be in charge of sending the push-button inputs to p5.js whenever a player presses a key on the piano. This will be similar to the musical instrument assignment we did in class, except the speaker will not be on the Arduino; instead, the sound will come from the computer, where p5.js checks whether the player pressed the right key in the right time frame.

Design and description of what P5 program will do and what it will send to and/or receive from Arduino:

My p5.js program will run the graphics for the game itself, with users seeing the tiles they have to hit. It will receive input from the Arduino whenever the user presses a piano key and check whether that press was correct according to the game rules. If not, the game will end unless the player has an extra life, which can be earned in the game by pressing all 4 tiles 3 times at a certain point in the song.
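That timing check can be reduced to a hit-window test: when a key press arrives from the Arduino, the falling tile in that lane has to be close enough to the hit line. The window size below is a placeholder to tune against the serial delay:

```javascript
// Hypothetical hit test: a press counts if the tile's y position is
// within windowPx of the hit line when the Arduino message arrives.
function isHit(tileY, hitLineY, windowPx = 40) {
  return Math.abs(tileY - hitLineY) <= windowPx;
}
```

A wider window would also absorb some of the Arduino-to-p5.js latency, at the cost of making the game easier.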

I’m currently working on the 3d design for the piano. Right now I found a file online that looks like this:

But I need it to have 4 keys, so I am going to see if I can alter the design to have 4 keys instead of these 7, and also add a hole for the wires to come out of. I also anticipate that syncing the key presses with p5.js will be hard. Firstly, there is often a delay between the Arduino and p5.js, which I noticed when using the potentiometer in one of our assignments, and that could mess up my game. Secondly, making the tiles fall in sync with an audio track will be difficult and time-consuming. I may just make the tiles fall randomly, even if it is not in sync with the music. The game mechanics will still work as usual though.

Week 12 – Project Proposal

This requires some explanation:

Originally, I had a well-fleshed-out idea for using motion sensors to play a volleyball game. While conceptually sound, in practice the motion sensors just could not cooperate. After a lot of testing with the Arduino and its sensors, I realized that the ball would move too quickly for the sensors to process properly. Instead, I decided to make a violin.

The main mechanism in charge of producing sound will be a potentiometer, set up so that when the bowstring is pulled back and forth, the potentiometer dial turns. Its analog output will be sent to p5.js, and the detected bow movement will drive a synthesizer to play sound. Next, the violin will have buttons as digital inputs. Holding down a button gives the Arduino a digital signal that is also sent to p5.js, where the pressed button is translated into a specific note in the scale. Each of the 8 buttons represents one note, forming a full scale. This gives us a functional violin.
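Under those rules, the p5.js side could decide which note to play roughly like this. The frequencies (one illustrative C major scale, C4 to C5) and the motion threshold are assumptions, since the actual scale and sensitivity aren’t specified:

```javascript
// Assumed scale: C major, C4..C5, in Hz.
const SCALE_HZ = [261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88, 523.25];

// Sketch: bow movement is the change between successive potentiometer
// readings; a moving bow plus a held button index yields a note, else null.
// minMotion is a hypothetical noise threshold on the dial.
function violinNote(prevPot, currPot, buttonIndex, minMotion = 3) {
  const bowing = Math.abs(currPot - prevPot) >= minMotion; // dial actually turning
  if (!bowing || buttonIndex < 0 || buttonIndex >= SCALE_HZ.length) return null;
  return SCALE_HZ[buttonIndex];
}
```

The returned frequency would then be fed to a p5.sound oscillator, and a still bow (readings barely changing) would silence it, like lifting the bow off the string.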