Week 11: Reading Response

A Brief Rant on the Future of Interactive Design + Follow-up

The first article is essentially arguing that the dominant vision of future technology, where everything is a flat glassy touchscreen you slide your finger across, is not actually visionary at all. It is just a timid extension of what already exists, and what already exists ignores almost everything that makes human hands remarkable. His point is that our hands do two things extraordinarily well: they feel things and they manipulate things. Touchscreens strip both of those capabilities away in exchange for a visual interface that he calls Pictures Under Glass. I reflected on his example of making a sandwich. He asks you to pay attention to how many tiny adjustments your fingers make without you even thinking about it, switching grips, sensing weight, feeling texture, and then he asks whether we are really going to accept a future interface that is less expressive than that. That question reminded me of the time I tried learning drums through one of those tablet apps, and the difference between that and sitting in front of a real kit is almost laughable. On a real drum the stick bounces back after you hit it, and that rebound produces important information. Your wrist reads it and adjusts the next stroke automatically, and I could feel even as a beginner that my hands were supposed to be learning something from that response. On the app there is nothing. You tap a flat surface, it makes a sound, and that is the entire relationship. I was learning the pattern but I was not learning to actually play, and from what I can understand, that distinction is what the author is getting at.

As for his response to the pushback, I actually found it more interesting than the original rant, especially the part where someone asked about voice interfaces and he said he has a hard time imagining a painter telling his canvas what to do. That, again, reminded me of the drums. There is no way to describe with a voice or replicate on a screen the feeling of a snare cracking back against your stick, or the way a cymbal responds differently depending on where and how hard you hit it. That knowledge lives in your hands, built up over time, and I genuinely felt the absence of it every time I went back to the app and realized my fingers were learning nothing they could transfer to a real instrument. It felt like I was practicing the appearance of playing drums without any of the physical intelligence that actually makes someone a drummer.

Week 10: Digital (Slide Switch) and Analog (Sound Sensor)

Concept

I wanted to make a lighting system where sound controls the brightness. For my digital sensor, I used a slide switch to turn the LEDs on and off. For my analog sensor, I used a sound sensor: every time it detects a sound, the LEDs get dimmer. I’m quite interested in sound-activated lights, and even though a sound sensor didn’t come with the kit, I still wanted to give it a try, so I got the KY-037 from Amazon.

Full Code | Video Demo | Schematics

Code that I’m proud of

int soundValue = analogRead(soundPin);
int change = abs(soundValue - 512);

if (change > threshold) {
  brightness = brightness - 50; // drop brightness by 50 each sound
  Serial.println("Lowering brightness.."); // debug to make sure sound is going through and brightness lowering
  delay(200);
}

I’m proud of this part because I had to actually understand what the sound sensor was giving me. I thought it would just tell me loud or quiet (HIGH/LOW), but it outputs a number between 0 and 1023, and in silence it sits around 512 (about half of that range). I also added the Serial.println myself because I had no idea if claps were even registering, so I wanted to confirm in the Serial Monitor that it was working before trusting the LEDs.

How this was made

I started with the slide switch wired to pin 2 with a 10kΩ pull-down resistor to GND; the pull-down keeps the pin from floating and giving random readings when the switch is open. When it reads LOW, both LEDs turn off and brightness resets to 255 (maximum brightness) so it starts fresh every time. I had to learn how the sound sensor worked and wired it myself. It outputs a continuous number between 0 and 1023 representing the volume, sitting around 512 (the mid-range value) in silence. Every loop, the code reads that value, subtracts 512, and takes the absolute value to get the amplitude. If that crosses my threshold of 70, the brightness drops by 50. I had a problem with sensitivity at first, as it kept triggering on background noise or missing claps, and I found out that the module has a small dial you adjust with a screwdriver. I also added a Serial.println debug line so I could confirm in the Serial Monitor that claps were actually registering before trusting the LEDs. The two LEDs on pins 9 and 10 each have a 330Ω resistor to GND and receive the brightness value through analogWrite using PWM.
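The reading-to-brightness math described above can be sketched as a small pure function. This is plain JavaScript just to illustrate the logic (the actual project runs as Arduino C++); the 512 midpoint, 70 threshold, and 50 step are the project’s values, while the clamp at 0 is my own small addition so repeated claps can’t push brightness negative.

```javascript
// Mirrors the loop's math: raw reading -> amplitude -> new brightness.
// Midpoint 512, threshold 70, step 50 match the values described above.
// The Math.max clamp at 0 is an addition, not in the original sketch.
function updateBrightness(rawReading, brightness, threshold = 70, step = 50) {
  const amplitude = Math.abs(rawReading - 512); // distance from the silent midpoint
  if (amplitude > threshold) {
    brightness = Math.max(brightness - step, 0); // dim, but stay in analogWrite range
  }
  return brightness;
}
```

Walking through a clap: a reading of 700 gives an amplitude of 188, which crosses the threshold, so a brightness of 255 drops to 205.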

Reflection & Future Improvements

This was quite a challenging assignment because I insisted on using a sound sensor. Even though the sound sensor was new to me and not something we covered in class, I was able to apply a lot of the analog concepts we already learned (things like analogRead, analogWrite, and PWM), which translated over and made it easier to grasp. I went back to the class notes and a few tutorials online (referenced below) to piece it together. If I were to keep going, I’d add more brightness steps so the dimming feels smoother, and I’d revisit an earlier idea where the two LEDs go in opposite directions, one dimming while the other brightens, both driven by the same live sound. Nevertheless, I’m happy with my output and how it turned out.

References

https://github.com/liffiton/Arduino-Cheat-Sheet

https://docs.arduino.cc/language-reference/

Arduino Sound Sensor: Control an LED with Sound

Week 10: Reading Response

Physical Computing’s Greatest Hits (and misses)

This article goes through a recurring list of physical computing project themes that show up in classes every year, and I found it quite fascinating how the author encourages students to pursue repeated ideas. That resonated with me because I sometimes catch myself thinking of a project idea, searching whether it has been done, and then just not going through with it with the mindset that someone has already done it, and even better than what I would’ve done. I think that has stunted my growth and cut off exploration I probably could’ve learned a lot from. This also reminded me of how in traditional art, everyone paints a still life or draws a figure at some point. Nobody tells you not to draw a bowl of fruit because it has been done before. These things become learning stepping stones in your work, and I think that is just as valuable.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

This article felt like a continuation of that idea. Here the main argument framed interactive art as the beginning of a conversation rather than a finished statement like a painting or a sculpture; the moment you over-explain what something means or how someone is supposed to feel about it, you have already ended that conversation before it started. Coming from a background where I spent some time exploring traditional art, where the work is usually a fixed object that speaks for itself, I found this shift in thinking genuinely difficult to wrap my head around at first. A painting hangs on a wall and you bring yourself to it. Interactive work is different because the piece is actually incomplete until someone engages with it, and what they do becomes part of what the work is. I resonated with his comparison to a theater director working with actors. He says you can give an actor props and suggest intentions, but you cannot tell them how to feel; they have to find it themselves. I think that is a really honest way of describing what good interactive design should do: you are building the conditions for something to happen. And I think that is harder than it sounds, because there is a natural instinct when you make something to want people to get it the way you intended. I feel that every time I finish a project and immediately want to stand next to it and explain it to whoever walks by. Reading this made me realize that impulse, as understandable as it is, is actually working against the experience I am trying to create.

Week 9: Reading Response

Emotion and Design: Attractive Things Work Better

The reading starts off with the author’s personal teapot collection and notes their unique qualities: one is functionally absurd, one is ugly yet charming, and one is elegantly engineered for the stages of tea brewing. He uses these very objects to back up his claim that usability and beauty need not be in conflict; in fact, things that feel good to use and look at actually perform better in our minds because of the emotional state they put us in, and a beautiful product can help a user work through minor problems that its ugly (but functional) counterpart might not. I do agree with his point that prioritizing usability alone can lead to designs that work but feel sterile, and this reminds me of the function-over-aesthetics mindset that is reinforced in architecture, where function almost overshadows how spaces feel for the people using them. However, while I think his argument fits everyday products well, I don’t think it carries over the same way to architecture, which operates under far greater constraints like structure, material, and safety, where poor functional decisions have serious consequences. From what I can see, it is a context in which aesthetics can’t come first: an architectural structure can be beautiful, but it ultimately has to serve its purpose of being a safe and functional space for the user.

Her Code Got Humans on the Moon

The article follows mathematician Margaret Hamilton, who took a programming job at MIT as something temporary while her husband finished law school and ended up accidentally building the foundation of software engineering while helping land humans on the moon. At one point her daughter crashed the simulator by triggering a program that no astronaut was ever supposed to activate mid-flight, and although Hamilton flagged it as a real issue and wanted to add a kind of protective error-checking code to prevent it, NASA pushed back and claimed that astronauts were too well trained for that. Months later, it happened. I think that kind of overconfidence in human perfection is something a lot of institutions fall into, and it actually reminded me of the Titanic. The ship was considered so structurally sound that the people in charge genuinely called it “unsinkable”, and that certainty is what made them careless about the lifeboats, the speed, and the warnings. I think both cases show the same thing: when you convince yourself something will never happen, you stop preparing for it, and that is exactly when it does. I truly respected Hamilton for making sure she stayed prepared, so her team was ready to fix it when it did go wrong.

Week 7: Midterm Project

Concept

Ella’s Panaderya is an interactive experience set inside a Filipino bakery, also called a panaderya. The piece invites users to explore the space, click on pastries to learn about them, control the music playing on the radio, and interact with a pot of champurado. My goal was to create something that felt warm and nostalgic to Filipinos and informative for those who aren’t.

Final Sketch!

How This Was Made

The sketch was built in p5.js using a game state system to manage which scene is displayed at any given time. Each state has its own draw function that renders the appropriate graphics. The bakery scene uses a photo background with clickable regions mapped onto the pastries and interactive objects in the image. To find the correct coordinates for each pastry, I added a temporary debug line inside drawBakery() that displayed live mouseX and mouseY values on the canvas as the mouse moved. This made it easy to pinpoint the top-left and bottom-right corners of each item and calculate the width and height of each clickable zone. I removed the debug line once all coordinates were set.

Temporary debug line:

fill(255, 0, 0);
textSize(10);
text(mouseX + ", " + mouseY, mouseX, mouseY);
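The game state system mentioned above can be sketched as a lookup from a state string to a draw function. This is an illustrative skeleton, not the project’s exact code; the state names and the goTo helper are my own, with drawBakery standing in for the real scene function.

```javascript
// One draw function per scene, looked up by the current state string.
// State names and helper names here are illustrative.
let gameState = "landing";

const scenes = {
  landing: () => { /* title screen, instructions, "press any key" */ },
  bakery:  () => { /* photo background + clickable regions (drawBakery) */ },
};

// Called every frame: render whichever scene is active.
function drawScene() {
  scenes[gameState]();
}

// Switch scenes, ignoring unknown state names.
function goTo(state) {
  if (scenes[state]) gameState = state;
}
```

In p5.js, draw() would just call drawScene(), and keyPressed() or mousePressed() would call goTo("bakery") to transition.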

Each pastry is a class that holds its position, dimensions, name, and description. I made it so when you click a pastry, a popup displays that information. The champurado pot has its own layer with a top-down bowl drawn using shapes, a stirring animation, and a milk interaction that lightens the color of the champurado. The radio buttons are mapped to coordinates on the bakery image (same as the pastries) and control three songs using the sound functions we learned in class.
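A pared-down version of that pastry class might look like the following. The field names, the contains() helper, and the pandesal example are illustrative, not the project’s exact code.

```javascript
// A pastry with a rectangular clickable zone and popup text.
class Pastry {
  constructor(x, y, w, h, name, description) {
    this.x = x; this.y = y;   // top-left corner of the clickable zone
    this.w = w; this.h = h;   // width and height of the zone
    this.name = name;
    this.description = description;
  }

  // true when a click at (mx, my) lands inside this pastry's zone
  contains(mx, my) {
    return mx >= this.x && mx <= this.x + this.w &&
           my >= this.y && my <= this.y + this.h;
  }
}
```

In mousePressed(), the sketch would loop over all pastries and open the popup for the first one whose contains(mouseX, mouseY) returns true.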

Reflection & Future Improvements

This project came together under a time crunch, so there are a few things I would have liked to develop further. The biggest one is the fan: the original plan was to make it interactive, cycling through speed settings with a sprite-based spinning animation to show the blades actually turning. Given more time, that would have added another layer of life to the bakery scene. I also would have liked to refine the popup sizing and positioning across all pastries, and possibly add a zoomed-in image for each one rather than just text. Overall though, I think the piece succeeded in creating a small, explorable slice of a Filipino bakery that feels interactive and grounded in real cultural context.

Week 5: Midterm Progress Report

Concept & Design

My concept for the midterm is an interactive piece of a Filipino-style bakery, or a panaderya. I want to make a nostalgic and cozy piece where you can learn about different Filipino pastries and baked goods, interact with the radio to change the music (the songs being classic Filipino songs), and interact with the electric fan in the background.

I started with a rough sketch of the design. I’m planning to design the whole piece using pixel art, and I will be using the PressStart2P font, a pixelated-looking font, to really give it that nostalgic feeling. For the landing screen, I wanted it to be simple and straightforward with detailed instructions for the user, and to transition to the actual bakery I’ll be using the keyPressed() function. The bakery has four main interactive elements for now, all of them activated by clicking. The radio is going to have play, pause, next, and previous buttons that will control the music. For the electric fan, I’m planning to implement animation using sprites from the previous lesson, and I want the blades of the fan to change speed when clicked (I’m still debating whether to put numbers that correspond to the speed like an actual fan). Most importantly, the shelf is going to have different objects, and when one is clicked there will be a popup that tells the user about that specific pastry. Lastly, the door is going to restart the experience starting with the landing screen.

Code Testing & Uncertain Part

I wanted to test out using the font to ensure that I actually knew how to use it for the sketch and that it looked the way I wanted it to. It was quite easy to figure out as we already had a discussion on using downloaded fonts. I also wanted to test out having images as my objects and the popup when clicked for my pastries section. I spent some time on it and asked AI for assistance because I only remembered how to do interactions with shapes, not images. I eventually figured out that p5.js doesn’t automatically detect clicks on images, so we have to manually create an invisible box around an image using four variables (x, y, width, and height) to track where it is on the canvas. Then in mousePressed() we check if the mouse coordinates fall inside that box, and if they do we know the image was clicked and trigger the popup.
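The invisible-box check described above boils down to four comparisons. This is a sketch of the idea, not my exact test code; the coordinate values and the hitImage name are placeholders.

```javascript
// Invisible hitbox around an image, tracked with four variables.
// Values and names here are placeholders for illustration.
const imgX = 200, imgY = 150, imgW = 64, imgH = 64;

// true when (mx, my) falls inside the image's bounding box
function hitImage(mx, my) {
  return mx >= imgX && mx <= imgX + imgW &&
         my >= imgY && my <= imgY + imgH;
}

// In the p5.js sketch, mousePressed() would use it like this:
// function mousePressed() {
//   if (hitImage(mouseX, mouseY)) showPopup = true;
// }
```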

Week 5: Reading Response

Reading this made me think about how differently computers and humans actually process vision. When I look at a painting I immediately read context, emotion, and narrative without even thinking about it. A computer just sees raw pixel data, and to me that’s like doing pure formal description with zero interpretation, listing visual elements without ever asking what they mean. I also had no idea how much the physical environment matters. I always assumed it was mostly about the software, but things like backlighting, infrared lighting, and high-contrast surfaces can make or break whether an algorithm even works. It’s almost like you’re setting up the scene for a machine to read rather than a person. The surveillance aspect makes me both uncomfortable and kind of fascinated, honestly. The fact that this technology came out of military and law enforcement doesn’t just go away because it’s in a gallery. I think that’s why I find something like Rokeby’s Sorting Daemon so much more compelling than installations that just use body tracking to make something pretty: it actually wrestles with that baggage. When I know something is tracking me I feel it, and I think that feeling is worth exploring more than a purely cool visual. The work that sits in that discomfort feels a lot more honest to me.

Week 4: Reading Response

My first time going to a library study room, I was confused about how to get out because I didn’t know you had to click a button to leave. Although there is a sign that says “click to exit,” I was still lost because other study rooms on campus didn’t have this, and the button is on the side of the door where you usually don’t look, something Norman calls a classic signifier failure. The more I used the study rooms, the more I found myself still struggling, as sometimes the button to leave or the ID scanner to get inside just doesn’t work. When I swipe my ID and nothing happens, I have no idea if my card failed, the reader failed, or I did something wrong, which is exactly the feedback problem Norman describes. Getting in and getting out also use two completely different interactions with no consistent logic, and it breaks any conceptual model I try to build of how the door works.

I think I can apply Norman’s principles to my own work in p5.js sketches. One thing I want to start doing is making it clearer to users how they are supposed to interact with my sketches, whether they need to click, long press, or drag. Right now someone could open my sketch and have no idea what to do with it, which is the same signifier problem Norman talks about with doors. I want to incorporate small visual cues, like a pulsing cursor, an animated hand icon, or a brief on-screen hint that disappears after a few seconds, to guide the user naturally into the interaction without overwhelming them with instructions.
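One way to sketch that disappearing hint is to fade its opacity over time. This is a minimal illustration; the two-second hold, one-second fade, and the hintAlpha name are all my own choices.

```javascript
// Returns the hint's alpha (0-255): fully visible for holdMs,
// fading out over the next fadeMs, then gone. Timing values are my own choice.
function hintAlpha(millisSinceStart, holdMs = 2000, fadeMs = 1000) {
  if (millisSinceStart <= holdMs) return 255;
  const t = (millisSinceStart - holdMs) / fadeMs; // 0..1 during the fade
  return Math.max(0, Math.round(255 * (1 - t)));
}
```

In a p5.js draw() loop, you would call something like fill(0, hintAlpha(millis())) right before drawing the hint text, so it fades out on its own without any extra state.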

Week 4: Generative Text

Concept

I wanted to make a generative text output inspired by one of my favorite childhood comfort foods: alphabet soup. It’s a tomato soup where the pasta is shaped like the letters a–z. I wanted to recreate a soup bowl with letters floating around. The sketch shows a white dinner plate with a reddish tomato soup filling, with 80 randomly generated lowercase letters slowly bouncing around inside the soup. The letters stay contained within the circular bowl, and when they hit the edge they reverse direction, just like real pasta letters drift and bump in a bowl of soup.

My Final Sketch!

Code I’m Proud Of

// keep inside soup
if (dist(l.x, l.y, cx, cy) > radius) {
  l.speedX *= -1;
  l.speedY *= -1;
}

Before I had these lines, my letters were all just floating around the whole sketch, and I needed a way to keep them within the soup circle. Instead of checking rectangular walls, I used the dist() function to measure how far each letter has drifted from the center of the soup. When that distance exceeds the soup’s radius, both speed values get flipped by multiplying by -1, sending the letter bouncing back inward. This keeps all 80 letters contained within the circular soup area at all times.

How This Was Made

I used a loop that runs 80 times to create 80 letter objects, each placed at a random position inside the soup circle using polar coordinates: it picks a random angle and distance from the center, then converts them into x and y coordinates with cos() and sin(). Each letter gets a random character using String.fromCharCode(), which converts the numbers 97–122 into a through z (I learned this from w3schools), along with random speed values for both directions. All of this gets stored as objects in an array called letters[]. Every frame, each letter’s position is nudged by its speed values to create the floating movement, and dist() checks whether it has drifted too far from the center; if it has, both speeds are flipped by multiplying by -1, bouncing the letter back inward so it stays contained within the soup circle. For the font, I played around with different options and landed on Courier, which was the closest I could find to the chunky, typewriter-style lettering you actually see on real alphabet pasta.
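The polar placement and letter picking described above can be sketched like this in plain JavaScript (Math.random and Math.cos standing in for p5’s random() and cos(); the center and radius values are placeholders):

```javascript
// Place one letter at a random point inside a circle using polar coordinates,
// and pick a random lowercase letter via char codes 97-122 (a-z).
function makeLetter(cx, cy, radius) {
  const angle = Math.random() * Math.PI * 2; // random direction
  const r = Math.random() * radius;          // random distance from the center
  return {
    x: cx + Math.cos(angle) * r,
    y: cy + Math.sin(angle) * r,
    char: String.fromCharCode(97 + Math.floor(Math.random() * 26)),
    speedX: Math.random() * 2 - 1,           // small drift in either direction
    speedY: Math.random() * 2 - 1,
  };
}

// 80 letter objects stored in an array, as in the sketch
const letters = Array.from({ length: 80 }, () => makeLetter(200, 200, 120));
```

Because angle and distance are chosen independently, every generated point is guaranteed to land inside the circle, which is exactly why this beats picking random x/y in a square and rejecting the misses.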

Reflection and Future Improvements

This project taught me how useful polar coordinates are for placing things inside a circle instead of a rectangle, and I got a lot more comfortable using object literals inside arrays as a lightweight way to store and manage multiple moving things at once. Next time, I’d love to add mouse interaction so clicking near a letter makes it spin or dissolve, add a spoon that follows the cursor and stirs letters out of the way, or occasionally have the letters briefly cluster into short words before drifting apart again, something like the artwork “Text Rain” that we looked at in class.

Week 3: Reading Response

I think a strongly interactive system is one where users feel like their actions genuinely matter and produce meaningful responses. The interaction goes beyond getting just any reaction to getting responses that feel thoughtful and varied based on what you actually do. A strongly interactive system gives you agency; you almost feel like you’re having a conversation with it rather than just triggering pre-programmed effects. The fridge light example really clicked for me here. Sure, the light turning on is necessary for seeing inside, but what would make it strongly interactive is if different lights indicated whether the fridge is full, running low on food, or out of ice. That kind of communication transforms a basic function into something that actually responds to your needs in a meaningful way.

Honestly, I’ve been prioritizing aesthetics way too much in my designs, and I want to flip that relationship entirely. I want interaction to be the main focus, with visuals supporting it rather than the other way around. I also really want to give users actual choice in what they interact with. I could add memory too, where the sketch remembers your previous interactions and elements behave differently based on that history. The goal is to go beyond just making pretty things that happen to be clickable and instead create experiences where meaningful interaction is actually the heart of the piece.