Week 10 Group Music Instrument

For our interactive media sound project, my partner, Yiyang, and I decided to create a simple yet expressive instrument with a few sensors and a buzzer on an Arduino Uno. We wanted to build something that was intuitive to play and produced a unique, percussive sound. The result is this force-sensitive drum. Tapping different pads creates different notes, and a toggle switch shifts the entire instrument into a higher-pitched mode.

Concept

Our initial idea was inspired by the force sensors used in class to control sound. We thought, what if we could use multiple sensors to combine frequencies and create rhythms? We brainstormed a few possibilities. Could we assign different chords to each sensor, where pressing harder makes a certain chord more prominent? Or could the sensors act as modifiers for a continuous track?

Ultimately, we settled on a more direct approach for a playable instrument. We decided to have three Force Sensitive Resistors (FSRs) that would each trigger a distinct note, like pads on a drum machine. To meet the project requirements and add another layer of interactivity, we incorporated a digital two-way switch. Flipping this switch would transpose the notes of all three pads to a higher octave, giving the player two different sound palettes to work with.

Schematic

The build was straightforward, centered around an Arduino Uno and a breadboard.

Components Used:

  • 1x Arduino Uno

  • 1x Breadboard

  • 3x Force Sensitive Resistors (FSRs) – our analog sensors

  • 1x Two-way toggle switch – our digital sensor

  • 1x Piezo Buzzer

  • Resistors (for the FSRs and switch)

  • Jumper wires and Alligator clips

Each of the three FSRs was connected to a separate analog input pin on the Arduino. This allows the Arduino to read a range of values based on how much pressure is applied. The toggle switch was connected to a digital pin to give us a simple ON/OFF (or in our case, Mode 1/Mode 2) reading. Finally, the piezo buzzer was connected to a digital pin capable of PWM (Pulse Width Modulation) to produce the tones.
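In sketch form, the setup side of the code looked roughly like this (the pin numbers and the threshold value here are illustrative, not necessarily the ones we wired):

// hypothetical pin assignments for illustration; our actual wiring may differ
const int fsrPins[3] = {A0, A1, A2};  // three FSRs on separate analog inputs
const int switchPin = 2;              // two-way toggle switch on a digital pin
const int buzzer = 9;                 // piezo buzzer on a PWM-capable pin
const int hitThreshold = 300;         // placeholder pressure threshold (0-1023 scale)

void setup() {
  pinMode(switchPin, INPUT_PULLUP);   // assumes the switch pulls the pin LOW in one position
  // analog inputs and tone() output need no explicit pinMode
}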

The Arduino code continuously checks the state of our mode switch and reads the pressure on each of the three force sensors. If a sensor is pressed hard enough to cross a defined hitThreshold, it calls a function to play a corresponding sound.
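In outline, the main loop did something like the following. This is a simplified sketch building on the pin names above; the note frequencies and the playDrumHit() helper are placeholders, not our exact code.

void loop() {
  // digital sensor: the toggle selects between the two sound palettes
  bool highMode = (digitalRead(switchPin) == LOW);
  int baseNotes[3] = {196, 262, 330};            // placeholder pad frequencies

  // analog sensors: check each pad for a hard enough press
  for (int i = 0; i < 3; i++) {
    if (analogRead(fsrPins[i]) > hitThreshold) {
      // the switch transposes every pad up an octave (double the frequency)
      int freq = highMode ? baseNotes[i] * 2 : baseNotes[i];
      playDrumHit(freq);                         // stand-in name for our sound function
    }
  }
  delay(20);                                     // a short delay keeps it percussive
}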

Our instrument evolved over several versions. We started with a basic concept (v0.1), then refined it by adjusting the frequency gaps between the sensors for a more distinct and musical sound (v1.0a). Finally, we tweaked the delay to give it a more responsive, percussive, drum-like feel (v1.0b).

Video/Image Documentation

Code Snippet I’m proud of

To make it sound more like a drum, I wrote this for loop to create a pitch decay effect:

// drum pitch decay effect
// sweep the tone downward from 40 Hz above the base note to mimic a drum's falling pitch
for (int f = baseFreq + 40; f > baseFreq; f -= 5) {
  tone(buzzer, f);
  delay(10);
}

Future Improvements / Problems Encountered

Our biggest physical challenge was the alligator clips. They were handy for prototyping, but their exposed metal heads made it very easy to accidentally create a short circuit if they touched. We learned to be meticulous about checking that the rubber insulators covered the clips properly before powering on the Arduino.

On the software side, getting the sound right was an iterative process. First, we spent time exploring the pitch gaps. Initially, the pitches were too close together and didn’t sound very musical; by trial and error, we adjusted the base frequencies to create a more noticeable and pleasant musical gap between the pads. Second, the rhythm and feel needed to match those of a “drum machine,” so we played with the delay() value in the main loop. A shorter delay made the instrument feel much more responsive and rhythmic.

If we were to continue this project, we could add more sensors for a full octave, or perhaps use the analog pressure value to control the volume (amplitude) of the note in addition to triggering it. It would also be interesting to experiment with different waveforms or sound profiles beyond the simple tones.

Week 10 Reading

In the early iPhone era, Apple’s design mimicked physical textures (leather stitching in Calendar, green felt in Game Center, wooden shelves in iBooks). This skeuomorphism, which gave digital things tactile analogs, was arguably the last mainstream attempt to preserve a sense of touch through sight.

Then came iOS 7, led by Jony Ive, where Apple decisively flattened everything: gradients gone, shadows gone, buttons reduced to text. It was a move toward visual minimalism, but also toward what Victor warns against. The glass stopped pretending to be wood, or leather, or paper. It simply became glass.

Victor’s essay makes me realize that friction is information. When you open a book or turn a knob, your hand is in dialogue with resistance; you feel the world push back. That pushback is not inefficiency. It’s meaning.

What’s fascinating is that Apple, perhaps subconsciously, has been quietly circling back to Victor’s point. The Apple Vision Pro’s “spatial computing” rhetoric reintroduces physical space and hand gestures, but ironically, without touch. You can see and move objects in 3D, but you can’t feel them. It’s embodiment without embodiment. Victor’s “hands without feedback” problem all over again.

Every major design philosophy of the 2010s has quietly absorbed the “frictionless” ethos. Uber, Tinder, and Amazon all measure success by how little thought or effort stands between desire and fulfillment. You tap once and the world rearranges itself.

But Victor’s warning, when applied here, becomes almost moral: when everything becomes too smooth, we lose the feedback loops that teach us how the world works. Swiping on a screen doesn’t just numb the fingers, it numbs cause and effect. It’s a design culture that erases the material consequences of our actions.

Week 9 Production

Concept

This piece transforms a simple circuit into a puzzle. The connection between the switches is not obvious. The user needs some time to figure out what makes the LEDs light up.

The red LED changes brightness as you turn the potentiometer. Pressing only one of the push buttons does nothing; you have to discover the specific “sweet spot” gesture. Only when all conditions align do the LEDs respond.
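A minimal sketch of that unlock logic, assuming the “sweet spot” is a mid-range potentiometer window and both buttons must be held (all pin numbers and thresholds below are illustrative):

const int potPin = A0;
const int button1 = 2, button2 = 3;
const int redLed = 9, blueLed = 10;   // red LED on a PWM pin for brightness control

void setup() {
  pinMode(button1, INPUT_PULLUP);
  pinMode(button2, INPUT_PULLUP);
  pinMode(redLed, OUTPUT);
  pinMode(blueLed, OUTPUT);
}

void loop() {
  int pot = analogRead(potPin);                  // 0-1023
  analogWrite(redLed, pot / 4);                  // red brightness follows the pot
  bool sweetSpot = (pot > 400 && pot < 600);     // hidden mid-range window
  bool bothPressed = (digitalRead(button1) == LOW) && (digitalRead(button2) == LOW);
  digitalWrite(blueLed, (sweetSpot && bothPressed) ? HIGH : LOW);  // all conditions must align
}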

Video & Image Documentation


Schematic Drawing

Future Improvements

A single puzzle could evolve into something far more complex by layering additional challenges:

    • Timing urgency: Buttons must be pressed within 2 seconds of entering the sweet spot, or the blue LED only stays lit for 3 seconds before requiring you to solve it again. This adds urgency and makes the victory feel earned rather than permanent (a rough sketch of this logic follows the list).
    • Pattern memory: The blue LED blinks a simple pattern (like short-long-short) at startup. Users must recreate this rhythm with the buttons while in the sweet spot to unlock, transforming a spatial puzzle into a temporal one.
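A rough sketch of how the timing-urgency idea could work with millis(), building on the puzzle sketch above (the 2-second and 3-second windows come from the first bullet; everything else is illustrative):

unsigned long sweetSpotEnteredAt = 0;   // when the pot first entered the window
unsigned long unlockedAt = 0;           // when the blue LED was last unlocked
bool wasInSweetSpot = false;

void loop() {
  int pot = analogRead(potPin);
  analogWrite(redLed, pot / 4);
  bool sweetSpot = (pot > 400 && pot < 600);
  bool bothPressed = (digitalRead(button1) == LOW) && (digitalRead(button2) == LOW);

  // start the 2-second timer on the frame the pot enters the sweet spot
  if (sweetSpot && !wasInSweetSpot) sweetSpotEnteredAt = millis();
  wasInSweetSpot = sweetSpot;

  // unlock only if both buttons are pressed within 2 seconds of entering the window
  if (sweetSpot && bothPressed && millis() - sweetSpotEnteredAt < 2000) {
    unlockedAt = millis();
  }
  // the blue LED stays lit for only 3 seconds after a successful unlock
  digitalWrite(blueLed, (unlockedAt > 0 && millis() - unlockedAt < 3000) ? HIGH : LOW);
}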

Week 9 Reading Response

The metaphor of interactive art as a directorial process reveals something about creative control and trust. When Igoe compares designing interactive work to directing actors, it exposes a fundamental tension: the fear that, without constant explanation, our vision will be misunderstood or ignored. But this fear might actually be the enemy of good interactive work. Traditional artists can hide behind the permanence of their statement; a painting doesn’t change whether viewers “get it” or not. Interactive artists, however, must confront the uncomfortable reality that their work is incomplete without the audience’s participation. The most interesting implication here is that over-explanation isn’t just bad pedagogy. It’s a form of creative cowardice, a way of avoiding the vulnerability inherent in truly collaborative expression.

What strikes me about the “greatest hits” catalog is how it inadvertently maps the evolution of human-computer intimacy. The progression from theremin-like instruments (where gesture has “little meaning by itself”) to gloves (which borrow existing gestural vocabulary) to meditation helpers (which attempt to read involuntary bodily responses) represents increasingly ambitious attempts to collapse the distance between intention and interface. Yet Igoe’s skepticism about certain forms, particularly “remote hugs” and meditation helpers, suggests that some human experiences might be fundamentally resistant to technological mediation. The machine can measure breath rate and skin response, but it cannot know meditation; it can transmit signals between paired objects, but it cannot convey warmth. This raises an uncomfortable question for interactive designers: are we sometimes trying to technologically recreate experiences that derive their meaning precisely from their technological absence?

The recurring critique of projects that confuse “presence with attention” opens up a broader philosophical question about what interaction actually proves. A sensor detecting someone standing in front of a painting tells us almost nothing about engagement, yet many interactive projects treat physical proximity as evidence of meaningful exchange. This seems related to a contemporary cultural anxiety about whether we’re truly connecting with anything. We’ve become so focused on measurable interaction (clicks, views, sensor triggers) that we’ve lost sight of the immeasurable dimension where actual meaning resides. Perhaps the most radical interactive artwork would be one that deliberately resists confirming whether interaction happened at all, forcing both creator and participant to sit with uncertainty rather than seeking the reassurance of sensor data. The blink of an LED becomes a form of emotional comfort: see, something happened, you’re not alone.

Week 8 Production: Traffic Light Pedal Keyboard

Concept

This project takes inspiration from the dual-manual organ, an instrument that engages both the hands and feet to create layered musical expression. Similarly, I wanted to explore how interaction beyond the hands can become a method of creative control. I built a pedal-activated LED system that lets the user light up different colors by stepping instead of using their hands. The choice of red, yellow, and green LEDs draws directly from the universal language of traffic lights, symbolizing stop, pause, and go, creating a visual rhythm that also parallels musical tempo. The pedal thus becomes both a practical switch and a metaphor for coordination and rhythm through movement.

Image & Video Documentations Video(Download)

Future Improvements

Building this project helped me better understand how simple sensors and physical actions can create meaningful interaction. The foot-controlled design felt intuitive and playful, emphasizing how movement can extend beyond the hands in creative electronics. However, I also noticed that the current version only allows binary on-off behavior. In the future, I’d like to make it more expressive, for example by integrating pressure sensors so that the brightness or color changes based on how hard the pedal is pressed, or adding a fade-in/out animation for a smoother, more musical response. Another possible direction is to connect the pedal to sound or musical output, letting each color correspond to a tone, similar to the dual-manual organ.

Reading Response Week 8

Norman’s ideas highlight that aesthetics are not just surface decoration but a way for design to work with the human mind rather than against it. When something looks and feels good to use, our brains relax into a more open, imaginative mode. We stop fighting the interface and start exploring through it. This isn’t about luxury or indulgence. It’s about survival in a world full of cognitive overload. A beautiful interface lowers emotional friction; it turns stress into curiosity. Even something as small as choosing a teapot to match one’s mood shows how design can become a companion to our emotional state. Good design doesn’t simply function well. It cares for the user’s attention, focus, and resilience, allowing “pleasure” to become a quiet form of intelligence.

Margaret Hamilton’s “software engineering” wasn’t only about logic and precision; it was about understanding the unpredictable human behind the code. When Hamilton built systems that could recover from error, she was acknowledging human fallibility as a real variable in the machine. She designed not for perfection but for forgiveness. The Apollo guidance software’s ability to prioritize critical tasks during overload was, at its core, a kind of empathy written in code. It recognized that even astronauts under pressure could make mistakes, and that the system should protect rather than punish them. Hamilton’s foresight turned technical design into an act of emotional intelligence, embedding trust and calm into the most high-stakes environment imaginable.

Norman and Hamilton sketch a fuller philosophy of design, one that sees beauty and reliability, emotion and logic, not as opposites but as collaborators. Norman’s teapot and Hamilton’s Apollo code both remind us that good design anticipates human vulnerability and builds grace into its response. Whether the context is a kitchen or a lunar landing, the designer’s role is to create conditions where people can think clearly, act confidently, and recover gently from error. The aesthetic and the algorithm share the same goal: to make complexity humane.

Midterm Project Documentation: All Day Breakfast

Sketch (f for fullscreen): https://editor.p5js.org/joyzheng/full/tb0uwj2nP

Overall Concept

As a visiting student at NYUAD, I found the made-to-order dining system, particularly at the All Day Breakfast counter, very confusing. Unlike the pre-made options I was used to, the text-only menus made it difficult to visualize my order. Without pictures I could never remember what, or how many, items I had ordered (some are Arabic dishes I don’t know), and I often found myself pulling out a calculator to see if my selections added up to a full meal plan.

These frictions made me want to digitize the experience into an interactive game that gamifies the ordering process. The core goal is to give players a more intuitive and visual way to assemble a meal, manage inventory, understand the costs, and manage their spending. By turning the process into a game with clear steps and rewards (badges), the project transforms a problem discovered in my own life into an engaging and replayable experience.

How It Works

The game guides the player through a six-scene narrative that mirrors the real-life process and menu of getting food at the D2 dining hall’s A LA BRASA All Day Breakfast counter.

UI Prototype:

UE:

Scene 1:

Start Screen: The player is presented with the All Day Breakfast counter and prompted with “Ready to Order?”. Clicking the triangle button begins the game. The badge board is also displayed here, showing the player’s progress.

Scene 2:

Choose Food: The player is shown a grill with all available food items. They must first click to pick up a pair of tongs, which then attaches to their mouse. They can then click on food items to pick them up and click on the plate to add them to their meal. The total cost is updated in real-time.

Scene 3:

Scan Items: The player takes their plate to the cashier. They must pick up the scanner tool and move it over each food item on the plate. As each item is scanned, a beep sound plays, and the item is added to a virtual receipt.

Scene 4:

Payment: The cashier opens, revealing a coin tray. The player must pay the total amount shown on the receipt by clicking on coins from a palette and dropping them into the tray.

Scene 5:

Eat: The player sits down to eat. They must pick up a fork, use it to pick up food from their plate, and bring it to the character’s (NYUAD Girl’s) mouth to “eat” it, which plays a sound and makes the food disappear.

Scene 6:

End Screen & Badges: After the meal, the game checks if the player’s actions have met the conditions for any new badges. If so, a special animation plays. The player is then given the option to “Dine in AGAIN!”, which resets the game and starts a new session.

Technical Decisions & Game Design I’m Proud of

I am proud of completing a fully functional and well-designed game within the project timeline, especially after iterating on the initial idea. A key technical challenge was to build the entire game to be fully responsive. The core of the responsive design is a set of helper functions (updateLayoutDimensions, scaleRectangle, scaleValue) that calculate scaling factors based on the current window size versus the original 700×500 design grid. This allows every element to reposition and resize dynamically, ensuring the game is playable on any screen.
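In outline, those helpers work roughly like this. This is a simplified reconstruction for illustration; the real functions handle more cases, and canvasWidth/canvasHeight are the project’s current canvas-size variables:

// original design resolution the layout was authored against
const DESIGN_W = 700;
const DESIGN_H = 500;
let scaleX, scaleY;

// recompute the per-axis scaling factors whenever the window is resized
function updateLayoutDimensions() {
  scaleX = canvasWidth / DESIGN_W;
  scaleY = canvasHeight / DESIGN_H;
}

// scale a single value (e.g., a text size) by the smaller axis to avoid distortion
function scaleValue(v) {
  return v * Math.min(scaleX, scaleY);
}

// map a rectangle from design-grid coordinates to on-screen coordinates
function scaleRectangle(r) {
  return { x: r.x * scaleX, y: r.y * scaleY, w: r.w * scaleX, h: r.h * scaleY };
}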

It was also helpful to discuss the project with Professor Mang to improve the interactivity and replayability of the game. We came up with the ideas of implementing the stock-management system and humorous badge rewards that every NYUAD student who has eaten at this dining hall can resonate with (e.g., never being able to spend a whole meal plan; why is 1 meal plan 33.6? Is that 0.1 a service fee?). I designed the inventory to match how the counter usually is; for instance, there are always only a few avocado toasts, and to this day I have never managed to get the tofu omelet. Overall, the game is also meditative and educational (in some sense): it reminds people to feed themselves well in the dining hall even when rushing between classes, and encourages a balanced meal with enough fiber every day.

// =======================================
// SCENE 2: CHOOSE FOOD
// this function calculates the responsive positions for all food items in scene 2
function buildScene2FoodGrid() {
  // clears the array of food objects to ensure a fresh start each time the grid is rebuilt (e.g., window resize)
  scene2FoodObjects = [];

  // constants that define the original pixel dimensions of the background art and the specific rectangular area within it where the food is displayed
  const sourceImageSize = { w: 1536, h: 1024 };
  const sourceFoodArea = { x: 124, y: 138, w: 1284, h: 584 };
  
  // responsive calculation
  // current on-screen position and size of the food area
  // by finding the scaling ratio between the current canvas and the original background image
  // so the grid always perfectly overlays the correct part of the background art
  const foodGridRect = {
      x: sourceFoodArea.x * (canvasWidth / sourceImageSize.w),
      y: sourceFoodArea.y * (canvasHeight / sourceImageSize.h),
      w: sourceFoodArea.w * (canvasWidth / sourceImageSize.w),
      h: sourceFoodArea.h * (canvasHeight / sourceImageSize.h)
  };
  
  // the calculated grid area is then divided into cells (8 columns by 2 rows) to position each food item
  const columns = 8;
  const rows = 2;
  const cellWidth = foodGridRect.w / columns;
  const cellHeight = foodGridRect.h / rows;
  
  // the size of each food item is based on the smaller dimension (width or height) of a grid cell
  // this prevents the food images from looking stretched
  // scaled by 70% to add padding
  const itemSize = min(cellWidth, cellHeight) * 0.7;
  
  // this loop iterates through every food item defined
  for (let i = 0; i < ALL_FOOD_ITEMS.length; i++) {
    // math.floor() and % convert the 1d loop index (i) into a 2d (row, col) grid coordinate
    let row = Math.floor(i / columns);
    let col = i % columns;
    
    // calculates the final top left (x, y) coordinate for each food item
    // starts at the grid's origin
    // adds the offset for the column/row
    // adds a centering offset
    let itemX = foodGridRect.x + col * cellWidth + (cellWidth - itemSize) / 2;
    let itemY = foodGridRect.y + row * cellHeight + (cellHeight - itemSize) / 2;
    
    // a new food object is created with its calculated position and size
    // added to the array to be drawn
    scene2FoodObjects.push(new FoodItem(ALL_FOOD_ITEMS[i], itemX, itemY, itemSize));
  }
}


The most complex piece of code, and the one I’m most proud of, is the logic in the buildScene2FoodGrid() function. Unlike other elements that scale relative to the canvas, this grid must scale relative to the background image itself to ensure the food items are perfectly aligned with the artwork.

This logic calculates a scaling ratio based on how the background image has been stretched to fit the screen, and then applies that same ratio to the coordinates of the food grid. It’s a powerful piece of code that makes the experience feel seamless.

Challenges & Improvements

The development process was a valuable relearning of game development. I was surprised by the amount of free asset resources and tutorials for game development online. I was also inspired by the Coffee Shop Experience example of how to use p5.js to manage a game and toggle between scenes.
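That scene pattern boils down to branching on a current-scene variable in draw(). A minimal sketch of the idea (the handler names here are illustrative, not my actual functions):

let currentScene = 1;  // which of the six scenes is active

function draw() {
  // route each frame to the active scene's drawing code
  if (currentScene === 1) drawStartScreen();
  else if (currentScene === 2) drawChooseFood();
  else if (currentScene === 3) drawScanItems();
  // ...one branch per scene, through scene 6
}

function mousePressed() {
  // scene-specific click handling advances the game
  if (currentScene === 1 && startButtonPressed(mouseX, mouseY)) {
    currentScene = 2;
  }
}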

One of the most surprisingly time-consuming challenges was a debugging session that lasted hours, only to discover I had misspelled “diarrhea” as “diarreah” or “diareah” in different locations. This taught me the importance of meticulous checking and of building simple debugging tools to isolate issues early.

I also got the opportunity to explore AI-generated assets through this project. Given the huge number of assets required, AI generation was the most realistic option for finishing on time. Even so, I spent at least half of the development time going back and forth, “drawing a good card” of images. To be honest, Nano Banana wasn’t worth the hype for image creation. For game-asset work, ChatGPT turned out to be the best choice after I tried a few different apps like Midjourney and Canva: it is lightweight and supports transparent PNG backgrounds, so assets can be used directly without manually removing the background.

For the future, I have several ideas for improvement:

  1. Expand to Other Counters: I would like to implement a similar ordering system for the D1 dining hall, which also has a confusing menu.
  2. UI Enhancements: I plan to add a toggle to hide or show the badge board, giving the player more control over their screen space.
  3. More Badges: Adding more creative badges would further increase the incentive for players to try different food combinations and spending strategies.
  4. Scene Refinement: Some scenes are quite dense with assets. In a future version, I might split complex steps into more scenes to make the layout feel cleaner and less cluttered.
  5. Real Implementation: After the midterm, I will demo this to the dining hall manager to see if they want to adopt this ordering system, or simply use a more intuitive and interactive menu to run the dining hall more efficiently.

Week 5 Midterm Progress

Concept

For my midterm project, I came up with this dining hall idea at the last minute. I had originally been inspired by music interactivity in p5.js and considered continuing with my earlier idea of a meditation game. But while eating lunch, I came up with a new idea that felt both playful and relevant to my experience here at NYUAD. So this week I mostly worked on replanning my idea and preparing assets.

As a visiting student from the New York campus, I was used to the dining hall’s pre-made meals. But at NYUAD, the on-demand menus were at first a little overwhelming. Without pictures, I often had no idea what I had ordered (especially with Arabic dishes I wasn’t familiar with), and I even found myself pulling out a calculator to check how much I had ordered and how much was left in my meal plan. Counters like All Day Breakfast felt especially confusing.

So my concept is to digitize the experience of eating at NYUAD’s D2 All Day Breakfast counter. The project will let users visualize the ordering process, making it more interactive and hopefully reducing the friction that comes with navigating the real-life menu.

User Interaction

Planned Scenes (prototype):

1. Entering the A LA BRASA counter and tapping into the menu

2. Picking up the clamp to get food from the grill to the plate

3. Scanning food on the plate at the cashier’s scanner

4. Paying with coins in the cashier tray (display receipt?)

5. Eating!!

6. Burping to finish the meal


Assets:

Audio:

Dining hall ambient background

Cashier scanner beep

Cash register “kaching”

Burp sound

Yumyum sound


Pixelated images:

A LA BRASA counter background

All Day Breakfast menu

Grill plate

Clamp

Plate

Cashier scanner

Cashier with coins tray

Coins (D5, D3, D2, D1, D0.5, D0.25)

Fork


Pixel art food items:

Avocado fried egg toast

Avocado toast

French toast

Fried egg

Scrambled egg

Plain omelet

Cheese omelet

Mixed vegetable omelet

Tofu omelet

Hash brown

Chicken sausage

Beef bacon

Turkey bacon

Classic pancake

Coconut banana pancake

Small bowl salad


The Most Frightening Part & How I’m Managing It

The biggest challenge I anticipate is gathering and aligning all these assets into a coherent game within the midterm timeframe. Real-life food images can be messy and hard to unify visually. To reduce this risk, I’ve decided to make everything in pixel art style. Not only does this match the “breakfast game” aesthetic, but it also makes it much easier to align items consistently.

Since Professor Mang mentioned we can use AI to help generate assets, I’ve been experimenting with transforming photos of my own plates and my friends’ meals into pixelated versions. This approach makes asset creation more manageable and ensures I’ll be able to integrate everything smoothly into the game.


Week 5 Reading Response

Computer vision isn’t really “vision” in the way humans experience it; it’s more like a giant calculator crunching patterns in pixels. Where we see a friend’s smile and immediately read context, emotion, and memory, the computer just sees light values and tries to match them against models. It’s fast and can process far more images than a person ever could, but it lacks our built-in common sense. That’s why artists and developers often need to guide it with techniques like face detection, pose estimation, background subtraction, or optical flow to help the machine focus on what’s actually interesting. Tools like MediaPipe, which can map out your skeleton for gesture-based games, or AR apps that segment your hand so you can draw in mid-air, can bridge the gap between human intuition and machine literalism.

But once you start tracking people, you’re also borrowing from the world of surveillance. That’s a double-edged sword in interactive art. On one hand, it opens up playful experiences. On the other, the same tech is what powers CCTV, facial recognition in airports, and crowd analytics in malls. Some artists lean into this tension: projects that exaggerate the red boxes of face detection, or that deliberately misclassify people to reveal bias, remind us that the machine’s gaze is never neutral. Others flip it around, letting you “disappear” by wearing adversarial patterns or moving in ways the system can’t follow. So computer vision in art isn’t just about making the computer “see”, it’s also about exposing how that seeing works, what it misses, and how being watched changes the way we move.

You can also invert the logic of surveillance: instead of people being watched, what if the artwork itself is under surveillance by the audience? The camera tracks not you but the painting, and when you “stare” at it too long, the work twitches as if uncomfortable. Suddenly, the power dynamics are reversed.

Week 4 Reading Response

One everyday object that always drives me crazy, though it has little to do with interactive media, is my water bottle. Every bottle I’ve owned is either too tight to open or so loose that it leaks everywhere. With the one I’m currently using, I always end up struggling as if it’s a test of grip strength. The most baffling example is the viral Stanley cup: people on social media complain nonstop about leaks, and yet it still became a must-have item. I’ve even seen TikToks where the “fixing the leak” process itself becomes a kind of trend. From Norman’s perspective, that feels like a failure of design: the affordance (a secure, portable cup) clashes with signifiers (the lid and cap) that don’t clearly tell you how to tighten it enough. The fact that millions of users are improvising and complaining is proof that the “system image” isn’t communicating what the designer intended.

Norman’s principles also made me think about my own experiments in p5.js. When I build interactive sketches, I realize I need to add clear signifiers, not just rely on “hidden” affordances. For example, I once made a generative text piece where users could drag characters around, but unless I showed some visual cue, like a subtle highlight or a cursor change, no one discovered the interaction. It’s exactly what Norman warns about: affordances exist, but if they’re invisible, they fail.

Another part of the article I found really interesting was Norman’s discussion of mapping. He uses the example of car seat adjustment buttons shaped like the actual seat: you just push the part of the button that matches the part of the seat you want to move. That struck me because it feels so obvious and natural, yet so many objects miss this. I think about the confusing array of stove knobs in my apartment, where I’m constantly turning on the wrong burner. If those knobs had a layout that mapped directly to the burners, I’d never make that mistake. It made me realize that whether it’s an app, a sketch, or just a stove, people shouldn’t need labels and trial-and-error to figure out something so basic.