Week 10 – The Diet Drum (Deema and Rawan)

 

Our Concept: 

Our project drew inspiration from last week’s readings on human-computer interaction, particularly the ways in which technology can respond to subtle human behaviors. We explored how interactive systems often mediate our engagement with the environment and even with ourselves, creating experiences that feel responsive, social, or even playful.

With this perspective, we asked ourselves: what if an instrument didn’t just make sound, but responded directly to human behavior? Instead of rewarding interaction, it could intervene. Instead of passive engagement, it could create a performative, almost social response.

From this idea, the Diet Drum emerged — a device that reacts whenever someone reaches for a snack. The system is both humorous and relatable, externalizing the human struggle of self-control. When a hand approaches the snack bowl, a servo-powered drumstick strikes, accompanied by a short melody from a passive buzzer. The result is a playful, judgmental interaction that transforms a familiar, internal tension into an amusing and performative experience.

How It Works

  • Photoresistor (LDR): Detects hand movements by monitoring changes in light. As a hand blocks the sensor, the reading drops.
  • Servo motor: Moves a drumstick to perform a percussive strike, physically reinforcing the “warning” aspect of the interaction.
  • Passive buzzer: Plays a short melody as a playful, auditory cue.

  • Arduino Uno: Continuously monitors the sensor and triggers both motion and sound.

When the LDR senses that a hand has blocked the light, the Arduino triggers the buzzer’s melody and swings the servo to strike the drum. This creates a clear, immediate connection between what a person does and how the system responds, reflecting ideas from our readings about how devices can react to gestures and sensor input.
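As a rough, self-contained illustration of this loop (a minimal sketch under assumed pin numbers, servo angles, and drop threshold, not our exact project code), the core logic looks like this:

#include <Servo.h>
#include "pitches.h"  // note frequencies, as in the project repo

const int LDR_PIN = A0;
const int SERVO_PIN = 9;
const int BUZZER_PIN = 8;
const int DROP_THRESHOLD = 150;  // assumed: how far the reading must fall below baseline

Servo drum;
int baseline;  // ambient light level, sampled at startup

void setup() {
  drum.attach(SERVO_PIN);
  drum.write(20);                  // assumed rest angle
  baseline = analogRead(LDR_PIN);  // calibrate against the room's light
}

void loop() {
  int drop = baseline - analogRead(LDR_PIN);  // hand over the bowl → reading falls
  if (drop > DROP_THRESHOLD) {
    tone(BUZZER_PIN, NOTE_C5, 120);  // short warning note
    drum.write(60);                  // strike
    delay(150);
    drum.write(20);                  // return to rest
    delay(300);                      // brief cooldown to avoid re-triggering
  }
}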

Video Demonstration

assignment10

Challenges

Throughout development, we encountered several challenges that required both technical problem-solving and design refinement:

  • System reliability: The setup initially worked smoothly, but after sitting idle for a while it stopped responding. Diagnosing the failure took time because we couldn’t tell whether the wiring or the code was at fault, so we partially rebuilt and retuned the system to restore functionality.
  • Mechanical stability: Keeping the drumstick steady during strikes was more difficult than anticipated. Any slight movement or misalignment affected the accuracy and consistency of the strikes, requiring several adjustments.
  • Audio timing: The melody initially played too long, delaying servo motion and disrupting the intended interaction. Shortening the audio kept the strike and sound synchronized, preserving the playful effect (see the sketch after this list).
  • AI assistance: We used AI to work through some code difficulties so the implementation matched our original idea.
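To see why shortening the melody fixed the timing, note that tone() with a duration argument returns immediately, so only the explicit delays hold up the servo. A minimal sketch, with placeholder notes rather than our actual melody:

// A deliberately brief two-note jingle: each tone() call is non-blocking,
// so the total hold-up before the strike is just the two 100 ms delays.
#include "pitches.h"

void playWarningJingle(int buzzerPin) {
  tone(buzzerPin, NOTE_E5, 90);  // first note, 90 ms
  delay(100);
  tone(buzzerPin, NOTE_G5, 90);  // second note, 90 ms
  delay(100);
}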

Code Highlights

One part of the code we’re especially proud of is how the sensor input is mapped to the servo’s movement.

// Map the measured light drop to a strike angle and strike duration.
// (drop, MIN_DROP, MAX_DROP, SERVO_HIT_MIN/MAX, and STRIKE_DOWN_MS_MIN/MAX
// are constants defined elsewhere in the sketch.)
float d = constrain(drop, MIN_DROP, MAX_DROP);
float k = (d - MIN_DROP) / float(MAX_DROP - MIN_DROP);  // normalize to 0–1
int hitAngle = SERVO_HIT_MIN + int((SERVO_HIT_MAX - SERVO_HIT_MIN) * k);  // bigger drop → wider swing
unsigned long downMs = STRIKE_DOWN_MS_MAX - (unsigned long)((STRIKE_DOWN_MS_MAX - STRIKE_DOWN_MS_MIN) * k);  // bigger drop → faster strike

strikeOnce(hitAngle, downMs);

This scales the strike to how close the hand is: a larger light drop produces a wider, faster swing, so each action feels deliberate rather than just an on/off hit. It lets the system capture subtle gestures, supporting our goal of reflecting nuanced human behavior.
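The fragment above calls strikeOnce(), which isn’t shown here. A plausible implementation (the Servo object, rest angle, and settle delay below are our assumptions, not verbatim project code) might look like this:

#include <Servo.h>

Servo drum;                 // assumed global servo object
const int SERVO_REST = 20;  // assumed rest angle

// Swing the drumstick to the strike angle, hold briefly, then return to rest.
void strikeOnce(int hitAngle, unsigned long downMs) {
  drum.write(hitAngle);    // swing the drumstick down
  delay(downMs);           // hold long enough to land the hit
  drum.write(SERVO_REST);  // lift back to the rest position
  delay(80);               // settle before the next strike
}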

Future Improvements

Looking forward, we see several ways to expand and refine the Diet Drum:

  • Adaptive audio: Varying the melody or warning tone based on how close the hand is could enhance the playfulness and expressiveness (a sketch of this follows the list).
  • Mechanical refinement: Improving the stability of the drumstick and optimizing servo speed could create smoother strikes and more consistent feedback.
  • Compact design: Reducing the size of the device for easier placement would make it more practical for everyday use.
  • Visual cues: Adding optional LEDs or visual signals could enhance the feedback, making the system even more engaging.
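As a sketch of the adaptive-audio idea (the drop bounds and frequency range are illustrative assumptions):

// Map how strongly the hand blocks the sensor to a warning pitch:
// a closer hand causes a larger light drop, which yields a higher tone.
void playAdaptiveTone(int buzzerPin, int drop) {
  int d = constrain(drop, 50, 400);       // clamp to the assumed useful range
  int freq = map(d, 50, 400, 440, 1760);  // A4 up to A6
  tone(buzzerPin, freq, 120);             // short non-blocking beep
}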

GitHub Links:

https://github.com/deemaalzoubi/Intro-to-IM/blob/b321f2a0c4ebf566082f1ca0e0067e33c098537f/assignment10.ino

https://github.com/deemaalzoubi/Intro-to-IM/blob/b321f2a0c4ebf566082f1ca0e0067e33c098537f/pitches.h

Week 10 – Reading Reflection

Bret Victor’s rant made me rethink what we even mean when we call something “the future.” He argues that touchscreens, gesture controls, and all these “advanced” interfaces are actually making us less connected to our own abilities. Our hands are one of the deepest ways we understand the world. They know tension, pressure, texture. They think with us. But we’ve decided progress means tapping around on cold glass. When I read that, the first thing I thought of was LEGO. There is this unspoken language when you build: the way your fingers already know which brick fits, the tiny resistance before a perfect click. That sound. That feeling. It’s not just play; it is intelligence happening through the body. No screen has ever replicated that.

I’ve tried the digital LEGO builders before, and they always feel wrong. You can assemble something on the screen, sure, but there is no weight, no friction, no small ritual of digging through pieces and recognizing one by touch alone. Same with crocheting. The yarn runs differently through your fingers depending on tension, mood, the hook, your posture. You feel progress. You feel mistakes. Your hands correct before your mind catches up. Victor’s point clicked for me here: creativity is not just in the mind. It is in the wrists, fingertips, joints, and muscle memory. When interfaces ignore the body, they are not futuristic. They are incomplete.

The responses page made it clear he is not saying we need to go backwards. He is saying we should refuse a future that flattens our senses. There are richer, more human possibilities if we let our full selves participate in the interaction. For me, the future I want is textured, clickable, tuggable, threaded, snapped together. A future that feels like LEGO: discovery through touch, play, accident, correction, and joy. Innovation that doesn’t just live on a screen, but lives in your hands.

Week 9 – Reading Reflection

These readings made me think about how much pressure I put on myself to be “original.” Tom Igoe’s point that most interactive ideas have already been done was strangely comforting. It made me realize that creativity isn’t about inventing something entirely new; it’s about doing something familiar in a way that feels personal. My projects don’t have to be revolutionary; they just have to feel like mine.

What stood out most to me was his idea of stepping back and letting the audience take over. I tend to over-explain my work, wanting people to understand what I meant. But maybe it’s more powerful to just let them interact and form their own meaning. Igoe’s “set the stage, then shut up and listen” line hit hard; it’s something I need to apply not only to my projects but to how I share them.

These readings reminded me that physical computing is not just about sensors or LEDs. It’s about trust: trusting that the user will understand, trusting the materials to behave, and trusting myself to stop editing and just let the work breathe.

Week 9 – Brain on Break

Brain on Break

Concept & Inspiration

This project started from the chaos of late-night studying and running on caffeine. I wanted to make a circuit that visualizes the mental shift between being focused and being completely done: the moment when my brain decides, without warning, “we’re taking a break now.”

The concept connects light, motion, and emotion. The light sensor represents my surroundings and productivity levels: the brighter the space, the more alert I am. The foil switch stands for that physical collapse when I lean my head onto the desk. Together, they create a system that reads like a tiny, glowing version of my attention span.

How It Works

The project combines one analog sensor and one digital sensor, controlling two LEDs in different ways.

  • Light sensor (Analog Input): Reads the brightness of the environment. The green LED glows brighter in bright light, symbolizing focus and clarity.

  • Foil Switch (Digital Input): Made of two pieces of aluminum foil connected to D2 and GND. When my cheek or elbow presses them together, it signals “brain on break.”

  • Red LED (Digital Output): Turns on when the foil pads touch — representing mental shutdown.

  • Green LED (Analog Output): Fades according to light level but turns completely off when the foil switch is activated.

This mix of analog and digital behavior mirrors how people work — not everything in us is gradual or logical. Sometimes focus fades; sometimes it just stops.

Circuit Design

When the light changes, the green LED fades smoothly using analogWrite(). When the foils touch, the red LED turns on and the green one shuts off completely.

Coding:

const int LDR_PIN = A0;        // Analog input from photoresistor
const int SWITCH_PIN = 2;      // Digital input from foil switch
const int GREEN_LED = 9;        // Analog (PWM) output LED
const int RED_LED = 8;      // Digital ON/OFF LED

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);  
  pinMode(GREEN_LED, OUTPUT);
  pinMode(RED_LED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int lightValue = analogRead(LDR_PIN);      // Light level
  int switchState = digitalRead(SWITCH_PIN); // Foil contact: LOW when touched

  // Map light value to LED brightness (brighter room = brighter LED)
  int brightness = map(lightValue, 600, 880, 0, 255);
  brightness = constrain(brightness, 0, 255);

  if (switchState == LOW) {
    // Foil pads touching → brain on break
    digitalWrite(RED_LED, HIGH);  // burnout light ON
    analogWrite(GREEN_LED, 0);    // focus light OFF
  } else {
    // Normal state → focused
    digitalWrite(RED_LED, LOW);          // burnout light OFF
    analogWrite(GREEN_LED, brightness);  // focus light fades with light
  }

  // Serial monitor
  Serial.print("Light Value: ");
  Serial.print(lightValue);
  Serial.print(" | Brightness: ");
  Serial.print(brightness);
  Serial.print(" | Switch: ");
  Serial.println(switchState == LOW ? "Touched" : "Not touched");

  delay(50);
}

Testing & Results

In bright light, the green LED glows intensely — that’s focus mode. When the room dims, the light softens, mimicking a fading mind. But the real moment comes when I touch the foil pads. This happens when my head touches the desk, indicating that I fell asleep. The red LED flashes alive, and the green one shuts off instantly. It’s like watching my brain say, “enough.”

The light sensor worked better once I narrowed the range (600–880) so the fade became more dramatic. The foil switch needed tighter contact, but once secured, it triggered perfectly. 
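Finding that 600–880 window was mostly trial and error. A tiny sketch like the one below (an illustration, not my original calibration code) prints raw readings so you can note the values in bright light versus shadow and plug them into map():

// Print raw LDR readings so the useful min/max can be read off the Serial Monitor.
const int LDR_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(LDR_PIN));  // 0–1023; compare bright vs. dim values
  delay(200);
}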

Video demo:

8088D1BA-7146-49C3-97C8-CCAB51934422

Challenges

  • Calibrating the light sensor so the fading felt visible but not jumpy.

  • Making sure the foil switch responded to soft touches without staying on permanently.
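One standard way to firm up a flaky contact switch is a short software debounce; the pattern below is generic, not the code I actually used (the pin is assumed to be configured with INPUT_PULLUP in setup()):

// Require the foil pads to read LOW continuously for DEBOUNCE_MS before
// treating the touch as real, so brief jitter doesn't latch the red LED.
const unsigned long DEBOUNCE_MS = 30;

bool foilTouched(int pin) {
  if (digitalRead(pin) != LOW) return false;    // not touching at all
  unsigned long start = millis();
  while (millis() - start < DEBOUNCE_MS) {
    if (digitalRead(pin) != LOW) return false;  // contact bounced open
  }
  return true;                                  // held LOW long enough
}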

Future Improvements

If I develop it further, I’d like to:

  • Include a buzzer or heartbeat sound to show the switch between focus and burnout.

Reflection

This assignment pushed me to merge function with symbolism. The project isn’t just about inputs and outputs — it’s about mood as circuitry. The light patterns represent focus, fatigue, and the strange middle space between both.

It reminded me that even in electronics, balance matters. Circuits need both current and resistance. Brains need both light and rest.

Week 8 – Unusual Switch Assignment

Concept & Inspiration

This project began with a video example shown in class where a mustache prop was used as a switch. I was fascinated by the idea that something worn on the face could become part of an electronic interaction. It reminded me that the body itself can be the input device and that playful design can still be technically meaningful. That influenced my first idea. I wanted to place aluminum foil pads above my eyebrows so that every time I scrunched them together, the circuit would close and the LED would react. It felt like a fun and expressive interaction because eyebrows are a natural part of communication.

As I started building, I realized a limitation. The wires available were not long enough to comfortably reach my face while plugged into the Arduino. The setup became impractical and would not stay connected. Instead of forcing the idea, I adapted it while keeping the core concept: using a body gesture that does not involve hands. I moved the conductive pads from my face to my elbows, which allowed me to keep the same interaction logic without fighting the hardware constraints.

The result is a simple but playful design. When the user touches their elbows together, their body closes the circuit, which the Arduino reads as a digital input and uses to change the LEDs. This transforms a physical gesture into a clear visual response and reinforces the connection between the human body and digital behavior.

How It Works

Two small pieces of aluminum foil are taped to the elbows. Each foil pad is connected to the Arduino:

  • Left elbow foil → Digital Pin 2 (input)
  • Right elbow foil → GND

When the elbows are apart, the circuit is open, and the Arduino reads a HIGH signal using an internal pull-up resistor. The red LED turns on to indicate no contact. When the elbows touch, the conductive path through the body closes the circuit, pulling the input LOW. The green LED turns on, signaling that contact is detected. This simple interaction demonstrates digital input detection, human conductivity, and conditional output control.

Circuit Diagram:

I included a labeled schematic showing the Arduino Uno, foil pads, and LED wiring. The red LED connects to Pin 9 through a 330 Ω resistor, the green LED to Pin 10 through a 330 Ω resistor, and all components share the same GND reference.

Arduino Code:

const int SWITCH_PIN = 2;  // left elbow foil (right elbow foil goes to GND)
const int RED_LED = 9;     // "no contact" indicator
const int GREEN_LED = 10;  // "contact detected" indicator

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);  // reads HIGH until the body closes the circuit
  pinMode(RED_LED, OUTPUT);
  pinMode(GREEN_LED, OUTPUT);
}

void loop() {
  int state = digitalRead(SWITCH_PIN);

  if (state == LOW) {  // elbows touching → circuit closed
    digitalWrite(GREEN_LED, HIGH);
    digitalWrite(RED_LED, LOW);
  } else {             // elbows apart → circuit open
    digitalWrite(GREEN_LED, LOW);
    digitalWrite(RED_LED, HIGH);
  }
}

INPUT_PULLUP keeps the input reliably HIGH when the body is not closing the circuit, so the reading never floats.

Here’s the video demonstration (my little sister is the demonstrator):

C18BC5EF-F946-4D9D-9A64-42D32D1BC5B3

Challenges:

Ensuring the elbow foil stayed in place during arm movement was a big challenge since the jumper wires are pretty short.

This was resolved by running a wire from the Arduino to the breadboard and then chaining a second jumper in the same row, giving me more length to work with.

Future Improvements:

  • More inputs, using additional body contact points.
  • Other outputs, such as sound (a sketch of both ideas follows this list).
  • A way to extend the wires so I can make more fun projects without reach limitations.
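A rough sketch of those extensions (the extra pins, second contact point, and beep frequency are assumptions): each new body-contact switch follows the same INPUT_PULLUP pattern, and a buzzer supplies the sound output.

// Two body-contact switches plus a buzzer: each contact lights its own LED,
// and either one also triggers a short confirmation beep.
const int ELBOW_PIN  = 2;
const int KNEE_PIN   = 3;   // hypothetical second contact point
const int GREEN_LED  = 10;
const int YELLOW_LED = 11;
const int BUZZER_PIN = 6;

void setup() {
  pinMode(ELBOW_PIN, INPUT_PULLUP);
  pinMode(KNEE_PIN, INPUT_PULLUP);
  pinMode(GREEN_LED, OUTPUT);
  pinMode(YELLOW_LED, OUTPUT);
}

void loop() {
  bool elbows = (digitalRead(ELBOW_PIN) == LOW);
  bool knees  = (digitalRead(KNEE_PIN) == LOW);
  digitalWrite(GREEN_LED, elbows ? HIGH : LOW);
  digitalWrite(YELLOW_LED, knees ? HIGH : LOW);
  if (elbows || knees) {
    tone(BUZZER_PIN, 880, 100);  // short beep on any contact
  }
}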

Week 8 – Reading Response

Both readings stretched the way I think about design and technology, but in very different directions. Don Norman explains that feelings are part of the user interface, while the story of Margaret Hamilton reminded me that logic and planning literally kept humans alive in space. It is interesting how design can go from a teapot being annoying to astronauts potentially facing real danger if the code is not perfect.

Norman’s point about positive emotions making us better problem solvers stuck with me. Not because I needed permission to love pretty things, but because he explains how emotions influence cognition. When something looks good, I want to engage with it. If it glitches, I will try again instead of giving up quickly. The product earns patience through beauty. He is not saying aesthetics replace usability, but that they work together. That made me rethink how I approach projects. Usability alone might function, but it does not always invite people in.

The Hamilton reading adds another dimension. She was not just writing code. She was establishing the rules of software engineering before the world understood why those rules mattered. Her commitment to planning for human error was not pessimism. It was care. She assumed that even the smartest astronauts could press the wrong button. She designed safety into the system instead of expecting humans to be perfect. That mindset feels extremely modern even though she was working with punch cards and very early computing tools.

Putting the two readings together, one message becomes clear. Design is not only about the object. It is about the human who interacts with it. Norman shows how emotion can make users more capable, and Hamilton shows how systems can support users even when they are not at their best. One focuses on delight. The other focuses on safety. Both center real people who get confused, who make mistakes, and who feel things.

As someone just starting to create interactive work, this combination feels like a guide. If I want to build something that people actually use, not just something that technically functions, I need to care about how it feels and what happens when something goes wrong. Norman gives me permission to design joy. Hamilton reminds me to design empathy. The strongest designs are created when both are valued equally.

Midterm Project – Barbie Dreamhouse

Barbie’s Dreamhouse 

Link to sketch: https://editor.p5js.org/rma9603/full/y2y3-M4zC

Whenever I play with interactive art, I try to build something that invites slow exploration instead of a single-goal game. For my midterm project I built Barbie’s Dreamhouse: a small interactive world with rooms to explore (Outside → Inside → Closet, Kitchen, Bedroom, Living Room), each containing subtle objects the user can click or interact with. The goal was to create a calm, nostalgic environment that encourages clicking, discovering, and lingering.

Concept

The Dreamhouse is not a “win/lose” game; it’s an exploratory scene. The idea was to capture the cozy, pastel vibe you expect from a dreamhouse and layer in small interactive details:

  • An exterior view with a theme song and a door that rings a real doorbell when clicked.

  • An interior view with hotspots for the closet, kitchen, bedroom, and living room.

  • A Closet (Wardrobe) with multiple outfit sets and selectable frames, accompanied by sparkle effects and sound.

  • A Kitchen where you pick a cupcake base, bake it, then decorate it with frosting.

  • A Bedroom with a clickable book that opens a reader overlay.

  • A Living Room with a TV area and a remote control that flips through channel-like images, each with music matched to the picture shown.

On a personal note: I loved Barbie as a kid, and some of my favorite Barbie movies directly inspired the look and feel of the living room — the pastel decor, playful props, and the idea of a tiny TV full of different “channels” came straight from that nostalgia.

The focus was on atmosphere: soft pastel visuals, gentle audio, and small surprises that reward clicking around.

Here is a rough sketch of what I envisioned:

Key features and interactions

Room transitions

  • Click the door from the exterior to enter the house.

  • From the interior, click room hotspots to open that room full-screen.

  •  Scene state management makes switching easy.

Closet 

  • Pages of outfit frames sliced from larger sprite sheets. (took me a while to slice them correctly).

  • Click dots to switch pages/sets, arrows to cycle frames, and a ✓ button to confirm selection.

  • Sparkle overlay + sparkle sound on selection when outfit is confirmed.

  • I added a wardrobe reset so pressing R reliably restarts the closet to its initial state.

Kitchen

  • A mini workflow: pick base → bake (progress bar) → confirm → decorate (frosting).

  • Cupcake base and frosting are separate sprites; frosting is aligned to the base bottom using computed offsets so different frosted overlays sit properly.

  • Tweaked the base preview size so the cupcake base doesn’t dominate the scene.

Bedroom

  • A book hotspot that opens a reader overlay with pages.

  • Prev/Next page buttons and R to close.

Living room

  • The living room is where my childhood Barbie inspiration shows most — pastel furniture, framed photos, and a playful TV nook. I built a small TV area with channel-like images and a responsive remote so users can flip through visuals like changing channels in a cozy movie night.

  • TV image area and remote hotspots scale responsively with the canvas; the living room’s color and props riff off my favorite Barbie movies.

Audio

  • Background theme for the exterior (looping Barbie Life in the Dreamhouse theme song).

  • Doorbell sound that plays when clicking the door — the theme song stops when the door is pressed so the bell is audible and the audio doesn’t overlap.

  • Special audio for sparkle, baking ding, closet music, and bedroom lullaby — all are conditionally played/stopped when entering/exiting rooms or selecting items.

Workflow & what went well

  1. Scene manager + state machine
    Making a small SceneManager (global state variable with states like outside, inside, closet, bedroom, living room) made it trivial to manage transitions and keep room-specific logic isolated.
  2. Drawing and assets
    I drew assets in Canva and used AI to generate some of the visuals at the same size as the p5 canvas where possible — this hugely simplified positioning and saved time. For sprite sheets (like cupcake bases/frostings and outfits) I sliced frames programmatically so I could treat them like tiled sprites.
  3. Small polish details
  • Preventing continuous hover sounds (door bell) by gating the knock with a boolean.

  • Ensuring music doesn’t layer (check .isPlaying() and .pause() before starting a new track).

  • Adding a “sparkle” overlay and stopping closet music when confirming a selection so the sparkle sound can be heard.

What coding I’m proud of

The piece of code I’m proudest of is honestly the whole kitchen, specifically its sprite-slicing and alignment system. I wrote utilities that trim transparent pixels from sprite frames, compute each frame’s visual center/bottom, and then use those offsets to automatically align frosting to the cupcake base across many different sprite sheets and sizes. That makes wildly different art assets behave as a single cohesive object without manual per-frame positioning. It also required careful handling of canvas scaling, timing (bake/ding), and audio overlap — a lot of little edge cases that had to work together cleanly.

 // slice sprites  — trims each frame and computes offsets on trimmed images
  _sliceSprites() {
    // base frames
    this.baseFrames = [];
    this.baseOffsets = [];
    if (
      this.cupcakeImg &&
      this.cupcakeImg.width &&
      this.cupcakeImg.height &&
      this.baseCount > 0
    ) {
      const g = this._bestGridFor(this.cupcakeImg, this.baseCount);
      const fw = Math.round(this.cupcakeImg.width / g.cols);
      const fh = Math.round(this.cupcakeImg.height / g.rows);
      let idx = 0;
      for (let r = 0; r < g.rows; r++) {
        for (let c = 0; c < g.cols; c++) {
          if (idx >= this.baseCount) break;
          const sx = c * fw,
            sy = r * fh;
          try {
            const raw = this.cupcakeImg.get(sx, sy, fw, fh);
            const trimmed = this._trimTransparent(raw) || raw;
            this.baseFrames[idx] = trimmed;
            this.baseOffsets[idx] = this._computeContentBounds(trimmed);
          } catch (e) {
            this.baseFrames[idx] = null;
            this.baseOffsets[idx] = {
              xOffset: 0,
              yOffset: 0,
              maxY: Math.floor(fh / 2),
            };
          }
          idx++;
        }
      }
    }

    // frosting frames
    this.frostingFrames = [];
    this.frostingOffsets = [];
    if (
      this.frostingImg &&
      this.frostingImg.width &&
      this.frostingImg.height &&
      this.frostingCount > 0
    ) {
      const g = this._bestGridFor(this.frostingImg, this.frostingCount);
      const fw = Math.round(this.frostingImg.width / g.cols);
      const fh = Math.round(this.frostingImg.height / g.rows);
      let idx = 0;
      for (let r = 0; r < g.rows; r++) {
        for (let c = 0; c < g.cols; c++) {
          if (idx >= this.frostingCount) break;
          const sx = c * fw,
            sy = r * fh;
          try {
            const raw = this.frostingImg.get(sx, sy, fw, fh);
            const trimmed = this._trimTransparent(raw) || raw;
            this.frostingFrames[idx] = trimmed;
            this.frostingOffsets[idx] = this._computeContentBounds(trimmed);
          } catch (e) {
            this.frostingFrames[idx] = null;
            this.frostingOffsets[idx] = {
              xOffset: 0,
              yOffset: 0,
              maxY: Math.floor(fh / 2),
            };
          }
          idx++;
        }
      }
    }
  }

 

Areas for improvement / future work

  • Add instructions or an optional guided mode (right now the experience is intentionally exploratory, but an in-game menu could help some users).

  • Refine click detection for non-rectangular images (pixel-perfect hit testing for PNGs with transparency).

  • Add more kitchen interactions: coffee machine, more decoration options, or an inventory for outfits.


What I learned

  • Breaking the app into small room controllers (Wardrobe, Kitchen, Bedroom, LivingRoom) makes the codebase much easier to maintain and debug.

  • Small details matter: gating hover sounds, preventing overlapping music, and subtle visual feedback (sparkle, dots) make the experience feel much more polished.

  • Drawing assets at canvas scale saves tons of time when positioning interactive pieces.

Closing

I loved Barbie when I was a kid, and designing this project felt like a grown-up, interactive love letter to those movies, especially when building out the living room. I enjoyed making something soft and low-pressure that rewards clicking and exploration. The Dreamhouse was a great exercise in scene management, responsive layout, and polishing interactions that make users want to hang out in a piece of art.

Barbie’s Dream House – Midterm Progress

Concept + Interaction: 
I’m making a small interactive Barbie Dream House experience with mini-games in each room. You start outside the house, where a prompt tells you to “click the door.” When you do, a doorbell sound plays and you go inside. The interior is a divided view with hotspots for rooms: closet, kitchen, bedroom, and living room.
Each room has a tiny activity:

  • Closet: wardrobe selector (browse outfits with ← / → and hit ✓ to confirm).
  • Kitchen: cupcake maker (choose base, pick frosting color, “bake” in an oven).
  • Bedroom: open a book to see a short quote or page.
  • Living room: click the TV to change pictures (channel changer).

Code design:
Each interactive room is its own class/file.
The main sketch holds a simple state machine: state = “outside” | “inside” | “closet” | “kitchen” | …. When state === “closet”, the sketch delegates to wardrobe.draw() and forwards clicks to wardrobe.handleClick().

I made a Wardrobe class that:

  •  loads outfit sprite-pages and slices them into frames,
  • centers and scales the current outfit into a fixed box,
  • draws left/right arrow buttons and a confirm (✓) button,
  • exposes handleClick() for arrows, page switches, and confirm.

Other rooms (Kitchen, Bedroom, LivingRoom) follow the same class pattern so swapping and testing is easy.

Here is my current progress:

https://editor.p5js.org/rma9603/full/5s96PtOnM

 

Biggest risk & what I did about it:
The scariest part was making outfit switching reliable: slicing sprite-sheets, handling transparency, and aligning click hitboxes. If that failed the closet would be unusable.

How I reduced the risk:
I wrote a tiny test that loads one outfit sprite-page and uses page.get() to draw each frame side-by-side so I could verify cropping and transparency immediately. I exported/uploaded transparent PNGs (no white bg) and re-tested them.

Centralized coordinates: arrows, the confirm button, and hitboxes all compute from the same this.box values, so visuals and clicks always line up.

Result: the sprite-sheet method works. I can slice pages reliably and outfit switching is stable, so I’m confident I can implement the cupcake and other rooms in the same modular way.

Next steps:

  • Finish the kitchen cupcake game (choose base, frosting, bake timer).
  • Create the bedroom interaction.
  • Create the living room interaction.
  • Add a start/instructions screen and a “new session” reset button so people can restart without refreshing.
  • Add hover feedback and polish (shadows, small tweaks).
  • Make it full screen.

Week 5 – Reading Reflection

What stood out to me in the reading is how limited computer vision really is compared to human vision. As humans, we don’t think twice about recognizing objects, adjusting to poor lighting, or making sense of what we see in context. A computer, on the other hand, needs specific rules and conditions to function. It does not actually understand meaning but instead works through pixels, patterns, and features. If something changes in the environment, like lighting or background, the system can easily fail. That made me realize how much of computer vision is not about “seeing” the world the way we do but about narrowing down what the computer is expected to detect.

To make computer vision work, artists and designers often shape the environment so the system has fewer obstacles. This can be done by using clear contrasts, better lighting, or markers that help the camera distinguish what matters. There are also tools like background subtraction and motion tracking that simplify what the computer needs to follow. Hardware choices such as certain cameras, filters, or infrared technology also help in making the vision system more reliable.

In interactive art, I think this creates both opportunities and challenges. On the positive side, computer vision allows art to respond to the presence and movements of people, turning viewers into active participants. It makes installations feel alive and immersive in ways that would not be possible without tracking. At the same time, it carries the same logic as surveillance because the system is always watching and recording behavior. For me, this makes computer vision powerful but also a little unsettling, since it forces us to think about what it means to be observed and how that shapes the experience of art.

Week 4: Reflection

Something That Drives Me Crazy
One thing that really frustrates me is when digital buttons or interactive elements look clickable but actually aren’t. For example, I’ll see a button that’s styled like it should do something, click it, and… nothing happens. Sometimes I double-check, hover around, or even click other areas, thinking maybe I missed something. It’s confusing, annoying, and honestly breaks the flow of using the site or app. I’ve seen it in everything from websites to apps, and even small projects can suffer if users aren’t given clear cues. It’s such a simple thing to fix, too. Adding hover effects, subtle animations, shadows, or clear visual cues can immediately signal that something is interactive, saving users time and preventing frustration.

How I Can Apply Norman’s Principles to Interactive Media
Norman’s ideas about affordances and signifiers really clicked for me. Affordances tell users what actions are possible, and signifiers indicate where they can take those actions. In my interactive media work, I try to make every clickable element obvious. For instance, in my p5.js sketches like my floating dad joke bubble project, I make sure bubbles are visually distinct, move gently to draw attention, and respond when hovered or clicked. These small cues let users know exactly what to do without guessing. I also pay attention to natural mapping. Just like Norman talks about arranging light switches to match the lights they control, I place interface elements in locations that feel intuitive, buttons go where people expect them, and interactive elements behave like their real-world counterparts would.

Reflection and Ideas for Improvement
Working on this has made me realize how tiny details in design can have a huge impact. Even something as simple as whether a bubble looks “poppable” can completely change how a user experiences the project. In the future, I’d like to experiment with multiple interactive elements at once and make sure each is clearly signaled. I’m also thinking about adding feedback for users, like subtle animations when they hover or click, so the system feels alive and responsive. Another idea is giving users more control over interactions; for example, letting them drag or rearrange elements, while still keeping things intuitive. Ultimately, I want my interactive media to be fun, obvious, and frustration-free, where users can explore naturally and enjoy the experience without ever feeling lost.