Week 10 – Reading Reflection

This piece, A Brief Rant on the Future of Interaction Design, really made me stop and think about how disconnected we’ve become from the physical world, even as our technology gets “smarter.” The author argues that our so-called “futuristic” devices are actually quite limited: everything has become a flat piece of glass we tap on, instead of something we truly interact with. He calls out this obsession with “pictures under glass” as lazy design — a downgrade from the richness of real, tactile experience.

What really stuck with me was his reminder of how incredible our hands are. They can sense texture, pressure, and temperature, yet we now use them mainly to poke at screens. His comparison to trying to tie your shoes with numb fingers really drives it home. It’s not just that we’ve lost physical feedback; we’ve lost creativity and subtlety in how we use our bodies to understand the world.

But as much as I agree with him, I think his critique could use a bit more realism. There’s a reason touchscreens took over: they’re convenient, cheap, and universal. Designing physical, tactile, or responsive interfaces on a large scale would be expensive and hard to standardize. For example, the Apple Vision Pro tries to reintroduce gesture-based control, but even that feels awkward and unnatural for many people. It’s like we’ve already trained ourselves to think in 2D, to expect smooth glass, not texture or resistance.

Still, I think his rant is important because it challenges the direction of design thinking. It made me think about situations like education or online learning: imagine how much richer it would be if students could physically interact with virtual models or data instead of just scrolling and clicking. Or think of creative fields like art or architecture, where so much of the learning used to come from the feel of materials. Now, everything happens behind a screen.

So, while his rant might sound idealistic, it’s also a necessary wake-up call. It reminds us that innovation shouldn’t just mean “simpler” or “sleeker”; it should mean more human. The goal shouldn’t be to erase physicality for convenience, but to design technology that reconnects us to the world instead of flattening it.

Week 9 – Shadow and Light

Concept

For this project, I decided to make a small interactive game called “Light & Shadow”. The concept is simple but fun: the yellow LED changes brightness depending on how much light the photoresistor (LDR) senses, and the blue LED only lights up when it’s dark and the button is pressed, basically “catching the shadow.” I liked that idea because it combines analog and digital inputs in a way that feels like a mini game.

The Process

The production process was mostly about setting up the circuit on the breadboard and figuring out how to connect everything correctly. Choosing the right resistors was harder than I expected: sometimes the LEDs didn’t light up properly, and I had to try a few combinations before it worked. I also spent a lot of time drawing the schematic to make sense of the connections. I must have redrawn it several times to make it at least slightly understandable. Even now, I’m not 100% sure my plan is perfect, but it works!

Figure: the process of sketching the plan

The Finalized Sketch

The coding part was fun because it was simple enough to understand but still taught me a lot. I read the analog value from the photoresistor and mapped it to the brightness of the red LED, then used a simple if-statement to check the button and darkness to control the green LED. Seeing the LEDs react in real time to light and button presses made all the work feel rewarding.

int lightSensor = A0;
int buttonPin = 2;
int redLED = 9;
int greenLED = 8;

void setup() {
  pinMode(buttonPin, INPUT);
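  // this assumes the button is wired with an external pull-down resistor; with INPUT_PULLUP the HIGH/LOW logic would flip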
  pinMode(greenLED, OUTPUT);
  pinMode(redLED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int lightValue = analogRead(lightSensor);    // 0–1023
  int brightness = map(lightValue, 0, 1023, 255, 0); // dark = bright
  analogWrite(redLED, brightness);

  int buttonState = digitalRead(buttonPin);

  if (buttonState == HIGH && lightValue < 400) {
    digitalWrite(greenLED, HIGH);
  } else {
    digitalWrite(greenLED, LOW);
  }

  delay(100);
}

Reflection

Overall, this project was a great learning experience. I got hands-on practice with analog and digital inputs, using resistors correctly, and writing basic Arduino code. I also learned patience, both in wiring and drawing schematics. Even though some parts were frustrating or confusing, it felt really satisfying to see the LEDs respond the way I wanted. Next time, I’d probably try adding a buzzer or even a little LCD display to make it even more interactive.

The video demonstration: IMG_8850

Reading Reflection

Reading Physical Computing’s Greatest Hits and Misses alongside Making Interactive Art: Set the Stage, Then Shut Up and Listen really made me rethink how we interact with technology in creative spaces. Both texts explore the tension between human intention and technological behavior, but they approach it from slightly different angles. The first one dives into successes and failures in physical computing projects, highlighting that even the best ideas can flop if the execution ignores user experience or the unpredictability of the real world. The second reading, on the other hand, emphasizes listening to the audience and letting interactions evolve naturally, rather than forcing a rigid narrative onto the art or installation.

The most interesting part for me was the recurring theme of “letting go.” In physical computing, there’s often a desire to control every aspect of a system, to make it work perfectly according to the designer’s vision. But as the readings show, interaction is messy. Sensors misfire, people behave unexpectedly, and sometimes the “mistakes” end up being the most engaging parts of the project. I appreciated the reminder that in interactive art, and in technology projects more broadly, failure isn’t always a failure: it’s data, feedback, and sometimes even the spark for something better.

One question that came to mind while reading was: How do we balance designing for reliability with designing for surprise and emergent behavior? Both readings suggest that embracing unpredictability can make projects more engaging, but too much unpredictability can frustrate users. My takeaway is that the key might be thoughtful scaffolding, providing enough structure so that the system is understandable and responsive, while leaving space for improvisation and interaction to shape the experience.

Overall, I found these readings both inspiring and a little humbling. They reminded me that creativity in physical computing isn’t just about technical skill; it’s about curiosity, flexibility, and, honestly, patience with both technology and people.

Week 8 – Creative Switch

For my project, I designed an Arduino switch using open-ended wires that respond to the camera’s power state. I taped the wires to a digital camera so that when the camera is turned on, the wires disconnect and the LED turns off, and when the camera is turned off, the wires reconnect and the light turns on. This created a simple but clever system where the camera itself acts as a physical trigger for the circuit. I really enjoyed experimenting with the setup, testing how small adjustments in the wire placement affected the light’s response. Since I’m passionate about photography, I wanted to combine my interest in cameras with my curiosity about electronics, and this project gave me a fun way to explore both the creative and technical sides at the same time.

The video representation: IMG_8468 (1)

Week 8 – Reading Response

Reading Norman’s Emotion & Design and Her Code Got Humans on the Moon made me rethink what we usually praise in tech. Norman’s main idea that “attractive things work better” makes sense at first. If a product is enjoyable to use, we’re more likely to stick with it. But honestly, it feels a bit too neat. Just because something looks good doesn’t mean it works well. I’ve had plenty of apps or gadgets that are gorgeous but a pain to actually use. Norman makes a strong point about emotion shaping usability, but sometimes I feel designers lean on aesthetics as a crutch instead of solving real problems.

On the other hand, Her Code Got Humans on the Moon reminded me that behind every “perfect” design or software is a ton of human effort — in this case, women programmers whose work literally made moon landings possible. It’s wild how long their contributions were invisible. It makes me question why we hype technology itself while ignoring the people who make it run. The “heroic inventor” story Norman sometimes leans on in design discussions seems incomplete — we rarely celebrate the actual humans doing the work.

Putting these together, I think the readings challenge the usual tech narrative. Norman focuses on emotion and aesthetics, which are important, but Her Code highlights that real success comes from skill, persistence, and problem-solving. My takeaway? Great design isn’t just about looking or feeling good — it’s also about respecting and acknowledging the humans who make it work. Otherwise, we’re praising the wrong things and missing the bigger picture.

Midterm Project – Balloon Pop

Game link:

https://editor.p5js.org/mm13942/full/lU_ibrAn2

Concept

My project is a fun and lighthearted balloon survival game where the player controls a balloon flying through the sky. The goal is simple — avoid the falling bombs, collect hearts for extra points, and stay alive as long as possible. The balloon can only move left and right, while the background scrolls continuously to give the feeling of rising into the clouds. The moment the balloon hits a bomb or the timer bar runs out, it pops and the game is over.
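
That feeling of rising comes entirely from looping the sky image. A stripped-down sketch of the idea (assuming the background image is scaled to the canvas size; bgImg and scrollSpeed are placeholder names here, not my exact variables) is just two copies of the background drawn stacked and wrapped:

let bgY = 0; // how far the sky has scrolled so far

function drawScrollingBackground() {
  // Draw two copies stacked vertically so there is never a gap
  image(bgImg, 0, bgY);
  image(bgImg, 0, bgY - height);

  // Move the sky downward so the balloon appears to rise
  bgY += scrollSpeed;

  // Once a full screen height has scrolled past, wrap around seamlessly
  if (bgY >= height) {
    bgY = 0;
  }
}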

Inspiration

I was inspired by classic arcade-style games where the goal is to survive for as long as possible while dodging obstacles. I wanted to make something cheerful and colorful but still challenging. The idea of having a balloon as the main character felt fitting because it’s fragile yet expressive, and visually, it works well with soft, moving clouds and bright skies.

Production and Main Features

Initially, I was planning to have the obstacles appear randomly from the sides:

But then I realized it wouldn’t be a great decision, considering the size of a laptop screen, so I decided to stick with my other idea: objects falling randomly from the sky. After creating a draft of the game, I wrote down everything that still needed to be added to complete it.

Key Features:

  1. Object-Oriented Design – Classes for Balloon, Bomb, Heart, and Button
  2. Pages:
    • Main page with right-aligned buttons
    • Instructions page
    • Weather choice page (Rainy, Sunny, Snowy)
    • Skin choice page (Pink, Yellow, Orange balloons)
    • Game over page with left-aligned content
  3. Gameplay Mechanics:
    • Bombs spawn with increasing difficulty (max 7)
    • Hearts spawn every 8 points (+5 score bonus)
    • Hearts spawn away from bombs to avoid interference
    • Proper collision detection with circular hitboxes (no false collisions)
    • Infinitely scrolling backgrounds based on weather
    • Score tracking with high score display
  4. Controls:
    • Arrow keys or A/D to move
    • C key for fullscreen toggle
    • Mouse clicks for all buttons
  5. Audio Support – Sound functions
  6. Highlighted Buttons – Selected weather/skin buttons get highlighted
  7. Back Buttons – On every sub-page to return to main menu

Code I’m Proud Of

The part of the code that I’m really proud of might not seem too hard, but it was definitely the most time-consuming and took a lot of trial and error: removing the white background from the uploaded pictures. When it first worked, I thought everything was fine, but it turned out p5 was re-processing the same image on every frame (60 times per second), and when I played for more than 10 seconds the sketch kept shutting down. I had to do a lot of debugging to understand what the actual problem was, and I was finally able to make it work without lagging, which really made me happy.

  // Inside setup(): each image is processed once here, instead of on every frame in draw()
  processedBombImg = removeWhiteBackground(bombImg);
  processedHeartImg = removeWhiteBackground(heartImg);
  processedPinkBalloonImg = removeWhiteBackground(pinkBalloonImg);
  processedYellowBalloonImg = removeWhiteBackground(yellowBalloonImg);
  processedOrangeBalloonImg = removeWhiteBackground(orangeBalloonImg);
  
  // Initialize player balloon
  player = new Balloon(width/2, height - 80 * scaleFactor);
  
  // Create all button objects with proper positions
  setupButtons();
  
  // Set the pixel font for all text
  textFont(pixelFont);
}


// remove White Background 
function removeWhiteBackground(img) {
  // Create a graphics buffer to manipulate pixels
  let pg = createGraphics(img.width, img.height);
  pg.image(img, 0, 0);
  pg.loadPixels();
  
  // Loop through all pixels and make white ones transparent
  for (let i = 0; i < pg.pixels.length; i += 4) {
    let r = pg.pixels[i];     // Red channel
    let g = pg.pixels[i + 1]; // Green channel
    let b = pg.pixels[i + 2]; // Blue channel
    
    // If pixel is mostly white (R, G, B all > 200), make it transparent
    if (r > 200 && g > 200 && b > 200) {
      pg.pixels[i + 3] = 0; // Set alpha to 0 (transparent)
    }
  }
  pg.updatePixels();
  return pg; // Return the processed image
}

I also really liked the energy bar idea. I was really struggling to come up with ideas, and my friend Nigina gave me some feedback on my game and suggested adding this feature, which prevents players from skipping the hearts.

function drawEnergyBar() {
  // Position in top-right corner
  let barX = width - energyBarWidth * scaleFactor - 20 * scaleFactor;
  let barY = 20 * scaleFactor;
  let barW = energyBarWidth * scaleFactor;
  let barH = energyBarHeight * scaleFactor;
  
  // Draw outer frame 
  stroke(255); // White border
  strokeWeight(3 * scaleFactor);
  noFill();
  rect(barX, barY, barW, barH);
  
  // Calculate fill width based on current energy percentage
  let fillWidth = (energyBar / 100) * barW;
  
  // Determine fill color based on energy level
  let barColor;
  if (energyBar > 60) {
    barColor = color(0, 255, 0); // Green when high
  } else if (energyBar > 30) {
    barColor = color(255, 255, 0); // Yellow when medium
  } else {
    barColor = color(255, 0, 0); // Red when low 
  }
  
  // Draw filled portion of energy bar
  noStroke();
  fill(barColor);
  rect(barX, barY, fillWidth, barH);
  
  // Draw "ENERGY" label above bar
  fill(255);
  textAlign(CENTER, BOTTOM);
  textSize(16 * scaleFactor);
  text("ENERGY", barX + barW / 2, barY - 5 * scaleFactor);
}


Design

Visually, I wanted it to feel airy and positive, so I used soft pastel colors, smooth cloud movement, and rounded buttons. Each page has its own layout — right-aligned buttons on the main page and left-aligned elements on the Game Over screen — to make navigation easy.

Challenges

The hardest part of my code was definitely managing how all the game elements work together (the bombs, hearts, clouds, timer bar, and different pages). Getting everything to appear, move, and disappear smoothly without glitches took a lot of trial and error. Sometimes bombs appeared too frequently or hearts overlapped with them. I fixed this by randomizing positions with distance checks.

if (bombs.length < maxBombs && frameCount % 50 === 0) {
  bombs.push(new Bomb(random(width), -20));
}

if (score % 10 === 0 && !heartExists) {
  hearts.push(new Heart(random(width), -20));
  heartExists = true;
}
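
The distance check itself isn’t shown above; roughly, it works like the helper below, where the function name, the retry limit, and the 100-pixel margin are placeholders rather than my exact values:

// Retry random x-positions until the new heart spawns far enough from every bomb
function spawnHeartAwayFromBombs() {
  let x = random(width);
  // Try up to 20 random spots before giving up and using the last one
  for (let attempts = 0; attempts < 20; attempts++) {
    x = random(width);
    let tooClose = false;
    for (let bomb of bombs) {
      if (dist(x, -20, bomb.x, bomb.y) < 100) {
        tooClose = true; // this spot overlaps a bomb's path, try again
        break;
      }
    }
    if (!tooClose) {
      break;
    }
  }
  hearts.push(new Heart(x, -20));
}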

The collision detection between the balloon and falling bombs was tricky too, since I had to make sure it felt fair and accurate using circular hitboxes. Another challenging part was balancing the gameplay, making bombs fall fast enough to be fun but not impossible, while also keeping the hearts from overlapping with them. On top of that, managing all the page transitions (main menu, instructions, weather, skins, game over) and keeping the selected settings consistent made the logic even more complex. Overall, the hardest part was making everything work together in a way that felt natural and didn’t break the flow of the game.
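
In the end, the circular hitbox check is just a distance comparison. A stripped-down version (the field names here are illustrative, not my exact class properties) looks like this: a collision only counts when the two centers are closer than the sum of the radii, which is what keeps stray pixels around the images from causing false pops.

// Circle-to-circle hit test between the balloon and a bomb
function isColliding(balloon, bomb) {
  let d = dist(balloon.x, balloon.y, bomb.x, bomb.y);
  return d < balloon.radius + bomb.radius;
}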

Future Improvements

In the future, I’d like to make the game feel more complete by adding real background music and more sound effects for popping, collecting hearts, and clicking buttons. Another improvement would be to make the difficulty change as the score increases, for example, bombs could fall faster or spawn more frequently the longer you survive. I’m also thinking of adding new power-ups like shields or magnets to make the gameplay more interesting. On the design side, animated buttons and smoother page transitions could make the menus feel more polished. Eventually, I’d love to include a high score system to track progress and make players more competitive.


Week 5 – Reading Discussion

When I think about computer vision, what interests me most is how strange it feels to give a machine the ability to “see.” Human vision is so automatic and seamless that we don’t really think about it, but when you translate it into algorithms, you realize how fragile and mechanical that process is. I find it fascinating that a computer can pick up tiny details that our eyes might not notice, yet at the same time, it can completely miss the “big picture.” That makes me wonder whether computer vision is really about replicating human vision at all, or if it’s creating an entirely different way of perceiving the world.

What I find both exciting and unsettling is how computer vision plays with control. On one hand, it can feel magical when an artwork follows your movements, responds to your gestures, or acknowledges your presence (like in TeamLab). There’s an intimacy there, like the piece is aware of you in a way that a static painting could never be. On the other hand, I can’t help but think about surveillance every time I see a camera in an installation. Am I part of the artwork, or am I being monitored? That ambiguity is powerful, but it also puts a lot of responsibility on the artist to think about how they’re using the technology.

For me, the most interesting potential of computer vision in interactive art isn’t just the novelty of tracking people, but the chance to reflect on our relationship with being watched. In a world where surveillance cameras are everywhere, an artwork that uses computer vision almost automatically becomes a commentary on power and visibility, whether or not the artist intends it. I think that’s what makes the medium so rich: it’s not just about making art “see,” it’s about making us more aware of how we are seen.

Week 5 – Reading Reflection

Reading the essay Computer Vision for Artists and Designers made me realize how differently computers and humans actually “see.” Our eyes and brains process the world in ways that feel natural: we recognize faces instantly, understand depth, guess intentions from gestures, and fill in missing details without even noticing. Computers, on the other hand, don’t have that intuitive grasp. They just see pixels and patterns. A shadow or a little blur can confuse them. Where we understand context, like knowing a cat is still a cat even if half hidden, computers rely on strict rules or training data, and they often fail when something doesn’t match what they’ve been taught to expect.

To bridge that gap, a lot of effort goes into helping machines track what we want them to notice. Instead of raw pixels, we give them features: edges, colors, corners, or textures. Algorithms can then use those features to keep track of an object as it moves. More recently, deep learning has allowed computers to learn patterns themselves, so they can recognize faces or bodies in a way that feels closer to human intuition (though still fragile). Sometimes, extra sensors like depth cameras or infrared are added to give more reliable information. It’s almost like building a whole toolkit around vision just to get machines to do what we take for granted with a single glance.
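
To make that concrete for myself, here is a minimal p5.js sketch of frame differencing, one of the elementary techniques the essay walks through. It only notices that something changed between frames, not what changed, and the threshold value is just a hand-tuned placeholder:

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240); // stores the previous frame for comparison
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  // Add up how much each pixel's brightness changed since the previous frame
  let movement = 0;
  let n = min(video.pixels.length, prevFrame.pixels.length);
  for (let i = 0; i < n; i += 4) {
    let curr = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    let prev = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    movement += abs(curr - prev);
  }

  // Remember the current frame for the next comparison
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, prevFrame.width, prevFrame.height);

  // React when the total change crosses a hand-tuned threshold
  if (movement > 500000) {
    fill(255, 0, 0);
    noStroke();
    circle(width / 2, height / 2, 50);
  }
}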

Thinking about how this plays into interactive art is both exciting and a little unsettling. On one hand, the ability to track people makes art installations much more engaging — an artwork can respond to where you’re standing, how you move, or even who you are (as I observed in TeamLab). That creates playful, immersive experiences that wouldn’t be possible without computer vision. But the same technology that enables this interactivity also raises questions about surveillance. If art can “see” you, then it’s also observing and recording in ways that feel uncomfortably close to security cameras. I think this tension is part of what makes computer vision so interesting in art: it’s not just about making something interactive, but also about asking us to reflect on how much we’re being watched.

Week 5 – Midterm Project Progress

For my midterm project, I decided to make a little balloon-saving game. The basic idea is simple: the balloon flies up into the sky and faces obstacles on its way that the player needs to avoid.

Concept & Production

Instead of just popping balloons, I wanted to make the balloon itself the main character. The player controls it as it floats upward, while obstacles move across the screen. The main production steps I’ve worked on so far include:

  • Making the balloon move upwards continuously.
  • Adding obstacles that shift across the screen.
  • Writing collision detection so that the balloon “fails” if it hits something.

  • Bringing back the buttons and menu look from the beginning, so the game starts cleanly.

It’s been fun turning the balloon from a simple object into something the player actually interacts with.

The Most Difficult Part
By far, the trickiest part has been getting the balloon to pop without errors. Sometimes, collisions were detected when they shouldn’t have been, which gave me a bunch of false pops. Fixing that took way more trial and error than I expected, but I think I finally have it working in a way that feels consistent (I used help from AI and YouTube).

Risks / Issues
The main risk right now is that the game sometimes lags. Most of the time, it works fine, but once in a while, the balloon pops out of nowhere in the very beginning. I’m not sure if it’s about how I’m handling the objects or just the browser being picky. I’ll need to look into optimizing things as I add more features.

Next Steps
From here, I want to polish the interactions more, add sound effects, and make sure the game stays fun for longer than a few seconds and looks more visually appealing. But overall, I feel good that the “scariest” part (getting rid of the balloon-popping errors) is mostly handled.

Week 4 – Reading Response

One thing that always frustrates me is elevator button panels. I can’t count the number of times I’ve stood in front of a shiny panel of identical metal buttons, with no clear distinction between “door open,” “door close,” or the emergency button. Sometimes the “door open” symbol is barely visible or located in an unintuitive spot, and more than once I’ve pressed the alarm instead of the open button (which is always a bit embarrassing). A small redesign could go a long way: clearer icons, color coding, or even tactile differences would make it more intuitive. Norman’s point that “when something goes wrong, the user is often blamed instead of the design” fits perfectly here. It’s not that people are clumsy; it’s that the system doesn’t communicate well.

I’m still thinking about the line that said “Good design starts with an understanding of psychology and technology. It is not enough that we build products that work; they must be understandable and usable.” This brings us directly to UX. UX matters because it shapes the way people experience and connect with a product, space, or interaction. Good UX reduces confusion, frustration, and wasted effort, making tasks feel natural and even enjoyable. As Norman emphasizes, design isn’t just about how something looks; it’s about how it communicates its purpose and supports human behavior. When UX is overlooked, people often end up blaming themselves for “not getting it,” when in reality it’s the design that failed to guide them.

When I think about applying Norman’s principles of design to interactive media, the biggest lesson is clarity in signifiers and feedback. In my projects, I’ve sometimes assumed that users will “just know” what to click or press, but Norman’s reminder is that what feels intuitive to me as the designer may not feel that way to someone else. Adding clear visual cues like arrows, hover highlights, or simple instructions makes interactions discoverable and satisfying. Affordances and signifiers should be obvious: users shouldn’t have to guess what to click, drag, or press. Feedback is just as important: when someone interacts with my project, they should instantly see or hear a response. That way, instead of frustration, interaction feels like a satisfying loop.