Reflection – Week 8

When I was reading these two articles side by side, I was struck by how they both explore the crucial role of human emotion in successful design, even though one is about teapots and the other about space travel.

Don Norman’s piece, “Emotion & Design,” argues that attractive things aren’t just a luxury; they actually function better. He explains that when we find a product pleasing, it puts us in a positive state of mind. This positive feeling makes us more tolerant of minor problems and more creative in solving them. I can see this in my own life: when I use a beautifully designed website, I feel more patient and engaged, and I don’t get frustrated as easily. It’s not just about the tool working correctly, but about how it makes me feel while I’m using it.

This idea perfectly connects to the story of Margaret Hamilton. “Her Code Got Humans on the Moon” shows that the most brilliant technical system is useless if it doesn’t account for human nature. Hamilton understood that even the most highly trained astronauts were human and could make mistakes under immense pressure. Her fight to include error-checking code, which was initially dismissed, proved to be vital. Her software was designed with a deep understanding of human stress and fallibility, making it resilient and, in the end, heroic.

For me, the powerful lesson from both authors is that true excellence in any field requires blending logic with empathy. Norman shows us that beauty improves function by improving the user’s mindset. Hamilton shows us that anticipating human error is not a sign of weak design, but of strong, intelligent design. It reminds me that in my own work and studies, embracing creativity and understanding the human element is just as important as getting the technical details right.

Assignment

Concept

I created a simple switch using aluminum foil around my index finger and thumb. The goal was to make a switch that doesn’t use hands in the traditional sense; instead, it works by touching two parts of my own body together. When I bring my thumb and index finger together, the LED turns on. When they are apart, the LED turns off.

For this Arduino project, I wrapped foil around my thumb and index finger to extend the conductivity and connected each foil piece to the Arduino using jumper wires. This simple prototype shows how the human body can become part of an electronic circuit.

Link to video: Video

Highlight of the code

The code itself is simple. The Arduino reads the input from the foil on your fingers using digitalRead(). When your fingers touch (closing the circuit), it reads HIGH and turns on the LED. When you separate your fingers, the input reads LOW and the LED turns off.

int fingerSwitch = 2;  // Define the digital pin connected to the switch (foil pads on the fingers)
int ledPin = 13;       // Define the digital pin connected to the LED

void setup() {
  pinMode(fingerSwitch, INPUT);  // Set the finger switch pin as an input to read HIGH/LOW
  pinMode(ledPin, OUTPUT);       // Set the LED pin as an output so we can turn it on/off
}

void loop() {
  int switchState = digitalRead(fingerSwitch);  // Read the current state of the switch

  if (switchState == HIGH) {    // If the switch is pressed (fingers touching)
    digitalWrite(ledPin, HIGH); // Turn the LED on
  } else {
    digitalWrite(ledPin, LOW);  // Otherwise, turn the LED off
  }

  delay(10); // Small delay to stabilize readings and avoid flickering
}

Reflection

This prototype is simple but effective. I noticed that the foil doesn’t always maintain perfect contact, so the LED sometimes flickers if the foil slips or my skin doesn’t touch the metal fully. I could improve this by using stretchable conductive tape to make contact more consistent.

Even with these small issues, it was exciting to see how my body can act as a switch. Using just fingers and foil, I was able to control the LED and experiment with a non-traditional, hands-free interaction. It’s a great demonstration of how electronics and the human body can be creatively combined in fun, unexpected ways.

Midterm Project

Concept

Color Gate Challenge is a fast and colorful reaction game where the player controls a glowing ball that changes colors to pass through matching gates. The idea came from color-matching and reflex games I used to play, but I wanted to create something that feels more modern and bright, with glowing effects and smooth motion.

The goal of the game is to move the ball through falling gates without hitting the walls. Each gate has a color, and the player must change their ball to the same color to pass safely. If you pass the wrong color or crash into the barrier, the game ends.

My Final Game:

Code Snippet and Parts I Am Proud Of:

One of the parts I am most proud of is how I control the player, the gates, and the full game system.
Each part of the game (player, gate, and main game) is built as a class, which made it easier to add new features later.

The player’s color can change, and the game checks if it matches the gate’s color before letting the player pass. This made the logic clear and fun.

if (this.player.checkCollision(gate)) {
  this.gameOver();
  return;
}

This simple check controls the whole challenge of the game.
If the player touches the wrong color or hits the gate walls, the game ends immediately.
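
The checkCollision() method itself isn’t shown here, but a minimal sketch of the kind of check it performs could look like the following; the field names (colorIndex, gapX, gapWidth, r) are illustrative, not necessarily the ones my class uses.

// Sketch of a color-aware collision check (illustrative field names)
checkCollision(gate) {
  if (abs(this.y - gate.y) > this.r) return false;  // only test at the gate's row
  const wrongColor = this.colorIndex !== gate.color;                          // colors must match
  const hitsWall = this.x < gate.gapX || this.x > gate.gapX + gate.gapWidth;  // outside the opening
  return wrongColor || hitsWall;                    // true ends the game
}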

I also added a color preview system that shows the next few gate colors, so the player can plan ahead.
It uses small color dots on the screen to help the player see which color to switch to next.

this.upcomingGates.forEach((gate, index) => {
  const dot = document.createElement('div');
  dot.className = 'color-dot';
  dot.style.background = this.getColorString(this.player.colors[gate.color]);
  colorPreview.appendChild(dot);
});

Another part I am proud of is how the speed control works. The player can press keys to make the gates fall faster or slower, and there is a live bar that shows the speed level. This made the game more interactive and customizable.
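
A minimal sketch of how this kind of key-based speed control could work in p5.js; the keys, limits, and bar dimensions here are placeholders rather than my exact values.

// Sketch: adjust the falling speed with keys and show it as a bar
let gateSpeed = 3;                          // pixels per frame

function keyPressed() {
  if (keyCode === UP_ARROW)   gateSpeed = min(gateSpeed + 0.5, 10);  // faster, capped
  if (keyCode === DOWN_ARROW) gateSpeed = max(gateSpeed - 0.5, 1);   // slower, floored
}

function drawSpeedBar() {                    // called from draw()
  const w = map(gateSpeed, 1, 10, 20, 200);  // bar width reflects the current speed
  fill(255);
  rect(20, 20, w, 10);
}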

Problems and Future Improvements

At first, the game was too easy because the gates were falling too slowly, and it didn’t feel challenging. I changed the gate speed and added random colors to make it more unpredictable and exciting.

Another problem was keeping everything in the right position when resizing the window. I had to fix the player’s size and position every time the screen changed, using the windowResized() function.
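
A small sketch of that idea, assuming the player stores its horizontal position as a fraction of the canvas width (the relativeX field is an illustrative name):

// Sketch: keep the canvas and player consistent when the window changes
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);    // p5.js resizes the drawing surface
  player.size = windowWidth * 0.04;           // scale the ball with the screen
  player.x = player.relativeX * windowWidth;  // restore position from a stored ratio
  player.y = windowHeight - 100;              // keep the ball near the bottom
}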

In the future, I want to:

  • Add special gates that move sideways

  • Add sound effects for color switches and collisions

  • Add power-ups that give the player a shield or slow motion

  • Create a moving space background for more depth

  • Add more visual effects like explosions and particle trails

Reading Response

When I was reading ‘Computer Vision for Artists and Designers’, one thing that stood out to me was how different computer vision is from human vision. For us, seeing feels natural; we can look at a scene and instantly recognize people, objects, emotions, or even context. But computers don’t “see” that way. For them, an image is just a bunch of pixels with numbers. They need step-by-step methods to figure out what’s moving, what belongs in the background, or what part of the picture matters.

The reading showed me some basic techniques that help computers track what we want. For example, frame differencing looks at what’s changed from one frame to the next, background subtraction compares the scene to a saved “empty” version to spot people, and brightness thresholding picks out objects based on how light or dark they are compared to the background. These sound simple, but they can be powerful if the physical setup is right, like having strong contrast or good lighting. I liked how the article points out that the environment matters just as much as the code.
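
To make the first of those techniques concrete, here is a minimal p5.js sketch of frame differencing as I understand it from the reading: it just counts how many pixels changed noticeably since the previous frame (the threshold of 30 is an arbitrary choice).

// Sketch: frame differencing with the webcam
let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  background(0);
  video.loadPixels();
  prevFrame.loadPixels();
  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // compare the red channel as a rough stand-in for brightness
    const diff = abs(video.pixels[i] - prevFrame.pixels[i]);
    if (diff > 30) movement++;             // arbitrary threshold
  }
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);  // remember this frame
  fill(255);
  text("changed pixels: " + movement, 10, 20);
}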

Thinking about how this connects to interactive art, I can see both exciting and uneasy sides. On one hand, artists can use tracking to make playful or immersive experiences, like games that respond to your body or installations that react to your movements. That feels fun, creative, and even magical. But at the same time, the same tracking tools can be used for surveillance, like watching people without their consent or profiling them. That tension makes interactive art more thought-provoking, because it forces us to see how technology can be both empowering and controlling.

For me, this makes computer vision in art feel very alive; it’s not just about coding tricks, but about what message the artist is trying to send and how it makes us reflect on being “seen” by machines.

Midterm Draft

Concept 

I’m making a small interactive vertical Chicken Invaders style shooter game. The player controls a spaceship at the bottom of the screen and can move up, down, left, or right while shooting bullets upward. Chickens (enemies) continuously descend from the top of the screen, and some randomly drop eggs that can damage the player.

The game starts with a start screen that tells the player how to play: “Arrow keys to move, SPACE to shoot. Press SPACE to start.” Once the player starts, waves of chickens begin descending. The player earns points by shooting chickens and can lose lives if hit by enemy eggs. After losing all lives, the game transitions to a Game Over screen, showing the final score and instructions to restart by pressing ENTER.

Player interaction is simple and intuitive (a minimal input sketch follows this list):

  • Arrow keys: move the spaceship in all directions

  • SPACE: shoot bullets upward

  • Collision detection: bullets destroy enemies, eggs damage player

  • Score and lives display updates in real time

  • Continuous spawning keeps the game dynamic and engaging
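
As mentioned above, a minimal sketch of this input handling in p5.js; the player, bullets, and gameState names match my design below, while the speed and bounds are placeholders.

// Sketch: arrow-key movement, SPACE to shoot, constrained to the canvas
function handleInput() {
  if (keyIsDown(LEFT_ARROW))  player.x -= player.speed;
  if (keyIsDown(RIGHT_ARROW)) player.x += player.speed;
  if (keyIsDown(UP_ARROW))    player.y -= player.speed;
  if (keyIsDown(DOWN_ARROW))  player.y += player.speed;
  player.x = constrain(player.x, 0, width);   // keep the ship on screen
  player.y = constrain(player.y, 0, height);
}

function keyPressed() {
  if (key === ' ' && gameState === "playing") {
    bullets.push(new Bullet(player.x, player.y));  // shoot one bullet upward
  }
}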

Code Design

The game is designed in an object-oriented structure (a minimal game-state skeleton follows this list):

  1. Player class:
    • Controls the spaceship’s position, movement, and display
    • Constrains the player within canvas boundaries
  2. Bullet class:
    • Manages bullet position, movement, and collision with enemies
  3. Enemy class:
    • Controls chicken position, downward movement, and spawning
    • Randomly generates enemy bullets (eggs)
  4. EnemyBullet class:
    • Manages eggs dropped by enemies, movement, and collision with the player
  5. Main sketch:
    • Holds a gameState variable: "start" | "playing" | "gameover"
    • Delegates drawing and updates depending on the current game state
    • Spawns new enemies continuously and updates UI elements (score, lives)
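
As noted above, a minimal sketch of how the main sketch delegates on gameState; the helper function names (drawStartScreen(), updateAndDrawEnemies(), and so on) are placeholders for the real drawing and update code.

// Sketch: the draw loop delegates to the current game state
let gameState = "start";   // "start" | "playing" | "gameover"

function draw() {
  background(0);
  if (gameState === "start") {
    drawStartScreen();          // instructions + "Press SPACE to start"
  } else if (gameState === "playing") {
    handleInput();              // player movement and shooting
    updateAndDrawEnemies();     // Enemy and EnemyBullet updates
    updateAndDrawBullets();     // Bullet updates and collisions
    drawScoreAndLives();        // real-time UI
  } else {                      // "gameover"
    drawGameOverScreen();       // final score + "Press ENTER to restart"
  }
}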

Current Game:

Risk & Challenges

The scariest part was managing continuous enemy spawning and collision detection while ensuring smooth movement and responsive player controls. If either bullets or enemy collisions failed, the gameplay would feel broken, and the game could become frustrating.

How I reduced the risk:

  • I wrote small test sketches for bullets and enemy collisions first, confirming that collisions were detected reliably (a rough version appears after this list)

  • Spawn timers were implemented to gradually introduce enemies without overloading the canvas

  • Player boundaries were constrained using constrain() to avoid moving off-screen

  • The gameState system ensures that start, playing, and game over screens never interfere with gameplay logic
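
The test sketch mentioned in the first bullet looked roughly like this; the spawn interval, constructor arguments, and the circle-based overlap test are simplified placeholders.

// Sketch: timed enemy spawning and a distance-based collision test
let enemies = [];

function spawnEnemies() {                    // called from draw() while playing
  if (frameCount % 60 === 0) {               // roughly once per second at 60 fps
    enemies.push(new Enemy(random(width), -20));
  }
}

function hits(bullet, enemy) {
  // treat both objects as circles and compare center distance to combined radius
  return dist(bullet.x, bullet.y, enemy.x, enemy.y) < bullet.r + enemy.r;
}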

Result: collisions, enemy spawning, and player movement are smooth and reliable, so the core gameplay is stable and enjoyable.

Next Steps

  • Add background music that loops during gameplay and optional sound effects for shooting and enemy hits

  • Add visual enhancements: use emojis, icons, or sprites for player and chickens for more arcade-like feel

  • Add multiple enemy types: faster chickens, stronger enemies, or bonus targets

  • Implement power-ups: shields, rapid fire, or score multipliers

Reflection

Something that drives me crazy is when I see an “automatic” door, but it still has a handle that makes me pull it before it opens. There’s one like this in D2, and it confuses me every time. I never know if I should wait for it to open, or if I need to pull it myself. Sometimes I pull too early or too late, and it feels frustrating. This could be improved by making the design consistent: if the door is automatic, it shouldn’t need a pull at all, or at least the handle should be removed to avoid sending mixed signals.

Using Norman’s principles, this is a problem of bad signifiers. The handle suggests pulling, but the automation suggests not touching it. In interactive media, I can avoid this by making sure buttons, icons, or gestures clearly show what the user should do, without mixed signals. I could also apply discoverability and feedback; for example, if a button is loading, it should change color or show a spinner so the user knows the system is working. That way, people don’t waste time guessing what to do, just like they shouldn’t have to with a simple door.

This is what I feel and believe would help.

Week 4

Concept

For this week’s coding assignment, I wanted to experiment with generative text, focusing on movement and interactivity rather than static words. My goal was to create a visualization where the text on the screen feels like it’s floating, bouncing, and responding subtly to the viewer’s presence.

The concept I explored was simple: a set of words randomly placed on the canvas float gently in organic patterns. When the user moves the mouse nearby, the words grow slightly, creating a playful and responsive effect. I aimed to evoke a sense of motion and energy, so the text is not just something you read, but something you watch and interact with.

To achieve this, I combined techniques like sine and cosine functions for floating motion, randomized starting positions, and smooth interpolation to make the animation fluid. I also used color variation to give each word a soft, pastel-like appearance, creating a visually pleasing and lively scene.

Code I’m Most Proud Of

update() {
  // gentle floating motion
  this.x = this.baseX + cos(frameCount * this.speed + this.offset) * 20;
  this.y = this.baseY + sin(frameCount * this.speed + this.offset) * 20;

  // interactive effect: mouse proximity enlarges the word
  let d = dist(mouseX, mouseY, this.x, this.y);
  if (d < 100) {
    this.currentSize = this.size * map(d, 0, 100, 1.8, 1);
  } else {
    this.currentSize = this.size;
  }
}

display() {
  fill(this.color[0], this.color[1], this.color[2], 200);
  textSize(this.currentSize);
  text(this.txt, this.x, this.y);
}

This snippet stands out because it combines two important techniques learned in this assignment: smooth floating motion and mouse interactivity.

  • The use of sine and cosine functions allows each word to float naturally around its base position, giving the visualization a sense of organic movement rather than rigid animation.

  • The interactive mouse effect enhances engagement: words near the mouse subtly grow. This adds a playful and interactive quality to the text.
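
The smooth interpolation mentioned in the concept is not visible in the snippet above; one way to add it would be p5’s lerp(), easing currentSize toward its target instead of jumping (a small sketch reusing the same fields):

// Sketch: ease the word size toward its target instead of setting it directly
update() {
  // (floating-motion code from the snippet above would stay here)
  let target = this.size;
  let d = dist(mouseX, mouseY, this.x, this.y);
  if (d < 100) {
    target = this.size * map(d, 0, 100, 1.8, 1);           // bigger when the mouse is closer
  }
  this.currentSize = lerp(this.currentSize, target, 0.1);  // move 10% of the way each frame
}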

Final work:

Reflection

I’m happy with how the project looks now, but there are ways to make it even more fun and interactive. For example, each word could be broken into small points and animated individually, so the words could form on the screen or scatter when you move the mouse. The words could also have hover effects, like ripples, sparkles, or subtle movement when the mouse is nearby, making the interaction more engaging. Different words could behave differently as well, with action words bouncing faster while nouns float gently, and the words could change color or size based on sound or other data. Additionally, users could add their own words to the animation, creating a personalized experience. With these improvements, the text could go beyond just looking nice and become a fully interactive and playful visualization.

Reading Response

When I read “The Art of Interactive Design,” I thought about interactivity as a real conversation. It’s not about one side doing all the talking. It’s a loop where both sides listen, think about what was said, and then respond. If any part of that loop is weak, the whole interaction feels flat and broken.

For me, a strongly interactive system feels thoughtful and alive. It pays close attention to my actions, processes them in a meaningful way, and gives me a response that shows it “understood” me. It’s not just a simple reaction, like a light switch. It’s more like a good friend who remembers what you said earlier and brings it up again later.

Looking at my own p5 sketches, I realize they are mostly just reactive. They respond to a mouse click or a key press, but that’s it. They don’t really listen to how I do something, they don’t think or remember my past actions, and their response is often a simple, pre-set animation. There is no real dialogue.

To improve them, I need to focus on all three parts of the conversation. I want to make my sketches better listeners by paying attention to things like mouse speed or rhythm. I want to give them a simple memory so they can learn from my previous actions and change their behavior over time. Finally, I want their visual responses to feel more like an answer to what I just did, rather than a generic effect. My goal is to move from creating sketches that just react to creating sketches that feel like they are having a simple, but genuine, conversation with me.
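
As a first small step toward that kind of listening, here is a tiny p5.js sketch of tracking mouse speed and keeping a short memory of it; the threshold and window size are arbitrary placeholders.

// Sketch: track mouse speed and remember about one second of it
let recentSpeeds = [];

function draw() {
  const speed = dist(mouseX, mouseY, pmouseX, pmouseY);  // how far the mouse moved this frame
  recentSpeeds.push(speed);
  if (recentSpeeds.length > 60) recentSpeeds.shift();    // keep roughly one second of history

  const avg = recentSpeeds.reduce((a, b) => a + b, 0) / recentSpeeds.length;
  background(avg > 10 ? 240 : 20);                       // respond differently to calm vs. hectic input
}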

 

Week 3

In this week’s assignment, we were asked to create a generative artwork using objects and arrays. I have always liked seeing how small movements can make interesting patterns. At first, I thought about making chaotic particles flying everywhere, but that felt messy and confusing. So I decided to make particles orbit around a center point, which allowed me to practice using objects and arrays while keeping the artwork neat and easy to see.

I also wanted it to be interactive, so:

  • Clicking the mouse adds more particles, making the pattern more complex.

  • Moving the mouse slightly affects their movement, making it feel alive.

  • Particles change colors and pulse in size so they feel like a tiny living galaxy.

Part I Am Proud Of

// Particles react to mouse movement
let centerX = width / 2 + cos(this.angle) * this.radius;
let centerY = height / 2 + sin(this.angle) * this.radius;
let dx = mouseX - centerX;
let dy = mouseY - centerY;
this.radius += 0.0005 * sqrt(dx * dx + dy * dy); // subtle attraction to mouse

I am proud of this part because it makes the particles move a little toward the mouse. I also like how the colors slowly change.

It makes the particles feel like they are alive, instead of staying the same all the time.

// Particles change color over time
this.color[0] = (this.color[0] + 0.5) % 255;
this.color[1] = (this.color[1] + 0.3) % 255;
this.color[2] = (this.color[2] + 0.7) % 255;

How It Works / Functions

  • setup() → Creates the canvas and initial particles.

  • draw() → Updates and draws all particles every frame.

  • mousePressed() → Adds 5 new particles when clicked.

  • keyPressed() → Clears the canvas with ‘c’ or resets particles with ‘r’.

  • Particle.update() → Updates each particle’s motion, size, color, and mouse interaction.

  • Particle.display() → Draws each particle using its position and size.
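
Putting those functions together, a stripped-down skeleton of the sketch could look like this, with the Particle details reduced to the parts discussed above:

// Skeleton: orbiting particles stored in an array of objects
let particles = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 50; i++) particles.push(new Particle());
}

function draw() {
  background(0, 20);                 // slightly translucent so motion leaves faint trails (a stylistic choice)
  for (let p of particles) {
    p.update();                      // motion, size pulse, color shift, mouse pull
    p.display();
  }
}

function mousePressed() {
  for (let i = 0; i < 5; i++) particles.push(new Particle());  // add 5 new particles
}

function keyPressed() {
  if (key === 'c') background(0);    // clear the canvas
  if (key === 'r') {                 // reset to the initial particles
    particles = [];
    for (let i = 0; i < 50; i++) particles.push(new Particle());
  }
}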

Final work:

Reflection & Future Improvement

At first, I had too many ideas: interaction, different shapes, more complex movement. But I realized I needed to keep it simple. By focusing on particles orbiting with color and size changes, I made something that works well and looks nice.

I also learned that small changes can make a big difference. The particles reacting to the mouse and slowly changing color make the system feel alive. This project reminded me that sometimes simple ideas can be very powerful, especially when they are interactive and thoughtful.

Future Improvement:
In the future, I would like to add sound interaction, so the particles could respond to music or noise. I could also experiment with different shapes or trails instead of only circles, and maybe allow more mouse control or multiple attractor points. These changes could make the artwork even more dynamic and engaging, while keeping the smooth, organic feel.

Week 2 – Reading Reflection

After watching Casey Reas’ talk on chance operations, I realized how randomness can be a powerful tool in creative work. Instead of controlling every detail, allowing chance to play a role can lead to unexpected results that are often more interesting than what I would plan. In my own projects, I plan to experiment with random elements in small ways, like changing colors, shapes, or patterns based on a set of rules but leaving some outcomes to chance. This connects to our discussions in another class, Understanding IM, where we talked about using randomness as an algorithm in conceptual art, for example by letting the computer generate outcomes that are surprising but still guided by certain constraints or rules. I think the key is to find a balance between control and randomness, so the work feels intentional but still fresh and dynamic.

The talk also made me question how much control we really need in our work. Reas shows that too much control can limit creativity, but too much randomness can make a project feel chaotic. I found this idea challenging because I usually like to plan every step carefully. It makes me think about trying a more experimental approach, even if it feels uncomfortable at first. It raises questions for me about when to intervene in a random process and when to let the algorithm or chance take over.