Reflection – Week 9

When I read Physical Computing’s Greatest Hits and Misses and Making Interactive Art: Set the Stage, Then Shut Up and Listen, I started to think more deeply about what it really means to make something interactive. The first reading talked about how many beginner projects in physical computing repeat the same ideas, like using a sensor to make lights blink or to trigger sound. At first, I felt a little unsure because my own project also used simple tools like a light sensor and a button. But as I continued reading, I understood the real message: it’s okay to build something that has been done before, as long as I make it my own and give it a purpose. That made me feel more confident about my project. It reminded me that creativity doesn’t always mean doing something completely new, but doing something familiar in a meaningful or personal way.

The second reading focused on how interactive art should let people explore freely. It said that once we build something, we should “set the stage” and then step back, allowing others to interact with it in their own way. I really liked this idea because it made me think differently about my project. When I pressed the button or covered the light sensor, I realized that I was not just testing the circuit; I was actually engaging with it and discovering what it could do.

Both readings made me see that physical computing is not just about coding or wiring; it’s about creating an experience. It’s about giving people something they can explore and learn from on their own.

Analog Sensor

Concept

For this project, I used one analog sensor and one digital sensor (switch) to control two LED lights.

The analog sensor I used was a photoresistor (light sensor). Its resistance changes depending on how bright the room is, which changes the voltage the Arduino reads. The Arduino uses this reading to adjust the brightness of one LED: when it’s dark, the LED gets brighter, and when it’s bright, the LED becomes dimmer.

For the digital sensor, I used a pushbutton connected to a digital pin. When I press the button, it turns the second LED on or off.

To make it different from what we did in class, I added a “night light” feature. When the photoresistor detects that the room is very dark, the button-controlled LED automatically turns on, like a small night light. When the light comes back, the button goes back to working normally.

This made my project more interactive and closer to how real sensors are used in everyday devices.

Schematic of my circuit
It shows the Arduino connected to:

  • A photoresistor and 10 kΩ resistor forming a voltage divider to read light levels (see the note after this list).

  • A pushbutton connected to a digital pin.

  • Two LEDs, one controlled by the light sensor and the other controlled by the button.
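
A quick note on the voltage divider, assuming the photoresistor sits between 5 V and the analog pin and the 10 kΩ resistor goes to ground: the pin voltage is roughly Vout = 5 V × 10 kΩ / (10 kΩ + Rphoto). In the dark, the photoresistor’s resistance rises, so the analog reading drops; that is why the code below treats readings under about 300 as “dark.”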

Final Results

When I tested the circuit:

  • The first LED smoothly changed its brightness depending on how much light the photoresistor sensed.

  • The second LED turned on and off with the button as expected.

  • When the room got dark, the second LED automatically turned on, working like a night light.

It was a simple but satisfying project, and the extra feature made it stand out from the class example.

Video: video-url

Arduino Code

Part of the Code I am proud of

void loop() {
  // --- Read photoresistor ---
  int lightValue = analogRead(lightPin); // 0–1023
  int brightness = map(lightValue, 0, 1023, 255, 0);
  analogWrite(ledAnalog, brightness);

  // --- Button toggle ---
  if (digitalRead(buttonPin) == LOW) { // button pressed
    ledState = !ledState;              // toggle the stored on/off state
    delay(200);                        // simple debounce so one press toggles only once
  }

  // --- Night light feature ---
  if (lightValue < 300) { // If it's dark, auto turn on LED
    digitalWrite(ledDigital, HIGH);
  } else {
    digitalWrite(ledDigital, ledState ? HIGH : LOW);
  }

  // --- Print readings ---
  Serial.print("Light: ");
  Serial.print(lightValue);
  Serial.print(" | Brightness: ");
  Serial.print(brightness);
  Serial.print(" | LED State: ");
  Serial.println(ledState ? "ON" : "OFF");

  delay(200);
}

Github url: Github

Challenges and Further Improvements

While I was able to make both the analog and digital sensors work, I struggled a bit with arranging all the wires and resistors neatly on the breadboard. It took a few tries to get everything connected correctly.

I also had to test different threshold numbers for the night light feature to decide when the LED should automatically turn on. Once I found the right value, it worked well.

For my next project, I want to try using other kinds of sensors, like sound or temperature sensors, and make the circuit respond in new ways. I’ll also practice reading the code line by line to understand how each part works better before adding new features.

Reflection – Week 8

When I was reading these two articles side by side, I was struck by how they both explore the crucial role of human emotion in successful design, even though one is about teapots and the other about space travel.

Don Norman’s piece, “Emotion & Design,” argues that attractive things aren’t just a luxury; they actually function better. He explains that when we find a product pleasing, it puts us in a positive state of mind. This positive feeling makes us more tolerant of minor problems and more creative in solving them. I can see this in my own life; when I use a beautifully designed website, I feel more patient and engaged, and I don’t get frustrated easily. It’s not just about the tool working correctly, but about how it makes me feel while I’m using it.

This idea perfectly connects to the story of Margaret Hamilton. “Her Code Got Humans on the Moon” shows that the most brilliant technical system is useless if it doesn’t account for human nature. Hamilton understood that even the most highly trained astronauts were human and could make mistakes under immense pressure. Her fight to include error-checking code, which was initially dismissed, proved to be vital. Her software was designed with a deep understanding of human stress and fallibility, making it resilient and, in the end, heroic.

For me, the powerful lesson from both authors is that true excellence in any field requires blending logic with empathy. Norman shows us that beauty improves function by improving the user’s mindset. Hamilton shows us that anticipating human error is not a sign of weak design, but of strong, intelligent design. It reminds me that in my own work and studies, embracing creativity and understanding the human element is just as important as getting the technical details right.

Assignment

Concept

I created a simple switch using aluminum foil around my index finger and thumb. The goal was to make a switch that doesn’t use hands in the traditional sense; instead, it works by touching two parts of my own body together. When I bring my thumb and index finger together, the LED turns on. When they are apart, the LED turns off.

For this Arduino project, I wrapped foil around my thumb and index finger to extend the conductivity and connected each foil piece to the Arduino using jumper wires. This simple prototype shows how the human body can become part of an electronic circuit.

Link to video: Video

Highlight of the code

The code itself is simple. The Arduino reads the input from the foil on your fingers using digitalRead(). When your fingers touch (closing the circuit), it reads HIGH and turns on the LED. When you separate your fingers, the input reads LOW and the LED turns off.

int footSwitch = 2;   // Define the digital pin connected to the switch (foil pad)
int ledPin = 13;      // Define the digital pin connected to the LED

void setup() {
  pinMode(footSwitch, INPUT);   // Set the foot switch pin as an input to read HIGH/LOW
  pinMode(ledPin, OUTPUT);      // Set the LED pin as an output so we can turn it on/off
}

void loop() {
  int switchState = digitalRead(footSwitch);  // Read the current state of the switch

  if (switchState == HIGH) {    // If the switch is pressed (fingers touching)
    digitalWrite(ledPin, HIGH); // Turn the LED on
  } else {                      
    digitalWrite(ledPin, LOW);  // Otherwise, turn the LED off
  }
  
  delay(10); // Small delay to stabilize readings and avoid flickering
}

Reflection

This prototype is simple but effective. I noticed that the foil doesn’t always maintain perfect contact, so the LED sometimes flickers if the foil slips or my skin doesn’t touch the metal fully. I could improve this by using stretchable conductive tape to make contact more consistent.

Even with these small issues, it was exciting to see how my body can act as a switch. Using just fingers and foil, I was able to control the LED and experiment with a non-traditional, hands-free interaction. It’s a great demonstration of how electronics and the human body can be creatively combined in fun, unexpected ways.

Midterm Project

Concept

Color Gate Challenge is a fast and colorful reaction game where the player controls a glowing ball that changes colors to pass through matching gates. The idea came from color-matching and reflex games I used to play, but I wanted to create something that feels more modern and bright, with glowing effects and smooth motion.

The goal of the game is to move the ball through falling gates without hitting the walls. Each gate has a color, and the player must change their ball to the same color to pass safely. If you pass the wrong color or crash into the barrier, the game ends.

My Final Game:

Code Snippet and Parts I Am Proud Of:

One of the parts I am most proud of is how I control the player, the gates, and the full game system.
Each part of the game (player, gate, and main game) is built as a class, which made it easier to add new features later.

The player’s color can change, and the game checks if it matches the gate’s color before letting the player pass. This made the logic clear and fun.

if (this.player.checkCollision(gate)) {
  this.gameOver();
  return;
}

This simple check controls the whole challenge of the game.
If the player touches the wrong color or hits the gate walls, the game ends immediately.
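
For context, here is a minimal sketch of what a checkCollision() method like this could look like. The property names (gate.gapX, gate.gapWidth, this.colorIndex, and so on) are placeholders for illustration, not my final code.

// Hypothetical sketch: a gate is only safe where the gap is,
// and only if the player color matches the gate color.
checkCollision(gate) {
  const overlapsGate =
    this.y + this.radius > gate.y &&
    this.y - this.radius < gate.y + gate.height;
  if (!overlapsGate) return false;

  const insideGap =
    this.x > gate.gapX && this.x < gate.gapX + gate.gapWidth;

  // Collide if we hit the solid part, or pass the gap with the wrong color
  return !insideGap || this.colorIndex !== gate.color;
}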

I also added a color preview system that shows the next few gate colors, so the player can plan ahead.
It uses small color dots on the screen to help the player see which color to switch to next.

this.upcomingGates.forEach((gate, index) => {
  const dot = document.createElement('div');
  dot.className = 'color-dot';
  dot.style.background = this.getColorString(this.player.colors[gate.color]);
  colorPreview.appendChild(dot);
});

Another part I am proud of is how the speed control works. The player can press keys to make the gates fall faster or slower, and there is a live bar that shows the speed level. This made the game more interactive and customizable.
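
The speed-control code itself isn’t shown above, so here is a rough sketch of how key-based speed control with a live bar could work; the +/- keys and the speedBar element are assumptions for illustration.

// Hypothetical sketch: +/- keys change gate speed, a <div id="speedBar"> shows the level
let gateSpeed = 3;
const MIN_SPEED = 1;
const MAX_SPEED = 10;

function keyPressed() {
  if (key === '+') gateSpeed = min(gateSpeed + 1, MAX_SPEED);
  if (key === '-') gateSpeed = max(gateSpeed - 1, MIN_SPEED);

  // Update the live bar: width grows with the current speed
  const bar = document.getElementById('speedBar');
  bar.style.width = map(gateSpeed, MIN_SPEED, MAX_SPEED, 10, 100) + '%';
}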

Problems and Future Improvements

At first, the game was too easy because the gates were falling too slowly, and it didn’t feel challenging. I changed the gate speed and added random colors to make it more unpredictable and exciting.

Another problem was keeping everything in the right position when resizing the window. I had to fix the player’s size and position every time the screen changed, using the windowResized() function.
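
In p5.js that usually looks something like the sketch below; player.resize() is an assumed helper for recomputing the player’s size, not necessarily what my code calls it.

// Keep the canvas and the player in sync with the browser window
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);

  player.resize();                               // hypothetical helper: recompute size
  player.x = constrain(player.x, 0, width);      // clamp the player back on screen
  player.y = constrain(player.y, 0, height);
}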

In the future, I want to:

  • Add special gates that move sideways

  • Add sound effects for color switches and collisions

  • Add power-ups that give the player a shield or slow motion

  • Create a moving space background for more depth

  • Add more visual effects like explosions and particle trails

Reading Response

When I was reading ‘Computer Vision for Artists and Designers’, one thing that stood out to me was how computer vision is so different from human vision. For us, seeing feels natural; we can look at a scene and instantly recognize people, objects, emotions, or even context. But computers don’t “see” that way. For them, an image is just a bunch of pixels with numbers. They need step-by-step methods to figure out what’s moving, what belongs in the background, or what part of the picture matters.

The reading showed me some basic techniques that help computers track what we want. For example, frame differencing looks at what’s changed from one frame to the next, background subtraction compares the scene to a saved “empty” version to spot people, and brightness thresholding picks out objects based on how light or dark they are compared to the background. These sound simple, but they can be powerful if the physical setup is right, like having strong contrast or good lighting. I liked how the article points out that the environment matters just as much as the code.
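
To make the first technique concrete, here is a minimal p5.js sketch of frame differencing with a webcam; the threshold of 30 is just a guess to tune for your lighting.

// Light up the pixels that changed since the previous frame
let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);                      // keep the pixel array 1:1 with the canvas
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Use the red channel as a rough brightness value and compare frames
    const diff = abs(video.pixels[i] - prevFrame.pixels[i]);
    const moved = diff > 30 ? 255 : 0;  // assumed threshold
    pixels[i] = pixels[i + 1] = pixels[i + 2] = moved;
    pixels[i + 3] = 255;
  }
  updatePixels();
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);
}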

Thinking about how this connects to interactive art, I can see both exciting and uneasy sides. On one hand, artists can use tracking to make playful or immersive experiences, like games that respond to your body or installations that react to your movements. That feels fun, creative, and even magical. But at the same time, the same tracking tools can be used for surveillance, like watching people without their consent or profiling them. That tension makes interactive art more thought-provoking, because it forces us to see how technology can be both empowering and controlling.

For me, this makes computer vision in art feel very alive; it’s not just about coding tricks, but about what message the artist is trying to send and how it makes us reflect on being “seen” by machines.

Midterm Draft

Concept 

I’m making a small interactive vertical Chicken Invaders style shooter game. The player controls a spaceship at the bottom of the screen and can move up, down, left, or right while shooting bullets upward. Chickens (enemies) continuously descend from the top of the screen, and some randomly drop eggs that can damage the player.

The game starts with a start screen that tells the player how to play: “Arrow keys to move, SPACE to shoot. Press SPACE to start.” Once the player starts, waves of chickens begin descending. The player earns points by shooting chickens and can lose lives if hit by enemy eggs. After losing all lives, the game transitions to a Game Over screen, showing the final score and instructions to restart by pressing ENTER.

Player interaction is simple and intuitive:

  • Arrow keys: move the spaceship in all directions

  • SPACE: shoot bullets upward

  • Collision detection: bullets destroy enemies, eggs damage player

  • Score and lives display updates in real time

  • Continuous spawning keeps the game dynamic and engaging

Code Design

The game is designed in an object-oriented structure:

  1. Player class:
    • Controls the spaceship’s position, movement, and display
    • Constrains the player within canvas boundaries
  2. Bullet class:
    • Manages bullet position, movement, and collision with enemies
  3. Enemy class:
    • Controls chicken position, downward movement, and spawning
    • Randomly generates enemy bullets (eggs)
  4. EnemyBullet class:
    • Manages eggs dropped by enemies, movement, and collision with the player
  5. Main sketch:
    • Holds a gameState variable: "start" | "playing" | "gameover"
    • Delegates drawing and updates depending on the current game state (a rough sketch of this appears after the list)
    • Spawns new enemies continuously and updates UI elements (score, lives)
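
A rough sketch of that state delegation, assuming helper functions like drawStartScreen(), updateGame(), drawGame(), and drawGameOver() (the names are mine, not the final code):

let gameState = "start";   // "start" | "playing" | "gameover"

function draw() {
  background(0);
  if (gameState === "start") {
    drawStartScreen();      // instructions + "Press SPACE to start"
  } else if (gameState === "playing") {
    updateGame();           // move player, bullets, enemies; check collisions
    drawGame();             // render everything plus score and lives
  } else {
    drawGameOver();         // final score + "Press ENTER to restart"
  }
}

function keyPressed() {
  if (gameState === "start" && key === ' ') gameState = "playing";
  if (gameState === "gameover" && keyCode === ENTER) gameState = "start";
}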

Current Game:

Risk & Challenges

The scariest part was managing continuous enemy spawning and collision detection while ensuring smooth movement and responsive player controls. If either bullets or enemy collisions failed, the gameplay would feel broken, and the game could become frustrating.

How I reduced the risk:

  • I wrote small test sketches for bullets and enemy collisions first, confirming that collisions were detected reliably (see the sketch after this list)

  • Spawn timers were implemented to gradually introduce enemies without overloading the canvas

  • Player boundaries were constrained using constrain() to avoid moving off-screen (also shown in the sketch below)

  • The gameState system ensures that start, playing, and game over screens never interfere with gameplay logic
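
Roughly, those two checks look like this; the circle-overlap test with a radius r is an assumption, since the real sprites might use rectangles instead.

// Simple circle-overlap test used while prototyping bullet/enemy collisions
function hits(a, b) {
  return dist(a.x, a.y, b.x, b.y) < a.r + b.r;
}

// Keeping the player on screen, assuming it stores its width/height as w and h
player.x = constrain(player.x, 0, width - player.w);
player.y = constrain(player.y, 0, height - player.h);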

Result: collisions, enemy spawning, and player movement are smooth and reliable, so the core gameplay is stable and enjoyable.

Next Steps

  • Add background music that loops during gameplay and optional sound effects for shooting and enemy hits

  • Add visual enhancements: use emojis, icons, or sprites for player and chickens for more arcade-like feel

  • Add multiple enemy types: faster chickens, stronger enemies, or bonus targets

  • Implement power-ups: shields, rapid fire, or score multipliers

Reflection

Something that drives me crazy is when I see an “automatic” door, but it still has a handle that makes me pull it before it opens. There’s one like this in D2, and it confuses me every time. I never know if I should wait for it to open, or if I need to pull it myself. Sometimes I pull too early or too late, and it feels frustrating. This could be improved by making the design consistent: if the door is automatic, it shouldn’t need a pull at all, or at least the handle should be removed to avoid sending mixed signals.

Using Norman’s principles, this is a problem of bad signifiers. The handle suggests pulling, but the automation suggests not touching it. In interactive media, I can avoid this by making sure buttons, icons, or gestures clearly show what the user should do, without mixed signals. I could also apply discoverability and feedback; for example, if a button is loading, it should change color or show a spinner so the user knows the system is working. That way, people don’t waste time guessing what to do, just like they shouldn’t have to with a simple door.

This is what I feel and believe would help.

Week 4

Concept

For this week’s coding assignment, I wanted to experiment with generative text, focusing on movement and interactivity rather than static words. My goal was to create a visualization where text on the screen feels like it’s floating, bouncing, and responding subtly to the viewer’s presence.

The concept I explored was simple: a set of words randomly placed on the canvas float gently in organic patterns. When the user moves the mouse nearby, the words grow slightly, creating a playful and responsive effect. I aimed to evoke a sense of motion and energy, so the text is not just something you read, but something you watch and interact with.

To achieve this, I combined techniques like sine and cosine functions for floating motion, randomized starting positions, and smooth interpolation to make the animation fluid. I also used color variation to give each word a soft, pastel-like appearance, creating a visually pleasing and lively scene.

Code I’m Most Proud Of

update() {
  // gentle floating motion
  this.x = this.baseX + cos(frameCount * this.speed + this.offset) * 20;
  this.y = this.baseY + sin(frameCount * this.speed + this.offset) * 20;

  // interactive effect: mouse proximity enlarges the word
  let d = dist(mouseX, mouseY, this.x, this.y);
  if (d < 100) {
    this.currentSize = this.size * map(d, 0, 100, 1.8, 1);
  } else {
    this.currentSize = this.size;
  }
}

display() {
  fill(this.color[0], this.color[1], this.color[2], 200);
  textSize(this.currentSize);
  text(this.txt, this.x, this.y);
}

This snippet stands out because it combines two important techniques learned in this assignment: smooth floating motion and mouse interactivity.

  • The use of sine and cosine functions allows each word to float naturally around its base position, giving the visualization a sense of organic movement rather than rigid animation.

  • The interactive mouse effect enhances engagement: words near the mouse subtly grow. This adds a playful and interactive quality to the text.
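
The concept above also mentions smooth interpolation; one way to get that (not shown in the snippet) would be to ease the size change with p5’s lerp(), roughly like this:

// Hypothetical easing: move currentSize a fraction of the way toward its target each frame
let target = this.size;
if (dist(mouseX, mouseY, this.x, this.y) < 100) {
  target = this.size * 1.8;
}
this.currentSize = lerp(this.currentSize, target, 0.1);  // 0.1 = assumed easing factor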

Final work:

Reflection

I’m happy with how the project looks now, but there are ways to make it even more fun and interactive. For example, each word could be broken into small points and animated individually, so the words could form on the screen or scatter when you move the mouse. The words could also have hover effects, like ripples, sparkles, or subtle movement when the mouse is nearby, making the interaction more engaging. Different words could behave differently as well, with action words bouncing faster while nouns float gently, and the words could change color or size based on sound or other data. Additionally, users could add their own words to the animation, creating a personalized experience. With these improvements, the text could go beyond just looking nice and become a fully interactive and playful visualization.

Reading Response

When I read “The Art of Interactive Design”, I thought about interactivity as a real conversation. It’s not about one side doing all the talking. It’s a loop where both sides listen, think about what was said, and then respond. If any part of that loop is weak, the whole interaction feels flat and broken.

For me, a strongly interactive system feels thoughtful and alive. It pays close attention to my actions, processes them in a meaningful way, and gives me a response that shows it “understood” me. It’s not just a simple reaction, like a light switch. It’s more like a good friend who remembers what you said earlier and brings it up again later.

Looking at my own p5 sketches, I realize they are mostly just reactive. They respond to a mouse click or a key press, but that’s it. They don’t really listen to how I do something, they don’t think or remember my past actions, and their response is often a simple, pre-set animation. There is no real dialogue.

To improve them, I need to focus on all three parts of the conversation. I want to make my sketches better listeners by paying attention to things like mouse speed or rhythm. I want to give them a simple memory so they can learn from my previous actions and change their behavior over time. Finally, I want their visual responses to feel more like an answer to what I just did, rather than a generic effect. My goal is to move from creating sketches that just react to creating sketches that feel like they are having a simple, but genuine, conversation with me.