Midterm Progress

For my midterm project, I’m making a driving game where the player controls a car and navigates through traffic. The car stays in place on the screen, and the world around it moves to create the illusion of motion. Players can control their car’s lane position and adjust their speed with the arrow keys.

The design centers on creating a believable driving experience in which the player is encouraged to drive recklessly. The player manages their speed while avoiding collisions with NPC cars, which follow traffic-like behavior with different speeds depending on their lanes. Headlights add an element of strategy, as flashing them can occasionally cause a slower car in front to switch lanes. Speed cameras introduce a sense of tension by encouraging players to manage speed carefully.

The code is organized with:

    • A Player class for position, speed, and headlights.

    • An EnemyCar class for NPC vehicles with varying lane speeds.

    • A Road and Environment system that handles moving lane markings, roadside trees, and cameras to simulate forward motion.

    • A main game loop that updates movement, detects collisions, and handles interactions between the classes (e.g., a Radar object detecting the Player, or the Player tailgating an EnemyCar); a sketch of this structure follows below.
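Since the post doesn't include the classes themselves, here is a minimal sketch of how that structure could look in p5.js. The field names, probabilities, and lane logic are my own assumptions for illustration, not the actual implementation:

class Player {
  constructor() {
    this.lane = 1;           // 0..2, left to right
    this.speed = 5;          // adjusted with the arrow keys
    this.headlightsOn = false;
  }
  flashHeadlights(carAhead) {
    // Occasionally convinces a slower car ahead to switch lanes.
    if (random() < 0.3) carAhead.switchLane();
  }
}

class EnemyCar {
  constructor(lane) {
    this.lane = lane;
    this.speed = 3 + lane;   // each lane has its own base speed
    this.y = -50;
  }
  switchLane() {
    this.lane = constrain(this.lane + (random() < 0.5 ? -1 : 1), 0, 2);
  }
  update(playerSpeed) {
    // Relative motion: cars slower than the player drift down the screen.
    this.y += playerSpeed - this.speed;
  }
}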

The most complex part so far has been creating the illusion of motion. Since the player’s car stays in one place, I needed to make the road, lane markings, trees, and other cars move dynamically to trick the eye into feeling like the player is driving forward. To reduce the risk of this failing, I prototyped the moving background elements and tested different speeds for roadside objects versus lane lines. These tests helped me establish a convincing sense of motion, which makes the game feel much more immersive.
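To make the parallax idea concrete, here is a stripped-down p5.js sketch of the approach (the speeds and spacing are placeholder values, not the project's actual numbers): roadside objects scroll faster than lane markings, and both scale with the player's speed, which is what sells the forward motion.

let playerSpeed = 5;  // set by the arrow keys in the full game
let laneY = 0;        // scroll offset for lane markings
let treeY = 0;        // scroll offset for roadside trees

function setup() {
  createCanvas(400, 600);
}

function draw() {
  background(60);
  // Nearer objects scroll faster than the lane lines (simple parallax).
  laneY = (laneY + playerSpeed) % 40;
  treeY = (treeY + playerSpeed * 1.5) % 120;

  // Dashed lane markings
  stroke(255);
  for (let y = laneY - 40; y < height; y += 40) {
    line(width / 2, y, width / 2, y + 20);
  }

  // Roadside trees as simple circles
  noStroke();
  fill(30, 120, 40);
  for (let y = treeY - 120; y < height; y += 120) {
    circle(40, y, 30);
    circle(width - 40, y, 30);
  }
}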

Sketch so far

Week 5 Reading Reflection

Vision varies from human to human. Some people might not be able to distinguish colors properly; others may need glasses to focus on small details or on things slightly farther away. One thing remains consistent, though: what humans see is not what they choose to see, but what is in front of them. They can’t see a pool if there is no pool in front of them; if there is a house, they will see a house. Even so, human vision varies with different factors: brightness, the surrounding colors, the state of an individual’s eyes, and more. Computer vision, on the other hand, relies mostly on algorithms, pattern and feature recognition, and similar processes. As described in the text, low-level computer vision techniques may not even register a person or object in a video, making them flawed and susceptible to crucial mistakes if not programmed properly. Nonetheless, this arguably shares a trait with humans whose vision is not 20/20, hinting that computer and human vision are both prone to flaws.

One technique I found especially interesting, aside from the others mentioned in the reading, such as controlling environmental conditions, tracking patterns of pixels, and frame differencing or background subtraction, is the use of a “telecentric” lens to improve the performance of certain kinds of shape-based or size-based object recognition algorithms. I have always known that every component of a computer vision system has a distinct role; however, reading about how this lens serves such a meticulous and specific task taught me the importance of each piece in the larger goal of achieving successful computer vision.

Personally, I think a computer’s capacity for tracking and surveillance can be quite effective in interactive art, especially when it comes to immersive experiences. Its ability to “modify physical conditions” and to capture things such as movement or gestures can be used to create responses to human actions, which can be explored in immersive installations. Hence, like many immersive experiences, these responsive systems allow participants to enhance the value of the experience, conveying a more complex and meaningful kind of interactive artwork.

Reading Reflection – Week 5

I hadn’t learned much about computer vision before reading this article, except that it’s about enabling computers to “see” and understand visual data. What struck me most was the irony in how computer vision compares to human vision. Computers can be unbelievably precise, catching patterns, colors, or tiny movements that a human eye might overlook. But they can also completely miss things we find easy, like recognizing a familiar face in poor lighting or spotting an object that’s just been turned around. To me, that contrast really shows how naturally flexible our own vision is, while computer vision feels much more rigid and depends on whether the physical conditions it was trained for are perfectly reproduced. I came to see how important it is to design the physical environment when working with computer vision so that the system can function more effectively – for instance, arranging lighting, colors, and contrasts to make key features easily detectable.

As I thought more about computer vision, I realized that helping a computer “see” is not just about coding; it’s also about choosing the right technique for the specific task. Methods like color tracking or motion detection can work really well, but they each have their weak spots. For instance, a color tracker could get confused if the background shares the same shade, and a motion detector could be thrown off by something moving in the background that has nothing to do with the task. It made me see how much depends on matching the method to the situation; otherwise, the results fall apart.

When it comes to interactive art, computer vision’s ability to track and even surveil people feels both fascinating and a bit worrying. Tracking can make an artwork feel alive—it follows your gestures and responds in ways that draw you in. But surveillance goes further than that. It suggests being constantly observed, maybe even recorded, without realizing it. This is where the concern for privacy creeps in. One artwork that shows this really well is Rafael Lozano-Hemmer’s Surface Tension. In this piece, a large projected human eye follows you around the room. At first, it feels playful and interactive, like the artwork is aware of you, but the longer you stay, the more uncomfortable it becomes. I guess this tension is what makes interactive art with computer vision so compelling, because it not only provides us with an engaging experience but also pushes us to reflect on the aspect of being monitored.

Reading Response

When I was reading ‘Computer Vision for Artists and Designers’, one thing that stood out to me was how different computer vision is from human vision. For us, seeing feels natural: we can look at a scene and instantly recognize people, objects, emotions, or even context. But computers don’t “see” that way. For them, an image is just a bunch of pixels holding numbers. They need step-by-step methods to figure out what’s moving, what belongs to the background, or what part of the picture matters.

The reading showed me some basic techniques that help computers track what we want. For example, frame differencing looks at what’s changed from one frame to the next, background subtraction compares the scene to a saved “empty” version to spot people, and brightness thresholding picks out objects based on how light or dark they are compared to the background. These sound simple, but they can be powerful if the physical setup is right, like having strong contrast or good lighting. I liked how the article points out that the environment matters just as much as the code.
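To make the first of these concrete, here is a minimal p5.js sketch of frame differencing; it's my own illustration rather than code from the article. It sums the per-pixel change between the current and previous webcam frames, giving a rough measure of how much motion just happened.

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  let motion = 0;
  if (prevFrame) {
    for (let i = 0; i < video.pixels.length; i += 4) {
      // Compare the red channel between frames as a cheap brightness proxy.
      motion += abs(video.pixels[i] - prevFrame[i]);
    }
  }
  prevFrame = video.pixels.slice();
  // More frame-to-frame change means a bigger circle.
  fill(255, 0, 0);
  noStroke();
  circle(width / 2, height / 2, motion / 50000);
}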

Thinking about how this connects to interactive art, I can see both exciting and uneasy sides. On one hand, artists can use tracking to make playful or immersive experiences, like games that respond to your body or installations that react to your movements. That feels fun, creative, and even magical. But at the same time, the same tracking tools can be used for surveillance, like watching people without their consent or profiling them. That tension makes interactive art more thought-provoking, because it forces us to see how technology can be both empowering and controlling.

For me, this makes computer vision in art feel very alive: it’s not just about coding tricks, but about what message the artist is trying to send and how it makes us reflect on being “seen” by machines.

Computer Vision for Artists and Designers Reading Response

Q1: What are some of the ways that computer vision differs from human vision?

Humans see in a flexible and intuitive way. We can recognize a friend even if they’re in the shadows, wearing different clothes, or drawn as a cartoon. Computers, on the other hand, are much more rigid. They need clear cues, like the right lighting, steady backgrounds, and often lots of training data, just to recognize something we would see instantly. Computers don’t bring context or common sense either. If I see someone running, I might guess they’re late or playing a sport; a computer just sees moving shapes. This difference means that in art, computer vision often works best when the artist designs the environment to make it easier for the machine to see, even though the machine still won’t interpret the scene the way humans do.

Q2: What are some techniques we can use to help the computer see / track what we’re interested in? 

On the technical side, artists use things like motion detection (spotting what changes between frames), background subtraction (separating a moving person from a still background), or color filters (tracking a red ball). More advanced tools can follow body joints or estimate a skeleton, which is great for dance or performance. But beyond algorithms, the environment is just as important. If you give the system good lighting and a solid backdrop, or make participants wear bright colors, it can focus on one thing and spot it more easily. It’s less about forcing the computer to be “smart” and more about designing the whole setup so the vision works smoothly.
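As a small illustration of the color-filter idea (my own hedged sketch, not from the reading): scan the webcam pixels for the one closest to a target color and mark it. The target color and sampling step are arbitrary choices.

let video;
const target = [255, 0, 0]; // track the reddest pixel

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  let best = Infinity, bx = 0, by = 0;
  for (let y = 0; y < video.height; y += 4) {    // sample every 4th pixel for speed
    for (let x = 0; x < video.width; x += 4) {
      const i = 4 * (y * video.width + x);
      const d = dist(video.pixels[i], video.pixels[i + 1], video.pixels[i + 2],
                     target[0], target[1], target[2]);
      if (d < best) { best = d; bx = x; by = y; }
    }
  }
  noFill();
  stroke(0, 255, 0);
  circle(bx, by, 20); // mark the closest match
}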

Q3: How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

Computer vision’s ability to track people is both a strength and a challenge for interactive art. On the positive side, tracking makes the art responsive, installations can change based on where you move, how you gesture, or even how many people are in the space. This creates a playful, engaging experience that feels alive. But because the same technology is often used for surveillance, it can also make people feel watched. That changes how the audience experiences the artwork, sometimes it’s fun, but sometimes it raises concerns about privacy. Many artists lean into this tension: some use tracking purely for interaction, while others use it to make us think critically about how surveillance works in our daily lives.

Midterm Draft

Concept 

I’m making a small interactive vertical shooter in the style of Chicken Invaders. The player controls a spaceship at the bottom of the screen and can move up, down, left, or right while shooting bullets upward. Chickens (enemies) continuously descend from the top of the screen, and some randomly drop eggs that can damage the player.

The game starts with a start screen that tells the player how to play: “Arrow keys to move, SPACE to shoot. Press SPACE to start.” Once the player starts, waves of chickens begin descending. The player earns points by shooting chickens and can lose lives if hit by enemy eggs. After losing all lives, the game transitions to a Game Over screen, showing the final score and instructions to restart by pressing ENTER.

Player interaction is simple and intuitive:

  • Arrow keys: move the spaceship in all directions

  • SPACE: shoot bullets upward

  • Collision detection: bullets destroy enemies, eggs damage the player (see the sketch after this list)

  • Score and lives display updates in real time

  • Continuous spawning keeps the game dynamic and engaging
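A hedged sketch of the bullet-enemy overlap test (my own minimal version; the actual class fields may differ). Treating both objects as circles keeps the check to a single distance comparison:

// Circle-vs-circle collision, assuming each object stores x, y, and a radius r.
function hits(bullet, enemy) {
  return dist(bullet.x, bullet.y, enemy.x, enemy.y) < bullet.r + enemy.r;
}

// In the main loop: iterate backwards so splice() doesn't skip elements.
for (let i = bullets.length - 1; i >= 0; i--) {
  for (let j = enemies.length - 1; j >= 0; j--) {
    if (hits(bullets[i], enemies[j])) {
      bullets.splice(i, 1);
      enemies.splice(j, 1);
      score += 10;
      break; // this bullet is gone; move on to the next one
    }
  }
}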

Code Design

The game is designed in an object-oriented structure:

  1. Player class:
    • Controls the spaceship’s position, movement, and display
    • Constrains the player within canvas boundaries
  2. Bullet class:
    • Manages bullet position, movement, and collision with enemies
  3. Enemy class:
    • Controls chicken position, downward movement, and spawning
    • Randomly generates enemy bullets (eggs)
  4. EnemyBullet class:
    • Manages eggs dropped by enemies, movement, and collision with the player
  5. Main sketch:
    • Holds a gameState variable: "start" | "playing" | "gameover"
    • Delegates drawing and updates depending on the current game state (see the sketch after this list)
    • Spawns new enemies continuously and updates UI elements (score, lives)
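Here is a hedged sketch of that state delegation (the screen text comes from the description above; the structure itself is my reconstruction):

let gameState = "start"; // "start" | "playing" | "gameover"
let score = 0;

function setup() {
  createCanvas(400, 600);
  textSize(16);
}

function draw() {
  background(0);
  fill(255);
  if (gameState === "start") {
    text("Arrow keys to move, SPACE to shoot. Press SPACE to start.", 10, height / 2);
  } else if (gameState === "playing") {
    // update player, bullets, enemies, and collisions here
    text("Score: " + score, 10, 20);
  } else {
    text("Game over! Score: " + score + ". Press ENTER to restart.", 10, height / 2);
  }
}

function keyPressed() {
  if (gameState === "start" && key === ' ') gameState = "playing";
  else if (gameState === "gameover" && keyCode === ENTER) {
    score = 0;
    gameState = "start";
  }
}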

Current Game:

Risk & Challenges

The scariest part was managing continuous enemy spawning and collision detection while ensuring smooth movement and responsive player controls. If either bullets or enemy collisions failed, the gameplay would feel broken, and the game could become frustrating.

How I reduced the risk:

  • I wrote small test sketches for bullets and enemy collisions first, confirming that collisions were detected reliably

  • Spawn timers were implemented to gradually introduce enemies without overloading the canvas (see the sketch after this list)

  • Player boundaries were constrained using constrain() to avoid moving off-screen

  • The gameState system ensures that start, playing, and game over screens never interfere with gameplay logic
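Here is a hedged sketch of the spawn-timer and constrain() techniques from the list above; the interval, speeds, and the player's fields (x, y, w, h) are my own placeholder choices:

let enemies = [];
let lastSpawn = 0;
const SPAWN_INTERVAL = 1500; // ms between spawns

function updateSpawning() {
  // Timer-based spawning: one enemy per interval, so the canvas
  // never floods regardless of frame rate.
  if (millis() - lastSpawn > SPAWN_INTERVAL) {
    enemies.push({ x: random(20, width - 20), y: -20 });
    lastSpawn = millis();
  }
}

function movePlayer(player) {
  if (keyIsDown(LEFT_ARROW))  player.x -= 5;
  if (keyIsDown(RIGHT_ARROW)) player.x += 5;
  if (keyIsDown(UP_ARROW))    player.y -= 5;
  if (keyIsDown(DOWN_ARROW))  player.y += 5;
  // constrain() clamps the ship to the canvas on both axes.
  player.x = constrain(player.x, 0, width - player.w);
  player.y = constrain(player.y, 0, height - player.h);
}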

Result: collisions, enemy spawning, and player movement are smooth and reliable, so the core gameplay is stable and enjoyable.

Next Steps

  • Add background music that loops during gameplay and optional sound effects for shooting and enemy hits

  • Add visual enhancements: use emojis, icons, or sprites for player and chickens for more arcade-like feel

  • Add multiple enemy types: faster chickens, stronger enemies, or bonus targets

  • Implement power-ups: shields, rapid fire, or score multipliers

Midterm Progress

Concept:

I was inspired by a dress-up game I loved playing as a child (image attached below). In my version, players can click on a character’s shirt, pants, and shoes to change their colors and create different outfits. The game is simple and interactive, allowing players to explore many combinations and experiment with different styles. It has a start screen to begin the game, a dressing area where the character is displayed against a calm background, and an end screen with a restart option so the game can be played again. I wanted to recreate the fun and creativity I experienced as a child while keeping the game easy to use and visually pleasing, capturing the joy of trying new looks and making choices in a playful and colorful way.

Inspiration:

Design and User Interaction:

The game has a simple and playful design, using soft and pleasant colors to create a friendly atmosphere. Most of the visuals are based on images that I created myself and uploaded into the project. Currently, the character is represented with basic shapes that change color when the user clicks on them, giving a simple interactive experience. In the future, I plan to replace the shapes with a full mannequin, where each part of her clothing will change whenever clicked. I will also create and upload a full set of clothing images into the p5 sketch, allowing for a more detailed and visually appealing dress-up experience.

The game itself is highly interactive, allowing players to click directly on the character’s shirt, pants, and shoes to change colors. Buttons like PLAY, FINISH, and RESTART respond to clicks to move between game states. This interactivity makes the game engaging, as players can experiment with different outfit combinations and immediately see the results on the character.

function mouseClicked() {
  if (gameState === "start" && playButton.clicked(mouseX, mouseY)) {
    playButton.action();
  }
  else if (gameState === "playing") {
    // Change clothes when clicking on body parts
    if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 300 && mouseY < 400) {
      currentPants = (currentPants + 1) % pantsColors.length; // Change pants
    }
    else if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 150 && mouseY < 300) {
      currentShirt = (currentShirt + 1) % shirtColors.length; // Change shirt
    }
    else if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 400 && mouseY < 500) {
      currentShoes = (currentShoes + 1) % shoesColors.length; // Change shoes
    }
  }
}

Sketch:

The most frightening part and what I did to reduce this risk 

One of the most challenging parts was managing the game states (start screen, playing screen, and end screen), because each screen had different buttons and interactions. It was difficult to make sure the right buttons showed at the right time and that clicking them led to the correct screen.

I solved this by creating a Button class to handle all clickable buttons in the same way and by using separate functions to draw each screen. This kept the code organized and made it easy to add or change buttons later. I also made sure the mouseClicked() function only responded to buttons for the current screen. As a result, the screens change smoothly, the game feels easy to use, and the interactions are clear for the player.
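Here is a minimal sketch of what such a Button class might look like, reconstructed from how playButton.clicked() and playButton.action() are used in the code above; the actual fields may differ:

class Button {
  constructor(x, y, w, h, label, action) {
    this.x = x; this.y = y;
    this.w = w; this.h = h;
    this.label = label;
    this.action = action; // callback to run when clicked
  }
  display() {
    rect(this.x, this.y, this.w, this.h, 8);
    textAlign(CENTER, CENTER);
    text(this.label, this.x + this.w / 2, this.y + this.h / 2);
  }
  clicked(mx, my) {
    // True when (mx, my) falls inside the button's rectangle.
    return mx > this.x && mx < this.x + this.w &&
           my > this.y && my < this.y + this.h;
  }
}

// e.g., playButton = new Button(150, 400, 100, 40, "PLAY", () => gameState = "playing");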

Reflection and Future Improvements:

So far, I really like the concept of my project and I’m excited to experiment and see the final result. For future improvements, I plan to replace the simple shapes with detailed images of the clothes, making the character and outfits more visually appealing. I also want to add sound effects that play when the user clicks on each clothing item to make the game more interactive. Additionally, I would like the character to have a speech bubble with text whenever the user clicks on her. These features will make the game more dynamic, engaging, and fun for players.

Reading Reflection – Week 5

In this reading, Levin argues that the tools, techniques, and physical setups required for basic computer vision are no longer exclusive to specialists; they can now be used by students and artists. Reading this article really got me thinking about how different human vision is from computer vision.

One way computer vision differs from human vision is that computers don’t have a built-in understanding; they just see pixels and values. They need algorithms to interpret color, brightness, motion, etc. Humans, on the other hand, have an innate understanding.

Some techniques we can use to help the computer see and track what we’re interested in include frame differencing (detecting motion by looking at changes between frames), background subtraction (using a static background and removing it to isolate objects), and brightness thresholding (using contrast to separate what you want from what you don’t).
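As a toy illustration of the last technique (my own sketch, not code from the article), brightness thresholding reduces each webcam pixel to black or white based on a cutoff value:

let video;
const THRESHOLD = 128; // brightness cutoff (an assumed value)

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep canvas and video pixel arrays the same length
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Average the RGB channels, then snap to black or white.
    const bright = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    const v = bright > THRESHOLD ? 255 : 0;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}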

Computer vision’s capacity for tracking and surveillance has its pros and cons. On the positive side, tracking allows for more responsive, immersive, and personal interactive art. On the downside, issues of privacy and consent arise. If art is tracking people, there’s always a risk of misuse and discomfort.

Week 5: Midterm Progress

Concept

For my midterm project, I’ve decided to make a Star Wars-themed game where the user plays as Luke Skywalker. The aim of the game is to defeat the Stormtroopers on the spaceship. The aesthetic of my game is going to look similar to the image below; I’m going for a simple pixel-art style.

Reference image: gang-wars pixel art

I got the inspiration for my game’s mechanics from Street Fighter. But instead of two players fighting against each other, I’ll make it a one-player game fighting against NPC enemies.

I’m thinking of making my game look something like this:

Street Fighter II The World Warrior! / KEN Longplay / 4K HD 60 FPS - YouTube

Design and User Interaction

  • The player (Luke) will be able to move and attack.
  • The Stormtroopers will spawn on the opposite side of the screen and run towards Luke.
  • The game will include a time limit: the player fights until all their lives/HP are gone, or tries to maximize points within the time limit.

Challenges and Risks

The most complex part is probably figuring out how the Stormtroopers attack. I’m planning to program them in a single class that spawns all of the enemies and drives their attack patterns. I’m also uncertain about the scoring: whether to go for an HP/lives system, or a points system where the player earns points for each enemy defeated. Either way, I’ll include a time limit.

To minimize the risk, I’m planning to divide the player and NPCs into two separate classes. I’ll also experiment with a simple collision detection test to confirm that Luke’s attacks properly register against enemies.
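A hedged version of what that collision test could look like, using axis-aligned rectangle overlap; the hitbox fields here are hypothetical placeholders:

// True when Luke's attack hitbox intersects a Stormtrooper's body box.
function attackHits(hitbox, trooper) {
  return hitbox.x < trooper.x + trooper.w &&
         hitbox.x + hitbox.w > trooper.x &&
         hitbox.y < trooper.y + trooper.h &&
         hitbox.y + hitbox.h > trooper.y;
}

// e.g., during an attack frame:
// if (attackHits(luke.attackBox(), troopers[i])) troopers[i].hp -= 1;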


This is what my sketch looks like so far:

Midterm Project Progress



Concept

For my midterm, I wanted to make a project that isn’t just fun, but also means something to me, feels useful to create, and reflects my interests: combining women’s empowerment, sports, and music. Because this project allows so much freedom, my mind kept jumping from idea to idea. I finally decided on a concept that felt meaningful and achievable: an interactive story experience where users guide a female athlete through multiple sports — football, basketball, and martial arts — while overcoming challenges in the form of negative comments from people, culminating in a celebratory dance that reflects the user’s gameplay.

I also wanted to incorporate music, one of my biggest passions, into the gameplay. Each action the user takes (scoring a goal, making a basket, or landing a punch) will add a layer of sound to a final anthem, letting players create a rhythm-based celebration that reflects their journey and success.
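One way to build that layering with p5.sound (a hedged sketch; the file names are placeholders I made up, and each loop is assumed to share the same tempo and key):

let layers = []; // loops unlocked so far
let goalBeat, basketBeat, punchBeat;

function preload() {
  goalBeat = loadSound('goal-beat.mp3');
  basketBeat = loadSound('basket-beat.mp3');
  punchBeat = loadSound('punch-beat.mp3');
}

function addLayer(sound) {
  // Each successful action unlocks one loop; together they form the anthem.
  if (!layers.includes(sound)) {
    layers.push(sound);
    sound.loop();
  }
}

// e.g., call addLayer(goalBeat) when the user scores in the football stage.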

To make the experience authentic and inspiring, I decided to use real female athletes as the characters for each stage: Sam Kerr for football, Breanna Stewart for basketball, and Ronda Rousey, the first female UFC champion, for martial arts. Each athlete has a quick introduction before their stage to give context and highlight their achievements and the obstacles they’ve overcome.

My project aims to:

  • Educate users about the basics of each sport.
  • Empower women by breaking stereotypes and highlighting achievements.
  • Engage users through rhythm-based gameplay that integrates music and beats.
  • Inspire users to create a unique anthem reflecting the character’s journey.

Design

I want the design to be interactive and bold. The journey starts with a fullscreen instruction screen, and each stage is preceded by a quick introduction of the athlete, with a short text and a character.

The gameplay is divided into stages:

  1. Football: Users dribble and shoot as Sam Kerr. Negative comments appear as barriers, which shatter when the user scores, adding beats to the anthem.
  2. Basketball: Users control Breanna Stewart, shooting and dribbling while shattering barriers and adding beats to the music.
  3. Martial Arts: Users perform punches and kicks with Ronda Rousey, breaking barriers that add more beats to the anthem.
  4. Celebration / Dance: The beats from previous stages combine into a unique anthem, and the character performs dance moves that sync with the rhythm, celebrating the journey.

Visually, I’m keeping the design stylized and clean, using shapes for interactive objects. Sounds, like cheering, percussion, and piano, respond dynamically to user actions, making the experience engaging.

Frightening / Challenging Aspects

There are a few parts of this project that I’m worried might be tricky:

  • Making the rhythm-based gameplay work: I’m unsure about getting the sounds from scoring, punching, or breaking barriers to line up and feel smooth.
  • Creating each sport stage: Each sport has different moves and controls, and it might be hard to make them all work well.
  • Drawing the characters and barriers: Making all the athletes and objects using shapes in p5.js could take a lot of time and get complicated.
  • Combining all the beats into a final anthem: I’m unsure if the sounds from football, basketball, and martial arts will come together nicely to make a fun and satisfying rhythm at the end.

Risk Prevention

To manage the different sports stages, I will use separate layers so only the relevant athlete, objects, and barriers show up at the right time. For example, when the user moves from football to basketball, the football elements disappear and the basketball stage appears. Designing all the characters and barriers from scratch in p5.js could take a long time, so I started making them on separate canvases to work faster and keep things organized before combining them into the full project. As well as I started on the intro page. Making interactions feel natural is also tricky because I need to detect clicks, key presses, or movements on objects made of multiple shapes. I’ve started experimenting with this early to make sure the gameplay will feel smooth.