Midterm Progress

Concept

For my project, I have always been really inspired by the Japanese art of Kintsugi (金継ぎ). What I love about it is its philosophy: instead of hiding the cracks in broken pottery, it highlights them with gold. It treats the breakage and repair as a beautiful part of the object’s history, and I find that incredibly interesting. This is why I want to bring that idea into a digital space.

For now, I think I’ll call my project the “Kintsugi Canvas.” I don’t intend it to be a typical drawing tool; I want to build an interactive artwork where your first action is to break something. You shatter a digital canvas with a click, then watch as the program algorithmically mends the damage with simulated gold, creating a unique piece of art from the flaw you introduced. This is the base idea for now, but I’ve been thinking about what kind of object gets fixed: is it a bowl that the user breaks and then repairs (as seen in the photo below)? I haven’t decided yet, but I’m pretty sure it will be something along those lines. My goal is to make it feel like a game, so I want to fit all of these elements together.

The Art of Kintsugi: “In the fractures, we find our light.” — Zen Art Gallery

Gameplay Loop

The core gameplay loop is a simple, two-step journey that puts the player in the role of both destroyer and observer.

• Shatter: The player’s primary action. A single click on the blank canvas triggers a fracturing process. Cracks spread dynamically from the point of impact, breaking the canvas into unique polygonal shards.

• Mend: The game’s generative response. Once shattered, the system automatically traces the new cracks with an animated golden seam, “healing” the canvas. The player watches as a beautiful, final artwork emerges from the damage they initiated.

For now, until I decide on the actual technicalities of the game, I’ve kept the controls minimal and intuitive: click to start a new round, press ‘S’ to save your creation, and ‘R’ to reset the canvas. These will, of course, expand once I have settled on the idea completely.

So, to state it simply, the game will give users places to go, and from there they can shatter or mend objects. For now, I’m planning on a museum, bedrooms, personal spaces, and so on. Some examples I’m planning to incorporate as the base are:

[Concept image: a museum with paintings, in the same art style]

[Concept image: a cozy dorm room with a red recliner or comfortable chair, suitable for storytelling]

[Concept image: a Christmas tree and lights, in the same art style]

You simply choose an object, and you can shatter it, mend it, etc!

The Core Design Challenge

For the game to be compelling, the act of breaking the canvas has to feel satisfying and organic. The biggest technical and design challenge is probably generating crack patterns that look natural. A static or predictable pattern would make the core mechanic feel repetitive and robotic, ruining the player’s experience. The entire project hinges on making this moment of destruction feel alive.

The Solution

To tackle this, I’m planning a concept focused entirely on this procedural generation: a system of agents I call “growers.” Each grower is a point with a position and a direction that carves a path. By giving these agents a simple set of rules (move, wander slightly, branch occasionally, and die on collision), complex, organic patterns emerge, which is exactly what the project needs.
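
To test whether this can produce convincing cracks, a minimal p5.js sketch along these lines is what I have in mind (the class name, step size, and branch probability are placeholders I’ll keep tuning):

// A rough sketch of the "grower" idea: each agent walks forward, wanders a
// little, occasionally branches, and dies when it leaves the canvas. The gold
// stroke here is just a stand-in; the real mend pass would trace cracks separately.
let growers = [];

function setup() {
  createCanvas(600, 600);
  background(240);
}

function mousePressed() {
  // "Shatter": spawn a handful of growers at the click point, heading outward.
  for (let i = 0; i < 6; i++) {
    growers.push(new Grower(mouseX, mouseY, random(TWO_PI)));
  }
}

function draw() {
  stroke(212, 175, 55);
  strokeWeight(2);
  // Step only the growers that existed at the start of the frame;
  // newly branched ones start moving next frame.
  let count = growers.length;
  for (let i = 0; i < count; i++) {
    growers[i].step();
  }
  growers = growers.filter(g => g.alive);
}

class Grower {
  constructor(x, y, angle) {
    this.pos = createVector(x, y);
    this.angle = angle;
    this.alive = true;
  }

  step() {
    // Wander slightly, then move forward a small step, leaving a line behind.
    this.angle += random(-0.3, 0.3);
    let next = p5.Vector.add(this.pos, p5.Vector.fromAngle(this.angle).mult(3));
    line(this.pos.x, this.pos.y, next.x, next.y);
    this.pos = next;

    // Branch occasionally.
    if (random() < 0.02) {
      growers.push(new Grower(this.pos.x, this.pos.y, this.angle + random(-1, 1)));
    }

    // Die when leaving the canvas (a stand-in for "die on collision").
    if (this.pos.x < 0 || this.pos.x > width || this.pos.y < 0 || this.pos.y > height) {
      this.alive = false;
    }
  }
}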

 

Week 5: Midterm Progress

Concept

For my midterm project, I have decided to create an adventure mystery game that takes the user through an interactive journey in which they make decisions based on the clues provided to them. The game has different paths with different possible endings, giving the user control over the narrative of the story. The starting point is a long hallway where they receive their first clue before entering the room, with signifiers and feedback leading the user where they need to be. Once the door is opened, they are in the first room, where they make the first choice that decides which path they will take. Through this project, I hope to bring together everything we’ve learned so far, technically and conceptually, to bring the project to life and make it as engaging and interesting as possible.

Code Design

Character.js:
Represents a simple character with a body and head that can be drawn anywhere.

Hallway.js:
Draws the hallway, handles wall scaling, door animation, and hallway character.

Room.js:
Draws the rooms (intro, basement, office) and manages buttons and room characters.

Sketch.js:
Main controller that sets up p5.js, loads assets, switches between hallway and rooms, and handles input.
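
Roughly, this is how I imagine Sketch.js delegating between the pieces (a simplified outline; doorOpened() and handleClick() are placeholder method names, not the final code):

// Simplified outline of Sketch.js: switch between the hallway and the rooms.
let state = "hallway";   // "hallway" | "introRoom" | "basement" | "office"
let hallway, room;

function setup() {
  createCanvas(800, 600);
  hallway = new Hallway();
  room = new Room();
}

function draw() {
  if (state === "hallway") {
    hallway.display();
    // Once the door animation finishes, move into the first room.
    if (hallway.doorOpened()) {
      state = "introRoom";
    }
  } else {
    room.display(state);   // Room decides what to draw based on the state name
  }
}

function mousePressed() {
  if (state !== "hallway") {
    room.handleClick(mouseX, mouseY);   // buttons for the first choice
  }
}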

Current Progress

I have created an outline of the starting path of the game, particularly the hallway and the introductory room where the user makes their first choice, as well as a rough outline of what the first two rooms will be like. For the hallway, I have integrated sound effects and a short walking and door-opening animation to create an environment that draws the user in as soon as they start the game.

Challenges

The most difficult thing in this project so far was the animation of walking toward the door at the beginning, which required exact perspective scaling and smooth transitions. At first, the walls were not scaling consistently, and the door animation felt unnatural, making it hard to build a sense of depth for the viewer. I also had trouble keeping the door opening and the player’s movement properly synchronized, which caused awkward jumps between states. To get around that, I rewrote the code to include a Hallway class that concentrates all logic related to the hallway, introduced a scaling value based on wallDist to scale the walls proportionally, and smoothed out the door animation by increasing doorAnim in small increments. I also centralized the movement logic so that the footsteps and perspective changes stay in sync, which led to a smoother and more believable walking animation. Overall, I’d say making the different elements compatible is the most difficult part, but it becomes manageable with the right organization of the code through abstraction and encapsulation.
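
In simplified form, the fix looks roughly like this (placeholder rectangles and made-up numbers, not my actual hallway drawing code):

// Simplified sketch of the fix: one value (wallDist) drives the perspective
// scale, and doorAnim grows in small steps so the opening stays smooth.
let wallDist = 1000;  // distance to the far wall
let doorAnim = 0;     // 0 = closed, 1 = fully open
let walking = true;

function setup() {
  createCanvas(800, 600);
  rectMode(CENTER);
}

function draw() {
  background(20);

  if (walking && wallDist > 200) {
    wallDist -= 5;                 // walking toward the door
  } else if (doorAnim < 1) {
    doorAnim += 0.02;              // small increments -> smooth opening
  }

  // One scale factor, derived from wallDist, keeps walls and door in sync.
  let s = map(wallDist, 1000, 200, 0.4, 1.0);

  // Far wall grows as we approach.
  fill(60);
  rect(width / 2, height / 2, 400 * s, 300 * s);

  // Door narrows as doorAnim goes from 0 to 1, as if sliding open.
  fill(120, 80, 40);
  rect(width / 2, height / 2 + 50 * s, 100 * s * (1 - doorAnim), 200 * s);
}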

Reflections

Working on a rough outline of what I want my project to look like and creating a skeleton for my code was very beneficial in learning the right way to approach this project. I now have a clearer idea of the path I’d like to take with this work and what I want to focus on. My next step will be focusing on the narrative and the different storytelling elements that I will integrate to guide the user. I feel like I need to focus on creating a path that is easy to follow and come up with reasonable signifiers.

Midterm Progress

For my midterm project, I’m making a driving game where the player controls a car and navigates through traffic. The car stays in place on the screen, and the world around it moves to create the illusion of motion. Players can control their car’s lane position and adjust their speed with the arrow keys.

The design centers on creating a believable driving experience in which the player is encouraged to drive recklessly. The player manages their speed while avoiding collisions with NPC cars, which follow traffic-like behavior with different speeds depending on their lanes. Headlights add an element of strategy, as flashing them can occasionally cause a slower car in front to switch lanes. Speed cameras introduce a sense of tension by encouraging players to manage speed carefully.

The code is organized with:

    • A Player class for position, speed, and headlights.

    • An EnemyCar class for NPC vehicles with varying lane speeds.

    • A Road and Environment system that handles moving lane markings, roadside trees, and cameras to simulate forward motion.

    • A main game loop that updates movement, detects collisions, and handles interactions between different classes. (e.g. Radar object detecting the Player object, Player object tailgating the EnemyCar object).

The most complex part so far has been creating the illusion of motion. Since the player’s car stays in one place, I needed to make the road, lane markings, trees, and other cars move dynamically to trick the eye into feeling like the player is driving forward. To reduce the risk of this falling flat, I prototyped moving background elements and tested different speeds for roadside objects versus lane lines. These tests helped me establish a convincing sense of motion, which makes the game feel much more immersive.
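
One of those prototypes, stripped down, looked roughly like this (the speeds and spacing are just values I tried, not final):

// Test sketch: the car never moves; the lane dashes and trees scroll down
// at different speeds to sell the illusion of forward motion.
let roadSpeed = 10;      // lane markings scroll fast
let treeSpeed = 7;       // roadside objects scroll a bit slower
let dashOffset = 0;
let treeY = 0;

function setup() {
  createCanvas(400, 600);
}

function draw() {
  background(50);

  // Scroll the dashed center line.
  dashOffset = (dashOffset + roadSpeed) % 60;
  stroke(255, 230, 0);
  strokeWeight(6);
  for (let y = -60 + dashOffset; y < height; y += 60) {
    line(width / 2, y, width / 2, y + 30);
  }

  // Scroll a roadside "tree" at a slightly different speed.
  treeY = (treeY + treeSpeed) % height;
  noStroke();
  fill(30, 150, 60);
  circle(60, treeY, 40);

  // The player's car stays fixed near the bottom of the screen.
  fill(200, 30, 30);
  rect(width / 2 - 20, height - 120, 40, 70);
}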

Sketch so far

Week 5 Reading Reflection

Vision for humans can vary. Some might not be able to distinguish colors properly. Others may need glasses to focus on little details, or to see what is slightly farther away from them. However, one thing remains consistent: what humans see is not what they choose to see, but what is in front of them. They can’t see a pool if there is no pool in front of them. If there is a house, they will see a house. A human’s vision will also vary with different factors: the brightness, the colors around them, the state of the individual’s eyes, and more. On the other hand, computer vision relies mostly on algorithms, pattern and feature recognition, and similar processes. As described in the text, low-level computer vision processes may not even register a person or object in a video, making them flawed and susceptible to crucial mistakes if not programmed properly. Nonetheless, this arguably shares a similar trait with humans whose vision is not 20/20, hinting that computer and human vision are both prone to flaws.

One technique I found interesting, aside from the others mentioned in the reading (such as keeping track of environmental conditions, patterns of pixels, and frame differencing or background subtraction), is the use of a “telecentric” lens to improve the performance of certain kinds of shape-based or size-based object recognition algorithms. I have always known that every algorithm in a computer system has a distinct role. However, reading in detail about how this lens serves such a meticulous and specific task taught me the importance of each element in the larger effort of achieving successful computer vision.

Personally, I think computer vision’s capacity for tracking and surveillance can be quite effective in interactive art, especially when it comes to immersive experiences. Its ability to “modify physical conditions” and to capture things such as movement or gestures can be used to create responses to human actions, which can be explored in immersive installations. Hence, like many immersive experiences, these responsive systems would let participants get more value out of the experience, conveying a more complex and meaningful kind of interactive artwork.

Reading Reflection – Week 5

I hadn’t learned much about computer vision before reading this article, except that it was about enabling computers to “see” and understand visual data. What struck me most was the irony in how computer vision compares to human vision. Computers can be unbelievably precise, catching patterns, colors, or tiny movements that a human eye might overlook. But they can also completely miss things we find easy, like recognizing a familiar face in poor lighting or spotting an object that’s just been turned around. To me, that contrast really shows how naturally flexible our own vision is, while computer vision feels much more rigid and depends on whether the physical conditions it was trained for are perfectly reproduced. I came to see how important it is to design the physical environment when working with computer vision so that the system can function more effectively – for instance, arranging lighting, colors, and contrasts to make key features easily detectable.

As I thought more about computer vision, I realized that helping a computer “see” is not just about coding; it’s also about choosing the right technique for the specific task. Methods like color tracking or motion detection can work really well, but they each have their weak spots. For instance, a color tracker could get confused if the background shares the same shade, and a motion detector could be thrown off by something moving in the background that has nothing to do with the task. It made me see how much depends on matching the method to the situation; otherwise, the results fall apart.

When it comes to interactive art, computer vision’s ability to track and even surveil people feels both fascinating and a bit worrying. Tracking can make an artwork feel alive—it follows your gestures and responds in ways that draw you in. But surveillance goes further than that. It suggests being constantly observed, maybe even recorded, without realizing it. This is where the concern for privacy creeps in. One artwork that shows this really well is Rafael Lozano-Hemmer’s Surface Tension. In this piece, a large projected human eye follows you around the room. At first, it feels playful and interactive, like the artwork is aware of you, but the longer you stay, the more uncomfortable it becomes. I guess this tension is what makes interactive art with computer vision so compelling, because it not only provides us with an engaging experience but also pushes us to reflect on the aspect of being monitored.

Reading Response

When I was reading ‘Computer Vision for Artists and Designers’, one thing that stood out to me was how different computer vision is from human vision. For us, seeing feels natural: we can look at a scene and instantly recognize people, objects, emotions, or even context. But computers don’t “see” that way. For them, an image is just a bunch of pixels with numbers. They need step-by-step methods to figure out what’s moving, what belongs in the background, or what part of the picture matters.

The reading showed me some basic techniques that help computers track what we want. For example, frame differencing looks at what’s changed from one frame to the next, background subtraction compares the scene to a saved “empty” version to spot people, and brightness thresholding picks out objects based on how light or dark they are compared to the background. These sound simple, but they can be powerful if the physical setup is right, like having strong contrast or good lighting. I liked how the article talks about how the environment matters just as much as the code.
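
Out of curiosity, I tried to picture what frame differencing would look like in p5.js; this is my own toy version, not code from the article:

// Toy frame differencing: sum how much each pixel changed since last frame.
let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare this pixel's red channel to the same pixel in the last frame.
    movement += abs(video.pixels[i] - prevFrame.pixels[i]);
  }

  // Remember the current frame for next time.
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);

  // More change between frames = bigger circle.
  fill(255, 0, 0);
  circle(width / 2, height / 2, movement / 50000);
}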

Thinking about how this connects to interactive art, I can see both exciting and uneasy sides. On one hand, artists can use tracking to make playful or immersive experiences, like games that respond to your body or installations that react to your movements. That feels fun, creative, and even magical. But at the same time, the same tracking tools can be used for surveillance, like watching people without their consent or profiling them. That tension makes interactive art more thought-provoking, because it forces us to see how technology can be both empowering and controlling.

For me, this makes computer vision in art feel very alive: it’s not just about coding tricks, but about what message the artist is trying to send and how it makes us reflect on being “seen” by machines.

Computer Vision for Artists and Designers Reading Response

Q1: What are some of the ways that computer vision differs from human vision?

Humans see in a flexible and intuitive way. We can recognize a friend even if they’re in the shadows, wearing different clothes, or drawn as a cartoon. Computers, on the other hand, are much more rigid. They need clear cues, like the right lighting, steady backgrounds, and often lots of training data, just to recognize something we would see instantly. Computers don’t bring context or common sense either. If I see someone running, I might guess they’re late or playing a sport; a computer just sees moving shapes. This difference means that, in art, computer vision often works best when the artist designs the environment to make it easier for the machine to see, even though it still won’t interpret the scene the way humans do.

Q2: What are some techniques we can use to help the computer see / track what we’re interested in? 

On the technical side, artists use things like motion detection (spotting what changes between frames), background subtraction (separating a moving person from a still background), or color filters (tracking a red ball). More advanced tools can follow body joints or estimate a skeleton, which is great for dance or performance. But beyond algorithms, the environment is just as important. If you give the system good lighting, a solid backdrop, or make participants wear bright colors, the system would be able to focus on one thing and spot them more easily. It’s less about forcing the computer to be “smart” and more about designing the whole setup so the vision works smoothly.
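
As a tiny illustration of the color-filter idea, tracking a red object could be as simple as this (my own rough sketch, not something from the reading):

// Simplified color tracking: find the average position of "red enough" pixels.
let video;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();

  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      let i = (y * video.width + x) * 4;
      let r = video.pixels[i], g = video.pixels[i + 1], b = video.pixels[i + 2];
      // "Red enough": strong red, weak green and blue.
      if (r > 150 && g < 100 && b < 100) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  // Mark the centroid of the red pixels, if any were found.
  if (count > 0) {
    fill(255, 255, 0);
    circle(sumX / count, sumY / count, 20);
  }
}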

Q3: How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

Computer vision’s ability to track people is both a strength and a challenge for interactive art. On the positive side, tracking makes the art responsive, installations can change based on where you move, how you gesture, or even how many people are in the space. This creates a playful, engaging experience that feels alive. But because the same technology is often used for surveillance, it can also make people feel watched. That changes how the audience experiences the artwork, sometimes it’s fun, but sometimes it raises concerns about privacy. Many artists lean into this tension: some use tracking purely for interaction, while others use it to make us think critically about how surveillance works in our daily lives.

Midterm Draft

Concept 

I’m making a small interactive vertical Chicken Invaders style shooter game. The player controls a spaceship at the bottom of the screen and can move up, down, left, or right while shooting bullets upward. Chickens (enemies) continuously descend from the top of the screen, and some randomly drop eggs that can damage the player.

The game starts with a start screen that tells the player how to play: “Arrow keys to move, SPACE to shoot. Press SPACE to start.” Once the player starts, waves of chickens begin descending. The player earns points by shooting chickens and can lose lives if hit by enemy eggs. After losing all lives, the game transitions to a Game Over screen, showing the final score and instructions to restart by pressing ENTER.

Player interaction is simple and intuitive:

  • Arrow keys: move the spaceship in all directions

  • SPACE: shoot bullets upward

  • Collision detection: bullets destroy enemies, eggs damage player

  • Score and lives display updates in real time

  • Continuous spawning keeps the game dynamic and engaging

Code Design

The game is designed in an object-oriented structure:

  1. Player class:
    • Controls the spaceship’s position, movement, and display
    • Constrains the player within canvas boundaries
  2. Bullet class:
    • Manages bullet position, movement, and collision with enemies
  3. Enemy class:
    • Controls chicken position, downward movement, and spawning
    • Randomly generates enemy bullets (eggs)
  4. EnemyBullet class:
    • Manages eggs dropped by enemies, movement, and collision with the player
  5. Main sketch:
    • Holds a gameState variable: "start" | "playing" | "gameover"
    • Delegates drawing and updates depending on the current game state
    • Spawns new enemies continuously and updates UI elements (score, lives)
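
A simplified outline of how that state switching fits together (the screens here are only placeholder text; the real update logic for the player, bullets, enemies, and eggs is omitted):

let gameState = "start";  // "start" | "playing" | "gameover"
let score = 0;
let lives = 3;

function setup() {
  createCanvas(400, 600);
  textAlign(CENTER, CENTER);
  textSize(18);
}

function draw() {
  background(0);
  fill(255);
  if (gameState === "start") {
    text("Arrow keys to move, SPACE to shoot.\nPress SPACE to start.", width / 2, height / 2);
  } else if (gameState === "playing") {
    // ...update player, bullets, enemies, and eggs here...
    text("Score: " + score + "   Lives: " + lives, width / 2, 20);
    if (lives <= 0) gameState = "gameover";
  } else {
    text("GAME OVER\nFinal score: " + score + "\nPress ENTER to restart", width / 2, height / 2);
  }
}

function keyPressed() {
  if (gameState === "start" && key === ' ') gameState = "playing";
  if (gameState === "gameover" && keyCode === ENTER) {
    score = 0;
    lives = 3;
    gameState = "playing";
  }
}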

Current Game:

Risk & Challenges

The scariest part was managing continuous enemy spawning and collision detection while ensuring smooth movement and responsive player controls. If either bullets or enemy collisions failed, the gameplay would feel broken, and the game could become frustrating.

How I reduced the risk:

  • I wrote small test sketches for bullets and enemy collisions first, confirming that collisions were detected reliably

  • Spawn timers were implemented to gradually introduce enemies without overloading the canvas

  • Player boundaries were constrained using constrain() to avoid moving off-screen

  • The gameState system ensures that start, playing, and game over screens never interfere with gameplay logic

Result: collisions, enemy spawning, and player movement are smooth and reliable, so the core gameplay is stable and enjoyable.
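
For reference, the check those little test sketches revolve around is just a distance test, something like this (the radii are made-up numbers):

// Minimal collision test: a bullet "hits" a chicken when their centers
// are closer than the sum of their radii.
function hits(bullet, enemy) {
  return dist(bullet.x, bullet.y, enemy.x, enemy.y) < bullet.r + enemy.r;
}

// Tiny standalone check with hard-coded objects:
function setup() {
  noCanvas();
  let bullet = { x: 100, y: 95, r: 4 };
  let chicken = { x: 100, y: 80, r: 15 };
  console.log(hits(bullet, chicken)); // true: distance 15 < 19
}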

Next Steps

  • Add background music that loops during gameplay and optional sound effects for shooting and enemy hits

  • Add visual enhancements: use emojis, icons, or sprites for player and chickens for more arcade-like feel

  • Add multiple enemy types: faster chickens, stronger enemies, or bonus targets

  • Implement power-ups: shields, rapid fire, or score multipliers

Midterm Progress

Concept:

I was inspired by the dress-up game that I liked to play when I was a child; I attached an image of it below. In my version, players can click on a character’s shirt, pants, and shoes to change their colors and create different outfits. The game is simple and interactive, allowing players to explore many combinations and experiment with different styles. It has a start screen to begin the game, a dressing area where the character is displayed against a calm background, and an end screen with a restart option so the game can be played again. I wanted to recreate the fun and creativity I experienced as a child, while keeping the game easy to use and visually pleasing. I wanted to capture the joy of trying new looks and making choices in a playful and colorful way.

Inspiration:

Design and User Interaction:

The game has a simple and playful design, using soft and pleasant colors to create a friendly atmosphere. Most of the visuals are based on images that I created myself and uploaded into the project. Currently, the character is represented with basic shapes that change color when the user clicks on them, giving a simple interactive experience. In the future, I plan to replace the shapes with a full mannequin, where each part of her clothing will change whenever clicked. I will also create and upload a full set of clothing images into the p5 sketch, allowing for a more detailed and visually appealing dress-up experience.

The game itself is highly interactive, allowing players to click directly on the character’s shirt, pants, and shoes to change colors. Buttons like PLAY, FINISH, and RESTART respond to clicks to move between game states. This interactivity makes the game engaging, as players can experiment with different outfit combinations and immediately see the results on the character.

// Handle clicks depending on the current game state.
function mouseClicked() {
  if (gameState === "start" && playButton.clicked(mouseX, mouseY)) {
    playButton.action();
  }
  else if (gameState === "playing") {
    // Change clothes when clicking on body parts
    if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 300 && mouseY < 400) {
      currentPants = (currentPants + 1) % pantsColors.length; // Change pants
    }
    else if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 150 && mouseY < 300) {
      currentShirt = (currentShirt + 1) % shirtColors.length; // Change shirt
    }
    else if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 400 && mouseY < 500) {
      currentShoes = (currentShoes + 1) % shoesColors.length; // Change shoes
    }
  }
}

Sketch:

The most frightening part and what I did to reduce this risk 

One of the most challenging parts was managing the game states (start screen, playing screen, and end screen), because each screen had different buttons and interactions. It was difficult to make sure the right buttons showed at the right time and that clicking them moved to the correct screen.

I solved this by creating a Button class to handle all clickable buttons in the same way and by using separate functions to draw each screen. This kept the code organized and made it easy to add or change buttons later. I also made sure the mouseClicked() function only responded to buttons for the current screen. As a result, the screens change smoothly, the game feels easy to use, and the interactions are clear for the player.
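
A minimal version of such a Button class could look like this (a simplified sketch, not my exact code):

// Minimal Button: knows where it is, how to draw itself, whether it was
// clicked, and what to do when it is.
class Button {
  constructor(label, x, y, w, h, action) {
    this.label = label;
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
    this.action = action;   // function to run when clicked
  }

  display() {
    fill(230);
    rect(this.x, this.y, this.w, this.h, 10);
    fill(0);
    textAlign(CENTER, CENTER);
    text(this.label, this.x + this.w / 2, this.y + this.h / 2);
  }

  clicked(mx, my) {
    return mx > this.x && mx < this.x + this.w &&
           my > this.y && my < this.y + this.h;
  }
}

// Example: a PLAY button that switches from the start screen to the game.
// let playButton = new Button("PLAY", 150, 400, 100, 40, () => gameState = "playing");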

Reflection and Future Improvements:

So far, I really like the concept of my project and I’m excited to experiment and see the final result. For future improvements, I plan to replace the simple shapes with detailed images of the clothes, making the character and outfits more visually appealing. I also want to add sound effects that play when the user clicks on each clothing item to make the game more interactive. Additionally, I would like the character to have a speech bubble with text whenever the user clicks on her. These features will make the game more dynamic, engaging, and fun for players.

Reading Reflection – Week 5

In this reading, Levin argues that the tools, techniques, and physical setups required for basic computer vision are no longer exclusive to specialists; they can now be used by students and artists. Reading this article really got me thinking about how different human vision is from computer vision.

One way computer vision differs from human vision is that computers don’t have a built-in understanding; they just see pixels and values. They need algorithms to interpret color, brightness, motion, etc. Humans, on the other hand, have an innate understanding.

Some techniques we can use to help the computer see and track what we’re interested in include frame differencing (detecting motion by looking at changes between frames), background subtraction (using a static background and removing it to isolate objects), and brightness thresholding (using contrast to separate what you want from what you don’t).
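
As a small example, brightness thresholding can be written in a few lines of p5.js (my own sketch, not code from the reading):

// Brightness thresholding: keep only pixels brighter than a cutoff,
// so a well-lit object stands out against a dark background.
let video;
const THRESHOLD = 180;  // brightness cutoff (0-255), tuned by hand

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);      // keep canvas pixels the same size as the video
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    let b = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    let v = b > THRESHOLD ? 255 : 0;   // white if bright enough, else black
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}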

Computer vision’s capacity for tracking and surveillance has its pros and cons. On the positive side, tracking allows for more responsive, immersive, and personal interactive art. On the downside, issues like surveillance, privacy and consent arise. If art is tracking people, there’s always a risk of misuse and discomfort.