Midterm Progress

I want to create a personalized DJ experience that allows users to choose different music genres, with the environment adapting accordingly. The idea is to present an interactive space where visuals, lighting, and animations react dynamically to the music and make it feel like a real party.

When the experience starts, a button launches the animation, and clicking anywhere switches songs while updating the environment to match the new song. The visuals rely on p5.Amplitude() to analyze the music’s intensity and adjust the movement of butterfly-like shapes accordingly (I reused my previous code to draw the butterflies).
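A minimal sketch of that amplitude mapping; the file name and scaling values here are placeholders, not the actual ones from my sketch:

let song, amp;

function preload() {
  song = loadSound('track.mp3'); // placeholder file name
}

function setup() {
  createCanvas(600, 400);
  amp = new p5.Amplitude(); // follows whatever sound is playing
  song.loop();
}

function draw() {
  background(20);
  let level = amp.getLevel(); // roughly 0.0 (silence) to 1.0 (loud)
  // Louder passages make the butterflies flap faster
  let wingSpeed = map(level, 0, 0.3, 1, 10, true);
  // ... use wingSpeed to advance each butterfly's wing angle
}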

One of the biggest challenges was managing these transitions without them feeling too sudden or chaotic. Initially, switching between songs resulted in jarring color and lighting changes, breaking the immersion. To fix this, I used lerpColor() to gradually shift the background and object colors rather than having them change instantly. Another issue was synchronizing the visuals with the audio in a meaningful way: at first, the amplitude mapping was too sensitive, making the animations look erratic. This still needs improvement; I may try adjusting the amplitude scaling.
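A minimal sketch of the gradual transition, assuming each new song sets a global target color; the 5% lerp factor and the smoothing amount are guesses to tune, not final values:

let bgColor, targetColor, amp;

function setup() {
  createCanvas(600, 400);
  bgColor = color(10, 10, 40);
  targetColor = bgColor;
  amp = new p5.Amplitude();
  amp.smooth(0.9); // built-in smoothing tames jittery levels
}

function switchSong(newColor) {
  targetColor = newColor; // draw() eases toward this goal
}

function draw() {
  bgColor = lerpColor(bgColor, targetColor, 0.05); // shift 5% per frame
  background(bgColor);
}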

Moving forward, I plan to expand the genre selection with more styles and refine how users interact with the interface. I want each environment to reflect the music’s vibe.

Week 5 response

Computer vision differs from human vision in that humans perceive the world holistically and interpret visual cues through experience and context, whereas computers work with quantitative, pixel-based representations of images. Instead of recognizing things through mental processes, machines rely on algorithmic and pattern-recognition techniques applied to those pixel values.

Computers therefore struggle to identify objects under varying illumination and orientation unless they are trained on varied datasets. And where humans estimate depth and motion from vision and general knowledge, computer programs need specific methods such as optical flow, edge detection, or machine learning algorithms to deduce similar information.

The power of computer vision to capture motion and analyze visual information has a profound effect on interactive art. Artists can take advantage of these technologies to create installations that respond dynamically to a viewer’s movements, gestures, or even facial expressions, producing immersive, interactive experiences. However, these technologies also raise ethical issues related to privacy and surveillance, especially when facial recognition and motion detection are used in interactive artworks. Consequently, artists working with computer vision must weigh their creative possibilities against the ethical implications of surveillance culture.

Week 5: Reading Response

Computer vision is really different from how humans see the world. While we naturally process images, depth, and context with our brains, computers rely on algorithms and sensors to make sense of what they’re “seeing.” Humans can instantly recognize faces, emotions, and even artistic meaning without much effort, but computers need tons of data and training to even get close to that level of understanding. Plus, human vision adjusts easily to different lighting or angles, while computers often struggle unless the conditions are just right.

To help computers track or recognize what we want, we use techniques like edge detection, motion tracking, and pattern recognition. Edge detection helps separate objects from the background, while motion tracking follows movement across frames in a video. Pattern recognition is also huge—by training a model with a bunch of images, it can learn to recognize faces, shapes, or even specific objects. But to get accurate results, we usually have to clean up the data first, removing noise or adjusting lighting so the system doesn’t get confused.

The ability of computer vision to track and monitor things has a big impact on interactive art. Artists use it to create pieces that respond to movement or presence, making the experience more immersive and engaging. But at the same time, it raises ethical concerns—these same tools can be used for surveillance, sometimes without people knowing. So while computer vision opens up exciting possibilities for art, it also forces us to think about privacy and how we balance creativity with responsibility.

Week 5: Midterm Progress

Concept:

I decided on a calming spa game where the user (or viewer) sees a person lying in a clinic setting, complete with subtle animations like steam or aroma particles. Even though I haven’t fully finalized all interactivity elements, my plan is to allow some simple interactions, such as choosing different spa treatments or changing certain visual elements. For now, the main focus is creating the environment and making it look professional and aesthetically pleasing.

Code and Design:

I started laying out the main structure in p5.js. I separated the code into different functions to keep things organized:

  • drawClinicBackground(): Sets the scene with the walls, floor, decorations, and additional details like a window or posters.
  • drawTreatmentBed(): Draws the bed and pillow for the patient.
  • drawPatient(): Renders the patient’s upper body and face, including minimal facial features.
  • drawSteam(): Handles the animation of steam or aroma particles rising around the face.

I’m also planning to introduce classes if the animation or interactivity becomes more complex, especially if I need multiple interactive objects or more sophisticated animations. This modular approach helps keep things clean. If I need to expand later—maybe adding sound effects, more interactive objects, or advanced animations—I can easily integrate new code.

This is what the design is supposed to look like:


Frightening/Challenging aspects:

One of the most uncertain parts of my project is making the environment feel truly interactive and alive. I’m worried about how performance might be affected if I add a lot of animations or interactive elements at once. Another concern is making sure the art style and animations blend nicely so that the scene doesn’t look disjointed.

To reduce this risk, I wrote some test code to experiment with particle systems and layering. Specifically, I tested out how many steam particles I can animate in real-time without causing a slowdown. I also experimented with gradient backgrounds, images, and more detailed drawings to see how far I could push the visuals before I start seeing performance drops.

function drawSteam() {
  // Draw and update each steam particle
  noStroke();
  for (let i = 0; i < steamParticles.length; i++) {
    let p = steamParticles[i];
    
    fill(255, 255, 255, p.alpha);
    ellipse(p.x, p.y, p.size, p.size);
    
    // Move the particle upwards
    p.y -= p.speed;
    // Small horizontal "drift"
    p.x += map(noise(p.y * 0.01, p.x * 0.01), 0, 1, -0.2, 0.2);
    // Gradually fade out
    p.alpha -= 0.2;
    
    // Reset the particle when it goes out of range
    if (p.y < height / 2 - 80 || p.alpha < 0) {
      steamParticles[i] = createSteamParticle();
      steamParticles[i].y = height / 2 + random(0, 30);
      steamParticles[i].alpha = random(100, 150);
    }
  }
}
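For reference, a minimal version of the createSteamParticle() helper this loop assumes; the spawn ranges are illustrative guesses consistent with the reset logic above:

// Illustrative helper for drawSteam(); ranges are placeholders
function createSteamParticle() {
  return {
    x: width / 2 + random(-40, 40), // near the patient's face
    y: height / 2 + random(0, 30),
    size: random(8, 18),
    speed: random(0.3, 1.0),
    alpha: random(100, 150)
  };
}

While stress-testing particle counts, drawing text(nf(frameRate(), 2, 1), 10, 20) each frame makes slowdowns easy to spot.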


Things to prevent:

  • Overcomplicating the Code: I’m trying not to throw everything in one giant file without structure. By using separate functions (and potentially classes), I’ll keep my code organized and easier to debug.
  • Performance Bottlenecks: Adding too many particles or large images could slow down the sketch. I’m keeping an eye on frame rates and testing on different devices so I can catch performance issues early.
  • Poor User Experience: If I add too many clickable elements or extra features, it might overwhelm the user and make the scene less relaxing. I want a balanced level of interaction that doesn’t feel cluttered.
  • Lack of Testing: I plan to test small sections of the code often, rather than waiting until the end. This way, I can catch bugs and performance issues as soon as they pop up.

Week 5 – Reading Response

Computer vision differs from human vision in that it doesn’t “see” like humans do: it reads images as raw pixel data without reference to context or meaning. Where humans intuitively see objects, emotions, and intent, computers need algorithms to define patterns, edges, and movement. We naturally adjust for differences in light, angle, or occlusion, but computer vision generally needs extra programming to compensate. Humans also employ depth perception and prior knowledge to make sense of 3D space, while computers typically work on 2D images and need additional techniques like stereo cameras or depth sensors to estimate depth.

So that computers can more easily track what we’re interested in, we use techniques like frame differencing (detecting movement from the differences between successive video frames), background subtraction (highlighting new objects against a known static scene), and brightness thresholding (isolating objects by their contrast with the background). More advanced techniques include edge detection, feature tracking, and deep learning algorithms that can detect faces, gestures, or objects. In interactive art, artists frequently use computer vision to explore themes of control and visibility, as seen in works like Sorting Daemon and Suicide Box.
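As a concrete illustration, a bare-bones frame-differencing sketch in p5.js might look like this; the change threshold of 40 is an arbitrary value to tune:

let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  background(0);
  video.loadPixels();
  prevFrame.loadPixels();
  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare the red channel as a cheap brightness proxy
    if (abs(video.pixels[i] - prevFrame.pixels[i]) > 40) movement++;
  }
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);
  fill(255);
  circle(width / 2, height / 2, movement / 50); // more motion, bigger circle
}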

Midterm Progress Report

Concept:

For my midterm project, I’m creating a grid-based version of a Color by Number game. I’ve always loved these types of games because they help me relax, focus, and feel super satisfied as the artwork slowly comes to life. My goal is to bring that same experience to users by making a game that’s easy to use, fun, and visually appealing.

The game gives users a color palette in which each color is linked to a letter or number. The image itself is made up of a grid, with each cell labeled to show which color belongs there. All users have to do is click on a grid cell, and the selected color fills in automatically, keeping the interaction simple.

Challenging Aspects and Risk Prevention

1. One of the biggest challenges was ensuring precise click detection on the image grid. Since the image itself doesn’t inherently support click interactions, I had to create an invisible grid and lay it on top of the image. This allowed the game to register mouse clicks accurately while keeping the visual presentation clean. However, this was difficult because I had to align the overlay perfectly so that the user wouldn’t see it, but it would still work effectively for detecting clicks and filling in colors.

2. Another challenge was allowing users to select colors from the palette and ensuring the correct color was applied to the chosen grid cell. I had to implement logic that detected which color was clicked on and then stored that color as the “active” selection. The game then needed to apply this color to any grid cell the user clicked until a new color was chosen. The difficulty came in precisely mapping click positions to the correct palette color while keeping the interface responsive and user-friendly.

class ColoringPage {
  constructor(name, imagePath, rows, cols, palette) {
    this.name = name;
    this.img = loadImage(imagePath);
    this.rows = rows;
    this.cols = cols;
    this.cellSize = 600 / this.cols;
    this.grid = Array.from({ length: this.rows }, () => Array(this.cols).fill(null));
    this.palette = palette;
    this.selectedColor = Object.values(palette)[0].color;
  }

  display() {
    this.drawPalette();
    image(this.img, 100, 90, 600, 600);
    this.drawGrid();
    this.drawColoredGrid();
  }

  drawGrid() {
    stroke(0, 50);
    noFill();
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
      }
    }
  }

  drawColoredGrid() {
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        if (this.grid[row][col]) {
          fill(this.grid[row][col]);
          rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
        }
      }
    }
  }

  drawPalette() {
    let keys = Object.keys(this.palette);
    let x = (width - keys.length * 60) / 2; // 60 px per swatch keeps the row compact
    let y = 20;

    noStroke();

    for (let i = 0; i < keys.length; i++) {
      let colorValue = this.palette[keys[i]].color;

      fill(colorValue);
      ellipse(x + i * 60 + 30, y + 25, 50, 50);

      let c = color(colorValue);
      let brightnessValue = red(c) * 0.299 + green(c) * 0.587 + blue(c) * 0.114; // standard luminance formula

      fill(brightnessValue < 128 ? 255 : 0); // white text on dark colors, black on light

      textSize(14);
      textAlign(CENTER, CENTER);
      text(this.palette[keys[i]].label, x + i * 60 + 30, y + 25); // label centered on its circle
    }
  }
}
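To connect clicks to the class, a sketch-level mousePressed() can reuse the same layout math. This is a hypothetical glue sketch: the global page variable and the hit-test margins are assumptions, not code from my project yet.

// Hypothetical glue code; assumes a global `page` holding a ColoringPage
function mousePressed() {
  // Palette hit test, mirroring the layout in drawPalette()
  let keys = Object.keys(page.palette);
  let x0 = (width - keys.length * 60) / 2;
  for (let i = 0; i < keys.length; i++) {
    if (dist(mouseX, mouseY, x0 + i * 60 + 30, 45) < 25) {
      page.selectedColor = page.palette[keys[i]].color;
      return; // picked a color; don't also paint a cell
    }
  }
  // Grid hit test, inverting the (100, 90) drawing offset
  let col = floor((mouseX - 100) / page.cellSize);
  let row = floor((mouseY - 90) / page.cellSize);
  if (row >= 0 && row < page.rows && col >= 0 && col < page.cols) {
    page.grid[row][col] = page.selectedColor;
  }
}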


Week 5 – Midterm Progress

Inspiration & Concept

During high school, I made a small “app” on Code.org using JavaScript (in blocks) that showed facts about elements within groups of the periodic table (Gases, Metals, and Nonmetals). Pressing one of the buttons brought up facts about that specific group, along with drawings of my characters associated with each group (for example, Carbon belonged to the Nonmetals group, and facts about him appeared next to his drawing).

Because of this, I decided to create an interactive visual novel game for the upcoming midterm project. The game takes place in a school of “The Periodic Table,” where chemical elements are personified as classmates and teachers. Throughout the game, the user plays as a personified version of Carbon who meets other elements like Hydrogen, Oxygen, and Nitrogen, learning about their properties and behaviors through dialogue and branching choices that influence Carbon’s journey.


Design

For this project, I decided to keep the design minimal and simple: the color scheme will be muted, the characters hand-drawn, and the backgrounds made from images edited in Adobe Photoshop. The backgrounds depict different areas of the school hall, where the main character chooses which side character to meet, inspired by the older designs I created.


The game will include a dialogue box at the bottom for text and choices, with animated character portraits appearing as they speak.

I decided to sketch the background on my phone to get a sense of what to include for the characters and how the game will proceed (since I plan to add small characters within that same background so the main character can choose whom to interact with).


Frightening/Challenging aspects

  1. Since visual novels rely on smooth transitions between scenes, it can be tricky to structure code so that dialogue, backgrounds, and choices update properly without breaking the game. If scene changes aren’t handled correctly, the game might get stuck on one scene, display the wrong text, or fail to update choices.
  2. Since p5.js runs in a continuous draw() loop, managing character dialogue, choices, and images dynamically can get messy if not structured well. If too many global variables or functions handle game logic, it may become hard to debug, leading to unexpected behavior (e.g., text not changing, buttons not working).
  3. The game must reset everything without refreshing the browser. If variables aren’t properly reset, restarting might carry over data from the previous playthrough. If objects are recreated incorrectly, images or dialogue might not load properly.
  4. I fear that sounds (like background music) may overlap, distorting the game’s audio.

Prevention

  1. To ensure smooth transitions, I will use an OOP structure (Scene and Game classes) to manage them cleanly, and I will test each scene’s logic separately before adding more. I will base the classes on the key aspects of the game (see the sketch after this list):
    1. A Character class to store element info
    2. A Scene class to handle dialogue, backgrounds, and choices
    3. A Game class to control transitions
  2. To make sure sound is used properly, I will trigger sounds inside events (such as mouse clicks or key presses) and stop looping sounds before playing new ones to avoid overlap.
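A minimal skeleton of that structure, assuming scenes are looked up by key; all names here are placeholders rather than final code:

class Scene {
  constructor(background, dialogue, choices) {
    this.background = background; // p5.Image for this location
    this.dialogue = dialogue;     // array of text lines
    this.choices = choices;       // map of button label -> next scene key
  }
  display() {
    image(this.background, 0, 0, width, height);
    // ... draw the dialogue box and choice buttons here
  }
}

class Game {
  constructor(scenes, startKey) {
    this.scenes = scenes;         // { key: Scene }
    this.startKey = startKey;
    this.currentKey = startKey;
  }
  goTo(key) {
    this.currentKey = key;        // the single place transitions happen
  }
  reset() {
    this.currentKey = this.startKey; // restart without refreshing the browser
  }
  display() {
    this.scenes[this.currentKey].display();
  }
}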


Week 5: Midterm Update

For my midterm, I decided to do a spin-off of a classic maze game that is heavily inspired by two other things I love: mystery and cats. The story involves a person who, with their cat, is admiring an abandoned tomb said to hold a large treasure. Suddenly, something spooks the cat and it runs off into the tomb. It is then revealed that the tomb is like a maze, and the user must navigate through it. The catch? It’s pitch black apart from the small light of their flashlight, and there are booby traps all around. My idea is to have the user call for the cat when a key like the spacebar is pressed; when the cat responds, a larger part of the maze is temporarily revealed, as if the cat is just ahead of the user, running away or perhaps even guiding them through the maze until the user reaches the end and the treasure. Running into a booby trap will result in “death,” and the user will need to start over. I’m imagining the layout to look like story pages or a comic book with the interactive game in the middle.

This task is certainly difficult, as it requires a number of different elements: object-oriented programming, design and aesthetics, object animation, and even the possible use of sprites. The sprites, and getting the character and cat to move smoothly through the maze, are the most challenging parts, along with animating the booby traps and handling the character’s collisions with them. Before I could even begin coding, it was important to know what everything I would be designing looked like, so I made it easier by finding gifs for the specific animations I needed that could not be done in the program. That way, I knew exactly what I needed to create in the program myself and what I already had. For example, I found a gif of a cat being scared, which I can use in the beginning scene when the cat is spooked and runs (I’m not sure if this website supports gifs, but you can get the idea). I’m also working on creating the classes for all of the elements in the game. Here’s an example of what I have for the player, the traps, and even the cat:

class Player {
    constructor(x, y) {
        this.x = x;
        this.y = y;
        this.size = 20;
    }
    move() {
        if (keyIsDown(LEFT_ARROW)) this.x -= 2;
        if (keyIsDown(RIGHT_ARROW)) this.x += 2;
        if (keyIsDown(UP_ARROW)) this.y -= 2;
        if (keyIsDown(DOWN_ARROW)) this.y += 2;
    }
    display() {
        fill(255);
        ellipse(this.x, this.y, this.size);
    }
}

class Cat {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
}

class Trap {
    constructor(x, y) {
        this.x = x;
        this.y = y;
        this.size = 20;
    }
    display() {
        fill(255, 0, 0);
        rect(this.x, this.y, this.size, this.size);
    }
    checkCollision(player) {
        return dist(player.x, player.y, this.x, this.y) < this.size;
    }
}
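For the pitch-black effect, one option I am considering is an offscreen darkness layer with a transparent hole erased around the player. This is a minimal sketch, and the radius values are placeholders I would still tune:

let darkness;          // offscreen layer for the black overlay
let revealRadius = 60; // default flashlight radius

function setup() {
  createCanvas(400, 400);
  darkness = createGraphics(400, 400);
}

function drawDarkness(px, py) {
  darkness.clear();
  darkness.background(0);
  darkness.erase();                        // cut a see-through hole
  darkness.circle(px, py, revealRadius * 2);
  darkness.noErase();
  image(darkness, 0, 0);
}

function keyPressed() {
  if (key === ' ') revealRadius = 150;     // calling the cat widens the view
}

// In draw(), after drawing the maze and player:
//   drawDarkness(player.x, player.y);
//   revealRadius = max(60, revealRadius - 1); // light shrinks back over time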


Week 5 Project

For my project, I decided to make a top-down shooting game where the player fights zombies (sprites are a work in progress). The main concept is that the player can use a variety of weapons to attack an endless wave of zombies. This week, I worked on the array that endlessly spawns zombies and on the player’s movement.

The biggest challenge was getting the enemies to follow the player, not with lerp, but by slowly pursuing the player at a constant speed. This is the code I used to get the ‘enemies’ to follow the player in a linear steps-per-frame model:

for (let ball of balls) {
  // Direction the ball is "facing": from the ball toward the player
  let direction = createVector(playerX - ball.x, playerY - ball.y);
  // Constant step size per frame (linear pursuit, no easing)
  direction.setMag(ballSpeed);
  ball.add(direction);
}
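The endless spawning itself can be as simple as pushing a new vector at a random edge on a timer. A minimal sketch, with the spawn rate and off-screen margins as placeholder values:

// balls holds one p5.Vector per zombie
let balls = [];

function spawnZombie() {
  const edge = floor(random(4)); // 0: left, 1: right, 2: top, 3: bottom
  let x, y;
  if (edge === 0)      { x = -20;           y = random(height); }
  else if (edge === 1) { x = width + 20;    y = random(height); }
  else if (edge === 2) { x = random(width); y = -20; }
  else                 { x = random(width); y = height + 20; }
  balls.push(createVector(x, y));
}

// In draw(), at 60 fps this spawns roughly one zombie per second:
//   if (frameCount % 60 === 0) spawnZombie();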


Week5-reading response

Reading “Computer Vision for Artists and Designers” made me think about the stark differences between how humans and computers perceive the world. Human vision is incredibly contextual—we naturally filter out distractions, infer meaning, and recognize objects even when they are partially obscured. In contrast, computer vision relies on algorithms and mathematical models to process images. It doesn’t “see” in the way we do; instead, it detects patterns, edges, and contrasts based on pixel values. I found it fascinating how much work goes into making computers interpret images in ways that feel natural to us.

One of the key techniques for improving computer vision’s ability to track what we’re interested in is feature detection—using algorithms like edge detection, motion tracking, and machine learning models trained on labeled datasets. I also noticed how lighting, contrast, and background control play huge roles in making computer vision more accurate. This made me reflect on its implications for interactive art. I think computer vision’s capacity for tracking and surveillance creates an interesting tension in artistic practice—it allows for dynamic, responsive installations, but it also brings ethical concerns. I felt a bit unsettled thinking about how the same technology that enables playful, immersive experiences can also be used for surveillance. It makes me wonder how artists can challenge or subvert these systems in creative ways.