Midterm Project – Chic & Click

Concept:

Chic & Click is an interactive dress-up game where players create stylish outfits by mixing and matching hats, tops, and skirts on a mannequin. The project is inspired by the dressing games I loved playing as a child, which sparked my interest in fashion and interactivity. Before building the game, I researched similar games and gathered references to refine the concept of changing outfits, making sure that each clothing piece fits properly and looks visually appealing (I attached the inspiration image below). The mission of the game is to provide a fun and creative experience that allows players to explore their personal style. Interactivity is at the core of the game: players click on the hat, top, or skirt areas to cycle through clothing options, accompanied by audio feedback and visual effects like neon glows and a camera flash when taking a photo of their final outfit. 

Inspiration from the dressing game:

Sketch:

Link to the sketch: https://editor.p5js.org/Aizhan/sketches/h5QYwQTMS

Key Elements of the Game:

Starting Page:
The game begins with a clean and minimalist start page designed to be visually pleasant, using pastel and calm colors throughout the game. On this page, players see the game cover and instructions, along with a “Start” button. When pressed, it transitions to the main playing page. The background image was designed using a ChatGPT image generator, while the button was created using shapes and text. The game can also be played in full-screen mode by pressing the “F” key, providing a bigger and more immersive experience.
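For reference, the “F”-key fullscreen toggle can be handled in a few lines of p5.js; this is a minimal sketch of the idea rather than my exact code:

// Minimal sketch: toggle fullscreen when the "F" key is pressed
function keyPressed() {
  if (key === 'f' || key === 'F') {
    let fs = fullscreen();   // p5's current fullscreen state
    fullscreen(!fs);         // toggle it
  }
}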

Playing Page:
On the playing page, players can interact with the mannequin to change outfits. The clothing items (hats, tops, and skirts/pants) were designed in Canva, with five options for each category, giving a total of 15 different clothing pieces. When a player clicks on a clothing area, they cycle through the different options, and after the fifth item the selection loops back to the first. The clothes come in various colors and styles, and each click plays a game click sound, making the game more engaging and interactive.

Result Page:
After clicking the “Finish” button, the chosen outfit is displayed on a result page with a photo studio background. A fun gaming song starts playing as the page appears, creating a celebratory mood. This page includes a “Take a Photo” button, which triggers a camera sound and a visual flash effect and automatically downloads a screenshot to the user’s computer as “MyOutfit.png.” This allows players to save and share their styled outfits. The second button on the page is “Finish”; pressing it moves the player on to the next page.

Restart Page:
The restart page thanks the player for playing and provides a “Restart” button. The background image remains consistent with the aesthetic of the game, maintaining the calm pastel theme, and allows players to start a new round of outfit creation easily.

Code snippet and parts I am proud of:

I am proud of how I implemented the camera effect and photo feature. The flash effect combined with the screenshot download feels polished and gives the game a fun feeling. To be honest, this was a last-minute addition, as I initially only had music and the background. The p5 references and tutorials really helped me bring it to life.

// Flash effect and screenshot saving
function drawFlash() {
  let elapsed = millis() - flashStartTime;
  if (elapsed < 500) { // flash duration 0.5 seconds
    fill(255, 255, 255, 150);
    rect(0, 0, width, height);
    if (cameraImage) {
      image(cameraImage, width / 2 - 100, height / 2 - 100, 200, 200);
    }
  } else if (showFlash) {
    // Save the canvas as an image
    saveCanvas(cnv, "MyOutfit", "png");
    showFlash = false;
  }
}

I am also proud of how I handled the clothing interactivity. Using arrays and a simple .next() function to cycle through hats, tops, and skirts kept the code clean and easy to manage, while keeping the gameplay smooth and responsive.

// Clothing class 
class Clothing {
  constructor(images, offsetX, offsetY) {
    this.images = images;       // array of clothing images
    this.offsetX = offsetX;     // array of X offsets for each image
    this.offsetY = offsetY;     // array of Y offsets for each image
    this.current = 0;           // index of the currently displayed clothing
    this.show = false;          // whether to display this clothing
  }

  // Loops back to the first item after the last one
  next() {
    this.current = (this.current + 1) % this.images.length;
  }

  // Display the current clothing item on the canvas
  display(x, y, w, h, extraOffsetY = 0) {
    if (this.show) { // only draw if show is true
      let img = this.images[this.current];
      let aspect = img.width / img.height;       // maintain aspect ratio
      let targetW = w;
      let targetH = targetW / aspect;
      let offsetX = this.offsetX[this.current] || 0; // use offset for proper alignment
      let offsetY = this.offsetY[this.current] || 0;
      image(img, x + offsetX, y + offsetY + extraOffsetY, targetW, targetH);
    }
  }
}
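For context, here is a rough sketch of how the class above might be wired up; the file names, hit area, and offset values are placeholders, not the actual ones from my project:

// Hypothetical setup showing how a Clothing instance could be created and cycled
let hats;
let hatImgs = [];

function preload() {
  // placeholder asset names
  for (let i = 1; i <= 5; i++) {
    hatImgs.push(loadImage("hat" + i + ".png"));
  }
}

function setup() {
  createCanvas(600, 800);
  let hatOffsetsX = [0, 5, -3, 0, 2];   // manual alignment tweaks per image
  let hatOffsetsY = [0, -4, 0, 6, 0];
  hats = new Clothing(hatImgs, hatOffsetsX, hatOffsetsY);
  hats.show = true;
}

function draw() {
  background(255);
  hats.display(200, 50, 200, 200);      // draw the current hat
}

function mousePressed() {
  // if the click lands inside the hat area, cycle to the next hat
  if (mouseX > 200 && mouseX < 400 && mouseY > 50 && mouseY < 250) {
    hats.next();
  }
}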

I was also inspired by other students’ work to add neon hover effects on the buttons, which were fun to code and added polish without making the program more complicated.

function applyNeonEffect(btn) {
  let x = btn.x, y = btn.y, w = btn.width, h = btn.height;
  
  // Check if mouse is over the button
  if (mouseX > x - 20 && mouseX < x + w + 20 && mouseY > y - 20 && mouseY < y + h + 20) {
    // Apply neon glow effect
    btn.style("box-shadow", "0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff, 0 0 50px #fff");
  } else {
    // Remove glow when mouse is not over
    btn.style("box-shadow", "none");
  }
}

Problems/Future Improvements:

One of the most difficult parts of this project was getting the clothes to fit correctly on the mannequin. Each clothing image had a different size and shape, so I spent several hours trying to align them properly. At first, it was tricky because using the same coordinates for every clothing item caused them to look off. I solved this by creating offset arrays for X and Y positions for each clothing item. For example, topOffsetsX, topOffsetsY, skirtOffsetsX, skirtOffsetsY, and hatOffsetsY allowed me to manually adjust the position of each item so it would sit correctly on the mannequin.

Regarding future improvements, I would focus on a few key areas. One would be adding more clothing options and categories, like shoes, accessories, or jackets, which would give the game more variety and customization. I could also implement a drag-and-drop feature so users can position clothes more freely instead of just clicking to cycle through them. Overall, this project was a great way for me to combine coding and design. I learned how to manage interactive elements, solve alignment issues, and create a smooth user experience. It also improved my problem-solving skills, especially when dealing with image alignment and user interactions. This midterm game gave me a better understanding of how design and programming can come together to create an engaging user experience.

Midterm

  • Embed or link to your sketch (e.g. link to the sketch in fullscreen mode)
  • Include code snippets and one or more images
  • Describe the overall concept of your project (1-2 paragraphs)
  • Describe how your project works and what parts you’re proud of (e.g. good technical decisions, good game design) 2-3 paragraphs
  • Describe some areas for improvement and problems that you ran into (resolved or otherwise) (1-2 paragraphs)

This is the final output:

Conceptualization:

For this project, I decided to create a design that combines interactivity with sound. When I looked at how musicians create their music, I noticed they work with “sound tracks” that can be dragged around to edit the song being produced. So, for this midterm project, I decided to build on this concept.

However, it would be too technical, and not very artistic, to simply recreate a sound-track control system. To add some aesthetics to the design and to simplify the system, I decided to create a visual system similar to the picture below.

With each color representing a different sound, users can drag the gridlines to increase or decrease the length of a given sound. Because the colored blocks are tied to specific sounds, it is easy to see which sounds are available in the track. I also want users to be able to click on a cell inside the grid and change its color (and therefore its sound). Finally, there should be a play button that plays the sound blocks once the user finishes designing the grid.
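As a first illustration of the click-to-change idea, a simplified Block could cycle its sound state on click; the colors and number of states here are assumptions, and the real Block in my sketch also stores its row and column:

// Simplified sketch: a grid cell that cycles through sound states when clicked
class Block {
  constructor(x, y, w, h, state) {
    this.x = x; this.y = y; this.w = w; this.h = h;
    this.state = state; // 0..4, each mapped to a sound (4 = silent)
  }
  contains(mx, my) {
    return mx > this.x && mx < this.x + this.w && my > this.y && my < this.y + this.h;
  }
  cycle() {
    this.state = (this.state + 1) % 5; // move to the next sound/color
  }
  display() {
    const palette = ['blue', 'red', 'yellow', 'green', 'grey'];
    fill(palette[this.state]);
    rect(this.x, this.y, this.w, this.h);
  }
}

// assuming blocks is a flat array of Block instances
function mousePressed() {
  for (let b of blocks) {
    if (b.contains(mouseX, mouseY)) b.cycle();
  }
}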

How the program works:

There are two pages in total: the first page lets users read the instructions so they can grasp how the program works, and the second page lets them interact freely and creatively.

 

When users adjust the colors of the grid, the “music” created from the sound blocks changes. Users can also control the length of each block. On top of that, I added an adjustable tempo feature to control the total length of the sequence.

The code that I am proud of is the rebuild function. It is the core of this program: as the user interacts with the grid, the sound is altered based on how the entire grid is rebuilt.

function rebuildBlocks() {
  // Precompute row boundaries (equal height rows)
  const ys = [];
  for (let r = 0; r <= N_ROWS; r++) {
    ys.push(gridY + (r * gridH / N_ROWS));
  }

  blocks = [];
  for (let r = 0; r < N_ROWS; r++) {
    const row = [];
    const xs = [gridX, ...handles[r], gridX + gridW];

    for (let c = 0; c < N_COLS; c++) {
      const x0 = xs[c], x1 = xs[c + 1];
      const y0 = ys[r], y1 = ys[r + 1];

      // Aesthetic default color pattern:
      // Use a repeating gradient based on row/column indices.
      let baseState;
      const pattern = (r + c) % 5;
      switch (pattern) {
        case 0: baseState = 0; break; // blue
        case 1: baseState = 1; break; // red
        case 2: baseState = 2; break; // yellow
        case 3: baseState = 3; break; // green
        case 4: baseState = 4; break; // grey (silent)
      }

      // Slight random variation for natural look
      if (random() < 0.15) {
        baseState = floor(random(0, 5));
      }

      row.push(new Block(x0, y0, x1 - x0, y1 - y0, baseState, r, c));
    }
    blocks.push(row);
  }
}
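For reference, dragging a handle and rebuilding could look roughly like this; it reuses the same globals the function above assumes (handles, gridX, gridY, gridW, gridH, N_ROWS), and the hit-testing and handle ordering are simplified:

// Simplified sketch: drag a column handle horizontally, then rebuild the blocks
let draggedRow = -1, draggedHandle = -1;

function mousePressed() {
  const rowH = gridH / N_ROWS;
  const r = floor((mouseY - gridY) / rowH);
  if (r < 0 || r >= N_ROWS) return;
  // pick a handle in that row if the click lands close enough to it
  for (let i = 0; i < handles[r].length; i++) {
    if (abs(mouseX - handles[r][i]) < 8) {
      draggedRow = r;
      draggedHandle = i;
    }
  }
}

function mouseDragged() {
  if (draggedRow >= 0) {
    // keep the handle inside the grid, then recompute block widths
    handles[draggedRow][draggedHandle] = constrain(mouseX, gridX, gridX + gridW);
    rebuildBlocks();
  }
}

function mouseReleased() {
  draggedRow = -1;
  draggedHandle = -1;
}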

In the future, I would like to improve the aesthetics of the user interface, and I could also add several sound-effect blocks so that users can alter the overall style of the music.

Midterm Project – Barbie Dreamhouse

Barbie’s Dreamhouse 

Link to sketch: https://editor.p5js.org/rma9603/full/y2y3-M4zC

Whenever I play with interactive art, I try to build something that invites slow exploration instead of a single-goal game. For my midterm project I built Barbie’s Dreamhouse: a small interactive world with rooms to explore (Outside → Inside → Closet, Kitchen, Bedroom, Living Room), each containing subtle objects the user can click or interact with. The goal was to create a calm, nostalgic environment that encourages clicking, discovering, and lingering.

Concept

The Dreamhouse is not a “win/lose” game, it’s an exploratory scene. The idea was to capture the cozy, pastel vibe you expect from a dreamhouse and layer in small interactive details:

  • An exterior view with a theme song and a door that rings a real doorbell when clicked.

  • An interior view with hotspots for the closet, kitchen, bedroom, and living room.

  • A Closet (Wardrobe) with multiple outfit sets and selectable frames, accompanied by sparkle effects and sound.

  • A Kitchen where you pick a cupcake base, bake it, and then decorate it with frosting.

  • A Bedroom with a clickable book that opens a reader overlay.

  • A Living Room with a TV area and a remote control for flipping through channel-like images, each paired with music that matches the picture shown.

On a personal note: I loved Barbie as a kid, and some of my favorite Barbie movies directly inspired the look and feel of the living room — the pastel decor, playful props, and the idea of a tiny TV full of different “channels” came straight from that nostalgia.

The focus was on atmosphere: soft pastel visuals, gentle audio, and small surprises that reward clicking around.

Here is a rough sketch of what I envisioned:

Key features and interactions

Room transitions

  • Click the door from the exterior to enter the house.

  • From the interior, click room hotspots to open that room full-screen.

  •  Scene state management makes switching easy.

Closet 

  • Pages of outfit frames sliced from larger sprite sheets. (took me a while to slice them correctly).

  • Click dots to switch pages/sets, arrows to cycle frames, and a ✓ button to confirm selection.

  • Sparkle overlay + sparkle sound on selection when outfit is confirmed.

  • I added a wardrobe reset so pressing R reliably restarts the closet to its initial state.

Kitchen

  • A mini workflow: pick base → bake (progress bar) → confirm → decorate (frosting).

  • Cupcake base and frosting are separate sprites; frosting is aligned to the base bottom using computed offsets so different frosted overlays sit properly.

  • Tweaked the base preview size so the cupcake base doesn’t dominate the scene.

Bedroom

  • A book hotspot that opens a reader overlay with pages.

  • Prev/Next page buttons and R to close.

Living room

  • The living room is where my childhood Barbie inspiration shows most — pastel furniture, framed photos, and a playful TV nook. I built a small TV area with channel-like images and a responsive remote so users can flip through visuals like changing channels in a cozy movie night.

  • TV image area and remote hotspots scale responsively with the canvas; the living room’s color and props riff off my favorite Barbie movies.

Audio

  • Background theme for the exterior (looping Barbie Life in the Dreamhouse theme song).

  • Doorbell sound that plays when clicking the door — the theme song stops when the door is pressed so the bell is audible and the audio doesn’t overlap.

  • Special audio for sparkle, baking ding, closet music, and bedroom lullaby — all are conditionally played/stopped when entering/exiting rooms or selecting items.

Workflow & what went well

  1. Scene manager + state machine
    Making a small SceneManager (a global state variable with states like outside, inside, closet, bedroom, living room) made it trivial to manage transitions and keep room-specific logic isolated (a minimal sketch of this idea follows after this list).
  2. Drawing and assets
    I drew assets in  Canva and used AI to generate some of the visuals  at the same size as the p5 canvas where possible — this hugely simplified positioning and saved time. For sprite sheets (like cupcake bases/frostings and outfits) I sliced frames programmatically so I could treat them like tiled sprites.
  3. Small polish details
  • Preventing continuous hover sounds (door bell) by gating the knock with a boolean.

  • Ensuring music doesn’t layer (check .isPlaying() and .pause() before starting a new track).

  • Adding a “sparkle” overlay and stopping closet music when confirming a selection so the sparkle sound can be heard.
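A minimal sketch of the state variable plus the doorbell gating; the names scene, hitDoor(), drawOutside(), and drawInside() are placeholders for my actual variables and functions:

// Global scene state: "outside", "inside", "closet", "kitchen", "bedroom", "living"
let scene = "outside";
let doorbellPlayed = false; // gate so repeated clicks don't retrigger the bell

function mousePressed() {
  if (scene === "outside" && hitDoor(mouseX, mouseY)) {
    if (themeSong.isPlaying()) themeSong.pause(); // avoid overlapping audio
    if (!doorbellPlayed) {
      doorbellSound.play();
      doorbellPlayed = true;
    }
    scene = "inside";
  }
}

function draw() {
  if (scene === "outside") drawOutside();
  else if (scene === "inside") drawInside();
  // ...each room gets its own draw function, keeping room logic isolated
}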

What coding I’m proud of

The piece of code I’m proudest of is honestly the whole kitchen, specifically the sprite-slicing and alignment system. I wrote utilities that trim transparent pixels from sprite frames, compute each frame’s visual center/bottom, and then use those offsets to automatically align frosting to the cupcake base across many different sprite sheets and sizes. That makes wildly different art assets behave as a single cohesive object without manual per-frame positioning. It also required careful handling of canvas scaling, timing (bake/ding), and audio overlap — a lot of little edge cases that had to work together cleanly.

 // slice sprites  — trims each frame and computes offsets on trimmed images
  _sliceSprites() {
    // base frames
    this.baseFrames = [];
    this.baseOffsets = [];
    if (
      this.cupcakeImg &&
      this.cupcakeImg.width &&
      this.cupcakeImg.height &&
      this.baseCount > 0
    ) {
      const g = this._bestGridFor(this.cupcakeImg, this.baseCount);
      const fw = Math.round(this.cupcakeImg.width / g.cols);
      const fh = Math.round(this.cupcakeImg.height / g.rows);
      let idx = 0;
      for (let r = 0; r < g.rows; r++) {
        for (let c = 0; c < g.cols; c++) {
          if (idx >= this.baseCount) break;
          const sx = c * fw,
            sy = r * fh;
          try {
            const raw = this.cupcakeImg.get(sx, sy, fw, fh);
            const trimmed = this._trimTransparent(raw) || raw;
            this.baseFrames[idx] = trimmed;
            this.baseOffsets[idx] = this._computeContentBounds(trimmed);
          } catch (e) {
            this.baseFrames[idx] = null;
            this.baseOffsets[idx] = {
              xOffset: 0,
              yOffset: 0,
              maxY: Math.floor(fh / 2),
            };
          }
          idx++;
        }
      }
    }

    // frosting frames
    this.frostingFrames = [];
    this.frostingOffsets = [];
    if (
      this.frostingImg &&
      this.frostingImg.width &&
      this.frostingImg.height &&
      this.frostingCount > 0
    ) {
      const g = this._bestGridFor(this.frostingImg, this.frostingCount);
      const fw = Math.round(this.frostingImg.width / g.cols);
      const fh = Math.round(this.frostingImg.height / g.rows);
      let idx = 0;
      for (let r = 0; r < g.rows; r++) {
        for (let c = 0; c < g.cols; c++) {
          if (idx >= this.frostingCount) break;
          const sx = c * fw,
            sy = r * fh;
          try {
            const raw = this.frostingImg.get(sx, sy, fw, fh);
            const trimmed = this._trimTransparent(raw) || raw;
            this.frostingFrames[idx] = trimmed;
            this.frostingOffsets[idx] = this._computeContentBounds(trimmed);
          } catch (e) {
            this.frostingFrames[idx] = null;
            this.frostingOffsets[idx] = {
              xOffset: 0,
              yOffset: 0,
              maxY: Math.floor(fh / 2),
            };
          }
          idx++;
        }
      }
    }
  }

 

Areas for improvement / future work

  • Add instructions or an optional guided mode (right now the experience is intentionally exploratory, but an in-game menu could help some users).

  • Refine click detection for non-rectangular images (pixel-perfect hit testing for PNGs with transparency); see the sketch after this list.

  • Add more kitchen interactions: coffee machine, more decoration options, or an inventory for outfits.
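For the pixel-perfect hit testing mentioned above, one common approach (a sketch, not code from this project) is to sample the image’s alpha channel at the clicked point:

// Returns true only if the click lands on a non-transparent pixel of img,
// where (x, y, w, h) is the rectangle the image is drawn into.
function hitOpaquePixel(img, x, y, w, h, mx, my) {
  if (mx < x || mx > x + w || my < y || my > y + h) return false;
  // map canvas coordinates back to image coordinates
  const ix = floor((mx - x) * img.width / w);
  const iy = floor((my - y) * img.height / h);
  const px = img.get(ix, iy);   // [r, g, b, a]
  return px[3] > 10;            // treat near-zero alpha as transparent
}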


What I learned

  • Breaking the app into small room controllers (Wardrobe, Kitchen, Bedroom, LivingRoom) makes the codebase much easier to maintain and debug.

  • Small details matter: gating hover sounds, preventing overlapping music, and subtle visual feedback (sparkle, dots) make the experience feel much more polished.

  • Drawing assets at canvas scale saves tons of time when positioning interactive pieces.

Closing

I loved Barbie when I was a kid, and designing this project felt like a grown-up, interactive love letter to those movies, especially when building out the living room. I enjoyed making something soft and low-pressure that rewards clicking and exploration. The Dreamhouse was a great exercise in scene management, responsive layout, and polishing interactions that make users want to hang out in a piece of art.

Midterm Project

Inspiration

The idea for Blackhole was born from one of my all-time favorite casual games: hole.io. The core concept of “the more you eat, the more you grow” fascinated me. However, my goal was never to recreate an existing game, but to reinterpret that mechanic in an entirely different environment.

In hole.io, the objective is to have the largest area among all players, consuming the environment and other players to keep your crown. So I thought: what if I reimagined that concept in space? What if the hole was a black hole preying on everything in its path? This became the foundation of my project: a space-themed, single-player survival game centered around growth, gravity, and chaos.

Concept

Blackhole is a single-player arcade-style game set in deep space. The player controls a blackhole that grows by consuming asteroids and powerups, while avoiding larger threats.

To bring the theme to life, I envisioned asteroids as natural “food” for the blackhole (blending the space aesthetic with the satisfying loop of consuming and expanding). To make the gameplay more dynamic, I added powerups that appear on screen at random intervals, offering temporary boosts or effects to make the experience more engaging.

The main goal is simple: survive as long as possible while growing your blackhole and collecting stars for bonus points. The longer you last, the higher your score.

Implementation

Developing Blackhole was an iterative process. I began by prioritizing functionality, ensuring the mechanics worked before focusing on visuals or audio.

Blackhole

The foundation of the game lies in the Blackhole class, responsible for handling player growth, shrinking, and interaction with other entities. For simplicity, I represented all objects (asteroids and powerups) as circles. This made collision detection intuitive and computationally efficient. Using the dist() function in p5.js, I determined when the player’s blackhole had consumed another object or collided with a threat.

Blackhole Class:

class Blackhole{
  constructor(){
    this.size=25;
    this.speed=3;
    this.x = 100;
    this.y = 100;
  }
  
  // call when eats star or expand powerup
  grow(){
    this.size *= 1.2;
  }
  
  shrink(){
    this.size *= 0.8;
  }
  
  move() {
    if (keyIsDown(UP_ARROW)) {
      // guarding against boundaries
      if (this.y - this.speed > 0){
        this.y -= this.speed;
      }
    }
    if (keyIsDown(DOWN_ARROW)) {
      // guarding against boundaries
      if (this.y + this.speed < height){
        this.y += this.speed;
      }
    }
    if (keyIsDown(RIGHT_ARROW)) {
      // guarding against boundaries
      if (this.x + this.speed < width){
        this.x += this.speed;
      }
    }
    if (keyIsDown(LEFT_ARROW)) {
      // guarding against boundaries
      if (this.x - this.speed > 0){
        this.x -= this.speed;
      } 
    }
  }
  
  display(){
    stroke('white');
    strokeWeight(2);
    fill('black');
    circle(this.x, this.y, this.size);
  }
  
  // returns boolean
  collides(obj) {
    let d = dist(this.x, this.y, obj.x, obj.y);
    return d <= this.size/2;
  }
}
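As a usage illustration (simplified from the real game loop), the collision check could be applied each frame roughly like this; powerup handling and scoring are omitted, and player and asteroids stand for the global instances the game keeps:

// Inside draw(): consume any smaller asteroid the black hole overlaps
for (let i = asteroids.length - 1; i >= 0; i--) {
  let a = asteroids[i];
  if (player.collides(a)) {
    if (a.size < player.size) {
      player.grow();            // eat it and grow
      asteroids.splice(i, 1);   // remove the consumed asteroid
    }
  }
}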

Powerups and Movement

Initially, I designed the powerups to spawn randomly and stay on screen indefinitely. However, I quickly realized that this cluttered the visuals and made it hard to differentiate between background stars and powerups. Even adding a blinking effect before disappearance didn’t feel natural.

Taking inspiration from Tetris Rush, I redesigned the mechanic. Now, powerups drop from the top of the screen and disappear at the bottom, creating a natural movement cycle. This solved both visibility and clutter issues while keeping the game lively and visually coherent.

Powerup Class:

class Powerups {
  constructor(_x, _y, _type, _img) {
    this.x = _x;
    this.y = _y;
    this.type = _type;
    this.img = _img;
    this.fallSpeed = random(1, 3); // random falling speed
  }

  // to have powerups fall from up to down
  move() {
    this.y += this.fallSpeed;
    if (this.y > height) {
      this.y = height;
    }
  }

  display() {
    textSize(20);
    
    if (this.type == "fast") {
      this.size = textWidth(''); // for detecting collissions
      text('', this.x, this.y);
    } 
    else if (this.type == "freeze") {
      this.size = textWidth('❄️'); // for detecting collissions
      text('❄️', this.x, this.y);
    } 
    else if (this.type == "grow") {
      // image for grow
      push();                   
      imageMode(CENTER);
      this.size = 75; // for detecting collissions
      image(this.img, this.x, this.y, this.size, this.size)
      pop();
      
    } 
    else if (this.type == "shrink") {
      // image for shrink
      push(); 
      this.size = 40; // for detecting collissions
      imageMode(CENTER);
      image(this.img, this.x, this.y, this.size, this.size)
      pop();
      
    } 
    else if (this.type == "star") {
      this.size = textWidth('⭐️'); // for detecting collissions
      text('⭐️', this.x, this.y);
    } 
    else if (this.type == "bomb") {
      this.size = textWidth(''); // for detecting collissions
      text('', this.x, this.y);
    }
  }
}

Asteroids

For this class, the difficult part was preventing the asteroids from moving in predictable or repetitive patterns. To make the game feel organic and chaotic, I implemented an Asteroid class that allows each asteroid to move in random directions and change course periodically.

Each asteroid picks a random direction from a list of possible vectors, including diagonal and straight-line movement, and updates its velocity every few seconds. This ensures the movement feels spontaneous, mimicking the drifting, floating nature of space.

Asteroid class

class Asteroid {
  constructor(_x, _y, _img, _size) {
    this.x = _x;
    this.y = _y;
    this.img = _img
    this.size = _size;

    // separate velocities
    // this.vx = random([-1, 1]); // left or right
    // this.vy = random([-1, 1]); // up or down
    this.speed = 1.5;
    
    this.setRandomDirection();
  }

  // asteroids move in any direction randomly
  setRandomDirection() {
    let choices = [
      [1, 0],   // right
      [-1, 0],  // left
      [0, 1],   // down
      [0, -1],  // up
      [1, 1],   // diagonal down-right
      [-1, 1],  // diagonal down-left
      [1, -1],  // diagonal up-right
      [-1, -1]  // diagonal up-left
    ];

    let dir = random(choices);
    this.vx = dir[0]; // x-axis dir
    this.vy = dir[1]; // y-axis dir
  }

  move() {
    
    // every 2s change direction
    if (frameCount % 120 === 0) {
      this.setRandomDirection();
    }
    
    this.x += this.vx * this.speed;
    this.y += this.vy * this.speed;

    // Bounce off edges
    if (this.x < 0 || this.x > width)this.vx *= -1;
    if (this.y < 0 || this.y > height) this.vy *= -1;
  }
  
  display() {
    push();                   
    imageMode(CENTER);
    image(this.img, this.x, this.y, this.size, this.size);
    pop();
  }
}

Code I’m Proud of

One of the features I’m most proud of implementing is the freeze and speed-up powerups. Increasing or decreasing the speed at the moment of collision was easy, but the question was how to revert those changes back to their original state after a fixed duration (around 4 seconds).

I knew I could use frameCount to track time progression, but the question was where and how to integrate it efficiently without breaking the game loop. The solution came through two global variables: freezeTimer and fastTimer.

Whenever the player collided with a powerup, the corresponding timer was assigned a specific value. In each frame of the game, these timers would decrement until they reached zero. Then the original state was restored.

// timers
  if (freezeTimer > 0) {
    freezeTimer--; // keep decreasing with every frame count
  } else if (fastTimer > 0) {
    player.speed = 5;
    fastTimer--; // keep decreasing with every frame count
  } else {
    // resetting speed for player and aesteroids
    player.speed = 3;
    for (let asteroid of asteroids) {
      asteroid.speed = 2;
    }
  }
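On the pickup side, the timers are set when the player collides with the matching powerup; the 240-frame value here (about 4 seconds at 60 fps) is an assumed number based on the description above:

// Rough sketch of setting the timers when a powerup p is collected
if (player.collides(p)) {
  if (p.type === "freeze") {
    freezeTimer = 240;                 // ~4 seconds at 60 fps (assumed value)
    for (let asteroid of asteroids) {
      asteroid.speed = 0;              // freeze the asteroids
    }
  } else if (p.type === "fast") {
    fastTimer = 240;                   // ~4 seconds at 60 fps (assumed value)
  }
}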

Design and Visuals

For the backdrop, the choice was obvious: a starry night sky. I later replaced my simple circular asteroids with actual asteroid images to enhance immersion.

Sound design also became an important aspect. I added distinct sound cues for eating asteroids and activating powerups. Interestingly, while I was coding, the song “All the Stars” started playing and it immediately felt like the perfect fit. So I included it as background music, giving the game a cinematic atmosphere.

Instructions Screen

Game Screen

Game Over screen

Key Challenges and Solutions

  1. Resizing Mechanics
    One of the more subtle but tricky challenges I faced was maintaining a consistent visual layout across different screen sizes. Whenever I resized the window on the instructions or Game Over screen, the sketch threw an error and painted an empty black backdrop. This happened because I used noLoop() to freeze the game on both the instruction screen and the Game Over screen. To solve this, I made sure that after resizing, the program calls the redraw() function (even in a noLoop() state) to refresh the layout and correctly reposition all visual elements according to the new width and height (a small sketch of this fix appears after the next challenge).

  2. Blackhole and Asteroid size

Another major design challenge came from the scaling mechanics of the Blackhole and the Asteroids. Since the central idea of the game revolves around “eating to grow,” the player’s size could increase indefinitely if left unchecked, eventually covering the entire canvas and breaking the gameplay. Similarly, large asteroids could spawn that were nearly impossible to avoid or defeat, making the game feel unfair or chaotic.

To address this, I implemented maximum and minimum size constraints for both the Blackhole and the Asteroids. The Blackhole’s growth is now capped at a defined MAX_SIZE, preventing it from expanding beyond the playable area, while asteroids are spawned within a limited size range relative to the player’s current size. This balancing ensures that gameplay remains challenging yet visually coherent.
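Here is a small sketch of both fixes; MAX_SIZE, MIN_SIZE, and the exact numbers are assumptions, and the class is a simplified stand-in for the one shown earlier:

// Assumed caps; the real values depend on the canvas size
const MAX_SIZE = 300;
const MIN_SIZE = 15;

class Blackhole {          // simplified stand-in for the class shown earlier
  constructor() { this.size = 25; }
  grow()   { this.size = min(this.size * 1.2, MAX_SIZE); } // cap growth
  shrink() { this.size = max(this.size * 0.8, MIN_SIZE); } // cap shrinking
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  redraw(); // draws one frame even while noLoop() has the sketch paused
}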

Potential Developments

While I’m proud of the current version of Blackhole, there’s plenty of room for evolution.

  • Improved Scoring System:
    Currently, the score depends on survival time and the number of stars collected. In the future, I’d like to integrate the player’s size into the scoring formula (though my earlier attempts caused the score to grow too rapidly to track meaningfully).

  • Explosive Powerups:
    Introducing a special powerup that destroys all asteroids within a radius could add an exciting burst element and reward risky gameplay.

  • Dynamic Difficulty:
    Over time, the game could progressively increase asteroid speed or bomb spawn rate, keeping the challenge alive as the player grows.

Midterm Project – Goalkeeper Challenge

Concept
As a kid, I admired strikers for scoring beautiful goals, but now I understand what a difficult and beautiful job goalkeepers have in saving their team. My admiration for goalkeeping grew, especially after watching the incredible 2022 World Cup Final between Argentina and France in Qatar.

In the game I’ve made, the player moves the goalkeeper’s gloves with a mouse or trackpad to save penalties. Each player starts with three lives and loses one for every goal conceded. Players can unlock new skins for their gloves. Making 10 saves in a single game unlocks the first skin, and reaching 15 saves in one session unlocks a second one.

The game also saves each player’s personal best record, allowing multiple users to compete and track their high scores. There is also a special golden ball: saving it grants an extra life (+1), but conceding a goal from it costs two lives instead of the usual one. The controls are simple: press ‘F’ for fullscreen and ‘S’ to view and select skins. There is also a hidden ‘R’ key to reset the game and clear all data.

Highlight of the code

function saveUserData() {
  let userData = {
    highScore: highScoreSaves,
    unlockedSkins: unlockedSkins
  };
  localStorage.setItem(currentUser, JSON.stringify(userData));
}

One smart technical decision I learned in my Intro to CS class was to automatically save each player’s progress and unlocked skins. This ensures that players don’t lose their scores and achievements when they close or reload the game. It definitely adds to the experience, making it feel like a real game that saves user accounts and their rewards.
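The loading counterpart isn’t shown above, but it could look roughly like this; loadUserData is a hypothetical helper name that reuses the same variables as the save snippet:

// Hypothetical counterpart to saveUserData(): restore progress when a user logs in
function loadUserData() {
  let stored = localStorage.getItem(currentUser);
  if (stored) {
    let userData = JSON.parse(stored);
    highScoreSaves = userData.highScore;
    unlockedSkins = userData.unlockedSkins;
  } else {
    highScoreSaves = 0;   // new player: nothing saved yet
    unlockedSkins = [];
  }
}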

if (dist(ball.x, ball.y, keeper.x, keeper.y) < keeper.size / 2) {
  saveSound.play();
  saves++;
  ball.reset();
}

Another important feature is collision detection, which I implemented using the mathematical dist() function. The method of checking for an overlap between two circles, which was also taught in my Intro to CS class, is a much easier way to calculate collisions in a simple game. When a collision is detected, the code plays the ‘save’ sound, increments the save counter, and resets the ball to its starting position. I think this approach is very simple, short, and super-efficient.

Sketch
https://editor.p5js.org/da3490/sketches/e03ZtWqpi

Reflection
I drew on my previous experience creating a Jetpack Joyride duplicate in my Intro to CS class to build this game’s features, including the login and menu pages, score tracking, user record saving, collision detection, and skins. While developing the UI took some time, it wasn’t logically difficult.

I would say that making the game adaptive to different screen sizes took the most effort. Instead of using fixed pixel values, I had to use percentages and mathematical relationships, such as width * 0.5 and height * 0.3. This approach was tricky but ensures the game looks and works properly on any device. I also added glow and glass effects to enhance the UI aesthetics. Using drawingContext and shadowBlur, I created transparent rectangles that produced a modern ‘glass’ look. While this style may be less popular now, it was a prominent trend in 2023-2024.
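As an illustration of the glow/glass idea (the values here are assumptions, not my exact parameters):

// Rough sketch of a "glass" panel with a soft glow, sized relative to the canvas
function drawGlassPanel() {
  push();
  drawingContext.shadowBlur = 30;
  drawingContext.shadowColor = 'rgba(255, 255, 255, 0.6)';
  noStroke();
  fill(255, 255, 255, 40);                                          // translucent white for the glass look
  rect(width * 0.25, height * 0.3, width * 0.5, height * 0.4, 20);  // proportional sizing
  pop();
}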

Future improvements
For future improvements, I would love to replicate FIFA-style penalties with a two-player mode: one player aims and shoots while the other plays as the goalkeeper.

To achieve a more realistic, front-facing perspective instead of the current view, I would need to implement 3D-like mechanics. This would involve angle calculations and scaling the ball’s size to create a sense of depth. The goalkeeper would use keyboard buttons to move in all eight directions (left, right, up, down, and diagonals) and would have a jump mechanic to make saves. For the shooter, the mouse would control the aim, while holding down the spacebar would determine the power of the shot.

Midterm Draft 1

Concept:

For my midterm project, I decided to create a football obstacle game where the player dribbles through moving defenders to score against a moving goalkeeper. The game gets progressively harder after each goal: defenders increase in number and speed, and the goalkeeper moves faster. The player clicks to shoot when near the goal.

The aim is to design a game that is fun and interactive, combining timing, precision, and quick thinking. I will also include engaging visuals and sounds to give feedback for player actions, like kicking the ball, scoring, or colliding with defenders. This makes the game feel immersive and responsive.

Design:

The game starts with an on-screen instruction explaining the controls: the arrow keys to move, and the mouse click to shoot. Only when the player presses a key or clicks the button does the game begin. The player, defenders, goalkeeper, and the ball in the game will be represented using circles for simplicity and visual consistency. The player will be a larger colored circle that moves freely across the field in response to user input, with the ball represented as a smaller circle attached to the player while dribbling. Defenders will be smaller circles in distinct colors, moving along random paths to create challenges, while the goalkeeper will be a medium-sized circle that moves horizontally along the goal line to block shots. When the player shoots, the ball will separate from the player and travel independently toward the goal.

Classes to be used:

  • Player Class: Controls movement and shooting. Properties include position, size, and speed. Methods or functions include display(), move(), and shoot().

  • Defender Class: Moves across the field, increasing in number and speed as the game progresses. Properties include position, size, speed, and direction. Methods or functions include display() and move()

  • Goalkeeper Class: Moves left and right along the goal line. Properties include position, width, height, and speed. Methods or functions include display() and move()

  • Ball Class: Moves toward the goal when shot. Properties include position, size, speed, and moving. Methods or functions include display() and move().

Gameplay Flow:

  1. Start Screen: Displays instructions like “Use arrow keys to dribble, click to shoot, avoid defenders.”

  2. Gameplay:

    • Player moves freely around the field with the ball

    • Defenders move continuously in patterns.

    • Player avoids collisions with defenders.

    • When near the goal, clicking the mouse shoots the ball.

  3. Scoring & Difficulty:

    • Passing the goalkeeper scores +1.

    • After each goal, more defenders appear, defender speed increases, and goalkeeper speed increases slightly.

    • Player position resets for the next attempt.

  4. Restart / New Session:

    • Players can restart without refreshing the page.

    • Score and high score are displayed.

Frightening/Challenging Aspect:

The most challenging aspect of this project will likely be keeping the ball consistently attached to the player while allowing free movement around the field. Although the player and ball are conceptually separate objects, the ball must move in perfect sync with the player in all directions – up, down, left, right, and diagonally, which can be difficult to achieve smoothly. This challenge becomes even greater when combining it with collision detection against multiple defenders and a moving goalkeeper. The ball must remain attached until the player decides to shoot, at which point it detaches and moves independently toward the goal.

But this is also the most important part of the project, and implementing it correctly will help create an engaging and fun experience.

Risk Reduction:

To reduce the risk of issues with keeping the ball attached to the player, I plan to implement a step-by-step testing approach. First, I will start by coding the player and ball as separate objects and linking the ball’s position directly to the player’s coordinates with an offset, ensuring that it moves smoothly in all directions. I will test this initially without any defenders or a goalkeeper to confirm that the ball follows perfectly. Next, I will gradually add collision detection with a single defender, then multiple defenders, and finally the goalkeeper, checking at each stage that the ball remains properly aligned. Additionally, I will use simple shapes, such as circles, for all objects to simplify calculations.
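A minimal sketch of the attach-then-detach idea described above; the names and numbers (ballOffset, the shot speed) are placeholders:

// While dribbling, the ball is drawn at the player's position plus a fixed offset;
// once shot, it moves independently toward the goal.
let shooting = false;
let ballX, ballY;
const ballOffset = 20;            // assumed distance in front of the player

function updateBall(player) {
  if (!shooting) {
    // attached: follow the player every frame
    ballX = player.x + ballOffset;
    ballY = player.y;
  } else {
    // detached: travel toward the goal on its own
    ballY -= 6;
  }
}

function mousePressed() {
  shooting = true;                // detach the ball and shoot
}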

Midterm Progress

Concept:

I think it’s safe to say that a lot of events on campus have photobooths, and so do some stores in malls. I wanted to do something similar, but fully digital. While my current sketch isn’t very aesthetic yet, I plan to model it after the Urban Outfitters photobooth, adding playful visuals and frames to make the experience interactive and fun, like a real-life booth.

The user interacts with the photobooth through a series of pages. They start at the main booth screen, where they can click a button to enter their name on the Start Page. After submitting their name, they move to the Filter Page to see a live video feed and choose a filter by pressing the arrow keys. Once they select a filter and click “OK,” they reach the Frame Screen, where they can take a photo. Finally, the Thank You Screen appears with an option to restart, which loops them back to the booth, letting them take another photo. Each page guides the user step by step, making the experience clear and interactive.

Code design :

All the pages are in their own class.

    • PhotoBooth: The main screen where users start. Displays the booth and a button to enter the next step.
    • StartPage: Handles the user’s name input and the “Start” button to move to the filter selection.
    • FilterPage: Shows a live video feed and lets users choose a filter using arrow keys, with an “OK” button to proceed.
    • FrameScreen: Applies the selected filter to the video and allows the user to take a photo with a camera button.
    • ThankYouScreen: Confirms the photo has been taken and offers a button to restart the booth.
    • Flow control: currentScreen determines which class is displayed, and button/key interactions manage transitions between screens.
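Here is a rough sketch of that flow control; the instance names and the hideButtons() helper are hypothetical stand-ins for my actual objects:

// draw() simply delegates to whichever screen object is active
function draw() {
  if (currentScreen === "booth") photoBooth.display();
  else if (currentScreen === "start") startPage.display();
  else if (currentScreen === "filter") filterPage.display();
  else if (currentScreen === "frame") frameScreen.display();
  else if (currentScreen === "thanks") thankYouScreen.display();
}

// a transition hides the old screen's buttons and updates the state
function goToFilterPage() {
  startPage.hideButtons();   // hypothetical helper wrapping the .hide() calls
  currentScreen = "filter";
}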

Difficulties:

I think the FilterPage class was the most troubling part to implement. I had to use the translate function to mirror my video feed, because I wanted a mirrored picture as the result, and I needed this to apply only to the video, not to the text and button.

//mirror the video (so it feels like a mirror)
push();
translate(width, 0);
scale(-1, 1);
image(this.video, 0, 0, width, height);
pop();

Another key part of the project is the filters and how the selection works. Each filter is applied as an overlay on the live video feed, and the user can browse through them using the left and right arrow keys. I also made sure the selection loops continuously, so moving past the last filter brings you back to the first one.

//sepia filter overlay
if (this.filters[this.currentFilter] === "sepia") {
  fill(112, 66, 20, 60);
  noStroke();
  rect(0, 0, width, height);

//black & white filter using canvas filter
} else if (this.filters[this.currentFilter] === "bw") {
  push();
  tint(255, 255);
  drawingContext.filter = "grayscale(100%)";
  translate(width, 0);
  scale(-1, 1);
  image(this.video, 0, 0, width, height);
  drawingContext.filter = "none";
  pop();
}

 

nextFilter() {
  //go to next filter (wraps around)
  this.currentFilter = (this.currentFilter + 1) % this.filters.length;
}

prevFilter() {
  //go to previous filter (wraps around)
  this.currentFilter =
    (this.currentFilter - 1 + this.filters.length) % this.filters.length;
}

To deal with, or rather to minimize, the risks of these two implementations, I first tested them in a separate p5.js sketch, and once I made sure they worked how I wanted, I added them to my project flow. I also added console logs to help me debug and then removed them once everything was working as expected.

Lastly, this project really tested my understanding of how much order matters. I had to use .hide() a lot to make sure buttons and features weren’t carried over between screens.

Reflection:

Now that the structure is solid, I can focus on making it look appealing. I’m open to any suggestions and even suggestions for my current flow!

Here’s my Sketch:

Reading Reflection Week 5

I’ve always thought of digital art and computer vision as really complex concepts, almost intimidating, because they felt so tied to technical expertise. What I appreciated about Levin’s article is how he breaks it down so it feels a little less intimidating. Reading it, I realized that these tools don’t always need to be heavy or advanced to be meaningful; even simple algorithms can create powerful, interactive experiences. 

That point resonated with me because I’ve actually worked on a VR project before for a class final. It was a “build your own concert” type of game, and looking back, I can see how much it connected to the kind of work Levin describes. We weren’t doing anything revolutionary with the code, but the way people could move and interact with the environment really made the experience. Especially since our target audience was those who can’t attend concerts due to health concerns. His emphasis on preparing the environment as much as the software clicked with me here. 

One of the main ways human and computer vision differ is interpretation. We humans can think outside the box, make different connections, see things from various angles, and recognize objects in their many forms; computers see things as a set of pixels and only ‘think’ in terms of structured algorithms. It’s very literal. What happens when we combine both?

This is why I was especially drawn to Krueger’s Videoplace. I’d read about him in Understanding IM before, and I’ve always liked his idea of using the human body itself as the interface. It made me think about how Steve Jobs framed the first iPhone: our fingers are the “best pointers out there,” which is why touchscreens felt so natural. It’s a reminder that sometimes the most effective tech feels the least like “technology” and the most like an extension of ourselves.

At the same time, I couldn’t help but think about the ethical side. While Levin highlights the playfulness and creative potential of computer vision, I also worry about issues like privacy and misuse. As an artist or designer, you can try to account for these risks, but sometimes it’s beyond your control once the technology is out in the world. For me, that overlap between limitless creativity and real-world responsibility is something I want to keep in mind as I continue building my portfolio in IM.

Overall, this reading made computer vision feel less like an intimidating black box and more like a tool I could actually experiment with. It left me curious not just about what I can create, but about how to do it thoughtfully.

Reading Reflection – Week 5

I found the essay to be incredibly refreshing and insightful. I believe the distinction it draws between computer and human vision is a crucial one; it’s not simply that computers are a less sophisticated version of us, but that they perceive the world in a fundamentally different, more literal and mathematical way. It’s fascinating to think that a complex process like “seeing” can be broken down into these logical, computational steps. I liked the author’s emphasis on the idea that we don’t just need to write better code, but we also need to create environments that are conducive to how computers “see.” The practical advice about using controlled lighting to create high-contrast silhouettes or employing infrared light to track objects without visual distraction was really nice to learn about. It makes me think that the art of computer vision in an interactive setting is as much about stagecraft and environmental design as it is about programming, which is a perspective I hadn’t considered before.

The essay’s discussion on the role of tracking and surveillance in interactive art was, I think, the most thought-provoking part. It raises profound questions about the relationship between the observer and the observed, and how that dynamic shifts when the artwork itself is watching you. The concept of the “gaze” of the machine is something I find both compelling and a little unnerving, and the essay really delves into that duality. I liked that it pushed me to consider the ethical implications of these technologies in art. When an installation is collecting data on its viewers’ movements and interactions, it brings up important questions about privacy and how that data is used. The idea that surveillance technologies can be repurposed for play, self-discovery, and creating beautiful, emergent systems is a powerful counter-narrative to the more dystopian applications we often hear about.

Midterm Progress

Concept

Well, for my project, I have always been really inspired by the Japanese art of Kintsugi (金継ぎ). What I love about it is its philosophy: instead of hiding the cracks in broken pottery, it highlights them with gold. It treats the breakage and repair as a beautiful part of the object’s history, and I find that incredibly interesting. This is why I want to bring that idea into a digital space.

For now, I think I’ll call my project the “Kintsugi Canvas.” I don’t intend it to be a typical drawing tool; I want to build an interactive artwork where your first action is to break something. You simply shatter a digital canvas with a click, and then watch as the program algorithmically mends the damage with simulated gold, creating a unique piece of art from the flaw you introduced. Of course, this is the base idea for now, but I’ve been thinking about the kind of object I want to be fixed; is it a bowl that the user breaks and then repairs (as seen in the photo below)? I haven’t decided yet, but I’m pretty sure it’ll be something along those lines. My goal is to make it like a game, so I want to fit all these elements together.

The Art of Kintsugi: “In the fractures, we find our light.” — Zen Art Gallery

Gameplay Loop

The core gameplay loop is a simple, two-step journey that puts the player in the role of both destroyer and observer.

  • Shatter: The player’s primary action. A single click on the blank canvas triggers a fracturing process. Cracks spread dynamically from the point of impact, breaking the canvas into unique polygonal shards.

  • Mend: The game’s generative response. Once shattered, the system automatically traces the new cracks with an animated golden seam, “healing” the canvas. The player watches as a beautiful, final artwork emerges from the damage they initiated.

For now, until I decide on the actual technicalities of the game, I’ve decided to keep the controls minimal and intuitive: click to start a new round, press ‘S’ to save your creation, and ‘R’ to reset the canvas. These will, of course, expand once I have settled on the idea completely.

So, to state it simply, the game will give users places to go, and from there they can shatter or mend objects. For now, I’m planning on a museum, bedrooms, and other personal spaces. Some examples I’m planning to incorporate as the base are:

(Reference scene images, generated in the same art style: a museum with paintings; a cozy dorm room with a red recliner suitable for storytelling; a Christmas tree with lights.)

You simply choose an object, and you can shatter it, mend it, etc!

The Core Design Challenge

For the game to be compelling, the act of breaking the canvas has to feel satisfying and organic. The biggest technical and design challenge is generating crack patterns that look natural. A static or predictable pattern would make the core mechanic feel repetitive and robotic, ruining the player’s experience. The entire project hinges on making this moment of destruction feel alive.

The Solution

To explore this, I’m planning a concept focused entirely on procedural generation: a system of agents I call “growers.” Each grower is a point with a position and direction that carves a path. By giving these agents a simple set of rules—move, wander slightly, branch occasionally, and die on collision—complex, organic patterns emerge, which is exactly what the project needs.
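Here is a first sketch of the grower idea; the step size, wander range, and branch probability are placeholder values, and collision with existing cracks is omitted for now:

// Each grower carves a crack: step forward, wander, occasionally branch, die at the edges
class Grower {
  constructor(x, y, angle) {
    this.pos = createVector(x, y);
    this.angle = angle;
    this.alive = true;
  }
  step(growers) {
    if (!this.alive) return;
    this.angle += random(-0.2, 0.2);                         // slight wander
    const next = p5.Vector.add(this.pos, p5.Vector.fromAngle(this.angle, 3));
    stroke(212, 175, 55);                                     // gold-ish seam color
    line(this.pos.x, this.pos.y, next.x, next.y);
    this.pos = next;
    if (random() < 0.02) {                                    // occasional branch (assumed rate)
      growers.push(new Grower(this.pos.x, this.pos.y, this.angle + random(-1, 1)));
    }
    if (next.x < 0 || next.x > width || next.y < 0 || next.y > height) {
      this.alive = false;                                     // die when leaving the canvas
    }
  }
}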