Week 5 – Midterm Progress

Inspiration

The idea is to create a top-down shooter game. I was inspired by Nuclear Throne, where players control a character in a confined space and must shoot incoming enemies. The thrill of dodging enemies while strategically shooting at them creates an engaging, fast-paced gameplay experience. The goal was to recreate this immersive feeling while keeping the implementation simple and beginner-friendly using p5.js.

Concept and User Interaction

The game concept revolves around a player-controlled character that moves around the screen and shoots at enemy units that spawn randomly and chase the player. The user can interact with the game in the following ways:

  • Movement: The player uses the arrow keys or WASD to move in different directions.
  • Shooting: The player shoots bullets towards the mouse cursor by pressing the spacebar.
  • Enemies: Randomly spawned enemies move towards the player and can be destroyed by bullets.
  • Survival Challenge: The player must continuously avoid enemies while shooting them down.

This simple yet engaging mechanic ensures a dynamic game experience where quick reflexes and strategic positioning are key to survival.

Designing the Code Structure

Before diving into the code, I designed a modular approach to keep the project manageable and scalable. The core elements of the game were broken down into:

  1. Player Class: Handles movement, shooting, and rendering.
  2. Bullet Class: Manages bullet behavior, movement, and collision detection.
  3. Enemy Class: Controls enemy spawning, movement, and interaction with bullets.
  4. Game Loop: Updates and renders all game elements in each frame.
  5. Collision Handling: Detects when bullets hit enemies and removes them from the game.
  6. Enemy Spawning System: Ensures a steady challenge for the player.

By structuring the game this way, each component is easy to manage and modify.

Example – Player Class:

class Player {
  constructor() {
    this.pos = createVector(width / 2, height / 2);
    this.speed = 4;
  }

  update() {
    if (keyIsDown(UP_ARROW) || keyIsDown(87)) this.pos.y -= this.speed; // 87 = W
    if (keyIsDown(DOWN_ARROW) || keyIsDown(83)) this.pos.y += this.speed; // 83 = S
    if (keyIsDown(LEFT_ARROW) || keyIsDown(65)) this.pos.x -= this.speed; // 65 = A
    if (keyIsDown(RIGHT_ARROW) || keyIsDown(68)) this.pos.x += this.speed; // 68 = D
    
    this.pos.x = constrain(this.pos.x, 0, width);
    this.pos.y = constrain(this.pos.y, 0, height);
  }

  show() {
    fill(0, 255, 0);
    ellipse(this.pos.x, this.pos.y, 30, 30);
  }

  shoot() {
    // bullets and lastShotTime are globals defined in the main sketch
    if (millis() - lastShotTime > 200) { // 200 ms cooldown between shots
      bullets.push(new Bullet(this.pos.x, this.pos.y));
      lastShotTime = millis();
    }
  }
}
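Since bullets travel towards the mouse cursor, each bullet needs a velocity pointing from the player to the cursor. A minimal sketch of that calculation in plain JavaScript (the real Bullet class would likely use p5.Vector; `bulletVelocity` is a hypothetical helper, not code from the project):

```javascript
// Direction from the player at (px, py) toward the cursor at (mx, my),
// normalized to unit length, then scaled by the bullet speed to get a
// per-frame velocity.
function bulletVelocity(px, py, mx, my, speed) {
  const dx = mx - px;
  const dy = my - py;
  const len = Math.hypot(dx, dy) || 1; // avoid dividing by zero when cursor is on the player
  return { vx: (dx / len) * speed, vy: (dy / len) * speed };
}
```

In the sketch, `Bullet` could call this once in its constructor with `mouseX`/`mouseY` and then add `vx`/`vy` to its position every frame.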

Identifying and Addressing Key Challenges

One of the most challenging parts of the project is collision detection between bullets and enemies. Ensuring that fast-moving bullets accurately register hits can be tricky, especially in a game with rapid movement and frequent object interactions. I also wanted to add a multiplayer experience so that two players could play in the same session, but I do not think that is possible without using Socket.IO.
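Since both bullets and enemies are drawn as circles, the hit test can be reduced to a distance check between centers. A sketch of that check in plain JavaScript (`hitsEnemy` and the radius parameters are placeholder names, not the project's actual code):

```javascript
// Circle-circle collision: a hit occurs when the distance between the
// two centers is less than the sum of the two radii.
function hitsEnemy(bx, by, bulletR, ex, ey, enemyR) {
  const dx = bx - ex;
  const dy = by - ey;
  const rSum = bulletR + enemyR;
  // Compare squared distances to avoid a square root per check.
  return dx * dx + dy * dy < rSum * rSum;
}
```

One caveat for fast bullets: checking once per frame can let a bullet "tunnel" through a small enemy between frames, so shrinking the step size or slightly enlarging the hit radius may be needed.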

Next Steps

Moving forward, possible improvements could include:

  • Adding different enemy types with unique behaviors.
  • Implementing a score system to track progress.
  • Introducing power-ups to enhance gameplay variety.
  • Multiplayer Mode: Implementing real-time multiplayer gameplay using Socket.IO so that two players can play together from different machines. This would involve syncing player movement, bullets, and enemies across connected clients through a Node.js server.

By integrating multiplayer functionality, the game could become even more engaging and interactive. Using real-time communication, players could strategize together, compete for the highest score, or even introduce cooperative play against waves of enemies. Setting up server-side logic to handle multiple players efficiently is a challenge but would greatly enhance the gaming experience.

Assignment 5: Midterm Progress

For the upcoming midterm project, I decided to code an interactive game. When it came to choosing a game, I was reminded of one I used to play when I was younger, Cooking Mama, which revolved around making different kinds of recipes. That, along with my own personal bias towards baked goods, gave me an idea: a memory game set in a bakery. The game itself is quite straightforward, with the user selecting a recipe and then having to select the right ingredients within a given time frame in order to make it.

The game flow starts with a menu from which the user has to select an item to bake. They are then shown the ingredients required for a few seconds in order to memorize them, then are required to select those ingredients from the shelf (within a given time limit). If they manage, the item is baked, and they are shown the winning end screen. If they fail to select all the ingredients, or select the wrong ingredients, they are shown the failure end screen.

I have managed so far to create a base version of the game, just to see whether I would actually be able to code it or not. My current code consists of a Game class, which keeps track of which dish is chosen, which ingredients are needed, and whether you won or lost. In order to play the game, the Game class also stores the different “stages” of the game, namely the start screen, menu, memory phase, selection phase, and the result phase.

class Game {
  constructor() {
    this.state = "start"; // start, menu, memory, selection, result, baking, end
    this.selectedDish = null;
    this.correctIngredients = [];
    this.ingredientShelf = [];
    this.selectedIngredients = [];
    this.timer = 500; // Updated timer for selection
    this.showPlayAgainButton = false; // Flag to show "Play Again" button
    this.showWinningMessage = false; // Flag to show winning message
    this.showFailureMessage = false; // Flag to show failure message
  }

  display() {
    if (this.state === "start") {
      this.showStartScreen();
    } else if (this.state === "menu") {
      this.showMenu();
    } else if (this.state === "memory") {
      this.showMemory();
    } else if (this.state === "selection") {
      this.showSelection();
    } else if (this.state === "result") {
      this.showResult();
    } else if (this.state === "baking") {
      this.showBaking();
    } else if (this.state === "end") {
      this.showEndScreen();
    }
  }

  handleClick() {
    if (this.state === "start") {
      this.state = "menu";
    } else if (this.state === "menu") {
      this.handleMenuSelection();
    } else if (this.state === "selection") {
      this.handleIngredientSelection();
    } else if (this.state === "end" && this.showPlayAgainButton) {
      this.resetGame();
      this.state = "menu"; // Ensure it goes back to the menu
    }
  }

  showStartScreen() {
    textAlign(CENTER, CENTER);
    textSize(32);
    text("Press Anywhere to Start", width / 2, height / 2);
  }

  showMenu() {
    textAlign(CENTER, CENTER);
    textSize(24);
    text("Choose a Dish:", width / 2, height / 4);
    let y = 200;
    for (let dish of Object.keys(ingredientsData)) {
      text(dish, width / 2, y);
      y += 50;
    }
  }

  handleMenuSelection() {
    let y = 200;
    let index = 0;
    for (let dish of Object.keys(ingredientsData)) {
      if (mouseY > y - 15 && mouseY < y + 15) {
        this.selectedDish = dish;
        this.correctIngredients = ingredientsData[dish];
        this.state = "memory";
        setTimeout(() => this.state = "selection", 3000); // Show memory phase briefly
        this.createIngredientShelf();
      }
      y += 50;
      index++;
    }
  }

  showMemory() {
    textAlign(CENTER, CENTER);
    textSize(24);
    text("Memorize These Ingredients!", width / 2, 50);
    let x = 100;
    for (let ingredient of this.correctIngredients) {
      fill(200, 100, 100);
      ellipse(x, height / 2, 50);
      textSize(14);
      text(ingredient, x, height / 2 + 40);
      x += 150;
    }
  }

  createIngredientShelf() {
    let allIngredients = ["Flour", "Egg", "Chocolate", "Sugar", "Butter", "Milk", "Vanilla", "Apple", "Salt", "Baking Powder", "Lemon", "Cinnamon"];
    this.ingredientShelf = [];
    let x = 100;
    for (let ingredient of allIngredients) {
      this.ingredientShelf.push(new Ingredient(x, height - 100, ingredient));
      x += 100;
    }
  }

  showSelection() {
    textSize(24);
    text(`Select the ingredients for ${this.selectedDish}`, width / 2, 50);
    text(`Time Left: ${Math.ceil(this.timer / 60)}s`, width - 100, 50);
    this.timer--;
    if (this.timer <= 0) {
      this.state = "result";
    }
    this.showShelf();
  }

  showShelf() {
    fill(200, 200, 200); // Shelf background
    rect(0, height - 150, width, 150);
    for (let ingredient of this.ingredientShelf) {
      ingredient.display();
    }
  }

  handleIngredientSelection() {
    for (let ingredient of this.ingredientShelf) {
      if (ingredient.isClicked(mouseX, mouseY)) {
        ingredient.selected = !ingredient.selected;
        if (ingredient.selected) {
          this.selectedIngredients.push(ingredient.name);
        } else {
          this.selectedIngredients = this.selectedIngredients.filter(i => i !== ingredient.name);
        }
      }
    }
    if (this.selectedIngredients.length === this.correctIngredients.length) {
      this.state = "result";
    }
  }

  showResult() {
    let isCorrect = [...this.selectedIngredients].sort().toString() === [...this.correctIngredients].sort().toString(); // copy before sorting, since sort() mutates in place
    textAlign(CENTER, CENTER);
    textSize(32);
    if (isCorrect) {
      this.showWinningMessage = true;
      this.showFailureMessage = false;
      this.state = "baking";
      setTimeout(() => {
        this.state = "end";
        this.showPlayAgainButton = true; // Show play again button
      }, 3000);
    } else {
      this.showFailureMessage = true;
      this.showWinningMessage = false;
      this.state = "end";
      this.showPlayAgainButton = true; // Show play again button immediately
    }
  }

  showBaking() {
    textAlign(CENTER, CENTER);
    textSize(32);
    text("Baking...", width / 2, height / 2);
    // Note: draw() calls this every frame, so a new timeout is scheduled each
    // frame; showResult() switches to "end" after 3 s, which ends this screen.
    setTimeout(() => {
      text("Ding! Your dish is ready!", width / 2, height / 2 + 50);
    }, 2000);
  }

  showEndScreen() {
    if (this.showWinningMessage) {
      textAlign(CENTER, CENTER);
      textSize(32);
      text(`Your ${this.selectedDish} is now ready!`, width / 2, height / 2);
      this.showPlayAgainButton = true; // Immediately show the Play Again button
    }

    if (this.showFailureMessage) {
      textAlign(CENTER, CENTER);
      textSize(32);
      text("Oh no! Those were the wrong ingredients!", width / 2, height / 2);
    }

    if (this.showPlayAgainButton) {
      // Display "Play Again" button
      fill(200);
      rect(width / 2 - 100, height / 2 + 80, 200, 50);
      fill(0);
      textSize(24);
      text("Play Again", width / 2, height / 2 + 105);
    }
  }

  resetGame() {
    this.selectedDish = null;
    this.correctIngredients = [];
    this.ingredientShelf = [];
    this.selectedIngredients = [];
    this.timer = 500; // Reset timer to the same value as in the constructor
    this.showPlayAgainButton = false; // Reset play again button
    this.showWinningMessage = false; // Reset winning message
    this.showFailureMessage = false; // Reset failure message
  }
}

While building the game, I am facing, and expect to keep facing, a number of challenges. The most prominent are:

  • Managing the game states: the game has multiple stages (as mentioned earlier), and I want to ensure smooth transitions between them.
  • User interaction: the player’s selection should be correctly registered, and the game must check whether the chosen ingredients match the correct set already stored.
  • Game reset: the game should clear the previous player’s selections and reset for a new round automatically.
  • Graphics creation and implementation: the graphics need to be created and then smoothly integrated into the game, without looking patchy or out of place.
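For the ingredient-matching check, one order-insensitive approach is to compare sorted copies of both lists, which also leaves the stored arrays untouched. A sketch (the function name `sameIngredients` is mine, not from the project):

```javascript
// Compare two ingredient lists regardless of selection order, without
// mutating either array (Array.prototype.sort sorts in place).
function sameIngredients(selected, correct) {
  if (selected.length !== correct.length) return false;
  const a = [...selected].sort();
  const b = [...correct].sort();
  return a.every((item, i) => item === b[i]);
}
```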

The game in its current format works, but doesn’t look very appealing. I am currently using circles as placeholders for the ingredients, though I plan to create graphics for the backgrounds, the ingredients, and the start and end screens, as well as an animation for the “baking” scene if the user succeeds. There will also be background music, with a “ding” sound effect if the user succeeds and the item is successfully baked.

 

Assignment 5 – Reading Response

This week’s reading was quite fascinating. It explored the way computers “see” pictures or videos. I had always known that a computer would not have the holistic knowledge to interpret images like we do, but this article helped explain this vision in a more in-depth way, bringing up questions I really hadn’t considered before. As humans, our brains are automatically wired to understand what we’re looking at as we already have the necessary context to understand the item and its use. Computers, however, only see pixels without any specific meaning. Our brains are capable of so much that we take for granted, such as easy recognition, adapting to different lighting, and naturally filtering out unnecessary information. On the other hand, computers need to be specially designed to carry out any of the above-mentioned tasks. They just simply lack the holistic approach that we are so attuned to.

This article also mentions several ways we can help make computer vision more effective, such as using specialized cameras and lenses suited to the tracking task. Moreover, it grapples with the relationship between these tracking capabilities and interactive artwork, exploring the dual nature of the technology: it can create wonderful interactive experiences while also enabling surveillance, a duality many find uncomfortable to accept. The article gives examples of several artists who chose to comment on this dual nature, using the technology to comment on surveillance itself and creating artworks that serve both as interactive pieces and as extended thought problems.

Week 5: Reading Response

When I first delved into Golan Levin’s discussion of computer vision, I was amazed by how fundamentally different it is from our human way of perceiving the world. We can glance at a room and instantly recognize faces, objects, or even subtle differences in lighting, but a computer needs methodical instructions to interpret even the simplest movements or contrasts. Techniques like background subtraction and frame differencing demonstrate how each pixel’s change or brightness must be computed step by step, and how crucial controlled environments can be for effective tracking. In a way, I found this both exciting and challenging: exciting because it opens up possibilities for precise, algorithmic interactions, but challenging because it shows just how quickly a system can fail if the lighting shifts or a background changes color. Through these examples, I realized that crafting the right physical setup—whether that means backlighting a subject or using distinct color markers—can dramatically improve what a computer can “see” and do.

Building on these observations, I became even more intrigued when I saw how these vision techniques are applied in interactive art. While projects like Myron Krueger’s Videoplace demonstrate the joyful, participatory side of computer vision—allowing people to engage with entire environments using body gestures—David Rokeby’s Sorting Daemon reminds us that the same technology can be used for unsettling surveillance. Reflecting on these projects, I’ve come to appreciate how computer vision can empower artists to transform spectators into active participants, yet it also raises important ethical questions about privacy and consent. This duality fascinates me: on one hand, the capacity to create immersive, responsive installations feels almost magical; on the other, the act of monitoring people, even for art, can be deeply discomforting. Levin’s exploration emphasizes that as designers and creators, we need to balance our excitement for technical innovation with a thoughtful awareness of its broader social implications.

Week 5: Midterm Progress

Concept

I’m in the process of creating a haunted-house-themed game where the player has purchased a dilapidated property, mainly because it was so cheap—no one else wanted it. The catch is that it’s infested with ghosts. My overarching goal is for the player to banish these ghosts, wave by wave, ultimately revealing the house’s true beauty once it’s free of any supernatural presence. I want to capture a sense of gradual transformation: at first, the environment is dim and unnerving, but it transitions to a bright, welcoming home as the player defeats all the ghosts.

Progress So Far

So far, I have built out a state machine that includes an intro screen, a main “PLAY” state, and placeholders for the final “WIN” and “END” screens. In the intro, the game briefly explains the story—that the house was bought cheaply because of the hauntings—then moves to the main gameplay once the user clicks the start button. The basic logic for wave progression is in place: after the initial wave of slower ghosts, I plan to introduce a second wave of faster ghosts, and ultimately a boss ghost that requires multiple hits to defeat. Each ghost’s code has been thoroughly tested in small increments, ensuring that the transition between waves feels smooth. I also integrated a rudimentary health system; each time a ghost vanishes without being clicked, the player’s health decreases, raising the stakes as they progress.

Class Implementation

A key aspect of my current setup is the Ghost class, which I designed to handle movement, timing, and click interaction. By encapsulating these behaviors, I’ve managed to keep my main draw() loop more organized and make it simpler to test the game’s logic wave by wave. Here is a condensed version of the Ghost class:

class Ghost {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.size = 60;
    this.visible = true;
    this.wasClicked = false;
    this.vx = random(-2, 2);
    this.vy = random(-2, 2);
    this.spawnTime = millis();
    this.lifespan = 5000; // 5 seconds
  }

  move() {
    if (!this.visible) return;
    this.x += this.vx;
    this.y += this.vy;
    if (this.x < 0 || this.x + this.size > width) {
      this.vx *= -1;
    }
    if (this.y < 0 || this.y + this.size > height) {
      this.vy *= -1;
    }
    if (millis() - this.spawnTime > this.lifespan) {
      this.visible = false;
    }
  }

  display() {
    if (this.visible) {
      image(ghostImg, this.x, this.y, this.size, this.size);
    }
  }

  isClicked(mx, my) {
    return (
      this.visible &&
      mx > this.x && mx < this.x + this.size &&
      my > this.y && my < this.y + this.size
    );
  }

  vanish() {
    this.visible = false;
    this.wasClicked = true;
  }
}

Using this as a foundation, I have also been working on subclasses like FastGhost and BossGhost to provide unique behaviors—faster speed, shorter lifespans, or requiring multiple hits to defeat. This object-oriented structure ensures I can easily add or modify ghost types without complicating the main game flow.

Frightening / Challenging Aspects

One of the biggest challenges I’m facing is tuning the difficulty so it feels suspenseful without being overly punishing. In particular, I need to strike a balance between ghost speed, lifespan, and the number of ghosts per wave. If ghosts vanish too slowly, the game is too easy; if they move or time out too quickly, it becomes frustrating. Another tricky part is creating a strong contrast between the haunting atmosphere at the start and the serene, beautiful environment at the end. This requires careful coordination of art assets, lighting (or color usage), and the timing of transitions so players truly feel like they’ve “rescued” the house from a dark fate.
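One way to keep those tuning knobs in one place is a per-wave settings table that the spawn code reads from. This is only a sketch with placeholder numbers, not my actual tuning:

```javascript
// Per-wave tuning knobs kept in one table, so balancing the game means
// editing numbers here rather than hunting through the spawn code.
// All numbers are illustrative placeholders.
const WAVES = [
  { count: 5, speed: 1.5, lifespan: 5000 }, // wave 1: slow ghosts
  { count: 8, speed: 2.5, lifespan: 3500 }, // wave 2: fast ghosts
  { count: 1, speed: 2.0, lifespan: 8000 }, // wave 3: boss
];

function waveSettings(waveIndex) {
  // Clamp so asking for a wave past the last entry reuses the final one.
  const i = Math.min(waveIndex, WAVES.length - 1);
  return WAVES[i];
}
```

The Ghost subclasses could then take their speed and lifespan from `waveSettings(currentWave)` instead of hard-coding them.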

Risk Prevention

I have set up the game flow using distinct states—“INTRO,” “PLAY,” “WIN,” and “END”—to keep code separated and avoid any messy overlaps. Testing each state individually helps isolate potential bugs early. I also made sure that all ghost interaction, movement, and collision logic lives within their respective classes, so if there’s an issue with a particular ghost type, I know exactly where to look for a fix. By incrementally adding waves and testing them (rather than coding all three at once), I can ensure that each wave behaves as intended and that wave transitions don’t break the health or scoring systems. This structured approach reduces the chance of large-scale errors going unnoticed until late in development.

Next Steps

I still need to fully integrate the final boss ghost, which will serve as the game’s climactic encounter. Once the boss is in place, I will refine the wave progression so that defeating the boss triggers a major visual shift—the unveiling of the house’s hidden beauty. Finally, I’ll spend time polishing transitions between states, adjusting ghost behaviors, and ensuring that the health system and scoring remain engaging but fair. My aim is for players to feel a real sense of accomplishment when they see the haunting gloom replaced by a warm, inviting dwelling, thereby completing the game’s central promise of transforming a scary, cheap purchase into a comfortable new home.

 

Week 5: Midterm Progress

Concept

Back home, one of the places I used to spend most of my time was my workspace in my room. As the saying goes, “You spend most of your life inside your head. Make it a nice place to be”; in my case, that place would be my workspace. That, combined with my passion for filming content, sparked my obsession with decorating it. I studied, ate, worked, and even slept there (quite literally).

Since I’ve left that space behind in Bangladesh, I wanted to recreate a piece of it through an interactive artwork in p5.js for my midterm project. My goal is to visually mimic my workspace and integrate interactivity based on what I’ve learned in class. For starters, I plan for it to have the feel of an open-world game where you can interact with and explore the objects around you. Since my workspace is so busy with stuff, it creates the perfect scenario for exploration. I also plan to add a knock-on-my-door-style start screen instead of a regular one. I think this will not just meet the project’s requirements for interactivity but also make it feel more personal—it is the closest thing I have to my old workspace now.

Design Process

Now that I have jotted down my concept, I realize I had unintentionally set myself up for a lengthy challenge. Hunting for assets that matched my aesthetic turned into a frustrating game of almost-but-not-quite—everything either messed with the vibe or threw off the whole look I was going for.

I ultimately made the (inevitable) decision to draw everything from scratch. To begin the production process, I created a rough sketch of my concept:

Description of the image

At this point, I felt overwhelmed by the number of objects I was adding and by my initial plan to make all of them interactive. Then I remembered Professor Mang’s advice, which was something along the lines of: what matters most is that the project is aesthetically pleasing and clean, rather than overloaded with features. Moving forward, I might limit interactivity, making some objects static to maintain a clean look and for ease of implementation.

Working further on the aesthetic, I adjusted my initial sketch and settled on a more cohesive design:

Description of the image

I wanted to include myself in the scene, so I drew my character sleeping at the desk—just as I often did in real life. I initially considered adding interactive movement to my character but ultimately decided against interactivity for two main reasons:

  1. The sheer number of drawings I’d have to produce to animate it.
  2. Potential issues with the layering, which could interfere with other interactive elements.

To avoid complications, I’ll position my character far from the interactive elements. My workspace had a blue theme with touches of white and pink, so I chose those as my main color palette. I also opted for a flat 2D style, as shading objects would take too much time and require changing the direction of shadow if I later changed their placement.

Challenges & Possible Fixes

1. Preparing the Assets

I started by drawing my character’s hair particles in the app Resprite and attempted to create movement. However, I realized that different segments of the hair needed to move at different speeds based on their density and weight, meaning a single sprite wouldn’t be enough. This led me to manually draw multiple hair segments, and the assets piled up quickly.

Description of the image

I took this photo halfway through the process of drawing each hair segment. To ease the workflow, I loaded them into Pixel Studio to create a spritesheet. However, I ran into a major issue—the file size exceeded p5.js’s permitted limit. Compressing the spritesheet resulted in a significant loss of quality, which is a problem since the hair is a large, prominent element on the screen.

At this stage, I’m still unsure how to proceed while maintaining the quality. I may need to explore alternative ways to optimize the sprites or adjust the animation approach.

Possible fix:

  1. Splitting the hair into separate layers and animating them dynamically in p5.js instead of pre-rendering frames.
  2. Using vector-based movement instead of raster spritesheets.
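The first fix could be sketched as computing each layer’s sway procedurally instead of storing frames. The numbers and the per-segment parameters below are hypothetical; in practice each segment’s amplitude, speed, and phase would be tuned by eye:

```javascript
// Procedural hair sway: each segment gets its own amplitude, speed, and
// phase, so denser or heavier segments can move more slowly and subtly
// without any extra pre-rendered frames.
// segment = { amplitude: pixels, speed: cycles per second, phase: radians }
function swayOffset(timeMs, segment) {
  const t = timeMs / 1000;
  return segment.amplitude * Math.sin(2 * Math.PI * segment.speed * t + segment.phase);
}
```

In draw(), each hair layer would be drawn at `x + swayOffset(millis(), segment)`, so the animation cost becomes one sine per segment per frame rather than a large spritesheet.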

2. Layering Issues & Depth Management

Since some objects are interactive and others are static, layering them incorrectly might make it hard to interact with certain elements or could cause visual glitches. I also feel like the range of interactivity of one object could hamper the others.

Possible Fix:

  1. Using draw-order layering within the p5.js draw() loop to ensure the correct stacking of elements.
  2. Implementing collision detection so that objects behind others are not interactable. However, I’m yet to figure out the logic for that.

3. Interactivity & User Experience

With so many objects in the workspace, interacting with them all might feel cluttered or overwhelming. There’s also the risk of users not realizing which objects are interactive.

Possible Fix:

Add subtle visual cues, such as slight movements, highlights, or hover effects, along with audio feedback and different interaction types (e.g., clicking, dragging, hovering) to make interactions more varied and engaging.

 

Wrapping It Up

While I initially set out to create an interactive replica, the design process has made me rethink how interactivity and aesthetics can coexist without overwhelming the user. With ongoing challenges like asset optimization, performance tuning, and user interaction design, I’m continuously refining my approach. Ultimately, I hope this piece not only meets the technical requirements but also captures some of my personality. Keeping things interesting enough to urge the user to keep exploring will be hard, though.

Midterm Project Progress: Interactive Fantasy Game

For this project, I decided to transform a short fantasy story I wrote in primary school into an interactive game using p5.js. The game has multiple choices that affect the storyline, but in the end, all paths lead to the same outcome. There are a total of 5 key decision points and 14 different screens in the game.

One of my biggest challenges was implementing the buttons. I initially tried to create them using object-oriented programming (OOP), but it became too confusing because the button positions change so frequently from screen to screen, so I just used the built-in createButton() function instead.

Currently, the game has a pink background, but I plan to replace it with images that reflect the setting of each screen, adding to the immersive experience. I also intend to incorporate sound effects that correspond to the events in the story to further enhance the game’s atmosphere.

During this week, I’ll be focusing on:

  • Using OOP for the buttons instead of built-in functions
  • Adding sounds and background images

The part I’m most proud of so far is the overall structure of the different screens and the transitions between them, as it took a lot of time to figure out how to switch between screens smoothly.

Here is the code snippet of the screens and buttons changing:

function draw() {
  background('pink');
  //The Screen is picked based on the screen number
  if (screen === 0) {
    showStartScreen();
  } else if (screen === 1) {
    showBirthdayScreen();
  } else if (screen === 11) {
    showSuppliesScreen();
  } else if (screen === 12) {
    showWeaponScreen();
  } else if (screen === 111) {
    showNightScreen();
  } else if (screen === 112) {
    showMorningScreen();
  } else if (screen === 121) {
    showNightScreen();
  } else if (screen === 122) {
    showMorningScreen();
  } else if (screen === 1111 || screen === 1121 || screen === 1211 || screen === 1221) {
    showRiverScreen();
  } else if (screen === 1112 || screen === 1122 || screen === 1212 || screen === 1222) {
    showForestScreen();
  } else if (screen === 11000 || screen === 12000 || screen === 21000 || screen === 22000) {
    showNextScreen();
  } else if (screen === 5000) {
    showDragonCaveScreen();
  } else if (screen === 5001) {
    showInsideScreen();
  } else if (screen === 5002) {
    showOutsideScreen();
  } else if (screen === 5003) {
    showTrapScreen();
  } else if (screen === 262626) {
    showFinalScreen();
  }
}
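As more screens are added, this if/else chain keeps growing. One possible refactor (a sketch with stand-in functions, not the project’s actual code) is a lookup table mapping screen numbers to display functions, so screens that share a handler can simply point at the same function:

```javascript
// Stand-in display functions (the real ones draw full screens with p5.js).
function showStartScreen() { return "start"; }
function showBirthdayScreen() { return "birthday"; }
function showRiverScreen() { return "river"; }

// Map each screen number to its display function.
const screenHandlers = {
  0: showStartScreen,
  1: showBirthdayScreen,
};
// Screens that share a handler point at the same function:
[1111, 1121, 1211, 1221].forEach(s => (screenHandlers[s] = showRiverScreen));

// draw() then becomes a single lookup instead of a long if/else chain.
function drawScreen(screen) {
  const handler = screenHandlers[screen];
  return handler ? handler() : null;
}
```

The same table could carry each screen’s buttons too, which would shrink mousePressed() in the same way.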

function mousePressed() {
  if (screen === 0 && isMouseOver(enterButton)) {
    screen = 1;
    hideAllButtons();
    showBirthdayScreen();
  } else if (screen === 1) {
    if (isMouseOver(suppliesButton)) {
      screen = 11;
      hideAllButtons();
      showSuppliesScreen();
    } else if (isMouseOver(weaponButton)) {
      screen = 12;
      hideAllButtons();
      showWeaponScreen();
    }
  } else if (screen === 11) {
    if (isMouseOver(nightButton)) {
      screen = 111;
      hideAllButtons();
      showNightScreen();
    } else if (isMouseOver(morningButton)) {
      screen = 112;
      hideAllButtons();
      showMorningScreen();
    }
  } else if (screen === 12) {
    if (isMouseOver(nightButton)) {
      screen = 121;
      hideAllButtons();
      showNightScreen();
    } else if (isMouseOver(morningButton)) {
      screen = 122;
      hideAllButtons();
      showMorningScreen();
    }
  } else if (screen === 111) {
    if (isMouseOver(riverButton)) {
      screen = 1111;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1112;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 112) {
    if (isMouseOver(riverButton)) {
      screen = 1121;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1122;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 121) {
    if (isMouseOver(riverButton)) {
      screen = 1211;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1212;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 122) {
    if (isMouseOver(riverButton)) {
      screen = 1221;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1222;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 1111 || screen === 1121 || screen === 1211 || screen === 1221) {
    if (isMouseOver(fishButton)) {
      screen = 11000;
      hideAllButtons();
      showNextScreen();
    } else if (isMouseOver(riverspiritsButton)) {
      screen = 12000;
      hideAllButtons();
      showNextScreen();
    }
  } else if (screen === 1112 || screen === 1122 || screen === 1212 || screen === 1222) {
    if (isMouseOver(firefliesButton)) {
      screen = 21000;
      hideAllButtons();
      showNextScreen();
    } else if (isMouseOver(forestspiritsButton)) {
      screen = 22000;
      hideAllButtons();
      showNextScreen();
    }
  } else if (screen === 11000 || screen === 12000 || screen === 21000 || screen === 22000) {
    if (isMouseOver(next1Button)) {
      screen = 5000;
      hideAllButtons();
      showDragonCaveScreen();
    }
  } else if (screen === 5000) {
    if (isMouseOver(insideButton)) {
      screen = 5001;
      hideAllButtons();
      showInsideScreen();
    } else if (isMouseOver(outsideButton)) {
      screen = 5002;
      hideAllButtons();
      showOutsideScreen();
    } else if (isMouseOver(trapButton)) {
      screen = 5003;
      hideAllButtons();
      showTrapScreen();
    }
  } else if (screen === 5001 || screen === 5002 || screen === 5003) {
    if (isMouseOver(next2Button)) {
      screen = 262626;
      hideAllButtons();
      showFinalScreen();
    }
  }
}
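The long if/else chain above works, but every branch does the same three things: set a new screen code, hide the buttons, and show the next screen. A hypothetical refactor is to collapse it into a data-driven transition table. The sketch below is a simplified illustration using a few of the screen codes and buttons from the code above; the `show…`/`hideAllButtons` calls are stubbed out (represented as strings) so it can run outside p5.js:

```javascript
// Hypothetical refactor: map each (screen, button) pair to the next
// screen code and the name of the show-function to call. Screen codes
// match the chain above; only a subset is shown here.
const transitions = {
  1: { supplies: [11, "showSuppliesScreen"], weapon: [12, "showWeaponScreen"] },
  11: { night: [111, "showNightScreen"], morning: [112, "showMorningScreen"] },
  12: { night: [121, "showNightScreen"], morning: [122, "showMorningScreen"] },
  5000: {
    inside: [5001, "showInsideScreen"],
    outside: [5002, "showOutsideScreen"],
    trap: [5003, "showTrapScreen"],
  },
};

// Advance the state machine; returns the new screen code, or the old
// one unchanged if this button does nothing on this screen.
function advance(screen, button) {
  const entry = transitions[screen] && transitions[screen][button];
  if (!entry) return screen; // ignore clicks that are invalid here
  const [next, handler] = entry;
  // In the real sketch this is where hideAllButtons() and the
  // corresponding show-function (named by `handler`) would run.
  return next;
}
```

Adding a new branch to the story then means adding one line to the table instead of another if/else block.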

Here is the game:

Week 5: Reading Response

The main difference between computer and human vision is that while humans can instantly recognize faces, objects, and contexts, a computer sees only raw pixel data unless explicitly programmed to interpret it. We can recognize people, expressions, colors, and animals instantly, while computers must be taught to do so, and even then only to some extent. To help computers “see” or track objects of interest, techniques such as frame differencing, background subtraction, and brightness thresholding are used to extract meaningful features from video input. These methods allow interactive systems to detect movement, isolate subjects, and respond dynamically to user actions, forming the foundation of many interactive media works. However, each of these techniques has limitations, such as sensitivity to lighting changes or reliance on high-contrast visuals, highlighting the need for carefully designed environments to optimize detection accuracy.

Still, computer vision has improved vastly over the past few decades and has become much more accessible, allowing artists, designers, and even beginner programmers to incorporate vision-based interactivity into their work. Software environments like Processing, Max/MSP/Jitter, and Macromedia Director provide frameworks for integrating computer vision, either through direct coding or via plug-ins and toolkits. Additionally, stand-alone applications like BigEye and EyesWeb expand these capabilities, enabling real-time motion tracking and expressive gesture analysis in tools that are far easier to get started with.

In interactive art, computer vision’s capacity for tracking and surveillance introduces both creative potential and ethical concerns. Artists can leverage motion detection and object tracking to create immersive, responsive installations that engage audiences far more actively than passive viewing. However, the same tools that enable interactivity can also raise concerns about surveillance, privacy, and data collection. Since interactive art often explores the relationship between technology and human experience, the ability of computer vision to monitor and track movement, especially without explicit user consent, raises questions about autonomy and control. What information is the computer storing? Can it be used against me? These are questions the people being recorded often never even have the chance to ask. Ultimately, while computer vision expands the possibilities for dynamic and participatory art, its application must be carefully considered to balance innovation and creativity with ethical responsibility.

Week 5 Reading

What are some of the ways that computer vision differs from human vision?

The biggest distinction is that when we see a photo, it is intuitive for us to distinguish a person from the background, whereas it is far harder for a computer to make that distinction. As a result, early computer-vision systems had to rely on movement to determine which pixels on a display belonged to the object or person of interest and which were only part of the background. Furthermore, to detect movement, computers had to calculate the change in color or brightness of pixels between frames, whereas these things come effortlessly to human vision.

What are some techniques we can use to help the computer see / track what we’re interested in?

Frame differencing – comparing consecutive frames to detect movement.

Background subtraction – storing an image of the original background scene and comparing it with the captured scene to determine what is not part of the background, i.e., the object of interest.

Brightness thresholding – looking for changes in luminosity to determine the position (or change in position) of objects.
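At their core, the first and third techniques reduce to simple per-pixel arithmetic. A minimal sketch, written here on plain arrays of grayscale values (0–255) rather than an actual camera feed, so it is independent of p5.js:

```javascript
// Frame differencing: sum the absolute brightness change between two
// consecutive grayscale frames. A total above some threshold suggests
// motion occurred between the frames.
function frameDifference(prev, curr) {
  let motion = 0;
  for (let i = 0; i < curr.length; i++) {
    motion += Math.abs(curr[i] - prev[i]);
  }
  return motion;
}

// Brightness thresholding: keep only pixels brighter than a cutoff,
// producing a binary mask that isolates bright objects in the scene.
function brightnessThreshold(frame, cutoff) {
  return frame.map((v) => (v > cutoff ? 255 : 0));
}
```

Background subtraction works the same way as `frameDifference`, except that `prev` is a fixed reference image of the empty scene instead of the previous frame.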

How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

It adds an extra dimension of interaction. Many artworks make use of it, including the one with the poem’s falling letters. By using computer vision to track human movements, you can take more than just key or button input and make the artwork more intuitive to operate (people may not know they need to press a button, but if you see your own image on the projector, you already know how to control that shadow).

Reading Response 4 – Computer Vision for Artists and Designers (Week 5)

In his article, Levin delves into the relationship between code and creative expression, illustrating how coding and computation offer a unique medium for artists to explore new forms of interactivity and non-verbal communication. This perspective was particularly eye-opening for me, as it shed light on how computation is not just a tool for efficiency or automation but also a canvas for artistic exploration.

One of the most fascinating aspects discussed in the article was computer vision. While the term itself is somewhat new to me, I was surprised to learn that efforts to advance this field began over half a century ago. It is remarkable to realize that machines can now collect visual data and “interpret” it, mimicking human perception in ways that were once the realm of science fiction. Computer vision models allow computers to identify human features, recognize expressions, and even infer emotions—all of which have groundbreaking implications, not only for fields like surveillance and security but also for art. In interactive media, for instance, artists are using computer vision to create installations that respond dynamically to human presence, movement, or even facial expressions, transforming passive spectators into active participants in digital art.

However, despite its exciting artistic applications, computer vision carries an eerie undertone due to its origins. The fact that this field was initially a military endeavor makes its transition into the realm of creative expression feel somewhat uncanny. The same technology that was once developed for warfare—such as guiding missiles or identifying enemy targets—is now being used to make art installations more immersive. This contrast raises an unsettling question: can a technology born from conflict and control ever be fully dissociated from its original intent?

Beyond its history, the rapid advancement of computer vision presents an undeniable threat to human privacy. Today, no one is truly safe from being recognized, analyzed, and cataloged by ubiquitous surveillance cameras, facial recognition systems, and AI-powered security networks. What was once considered futuristic is now an everyday reality—public spaces are filled with CCTV cameras that can track individuals in real time, while social media platforms use facial recognition to tag people in photos automatically. While some of these applications serve practical or even artistic purposes, they also blur the boundaries between technological progress and ethical concerns. When does interactivity cross into intrusion? At what point does an artistic exploration of human expression become indistinguishable from surveillance?