Week 5 – Reading Response

Reading "Computer Vision for Artists and Designers" made me think about the stark differences between how humans and computers perceive the world. Human vision is incredibly contextual—we naturally filter out distractions, infer meaning, and recognize objects even when they are partially obscured. In contrast, computer vision relies on algorithms and mathematical models to process images. It doesn’t “see” in the way we do; instead, it detects patterns, edges, and contrasts based on pixel values. I found it fascinating how much work goes into making computers interpret images in ways that feel natural to us.

One of the key techniques for improving computer vision’s ability to track what we’re interested in is feature detection—using algorithms like edge detection, motion tracking, and machine learning models trained on labeled datasets. I also noticed how lighting, contrast, and background control play huge roles in making computer vision more accurate. This made me reflect on its implications for interactive art. I think computer vision’s capacity for tracking and surveillance creates an interesting tension in artistic practice—it allows for dynamic, responsive installations, but it also brings ethical concerns. I felt a bit unsettled thinking about how the same technology that enables playful, immersive experiences can also be used for surveillance. It makes me wonder how artists can challenge or subvert these systems in creative ways.

Week 5 – Reading Response

Computer vision differs from human vision in many ways. One of the key differences is that human vision naturally adapts to changes in lighting and can intuitively understand objects, track motion, and use context to recognize emotions and patterns. On the other hand, computer vision relies heavily on mathematical algorithms to detect objects and track movement. Differences in lighting often cause computer vision to break or misbehave, and it is generally impossible for computers to understand context without the use of advanced AI.

There are several ways in which we can help the computer “see” and track what we want it to track. One of them is frame comparing, where we tell the computer to compare consecutive frames and thereby detect changes that indicate movement. To build on this concept, we can also use background subtraction techniques, which help us isolate the objects we want to track and ignore everything in the background.
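
The frame-comparing idea can be sketched in plain JavaScript. This is a hypothetical helper (not code from the reading) that treats each frame as a flat array of grayscale pixel values:

```javascript
// Frame differencing: compare each pixel of two consecutive frames and
// count how many changed by more than a threshold. A large count
// suggests movement somewhere in the scene.
function frameDifference(prevFrame, currFrame, threshold = 30) {
  let changedPixels = 0;
  for (let i = 0; i < currFrame.length; i++) {
    const diff = Math.abs(currFrame[i] - prevFrame[i]);
    if (diff > threshold) changedPixels++; // ignore sensor noise below the threshold
  }
  return changedPixels;
}
```

In p5.js the two arrays would come from successive reads of a capture’s pixels; the threshold filters out the small brightness fluctuations that would otherwise register as motion.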

Computer vision has wide potential use in interactive media. Many artists have used it to create amazing interactive art which, in my opinion, feels more interactive than just clicking buttons. Artists use computer vision to create playful interactive experiences that fully immerse users, who feel in complete control of the movement of the object. I believe that in the future, in combination with AI, computer vision will completely take over the interactive media industry.

Midterm Progress

Concept:


I thought of two main ideas when brainstorming the game, and to highlight the previous themes of randomness I decided to make a gambling-oriented game. The first idea was a blackjack game, but with Russian roulette stakes. The second idea was to simulate Russian roulette itself. I went with the latter because it is more intuitive and more feasible to implement in p5.js.


Design:


I drew a basic sketch to show the gameplay features, where the level progresses based on the health of the person interacting with the game. If it is feasible, I will also implement a second health bar for the dealer, so the player can shoot the dealer as well. I am focusing on the gameplay elements more than the artwork, since the backdrop is intentionally designed to be gloomy and dark. (The filled-in areas of the sketches also represent a vast darkness obscuring the dealer’s torso.)


The lighting portrayed will also be minimal, and the dealer’s face is drawn as though he is wearing a mask that obscures his facial features (essentially a uniform-like garment that wraps nearly fully around his entire face, making the features appear opaque). I will improve the sketch and render the artwork in SVG.


Challenges:

The main challenge is producing the artwork while keeping the experience as fun as possible, as I might have to compromise on certain gameplay features. I also faced challenges creating the artwork in a correct SVG format, but I am continuing to research this in order to finalize my methods for sketching and digitizing the artwork. Another challenge is the animations; to address it I will focus on a few main animations: one where the dealer fires a live round, one where the dealer loads the gun away from the player’s perspective, and one where the dealer fires a blank.


Reading Response:


Computer vision differs from human vision in the sense that the processing of images and light is performed subconsciously in human vision. On the other hand, algorithms allow computer vision to map light and images into mathematical equivalents in real time within programs, subsequently allowing the image to be processed.


One such method is frame differencing: two frames are taken, where one frame represents a certain amount of light and a representation of the image in pixels, and the second represents a corresponding frame with changes in light and pixels. The differences between these two frames, namely in color and luminosity, are summed across all pixel locations. Similarly, presence detection performs the same difference calculations, but against a reference image of the background rather than a previous frame.
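
Presence detection as described above might look like this in JavaScript—a hedged sketch, not code from the reading, that compares the current frame against a stored reference of the empty background (grayscale arrays; the threshold and 5% cutoff are illustrative values):

```javascript
// Background subtraction: a pixel "belongs" to a foreground object when
// it differs enough from the stored empty-scene reference. Presence is
// reported when a meaningful fraction of the pixels differ.
function detectPresence(background, frame, threshold = 40) {
  let changed = 0;
  for (let i = 0; i < frame.length; i++) {
    if (Math.abs(frame[i] - background[i]) > threshold) changed++;
  }
  return changed > frame.length * 0.05; // more than 5% of pixels changed
}
```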


I believe that computer vision’s capacity for tracking and surveillance creates possibilities with no bounds, where interactive artwork can take form through the medium of augmented reality, as well as interactive media artwork simulating motion and pictures with life-like movement. As highlighted in the reading, this capacity is not limited by any single technique: no single computer vision algorithm is fully generalizable, and several different algorithms can work in harmony to create a digital interactive artwork.


Week 5 – Midterm Progress

Concept:
My midterm project was inspired by classic shoot ‘em up games like Space Invaders, Galaxian/Galaga, or the countless variations that followed. These games had a huge impact on video games and pop culture as a whole, and have an iconic style that brings back childhood memories for many. I wanted to try and emulate this retro aesthetic and provide an enjoyable take on the space shooter genre.
I am still undecided on the specific end goal of my game, since I can see the appeal of both an endless rush to see how long you can survive and clearing stages to progress. I am leaning towards the former, with enemies speeding up as time passes, as an homage to how Space Invaders would speed up because fewer enemies on screen meant the machine could run faster. Either way, I intend to provide a simple experience of piloting a spaceship and taking on enemy ships while dodging obstacles like asteroids and comets.

Design:
As previously mentioned, I wanted to mainly focus on pixelated sprites to fit the time period. The backgrounds and menus, on the other hand, will probably be more modern by comparison for the sake of providing a better overall experience. The gameplay itself will get harder as the player progresses, through things like having more enemies that move/shoot faster, and maybe gradually adding in obstacles as you reach certain checkpoints.
So far, I have created classes for the player character and enemy ships, the bullets being fired, and a game class to keep track of the ongoing round. The ship and bullet classes contain attributes like their position and movement, current sprite, health and fire rate, etc. The game class keeps track of the time and score, has arrays for enemies and obstacles, and keeps track of the game state (e.g. ongoing, win, loss).
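
A minimal sketch of the kind of state bookkeeping described above (class and field names are illustrative, not the actual project code, and the win condition assumes the stage-clearing variant for the sake of the example):

```javascript
// The game tracks score, live enemies, and an overall state that the
// draw loop can branch on ("ongoing", "win", or "loss").
class Game {
  constructor() {
    this.state = "ongoing";
    this.score = 0;
    this.enemies = []; // enemy ships currently alive
  }

  update(playerHealth) {
    if (playerHealth <= 0) this.state = "loss";
    else if (this.enemies.length === 0) this.state = "win";
  }
}
```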

Challenges:

  • Keeping track of scenes being displayed for menus, game levels, win/loss screens, etc.
  • Detecting collisions during gameplay, determining whether the collision needs to be addressed, and handling it accordingly (e.g. destroy enemy ship on contact with the player’s bullet)
  • Figuring out how to display menus in an intuitive manner, and handling clicking on buttons or alternatively navigating via keyboard inputs.

Risk Prevention:
I have started working on some helper functions for generic tasks like reading/handling keyboard inputs at different points in time, as well as checking for collisions between objects through rough circular hitboxes (in order to use radii as a measure). What I am still working on is coming up with a way to handle mouse interactivity on menus and putting it in one helper function.
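
The circular-hitbox idea can be sketched as a small helper (an illustrative function, assuming each object exposes a center x, y and a radius r):

```javascript
// Circle-vs-circle collision: two objects overlap when the distance
// between their centers is less than the sum of their radii. Comparing
// squared distances avoids computing a square root on every check.
function circlesCollide(a, b) {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const rSum = a.r + b.r;
  return dx * dx + dy * dy < rSum * rSum;
}
```

Each frame, the game loop would call this for every bullet/enemy pair and handle hits accordingly.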

Midterm Progress

I’ve been working on my Maze Garden project, and while it’s not finished yet, I’m excited about where it’s headed. So far, I’ve built the foundation of a maze that blends the intricate paths of a traditional labyrinth with the natural beauty of a garden. I used a procedural maze generation algorithm to create varied layouts, and I experimented with adding hidden clues and mini-puzzles along the way to keep things interesting.

I’ve focused on making the visuals as engaging as possible by using p5.js to draw lush hedges, vibrant flower beds, and subtle garden ornaments. I even integrated soft ambient sounds like birds chirping and leaves rustling to create a calming, immersive atmosphere, refining the design through multiple iterations to capture the right balance of mystery and tranquility. I am also considering a change of direction: instead of the maze, I might build the game around catching butterflies in the garden, because that may be simpler to implement.

In the future, I plan to expand on these ideas further. I will be refining the user controls and adding more interactive elements to make the experience even more engaging. There are still plenty of features and tweaks I want to implement, and I’m looking forward to continuing this creative journey until the project truly feels complete.

Challenges :

I think the most challenging part will be capturing that realistic, natural feel purely through code. For instance, creating organic textures for hedges, bushes, and ornamental patches—much like those in the Versailles gardens—requires careful use of noise functions and gradients. Additionally, animating butterflies to move in a lifelike manner without looking too mechanical can be quite tricky. Balancing all these visual details while keeping the sketch performant is a significant challenge, especially if you layer on more elements. Making the butterfly-catching moment feel realistic is another one.
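
As a rough illustration of the organic-motion problem, layering periodic functions of unrelated frequencies already produces a drifting flutter; in p5.js the sines could be swapped for noise() for even less regularity. This is a hypothetical sketch, not the project’s code:

```javascript
// Butterfly flight path: a slow drift plus a faster flutter. The two
// frequencies are deliberately not multiples of each other, so the
// path takes a long time to visibly repeat.
function butterflyPosition(t) {
  const x = 200 + 80 * Math.sin(t * 0.7) + 20 * Math.sin(t * 3.1);
  const y = 150 + 50 * Math.cos(t * 0.5) + 15 * Math.sin(t * 2.3);
  return { x, y };
}
```

In draw(), calling this with a steadily increasing t (e.g. frameCount * 0.02) and placing the butterfly sprite at the returned point produces the flutter.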

Week 5 – Midterm Progress

Inspiration

The idea is to create a top-down shooter game. I was inspired by Nuclear Throne, where players control a character in a confined space and must shoot incoming enemies. The thrill of dodging enemies while strategically shooting at them creates an engaging and fast-paced gameplay experience. The goal is to recreate this immersive feeling while keeping the implementation simple and beginner-friendly using p5.js.

Concept and User Interaction

The game concept revolves around a player-controlled character that moves around the screen and shoots at enemy units that spawn randomly and chase the player. The user can interact with the game in the following ways:

  • Movement: The player uses the arrow keys or WASD to move in different directions.
  • Shooting: The player shoots bullets towards the mouse cursor by pressing the spacebar.
  • Enemies: Randomly spawned enemies move towards the player and can be destroyed by bullets.
  • Survival Challenge: The player must continuously avoid enemies while shooting them down.

This simple yet engaging mechanic ensures a dynamic game experience where quick reflexes and strategic positioning are key to survival.

Designing the Code Structure

Before diving into the code, I designed a modular approach to keep the project manageable and scalable. The core elements of the game were broken down into:

  1. Player Class: Handles movement, shooting, and rendering.
  2. Bullet Class: Manages bullet behavior, movement, and collision detection.
  3. Enemy Class: Controls enemy spawning, movement, and interaction with bullets.
  4. Game Loop: Updates and renders all game elements in each frame.
  5. Collision Handling: Detects when bullets hit enemies and removes them from the game.
  6. Enemy Spawning System: Ensures a steady challenge for the player.

By structuring the game this way, each component is easy to manage and modify.

Example – Player Class:

class Player {
  constructor() {
    this.pos = createVector(width / 2, height / 2);
    this.speed = 4;
  }

  update() {
    // 87/83/65/68 are the key codes for W/S/A/D
    if (keyIsDown(UP_ARROW) || keyIsDown(87)) this.pos.y -= this.speed;
    if (keyIsDown(DOWN_ARROW) || keyIsDown(83)) this.pos.y += this.speed;
    if (keyIsDown(LEFT_ARROW) || keyIsDown(65)) this.pos.x -= this.speed;
    if (keyIsDown(RIGHT_ARROW) || keyIsDown(68)) this.pos.x += this.speed;

    // keep the player inside the canvas
    this.pos.x = constrain(this.pos.x, 0, width);
    this.pos.y = constrain(this.pos.y, 0, height);
  }

  show() {
    fill(0, 255, 0);
    ellipse(this.pos.x, this.pos.y, 30, 30);
  }

  shoot() {
    // assumes a global `bullets` array and `lastShotTime` defined in the sketch
    if (millis() - lastShotTime > 200) { // 200 ms cooldown between shots
      bullets.push(new Bullet(this.pos.x, this.pos.y));
      lastShotTime = millis();
    }
  }
}
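
The shoot() above pushes a Bullet with only a position; one possible Bullet implementation (a hypothetical sketch, not the project’s actual class) would also take a target point, so its velocity points toward the mouse cursor—e.g. new Bullet(this.pos.x, this.pos.y, mouseX, mouseY):

```javascript
// Bullet with a velocity aimed at a target point: normalize the
// direction vector, then scale it by the bullet speed.
class Bullet {
  constructor(x, y, targetX, targetY, speed = 8) {
    this.x = x;
    this.y = y;
    const d = Math.hypot(targetX - x, targetY - y) || 1; // avoid divide-by-zero
    this.vx = ((targetX - x) / d) * speed;
    this.vy = ((targetY - y) / d) * speed;
  }

  update() {
    this.x += this.vx;
    this.y += this.vy;
  }
}
```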

Identifying and Addressing Key Challenges

One of the most challenging parts of the project is collision detection between bullets and enemies. Ensuring that fast-moving bullets accurately register hits on enemies can be tricky, especially in a game with rapid movement and frequent object interactions. I also wanted to add a multiplayer gameplay experience so that two players could play in the same session; however, I do not think that is possible without the use of Socket.IO.

Next Steps

Moving forward, possible improvements could include:

  • Adding different enemy types with unique behaviors.
  • Implementing a score system to track progress.
  • Introducing power-ups to enhance gameplay variety.
  • Multiplayer Mode: Implementing real-time multiplayer gameplay using Socket.IO so that two players can play together from different machines. This would involve syncing player movement, bullets, and enemies across connected clients through a Node.js server.

By integrating multiplayer functionality, the game could become even more engaging and interactive. Using real-time communication, players could strategize together, compete for the highest score, or even introduce cooperative play against waves of enemies. Setting up server-side logic to handle multiple players efficiently is a challenge but would greatly enhance the gaming experience.

Week 5 – Midterm Progress

Concept:
For my midterm project, I wanted to create a game based on something I love, cats! Growing up and living in Abu Dhabi, I’ve noticed that there are a lot of stray cats, so I wanted to design a game where the player drives around a city, rescues stray cats, and takes them to a shelter. I was inspired by a photography project I did last semester about the spirit of street cats in Abu Dhabi, for which I went around the city and captured these cats’ lives and the environment they live in. (link to the photos). The game will combine movement mechanics, object interactions, and a simple pet care system. The goal of the game is to rescue and rehome all the stray cats before the game ends.

User Interaction and Design:
For the interaction, I would like to implement a way for the player to control the car using arrow keys to move around. The stray cats will be at random locations in the city and if the car touches a cat, it is rescued and sent to a shelter. I was also thinking of adding more to the game, where inside the shelter, the player can click on the cat to heal or feed them. Finally, once all the cats are healthy, the game ends and displays a win screen.
Visuals:
Start Screen: Shows the game instructions and a “start” button.
Game Screen: Has a city background with a moving car, stray cats, and a shelter section.
End Screen: Congratulates the player and has a restart button.

Code Structure:
In order to ensure the code is organized, I plan to use Object-Oriented Programming by creating three main classes.
Car Class:
– Player movement (arrow keys).
– Checks for collisions with pets.
Pet Class:
– Stores pet location and condition (hungry, injured).
– Moves to the shelter when rescued.
Shelter Class:
– Displays rescued pets.
– Tracks pet status and healing progress.
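
A minimal skeleton of the Pet and Shelter classes described above (illustrative names and fields, not the final project code):

```javascript
// Pets track their location and condition; the shelter collects rescued
// pets and reports when all of them have been nursed back to health.
class Pet {
  constructor(x, y, condition) {
    this.x = x;
    this.y = y;
    this.condition = condition; // "hungry", "injured", or "healthy"
    this.rescued = false;
  }

  rescue(shelter) {
    this.rescued = true;
    shelter.pets.push(this); // the pet moves into the shelter
  }
}

class Shelter {
  constructor() {
    this.pets = [];
  }

  allHealthy() {
    return this.pets.every(p => p.condition === "healthy");
  }
}
```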

Challenging Part & How I’m Addressing It:
I think the most daunting part of this project is implementing collision detection between the car and pets. Because the game involves movement, I need a way to detect when the car “rescues” a pet. To de-risk this, I wrote a small sample using the dist() function to check whether two objects are close enough to interact. This confirms that collision detection works before I use it in the full game.

function checkCollision(car, pet) {
  let d = dist(car.x, car.y, pet.x, pet.y);
  return d < 30; // if the distance is small enough, they collide
}

Assignment 5: Midterm Progress

For the upcoming midterm project, I decided to code an interactive game. When it came to choosing one, I was reminded of a game I used to play when I was younger, Cooking Mama, which revolved around making different kinds of recipes. That, along with my own personal bias towards baked goods, gave me an idea: I am planning to create a memory game set in a bakery. The game itself is quite straightforward, with the user selecting a recipe, then having to select the right ingredients within the given time frame in order to make it.

The game flow starts with a menu from which the user has to select an item to bake. They are then shown the ingredients required for a few seconds in order to memorize them, then are required to select those ingredients from the shelf (within a given time limit). If they manage, the item is baked, and they are shown the winning end screen. If they fail to select all the ingredients, or select the wrong ingredients, they are shown the failure end screen.

I have managed so far to create a base version of the game, just to see whether I would actually be able to code it or not. My current code consists of a Game class, which keeps track of which dish is chosen, which ingredients are needed, and whether you won or lost. In order to play the game, the Game class also stores the different “stages” of the game, namely the start screen, menu, memory phase, selection phase, and the result phase.

class Game {
  constructor() {
    this.state = "start"; // start, menu, memory, selection, result, baking, end
    this.selectedDish = null;
    this.correctIngredients = [];
    this.ingredientShelf = [];
    this.selectedIngredients = [];
    this.timer = 500; // Updated timer for selection
    this.showPlayAgainButton = false; // Flag to show "Play Again" button
    this.showWinningMessage = false; // Flag to show winning message
    this.showFailureMessage = false; // Flag to show failure message
  }

  display() {
    if (this.state === "start") {
      this.showStartScreen();
    } else if (this.state === "menu") {
      this.showMenu();
    } else if (this.state === "memory") {
      this.showMemory();
    } else if (this.state === "selection") {
      this.showSelection();
    } else if (this.state === "result") {
      this.showResult();
    } else if (this.state === "baking") {
      this.showBaking();
    } else if (this.state === "end") {
      this.showEndScreen();
    }
  }

  handleClick() {
    if (this.state === "start") {
      this.state = "menu";
    } else if (this.state === "menu") {
      this.handleMenuSelection();
    } else if (this.state === "selection") {
      this.handleIngredientSelection();
    } else if (this.state === "end" && this.showPlayAgainButton) {
      this.resetGame();
      this.state = "menu"; // Ensure it goes back to the menu
    }
  }

  showStartScreen() {
    textAlign(CENTER, CENTER);
    textSize(32);
    text("Press Anywhere to Start", width / 2, height / 2);
  }

  showMenu() {
    textAlign(CENTER, CENTER);
    textSize(24);
    text("Choose a Dish:", width / 2, height / 4);
    let y = 200;
    for (let dish of Object.keys(ingredientsData)) {
      text(dish, width / 2, y);
      y += 50;
    }
  }

  handleMenuSelection() {
    let y = 200;
    let index = 0;
    for (let dish of Object.keys(ingredientsData)) {
      if (mouseY > y - 15 && mouseY < y + 15) {
        this.selectedDish = dish;
        this.correctIngredients = ingredientsData[dish];
        this.state = "memory";
        setTimeout(() => this.state = "selection", 3000); // Show memory phase briefly
        this.createIngredientShelf();
      }
      y += 50;
      index++;
    }
  }

  showMemory() {
    textAlign(CENTER, CENTER);
    textSize(24);
    text("Memorize These Ingredients!", width / 2, 50);
    let x = 100;
    for (let ingredient of this.correctIngredients) {
      fill(200, 100, 100);
      ellipse(x, height / 2, 50);
      textSize(14);
      text(ingredient, x, height / 2 + 40);
      x += 150;
    }
  }

  createIngredientShelf() {
    let allIngredients = ["Flour", "Egg", "Chocolate", "Sugar", "Butter", "Milk", "Vanilla", "Apple", "Salt", "Baking Powder", "Lemon", "Cinnamon"];
    this.ingredientShelf = [];
    let x = 100;
    for (let ingredient of allIngredients) {
      this.ingredientShelf.push(new Ingredient(x, height - 100, ingredient));
      x += 100;
    }
  }

  showSelection() {
    textSize(24);
    text(`Select the ingredients for ${this.selectedDish}`, width / 2, 50);
    text(`Time Left: ${Math.ceil(this.timer / 60)}s`, width - 100, 50);
    this.timer--;
    if (this.timer <= 0) {
      this.state = "result";
    }
    this.showShelf();
  }

  showShelf() {
    fill(200, 200, 200); // Shelf background
    rect(0, height - 150, width, 150);
    for (let ingredient of this.ingredientShelf) {
      ingredient.display();
    }
  }

  handleIngredientSelection() {
    for (let ingredient of this.ingredientShelf) {
      if (ingredient.isClicked(mouseX, mouseY)) {
        ingredient.selected = !ingredient.selected;
        if (ingredient.selected) {
          this.selectedIngredients.push(ingredient.name);
        } else {
          this.selectedIngredients = this.selectedIngredients.filter(i => i !== ingredient.name);
        }
      }
    }
    if (this.selectedIngredients.length === this.correctIngredients.length) {
      this.state = "result";
    }
  }

  showResult() {
    let isCorrect = this.selectedIngredients.sort().toString() === this.correctIngredients.sort().toString();
    textAlign(CENTER, CENTER);
    textSize(32);
    if (isCorrect) {
      this.showWinningMessage = true;
      this.showFailureMessage = false;
      this.state = "baking";
      setTimeout(() => {
        this.state = "end";
        this.showPlayAgainButton = true; // Show play again button
      }, 3000);
    } else {
      this.showFailureMessage = true;
      this.showWinningMessage = false;
      this.state = "end";
      this.showPlayAgainButton = true; // Show play again button immediately
    }
  }

  showBaking() {
    textAlign(CENTER, CENTER);
    textSize(32);
    text("Baking...", width / 2, height / 2);
    setTimeout(() => {
      text("Ding! Your dish is ready!", width / 2, height / 2 + 50);
    }, 2000);
  }

  showEndScreen() {
    if (this.showWinningMessage) {
      textAlign(CENTER, CENTER);
      textSize(32);
      text(`Your ${this.selectedDish} is now ready!`, width / 2, height / 2);
      this.showPlayAgainButton = true; // Immediately show the Play Again button
    }

    if (this.showFailureMessage) {
      textAlign(CENTER, CENTER);
      textSize(32);
      text("Oh no! Those were the wrong ingredients!", width / 2, height / 2);
    }

    if (this.showPlayAgainButton) {
      // Display "Play Again" button
      fill(200);
      rect(width / 2 - 100, height / 2 + 80, 200, 50);
      fill(0);
      textSize(24);
      text("Play Again", width / 2, height / 2 + 105);
    }
  }

  resetGame() {
    this.selectedDish = null;
    this.correctIngredients = [];
    this.ingredientShelf = [];
    this.selectedIngredients = [];
    this.timer = 500; // Reset timer to match the constructor
    this.showPlayAgainButton = false; // Reset play again button
    this.showWinningMessage = false; // Reset winning message
    this.showFailureMessage = false; // Reset failure message
  }
}

While building the game, I am currently facing, and expect to keep facing, several challenges. The most prominent are:

  • Managing the game states: the game has multiple stages (as I mentioned earlier), and I want to ensure that there are smooth transitions between these stages
  • User interaction: the player’s selection should be correctly registered, and the game must check if the chosen ingredients match the correct sets already stored.
  • Game reset: the game should clear the previous player’s selections and reset for a new round automatically
  • Graphics creation and implementation: The graphics need to be created, and then also smoothly implemented within the game, not looking patchy or out of place.

The game in its current format works, but doesn’t look very appealing. I am currently using circles as placeholders for the ingredients, though I plan to create graphics for the backgrounds, the ingredients, and the start and end screens, as well as an animation for the “baking” scene if the user succeeds. There will also be background music, with a “ding” sound effect when the user succeeds and the item is successfully baked.


Assignment 5 – Reading Response

This week’s reading was quite fascinating. It explored the way computers “see” pictures or videos. I had always known that a computer would not have the holistic knowledge to interpret images like we do, but this article helped explain this vision in a more in-depth way, bringing up questions I really hadn’t considered before. As humans, our brains are automatically wired to understand what we’re looking at as we already have the necessary context to understand the item and its use. Computers, however, only see pixels without any specific meaning. Our brains are capable of so much that we take for granted, such as easy recognition, adapting to different lighting, and naturally filtering out unnecessary information. On the other hand, computers need to be specially designed to carry out any of the above-mentioned tasks. They just simply lack the holistic approach that we are so attuned to.

This article also goes on to mention several ways we can aid computers to make their vision more effective, such as the use of specialized cameras and lenses suited to their tracking needs. Moreover, it grapples with the relationship between these tracking capabilities and interactive artwork. It explores the dual nature of this technology, which can create wonderful interactive experiences while still enabling surveillance, a duality that is uncomfortable for many to accept. The article gives examples of several artists who chose to comment on this dual nature, effectively using technology to comment on surveillance itself, creating artworks that serve as both interactive pieces and extended thought problems.

Week 5: Reading Response

When I first delved into Golan Levin’s discussion of computer vision, I was amazed by how fundamentally different it is from our human way of perceiving the world. We can glance at a room and instantly recognize faces, objects, or even subtle differences in lighting, but a computer needs methodical instructions to interpret even the simplest movements or contrasts. Techniques like background subtraction and frame differencing demonstrate how each pixel’s change or brightness must be computed step by step, and how crucial controlled environments can be for effective tracking. In a way, I found this both exciting and challenging: exciting because it opens up possibilities for precise, algorithmic interactions, but challenging because it shows just how quickly a system can fail if the lighting shifts or a background changes color. Through these examples, I realized that crafting the right physical setup—whether that means backlighting a subject or using distinct color markers—can dramatically improve what a computer can “see” and do.

Building on these observations, I became even more intrigued when I saw how these vision techniques are applied in interactive art. While projects like Myron Krueger’s Videoplace demonstrate the joyful, participatory side of computer vision—allowing people to engage with entire environments using body gestures—David Rokeby’s Sorting Daemon reminds us that the same technology can be used for unsettling surveillance. Reflecting on these projects, I’ve come to appreciate how computer vision can empower artists to transform spectators into active participants, yet it also raises important ethical questions about privacy and consent. This duality fascinates me: on one hand, the capacity to create immersive, responsive installations feels almost magical; on the other, the act of monitoring people, even for art, can be deeply discomforting. Levin’s exploration emphasizes that as designers and creators, we need to balance our excitement for technical innovation with a thoughtful awareness of its broader social implications.