Week 5 – Midterm Progress

Concept 

For my midterm project, I am developing a game called Balloon Popper, which incorporates everything we have learned so far. In this game, balloons fall from the top like rain, and the player must pop them before they reach the bottom. The more balloons the player pops, the faster they fall and the more balloons appear, increasing the challenge dynamically. The score is based on the number of balloons popped.

Code Structure

The game will be structured around object-oriented programming (OOP) principles, utilizing classes and functions to manage different elements:

Balloon Class: Defines properties such as position, speed, size, and color. Handles movement and collision detection.

// Balloon Class
class Balloon {
  constructor(x, y, size) {
    this.x = x;
    this.y = y;
    this.size = size;
    this.color = color(random(255), random(255), random(255));
  }

  move() {
    this.y += 2; // move the balloon downwards
  }

  display() {
    fill(this.color);
    noStroke();
    ellipse(this.x, this.y, this.size, this.size * 1.3); // oval balloon shape
  }
}

Shooter Class: Represents a player-controlled shooter at the bottom of the screen, used to aim and pop balloons.

Game Manager: Handles overall game logic, including score tracking, difficulty scaling, and user interactions.

Interactivity: The player moves the shooter left and right and fires projectiles to pop balloons.
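
As a rough sketch, the shooter's movement might look like the following (the class name, speed, and key bindings are placeholders, not final decisions):

// Sketch of a possible Shooter class (values are placeholders)
class Shooter {
  constructor() {
    this.x = width / 2; // horizontal position, starts centered
    this.speed = 5;     // pixels per frame
  }

  update() {
    // slide left/right while an arrow key is held down
    if (keyIsDown(LEFT_ARROW)) this.x -= this.speed;
    if (keyIsDown(RIGHT_ARROW)) this.x += this.speed;
    this.x = constrain(this.x, 0, width); // stay on screen
  }

  display() {
    fill(80);
    triangle(this.x - 15, height - 10, this.x + 15, height - 10, this.x, height - 40);
  }
}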

Challenges and Uncertain Aspects

One of the most complex aspects of this project is implementing multiple difficulty levels (Easy, Medium, Hard). I am unsure of how feasible it will be within the project timeline. Additionally, I was initially uncertain about whether to allow players to pop balloons using a shooter at the bottom or direct mouse clicks.

Risk Mitigation and Adjustments

To ensure feasibility, I decided to focus on dynamic speed increase as the main difficulty progression instead of distinct levels. This allows the game to scale naturally in difficulty without the need for predefined level transitions. However, I may still explore the possibility of adding a multi-level aspect if time permits. Additionally, I have chosen to implement a shooter at the bottom rather than direct mouse clicks, as this adds an extra layer of interactivity and skill to the game.
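
A minimal sketch of the dynamic speed increase, tying fall speed to the score (the base speed matches the move() code above; the scaling factor is a guess I would tune):

// Fall speed grows with the score instead of using discrete levels
let score = 0;

function balloonSpeed() {
  return 2 + score * 0.05; // base speed plus a small bump per pop (factor to tune)
}

// then inside Balloon.move():
//   this.y += balloonSpeed();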

Week 5 – Reading Reflection

  • What are some of the ways that computer vision differs from human vision?

Human vision involves cognitive processes that allow us to interpret context, recognize objects across varying lighting conditions and angles, and make intuitive inferences. In contrast, computer vision relies on complex algorithms that analyze pixel data without context or intuition. Unlike human vision, which naturally adapts to varying conditions, computer vision relies on structured methods such as frame differencing, background subtraction, and brightness thresholding to detect motion, presence, or objects of interest.

  • What are some techniques we can use to help the computer see / track what we’re interested in? 

As the paper notes, one of the greatest challenges in computer vision is enabling computers to make accurate detections and distinguish between “what is” and “what was,” a key factor in motion and presence detection. Several techniques help achieve this:

Frame Differencing: This method detects motion by comparing differences between consecutive frames, identifying areas where pixel values have changed.

Background Subtraction: This technique captures an image of an empty scene as a reference and then compares incoming frames against it. Any changes are flagged as new objects. However, it is highly sensitive to lighting variations.

Brightness Thresholding: Controlled illumination and surface treatments (such as high-contrast materials or backlighting) help distinguish objects based on their brightness levels, making tracking more effective in interactive environments.

By combining these methods, computer vision can better track motion, recognize objects, and adapt to artistic applications.
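
To make frame differencing concrete, here is a minimal p5.js sketch of the idea: it compares each webcam pixel against the previous frame and sums the changes (the threshold of 30 is arbitrary):

// Minimal frame differencing: sum how much each pixel changed since last frame
let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  if (prevFrame) {
    let totalMotion = 0;
    for (let i = 0; i < video.pixels.length; i += 4) {
      // red channel as a rough brightness proxy
      let diff = abs(video.pixels[i] - prevFrame[i]);
      if (diff > 30) totalMotion += diff; // 30 is an arbitrary threshold
    }
    fill(255);
    text("motion: " + totalMotion, 10, 20);
  }
  prevFrame = video.pixels.slice(); // remember this frame for the next comparison
}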

  • How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

The surveillance capacity and tracking ability of computer vision can be used to store and present anomalous data in a creatively artistic way. Many artists have integrated these capabilities to create interactive installations that respond to human movement and behavior. Myron Krueger’s Videoplace (1969-1975), for example, allowed participants to interact with digital graphics using only their silhouettes, demonstrating how computer vision can enable body-driven interaction. Similarly, Messa di Voce (2003) used head-tracking and speech analysis to create a dynamic visual experience where graphics appeared to emerge from performers’ mouths, merging performance with real-time digital augmentation.

Week 5 – Reading Response

Computer vision differs from human vision in many ways. One of the key differences is that human vision naturally adapts to differences in lighting and can intuitively understand objects, track motion, and use context to recognize emotions and patterns. On the other hand, computer vision relies heavily on mathematical algorithms to detect objects and track movement. Differences in lighting often cause computer vision to break or misbehave, and it is generally impossible for computers to understand context without advanced AI.

There are several ways we can help the computer “see” and track what we want it to track. One is frame differencing, where we tell the computer to compare consecutive frames and detect the changes that indicate movement. To build on this, we can also use background subtraction, which helps isolate the objects we want to track while ignoring everything in the background.

Computer vision has wide potential in interactive media. Many artists have used it to create amazing interactive art that, in my opinion, feels more interactive than just clicking buttons. Artists use computer vision to create playful experiences that fully immerse the user, who feels in complete control of the movement of the object. I believe that in the future, in combination with AI, computer vision will completely take over the interactive media industry.

Week 5 – Midterm Progress

Concept:
For my midterm project, I wanted to create a game based on something I love: cats! Growing up and living in Abu Dhabi, I’ve noticed that there are a lot of stray cats, so I wanted to design a game where the player drives around a city, rescues stray cats, and takes them to a shelter. I was inspired by a photography project I did last semester about the spirit of street cats in Abu Dhabi, for which I went around the city and captured these cats’ lives and the environment they live in. (link to the photos). The game will combine movement mechanics, object interactions, and a simple pet care system. The goal is to rescue and rehome all the stray cats before the game ends.

User Interaction and Design:
For the interaction, I would like to implement a way for the player to control the car using arrow keys to move around. The stray cats will be at random locations in the city and if the car touches a cat, it is rescued and sent to a shelter. I was also thinking of adding more to the game, where inside the shelter, the player can click on the cat to heal or feed them. Finally, once all the cats are healthy, the game ends and displays a win screen.
Visual:
Start Screen: Shows the game instructions and a “start” button.
Game Screen: Has a city background with a moving car, stray cats, and a shelter section.
End Screen: Congratulates the player and has a restart button.
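
A simple state variable is a likely way to wire these three screens together; here is a sketch with placeholder names (drawStartScreen, startButtonHit, and the other helpers are hypothetical):

let gameState = "start"; // "start", "play", or "end"

function draw() {
  if (gameState === "start") {
    drawStartScreen(); // instructions + start button
  } else if (gameState === "play") {
    drawGame();        // city, car, cats, shelter
  } else {
    drawEndScreen();   // congratulations + restart button
  }
}

function mousePressed() {
  if (gameState === "start" && startButtonHit(mouseX, mouseY)) gameState = "play";
  if (gameState === "end" && restartButtonHit(mouseX, mouseY)) gameState = "start";
}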

Code Structure:
In order to ensure the code is organized, I plan to use Object-Oriented Programming by creating three main classes.
Car Class:
– Player movement (arrow keys).
– Checks for collisions with pets.
Pet Class:
– Stores pet location and condition (hungry, injured).
– Moves to the shelter when rescued.
Shelter Class:
– Displays rescued pets.
– Tracks pet status and healing progress.
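
As a sanity check on this structure, a bare-bones Pet class might look like the following (field names and the condition values are assumptions based on the plan above):

// Bare-bones Pet: position, condition, and a rescued flag (names assumed)
class Pet {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.condition = random(["hungry", "injured"]); // random starting condition
    this.rescued = false;
  }

  rescue(shelterX, shelterY) {
    // move to the shelter once the car touches this pet
    this.x = shelterX;
    this.y = shelterY;
    this.rescued = true;
  }
}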

Challenging Part & How I’m Addressing It:
I think the most frightening part of this project is implementing collision detection between the car and pets. Because the game involves movement, I need to find a way to detect when the car “rescues” a pet. To tackle this, I wrote a small sample using the dist() function to check if two objects are close enough to interact. This reduces my risk by confirming that object detection works before I use it in the full game.

function checkCollision(car, pet) {
  let d = dist(car.x, car.y, pet.x, pet.y);
  return d < 30; // if the distance is small, they collide
}

Midterm Progress Report

Concept/Design

For my midterm, I wanted to create an escape room type of game, where the players have to find objects around the room to be able to escape. In terms of design, I want to structure it so that the players must find, say, five objects within a room and answer a series of questions to unlock the next clue. The game might begin with an introductory hint that guides the player to the first object. Once they find it, they are either given another hint or asked a question. Answering the question correctly leads them to the next clue or object, while answering it wrong may either let them retry or offer an extra hint. As the player progresses, they uncover more objects until they have everything needed to escape. I will probably have the game feature different rooms for the player to choose from, with each room presenting a unique set of clues, objects, and questions to solve.

Challenging Code/Uncertainty

For this project, I think the most complex part is the ordered clicking mechanism. For this game specifically, I think it is needed because I want the players to interact with objects in a specific order; whether it’s pressing buttons or flipping switches, I want them to do it in sequence. Adding this system to my code, where objects must be clicked in the correct order, makes sure that players engage with the puzzle thoughtfully rather than clicking randomly and finding the key on the first try.

To minimize this uncertainty, I wrote the code to keep track of which shapes the player has clicked using variables like rectClicked, triClicked, and circClicked. These start as false, meaning the player hasn’t clicked them yet. Then, in the mousePressed() function, I set rules so that the shapes can only be clicked in the correct order: the player has to click the rectangle first, then the triangle, and finally the circle. If the player tries to click them out of order, the game won’t move forward. This helps keep the game flow smooth and clear for the player. Below is this code:

function mousePressed() {
  if (!rectClicked && mouseX > 50 && mouseX < 130 && mouseY > 100 && mouseY < 150) { // first, the rectangle is clicked
    rectClicked = true;
  } else if (rectClicked && !triClicked && mouseX > 170 && mouseX < 230 && mouseY > 100 && mouseY < 150) { // triangle counts only if the rectangle was clicked first
    triClicked = true;
  } else if (rectClicked && triClicked && !circClicked && dist(mouseX, mouseY, 320, 125) < 25) { // circle counts only after rectangle and triangle
    circClicked = true;
    escape = true; // clicking the circle = player escapes
  }
}

Week 5 – Reading Response

Human vision is intuitive, whereas computer vision relies on algorithms, pattern recognition, and controlled environments for those algorithms to work. Computer vision processes images pixel by pixel, whereas human vision takes in an image as a whole. For example, to detect movement, some computer vision algorithms look at the value difference of individual pixels from frame to frame, while human pattern recognition looks at the overall shape of an object. We can also better handle seeing things in different contexts, while computers need to be specifically told to watch out for slight variations. With the rise of machine learning and neural networks that help computer vision with pattern recognition, however, it will get easier for computers to detect objects and movement.

Some techniques for tracking include detecting motion, detecting presence, and detection through brightness thresholding. Detecting motion means comparing pixel brightness frame by frame: if the brightness of a pixel changes a lot from one frame to another, that can indicate movement, and the total differences across all pixels give a broad picture of total movement. Detecting presence means comparing an image with a preset background to see what’s different (background subtraction). Detection through brightness thresholding uses a general brightness level and compares pixels against it to determine whether each pixel is in the foreground or background.
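
To illustrate background subtraction in p5.js: press any key to capture the empty scene as a reference, then each frame is compared against it (a minimal sketch; the threshold of 40 is arbitrary):

// Minimal background subtraction: white where the scene differs from the reference
let video, bgFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep canvas pixels aligned with video pixels
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function keyPressed() {
  video.loadPixels();
  bgFrame = video.pixels.slice(); // store the empty scene as the reference
}

function draw() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    let diff = bgFrame ? abs(video.pixels[i] - bgFrame[i]) : 0;
    let v = diff > 40 ? 255 : 0; // 40 is an arbitrary threshold
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}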

Tracking through computer vision has a lot of potential in interactive art. Being able to detect a user’s movement means it can influence the direction of the art. Even with musical performances for example, an artist can do a specific movement to trigger a specific sound or effect. Raves or concerts which are both audio and visual based could have a lot of potential using dance movements to aid musical production. More advanced tracking has also allowed for the detection of subtle expressions and glances, which gives artists even more to work with as they can use a user’s emotion to guide an art piece.

Week 5 – Midterm Progress

Concept

I was thinking for a while about a good midterm project that would combine everything we have been taught so far and would also be fun to make and even more fun to play. I came up with an arcade-style game that combines the fast-paced racing games I used to play as a kid with fun and popular endless runners like “Subway Surfers”. I am making a game where the player moves a car left and right to avoid oncoming traffic and collect coins. The further the game progresses and the more coins the user collects, the faster, and therefore harder, the game becomes, making it a fun challenge for everyone.

User interactions
  • Arrow keys to move the car left and right, allowing for quick and responsive controls
  • Randomly positioned traffic that appears on the screen, requiring strategic movement to avoid collision
  • Randomly appearing coins that the player collects to increase their score, encouraging risk-taking and precise movement
  • A progressively challenging difficulty curve where traffic increases in speed and density over time
  • Game over state when the player collides with traffic, prompting a restart option to try again and improve their score
Code design

I have structured my code using object oriented programming with the following classes:

Car – Represents the player’s car and handles movement

Traffic – Represents the incoming traffic and resets to random position when it moves off screen

Coins – Represents collectable coins that respawn at a new random position once collected
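
The Traffic class isn’t shown below, so here is a rough sketch of the reset behavior described above (lane positions and speed are placeholder values):

// Rough Traffic sketch: drives down the screen, resets when it leaves it
class Traffic {
    constructor(w, h) {
        this.w = w;
        this.h = h;
        this.reset();
    }
    reset() {
        this.x = random([100, 175, 250, 325, 400]); // placeholder lane positions
        this.y = -this.h;                           // start just above the screen
        this.speed = 5;                             // placeholder speed
    }
    move() {
        this.y += this.speed;
        if (this.y > height) this.reset(); // back to the top at a new position
    }
    display() {
        fill(255, 0, 0);
        rect(this.x, this.y, this.w, this.h);
    }
}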

Additionally the game includes:

Score system

Collision detection system

Car class:

class Car {
    constructor(x, y, w, h) {
        this.x = x;
        this.y = y;
        this.w = w;
        this.h = h;
    }
    display() {
        fill(0, 0, 255);
        rect(this.x, this.y, this.w, this.h);
    }
    move() {
        if (keyCode === LEFT_ARROW && this.x > 100) {
            this.x -= 50;
        } else if (keyCode === RIGHT_ARROW && this.x < 400) {
            this.x += 50;
        }
    }
}

Collision detection:

if (car.x < traffic.x + traffic.w &&
    car.x + car.w > traffic.x &&
    car.y < traffic.y + traffic.h &&
    car.y + car.h > traffic.y) {
    console.log("Collision with traffic!");
    noLoop();
}

Currently I have the car and the traffic as simple p5 box objects while I set everything up, but will change them to images as I work on the project.

Challenges and Risks

The most complex part of the project so far was implementing collision detection and ensuring objects reset properly. This was essential for making the game playable and preventing unfair situations where the player could not avoid obstacles. I have dealt with this challenge already, but I am thinking of adding a power-up system to the game, which might bring a completely new set of challenges with it.

Next steps

While the core mechanics have already been implemented, there is still work to be done, such as:

Add game sound

Improve the visuals

Add high score tracking (see the sketch after this list)

Possible implementation of power up system
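
For the high-score item, p5.js’s storeItem()/getItem() web-storage helpers could persist the best score between sessions; a minimal sketch (the key name and the gameOver() hook are my placeholders):

// Minimal high-score persistence (key name is a placeholder)
let highScore = 0;

function setup() {
  createCanvas(500, 600);
  highScore = getItem("carGameHighScore") || 0; // load the previous best, if any
}

function gameOver(score) {
  if (score > highScore) {
    highScore = score;
    storeItem("carGameHighScore", highScore); // save the new best
  }
}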

 

So far this project has been a great learning experience, and I am looking forward to working on it even more and refining it further!

Week 5 – Midterm Progress

I decided to make an experience based on a flower shop. Users can go into the shop, select flowers they like, make a bouquet, and buy the flowers. I also want to implement a money system such that you need to purchase everything within a budget, and if you go over the budget, you can’t buy the bouquet. Right now, I’ve implemented walking in and out of the store using a door (enter) and rug (exit), selecting the flower stand and picking flowers you like from a menu into your cart, and going to the checkout. I want to draw all the images, backgrounds, and items in this experience myself, but right now I just have simple stand-in pictures I drew. I’ll replace these shabby ones in the future; I just wanted to get the mechanisms of my system working before beautifying it.

Jotting down ideas before starting: Lots of things ended up changing. I originally wanted to make everything pixel art too, but I think I’m gonna stick with a more freeform style.

Fear:

I had a lot of difficulty displaying and un-displaying different elements in the project. For example, I have a hover effect I’m proud of, as it helps indicate what’s clickable to users. However, that means turning things on and off in terms of displaying them, which is a lot more complicated to implement than it intuitively feels to use. I knew I was gonna spend a lot of time figuring out how these mechanisms would work, so I wanted to start early on them instead of on the visual sophistication of the project. That’s why I created really simple sketches and blobs to fill in for the final drawings.
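
A stripped-down version of the hover-and-toggle mechanism might look like this (the class and its fields are my placeholders, not the project’s actual code):

// Stripped-down hover highlight for a clickable item (names are placeholders)
class Clickable {
  constructor(x, y, w, h) {
    this.x = x; this.y = y; this.w = w; this.h = h;
    this.visible = true; // flipped off when the item shouldn't be shown
  }

  isHovered() {
    return mouseX > this.x && mouseX < this.x + this.w &&
           mouseY > this.y && mouseY < this.y + this.h;
  }

  display() {
    if (!this.visible) return;                   // "un-display" by skipping the draw
    stroke(this.isHovered() ? 'gold' : 'black'); // highlight when hovered
    fill(220);
    rect(this.x, this.y, this.w, this.h);
  }
}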

I’m also not sure if I want to implement the budget; the minimum viable product is being able to make a bouquet with the flowers you chose without worrying about money, but the budget would be nice to have. Having these milestones is important because then you feel less overwhelmed. Each tiny detail can take a long time to implement, so it can feel very daunting, but I can steadily make progress through these milestones.

Week 5: Reading Response

Computer Vision

The article’s juxtaposing of computer vision with human vision was very interesting. While some aspects are similar, such as the shared reliance on lenses, whether in cameras or in eyes, the image processing appears to be where differences begin to proliferate.

An interesting point the article made was that videos inherently store pixel information, but not necessarily scene information. For instance, a night sky is recorded as lots of black pixels rather than as some encoded “night sky” parameter. This fundamental limitation means that complex algorithms must be constructed to reconstruct and interpolate scene information from pixel values. Furthermore, there are still many video encoding formats (e.g., H.264, H.265), so standardization is lacking in this regard, introducing additional complexity to the process.

One of the techniques I found intriguing is the background subtraction technique, where an initial reference image of the set is first captured. Then, the reference is used to systematically distinguish which objects belong to the scene, and which do not.

The surveillance art piece that monitored the Golden Gate Bridge sparked considerable reflection. I found the author’s point particularly pointed when it was revealed that the piece had captured a considerably higher number of suicides than what was noticed through traditional systems. However, I can also see how recording these events is uniquely invasive to the subjects, who are likely unaware that they have become part of an art piece and did not sign up to be so. This work was only made possible through computer vision.

Week 5 – Reading

Computer vision algorithms are not general-purpose and rely on specific assumptions about the video input, while human vision is adaptable and can interpret a wide range of visual stimuli. Unlike humans, computers cannot inherently understand or extract semantic information from visual data without additional programming. Additionally, computer vision systems may struggle with ambiguous or poorly defined scenes, whereas humans can often infer meaning from context.

Techniques to enhance computer vision include controlled illumination to improve contrast, using brightness thresholding to distinguish objects based on their brightness, and employing background subtraction to isolate moving objects. Additionally, surface treatments like high-contrast paints can make objects more detectable. Simple object tracking algorithms can also be implemented to follow specific features, such as the brightest pixel in a video frame.
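
The “brightest pixel” tracker mentioned above is small enough to sketch in full; this is a minimal p5.js version of that classic technique (scanning every pixel and keeping a running maximum):

// Track the brightest pixel in the webcam frame and mark it
let video;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  let bestVal = -1, bestX = 0, bestY = 0;
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      let i = (y * video.width + x) * 4;
      // quick brightness estimate from R + G + B
      let b = video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2];
      if (b > bestVal) { bestVal = b; bestX = x; bestY = y; }
    }
  }
  noFill();
  stroke(255, 0, 0);
  circle(bestX, bestY, 20); // mark the brightest spot
}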

Computer vision’s tracking and surveillance capabilities enable interactive art to engage audiences by responding to their movements and expressions, creating immersive experiences. However, this capacity raises ethical concerns regarding privacy and consent, as participants may be unaware of being monitored. Artists must balance the innovative use of tracking technology with responsible practices to ensure a respectful and enjoyable interaction.