Week 5: Midterm Progress

Concept

I’m in the process of creating a haunted-house-themed game where the player has purchased a dilapidated property, mainly because it was so cheap—no one else wanted it. The catch is that it’s infested with ghosts. My overarching goal is for the player to banish these ghosts, wave by wave, ultimately revealing the house’s true beauty once it’s free of any supernatural presence. I want to capture a sense of gradual transformation: at first, the environment is dim and unnerving, but it transitions to a bright, welcoming home as the player defeats all the ghosts.

Progress So Far

So far, I have built out a state machine that includes an intro screen, a main “PLAY” state, and placeholders for the final “WIN” and “END” screens. In the intro, the game briefly explains the story—that the house was bought cheaply because of the hauntings—then moves to the main gameplay once the user clicks the start button. The basic logic for wave progression is in place: after the initial wave of slower ghosts, I plan to introduce a second wave of faster ghosts, and ultimately a boss ghost that requires multiple hits to defeat. Each ghost’s code has been thoroughly tested in small increments, ensuring that the transition between waves feels smooth. I also integrated a rudimentary health system; each time a ghost vanishes without being clicked, the player’s health decreases, raising the stakes as they progress.
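Since the health system is still rudimentary, here is a minimal sketch of how the two rules described above could be wired together; the `updateWave` helper and its field names are my own assumptions for illustration, not the project's actual code:

```javascript
// Hypothetical helper: penalize each ghost that timed out unclicked (once),
// and report whether every ghost in the wave is gone.
function updateWave(ghosts, health) {
  for (const g of ghosts) {
    if (!g.visible && !g.wasClicked && !g.counted) {
      g.counted = true;   // make sure each escaped ghost costs health only once
      health -= 1;
    }
  }
  const waveOver = ghosts.every((g) => !g.visible);
  return { health, waveOver };
}
```

In draw(), the returned waveOver flag could then trigger spawning the next, faster wave.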

Class Implementation

A key aspect of my current setup is the Ghost class, which I designed to handle movement, timing, and click interaction. By encapsulating these behaviors, I’ve managed to keep my main draw() loop more organized and make it simpler to test the game’s logic wave by wave. Here is a condensed version of the Ghost class:

class Ghost {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.size = 60;
    this.visible = true;
    this.wasClicked = false;
    this.vx = random(-2, 2);
    this.vy = random(-2, 2);
    this.spawnTime = millis();
    this.lifespan = 5000; // 5 seconds
  }

  move() {
    if (!this.visible) return;
    this.x += this.vx;
    this.y += this.vy;
    if (this.x < 0 || this.x + this.size > width) {
      this.vx *= -1;
    }
    if (this.y < 0 || this.y + this.size > height) {
      this.vy *= -1;
    }
    if (millis() - this.spawnTime > this.lifespan) {
      this.visible = false;
    }
  }

  display() {
    if (this.visible) {
      image(ghostImg, this.x, this.y, this.size, this.size);
    }
  }

  isClicked(mx, my) {
    return (
      this.visible &&
      mx > this.x && mx < this.x + this.size &&
      my > this.y && my < this.y + this.size
    );
  }

  vanish() {
    this.visible = false;
    this.wasClicked = true;
  }
}

Using this as a foundation, I have also been working on subclasses like FastGhost and BossGhost to provide unique behaviors—faster speed, shorter lifespans, or requiring multiple hits to defeat. This object-oriented structure ensures I can easily add or modify ghost types without complicating the main game flow.
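As a rough illustration of what those subclasses might look like (a sketch only: the exact speeds, lifespans, and hit counts are placeholders, and the minimal base class here stands in for the full Ghost class above so the snippet is self-contained):

```javascript
// Minimal stand-in for the Ghost class shown above.
class GhostBase {
  constructor(x, y) {
    this.x = x; this.y = y;
    this.size = 60;
    this.visible = true;
    this.wasClicked = false;
    this.vx = 1; this.vy = 1;      // p5's random(-2, 2) in the real class
    this.lifespan = 5000;
  }
  vanish() { this.visible = false; this.wasClicked = true; }
}

// Faster and shorter-lived than the base ghost.
class FastGhost extends GhostBase {
  constructor(x, y) {
    super(x, y);
    this.vx *= 2; this.vy *= 2;
    this.lifespan = 3000;          // vanishes sooner, raising the pressure
  }
}

// Needs several clicks before it disappears.
class BossGhost extends GhostBase {
  constructor(x, y) {
    super(x, y);
    this.hits = 3;
    this.size = 120;
  }
  vanish() {
    this.hits -= 1;
    if (this.hits <= 0) super.vanish();  // only the final hit banishes it
  }
}
```

Because both subclasses keep the base class's interface, the main game loop can hold them all in one array and call move(), display(), and vanish() without caring which type each ghost is.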

Frightening / Challenging Aspects

One of the biggest challenges I’m facing is tuning the difficulty so it feels suspenseful without being overly punishing. In particular, I need to strike a balance between ghost speed, lifespan, and the number of ghosts per wave. If ghosts vanish too slowly, it’s too easy; if they move or time out too quickly, it becomes frustrating. Another tricky part is creating a strong contrast between the haunting atmosphere at the start and the serene, beautiful environment at the end—this requires careful coordination of art assets, lighting (or color usage), and the timing of transitions so players truly feel like they’ve “rescued” the house from a dark fate.

Risk Prevention

I have set up the game flow using distinct states—“INTRO,” “PLAY,” “WIN,” and “END”—to keep code separated and avoid any messy overlaps. Testing each state individually helps isolate potential bugs early. I also made sure that all ghost interaction, movement, and collision logic lives within their respective classes, so if there’s an issue with a particular ghost type, I know exactly where to look for a fix. By incrementally adding waves and testing them (rather than coding all three at once), I can ensure that each wave behaves as intended and that wave transitions don’t break the health or scoring systems. This structured approach reduces the chance of large-scale errors going unnoticed until late in development.

Next Steps

I still need to fully integrate the final boss ghost, which will serve as the game’s climactic encounter. Once the boss is in place, I will refine the wave progression so that defeating the boss triggers a major visual shift—the unveiling of the house’s hidden beauty. Finally, I’ll spend time polishing transitions between states, adjusting ghost behaviors, and ensuring that the health system and scoring remain engaging but fair. My aim is for players to feel a real sense of accomplishment when they see the haunting gloom replaced by a warm, inviting dwelling, thereby completing the game’s central promise of transforming a scary, cheap purchase into a comfortable new home.

 

Midterm Progress Report

Concept/Design

For my midterm, I wanted to create an escape-room type of game, where the players have to find objects around the room to be able to escape. In terms of design, I want to structure it so that the players must find, for example, five objects within a room and answer a series of questions to unlock the next clue. The game might begin with an introductory hint that guides the player to find the first object. Once they find it, they are either provided with another hint or asked a question. Answering the question correctly leads them to the next clue or object, while answering it wrong may either let them retry or offer an extra hint. As the player progresses, they will uncover more objects until they have everything needed to escape. I will probably have the game feature different rooms for the player to choose from, with each room presenting a unique set of clues, objects, and questions to solve.

Challenging Code/Uncertainty

For this project, I think the most complex part is the ordered-clicking mechanic. For this game specifically, I think this is needed because I want the players to interact with objects in a specific order; whether it's pressing buttons or flipping switches, they have to do it in sequence. Adding this system to my code, where objects must be clicked in the correct order, makes sure that players engage with the puzzle thoughtfully rather than just clicking randomly and finding the key on the first try.

To minimize this uncertainty, I wrote the code to keep track of which shapes the player has clicked using the variables rectClicked, triClicked, and circClicked. These start as false, meaning the player hasn't clicked them yet. Then, in the mousePressed() function, I set rules so that the shapes can only be clicked in the correct order. For example, the player has to click the rectangle first, then the triangle, and finally the circle. If the player tries to click them out of order, the game won't move forward. This helps keep the game flow smooth and clear for the player. Below is the code:

function mousePressed() {
  if (!rectClicked && mouseX > 50 && mouseX < 130 && mouseY > 100 && mouseY < 150) { //first rectangle is clicked
    rectClicked = true;
  } else if (rectClicked && !triClicked && mouseX > 170 && mouseX < 230 && mouseY > 100 && mouseY < 150) { //triangle clicked true  if rectangle clicked first
    triClicked = true;
  } else if (rectClicked && triClicked && !circClicked && dist(mouseX, mouseY, 320, 125) < 25) {//circle clicked true if rectangle and triangle clicked before
    circClicked = true;
    escape = true; //clicking circle = players escapes
  }
}
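One possible generalization of this (not my current code) would be to store the required order in an array and track progress with a single index, so adding a fourth or fifth object doesn't need a new boolean. The shape names below are placeholders:

```javascript
// Required click order; extend this array to add more objects.
const sequence = ['rect', 'tri', 'circ'];
let progress = 0;

function clickShape(name) {
  if (name === sequence[progress]) {
    progress++;                          // correct shape: advance
  } else {
    progress = 0;                        // wrong shape: start over (a design choice)
  }
  return progress === sequence.length;   // true when the player escapes
}
```

mousePressed() would then only need to figure out which shape was hit and pass its name to clickShape().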

Week 5: Midterm Progress

Concept

Back home, one of the places I used to spend most of my time was my workspace in my room. As the saying goes, “You spend most of your life inside your head. Make it a nice place to be”; in my case, that place was my workspace. That, combined with my passion for filming content, sparked my obsession with decorating it. I studied, ate, worked, and even slept there (quite literally).

Since I’ve left that space behind in Bangladesh, I wanted to recreate a piece of it through an interactive artwork in p5.js for my midterm project. My goal is to visually mimic my workspace and integrate interactivity based on what I’ve learned in class. For starters, I plan for it to have the feel of an open-world game where you can interact with and explore the objects around you. Since my workspace is so busy with a lot of stuff, it creates the perfect scenario for exploration. I also plan to add a knock-on-my-door-ish start screen instead of a regular one. I think this will not just meet the project’s requirements for interactivity but also make it feel more personal—it’s the closest thing I have to my old workspace now.

Design Process

Now that I have jotted down my concept, I realize I had unintentionally set myself up for a lengthy challenge. Hunting for assets that matched my aesthetic turned into a frustrating game of almost-but-not-quite—everything either messed with the vibe or threw off the whole look I was going for.

I ultimately made the (inevitable) decision to draw everything from scratch. To begin the production process, I created a rough sketch of my concept:

[Image: rough sketch of the concept]

At this point, I felt overwhelmed by the number of objects I was adding and my initial plan to make all of them interactive. Then I remembered Professor Mang’s advice, which was something along the lines of: what matters most is that the project is aesthetically pleasing and clean, rather than overloaded with features. Moving forward, I might limit interactivity, making some objects static to maintain a clean look and to keep things manageable.

Working further on the aesthetic, I adjusted my initial sketch, settling on a more cohesive design:

[Image: adjusted, more cohesive sketch]

I wanted to include myself in the scene, so I drew my character sleeping at the desk—just as I often did in real life. I initially considered adding interactive movement to my character but ultimately decided against interactivity for two main reasons:

  1. The sheer number of frames I would have to draw to animate it.
  2. Potential issues with the layering, which could interfere with other interactive elements.

To avoid complications, I’ll position my character far from the interactive elements. My workspace had a blue theme with touches of white and pink, so I chose those as my main color palette. I also opted for a flat 2D style, as shading objects would take too much time and require changing the direction of shadow if I later changed their placement.

Challenges & Possible Fixes

1. Preparing the Assets

I started by drawing my character’s hair particles in the app Resprite and attempted to create movement. However, I realized that different segments of the hair needed to move at different speeds based on their density and weight, meaning a single sprite wouldn’t be enough. This led me to manually draw multiple hair segments, and the assets piled up quickly.

[Image: hair segments drawn so far]

I took this photo halfway through the process of drawing each hair segment. To ease the workflow, I loaded them into Pixel Studio to create a spritesheet. However, I ran into a major issue—the file size exceeded p5.js’s permitted limit. Compressing the spritesheet resulted in a significant loss of quality, which is a problem since the hair is a large, prominent element on the screen.

At this stage, I’m still unsure how to proceed while maintaining the quality. I may need to explore alternative ways to optimize the sprites or adjust the animation approach.

Possible fix:

  1. Splitting the hair into separate layers and animating them dynamically in p5.js instead of pre-rendering frames.
  2. Using vector-based movement instead of raster spritesheets.
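To sketch what the layer-based option might look like, here is a hypothetical helper that sways each hair layer with a per-frame sine offset instead of pre-rendered frames; the weighting scheme and constants are assumptions, not measured values:

```javascript
// Compute a horizontal sway offset for each hair layer at time t (ms).
// Heavier (later) layers move less and lag slightly behind the front ones.
function hairOffsets(t, layerCount, maxSway = 6) {
  const offsets = [];
  for (let i = 0; i < layerCount; i++) {
    const weight = 1 - i / layerCount;                      // lighter layers sway more
    offsets.push(Math.sin(t * 0.002 - i * 0.4) * maxSway * weight);
  }
  return offsets;
}

// In p5's draw(), roughly (hairLayers, headX, headY are placeholders):
//   const off = hairOffsets(millis(), hairLayers.length);
//   hairLayers.forEach((img, i) => image(img, headX + off[i], headY));
```

Since only the offsets change each frame, the original full-resolution layer images can be kept, sidestepping the spritesheet size limit entirely.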

2. Layering Issues & Depth Management

Since some objects are interactive and others are static, layering them incorrectly might make it hard to interact with certain elements or could cause visual glitches. I also feel like the range of interactivity of one object could hamper the others.

Possible Fix:

  1. Use z-index ordering within the p5.js draw loop to ensure the correct stacking of elements.
  2. Implement collision detection if certain objects should not be interactable when behind others. However, I’m yet to figure out the logic for that.
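A minimal sketch of the z-ordering idea, assuming every object carries a z value and a contains() hit test (both placeholders): draw back-to-front, and hit-test front-to-back so only the topmost object under the mouse reacts, which also covers the "not interactable when behind others" case.

```javascript
// Sort a copy of the objects back-to-front for drawing.
function drawOrder(objects) {
  return [...objects].sort((a, b) => a.z - b.z);
}

// Return the frontmost object under (mx, my), or null if none.
function topmostHit(objects, mx, my) {
  const frontFirst = [...objects].sort((a, b) => b.z - a.z);
  return frontFirst.find((o) => o.contains(mx, my)) || null;
}
```

In draw(), iterating drawOrder(objects) and calling each object's display() keeps stacking correct; in mousePressed(), only the object returned by topmostHit responds.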

3. Interactivity & User Experience

With so many objects in the workspace, interacting with them all might feel cluttered or overwhelming. There’s also the risk of users not realizing which objects are interactive.

Possible Fix:

Add subtle visual cues, like slight movements, highlights, or hover effects, along with audio feedback and different interaction types (e.g., clicking, dragging, hovering) to make interactions more varied and engaging.
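As a small illustration of the hover-cue idea (a sketch only; the object fields are assumptions), an object could lift a few pixels and flag itself for a highlight while the mouse is over it:

```javascript
// Given an object's bounding box, return its drawn y position and whether
// it should be highlighted this frame.
function hoverLift(obj, mx, my, lift = 4) {
  const over = mx > obj.x && mx < obj.x + obj.w &&
               my > obj.y && my < obj.y + obj.h;
  return { y: over ? obj.y - lift : obj.y, highlight: over };
}

// In p5's draw(), roughly:
//   const h = hoverLift(lamp, mouseX, mouseY);
//   if (h.highlight) tint(255, 230);   // subtle glow cue
//   image(lampImg, lamp.x, h.y);
```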

 

Wrapping It Up

While I initially set out to create an interactive replica, the design process has made me rethink how interactivity and aesthetics can coexist without overwhelming the viewer. With ongoing challenges like asset optimization, performance tuning, and user interaction design, I’m continuously refining my approach. Ultimately, I hope this piece not only meets the technical requirements but also captures some of my personality. Keeping things interesting enough that users want to continue exploring will be hard, though.

Week 5 : Reading Response

Humans recognize faces and objects effortlessly because we rely on memory and perception, without needing to crunch huge amounts of data. Computers, on the other hand, have to go through an entire process—analyzing pixel data, running algorithms, and comparing what they see to stored information just to identify something. Plus, we’re naturally good at adapting to different environments, while computers can struggle when conditions change from what they were programmed for, often leading to glitches or poor performance.

To help computers track objects better, there are a few go-to techniques. Brightness thresholding boosts contrast by filtering colors and intensities, making objects stand out more clearly. Background subtraction helps by removing the static background so the system can focus only on what’s moving. These methods make it easier for computer vision to detect and track what matters, whether in art, security, or interactive installations.

In interactive art, computer vision takes audience engagement to the next level by allowing people to actively shape the artwork with their movements and gestures. Unlike traditional static art, these installations respond in real-time, making the experience more dynamic and immersive. By tracking participants as they move, computer vision can trigger visual, auditory, or even tactile reactions, turning viewers into active contributors rather than just passive observers. This creates a deeper connection with the artwork, giving people a sense of agency in the creative process. Whether it’s motion-triggered visuals, gesture-controlled projections, or body-driven games, computer vision is expanding the possibilities of creativity in interactive media.

Midterm Project Progress: Interactive Fantasy Game

For this project, I decided to transform a short fantasy story I wrote in primary school into an interactive game using p5.js. The game has multiple choices that affect the storyline, but in the end, all paths lead to the same outcome. There are a total of 5 key decision points and 14 different screens in the game.

One of my biggest challenges was implementing the buttons. I initially tried to create them using object-oriented programming (OOP), but it became too confusing because the button positions change so frequently from screen to screen, so I just used the built-in function createButton().

Currently, the game has a pink background, but I plan to replace it with images that reflect the setting of each screen, adding to the immersive experience. I also intend to incorporate sound effects that correspond to the events in the story to further enhance the game’s atmosphere.

During this week, I’ll be focusing on:

  • Using OOP for the buttons instead of built-in functions
  • Adding some sounds and background images

The part I’m most proud of so far is the overall structure of the different screens and managing the transitions between them, as it took a lot of time to figure out how to switch between screens smoothly.

Here is the code snippet of the screens and buttons changing:

function draw() {
  background('pink');
  //The Screen is picked based on the screen number
  if (screen === 0) {
    showStartScreen();
  } else if (screen === 1) {
    showBirthdayScreen();
  } else if (screen === 11) {
    showSuppliesScreen();
  } else if (screen === 12) {
    showWeaponScreen();
  } else if (screen === 111) {
    showNightScreen();
  } else if (screen === 112) {
    showMorningScreen();
  } else if (screen === 121) {
    showNightScreen();
  } else if (screen === 122) {
    showMorningScreen();
  } else if (screen === 1111 || screen === 1121 || screen === 1211 || screen === 1221) {
    showRiverScreen();
  } else if (screen === 1112 || screen === 1122 || screen === 1212 || screen === 1222) {
    showForestScreen();
  } else if (screen === 11000 || screen === 12000 || screen === 21000 || screen === 22000) {
    showNextScreen();
  } else if (screen === 5000) {
    showDragonCaveScreen();
  } else if (screen === 5001) {
    showInsideScreen();
  } else if (screen === 5002) {
    showOutsideScreen();
  } else if (screen === 5003) {
    showTrapScreen();
  } else if (screen === 262626) {
    showFinalScreen();
  }
}

function mousePressed() {
  if (screen == 0 && isMouseOver(enterButton)) {
    screen = 1;
    hideAllButtons();
    showBirthdayScreen();
  } else if (screen == 1) {
    if (isMouseOver(suppliesButton)) {
      screen = 11;
      hideAllButtons();
      showSuppliesScreen();
    } else if (isMouseOver(weaponButton)) {
      screen = 12;
      hideAllButtons();
      showWeaponScreen();
    }
  } else if (screen === 11) {
    if (isMouseOver(nightButton)) {
      screen = 111;
      hideAllButtons();
      showNightScreen();
    } else if (isMouseOver(morningButton)) {
      screen = 112;
      hideAllButtons();
      showMorningScreen();
    }
  } else if (screen === 12) {
    if (isMouseOver(nightButton)) {
      screen = 121;
      hideAllButtons();
      showNightScreen();
    } else if (isMouseOver(morningButton)) {
      screen = 122;
      hideAllButtons();
      showMorningScreen();
    }
  } else if (screen === 111) {
    if (isMouseOver(riverButton)) {
      screen = 1111;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1112;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 112) {
    if (isMouseOver(riverButton)) {
      screen = 1121;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1122;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 121) {
    if (isMouseOver(riverButton)) {
      screen = 1211;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1212;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 122) {
    if (isMouseOver(riverButton)) {
      screen = 1221;
      hideAllButtons();
      showRiverScreen();
    } else if (isMouseOver(forestButton)) {
      screen = 1222;
      hideAllButtons();
      showForestScreen();
    }
  } else if (screen === 1111 || screen === 1121 || screen === 1211 || screen === 1221) {
    if (isMouseOver(fishButton)) {
      screen = 11000;
      hideAllButtons();
      showNextScreen();
    } else if (isMouseOver(riverspiritsButton)) {
      screen = 12000;
      hideAllButtons();
      showNextScreen();
    }
  } else if (screen === 1112 || screen === 1122 || screen === 1212 || screen === 1222) {
    if (isMouseOver(firefliesButton)) {
      screen = 21000;
      hideAllButtons();
      showNextScreen();
    } else if (isMouseOver(forestspiritsButton)) {
      screen = 22000;
      hideAllButtons();
      showNextScreen();
    }
  } else if (screen === 11000 || screen === 12000 || screen === 21000 || screen === 22000) {
    if (isMouseOver(next1Button)) {
      screen = 5000;
      hideAllButtons();
      showDragonCaveScreen();
    }
  } else if (screen === 5000) {
    if (isMouseOver(insideButton)) {
      screen = 5001;
      hideAllButtons();
      showInsideScreen();
    } else if (isMouseOver(outsideButton)) {
      screen = 5002;
      hideAllButtons();
      showOutsideScreen();
    } else if (isMouseOver(trapButton)) {
      screen = 5003;
      hideAllButtons();
      showTrapScreen();
    }
  } else if (screen === 5001 || screen === 5002 || screen === 5003) {
    if (isMouseOver(next2Button)) {
      screen = 262626;
      hideAllButtons();
      showFinalScreen();
    }
  }
}
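One possible refactor of the chains above (not the current implementation) would be a transition table: each screen maps to its buttons and their destination screens, so draw() and mousePressed() shrink to a lookup. The screen numbers and button names below follow the snippet, and only the first few screens are shown:

```javascript
// Each screen lists its clickable buttons and where each one leads.
const transitions = {
  0:  [{ button: 'enterButton',    next: 1 }],
  1:  [{ button: 'suppliesButton', next: 11 },
       { button: 'weaponButton',   next: 12 }],
  11: [{ button: 'nightButton',    next: 111 },
       { button: 'morningButton',  next: 112 }],
  // ...remaining screens follow the same pattern
};

// isOver is a callback, e.g. (name) => isMouseOver(buttons[name]).
function nextScreen(current, isOver) {
  const options = transitions[current] || [];
  const hit = options.find((o) => isOver(o.button));
  return hit ? hit.next : current;   // stay put if no button was hit
}
```

mousePressed() would then reduce to updating `screen = nextScreen(screen, ...)`, hiding the buttons, and showing the screen for the new value.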

Here is the game: [embedded p5.js sketch]

Week 5: Reading Response

The main difference between computer and human vision is that while humans can instantly recognize faces, objects, and contexts, a computer sees only raw pixel data unless explicitly programmed to interpret it. We instantly recognize people, expressions, colors, and animals, all of which computers have to be taught to do, and even then only to some extent. To help computers “see” or track objects of interest, techniques such as frame differencing, background subtraction, and brightness thresholding are used to extract meaningful features from video input. These methods allow interactive systems to detect movement, isolate subjects, and respond dynamically to user actions, forming the foundation of many interactive media works. However, each of these techniques has limitations, such as sensitivity to lighting changes or reliance on high-contrast visuals, highlighting the need for carefully designed environments to optimize detection accuracy.

Still, computer vision has improved vastly over the past few decades and has especially become much more accessible for artists, designers, and even beginner programmers to incorporate vision-based interactivity into their work. Software environments like Processing, Max/MSP/Jitter, and Macromedia Director provide frameworks for integrating computer vision, either through direct coding or via plug-ins and toolkits. Additionally, stand-alone applications like BigEye and EyesWeb expand these capabilities, enabling real-time motion tracking and expressive gesture analysis that is much easier to get your hands on.

In interactive art, computer vision’s capacity for tracking and surveillance introduces both creative potential and ethical concerns. Artists can leverage motion detection and object tracking to create immersive, responsive installations that engage audiences much more than just standing there staring. However, the same tools that enable interactivity can also invoke concerns about surveillance, privacy, and data collection. Since interactive art often explores the relationship between technology and human experience, the ability of computer vision to monitor and track movement, especially without explicit user consent, raises questions about autonomy and control. What information is the computer storing? Can it be used against me? These are questions people who are being recorded don’t even have the chance to ask. Ultimately, while computer vision expands the possibilities for dynamic and participatory art, its application must be carefully considered to balance both innovation and creativity with ethical responsibility.

Week 5 Reading

What are some of the ways that computer vision differs from human vision?

The biggest distinction is that when we see a photo, it is intuitive for us to distinguish a person from the background, whereas it is comparatively harder for computers to make that distinction. This meant early computers had to use movement to distinguish which pixels on a display belonged to the object or person of interest and which were only part of the background. Furthermore, to detect movement, computers had to calculate the change in color or brightness of pixels between frames, whereas these things are quite simple for human vision.

What are some techniques we can use to help the computer see / track what we’re interested in?

frame differencing – comparing frames to determine movement.

background subtraction – comparing the captured scene against a stored image of the empty background to determine what is not part of the background, i.e., the object of interest.

brightness thresholding – looking for changes in luminosity to determine the position/change in position of objects.

How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

It allows for an extra dimension of interaction. Many artworks take advantage of this, including the one with the poem’s falling letters (Text Rain). This means you can take more than just key or button inputs: by using computer vision to track human movements, you make the artwork more intuitive to operate (consider that people may not know they need to press a button, but if they see their own image on the projector, they already know how to control that shadow).

Week 5 – Reading Response

Human vision is intuitive, whereas computer vision relies on algorithms, pattern recognition, and controlled environments for those algorithms to work. Computer vision processes images pixel by pixel, whereas human vision takes in an image as a whole. For example, to detect movement, some computer vision algorithms look at the difference in value between neighboring pixels, while human pattern recognition looks at the overall shape of an object. We can also better handle seeing things in different contexts, while computers need to be specifically told to watch out for slight variations. With the rise of machine learning and neural networks that help computer vision with pattern recognition, however, it will get easier for computers to detect objects and movement.

Some techniques for tracking include detecting motion, detecting presence, and detection through brightness thresholding. Detecting motion means comparing pixel brightness per frame: if the brightness of a pixel changes a lot from one frame to the next, that can indicate movement, and the total differences across all pixels give a broad picture of overall motion. Detecting presence means comparing an image with a preset background to see what’s different (background subtraction). Detection through brightness thresholding compares each pixel against a general brightness level to determine whether the pixel is in the foreground or background.
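To make the motion-detection idea concrete, here is an illustrative sketch of frame differencing on two grayscale pixel arrays (in p5.js these values would come from something like capture.pixels); the threshold value is an arbitrary assumption:

```javascript
// Count pixels whose brightness changed noticeably between two frames.
// A large count suggests motion somewhere in the scene.
function frameDifference(prev, curr, threshold = 30) {
  let moved = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) moved++;
  }
  return moved;
}
```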

Tracking through computer vision has a lot of potential in interactive art. Being able to detect a user’s movement means it can influence the direction of the art. Even with musical performances for example, an artist can do a specific movement to trigger a specific sound or effect. Raves or concerts which are both audio and visual based could have a lot of potential using dance movements to aid musical production. More advanced tracking has also allowed for the detection of subtle expressions and glances, which gives artists even more to work with as they can use a user’s emotion to guide an art piece.

Week 5: Midterm Progress

Project Concept

My midterm project is based on an article I wrote in The Gazelle a few years ago: Drowning At NYUAD: Finding Space In A Ceaseless Work Culture. The article discusses the at-times unhealthy work and hustle culture of NYUAD students, where a lack of sleep, overloading classes, and extracurricular commitments can be used as a metric of how “successful” you are.

The midterm game aims to exaggerate the absurdity of it, underscoring the need to find a personal work-life balance that enables you to enjoy (and not just survive) your undergraduate career.

Design

The game is designed to parallel mobile games where a character is dragged along the bottom of the screen, launching projectiles toward oncoming enemies. Except instead of enemies, the character will be defeating NYUAD-specific monsters, like SIG training, homework all-nighters, and readings.

Prior to playing the game, the user will be presented with several screens that allow for customization. Specifically, the user will need to select a character and an academic weapon from a list of three options each. Below are screenshots of the game storyboard, designed in Figma.

Key Risks

One of the aspects that I am most concerned about is the number of screens involved in the game. To alleviate the potential issues and minimize complexity, I am going to work to encapsulate each screen as a class object, with a .hide() and .show() method. This way, the code may stay maintainable and navigable. I may also opt to include a configuration dataclass object, to pass to each scene as a reference, in order to avoid defining many global configuration variables.
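A minimal sketch of that plan, with placeholder names: each screen is a class exposing show() and hide(), and a single shared config object is passed by reference so no extra globals are needed.

```javascript
// Shared configuration, passed to every scene by reference.
const config = { character: null, weapon: null };

class Scene {
  constructor(name, cfg) {
    this.name = name;
    this.cfg = cfg;          // same object everywhere, no globals required
    this.visible = false;
  }
  show() { this.visible = true; }
  hide() { this.visible = false; }
}

// A simple manager that keeps exactly one scene visible at a time.
class SceneManager {
  constructor(scenes) { this.scenes = scenes; this.current = null; }
  goTo(name) {
    if (this.current) this.current.hide();
    this.current = this.scenes.find((s) => s.name === name);
    this.current.show();
  }
}
```

In the real sketch, each Scene subclass would override show()/hide() to draw its own screen and register its own buttons, keeping draw() down to rendering whichever scene is current.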

Week 5 – Midterm Progress

Concept

I was thinking for a while about a good midterm project that would combine everything we have been taught so far and would also be fun to make and even more fun to play. I came up with an arcade-style game that combines the fast-paced racing games I used to play as a kid with fun and popular endless runners like “Subway Surfers”. I am making a game where the player moves a car left and right to avoid oncoming traffic and collect coins. As the game progresses and the user collects more coins, the game becomes faster and therefore harder, making it a fun challenge for everyone.

User interactions
  • Arrow keys to move the car left and right, allowing for quick and responsive controls
  • Randomly positioned traffic that appears on the screen, requiring strategic movement to avoid collision
  • Randomly appearing coins that the player collects to increase their score, encouraging risk-taking and precise movement
  • A progressively challenging difficulty curve where traffic increases in speed and density over time
  • Game over state when the player collides with traffic, prompting a restart option to try again and improve their score
Code design

I have structured my code using object oriented programming with the following classes:

Car – Represents the player’s car and handles movement

Traffic – Represents the incoming traffic and resets to random position when it moves off screen

Coins – Represents collectible coins that reappear at random positions when collected

Additionally the game includes:

Score system

Collision detection system

Car class:

class Car {
    constructor(x, y, w, h) {
        this.x = x;
        this.y = y;
        this.w = w;
        this.h = h;
    }
    display() {
        fill(0, 0, 255);  // placeholder blue box until images are added
        rect(this.x, this.y, this.w, this.h);
    }
    // Step one lane left or right, clamped to the road edges
    // (intended to run on a key press, so each press moves one step).
    move() {
        if (keyCode === LEFT_ARROW && this.x > 100) {
            this.x -= 50;
        } else if (keyCode === RIGHT_ARROW && this.x < 400) {
            this.x += 50;
        }
    }
}

Collision detection:

// Axis-aligned bounding box (AABB) overlap test between the car and traffic
if (car.x < traffic.x + traffic.w &&
    car.x + car.w > traffic.x &&
    car.y < traffic.y + traffic.h &&
    car.y + car.h > traffic.y) {
    console.log("Collision with traffic!");
    noLoop(); // freeze the sketch on game over
}
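As a hedged sketch of the reset behavior described for the Traffic class (lane positions, speed, and spawn offset below are placeholder values, not the project's real ones): when a traffic car drives off the bottom of the canvas, it respawns just above the top in a random lane.

```javascript
class Traffic {
    constructor(lanes, speed, canvasH) {
        this.lanes = lanes;        // x positions of the lanes
        this.speed = speed;
        this.canvasH = canvasH;
        this.reset();
    }
    reset() {
        // pick a random lane and spawn just above the visible canvas
        this.x = this.lanes[Math.floor(Math.random() * this.lanes.length)];
        this.y = -60;
    }
    update() {
        this.y += this.speed;
        if (this.y > this.canvasH) this.reset();  // off screen: respawn
    }
}
```

Ramping difficulty then only needs `this.speed` to grow as the score increases.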

Currently I have the car and the traffic as simple p5 box objects while I set everything up, but will change them to images as I work on the project.

Challenges and Risks

The most complex part of the project so far was implementing collision detection and ensuring objects reset properly. This was essential for making the game playable and preventing unfair conditions where the player could not avoid obstacles. I have dealt with this challenge already, but I am thinking of adding a power-up system to the game, which might bring a completely new set of challenges with it.

Next steps

While the core mechanics have already been implemented there is still work to be done such as:

Add game sound

Improve the visuals

Add high score tracking

Possible implementation of a power-up system

 

So far this project has been a great learning experience. I am looking forward to working on it even more and refining it further!