Midterm Project – The Maze Captures

Concept:

My midterm project is a new interactive game that combines an artistic experience with enjoyable gameplay. The Maze Captures is inspired by the Sheikh Zayed Museum and the Qasr Al Hosn Festival in Abu Dhabi. The artistic side comes from the museum, which is new and modern with a strong historical and cultural connection to the United Arab Emirates. I then developed the game aspect from the Qasr Al Hosn Festival, which had a culturally themed escape room, and adapted it to my own idea.

The concept of the game is to walk around and experience the museum, and each time an object is spotted, the player should position it inside their camera lens frame and capture it within a specified time to increase their score.

Here is an image that demonstrates my inspiration, from the Sheikh Zayed National Museum:

Embedded Sketch:

How it works:

The game starts with an instructions page so the user understands what they are expected to do. They press the start button and are positioned at the start of the maze. Players use the arrow keys to walk around smoothly and the mouse to look around to either side for easier movement. The setup gives the player different paths to take, like a maze, and includes different types of objects, from artworks to items placed throughout.

When an object is spotted, it should be centered in the lens frame and captured using the Enter key; a capture sound is added for interactivity and a sense of realism. Each time an object is captured, the score increases depending on how early in the timer the capture happens: the earlier the capture, the higher the score. Once the time is over, the player views their score and can press the play again button to restart the game.
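The post does not show the scoring formula, but time-based scoring like this could be sketched as a small helper. The function name captureScore() and the 10-to-100 score range are my assumptions for illustration:

```javascript
// Hypothetical scoring helper: maps the time remaining on the capture timer
// to a score. The name and the 10-100 range are assumptions, not the game's code.
function captureScore(timeLeft, totalTime, minScore = 10, maxScore = 100) {
  const fraction = Math.max(0, Math.min(1, timeLeft / totalTime)); // clamp to [0, 1]
  return Math.round(minScore + fraction * (maxScore - minScore));
}
```

An early capture (most of the timer left) scores near the maximum, while a last-moment capture scores near the minimum.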

An AI-generated image to illustrate my ideas:

Reflection:

I am proud of my project overall, as working with WEBGL requires a lot of effort, and I feel like I was able to manage it well. I am particularly proud of creating the movement of the player, as it started off as a problem, but I was able to overcome it in the best way possible. The issue was that the arrow keys were not aligned with the movement direction and would keep switching, and I couldn’t look around. I then figured out that I should compute each arrow key’s movement using the sine and cosine of the view angle, so that each key has its own behavior rather than mixing them up. I also used the mouseDragged() function covered in class to be able to look around.

//Mouse movement to look around
function mouseDragged(){
  angle += movedX * 0.01;
}

//Player Movement using the keys
function movePlayer() {
  let nextX = player.x;
  let nextZ = player.z;

  if (keyIsDown(UP_ARROW)){
    nextX += sin(angle) * speed;
    nextZ -= cos(angle) * speed;
  }
  
  if (keyIsDown(DOWN_ARROW)){
    nextX -= sin(angle) * speed;
    nextZ += cos(angle) * speed;
  }
  
  if (keyIsDown(LEFT_ARROW)){
    nextX -= cos(angle) * speed;
    nextZ -= sin(angle) * speed;
  }
  
  if (keyIsDown(RIGHT_ARROW)){
    nextX += cos(angle) * speed;
    nextZ += sin(angle) * speed;
  }

  //(the wall detection shown later decides whether nextX and nextZ are applied)
}

Another part I am proud of, which was also a problem at first, was that the player would walk through the walls, not allowing the game to function as I intended and making it unrealistic. I followed tutorials about movement in WEBGL and was able to use a for loop and detection code to prevent that from happening. I also applied it to prevent players from moving outside the game floor area.

In walls class:

//Setup code to prevent walking through walls
  //Setup code to prevent walking through walls
  detect(px, pz, size){
    return (
      px + size > this.x - this.w/2 &&
      px - size < this.x + this.w/2 &&
      pz + size > this.z - this.d/2 &&
      pz - size < this.z + this.d/2
    );
  }

In sketch:

let blocked = false;

for (let wall of walls) {
  if (wall.detect(nextX, nextZ, playerSize)) {
    blocked = true;
    break;
  }
}

//only commit the move if no wall was hit
if (!blocked) {
  player.x = nextX;
  player.z = nextZ;
}

Overall, I am satisfied with the outcome of my work. However, there are some areas where I think I could improve in the future, mostly regarding the graphics. Perhaps I could add more advanced decor and curves to the walls to make it more modern, and I would also love to create the exterior design of the museum so that players could move outside and look at it as well.

References:

I started off by learning about the WEBGL Mode in order to be able to complete my project the way I want it to be, using these links:

https://p5js.org/reference/p5/WEBGL/ 

https://youtu.be/nqiKWXUX-o8?si=tSUSM77-Ie64E13u 

To meet requirements, I used the slides covered in class and reviewed these links.

Go into fullscreen:

https://p5js.org/reference/p5/fullscreen/ 

Since I worked with WEBGL mode, I followed tutorials and reference links for each part:

Setting up the maze, with the floor and walls:

https://youtu.be/6TPVoB4uQCU?si=LFsx9b4IhB0HOYoW 

Allowing the player to move around:

https://youtu.be/BW3D9WwalQE?si=144ULyiZDN_HCjMr

Add images and objects in the maze:

https://youtu.be/O1mYw-3Wl_Q?si=4ENHn11BepkQGdqX

Midterm Project: Salama’s Nail Salon

For my midterm project, I made an interactive nail salon game in p5.js called Salama’s Nail Salon. The idea of the game is that a customer gives a short line that hints at the kind of color they want, and the player has to choose a nail polish color that matches it. I wanted it to feel simple and cute, but still have actual game logic behind it, so I used mood-based color matching instead of making it just about choosing any random color. Then, the player uses a hue slider to test different shades, and the game gives a result out of five stars depending on how close the chosen color is to the customer’s intended mood.

To build the project, I first split it into three main screens, which helped me organize the whole game better. I made a start screen with instructions, a playing screen where the customer gives their line and the player chooses a nail color, and a result screen that gives feedback and has a restart button in case the player would like to try again. Structuring the project this way made it much easier for me to understand the logic of the game and the logic of the code.

For the visuals, I used AI to produce my background image of a nail salon, and I added a font and soft pink buttons so the overall style fit a nail salon game. One part I spent a lot of time on was the nail itself. Instead of drawing the full finger, I used a finger image and drew a nail shape with the chosen color directly on top of it so the player could see the polish update on the actual nail. This took a lot of trial and error, because the shape had to be positioned and rounded carefully to fit inside the nail correctly. I also added a glossy highlight so it looked more like actual nail polish instead of just a flat shape.

I am especially proud of the result system, because the result compares the chosen color to the target color for the customer’s mood and then gives a star rating from one to five. At first, I was showing more technical feedback, like how far the player was from the target color, but when I made my sister Salama try the game, I realized that the results did not fit the tone of the game very well. Since the game is meant to feel playful and easy to understand, I changed the result screen so it gives simple reactions and color-based hints instead, and I think this made the game feel much more natural and kid-friendly. However, I did use AI to help with this part, and I explain exactly how I used it at the end of my blog.

function resultFeedback() {
  if (stars === 5) {
    resultTitle = "Perfect match!";
    resultHint = "The customer loved this color.";
  } else if (stars === 4) {
    resultTitle = "So close!";
    resultHint = colorHint(correctMood);
  } else if (stars === 3) {
    resultTitle = "Nice try!";
    resultHint = colorHint(correctMood);
  } else if (stars === 2) {
    resultTitle = "Not quite yet!";
    resultHint = colorHint(correctMood);
  } else {
    resultTitle = "Oops!";
    resultHint = "That color was far from what the customer wanted.";
  }
}

function colorHint(mood) {
  if (mood === "Happy") {
    return "The customer wanted a brighter, sunnier color.";
  } else if (mood === "Calm") {
    return "The customer wanted a softer, more bluish color.";
  } else if (mood === "Confident") {
    return "The customer wanted a bolder, more purple color.";
  } else if (mood === "Sad") {
    return "The customer wanted a deeper, darker color.";
  } else {
    return "Try a different shade next time.";
  }
}


Another part I am happy with is that I included multiple customer moods and randomized phrases, so the game does not feel exactly the same every time. I also used object-oriented programming by making a Customer class. Each round creates a customer object that stores the customer’s mood and phrase, and this helped me organize the dialogue part of the game more clearly. I used arrays of phrases for each mood and randomly selected one for each round. I also used buttons, a slider, text, images, shapes, and sound, so the final sketch combines a lot of the tools we have learned. Even though the game looks simple, there is a lot of structure behind it, and I’m really proud of how I organized it.
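A minimal sketch of what such a Customer class might look like is below. The phrase lists here are invented for illustration; the real game has its own arrays per mood:

```javascript
// Hypothetical sketch of the Customer class: each round stores a mood and one
// randomly chosen phrase for that mood. These phrase lists are placeholders.
const PHRASES = {
  Happy: ["I want something sunny today!", "Give me a cheerful color."],
  Calm: ["Something soft and peaceful, please.", "I need a relaxing shade."],
};

class Customer {
  constructor(mood) {
    this.mood = mood;
    const options = PHRASES[mood];
    // Pick one phrase at random so rounds don't all feel the same
    this.phrase = options[Math.floor(Math.random() * options.length)];
  }
}
```

Each round would then construct a new Customer with a randomly chosen mood.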

One challenge I ran into was figuring out how to make the game feel visually complete without making it too complicated. I originally was not sure how to show the nail color in a way that looked convincing, and I had to test many different values before the color shape finally fit the nail properly. Another challenge was deciding what kind of feedback to show on the result screen. I did not want it to feel too technical, so I had to rethink the wording and make it fit the tone of the game better. I also had to think a lot about spacing and layout so the nail image would not overlap with the hue spectrum and the controls. 

AI Use:

I used AI in two specific parts of this project. First, I used AI to generate the background image for the game. I explained that I wanted a background image for a nail salon game, described how I wanted it to look visually, and explained that it would be used across all of the screens in the game. I generated the AI background image when I was working on the midterm progress stage, and I decided to keep it because it matched the theme I wanted better than the images I found online. I chose to do this because I wanted a background that looked soft, cute, and clearly related to a nail salon.

The second place where I used AI was for the wording of the result screen. My original result feedback system was much more technical. In my code, the result was based on comparing the player’s chosen hue to the target hue for the customer’s mood. I used functions like hueDistance() to measure how close the chosen color was to the target color, starsFromDistance() to convert that distance into a star rating from one to five, and computeResult() to calculate the final result. Because the logic of the system was based on color distance, my first version of the result screen also showed that technical logic more directly. It included the star result, the predicted mood, and feedback about how far the chosen hue was from the target hue. From a coding perspective, that made sense to me because it matched how I had built the scoring system.
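The hueDistance() and starsFromDistance() functions are named above but not shown. A plausible reconstruction, under the assumption that hue wraps around at 360 and that distance thresholds map to stars, might look like this (the thresholds are my guesses, not the game's actual values):

```javascript
// Plausible reconstruction of hueDistance() and starsFromDistance();
// the star thresholds below are assumptions for illustration.

// Hue wraps at 360, so the distance between hue 350 and hue 10 is 20, not 340.
function hueDistance(a, b) {
  const d = Math.abs(a - b) % 360;
  return Math.min(d, 360 - d);
}

// Convert a hue distance (0-180) into a star rating from one to five.
function starsFromDistance(d) {
  if (d <= 10) return 5;
  if (d <= 25) return 4;
  if (d <= 45) return 3;
  if (d <= 80) return 2;
  return 1;
}
```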

However, after I had my sister Salama test the game, I realized that this version of the result feedback did not fit the tone of the project very well. She did not really understand what the technical feedback meant in the context of a playful nail salon game, and it felt too logic-based for the kind of experience I wanted. Because of that, I used AI specifically to help me rethink how the result should be communicated to the player. AI helped me revise the wording and presentation of the result. I changed the result screen in the code by replacing the more technical feedback with the variables resultTitle and resultHint, and by adding the functions resultFeedback() and colorHint(mood) to generate simpler and more natural feedback for the player. Instead of showing technical information, the result screen now gives simpler reactions such as “Perfect match!” or “Nice try!” and follows them with clearer color-based hints, such as saying the customer wanted something brighter, softer, darker, or more bluish. So my logic for the result system stayed the same, but AI helped me change the way that result was expressed, so it made more sense to the player and matched the tone of the game better. I also then edited it and added phrases on my own, and kept fixing or changing the wording.

Resources:

 My font: https://fonts.google.com/specimen/Emilys+Candy?categoryFilters=Feeling:%2FExpressive%2FCute&preview.script=Latn 

My Background Sound: https://pixabay.com/music/beats-chill-lofi-music-background-358420/

Images used:

I used the p5.js library and the p5.sound library.

I used the course slides to copy parts like input, fullscreen, and window resizing. I also used ideas from the class for structuring the game into different screens and organizing the game logic. 

I used AI to generate the background image, and to fix the results feedback.

My sister Salama was the main testing participant and also the inspiration for this game, which is why I felt it was fitting to call it Salama’s Nail Salon.

I also received major help from my brother with the coding logic of the game. I had it structured weirdly, and he helped me understand how to organize it.


Midterm Project – F1 Pitstop Game

Concept:

Inspired by pit stops in Formula 1, the goal of this game is to change tires in the correct order as quickly as possible.

There are 3 rounds per game. In each round, the car enters your garage, and you are given a random order in which to change the tires. The order changes every round. At the top of the screen in the red banner, you see the tire you have to change. You must click that tire on the car to “change” it. The game requires not only speed but also correctness: each wrong tire you press costs you a 2-second penalty, added to your total time at the end.

Once all 3 rounds are complete, you can see your final time and the time taken during each round. The game also tracks your high score, even when you restart the sketch, so you can come back to beat your high score at any time!

Final Game + Code:

*For the best experience, open the sketch in a new tab and play the game in full screen

Implementation:

I began by creating the audio files I needed for the game. I went to YouTube to find the F1 theme song to use on the start and results pages, and I also found a video demonstrating a real pit stop to use for the wheelgun and engine sounds. Then, I converted each video to an MP3 file, and I used an audio cutting website to cut the audio so I only took the wheelgun and engine sounds, and I had to shorten the F1 theme audio since it was too big to upload to the p5 editor.

Then, I gathered all my images. I asked ChatGPT to create a cartoon-ish pitlane image with arrows and lines to use in my game since I could not find the exact image I needed online. I also found a cartoon F1 car with a transparent background.

I also chose a font to use from Google Fonts and uploaded that as well.

Then, I researched the different functions I knew I needed. First, as the professor suggested, I wanted to implement a high score tracking feature, where the user’s highest score is saved even when they restart the sketch. I found the function storeItem(), which does exactly that. I also needed a way to track time for the timer, so I found the millis() function, which returns the number of milliseconds since the sketch started running. Finally, I needed a way to shuffle the order of the tires. My initial idea was to randomly generate a number between 0 and 3, mark each number as used once drawn, and use it to access the tire at that index in the array to display to the player. However, I realized this would make it more complicated to track the current tire and whether the user clicked the correct one, and it is also inefficient. So, I searched for a function that shuffles the items in an array and found p5’s shuffle() function.
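p5's shuffle() returns a randomized copy of an array. The standard algorithm behind this kind of helper is the Fisher-Yates shuffle, which can be sketched in plain JavaScript (the tire labels below are placeholders):

```javascript
// Fisher-Yates shuffle: the classic algorithm behind array-shuffling helpers
// like p5's shuffle(). Sketched here in plain JavaScript.
function shuffleArray(arr) {
  const a = arr.slice(); // work on a copy so the original order is untouched
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // random index from the unshuffled part
    [a[i], a[j]] = [a[j], a[i]];                   // swap it into place
  }
  return a;
}

// e.g. a random order for the four tires (labels are placeholders)
const order = shuffleArray(["FL", "FR", "RL", "RR"]);
```

Every permutation is equally likely, which is exactly what a random tire sequence needs.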

The game was built around 4 classes. First is the GameManager class which controls everything. It tracks which screen should currently be displayed from the 4 screens: start, instructions, playing, or results. It also manages the progression between the three rounds, calculates the total pit stop time, compares it against the stored high score, and handles the mute state for the background music in the main and results screens. The high score is stored using the storeItem() function, which stores a certain value with a label you give it in the browser’s localStorage. For retrieval, you use the getItem() function. Initially, I did not have a mute button, but after having my friends test my game, one of them suggested it due to the song’s loud volume and how it can become annoying after listening to it for a while.

The Car class handles how the car drives into and out of the pit box. Its movement is controlled by y, which is its current position; targetY, which is where it needs to stop; and two boolean flags, arrived and exiting, that track what stage of the animation it is in. Every frame the update() method runs and uses

lerp(this.y, this.targetY, 0.04)

to move the car smoothly toward the center. lerp() works by taking 4% of the remaining distance each frame, which means the car moves fast at first and naturally slows down as it gets closer, creating a deceleration effect. I only used lerp() for the entrance of the car because logically when the car accelerates to the top of the screen it is leaving the garage to go back on track, so it must leave the screen quickly to give that effect. Once the difference between the current position and the target is less than 1 pixel, the car snaps exactly to the center and arrived is set to true. When the pit stop is complete, exiting is set to true and the car moves upward until it disappears off the top edge. To draw the car, I used imageMode(CENTER) with translate(), so the image is always perfectly centered on the car’s current coordinates.
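The "fast at first, slower near the end" behavior of repeated lerp() calls can be seen in a quick plain-JavaScript simulation (lerp() is reimplemented here outside p5 so the snippet stands alone):

```javascript
// lerp() as p5 defines it: linear interpolation from a toward b by amount t.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Simulate the car's entry: every "frame" it covers 4% of the remaining distance.
let y = 0;
const targetY = 100;
const steps = [];
for (let frame = 0; frame < 60; frame++) {
  y = lerp(y, targetY, 0.04);
  steps.push(y);
}
// The first step moves 4 units, the second about 3.84, and so on: the step size
// shrinks as y approaches the target, which reads as deceleration on screen.
```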

The PitStop class controls the main gameplay logic. At the start of every round the start() method calls shuffle() on the array storing the tire labels to generate a random sequence, and the variable currentStep keeps track of which position in that sequence the player is currently on. The timer only starts counting when the car has fully arrived, using a boolean flag timerStarted so that millis() is only recorded as startTime the moment the banner appears. The elapsed time is calculated every frame as

(millis() - this.startTime) / 1000 + this.penalty

because dividing by 1000 converts milliseconds to seconds and penalty adds any two second penalties from wrong clicks. When a wrong tire is clicked, wrongFlash is set to true and wrongTimer records the exact millisecond it happened, then in update() the code checks if 500 milliseconds have passed since then and resets the flash.

The Tire class handles each of the four individual tires, and each tire is an instance of the Tire class. Each tire stores its position as offsetX and offsetY, which are fractions of the canvas size, and the actual pixel coordinates are calculated every frame using getX() which returns gameManager.car.x + this.offsetX * width and getY() which returns gameManager.car.y + this.offsetY * height. I created functions to calculate the tire’s X and Y positions rather than giving them a static value in the constructor so they can be resized with the car when the window size is changed. Each tire also has a state property that is set to either ‘waiting’, ‘active’, ‘done’, or ‘wrong’, and the draw() method uses these states to determine the color it should display each tire with. Click detection in isClicked() works by checking whether the mouse coordinates fall within the rectangle representing the tire.
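Pared down to its positioning logic, the Tire class described above might look like the sketch below. Here the car position and canvas size are passed in as parameters so the snippet is self-contained; in the actual sketch they come from gameManager.car and p5's width/height globals:

```javascript
// Sketch of the Tire positioning described above (parameters are passed in
// explicitly here; the real code reads gameManager.car and p5 globals).
class Tire {
  constructor(offsetX, offsetY) {
    this.offsetX = offsetX; // fraction of canvas width, relative to the car
    this.offsetY = offsetY; // fraction of canvas height, relative to the car
  }
  getX(carX, width) {
    return carX + this.offsetX * width;
  }
  getY(carY, height) {
    return carY + this.offsetY * height;
  }
}
```

Because the pixel position is recomputed from fractions every frame, resizing the window moves the tires along with the car automatically.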

Parts I am proud of:

The feature I am most proud of is definitely the high score. When I initially heard the idea, I didn’t think I would be able to implement it, assuming it might be too complicated. However, I am glad I found the storeItem() and getItem() functions and that they were extremely simple to use.

This is where the high score is retrieved in the reset()/constructor function of the GameManager.

//load the saved high score from browser storage
//if nothing has been saved yet, default to 0
this.highScore = getItem("pitStopHighScore") || 0;

And here, it checks whether the player achieved a new high score and updates the variable. It also sets a boolean flag that determines whether a new high score message is displayed on the results screen.

//check if this is a new high score
          if (this.highScore == 0 || this.totalTime < this.highScore) {
            //new best. save it to browser storage
            this.highScore = this.totalTime;
            storeItem("pitStopHighScore", this.highScore);
            this.isNewBest = true;
          } else {
            this.isNewBest = false;
          }

I am also generally proud of the graphics and sounds used in the game. It ended up looking way better than I imagined I would be able to make it.

Challenges and Areas for Improvement:

One bug I ran into was that the timer was starting at around 2 seconds instead of 0 at the beginning of each round. I realized this was because I was starting the timer the moment the round began in startRound(), so it was not accounting for the fact that the car still needed to move in from the bottom of the screen before the player could actually do anything. So, by the time the tires appeared, the timer had already been counting for however long the entry animation took. I fixed this by adding a boolean flag called timerStarted to the PitStop class and a startTimer() method that only records the start time if the timer hasn’t begun yet. Instead of starting the timer in start(), I call startTimer() inside drawPlaying() in the GameManager, but only once car.arrived is true, so that the timer starts at exactly the same moment the banner and tires appear on screen.
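The guard described above can be sketched as follows. The current time is passed in explicitly here so the snippet is testable; in the sketch it would come from p5's millis():

```javascript
// Sketch of the guarded timer start: startTimer() may be called every frame,
// but only the first call records the start time.
class PitStopTimer {
  constructor() {
    this.timerStarted = false;
    this.startTime = 0;
  }
  startTimer(nowMs) {
    // Only record the start time once, on the first frame after the car arrives
    if (!this.timerStarted) {
      this.startTime = nowMs;
      this.timerStarted = true;
    }
  }
}
```

Calling it repeatedly from drawPlaying() is then harmless, which is what makes the "call once car.arrived is true" pattern work.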

For the future, I would like to add more complexity to the game, perhaps more rules or different game modes. I also thought of this later on, but a great feature to implement, which would’ve allowed players to score lower times would be to show the sequence of tires that they need to click for that round. This would allow players to anticipate the next tire they have to click and click it faster.

References:

Pitlane Background:

Audio:

Function References:

Font:

Midterm: A Journey Into the Body

Concept

I wanted to make an experience that wasn’t just a sketch but a narrative. What I had in mind was an interactive exploration of the human body and its hidden stories. Think of it as stepping into a cyberpunk scanner; each part of the body reveals a facet of personality and capability. The head sparks ideas, the heart pulses with emotion, hands create, and legs propel forward. The inspiration came from a portfolio website where, instead of conventional text boxes, there is a skeleton that you scroll into with a zooming effect. Whichever part you scroll into, its attributes are revealed. This could transform a basic achievement deck into multilayered storytelling.

The journey begins with a “scan in progress” message, setting a cinematic tone. Clicking on any part of the body leads the user deeper into the system. Images, sound, and written text combine to create a multi-sensory experience.

How it Works

The backbone of the project is OOP.  Each body part is represented by a BodyPart object, which stores its position, clickable area, description, associated image, and sound. This modular approach made the project easy to maintain and extend.

class BodyPart {
  constructor(name, xRatio, yRatio, r, info, soundName) {
    this.name = name; // e.g., "head"
    this.xRatio = xRatio; // relative horizontal position
    this.yRatio = yRatio; // relative vertical position
    this.r = r; // clickable radius
    this.info = info; // description text
    this.soundName = soundName; // linked sound effect
  }

  get x() { return width * this.xRatio; }
  get y() { return height * this.yRatio; }

  clicked() {
    let d = dist(mouseX, mouseY, this.x, this.y);
    if (d < this.r / 2) {
      currentPart = this;
      state = "part";
      playSound(this.soundName);
    }
  }
}

The sketch scales dynamically to fullscreen. The portrait, interactive zones, and text all resize proportionally, making it consistent across resolutions.

Another highlight is the typewriter text effect with blinking cursor, inspired by coding interfaces in movies. It gives the impression that the system is alive and responding in real time. Neon panels and glitch text effects reinforce the cyberpunk aesthetic, enhancing immersion.
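One way to sketch such a typewriter effect in plain JavaScript is shown below. The original implementation isn't in the post, so the function name, reveal rate, and blink rate here are all assumptions:

```javascript
// Hypothetical typewriter-with-cursor effect: reveals a quarter of a character
// per frame and blinks the cursor on and off every 30 frames.
function typewriterFrame(fullText, frame, charsPerFrame = 0.25) {
  const visible = Math.min(fullText.length, Math.floor(frame * charsPerFrame));
  const cursor = Math.floor(frame / 30) % 2 === 0 ? "|" : "";
  return fullText.slice(0, visible) + cursor;
}
```

In a p5 sketch, draw() would call this with frameCount and pass the result to text().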

The core of the sketch is the portrait. Every body part has its own personality and story. Clicking triggers an image, descriptive text, and a sound effect. For now, I have made five interactable parts:

    • Head
    • Heart
    • Ears
    • Hands
    • Legs

Coding Highlights

The sketch includes animated rain, an interactive portrait, and dynamic typewriter text.

function drawRain() {
  stroke(0, 255, 255);
  strokeWeight(2);
  for (let drop of rain) {
    line(drop.x, drop.y, drop.x, drop.y + 12);
    drop.y += drop.speed;
    if (drop.y > height) {
      drop.y = 0;
      drop.x = random(width);
    }
  }
}

The combination of visuals, interactivity, and sound was a technical challenge but creates a unified, immersive experience.

Screenshot

The Portrait

For the remaining screenshots, I am leaving them out of the post so that they can be discovered along with their sounds in the sketch.

Challenges & Lessons Learned

Building this project was a journey in itself. I ran into multiple issues: assets failing to load, the portrait not rendering properly in fullscreen, and the rain appearing but staying static. I solved these by:

  • Ensuring preloading of images and sounds with reasonable file sizes. I found out that the p5.js editor doesn’t let me upload files of more than 5 MB, so I had to look for small loopable audios instead of voiceovers.
  • Using relative positioning and scaling for the portrait and body parts. I didn’t consider this to be a problem until I had a few constantly moving elements, i.e. the rain and the typewriter text. I fixed it by keeping track of the original window size.

Areas for future improvement:

  • Animated transitions
  • Custom portrait illustrations
  • Additional hover and sound effects
  • Expand body part descriptions with deeper storytelling

AI Usage

AI particularly helped in developing my portrait. It helped me keep track of my origin point, as I used multiple rotations in it. I believe I could have achieved the same result by trial and error, but that would have consumed a lot more time. Another place where AI was helpful was the addition of the neon panel, which added a major aesthetic to the project. Lastly, I tried using AI to add comments, but they were mostly generic and self-explanatory from the code, so I decided to leave them out. Generally, I didn’t use AI to write a lot of code for me, but rather to keep track of the moving elements, as shifting screen sizes did prove to be a challenge. In most cases I noticed that I had been using a lot of variables to keep track of previous states, but AI came up with calculations like

typewriterText(intro, windowWidth/2-100, windowHeight*0.3, windowWidth*0.4, LEFT);

to save memory. This added to the tidiness of the code.

Conclusion

This project is both technical and artistic; a small interactive narrative exploring human form, identity, and emotion. Each element, from clickable body parts to typewriter text, contributes to a cohesive experience. The goal was immersion: to make the user feel like they are interacting with a living digital system rather than a static sketch, especially through the sounds and text.


Midterm Project – The Grove

1. Sketch and Code

2. Concept

I wanted to make something that felt more physical than most browser games. The idea was simple: instead of clicking a button and having a resource appear, you actually go and get it. You walk to the river to fill a bucket. You dig in the forest and carry the clay back. You hold your mouse on a pottery wheel until the shape changes. The whole game is built around making you move between spaces and handle things directly, rather than managing numbers in a menu.

The game has five locations — a world map, a river, a forest, a pottery studio, and a greenhouse — each with its own interaction logic and its own music track. You start with five seeds and no other resources, and the loop is: collect clay and water, make a pot in the studio, bring it to the greenhouse with soil and a seed, and wait for the plant to grow. The cursor changes depending on where you are and what you’re carrying, so you can always tell what you’re holding without opening an inventory screen. The visual style came from wanting it to feel lo-fi and cozy, loosely inspired by games like Stardew Valley but much smaller in scope.

The world map – each region is a hand-mapped pixel boundary
3. How it Works

The entire game runs on two parallel state variables stacked on top of each other. gameState controls the meta-level — which screen the player is on (title, instructions, gameplay, or pause). currentLayer controls the world-level — which physical location the player is standing in. Every frame, the draw() loop reads both and routes rendering and input accordingly. This separation means that pausing the game, for instance, simply renders the pause menu on top of an already-drawn scene without tearing anything down. A new layer can be added to the game without touching any existing screen logic.
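This two-level routing can be sketched as a pure function that decides what draw() should render each frame. The state names below are illustrative; the real draw() would call the corresponding render functions in order:

```javascript
// Pure-function sketch of the two-level state routing described above.
// Returns, in order, the screens/scenes draw() should render this frame.
function framePlan(gameState, currentLayer) {
  if (gameState === "PAUSE") {
    return [currentLayer, "PAUSE_MENU"]; // pause menu overlays the live scene
  }
  if (gameState === "GAMEPLAY") {
    return [currentLayer]; // gameplay delegates to whichever location is active
  }
  return [gameState]; // TITLE or INSTRUCTIONS screens stand alone
}
```

Note how the pause case draws the current layer first and then the menu on top, which is exactly why pausing never tears the scene down.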

Navigation between scenes is handled by a rectClick() helper that checks whether the mouse landed inside a manually defined pixel rectangle. The world map coordinates were discovered by logging mouseX and mouseY to the console while clicking over the background image — a reliable form of coordinate mapping. Two rectangles per scene allow irregular regions of the map to be approximated without any polygon math.

/*
 * Detects which map region was clicked and navigates to that layer.
 * Regions are defined as bounding rectangles over the map background art.
 */
function checkMapClick() {
    if (rectClick(0, 190, 260, 470) || rectClick(240, 330, 380, 430)) {
        currentLayer = "STUDIO";
    } else if (rectClick(240, 200, 500, 260) || rectClick(300, 260, 510, 360)) {
        currentLayer = "GREENHOUSE";
    } else if (rectClick(260, 110, 780, 200) || rectClick(520, 200, 780, 290)) {
        currentLayer = "FOREST";
    } else if (rectClick(525, 365, 840, 450) || rectClick(790, 215, 1025, 450)) {
        currentLayer = "RIVER";
    }
}
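The rectClick() helper itself isn't shown; judging by the calls above, it takes two corner coordinates and tests the mouse position against them. The core check, with the point passed in explicitly so the snippet is self-contained, might look like:

```javascript
// Sketch of the rectangle test behind rectClick(). In the sketch, rectClick()
// would call something like this with p5's mouseX/mouseY as the point.
function pointInRect(px, py, x1, y1, x2, y2) {
  return px >= x1 && px <= x2 && py >= y1 && py <= y2;
}
```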

Plants must keep living regardless of which scene the player is viewing. They are stored in a global activePlants array and updated on every frame via updateGlobalPlants(), called unconditionally at the top of draw(). This means a seedling keeps aging while the player is away collecting water at the river. Growth is tracked using millis() rather than frameCount, making it completely frame-rate independent.

// Called every frame; promotes the stage when enough time has passed. 
update() {
    let age = millis() - this.birthTime;
    if (age > this.growthDuration && this.stage < 2) {
        this.stage++;
        this.birthTime = millis();          // Reset timer for the next stage

        // Play the "fully grown" sound once
        if (this.stage === 2 && !this.hasPlayedGrowthSfx) {
            sfxGrowing.play();
            this.hasPlayedGrowthSfx = true;
        }
    }
}
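The frame-rate independence claim can be demonstrated with a self-contained model of the growth logic, passing the clock in explicitly instead of calling millis() (the 1000 ms duration is an assumed value, not from the source):

```javascript
// Minimal model of the millis()-based growth: the stage advances when
// enough wall-clock time has passed, regardless of how many frames were
// drawn in between. The clock is a parameter here purely for testability.
class Plant {
  constructor(now) {
    this.stage = 0;
    this.birthTime = now;
    this.growthDuration = 1000; // ms per stage (assumed)
  }
  update(now) {
    if (now - this.birthTime > this.growthDuration && this.stage < 2) {
      this.stage++;
      this.birthTime = now; // reset timer for the next stage
    }
  }
}
```

Calling update() once after a long gap promotes exactly one stage, which matches the one-promotion-per-check structure of the original method.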
4. Technical Decisions
The Pottery Wheel — Hold-to-Craft

The most deliberate design decision in the project was rejecting an instant “Make Pot” button in favor of a hold-to-craft interaction. The pottery wheel tracks how long the player’s mouse has been in contact with it and advances a shapingFrame counter every five seconds, visually pulling the clay through four distinct silhouettes. During contact, a looping wheel sound plays and the pot sprite is mirrored horizontally on alternating frames to suggest rotation. Release the mouse and the sound cuts immediately — the wheel stops the moment you lift your hand. The entire sequence takes fifteen seconds of sustained attention, which is long enough to feel like real effort and short enough not to become tedious.

// ── Pottery Wheel ──
if (wheelState !== 'EMPTY' && !isDraggingFromWheel) {
    let isTouching = mouseIsPressed && dist(mouseX, mouseY, wheelX, wheelY) < 70;

    if (wheelState === 'SHAPING') {
        if (isTouching) {
            // Keep wheel sound looping while the player holds the wheel
            if (!sfxWheel.isPlaying()) sfxWheel.loop();

            // Advance the pot shape frame every 5 seconds of contact
            if (millis() - shapingTimer > 5000) {
                shapingFrame = min(shapingFrame + 1, 3);
                shapingTimer = millis();
            }
        } else {
            sfxWheel.stop(); // Stop sound when mouse is lifted
        }

        // Once fully shaped, transition to draggable state
        if (shapingFrame === 3) {
            wheelState = 'READY_TO_DRAG';
            sfxWheel.stop();
        }
    }

    // Draw the pot on the wheel, mirroring it for 10 of every 20 frames to suggest spinning
    push();
    imageMode(CENTER);
    translate(wheelX, wheelY);
    if (wheelState === 'SHAPING' && isTouching && frameCount % 20 < 10) scale(-1, 1);
    drawPotFrame(0, 0, shapingFrame, 200, 200);
    pop();
}
The Furnace — Time as Stakes

Once a shaped pot is dragged into the furnace, a four-phase timer begins. The pot fires for ten seconds, then there is a five-second window to retrieve a perfect pot, followed by a further five seconds in which the pot is visibly burnt but still removable (though broken), before it crumbles to ash entirely. This makes the act of pot-making carry real risk: leave the studio to collect other resources and you may return to nothing. The time-management tension it creates between the furnace and the wider world loop was a late addition to the design, but it became one of the most important decisions in the whole game: it is what makes the studio feel dangerous rather than merely mechanical.

// ── Furnace ──
if (furnaceState !== 'EMPTY' && !isDraggingFromFurnace) {
    let elapsed = (millis() - furnaceStartTime) / 1000; // Seconds since firing started

    if (elapsed < 10) {
        furnacePotFrame = 3;
        furnaceState = 'FIRING';
        if (!sfxFurnace.isPlaying()) sfxFurnace.loop();
    } else if (elapsed < 15) {
        furnacePotFrame = 4;
        furnaceState = 'READY_TO_DRAG'; // Pot is done — player can pick it up
        sfxFurnace.stop();
    } else if (elapsed < 20) {
        furnacePotFrame = 5;
        furnaceState = 'BURNT'; // Left too long — pot is cracked
    } else {
        furnacePotFrame = 6;
        furnaceState = 'ASH';  // Completely destroyed
        sfxFurnace.stop();
    }

    imageMode(CENTER);
    drawPotFrame(205, 237, furnacePotFrame, 70, 70);
}
The Cursor as a Physical Inventory

Rather than displaying abstract resource counts in a HUD panel, physical resources are communicated directly through the cursor. In the forest, the shovel sprite changes to show clay or soil clinging to the blade the moment something is dug up. At the river, the bucket visually fills. Resources are deposited by carrying them to the backpack icon in the corner — the act of storing something is the same gesture as moving it there.

Bucket cursor fills visually after clicking the river surface
Cursor becomes a clay-caked shovel after digging a deposit
5. Challenges
Double-Firing Buttons

The most persistent bug in the project was button clicks firing twice from a single physical interaction. p5.js triggers both mousePressed and mouseClicked in sequence for the same click event, and because several buttons triggered state changes or inventory mutations, the same action would execute twice — opening and immediately closing the inventory, or incrementing a counter twice in one tap. The fix was a lastMenuClickTime debounce guard: every button action stamps the current timestamp, and any input arriving within 250 milliseconds of that stamp is silently discarded. Setting mouseIsPressed = false inside the button handler also “eats” the event before any downstream listener can see it.

// Fire the action on click; a debounce timestamp prevents double-firing
if (hover && mouseIsPressed) {
    // Discard any input arriving within 250 ms of the last accepted click
    if (millis() - lastMenuClickTime < 250) return;

    sfxButton.play();
    lastMenuClickTime = millis();
    mouseIsPressed = false; // Consume the press so nothing else reacts to it
    action();
}
The Cursor Bleeding Over UI Buttons

A subtler issue emerged from the custom cursor system: the shovel and bucket sprites would remain active when hovering over the “Return to Map” and “Menu” buttons in the forest and river scenes. This made the buttons feel broken — the system’s hand cursor never appeared, and the sprite image obscured the button labels. The fix required duplicating the button bounding-box logic inside drawCustomCursor() and explicitly reverting to cursor(ARROW) whenever the mouse entered a UI button’s region. It’s not the most elegant solution, since the same coordinates appear in two places, but it is simple, clear, and reliable.
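The fix described above can be sketched as a pure decision function (the button rectangles are hypothetical bounds, not taken from the source): the cursor code re-checks the same UI regions the buttons use and yields to the system arrow inside them.

```javascript
// Hypothetical UI button bounds, duplicated from the button hit-test
// logic as the text describes.
const uiButtons = [
  { x: 20, y: 20, w: 160, h: 40 },  // "Return to Map" (assumed bounds)
  { x: 200, y: 20, w: 100, h: 40 }, // "Menu" (assumed bounds)
];

// Decide which cursor to show for a given mouse position.
function cursorFor(mx, my, toolSprite) {
  for (let b of uiButtons) {
    if (mx >= b.x && mx <= b.x + b.w && my >= b.y && my <= b.y + b.h) {
      return "ARROW"; // yield to the system cursor over UI buttons
    }
  }
  return toolSprite; // otherwise keep the shovel/bucket sprite
}
```

In the real drawCustomCursor(), the "ARROW" branch would map to p5's cursor(ARROW) call.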

6. Areas for Improvement

The most obvious missing layer of feedback is what happens when a planting action fails. If the player clicks a greenhouse slot without the right resources, nothing happens. A brief wobble on the backpack icon or a soft error tone would communicate the missing ingredient without interrupting the lo-fi calm. The furnace has the same problem: because there is no visible countdown, the “BURNT” outcome surprises players on a first run through the studio. A subtle color shift on the furnace door as elapsed time crosses into the danger zone would be enough to telegraph urgency without resorting to a numerical timer on screen.

Structurally, the game currently has no win condition or narrative arc beyond the resource loop itself. A concrete goal — growing five plants to full harvest, for instance — would give the loop a sense of closure and make the opening seeds feel like the start of something rather than an arbitrary starting point. Beyond that, the pottery wheel’s hold-to-craft timer could become adaptive: longer contact for a more durable pot, shorter contact for a fragile one that breaks after a single use. That single change would introduce meaningful trade-offs to what is currently a single fixed path through the studio, without adding any new systems.

On the technical side, every scene coordinate in the codebase is a hard-coded pixel value sniffed by hand from a 1024×576 canvas. If the canvas size ever changes, every boundary needs to be remapped manually. Normalizing all coordinates to proportions of width and height and then multiplying at render time would make every scene scale to any canvas size automatically — a straightforward refactor that would future-proof the entire coordinate system.
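The proposed refactor amounts to two small conversion helpers, sketched here under the assumption of the 1024×576 design canvas mentioned above:

```javascript
// Convert hard-coded design-canvas pixels to proportions once...
const DESIGN_W = 1024, DESIGN_H = 576;
function norm(px, py) {
  return { x: px / DESIGN_W, y: py / DESIGN_H };
}

// ...then multiply back out at render time for whatever size the
// canvas actually is, so every boundary scales automatically.
function denorm(n, canvasW, canvasH) {
  return { x: n.x * canvasW, y: n.y * canvasH };
}
```

A coordinate sniffed at (512, 288) on the design canvas would then land at the same relative spot on a canvas of any size.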

7. Resources
Inspiration

Some of the interaction design was also influenced by a p5.js sketch (linked as an example) that I came across while exploring what direct, hands-on interaction could look like inside a browser canvas.

Libraries
    • p5.js (v1.11.11) — core rendering, input handling, and sprite-sheet animation via image().
    • p5.Sound — loadSound(), loop(), setVolume(), and isPlaying() for all BGM cross-fades and per-action sound effects.
Visual Assets
    • All backgrounds, sprites, and sprite sheets were generated using Google Gemini and subsequently edited by hand — cropped, trimmed to transparency, and sliced into equal-width frames for use with p5’s source-rectangle API.
Audio
    • All audio is managed through p5.Sound. BGM transitions are handled by manageBGM(), which compares a targetBGM reference against currentBGM each frame and only swaps when the target has changed — preventing the track from restarting on every draw call.
    • Background Music — each location in the game has its own assigned instrumental track, chosen to match the mood of that space:
      • Main Menu: “Supernatural” (NewJeans)
      • Instructions: “ASAP” (NewJeans)
      • Map: “The Chase” (Hearts2Hearts)
      • River: “Butterflies” (Hearts2Hearts)
      • Forest: “Ditto” (NewJeans)
      • Studio: “Right Now” (NewJeans)
      • Greenhouse: “OMG” (NewJeans)
      • Pause Menu: “Midnight Fiction” (ILLIT)
    • Sound Effects — all SFX (wheel spinning, bucket fill, shovel dig, furnace fire, etc.) were sourced from Pixabay and other royalty-free libraries.
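The manageBGM() pattern described above can be sketched as follows (a presumed shape based on the description; the track objects are assumed to be p5.SoundFile-like, with loop() and stop()):

```javascript
// Only swap tracks when the target differs from what is playing, so
// calling this every frame from draw() never restarts the current track.
let currentBGM = null;

function manageBGM(targetBGM) {
  if (targetBGM === currentBGM) return; // same track: do nothing
  if (currentBGM) currentBGM.stop();    // stop the outgoing track
  currentBGM = targetBGM;
  if (currentBGM) currentBGM.loop();    // start the incoming one
}
```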

Midterm Project – Madame Digital

Embedded Sketch

Concept

Have you ever wanted to take a picture with a celebrity, but without the impossible odds of actually meeting one? That is the fantasy world Madame Digital captures!

My midterm project is basically inspired by the whole wax museum experience, but I wanted to turn it into something more personal and interactive instead of just something you look at. Instead of walking around and seeing celebrity figures from a distance, the user actually gets to choose a celebrity and then step into a digital photobooth with them. Once you pick who you want, the webcam turns on and you see yourself on screen next to the celebrity, almost like you’re actually standing there together. You can adjust yourself in the frame, see a live mirrored preview, and then take a photo that combines both of you into one image. The final result is like a little souvenir picture you can save, which was important to me because I wanted it to feel like you’re taking something away from the experience, not just clicking through it. I really liked the idea of taking something that normally exists in a physical place and turning it into something accessible on your screen, but still keeping that same excitement.

At the same time, I did not want it to feel like just another random app with buttons. I wanted it to feel like entering a space. Something a little theatrical, and a little staged in the best way possible. That is why I leaned into the red curtain background, gold touches, framed celebrity cards, and a series of screens that slowly guide the user through the experience, really giving them the museum vibe. The point of my project was not to try to be realistic. If anything, its charm comes from how intentionally artificial it is, just like a wax museum. Even though the whole thing is obviously digital and staged, the user still gets to walk away with a “memory.” That is what made the concept feel fun to me from the start, and that is what I wanted to preserve throughout the project.

How the Project Works

My project works as an interactive experience that guides the user through a digital version of a wax museum. When the user opens the project, they begin on a title screen and are led through instructions, a celebrity selection gallery, a live photobooth, and finally a souvenir screen. At each stage, the user interacts with the screen by clicking buttons or selecting options, which moves them forward in the experience. Once a celebrity is chosen, the webcam is activated and places the user next to their selected celebrity in a shared frame. The user can then take a photo, which combines both images into a single souvenir-style picture that they can view and also save at the end.

To make my project flow between the different stages, I used the state-based system we learned in class, where a variable called screenMode (also known as gameState from class) controls which screen is being displayed to the user. This was honestly one of the most important parts of the project because it keeps everything organized and prevents it from feeling messy and chaotic. Depending on the value of screenMode, a different function runs that draws that specific screen.

if (screenMode === "start") {
  drawStart();
} else if (screenMode === "how") {
  drawHow();
} else if (screenMode === "gallery") {
  drawGallery();
} else if (screenMode === "booth") {
  drawBooth();
} else if (screenMode === "souvenir") {
  drawSouvenir();
}

I feel like this gameState system really shaped how my project came together. It really helped a lot with debugging, because if something wasn’t working, I could immediately tell which screen the problem was in.

Another part I am really proud of is how I built the gallery using arrays and loops instead of placing everything manually. Each celebrity is stored as an object, which made the whole system more flexible and easier for me to expand.

let celebrityList = [
  { name: "Taylor Swift", portraitFileName: "taylor.jpg" },
  { name: "Beyoncé", portraitFileName: "beyonce.jpg" },
  { name: "Timothée Chalamet", portraitFileName: "timothee.jpg" }
];

// Each entry's portraitImage is loaded in preload() from its file name:
// celebrityList[i].portraitImage = loadImage(celebrityList[i].portraitFileName);

for (let i = 0; i < celebrityList.length; i++) {
  let x = 150 + i * 140;
  let y = 250;
  image(celebrityList[i].portraitImage, x, y, 120, 160);
}

I liked this part because it made the project feel more thought out technically. If I wanted to add more celebrities later, I wouldn’t have to redesign anything, I could just add more information, and it would work.

The part I am most proud of though, is the photobooth and souvenir feature because that is where the whole idea really comes together. Once the user selects a celebrity, the webcam turns on and places them in the same frame. I mirrored the webcam so it works like a normal selfie camera, which makes it feel more natural and less confusing. When the user clicks the snap button, the code creates a new image that combines both the webcam photo and the celebrity photo into one final photo.

souvenirPicture = createGraphics(photoWidth * 2 + 40, photoHeight + 40);

souvenirPicture.push();
souvenirPicture.translate(photoWidth, 20);
souvenirPicture.scale(-1, 1);
souvenirPicture.image(webcam, 0, 0, photoWidth, photoHeight);
souvenirPicture.pop();

souvenirPicture.image(
  chosenCelebrity.portraitImage,
  photoWidth + 20,
  20,
  photoWidth,
  photoHeight
);

This is my favorite part because it turns the interaction into something personal that the user can actually keep. It’s not just clicking through screens, instead you end up with a photo that feels like a souvenir from the experience. I also think the overall flow is something I am proud of. Even though the project is playful, the user is always being guided from one step to the next. Even small choices, like adding a music toggle (on and off button) and using the same museum-ish background style across screens helped make everything feel connected.

Challenges and Areas for Improvement

One of the biggest challenges I ran into was with the webcam. At first, it showed up twice, once as a default video element and once on the canvas. That was confusing because it looked like something was wrong with my drawing code, but it was actually just how p5 handles video by default. Hiding the original video element and manually drawing it solved the problem, but it definitely took some time to figure out. 
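The standard p5 fix looks like the following sketch (not the author's exact code): createCapture() appends its own DOM video element to the page, which is why the feed appeared twice; hiding that element leaves only the copy drawn on the canvas.

```javascript
let webcam;

function setup() {
  createCanvas(640, 480);
  webcam = createCapture(VIDEO);
  webcam.hide(); // remove the default DOM <video> element
}

function draw() {
  image(webcam, 0, 0, width, height); // draw the live feed on the canvas
}
```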

Another challenge I ran into was with the proportions and measurements. I had to deal with a lot of them at once, so it definitely got confusing at times. With coding in general, many challenges come up, so there were minor challenges throughout the code. However, with trial and error, I was able to fix them thankfully.

If I were to improve the project further, one thing I would really want to add is celebrity specific music. Right now the experience already has sound, but I think it would be way more immersive if each celebrity had their own audio choice attached to them. For example, if the user chooses Beyoncé, the background sound could shift into one of her songs, or when choosing Timothée Chalamet or Jennifer Aniston, a different sound could play that matches their overall vibe. I think this would make the gallery feel much more alive overall and more like each choice leads to a slightly different experience.

I would also like to add more customization to the final souvenir, like filters, or decorative frames, or maybe even different photobooth themes. The current version showcases the concept really well, but I can definitely imagine advancing it to make the experience feel even more personalized in a way.

References

Throughout this project, I used a mix of p5 examples, class slides, YouTube videos, and a previous student project to help me understand and implement different features.

These helped me figure out how to display the webcam inside the canvas and structure the photobooth layout.

I used these to understand different variables in order to flip the webcam so it works like a mirror.

These were important for learning how to use createGraphics and combine multiple elements into one image, and other just other image related code.

I also referenced a previous Intro to Interactive Media student project for this part. This helped me with the final image saving process.

These helped me understand how to create interactive buttons that respond to hover and clicks.

This example helped me structure the overall flow of my project and handle the different screens. I used a variable (screenMode) that works just like the in-class (gameState).

These helped me implement background music and button click sounds.

Links that helped with general concepts and overall object-oriented programming.

These helped me implement full-screen and understand the responsive canvas.

Font

Background Music

In addition to these references, I relied a lot on the class slides to understand the key concepts.

Usage of AI

I used ChatGPT mainly for the hardest parts of my project, especially when multiple things were happening at once and I couldn’t tell what exactly was going wrong. 

The most difficult part for me was the souvenir photo system. That part is not just simply “taking a picture,” it’s actually creating a completely new image, bringing the webcam photo, flipping it so it works like a mirror, placing it next to the celebrity image, and then saving everything together as one final frame. At first, I didn’t fully understand how that worked because it’s not the same as just drawing things on the screen. ChatGPT helped me understand that I was basically drawing on a separate hidden canvas and then displaying the result later. It also helped me understand how transformations work together, because if I flipped the webcam without positioning it correctly, it would either disappear or show up in the wrong place. That was one of the hardest parts because it required me to think about how everything is layered and combined, not just displayed.

Another difficult part was the celebrity gallery. I didn’t want to just place images randomly; I wanted it to feel like an actual selection screen with evenly spaced cards that respond to the mouse and adjust to different screen sizes. That meant I had to think about layout and proportions and spacing all at the same time. This part was actually more complicated than I thought because everything in my code is based on proportions like width 0.25 or height 0.16, so even small changes (like decimals) would completely change the layout in ways I didn’t expect. ChatGPT helped me understand how to think about these proportions more intentionally instead of just guessing numbers.

I also used ChatGPT to help me understand code that I was getting and adapting from references and other projects. Sometimes I would find an example that did something similar to what I wanted, but I didn’t fully understand how or why it worked. Instead of just copying it, I used ChatGPT to break it down and explain it to me, which helped me actually learn from it and adjust it to fit my own project.

Another thing I used AI for was generating some of my visuals. I used Gemini to create the celebrity images as well as the red curtain background, since I wanted a very specific “Madame Tussauds” style aesthetic and it was faster to generate and then refine them. 

Overall, I used AI more as a tool to understand and debug difficult parts of my project, and to generate some visuals, rather than to build the project itself obviously. 

Midterm

Concept

This game is inspired by Emirati culture, specifically the tradition of Talli weaving, which is often associated with older generations; hence the character Yedo, a way of saying Grandma in Arabic. I wanted to create something that doesn’t just function as a game, but also reflects a part of the culture in a modern, interactive way. The idea of matching thread colors comes directly from the process of choosing and weaving Talli, where color plays an important role in the final design. By turning this into a fast-paced challenge, I translated a traditional craft into something engaging and digital, while still keeping its cultural essence.

I also focused on the visual style using warm brown tones, wood textures, and handcrafted shapes to give the game a feeling of a traditional workspace rather than a typical digital interface. This made the experience feel more connected to the environment where Talli is actually made.

Embedded code

Highlighted code

//starts a new round (source: custom logic)
function startNewRound() {
  let target = random(talliColorOptions); //picks a random talli color from the list
  targetColorName = target.name; //stores the name of the color the player must find
  correctIndex = floor(random(12)); //chooses a random spot in the grid for the correct color
  gridColors = []; //clears the grid before filling it again
  //fills the grid with 12 colors one correct color and the rest random but not matching the target
  for (let i = 0; i < 12; i++) {
    if (i == correctIndex) gridColors.push(target);
    //places the correct color in its chosen position
    else {
      let other;
      do {
        other = random(talliColorOptions); //picks a random color option
      } while (other.name == target.name); //ensures it's not the target color
      gridColors.push(other); //adds the wrong color to the grid
    }
  }
  roundStartTime = millisInGame(); //records when this round started for the timer
}
function getClickedCellIndex(mx, my) {
  let cols = 4; //number of columns in the grid
  let rows = 3; //number of rows in the grid
  let cellW = width / 5; //width of each grid cell
  let cellH = height / 6; //height of each grid cell

  let startX = width / 2 - cellW * 1.5; //starting x position of the grid
  let startY = height / 2 - cellH * 1.5 + cellH; //starting y position of the grid

  let index = 0; //keeps track of which cell number its checking

  //This function loops through the grid and compares mx,my with each cell to figure out which cell was clicked.
  for (let r = 0; r < rows; r++) {
    //loops through each row
    for (let c = 0; c < cols; c++) {
      //loops through each column inside that row

      let x = startX + c * cellW; //x position of this cell
      let y = startY + r * cellH; //y position of this cell

      let hitW = cellW * 0.8; //slightly smaller hitbox width to make tapping feel nicer
      let hitH = cellH * 0.8; //slightly smaller hitbox height

      //checks if the mouse click (mx, my) is inside this cell's hitbox
      if (
        mx > x - hitW / 2 &&
        mx < x + hitW / 2 &&
        my > y - hitH / 2 &&
        my < y + hitH / 2
      ) {
        return index; //returns the number of the cell that was clicked
      }

      index++; //moves to the next cell number
    }
  }
  return -1; //returns -1 if the click didn't hit any cell
}

The parts of my code I’m most proud of are how I generate each round and detect where the player clicks, using the startNewRound() and getClickedCellIndex() functions. This part was honestly the most challenging because it’s what makes the game actually work, not just look good. At first, I had a problem where the correct color would sometimes show up more than once, which made the game confusing. I fixed this by using a do...while loop that keeps picking a new random color until it’s different from the target color, so now there is always only one correct answer. Another issue I faced was that when I clicked on a Talli, it didn’t always register properly because the shape is not a perfect rectangle. To fix this, I created a slightly smaller “hitbox” inside each grid cell using 0.8 of the width and height, so the clicks feel more accurate and natural. I also had to carefully calculate the positions of each cell using rows, columns, and an index system, so I could match the clicked area to the correct color. This part really pushed me to think logically and problem-solve, and it’s the reason the game feels interactive and responsive instead of just visual.

In terms of interaction, the game starts with a title screen where the player clicks a large “Start” button to begin. Once the game starts, the user sees a grid of twelve talli colors along with a prompt from Yedo telling them which color to tap. The player must click the correct talli before the timer runs out, and each correct answer immediately generates a new round with a fresh set of colors and a slightly shorter time limit. If the player clicks the wrong talli or runs out of time, the game switches to a game over screen showing their score and high score, and they can click again to restart. This loop of clicking, reacting, and racing against the timer is what makes the experience interactive and engaging.

How it was made

My midterm project was created by combining what I learned in class with my own ideas and experimentation. I built the game step by step, starting with the basic structure like setup() and draw(), then adding different features like the title screen, gameplay, timing system, and interactions. I used class examples such as the timer using millis(), the reset button, and the fullscreen toggle, and adapted them to fit my game. I also used reference videos for things like the high score system and general game logic, then changed them to match my Talli concept. For the background I used generative AI to create what I wanted, and for the audio I selected a wood clicking sound to fit the theme.

A big part of my process was working in separate p5.js files before adding things into my main project. For example, I experimented with the Talli shape in a different file to get the design and details right, and I also tested parts of the game logic separately. While I was doing that, I was also building the title screen and setting up the base of my main project file. This helped me stay organized and avoid breaking my main code while testing new ideas.

One of the parts I struggled with at first was the timer bar, because I needed it to visually shrink as time passed. I figured it out by using millis() to track how long the round has been running, and then using map() to convert that time into the width of the bar. The full bar is drawn first, and then the shrinking red bar is drawn on top using the mapped value. For example, the line let w = map(elapsed, 0, allowedTime, barW, 0); takes the elapsed time and turns it into a decreasing width, so the bar gets shorter the closer the player gets to running out of time. Once I understood how map() works, the whole system made sense, and the timer bar became a clear visual indicator that adds pressure and makes the game feel more dynamic. I referenced the p5.js website to understand how this function remaps one range of numbers into another, which really helped when I wanted to shrink the timer bar.
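To make the remapping concrete, here is a self-contained version of the same calculation, with map() reimplemented as a pure function so it runs standalone (the allowedTime and barW values are assumptions for illustration):

```javascript
// Linear remap from one range to another, matching p5's map() without
// clamping: value at inMin gives outMin, value at inMax gives outMax.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

const allowedTime = 5000; // ms per round (assumed)
const barW = 300;         // full bar width in pixels (assumed)

// elapsed = 0 gives the full width; elapsed = allowedTime gives 0.
function timerBarWidth(elapsed) {
  return mapRange(elapsed, 0, allowedTime, barW, 0);
}
```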

After building the basic structure, I focused on the gameplay elements like the grid, score, and timer. I created the 4×3 grid by looping through rows and columns and assigning each cell a color from the gridColors array. The score system was placed in the top left corner using simple text placement, and I used millis() to calculate how long each round has been running so I could control the timer and difficulty. I also added a restart system using a reset function that brings the player back to the title screen and resets all the main variables. For the prompt text that tells the player which color to tap, I used the random function to pick the target and built the sentence dynamically, so the chosen color’s name is inserted into it. For the high score, I followed a reference video and compared the current score to the saved one, updating it only when the player beats their previous best. This made the game feel more replayable and gave the player a reason to keep trying.
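The grid drawing described above has to use the same cell math as getClickedCellIndex() so clicks and visuals stay aligned; a sketch of that shared layout, with the canvas size passed in so it runs standalone:

```javascript
// Centre of grid cell `index` (0..11, row-major), mirroring the
// constants in getClickedCellIndex: 4 columns, width/5 by height/6 cells.
function cellCenter(index, w, h) {
  const cols = 4;
  const cellW = w / 5, cellH = h / 6;
  const startX = w / 2 - cellW * 1.5;         // x of the leftmost column
  const startY = h / 2 - cellH * 1.5 + cellH; // y of the top row
  const r = Math.floor(index / cols);
  const c = index % cols;
  return { x: startX + c * cellW, y: startY + r * cellH };
}
```

Drawing the grid is then a loop over indices 0 to 11, placing gridColors[index] at cellCenter(index, width, height).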

Most of the core logic, like generating random colors, building the grid, and detecting clicks, was developed through trial and error. I tested frequently, fixed issues like repeated colors and inaccurate click detection, and improved the game step by step. I referenced a basic p5.js screen-switching and click-detection example (linked in references) to understand how to structure multiple screens (title, game, and game over) and how to check if the mouse is inside a rectangular area. I adapted the logic to fit my Talli Game by expanding it into a full grid system, adding timers, and creating custom hitboxes.

Overall, the project is a mix of class learning, references, and my own problem solving, with a workflow that kept my code clean and easier to manage.

Reflection and future ideas

Overall, I’m really proud of how this project turned out because I was able to turn a cultural idea into an actual working game. At the start, I didn’t expect it to come together this smoothly, especially with all the different parts like timing, interaction, and visuals. One of the biggest things I learned was how important it is to build step by step and test constantly, because small mistakes in logic can affect the whole game.

I also improved a lot in problem solving and learned the importance of being patient and keeping separate files as backups so I wouldn’t lose my progress if I hit an error. For example, I had to figure out how to stop colors from repeating, how to make clicks more accurate, and how to organize my code so it didn’t get confusing. Working in separate files really helped me stay organized and experiment freely without breaking my main project. I also feel like I got better at thinking both creatively and logically at the same time, especially when designing the Talli and making sure it actually worked in the game. In the future, I hope to create more explorative and advanced games that follow a story as you complete quests. One that comes to mind is It Takes Two, a two-player story game where both players have to work together to finish it. I like the concept of it being multiplayer, so I hope to eventually learn how to support two players and code the different kinds of events such a game needs.

References:

High score reference:

https://www.youtube.com/watch?v=gvtXwIYh35s

Bezier shape:

https://p5js.org/reference/p5/bezierVertex/

Reset button:

https://editor.p5js.org/mrquirk/sketches/_5Jdl1JKH

Start button home page:

https://www.youtube.com/watch?v=HfvTNIe2IaQ

Fonts:

https://fonts.google.com/specimen/Old+Standard+TT?query=old

Sound file:

https://samplefocus.com/samples/wood-tap-click

p5.js with screen switching and click detection reference:

Copy: https://editor.p5js.org/maa9946/sketches/09uKWhHdi

Original link: https://editor.p5js.org/ehersh/sketches/SJ8hr7XCm

map():

https://p5js.org/reference/p5/map/

Shape:

https://editor.p5js.org/maa9946/sketches/KeancnIJE

Reading Reflection- Week 5

This reading challenges the common stereotype that computer vision has to function similarly to human vision by highlighting how opaque digital images are: computers cannot interpret them at all without algorithms. Unlike humans, who constantly interpret context, different types of objects, and different meanings, computers rely on simplified techniques such as frame differencing, background subtraction, and brightness thresholding to detect movement or presence. For example, the reading explains that frame differencing compares pixel changes between frames, while background subtraction depends on differences from a stored image of the scene. This made me realise that computer vision is much more dependent on certain conditions than I initially thought. For instance, background subtraction only works if lighting and contrast are carefully designed, which shows that "seeing," for computers, is more about structured assumptions. Meanwhile, the reading shows that we can guide what computers see by designing the background and physical environment, such as using high contrast, stable lighting, or even infrared to improve detection. This connects to the idea of interactive media, where it's not just about coding systems but also about designing the physical and visual conditions that shape interaction.
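To make the frame-differencing idea concrete, here is a tiny sketch of my own (a toy example, not code from the reading) that compares two grayscale "frames" stored as plain arrays and counts how many pixels changed beyond a threshold:

```javascript
// Toy frame differencing: each frame is an array of grayscale pixel
// values (0-255). A pixel "moved" if its brightness changed by more
// than the threshold between the previous and current frame.
function motionAmount(prevFrame, currFrame, threshold = 30) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) changed++;
  }
  return changed; // number of changed pixels = rough measure of motion
}

console.log(motionAmount([0, 0, 0], [0, 100, 0])); // → 1
```

Real systems run this on actual camera pixel data and usually clean up the result further, but the core comparison really is this simple, which is why it depends so heavily on stable lighting.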

What I felt was most interesting was how computer vision's ability to track people overlaps with surveillance, and how artists actually engage with this. The reading discusses works such as Sorting Daemon and Suicide Box, where tracking systems are used to monitor and analyze people, sometimes raising ethical concerns about profiling and data collection. This made me question whether interactive art using computer vision actually empowers participants. On one hand, projects like Videoplace create an embodied interaction where the human body becomes part of the system; on the other hand, surveillance-based works turn viewers into objects of analysis. I think this is what makes computer vision powerful in art: it can both engage and critique. However, the author seems somewhat biased toward presenting computer vision as accessible and empowering for artists, while not fully addressing issues such as privacy and ethical risks. This raises a question: as computer vision becomes easier to use, how should artists balance creativity with responsibility? I also wonder whether using computer vision in interactive art always requires some level of surveillance, or if there are ways to design systems that avoid this dynamic altogether.

Midterm Project

The Concept

For my midterm project, I decided to create a choice-making game based on the Star Wars film series. The concept is that you have the choice to control the events in the film Revenge of the Sith, following up to Anakin Skywalker becoming Darth Vader. You get to control key events in the film to try to prevent Anakin Skywalker from becoming Darth Vader.

The Process and Challenges

I started by creating the opening page of the game in p5. I added an image I had downloaded and played a sound in the background from the draw function. I added the full-screen option and resized the window accordingly. My initial plan for the instructions page was for you to click an instructions button, and a box would appear telling you the instructions. I built the instructions button and the appearing box, but I didn't end up liking it as much as I thought I would. That's when I decided to make the instructions appear the same way the opening crawl appears at the beginning of every Star Wars film. I found a website that creates crawls and made one. The only problem was that the video was 90 MB, and p5 only supports uploads up to 5 MB. I tried cutting the video into several clips and playing them one after the other, but that was not working: it would either play only one video, or play each one under the other in the same window at the same time. That made it extremely difficult to create the idea I wanted, and it made p5 very uncooperative, as the page would refuse to load and p5 would crash.

That’s when I decided to switch to VS Code.

After switching to VS Code, I uploaded the instructions video, and it worked perfectly. A problem I faced, however, was that the background audio kept restarting over itself again and again, so I moved the play() call into the setup() function and it worked. My sister suggested afterwards that I include a way to skip the instructions, because that would make debugging easier and playing much more enjoyable. I created a Skip button and a start button, but I ended up removing the skip button altogether since the start button served the same purpose.

I then began creating the nodes and game states. To do that, I decided to use object-oriented programming and put each question in its own separate class instead of repeatedly writing out the questions and option buttons. I created my first question class and used it as a template. I attempted to use a for loop to run through the class instances, because that's what I assumed I had to do, but then I decided to omit the for loop entirely and just access the instances directly.

After accessing the question from my class, I started working on the conditional aspect of changing the node after clicking the choice you want. I tried doing a typical option1Button.mousePressed(xyz), but it wasn't working. So I asked ChatGPT, and it told me it was because I wasn't accessing the object through its array, so I had to include the array name first (e.g., firstQuestion[0].option1Button).

Following that, I started actually working with my game states (since my game is a choice-making game, I had to incorporate a state for every single node). I used the following code example: if (firstQuestion[0].option1Button.mousePressed()){ gameState==='node2'; }. I used this because the if statements at the beginning of my game call the draw functions for the other nodes once the game state changes. This did not work, and I was not able to access my other node/question when I clicked the choice button. After multiple attempts at debugging, like making the background black to cover over the previous start page, I resorted to ChatGPT, which suggested I make a separate transition function to be called when the buttons are pressed. This transition function changes the game state successfully and lets me switch between nodes/questions, since this was the structure of my game states:

function draw() {
  // conditions that switch between nodes
  if (gameState === 'start') {
    drawStartScreen();
  } else if (gameState === 'instructions') {
    drawInstructions();
  } else if (gameState === 'node1') { // what happens in the first node, and so on
    drawNode1();
  } else if (gameState === 'node2') {
    drawNode2();
  } else if (gameState === 'node3') {
    drawNode3();
  } else if (gameState === 'node4') {
    drawNode4();
  } else if (gameState === 'node5') {
    drawNode5();
  } else if (gameState === 'node6') {
    drawNode6();
  } else if (gameState === 'node7') {
    drawNode7();
  } else if (gameState === 'sithEnding') {
    drawSithEnding();
  } else if (gameState === 'deathEnding') {
    drawDeathEnding();
  } else if (gameState === 'happyEnding') {
    drawHappyEnding();
  }
}
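The transition function itself isn't shown here, but based on the description it could be as small as this (a hypothetical reconstruction; the function name and wiring are my assumptions, not the project's exact code):

```javascript
// Hypothetical sketch of the "transition function" pattern: the button
// callback calls one function that assigns (=) the new state, instead
// of trying to change state inside a comparison (===).
let gameState = "start";

function goToState(next) {
  gameState = next; // single assignment keeps all state changes in one place
}

// In p5 this would be wired up roughly like:
// firstQuestion[0].option1Button.mousePressed(() => goToState("node2"));
goToState("node2");
console.log(gameState); // → "node2"
```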

I then wrote the specific draw functions for each node. The concept of the transition function was extremely helpful, and I used it repeatedly across my code. I proceeded to repeat the pattern of transition function -> changing the node -> writing my drawNode functions.

I repeated this cycle across my timeline until I had to code the endings. Once I finished my endings, I had to incorporate a way to reset the game. At the endings of my timeline, I included a restart button that calls a transition function, which changes the node back to 'start', effectively resetting the game.

Part I’m Most Proud Of

I’m most proud of using OOP for my questions and choice buttons. I think that was one of the most effective and efficient ways I could have run my code. It saved me so much time that could have been wasted debugging and searching for which question in particular to fix. It kept everything organized and structured.

class Question1 {
    constructor(question, option1, option2) {
        this.question = question;
        this.option1 = option1;
        this.option2 = option2;

        this.option1Button = createButton(this.option1);
        this.option1Button.style('font-size', '18px');
        this.option1Button.style('font-family', 'titleFont');
        this.option1Button.style('color', 'white');
        this.option1Button.style('background-color', 'black');
        this.option1Button.style('text-align', 'center');
        this.option1Button.style('border', 'none');
        this.option1Button.size(500, 80);
        this.option1Button.position(width * 0.25 - 250, height / 1.5);
        this.option1Button.hide();

        this.option2Button = createButton(this.option2);
        this.option2Button.style('font-size', '18px');
        this.option2Button.style('font-family', 'titleFont');
        this.option2Button.style('color', 'white');
        this.option2Button.style('background-color', 'black');
        this.option2Button.style('text-align', 'center');
        this.option2Button.style('border', 'none');
        this.option2Button.size(500, 80);
        this.option2Button.position(width * 0.75 - 250, height / 1.5);
        this.option2Button.hide();
    }

    display() {
        fill('white');
        textFont(titleFont);
        textSize(34);
        textAlign(CENTER, CENTER);
        text(this.question, width / 2, height / 3);
    }

    show() {
        this.option1Button.show();
        this.option2Button.show();
    }

    hide() {
        this.option1Button.hide();
        this.option2Button.hide();
    }
}
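The same reuse idea could be pushed further: instead of one class per question, a single data-driven class can represent every node. Here is a simplified, p5-free sketch of that direction (hypothetical names, not the project's code):

```javascript
// One class, many question instances built from plain data. Each node
// stores its question, two option labels, and the game state each
// option should transition to.
class QuestionNode {
  constructor(question, option1, option2, next1, next2) {
    this.question = question;
    this.option1 = option1;
    this.option2 = option2;
    this.next1 = next1; // state to jump to if option 1 is chosen
    this.next2 = next2; // state to jump to if option 2 is chosen
  }

  // returns the next game state for the chosen option (1 or 2)
  choose(option) {
    return option === 1 ? this.next1 : this.next2;
  }
}

const nodes = [
  new QuestionNode("Trust Palpatine?", "Yes", "No", "node2", "node3"),
];

console.log(nodes[0].choose(2)); // → "node3"
```

In the real sketch, the constructor would also create and style the two p5 buttons, exactly as Question1 does above.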

AI Disclosure

I used Gemini to produce the backdrop images of the nodes. I used ChatGPT to debug issues such as:

  1. The soundtrack playing repeatedly: ChatGPT told me to move my soundtrack.play() call to setup(), since draw() runs 60 times a second.
  2. The nodes would not switch, but that was because I did not know the difference between = and ===. ChatGPT explained the difference, and I learned when to use each.
  3. I was not able to change nodes when I clicked buttons. This was the structure I tried incorporating:
    (firstQuestion[0].option1Button.mousePressed()){ gameState==='node2'; }
    

    but it did not work and refused to switch nodes. ChatGPT told me to incorporate the transition functions. That was, I believe, the most beneficial and efficient suggestion ChatGPT made; without it my code would have been extremely chaotic and difficult to debug. It allowed me to create the pattern I used to switch the node and game states. It was a one-time suggestion, but something I reused in multiple parts of the code, even for restarting the game. Also, towards the end of coding, my instructions video was changing its position differently than I coded it, so I applied the transition function to only call the video when the instructions button is pressed.

  4. My buttons would not be in the correct position until I hit full-screen, so ChatGPT told me to put their code in my windowResized() function.
  5. I used the Copilot extension in VS Code to change stylistic components of my buttons. After I finished the project and posted the GitHub link, I realized that I had been working on the project with my computer zoomed out to 33% instead of the default 100%. So I asked the extension to make all the text and button sizes 66% smaller to fit the frame correctly, instead of going to every single text element and button and manually changing the sizes, which would have been extremely time consuming. I also had it lower the positions of my buttons and change the background colors instead of doing that manually for every button and text.
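The = versus === confusion from point 2 comes down to assignment versus comparison, which a few plain-JavaScript lines make obvious:

```javascript
let gameState = "start";

// `===` compares and returns a boolean; it does NOT change gameState.
const isNode2 = (gameState === "node2"); // false, gameState is still "start"

// `=` assigns; this is what a state switch actually needs.
gameState = "node2";
```

That is why `gameState === 'node2';` inside the button handler did nothing: it evaluated a comparison and threw the result away.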

All the code was written entirely by me; AI tools were used only as a supplement for debugging.

Reflection:

For the future I would love to incorporate the idea of creating and animating the actual characters and allowing the users to move the characters around with the mouse, choosing which other characters to speak to and picking choices from the dialogue themselves. I would want to give it a video game style. I would have loved to create more storylines and endings for the game as well.

GAME LINK: https://da3755-ui.github.io/prevent-the-fall-of-anakin/

Midterm Final Project

My overall concept:

My project is an interactive game called Boba Builder, where the player gets to create their own custom bubble tea drink. I wanted the game to feel fun and give the user a cute, customizable experience, almost like a small digital cafe where you can design a drink however you want. The idea actually started because I was craving matcha, and at first I wanted to make a matcha‑themed game. But then I realized matcha doesn’t have many color or topping variations since it’s always green, so it didn’t feel very customizable. That made me think about other drinks, and then I remembered how popular boba tea was around 2020-2022 during the COVID era. I mean, it’s still popular now, but that was the time when it really became a trend. I liked how many different combinations you can make in real life, and that inspired me to turn it into a game where players can mix and match their own drink.

The main goal of my project was to make the player feel like they are actually building something step by step. Each screen represents a different part of the drink‑making process, and the choices the player makes show up inside the cup. I wanted the final drink to feel personal, like something the player actually created, and I liked the idea that every person could end up with a completely different drink. The game doesn’t have winning or losing; it’s more about creativity, enjoying the process, and having a fun little experience.

Final Project: 


How My Project Works: 

The entire game I built uses the same coding ideas we learned in class, which made it easier for me to understand how to structure everything. My project works using a state system in p5.js. This means the game changes screens depending on what the player chooses. For example, the game starts on the start screen, then moves to the tea screen, then the boba screen, then the ice screen, and so on. Each screen has its own buttons that let the player pick what they want to add to their drink. When the player clicks a button, the game updates a variable like chosenTea, chosenBoba, or chosenIce. These variables are then used inside my drawCup() function, which updates the cup and shows the player’s choices as they build their drink.

if (state === "start") {
  drawStartScreen();
} else if (state === "tea") {
  drawTeaScreen();
} else if (state === "bubbles") {
  drawBobaScreen();
} else if (state === "ice") {
  drawIceScreen();
} else if (state === "straw") {
  drawStrawScreen();
} else if (state === "color") {
  drawColorScreen();
} else if (state === "finished") {
  drawFinalScreen();
}

I also used object‑oriented programming to create my buttons. I made two classes: a Button class for rectangle buttons and a RoundButton class for circle buttons. Each button has its own position, size, color, and label. Both classes also have a clicked() function that checks if the player pressed the button. This helped me reuse the same code for all my buttons instead of rewriting it over and over. It made my project much more organized and easier to manage.

class Button {
  constructor(x, y, w, h, fillColor, textColor, txt) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
    this.fillColor = fillColor;
    this.textColor = textColor;
    this.text = txt;
  }

  draw() {
    fill(this.fillColor);
    rect(this.x, this.y, this.w, this.h, 30);
    fill(this.textColor);
    textSize(width * 0.035);
    text(this.text, this.x, this.y);
  }

  // returns true if the mouse is inside the button;
  // (x, y) is treated as the button's center, so the hit area
  // extends half the width/height in each direction
  clicked() {
    return (
      mouseX > this.x - this.w / 2 &&
      mouseX < this.x + this.w / 2 &&
      mouseY > this.y - this.h / 2 &&
      mouseY < this.y + this.h / 2
    );
  }
}
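The clicked() logic can also be seen as a standalone, testable hit-test function; this sketch of mine mirrors the math in the class, treating (x, y) as the rectangle's center:

```javascript
// Center-anchored rectangle hit test: (cx, cy) is the center,
// w and h are the full width and height.
function pointInRect(px, py, cx, cy, w, h) {
  return px > cx - w / 2 && px < cx + w / 2 &&
         py > cy - h / 2 && py < cy + h / 2;
}

console.log(pointInRect(5, 5, 0, 0, 20, 20));  // → true
console.log(pointInRect(15, 0, 0, 0, 20, 20)); // → false
```

Pulling the test out like this makes it easy to verify button boundaries without clicking around in the running sketch.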

The game also uses sound effects and background music. The background music plays throughout the whole game, and when the player pours tea, the pouring sound plays. When the pouring sound finishes, the game automatically moves to the next screen using pour.onended(nextState);. I also added sounds for boba, ice, and other actions to make the game feel more interactive and satisfying. At the end of the game, the player sees their final drink with all the choices they made.
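The onended() flow can be mocked in plain JavaScript to show the callback idea (this is a simplified stand-in I wrote, not p5's actual sound object, and the state names are just examples):

```javascript
// Mock of the onended() pattern: the "sound" stores a callback and
// fires it when playback ends, which advances the game state.
let state = "tea";
function nextState() { state = "bubbles"; }

const pour = {
  callback: null,
  onended(cb) { this.callback = cb; },            // register the callback
  finish() { if (this.callback) this.callback(); } // simulates playback ending
};

pour.onended(nextState);
pour.finish();
console.log(state); // → "bubbles"
```

The point is that the screen change is driven by the sound finishing, not by a timer, so the transition always lines up with the audio.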

What I’m Proud Of:

I’m really proud of how the whole game feels like a real drink‑making experience. The cup updates step by step, and the player can see their drink change as they make choices. I’m also proud that I figured out the back button issue. Now, on every screen, the player can go back to the previous page without breaking the game, which took a lot of testing and fixing. I’m also proud of the visuals I drew in Procreate. I made the start‑screen background, the toppings, and the decorations myself, and I think they make the game look cute and fun.

Another thing I’m proud of is the button system I created. I made a separate buttons.js file and used classes to organize all my buttons. This made my code much cleaner and helped me understand object‑oriented programming better. I’m also proud of how I used sound effects to make the game feel more real and less boring. The pouring sound, the drip sound, and the ice sound all add to the experience and make the game more satisfying to play.

I’m also proud that I solved some difficult bugs. The back button and the lag issues were really frustrating, but I kept trying different things until I fixed them. It felt good to finally get everything working the way I wanted.

Areas of Improvement & Problems I Faced:

One area I want to improve is how the layout works on different screen sizes. I used windowWidth and windowHeight to make the game responsive, but sometimes the buttons or text still look a little off on very small or very large screens. In the future, I want to create a better scaling system so everything stays in the right place no matter what device the player uses, even on phones.
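One common way to build such a scaling system (a suggestion on my part, not the project's current code) is to design at a fixed base resolution and multiply every size and position by a single scale factor:

```javascript
// Uniform UI scale: design everything at baseW x baseH, then scale by
// the smaller ratio so nothing overflows on narrow or short windows.
// The base resolution here is an arbitrary example.
function uiScale(windowWidth, windowHeight, baseW = 1280, baseH = 720) {
  return Math.min(windowWidth / baseW, windowHeight / baseH);
}

console.log(uiScale(1280, 720)); // → 1
console.log(uiScale(640, 720));  // → 0.5
```

Every button size and text size would then be written as designValue * uiScale(...), which keeps proportions identical on phones and large monitors alike.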

I also had problems with the back button. Sometimes it didn’t reset the right variables, so old choices stayed on the screen. Other times, the game lagged because something was being recreated inside the draw() function instead of only running once in setup(). These issues took a long time to figure out, and I had to test many different things before I finally fixed them.

Another challenge was making sure the cup updated correctly with all the player’s choices. I had to keep track of many variables and make sure they all worked together without breaking anything. It was confusing at first, but once I organized everything and cleaned up my code, it became much easier to manage.

References: 

– Pouring sound:  https://freesound.org/people/piotrkier/sounds/700153/ 

  • I used this sound for when the tea pours into the cup.

– Drip sound:  https://freesound.org/people/Neotone/sounds/75345/ 

  • This sound plays when the player chooses boba or a color.

– Ice sound:  https://freesound.org/people/giddster/sounds/386431/ 

  • This sound is used when the player adds ice to their drink.

– Background Music: https://freesound.org/people/Mrthenoronha/sounds/370293/ 

  • I used this as the soft background music that plays during the whole game.

– Audio Editing Tool: https://clideo.com/editor/ 

  • I used this website to trim and edit my audio files so they fit better in the game.

– Font Used: https://fonts.google.com/specimen/Ribeye+Marrow 

  • This is the custom font I used for the text in my project.

– p5.js reference: https://p5js.org/reference/p5.MediaElement/onended/ 

  • I used this p5 reference to learn how the onended() function works. This helped me understand how to make the game move to the next screen after the pouring sound finishes.

Drawings:  

  • I used Procreate to draw the visuals for my project, including the start background and the toppings. I created each drawing myself and exported them as PNG files so I could use them for my game.

AI usage:

  • For my project, I mainly used ChatGPT to help me understand and fix problems that were confusing or hard to figure out on my own. One issue I had was that when I changed the size or position of something in my sketch, other parts of the layout sometimes reacted in ways I didn't expect, especially because I was using a lot of percentage‑based values like width * 0.5 or height * 0.7. ChatGPT helped me understand how p5.js handles screen sizes and why certain numbers can affect the spacing of different elements. I also used ChatGPT to understand why my instructions text wasn't breaking into separate lines and was displayed in a way that wasn't aesthetically appealing. It explained how the \n symbol works in p5.js and how text alignment affects multi‑line text, which helped me format my instructions correctly. Another major problem was my back button. Sometimes it took me to the wrong screen, sometimes it kept old choices even when restarting the whole p5 game, and sometimes it even made the whole game lag. ChatGPT helped me understand that this can happen if the order of resetting variables and changing states is wrong, or if something is being recreated inside the draw() function when it should only happen once in setup(). After learning this, I reorganized my code and fixed the issue. ChatGPT also helped me understand why my game lagged at certain moments by explaining that loading images or creating new objects inside the main loop can slow everything down. Finally, I asked ChatGPT to explain how the onended() function works for sounds, which helped me understand why I needed pour.onended(nextState); for my tea‑pouring animation. All the coding, design, and decisions in my project were done by me; ChatGPT only helped me understand confusing parts and figure out why certain things weren't working the way I expected.