Midterm – Pitchy Bird

Following my initial progress on a backend-backed idea, I ran into challenges managing the complexities of API usage and LLM output, so I switched to another idea briefly mentioned at the start of my last documentation.

Core Concept

For my midterm project, I wanted to explore how a fundamental change in user input could completely transform a familiar experience. I landed on the idea of taking Flappy Bird, a game known for its simple tap-based mechanics, and re-imagining it with voice control. Instead of tapping a button, the player controls the bird’s height by changing the pitch of their voice. Singing a high note makes the bird fly high, and a low note brings it down.

The goal was to create something both intuitive and novel. Using your voice as a controller is a personal and expressive form of interaction. I hoped this would turn the game from a test of reflexes into a more playful and, honestly, sillier challenge.

How It Works (and What I’m Proud Of)

At its core, the project uses the p5.js library for all the visuals and game logic, combined with the ml5.js library to handle pitch detection. When the game starts, the browser’s microphone listens for my voice. The ml5.js pitchDetection model (surprisingly lightweight) analyzes the audio stream in real time and spits out a frequency value in Hertz. My code then takes that frequency and maps it to a vertical position on the game canvas. A higher frequency means a lower Y-coordinate, sending the bird soaring upwards.
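
Roughly, the detection loop looks like this (a simplified sketch rather than my exact code; lowFreq and highFreq come from the calibration step described below):

let mic, pitch, birdY;
let lowFreq = 100, highFreq = 400; // placeholder range until calibration runs
const modelUrl = 'https://cdn.jsdelivr.net/gh/ml5js/ml5-data-and-models/models/pitch-detection/crepe/';

function setup() {
  createCanvas(400, 600);
  mic = new p5.AudioIn();
  mic.start(() => {
    // hand the live mic stream to ml5's CREPE-based pitch detector
    pitch = ml5.pitchDetection(modelUrl, getAudioContext(), mic.stream, getPitch);
  });
}

function getPitch() {
  pitch.getPitch((err, frequency) => {
    if (frequency) {
      // higher frequency → smaller Y, so high notes send the bird up
      birdY = map(frequency, lowFreq, highFreq, height, 0, true);
    }
    getPitch(); // poll again as soon as a result comes back
  });
}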

Click here to access the game

I’m particularly proud of a few key decisions I made that really improved the game feel.

First was the dynamic calibration. Before the game starts, it asks you to be quiet for a moment to measure the ambient background noise, and uses that measurement to set a volume threshold, so the game doesn’t react to the hum of a fan or distant chatter. Then, it has you sing your lowest and highest comfortable notes. This personalizes the control scheme for every player, adapting to their unique vocal range, which could be an important design choice for a voice-controlled game.
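
The ambient-noise step boils down to sampling the mic level for a couple of seconds and putting the threshold just above the loudest reading. A simplified version (my actual code ties this into the game’s state machine):

let noiseFloor = 0;

function calibrateSilence(onDone) {
  const samples = [];
  const id = setInterval(() => samples.push(mic.getLevel()), 50); // sample every 50 ms
  setTimeout(() => {
    clearInterval(id);
    noiseFloor = max(samples) * 1.5; // sit safely above the ambient hum
    onDone();
  }, 2000);
}

The low- and high-note captures work the same way, except they collect frequency readings from the pitch detector instead of volume levels.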

Another technical decision I’m happy with was implementing a smoothing algorithm for the pitch input. Early on, the bird was incredibly jittery because the pitch detection is so sensitive. To fix this, I stored the last five frequency readings in an array and used their average to position the bird. This filtered out the noise and made the bird’s movement feel much more fluid and intentional. Finally, instead of making the bird fall like a rock when you stop singing, I gave it a gentle downward drift. This “breath break” mechanic keeps the game feeling fair and acknowledges the physical reality of needing to breathe, which was a small but important game design tweak.
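
The smoothing itself is just a rolling average over the most recent readings, something like:

const recentFreqs = [];

function smoothedFrequency(newFreq) {
  recentFreqs.push(newFreq);
  if (recentFreqs.length > 5) recentFreqs.shift(); // keep only the last five readings
  let sum = 0;
  for (const f of recentFreqs) sum += f;
  return sum / recentFreqs.length; // the average damps single-frame jitter
}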

Challenges and Future

My biggest technical obstacle was a recurring bug where the game would crash on replay. It took a lot of console-logging and head-scratching, but it ultimately turned out that stopping and restarting the microphone doesn’t work the way I’d thought. The audio stream becomes invalid after the microphone stops, and I couldn’t reuse it. The solution was to completely discard the old microphone object and create a brand new one every time a new game starts.
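
In practice the fix is just a few lines in the restart handler (sketched here; modelUrl is the same CREPE model path the detector was loaded from):

function restartGame() {
  if (mic) mic.stop(); // the old stream is dead after this, so never reuse it
  mic = new p5.AudioIn(); // brand-new microphone object, brand-new stream
  mic.start(() => {
    pitch = ml5.pitchDetection(modelUrl, getAudioContext(), mic.stream, getPitch);
  });
}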

In addition, there are definitely areas I’d love to improve. The calibration process, while functional, is still based on setTimeout, which can be sort of rigid. A more interactive approach, where the player clicks to confirm their high and low notes, would be a much better user experience. Additionally, the game currently only responds to pitch. It might be fascinating to incorporate volume as another control dimension—perhaps making the bird dash forward or shrink to fit through tight gaps if you sing louder. 

A more ambitious improvement would be to design the game in a way that encourages the player to sing unconsciously. Right now, you’re very aware that you’re just “controlling” the bird in another way. But what if the game’s pipe gaps prompt you to utter a simple melody? The pipes could be timed to appear at moments that correspond to the melody’s high and low notes. This might subtly prompt the player to hum along with the music, and in doing so, they would be controlling the bird without even thinking.

Week 6: Pong² (Midterm Project)

Overall Concept:

This is Pong². It’s called Pong² because this isn’t just a spiritual successor to the first ever video game; it’s a dimension beyond. Regular Pong only moves you along one axis, so it has just one dimension; this game lets you move left and right AND back and forth, creating a very amusing experience with much more depth than the original.

Pong² features two modes. Let’s start with Versus. Versus is a continuation of what I worked on all the way back in Week 3. In Versus, you and another player play a 1v1 duel of up to 9 rounds, best of 5. The win conditions are the same as classic Pong’s, but with a lot more strategic depth. Moving your paddle toward the incoming ball will “smash” or “spike” it into your opponent’s side, while moving in the same direction as the ball as you receive it helps you “control” or “trap” it.

Pong²’s second mode is its best: co-op. In Co-Op mode, you and another player work together to guard the bottom goal against multiple balls that speed up with every return. You have 3 lives and each ball that slips past you takes a life from your shared life total. You have to coordinate with your teammate to make sure you keep a watch on every ball and not just respond as fast as you can to each one, because that’s only going to take you so far (my bet is on 50 seconds).

 

Feedback: Color Scheme Consistency

At our Monday check-in, Professor Ang told me to use consistent colors for highlighting the gameplay instructions; when I highlight with different colors, I risk confusing the player. I also went on to keep the game mode colors consistent with the menu instead of having the menu remain exclusively white and purple.

CO-OP Mode Design

I originally planned to make a player-vs-AI mode but realized I really didn’t know how to make the AI respond the way a real player would move the paddle. I had received feedback from Professor Ang to make the concept more interesting than the usual 1v1 Pong, and that’s when it hit me: what if I made 2-player survival Pong?

I had a lot of fun designing the co-op mode on paper, but I made my code a huge mess by duplicating my Versus game mode JavaScript file instead of working from scratch. I THOUGHT that would make the process faster since I’d be working from my existing framework, but I had to modify so many things that the code ended up convoluted.

I originally had the game start with just one ball, and you wouldn’t get the second ball until the 15th hit on the top side; however, I realized I wanted players to naturally have the idea to coordinate with each other. For instance, player one might tell his teammate “I’ll take the red ball, you take the blue one” to strategize. So I decided to make the 2nd top bounce spawn the second ball and the 15th hit spawn the third, which presents the challenge to players at a good pace.

I was very proud of this for loop I built to manage the many colorful balls you have to defend against. Between this week and last week, I also had to add a canHitPaddle() method and timer to make sure you wouldn’t accidentally double-hit the same ball as you tried to move in the same direction as it.

for (let b of coopBalls){ // runs through all the balls stored in the array
    b.display();
    b.move();
    b.scoreDetection();
    
    if (coopBallCollidePaddle(b, player1paddle1, "Top") && b.canHitPaddle()) {
      b.lastHitTime = millis(); // records hit time used by canHitPaddle()
      b.ballYspeed *= -1;
      b.yPos = player1paddle1.yPos - player1paddle1.paddleHeight / 2 - b.radius - 5;
      
      if (keyIsDown(83)) { // 'S' key
        paddleBounceControlSFX.play();
        b.ballYspeed *= COOPbackControlFactor;
      } else {
        paddleBounceNeutralSFX.play();
      }
    }

    if (coopBallCollidePaddle(b, player2paddle1, "Top") && b.canHitPaddle()) {
      b.lastHitTime = millis(); // records hit time used by canHitPaddle()
      b.ballYspeed *= -1;
      b.yPos = player2paddle1.yPos - player2paddle1.paddleHeight / 2 - b.radius - 5; // the 5 makes a micro-adjustment to prevent funky collisions
      
      if (keyIsDown(DOWN_ARROW) || keyIsDown(75)) { // '↓' or 'K' key
        paddleBounceControlSFX.play();
        b.ballYspeed *= COOPbackControlFactor;
      } else {
        paddleBounceNeutralSFX.play();
      }
    }
  } // closes for b loop
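
The canHitPaddle() method itself isn’t shown above; it boils down to a cooldown check against lastHitTime, roughly like this (the 250 ms window here is illustrative, not my tuned value):

class CoopBall {
  constructor() {
    this.lastHitTime = -Infinity; // no hits recorded yet
    // ...rest of the ball setup...
  }

  canHitPaddle() {
    // the ball can only be hit again once a short cooldown has passed,
    // so moving in the same direction as the ball can't double-hit it
    return millis() - this.lastHitTime > 250;
  }
}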

I think the co-op mode overall turned out very well. It was surprisingly challenging but also a good amount of fun for the friends that I asked to test it.

 

VERSUS Mode Improvements

I had the idea to add directional input with forward and backward velocity. Basically, say you’re on the bottom side: if you receive the ball while holding the down arrow, you slow it down, because you pressed the key matching the direction the ball was traveling; if you receive it with the up arrow, you add momentum to it.

I had so much fun implementing this layer of complexity in the game. A main criticism I received at first was that the extra dimension of movement for the paddles didn’t offer a significant difference from the usual Pong gameplay loop.

This was my response to that: adding much-needed strategic depth to the game. You weren’t just competing over who was better at defense now; you were seeing who had the better attack strategy.

In fact, I originally didn’t plan to add this forward and backward velocity effect to the co-op mode, but I loved how it felt so much that I knew I had to add it in some way. This posed a balancing challenge as well: keeping the backControlFactor from the Versus mode would have made Co-Op far too easy.

 

Sound Design

I originally wanted to implement the sounds into my versus mode but then I realized building the infrastructure for all the menus first would make my life a lot easier. So sound design ended up as the last thing on the to-do list.

I usually got sound effects from silly button websites when I needed to edit videos, so I never knew where to get properly licensed SFX for indie games. I eventually found a site called Pixabay that had a good library of sound effects.

Most of the menu button SFX and game over SFX were pretty straightforward, but I really wanted the SFX for the gameplay loop itself to sound very satisfying. My favorites were definitely the ball control SFX and the countdown SFX. For the ball control SFX, I was inspired by the sound design in the Japanese anime Blue Lock, which has some incredible sound effects, like the bass-y effect when one of the characters controls the ball midair. I found a super nice-sounding bass sample, and that’s how the idea of using acoustic instrument sounds stuck. The other two ended up being a snare and a kick drum sound.

 

Difficulties Endured

The Great Screen Changing Mystery

There was this incredibly confusing bug that I never truly figured out, but the gist was that the game would SOMETIMES randomly switch screens, specifically to the Versus game mode, when I clicked into Co-Op.

I dug and I dug through the code endlessly until I saw that I had left one of the screen-related if-statements with an else branch that defaulted the screen to Versus. It was there from earlier tests, but it seriously still doesn’t make any sense to me: there was never a call to the function that switches to Versus, and nothing should have left the screen state invalid for that else branch to take over.

This was a frustrating gap in my knowledge that I never fully understood, but I did fix it.

 

Hit “K” to Let the Ball Through (and look stupid)

For some reason, the bottom paddle kept letting the ball straight through whenever I blocked it while holding “K” or the down arrow. This took hours of looking for the problem, going “I’ll deal with that later”, and then running into it again and wondering what the heck was causing it.

It was very specifically the bottom paddle, and it HAD to be related to backward movement input detection. Then I realized it was ONLY the bottom paddle because I had set the ball speed to minSpeed, which is always positive. So all I had to do was convert it to the negative equivalent of minSpeed for the bottom paddle.

if (keyIsDown(UP_ARROW) || keyIsDown(73)) { // '↑' or 'I' key 
  paddleBounceSmashSFX.play();
  ballforVS.ballYspeed *= fwdVelocityFactor;
} else if (keyIsDown(DOWN_ARROW) || keyIsDown(75)) { // '↓' or 'K' key 
  paddleBounceControlSFX.play();
  ballforVS.ballXspeed = minSpeed;
  ballforVS.ballYspeed = -minSpeed; // negative so the ball travels back up from the bottom paddle
} else {
  paddleBounceNeutralSFX.play();
}

Cursor Changing

When I was developing the buttons, I realized that they didn’t really feel clickable. So I wanted to add some code to change the cursor to the hand whenever it hovered over a button. This proved surprisingly complicated for what seems like an easy task. The most efficient way I found was to append all the buttons for each menu to an array and check it in a for loop every frame, then close off with a simple if-statement.

function updateCursorFor(buttonArray) {
  let hovering = false;
  
  for (let btn of buttonArray) {
    if (btn.isHovered(mouseX, mouseY)){
      hovering = true;
    } 
  }
  if (hovering) {
    cursor(HAND);
  } else {
    cursor(ARROW);
  }
}
Conclusion & Areas for Improvement

As for areas of improvement, I would probably make the ball-paddle collision logic even more stable and consistent. It usually behaves predictably and as intended, but the way I coded it sometimes shows its flaws, with the ball colliding on unintended surfaces.

However, despite all this, I am really proud of my project, both from the UI design standpoint and for the replayability of the gameplay loop. Once my other friends are done with their midterms I’m gonna have a much better opportunity to see flaws in the design.

Midterm Project – Balloon Pop

Game link:

https://editor.p5js.org/mm13942/full/lU_ibrAn2

Concept

My project is a fun and lighthearted balloon survival game where the player controls a balloon flying through the sky. The goal is simple — avoid the falling bombs, collect hearts for extra points, and stay alive as long as possible. The balloon can only move left and right, while the background scrolls continuously to give the feeling of rising into the clouds. The moment the balloon hits a bomb or the timer bar runs out, it pops and it’s game over.

Inspiration

I was inspired by classic arcade-style games where the goal is to survive for as long as possible while dodging obstacles. I wanted to make something cheerful and colorful but still challenging. The idea of having a balloon as the main character felt fitting because it’s fragile yet expressive, and visually, it works well with soft, moving clouds and bright skies.

Production and Main Features

Initially, I was planning to have obstacles appear randomly from the sides:

but then I realized that wouldn’t be a great decision, considering the size of a laptop screen, so I decided to stick with my other idea of objects falling randomly from the sky. After creating the draft of the game, I wrote down everything that needed to be added to complete it.

Key Features:

  1. Object-Oriented Design – Classes for Balloon, Bomb, Heart, and Button
  2. Pages:
    • Main page with right-aligned buttons
    • Instructions page
    • Weather choice page (Rainy, Sunny, Snowy)
    • Skin choice page (Pink, Yellow, Orange balloons)
    • Game over page with left-aligned content
  3. Gameplay Mechanics:
    • Bombs spawn with increasing difficulty (max 7)
    • Hearts spawn every 8 points (+5 score bonus)
    • Hearts spawn away from bombs to avoid interference
    • Proper collision detection with circular hitboxes (no false collisions)
    • Infinitely scrolling backgrounds based on weather
    • Score tracking with high score display
  4. Controls:
    • Arrow keys or A/D to move
    • C key for fullscreen toggle
    • Mouse clicks for all buttons
  5. Audio Support – Sound functions
  6. Highlighted Buttons – Selected weather/skin buttons get highlighted
  7. Back Buttons – On every sub-page to return to main menu

Code I’m Proud Of

The part of the code that I’m really proud of might not seem too hard, but it was definitely the most time-consuming and took a lot of trial and error: removing the white background from the uploaded pictures. When it first worked, I thought everything was good, but it turned out p5 was re-processing that one image 60 times per second, and when I played for more than 10 seconds the sketch kept shutting down. I had to do a lot of debugging to understand what the actual problem was, and I was finally able to make it work without lagging, which really made me happy.

// in setup(): process each image once so draw() never rebuilds these buffers
  processedBombImg = removeWhiteBackground(bombImg);
  processedHeartImg = removeWhiteBackground(heartImg);
  processedPinkBalloonImg = removeWhiteBackground(pinkBalloonImg);
  processedYellowBalloonImg = removeWhiteBackground(yellowBalloonImg);
  processedOrangeBalloonImg = removeWhiteBackground(orangeBalloonImg);
  
  // Initialize player balloon
  player = new Balloon(width/2, height - 80 * scaleFactor);
  
  // Create all button objects with proper positions
  setupButtons();
  
  // Set the pixel font for all text
  textFont(pixelFont);
} // end of setup()


// remove White Background 
function removeWhiteBackground(img) {
  // Create a graphics buffer to manipulate pixels
  let pg = createGraphics(img.width, img.height);
  pg.image(img, 0, 0);
  pg.loadPixels();
  
  // Loop through all pixels and make white ones transparent
  for (let i = 0; i < pg.pixels.length; i += 4) {
    let r = pg.pixels[i];     // Red channel
    let g = pg.pixels[i + 1]; // Green channel
    let b = pg.pixels[i + 2]; // Blue channel
    
    // If pixel is mostly white (R, G, B all > 200), make it transparent
    if (r > 200 && g > 200 && b > 200) {
      pg.pixels[i + 3] = 0; // Set alpha to 0 (transparent)
    }
  }
  pg.updatePixels();
  return pg; // Return the processed image
}

I also really liked the energy bar idea. I was really struggling to come up with ideas, and my friend Nigina gave some feedback on my game and suggested adding this feature, which prevents players from skipping the hearts.

function drawEnergyBar() {
  // Position in top-right corner
  let barX = width - energyBarWidth * scaleFactor - 20 * scaleFactor;
  let barY = 20 * scaleFactor;
  let barW = energyBarWidth * scaleFactor;
  let barH = energyBarHeight * scaleFactor;
  
  // Draw outer frame 
  stroke(255); // White border
  strokeWeight(3 * scaleFactor);
  noFill();
  rect(barX, barY, barW, barH);
  
  // Calculate fill width based on current energy percentage
  let fillWidth = (energyBar / 100) * barW;
  
  // Determine fill color based on energy level
  let barColor;
  if (energyBar > 60) {
    barColor = color(0, 255, 0); // Green when high
  } else if (energyBar > 30) {
    barColor = color(255, 255, 0); // Yellow when medium
  } else {
    barColor = color(255, 0, 0); // Red when low 
  }
  
  // Draw filled portion of energy bar
  noStroke();
  fill(barColor);
  rect(barX, barY, fillWidth, barH);
  
  // Draw "ENERGY" label above bar
  fill(255);
  textAlign(CENTER, BOTTOM);
  textSize(16 * scaleFactor);
  text("ENERGY", barX + barW / 2, barY - 5 * scaleFactor);
}

 

Design

Visually, I wanted it to feel airy and positive, so I used soft pastel colors, smooth cloud movement, and rounded buttons. Each page has its own layout — right-aligned buttons on the main page and left-aligned elements on the Game Over screen — to make navigation easy.

Challenges

The hardest part of my code was definitely managing how all the game elements work together (the bombs, hearts, clouds, timer bar, and different pages). Getting everything to appear, move, and disappear smoothly without glitches took a lot of trial and error. Sometimes bombs appeared too frequently or hearts overlapped with them. I fixed this by randomizing positions with distance checks.

if (bombs.length < maxBombs && frameCount % 50 === 0) {
  bombs.push(new Bomb(random(width), -20));
}

if (score % 10 === 0 && !heartExists) {
  hearts.push(new Heart(random(width), -20));
  heartExists = true;
}

The collision detection between the balloon and falling bombs was tricky too, since I had to make sure it felt fair and accurate using circular hitboxes. Another challenging part was balancing the gameplay, making bombs fall fast enough to be fun but not impossible, while also keeping the hearts from overlapping with them. On top of that, managing all the page transitions (main menu, instructions, weather, skins, game over) and keeping the selected settings consistent made the logic even more complex. Overall, the hardest part was making everything work together in a way that felt natural and didn’t break the flow of the game.
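
Both the hitboxes and the spawn rule come down to distance comparisons. A simplified sketch (minGap and the retry count are illustrative, not my exact values):

// two circles overlap when their centers are closer than the sum of their radii
function circlesHit(ax, ay, ar, bx, by, br) {
  return dist(ax, ay, bx, by) < ar + br;
}

// pick a heart x-position that keeps a minimum horizontal gap from every bomb
function heartSpawnX(minGap) {
  let x = random(width);
  for (let tries = 0; tries < 20; tries++) {
    if (bombs.every(b => abs(b.x - x) > minGap)) break; // far enough from all bombs
    x = random(width);
  }
  return x;
}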

Future Improvements

In the future, I’d like to make the game feel more complete by adding real background music and more sound effects for popping, collecting hearts, and clicking buttons. Another improvement would be to make the difficulty change as the score increases, for example, bombs could fall faster or spawn more frequently the longer you survive. I’m also thinking of adding new power-ups like shields or magnets to make the gameplay more interesting. On the design side, animated buttons and smoother page transitions could make the menus feel more polished. Eventually, I’d love to include a high score system to track progress and make players more competitive.

 

Midterm Project: OOO – EEE

I made a game before that was controlled with keyboard inputs, so this time I wanted to create a game that used a different kind of input.
As I was scrolling through YouTube Shorts to find inspiration for my project, I came across a simple game playable with the user’s pitch. In the video I watched, the character moved up and forward depending on the pitch level. With this in mind, I tried making simple programs that took voice input.

 

First, I built a program that detected a specific pitch, in this case “C”. If the user sings the pitch, the block moves upwards, and if the user maintains the same level for a certain amount of time, the block permanently moves upwards. I made this because my initial strategy was to make an adventure game where the character travels a 2D map and makes certain interactions that trigger things, such as lifting a boulder with a certain note. This little exercise allowed me to get familiar with sound input and how I could utilize it later.

For my midterm, I decided to create a simple game that uses paddles that move left and right. The goal of the game is to catch the falling objects with these moving paddles. The hardest part of the game was obviously moving the paddles depending on the user’s pitch. At first, the paddles were so sensitive that their movement was all over the place with even the slightest sound input. Adjusting that and making the movement smooth was the key to my game.

While I was testing the movement, I realized that I was making sounds that resembled a monkey. I was making OOO sounds for the low pitches and EEE sounds for the high-pitched notes. So I came up with a clever idea to make the game monkey-themed, with the falling objects being bananas and the paddles being monkey hands. It made me laugh thinking that users would have to imitate monkeys in order to play the game. Also, I added a little feature at the end to replay the sounds the players made while playing, so that they can feel a bit humiliated. I thought this was a great way to bring some humor into my game. I also had to test this multiple times while making the game, so I experienced it firsthand.

My game is divided into 4 major stages: start screen, instructions, gameplay, and game over screen. As explained in class, I utilized the different stages so that resetting was easier.

The start screen is the title screen. It has 3 buttons: instructions, play, and a fullscreen button. Clicking the buttons makes a clicking sound. That is the only sound feature I have, since my gameplay is hugely affected by sound: any background music or sound effects would affect how the game is played, so I kept it to a minimum. Also, making the game fullscreen affects the gameplay, so I had the fullscreen feature fill everywhere else with black.

Before playing, users can visit the instructions page to find out the controls and calibrate their pitch range. I deliberately say to use OOOs and EEEs for the pitches so that they sound like monkeys. These pitch ranges are adjustable with the up and down arrows and are stored in local storage so that the settings remain even after resetting the game. I also show a live paddle so that users can see how their voice will move the hands.

Once they hit play, the core loop is simple: bananas spawn at the top and fall; the goal is to catch them with the monkey “hands” (the paddle) at the bottom. I map the detected pitch to x-position using the calibrated min/max from the instructions: I clamp the raw frequency into that window, map it to the screen’s left/right bounds (so the hands never leave the canvas), then smooth it. To keep control stable I added a small noise gate (ignore very quiet input), a frequency deadzone (ignore tiny wiggles), linear smoothing with lerp, and a max step cap so sudden jumps don’t overshoot. The result feels responsive without the jitter I had early on. The player scores when a banana touches the hands and loses a life on a miss; three misses end the round.
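
Put together, the per-frame control pipeline looks roughly like this (a condensed sketch; the constants are stand-ins for my tuned values):

let handX = 0, lastFreq = 0;
let minPitch = 200, maxPitch = 800; // calibrated on the instructions page
const NOISE_GATE = 0.01;  // ignore very quiet input
const DEADZONE_HZ = 4;    // ignore tiny pitch wiggles
const MAX_STEP = 25;      // cap on movement per frame

function updateHandX(rawFreq, micLevel) {
  if (micLevel < NOISE_GATE || !rawFreq) return;     // noise gate
  let f = constrain(rawFreq, minPitch, maxPitch);    // clamp into the calibrated window
  if (abs(f - lastFreq) < DEADZONE_HZ) f = lastFreq; // frequency deadzone
  lastFreq = f;
  const targetX = map(f, minPitch, maxPitch, 0, width); // map to left/right bounds
  const next = lerp(handX, targetX, 0.2);               // linear smoothing
  handX += constrain(next - handX, -MAX_STEP, MAX_STEP); // max step cap
}

The pitch-jump boost described further down simply raises MAX_STEP for a moment whenever the frequency change in a single frame is large.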

When the run ends, the game over screen appears with the background art, a big line like “you got x bananas!”, and two buttons: “play again” and “did you sound like a monkey?”. During gameplay I record the same mic that powers pitch detection; on game over I stop recording and let the player play/stop that clip. It’s a tiny feature, but it adds a fun (and slightly embarrassing) payoff that matches the monkey concept.
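
The record-and-replay payoff leans on p5.sound’s recorder; the skeleton is about this much (simplified):

let recorder, clip;

function startRound() {
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);    // record the same mic that feeds pitch detection
  clip = new p5.SoundFile(); // empty sound file to record into
  recorder.record(clip);
}

function endRound() {
  recorder.stop(); // clip now holds everything the player sounded like
}

function toggleClipPlayback() {
  if (clip.isPlaying()) clip.stop();
  else clip.play();
}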

 

I’m especially proud of how I handled pitch jumps. Early on, tiny jitters made the hands twitchy, but big interval jumps still felt sluggish. I fixed this by combining a few tricks: a small deadzone to ignore micro-wiggles, smoothing with lerp for steady motion, and a speed boost that scales with the size of the pitch change. When the detected frequency jumps a lot in one frame (like an “ooo” to a sharp “eee”), I temporarily raise the max movement per frame, then let it settle back down. That way, small fluctuations don’t move the paddle, normal singing is smooth, and deliberate leaps produce a satisfying snap across the screen without overshooting. Getting this balance right made the controls feel musical.

For future improvements on the game itself, I want to smooth the frustration without losing the funny chaos. Bananas don’t stack, but several can arrive in different lanes at the same moment, and with smoothing plus a max step on the hands, some patterns are effectively unreachable. I kept a bit of that because the panic is part of the joke, but I’d like the spawner to reason about landing time instead of just spawn time, spacing arrivals so that at least one of the simultaneous drops is realistically catchable. I can still sprinkle in deliberate “double-arrival” moments as set pieces, but the baseline should feel fair.

Midterm Project Documentation: All Day Breakfast

Sketch (f for fullscreen): https://editor.p5js.org/joyzheng/full/tb0uwj2nP

Overall Concept

As a visiting student at NYUAD, I found the made-to-order dining system, particularly at the All Day Breakfast counter, very confusing. Unlike the pre-made options I was used to, the text-only menus made it difficult to visualize my order. I was always confused about what and how many items I had ordered when there were no pictures (some are Arabic dishes I don’t know), and I often found myself pulling out a calculator to see if my selections added up to a full meal plan.

These frictions made me want to digitize the experience into an interactive game that gamifies the ordering process. The core goal is to provide a more intuitive and visual way for players to assemble a meal, manage inventory, understand the costs, and manage their spending. By turning the process into a game with clear steps and rewards (badges), the project transforms a problem/demand discovered in my life into an engaging and replayable experience.

How It Works

The game guides the player through a six-scene narrative that mirrors the real-life process and menu of getting food at the D2 dining hall A LA BRASA All Day Breakfast Counter.

UI Prototype:

UE:

Scene 1:

Start Screen: The player is presented with the All Day Breakfast counter and prompted to “Ready to Order?”. Clicking the triangle button begins the game. The badge board is also displayed here, showing the player’s progress.

Scene 2:

Choose Food: The player is shown a grill with all available food items. They must first click to pick up a pair of tongs, which then attaches to their mouse. They can then click on food items to pick them up and click on the plate to add them to their meal. The total cost is updated in real-time.

Scene 3:

Scan Items: The player takes their plate to the cashier. They must pick up the scanner tool and move it over each food item on the plate. As each item is scanned, a beep sound plays, and the item is added to a virtual receipt.

Scene 4:

Payment: The cashier opens, revealing a coin tray. The player must pay the total amount shown on the receipt by clicking on coins from a palette and dropping them into the tray.

Scene 5:

Eat: The player sits down to eat. They must pick up a fork and use it to pick up food from their plate and bring it to the character(NYUAD Girl)’s mouth to “eat” it, which plays a sound and makes the food disappear.

Scene 6:

End Screen & Badges: After the meal, the game checks if the player’s actions have met the conditions for any new badges. If so, a special animation plays. The player is then given the option to “Dine in AGAIN!”, which resets the game and starts a new session.

Technical Decisions & Game Design I’m Proud of

I am proud of completing a fully functional and well-designed game within the project timeline, especially after iterating on the initial idea. A key technical challenge was to build the entire game to be fully responsive. The core of the responsive design is a set of helper functions (updateLayoutDimensions, scaleRectangle, scaleValue) that calculate scaling factors based on the current window size versus the original 700×500 design grid. This allows every element to reposition and resize dynamically, ensuring the game is playable on any screen.
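
A minimal reconstruction of those helpers (the real versions handle a few more cases, but this is the core idea):

const DESIGN_W = 700, DESIGN_H = 500; // the original design grid
let scaleX = 1, scaleY = 1;

function updateLayoutDimensions() {
  scaleX = width / DESIGN_W;
  scaleY = height / DESIGN_H;
}

function scaleValue(v) {
  // sizes that must stay uniform (text, radii) follow the smaller axis
  return v * min(scaleX, scaleY);
}

function scaleRectangle(r) {
  // positions and boxes stretch with each axis independently
  return { x: r.x * scaleX, y: r.y * scaleY, w: r.w * scaleX, h: r.h * scaleY };
}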

It was also helpful to discuss the game with Professor Mang to improve its interactivity and replayability. We came up with the ideas of implementing the stock management system and humorous badge rewards that every NYUAD student who has been to this dining hall can resonate with (e.g., never being able to spend a whole meal plan; why is 1 meal plan 33.6? Is the 0.1 a service fee?). I designed the inventory to match how it usually is at the counter; for instance, there are always only a few avocado toasts, and I have never managed to get the tofu omelet even now. Overall, it’s also meditative and educational (in some sense): it reminds people to feed themselves well in the dining hall even when rushing between classes, and encourages a balanced meal with enough fiber every day.

// =======================================
// SCENE 2: CHOOSE FOOD
// this function calculates the responsive positions for all food items in scene 2
function buildScene2FoodGrid() {
  // clears the array of food objects to ensure a fresh start each time the grid is rebuilt (e.g., window resize)
  scene2FoodObjects = [];

  // constants that define the original pixel dimensions of the background art and the specific rectangular area within it where the food is displayed
  const sourceImageSize = { w: 1536, h: 1024 };
  const sourceFoodArea = { x: 124, y: 138, w: 1284, h: 584 };
  
  // responsive calculation
  // current on-screen position and size of the food area
  // by finding the scaling ratio between the current canvas and the original background image
  // so the grid always perfectly overlays the correct part of the background art
  const foodGridRect = {
      x: sourceFoodArea.x * (canvasWidth / sourceImageSize.w),
      y: sourceFoodArea.y * (canvasHeight / sourceImageSize.h),
      w: sourceFoodArea.w * (canvasWidth / sourceImageSize.w),
      h: sourceFoodArea.h * (canvasHeight / sourceImageSize.h)
  };
  
  // the calculated grid area is then divided into cells (8 columns by 2 rows) to position each food item
  const columns = 8;
  const rows = 2;
  const cellWidth = foodGridRect.w / columns;
  const cellHeight = foodGridRect.h / rows;
  
  // the size of each food item is based on the smaller dimension (width or height) of a grid cell
  // this prevents the food images from looking stretched
  // scaled by 70% to add padding
  const itemSize = min(cellWidth, cellHeight) * 0.7;
  
  // this loop iterates through every food item defined
  for (let i = 0; i < ALL_FOOD_ITEMS.length; i++) {
    // math.floor() and % convert the 1d loop index (i) into a 2d (row, col) grid coordinate
    let row = Math.floor(i / columns);
    let col = i % columns;
    
    // calculates the final top left (x, y) coordinate for each food item
    // starts at the grid's origin
    // adds the offset for the column/row
    // adds a centering offset
    let itemX = foodGridRect.x + col * cellWidth + (cellWidth - itemSize) / 2;
    let itemY = foodGridRect.y + row * cellHeight + (cellHeight - itemSize) / 2;
    
    // a new food object is created with its calculated position and size
    // added to the array to be drawn
    scene2FoodObjects.push(new FoodItem(ALL_FOOD_ITEMS[i], itemX, itemY, itemSize));
  }
}

 

The most complex piece of code, and the one I’m most proud of, is the logic in the buildScene2FoodGrid() function. Unlike other elements that scale relative to the canvas, this grid must scale relative to the background image itself to ensure the food items are perfectly aligned with the artwork.

This logic calculates a scaling ratio based on how the background image has been stretched to fit the screen, and then applies that same ratio to the coordinates of the food grid. It’s a powerful piece of code that makes the experience feel seamless.

Challenges & Improvements

The development process was a valuable re-introduction to game development. I’m surprised by the amount of free asset resources and tutorials for game development online. I was also inspired by the Coffee Shop Experience example of how to use p5.js to manage a game and toggle between scenes.

One of the most surprisingly time-consuming challenges was a simple debugging session that lasted hours, only to discover I had misspelled “diarrhea” as “diarreah” or “diareah” in different locations. This taught me the importance of meticulous checking and of creating simple debugging tools to isolate issues early.

I also got the opportunity to explore AI-created assets through this project. For this huge number of assets, AI generation was probably the best option for finishing on time. However, I still spent at least half of the development time going back and forth, “drawing a good card” with the image generators. To be honest, Nano Banana wasn’t worth the hype for image creation. For game asset development, ChatGPT might be the best choice after trying a few different apps like Midjourney and Canva: it’s very lightweight, and it supports transparent PNG backgrounds, so images can be used directly without manually removing the background.

For the future, I have several ideas for improvement:

  1. Expand to Other Counters: I would like to implement a similar ordering system for the D1 dining hall, which also has a confusing menu.
  2. UI Enhancements: I plan to add a toggle to hide or show the badge board, giving the player more control over their screen space.
  3. More Badges: Adding more creative badges would further increase the incentive for players to try different food combinations and spending strategies.
  4. Scene Refinement: Some scenes are quite dense with assets. In a future version, I might split complex steps into more scenes to make the layout feel cleaner and less cluttered.
  5. Real Implementation: After the midterm, I will demo this to the dining hall manager to see if they want to adopt this ordering system, or just use a more intuitive and interactive menu to run the dining hall more efficiently.

Midterm Project: Twenty Seconds

View Twenty Seconds here:
https://editor.p5js.org/siyonagoel/full/VcTSl8x7V

My Concept:

Twenty Seconds is an immersive minigame experience that aims to make its users uncomfortable. Every part of the project has been developed with the intention that its use should ultimately make someone feel uneasy, and most of this logic is based on things that make me uncomfortable. This also means that some parts of the project may not cause the same level of discomfort for everyone, and that’s okay, but I’ve tried my best to use things that make most of the population uncomfortable. The project essentially features 8 rooms, each of which has a 20-second timer running, and the user has to either complete some sort of a challenge, or sit through a bad experience (like uncomfortable sounds) for 20 seconds. They cannot move to the next room until they complete the challenge in their current room, and they complete the experience only after going through all rooms.

There are some deliberate design choices within this project. To start with, I made sure that there is a very minimal use of color throughout the project. Hence, the only colors you will see are white, black, and red. Initially, I was thinking of only white and black, but after I realised one more color is a necessity, I added red as I find it one of the most uncomfortable colors due to its association with violence. Also, there is no general background music that plays throughout the project, although there are some specific sounds for a few rooms and the pop-up messages. What can be more uncomfortable than silence, when people can actually hear their own thoughts? The font I used—Reenie Beanie—was the best mix I could find between a readable font and human handwriting, something that looks like it was scrawled on a blackboard with chalk.

For my midterm project, I wanted to do something that is a unique mix of both a game and interactive art, and I believe Twenty Seconds captures this quite nicely.

Technical and Game Design:

The project treats each room as a self-contained mini challenge while keeping a single central state (the hallway leads to a door, which leads to a room, which leads back to the hallway). I am proud of the clear division in my code between different sections, such as resource loading, room initialization, and rendering. For example, preload() gathers all the images and sounds that I use, each initRoomX() sets up the state, and draw() delegates to the current room. Because of this structure, I could easily extend the code every time I wanted to add a new room, and debugging stayed predictable. Here’s an example:

function initRoom2() {
  roomItems = [];
  
  // Define positions and images for the 4 items
  let positions = [
    { x: 50, y: 130, w: 230, h: 192, img: bedImg, name: "bed" },
    { x: 320, y: 130, w: 230, h: 204, img: labubuImg, name: "labubu" },
    { x: 600, y: 130, w: 162, h: 263, img: toiletImg, name: "toilet" },
    { x: 810, y: 150, w: 220, h: 157, img: sofaImg, name: "sofa" }
  ];
    
  roomItems = positions;  
  
  startTimer();
}

function drawRoom2() {
  background(0);

  // Instructions
  fill("white");
  textSize(30);
  textAlign(CENTER, TOP);
  text("Which of these definitely does not belong in any of our homes?", width / 2, 10);
  
  drawTimer();

  // Draw all room items
  for (let item of roomItems) {
    image(item.img, item.x, item.y, item.w, item.h);
    
    // show the item's name when hovering over it
    if (isMouseOverItem(item)) {
      if (item.name === "bed") {
        fill("white");
        textSize(30);
        text("Spiky bed", 190, 350);
      } else if (item.name === "labubu") {
        fill("white");
        textSize(30);
        text("A labubu", 480, 350);
      } else if (item.name === "toilet") {
        fill("white");
        textSize(30);
        text("Snake toilet", 755, 400);      
      } else if (item.name === "sofa") {
        fill("white");
        textSize(30);
        text("Centipede sofa", 995, 350); 
      }
    }
  }

  // failure condition, checked once per frame after the item loop
  checkTimerExpired("You're trapped until you find the right answer >:)");
}

So, every time I had to implement a new room, I would just add its required initRoomX() function and drawRoomX() function to the existing code, along with the required functionality and pop-up logic in the mousePressed() function. Since elements like the pop-ups and the timer were to be used repeatedly for all the rooms, I made sure to structure them as easily reusable functions that I can call in one line without having to paste the same 4-5 lines of code in the code for every room.
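
For example, the timer trio reduces to something like this (a simplified sketch; showPopup() stands in for my shared pop-up routine):

let timerStart = 0;

function startTimer() {
  timerStart = millis();
}

function drawTimer() {
  const remaining = max(0, 20 - floor((millis() - timerStart) / 1000));
  fill("red");
  textSize(40);
  textAlign(RIGHT, TOP);
  text(remaining, width - 20, 10);
}

function checkTimerExpired(message) {
  if (millis() - timerStart >= 20000) {
    showPopup(message); // the reusable pop-up drawing routine
  }
}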

On the technical side, there are a couple of algorithms I’m proud of in some of the games. The first is the one I used for room 1, the room filled with clocks. I used a circle packing algorithm, learnt from here, to generate the placement of the clocks without them overlapping.

// circle packing algorithm for generating non-overlapping clocks
while (attempts > 0) {
  attempts--; // every try, successful or not, uses up an attempt
  
  // random position and size of circles
  let r = random(minR, maxR);
  let x = random(r, width - r);
  let y = random(r + topBuffer, height - r);
  
  // Check if position overlaps with existing clocks
  let valid = true;
  for (let c of clocks) {
    let d = dist(x, y, c.x, c.y);
    if (d < r + c.r) {
      valid = false;
      break;
    }
  }
  
  // generate a clock if the position is valid
  if (valid) {
    clocks.push(new Clock(x, y, r));
  }
}

For the warping of the clock’s hands when the clock “melts”, I created a function called drawMeltingHand() in the Clock class that uses subtle Bézier deformation for the cool effect. Before this I had no idea that something called Bézier curves existed, and I found out that there is a p5.js function for them while searching online for ways to draw curved lines smoothly.

drawMeltingHand(x, y, length, angle, melt, weight) {
    push();
    stroke("red");
    strokeWeight(weight);

    // Midpoint of the hand
    let midLength = length * 0.5;
    let x1 = cos(angle) * midLength;
    let y1 = sin(angle) * midLength;
    // straight first half part of the hand
    line(0, 0, x1, y1);

    // curved tip that bends downwards
    let x2 = cos(angle) * length;
    let y2 = sin(angle) * length + melt * 0.5;
    // bezier(x1, y1, x2, y2, x3, y3, x4, y4)
    bezier(x1, y1, x1, y1 + melt * 0.3, x2, y2 - melt * 0.2, x2, y2);

    pop();
  }

Another interaction design choice I’m proud of is the reversed cursor for whack-a-mole. I thought it would be complicated to implement, but the math actually turned out to be very simple: if I subtract from the center of the canvas the offset between the user’s real cursor and the center, I get the corresponding coordinate for the virtual reversed cursor.

// this calculates the reversed cursor position
// the virtual mouse moves opposite to the user's actual mouse
let centerX = width / 2;
let centerY = height / 2;
virtualMouseX = centerX - (mouseX - centerX);
virtualMouseY = centerY - (mouseY - centerY);

I also really like the implementation of the eyes in room no. 6. I learnt about using the atan2() function for this purpose from here. It’s probably one of my favorite rooms, because the code wasn’t too complicated, and the resulting effect was still very cool.

if (this.isSpecial) {
  // Static pupil
  pupilX = this.x + this.staticPupilX;  // keep it centered
  pupilY = this.y + this.staticPupilY;
} else {
  // Following pupil
  // tracks the mouse
  // atan2() finds the angle formed by a point, origin, and positive x-axis
  // calculate the angle between the eye center and the mouse position
  let angle = atan2(mouseY - this.y, mouseX - this.x);
  
  // don't want pupil leaving the eyeball
  // Set a maximum distance that the pupil can move from the eye center
  // 15% of the eye’s width or height (whichever is smaller)
  let distance = min(this.w * 0.15, this.h * 0.15);
  
  // calculate new pupil position
  pupilX = this.x + cos(angle) * distance;
  pupilY = this.y + sin(angle) * distance;
}

Problems I ran into:

  1. I made the hallway doors on Canva, so the distance and angle between the doors were based on their orientation relative to each other on the Canva artboard. What I didn’t realize was that I would need the exact same relative distance between the doors on my p5.js canvas so that the perspective lines would align, and because of this the hallway pathway ended up much broader than I had planned. The only way to fix this would have been to remake all the doors with this in mind, but since that wasn’t a time-feasible option, I left the hallway as is.
  2. Another problem I ran into was with drawing Dali’s clock. While I achieved the effect that I wanted with the clock hands, I cannot say the same for the circular frame of the clock. I wanted the bottom half of the clock to stretch downwards in a wavy shape so it would resemble Dali’s clocks, but I could not figure out how to achieve that effect. I tried asking large language models like ChatGPT and Claude to help with this but their attempts failed quite horrendously. Finally, I settled for the next best thing and just made the bottom part of the clock stretch downwards in a straight fashion. I did this using the following code:
    if (angle > PI / 4 && angle < (3 * PI) / 4) {
      py += map(sin(angle), 0, 1, 0, this.meltAmount);
    }
    

    The if condition selects only the lower arc of the circle. map(sin(angle), 0, 1, 0, this.meltAmount) converts the gradient from sin(angle) into a vertical offset that increases toward the bottom of the circle. So basically, the value returned by sin(angle), in the range 0–1, is mapped proportionally to a value between 0 and the melt amount I set, and by doing py +=, I pull the y-coordinate downwards.

  3. Figuring out the ideas for each room. It took some time, but here’s the reason each room has what it has:
  • Room 1: Clocks and the sound of a ticking clock just make me really uncomfortable; a room full of them is just terrible.
  • Room 2: Self-explanatory.
  • Room 3: Needle-in-a-haystack kind of a situation. I grew up hearing this phrase a lot, and I don’t like it.
  • Room 4: I hate the feeling of disorientation. I wanted people to go through a similar feeling by making them deal with a reversed cursor.
  • Room 5: I think there are some sounds that make you want to pull your hair out. I made sure they’re all in this one room.
  • Room 6: The idea of being watched all the time is so uncomfortable.
  • Room 7: Some words in the English language just feel so wrong. I thought a collection of them floating around in a one place would be nice.
  • Room 8: This room was technically meant to give people a break. So while they can relax and watch rain pouring for 15 seconds, for the last 5 seconds the rain turns red, and I think red rain definitely doesn’t make anyone feel more comfortable.

Areas for improvement:

  1. A friend suggested this really cool idea of jumbling up the doors every time someone returns to the hallway. This would make the whole experience so much worse.
  2. Currently, the rooms aren’t manipulated by any kind of user input. What I mean by this is that, yes, the user interacts with the elements in the room, but only by moving the cursor around or clicking. In the future, I would like to add more direct user interaction, such as text input. I would also like to experiment with machine learning tools like computer vision and use the audience’s bodily movements or facial expressions as inputs.
  3. I purposely chose not to have background music that runs throughout the game, but I think if I really found the perfect music for the ambience I’m going for, I would be open to using it.
  4. In room 5, the sounds stop abruptly when crossing regions. In the future I would implement smooth crossfades to create a more cohesive soundscape, which would make the transitions feel intentional and less jarring.

Midterm Project: Worm vs Sanity

Concept

Food has always been something deeply emotional for me: a way to heal, connect, and recharge after long or lonely days. Whether it’s sharing a meal with friends and family or eating quietly in solitude, food always finds a way to lift the spirit. For me, food is more than just fuel; it’s comfort, joy, and sometimes even memory itself. Every dish I eat reminds me of a moment, a feeling, or a person.

But, of course, there’s another side to that relationship: those unforgettable moments when something unexpected shows up in your food (a hair, a fly, a worm, even a tiny stone). It’s disgusting, sometimes shocking, and yet, over time, it becomes something you laugh about. The idea for this project actually struck me when I once found a fly in my food. In that split second, my emotions bounced between anger, disgust, and disbelief, and then, later, laughter. I realized how something so small could completely shift my mood and turn an ordinary meal into a story I’d never forget.

It also reminded me of moments with my grandmother. She used to cook for the whole family, and occasionally, there would be a stray hair in the food. Instead of getting angry, I’d turn it into a lighthearted joke so everyone could laugh. Those moments became cherished, not because the food was perfect, but because the imperfections made it real, made it ours. They were messy, human, and full of love.

Through my project, I wanted to recreate those shifting emotions, from disgust and frustration to humor and warmth. I tried to capture the entire emotional cycle we experience in those moments: the anger when something feels ruined, the creepiness of noticing something “off,” and the humor that comes when you finally laugh it off.

  • Anger is portrayed through intense, chaotic visuals, like the “deadly” appearance of the dining hall and the harsh red tones.

  • Creepiness comes through the eerie atmosphere: the bloody dining hall textures, dim lighting, and strange, almost horror-like visual style that makes you feel uneasy, the same way you feel when you find something in your food that shouldn’t be there.

  • Humor ties it all together. I added funny instructions like the “Ultimate Guide to Impress Worm” that turn disgust into comedy. It’s a playful reminder that these moments, while annoying, are also absurd and relatable, something we can laugh about later.

To make it more personal, I brought in imagery from NYUAD, specifically D2 and the Library, two of my favorite places on campus. They hold countless memories of food, laughter, and friendship, so I wanted to reimagine them in my project. I took photos and used ChatGPT to generate artistic, surreal versions of these spaces, blending reality and imagination. The result is an environment that feels both familiar and eerie, mirroring that strange feeling of discovering something unexpected in what you love.

Lastly, I chose to use hand gestures as one of the interaction methods because I wanted the experience to feel physical and expressive, not just mechanical. In real life, our hands are what connect us to food. We cook with them, eat with them, react with them. So using gestures like moving the left hand to go left, the right hand to go right, and closing the fist to jump feels symbolically and emotionally right. It mirrors how our hands instinctively respond when we’re disgusted or startled: we pull back, push away, or clench up.

While it might not be the most conventional control scheme, that’s precisely what makes this project unique, and artistic, rather than a simple computer game. The goal wasn’t to make a polished arcade game, but to create a more embodied experience, one that makes the player aware of their own physical reactions.

 

How to Play:

At its core, the project is an interactive game centered around a simple but expressive idea: defeat the worms being generated from the right of the screen before they reach the left edge.

Players can interact with the game in two different ways:

Keyboard controls — using the arrow keys to move and jump: → to go right, ← to go left, and ↑ to jump.

Hand gesture controls — raise your left hand to go left and raise your right hand to go right. By “raise” I mean making your hand visible to the camera; when you don’t want to keep going left, move your left hand out of the camera’s view. If you make a fist, closing your fingers, the girl will jump.

The basic rule is simple: jump over the worms to eliminate them before they cross the screen. Players have three lives, and if they manage to survive until time >= 900 (meaning the draw function has run 900 times) with at least one life left, they win.

At first, it might feel unintuitive, but as you play, it becomes surprisingly fun and natural, like you’re physically fighting off those unwanted “guests” in your meal.

Parts I’m Proud Of

The part I’m most proud of is integrating machine learning into a project that’s not only technical but emotional and personal. As a Computer Science major, I’m always drawn to exploring how technology can express feeling and creativity. Implementing gesture control allowed me to bridge art and code, to make something that doesn’t just work, but feels alive.

I’m also proud of how I personalized the experience. By using NYUAD-specific places like D2 and the Library, I rooted the project in a world I know and love. It gives the game a familiar atmosphere, one that other NYUAD students can relate to, while still transforming it into something strange and artistic.

Areas for Improvement 

While I’m proud of how the game turned out, there are several areas I’d like to refine. The hand gesture control, though innovative, can feel slightly clunky at first. I’d like to make it more responsive and intuitive, perhaps by training the ML model with more data, or by using body detection to tell whether a person is leaning left or right and moving the character accordingly.

I’d also love to expand the visual storytelling. Right now, the “bloody” D2 gives the right kind of creepiness, but I imagine adding more levels or moods, maybe transitioning from a calm dining scene to a chaotic food fight as the difficulty increases.

Problems I Ran Into

While building the project, I faced a few interesting technical challenges that pushed me to think creatively about how motion and input are detected and processed.

1. Detecting when the hand is closed (fist gesture):
My first major challenge was figuring out how to detect when the user’s hand is closed. I wanted the “fist” gesture to trigger a jump action, but at first, I wasn’t sure which hand landmarks to compare. Eventually, I decided to track the index fingertip (keypoint 8) and the base of the index finger (keypoint 5).

The idea was simple: if the y-coordinate of the fingertip (hand.keypoints[8].y) becomes greater than that of the finger base (hand.keypoints[5].y), it means the fingertip is lower in the camera frame  in other words, the finger is curled in, forming a fist.

I used console.log(hand.keypoints[8].y, hand.keypoints[5].y) to visualize the values and experimented by opening and closing my hand repeatedly to see when the condition triggered. This trial-and-error approach helped me fine-tune the threshold for reliable gesture recognition. It was satisfying to see the jump action respond accurately once the logic clicked.
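
The final check is tiny once the landmarks are known (sketched with the handPose keypoints described above):

// fist detected when the index fingertip (8) sits below the index base (5)
function isFist(hand) {
  return hand.keypoints[8].y > hand.keypoints[5].y;
}

// inside the hand-detection callback:
for (let hand of hands) {
  if (isFist(hand)) {
    girl1.jump(); // the same jump the up arrow triggers
  }
}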

2. Managing repeated function calls with hand gestures:
The second issue was with repeated trigger events when using gesture control. Unlike pressing a key, which calls the action just once per press, raising a hand is a continuous motion, so the detection function kept firing dozens of times per second.

For example, calling girl1.jump() or movement functions using hand gestures caused the action to repeat uncontrollably fast. To solve this, I implemented a counter-based system and used a modulus condition to limit how often the action executes. Essentially, if the function was being called too rapidly, I only allowed it to execute once every ten calls.
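A rough sketch of that throttle (jumpCalls is a stand-in name, not necessarily the variable my sketch uses):

let jumpCalls = 0;

// called every frame the fist gesture is detected
function onFistDetected() {
  jumpCalls++;
  // the gesture fires dozens of times per second, so only act on every tenth call
  if (jumpCalls % 10 === 0) {
    girl1.jump();
  }
}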

Similarly, I adjusted the character’s movement speed when controlled by gestures. Instead of moving by this.speed_x each frame (which made her move unrealistically fast), I scaled it down to this.speed_x * 0.005 inside the update_Ml() function. This made her movement smooth and proportional to the natural pace of a hand gesture, giving the game a more balanced and controlled feeling.

This also applied to the animation strip changes: by updating them only every tenth frame, the animation stayed visually consistent without flickering or overloading.
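Put together, the gesture-driven update looks roughly like this (update_Ml and speed_x are named in the post; stripIndex and numFrames are stand-in names for the animation variables):

class Character {
  update_Ml(direction) {
    // move at a small fraction of the keyboard speed so gestures feel natural
    this.x += direction * this.speed_x * 0.005;

    // advance the sprite strip only every tenth frame to avoid flicker
    if (frameCount % 10 === 0) {
      this.stripIndex = (this.stripIndex + 1) % this.numFrames;
    }
  }
}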

 

My Sketch:

view only screen link: https://editor.p5js.org/aa11972/full/b224cudrh

 

Midterm

Inspiration

For this project, I wanted to create an interactive digital art piece that explores the true scale of reality by gradually zooming from large, natural environments down to microscopic and atomic levels.

Visual Elements

Flower Screen

  • Add a tree, birds, more flowers, a grass field, and the sun for a fuller composition.

  • Include animations such as swaying grass, apples falling from the tree, and birds flying across the screen to make it feel alive.

Leaf Screen

  • Add details like insects, the stem, and a more zoomed-in view of the leaf.

  • Animate insects crawling across the surface to bring a sense of realism.

Cell Screen

  • Show multiple plant cells floating in a jelly-like substance.

  • Design them to resemble real plant cells, with more detail and fluid animation.

Atom Screen

  • Illustrate atoms with orbiting ellipses that cross over each other.

  • Show the nucleus clearly, with protons and neutrons on display.

Interaction: Zoom Functionality

  • Replace the two-finger pinch with a two-hand gesture for zooming, making it more intuitive and reducing accidental zooms.

  • Add smooth zoom animations between levels instead of abrupt page changes, to create a more immersive transition.

Sound Design

  • Integrate sounds that complement each environment:

    • Flower screen: natural ambient sounds (e.g., wind, birds).

    • Leaf screen: subtle insect sounds.

    • Cell screen: soft “jelly-like” sounds.

    • Atom screen: buzzing or electrical sounds.

  • Add a “zoom-in” sound effect to enhance transitions

    (All sounds are sourced from Pixabay.com.)

Machine Learning

To enhance user interactivity, I incorporated machine learning using the ml5 library, which integrates well with p5.js and is relatively simple to implement. I set two thresholds, “close” and “far”, based on the distance between the user’s hands. These thresholds determine when the zooming action is triggered, making the interaction feel more natural and responsive.
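As a sketch, one plausible reading of how the two thresholds interact (CLOSE_DIST and FAR_DIST are placeholder values to tune; keypoint 0 is the wrist in ml5’s handPose output, and startZoomTransition() is the function shown further down):

const CLOSE_DIST = 150; // wrists this close together = hands held "close"
const FAR_DIST = 400;   // wrists this far apart = hands spread "far"
let armed = false;      // hands must first come close before a zoom can fire

function checkZoomGesture(hands) {
  if (hands.length < 2 || isTransitioning) return;
  // distance between the two wrists
  const d = dist(
    hands[0].keypoints[0].x, hands[0].keypoints[0].y,
    hands[1].keypoints[0].x, hands[1].keypoints[0].y
  );
  if (d < CLOSE_DIST) {
    armed = true;             // the "close" threshold arms the gesture
  } else if (armed && d > FAR_DIST) {
    armed = false;            // the "far" threshold then triggers the zoom
    startZoomTransition();
  }
}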

Extra details and screenshots

I added a home page to show users the hand gestures and extra button functionalities.

Screen recording: 2025-10-07 at 00.17.22

Challenges

Coming up with creative ideas for this project was challenging, and implementing the zooming feature was especially difficult since I had never attempted it before. Getting it to work smoothly took a lot of trial and error.

These p5 examples were helpful – https://editor.p5js.org/mimimimimi/sketches/SOkckqY_r and https://editor.p5js.org/Luxapodular/sketches/rk__bPdcm – but so was simply experimenting with the ease-in and ease-out values to make the zoom feel as natural as possible.

// ===== TRANSITIONS =====
// initiate zoom transition between scenes
function startZoomTransition() {
  isTransitioning = true;        // flag to indicate transition is active
  transitionProgress = 0;        // reset
  
  // Play zoom sound for every transition at 50% volume (if not muted)
  if (zoomSound && !isMuted) {
    zoomSound.setVolume(0.5);
    zoomSound.play();
  }
}

// update for each frame
function updateTransition() {
  if (!isTransitioning) return;  
  
  transitionProgress += 0.03;    // increment by 3% each frame 

  //check if 100% (1)
  if (transitionProgress >= 1) {
    isTransitioning = false;     // stop transition
    transitionProgress = 0;      // reset
    currentPage = currentPage === SCENES.length - 1 ? 1 : currentPage + 1;
    playSceneSound(); // Play sound for the new scene
  }
}

// applies visual zoom effect during transitions
function applyCameraTransform() {
  // create smooth easing curve: slow start, fast middle, slow end
  const easeT = transitionProgress < 0.5
    ? 4 * transitionProgress ** 3      // first half: cubic ease-in
    : 1 - (-2 * transitionProgress + 2) ** 3 / 2;  // Second half: cubic ease-out
  
  // calculate zoom level: smoothly interpolate from 1x to 100x zoom
  const zoom = lerp(1, 100, easeT);
  
  // get the target point to zoom into for current scene
  const [x, y] = SCENES[currentPage].zoomTarget;
  
  // apply camera transformation:
  translate(x, y);     // move to zoom target point
  scale(zoom);         // apply zoom scaling
  translate(-x, -y);   // move back to keep target centered
}

final code – https://editor.p5js.org/kk4827/sketches/9CleTb6y1

Midterm Project – Shahram Chaudhry

The Sketch

https://editor.p5js.org/sc9425/full/RnrYJ2fls

Concept Development and Final Concept

I originally imagined this project more like a game, where users would have a limited time to quickly label memories as good or bad, and discard the bad ones to “win.” The goal was simple: clean up the mental space by getting rid of what weighs us down. But as I worked on it more, especially while curating the kinds of memories to include, my perspective started to shift.

I realized memories aren’t always black or white. They’re messy, layered, and often emotionally ambiguous. A single moment can carry joy and pain, nostalgia and regret. So the project evolved. Rather than forcing users to judge a memory under a timer, I wanted to create a quieter, more reflective experience, one where the user has gentle control: to reveal, sit with, or discard memories at their own pace.

For instance, I studied abroad in Paris and found it magical: exploring the city, trying new foods, feeling independent. But I recently came across a post by someone who had a completely different experience there. They couldn’t afford daily subway rides, had to walk 6.5 kilometers to class, and got by on snacks. For them, Paris wasn’t the city of love, it was a daily struggle. That contrast stuck with me. Same place, completely different emotional weight. And that’s what Mind Palace became about: subjective memories, and giving people space to decide what they mean and what to do with them.

In terms of the UI, I think I made meaningful improvements during development. Initially, I had a simpler design with a pink color scheme, thinking it would naturally signify the brain or mind because that’s the color of the brain icon. However, when I showed it to classmates, several of them were confused about what it represented. Based on that feedback, I decided to pivot. I found an actual artistic image of a brain online that better communicated the theme, and I reduced its transparency so it wouldn’t overpower the rest of the experience. This way, the background sets the mood and context without distracting from the interactive elements.

The previous design was:


The final design:

How It Works

The Mind Palace starts with a simple instruction screen. Once the user clicks to begin, memories, represented as floating film icons (an image often associated with recorded moments), gently drift across the screen.

The user interacts using just their index finger, tracked by the webcam. Initially, I had a gesture (open palm) to reveal a memory, but after feedback in class, I realized it felt a bit unintuitive. So I simplified it: now just hovering over a memory for 2 seconds reveals it. This made the interaction smoother and avoided asking users to remember too many gestures.
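A minimal sketch of that dwell timer (hoveredMemory and hoverStart are stand-in globals; finger is the tracked fingertip position in canvas coordinates, and r is an assumed hit radius on each memory):

let hoveredMemory = null;
let hoverStart = 0;

function checkHover(finger, memories) {
  // find the memory, if any, under the fingertip
  const m = memories.find(mem => dist(finger.x, finger.y, mem.x, mem.y) < mem.r);
  if (m && m === hoveredMemory) {
    if (millis() - hoverStart > 2000) m.revealed = true; // dwelled for 2 seconds
  } else {
    hoveredMemory = m || null; // new target (or none): restart the timer
    hoverStart = millis();
  }
}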

Once a memory is revealed and the user has had a chance to read it, they can discard it using a thumbs-down gesture. I made sure users can’t hover over and directly discard a memory before it has been revealed; otherwise they would just be discarding random memories. To make the gesture recognition more robust and avoid accidental deletion, I also required the thumbs-down gesture to be held for a second.

For resetting the experience, I originally thought about using an “OK” gesture, like saying “I’m done.” But since reset is a pretty major action, and misfires could be annoying, I decided to keep it simple: users just press the Escape key. It also felt kind of full circle, like they press a button to enter and a key to exit. Throughout, I focused on keeping things intuitive and reflective, giving the user space to engage with each memory calmly, without rushing.

Each memory is intentionally ambiguous. For example: “The last message I never replied to”

This could evoke very different emotions depending on the person engaging with it. For some, it might feel empowering, a sign of setting boundaries, moving on, or finally letting go of something that no longer serves them. For others, it might bring up guilt, anxiety, or a lingering sense of “what if.” That’s the heart of the project: recognizing that memories aren’t fixed in meaning. What feels like healing to one person might feel like avoidance to another. By keeping the memories vague yet emotionally charged, I encourage reflection, allowing each user to project their own story onto them.

I’m especially proud of implementing gesture recognition. It’s something I’d seen at IM showcases before, but I didn’t think I’d be able to do it myself. Understanding hand landmarks and translating them into reliable, smooth gestures took time, but I managed to make it functional and fairly intuitive. Here’s the core gesture logic I used:

// thumbs-down = thumb tip below the wrist with all four fingers curled
function isThumbsDown(landmarks) {
  const thumbTip = landmarks[4];
  const wrist = landmarks[0];
  return (
    thumbTip.y > wrist.y &&        // thumb pointing down
    !isFingerUp(landmarks, 8) &&   // index curled
    !isFingerUp(landmarks, 12) &&  // middle curled
    !isFingerUp(landmarks, 16) &&  // ring curled
    !isFingerUp(landmarks, 20)     // pinky curled
  );
}

// a finger counts as "up" if its tip sits noticeably above its middle joint
function isFingerUp(landmarks, tipIndex) {
  const midIndex = tipIndex - 2;
  return (landmarks[midIndex].y - landmarks[tipIndex].y) > 0.05;
}

I also made some simple but thoughtful design choices like placing the webcam feed at the top so users can always see if they’re in frame. That helped during testing and made the interaction clearer.

Challenges and Improvements

Gesture recognition was a big concern for me. It’s surprisingly tricky to get right: too strict, and gestures feel frustrating to perform (and even to code); too loose, and false positives ruin the experience. One major challenge was simply understanding the hand landmark system: there are 21 tracked points per hand, and it took a while to learn which ones corresponded to each finger joint and how to use them meaningfully in gesture logic.

At first, I tried more complex calculations for gestures, but it quickly stopped feeling intuitive. Users had to “perform” gestures perfectly, and the experience lost its flow. Now I’ve simplified it: instead of complicated checks, I just use the thumb and index finger landmarks in straightforward ways, plus a timing delay. For example, the thumbs-down gesture only triggers if it’s held for one full second. This makes it much harder for it to fire accidentally while still keeping the interaction easy and natural for users.
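The timing delay itself is just a timestamp check layered on top of the isThumbsDown() function from earlier (thumbsDownSince and discardMemory are stand-in names, not my exact code):

let thumbsDownSince = null;

function updateThumbsDown(landmarks) {
  if (isThumbsDown(landmarks)) {
    if (thumbsDownSince === null) {
      thumbsDownSince = millis();       // the hold begins
    } else if (millis() - thumbsDownSince > 1000) {
      discardMemory();                  // only fires after a full second of holding
      thumbsDownSince = null;
    }
  } else {
    thumbsDownSince = null;             // gesture broken, so the hold starts over
  }
}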

Another improvement would be adding variety, either by generating new memory phrases dynamically or letting users add their own. Right now, the memory list is static. Adding this level of customization could make each user’s Mind Palace feel more personal. I also think sound effects tied to each gesture (reveal, discard, reset) would enhance immersion and make the interactions feel more responsive.

 



Week 6 – Midterm Project Documentation

Copy/paste into your browser for the full-screen experience:

https://editor.p5js.org/AsmaAlMurr/full/i15QXvk3g

Overall Concept

My project, Majlis Madness, is an interactive game that introduces players to Emirati cultural traditions through play and memory. The game is set inside a majlis, a traditional gathering space where families and friends share hospitality, food, and conversation. Instead of just learning about this culture through text, the game engages the player with a memory-sequence challenge: they must remember the order in which Emirati snacks “glow” and then repeat the sequence. The player has three lives, symbolized by heart icons at the top left. The concept combines cultural storytelling and information with a fun, simple game design so that players learn something new while having an enjoyable experience.

Originally, my idea was to design the game around plants in a garden, where players would memorize the sequence of flowers. While this concept was visually appealing, it didn’t feel personal enough to me. I realized that using snacks in a majlis would be more meaningful, since it directly connects to my own cultural experiences and memories. So I kept the prototype’s basic bones as a shell and built a better version of the game on top of them. This shift made the game feel more authentic and gave it a stronger connection to my Emirati traditions.

My Original Game Idea (Prototype):

Inspiration:

The design of Majlis Madness aims to blend visuals, sounds, and interactivity: background images depict the majlis, oud music plays in the background to create atmosphere, and traditional snacks like Vimto, laban, chips, and ice pops become the central objects of the memory challenge. This makes the project both playful and informative, highlighting how cultural spaces like the majlis combine hospitality, tradition, and fun.

On a personal level, I feel a deep connection to the majlis because it has always been at the center of family and community life for me. Growing up, it was the place where I learned the value of gathering, listening, and sharing food. By recreating it in my game, I wanted to honor this space and give players a sense of its warmth, cultural meaning, and social importance. For me, the project is not only about coding a game but also about carrying forward traditions that shaped my own experiences.

For the background of the game, I was inspired by the traditional majlis setting, with its patterned carpets, red cushions, and lanterns that create a warm, communal atmosphere. This space felt like the perfect environment to represent Emirati culture, since the majlis is where people come together to share food, stories, and hospitality. 

For the sound design, I wanted it to feel authentic and true to the spirit of the majlis. After struggling to find the right audio online, I decided to ask a few local friends for inspiration. Their suggestions helped me discover tracks that carried the warmth and cultural depth I was aiming for, which made the game atmosphere feel much more genuine. I decided to stick to the classic tradition of the oud instrument, as that is what both my friends and I associate with the majlis setting.

Screenshot of the WhatsApp conversation where I asked for help selecting the sound:

Here is an image of an oud instrument for those who have never seen one:

Soundtrack of Abu Dhabi | National Geographic

How It Works: (Game mechanics)

When the player loads the game, they first see a cover screen with a logo for the game (Majlis Madness) and two options. Pressing ‘Enter’ takes them to a welcome page that introduces the Emirati majlis and explains its cultural importance. There is also an instructions page that gives step-by-step directions on how to play. Once the player starts, they watch a glowing sequence of snacks and then try to click them back in the same order. Each correct click is rewarded with a glow effect and a positive sound, while mistakes trigger an error sound, a red X, and eventually a gameover screen. If the player completes a sequence, they level up and face a longer, more challenging sequence.

Technically, the game uses a state machine to move between phases such as “cover”, “welcome”, “instructions”, “waiting”, “show”, “play”, “win”, and “gameover”.  Images and audio files are preloaded, and the layout is made responsive so the game can adapt to fullscreen sizes.
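In sketch form, the state machine is just a switch in draw() (the state names are the ones listed above; the per-state drawing functions are hypothetical stand-ins for the actual helpers):

let state = "cover";

function draw() {
  switch (state) {
    case "cover":        drawCover();        break;
    case "welcome":      drawWelcome();      break;
    case "instructions": drawInstructions(); break;
    case "waiting":      drawWaiting();      break;
    case "show":         drawSequence();     break; // play the glowing sequence
    case "play":         drawPlayerTurn();   break; // player clicks the snacks back
    case "win":          drawWin();          break;
    case "gameover":     drawGameOver();     break;
  }
}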

Planning and Design

(CHAT GPT WAS USED FOR SPECIFIC ARTISTIC ELEMENTS)

Before I wrote a single line of code, I began designing the game on paper. Sketching out screens and flows helped me plan the user experience in a structured way. I connected this process with UI concepts we had learned in class, like keeping instructions simple, providing clear feedback, and creating intuitive navigation between states. Having this roadmap made the actual coding process smoother, since I already had a clear vision of how each part of the game should look and feel.

Initial Planning Sheet:

Whiteboard In Class:

For the visual elements, I combined resources from different places. I gathered some reference images from Google (like snacks and majlis items) and then used ChatGPT to generate cartoon versions of these objects, which I further tweaked in Procreate. This gave the game a playful and consistent art style while still grounding it in recognizable Emirati cultural elements. I liked how this workflow let me balance authenticity with creativity, bringing everyday cultural objects into a polished, game-ready format. For more specific assets, like the glow feature in the game, I used Procreate on my iPad to draw a glowing circle, because that was too difficult to find on Google Images, so it was better to create it on my own.

I am especially proud of the way my project mixes cultural storytelling with technical interactivity. Adding atmospheric oud music, using Arabic text (“مرحبا”) alongside English, and visually highlighting Emirati snacks that most locals associate with their childhood makes the game feel culturally rich. From a technical perspective, organizing the code into states and using a class for snacks makes the project more readable and maintainable, while still handling animation, audio, and user feedback effectively.

Code Snippet:

I am particularly proud of the code that handles snack animations and misclick feedback, because it brings the game to life and makes the experience feel much more polished. The way it checks for shaking when a player clicks the wrong snack, enlarging when a snack is chosen, and glowing during active play adds personality to each object and makes the interactions more satisfying. I also like how the red X sign briefly appears on a misclick before moving to the gameover state; it gives the player clear feedback without being overwhelming. For me, this section shows how I was able to combine logic and creativity: not just making the game functional, but adding expressive details that make it feel engaging and fun.

// Draw snacks with active animations
  for (let i = 0; i < snacks.length; i++) {
    const shaking = wrongIndex === i && wrongShakeFrames > 0;
    const enlarging = clickedSnack === i;
    const glowOnTop = enlarging && state === "play";
    snacks[i].draw(shaking, enlarging, glowOnTop);
  }
  if (wrongShakeFrames > 0) wrongShakeFrames--;

  // when showing a misclick, display the red X sign briefly then go to gameover
  if (state === "misclick") {
    if (wrongIndex >= 0 && xImg) {
      const s = snacks[wrongIndex];
      image(xImg, s.x + s.w / 2 - 30, s.y - 40, 60, 60);
    }
    misclickHold--;
    if (misclickHold <= 0) state = "gameover";
    return;
  }

 

Debugging 🙁

Debugging turned out to be one of the most challenging but also most important and rewarding parts of this project. There were times when I stared at the same piece of code for hours and couldn’t see what was wrong, and I realized I needed fresh eyes, either by stepping away and taking breaks or by asking someone else to look at it with me. That process often helped me notice small mistakes I had been overlooking. The class we had last week on debugging strategies ended up being far more useful than I expected for a project of this size. It gave me practical techniques, like breaking problems into smaller parts and testing sections of the code separately, which saved me a lot of time and frustration.

Me when I find the bug after looking for hours, just to realize I had spelt the word “function” wrong:

Debugging 101: r/ProgrammerHumor

Challenges and Areas for Improvement

One of the biggest challenges I ran into was learning how to use states (this made me want to cry), since I had never worked with them before. At first, it was confusing to manage the different phases of the game and make sure each part made sense (like the cover screen, instructions, and gameplay) and transitioned smoothly. Over time, I began to understand how states could structure the flow and make the game easier to organize. Another challenge was finding traditional Emirati audio that felt authentic and added to the mood of the majlis setting. I wanted the sound to truly capture the atmosphere, so it took extra effort to search for the right oud tracks that matched the visuals and theme of the game.

For improvements, I would like to add more cultural depth to the game, such as different levels themed around other parts of Emirati hospitality or new backgrounds showing other Emirati cultural settings. Technically, the game could also benefit from smoother animations, for example, fading glows, more creative transitions between states, and more positive feedback when a player levels up. While the core mechanics and atmosphere work well, as I learned in this course there is ALWAYS room to expand our games, whether that’s in terms of storytelling or polishing the technical aspects. Overall, I’m very happy with how this turned out.