Week 8: Reading Response

Norman’s ‘Emotion & Design: Attractive Things Work Better’ is a great complementary work to ‘The Design of Everyday Things,’ which we read earlier this semester. At first the perspective may seem contrasting, but on closer look it becomes apparent that it is not an opposition to his previous work but a build-up: the two works tie together to form the full picture of the essential elements of design. After reading his previous work, I had sharpened my focus on functionality and usability in my own work without considering that aesthetics go hand in hand with them. A work that is appealing to look at encourages attention and engagement, since human nature is to be driven by aesthetics and visuals, sometimes even beyond usability. That should not take attention away from the importance of usability, though, which is why the puzzle is incomplete without the pieces provided by Norman’s previous work.

This work particularly brought to my attention the importance of positive affect in encouraging creativity and breadth of thinking in design, and it got me thinking about how to contextualize this in my upcoming projects. Part of creating interactive art is making works that encourage exploration of their themes and functionality. To achieve that effect, usability is essential so that users have guidance on what to do, but it is just as important to focus on aesthetics that invite this exploration in the first place. While the importance of positive affect is undoubtedly present in most designs, its importance is exceptionally clear in the design of interactive art works and projects. Therefore, the intersection of usability and aesthetics is an important discussion to have at the beginning stages of the design journey, so the foundation is strong in both aspects. That said, I do believe that in certain cases usability might prove more essential to a project than aesthetics, or vice versa, depending on the nature of the situation.

Further, McMillan’s ‘Her Code Got Humans on the Moon’ about the programmer Margaret Hamilton was an inspiring account of the career beginnings of a woman who navigated her way through software engineering in its earliest days. Beyond her impressive feats, reading the article was an interesting dive into the world of programming and engineering before it became what we know today. The method of punching holes in stacks of punch cards to guide the threading of copper wires through magnetic rings is a clear indication of how much the field of software engineering has progressed. In particular, the lack of urgency around error-checking code contrasts sharply with today’s practices, where error checking has become a central part of programming. Most systems now are responsible for so many everyday tasks and manage the flow of large amounts of data, which is why they cannot afford to face errors. The article made it clear, though, that mistakes such as the ones that happened with Apollo were necessary to understand the importance of such safeguards, which revealed how foundational the work being done by Hamilton and the rest of the team was to today’s software engineering industry.

Week 8 Reading Response

What I immediately noticed in the readings is how both Don Norman and Robert McMillan challenge how we define functionality; Norman through the psychology of aesthetics, and McMillan through the ingenuity of software engineering. Reading “Emotion and Design: Attractive Things Work Better” made me question something simple yet profound: why do I find certain interfaces or objects “trustworthy”? Norman’s claim that “attractive things work better” stayed with me because it connects emotion to cognition, arguing that beauty is not decoration but an active force in usability. His description of positive affect broadening creative thought resonated with me, especially when I considered my own design projects in other Interactive Media courses I have taken. When a prototype looks cohesive and inviting, I find myself more patient while debugging it; frustration fades faster. Norman’s teapot metaphor illustrates this perfectly: the emotional experience of interacting with a design changes how we perceive its flaws.

In contrast, McMillan’s “Her Code Got Humans on the Moon” celebrates the emotional labor and intellectual rigor behind Margaret Hamilton’s software for Apollo 11. I was surprised by how Hamilton’s story complicates the idea that engineering is purely rational. Her insistence on accounting for human error, writing software that could correct an astronaut’s mistake, echoes Norman’s belief that design must accommodate emotion and imperfection. When Hamilton’s code prevented a lunar crash due to a data overload, it wasn’t just logic at work but empathy, the foresight to design for failure.

Together, these texts made me rethink the separation between “soft” and “hard” skills in design. Emotion and logic, art and code, are not opposites but co-creators of reliability. I’m left wondering: in a future dominated by AI systems, can machines be designed to “care” the way Hamilton’s software did, to anticipate human error with grace?

Week 8 – reading

Her Code Got Humans on the Moon

Margaret Hamilton’s story resonates with me as an aspiring software engineer, especially seeing how she navigated a world that wasn’t built for her. She was able to bring her daughter Lauren to the lab on weekends, letting her sleep on the floor while she coded into the night. That choice wasn’t just about balancing work and family, but about showing that both are achievable. It also ended up saving a mission: when Lauren accidentally crashed the Apollo simulator by pressing P01 during flight, Hamilton saw the danger immediately and warned NASA. They brushed her off, insisting astronauts were too perfect to make mistakes, and did not take her concern seriously. But during Apollo 8, astronaut Jim Lovell did exactly what Lauren had done, wiping out all the navigation data. Hamilton and her team spent nine hours finding a fix to bring the crew home. Hamilton wasn’t just writing code; she was inventing the entire idea of software engineering in real time, creating the practices we still rely on today. Her work reminds me that the best engineers aren’t the ones who assume everything will go perfectly, but the ones who plan for when it doesn’t. Her habit of thinking through every branch of an action is what makes her an incredible software engineer.

Attractive Things Work Better

As someone studying computer science, Norman’s argument that “attractive things work better” initially felt strange, like permission to prioritise aesthetics over functionality. But it makes sense: good design should balance aesthetics and usability, creating experiences that are both functional and resonant. What really struck me was his point about positive affect making us more tolerant of minor difficulties. When I’m working with tools that feel good to use, I don’t rage-quit when I hit a bug. But when I’m already stressed and the interface is terrible, every small friction angers me more. This is why critical systems, like hospital applications, should be completely simple and understandable, while something non-critical like a coffee ordering app can afford to prioritise delight over efficiency.

However, I’m uncertain whether beauty can truly compensate for poor usability. Norman says “when we feel good, we overlook design faults,” but this happens far too often with modern apps. Apple’s system apps, from the clock to the calculator, are aesthetically beautiful but frustratingly impractical for users who need advanced features.

Still, I agree with his main point: we’re not computers evaluating products on pure utility. We’re emotional beings, and our feelings genuinely affect our performance. As engineers, we should build things that not only work but also make people feel capable and confident.


Midterm Project – Hogwarts Experience

Concept

Hogwarts Experience is an interactive web experience inspired by the whimsical world of Hogwarts.
It blends a classic sorting quiz, a maze challenge, and wand selection into one compact game built entirely in JavaScript (using p5.js).

The idea was to explore how storytelling, visuals, and interactivity could merge to create something that feels alive; something more than just a quiz or a mini-game.

Inspiration

My inspiration came from my fascination with how J.K. Rowling’s use of symbols (a hat, a house, a wand) in Harry Potter explores identity and choice. I wanted to capture that feeling of “who am I?” in a lightweight browser experience.

Technically, this project was also a personal experiment:
how far can I go with only vanilla p5.js, with minimal frameworks and assets, and what can be drawn or generated purely in code?

Visual Elements

The visuals are all hand-coded with p5.js shapes and color palettes chosen to reflect the four houses:

  • Gryffindor: warm reds and golds, courage in motion
  • Ravenclaw: deep blues and calm precision
  • Hufflepuff: mellow yellows and earthy tones
  • Slytherin: sleek greens and silvers, a hint of ambition

[I got the color codes from codepen.io]

The wand selection features small glowing particle bursts when you find your correct match, a simplified particle system I built directly into the Wand class.
It’s minimal but expressive: circles of light that rise and fade like sparks from Ollivander’s wand shop.

Interaction & Controls

  • The quiz is fully clickable — each answer dynamically updates your house “weight.”
  • Once sorted, you navigate a small maze using arrow keys (or WASD).
  • You can activate your house ability with a single keypress (spacebar).
  • The wand test responds to clicks, showing visual feedback for correct or incorrect matches.

Each stage was designed to feel self-contained but connected, a simple rhythm of choice, discovery, and action.
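
As a sketch, the quiz's house-"weight" mechanic mentioned above could look like this. The names (`weights`, `recordAnswer`, `sortedHouse`) are illustrative assumptions, not the project's actual identifiers:

```javascript
// Each clicked answer nudges one house's weight up; the sorted house is
// simply the one with the highest accumulated weight.
const weights = { Gryffindor: 0, Ravenclaw: 0, Hufflepuff: 0, Slytherin: 0 };

function recordAnswer(house) {
  weights[house] += 1;
}

function sortedHouse() {
  return Object.keys(weights).reduce((a, b) => (weights[b] > weights[a] ? b : a));
}

recordAnswer("Ravenclaw");
recordAnswer("Ravenclaw");
recordAnswer("Slytherin");
console.log(sortedHouse()); // prints "Ravenclaw"
```

Keeping the tally as a plain object makes it easy to extend later, for instance by letting some answers add more than one point.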

Sound Design

Sound is subtle but intentional.
A soft background theme plays during the game, punctuated by short cues:

  • a shimmer when your wand responds,
  • a gentle whoosh for movement,
  • a celebratory chime when you win,
  • and a scary Dementor sound when you fail to exit the maze.

All sound events are managed with a simple sound registry that starts, stops, and restarts based on player state. I tried to get rid of any overlaps or chaos. I wanted it to add atmosphere without overwhelming the visuals.
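
A simplified sketch of such a state-based sound registry is below. The sound objects here are plain mocks with a `playing` flag; in the actual project these would be p5.sound players, and all names are assumptions:

```javascript
// Each entry knows which player state it belongs to; update() starts the
// sounds for the current state and stops everything else, so cues never overlap.
class SoundRegistry {
  constructor() { this.entries = []; }
  register(name, sound, state) { this.entries.push({ name, sound, state }); }
  update(currentState) {
    for (const e of this.entries) {
      e.sound.playing = (e.state === currentState);
    }
  }
}

const bgm = { playing: false };
const dementor = { playing: false };
const registry = new SoundRegistry();
registry.register("theme", bgm, "playing");
registry.register("dementor", dementor, "lost");
registry.update("playing");
console.log(bgm.playing, dementor.playing); // prints: true false
```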

Code Architecture

The game is built around a few modular classes:

  • Question → handles quiz text, answers, and house mapping
  • Player → manages movement, collision, and ability use
  • Enemy → manages the enemies in the maze
  • Wand → merges wand logic and particle effects for magical feedback
  • GameManager (lightweight) → controls flow between quiz, wand test, and maze

Each class does one job well.
The code favors clarity over complexity; the division into classes makes it readable, flexible, and easily expandable.
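
The lightweight GameManager described above could be as small as a tiny state machine; this is a sketch under assumed stage names, not the project's actual code:

```javascript
// Steps the game through its stages in order: quiz → wand test → maze → done.
class GameManager {
  constructor() { this.stage = "quiz"; }
  advance() {
    const order = ["quiz", "wand", "maze", "done"];
    const i = order.indexOf(this.stage);
    if (i < order.length - 1) this.stage = order[i + 1]; // stop at the last stage
  }
}

const gm = new GameManager();
gm.advance(); // quiz → wand
gm.advance(); // wand → maze
console.log(gm.stage); // prints "maze"
```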

Code Snippet to Highlight

// Wand.test(): fires a particle burst and reports whether this wand matched
test() {
    if (this.isCorrect) {
        this.glowing = true;
        // a bigger burst (20 sparks) celebrates a correct match
        for (let i = 0; i < 20; i++) {
            this.particles.push({
                x: this.x,
                y: this.y,
                vx: random(-3, 3),   // small sideways drift
                vy: random(-5, -1),  // always upward, like rising sparks
                life: 60,            // frames until the spark fades out
                success: true
            });
        }
        return true;
    } else {
        // a smaller burst (10 sparks) signals a wrong match
        for (let i = 0; i < 10; i++) {
            this.particles.push({
                x: this.x,
                y: this.y,
                vx: random(-3, 3),
                vy: random(-5, -1),
                life: 60,
                success: false
            });
        }
        return false;
    }
}

It’s small, but it brings the world to life, literally adding a sparkle of magic when you choose correctly.
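
One plausible per-frame update for those particles is sketched below. It is an assumed companion function (the field names match the snippet, but the actual Wand class may handle this differently):

```javascript
// Moves each spark, counts down its life, and removes it once it has faded.
function updateParticles(particles) {
  for (let i = particles.length - 1; i >= 0; i--) {
    const p = particles[i];
    p.x += p.vx;
    p.y += p.vy;      // vy is negative, so the sparks rise
    p.life -= 1;      // count down toward fade-out
    if (p.life <= 0) particles.splice(i, 1); // iterate backwards so splice is safe
  }
}

const sparks = [{ x: 0, y: 0, vx: 1, vy: -2, life: 2, success: true }];
updateParticles(sparks); // frame 1: spark moves to (1, -2), life drops to 1
updateParticles(sparks); // frame 2: life hits 0, spark is removed
console.log(sparks.length); // prints 0
```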

Future Additions

  • Better sprites & art direction: hand-drawn assets for characters, wands, and the maze walls
  • Fullscreen adaptive display: scaling visuals gracefully across devices
  • House competition system: each player’s score stored via browser cookies or localStorage, allowing a shared “House Points” leaderboard
  • Integration with ml5.js: experimenting with emotion or gesture recognition to let your facial expression or hand movement influence sorting outcomes

Each of these is a small step toward a more responsive, immersive experience, a bit closer to real enchantment.
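
For the "House Points" idea, the storage logic could be sketched like this. It assumes the browser's localStorage API; a plain object with the same getItem/setItem shape stands in here so the logic can run anywhere, and all names are illustrative:

```javascript
// Reads the shared board, adds points for one house, and writes it back.
function addHousePoints(storage, house, points) {
  const board = JSON.parse(storage.getItem("housePoints") || "{}");
  board[house] = (board[house] || 0) + points;
  storage.setItem("housePoints", JSON.stringify(board));
  return board;
}

// Stand-in for window.localStorage (same getItem/setItem interface).
const fakeStorage = {
  data: {},
  getItem(key) { return key in this.data ? this.data[key] : null; },
  setItem(key, value) { this.data[key] = value; },
};

addHousePoints(fakeStorage, "Hufflepuff", 10);
const board = addHousePoints(fakeStorage, "Hufflepuff", 5);
console.log(board.Hufflepuff); // prints: 15
```

Because the board is serialized as JSON, the same structure would survive page reloads in a real browser session.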

MIDTERM – Bad Trip


Welcome to my midterm project, Neo-Euphoria Visions, an interactive audiovisual artwork created with p5.js. This project is an exploration into surreal, psychedelic self-portraiture, heavily inspired by the distinct visual language and emotional tone of the HBO series Euphoria. It uses your webcam to pull you into a hallucinatory world that reacts to your presence, blurring the lines between the viewer and the viewed.

The experience is a multi-layered trip. It begins with a simple invitation before slowly transforming your reality. Your image is recast in a cold, UV-and-pink color palette while a motion trail ghosts your every move. A glowing aura emanates from your silhouette, and hand-doodled stars twinkle into existence around you. The piece is designed to be both beautiful and unsettling, contrasting the cold, trippy visuals with organic, hot-red tears that bleed from your eyes. With a dynamic soundtrack and automated visual shifts, the goal is to create a mesmerizing, ever-changing, and deeply personal digital hallucination.


Live Sketch

You can experience Neo-Euphoria Visions live in your browser by clicking the link below.


Screenshots

How It Works & What I’m Proud Of

This project is built on a foundation of p5.js, but its soul lies in the integration of two powerful libraries: ml5.js for computer vision and GLSL for high-performance graphics shaders. The entire visual output, from the colors to the background effects, is rendered in a single, complex fragment shader that runs on the GPU. This was a critical technical decision that allows for multiple layers of real-time effects without the performance lag that would come from CPU-based pixel manipulation.

The core mechanic involves layering several computer vision and graphical processes. First, ml5.js BodyPix creates a segmentation mask of the user, which is fed into the shader. This mask allows me to separate the scene into three distinct layers: the background, a glowing “aura” directly behind the user, and the user themselves. The shader then applies different artistic effects to each layer. Simultaneously, ml5.js FaceApi tracks facial landmarks to determine the precise location of the user’s eyes. This data is used by a custom Tear class in p5.js, which draws organic, flowing tears on a transparent overlay canvas, making them appear attached to the face. I’m particularly proud of the logic that makes the tears “follow” the eyes smoothly by interpolating their position between frames, which prevents the jittery tracking that can often occur.

JavaScript

// A snippet from the Tear class showing the smooth position update
updatePosition(newX, newY) {
    if (this.path.length === 0) return;
    let head = this.path[0];
    
    // Use lerp to smoothly move the tear's origin towards the new eye position.
    // This prevents jittering if the face detection is noisy.
    let targetPos = createVector(newX, newY);
    let smoothedPos = p5.Vector.lerp(head, targetPos, 0.3);
    let delta = p5.Vector.sub(smoothedPos, head);

    for (let p of this.path) {
        p.add(delta);
    }
}

One of the best technical decisions was implementing a temporal smoothing feedback loop for the BodyPix mask. The raw mask from the model can be noisy and flicker between frames, creating harsh, blocky edges. By blending each new mask with the previous frame’s mask, the silhouette becomes much more stable and organic, which was essential for the “glowing aura” effect to work properly. Finally, the automated, timed switching between three distinct color palettes gives the project a life of its own, making the experience unpredictable and unique for every viewing.
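
Numerically, the temporal-smoothing feedback loop amounts to blending each new mask frame with the previous one. This is a minimal sketch: the blend factor 0.7 and the flat-array mask representation are assumptions, not the project's actual values:

```javascript
// Blends the previous frame's mask with the new one so flicker settles
// instead of jumping (keep = how much of the old mask to retain).
function smoothMask(prevMask, newMask, keep = 0.7) {
  return newMask.map((v, i) => keep * prevMask[i] + (1 - keep) * v);
}

// A pixel that flickers 0 → 1 → 0 moves only gently instead of snapping:
let mask = [0];
mask = smoothMask(mask, [1]); // ≈ 0.3
mask = smoothMask(mask, [0]); // ≈ 0.21
console.log(mask[0]);
```

Raising `keep` makes the silhouette more stable but slower to follow fast movement, which is the trade-off this kind of feedback loop always carries.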

Glsl

// A snippet from the fragment shader showing the palette switching logic
void main() {
    // ...
    vec3 personPalette_phase;
    vec3 auraPalette_phase;

    if (u_activePalette == 1) { // Palette 1: Haze & Fire
        personPalette_phase = vec3(0.0, 0.1, 0.2); // UV
        auraPalette_phase = vec3(0.1, 0.15, 0.20); // Yellow/Orange
    } else if (u_activePalette == 2) { // Palette 2: Electric Pink/Cyan
        personPalette_phase = vec3(0.6, 0.7, 0.8); // Deep UV
        auraPalette_phase = vec3(0.5, 0.6, 0.7); // Pink/Cyan
    } else { // Palette 3: Cold UV
        personPalette_phase = vec3(0.5, 0.6, 0.7); // Deepest UV
        auraPalette_phase = vec3(0.8, 0.9, 1.0); // Electric Blue/Violet
    }
    // ...
}

[Screenshot showing the warmer “Haze & Fire” palette in action.]

Problems and Areas for Improvement

The single biggest challenge I encountered during development was a series of stability issues related to the ml5.js library. I initially ran into persistent “… is not a function” errors, which, after extensive debugging, I discovered were caused by a major version update (v1.0.0) that had deprecated the FaceApi model I was using. The solution was to lock the project to a specific older version (v0.12.2) in the index.html file. This was a crucial lesson in the importance of managing dependencies in web development.

Even after fixing the versioning, I faced a “race condition” where both FaceApi and BodyPix would try to initialize at the same time, causing one or both to fail silently. This resulted in features like the aura and glitch-zoom not working at all. I resolved this by re-architecting the setup process to “chain” the initializations: BodyPix only begins loading after FaceApi has confirmed it is ready. This made the entire application dramatically more reliable. For future improvements, I would love to make the background effects more diverse and audio-reactive. Having the stars pulse or the colors shift in time with the bass of the music would add another layer of immersion. I could also explore using hand-tracking via HandPose to allow the user to “paint” or interact with the stars in the background, making the experience even more creative and personal.
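
The "chained" initialization described above can be sketched as nested ready-callbacks. The loader and callback names below are illustrative stand-ins, not the actual ml5.js calls:

```javascript
// Records the order in which the two model loaders run.
const loadOrder = [];

function loadFaceApi(onReady) {
  loadOrder.push("faceapi");
  onReady(); // in the real project this fires once the model has loaded
}

function loadBodyPix(onReady) {
  loadOrder.push("bodypix");
  onReady();
}

// BodyPix only begins loading after FaceApi has confirmed it is ready,
// which removes the race condition between the two initializations.
function initModels(done) {
  loadFaceApi(() => {
    loadBodyPix(() => {
      done();
    });
  });
}

initModels(() => console.log(loadOrder.join(" → "))); // prints "faceapi → bodypix"
```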


Midterm Project: An Unusual Case

Concept

An Unusual Case is a horror mystery game, inspired by escape rooms, where there are scattered clues around the game in each room, and the user will have to identify them to move on to the next room and solve the mystery. Else, they are forever trapped in the room. The concept aims for the user to both have an engaging and entertaining experience while thinking through the clues and pushing them to think outside the box. Focusing on the player’s problem-solving and creative skills. Clues never give away the answer directly, rather they push the player to apply critical thinking and pattern recognition.

To make the game even more engaging, the themes of the game visual design are darkness and the melancholy which tie in well with the color palette consisting of deep shadows and muted tones. The mood is further curated by the sound effects that go from very quiet background noises to loud and sudden jumpscares, a mix that is intentional to keep players on edge and engaged. This horror aspect is not just for the sake of enticing fear but also to pull the players into the emotional tension of the situation, thus making every choice and revelation feel more powerful. In the end, An Unusual Case has the ambition to be a combination of mental challenge, appealing story, and spooky atmosphere, giving players a very deep and engaging experience.



Code Snippet

draw() {
    // Perspective scaling for hallway
    let scaleFactor = 200 / wallDist;
    let w = wallSize * scaleFactor;
    let h = wallSize * scaleFactor;

    let cx = width / 2;
    let cy = height / 2;
    let left = cx - w / 2, right = cx + w / 2, top = cy - h / 2, bottom = cy + h / 2;

    // Draw hallway walls
    fill(20, 0, 0); noStroke();
    quad(0, 0, left, top, left, bottom, 0, height);
    quad(width, 0, right, top, right, bottom, width, height);

    // Fog effect
    this.fogAlpha = 40 + 20 * sin(millis() * 0.005);
    fill(0, 0, 0, this.fogAlpha);
    rect(width / 2, height / 2, width, height);

    // Door setup and glow animation
    let doorWfull = w / 3, doorH = h / 1.5;
    let doorX = cx, doorY = bottom - doorH / 2;
    doorBounds = { x: doorX, y: doorY, w: doorWfull, h: doorH };

    if (!doorOpen) {
        let elapsed = millis() - this.lastGlow;
        if (elapsed < 3000) this.glowAlpha = 60 + 40 * sin((elapsed / 3000) * TWO_PI);
        for (let i = 1; i <= 3; i++) {
            fill(255, 0, 0, this.glowAlpha / (i * 1.5));
            rect(doorX, doorY, doorWfull + i * 12, doorH + i * 12, 8);
        }
    }

    // Door animation and jumpscare trigger
    if (doorAnim > 0.4 && !this.jumpscareTriggered) this.triggerJumpscare();

    // Player movement and sound
    let moved = false;
    if (moveForward) { wallDist = max(50, wallDist - 2); moved = true; }
    if (moveBackward) { wallDist = min(800, wallDist + 2); moved = true; }
    if (moved && !footsteps.isPlaying()) footsteps.loop();
    else if (!moved) footsteps.stop();
}
The Hallway.draw() function is something I am exceptionally proud of because it is the technical core of the game, where several systems meet to create an engaging player experience. It handles complex animations: wall scaling to give the scene depth, glowing door effects to attract the player, and fog shading for ambiance. It combines player movement, door collision detection, footstep and door-interaction sound effects, and even a jumpscare mechanism, all with smooth transitions and low latency, bringing together the essence of the game. It marks the moment the player moves from curiosity to fear, which characterizes the game and really binds An Unusual Case together. I spent a lot of time on it because it is the first scene, and first impressions matter, so I wanted a hallway that communicated the message of the game through both aesthetics and functionality.

Embedded Sketch 

Game Design
The game is designed in such a way that the main elements are separated into classes where the game and the environment are organized at the same time. The Hallway class is the one in charge of rendering a perspective corridor for the main exploration area and controlling the movements of the players. Animations of the doors, glowing effects, jumpscare triggers, and other events are all part of the Hallway class’s responsibilities.

The wordGame class is a puzzle where the player needs to unscramble the letters to form the correct words in order to move ahead. This class is responsible for selecting the words to be scrambled, performing the scrambling, validating the input, displaying the result, and transitioning the player to the next room based on the success or failure of the player. Another module, FindObject, creates an interactive mechanic of search in darkness where the player using the flashlight effect looks for the hidden key, making the experience more fun and engaging. It uses the clipping effect of the flashlight, placement of the object, and cues for interaction.
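
The scramble-and-validate core of the wordGame mechanic could be sketched as a pair of small functions (a Fisher–Yates shuffle; these names are assumptions, not the project's actual API):

```javascript
// Returns the word's letters in a random order (Fisher–Yates shuffle).
function scramble(word) {
  const letters = word.split("");
  for (let i = letters.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [letters[i], letters[j]] = [letters[j], letters[i]];
  }
  return letters.join("");
}

// Validates the player's input, forgiving case and stray whitespace.
function checkGuess(guess, answer) {
  return guess.trim().toLowerCase() === answer.toLowerCase();
}

const puzzle = scramble("mystery"); // e.g. "tsyremy"
console.log(checkGuess("Mystery ", "mystery")); // prints true
```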
The officeReveal class works up to a narrative turning point by presenting a final interaction between the player and a document that reveals the mystery’s resolution. It is responsible for various scene elements, such as rendering, glowing highlights, paper enlargement, and reveal transitions where the player has the option to restart. Last but not least, the Character class is in charge of the rendering, scaling, and positioning of the player’s avatar to ensure there is no visual discontinuity among the scenes.
Reflection
The process of organizing the code and making sure that all the classes worked together was difficult, but it proved to be a great learning experience. Coordinating the hallway, rooms, mini-games, and animations took a lot of time, especially when trying to keep transitions smooth and the functions consistent. It also brought to my attention how much of what we learned in the readings, such as signifiers, can be integrated into our work: user testing showed me where more visual assistance was needed. I see potential in this project for future expansion, with more levels, different types of challenges, and improvements to the current ones to enrich the game and make it more captivating. I had a great time creating this interactive experience, and it has piqued my interest to keep working on it even after the class is over, experimenting with the features, improving the design, and making it a more complete and engaging mystery game.

Midterm Project: “Football Obstacle Game”

Concept:

The idea behind this project was to create an interactive football shooting game that combines skill, strategy, and dynamic difficulty. The player must shoot goals while avoiding defenders and a moving goalkeeper. The game incorporates increasing challenge as the player scores, adding both excitement and replay value.

I wanted the game to be simple enough for beginners to play, yet engaging enough to demonstrate programming concepts like collision detection, movement logic, and audio-visual feedback in p5.js.

Design and features of the game:

  • Player Controls: Use the arrow keys to move and mouse click to shoot.

  • Dynamic Defenders: Start with 3 defenders; a new defender is added every time the player scores.

  • Goalkeeper: Moves horizontally across the goal area and tries to block shots.

  • Score Tracking: Displays the player’s score at the top left.

  • Audio Feedback: Crowd cheers when the player scores, and a whistle plays when the player loses.

  • Background Image: I used a football pitch background image to visually represent the field.

  • Collision Detection: The game detects collisions between the player, ball, defenders, and goalkeeper to determine goals or game overs.

Code Highlight:

One of my favorite code snippets is the restart logic. Instead of refreshing the entire page, pressing “R” instantly resets the player, ball, defenders, and score to allow for a smooth restart.

Another code snippet I’m proud of is the collision detection function. This function checks whether the player or the ball collides with any of the defenders using the distance formula. It calculates the distance between two objects and compares it to their combined radii — if they overlap, the game ends.

// Restart after lose
  if (gameState === "lose" && (key === "r" || key === "R")) {
      Ball1.reset(Player1);
      Player1.reset();
      score = 0;

      // Reset defenders
      defenders = [];
      for (let i = 0; i < 3; i++) {
          let defenderX = random(width / 4, (3 * width) / 4);
          let defenderY = random(height / 6 + 50, (5 * height) / 6 - 50); // release below goal
          defenders.push(new Defender(defenderX, defenderY, 3));
      }
  }

function checkDefenderCollision() {
  for (let defender of defenders) {
    // Player collision
    let dPlayer = dist(Player1.x, Player1.y, defender.x, defender.y);
    if (dPlayer < 20 + 15) {
      loseGame();
      return;
    }

    // Ball collision
    if (Ball1.isMoving) {
      let dBall = dist(Ball1.x, Ball1.y, defender.x, defender.y);
      if (dBall < Ball1.radius + 15) {
        loseGame();
        return;
      }
    }
  }
}

Embedded Sketch:

This is a link to the sketch: https://editor.p5js.org/Jayden_Akpalu/sketches/_4tt_i0oG

Reflections & Future Improvements:

Through this project, I learned how to manage game states, implement collision detection, and integrate audio and visuals for a more immersive experience. I also improved my debugging and problem-solving skills — especially when aligning the goalkeeper, handling full-screen scaling, and ensuring the game reset logic worked correctly.

If I had more time, I would like to improve the game in several ways. First, I’d like to replace the simple circles used for the players and defenders with animated sprite characters. This will make the movement and shooting feel more realistic and visually engaging. For example, I could use a sprite sheet of a footballer running or kicking the ball, and animate it frame by frame when the player moves or shoots. Also, I’d like to add multiple levels of difficulty with new environments or faster defenders to make gameplay more dynamic. Finally, I’d love to create a high-score tracker, allowing players to save and compare their best performances.

Midterm Project; Operation: Campus Cat

Project Concept
Operation: Campus Cat is a fast-paced game inspired by the beloved community cats of NYU Abu Dhabi. Set against the backdrop of a stylized campus map, players must protect their food from a hungry, mischievous orange cat who roams freely and relentlessly across the scene. It’s a tongue-in-cheek interpretation of a very real situation many NYUAD students have experienced: trying to eat while a campus cat watches… and slowly approaches.

While planning this game, I intended to blend light strategy, reflex-based mechanics, and playful visuals based on NYUAD’s campus. The tone is humorous but still grounded in campus life, and, quite frankly, don’t expect a fantasy game about fighting cats; it is rather a funny tribute to the cats who rule the Interactive Media garden and food court. Operation: Campus Cat aims to turn a slice of real NYUAD culture into an accessible, replayable p5.js browser game. So, if you happen to be one of our campus cats’ victims, if they ever stole your food, I hope this makes you feel better in some way!

How the Game Works
Well, the core loop is pretty simple: food spawns randomly on the screen every few seconds, and the player must click the food before the cat reaches it. Each successful click earns 5 points, but if the cat eats a food item, the player loses 2 points and the “cat ate” counter goes up. Once the cat eats 5 items in a round, or the round timer hits 0, the player loses one of their 3 lives. Once all lives are gone, the game ends with a final score.
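
The round bookkeeping above (+5 per click, −2 per steal, a lost life after 5 steals) can be kept in a small state object; this is only a sketch with illustrative names, since the real game tracks this inside its classes:

```javascript
const round = { score: 0, catAte: 0, lives: 3 };

function playerClicksFood() {
  round.score += 5; // saved a food item in time
}

function catEatsFood() {
  round.score -= 2;
  round.catAte += 1;
  if (round.catAte >= 5) { // five steals in a round costs a life
    round.lives -= 1;
    round.catAte = 0;      // the "cat ate" counter resets each round
  }
}

playerClicksFood();
for (let i = 0; i < 5; i++) catEatsFood();
console.log(round.score, round.lives); // prints: -5 2
```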

The cat’s movement isn’t passive; it actively chases the nearest food using simple vector math. It glides across the campus map toward its next target, making the player prioritize which items to save. Clicking on the cat itself instead of the food will make it temporarily disappear (a “Signal Lost” message appears in its place), but doing so costs 3 points. Imagine you’re using a satellite to track the cats’ movement. This is basically it! This mechanic creates a high-stakes trade-off: delay the cat briefly, or focus on clearing food? Rounds last 60 seconds, and the player must keep moving fast and making strategic decisions.

A full-screen responsive HUD shows the score, remaining lives (as hearts), the number of missed food items in the current round, and a countdown timer. The game also features a start screen, instruction screen, and a game over screen, with appropriate transitions and buttons to replay or restart the session.

Code Snippet
Here’s the logic for the cat’s food-chasing behavior, which uses distance checks and angle math:

const angle = atan2(nearestFood.y - this.y, nearestFood.x - this.x);
this.x += cos(angle) * this.speed;
this.y += sin(angle) * this.speed;
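
The snippet above assumes a `nearestFood` target has already been chosen. A minimal way to pick it, assuming food items simply expose x/y coordinates (names here are illustrative):

```javascript
// Returns the food item closest to the cat, or null if none remain.
function nearestFood(cat, foods) {
  let best = null;
  let bestDist = Infinity;
  for (const f of foods) {
    const d = Math.hypot(f.x - cat.x, f.y - cat.y); // straight-line distance
    if (d < bestDist) { bestDist = d; best = f; }
  }
  return best;
}

const cat = { x: 0, y: 0 };
const foods = [{ x: 100, y: 0 }, { x: 3, y: 4 }];
console.log(nearestFood(cat, foods)); // prints: { x: 3, y: 4 }
```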

The Food class includes a pulse animation using a sine wave to make items feel more alive and clickable:

const pulse = map(sin(frameCount * 0.1 + index), -1, 1, 0.85, 1.15);

The game is organized using object-oriented design. The core classes are:

  1. Game: Overall state manager (start, play, game over)
  2. Cat: Handles cat behavior, movement, hiding state
  3. Food: Controls food spawning, visuals, and interaction
  4. HUD: Manages the interface and gameplay data display
  5. Button: A reusable UI component for menus and controls

Assets like images and sound effects are loaded via the Assets object, with fallback logic in case of load failure (e.g., drawing simple shapes instead of broken images). This ensures that even with missing files, the game still runs and remains playable.
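One way such fallback logic can be structured is shown below; this is a sketch under my own assumptions, not the project's exact code. p5's loadImage() accepts success and failure callbacks, and a small guard at draw time decides between the image and a simple shape.

```javascript
// Sketch of image-loading with a fallback (hypothetical asset name).
// In preload()/setup() one might write:
//   Assets.img.apple = null;
//   loadImage('apple.png',
//     img => { Assets.img.apple = img; },      // success: keep the image
//     () => { /* leave null; a shape is drawn instead */ });
//
// At draw time, this guard (matching the `img && img.width > 0` check
// used elsewhere in the post) picks image vs. fallback shape:
function isDrawableImage(img) {
  return img != null && img.width > 0;
}
```

Checking width > 0 also catches images that "loaded" but decoded to an empty bitmap, so the game never draws a broken image.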

What I’m Proud Of

  1. Game’s background

    I made this background in my freshman year while taking a core design class here at NYUAD. When I was looking for a drawing that shows our campus from above, this one worked perfectly! The only time-consuming part was finding the right palette, one that is both relatable and playful enough to suit the mood of the game. I decided to make it greener and shaped the top of the Campus Center to resemble the head of a cat (not sure if it shows). As I said, choosing the colors was challenging, so ChatGPT helped me with the color choice as well.
  2. One section of the code I’m particularly proud of is the pulsing animation inside the Food class. It’s a small visual detail, but it adds a lot of liveliness to the screen. Each food item subtly “breathes” using a sine wave function, making it feel dynamic and easy to spot. This animation helps guide player attention and makes the gameplay feel more polished.

    // Inside Food.draw()
    const pulse = map(sin(frameCount * 0.1 + index), -1, 1, 0.85, 1.15);
    const img = Assets.img.food[this.imageKey];
    
    if (img && img.width > 0) {
      push();
      translate(this.x, this.y);
      scale(pulse);
      imageMode(CENTER);
      image(img, 0, 0, this.size * 2, this.size * 2);
      pop();
    }
    

    This little animation uses sin(frameCount * 0.1) to smoothly oscillate each food’s scale over time, creating a soft pulsing effect. I like this snippet because it shows how much visual impact can come from just a few lines of math and thoughtful timing, no extra assets or libraries needed. It makes the entire game feel more animated and alive without adding any performance cost.

Challenges & Areas for Improvement

One of the biggest challenges was cat movement; initially the cat was too fast or too slow, or would teleport unexpectedly. I had to tune the speed and collision radius multiple times to make it feel fair. Similarly, I ran into trouble with image preloading: sometimes food items or the campus map would fail to load. I added fallback logic so that the game shows a colored circle if images fail.
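The collision-radius tuning mentioned here boils down to a circle-overlap test. A plain-JavaScript sketch (hypothetical names; the radii are the knobs that were tuned, not the game's actual constants):

```javascript
// Two circles collide when the distance between their centers is less
// than the sum of their radii. Shrinking the radii makes catches feel
// stricter; growing them makes the game more forgiving.
function circlesCollide(ax, ay, ar, bx, by, br) {
  const d = Math.hypot(bx - ax, by - ay);
  return d < ar + br;
}
```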

In terms of gameplay, it currently doesn’t scale difficulty; every round is the same length, and the cat moves at a constant speed. In future updates, I’d like to introduce progressive rounds where spawn intervals shorten and the cat gets faster. Other ideas include adding multiple cats, special food items, or power-ups like “cat repellent” or “freeze time.”

Lastly, while the game runs in fullscreen and resizes on window change, it’s not yet optimized for mobile/touch input, which would make it more accessible to a wider audience. Touch support and gesture input would be a major next step.

Midterm Project - Panda Math Stack - Elyazia Abbas

Concept: Learning Through Play with Panda Math Stack

For my midterm project, I chose to integrate a simple math-operation game into my p5 sketch. Panda Math Stack transforms simple arithmetic into an interactive adventure where learning meets play. Designed with cheerful visuals and smooth animation, the game gives a running panda the task of collecting pancakes, making math practice fun and rewarding. Players solve addition problems by catching and stacking pancakes that match the correct answer. With every correct answer, the panda celebrates as cheerful sound effects and background music enhance the sense of accomplishment.

The game’s 30-second timer adds excitement, encouraging quick thinking while keeping learners engaged. Beyond its playful surface, Panda Math Stack promotes cognitive growth, especially for kids, through repetition and visual reinforcement, showing that learning can be both joyful and challenging in the right balance of design, music, and motion.

Explanation of The Main Components:

1. Use of Sounds
Sound plays an important emotional and feedback role in Panda Math Stack. The game includes three primary sound effects: background music (bgMusic), a stacking sound (boinkSound), and a game-over sound (gameOverSound). The looping background track establishes an upbeat rhythm that keeps players engaged throughout gameplay. The short “boink” effect provides instant positive feedback every time a pancake successfully lands on the tray, reinforcing the satisfaction of correct stacking. Finally, the game-over sound signals the end of a round, helping players transition between play sessions. Together, these sounds create a responsive and immersive auditory experience that strengthens player focus and motivation.

2. Use of Shapes
Shapes in this project are used to render most visual elements directly in p5.js without relying on external images. Circles and ellipses form pancakes, clouds, and decorative symbols, while rectangles and triangles are used for UI buttons and the grassy ground. By layering and coloring these basic shapes, the design achieves a friendly, cartoon-like appearance. The pancakes themselves use multiple overlapping ellipses in different brown tones, giving them dimension and warmth. This approach demonstrates how simple geometric forms can produce visually appealing and cohesive game elements that maintain a light, playful aesthetic.

3. Use of Images for Sprite Sheet
The panda character’s animation is handled using a sprite sheet (panda.png or panda2.png), divided into multiple frames organized in rows and columns. Each frame represents a different pose of the panda walking or idle. During gameplay, the code cycles through these frames (currentFrame) to simulate movement. This sprite-based animation technique allows smooth transitions without heavy computation, making the character appear lively and expressive as it moves and interacts with falling pancakes. The use of images here contrasts with the drawn shapes, introducing a visually rich and character-focused element that enhances personality and storytelling in the game.
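The frame-cycling described above comes down to mapping a frame index to a source rectangle on the sheet, which p5's nine-argument image(img, dx, dy, dw, dh, sx, sy, sw, sh) can then crop. A plain-JavaScript sketch with hypothetical frame dimensions, not the project's exact code:

```javascript
// Given a sprite sheet laid out in rows and columns, compute the source
// rectangle (sx, sy, sw, sh) for frame n.
function frameRect(frame, cols, frameW, frameH) {
  const col = frame % cols;                // position within the row
  const row = Math.floor(frame / cols);    // which row of the sheet
  return { sx: col * frameW, sy: row * frameH, sw: frameW, sh: frameH };
}
```

In the draw loop, currentFrame would advance every few frames and wrap with a modulo, so the walk cycle loops smoothly.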

 

4. On-Screen Text Evolution
Text evolves dynamically to communicate progress and emotion to the player. At the start, large pixel-style fonts announce the title and instructions, creating a retro arcade feel. During gameplay, text displays math problems, timers, and feedback messages that change in color and size depending on correctness. On the end screen and in the notebook view, typography shifts toward clarity and encouragement, summarizing scores or reviewing mistakes. This continuous adaptation of on-screen text keeps information readable while reinforcing the game’s educational purpose, transforming numbers and feedback into part of the visual storytelling.

5. Object-Oriented Programming in notebook.js and pancake.js
Both notebook.js and pancake.js apply object-oriented programming (OOP) principles to organize behavior and state efficiently. The Pancake class defines how each pancake behaves — from falling physics (update) to visual rendering (show) — encapsulating movement, collisions, and display logic into self-contained objects. Similarly, the Notebook class manages data storage and visualization of wrong answers, using methods like addWrong(), display(), and drawStack() to handle user progress and draw interactive visual feedback. This modular, class-based approach makes the code easier to scale, reuse, and maintain, as each object represents a distinct component of the game world with its own properties and responsibilities.
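As a minimal, hypothetical illustration of that encapsulation (not the project's actual class), a Pancake object can own its position, falling physics, and catch test; the p5-dependent show() method is omitted here so the sketch stays self-contained:

```javascript
// Hypothetical stripped-down Pancake: update() holds the falling physics,
// landsOn() the catch test against a tray; rendering would live in show().
class Pancake {
  constructor(x, y, fallSpeed = 3) {
    this.x = x;
    this.y = y;
    this.fallSpeed = fallSpeed;
  }
  update() {
    this.y += this.fallSpeed; // fall a little each frame
  }
  landsOn(trayX, trayY, trayW) {
    // caught when horizontally over the tray and at tray height
    return Math.abs(this.x - trayX) < trayW / 2 && this.y >= trayY;
  }
}
```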

Code I am Proud of:

function checkAnswer() {
  if (!roundActive) return;

  if (stack.length === targetStack) {
    // Correct number of pancakes
    correctRounds++;
    message = `✅ Correct! (${num1} + ${num2} = ${targetStack})`;
    messageTimer = millis();
    roundActive = false;
    setTimeout(() => newMathProblem(), 1500);
    if (boinkSound) boinkSound.play();
  } else {
    // Incorrect answer
    message = "❌ Try Again!";
    messageTimer = millis();
    let question = `${num1} + ${num2}`;
    notebook.addWrong?.(question, targetStack, num1, num2);
    stack = [];
    pancakes = [];
  }
}

The checkAnswer() function is the core of the game’s math logic: it checks whether the number of pancakes the player stacked matches the correct answer to the math problem shown on screen. When the player presses “Check,” the function compares the length of the stack array (how many pancakes were caught) to targetStack (the sum of num1 + num2). If they are equal, the answer is correct: the score (correctRounds) increases, a success message and sound play, and a new math problem appears after a short delay. If the count is wrong, an error message is displayed, the mistake is logged to the notebook, and the pancakes reset so the player can try again. This function is the key link between the math challenge and the interactive gameplay.

Through Beta Testing:

Through beta testing, I received valuable feedback from my friends that helped shape the final version of Panda Math Stack. They suggested transforming the concept into a math-based problem-solving game rather than just a stacking challenge, as this would make it more educational and purposeful.

 

They also recommended developing the notebook feature to track incorrect answers, allowing players to review and learn from their mistakes after each round. Incorporating these suggestions not only improved the gameplay experience but also strengthened the project’s learning component, making it both fun and pedagogically meaningful.

Embedded Sketch:

Future Improvements:

For future improvements, I plan to expand Panda Math Stack by adding more playable characters with unique animations and personalities to make the experience more engaging and customizable. I also aim to introduce multiple levels of difficulty, where each stage presents new visual themes and progressively challenging math problems. In addition to basic addition, future versions will include other arithmetic operations such as subtraction, multiplication, and division, allowing players to practice a wider range of skills. These updates will transform the game into a more dynamic and educational experience that adapts to different ages and learning levels.

Midterm Report

My Concept

For the midterm, I created Highway Havoc, a fast-paced driving game where the player must weave through incoming traffic and avoid collisions while maintaining control at high speeds. My goal was to build an arcade-style driving experience that feels alive, unpredictable, and immersive. Cars rush toward you, radar traps flash if you’re speeding, and the scrolling environment creates a sense of depth and motion.

To make the gameplay feel as close to real highway driving as possible, I implemented several interactive and behavioral features:

  • Dynamic traffic flow: Cars and buses spawn randomly, sometimes creating congested traffic and other times leaving open gaps.
  • Lane-based speeds: Vehicles on the left lanes drive faster than those on the right, mimicking real traffic patterns.
  • Adaptive driving: Vehicles automatically slow down and match the speed of the car ahead to prevent crashes.
  • Autonomous behavior: Vehicles occasionally use turn signals for three seconds and switch lanes when the adjacent lane is clear of other NPC cars. They can still crash into the player to make the game more challenging.
  • Reactive NPCs: There’s a 10% chance that a vehicle will get “spooked” and change lanes when the player flashes their headlights.
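The "adaptive driving" rule above can be sketched as a small speed-matching function. This is a plain-JavaScript illustration under my own assumptions (the names and the safe-gap threshold are hypothetical, not the game's actual constants):

```javascript
// Hypothetical sketch: an NPC slows to match the car ahead when the gap
// closes below a safe threshold, instead of rear-ending it.
function adaptSpeed(mySpeed, leadSpeed, gap, safeGap = 80) {
  if (gap < safeGap && leadSpeed < mySpeed) {
    return leadSpeed; // match the slower car ahead
  }
  return mySpeed;     // open road (or lead car is faster): keep speed
}
```

The same shape works for the spooked-NPC rule: on a headlight flash, roll random() once and trigger a lane change when it lands under 0.1.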

Website: https://mohdalkhatib.github.io/highwayhavoc/

Embedded Sketch

 

How it works

The game is built using object-oriented programming to manage all its core components: Player, EnemyCar, SchoolBus, and several static environment classes like LaneLine, Tree, and Radar. To make the player car look like it’s moving forward, all the objects on the canvas move downwards depending on the difference between the player’s speed and the object’s speed. Timed progression is handled using millis(), allowing the game to dynamically scale its difficulty. It begins with just two enemy cars and gradually increases to seven over time. After 30 seconds, a special SchoolBus class appears. This bus inherits most of its behavior from EnemyCar but acts as a larger, slower obstacle, adding variety and challenge to the gameplay.
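That relative-motion trick is worth spelling out: the player car stays fixed on screen while everything else scrolls by the speed difference. A plain-JavaScript sketch (hypothetical names, not the project's exact code):

```javascript
// Each frame, an object's y-position advances by (playerSpeed - objectSpeed):
// objects slower than the player drift down the screen, objects at the same
// speed hold position, and faster ones pull ahead (move up).
function scrollStep(objectY, objectSpeed, playerSpeed) {
  return objectY + (playerSpeed - objectSpeed);
}
```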

Player movement is handled with smooth lerp() transitions instead of instant lane jumps, creating a realistic sliding motion. The car tilts slightly while switching lanes, giving the animation a sense of weight and momentum.

// Smooth lane movement
this.x = lerp(this.x, this.targetX, 0.15); // 0.15 is the slide speed

// Calculate tilt: proportional to distance from targetX
let tilt = map(this.x - this.targetX, -63, 63, 0.2, -0.2); // max tilt ±0.2 rad

// Rotate car while sliding (push/pop keep the transform local to this car)
push();
translate(this.x, this.y);
rotate(tilt);
// ... draw the car sprite here ...
pop();

 

The garage menu is implemented as a separate state in the program. All player cars are stored in a 2D array (matrix) where the first dimension represents car type and the second represents color variants. Selecting a car updates the active type index, and selecting a color updates the corresponding color index. The game then loads the pre-rendered image from playerCarImgs[type][color] for the main gameplay.

let playerCarImgs = [];
let carColors = [
  ["#f10a10", "#b77bc6", "#ffde59"], // Car 1 colors 
  ["#737373", "#302d2d", "#df91c1"], // Car 2 colors 
  ["#7ed957", "#6f7271", "#df91c1"]  // Car 3 colors 
];
let selectedCarIndex = -1;
let selectedColorIndex = 0;
let selectedColorIndices = [0, 0, 0]; // store each car's color choice
// Load player cars (3 types × 3 colors), e.g. inside preload()
for (let i = 0; i < 3; i++) {
  playerCarImgs[i] = [];
  for (let j = 0; j < 3; j++) {
    playerCarImgs[i][j] = loadImage(`player_car_${i}_${j}.png`);
  }
}

 

From a design standpoint, I’m especially happy with how the visual composition is handled. The positions of all road elements (lane markers, trees, etc.) are responsive relative to the center of the road, ensuring that everything remains aligned even if the screen size changes. The gradual introduction of more enemies also creates a natural difficulty curve without overwhelming the player early on.
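The center-relative positioning described here can be sketched as a single layout function. This is a plain-JavaScript illustration under my own assumptions (the lane width is a hypothetical value, not the game's constant):

```javascript
// Lane x-positions are symmetric offsets from the road's center, so a
// canvas resize only moves centerX; the relative layout stays intact.
// lane 0 is the leftmost of laneCount lanes.
function laneX(centerX, lane, laneCount, laneWidth = 90) {
  return centerX + (lane - (laneCount - 1) / 2) * laneWidth;
}
```

Trees, lane markers, and radars can all derive their x from the same center, which is what keeps everything aligned across screen sizes.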

One of the features I’m most proud of is the radar system. It actively monitors the player’s speed and position. If the player passes a radar while exceeding the speed limit, the game transitions to a “game over” state. At that moment, p5 captures a snapshot of the player’s car beside the radar, displays it on the game over screen, and plays a camera shutter sound effect. Similarly, collisions trigger a crash sound, giving the sense of danger on the highway.

detect(player) {
  return abs(this.y - player.y) < 10 && playerSpeed > speedLimit;
}

if (radar.detect(player)) {
  gameState = "GAMEOVER";
  engineSound.stop();
  captureGameOver("You were caught speeding!");
  shutterSound.play();
}

// Crashed or caught by radar: take snapshot
let snapWidth = 900;
let snapHeight = 300;

let sx = constrain(player.x - snapWidth / 2, 0, width - snapWidth);
let sy = constrain(player.y - snapHeight / 2, 0, height - snapHeight);

gameOverImage = get(sx, sy, snapWidth, snapHeight);
gameOverMessage = message;

 

Problem I Resolved

One of the biggest challenges I faced was figuring out how to implement realistic car sounds that respond naturally to gameplay. My initial idea was to use separate audio clips for different speeds or for moments when the player accelerates or decelerates. However, this quickly became too complex and didn’t sound seamless in motion.

Instead, I switched to a single looping engine sound and mapped both its playback rate and volume to the player’s current speed. This made the sound naturally increase in pitch and intensity as the car accelerated, without needing multiple clips. To add more realism, I also added a short braking sound effect that triggers automatically when a large drop in speed is detected, simulating tire friction or skidding during sudden stops.

function handleInput() {
  if (keyIsDown(UP_ARROW)) playerSpeed = min(playerSpeed + 0.1, maxSpeed);
  if (keyIsDown(DOWN_ARROW)) {
    playerSpeed = max(playerSpeed - 0.5, 0); // braking; the drop feeds speedDrop below
  }

  // Adjust pitch and volume of the engine with the current speed
  let volume = map(playerSpeed, 0, maxSpeed, 0.1, 1.0);
  engineSound.setVolume(volume);
  let pitch = map(playerSpeed, 0, maxSpeed, 0.8, 4.0);
  engineSound.rate(pitch);
}

// If the speed decreased by 5 or more within 1 second
if (speedDrop >= 5) {
  if (!brakeSound.isPlaying()) {
    brakeSound.play();
  }
}

 

Areas for Improvement

One issue I noticed is that enemy cars sometimes merge into the same lane at the same time, leading to overlapping or clipping. Improving their lane-changing logic to better detect other vehicles could make the traffic feel more realistic. I’d also like to expand the variety of vehicles by adding a larger collection of car models and giving each one unique attributes such as top speed, acceleration, and braking ability.