Week 5: Midterm Progress Report

Concept & Design

My concept for the midterm is an interactive piece depicting a Filipino-style bakery, or panaderya. I want to make a nostalgic, cozy piece where you can learn about different Filipino pastries and baked goods, interact with a radio to change the music (the songs being classic Filipino songs), and play with the electric fan in the background.

I started with a rough sketch of the design. I'm planning to draw the whole piece in pixel art, and I will be using the PressStart2P font, a pixelated-looking typeface, to really give it that nostalgic feeling. For the landing screen, I want something simple and straightforward, with detailed instructions for the user; to transition to the actual bakery, I'll use the keyPressed() function. The bakery has four main interactive elements for now, all of them click-to-use. The radio will have play, pause, next, and previous buttons that control the music. For the electric fan, I'm planning to animate the blades using a sprite sheet from the previous lesson, and I want the blade speed to change when clicked (I'm still debating whether to add numbers that correspond to the speeds, like an actual fan). Most importantly, the shelf will hold different objects, and clicking one will open a pop-up that tells the user about that specific pastry. Lastly, the door will restart the experience from the landing screen.
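A minimal sketch of the landing-to-bakery transition might look like this (the state names, text, and canvas size are my placeholders, not final):

```javascript
// Minimal screen-switching sketch using keyPressed(); "landing" and
// "bakery" are placeholder state names.
let state = "landing";

// Pure transition rule: any key press leaves the landing screen.
function nextState(current) {
  return current === "landing" ? "bakery" : current;
}

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(30);
  fill(255);
  if (state === "landing") {
    text("Press any key to enter the panaderya", 140, 200);
  } else {
    text("Welcome to the bakery!", 200, 200);
    // clicking the door would set state back to "landing" to restart
  }
}

function keyPressed() {
  state = nextState(state);
}
```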

Code Testing & Uncertain Part

I wanted to test the font to make sure I actually knew how to use it in the sketch and that it looked the way I wanted. That was easy to figure out, since we had already discussed using downloaded fonts. I also wanted to test using images as my objects and showing a pop-up when one is clicked, for the pastries section. I spent some time on this and asked AI for assistance, because I only remembered how to do interactions with shapes, not images. I eventually figured out that p5.js doesn't automatically detect clicks on images, so we have to manually track an invisible box around each image using four variables (x, y, width, and height) that record where it is on the canvas. Then, in mousePressed(), we check whether the mouse coordinates fall inside that box; if they do, we know the image was clicked and we trigger the pop-up.
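A stripped-down version of that check looks like this (the image file name, position, and size are placeholders for one pastry):

```javascript
// Invisible hit-box around one pastry image; the box is never drawn,
// it just records where the image sits on the canvas.
let pandesalImg;
const pandesal = { x: 120, y: 200, w: 64, h: 48 };

// Pure check: is the point (mx, my) inside the box?
function isInside(mx, my, box) {
  return mx >= box.x && mx <= box.x + box.w &&
         my >= box.y && my <= box.y + box.h;
}

function preload() {
  pandesalImg = loadImage("pandesal.png"); // placeholder asset name
}

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(240);
  image(pandesalImg, pandesal.x, pandesal.y, pandesal.w, pandesal.h);
}

function mousePressed() {
  if (isInside(mouseX, mouseY, pandesal)) {
    // trigger the pop-up here, e.g. set a showPopup flag
  }
}
```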

Midterm 3 Progress: ROOFTOP RUSH

The Concept

ROOFTOP RUSH is a side-scrolling parkour runner built in p5.js. The player controls a free-runner crossing a city skyline at dusk. The city scrolls to the right at increasing speed. The player must jump between rooftops, avoid obstacles, and collect coins. Each run generates a different sequence of buildings, gaps, and obstacles, so no two runs are the same.

The central idea is one core mechanic: the more points you earn, the farther your jumps carry you. Score is not just a number in the corner. It directly changes how far the player can jump. Early in a run, jumps are short and the player must plan each crossing carefully. As the score grows, the jumps grow with it. The player gains the ability to clear gaps that were not possible at the start. At the same time, the world speeds up. The game becomes harder and more powerful at once. The tension between those two forces is what makes each run feel urgent.

The planned interactive features are:

  • Grapple Hook (G key): A crane will spawn automatically over any gap that is too wide to jump. Pressing G will lock onto the crane and swing the player across.
  • Wall-Run (Up key on a wall): Touching a wall will trigger a wall-slide. Holding Up will convert it into a wall-run, carrying the player upward before launching off.
  • Trick System (F for flip, R for spin): Performing tricks in mid-air will award bonus points. Chaining multiple tricks in one jump will multiply the reward.
  • Slide Kick: Sliding into certain obstacles will destroy them and award points instead of dealing damage. This turns a defensive move into an offensive one.
  • Upgrade Shop: Coins will carry over between runs. The player will spend them on permanent upgrades such as stronger jumps, longer dashes, or a larger coin magnet range.
  • Day and Night Cycle: The sky will shift from sunset to night over time. Stars will appear and a helicopter with a spotlight will patrol the skyline after dark.

The Riskiest Part: The Jump Curve

The most uncertain part of this project is the score-to-jump-force progression curve. This mechanic is the entire point of the game. If the curve is wrong, nothing else works. If it is too flat, the player will not notice the progression. If it is too steep, the player will overshoot buildings and the game will break.

The challenge is not technical. It is perceptual. Jump force is measured in pixels per frame. That number has no intuitive meaning to a player. The curve needs to satisfy three conditions:

  1. The change must be noticeable early. A player who earns 500 points should feel a real difference in jump distance.
  2. It must plateau at high scores. The growth must slow down so the game stays controllable.
  3. The maximum jump height must stay within the bounds of the level. Buildings differ in height by at most 90px. The widest gap will be 180px.

I plan to use a logarithmic curve. Logarithms grow fast near zero and flatten at large values. This matches both requirements. The formula will be:

jumpForce = max( BASE_JUMP - K * ln(1 + score) , MAX_JUMP )

Planned constants: BASE_JUMP = -11.0, K = 0.4, MAX_JUMP = -18.5. The negative sign follows the p5.js convention where upward velocity is negative.

To test this before building the game, I wrote a standalone sketch. It plots jump height in pixels against score so I can read the curve visually and check the numbers at key milestones.

// Risk-reduction test sketch
// Paste into p5.js editor to visualize the jump progression curve
// before writing any game logic

const BASE_JUMP = -11.0;
const MAX_JUMP  = -18.5;
const K         = 0.4;
const GRAVITY   = 0.62;

function getJumpForce(score) {
  return max(BASE_JUMP - K * log(1 + score), MAX_JUMP);
}

// Physics: h = v^2 / (2 * gravity)
function jumpHeight(score) {
  let v = abs(getJumpForce(score));
  return (v * v) / (2 * GRAVITY);
}

function setup() {
  createCanvas(700, 400);
}

function draw() {
  background(20, 20, 30);

  // axis labels
  fill(180); noStroke(); textSize(12);
  text("Score ->", 600, 390);
  text("^ Jump Height (px)", 10, 20);

  // reference lines
  stroke(60, 60, 80);
  for (let h = 50; h <= 300; h += 50) {
    let y = map(h, 0, 300, height - 40, 20);
    line(40, y, width - 20, y);
    fill(100); noStroke(); text(h + "px", 2, y + 4);
    stroke(60, 60, 80);
  }

  // curve
  stroke(255, 160, 40);
  strokeWeight(2.5);
  noFill();
  beginShape();
  for (let score = 0; score <= 10000; score += 50) {
    let x = map(score, 0, 10000, 40, width - 20);
    let y = map(jumpHeight(score), 0, 300, height - 40, 20);
    vertex(x, y);
  }
  endShape();

  // milestone markers
  let milestones = [0, 500, 1000, 2500, 5000, 10000];
  for (let s of milestones) {
    let x = map(s, 0, 10000, 40, width - 20);
    let h = jumpHeight(s);
    let y = map(h, 0, 300, height - 40, 20);
    stroke(255, 80, 80); fill(255, 80, 80); ellipse(x, y, 7);
    noStroke(); fill(220);
    text("s=" + s + "\n" + nf(h, 0, 1) + "px", x - 10, y - 16);
  }

  noLoop();
}

The sketch produces the following numbers:

Score    Jump height   What the player will feel
0        98 px         Short. The player must judge each gap carefully.
500      147 px        Noticeably higher. The reward is felt immediately.
1,000    153 px        Confident. Medium gaps are now comfortable.
2,500    161 px        Strong. Most gaps are within reach.
5,000    167 px        Powerful. Wide gaps feel manageable.
10,000   174 px        Near the ceiling. The curve has flattened.

The hard cap at MAX_JUMP = -18.5 gives a maximum jump height of 276px. That is just under half the canvas height and within the maximum building height of 360px. A player at any score will never jump off screen. The widest gap in the level will always be crossable. These numbers confirm the curve is safe to use before writing a single line of game logic.

The second risk is procedural level generation. A bad sequence could produce an impossible gap or a long boring flat stretch. To address this, I will clamp the height difference between adjacent buildings to 90px. I will also write a query function that automatically places a crane anchor over any gap wider than 110px. The grapple hook will always be reachable from that gap, so no run will ever be blocked by the level generator.
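Those two rules can be tested in isolation before the generator exists. The 90px clamp and the 110px crane threshold come from the plan above; the other numbers below are placeholders:

```javascript
// Generator safety rules, written as pure functions so they can be
// tested without any rendering or game state.
const MAX_HEIGHT_STEP = 90;  // adjacent rooftops differ by at most 90px
const CRANE_GAP       = 110; // gaps wider than this get a crane anchor

// Clamp a candidate rooftop height to within 90px of the previous one.
function clampRoofHeight(prevHeight, rawHeight) {
  const lo = prevHeight - MAX_HEIGHT_STEP;
  const hi = prevHeight + MAX_HEIGHT_STEP;
  return Math.min(hi, Math.max(lo, rawHeight));
}

function needsCrane(gapWidth) {
  return gapWidth > CRANE_GAP;
}

// Sketch of how the generator might use them (ranges are placeholders):
function nextBuilding(prev) {
  const rawHeight = prev.height + (Math.random() * 2 - 1) * 200;
  const gap = 60 + Math.random() * 120; // 60..180px, matching the widest planned gap
  return {
    height: clampRoofHeight(prev.height, rawHeight),
    gap: gap,
    crane: needsCrane(gap),
  };
}
```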

Next Steps

The concept and design are clear. The riskiest algorithm has been tested and validated. The next step is to build the full game system: the player state machine, the building generator, the collision detection, the scoring logic, and the upgrade shop.


p5.js v1.9.0 · February 2026 · Intro to IM, NYUAD

Week 5 – Midterm Progress

For my midterm project, I am currently working on an interactive artwork called “Space Mood Garden”. The basic idea is to have a space‑themed screen where the user’s voice plants glowing orbs in a starry background, so their sound gradually turns into a kind of visual garden. Right now I am exploring how different levels of voice loudness can map to the size and color of these orbs, with quieter sounds creating smaller, cooler orbs and louder sounds creating larger, warmer ones. I like that the main input is the voice rather than the mouse, because it feels more personal and connects to the idea of mood, but I am still experimenting with how strong or subtle the visual changes should be.

The interaction design is starting to take shape using a three‑screen structure. The sketch begins with a start screen that shows a space background image and some simple instructions, and it waits for the user to press S and allow microphone access before moving on. In the main state, the sketch listens to the microphone using p5.sound and reads the current sound level, and whenever the volume goes above a threshold it creates a new “mood orb” at a random position on the canvas. At the moment I am mapping the volume at that moment to the orb’s size and color, and I have added a gentle pulsing animation so they look like they are breathing.

On the coding side, I have already set up the basic structure in p5.js using a state variable to switch between the start and play states. I also created a star class for the small moving stars in the background. The microphone input is working using p5.AudioIn and getLevel once the user grants microphone access, and I am mapping the raw volume values to the visual parameters with map. The overall system is running, but I still want to tune the timing, the cooldown between orb spawns, and the visual style so it feels more like a coherent “garden” and less like scattered dots.
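A stripped-down version of that spawn loop looks roughly like this; the threshold, cooldown, and size range are placeholder values I am still tuning:

```javascript
// Minimal voice-to-orb loop. THRESHOLD and COOLDOWN are tuning values.
let mic;
let orbs = [];
let framesSinceSpawn = 0;
const THRESHOLD = 0.05; // getLevel() returns roughly 0..1
const COOLDOWN  = 20;   // frames to wait between spawns

// Pure rule: spawn only when loud enough and the cooldown has passed.
function shouldSpawn(level, framesSince) {
  return level > THRESHOLD && framesSince >= COOLDOWN;
}

// Pure mapping from loudness to orb size (quiet -> small, loud -> large).
function orbSize(level) {
  const t = Math.min(1, level / 0.3); // treat 0.3 as "very loud"
  return 10 + t * 50;                // 10..60 px
}

function setup() {
  createCanvas(600, 400);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(5, 5, 20);
  framesSinceSpawn++;
  const level = mic.getLevel();
  if (shouldSpawn(level, framesSinceSpawn)) {
    orbs.push({ x: random(width), y: random(height), r: orbSize(level) });
    framesSinceSpawn = 0;
  }
  noStroke();
  for (const o of orbs) {
    fill(180, 150, 255, 180);
    // gentle pulse so the orbs look like they are breathing
    ellipse(o.x, o.y, o.r * (1 + 0.05 * sin(frameCount * 0.05)));
  }
}
```

Keeping shouldSpawn and orbSize as separate pure functions makes it easy to try different parameter values without touching the draw loop.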

For this project, the most uncertain part for me is actually how the artwork will look and feel when someone really uses it, not just the technical side of the microphone. I know in theory that the sketch will create orbs based on the user’s voice, but I am still unsure whether the final garden will feel coherent and expressive, or just like random circles scattered on a space background. Because the orbs appear at random positions and their size and color depend on sound levels that I cannot fully predict, it is hard to imagine in advance what kind of compositions people will end up with and whether those compositions will really communicate a sense of “mood”. Overall I am still actively working on the project, trying different parameter values and small design tweaks, but I feel more confident now that the core idea is solid and that with a bit more tuning the project will turn out well.

Week 5: Midterm Progress

Concept:

My concept is based on my favorite Ramadan TV shows, “ظاظا و جرجير” (Zaza w Gargyer) and “بكار” (Bakar). These shows teach children Islamic and Egyptian morals and values in an indirect and engaging way. Bakar is especially important because it was one of the first cartoons to represent Egyptian culture and identity.

I want to create an adventure game where players can choose one of these cartoon characters and go on a journey inspired by the spirit of Ramadan. Throughout the game, players will stop to help others, learn from their mistakes, and make choices that reflect kindness, generosity, patience, and responsibility. The goal is to combine fun gameplay with meaningful lessons, just like the original shows.

Design:

I want my game to have a cozy, relaxing feel, something you could play anytime, whether on a portable console or at home. I want it to be as nostalgic as possible, with an aesthetic similar to Stardew Valley. I am currently working on the characters' sprite sheets.

Frightening / Challenging Aspects

The most frightening aspect for me is figuring out how to make the game simple, yet fun, while still being a meaningful learning experience. I’m unsure whether I should design it as a pixel-style platformer or a 2D narrative game.

I also want the game to accurately capture and translate the spirit of Ramadan, just like the original cartoons. Can I successfully express that spirit through a cozy, pixelated platform game that can be enjoyed by the entire family?

Risk Prevention

To reduce this risk, I would create a short sample or prototype of the game to test whether it truly fits the idea and essence I have in mind. Most importantly, I would evaluate how the characters and the setting turn out visually.

Then, I would ask people who are familiar with the original cartoons whether they can recognize the characters and understand what they represent in the pixelated version. If a high number of people are able to recognize them and connect them to the original spirit of the shows, I would feel more confident about moving forward with the project.

Week 5 – Reading Response (mss9452)

From reading this I understood just how different “seeing” actually is from one person or entity to the next.

As a human, I don’t have to think about the process of seeing. I see a person, and I immediately understand a face, an expression, perhaps even an intention. But computer vision doesn’t see meaning, it sees numbers. The article clearly states that high-level image understanding is still a very difficult task, whereas low-level image understanding is much more feasible. A computer doesn’t see “a person walking,” it sees the differences in pixel values.

For instance, frame differencing involves subtracting one frame from another to identify movement. This seems very mechanical in contrast to how easily we, as humans, can identify movement. Background subtraction involves comparing a live image with a pre-stored image of the background to identify what is out of place. As humans, we can easily identify a person regardless of how much the lighting changes. However, for a computer, lighting, contrast, and setup are very important.

One thing that I found particularly interesting is that rather than trying to make computers see the world as we do, the reading proposes that we design the physical world so that it is easier for the computer to see, using high contrast, controlled lighting, and reflective surfaces. This is a reversal of the usual situation: rather than trying to make the algorithm “smarter,” we are trying to make reality more computable. For me this is fascinating because interaction design is not just about the digital world, it is also about the physical world.

However, with interactive art, this is even more complicated. Tracking technologies can enable very powerful experiences of embodiment, as in early works like Videoplace, where silhouettes and motion become interactive elements. I find the concept of being able to have meaningful artistic experiences through simple detection technologies very appealing. The system does not fully “understand” the body but simply tracks enough information to react to it.

However, at the same time, the reading points out works such as Sorting Daemon, which emphasize surveillance and profiling. This was somewhat uncomfortable. The same technology that enables playful interaction can also extract, categorize, and analyze individuals. In the context of interactive art, being tracked can be very engaging. In other contexts, it can be very invasive.

I think it’s this tension that makes computer vision so potent in interactive media. It turns the body into data, but this data can either be expressive and interactive or controlling and analytical. As artists and designers, we’re not simply using tracking as a tool, we’re making choices about visibility and power.

This reading has made me more conscious of the fact that computer vision isn’t about simulating human vision. It’s about finding patterns that the machine can calculate. And perhaps it is interactive art that is where human vision and machine vision intersect.

Week 5 – Midterm Progress (mss9452)

Concept:

For my project I plan on redesigning the classic game “Snake”; however, I plan to add my own twists to the game.

  1. In this game the snake is represented as glowing circular outlines, moving like a light source.
  2. For the background I intend to add Islamic geometric patterns for the aesthetics, but I haven’t implemented them yet.
  3. More features will be added to make the game more engaging and interesting.

UI Design:

Interactions remain the same as the classic game:

  • Arrow Keys to control direction
  • Mouse click to start/restart game
  • T key to toggle theme

I decided to keep interactions simple to preserve the familiarity of the game, in hopes that the visual redesign could shift the experience from purely arcade-based to more atmospheric and reflective. Additionally, there’s a time counter, to make players more aware of the duration rather than just the score.

Code Structure:

I tried making the program more organized by splitting it into different layers, where each part handles a specific function. They are:

  1. Game Logic: to handle movement, growth, collision and fruit spawning.
  2. Rendering: for background, glowing snake, grid display and UI overlay
  3. Interaction: to handle key input and theme toggling

I made sure to separate things to avoid confusion and to make adding additional features easier.

Uncertain / Complex Parts:

The thing I’m most worried about is adding sounds to the game, as several different events need sound, such as:

  • fruit collection
  • collision
  • background sound

There are several issues I’ve identified which include the timing of the sounds, possible audio restrictions and making sure the audio is not distracting.

To avoid this risk early on, I experimented with basic p5.js sound playback on its own. I tested playing simple oscillator sounds based on mouse interaction to make sure that:

  • Audio plays back properly after user interaction.
  • There are no console errors.
  • Sound playback can be controlled in short bursts.
  • Sound playback can synchronize with game events such as fruit pickup.
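My standalone test was along these lines; the frequencies and fade times are just values I picked for testing, not final game sounds:

```javascript
// Standalone p5.sound test: short oscillator bursts on mouse press.
let osc;
let started = false;

// Pure mapping from a game event to a test pitch (placeholder frequencies).
function freqForEvent(event) {
  if (event === "fruit") return 880;      // high blip for fruit pickup
  if (event === "collision") return 110;  // low buzz for collision
  return 440;                             // default test tone
}

function setup() {
  createCanvas(300, 200);
  osc = new p5.Oscillator("sine");
  text("click to beep", 90, 100);
}

function mousePressed() {
  userStartAudio(); // browsers block audio until a user gesture
  if (!started) {
    osc.start();
    osc.amp(0); // start silent
    started = true;
  }
  osc.freq(freqForEvent("fruit"));
  osc.amp(0.3, 0.02);    // quick fade in
  osc.amp(0, 0.2, 0.1);  // fade out shortly after, for a short burst
}
```

Starting the oscillator once and controlling only its amplitude avoids the clicks you get from repeatedly starting and stopping it.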

By testing sound playback independently before incorporating it into the full game logic, I avoided the risk of ruining the main system later on.

Week 5 – Reading Response

In the article “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers”, the author surveys many existing works, themes, and applications of computer vision, along with basic techniques for implementing it.

To now directly address the questions asked of me: computer vision indeed differs greatly from human vision. The goal of computer vision remains the same as that of human vision, id est to represent physical (extended) objects in a manner where no significant detail is lost, and then to identify and perform computations upon the representations of this data where needed; the means by which each achieves this, however, could not be more different.

Humans are unfathomably complex beings, with over 100 million rods (cells for low-light conditions and peripheral vision; source: Cleveland Clinic) and several million cones (cells for detail and color; source: Cleveland Clinic); many machines, even by today's standards, cannot come close to us biological entities. Furthermore, operating at an average of 20 watts (source: National Library of Medicine), our brains are incredibly efficient at managing and responding to input from the incredible complexity of our eyes, and from every other sensory system and square inch of skin.

Now that I am done marveling at ourselves, I return to humiliate the computers. Computers are inefficient, slow, blocky, prone to faults, and can really only function on binary numbers and logic (though in more recent years other number systems and logics are being explored). The challenge lies first in the sensors scanning the environment and relaying this analog data to the computer. Next, that analog data must be converted into a digital format (fundamentally 1s and 0s), and then processed by a program in an efficient manner. Typically, videos from this sensory data are stored as “a stream of rectangular pixel buffers”, and according to the paper, this doesn’t really tell us much about what the computer is really being fed through the system.

The paper then mentions several different schemes and standards that computer vision encoding may be analogous to for representing real-world data, and underscores how there is no unified convention when it comes to these schemes. Techniques that a basic algorithm may use to discern motion from stillness include comparing two adjacent frames in a video to see which pixel values changed, as well as background subtraction. I now extend upon this independently: it is probably wiser to first subtract the background before measuring any pixel value changes or points of reference, as we don’t want background pixel noise to impact accuracy.

What I really found interesting was how we may be able to implement basic interactions using these two (albeit simple) methods alone. Once the silhouette of a person has been detected, their motion or boundary can be used as a collider for free-falling objects, for example. Alternatively, we may even be able to recolor a person and their environment in grayscale, or B&W, for intriguing stylistic effects. Perhaps it is only I who yearns for aged B&W technology; there is something I find oddly simple yet calming in it.

Alas, I have now extended upon the parts of the reading that held the most significance for me. Though I would like to mention: the more we try to personify computers, id est implement traditionally biological processes in them, the more I marvel at our own biological complexity!

Week 5 Reading Response Zere

What are some of the ways that computer vision differs from human vision?

While humans use their eyes and brains to interpret the information about the scenes they see, computer vision utilizes cameras and algorithms to analyze images. Computers can’t understand images or videos without specific algorithms that assign meaning to the pixels.

What are some techniques we can use to help the computer see/track what we’re interested in?

  1. Frame differencing: Motion is detected by comparing consecutive video frames. This works well when objects or people are in motion.
  2. Brightness thresholding: This separates foreground and background based on the light/dark values. This helps in environments where there is a strong visual contrast.
  3. Simple object tracking: This finds the brightest/darkest pixels in an image/scene and follows the object’s position.
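The first technique, frame differencing, can be sketched in a few lines of p5.js; the core comparison is plain arithmetic over two pixel arrays (the threshold of 30 is my own placeholder):

```javascript
// Frame differencing: motion = how much the current frame differs from the last.
let video;
let prevFrame = null;

// Pure helper: sum of per-pixel absolute differences above a threshold.
// `prev` and `curr` are flat arrays of grayscale values (0..255).
function frameDiff(prev, curr, threshold) {
  let motion = 0;
  for (let i = 0; i < curr.length; i++) {
    const d = Math.abs(curr[i] - prev[i]);
    if (d > threshold) motion += d;
  }
  return motion;
}

// Collapse RGBA pixel data into one grayscale value per pixel.
function toGray(pixels) {
  const gray = [];
  for (let i = 0; i < pixels.length; i += 4) {
    gray.push((pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3);
  }
  return gray;
}

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  const gray = toGray(video.pixels);
  if (prevFrame) {
    const motion = frameDiff(prevFrame, gray, 30); // 30: placeholder threshold
    fill(255, 0, 0);
    text("motion: " + Math.round(motion), 10, 20);
  }
  prevFrame = gray;
}
```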

How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

You can create responsive interactive artworks, as there is the opportunity to track objects, bodies, faces, gestures, and so on. Additionally, some artworks use surveillance techniques to comment on power and monitoring from a social perspective. This leaves room for critique and opinion, something that is crucial in any form of art.

Week 5 – Midterm Progress Report

Concept

I will be creating an interactive game called “SignSprint”. It is a typing-style game that uses American Sign Language (ASL) as input instead of the regular keyboard. The user makes signs corresponding to letters of the ASL alphabet to match letters and words displayed on the screen. The user has a specified time to completely spell the displayed words using ASL, and at the end of that time, the user's accuracy and speed will be displayed. There will also be a leaderboard showing the fastest users.

The inspiration behind this game is a class I am currently taking on Disabilities Studies where I am required to learn some ASL. After learning about computer vision in class and playing with a few models, I decided to combine these two ideas and create a game out of it. I believe creating this game will be a great exercise to learn ASL and have fun while doing so.

Design

This game will be built using a machine learning computer vision model to identify the alphabets of ASL as input to play the game. The game will consist of 3 main screens:

Start screen: This will be the first screen and the main menu. It will contain a guide on how to play the game, a button that opens a page showing the full list of ASL symbols, a button that opens a trial mode where users can practice their ASL in a non-competitive environment, and finally a button that opens the competitive game mode, where the user will be timed and expected to make the appropriate signs.

Game screen: There are two main game screens. The first is the trial screen, where a user practices their ASL with no timer; the user can go on as long as they want, and exiting this screen takes them back to the start screen. The other is the test screen, where a user has 60 seconds to get as many letters correct as they can. The screen will automatically close after the 60 seconds and lead to the end page.

End page: This will display the leaderboard of the fastest players and contain a button or instruction to restart the game.

Frightening part of the code

The frightening part of the code is getting the machine learning model to identify the signs and interpret them as input for the game. The model has to be quick and responsive to keep the overall user experience smooth.

To reduce this risk, I am exploring the ml5.js library to find an appropriate model for capturing the user's input.
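I have not settled on a model yet; one possible starting point, assuming ml5.js's handPose model, is a sketch that just draws the detected hand keypoints, since a classifier would sit on top of those. The end-screen stats are simple enough to write as pure functions now:

```javascript
// Possible starting point with ml5.js handPose (API assumed from ml5 v1).
// Turning the 21 keypoints into an ASL letter is left as a later step.
let handPose;
let video;
let hands = [];

// Pure helpers for the end-screen stats.
function accuracy(correct, attempts) {
  return attempts === 0 ? 0 : correct / attempts;
}

function lettersPerMinute(correct, elapsedSeconds) {
  return elapsedSeconds === 0 ? 0 : (correct / elapsedSeconds) * 60;
}

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, gotHands);
}

function gotHands(results) {
  hands = results;
}

function draw() {
  image(video, 0, 0);
  // draw each detected hand's keypoints; a classifier would read these
  for (const hand of hands) {
    for (const kp of hand.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(kp.x, kp.y, 8);
    }
  }
}
```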

Week 5 – Reading Reflection

The reading presents computer vision as a set of simple tricks for finding motion, color, and change in an image, not as a rich way of seeing like humans have. We can still recognize a friend if the light changes or the background is messy, but a computer vision system often fails unless we control the scene and use tools like frame differencing, background subtraction, and blob tracking. What struck me is how literal and fragile machine vision really is: it does not “understand” what it sees, it just reacts to pixel changes and thresholds. As designers, we then adjust our projects around these limits, asking people to stand in certain places or wear certain colors so the system can pick them up, which can make the computer seem smarter than it is. The reading also made me realize that even these basic choices (what to track, what to ignore) already decide which movements and bodies are visible in an artwork and which are not.

The author sounds very hopeful about giving artists and beginners easy tools for using computer vision, and this focus on access feels positive but also a bit one-sided now. The text treats these methods mainly as creative building blocks, but today we know similar techniques are used in security cameras, public-space monitoring, and commercial tracking. Because of that, the cheerful tone around “playful” tracking feels incomplete to me since it does not fully address how normal it can make constant watching feel. The reading did not change my view that computer vision is tied to power and control, but it did sharpen it by showing how quickly simple tools in art and education can be connected to serious surveillance uses. I finished feeling that if we use these techniques in interactive art now, we should make engaging, responsive pieces and try to reveal and question the act of tracking itself.