Midterm 3 Progress: ROOFTOP RUSH

The Concept

ROOFTOP RUSH is a side-scrolling parkour runner built in p5.js. The player controls a free-runner crossing a city skyline at dusk. The city scrolls to the right at increasing speed. The player must jump between rooftops, avoid obstacles, and collect coins. Each run generates a different sequence of buildings, gaps, and obstacles, so no two runs are the same.

The central idea is one core mechanic: the more points you earn, the farther your jumps carry you. Score is not just a number in the corner. It directly changes how far the player can jump. Early in a run, jumps are short and the player must plan each crossing carefully. As the score grows, the jumps grow with it. The player gains the ability to clear gaps that were not possible at the start. At the same time, the world speeds up. The game becomes harder and more powerful at once. The tension between those two forces is what makes each run feel urgent.

The planned interactive features are:

  • Grapple Hook (G key): A crane will spawn automatically over any gap that is too wide to jump. Pressing G will lock onto the crane and swing the player across.
  • Wall-Run (Up key on a wall): Touching a wall will trigger a wall-slide. Holding Up will convert it into a wall-run, carrying the player upward before launching off.
  • Trick System (F for flip, R for spin): Performing tricks in mid-air will award bonus points. Chaining multiple tricks in one jump will multiply the reward.
  • Slide Kick: Sliding into certain obstacles will destroy them and award points instead of dealing damage. This turns a defensive move into an offensive one.
  • Upgrade Shop: Coins will carry over between runs. The player will spend them on permanent upgrades such as stronger jumps, longer dashes, or a larger coin magnet range.
  • Day and Night Cycle: The sky will shift from sunset to night over time. Stars will appear and a helicopter with a spotlight will patrol the skyline after dark.
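As a sketch of how the trick system's chaining reward might be scored (the base value and the linear ramp are placeholders of mine, not final game numbers):

```javascript
// Hypothetical trick-chain scoring: each extra trick in one jump is
// worth more than the last, so chains multiply the reward.
// TRICK_BASE and the linear ramp are placeholder assumptions.
const TRICK_BASE = 50;

function trickChainScore(numTricks) {
  let total = 0;
  for (let i = 1; i <= numTricks; i++) {
    total += TRICK_BASE * i; // 1st trick: 50, 2nd: +100, 3rd: +150, ...
  }
  return total;
}
```

Under these numbers a three-trick chain pays 50 + 100 + 150 = 300 points rather than a flat 150, which is what makes chaining worth the mid-air risk.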

The Riskiest Part: The Jump Curve

The most uncertain part of this project is the score-to-jump-force progression curve. This mechanic is the entire point of the game. If the curve is wrong, nothing else works. If it is too flat, the player will not notice the progression. If it is too steep, the player will overshoot buildings and the game will break.

The challenge is not technical. It is perceptual. Jump force is measured in pixels per frame. That number has no intuitive meaning to a player. The curve needs to satisfy three conditions:

  1. The change must be noticeable early. A player who earns 500 points should feel a real difference in jump distance.
  2. It must plateau at high scores. The growth must slow down so the game stays controllable.
  3. The maximum jump height must stay within the bounds of the level. Buildings differ in height by at most 90px. The widest gap will be 180px.

I plan to use a logarithmic curve. Logarithms grow quickly near zero and flatten at large values, which satisfies the first two conditions; a hard cap on the force handles the third. The formula will be:

jumpForce = max( BASE_JUMP - K * ln(1 + score) , MAX_JUMP )

Planned constants: BASE_JUMP = -11.0, K = 0.4, MAX_JUMP = -18.5. The negative sign follows the p5.js convention where upward velocity is negative; because both values are negative, max() acts as a floor that stops the force once it reaches MAX_JUMP.

To test this before building the game, I wrote a standalone sketch. It plots jump height in pixels against score so I can read the curve visually and check the numbers at key milestones.

// Risk-reduction test sketch
// Paste into p5.js editor to visualize the jump progression curve
// before writing any game logic

const BASE_JUMP = -11.0;
const MAX_JUMP  = -18.5;
const K         = 0.4;
const GRAVITY   = 0.62;

function getJumpForce(score) {
  return max(BASE_JUMP - K * log(1 + score), MAX_JUMP);
}

// Physics: h = v^2 / (2 * gravity)
function jumpHeight(score) {
  let v = abs(getJumpForce(score));
  return (v * v) / (2 * GRAVITY);
}

function setup() {
  createCanvas(700, 400);
}

function draw() {
  background(20, 20, 30);

  // axis labels
  fill(180); noStroke(); textSize(12);
  text("Score ->", 600, 390);
  text("^ Jump Height (px)", 10, 20);

  // reference lines
  stroke(60, 60, 80);
  for (let h = 50; h <= 300; h += 50) {
    let y = map(h, 0, 300, height - 40, 20);
    line(40, y, width - 20, y);
    fill(100); noStroke(); text(h + "px", 2, y + 4);
    stroke(60, 60, 80);
  }

  // curve
  stroke(255, 160, 40);
  strokeWeight(2.5);
  noFill();
  beginShape();
  for (let score = 0; score <= 10000; score += 50) {
    let x = map(score, 0, 10000, 40, width - 20);
    let y = map(jumpHeight(score), 0, 300, height - 40, 20);
    vertex(x, y);
  }
  endShape();

  // milestone markers
  let milestones = [0, 500, 1000, 2500, 5000, 10000];
  for (let s of milestones) {
    let x = map(s, 0, 10000, 40, width - 20);
    let h = jumpHeight(s);
    let y = map(h, 0, 300, height - 40, 20);
    stroke(255, 80, 80); fill(255, 80, 80); ellipse(x, y, 7);
    noStroke(); fill(220);
    text("s=" + s + "\n" + nf(h, 0, 1) + "px", x - 10, y - 16);
  }

  noLoop();
}

The sketch produces the following numbers:

Score     Jump height   What the player will feel
0         97.6 px       Short. The player must judge each gap carefully.
500       146.7 px      Noticeably higher. The reward is felt immediately.
1,000     152.8 px      Confident. Medium gaps are now comfortable.
2,500     161.0 px      Strong. Most gaps are within reach.
5,000     167.4 px      Powerful. Wide gaps feel manageable.
10,000    173.9 px      Near the ceiling. The curve has flattened.

The hard cap at MAX_JUMP = -18.5 gives a maximum jump height of 276px (18.5² / (2 × 0.62)). That stays within the maximum building height of 360px, so a player at any score will never jump off screen, and the widest gap in the level will always be crossable. These numbers confirm the curve is safe to use before writing a single line of game logic.

The second risk is procedural level generation. A bad sequence could produce an impossible gap or a long boring flat stretch. To address this, I will clamp the height difference between adjacent buildings to 90px. I will also write a query function that automatically places a crane anchor over any gap wider than 110px. The grapple hook will always be reachable from that gap, so no run will ever be blocked by the level generator.
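Those two safeguards can be sketched as pure functions (constant names are mine, and the real generator will be more involved):

```javascript
// Safeguards for the level generator described above (illustrative).
const MAX_HEIGHT_DIFF = 90;   // px: max height change between adjacent buildings
const CRANE_GAP_LIMIT = 110;  // px: gaps wider than this get a crane anchor

// Clamp the next rooftop height to within 90px of the previous one.
function nextRoofHeight(prevHeight, desiredHeight) {
  const lo = prevHeight - MAX_HEIGHT_DIFF;
  const hi = prevHeight + MAX_HEIGHT_DIFF;
  return Math.min(Math.max(desiredHeight, lo), hi);
}

// Query: does this gap need a crane so no run is ever a dead end?
function needsCrane(gapWidth) {
  return gapWidth > CRANE_GAP_LIMIT;
}
```

Keeping the constraints in small pure functions like these also means they can be unit-tested without running the game loop.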

Next Steps

The concept and design are clear. The riskiest algorithm has been tested and validated. The next step is to build the full game system: the player state machine, the building generator, the collision detection, the scoring logic, and the upgrade shop.


p5.js v1.9.0 · February 2026 · Intro to IM, NYUAD

Week 5 – Creative Reading Reflection

In “Computer Vision for Artists and Designers,” Golan Levin explains that computers do not see like humans. I realized that computer vision is not really vision. It is measurement.

When I look at a room, I see people, emotions, and context. A computer sees pixels. It compares brightness values. It detects motion between frames. It subtracts a stored background. Techniques like frame differencing and thresholding reduce the world to contrast and change. The computer does not understand meaning. It only processes differences in light and color.
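To make that concrete: frame differencing reduces to a per-pixel comparison. Here is a toy version, with grayscale arrays standing in for video frames:

```javascript
// Toy frame differencing: mark a pixel as "motion" (255) when its
// brightness changed by more than a threshold between two frames.
function frameDifference(prevFrame, currFrame, threshold = 30) {
  return currFrame.map((v, i) =>
    Math.abs(v - prevFrame[i]) > threshold ? 255 : 0
  );
}
```

The machine never knows what moved; it only knows that brightness changed, which is exactly Levin's point.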

This difference forces artists to redesign the environment. If the lighting changes, the system fails. If a person stands still, motion detection fails. So artists add backlights. They use white walls. They create strong contrast. In Videoplace by Myron Krueger, participants stand in front of a bright background so their silhouettes are easy to track. The interaction feels magical. But it depends on careful staging for the machine.

I was also struck by works about surveillance. In Sorting Daemon, David Rokeby reduces people to color patches. The system does not know who they are. It only sorts hues and shapes. This makes me uncomfortable. It shows how easily people can be turned into data.

For me, the most important insight is this: computer vision changes not only how machines see us, but how we design spaces for machines to see.

Week 4 – Data Visualization

For this week’s assignment, I built an animated visualization of global temperature data from 1880 to today. The sketch fetches real data from NASA’s GISS API and draws it as a bar chart. Blue bars mean cooler-than-average years. Red bars mean warmer-than-average years. As the bars animate in, poetic phrases appear based on the temperature, giving the data a human, storytelling feel.

Controls: Click to pause/play · Hover bars for details · Press R to restart · Press S to save · Spacebar to toggle

Code I Am Proud Of

The part I am most proud of is how the bars get their color. Instead of just picking “blue” or “red,” I used lerpColor() to blend between shades based on the actual temperature value. Colder years get a deeper blue, and warmer years get a more intense red. It makes the warming trend really pop visually:

// Color: blue for cold, red for warm
let barColor;
if (d.anomaly < 0) {
  barColor = lerpColor(
    color(70, 130, 200, 200),
    color(40, 80, 160, 200),
    map(d.anomaly, 0, -0.5, 0, 1, true)  // clamp so off-scale years stay in the palette
  );
} else {
  barColor = lerpColor(
    color(200, 100, 70, 200),
    color(240, 60, 40, 220),
    map(d.anomaly, 0, 1.5, 0, 1, true)  // clamp for anomalies beyond +1.5
  );
}

I also like the generative text system. Each time a new data point appears, the sketch picks a phrase from a pool that matches the temperature: cold, warm, or neutral. It is a simple idea but it adds a lot of personality to the piece:

function triggerPhrase(anomaly) {
  if (millis() - lastPhraseTime < 1200) return;

  let phrases;
  if (anomaly < -0.1) {
    phrases = coldPhrases;
  } else if (anomaly > 0.3) {
    phrases = warmPhrases;
  } else {
    phrases = neutralPhrases;
  }

  activePhrase = random(phrases);
  phraseTargetAlpha = 255;
  lastPhraseTime = millis();
}

What I Learned

Working with a real API changed everything. The full 145-year dataset tells a story on its own. You can see the blue bars dominate the early decades, then the reds creep in and accelerate. I did not need to editorialize; the shape of the data is the narrative. The hardest part was dealing with CORS issues and making sure broken data rows do not crash the sketch.
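The defensive parsing can be as simple as rejecting any row that does not yield two finite numbers. The column layout here is an assumption for illustration, not the exact GISS format:

```javascript
// Defensive row parsing: skip headers, blanks, and placeholder values
// ("***" appears in some NASA tables) so one bad row never crashes
// the sketch. Assumed layout: [year, anomaly].
function parseRow(row) {
  const year = parseInt(row[0], 10);
  const anomaly = parseFloat(row[1]);
  if (!Number.isFinite(year) || !Number.isFinite(anomaly)) return null;
  return { year, anomaly };
}

// Drop the nulls so downstream drawing code only ever sees clean data.
function parseRows(rows) {
  return rows.map(parseRow).filter(d => d !== null);
}
```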


Data: NASA GISS Surface Temperature Analysis · Built with p5.js · February 2026

Week 4 – Reading Response

One thing that drives me crazy is self-checkout and checkout lanes in general. Stores build 30 checkout stations but only use two. Now we have self-checkout, and still the process is confusing and slow. This problem is not about the technology itself. It is about poor design. The system does not clearly show how to use it efficiently. It lacks good signifiers. Simple visual cues, clear instructions, or intuitive layout would solve the problem. The action should be easy to understand without guessing.

From the reading, I learned that good design depends on affordances, signifiers, mapping, feedback, and clear conceptual models. In the case of self-checkout, the affordance exists because you can scan items and pay. But the signifiers are missing. The design does not communicate the correct sequence of actions. The mapping between the screen, the scanner, and the bagging area is also unclear. If designers focused more on discoverability and understanding, these everyday frustrations would not happen.

I can apply these principles to interactive media. In digital design, buttons, gestures, and workflows should clearly show what they do. Users should not guess how to swipe, click, or complete a task. Good feedback is also important. When a user takes an action, the system should respond immediately. Finally, the interface should support a simple conceptual model. Users should understand how the system works just by looking at it and trying it. If I design with clear signifiers, natural mapping, and strong feedback, I can create interactive work that feels simple, clear, and easy to use.

Week 3 – Creative Reading Response

I think a strongly interactive system is one where the user and the system affect each other. It is not just clicking and seeing something move. It feels like a two way conversation. The system responds fast and clearly. The user understands what caused the change. The user’s choices matter and can change what happens next.

In my p5 sketches, I mostly use mouse movement and clicks. The shapes or colors react, but the interaction is sometimes very basic. It does not always feel deep. To improve this, I can make the sketch remember what the user did before. The drawing can change over time based on their actions. This will make it feel more alive and less repetitive.

I also want to give better feedback. The system should clearly show that it is responding. Small changes in color, size, or movement can help. I can also use more controls like the keyboard. This will give the user more ways to interact. I want my sketches to feel like small worlds the user can explore, not just simple animations.
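One concrete way to add that memory, sketched as pure helpers so the idea is visible outside p5 (the fade rate and floor are arbitrary choices of mine):

```javascript
// "Memory" for a sketch: remember every interaction and let old ones
// fade without ever disappearing. In p5, mousePressed() would call
// remember(mouseX, mouseY, millis() / 1000) and draw() would use
// alphaFor() on each stored point.
const memory = [];

function remember(x, y, timeSec) {
  memory.push({ x, y, t: timeSec });
}

// Fade 20 alpha units per second, but never below a faint floor of 30,
// so the trace of past actions stays part of the drawing.
function alphaFor(point, nowSec) {
  const age = nowSec - point.t;
  return Math.max(30, 255 - age * 20);
}
```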

Cosmic Mandala — Radial Symmetry Through Code

Concept

Cosmic Mandala explores radial symmetry and circular motion through generative code, drawing inspiration from 1970s psychedelic computer graphics and ancient mandala symbolism. Unlike traditional static mandalas, this piece breathes and rotates, with each layer moving independently in mesmerizing counter-rotations.

The artwork uses nested loops within loops. The outer loop cycles through 8 concentric layers, while inner loops populate each layer with 36-72 elements (lines, petals, or dots). This creates hundreds of animated elements from just a few lines of code, demonstrating the multiplicative power of loops in generative art.
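The multiplicative structure is easy to see in isolation (a stripped-down version of the idea, not the actual sketch code):

```javascript
// Nested loops: the outer loop walks the concentric layers, the inner
// loop places elements around each ring. 8 layers × 36 elements = 288
// animated objects from two short loops.
function mandalaElements(numLayers, perLayer) {
  const elements = [];
  for (let layer = 0; layer < numLayers; layer++) {
    for (let i = 0; i < perLayer; i++) {
      elements.push({ layer, angle: (360 / perLayer) * i });
    }
  }
  return elements;
}
```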

The slowly shifting hue values and counter-rotating layers create an almost hypnotic effect, inviting viewers into a meditative space.

Code Highlight

I am particularly proud of the central spiral animation, where a single loop creates two intertwining spirals using polar coordinates and Perlin noise:

function drawCentralSpiral() {
  let numPoints = 150;
  let maxInnerRadius = maxRadius * 0.2;
  
  // First spiral
  beginShape();
  for (let i = 0; i < numPoints; i++) {
    let angle = i * 8 + t * 2;  // angle increases with each point + time
    let r = map(i, 0, numPoints, 0, maxInnerRadius);  // radius grows outward
    
    // Add organic wobble with Perlin noise
    let wobble = noise(i * 0.1, t * 0.02) * 10;
    r += wobble;
    
    // Convert polar to cartesian
    let x = cos(angle) * r;
    let y = sin(angle) * r;
    curveVertex(x, y);
  }
  endShape();
  
  // Counter-spiral (rotated 180°)
  rotate(180);
  // ... mirrors the pattern
}

This creates a dynamic yin-yang effect. The key insight: angle = i * 8 + t * 2 means each point is offset by 8 degrees, AND the entire spiral rotates via t * 2. The noise wobble prevents it from looking mechanical. One loop, 150 iterations, endless motion.

Live Sketch

Interaction: Click to pause/resume | Press 'S' to save | Press 'R' to reset animation

Reflection & Future Directions

Week 3 taught me the power of polar coordinates in generative art. While Week 2 used Cartesian grids (x, y), this piece thinks in circles (angle, radius). This shift unlocked organic, flowing patterns that feel more natural despite being equally mathematical.

The most challenging part was balancing chaos and order. Too many rotating layers made viewers dizzy; too few felt static. I settled on 8 layers with alternating rotation directions, creating visual rhythm without overwhelming motion. The counter-rotating spirals provide a focal anchor point.

Technical Insights:

  • Nested loops are multiplicative: 8 layers × 36 elements = 288 animated objects from 2 simple loops
  • Modulo creates patterns: layer % 3 cycles through 3 different pattern types seamlessly
  • HSB color mode enables smooth shifts: incrementing hue creates rainbow cycles naturally
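The modulo trick from the second bullet, in isolation:

```javascript
// layer % 3 cycles three pattern types forever, with no branching:
// layers 0, 3, 6 get "lines"; 1, 4, 7 get "petals"; 2, 5 get "dots".
const PATTERN_TYPES = ["lines", "petals", "dots"];

function patternFor(layer) {
  return PATTERN_TYPES[layer % 3];
}
```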

Ideas for Future Work:

  • Interactive Meditation Tool: Mouse distance from center controls rotation speed. Moving closer would slow down the animation, creating a breath-paced meditation aid
  • Generative Variations: Add keyboard triggers (1-9) to switch between preset moods: calm (blues/slow), energetic (warm colors/fast), or chaotic (random colors/opposite rotations)
  • Sacred Geometry: Implement golden ratio (φ) and Fibonacci sequences in radius calculations for mathematically "perfect" proportions
  • Particle Systems: Replace some static elements with particles that orbit along the layers, adding another dimension of motion
  • 3D Depth: Use WebGL to extrude the mandala into 3D space, creating a rotating torus or tunnel effect
  • Sound Generation: Map each layer to a different musical note/frequency, turning the visual mandala into an ambient soundscape using Tone.js
  • Export for Projection: Optimize for full-screen projection mapping in gallery installations or meditation spaces

Created with p5.js | February 2026

Reading Reflection

Watching this talk, what came to my mind was how deeply rooted the tension between chaos and order is in both art and human history. I found myself reflecting on how, for centuries, artists were expected to create structure and clarity in response to a world perceived as chaotic. This made sense within a Newtonian worldview, where the universe felt stable, predictable, and governed by clear rules. As the speaker moved through moments of scientific disruption and historical trauma, especially the impact of World War I, I began to see how those events made strict order feel insufficient. It became clear to me that the shift toward chance and randomness in art was not a rejection of meaning, but an attempt to better reflect the uncertainty and complexity of lived experience.

What struck me most was the idea that chance is never truly blind. As I watched the examples of algorithmic and generative work, I kept thinking about how these systems are carefully designed yet allowed to behave unpredictably. This balance between control and surrender felt deeply human to me. I realized that using randomness is not about giving up responsibility as an artist, but about trusting processes to reveal patterns that cannot be forced. In that sense, chaos becomes a collaborator rather than a threat. It allows art to remain alive over time, continuously changing while still holding a coherent identity.

Week 2 – Art Exploration

Concept

Digital Terrain is a generative artwork inspired by the golden age of computer graphics from the 1970s, when plotters and early systems created mesmerizing patterns through mathematical precision. The piece explores repetition with variation.

The composition uses a 6×5 grid structure where each cell contains one of five distinct generative patterns: vertical lines that breathe, organic blob contours, flowing topographic lines, mixed pattern cutouts, and drifting wave forms. The aesthetic echoes vintage plotter art with its warm cream lines on deep black, while the subtle animations bring a meditative, living quality to the static grid.
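The grid assignment can be sketched like this (pattern names abbreviated; the injectable picker is only there to make the example deterministic and testable):

```javascript
// A 6×5 grid where each cell is assigned one of five pattern types.
const PATTERNS = ["lines", "blobs", "topo", "cutouts", "waves"];

function buildGrid(cols, rows,
    pick = () => Math.floor(Math.random() * PATTERNS.length)) {
  const grid = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      grid.push({ col, row, pattern: PATTERNS[pick()] });
    }
  }
  return grid;
}
```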

The work asks a question: how can repetition avoid monotony? Each viewing reveals new relationships between adjacent cells, creating an ever-evolving visual conversation.

Code Highlight

I am particularly proud of the organic blob generation using Perlin noise mapped to polar coordinates. This technique creates smooth, natural-looking shapes that feel hand-drawn rather than computer-generated:

function drawOrganicBlob(w, h) {
  let offsetX = random(1000);
  let offsetY = random(1000);
  
  // Map circle to noise space for organic shapes
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    let xoff = map(cos(a), -1, 1, 0, 2);
    let yoff = map(sin(a), -1, 1, 0, 2);
    let r = map(noise(offsetX + xoff, offsetY + yoff), 0, 1, w * 0.15, w * 0.4);
    let x = w/2 + r * cos(a);
    let y = h/2 + r * sin(a);
    curveVertex(x, y);
  }
  endShape(CLOSE);
  
  // Nested contours for depth
  let numContours = floor(random(2, 5));
  for (let c = 0; c < numContours; c++) {
    let shrink = map(c, 0, numContours, 0.9, 0.3);
    // ... creates inner rings
  }
}

By converting circular motion (cos(a), sin(a)) into noise coordinates, we get perfectly smooth blobs that change shape naturally. The nested contours add depth, mimicking topographic maps or growth rings. This marriage of geometric precision and natural randomness captures the essence of generative art.

Live Sketch

Interaction: Click canvas to regenerate | Press 'S' to save artwork

Reflection & Future Directions

This project taught me how loops transform code into art. Instead of drawing one shape, a loop draws hundreds with variations. The challenge was balancing randomness with aesthetic coherence.

The animation addition was particularly enlightening. At first, I had everything static (noLoop()), but adding a time dimension (t += 0.008) brought the piece to life.

Ideas for Future Work:

  • Color Evolution: Slowly shift the color palette over time, transitioning from cream to blues to warm oranges, creating day/night cycles
  • Audio Reactivity: Use microphone input to make patterns respond to sound. For example, wave amplitude increases with volume, blobs pulse with bass frequencies
  • Cell Communication: Let adjacent cells influence each other's patterns, creating waves of change across the grid rather than isolated behaviors
  • Export System: Generate high-resolution (4000×3000px) versions for actual printing/framing, exploring the digital-to-physical art transition
  • 3D Depth: Add subtle shadows or perspective to create the illusion that some cells are recessed or raised, adding architectural dimension
  • Pattern Memory: Track which patterns appear most frequently and gradually reduce them, ensuring ongoing visual freshness even across long gallery displays

Created with p5.js | February 2026

Assignment 1: Self Portrait

Concept

For this project, I wanted to create more than just a static self-portrait. My concept was to build a “digital puppet”: a TV show host personality that lives inside the browser.
The goal was to achieve a level of “aliveness” using generative motion (breathing, blinking, head bobbing) and interactivity.

I wanted the character to react to the user’s cursor, shifting from professional composure to excitement depending on how the viewer interacts with the canvas.

I am particularly proud of the interactive expression system. Here is the logic that detects where the mouse is and calculates the “target” emotions, blending them smoothly over time:

 

// === INTERACTIVE EXPRESSIONS BASED ON MOUSE POSITION ===

// Define interaction zones
let mouseInSmileZone = mouseY > height * 0.6 && mouseY < height * 0.85; 

// SMILE: If mouse is in the lower portion, target a big smile
if (mouseInSmileZone) {
  targetSmile = map(mouseY, height * 0.6, height * 0.85, 0.3, 1.0);
  targetEyebrowRaise = 0.3; // Slight eyebrow raise when smiling
} else {
  targetSmile = 0.1; // Return to subtle default smile
  targetEyebrowRaise = 0;
}

// Smooth transitions to make it feel natural
smileAmount = lerp(smileAmount, targetSmile, 0.1);
eyebrowRaise = lerp(eyebrowRaise, targetEyebrowRaise, 0.1);

 

This code lets the smile grow gradually as the mouse moves lower while the eyebrows lift slightly, making the reaction feel genuine rather than robotic.

Reflection and Future Improvements

This process taught me how much detail goes into simulating “life.” A simple sine wave can simulate breathing, and a random timer can simulate blinking, but combining them is what creates the illusion of a living character. One significant challenge was the hand: getting the fingers to look anatomical while drawing them with code.
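The two “life signals” mentioned above, reduced to pure helpers (the rates are illustrative; in the sketch they would feed the character’s chest offset and eyelid state):

```javascript
// Breathing: a slow sine wave. At 0.25 Hz the chest rises and falls
// once every four seconds, offset by up to `amplitude` pixels.
function breathOffset(tSec, amplitude = 3) {
  return Math.sin(tSec * 2 * Math.PI * 0.25) * amplitude;
}

// Blinking: a random-interval timer. Returns true on the frame where a
// blink should fire, then schedules the next one 2–6 seconds later.
function makeBlinker(rand = Math.random) {
  let nextBlink = 2 + rand() * 4;
  return function shouldBlink(tSec) {
    if (tSec >= nextBlink) {
      nextBlink = tSec + 2 + rand() * 4;
      return true;
    }
    return false;
  };
}
```

Layering a periodic signal over an aperiodic one is what keeps the motion from looking mechanical.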

For future improvements, I would like to add Audio Reactivity. Perhaps connect the mouth movement to the microphone so the character can “lip sync” to my voice.