Week 4 – Data Visualization

Data Visualization

For this week’s assignment, I created an animated visualization of global temperature data from 1880 to today. The sketch fetches real data from NASA’s GISS API and draws it as a bar chart. Blue bars mark cooler-than-average years; red bars mark warmer-than-average years. As the bars animate in, poetic phrases appear based on the temperature, giving the data a human, storytelling feel.

Controls: Click to pause/play · Hover bars for details · Press R to restart · Press S to save · Spacebar to toggle

Code I Am Proud Of

The part I am most proud of is how the bars get their color. Instead of just picking “blue” or “red,” I used lerpColor() to blend between shades based on the actual temperature value. Colder years get a deeper blue, and warmer years get a more intense red. It makes the warming trend really pop visually:

// Color: blue for cold, red for warm
let barColor;
if (d.anomaly < 0) {
  barColor = lerpColor(
    color(70, 130, 200, 200),
    color(40, 80, 160, 200),
    map(d.anomaly, 0, -0.5, 0, 1)
  );
} else {
  barColor = lerpColor(
    color(200, 100, 70, 200),
    color(240, 60, 40, 220),
    map(d.anomaly, 0, 1.5, 0, 1)
  );
}

I also like the generative text system. Each time a new data point appears, the sketch picks a phrase from a pool that matches the temperature: cold, warm, or neutral. It is a simple idea but it adds a lot of personality to the piece:

function triggerPhrase(anomaly) {
  if (millis() - lastPhraseTime < 1200) return;

  let phrases;
  if (anomaly < -0.1) {
    phrases = coldPhrases;
  } else if (anomaly > 0.3) {
    phrases = warmPhrases;
  } else {
    phrases = neutralPhrases;
  }

  activePhrase = random(phrases);
  phraseTargetAlpha = 255;
  lastPhraseTime = millis();
}

What I Learned

Working with a real API changed everything. The full 145-year dataset tells a story on its own. You can see the blue bars dominate the early decades, then the reds creep in and accelerate. I did not need to editorialize; the shape of the data is the narrative. The hardest part was dealing with CORS issues and making sure broken data rows do not crash the sketch.
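
A minimal sketch of the kind of defensive loading I mean. The URL and the {year, anomaly} field names below are placeholders for illustration, not the exact NASA GISS endpoint or schema my sketch uses:

// Hypothetical illustration of defensive data loading; DATA_URL and the
// {year, anomaly} fields are placeholders, not the real GISS endpoint/schema.
const DATA_URL = "https://example.com/giss-anomalies.json";
let rows = [];
let loadError = null;

function setup() {
  createCanvas(900, 500);
  fetch(DATA_URL)
    .then((response) => response.json())
    .then((raw) => {
      // Keep only rows with a numeric year and anomaly so one broken
      // entry cannot crash the bar animation later.
      rows = raw.filter(
        (d) => d && Number.isFinite(d.year) && Number.isFinite(d.anomaly)
      );
    })
    .catch((err) => {
      // CORS and network failures end up here instead of throwing.
      loadError = err;
    });
}

function draw() {
  background(15);
  fill(230);
  if (loadError) {
    text("Could not load temperature data.", 20, 30);
  } else if (rows.length === 0) {
    text("Loading data…", 20, 30);
  }
  // ...bar drawing and phrase logic would go here
}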


Data: NASA GISS Surface Temperature Analysis · Built with p5.js · February 2026

Week 4 – Reading Response

One thing that drives me crazy is self-checkout and checkout lanes in general. Stores build 30 checkout stations but only use two. Now we have self-checkout, and still the process is confusing and slow. This problem is not about the technology itself. It is about poor design. The system does not clearly show how to use it efficiently. It lacks good signifiers. Simple visual cues, clear instructions, or intuitive layout would solve the problem. The action should be easy to understand without guessing.

From the reading, I learned that good design depends on affordances, signifiers, mapping, feedback, and clear conceptual models. In the case of self-checkout, the affordance exists because you can scan items and pay. But the signifiers are missing. The design does not communicate the correct sequence of actions. The mapping between the screen, the scanner, and the bagging area is also unclear. If designers focused more on discoverability and understanding, these everyday frustrations would not happen.

I can apply these principles to interactive media. In digital design, buttons, gestures, and workflows should clearly show what they do. Users should not guess how to swipe, click, or complete a task. Good feedback is also important. When a user takes an action, the system should respond immediately. Finally, the interface should support a simple conceptual model. Users should understand how the system works just by looking at it and trying it. If I design with clear signifiers, natural mapping, and strong feedback, I can create interactive work that feels simple, clear, and easy to use.

Week 3 – Creative Reading Response

I think a strongly interactive system is one where the user and the system affect each other. It is not just clicking and seeing something move. It feels like a two-way conversation. The system responds fast and clearly. The user understands what caused the change. The user’s choices matter and can change what happens next.

In my p5 sketches, I mostly use mouse movement and clicks. The shapes or colors react, but the interaction is sometimes very basic. It does not always feel deep. To improve this, I can make the sketch remember what the user did before. The drawing can change over time based on their actions. This will make it feel more alive and less repetitive.

I also want to give better feedback. The system should clearly show that it is responding. Small changes in color, size, or movement can help. I can also use more controls like the keyboard. This will give the user more ways to interact. I want my sketches to feel like small worlds the user can explore, not just simple animations.

Cosmic Mandala — Radial Symmetry Through Code

Concept

Cosmic Mandala explores radial symmetry and circular motion through generative code, drawing inspiration from 1970s psychedelic computer graphics and ancient mandala symbolism. Unlike traditional static mandalas, this piece breathes and rotates, with each layer moving independently in mesmerizing counter-rotations.

The artwork uses nested loops within loops. The outer loop cycles through 8 concentric layers, while inner loops populate each layer with 36-72 elements (lines, petals, or dots). This creates hundreds of animated elements from just a few lines of code, demonstrating the multiplicative power of loops in generative art.
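
A stripped-down sketch of that nested-loop structure, with shapes and counts simplified for illustration (this is not the full mandala code):

// Illustrative nested-loop skeleton: one outer loop for layers,
// one inner loop for the elements on each layer.
function setup() {
  createCanvas(600, 600);
  angleMode(DEGREES);        // the piece does its angle math in degrees
  noFill();
  stroke(255, 210, 170);
}

function draw() {
  background(10);
  translate(width / 2, height / 2);
  for (let layer = 0; layer < 8; layer++) {        // 8 concentric layers
    const radius = 40 + layer * 30;
    const count = 36 + layer * 5;                  // 36-71 elements per layer
    const direction = layer % 2 === 0 ? 1 : -1;    // alternating counter-rotation
    push();
    rotate(direction * frameCount * 0.2);
    for (let i = 0; i < count; i++) {              // elements around this ring
      const angle = (360 / count) * i;
      circle(cos(angle) * radius, sin(angle) * radius, 6);
    }
    pop();
  }
}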

The slowly shifting hue values and counter-rotating layers create an almost hypnotic effect, inviting viewers into a meditative space.

Code Highlight

I am particularly proud of the central spiral animation, where a single loop creates two intertwining spirals using polar coordinates and Perlin noise:

function drawCentralSpiral() {
  let numPoints = 150;
  let maxInnerRadius = maxRadius * 0.2;
  
  // First spiral
  beginShape();
  for (let i = 0; i < numPoints; i++) {
    let angle = i * 8 + t * 2;  // angle increases with each point + time
    let r = map(i, 0, numPoints, 0, maxInnerRadius);  // radius grows outward
    
    // Add organic wobble with Perlin noise
    let wobble = noise(i * 0.1, t * 0.02) * 10;
    r += wobble;
    
    // Convert polar to cartesian
    let x = cos(angle) * r;
    let y = sin(angle) * r;
    curveVertex(x, y);
  }
  endShape();
  
  // Counter-spiral (rotated 180°); assumes angleMode(DEGREES) is set
  rotate(180);
  // ... mirrors the pattern
}

This creates a dynamic yin-yang effect. The key insight: angle = i * 8 + t * 2 means each point is offset by 8 degrees, AND the entire spiral rotates via t * 2. The noise wobble prevents it from looking mechanical. One loop, 150 iterations, endless motion.

Live Sketch

Interaction: Click to pause/resume | Press 'S' to save | Press 'R' to reset animation

Reflection & Future Directions

Week 3 taught me the power of polar coordinates in generative art. While Week 2 used Cartesian grids (x, y), this piece thinks in circles (angle, radius). This shift unlocked organic, flowing patterns that feel more natural despite being equally mathematical.

The most challenging part was balancing chaos and order. Too many rotating layers made viewers dizzy; too few felt static. I settled on 8 layers with alternating rotation directions, creating visual rhythm without overwhelming motion. The counter-rotating spirals provide a focal anchor point.

Technical Insights:

  • Nested loops are multiplicative: 8 layers × 36 elements = 288 animated objects from 2 simple loops
  • Modulo creates patterns: layer % 3 cycles through 3 different pattern types seamlessly
  • HSB color mode enables smooth shifts: incrementing hue creates rainbow cycles naturally
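
A simplified illustration of the modulo and HSB points above; the three "pattern types" are reduced to primitive shapes rather than my actual pattern functions:

// Modulo pattern switching plus HSB hue cycling, in miniature.
function setup() {
  createCanvas(600, 600);
  angleMode(DEGREES);
  colorMode(HSB, 360, 100, 100);   // hue, saturation, brightness
  noFill();
}

function draw() {
  background(0);
  translate(width / 2, height / 2);
  for (let layer = 0; layer < 8; layer++) {
    const radius = 40 + layer * 30;
    // Incrementing hue each frame drifts smoothly through the rainbow.
    stroke((frameCount * 0.5 + layer * 40) % 360, 80, 100);
    for (let i = 0; i < 36; i++) {
      const angle = i * 10;
      const x = cos(angle) * radius;
      const y = sin(angle) * radius;
      if (layer % 3 === 0) {                        // modulo cycles three pattern types
        line(0, 0, x, y);                           // spokes
      } else if (layer % 3 === 1) {
        circle(x, y, 8);                            // dots
      } else {
        arc(x, y, 14, 14, angle, angle + 180);      // petal-like arcs
      }
    }
  }
}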

Ideas for Future Work:

  • Interactive Meditation Tool: Mouse distance from center controls rotation speed. Moving closer would slow down the animation, creating a breath-paced meditation aid
  • Generative Variations: Add keyboard triggers (1-9) to switch between preset moods: calm (blues/slow), energetic (warm colors/fast), or chaotic (random colors/opposite rotations)
  • Sacred Geometry: Implement golden ratio (φ) and Fibonacci sequences in radius calculations for mathematically "perfect" proportions
  • Particle Systems: Replace some static elements with particles that orbit along the layers, adding another dimension of motion
  • 3D Depth: Use WebGL to extrude the mandala into 3D space, creating a rotating torus or tunnel effect
  • Sound Generation: Map each layer to a different musical note/frequency, turning the visual mandala into an ambient soundscape using Tone.js
  • Export for Projection: Optimize for full-screen projection mapping in gallery installations or meditation spaces

Created with p5.js | February 2026

Reading reflection

Watching this talk, what came to my mind was how deeply rooted the tension between chaos and order is in both art and human history. I found myself reflecting on how, for centuries, artists were expected to create structure and clarity in response to a world perceived as chaotic. This made sense within a Newtonian worldview, where the universe felt stable, predictable, and governed by clear rules. As the speaker moved through moments of scientific disruption and historical trauma, especially the impact of World War I, I began to see how those events made strict order feel insufficient. It became clear to me that the shift toward chance and randomness in art was not a rejection of meaning, but an attempt to better reflect the uncertainty and complexity of lived experience.

What struck me most was the idea that chance is never truly blind. As I watched the examples of algorithmic and generative work, I kept thinking about how these systems are carefully designed yet allowed to behave unpredictably. This balance between control and surrender felt deeply human to me. I realized that using randomness is not about giving up responsibility as an artist, but about trusting processes to reveal patterns that cannot be forced. In that sense, chaos becomes a collaborator rather than a threat. It allows art to remain alive over time, continuously changing while still holding a coherent identity.

Week 2 – Art Exploration

Concept

Digital Terrain is a generative artwork inspired by the golden age of computer graphics from the 1970s, when plotters and early systems created mesmerizing patterns through mathematical precision. The piece explores repetition with variation.

The composition uses a 6×5 grid structure where each cell contains one of five distinct generative patterns: vertical lines that breathe, organic blob contours, flowing topographic lines, mixed pattern cutouts, and drifting wave forms. The aesthetic echoes vintage plotter art with its warm cream lines on deep black, while the subtle animations bring a meditative, living quality to the static grid.
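
A simplified illustration of that grid logic; the five pattern types here are reduced to placeholder primitives rather than my actual drawing functions:

// 6×5 grid where each cell receives one of five simple pattern types.
const COLS = 6;
const ROWS = 5;

function setup() {
  createCanvas(720, 600);
  stroke(245, 238, 220);        // warm cream lines on deep black
  noFill();
  noLoop();                     // static layout; click to regenerate
}

function draw() {
  background(12);
  const cellW = width / COLS;
  const cellH = height / ROWS;
  for (let col = 0; col < COLS; col++) {
    for (let row = 0; row < ROWS; row++) {
      push();
      translate(col * cellW, row * cellH);
      drawPattern(floor(random(5)), cellW, cellH);   // one of five patterns per cell
      pop();
    }
  }
}

function drawPattern(type, w, h) {
  if (type === 0) {                                  // vertical lines
    for (let x = 10; x < w - 10; x += 6) line(x, 10, x, h - 10);
  } else if (type === 1) {                           // nested ellipses (blob stand-in)
    for (let r = 8; r < min(w, h) / 2 - 8; r += 8) ellipse(w / 2, h / 2, r * 2);
  } else if (type === 2) {                           // horizontal wave lines
    for (let y = 14; y < h - 14; y += 10) {
      beginShape();
      for (let x = 10; x < w - 10; x += 6) vertex(x, y + sin(x * 0.15) * 3);
      endShape();
    }
  } else if (type === 3) {                           // dot field
    for (let x = 12; x < w - 12; x += 12)
      for (let y = 12; y < h - 12; y += 12) point(x, y);
  } else {                                           // concentric squares
    for (let s = 8; s < min(w, h) / 2 - 8; s += 10)
      rect(w / 2 - s, h / 2 - s, s * 2, s * 2);
  }
}

function mousePressed() {
  redraw();                     // click regenerates a new arrangement
}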

The work poses a question: How can repetition avoid monotony? Each viewing reveals new relationships between adjacent cells, creating an ever-evolving visual conversation.

Code Highlight

I am particularly proud of the organic blob generation using Perlin noise mapped to polar coordinates. This technique creates smooth, natural-looking shapes that feel hand-drawn rather than computer-generated:

function drawOrganicBlob(w, h) {
  let offsetX = random(1000);
  let offsetY = random(1000);
  
  // Map circle to noise space for organic shapes
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    let xoff = map(cos(a), -1, 1, 0, 2);
    let yoff = map(sin(a), -1, 1, 0, 2);
    let r = map(noise(offsetX + xoff, offsetY + yoff), 0, 1, w * 0.15, w * 0.4);
    let x = w/2 + r * cos(a);
    let y = h/2 + r * sin(a);
    curveVertex(x, y);
  }
  endShape(CLOSE);
  
  // Nested contours for depth
  let numContours = floor(random(2, 5));
  for (let c = 0; c < numContours; c++) {
    let shrink = map(c, 0, numContours, 0.9, 0.3);
    // ... creates inner rings
  }
}

By converting circular motion (cos(a), sin(a)) into noise coordinates, we get perfectly smooth blobs that change shape naturally. The nested contours add depth, mimicking topographic maps or growth rings. This marriage of geometric precision and natural randomness captures the essence of generative art.

Live Sketch

Interaction: Click canvas to regenerate | Press 'S' to save artwork

Reflection & Future Directions

This project taught me how loops transform code into art. Instead of drawing one shape, a loop draws hundreds with variations. The challenge was balancing randomness with aesthetic coherence.

The animation addition was particularly enlightening. At first, I had everything static (noLoop()), but adding a time dimension (t += 0.008) brought the piece to life.
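
In miniature, that change looked something like this (a simplified stand-in, not the actual grid code):

// Before: noLoop() rendered the composition once, statically.
// After: a slowly incremented time variable animates it.
let t = 0;

function setup() {
  createCanvas(400, 400);
  noFill();
  stroke(245, 238, 220);
}

function draw() {
  background(12);
  t += 0.008;                                  // the added time dimension
  // Any parameter driven by t now drifts slowly from frame to frame,
  // e.g. a gently "breathing" circle:
  const r = 100 + sin(t * TWO_PI) * 20;
  ellipse(width / 2, height / 2, r);
}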

Ideas for Future Work:

  • Color Evolution: Slowly shift the color palette over time, transitioning from cream to blues to warm oranges, creating day/night cycles
  • Audio Reactivity: Use microphone input to make patterns respond to sound. For example, wave amplitude increases with volume, blobs pulse with bass frequencies
  • Cell Communication: Let adjacent cells influence each other's patterns, creating waves of change across the grid rather than isolated behaviors
  • Export System: Generate high-resolution (4000×3000px) versions for actual printing/framing, exploring the digital-to-physical art transition
  • 3D Depth: Add subtle shadows or perspective to create the illusion that some cells are recessed or raised, adding architectural dimension
  • Pattern Memory: Track which patterns appear most frequently and gradually reduce them, ensuring ongoing visual freshness even across long gallery displays

Created with p5.js | February 2026

Assignment 1: Self Portrait

Concept

For this project, I wanted to create more than just a static self-portrait. My concept was to build a “digital puppet”: a TV show host personality that lives inside the browser.
The goal was to achieve a level of “aliveness” using generative motion (breathing, blinking, head bobbing) and interactivity.

I wanted the character to react to the user’s cursor, shifting from professional composure to excitement depending on how the viewer interacts with the canvas.

I am particularly proud of the interactive expression system. Here is the logic that detects where the mouse is and calculates the “target” emotions, blending them smoothly over time:

 

// === INTERACTIVE EXPRESSIONS BASED ON MOUSE POSITION ===

// Define interaction zones
let mouseInSmileZone = mouseY > height * 0.6 && mouseY < height * 0.85; 

// SMILE: If mouse is in the lower portion, target a big smile
if (mouseInSmileZone) {
  targetSmile = map(mouseY, height * 0.6, height * 0.85, 0.3, 1.0);
  targetEyebrowRaise = 0.3; // Slight eyebrow raise when smiling
} else {
  targetSmile = 0.1; // Return to subtle default smile
  targetEyebrowRaise = 0;
}

// Smooth transitions to make it feel natural
smileAmount = lerp(smileAmount, targetSmile, 0.1);
eyebrowRaise = lerp(eyebrowRaise, targetEyebrowRaise, 0.1);

 

This code allows the smile to grow gradually as the mouse moves lower, and pulls the eyelids into a “happy squint” simultaneously, making the reaction feel genuine rather than robotic.

Reflection and Future Improvements

This process taught me how much detail goes into simulating “life.” A simple sine wave can simulate breathing, and a random timer can simulate blinking, but combining them is what creates the illusion of a living character. One significant challenge was the hand: getting the fingers to look anatomical while drawing them with code.
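
A stripped-down version of those two ideas together (the names and numbers here are illustrative, not the actual portrait code):

// Minimal illustration of sine-wave breathing plus a random blink timer.
let nextBlinkTime = 0;
let eyeOpen = 1;          // 1 = fully open, 0 = closed

function setup() {
  createCanvas(400, 400);
  nextBlinkTime = millis() + random(2000, 5000);
}

function draw() {
  background(240);

  // Breathing: a slow sine wave gently scales the whole head.
  const breathe = 1 + sin(frameCount * 0.03) * 0.02;

  // Blinking: when the random timer fires, snap the eyes shut briefly,
  // then schedule the next blink at a random interval.
  if (millis() > nextBlinkTime) {
    eyeOpen = 0;
    nextBlinkTime = millis() + random(2000, 5000);
  }
  eyeOpen = lerp(eyeOpen, 1, 0.15);   // eyes ease back open after each blink

  push();
  translate(width / 2, height / 2);
  scale(breathe);
  ellipse(0, 0, 200, 240);                 // head
  ellipse(-40, -20, 30, 30 * eyeOpen);     // left eye
  ellipse(40, -20, 30, 30 * eyeOpen);      // right eye
  pop();
}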

For future improvements, I would like to add Audio Reactivity. Perhaps connect the mouth movement to the microphone so the character can “lip sync” to my voice.