MIDTERM – Bad Trip

 

 

Welcome to my midterm project, Neo-Euphoria Visions, an interactive audiovisual artwork created with p5.js. This project is an exploration into surreal, psychedelic self-portraiture, heavily inspired by the distinct visual language and emotional tone of the HBO series Euphoria. It uses your webcam to pull you into a hallucinatory world that reacts to your presence, blurring the lines between the viewer and the viewed.

The experience is a multi-layered trip. It begins with a simple invitation before slowly transforming your reality. Your image is recast in a cold, UV-and-pink color palette while a motion trail ghosts your every move. A glowing aura emanates from your silhouette, and hand-doodled stars twinkle into existence around you. The piece is designed to be both beautiful and unsettling, contrasting the cold, trippy visuals with organic, hot-red tears that bleed from your eyes. With a dynamic soundtrack and automated visual shifts, the goal is to create a mesmerizing, ever-changing, and deeply personal digital hallucination.


Live Sketch

You can experience Neo-Euphoria Visions live in your browser by clicking the link below.


Screenshots

How It Works & What I’m Proud Of

This project is built on a foundation of p5.js, but its soul lies in the integration of two powerful libraries: ml5.js for computer vision and GLSL for high-performance graphics shaders. The entire visual output, from the colors to the background effects, is rendered in a single, complex fragment shader that runs on the GPU. This was a critical technical decision that allows for multiple layers of real-time effects without the performance lag that would come from CPU-based pixel manipulation.
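
To give a sense of this architecture, here is a minimal, illustrative p5.js sketch of running a webcam feed through a single fragment shader. The file names and uniform names are placeholders, not the project’s actual code.

JavaScript

// Illustrative single-shader setup: the webcam frame is passed to a fragment shader
// as a uniform and every per-pixel effect happens on the GPU. Not the project's code.
let camShader, video;

function preload() {
  camShader = loadShader('effect.vert', 'effect.frag'); // placeholder shader files
}

function setup() {
  createCanvas(640, 480, WEBGL); // shaders require the WEBGL renderer
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  shader(camShader);
  // The webcam frame (and, in the real project, the segmentation mask) goes in as a uniform.
  camShader.setUniform('u_tex', video);
  camShader.setUniform('u_time', millis() / 1000.0);
  // One full-screen quad; all per-pixel work happens in the fragment shader.
  rect(-width / 2, -height / 2, width, height);
}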

The core mechanic involves layering several computer vision and graphical processes. First, ml5.js BodyPix creates a segmentation mask of the user, which is fed into the shader. This mask allows me to separate the scene into three distinct layers: the background, a glowing “aura” directly behind the user, and the user themselves. The shader then applies different artistic effects to each layer. Simultaneously, ml5.js FaceApi tracks facial landmarks to determine the precise location of the user’s eyes. This data is used by a custom Tear class in p5.js, which draws organic, flowing tears on a transparent overlay canvas, making them appear attached to the face. I’m particularly proud of the logic that makes the tears “follow” the eyes smoothly by interpolating their position between frames, which prevents the jittery tracking that can often occur.

JavaScript

// A snippet from the Tear class showing the smooth position update
updatePosition(newX, newY) {
    if (this.path.length === 0) return;
    let head = this.path[0];
    
    // Use lerp to smoothly move the tear's origin towards the new eye position.
    // This prevents jittering if the face detection is noisy.
    let targetPos = createVector(newX, newY);
    let smoothedPos = p5.Vector.lerp(head, targetPos, 0.3);
    let delta = p5.Vector.sub(smoothedPos, head);

    for (let p of this.path) {
        p.add(delta);
    }
}

One of the best technical decisions was implementing a temporal smoothing feedback loop for the BodyPix mask. The raw mask from the model can be noisy and flicker between frames, creating harsh, blocky edges. By blending each new mask with the previous frame’s mask, the silhouette becomes much more stable and organic, which was essential for the “glowing aura” effect to work properly. Finally, the automated, timed switching between three distinct color palettes gives the project a life of its own, making the experience unpredictable and unique for every viewing.
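
Returning to the mask smoothing: below is a minimal sketch of that kind of frame-to-frame blending, done here on a p5.Graphics buffer for simplicity (illustrative only, not necessarily how the project implements it).

JavaScript

// Illustrative temporal smoothing of a noisy segmentation mask (not the project's actual code).
let maskBuffer;

function setup() {
  createCanvas(640, 480);
  maskBuffer = createGraphics(width, height);
}

// Call this each time BodyPix returns a new mask image.
function smoothMask(newMaskImage) {
  maskBuffer.push();
  // Drawing the new mask at roughly 30% opacity keeps most of the previous frames around,
  // which behaves like an exponential moving average and removes flicker.
  maskBuffer.tint(255, 80);
  maskBuffer.image(newMaskImage, 0, 0, width, height);
  maskBuffer.pop();
  // maskBuffer can now be used as the stabilized mask for the aura effect.
}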

GLSL

// A snippet from the fragment shader showing the palette switching logic
void main() {
    // ...
    vec3 personPalette_phase;
    vec3 auraPalette_phase;

    if (u_activePalette == 1) { // Palette 1: Haze & Fire
        personPalette_phase = vec3(0.0, 0.1, 0.2); // UV
        auraPalette_phase = vec3(0.1, 0.15, 0.20); // Yellow/Orange
    } else if (u_activePalette == 2) { // Palette 2: Electric Pink/Cyan
        personPalette_phase = vec3(0.6, 0.7, 0.8); // Deep UV
        auraPalette_phase = vec3(0.5, 0.6, 0.7); // Pink/Cyan
    } else { // Palette 3: Cold UV
        personPalette_phase = vec3(0.5, 0.6, 0.7); // Deepest UV
        auraPalette_phase = vec3(0.8, 0.9, 1.0); // Electric Blue/Violet
    }
    // ...
}

[Screenshot showing the warmer “Haze & Fire” palette in action.]

Problems and Areas for Improvement

The single biggest challenge I encountered during development was a series of stability issues related to the ml5.js library. I initially ran into persistent “… is not a function” errors, which, after extensive debugging, I discovered were caused by a major version update (v1.0.0) that had deprecated the FaceApi model I was using. The solution was to lock the project to a specific older version (v0.12.2) in the index.html file. This was a crucial lesson in the importance of managing dependencies in web development.

Even after fixing the versioning, I faced a “race condition” where both FaceApi and BodyPix would try to initialize at the same time, causing one or both to fail silently. This resulted in features like the aura and glitch-zoom not working at all. I resolved this by re-architecting the setup process to “chain” the initializations: BodyPix only begins loading after FaceApi has confirmed it is ready. This made the entire application dramatically more reliable.

For future improvements, I would love to make the background effects more diverse and audio-reactive. Having the stars pulse or the colors shift in time with the bass of the music would add another layer of immersion. I could also explore using hand-tracking via HandPose to allow the user to “paint” or interact with the stars in the background, making the experience even more creative and personal.
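
For reference, the chained-initialization pattern described above looks roughly like this (an illustrative sketch using ml5 v0.12-style callbacks, not the project’s exact code).

JavaScript

let video, faceapi, bodypix;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // Step 1: load FaceApi first.
  faceapi = ml5.faceApi(video, { withLandmarks: true, withDescriptors: false }, faceApiReady);
}

function faceApiReady() {
  console.log('FaceApi ready, now loading BodyPix...');
  // Step 2: only start BodyPix once FaceApi has confirmed it is ready,
  // which avoids the race condition where one model fails silently.
  bodypix = ml5.bodyPix(video, bodyPixReady);
}

function bodyPixReady() {
  console.log('BodyPix ready, both models are loaded.');
}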

 

Midterm Project – Floral Frenzy

Overall concept  

Floral Frenzy is an original design experience that seeks to tap into the users’ emotional responses. Unlike other video games, there is no “winning” or “losing”. Instead, this project relies on the individual’s exploration and curiosity while offering a visually pleasing experience.

The initial idea consisted of creating a background with multiple static elements, such as mountains, trees, and bushes, and only a few interactive objects that would each contain multiple functions. However, after some trial and error, this plan shifted and I decided to add more interactive objects in order to balance the weight of static versus interactive elements.

This was the initial sketch for the project, mainly focused on the aesthetics and potential elements to include in the final draft.

How does the project work?

From the spaceships, floating stones, and blue trees to the massive fantasy-like structure, this project consists of a mystical, otherworldly plane that draws the user’s attention from one point to another. Nonetheless, the roses are meant to be the main focus of interactivity, as each of them contains a different reaction, such as releasing a symphony, displaying text, or producing an unexpected result. In addition, to create the illusion of failure and success, two of the roses are programmed to either poison or compliment the user.

Overall, the end result is a game-like set of visuals in which the user is instructed to interact with the objects (the roses) and press a single key in order to delve into the real experience, filled with background animations, vibrant colors, elements of a variety of sizes, and the interactive objects. Two of these objects produce different outcomes, which then take the user back to the homepage.

Fullscreen Mode Sketch 

Link to sketch: https://editor.p5js.org/imh9299/full/HPfU68QCi

Embedded sketch:

Favorite code snippets

Although it appears to be a simple, clear function at first glance, it took me many hours to figure out how to make the music, text, color changes, and other reactions activate whenever the user clicked on the objects. Nevertheless, once it proved to be operational, I was filled with joy and proud to have completed it successfully, especially since this is the main interactive attraction of the project.

function mousePressed() {
  // helpful debug print
  print('mousePressed at', mouseX, mouseY, 'gameState:', gameState);

  // Only win/lose while the game is playing
  if (gameState === "playing") {
    if (winningRose.contains(mouseX, mouseY)) {
      print("clicked winningRose");
      wonGame = true;
      gameState = "end";
      return; // stop further handling this click
    }

    if (losingRose.contains(mouseX, mouseY)) {
      print("clicked losingRose");
      wonGame = false;
      gameState = "end";
      return;
    }
  }

  // Other clickable objects (always checked)
  if (flower.contains(mouseX, mouseY)) {
    print("clicked flower");
    song.play();
  }

  if (messageRose.contains(mouseX, mouseY)) {
    print("clicked messageRose");
    displayText = !displayText;
  }
}

Another piece of code I am very proud of is the one that lets my cut-out image of a rock float in the background. At the start, I had to set up all the variables, such as the image, the preload function, the size, the speed at which it moved, and a variable for tracking time, among other things. While this wasn’t the most difficult code to understand, it took me a while to find a way to transfer it from a clean, almost empty sketch to one filled with more than a hundred lines of code. Not to mention, once I managed to implement this code, I found myself struggling to make the music in one of my objects play again. Nevertheless, it was worth the effort, as I like how this simple animation, along with the moving spaceship and fantastical creature, adds a new layer of life compared to the rest of the composition.

let rockX, rockY;   // Position of background rock
let amplitude = 20; // How far up and down the rock moves
let speed = 0.04;   // How fast the rock bobs
let time = 0;       // A variable to track time for the sine wave
let rockImage;      // Image of the rock

// --- ROCK ANIMATION (only in game) ---
let yOffset = sin(time) * amplitude;
let currentY = rockY + yOffset;
image(rockImage, rockX, currentY, 160, 160);
time += speed;

 

Areas for improvement and obstacles along the way

One of the major obstacles I ran into was integrating interactivity into the sketch. While I managed to build the necessary functions, classes, and parameters to apply interactivity for the sounds, text, and other responses, embedding these into my designed objects proved far more challenging than I expected. Not only did these responses have to activate as a result of the user’s interaction, but they also had to fit within the “game-like” structure of the sketch. Furthermore, fixing the positions of these objects, along with the non-responsive ones, forced me to explore multiple x and y positions so that the objects, images, and all additional elements could adapt to windowWidth and windowHeight.

Finally, after solving the issue of merging interactivity with the objects, finding a way to assemble game states and the transitions between the start, playing, and end states turned out to be a difficult challenge. Given that my project was specifically designed to be explored and simply be “visually pleasing”, integrating a game-like structure meant finding an end result that would match my project’s story and tone while remaining original. Despite all the struggling, after many hours of exploring and researching the p5 references, libraries, and works done by students in the past, I managed to overcome these obstacles and offer a final product which I am genuinely happy about.
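
For reference, a simplified sketch of the start/playing/end game-state flow mentioned above (only gameState and wonGame appear in the actual project code; the helper functions are illustrative):

let gameState = "start";
let wonGame = false;

function draw() {
  if (gameState === "start") {
    drawHomePage();          // title screen and instructions (illustrative helper)
  } else if (gameState === "playing") {
    drawWorld();             // background animations and interactive roses (illustrative helper)
  } else if (gameState === "end") {
    drawEndScreen(wonGame);  // poison or compliment result (illustrative helper)
  }
}

function keyPressed() {
  if (gameState === "start") {
    gameState = "playing";   // a single key press starts the real experience
  } else if (gameState === "end") {
    gameState = "start";     // loop back to the homepage
  }
}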

One area of improvement I would like to work on in my next project is the use of interactivity. Although I did include interactive objects, they are not as impressive as I initially envisioned. I would like to challenge myself to create more interactive and engaging elements, like the games designed by my peers, not only for my assignments but also to implement in my future non-academic works. Furthermore, I would like to learn how to make my code look “cleaner” and organize my functions and classes in a way that reduces my stress when I need to find a specific piece of code.

References:

Images: The rocks, spaceship, and fantasy structure were downloaded from Pinterest. The fantasy creature is an original design.

Mouse press: https://drive.google.com/drive/u/0/folders/1Qx5kltRZwxzt7Z6l-G-mabzWDUs7nLgj

Elements: https://p5js.org/reference/p5/p5.Element/

AI Overview on displaying text through an object (search: “how to display text when clicking an object p5”)

Window width height example by professor Aya: https://editor.p5js.org/mangtronix/sketches/t4G0erH1B

Text Font: https://p5js.org/reference/p5/textFont/

Mirror image: https://editor.p5js.org/icm4.0/sketches/ETcm93Ua1

Float: https://p5js.org/reference/p5/float/

Used ChatGPT to clean up my code (and also to store and recover it, since I suffered from constant glitches that deleted my work), and to find errors whenever a specific element was not showing or wasn’t achieving its intended objective.

Midterm Project

Concept

Color Gate Challenge is a fast and colorful reaction game where the player controls a glowing ball that changes colors to pass through matching gates. The idea came from color-matching and reflex games I used to play, but I wanted to create something that feels more modern and bright, with glowing effects and smooth motion.

The goal of the game is to move the ball through falling gates without hitting the walls. Each gate has a color, and the player must change their ball to the same color to pass safely. If you pass through a gate with the wrong color or crash into the barrier, the game ends.

My Final Game:

Code Snippet and Parts I Am Proud Of:

One of the parts I am most proud of is how I control the player, the gates, and the full game system.
Each part of the game (player, gate, and main game) is built as a class, which made it easier to add new features later.

The player’s color can change, and the game checks if it matches the gate’s color before letting the player pass. This made the logic clear and fun.

if (this.player.checkCollision(gate)) {
  this.gameOver();
  return;
}

This simple check controls the whole challenge of the game.
If the player touches the wrong color or hits the gate walls, the game ends immediately.

I also added a color preview system that shows the next few gate colors, so the player can plan ahead.
It uses small color dots on the screen to help the player see which color to switch to next.

this.upcomingGates.forEach((gate, index) => {
  const dot = document.createElement('div');
  dot.className = 'color-dot';
  dot.style.background = this.getColorString(this.player.colors[gate.color]);
  colorPreview.appendChild(dot);
});

Another part I am proud of is how the speed control works. The player can press keys to make the gates fall faster or slower, and there is a live bar that shows the speed level. This made the game more interactive and customizable.
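
A rough sketch of that kind of speed control (the key bindings, variable names, and bar layout here are illustrative, not the game’s actual code):

let gateSpeed = 3; // current falling speed of the gates
const MIN_SPEED = 1;
const MAX_SPEED = 10;

function keyPressed() {
  // Arrow keys adjust the speed within a fixed range.
  if (keyCode === UP_ARROW) gateSpeed = min(gateSpeed + 1, MAX_SPEED);
  if (keyCode === DOWN_ARROW) gateSpeed = max(gateSpeed - 1, MIN_SPEED);
}

function drawSpeedBar() {
  // Live bar whose width reflects the current speed level.
  const barWidth = map(gateSpeed, MIN_SPEED, MAX_SPEED, 20, 200);
  noStroke();
  fill(0, 255, 150);
  rect(20, 20, barWidth, 10);
}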

Problems and Future Improvements

At first, the game was too easy because the gates were falling too slowly, and it didn’t feel challenging. I changed the gate speed and added random colors to make it more unpredictable and exciting.

Another problem was keeping everything in the right position when resizing the window. I had to fix the player’s size and position every time the screen changed, using the windowResized() function.
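
The general pattern looks something like this (a minimal sketch, assuming the player is recentered and rescaled from the new window size; the property names are illustrative):

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  // Recompute anything that depends on the window size
  // (illustrative property names, not the game's actual ones).
  player.x = width / 2;
  player.y = height - 100;
  player.size = min(width, height) * 0.05;
}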

In the future, I want to:

  • Add special gates that move sideways

  • Add sound effects for color switches and collisions

  • Add power-ups that give the player a shield or slow motion

  • Create a moving space background for more depth

  • Add more visual effects like explosions and particle trails

Midterm Project – Music Vent

What is Music Vent?

 

So I created this music visualizer called Music Vent, and the whole idea came from thinking about how we use music when we’re feeling emotional – especially when we’re sad or need to vent. You know how sometimes you just want to put on some music and let it all out? That’s exactly what this project is about.

 

The point of Music Vent is to create an immersive experience for music listening, especially for those moments when you want to vent through sad music. But here’s the thing – while you’re going through those emotions, there are these cute, colorful elements that somehow bring you ease and comfort.

 

The Concept Behind It:

 

I wanted to capture this duality that happens when we listen to music emotionally. On one hand, you have these really comforting, almost therapeutic elements:

 

– Flying radio-cloud birds: These little radios attached to clouds that float across the screen in the most adorable way. They’re like digital companions that keep you company while you’re listening.
– A beautiful galaxy background: I created this artistic Milky Way galaxy with twinkling stars and colorful dust clouds that creates this peaceful, cosmic atmosphere.
– Soft colors and smooth animations: Everything flows gently and uses calming colors that make you feel at ease.

 

But then on the other hand, you have the more intense, cathartic elements:

 

– Beat-responsive visualizations: These are the NCS-style spectrum analyzers (those green bar graphs you see in the middle) that react aggressively to the music’s beats. They can feel a bit disruptive to the peaceful vibe, but that’s intentional – they represent the raw emotional energy you’re releasing.

 

How I Built It

 

The Technical Setup

 

I built this using p5.js and JavaScript, and I tried to keep the code organized using classes so it wouldn’t become a complete mess. Here’s basically how it’s structured:

 

class MusicVisualizerApp {
  constructor() {
    this.audioManager = new AudioManager();
    this.visualManager = new VisualizationManager();
    this.uiManager = new UIManager();
  }
}

I have separate managers for handling the audio, the visuals, and the user interface. This way, if I want to change how the audio analysis works, I don’t have to mess with the visual code.

 

The Audio Analysis Part

 

This was probably the trickiest part. I needed the system to actually “understand” the music and respond to it emotionally. So I created this mood detection algorithm:

 

class MoodProfile {
  analyzeMood() {
    const avgEnergy = this.average(this.analysisBuffer.map(d => d.energy));
    const avgBass = this.average(this.analysisBuffer.map(d => d.frequencyBands.bass));
    const avgHigh = this.average(this.analysisBuffer.map(d => d.frequencyBands.high));
    // Calculate emotional characteristics
    // (avgCentroid is computed elsewhere in the full class)
    this.currentMood.energy = Math.min(avgEnergy * 2, 1.0);
    this.currentMood.danceability = Math.min((avgBass + this.currentMood.energy) * 0.8, 1.0);
    this.currentMood.valence = Math.min((avgHigh + avgCentroid) * 0.9, 1.0);
  }
}

 

Basically, the system listens to the music and analyzes different frequency bands – like how much bass there is, how much high-frequency content, the overall energy level. Then it tries to figure out the “mood” of the song and adapts the visuals accordingly.

 

The cool thing is that it can detect beats in real-time and make the black hole effect happen right when the beat hits. I spent way too much time getting the beat detection algorithm right!
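
One common way to do energy-based beat detection (not necessarily the exact algorithm used here) is to flag a beat whenever the instantaneous energy clearly exceeds a rolling average of recent energy:

// Illustrative energy-threshold beat detector, not necessarily the project's exact algorithm.
let energyHistory = [];

function detectBeat(currentEnergy) {
  energyHistory.push(currentEnergy);
  if (energyHistory.length > 43) energyHistory.shift(); // keep roughly one second of analysis frames

  const avg = energyHistory.reduce((a, b) => a + b, 0) / energyHistory.length;
  // A beat is flagged when the current energy clearly exceeds the recent average.
  return currentEnergy > avg * 1.4;
}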

 

Creating the Galaxy Background

 

I wanted something that felt cosmic and peaceful, so I created this Milky Way galaxy effect. It has about 500 twinkling stars, colorful dust clouds, and these spiral arms that slowly rotate. But here’s the cool part – when a beat hits in the music, the whole galaxy gets sucked into a black hole!

 

// When beats are detected, everything spirals inward
if (beatDetected) {
  this.targetBlackHoleIntensity = 1.0;
  // Stars and particles get pulled toward the center
}

 

The black hole effect was inspired by how intense emotions can feel like they’re pulling everything into them. When the beat drops, you see this dramatic transformation where all the peaceful elements get drawn into this swirling vortex with orange and purple colors.

 

The Flying Radio-Cloud Birds

 

This was probably my favorite part to code. I took inspiration from a radio drawing I had made before and turned it into these little geometric radios that fly around attached to fluffy clouds. They spawn randomly from either side of the screen and just float across peacefully.

 

class RadioCloudBird {
  constructor(x, y, direction = 1) {
    this.cloudColor = random(['white', 'lightblue', 'pink', 'purple']);
    this.radioColor = random(['brown', 'black', 'silver', 'gold']);
    this.bobSpeed = random(0.02, 0.05); // Makes them bob gently
  }
}

 

Each radio is drawn using basic geometric shapes – rectangles for the body, circles for the speakers and knobs, lines for the antenna. I had to figure out how to scale everything properly so they’d look right when flying around, but once I got it working, they became these adorable little companions that make the whole experience feel less lonely.
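
A stripped-down, illustrative version of that kind of shape-based radio drawing looks something like this (not the actual RadioCloudBird drawing code):

// Illustrative only: a tiny radio built from basic shapes.
function drawRadio(x, y, s) {
  push();
  translate(x, y);
  scale(s);
  rectMode(CENTER);
  noStroke();
  fill(120, 70, 40);
  rect(0, 0, 60, 36, 6);     // radio body with rounded corners
  fill(230);
  ellipse(-15, 0, 18, 18);   // speaker
  ellipse(15, 8, 8, 8);      // tuning knob
  stroke(230);
  line(22, -18, 34, -34);    // antenna
  pop();
}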

 

What I Learned and Challenges I Faced

 

Making Everything Feel Smooth

 

One thing I really focused on was making sure all the animations felt organic and not jarring. I used a lot of interpolation to smooth out the transitions:

 

// Instead of sudden changes, everything gradually transitions
this.values.bass = lerp(this.values.bass, newBassValue, 0.1);
this.values.energy = lerp(this.values.energy, newEnergyValue, 0.1);

 

This makes the whole experience feel more natural and less like you’re watching a computer program.

 

A Small Touch: Conversation Detection

 

I also added this feature where, if the system detects you’re talking (through the microphone), it automatically lowers the music volume. I included this interactivity feature because it’s something I always wished for in group music-listening software. As someone who used to listen to music bots on Discord a lot, I always found it annoying to manually lower or mute the bot whenever I wanted to talk to my friends while listening. This was actually the initial inspiration for the project, but then I got the idea for the visualizing experience and focused more on that.
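
A minimal sketch of that kind of microphone “ducking” with p5.sound (the threshold and fade values are illustrative, not the project’s actual numbers):

// Illustrative mic-based volume ducking using p5.sound.
let mic, song; // song is assumed to be a p5.SoundFile loaded in preload()

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  const level = mic.getLevel(); // microphone input level, 0.0 to 1.0
  if (level > 0.05) {
    song.setVolume(0.2, 0.3); // someone is talking: fade the music down over 0.3 s
  } else {
    song.setVolume(1.0, 0.5); // quiet again: fade back up
  }
}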

Here is the project on p5, have fun experiencing it!

 

Midterm Progress

Concept:

I think it’s safe to say that a lot of events on campus have photobooths, and even some stores in malls do. I wanted to do something similar, but fully digital. While my current sketch isn’t very aesthetic yet, I plan to model it after the Urban Outfitters photobooth, adding playful visuals and frames to make the experience interactive and fun, like a real-life booth.

The user interacts with the photobooth through a series of pages. They start at the main booth screen, where they can click a button to enter their name on the Start Page. After submitting their name, they move to the Filter Page to see a live video feed and choose a filter by pressing the arrow keys. Once they select a filter and click “OK,” they reach the Frame Screen, where they can take a photo. Finally, the Thank You Screen appears with an option to restart, which loops them back to the booth, letting them take another photo. Each page guides the user step by step, making the experience clear and interactive.

Code design :

All the pages are in their own class.

    • PhotoBooth: The main screen where users start. Displays the booth and a button to enter the next step.
    • StartPage: Handles the user’s name input and the “Start” button to move to the filter selection.
    • FilterPage: Shows a live video feed and lets users choose a filter using arrow keys, with an “OK” button to proceed.
    • FrameScreen: Applies the selected filter to the video and allows the user to take a photo with a camera button.
    • ThankYouScreen: Confirms the photo has been taken and offers a button to restart the booth.
    • Flow control: currentScreen determines which class is displayed, and button/key interactions manage transitions between screens.
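
A simplified sketch of this flow control could look like the following (only currentScreen and the class names come from the project; the display(), show(), and hide() methods and the goTo() helper are illustrative):

// Simplified flow-control sketch (method names are illustrative).
let currentScreen = "booth";
let screens;

function setup() {
  createCanvas(windowWidth, windowHeight);
  screens = {
    booth: new PhotoBooth(),
    start: new StartPage(),
    filter: new FilterPage(),
    frame: new FrameScreen(),
    thanks: new ThankYouScreen(),
  };
}

function draw() {
  // Only the active page draws itself each frame.
  screens[currentScreen].display();
}

// Buttons and key handlers call this to move between screens.
function goTo(name) {
  screens[currentScreen].hide(); // hide buttons belonging to the old screen
  currentScreen = name;
  screens[currentScreen].show();
}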

Difficulties:

I think that throughout implementing this, the filter page class was the most troublesome. I had to use the translate function to mirror my video feed, because I wanted a mirrored picture as the result, and I needed the mirroring to apply only to the video, not to the text and button.

//mirror the video (so it feels like a mirror)
push();
translate(width, 0);
scale(-1, 1);
image(this.video, 0, 0, width, height);
pop();

Another key part of the project is the filters and how the selection works. Each filter is applied as an overlay on the live video feed, and the user can browse through them using the left and right arrow keys. I also made sure the selection loops continuously, so moving past the last filter brings you back to the first one.

//sepia filter overlay
if (this.filters[this.currentFilter] === "sepia") {
  fill(112, 66, 20, 60);
  noStroke();
  rect(0, 0, width, height);

  //black & white filter using canvas filter
} else if (this.filters[this.currentFilter] === "bw") {
  push();
  tint(255, 255);
  drawingContext.filter = "grayscale(100%)";
  translate(width, 0);
  scale(-1, 1);
  image(this.video, 0, 0, width, height);
  drawingContext.filter = "none";
  pop();
}

nextFilter() {
  //go to next filter (wraps around)
  this.currentFilter = (this.currentFilter + 1) % this.filters.length;
}

prevFilter() {
  //go to previous filter (wraps around)
  this.currentFilter =
    (this.currentFilter - 1 + this.filters.length) % this.filters.length;
}

To deal with, or rather minimize, the risks of these two implementations, I first tested them out in a separate sketch on p5.js, and once I made sure they worked how I wanted them to, I added them to my project flow. I also added console logs to help me debug and then removed them once everything was working as expected.

Lastly, this project really tested my understanding of how order matters. I had to use .hide() a lot to make sure buttons and features weren’t carried over between screens.

Reflection:

Now that the structure is solid, I can focus on making it look appealing. I’m open to any suggestions and even suggestions for my current flow!

Here’s my Sketch:

Week 5 – Midterm Update

This week I made progress on my midterm project, which is an interactive webcam artwork inspired by Euphoria’s psychedelic aesthetics. The concept is to let users take a live video feed of themselves and transform it into surreal, hallucination-like visuals. By pressing different keys, the user can switch between effects like color-swapping, glowing “UV tears,” and trippy, pulsating backgrounds.

Concept

The idea is to create a personal, immersive experience where the viewer sees themselves transformed under digital hallucinations. The interactivity allows them to “step into” a psychedelic portrait that reacts to their input.

Design & User Interaction

  • The webcam is the canvas: it captures the user in real-time.

  • Pressing different keys (1–4) switches between effects.

  • Each effect is designed to distort colors, overlay glowing shapes, or alter the background to give a hallucinatory vibe.

Sketch

Code Design

I structured the code into separate functions (psychedelicColors(), uvTears(), hallucinationBackground()) so each effect is modular. The keyPressed() function makes it easy to navigate between modes, and the modular design means I can keep adding effects without breaking old ones.
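
A minimal sketch of this mode-switching structure (the effect functions are the ones named above; the mode variable and key mapping are illustrative):

let currentMode = 1;

function keyPressed() {
  // Keys 1-4 switch between effect modes.
  if (key >= '1' && key <= '4') {
    currentMode = int(key);
  }
}

function draw() {
  image(video, 0, 0, width, height); // live webcam feed (assumes video was created in setup)
  if (currentMode === 1) psychedelicColors();
  else if (currentMode === 2) uvTears();
  else if (currentMode === 3) hallucinationBackground();
  // mode 4 reserved for the next effect
}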

Risk / Challenge

The most intimidating part of this project is achieving truly psychedelic visuals that feel organic rather than random. Messing with pixel arrays and shaders can be complex. To reduce this risk, I started small: I tested pixel manipulation with basic color channel swaps and added some randomized “tear” overlays. Now that the structure works, I can safely experiment with shaders and sound reactivity in later iterations.

Next Steps

  • Experiment with custom GLSL shaders for more advanced visuals.

  • Add sound reactivity so the effects pulse with music.

  • Enhance backgrounds with layered fractals or noise-based distortion

Week 5 – Reading Reflection

Reading the essay Computer Vision for Artists and Designers made me realize how differently computers and humans actually “see.” Our eyes and brains process the world in ways that feel natural: we recognize faces instantly, understand depth, guess intentions from gestures, and fill in missing details without even noticing. Computers, on the other hand, don’t have that intuitive grasp. They just see pixels and patterns. A shadow or a little blur can confuse them. Where we understand context,  like knowing a cat is still a cat even if half hidden, computers rely on strict rules or training data, and they often fail when something doesn’t match what they’ve been taught to expect.

To bridge that gap, a lot of effort goes into helping machines track what we want them to notice. Instead of raw pixels, we give them features: edges, colors, corners, or textures. Algorithms can then use those features to keep track of an object as it moves. More recently, deep learning has allowed computers to learn patterns themselves, so they can recognize faces or bodies in a way that feels closer to human intuition (though still fragile). Sometimes, extra sensors like depth cameras or infrared are added to give more reliable information. It’s almost like building a whole toolkit around vision just to get machines to do what we take for granted with a single glance.

Thinking about how this plays into interactive art is both exciting and a little unsettling. On one hand, the ability to track people makes art installations much more engaging — an artwork can respond to where you’re standing, how you move, or even who you are (as I observed in TeamLab). That creates playful, immersive experiences that wouldn’t be possible without computer vision. But the same technology that enables this interactivity also raises questions about surveillance. If art can “see” you, then it’s also observing and recording in ways that feel uncomfortably close to security cameras. I think this tension is part of what makes computer vision so interesting in art: it’s not just about making something interactive, but also about asking us to reflect on how much we’re being watched.

Assignment 4 – Data Visualization

Concept:

For this assignment, I wanted to create a data visualization that conveys information through motion and color. Initially, I tried using real-world GDP data I downloaded from Kaggle, but I ran into errors because some columns were missing values, and I wasn’t sure how to handle those gaps. So, I decided to create my own fictional CSV file with made-up GDP numbers for a larger set of countries. Each country is represented as a bouncing circle whose size reflects its GDP and whose color adds a visual layer of distinction. The animation gives the data energy and makes patterns easier to notice. I also utilized arrays and a custom class to organize the circles, dynamically managing their movement, size, and color.

Code Highlight:

One part of my code I’m proud of is how the circles move and react to the edges of the canvas. Each circle has its own speed and direction, creating a sense of individuality while still forming a cohesive scene. I also added a feature that displays the country’s name when you hover over a circle, allowing you to read the data without cluttering the visualization.

if (dist(mouseX, mouseY, this.x, this.y) < this.size / 2) {
    fill(255);
    textAlign(CENTER, CENTER);
    textSize(18);
    text(this.name, this.x, this.y - this.size / 2 - 10);
}
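
Since the snippet above only shows the hover label, here is a minimal, illustrative version of the bouncing-circle movement described (a sketch of the idea, not the project’s actual class):

class CountryBubble {
  constructor(name, gdp) {
    this.name = name;
    this.size = map(gdp, 0, 25000, 10, 120); // circle size reflects GDP (made-up scale)
    this.x = random(width);
    this.y = random(height);
    this.xSpeed = random(-2, 2);
    this.ySpeed = random(-2, 2);
  }

  move() {
    this.x += this.xSpeed;
    this.y += this.ySpeed;
    // Reverse direction when the circle reaches a canvas edge.
    if (this.x < this.size / 2 || this.x > width - this.size / 2) this.xSpeed *= -1;
    if (this.y < this.size / 2 || this.y > height - this.size / 2) this.ySpeed *= -1;
  }
}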

Embedded Sketch:

Reflections & Future Improvements:

This project taught me how to combine datasets with visual techniques to make information more intuitive. I realized that adding motion and color can turn raw numbers into an engaging experience. Hover interactions enhanced the usability, making the visualization informative without overwhelming the viewer.

For future improvements, I would like to make the visualization more interactive, perhaps allowing users to filter by region or toggle between GDP and population. I also want to experiment with color gradients or circle trails that respond to velocity, so the data becomes even more expressive. Overall, this project helped me understand how programming, animation, and data can intersect to communicate ideas creatively.