MIDTERM – Bad Trip


Welcome to my midterm project, Neo-Euphoria Visions, an interactive audiovisual artwork created with p5.js. This project is an exploration into surreal, psychedelic self-portraiture, heavily inspired by the distinct visual language and emotional tone of the HBO series Euphoria. It uses your webcam to pull you into a hallucinatory world that reacts to your presence, blurring the lines between the viewer and the viewed.

The experience is a multi-layered trip. It begins with a simple invitation before slowly transforming your reality. Your image is recast in a cold, UV-and-pink color palette while a motion trail ghosts your every move. A glowing aura emanates from your silhouette, and hand-doodled stars twinkle into existence around you. The piece is designed to be both beautiful and unsettling, contrasting the cold, trippy visuals with organic, hot-red tears that bleed from your eyes. With a dynamic soundtrack and automated visual shifts, the goal is to create a mesmerizing, ever-changing, and deeply personal digital hallucination.


Live Sketch

You can experience Neo-Euphoria Visions live in your browser by clicking the link below.


Screenshots

How It Works & What I’m Proud Of

This project is built on a foundation of p5.js, but its soul lies in the combination of two key technologies: the ml5.js library for computer vision and GLSL for high-performance graphics shaders. The entire visual output, from the colors to the background effects, is rendered in a single, complex fragment shader that runs on the GPU. This was a critical technical decision: it allows multiple layers of real-time effects without the performance lag that CPU-based pixel manipulation would introduce.
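
To make that structure concrete, here is a minimal sketch of how a render loop like this can hand the webcam feed and the segmentation mask to the fragment shader every frame. The uniform names, file names, and canvas size below are illustrative, not necessarily the ones used in my sketch.

JavaScript

// A simplified sketch of the render loop (names are illustrative).
let visionShader, video, maskGraphics;

function preload() {
  visionShader = loadShader('vision.vert', 'vision.frag');
}

function setup() {
  createCanvas(640, 480, WEBGL);            // the shader needs a WEBGL canvas
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  maskGraphics = createGraphics(640, 480);  // holds the (smoothed) BodyPix mask
}

function draw() {
  shader(visionShader);
  visionShader.setUniform('u_video', video);        // live webcam feed
  visionShader.setUniform('u_mask', maskGraphics);  // segmentation mask
  visionShader.setUniform('u_time', millis() / 1000.0);
  rect(-width / 2, -height / 2, width, height);     // full-screen quad the shader paints
}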

The core mechanic involves layering several computer vision and graphical processes. First, ml5.js BodyPix creates a segmentation mask of the user, which is fed into the shader. This mask allows me to separate the scene into three distinct layers: the background, a glowing “aura” directly behind the user, and the user themselves. The shader then applies different artistic effects to each layer. Simultaneously, ml5.js FaceApi tracks facial landmarks to determine the precise location of the user’s eyes. This data is used by a custom Tear class in p5.js, which draws organic, flowing tears on a transparent overlay canvas, making them appear attached to the face. I’m particularly proud of the logic that makes the tears “follow” the eyes smoothly by interpolating their position between frames, which prevents the jittery tracking that can often occur.

JavaScript

// A snippet from the Tear class showing the smooth position update
updatePosition(newX, newY) {
    if (this.path.length === 0) return;
    let head = this.path[0];
    
    // Use lerp to smoothly move the tear's origin towards the new eye position.
    // This prevents jittering if the face detection is noisy.
    let targetPos = createVector(newX, newY);
    let smoothedPos = p5.Vector.lerp(head, targetPos, 0.3);
    let delta = p5.Vector.sub(smoothedPos, head);

    for (let p of this.path) {
        p.add(delta);
    }
}

One of the best technical decisions was implementing a temporal smoothing feedback loop for the BodyPix mask. The raw mask from the model can be noisy and flicker between frames, creating harsh, blocky edges. By blending each new mask with the previous frame’s mask, the silhouette becomes much more stable and organic, which was essential for the “glowing aura” effect to work properly. Finally, the automated, timed switching between three distinct color palettes gives the project a life of its own, making the experience unpredictable and unique for every viewing.
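
A minimal sketch of that feedback loop is below, assuming blendNewMask() is handed the latest mask image whenever BodyPix finishes a segmentation; the roughly 30% blend factor is simply the value that felt right.

JavaScript

// Keep a persistent buffer and fold each new mask into it instead of replacing it.
let smoothedMask;

function blendNewMask(newMask) {
  if (!smoothedMask) {
    smoothedMask = createGraphics(newMask.width, newMask.height);
    smoothedMask.image(newMask, 0, 0);   // first frame: just copy the mask
    return;
  }
  smoothedMask.push();
  smoothedMask.tint(255, 80);            // ~30% opacity: the new mask only partially
  smoothedMask.image(newMask, 0, 0);     // overwrites the old one, averaging away flicker
  smoothedMask.pop();
}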

GLSL

// A snippet from the fragment shader showing the palette switching logic
void main() {
    // ...
    vec3 personPalette_phase;
    vec3 auraPalette_phase;

    if (u_activePalette == 1) { // Palette 1: Haze & Fire
        personPalette_phase = vec3(0.0, 0.1, 0.2); // UV
        auraPalette_phase = vec3(0.1, 0.15, 0.20); // Yellow/Orange
    } else if (u_activePalette == 2) { // Palette 2: Electric Pink/Cyan
        personPalette_phase = vec3(0.6, 0.7, 0.8); // Deep UV
        auraPalette_phase = vec3(0.5, 0.6, 0.7); // Pink/Cyan
    } else { // Palette 3: Cold UV
        personPalette_phase = vec3(0.5, 0.6, 0.7); // Deepest UV
        auraPalette_phase = vec3(0.8, 0.9, 1.0); // Electric Blue/Violet
    }
    // ...
}
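
On the JavaScript side, the automated switching simply derives u_activePalette from the elapsed time. A minimal sketch of that rotation follows; the 20-second interval is an illustrative value, not necessarily the one in the project.

JavaScript

// Rotate through the three palettes on a timer and pass the result to the shader.
const PALETTE_COUNT = 3;
const SWITCH_INTERVAL = 20000;   // milliseconds per palette (illustrative value)

function currentPalette() {
  return 1 + floor(millis() / SWITCH_INTERVAL) % PALETTE_COUNT;   // yields 1, 2, or 3
}

// In draw(), before rendering:
// visionShader.setUniform('u_activePalette', currentPalette());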

[Screenshot showing the warmer “Haze & Fire” palette in action.]

Problems and Areas for Improvement

The single biggest challenge I encountered during development was a series of stability issues related to the ml5.js library. I initially ran into persistent “… is not a function” errors, which, after extensive debugging, I discovered were caused by a major version update (v1.0.0) that had deprecated the FaceApi model I was using. The solution was to lock the project to a specific older version (v0.12.2) in the index.html file. This was a crucial lesson in the importance of managing dependencies in web development.

Even after fixing the versioning, I faced a “race condition” where both FaceApi and BodyPix would try to initialize at the same time, causing one or both to fail silently. This resulted in features like the aura and glitch-zoom not working at all. I resolved this by re-architecting the setup process to “chain” the initializations: BodyPix only begins loading after FaceApi has confirmed it is ready. This made the entire application dramatically more reliable; a simplified sketch of the chained start-up is included at the end of this section.

For future improvements, I would love to make the background effects more diverse and audio-reactive. Having the stars pulse or the colors shift in time with the bass of the music would add another layer of immersion. I could also explore using hand-tracking via HandPose to allow the user to “paint” or interact with the stars in the background, making the experience even more creative and personal.
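
Here is that chained start-up sketch. It is a minimal illustration of the ordering rather than the full project code: the option object is trimmed down, and gotFaceResults / gotSegmentation stand in for the real result handlers.

JavaScript

let video, faceApi, bodyPix;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Step 1: load FaceApi first.
  faceApi = ml5.faceApi(video, { withLandmarks: true, withDescriptors: false }, faceApiReady);
}

function faceApiReady() {
  // Step 2: only now start loading BodyPix, so the two models never initialize at once.
  bodyPix = ml5.bodyPix(video, bodyPixReady);
}

function bodyPixReady() {
  // Step 3: both models are ready; kick off the detection loops.
  faceApi.detect(gotFaceResults);
  bodyPix.segment(gotSegmentation);
}

function gotFaceResults(error, results) { /* store eye landmarks for the tears */ }
function gotSegmentation(error, result) { /* feed the mask into the shader */ }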


Week 5 – Midterm Update

This week I made progress on my midterm project, which is an interactive webcam artwork inspired by Euphoria’s psychedelic aesthetics. The concept is to let users take a live video feed of themselves and transform it into surreal, hallucination-like visuals. By pressing different keys, the user can switch between effects like color-swapping, glowing “UV tears,” and trippy, pulsating backgrounds.

Concept

The idea is to create a personal, immersive experience where the viewer sees themselves transformed under digital hallucinations. The interactivity allows them to “step into” a psychedelic portrait that reacts to their input.

Design & User Interaction

  • The webcam is the canvas: it captures the user in real-time.

  • Pressing different keys (1–4) switches between effects.

  • Each effect is designed to distort colors, overlay glowing shapes, or alter the background to give a hallucinatory vibe.

Sketch

Code Design

I structured the code into separate functions (psychedelicColors(), uvTears(), hallucinationBackground()) so each effect is modular. The keyPressed() function makes it easy to navigate between modes, and the modular design means I can keep adding effects without breaking old ones.
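
A minimal sketch of that navigation, assuming a currentEffect state variable and the modular effect functions named above (the webcam capture setup is omitted, and what key 4 maps to is left as a placeholder):

// Keys 1–4 choose which modular effect runs on top of the live webcam image.
let currentEffect = 1;

function keyPressed() {
  if (key >= '1' && key <= '4') {
    currentEffect = int(key);
  }
}

function draw() {
  image(video, 0, 0, width, height);           // live webcam as the base layer
  if (currentEffect === 1) psychedelicColors();
  else if (currentEffect === 2) uvTears();
  else if (currentEffect === 3) hallucinationBackground();
  // currentEffect === 4: placeholder for a combined / experimental mode
}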

Risk / Challenge

The most intimidating part of this project is achieving truly psychedelic visuals that feel organic rather than random. Messing with pixel arrays and shaders can be complex. To reduce this risk, I started small: I tested pixel manipulation with basic color channel swaps and added some randomized “tear” overlays. Now that the structure works, I can safely experiment with shaders and sound reactivity in later iterations.
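
As an example of the kind of small experiment I started with, here is a sketch of a basic red/blue channel swap on the webcam pixels. It assumes pixelDensity(1) and a canvas sized to match the video.

// Swap the red and blue channels of the live video, pixel by pixel.
function psychedelicColors() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    pixels[i]     = video.pixels[i + 2]; // red channel takes the blue value
    pixels[i + 1] = video.pixels[i + 1]; // green unchanged
    pixels[i + 2] = video.pixels[i];     // blue channel takes the red value
    pixels[i + 3] = 255;                 // fully opaque
  }
  updatePixels();
}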

Next Steps

  • Experiment with custom GLSL shaders for more advanced visuals.

  • Add sound reactivity so the effects pulse with music.

  • Enhance backgrounds with layered fractals or noise-based distortions.

Week 4 – Post Response

Post Response: The Psychopathology of Everyday Things

One thing that drives me crazy, which wasn’t directly mentioned in Norman’s reading, is the “close door” button on elevators. It feels like a control I should have, but in most modern elevators, pressing it does absolutely nothing for ordinary passengers. The button is just there to give the illusion of control, which is frustrating when you actually want the doors to close faster. A simple improvement would be to either remove the button entirely or make it functional for real—perhaps by allowing it to slightly shorten the automatic door timer, while still respecting safety regulations. This would reduce the cognitive dissonance between what the interface suggests and what it actually does.

Another example is the volume slider on phones. The slider looks linear, but the underlying amplitude is scaled non-linearly because human loudness perception is roughly logarithmic; equal slider steps are engineered to produce equal-feeling changes in loudness. This hidden design is clever, but it highlights how designers are constantly manipulating reality to feel “natural.” I find it fascinating—and slightly maddening—how often interfaces are designed to trick our senses into thinking things are more intuitive than they are.

Applying Norman’s principles to interactive media, these insights are extremely valuable. For example, in a music app or video game, sliders for volume, brightness, or character speed should be perceptually adjusted so that changes feel smooth and intuitive. Similarly, buttons or controls should give honest feedback: if an action can’t actually happen (like the elevator door closing instantly), the interface should indicate that limitation rather than pretend otherwise. This reduces user frustration and improves overall satisfaction.
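
As a tiny illustration of that idea in p5.js terms: instead of feeding a slider's raw value straight into the output, a power curve can be used so that equal slider steps feel like equal steps in loudness. The cubic exponent here is just one common approximation, not a fixed standard.

// Map a 0–1 slider position to an amplitude that feels perceptually even.
function sliderToAmplitude(sliderValue) {
  return pow(sliderValue, 3);   // e.g. 0.5 on the slider -> 12.5% amplitude
}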

In general, understanding the psychopathology of everyday things encourages designers to consider the gap between perception and reality. By designing interactions that match how humans perceive the world, we can make digital interfaces feel more natural and enjoyable.

Week 4 – Text Art

Concept

My artwork “Can You Understand Me?” explores human connectedness through language and the barriers that can arise when communication breaks down. The phrase “Can you understand me?” is a simple but powerful question, expressing both the desire for connection and the frustration of being misunderstood. By placing this phrase in multiple world languages on the screen, I wanted to highlight the diversity of expression and the universality of the human need to be heard.

The floating words and question marks represent the fluid, sometimes chaotic nature of communication. They drift, interact, and repel—mirroring how language can bring people closer or push them apart, depending on understanding.


Code Highlight (what I’m most proud of)

One feature I’m especially proud of is the repelling effect from the mouse. I designed this so that when the mouse gets too close, the words push away. It represents the tension we sometimes feel in conversation—when misunderstanding or pressure creates distance, even when connection is desired.

// Repel from mouse
let mouse = createVector(mouseX, mouseY);
let d = dist(this.pos.x, this.pos.y, mouse.x, mouse.y);
if (d < 200) {
  let force = p5.Vector.sub(this.pos, mouse);
  force.setMag(8 / max(d, 1));
  this.vel.add(force);
}

This snippet calculates the distance between each word and the mouse, then generates a force pushing the word away when the mouse is within 200 pixels. It’s a small detail but adds a lot of dynamic personality to the sketch.


Embedded Sketch


Reflection

Working on this project taught me how much movement and interaction can symbolize deeper human experiences. The floating words show how languages circulate and intermingle globally, while the mouse repulsion adds an element of friction, symbolizing the challenges of cross-cultural communication.


Reference

Week 3 – Artwork

Title: Does the Power of Love Overcome the Love of Power?

This is an interactive generative artwork that explores the dynamic struggle between two opposing forces: Power of Love and Love of Power. Using color, motion, and shape, the piece visualizes how these forces interact, dominate, and fade.

Users can interact with the artwork using the keyboard: pressing X strengthens the Power of Love, Z increases Love of Power, and R resets the canvas. As one force grows, it overwhelms the other, creating a constantly shifting visual narrative of dominance, balance, and fragility.
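
A minimal sketch of that key handling is below. The two strengths are shown here as global variables and the increment is an arbitrary value; in the actual sketch they live on the shared battle object that the Heart class (shown further down) reads from.

// X feeds the Power of Love, Z feeds the Love of Power, R resets the balance.
function keyPressed() {
  if (key === 'x' || key === 'X') {
    loveStrength = constrain(loveStrength + 0.05, 0, 1);
  } else if (key === 'z' || key === 'Z') {
    powerStrength = constrain(powerStrength + 0.05, 0, 1);
  } else if (key === 'r' || key === 'R') {
    loveStrength = 0.5;      // back to a balanced state
    powerStrength = 0.5;
    background(0);           // clear the canvas (assumed reset behaviour)
  }
}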

Highlight of Code I’m Proud Of:

One part I’m particularly proud of is the heart shape formula used in the Heart class. Creating a convincing pulsing heart shape took a bit of trial and error—adjusting the sine and cosine coefficients to get the proportions and curves right. When I looked it up to check my work, I was pleasantly surprised to realize that I had remembered the formula almost perfectly! The code also incorporates time-based pulsing and subtle distortions influenced by the interaction of the two forces:

class Heart {
  constructor(battle) {
    this.battle = battle;
  }

  display() {
    let { t, loveStrength, powerStrength } = this.battle;

    let heartOpacity = map(loveStrength, 0, 1, 10, 90);
    let heartPulse = map(loveStrength, 0, 1, 0.4, 2.5);

    stroke(340, 80, 100, heartOpacity);
    strokeWeight(2 * heartPulse);

    for (let i = 0; i < 3; i++) {
      let s = 100 + sin(t * 2 + i) * 50 * heartPulse; // pulsing size
      beginShape();
      for (let a = 0; a < TWO_PI; a += 0.1) {
        // classic heart shape formula
        let x = s * 16 * pow(sin(a), 3) / 10 + sin(t + i) * 30 * powerStrength;
        let y = -s * (13 * cos(a) - 5 * cos(2 * a) - 2 * cos(3 * a) - cos(4 * a)) / 10
                + cos(t + i) * 30 * powerStrength;
        vertex(x, y);
      }
      endShape(CLOSE);
    }
  }
}

This section highlights the combination of mathematical formula, animation, and interaction to make the heart both accurate and dynamic.

Week 1 – Self Portrait

This is a self-portrait created with p5.js, inspired by Picasso’s cubist style. I focused on abstracting key features such as the hair, glasses, beard, and mustache into bold geometric forms and playful colors to emphasize expression over realism.