Welcome to my midterm project, Neo-Euphoria Visions, an interactive audiovisual artwork created with p5.js. This project is an exploration into surreal, psychedelic self-portraiture, heavily inspired by the distinct visual language and emotional tone of the HBO series Euphoria. It uses your webcam to pull you into a hallucinatory world that reacts to your presence, blurring the lines between the viewer and the viewed.
The experience is a multi-layered trip. It begins with a simple invitation before slowly transforming your reality. Your image is recast in a cold, UV-and-pink color palette while a motion trail ghosts your every move. A glowing aura emanates from your silhouette, and hand-doodled stars twinkle into existence around you. The piece is designed to be both beautiful and unsettling, contrasting the cold, trippy visuals with organic, hot-red tears that bleed from your eyes. With a dynamic soundtrack and automated visual shifts, the goal is to create a mesmerizing, ever-changing, and deeply personal digital hallucination.
Live Sketch
You can experience Neo-Euphoria Visions live in your browser by clicking the link below.
Screenshots
How It Works & What I’m Proud Of
This project is built on a foundation of p5.js, but its soul lies in the integration of two powerful tools: the ml5.js library for computer vision and GLSL for high-performance fragment shaders. The entire visual output, from the colors to the background effects, is rendered in a single, complex fragment shader that runs on the GPU. This was a critical technical decision that allows for multiple layers of real-time effects without the performance lag that would come from CPU-based pixel manipulation.
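To give a sense of the overall structure, here is a minimal sketch of that setup rather than my exact code: the webcam feed is handed to the fragment shader as a texture, and a single full-screen quad is drawn every frame. The file names and uniform names (portrait.vert, portrait.frag, u_tex, u_time) are placeholders.

JavaScript
// Minimal sketch: one fragment shader colors every pixel of the canvas.
let video, portraitShader;

function preload() {
  portraitShader = loadShader('portrait.vert', 'portrait.frag');
}

function setup() {
  createCanvas(640, 480, WEBGL); // shaders require the WEBGL renderer
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide(); // hide the raw <video> element; we only sample it as a texture
}

function draw() {
  shader(portraitShader);
  portraitShader.setUniform('u_tex', video);        // webcam frame as a texture
  portraitShader.setUniform('u_time', millis() / 1000.0);
  rect(-width / 2, -height / 2, width, height);     // full-screen quad
}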
The core mechanic involves layering several computer vision and graphical processes. First, ml5.js BodyPix creates a segmentation mask of the user, which is fed into the shader. This mask allows me to separate the scene into three distinct layers: the background, a glowing “aura” directly behind the user, and the user themselves. The shader then applies different artistic effects to each layer. Simultaneously, ml5.js FaceApi tracks facial landmarks to determine the precise location of the user’s eyes. This data is used by a custom Tear class in p5.js, which draws organic, flowing tears on a transparent overlay canvas, making them appear attached to the face. I’m particularly proud of the logic that makes the tears “follow” the eyes smoothly by interpolating their position between frames, which prevents the jittery tracking that can often occur.
JavaScript
// A snippet from the Tear class showing the smooth position update
updatePosition(newX, newY) {
  if (this.path.length === 0) return;
  let head = this.path[0];
  // Use lerp to smoothly move the tear's origin towards the new eye position.
  // This prevents jittering if the face detection is noisy.
  let targetPos = createVector(newX, newY);
  let smoothedPos = p5.Vector.lerp(head, targetPos, 0.3);
  let delta = p5.Vector.sub(smoothedPos, head);
  for (let p of this.path) {
    p.add(delta);
  }
}
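For context, this is roughly how the eye position that feeds updatePosition() can be pulled out of the FaceApi detections. The result shape shown here (detections[0].parts.leftEye as an array of points with _x/_y fields) is what ml5 v0.12 exposes, but treat the exact field names, and the leftTear variable, as illustrative assumptions rather than my literal code.

JavaScript
// Hedged sketch: averaging the FaceApi eye landmarks into a single point
// and handing it to the Tear, which smooths it further with lerp.
function gotFaces(err, results) {
  if (err || !results || results.length === 0) return;

  let eyePts = results[0].parts.leftEye; // landmark points around the left eye
  let cx = 0, cy = 0;
  for (let pt of eyePts) {
    cx += pt._x;
    cy += pt._y;
  }
  cx /= eyePts.length;
  cy /= eyePts.length;

  leftTear.updatePosition(cx, cy); // the class handles the smooth follow
  faceapi.detect(gotFaces);        // keep the detection loop running
}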
One of the best technical decisions was implementing a temporal smoothing feedback loop for the BodyPix mask. The raw mask from the model can be noisy and flicker between frames, creating harsh, blocky edges. By blending each new mask with the previous frame’s mask, the silhouette becomes much more stable and organic, which was essential for the “glowing aura” effect to work properly. Finally, the automated, timed switching between three distinct color palettes gives the project a life of its own, making the experience unpredictable and unique for every viewing.
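Before getting to the palette code below, here is a rough sketch of that mask-blending idea: one way to do the smoothing is on the JavaScript side, drawing each new mask into an accumulation buffer at partial opacity. The buffer, field, and uniform names (maskBuffer, result.personMask, u_mask) are placeholders, and the exact shape of the BodyPix result depends on the ml5 version.

JavaScript
// Hedged sketch: temporal smoothing by blending each new BodyPix mask
// into a persistent buffer instead of replacing it outright.
let maskBuffer; // p5.Graphics created in setup() with createGraphics(640, 480)

function gotSegmentation(err, result) {
  if (err || !result) return;

  // Draw the fresh mask at roughly 30% opacity over the previous frames,
  // so single-frame flicker gets averaged away.
  maskBuffer.tint(255, 80);
  maskBuffer.image(result.personMask, 0, 0, maskBuffer.width, maskBuffer.height);
  maskBuffer.noTint();

  // The smoothed buffer, not the raw mask, is what the shader receives.
  portraitShader.setUniform('u_mask', maskBuffer);

  bodypix.segment(video, gotSegmentation); // keep segmenting
}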
GLSL
// A snippet from the fragment shader showing the palette switching logic
void main() {
  // ...
  vec3 personPalette_phase;
  vec3 auraPalette_phase;
  if (u_activePalette == 1) {         // Palette 1: Haze & Fire
    personPalette_phase = vec3(0.0, 0.1, 0.2);   // UV
    auraPalette_phase   = vec3(0.1, 0.15, 0.20); // Yellow/Orange
  } else if (u_activePalette == 2) {  // Palette 2: Electric Pink/Cyan
    personPalette_phase = vec3(0.6, 0.7, 0.8);   // Deep UV
    auraPalette_phase   = vec3(0.5, 0.6, 0.7);   // Pink/Cyan
  } else {                            // Palette 3: Cold UV
    personPalette_phase = vec3(0.5, 0.6, 0.7);   // Deepest UV
    auraPalette_phase   = vec3(0.8, 0.9, 1.0);   // Electric Blue/Violet
  }
  // ...
}
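On the JavaScript side, the automated switching can be as simple as a timer that cycles the u_activePalette uniform. This is a hedged sketch of that idea; the interval length and the function and variable names are illustrative, and only the uniform name matches the shader code above.

JavaScript
// Hedged sketch: cycle through the three palettes on a fixed timer.
const PALETTE_INTERVAL = 20000; // e.g. switch every 20 seconds
let activePalette = 1;

function updatePalette() {
  // Cycles 1 -> 2 -> 3 -> 1 based on elapsed time; called from draw().
  activePalette = 1 + (floor(millis() / PALETTE_INTERVAL) % 3);
  portraitShader.setUniform('u_activePalette', activePalette);
}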
[Screenshot showing the warmer “Haze & Fire” palette in action.]
Problems and Areas for Improvement
The single biggest challenge I encountered during development was a series of stability issues related to the ml5.js library. I initially ran into persistent “… is not a function” errors, which, after extensive debugging, I discovered were caused by a major version update (v1.0.0) that had deprecated the FaceApi model I was using. The solution was to lock the project to a specific older version (v0.12.2) in the index.html file. This was a crucial lesson in the importance of managing dependencies in web development.
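Concretely, the fix is just pinning the script tag in index.html to the older release instead of loading the latest build (the unpkg URL here is one example of a CDN that serves it):

HTML
<!-- Pin ml5 to the last release that still ships the FaceApi model -->
<script src="https://unpkg.com/ml5@0.12.2/dist/ml5.min.js"></script>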
Even after fixing the versioning, I faced a "race condition" where both FaceApi and BodyPix would try to initialize at the same time, causing one or both to fail silently. This resulted in features like the aura and glitch-zoom not working at all. I resolved this by re-architecting the setup process to "chain" the initializations: BodyPix only begins loading after FaceApi has confirmed it is ready. This made the entire application dramatically more reliable.

For future improvements, I would love to make the background effects more diverse and audio-reactive. Having the stars pulse or the colors shift in time with the bass of the music would add another layer of immersion. I could also explore using hand-tracking via HandPose to allow the user to "paint" or interact with the stars in the background, making the experience even more creative and personal.
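To illustrate the chaining fix described above, here is a rough sketch of the startup order. The constructor and callback style follows ml5 v0.12, but the option values, function names, and detection calls are illustrative rather than my exact code.

JavaScript
// Hedged sketch: BodyPix only starts loading once FaceApi reports ready.
let faceapi, bodypix;

function startModels() {
  const faceOptions = { withLandmarks: true, withDescriptors: false };
  faceapi = ml5.faceApi(video, faceOptions, faceReady); // step 1
}

function faceReady() {
  console.log('FaceApi ready, now loading BodyPix...');
  bodypix = ml5.bodyPix(video, bodyPixReady); // step 2 only begins here
}

function bodyPixReady() {
  console.log('BodyPix ready');
  faceapi.detect(gotFaces);                // start both loops
  bodypix.segment(video, gotSegmentation); // only once everything is loaded
}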

