Midterm Progress

Concept:

I think it’s safe to say that a lot of events on campus have photobooths, and so do many stores in malls. I wanted to do something similar, but fully digital. While my current sketch isn’t very polished visually yet, I plan to model it after the Urban Outfitters photobooth, adding playful visuals and frames to make the experience interactive and fun, like a real-life booth.

The user interacts with the photobooth through a series of pages. They start at the main booth screen, where they can click a button to enter their name on the Start Page. After submitting their name, they move to the Filter Page to see a live video feed and choose a filter by pressing the arrow keys. Once they select a filter and click “OK,” they reach the Frame Screen, where they can take a photo. Finally, the Thank You Screen appears with an option to restart, which loops them back to the booth, letting them take another photo. Each page guides the user step by step, making the experience clear and interactive.

Code design:

Each page is implemented as its own class:

    • PhotoBooth: The main screen where users start. Displays the booth and a button to enter the next step.
    • StartPage: Handles the user’s name input and the “Start” button to move to the filter selection.
    • FilterPage: Shows a live video feed and lets users choose a filter using arrow keys, with an “OK” button to proceed.
    • FrameScreen: Applies the selected filter to the video and allows the user to take a photo with a camera button.
    • ThankYouScreen: Confirms the photo has been taken and offers a button to restart the booth.
    • Flow control: currentScreen determines which class is displayed, and button/key interactions manage transitions between screens (a simplified sketch of this pattern follows below).
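
Roughly, the switching logic looks like the sketch below (a simplified illustration; the instance names are assumptions rather than my exact code):

let currentScreen = "booth"; // which page is currently active

function draw() {
  background(0);
  // assumes one instance of each page class exists
  if (currentScreen === "booth") {
    photoBooth.display();
  } else if (currentScreen === "start") {
    startPage.display();
  } else if (currentScreen === "filter") {
    filterPage.display();
  } else if (currentScreen === "frame") {
    frameScreen.display();
  } else if (currentScreen === "thanks") {
    thankYouScreen.display();
  }
}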

Difficulties:

Throughout the implementation, the filter page class was the most troubling. I had to use the translate function to mirror my video feed, because I wanted a mirrored picture as the result, and I needed the transformation to apply only to the video, not to the text and button.

//mirror the video (so it feels like a mirror)
push();
translate(width, 0);
scale(-1, 1);
image(this.video, 0, 0, width, height);
pop();

Another key part of the project is the filters and how the selection works. Each filter is applied as an overlay on the live video feed, and the user can browse through them using the left and right arrow keys. I also made sure the selection loops continuously, so moving past the last filter brings you back to the first one.

//sepia filter overlay
if (this.filters[this.currentFilter] === "sepia") {
  fill(112, 66, 20, 60);
  noStroke();
  rect(0, 0, width, height);
} else if (this.filters[this.currentFilter] === "bw") {
  //black & white filter using canvas filter
  push();
  tint(255, 255);
  drawingContext.filter = "grayscale(100%)";
  translate(width, 0);
  scale(-1, 1);
  image(this.video, 0, 0, width, height);
  drawingContext.filter = "none";
  pop();
}

 

nextFilter() {
  //go to next filter (wraps around)
  this.currentFilter = (this.currentFilter + 1) % this.filters.length;
}

prevFilter() {
  //go to previous filter (wraps around)
  this.currentFilter =
    (this.currentFilter - 1 + this.filters.length) % this.filters.length;
}
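
For context, the arrow keys trigger these methods through p5’s keyPressed() hook. A simplified sketch of that wiring (not the exact project code):

function keyPressed() {
  // only respond to arrows while the filter page is showing
  if (currentScreen === "filter") {
    if (keyCode === RIGHT_ARROW) {
      filterPage.nextFilter();
    } else if (keyCode === LEFT_ARROW) {
      filterPage.prevFilter();
    }
  }
}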

To minimize the risks in these two implementations, I first tested them out in a separate p5.js sketch, and once I was sure they worked the way I wanted, I added them to my project flow. I also added console logs to help me debug, then removed them once everything was working as expected.

Lastly, this project really tested my understanding of how order matters. I had to use .hide() a lot to make sure buttons and features weren’t carried over between screens.
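
A simplified sketch of that hide/show bookkeeping (the element names here are made up for illustration):

function switchScreen(next) {
  // hide every interactive element so nothing leaks into the next screen
  for (let el of [startButton, okButton, cameraButton, nameInput]) {
    el.hide();
  }
  currentScreen = next; // each screen's display() re-shows what it needs
}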

Reflection:

Now that the structure is solid, I can focus on making it look appealing. I’m open to any suggestions, including suggestions about my current flow!

Here’s my Sketch:

Week 5 – Midterm Update

This week I made progress on my midterm project, which is an interactive webcam artwork inspired by Euphoria’s psychedelic aesthetics. The concept is to let users take a live video feed of themselves and transform it into surreal, hallucination-like visuals. By pressing different keys, the user can switch between effects like color-swapping, glowing “UV tears,” and trippy, pulsating backgrounds.

Concept

The idea is to create a personal, immersive experience where the viewer sees themselves transformed under digital hallucinations. The interactivity allows them to “step into” a psychedelic portrait that reacts to their input.

Design & User Interaction

  • The webcam is the canvas: it captures the user in real-time.

  • Pressing different keys (1–4) switches between effects.

  • Each effect is designed to distort colors, overlay glowing shapes, or alter the background to give a hallucinatory vibe.

Sketch

Code Design

I structured the code into separate functions (psychedelicColors(), uvTears(), hallucinationBackground()) so each effect is modular. The keyPressed() function makes it easy to navigate between modes, and the modular design means I can keep adding effects without breaking old ones.
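
A simplified sketch of that structure (the mode variable and key mapping are illustrative; I’m assuming mode 1 is the plain feed):

let mode = 1; // which effect is active (1-4)

function keyPressed() {
  if (key >= '1' && key <= '4') {
    mode = int(key); // number keys pick the effect
  }
}

function draw() {
  image(video, 0, 0, width, height); // live webcam as the base layer
  if (mode === 2) psychedelicColors();
  else if (mode === 3) uvTears();
  else if (mode === 4) hallucinationBackground();
}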

Risk / Challenge

The most intimidating part of this project is achieving truly psychedelic visuals that feel organic rather than random. Messing with pixel arrays and shaders can be complex. To reduce this risk, I started small: I tested pixel manipulation with basic color channel swaps and added some randomized “tear” overlays. Now that the structure works, I can safely experiment with shaders and sound reactivity in later iterations.
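
The basic channel swap I started with looked something like this (a simplified sketch, assuming a p5.js capture called video, a canvas matching the video size, and pixelDensity(1)):

video.loadPixels();
loadPixels();
for (let i = 0; i < video.pixels.length; i += 4) {
  pixels[i] = video.pixels[i + 2];     // red takes the blue channel
  pixels[i + 1] = video.pixels[i + 1]; // green stays the same
  pixels[i + 2] = video.pixels[i];     // blue takes the red channel
  pixels[i + 3] = 255;                 // fully opaque
}
updatePixels();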

Next Steps

  • Experiment with custom GLSL shaders for more advanced visuals.

  • Add sound reactivity so the effects pulse with music.

  • Enhance backgrounds with layered fractals or noise-based distortions.

Week 5 – Reading Reflection

Reading the essay Computer Vision for Artists and Designers made me realize how differently computers and humans actually “see.” Our eyes and brains process the world in ways that feel natural: we recognize faces instantly, understand depth, guess intentions from gestures, and fill in missing details without even noticing. Computers, on the other hand, don’t have that intuitive grasp. They just see pixels and patterns. A shadow or a little blur can confuse them. Where we understand context, like knowing a cat is still a cat even if half hidden, computers rely on strict rules or training data, and they often fail when something doesn’t match what they’ve been taught to expect.

To bridge that gap, a lot of effort goes into helping machines track what we want them to notice. Instead of raw pixels, we give them features: edges, colors, corners, or textures. Algorithms can then use those features to keep track of an object as it moves. More recently, deep learning has allowed computers to learn patterns themselves, so they can recognize faces or bodies in a way that feels closer to human intuition (though still fragile). Sometimes, extra sensors like depth cameras or infrared are added to give more reliable information. It’s almost like building a whole toolkit around vision just to get machines to do what we take for granted with a single glance.

Thinking about how this plays into interactive art is both exciting and a little unsettling. On one hand, the ability to track people makes art installations much more engaging — an artwork can respond to where you’re standing, how you move, or even who you are (as I observed in TeamLab). That creates playful, immersive experiences that wouldn’t be possible without computer vision. But the same technology that enables this interactivity also raises questions about surveillance. If art can “see” you, then it’s also observing and recording in ways that feel uncomfortably close to security cameras. I think this tension is part of what makes computer vision so interesting in art: it’s not just about making something interactive, but also about asking us to reflect on how much we’re being watched.

Assignment 4 – Data Visualization

Concept:

For this assignment, I wanted to create a data visualization that conveys information through motion and color. Initially, I tried using real-world GDP data I downloaded from Kaggle, but I ran into errors because some columns were missing values, and I wasn’t sure how to handle those gaps. So, I decided to create my own fictional CSV file with made-up GDP numbers for a larger set of countries. Each country is represented as a bouncing circle whose size reflects its GDP and whose color adds a visual layer of distinction. The animation gives the data energy and makes patterns easier to notice. I also utilized arrays and a custom class to organize the circles, dynamically managing their movement, size, and color.

Code Highlight:

One part of my code I’m proud of is how the circles move and react to the edges of the canvas. Each circle has its own speed and direction, creating a sense of individuality while still forming a cohesive scene. I also added a feature that displays the country’s name when you hover over a circle, allowing you to read the data without cluttering the visualization.

// show the country's name when the mouse hovers inside the circle
if (dist(mouseX, mouseY, this.x, this.y) < this.size / 2) {
    fill(255);
    textAlign(CENTER, CENTER);
    textSize(18);
    text(this.name, this.x, this.y - this.size / 2 - 10);
}
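
The bouncing itself happens in each circle’s update step. A simplified sketch of the edge check (the property names are illustrative):

update() {
  this.x += this.xSpeed;
  this.y += this.ySpeed;
  // reverse direction when the circle's edge touches a canvas border
  if (this.x - this.size / 2 < 0 || this.x + this.size / 2 > width) {
    this.xSpeed *= -1;
  }
  if (this.y - this.size / 2 < 0 || this.y + this.size / 2 > height) {
    this.ySpeed *= -1;
  }
}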

Embedded Sketch:

Reflections & Future Improvements:

This project taught me how to combine datasets with visual techniques to make information more intuitive. I realized that adding motion and color can turn raw numbers into an engaging experience. Hover interactions enhanced the usability, making the visualization informative without overwhelming the viewer.

For future improvements, I would like to make the visualization more interactive, perhaps allowing users to filter by region or toggle between GDP and population. I also want to experiment with color gradients or circle trails that respond to velocity, so the data becomes even more expressive. Overall, this project helped me understand how programming, animation, and data can intersect to communicate ideas creatively.

 

Assignment 4 – Generative Text

Concept

Well, so I wanted to create something that would bring words to life based on their emotional weight. The idea came to me while I was writing an essay, thinking about how certain words just hit differently: “thunder” feels more intense than “whisper,” for example. So I built this visualization that reads emotion data from a CSV file and animates words according to their intensity levels. Each word gets its own personality through movement: high-intensity words explode across the screen in chaotic circles, medium-intensity ones flow like waves, and gentle words just float peacefully. The colors and sizes change too, so you can literally see and feel the emotion in each word. Every 3 seconds it switches to a new word automatically, but you can also hit the spacebar to jump ahead if you’re impatient (like me).

Highlight of the code I’m proud of

The part that really clicked for me was figuring out how to make each letter move independently while still keeping the word readable. At first, all my letters were just clumped together or flying off in random directions, but then I realized I needed to treat each letter as its own little character with its own animation offset.

// Animate each letter individually
for(let i = 0; i < word.length; i++) {
  let x, y, size;
  
  if(intensity >= 80) {
    // High energy - explosive circular movement
    let angle = time * speed + i * PI/3;
    let radius = sin(time * 6 + i) * intensity * 0.5;
    x = cos(angle) * radius * 0.4 + i * 40 - (word.length * 20);
    y = sin(angle) * radius * 0.3;
    size = 45 + sin(time * 8 + i) * energy * 15;
  }
  // ... more animation types
}

Using i * PI/3 and i * 0.6 as offsets made each letter follow the same pattern but at slightly different phases, so they stay connected as a word but each one has its own rhythm.

I’m also pretty happy with how I handled the CSV parsing. I decided to do it manually instead of using a library.

// Parse CSV manually - split into lines then extract values
const lines = csvText.trim().split('\n');
const headers = lines[0].split(',');

// Process each data row (skip header row)
for (let i = 1; i < lines.length; i++) {
  const values = lines[i].split(',');
  csvData.push({
    word: values[0],
    intensity: parseInt(values[1]),
    color: [
      parseInt(values[2]),  // red
      parseInt(values[3]),  // green
      parseInt(values[4])   // blue
    ]
  });
}

Sketch

The visualization cycles through emotional words automatically; press the spacebar to advance manually.
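
Under the hood, the auto-advance is just a millis() comparison. A simplified sketch of how that might look (variable names are my own; csvData is the array from the parsing code above):

let lastSwitch = 0;
let currentWordIndex = 0;

function draw() {
  // move to the next word every 3 seconds
  if (millis() - lastSwitch > 3000) {
    currentWordIndex = (currentWordIndex + 1) % csvData.length;
    lastSwitch = millis();
  }
}

function keyPressed() {
  if (key === ' ') {
    // spacebar skips ahead immediately and restarts the timer
    currentWordIndex = (currentWordIndex + 1) % csvData.length;
    lastSwitch = millis();
  }
}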

 

Future Improvements

There are definitely some things I want to tackle next. First, I’d love to add sound. Another idea is to let users upload their own word lists or even type in words manually to see how the system interprets them. Right now it’s limited to my CSV file, but it would be cool to make it more interactive. I also want to experiment with particle effects – maybe letters could leave trails or break apart into smaller pieces for really explosive words.

The color system could be smarter too. Right now I’m manually assigning colors, but it would be interesting to generate them automatically based on the word’s emotional category or intensity. Maybe cooler colors for calm words and warmer ones for energetic words, or even colors that shift gradually as the intensity changes.

Week 4 Coding Assignment

Concept:

When I see words like generative text, I immediately think of the code rain effect from the movie The Matrix.

(screenshot of the code rain visual effect in The Matrix)

I intend to recreate this effect using p5.js.

Code that I am proud of:

To recreate this effect, there are at least two types of objects we want to define: first, the “word stream” that appears on the screen; second, the individual character symbols. Once these two classes are written, the rest of the setup should be easy.

draw() {
  // Slight green variance between streams
  const g = map(sin(this.colorPhase), -1, 1, 180, 255);
  if (this.isHead) {
    fill(180, g, 180); // brighter/whiter head
  } else {
    fill(120, g, 120);
  }
  text(String(this.value), this.x * this.glyphSize, this.y * this.glyphSize);
}

When writing the matrix symbol class, I was proud of these lines of code. Using very simple mathematics, I achieved the effect of green variance across different streams.
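
To show how the two classes fit together, here is a simplified sketch of the stream side, which owns a column of symbols and rains them down (the names and constructor details are illustrative, not my exact code):

class Stream {
  constructor(x, glyphSize) {
    this.symbols = [];
    const count = round(random(5, 25));
    for (let i = 0; i < count; i++) {
      // index 0 is the bright head; the rest trail above it
      this.symbols.push(new MatrixSymbol(x, -i, i === 0, glyphSize));
    }
  }

  render() {
    for (const s of this.symbols) {
      s.draw(); // the draw() shown above
      s.rain(); // step down one cell, wrapping to the top when off-screen
    }
  }
}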

Output:

Reflection:

After reading the design post, I feel that if I wanted to add more design to this animation, I would try an effect where swiping the mouse across the screen changes the length of the streams, or strokes the streams wherever the mouse is.

 

Week 4 – Post Response

Post Response: The Psychopathology of Everyday Things

One thing that drives me crazy, which wasn’t directly mentioned in Norman’s reading, is the “close door” button on elevators. It feels like a control I should have, but in most modern elevators, pressing it does absolutely nothing for ordinary passengers. The button is just there to give the illusion of control, which is frustrating when you actually want the doors to close faster. A simple improvement would be to either remove the button entirely or make it functional for real—perhaps by allowing it to slightly shorten the automatic door timer, while still respecting safety regulations. This would reduce the cognitive dissonance between what the interface suggests and what it actually does.

Another example is the volume slider on phones. While it looks linear, the actual increase in volume is logarithmic to match human perception. This hidden design is clever, but it highlights how designers are constantly manipulating reality to feel “natural.” I find it fascinating—and slightly maddening—how often interfaces are designed to trick our senses into thinking things are more intuitive than they are.

Applying Norman’s principles to interactive media, these insights are extremely valuable. For example, in a music app or video game, sliders for volume, brightness, or character speed should be perceptually adjusted so that changes feel smooth and intuitive. Similarly, buttons or controls should give honest feedback: if an action can’t actually happen (like the elevator door closing instantly), the interface should indicate that limitation rather than pretend otherwise. This reduces user frustration and improves overall satisfaction.
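
To make the slider example concrete, here is a simplified sketch of a perceptually adjusted volume control, mapping a linear slider position to an exponential gain curve (an illustration of the idea, not any particular app’s code):

// map a linear slider position (0 to 1) to a gain value that sounds linear
function sliderToGain(sliderValue) {
  const minGain = 0.001; // roughly a -60 dB floor
  if (sliderValue === 0) return 0; // true silence at the very bottom
  // exponential curve: equal slider steps feel like equal loudness steps
  return minGain * Math.pow(1 / minGain, sliderValue);
}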

In general, understanding the psychopathology of everyday things encourages designers to consider the gap between perception and reality. By designing interactions that match how humans perceive the world, we can make digital interfaces feel more natural and enjoyable.

Week 4 – Text Art

Concept

My artwork “Can You Understand Me?” explores human connectedness through language and the barriers that can arise when communication breaks down. The phrase “Can you understand me?” is a simple but powerful question, expressing both the desire for connection and the frustration of being misunderstood. By placing this phrase in multiple world languages on the screen, I wanted to highlight the diversity of expression and the universality of the human need to be heard.

The floating words and question marks represent the fluid, sometimes chaotic nature of communication. They drift, interact, and repel—mirroring how language can bring people closer or push them apart, depending on understanding.


Code Highlight (what I’m most proud of)

One feature I’m especially proud of is the repelling effect from the mouse. I designed this so that when the mouse gets too close, the words push away. It represents the tension we sometimes feel in conversation—when misunderstanding or pressure creates distance, even when connection is desired.

// Repel from mouse
let mouse = createVector(mouseX, mouseY);
let d = dist(this.pos.x, this.pos.y, mouse.x, mouse.y);
if (d < 200) {
  let force = p5.Vector.sub(this.pos, mouse);
  force.setMag(8 / max(d, 1));
  this.vel.add(force);
}

This snippet calculates the distance between each word and the mouse, then generates a force pushing the word away when the mouse is within 200 pixels. It’s a small detail but adds a lot of dynamic personality to the sketch.


Embedded Sketch


Reflection

Working on this project taught me how much movement and interaction can symbolize deeper human experiences. The floating words show how languages circulate and intermingle globally, while the mouse repulsion adds an element of friction, symbolizing the challenges of cross-cultural communication.



Week 4 – Reading Response

One thing that always frustrates me is elevator button panels. I can’t count the number of times I’ve stood in front of a shiny panel of identical metal buttons, with no clear distinction between “door open,” “door close,” or the emergency button. Sometimes the “door open” symbol is barely visible or located in an unintuitive spot, and more than once I’ve pressed the alarm instead of the open button (which is always a bit embarrassing). A small redesign could go a long way: clearer icons, color coding, or even tactile differences would make it more intuitive. Norman’s point that “when something goes wrong, the user is often blamed instead of the design” fits perfectly here. It’s not that people are clumsy; it’s that the system doesn’t communicate well.

I’m still thinking about the line that said “Good design starts with an understanding of psychology and technology. It is not enough that we build products that work; they must be understandable and usable.” This brings us directly to UX. UX is important because it shapes the way people experience and connect with a product, space, or interaction. Good UX reduces confusion, frustration, and wasted effort, making tasks feel natural and even enjoyable. As Norman emphasizes, design isn’t just about how something looks; it’s about how it communicates its purpose and supports human behavior. When UX is overlooked, people often end up blaming themselves for “not getting it,” when in reality it’s the design that failed to guide them.

When I think about applying Norman’s principles of design to interactive media, the biggest lesson is clarity in signifiers and feedback. In my projects, I’ve sometimes assumed that users will “just know” what to click or press, but Norman’s reminder is that what feels intuitive to me as the designer may not feel that way to someone else. Adding clear visual cues like arrows, hover highlights, or simple instructions makes interactions discoverable and satisfying. Affordances and signifiers should be obvious: users shouldn’t have to guess what to click, drag, or press. Feedback is just as important: when someone interacts with my project, they should instantly see or hear a response. That way, instead of frustration, interaction feels like a satisfying loop.

 

Week 4 – Bump please

Concept

For this assignment I wanted to create something to do with text, and the first thing that came to mind was how we ask our friends to Bump! our posts in our Facebook RoR group (Room of Requirement, a Facebook group for NYU Abu Dhabi students). So I decided: why not create something around this legendary word, Bump?

When I think of that word, multiplication comes to mind, because the more times you write it, the more people see your post. That led me to the idea that when you press the word, it spawns another Bump! with a random color and its own bouncing behavior.

Creation

I created a Bump class that stores position, speed, color, and the text itself. Each bump moves across the canvas and bounces when it hits the edges. When the user clicks on a bump, a new one is created in a random position with a random color, and it moves independently. I also added a restart option so that pressing the key C or c clears the screen and brings back just one bump in the center.
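
The restart is just a key check that rebuilds the array. A simplified sketch (assuming the global bumps array used in the click-handling code below):

function keyPressed() {
  if (key === 'c' || key === 'C') {
    // clear everything and bring back a single bump in the center
    bumps = [new Bump(width / 2, height / 2)];
  }
}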

Difficulties

The most challenging part for me was figuring out how to detect if the mouse actually clicked on the text. Since text doesn’t have a simple rectangle in p5.js, I had to calculate the approximate bounding box using textWidth and the text size. After that, I also had to make sure the bumps bounced correctly on the edges, which meant checking both horizontal and vertical boundaries.
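
The hit test ended up as an approximate bounding box around the word. A simplified sketch (assuming the text is drawn with textAlign(CENTER, CENTER); the property names are illustrative):

clicked(mx, my) {
  textSize(this.size); // textWidth depends on the current text size
  const w = textWidth(this.word); // approximate width of the drawn word
  const h = this.size;            // use the text size as the box height
  return (
    mx > this.x - w / 2 && mx < this.x + w / 2 &&
    my > this.y - h / 2 && my < this.y + h / 2
  );
}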

Favorite Part

My favorite part of the code is definitely the spawning of new bumps. I love how every click brings a surprise because the new text gets a random color and a random speed. It gives the sketch a playful and unpredictable feeling, which matches the energy of the word Bump! itself.

function mousePressed() {
  // check each bump to see if mouse is inside it
  for (let b of bumps) {
    if (b.clicked(mouseX, mouseY)) {
      // if clicked, create a new bump at random position
      bumps.push(new Bump(random(width), random(height)));
      break; // only add one new bump per click
    }
  }
}

Future Improvements

In the future, I would like to add a few more playful features. For example, making the bumps grow or shrink over time, adding sound effects when they collide with the walls, or even keeping track of how many bumps were created. Another fun idea would be to make the bumps interact with each other—like bouncing off one another instead of just overlapping.

So here’s my end result, and if you see my post on RoR, please bump it :))

Press bump! to multiply it, and press C to reset