Week 5 – Midterm Draft 1 Progress

Concept + Design

For this project, I wanted to go for something mystical yet down to earth. That is why I wanted to integrate “ordinary” objects such as trees and flowers and combine them with magical elements, such as the fantasy-like structures in the background and the blue hues.

Although I am still in the process of deciding what the interaction will be for viewers to engage with this project, I have narrowed my main ideas down to two.

  1. The roses will display different colors and play different musical tracks whenever the viewer clicks on them. For example, one rose will change color after the viewer clicks on it, another will change the music every time it is clicked, and a third might give an inspirational message after every click. This will allow an entertaining and colorful interaction to match the tone of the sketch (see the small sketch after this list).
  2. Another option, while less entertaining, will focus on the visual aesthetics: the interaction will rely on a function that resembles a magnifying glass, so viewers can take a closer look at all the objects displayed, and depending on where in the canvas they move the mouse, they will hear a different musical track.
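
A minimal sketch of the first idea, assuming a rose drawn as a simple circle (all names and values below are placeholders, not the final code):

let roseColors = ['red', 'deeppink', 'gold', 'mediumpurple'];
let colorIndex = 0;
const roseX = 200, roseY = 200, roseSize = 50;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(20, 30, 80); // blue hue to match the tone of the sketch
  noStroke();
  fill(roseColors[colorIndex]);
  ellipse(roseX, roseY, roseSize);
}

function mousePressed() {
  // cycle to the next color when the rose is clicked
  if (dist(mouseX, mouseY, roseX, roseY) < roseSize / 2) {
    colorIndex = (colorIndex + 1) % roseColors.length;
  }
}
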
Most Frightening/Complex Part

Although I haven’t written code that can detect, prevent, or avoid specific problems or glitches, one piece of code I wrote to prevent the image in my canvas from having issues is the preload() function. It allows the image to load fully before the sketch displays it.

// Edited Image PNG 
let palace;
function preload(){
   palace = loadImage('palace.PNG');
}

Another function I struggled with, but that eventually led to success, was the custom shape, which I initially made in a separate sketch to define the shape of the tree. Once I realized I would need to move this shape within my final sketch, I introduced translate() inside the function drawCustomShape(), so I could draw the custom shape and position it anywhere on the canvas.

function draw() {
  background(51);

  // draw the tree twice at different positions
  drawCustomShape(450, 230);
  drawCustomShape(-50, 230);
}


// function of custom shape (tree) + translation of position at x and y

function drawCustomShape(x, y) {
  push();

  translate(x, y);

  noStroke();
  fill("rgb(235,233,233)");
  beginShape();

  // trunk of the tree (flared at the base)
  vertex(140, 95);
  vertex(140, 250);
  vertex(100, 280);
  vertex(225, 280);
  vertex(190, 250);
  vertex(190, 95);
  endShape(CLOSE);

  // blue foliage drawn as overlapping ellipses
  fill("rgb(32,32,228)");
  ellipse(120, 90, 170, 120);
  ellipse(180, 98, 130, 110);
  ellipse(150, 45, 140, 160);
  ellipse(200, 55, 150, 120);

  pop();
  // end of code for custom shape
}

 

Embedded sketch

 

Reflection and ideas for future work or improvements

My next steps are to find a way to display text that introduces the context and instructions for the interaction. I also plan to decide what that interaction will be based on the feedback, and to begin writing the necessary code in a separate sketch before applying it to the final one. In the meantime, I will also search for the music pieces I will use and edit them accordingly.

Furthermore, I need to plan how to include the option to start a new session without restarting the sketch. Any kind of feedback will be highly appreciated.
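
One possible approach, sketched here with placeholder names rather than the final design, is to gather all mutable session state into a single reset function that can be called at any time:

let score;
let gameState;

function setup() {
  createCanvas(400, 400);
  resetSession(); // initialize all session state in one place
}

function draw() {
  background(51);
  fill(255);
  text('state: ' + gameState + '  score: ' + score, 10, 20);
}

function resetSession() {
  score = 0;
  gameState = 'intro'; // return to the introduction screen
}

function keyPressed() {
  if (key === 'r') {
    resetSession(); // start a new session without reloading the sketch
  }
}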

Week 5: Reading Response

I think both computer vision and human vision have benefits and downsides in terms of how they comprehend the reality and meaning of the world. Computer vision depends on the quality of its algorithms and on environmental conditions, which often makes it fail to comprehend the meaning of videos. Human vision, on the other hand, allows us to instinctively comprehend the world through our greatest tools: our eyes. Furthermore, when it comes to emotions, computer vision cannot fully understand them. As mentioned in the reading, emotion recognition systems turn very subjective, complex, personal features (emotions) into objective data, which I don’t think is ethically right, because we are essentially labeling people’s emotions in a way that does not faithfully depict them. However, computer vision can track everything in real time for as long as its power supply lasts. We, as humans, cannot keep our eyes open and look at everything, but computer vision can stay active indefinitely, recording everything that is going on. Expanding on this, computer vision can depict the true reality of the world if all of the conditions are met and the algorithms are implemented correctly. For example, Suicide Box was able to reveal a reality of suicide that society was uncomfortable confronting. In this sense, computer vision is very effective in maintaining transparency.

 

To enhance the quality of computer vision, we can control the environment of the physical world. For example, we can change the brightness and lighting of the background or change the color of objects so that the target is spotlighted, making it easier for computers to track.

 

In terms of the future of computer vision, I think more and more artists are going to incorporate computer vision into their interactive art as people are getting more interested in human and computer interaction, such as VR, AR, XR, and robotics. teamLab would be a great example. They exhibit artwork that allows people to interact with it. Specifically, in Sketch Aquarium, kids draw fish and then the fish appear on the screen so people can feed them or make them swim together. But I believe there are also ethical implications of using computer vision, such as tracking people’s personal data without consent and digital sexual harassment. Therefore, we should establish standards to make sure that computer vision tracking systems are used in appropriate ways.

Week 5 – Reading

  • What are some of the ways that computer vision differs from human vision?

As humans, we are able to look at something and classify it no matter the angle or lighting. A computer, on the other hand, just sees pixels of certain colours, and we see the result as a reflection of real life. This is where machine learning gets involved: through hundreds of images being labelled, and colours and patterns identified, the computer becomes able to tell what it is looking at. That is exactly how the ML model in my midterm is able to detect which hand is which and which fingers are which.

As humans, we are told what is what by our environment, and we see this come up with blind people in particular. What we see as green, someone else may not see the same way. So in that sense, we are similar.

  • What are some techniques we can use to help the computer see / track what we’re interested in?

Frame differencing – detects motion by comparing each pixel in one video frame with the next. The difference in brightness indicates movement; this requires stable lighting and a stationary camera. (A minimal sketch of this technique follows after these three descriptions.)

Background subtraction – detects presence by comparing the current frame against a stored image of the empty scene. Areas that differ significantly likely represent objects of interest, but this is sensitive to lighting changes.

Brightness thresholding – distinguishes objects based purely on luminosity, comparing each pixel’s brightness to a threshold value. This works when you can control illumination, for example through backlighting.
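
To make frame differencing concrete, here is a minimal p5.js sketch of the idea, assuming a webcam capture (the threshold of 30 is an arbitrary noise cutoff, my own choice rather than a value from the reading):

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  let motion = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // compare brightness (red channel as a cheap proxy) between frames
    const diff = abs(video.pixels[i] - prevFrame.pixels[i]);
    if (diff > 30) motion++; // threshold filters out sensor noise
  }

  // store the current frame for the next comparison
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);

  fill(255);
  text('moving pixels: ' + motion, 10, 20);
}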

By combining these techniques, we can create more complex art representations such as contact interactions (triggering events when a silhouette touches a graphic object), overlap interactions (measuring shared pixels between silhouette and virtual elements), or reflection interactions (computing angles when objects strike the silhouette). Warren’s research shows that once you’ve identified body pixels, implementing sophisticated interactions requires “little more than counting pixels” – making computer vision accessible for creating responsive installations, games, and performance systems where participants interact with virtual creatures or control visual elements through gesture and movement.

  • How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

The surveillance aspect is unavoidable: computer vision in art exists in the same technological ecosystem as security systems and facial recognition.

Different artists engage with this differently. Krueger’s Videoplace uses vision technology playfully, where people willingly participate. But Lozano-Hemmer’s Standards and Double Standards explicitly creates “a condition of pure surveillance” using symbols of authority, and that visibility of surveillance is the point. Rokeby’s Sorting Daemon confronts automated profiling by making visible the disturbing implications of computer vision used for racial categorisation, using surveillance tools to critique surveillance itself. Jeremijenko’s Suicide Box is honestly very creepy to me; seeing deaths recorded and shown really raises questions about who has the right to see those sorts of moments.

This is a topic that raises questions about consent and about where users’ data is stored. If I were to interact with such art, should I assume that it won’t store any input I am feeding it?

Week 5 – Reading Response ( Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers )

Human vision and computer vision are very different. Humans are born with natural senses that let us see, feel, and understand what’s happening around us without even thinking about it. For example, if I see someone waving at me, I instantly know who they are and what that action means. Computers don’t have this kind of built-in understanding. They need to be given artificial “senses” through cameras and sensors. Instead of just seeing, a computer has to go step by step, detecting pixels, separating the object from the background, and then tracking where it moves. Humans process all of this automatically and with emotion, while computers rely only on data and instructions.

To help computers track what we want, we can use techniques like motion detection, background subtraction, brightness levels, and object tracking. These methods allow the computer to focus on what matters, like spotting movement or identifying a specific shape or color.

Computer vision brings something really exciting to the world of art because it makes art interactive. It allows artists, even those who aren’t very technical, to push their creativity further without as many limits. I find this so beautiful because it opens up new ways to experience and create art.

A great example of this is the Rain Room. In this installation, rain falls inside a dark room, but when someone walks through it, sensors detect their movement and stop the rain wherever they are standing. It feels like you have the power to control the rain. Another example is TeamLab Borderless, where massive digital projections respond to visitors’ movements, creating an ever-changing world of color and light.

What makes this so special is that it gives the audience a role in shaping the art. The original artist creates the setup, but every visitor changes how the piece looks and behaves through their actions. In a way, everyone becomes an artist. The artwork is never exactly the same twice, which makes it even more meaningful. It turns art into a shared, living experience where the line between the creator and the audience disappears.

Week 5: Midterm progress

My Midterm Project Concept

Last week, after a long and tiring day, I decided to take a short break and treat myself to a simple dinner. I made a fresh salad, seasoned it well, and added a generous scoop of hummus. I thought that a good meal would help me feel better. However, halfway through eating, I noticed a fly lying right in my food. The sight instantly ruined my appetite and left me feeling uneasy, worried I might end up with a stomach ache. I couldn’t help but think how much better the evening would have been if that fly hadn’t landed in my meal.

Interestingly, a friend later shared a similar unpleasant experience of finding a worm in their food. That conversation sparked an unusual but fun idea for a game: Worm Against Sanity. In this game, the player goes around campus, covering spots like the library, D1, D2, the marketplace, and the Palms, eliminating worms before they ruin the food.

One of the most challenging parts of building Worm Against Sanity was making the game seamlessly switch between multiple screens while also animating the girl and worm sprites so that they moved realistically across the canvas. I wanted the opening screen, the play area, and the menu to feel like distinct spaces, but still connect smoothly when the player clicked a button. To achieve this, I kept track of a screen variable that updates whenever a mouse click falls within certain button coordinates. In the draw() function, I check the current value of screen and display the correct background and elements for that state. At the same time, I focused on fluid character and enemy movement. For the girl, I downloaded a running GIF and converted it into a sprite sheet, then wrote logic to cycle through the sprite frames every time an arrow key is pressed, flipping the image when she moves left. The worm uses a similar sprite-sheet approach, but it continuously advances across the screen on its own, updating its frame at regular time intervals and reducing the player’s life if it escapes. Coordinating these mechanics (screen transitions, sprite-sheet animation, and frame-by-frame movement) took careful planning and debugging, but it created a smooth and lively gameplay experience once everything clicked together.
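
A simplified skeleton of that screen-switching logic looks like the following (I renamed the variable to gameScreen here to avoid clashing with the browser’s built-in window.screen; the button coordinates are placeholders):

let gameScreen = 'start'; // 'start', 'play', or 'menu'

function setup() {
  createCanvas(640, 480);
}

function draw() {
  if (gameScreen === 'start') {
    background(200);
    textAlign(CENTER);
    text('Worm Against Sanity – click Play', width / 2, height / 2);
  } else if (gameScreen === 'play') {
    background(50);
    // gameplay: draw the girl and worm sprites here,
    // advancing their sprite-sheet frames over time
  }
}

function mousePressed() {
  // switch screens when a click lands inside the Play button area
  if (gameScreen === 'start' &&
      mouseX > 270 && mouseX < 370 && mouseY > 280 && mouseY < 320) {
    gameScreen = 'play';
  }
}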

I also experimented with adding interactive features, such as having the character jump on a worm when I move my hand or make a fist. Although I haven’t fully figured out how to implement motion-based controls yet, I’m actively exploring solutions and refining the concept.

In terms of visuals, I wanted the game to feel lively and unique, so I used AI tools to generate a cartoony illustration of the NYUAD campus to serve as the background for the different screens. This gives the game a playful, campus-specific atmosphere and saves time that would have gone into manual drawing.

 

My Work so Far

 

Week 5 – Game Concept

This week’s focus:

This is the first week I’m working on my midterm project, so I decided to focus on the game concept in order to have a clear direction. I planned my entire concept from A to Z, but did not dive into the technical side yet, because I decided the most efficient way to start is with a clear plan.

The Basket Adventures: Game Concept

Goal:

Catch falling candies to score points and survive as long as possible, while avoiding poison candies that make you lose lives.

Lives:

Player starts with 3 lives.

• Lose 1 life for each poison candy caught.

• When lives = 0 → Game Over screen appears.

Scoring:

+5 points for every second survived.

+10 points for each candy caught (optional extra boost).

• Display score at the top at all times.

Difficulty:

As time goes on, falling objects spawn faster and fall quicker.

Power-up (Immunity Mode):

• A special glowing candy sometimes falls.

• When caught → player becomes immune to poison for 10 seconds.

• A countdown timer appears on screen during immunity (sketched below).
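
A rough sketch of how that countdown could work with p5’s millis() (the variable names and the ‘p’ key trigger are placeholders for illustration, not the final game code):

let immuneUntil = 0; // timestamp (ms) when immunity expires

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(51);
  fill(255);
  if (millis() < immuneUntil) {
    // show the remaining immunity time, rounded up to whole seconds
    const secondsLeft = ceil((immuneUntil - millis()) / 1000);
    text('Immunity: ' + secondsLeft + 's', 10, 20);
  }
}

function keyPressed() {
  if (key === 'p') {
    immuneUntil = millis() + 10000; // catching the power-up grants 10 s
  }
}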

Game Modes (Themes)

The four modes are mostly visual changes:

1. Witch Land:

• Background: Haunted forest.

• Good objects: candy corn, pumpkins.

• Poison: Skulls or green potion bottles.

• Power-up: Bat.

2. Unicorn Land:

• Background: Pastel rainbow sky.

• Good objects: Cupcakes, rainbow candies.

• Poison: Spiky black storm clouds.

• Power-up: Star.

3. Funky Land:

• Background: Disco dance floor.

• Good objects: Ice cream.

• Poison: Gray or dull-colored shapes.

• Power-up: Sunglasses.

4. Animal Kingdom:

• Background: Jungle scene.

• Good objects: Bananas, coconuts.

• Poison: snakes.

• Power-up: Leaves.

Different Screens

Game Flow Description

Home Page (Adventure Selection):

• The first screen shows the message:

“Where do you want to take the basket on an adventure?”

• The player chooses one of four themes:

1. Witch Land

2. Unicorn Land

3. Funky Land

4. Animal Kingdom

• Once a theme is chosen, it moves to the Instructions Page.

Instructions Page:

• Shows the selected theme background at the top.

• Displays clear instructions:

• What the good candy looks like and that it increases your score.

• What the poison candy looks like and that it takes away one life.

• The power-up candy image and that it gives immunity for 10 seconds.

• Controls for moving the basket (left and right arrow keys).

• How the scoring works:

“You gain 5 points for every second you survive.”

• A button or message appears: “Press SPACE to start!”

Gameplay:

• The selected theme’s background is displayed.

• Player moves the basket left and right to catch falling candies.

Good candy: Increases score.

Poison candy: Decreases lives by 1.

Power-up: Activates 10-second immunity with a visible countdown timer.

• The game gets faster and harder over time.

• When lives reach 0, the game ends and moves to the Game Over screen.

Game Over Screen:

• Displays:

Final score.

• A message: “Game Over!”

• Two options for the player:

1. Restart:

• Restarts the same theme immediately.

2. Choose New Theme:

• Returns to the Home Page where they can select a different theme.

At first, I thought of making a simple game with a basket catching candies because it felt like the kind of game anyone could play to relax or pass the time. I also wanted something my little siblings could enjoy, so I asked them for their thoughts. My younger sister immediately said it should be in a magical unicorn land, while one of my brothers insisted on a spooky witch land, and my other brother said it had to be in an animal kingdom. That’s when I realized it would be more fun and interactive to add multiple themed worlds. This way, the game could appeal to a wider audience and let players choose their own adventure, making it more exciting and personal for everyone who plays.

 

Code highlight:

Figuring out the button placement was tricky at first. I used percentages for the positions and sizes instead of fixed numbers so the buttons would stay in the right spot no matter the screen size. It took a lot of trial and error to line them up perfectly with the image, and small changes made a big difference. Even though it was frustrating, I learned how to make my game flexible and work well on different devices.

// buttons placement based on background image
const buttonsPct = {
  unicorn: { xPct: 0.16, yPct: 0.23, wPct: 0.30, hPct: 0.26 }, // top-left
  animal:  { xPct: 0.54, yPct: 0.23, wPct: 0.30, hPct: 0.26 }, // top-right
  funky:   { xPct: 0.16, yPct: 0.60, wPct: 0.30, hPct: 0.26 }, // bottom-left
  witch:   { xPct: 0.54, yPct: 0.60, wPct: 0.30, hPct: 0.26 }  // bottom-right
};
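
As a usage illustration, a hit test built on these percentages could look like this (the helper name is a placeholder, not the exact code from my game):

// convert a button's percentages to pixels and test the mouse against it
function hitButton(name) {
  const b = buttonsPct[name];
  return mouseX > b.xPct * width  && mouseX < (b.xPct + b.wPct) * width &&
         mouseY > b.yPct * height && mouseY < (b.yPct + b.hPct) * height;
}

function mousePressed() {
  if (hitButton('unicorn')) {
    // start Unicorn Land
  }
}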

 

Week 5 – Midterm Progress

So for my midterm, I want to create some form of art using machine learning. I want a visualisation of biology: I want to show a flower and keep zooming in on it, down to the atomic level. I want to use the ML model to detect a pinching motion, and this would trigger the page change.

index.html -> leaf.html -> cell.html -> atom.html

Firstly, I wanted to focus on the ML model and getting the motion detected. I used the ‘Hand Pose Detection with ml5.js’ video from The Coding Train as a foundation, and changed the parameters to detect just the right hand’s index finger and thumb.
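
A stripped-down version of that detection, following the ml5.js handPose setup shown in the video (the keypoint names match ml5’s examples, but the pinch threshold of 30 pixels is my own guess):

let video, handPose;
let hands = [];

function preload() {
  handPose = ml5.handPose(); // load the ml5 hand-tracking model
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, results => { hands = results; });
}

function draw() {
  image(video, 0, 0);
  for (const hand of hands) {
    if (hand.handedness !== 'Right') continue; // right hand only
    const thumb = hand.keypoints.find(k => k.name === 'thumb_tip');
    const index = hand.keypoints.find(k => k.name === 'index_finger_tip');
    if (thumb && index && dist(thumb.x, thumb.y, index.x, index.y) < 30) {
      // pinch detected: trigger the page change here,
      // e.g. window.location.href = 'leaf.html';
    }
  }
}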

Currently, I have incredibly basic images for the 4 pages, and I will work on making them more aesthetic. The last page applies OOP principles in the several atoms and their spinning electrons.

I also want to add some sort of noise to the first 3 images to represent what environment you could find them in. I am also thinking of making the transition between them represent some sort of medium between the 2 images.

 

class Atom {
  constructor(x, y, rotationSpeed = 0.02, innerOrbitRadius = 40, outerOrbitRadius = 60) {
    this.x = x;
    this.y = y;
    this.rotationSpeed = rotationSpeed;
    this.innerOrbitRadius = innerOrbitRadius;
    this.outerOrbitRadius = outerOrbitRadius;
    this.rotation = 0;
    this.nucleusSize = 20;
    this.electronSize = 8;
    this.outerElectronSize = 6;
  }

  // (display and update methods for the spinning electrons not shown here)
}

 

Week 5 – Midterm Update

This week I made progress on my midterm project, which is an interactive webcam artwork inspired by Euphoria’s psychedelic aesthetics. The concept is to let users take a live video feed of themselves and transform it into surreal, hallucination-like visuals. By pressing different keys, the user can switch between effects like color-swapping, glowing “UV tears,” and trippy, pulsating backgrounds.

Concept

The idea is to create a personal, immersive experience where the viewer sees themselves transformed under digital hallucinations. The interactivity allows them to “step into” a psychedelic portrait that reacts to their input.

Design & User Interaction

  • The webcam is the canvas: it captures the user in real-time.

  • Pressing different keys (1–4) switches between effects.

  • Each effect is designed to distort colors, overlay glowing shapes, or alter the background to give a hallucinatory vibe.

Sketch

Code Design

I structured the code into separate functions (psychedelicColors(), uvTears(), hallucinationBackground()) so each effect is modular. The keyPressed() function makes it easy to navigate between modes, and the modular design means I can keep adding effects without breaking old ones.
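
The skeleton looks roughly like this; the effect function names are the real ones from my sketch, but the bodies below are simplified stand-ins:

let video;
let mode = 1; // 1 = plain feed; 2–4 = effects

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
}

function keyPressed() {
  if (key >= '1' && key <= '4') mode = int(key); // keys 1–4 pick a mode
}

function draw() {
  image(video, 0, 0);
  if (mode === 2) psychedelicColors();
  else if (mode === 3) uvTears();
  else if (mode === 4) hallucinationBackground();
}

// stand-in body: swap the red and blue channels of the whole canvas
function psychedelicColors() {
  loadPixels();
  for (let i = 0; i < pixels.length; i += 4) {
    const r = pixels[i];
    pixels[i] = pixels[i + 2];
    pixels[i + 2] = r;
  }
  updatePixels();
}

function uvTears() { /* glowing tear overlays in the real sketch */ }
function hallucinationBackground() { /* pulsating background */ }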

Risk / Challenge

The most intimidating part of this project is achieving truly psychedelic visuals that feel organic rather than random. Messing with pixel arrays and shaders can be complex. To reduce this risk, I started small: I tested pixel manipulation with basic color channel swaps and added some randomized “tear” overlays. Now that the structure works, I can safely experiment with shaders and sound reactivity in later iterations.

Next Steps

  • Experiment with custom GLSL shaders for more advanced visuals.

  • Add sound reactivity so the effects pulse with music.

  • Enhance backgrounds with layered fractals or noise-based distortion.

Week 5 Midterm Progress

Concept

For my midterm project, I came up with this dining hall idea at the last minute. I had originally been inspired by music interactivity in p5.js and considered continuing with my earlier idea of a meditation game. But while eating lunch, I came up with a new idea that felt both playful and relevant to my experience here at NYUAD. So this week I mostly worked on replanning my idea and preparing assets.

As a visiting student from the New York campus, I was used to the dining hall’s pre-made meals. But at NYUAD, the on-demand menus were at first a little overwhelming. Without pictures, I often had no idea what I had ordered (especially with Arabic dishes I wasn’t familiar with), and I even found myself pulling out a calculator to check how much I had ordered and how much I had left in my meal plan. Counters like All Day Breakfast felt especially confusing.

So my concept is to digitalize the experience of eating at NYUAD’s D2 All Day Breakfast counter. The project will let users visualize the ordering process, making it more interactive and hopefully reducing the friction that comes with navigating the real-life menu.

User Interaction

Planned Scenes (prototype):

1. Entering the A LA BRASA counter and tapping into the menu

2. Picking up the clamp to get food from the grill to the plate

3. Scanning food on the plate at the cashier’s scanner

4. Paying with coins in the cashier tray (display receipt?)

5. Eating!!

6. Burping to finish the meal

 

Assets:

Audio:

Dining hall ambient background

Cashier scanner beep

Cash register “kaching”

Burp sound

Yumyum sound

 

Pixelated images:

A LA BRASA counter background

All Day Breakfast menu

Grill plate

Clamp

Plate

Cashier scanner

Cashier with coins tray

Coins (D5, D3, D2, D1, D0.5, D0.25)

Fork

 

Pixel art food items:

Avocado fried egg toast

Avocado toast

French toast

Fried egg

Scrambled egg

Plain omelet

Cheese omelet

Mixed vegetable omelet

Tofu omelet

Hash brown

Chicken sausage

Beef bacon

Turkey bacon

Classic pancake

Coconut banana pancake

Small bowl salad

 

The Most Frightening Part & How I’m Managing It

The biggest challenge I anticipate is gathering and aligning all these assets into a coherent game within the midterm timeframe. Real-life food images can be messy and hard to unify visually. To reduce this risk, I’ve decided to make everything in pixel art style. Not only does this match the “breakfast game” aesthetic, but it also makes it much easier to align items consistently.

Since Professor Mang mentioned we can use AI to help generate assets, I’ve been experimenting with transforming photos of my own plates and my friends’ meals into pixelated versions. This approach makes asset creation more manageable and ensures I’ll be able to integrate everything smoothly into the game.

 

Week 5: Reading Response

The part that stopped me was Suicide Box (1996), a camera pointed at the Golden Gate Bridge, quietly counting every time someone jumped. It sounds blunt, almost cold, yet I like the idea behind it. The artists (Natalie Jeremijenko and Kate Rich) flipped surveillance on its head: instead of policing people, the camera bore witness to a tragedy that the official numbers under-reported. I even looked it up afterward and found more debate and follow-up writing. Some people doubt the footage, others question the ethics of recording suicides. That tension actually makes the piece stronger for me; it shows how art can force uncomfortable truths into view.

 

What struck me next was how the essay treats technology as something physical and playful. Levin keeps pointing out that success often comes from the scene you build, not just the code you write: light a wall evenly, add reflective tape, adjust the lens. I like that attitude. It feels more like setting a stage than crunching math, and it makes computer vision sound approachable, even fun, for artists and students. The student project LimboTime, for example, came together in one afternoon with a webcam and a bright background. That shows how the simplest setups can spark creative interaction.

 

Overall, reading this made me want to experiment myself. The mix of raw data, social urgency, and poetic framing in Suicide Box shows how art and code can meet to notice what society tries not to see and maybe, slowly, help change it.