Week 6 Midterm project progress

My midterm project, The Polyglot Galaxy, is an interactive generative text artwork that visualizes multilingual greetings as floating stars in a galaxy environment. The project expands my Week 5 text generator into a more immersive interactive media system that integrates text, sound, animation, state-based interaction, and computer vision.

Each time the user clicks on the canvas, a greeting phrase from a different language is stamped onto the screen. Over time, these phrases accumulate and form a constellation-like galaxy. Within the frame, four different voices are displayed. The visual aesthetic is inspired by space, glow, and floating motion, representing languages as stars in a shared universe.

For Week 6, I introduced webcam interaction as a form of real-time input. Instead of functioning only as a background element, the camera actively influences the visual behavior of the system. The brightness detected from the live webcam feed controls the twinkling speed and intensity of the text objects. This transforms the artwork from a static generative system into an embodied interactive experience where the audience’s movement directly affects the visuals.

 

function updateCamBrightness() {
  cam.loadPixels();
  let sum = 0;
  // Sample every 10th pixel (a stride of 40 bytes through the RGBA array)
  // to keep the per-frame cost low.
  for (let i = 0; i < cam.pixels.length; i += 40) {
    let r = cam.pixels[i];
    let g = cam.pixels[i + 1];
    let b = cam.pixels[i + 2];
    sum += (r + g + b) / 3; // average the three color channels
  }
  // Divide by the number of samples taken, not the full pixel count.
  camBrightness = sum / (cam.pixels.length / 40);
}
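Once camBrightness is computed, it still has to be turned into a twinkle rate. Below is a minimal sketch of that remapping as a pure function; the name `twinkleSpeedFor` and the 0.5–3.0 output range are placeholders of mine, not the project's actual values (inside a p5 sketch, `map()` plus `constrain()` would do the same job).

```javascript
// Remap average camera brightness (roughly 0-255) to a twinkle speed.
// The 0.5-3.0 output range is an illustrative assumption, not the
// project's real tuning values.
function twinkleSpeedFor(camBrightness) {
  const clamped = Math.min(255, Math.max(0, camBrightness));
  return 0.5 + (clamped / 255) * (3.0 - 0.5); // linear map, like p5's map()
}
```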

 

I am particularly proud of integrating computer vision into a generative art system in a simple yet meaningful way. Rather than implementing complex face detection (which would be computationally heavy and technically involved), I chose brightness-based interaction. This decision balances technical feasibility, performance efficiency, and conceptual clarity.

I am also proud of the object-oriented structure of my code. The GreetingText class encapsulates the floating animation, glow effects, blinking, and camera-reactive twinkling within a reusable system. This keeps the project scalable and organized as more text objects are generated over time.
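To make that concrete, here is a hypothetical skeleton of such a class. The real GreetingText surely has more fields (glow color, drift velocity, language label, draw logic); everything below apart from the class name is my own illustration, not the project's code.

```javascript
// Hypothetical skeleton of a GreetingText-style class; only the
// camera-reactive twinkle update is sketched here.
class GreetingText {
  constructor(word, x, y) {
    this.word = word;
    this.x = x;
    this.y = y;
    this.alpha = 255; // current glow/opacity
    this.phase = 0;   // twinkle phase accumulator
  }
  // Advance the twinkle; a higher speed means faster oscillation,
  // so brighter webcam input makes the stars flicker faster.
  update(twinkleSpeed) {
    this.phase += twinkleSpeed;
    this.alpha = 128 + 127 * Math.sin(this.phase); // stays within 1-255
  }
}
```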

One major challenge I encountered was browser permissions and related webcam issues. In some environments, the camera feed simply doesn't function unless the sketch runs in a secure (HTTPS) context or until the user grants camera permission. I addressed this by using the webcam primarily as a data input rather than relying on it as a visible visual component.
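That defensive approach can be sketched as a guard around the pixel read, so the sketch keeps running even if permission is denied. The function name `safeBrightness` and the neutral fallback value 127 are illustrative assumptions of mine, not the project's actual code.

```javascript
// Defensive brightness read: if the camera feed is unavailable
// (permission denied, non-HTTPS page, feed not yet loaded), return a
// neutral fallback so the visuals keep animating.
function safeBrightness(cam, fallback = 127) {
  if (!cam || !cam.pixels || cam.pixels.length === 0) return fallback;
  let sum = 0;
  for (let i = 0; i < cam.pixels.length; i += 4) {
    // average R, G, B of each RGBA pixel
    sum += (cam.pixels[i] + cam.pixels[i + 1] + cam.pixels[i + 2]) / 3;
  }
  return sum / (cam.pixels.length / 4);
}
```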

For improvements: since after the midterm we will be focusing on more hardware-related work, I would like to extend the camera interaction with simple gestures. Swiping left would display a phrase in one language, swiping right would display a phrase in another language, and swiping up would display yet another phrase in a third language.

References

- Course Lecture Slides: Week 6 – Computer Vision & DOM (Introduction to Interactive Media)
- Daniel Shiffman, p5.js Video and Pixels Tutorials
- p5.js Documentation: createCapture(VIDEO) and pixel processing
- Creative Coding approaches to camera-based interaction in interactive media

Week 5 — Reading Response

Golan Levin’s essay on computer vision really hammered home the fact that a camera isn’t an eye — it’s just a sensor feeding a math equation. Having done CS IBDP HL, I was already pretty familiar with how control systems work and how cameras process input, so honestly, none of the technical side was surprising to me. I’ve spent enough time looking at how sensors translate the physical world into data arrays to know that a computer doesn’t “understand” what it’s looking at; it’s just running basic tricks like frame differencing to see what moved or background subtraction to see what’s new in the shot. It doesn’t see a “person”; it just sees a bunch of pixels that changed from gray to brown. This makes the computer incredibly easy to trip up with something as simple as a flickering light or a shirt that’s the same color as the wall.

Because computers are so literal and limited, artists have to do a lot of the heavy lifting physically before the code even runs. Levin points out that things like controlled lighting or high-contrast backgrounds aren’t just aesthetic choices — they’re necessary “cheats” to help the computer distinguish a human from the floor. I think about projects like Myron Krueger’s backlit walls, where the environment is specifically engineered to give the computer a perfect silhouette. It makes me realize that successful interactive art isn’t just about writing clever software; it’s about designing a space that “explains” the world to the camera so the algorithm doesn’t get confused by visual noise.

There’s also a weird tension in using these tools because, at their core, they’re surveillance technologies. Levin mentions works like Sorting Daemon or the Suicide Box to show how artists can flip the script on military-grade tracking. It’s a bit uncomfortable to realize that the same tech making a digital shadow look cool in a gallery is what’s used to profile people in public spaces. It makes me wonder if we can ever fully enjoy “interactive” media without that nagging feeling of being watched and categorized. It’s a reminder that while the interaction feels like magic, the data being pulled is never really neutral — it’s always being filtered through whatever narrow definitions the programmer chose.

Midterm Progress

The Concept

For my game, I decided to go for a Star Wars theme, built around Anakin Skywalker's transformation into Darth Vader. The game is an interactive choice-making experience: you try to help Anakin resist the dark side through the choices you make. Depending on those choices, Anakin will either remain a Jedi or turn to the Dark Side.

The Design

The game’s color palette will consist of red, orange, brown, and black. It’s mostly going to be geometric with minimal curves. The game will open with a start page showing Anakin Skywalker and Obi-Wan Kenobi against a Mustafar background. Once the user clicks the start button, an instruction block will pop up explaining the game and its elements of choice and consequence. Users cannot control the motion of the characters, only their choices.

There will be music in the background from the Star Wars franchise. As for the code’s design, the game will rely on Object Oriented Programming for the characters’ design, relationships, dialogue, and the choice boxes. I will also be relying on several if statements and nested loops, since the story is heavily condition-based.

What I Am Afraid Of:

My biggest concern is that connecting particular choices with their events and consequences goes wrong: the code doesn’t run properly, the choices don’t connect, and I am left with an inconsistent story and a game that doesn’t make sense. To manage this, I’m going to work on the different parts separately and organize certain elements into classes to reduce the code's complexity. I will also write explicit comments throughout the code to keep everything organized, so there is no confusion between events and their consequences.
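One way to keep choices and consequences from drifting apart is to describe the branching as data rather than scattered if statements: each scene lists its choices and the scene each choice leads to, so every path can be checked at a glance. The scene names below are purely illustrative, not the actual script.

```javascript
// Data-driven branching: each scene maps choice names to the next scene.
// Scene names and texts here are illustrative sketches only.
const scenes = {
  start: { text: "Obi-Wan offers guidance.", choices: { listen: "jedi", ignore: "anger" } },
  anger: { text: "Anakin's fear grows.",     choices: { meditate: "jedi", strike: "vader" } },
  jedi:  { text: "Anakin remains a Jedi.",   choices: {} },
  vader: { text: "The dark side wins.",      choices: {} },
};

// Follow a sequence of choices from the start scene and
// return the key of the scene where the run ends.
function playThrough(choiceList) {
  let current = "start";
  for (const choice of choiceList) {
    const next = scenes[current].choices[choice];
    if (next === undefined) break; // invalid choice: the run stops here
    current = next;
  }
  return current;
}
```

Because the whole story lives in one object, an inconsistent branch (a choice pointing at a scene that doesn't exist) can be caught by a simple loop over `scenes` instead of by playtesting every path.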

Visuals

AI was used to generate the visuals.

 

Week 5 Reading Analysis

Unlike human vision, computer vision is not as malleable or flexible. It is built on algorithms, so when an error occurs it cannot adapt quickly; it lacks human adaptability and instead needs to be retrained. I believe that in order to translate our physical world for computer vision algorithms, we must map out the particular characteristics that make things stand out, much like how we would explain something to a child, or how a child’s painting captures the most identifying aspects of our physical world. Since computer vision has limitations and requires algorithmic training, you would need to train the computer extensively, in extreme detail, to read what you want it to read. In addition, you would want to make sure it makes no errors, especially since surveillance and privacy are extremely sensitive topics in the digital world.

Assignment 5: Midterm Progress

CONCEPT

For my midterm project, I am planning to create an active, time-management puzzle game where the player must balance quick math with physical character movement. The goal is to hang exactly 10 pieces of clothing on a clothesline before the 1:30 timer runs out, without exceeding the line's strict weight limit: the line can carry only up to 25 grams.

The player will not know in advance which piece of clothing they will get from the laundered clothes basket. Each piece has a different weight: hoodies weigh 4 grams, pants 3 grams, shirts and shorts 2 grams, and tank tops 1 gram. If the player is unsure, they can temporarily place a piece of clothing in the “for later” basket. However, this basket can only hold up to 6 pieces of clothing, and once items are placed there, they cannot be accessed again until later.

I also plan to integrate a rain effect in the middle of the game, adding another layer of challenge. At a certain point, the player will receive a 10-second warning to pick up the laundered basket and move to a shaded area somewhere on the screen. If they fail to do so, every piece of clothing will gain an extra +1 gram. If the total weight on the line exceeds 25 grams, the line will snap, and the player will have to start over.
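The weight rules above are simple enough to capture in a couple of small helpers, which also makes them easy to test in isolation before the full game exists. The function and item names below are my own sketch, not the final game code.

```javascript
// Clothing weights in grams, as described above; the rain penalty
// adds +1g to every item still on the line.
const WEIGHTS = { hoodie: 4, pants: 3, shirt: 2, shorts: 2, tanktop: 1 };
const LINE_LIMIT = 25; // grams the clothesline can hold

// Total weight currently on the line.
function lineWeight(items, rainPenalty = false) {
  return items.reduce(
    (total, item) => total + WEIGHTS[item] + (rainPenalty ? 1 : 0),
    0
  );
}

// The line snaps when the total exceeds the limit.
function lineSnaps(items, rainPenalty = false) {
  return lineWeight(items, rainPenalty) > LINE_LIMIT;
}
```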

DESIGN 

I first designed my concept in Canva so I could actually see what the game would look like. I started with the intro page, then the instructions page, and finally worked on the game UI itself. I found a background on Pinterest and added some extra elements from Canva to make it look more complete. I struggled a bit at first with how to design the game UI because I wasn’t sure how everything would fit together, but after trying different layouts, I finally got it to work. I also asked Claude to help me generate different game maps, which gave me a bunch of ideas and styles. Looking at those really helped me figure out what I liked and guided me in deciding the final layout for the game.

UNCERTAIN PART

One of the uncertainties in this game is the rain part. Without the rain, players could win more easily, which is why I want to include it to add more challenge. However, the rain event is complicated to implement because it interrupts normal gameplay for about 40 to 50 seconds and requires multiple conditions to work at the same time. During this event, the player needs to drag the basket and hold it while walking, but basket movement is only allowed within a 10-second window. At the same time, the system also needs to add +3g to the clothesline weight. The most confusing part for me is adding +1g to all remaining basket items if they are not placed in the shade.

I think this part requires careful state management, such as handling dragging, carrying, placing, walking, and temporarily locking or allowing actions, along with proper timing so that nothing breaks, overlaps, or conflicts during the rain event.
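That careful state management can be made explicit with a small transition table: each state lists the only states it may move to, and illegal moves are simply ignored, so dragging, carrying, and placing can never overlap or conflict. The state names below are my own naming sketch, not the final implementation.

```javascript
// Allowed transitions for the rain event. Anything not listed is rejected,
// which is what keeps actions from overlapping mid-event.
// State names are illustrative assumptions.
const TRANSITIONS = {
  playing:     ["rainWarning"],
  rainWarning: ["carrying", "penalized"], // basket picked up, or 10s timer ran out
  carrying:    ["sheltered", "penalized"],
  sheltered:   ["playing"],               // rain ends, back to normal play
  penalized:   ["playing"],               // +1g applied, back to normal play
};

// Attempt a transition; stay in the current state if the move is illegal.
function nextState(current, target) {
  const allowed = TRANSITIONS[current] || [];
  return allowed.includes(target) ? target : current;
}
```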

REDUCING RISK

To reduce the risk, the controls during the rain event will be simplified. The player will press the spacebar to pick up the laundered basket and use the arrow keys to move the character. Once the rain starts, +3g will be added to the clothesline weight, making it unavailable, so the player will no longer be able to hang clothes. Instead, the player just needs to move to the shaded area with the basket and wait there until the rain stops.

The part I am still unsure about is how to add +1g to each clothing item if they are not placed in the shade. If this becomes too complicated to implement, I will simplify the mechanic. Instead of adding +1g to each clothing item inside the basket, I will just add an extra +2g to the total weight. If that still does not work well, another option is to change the setup so that the laundered clothes are already placed in the shaded area. In that case, the player’s task would simply be to carry the clothes from the shaded area to the clothesline and hang them once the rain stops.

week 5 reading response

human vision and computer vision are actually quite similar in a way. humans are especially attuned to detecting even the most subliminal changes in their physical environment, be it sound or light or movement, etc.

to illustrate just how important change is for our “vision”, next time you’re in a very dark room (when you go to bed tonight), try to stare at the far corner without blinking or moving your eyes. you’ll begin to notice that, gradually, a darkness creeps up from your peripherals and slowly makes its way towards the centerpoint of your vision. i thought this was so cool when i first discovered it, felt like i was falling into a void. this happens because the rods in your eyes (which are attuned to both light and movement) and the cones (which are attuned to color) are almost completely deprived of stimulation. your brain figures you don’t need your vision if there’s nothing to detect.

this is also the reason why we are constantly moving our eyes. ever notice the little micromovements your eyes are always making when your attention is focused externally? they need the movement to keep them stimulated enough to see. and also, ever notice how, when there’s a noise that’s been going on for a long time, you only notice it when it suddenly stops? the brain kind of filters out stimuli that are continuous and unchanging. it’s looking for change, just like computer vision does.

it’s important to realize how inseparable technology and art both are from human biology, it’s all modeled off of our understanding of ourselves. the farther we progress in the fields of biology, medicine, neuroscience, and psychology, the greater capacity we have for advancements and inspiration in ai, computers, architecture, and, by extension, interactive media art.

week 3

  • concept:

simple, i wanted to create a cute blinking birdie staring at some trippy stars and contemplating things. what on earth could he possibly be pondering about? i fear we will never know.

  • A highlight of some code that i’m particularly proud of:

i used while() and if() statements to make the background animation. it’s quite literally just a bunch of thin white concentric circle patterns bouncing off the edges of the canvas, overlapping with each other and a static one in the middle. pretty neat.

//sky pattern
 rectMode(CENTER);
 strokeWeight(0.4);
 stroke(255); // white
 noFill();

 while (sky > 1 && sky < 900) {
   circle(200, 200, 1);
   circle(200, 200, sky);
   sky += 10;
 }

 stroke(0);

 //pattern 1
 circle(x, y, 10); // p5's circle() is lowercase and takes (x, y, diameter)
 if (y > 375 || y < 25) {
   speedY = speedY * -1;
 }
 if (x > 375 || x < 25) {
   speedX = speedX * -1;
 }
 x = x + speedX;
 y = y + speedY;

 // pattern 2
 circle(a, b, 10);

 if (b > 375 || b < 25) {
   speedB = speedB * -1;
 }
 if (a > 375 || a < 25) {
   speedA = speedA * -1;
 }
 a = a + speedA;
 b = b + speedB;

i also used a randomGaussian() function to have the birdie blink at random intervals.

function lilGuy() {
  push();
  // pick a gaussian random value each frame; a narrow band of
  // values triggers a blink on the next frame
  let r = randomGaussian(50, 150);
  stroke(0);
  strokeWeight(1);
  translate(90, 0);
  fill(255);
  arc(195, 355.5, 80, 160, 270, 0, PIE); // body outline
  circle(195, 265, 39);                  // head outline
  arc(194, 280, 55, 25, 180, 270, PIE);  // beak
  strokeWeight(0);
  arc(195.5, 360.5, 80, 170, 270, 0);    // body fill
  circle(195, 265, 38);                  // head fill
  strokeWeight(1.5);
  fill(255);
  ellipse(192, 267, w, h); // eye: h set below, shrinks to 1 on blink frames
  if (r < 51 && r > 45) {
    h = 1;  // blink
  } else {
    h = 17; // eye open
  }
  pop();
}
  • Reflection and ideas for future work or improvements:

if i had more time, i’d definitely add an interactive element, maybe some dialogue options so you can chat with the strange bird and get to the bottom of what he’s been musing about all mysteriously.

week 5 – midterm project

project concept:

for my midterm project, i’m planning to create a simple pixel rpg-style demo. the demo will start with an interactive cutscene, then the player will be able to navigate a room / setting.  the details are very vague, but i’ll figure out what works as i make progress.

design:

my visual inspirations for this project are the games Undertale (particularly the way the sprites and backgrounds are designed), and Sally Face, which i’m using as a reference on how to visually incorporate the dialogue interactions as well as the vibe i’m going for.

i drew the sprites via pixelart.com (honestly i spent more time on them than the coding itself…), and the background music is an 8-bit cover i found of a deftones song, which i thought sounded pretty awesome.  going forward, i intend to draw a pixel background with some interactive elements for the player to navigate. i want the overall experience to look eerie and sickly (which is why the sprite i made may seem a little jaundiced).

most frightening part and how i tackled it:

having to animate a sprite was definitely the most intimidating part for me. to start off, i reread the slides and really studied the examples provided. problem was, i wanted my sprite to be able to move while the arrow keys are held down, unlike the example in the slides where you have to spam the keys rapidly. to figure out how to achieve this, i did some googling and scrounged around for (mostly useless) advice on the internet (obviously including ai overview), and ultimately was referred back to the keyIsDown() reference page on p5.js. however, in trying to incorporate what i was learning, the code got extremely messy and buggy. all sorts of horrendous things happened to my little sprite – i cannot bear to speak of it. eventually, i figured things out myself through trial and error (like always), and, while i relied heavily on my references to keep me on the right track, all the code is written by me. (i shall add comments later when i continue to work on the project.)

function draw() {
  background(0);
  // move continuously while an arrow key is held,
  // cycling through the 4 walk frames for that direction
  if (keyIsDown(DOWN_ARROW)) {
    direction = 0;
    y += speed;
    step = (step + 1) % 4;
  } else if (keyIsDown(LEFT_ARROW)) {
    direction = 2;
    x -= speed;
    step = (step + 1) % 4;
  } else if (keyIsDown(UP_ARROW)) {
    direction = 1;
    y -= speed;
    step = (step + 1) % 4;
  } else if (keyIsDown(RIGHT_ARROW)) {
    direction = 3;
    x += speed;
    step = (step + 1) % 4;
  } else {
    step = 0; // no key held: standing frame
  }
  // clamp the sprite inside the canvas edges
  if (y <= 0) {
    y = 0;
  }
  if (x <= 0) {
    x = 0;
  }
  if (y >= windowHeight - 126) {
    y = windowHeight - 126;
  }
  if (x >= windowWidth - 60) {
    x = windowWidth - 60;
  }
  image(sprites[direction][step], x, y, 70, 147);
}

references:

as mentioned earlier, ai was used in the sense that it popped up and tried to provide answers to the questions i googled. i used it as a tool to try and understand how keyIsDown() works when i was experiencing bugs, analyzed the (very simple) examples it provided, then tried to implement what i learned into my code. i did not ask it to fix my bugs or provide me with code.

https://p5js.org/reference/p5/keyIsDown/

https://drive.google.com/file/d/18ZMq9BB1l5XhMx5OfzNciU2OJQbUKvg3/view?usp=sharing

Week 5 Reading

I found the emphasis on accessibility compelling; it resonates with me that groundbreaking ideas often emerge when constraints like hardware, resources, or expertise are lowered. Still, I question whether the tone of inevitability (“recent improvements make artistic experimentation a reality”) oversimplifies deeper tensions: access to hardware, funding for arts-tech projects, and the proliferation of open-source culture are not universally available. The examples mentioned, like Videoplace and Sorting Daemon, foreground surveillance as a recurring motif in art and highlight a critical stance toward automated perception. This aligns with my belief that technology should be examined through ethical, social, and political lenses. However, I would like the authors to more explicitly address issues of consent, data provenance, and the potential for misrepresentation when computer vision projects interpret publics.

The reading has made me rethink my own ideas about visibility and control in the realm of interactive art. It shows how tracking and head- or gesture-based interactions can produce intimate, embodied experiences, yet it also raises questions about privacy: who is being watched, and for what purpose? The use of surveillance-tinged artworks like Standards and Double Standards and Sorting Daemon highlights a paradox: technology that augments artistic expression can simultaneously normalize and critique surveillance. I wonder how far artists should go in personalizing tracking. Are there boundaries that preserve participants’ agency without stifling experimentation? I want us to think: How do we balance artistic intent with privacy concerns in real-time, participatory installations? What concrete techniques feel most useful for a new programmer without compromising ethical considerations? And how can computer vision be used to create more reflexive, consent-aware audience engagement rather than passive observation?

MidTerm

Concept

I have been looking at portfolio websites for quite a while. They all looked the same to me: boxes of text here and there. I had this idea long before to create a 3D version of one, like a human figure represented in the form of a hologram. It is a 3D figure, rotating. The interaction is done basically by scrolling: if the mouse is over the body and we scroll up, it will zoom in on the body, and the attributes will start appearing on the periphery like neon signs. This might not be a professional portfolio, but that was the idea I had. The intended experience is tech-savvy and futuristic. I would want to add more mini interactions into it.

Design

For the figure, I wanted to replicate my self-portrait, but without any hardcoded values other than the scaling ones. Initially, I wanted to be able to put in custom coordinates so that I could draw it anywhere. But to make it rotate, I later used rotate(). With it, I didn’t have to worry about the overall management of the coordinates.
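For intuition, rotate() transforms the whole coordinate system: every point of the figure is rotated about the origin by the same angle, which is why the figure's own coordinates never need to change. A minimal sketch of the math p5 applies under the hood:

```javascript
// Rotate a single point about the origin by `angle` radians.
// This is the 2D rotation p5's rotate() applies to everything drawn after it.
function rotatePoint(x, y, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return { x: x * c - y * s, y: x * s + y * c };
}
```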

the sketch

 

The hardest part so far was creating the curve of the neck collar. It is not visible right now in the sketch because I made the whole neck black. I will extend from it when I add more attributes toward the final product.

The part I am most proud of is making it rotate while everything remains intact. Initially I wanted the neck to stay connected, but during some trial and error I noticed it seems more lively if it is a bit apart, and I kept it like that. To my understanding, the next challenging part will be the navigation from the whole sketch to its individual parts along with their attributes.

As for the usage of AI, I used it to draw the curve of the neck collar. I couldn’t understand how the control points work in the curve() function, and it was hard for me to do all the calculations when I didn’t understand the function’s parameters. So I asked GPT to write it for me line by line and put comments on each line.

    let collarY = neckY + neckLength; // directly under neck

    curve(
        neckX - 20, collarY - 70,            // control point
        neckX + 2, collarY - 10,             // start
        neckX + neckWidth - 2, collarY - 10, // end
        neckX + neckWidth + 20, collarY - 70 // control point
    );
}
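For reference, p5's curve() draws a Catmull-Rom spline: the drawn curve runs between the two middle points, while the outer two pairs only shape the tangents, which is why the snippet passes four coordinate pairs. This is a sketch of the one-dimensional evaluation (applied separately to the x and y coordinates):

```javascript
// Evaluate a Catmull-Rom spline segment at parameter t in [0, 1].
// p1 and p2 are the points the curve passes through; p0 and p3 are the
// control points that only influence the tangents (as in p5's curve()).
function catmullRom(t, p0, p1, p2, p3) {
  const t2 = t * t;
  const t3 = t2 * t;
  return 0.5 * (
    2 * p1 +
    (-p0 + p2) * t +
    (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
    (-p0 + 3 * p1 - 3 * p2 + p3) * t3
  );
}
```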