Week 5 – Midterm Draft 1 Progress

Concept + Design

For this project, I wanted something mystical yet down to earth. This is why I combined “ordinary” objects, such as trees and flowers, with magical elements, such as the fantasy-like structures in the background and the blue hues.

Although I am still deciding what the interaction for viewers will be, I have narrowed my main ideas down to two.

  1. The roses will display different colors and play different musical tracks when the viewer clicks on them. For example, one rose will change color after each click, another will change the music every time it is clicked, and the third might give an inspirational message after every click. This will make for an entertaining, colorful interaction that matches the tone of the sketch.
  2. The second option, while less playful, focuses on the visual aesthetics: the interaction will rely on a function that resembles a magnifying glass, so viewers can take a closer look at all the objects on display, and depending on where in the canvas they move the mouse, they will hear a different musical track.
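A rough sketch of how the second idea's audio switching could work. The regionIndex helper, the tracks array, and currentTrack are placeholder names for illustration, not code from my sketch yet:

```javascript
// Pure helper: maps a mouse position to a track index (0-3),
// one track per quadrant of a w x h canvas.
function regionIndex(mx, my, w, h) {
  const col = mx < w / 2 ? 0 : 1;
  const row = my < h / 2 ? 0 : 1;
  return row * 2 + col;
}

// In the p5 sketch (assumed setup: tracks[] holds loaded p5.Sound
// objects, currentTrack holds the playing index), draw() could do:
//
// function draw() {
//   const i = regionIndex(mouseX, mouseY, width, height);
//   if (i !== currentTrack) {
//     tracks[currentTrack].stop();
//     tracks[i].loop();
//     currentTrack = i;
//   }
// }
```

Keeping the region math in its own small function makes it easy to test separately from the sound playback.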
Most Frightening/Complex Part

Although I haven’t written code that can detect, prevent, or avoid every possible glitch, one thing I did write to keep the image in my canvas from having issues is the preload() function. It makes sure the image loads fully before the sketch tries to display it.

// Edited Image PNG 
let palace;
function preload(){
   palace = loadImage('palace.PNG');
}

Another function I struggled with, but eventually got working, was the custom shape, which I initially built in a separate sketch to define the shape of the tree. Once I realized I would need to move this shape around in my final sketch, I wrapped it in a drawCustomShape() function and used translate() so I could place it at my desired position on the canvas.

function draw() {
  background(51);

  // draw the tree twice at different positions
  drawCustomShape(450, 230);
  drawCustomShape(-50, 230);
}


// function of custom shape (tree) + translation of position at x and y

function drawCustomShape(x, y) {
  push();
  translate(x, y); // move the origin to the desired position

  // trunk
  noStroke();
  fill("rgb(235,233,233)");
  beginShape();
  vertex(140, 95);
  vertex(140, 250);
  vertex(100, 280);
  vertex(225, 280);
  vertex(190, 250);
  vertex(190, 95);
  endShape(CLOSE);

  // foliage
  fill("rgb(32,32,228)");
  ellipse(120, 90, 170, 120);
  ellipse(180, 98, 130, 110);
  ellipse(150, 45, 140, 160);
  ellipse(200, 55, 150, 120);

  pop();
}

 

Embedded sketch

 

Reflection and ideas for future work or improvements

My next steps are to find a way to display text that introduces the context and instructions for the interaction. I also plan to decide what that interaction will be based on the feedback I receive, and to begin writing the necessary code in a separate sketch before applying it to the final one. In the meantime, I will also search for the music pieces I will be using and edit them accordingly.

Furthermore, I need to plan how to include the option to start a new session without restarting the sketch. Any kind of feedback will be highly appreciated.
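One possible way to support a fresh session without reloading the page is to keep all mutable state in a single object and rebuild it from one function. The names here are illustrative, not from my sketch:

```javascript
// Build the starting state for a session in one place, so a
// restart is just "replace the state object".
function freshState() {
  return { score: 0, clicks: 0, currentTrack: 0, started: false };
}

let state = freshState();

// In p5, a key press or an on-screen button could call this:
function restartSession() {
  state = freshState();
  // loop();  // resume the draw loop if it was paused with noLoop()
}
```

Because nothing outside `state` changes between sessions, there is no leftover data to clean up by hand.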

Week 5: Reading Response

I think both computer vision and human vision have benefits and downsides in how they comprehend the reality and meaning of the world. Computer vision relies entirely on the quality of its algorithms and on environmental conditions, which often makes it fail to grasp the meaning of what it records. Human vision, on the other hand, lets us instinctively make sense of the world through our eyes. When it comes to emotions, computer vision falls short as well: as mentioned in the reading, emotion recognition systems turn very subjective, complex, personal features into objective data, which I don’t think is ethically right, because we are essentially labeling people’s emotions in a way that does not accurately depict them.

However, computer vision can track everything in real time for as long as it has power. We, as humans, cannot keep our eyes open and watch everything, but computer vision can stay active indefinitely, recording everything that goes on. Expanding on this, computer vision can depict the reality of the world if all of the conditions are met and the algorithms are implemented correctly. For example, Suicide Box revealed a reality of suicide that society was uncomfortable confronting. In this sense, computer vision is very effective at maintaining transparency.

 

To enhance the quality of computer vision, we can control the physical environment. For example, we can adjust the brightness and lighting of the background or change the color of objects so that the target stands out, making it easier for computers to track.

 

In terms of the future of computer vision, I think more and more artists are going to incorporate computer vision into their interactive art as people are getting more interested in human and computer interaction, such as VR, AR, XR, and robotics. teamLab would be a great example. They exhibit artwork that allows people to interact with it. Specifically, in Sketch Aquarium, kids draw fish and then the fish appear on the screen so people can feed them or make them swim together. But I believe there are also ethical implications of using computer vision, such as tracking people’s personal data without consent and digital sexual harassment. Therefore, we should establish standards to make sure that computer vision tracking systems are used in appropriate ways.

Week 5: Reading Reflection

Our vision as people comes with a few prerequisites for forming a clear image in our mind: for example, an object needs to be close enough, with sufficient lighting, for us to identify what it is. Computer vision differs from ours in having a much larger set of these prerequisites; the conditions need to be tuned to the abilities of the algorithm in place for accurate identification. Each system is built for a different purpose and needs its own unique conditions and pre-existing information to fulfill that purpose, while our vision covers all ends, such as motion detection and object detection, without needing different conditions. This is why it can be a difficult task to integrate computer vision into interactive media: it is not only about picking the right system and mechanism, you also need to set up the right circumstances for the system to successfully bring your vision to life.

However, even with the complications that come with it, the use of computer vision opens up a new realm of possibilities for interactive art. It takes the ability to interact with a user to another level by tracking the user's movement, adding another layer to the ways the art can ‘listen, think and speak’ to the participant. This is especially true given the increased accessibility of open-source code and of the technologies needed to navigate the world of computer vision; it is now easier than ever to use such technologies to explore themes in depth. This does, however, raise the question that many of the works mentioned in the article tackle: surveillance. These technologies give interactive art pieces eyes; the pieces now partake in the observing and tracking process of the work. This is more of a concern for works installed in public spaces, where the technology detects the movement of people who did not choose to be part of the work. That is why it is important to approach such technologies with care, in order to make the most of them in stepping up the level of interactivity in artworks.

Week 5 Reading Reflection

When I compare computer vision to human vision, I realize how differently they work. For me, vision is immediate and layered with meaning, but for a computer, an image is nothing more than an array of numbers, vectors, and pixels. This reading reminded me of the summer I worked on an image-understanding project with our faculty in the cybersecurity domain. I saw firsthand how computers don’t “see” an image the way I do; they only interpret numerical values and computations. Because of this, it’s even possible to generate two different sets of numbers that look the same to a computer but, when drawn out, look nothing alike. That gap between what humans and computers perceive really fascinated me.

To help computers track what we want, techniques like frame differencing, background subtraction, and brightness thresholding come into play. Through this reading, I learned how important the setup is: lighting, camera placement, and reflective markers can make or break the system’s accuracy. It’s almost like computers need us to simplify reality so they can process it.
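As a minimal sketch of the frame-differencing idea mentioned above (the function names are mine, and this is independent of any particular library): compare two grayscale frames pixel by pixel, and treat a large total difference as motion.

```javascript
// Sum the absolute brightness difference between two grayscale
// frames, given as flat arrays of equal length. In a p5 sketch,
// these arrays would be filled from capture.pixels on
// consecutive frames of the webcam feed.
function frameDifference(prev, curr) {
  let total = 0;
  for (let i = 0; i < curr.length; i++) {
    total += Math.abs(curr[i] - prev[i]);
  }
  return total;
}

// "Motion" is simply a total difference above some tuned threshold,
// which is exactly why lighting and setup matter so much: they
// decide how noisy that total is when nothing is actually moving.
function motionDetected(prev, curr, threshold) {
  return frameDifference(prev, curr) > threshold;
}
```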

Where I feel conflicted is in the area of surveillance. In my cybersecurity project, I came to appreciate how powerful these systems can be, but also how much risk comes with that power. Since computers only “understand” numbers, they can be tricked, but they can also be used to monitor, record, and categorise people in ways that feel invasive. In interactive art, I see these tools as playful, engaging, and thought-provoking. But outside of art, they can become a threat. Surveillance powered by computer vision has the capacity to invade privacy, consolidate control, and even manipulate what we think of as reality. That tension makes me think harder about not just what computers can see, but what we allow them to see.

Week 5- Midterm Progress

Concept

For my midterm project, I wanted to build something interactive, competitive, and continuously evolving in difficulty. Inspired by the mechanics of hole.io, I decided to create a Blackhole game in p5 where the player controls a growing blackhole that consumes objects scattered across the canvas. The more the player consumes, the larger and stronger the blackhole becomes.

The concept behind this project is to give the user an engaging challenge: survive, grow, and adapt. I wanted to capture the thrill of growth mechanics where every interaction (consuming objects, avoiding hazards, gaining powerups) feels rewarding. Unlike my previous projects, this one leans toward a fast-paced arcade experience rather than something calm or exploratory.

To keep the game fun yet playable, I plan to design mechanics that gradually increase difficulty by reducing screen size. The goal is for the user to feel both challenged and accomplished as they make progress.

Design

The overall design will remain simple but dynamic, with a clear focus on gameplay elements. The blackhole will start small, almost unnoticeable, and will grow bigger as it absorbs objects.

The background will replicate the feeling of outer space, dark and starry, while asteroids and floating debris will act as obstacles and targets. Power-ups will be distinguishable by their shapes and colors, with each type offering unique abilities: faster movement, temporary freezing of asteroids, shrinking/growing effects, or invincibility.

The controls will be intuitive: the blackhole follows the player’s movement across the canvas, keeping the learning curve minimal so the focus remains on reaction and survival.

Frightening / Challenging Aspects

  • Considering the extreme conditions: what if the blackhole becomes larger than the screen? At that point, gameplay might break or lose its intended challenge.
  • Object Spawn Locations: How do I make sure that newly spawned objects do not appear inside the blackhole space, where they would be instantly consumed without interaction?
  • Minimum Distance Between Objects: Ensuring that two spawned objects are not too close to each other, which would make the game feel cluttered and unrealistic.
  • Shrinking Screen Space: The problem arises when objects fall outside the shrinking boundaries: how should those be handled?

 

Risk Prevention

  • Extreme Growth Prevention: To prevent the blackhole from growing uncontrollably, I will implement a mechanic where the hole begins to shrink if it hasn’t consumed anything in the past few seconds (e.g., 5 seconds). This ensures balance and keeps the game playable even in longer sessions.
  • Object Spawn Locations: While spawning new objects, I will check their coordinates against the blackhole’s current position and radius. If an object is detected within the hole’s space, it will be re-spawned at another random location.
  • Minimum Distance Between Objects: I plan to use a distance check between all newly spawned objects, ensuring that each object maintains a minimum distance from others. This can be achieved with a simple loop comparing coordinates and re-spawning objects if they are too close.
  • Shrinking Screen Space: I am exploring the use of constraint and filter functions to dynamically remove or reposition any objects that fall outside the shrinking boundaries. This way, the playable area remains fair and uncluttered as it reduces in size.
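The spawn checks described above could be sketched roughly like this. The hole, objects, and minGap names are assumptions for illustration, not final code:

```javascript
// Euclidean distance between two points.
function distance(x1, y1, x2, y2) {
  return Math.hypot(x2 - x1, y2 - y1);
}

// A candidate spawn position is rejected if it sits inside (or too
// near) the blackhole, or within minGap of any existing object.
function isValidSpawn(x, y, hole, objects, minGap) {
  if (distance(x, y, hole.x, hole.y) < hole.r + minGap) return false;
  for (const o of objects) {
    if (distance(x, y, o.x, o.y) < minGap) return false;
  }
  return true;
}

// In p5, spawning would retry with random() until a position passes:
//
// let x, y;
// do {
//   x = random(width);
//   y = random(height);
// } while (!isValidSpawn(x, y, hole, objects, 40));
```

One caveat with retry loops: as the hole grows and the play area shrinks, valid positions get rarer, so a capped retry count (falling back to skipping the spawn) would keep the game from stalling.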

 

Week 5 – Midterm Progress

Concept
I’m a huge football fan and also a huge supporter of Barcelona FC. I’ve always been fascinated by goalkeepers who do their best to save their team points. Those incredible moments of jumping their highest and catching the ball are something I would love to implement. I especially remember the incredible journey of Dibu Martínez (Argentina’s main goalkeeper) against France in the 2022 Qatar World Cup, where he saved two out of five crucial penalties, ultimately winning them the World Cup. So, I thought, why not step away a bit from my Dexter-inspired assignment and move into my other passions.

Highlight of the code

shoot(targetX, targetY) {
  // pick a random flight duration (in frames)
  this.totalSteps = random(20, 30);

  // total distance to cover on each axis
  let distX = targetX - this.x;
  let distY = targetY - this.y;

  // per-frame step so the ball arrives in exactly totalSteps frames
  this.dx = distX / this.totalSteps;
  this.dy = distY / this.totalSteps;

  this.isFlying = true;
  this.currentStep = 0;
}

The Ball class uses a simple yet highly effective trick to achieve smooth motion without complex physics calculations. Instead of calculating a constant velocity that needs to be constantly adjusted, the shoot(targetX, targetY) method computes the total movement distance and divides it by a random number of steps (this.totalSteps), which gives the ball its per-frame speed.
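The per-frame counterpart of shoot() is not shown above; a minimal version might look like this, assuming the same field names:

```javascript
// Advance the ball by dx/dy each frame and stop it exactly at the
// target once currentStep reaches totalSteps. Written as a plain
// function over a ball-like object; in the sketch this would be a
// method called from draw().
function stepBall(ball) {
  if (!ball.isFlying) return;
  ball.x += ball.dx;
  ball.y += ball.dy;
  ball.currentStep++;
  if (ball.currentStep >= ball.totalSteps) {
    ball.isFlying = false; // arrived: dx * totalSteps covers the full distance
  }
}
```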

Sketch

Uncertainties & Complexities
I haven’t used sounds, images, or icons yet, so I’m a bit concerned about that. Apart from that, I quite like the progress I’ve made, and I think I can finish the project on time. If I manage to add the images, icons, and sound soon, I also want to incorporate other essential features that would make the game a lot better. It took me a while to figure out how to ensure that the ball always hits the intended target; I realized this happens exactly when currentStep equals totalSteps, so that’s how I track whether the ball should stop. Collision detection was a bit of a struggle too, but I had done something similar in my Intro to CS project, so, as the Professor once suggested, I used circles and detect a collision when the dist() between their centers is less than the sum of their radii.
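That circle-based check boils down to something like this (a sketch, not my exact code):

```javascript
// Two circles overlap when the distance between their centers is
// less than the sum of their radii.
function circlesCollide(x1, y1, r1, x2, y2, r2) {
  return Math.hypot(x2 - x1, y2 - y1) < r1 + r2;
}

// In the p5 sketch the same test would use dist(), e.g. (names assumed):
// if (dist(ball.x, ball.y, keeper.x, keeper.y) < ball.r + keeper.r) { /* save! */ }
```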

Future improvements
I asked a couple of my friends in the library what they thought of the project, and some said I could add different ball types: for example, if you save a golden ball you get +3 points instead of +1, and if you miss it you lose 2 lives instead of 1. Also, I’m not quite sure how it works in p5.js, but when we worked in Processing during my Intro to CS class, we were able to load a CSV file as a database of users, so we could keep track of the best score across all users. Maybe I will look into that and try to figure out how to add a database.
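In the browser, a lighter-weight alternative to a CSV database could be the Web Storage API (p5.js also wraps it as storeItem()/getItem()). This is just a sketch with an assumed key name; passing the store in makes it easy to test:

```javascript
// Persist the best score if the new one beats it, and return the
// current best. `store` is any localStorage-like object with
// getItem/setItem; in the sketch you would pass window.localStorage.
function saveBestScore(store, score) {
  const best = Number(store.getItem('bestScore')) || 0;
  if (score > best) store.setItem('bestScore', String(score));
  return Math.max(best, score);
}
```

Unlike a shared CSV file, localStorage is per-browser, so it tracks one player's personal best rather than a leaderboard across all users; a real cross-user high-score table would need a server or hosted database.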

 

Week 5 – Reading Reflection

Reading Levin’s essay really changed how I think about what it means for a computer to “see.” One of the most striking points was that, unlike humans, a computer has no built-in way to understand what it is looking at. Levin writes that video is “just a big, dumb array of numbers,” which really stood out to me: people automatically recognize faces, objects, and events, but a computer has to be taught step by step what to look for. This helped me appreciate how much of computer vision is really about extracting meaning from raw data rather than simply displaying images.

I also found it interesting that helping the computer “see” is as much about the physical environment as it is about the code. Levin mentions that careful control of lighting and contrast, even using infrared light or retroreflective materials, can make the difference between a system that works and one that fails. That made me realize that when we design with computer vision, we are not just writing software, but we are shaping the entire stage so the camera has the best chance to succeed.

Finally, the discussion about surveillance made me think about the double-edged nature of computer vision in art. Works like Standards and Double Standards or Sorting Daemon use tracking not just for interaction but to make us feel observed, even judged. I then reflected on how computer vision can be playful but also unsettling, and how artists can deliberately use that feeling of being watched to make a political or emotional point. It reminded me that computer vision is not neutral; it carries cultural and ethical weight, which is part of what makes it powerful in interactive art.

Week 5 – Reading Reflection

What stood out to me in the reading is how limited computer vision really is compared to human vision. As humans, we don’t think twice about recognizing objects, adjusting to poor lighting, or making sense of what we see in context. A computer, on the other hand, needs specific rules and conditions to function. It does not actually understand meaning but instead works through pixels, patterns, and features. If something changes in the environment, like lighting or background, the system can easily fail. That made me realize how much of computer vision is not about “seeing” the world the way we do but about narrowing down what the computer is expected to detect.

To make computer vision work, artists and designers often shape the environment so the system has fewer obstacles. This can be done by using clear contrasts, better lighting, or markers that help the camera distinguish what matters. There are also tools like background subtraction and motion tracking that simplify what the computer needs to follow. Hardware choices such as certain cameras, filters, or infrared technology also help in making the vision system more reliable.

In interactive art, I think this creates both opportunities and challenges. On the positive side, computer vision allows art to respond to the presence and movements of people, turning viewers into active participants. It makes installations feel alive and immersive in ways that would not be possible without tracking. At the same time, it carries the same logic as surveillance because the system is always watching and recording behavior. For me, this makes computer vision powerful but also a little unsettling, since it forces us to think about what it means to be observed and how that shapes the experience of art.

Week 5 – Reading Reflection

What are some of the ways that computer vision differs from human vision?
Computer vision is really context dependent compared to human vision. We have eyes and can generally differentiate objects and act on whatever input we receive, but no computer vision algorithm is completely autonomous. Each algorithm depends on its code and on assumptions about the specific scene it is analyzing. If unfavorable conditions such as an absence of movement or poor lighting are present, the algorithm may fail.

What are some techniques we can use to help the computer see / track what we’re interested in?
As mentioned in the abstract, we need to increase the contrast so that computer vision can differentiate between the background environment and people’s movements. Techniques include lighting that silhouettes people and contrasting costumes. Also, infrared illumination improves the signal-to-noise ratio in low-light conditions, as do retroreflective marking materials.

Choosing the right imaging hardware is essential too. For example, telecentric lenses so object’s magnification is independent from the distance, polarizing filters to reduce glare from reflective surfaces, and very purposeful choice of cameras for high resolution, frame rate, short exposure, dim light, UV light, or thermals.

How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?
It is, for sure, a core mechanism and an engine that powers interactive art. I strongly believe that computer vision’s ability to detect, track, and measure presence, motion, color, and size has completely changed how interactive art is perceived since its invention. Techniques such as background subtraction or frame differencing, even though simple, are profound in how they have enabled thousands of modern interactive art installations. For example, advanced tools like EyesWeb focus specifically on tracking and surveillance and provide ‘analysis and processing of expressive gesture’. Now it is not just about detecting movement, but about interpreting it for specific musical or visual artistic purposes. I also think the first interactive piece, Videoplace, which I read about in my other IM class, gives the audience agency: computer vision acts as a bridge between human input and the computer’s output, much like a computer mouse, but detecting human movement and gestures instead.

Assignment 4 – Data Visualization

Concept:

For this assignment, I wanted to create a data visualization that conveys information through motion and color. Initially, I tried using real-world GDP data I downloaded from Kaggle, but I ran into errors because some columns were missing values, and I wasn’t sure how to handle those gaps. So, I decided to create my own fictional CSV file with made-up GDP numbers for a larger set of countries. Each country is represented as a bouncing circle whose size reflects its GDP and whose color adds a visual layer of distinction. The animation gives the data energy and makes patterns easier to notice. I also utilized arrays and a custom class to organize the circles, dynamically managing their movement, size, and color.

Code Highlight:

One part of my code I’m proud of is how the circles move and react to the edges of the canvas. Each circle has its own speed and direction, creating a sense of individuality while still forming a cohesive scene. I also added a feature that displays the country’s name when you hover over a circle, allowing you to read the data without cluttering the visualization.

// show the country name when the mouse hovers inside the circle
if (dist(mouseX, mouseY, this.x, this.y) < this.size / 2) {
    fill(255);
    textAlign(CENTER, CENTER);
    textSize(18);
    text(this.name, this.x, this.y - this.size / 2 - 10);
}
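The edge-bouncing movement described above boils down to something like this (written as a plain function over a circle-like object; field names are assumptions, not my exact class):

```javascript
// Move a circle by its velocity and reverse direction when it
// crosses a canvas edge, so each circle stays within a w x h area.
function bounce(c, w, h) {
  c.x += c.dx;
  c.y += c.dy;
  const r = c.size / 2;
  if (c.x - r < 0 || c.x + r > w) c.dx *= -1;
  if (c.y - r < 0 || c.y + r > h) c.dy *= -1;
}
```

Giving each circle its own dx/dy is what produces the individual motion while keeping the update logic identical for every country.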

Embedded Sketch:

Reflections & Future Improvements:

This project taught me how to combine datasets with visual techniques to make information more intuitive. I realized that adding motion and color can turn raw numbers into an engaging experience. Hover interactions enhanced the usability, making the visualization informative without overwhelming the viewer.

For future improvements, I would like to make the visualization more interactive, perhaps allowing users to filter by region or toggle between GDP and population. I also want to experiment with color gradients or circle trails that respond to velocity, so the data becomes even more expressive. Overall, this project helped me understand how programming, animation, and data can intersect to communicate ideas creatively.