Week 5 – Midterm Progress

Concept

For my midterm project, I wanted to build something interactive, competitive, and continuously evolving in difficulty. Inspired by the mechanics of hole.io, I decided to create a Blackhole game in p5.js where the player controls a growing black hole that consumes objects scattered across the canvas. The more the player consumes, the larger and stronger the black hole becomes.

The concept behind this project is to give the user an engaging challenge: survive, grow, and adapt. I wanted to capture the thrill of growth mechanics where every interaction (consuming objects, avoiding hazards, gaining powerups) feels rewarding. Unlike my previous projects, this one leans toward a fast-paced arcade experience rather than something calm or exploratory.

To keep the game fun yet playable, I plan to design mechanics that gradually increase difficulty by reducing screen size. The goal is for the user to feel both challenged and accomplished as they make progress.

Design

The overall design will remain simple but dynamic, with a clear focus on gameplay elements. The blackhole will start small, almost unnoticeable, and will grow bigger as it absorbs objects.

The background will replicate the feeling of outer space, dark and starry, while asteroids and floating debris will act as obstacles and targets. Power-ups will be distinguishable by their shapes and colors, with each type offering unique abilities: faster movement, temporary freezing of asteroids, shrinking/growing effects, or invincibility.

The controls will be intuitive: the blackhole follows the player’s movement across the canvas, keeping the learning curve minimal so the focus remains on reaction and survival.

Frightening / Challenging Aspects

  • Considering the extreme conditions: what if the blackhole becomes larger than the screen? At that point, gameplay might break or lose its intended challenge.
  • Object Spawn Locations: How do I make sure that newly spawned objects do not appear inside the blackhole space, where they would be instantly consumed without interaction?
  • Minimum Distance Between Objects: Ensuring that two spawned objects are not too close to each other, which would make the game feel cluttered and unrealistic.
  • Shrinking Screen Space: A problem arises when objects fall outside the shrinking boundaries: how should those be handled?

 

Risk Prevention

  • Extreme Growth Prevention: To prevent the blackhole from growing uncontrollably, I will implement a mechanic where the hole begins to shrink if it hasn’t consumed anything in the past few seconds (e.g., 5 seconds). This ensures balance and keeps the game playable even in longer sessions.
  • Object Spawn Locations: While spawning new objects, I will check their coordinates against the blackhole’s current position and radius. If an object is detected within the hole’s space, it will be re-spawned at another random location.
  • Minimum Distance Between Objects: I plan to use a distance check between all newly spawned objects, ensuring that each object maintains a minimum distance from others. This can be achieved with a simple loop comparing coordinates and re-spawning objects if they are too close.
  • Shrinking Screen Space: I am exploring the use of constrain() and array filter() functions to dynamically remove or reposition any objects that fall outside the shrinking boundaries. This way, the playable area remains fair and uncluttered as it reduces in size.
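The two spawn rules above (stay outside the hole, keep a minimum gap between objects) could be combined into one helper. This is only a sketch under my own names (hole, objects, minGap — none are from the actual project); in a real p5.js sketch, dist() and random() are built in, so the stand-ins below would be dropped:

```javascript
// Standalone stand-ins for p5.js helpers (built into a real sketch).
const dist = (x1, y1, x2, y2) => Math.hypot(x2 - x1, y2 - y1);
const random = (max) => Math.random() * max;

// Hypothetical names: hole = {x, y, r}, objects = [{x, y}, ...].
function spawnObject(hole, objects, w, h, minGap) {
  for (let tries = 0; tries < 100; tries++) {
    const x = random(w);
    const y = random(h);
    // Reject spawns inside (or touching) the black hole's current radius.
    if (dist(x, y, hole.x, hole.y) < hole.r + minGap) continue;
    // Reject spawns too close to any existing object.
    if (objects.some(o => dist(x, y, o.x, o.y) < minGap)) continue;
    return { x, y };
  }
  return null; // no valid spot found (e.g., the hole covers most of the screen)
}
```

Capping the retries also covers the extreme-growth case: when the hole dominates the screen, the function gives up instead of looping forever.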

 

Week 5 – Midterm Progress

Concept
I’m a huge football fan and also a huge supporter of FC Barcelona. I’ve always been fascinated by goalkeepers who do their best to save points for their team. Those incredible moments of jumping their highest and catching the ball are something I would love to implement. I especially remember the incredible journey of Dibu Martínez (Argentina’s main goalkeeper) against France in the 2022 Qatar World Cup, where his crucial penalty-shootout saves ultimately won them the World Cup. So, I thought, why not step away a bit from my Dexter-inspired assignment and move toward my other passions.

Highlight of the code

shoot(targetX, targetY) {
    // A random step count gives each shot a slightly different speed.
    this.totalSteps = random(20, 30);

    let distX = targetX - this.x;
    let distY = targetY - this.y;

    // Per-frame movement: the total distance split evenly across the steps.
    this.dx = distX / this.totalSteps;
    this.dy = distY / this.totalSteps;

    this.isFlying = true;
    this.currentStep = 0;
}

The Ball class uses a simple yet highly effective trick to achieve smooth motion without complex physics calculations. Instead of computing a constant velocity that would need continual adjustment, the shoot(targetX, targetY) method calculates the total movement distance and divides it by a random number of steps (this.totalSteps), which gives the ball its per-frame speed.
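The matching per-frame update isn’t shown in the post; here is a minimal standalone sketch of how it might look, assuming the same field names (the step count is passed in explicitly, where the real sketch would use p5’s random(20, 30)):

```javascript
// Minimal standalone Ball mirroring the post's shoot() method.
class Ball {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.isFlying = false;
  }
  shoot(targetX, targetY, totalSteps) {
    this.totalSteps = totalSteps; // random(20, 30) in the actual sketch
    this.dx = (targetX - this.x) / totalSteps;
    this.dy = (targetY - this.y) / totalSteps;
    this.isFlying = true;
    this.currentStep = 0;
  }
  update() {
    if (!this.isFlying) return;
    this.x += this.dx;
    this.y += this.dy;
    // After totalSteps frames the ball has covered the full distance,
    // so it lands exactly on the target whatever speed was rolled.
    if (++this.currentStep >= this.totalSteps) this.isFlying = false;
  }
}
```

Because dx and dy are exact fractions of the remaining distance, the ball always ends on the target after totalSteps frames, with no correction needed.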

Sketch

Uncertainties & Complexities
I haven’t used sounds or any kind of images or icons yet, so I’m a bit concerned about that. Apart from that, I quite like the progress I’ve made, and I think I’ll be able to finish the project on time. Once I properly add those images, icons, and sound, I also want to incorporate other essential features that would make the game a lot better. It took me a while to figure out how to ensure that the ball always hits the intended target; I realized this happens exactly when currentStep equals totalSteps, so that’s how I keep track of whether the ball should stop. Collision detection was a bit of a struggle too, but I had already done an Intro to CS project, where the Professor suggested using circles and detecting collisions with dist() by comparing the distance between their centers to the sum of their radii.
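The circle-based collision check described above fits in a couple of lines. A standalone sketch (dist() is built into p5.js; it is defined here so the snippet runs on its own):

```javascript
const dist = (x1, y1, x2, y2) => Math.hypot(x2 - x1, y2 - y1); // p5 built-in

// Two circles overlap when the distance between their centers
// is less than the sum of their radii.
function circlesCollide(a, b) {
  return dist(a.x, a.y, b.x, b.y) < a.r + b.r;
}
```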

Future improvements
I asked a couple of my friends in the library what they thought of the project, and some said that I could add different ball types: for example, if you save a golden ball you get +3 points instead of +1, and if you miss it you lose 2 lives instead of 1. Also, I’m not quite sure how it works in p5.js, but when we worked in Processing during my Intro to CS class, we were able to load a CSV file as a database of users, so we could keep track of the best score across all users. Maybe I will look into that and try to figure out how to add a database.
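A true cross-user leaderboard needs a server-side database, since a p5.js sketch running in the browser cannot write back to a CSV file on the server (p5’s loadTable() only reads). The best score on one machine, though, can be kept with the browser’s localStorage. A sketch, with my own key name "bestScore" and a stand-in storage object so it also runs outside a browser:

```javascript
// localStorage exists in the browser where a p5.js sketch runs; this
// fallback lets the snippet run standalone. "bestScore" is my key name.
const storage = typeof localStorage !== "undefined"
  ? localStorage
  : (() => {
      const m = {};
      return {
        getItem: k => (k in m ? m[k] : null),
        setItem: (k, v) => { m[k] = String(v); },
      };
    })();

// Persist the score only if it beats the stored best; return the best so far.
function saveBestScore(score) {
  const best = Number(storage.getItem("bestScore")) || 0;
  if (score > best) storage.setItem("bestScore", score);
  return Math.max(score, best);
}
```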

 

Week 5 – Reading Reflection

Reading Levin’s essay really changed how I think about what it means for a computer to “see.” One of the most striking points was that, unlike humans, a computer has no built-in way to understand what it is looking at. Levin writes that video is “just a big, dumb array of numbers,” which really stood out to me: people automatically recognize faces, objects, and events, but a computer has to be taught step by step what to look for. This helped me appreciate how much of computer vision is really about extracting meaning from raw data rather than simply displaying images.

I also found it interesting that helping the computer “see” is as much about the physical environment as it is about the code. Levin mentions that careful control of lighting and contrast, even using infrared light or retroreflective materials, can make the difference between a system that works and one that fails. That made me realize that when we design with computer vision, we are not just writing software, but we are shaping the entire stage so the camera has the best chance to succeed.

Finally, the discussion about surveillance made me think about the double-edged nature of computer vision in art. Works like Standards and Double Standards or Sorting Daemon use tracking not just for interaction but to make us feel observed, even judged. I then reflected on how computer vision can be playful but also unsettling, and how artists can deliberately use that feeling of being watched to make a political or emotional point. It reminded me that computer vision is not neutral; it carries cultural and ethical weight, which is part of what makes it powerful in interactive art.

Week 5 – Reading Reflection

What stood out to me in the reading is how limited computer vision really is compared to human vision. As humans, we don’t think twice about recognizing objects, adjusting to poor lighting, or making sense of what we see in context. A computer, on the other hand, needs specific rules and conditions to function. It does not actually understand meaning but instead works through pixels, patterns, and features. If something changes in the environment, like lighting or background, the system can easily fail. That made me realize how much of computer vision is not about “seeing” the world the way we do but about narrowing down what the computer is expected to detect.

To make computer vision work, artists and designers often shape the environment so the system has fewer obstacles. This can be done by using clear contrasts, better lighting, or markers that help the camera distinguish what matters. There are also tools like background subtraction and motion tracking that simplify what the computer needs to follow. Hardware choices such as certain cameras, filters, or infrared technology also help in making the vision system more reliable.

In interactive art, I think this creates both opportunities and challenges. On the positive side, computer vision allows art to respond to the presence and movements of people, turning viewers into active participants. It makes installations feel alive and immersive in ways that would not be possible without tracking. At the same time, it carries the same logic as surveillance because the system is always watching and recording behavior. For me, this makes computer vision powerful but also a little unsettling, since it forces us to think about what it means to be observed and how that shapes the experience of art.

Week 5 – Reading Reflection

What are some of the ways that computer vision differs from human vision?
Computer vision is really context-dependent compared to human vision. We have eyes and can generally differentiate objects and act on almost any input, but no computer vision algorithm is completely autonomous. Each algorithm depends on its code and on assumptions about the specific scene it is analyzing. If conditions such as an absence of movement or poor lighting are present, the algorithm may fail.

What are some techniques we can use to help the computer see / track what we’re interested in?
As mentioned in the reading, we need to increase contrast so that computer vision can differentiate between the environment’s background and people’s movements. Techniques include lighting that silhouettes people and contrasting costumes. Also, infrared illumination improves the signal-to-noise ratio in low-light conditions, as do retroreflective marking materials.

Choosing the right imaging hardware is essential too: for example, telecentric lenses, so that an object’s magnification is independent of its distance; polarizing filters to reduce glare from reflective surfaces; and a purposeful choice of cameras for high resolution, frame rate, short exposure, dim light, UV light, or thermal imaging.

How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?
It is, for sure, a core mechanism and an engine that gives power to interactive art. I strongly believe that computer vision’s ability to detect, track, and measure presence, motion, color, and size completely changed how interactive art is perceived since its invention. Techniques such as background subtraction or frame differencing, even though simple, are very profound in how they helped make thousands of modern interactive art installations possible. For example, advanced tools like EyesWeb specifically focus on tracking and surveillance and provide ‘analysis and processing of expressive gesture’. Now it is not just about detecting movement, but rather interpreting it for specific musical or visual artistic purposes. I also think that the first interactive piece, Videoplace, which I read about in my other IM class, gives the audience agency: computer vision acts as a bridge between human input and the computer’s output, much like a computer mouse, except that it detects human movement and gestures.

Assignment 4 – Data Visualization

Concept:

For this assignment, I wanted to create a data visualization that conveys information through motion and color. Initially, I tried using real-world GDP data I downloaded from Kaggle, but I ran into errors because some columns were missing values, and I wasn’t sure how to handle those gaps. So, I decided to create my own fictional CSV file with made-up GDP numbers for a larger set of countries. Each country is represented as a bouncing circle whose size reflects its GDP and whose color adds a visual layer of distinction. The animation gives the data energy and makes patterns easier to notice. I also utilized arrays and a custom class to organize the circles, dynamically managing their movement, size, and color.

Code Highlight:

One part of my code I’m proud of is how the circles move and react to the edges of the canvas. Each circle has its own speed and direction, creating a sense of individuality while still forming a cohesive scene. I also added a feature that displays the country’s name when you hover over a circle, allowing you to read the data without cluttering the visualization.

if (dist(mouseX, mouseY, this.x, this.y) < this.size / 2) {
    // The mouse is inside this circle: draw the country name above it.
    fill(255);
    textAlign(CENTER, CENTER);
    textSize(18);
    text(this.name, this.x, this.y - this.size / 2 - 10);
}
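The edge-bounce behavior described above isn’t shown in the snippet; here is a minimal sketch of how it might look, with hypothetical field names (speedX, speedY) and the canvas size passed in so it runs standalone (in p5.js, width and height are globals):

```javascript
// Hypothetical move() for the bouncing circles: reverse direction
// whenever the circle's edge crosses a canvas boundary.
function move(c, w, h) {
  c.x += c.speedX;
  c.y += c.speedY;
  const r = c.size / 2;
  if (c.x - r < 0 || c.x + r > w) c.speedX *= -1;
  if (c.y - r < 0 || c.y + r > h) c.speedY *= -1;
  return c;
}
```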

Embedded Sketch:

Reflections & Future Improvements:

This project taught me how to combine datasets with visual techniques to make information more intuitive. I realized that adding motion and color can turn raw numbers into an engaging experience. Hover interactions enhanced the usability, making the visualization informative without overwhelming the viewer.

For future improvements, I would like to make the visualization more interactive, perhaps allowing users to filter by region or toggle between GDP and population. I also want to experiment with color gradients or circle trails that respond to velocity, so the data becomes even more expressive. Overall, this project helped me understand how programming, animation, and data can intersect to communicate ideas creatively.

 

Week 5 Reading Response

Prompt:

What are some of the ways that computer vision differs from human vision? What are some techniques we can use to help the computer see / track what we’re interested in? How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

Response:

From my understanding, combined with the reading’s interpretation, computer vision differs from human vision in that the human ability is essentially built into our brains and takes far less energy and time to train for recognizing objects. Computer vision systems, by contrast, not only need numerous examples to learn, but also consume huge amounts of energy. Also, while learning to recognize things, computer algorithms often “cut” pictures into small pieces and train on those, whereas humans learn to recognize objects as a whole.

Techniques that help a computer see and track include frame differencing, background subtraction, brightness thresholding, and simple object tracking. I believe a computer’s ability to track and surveil could make interactive art more interactive. For example, if I want to design background music or scenes for a dancer, and change the background according to the dancer’s movements, I need CV as a technique to compute parameters that feed into algorithms, which can then be turned into shapes, sounds, and so on.
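Frame differencing, the first technique named here, fits in a few lines. A standalone sketch over grayscale frames stored as flat arrays, one brightness value per pixel (in a p5.js sketch, the same loop would run over the webcam capture’s pixels array each draw()):

```javascript
// Count pixels whose brightness changed by more than `threshold`
// between two frames — a crude but usable amount-of-motion signal.
function frameDifference(prev, curr, threshold) {
  let motion = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) motion++;
  }
  return motion;
}
```

For the dancer example, the returned count (or a per-region version of it) would be the parameter driving the background visuals or sound.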

 

Week 5 Midterm Progress

Concept:

For this project, I decided to create a design that includes interactivity with sound. When I looked at how musicians create their music, they always use something called a “soundtrack” that can be dragged around to edit their produced songs. So, for this midterm project, I decided to build on this concept.

However, it would be too technical if I just recreated a soundtrack control system, and it wouldn’t be very artistic. To add aesthetics to the design and to simplify the system, I decided to create a visual system similar to this picture.

With different colors representing different sounds, the user can drag the gridlines to increase or decrease the length of a given sound. By designating colored blocks to specific sounds, I reduce the difficulty of choosing which sounds are available on the soundtrack. I also want to design a system in which the user can click on a grid cell to change its color, and with it the sound. There should also be a play button that plays the sound block once the user finishes designing it.

Complex parts:

I always insist on making my designs graphically dominant; ideally, no words are included in the system. However, I want users to discover on their own that they can drag the grid lines to alter the duration of a given sound. The operation of changing a grid cell’s color should also be very intuitive.

Design:

There should be a class for blocks, since the user can alter both the color and the width of each grid cell.
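A minimal sketch of such a Block class, following the description above; the field and method names are my own, not from the actual project:

```javascript
// Each block has a width (sound duration) and a color index that
// selects its sound; clicking cycles through the palette.
class Block {
  constructor(x, w, colorIndex, paletteSize) {
    this.x = x;                   // left edge on the track
    this.w = w;                   // width = how long the sound plays
    this.colorIndex = colorIndex; // which sound/color this block uses
    this.paletteSize = paletteSize;
  }
  contains(mx) {                  // is a click at x-position mx inside this block?
    return mx >= this.x && mx < this.x + this.w;
  }
  cycleColor() {                  // click: switch to the next sound/color
    this.colorIndex = (this.colorIndex + 1) % this.paletteSize;
  }
  resize(newRightEdge, minW) {    // drag the right gridline to change width
    this.w = Math.max(minW, newRightEdge - this.x);
  }
}
```

In the draw loop, each block would be rendered as a colored rect(this.x, y, this.w, rowHeight), and mousePressed/mouseDragged would route to contains(), cycleColor(), and resize().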

First Stage work:(press f to enter full screen)

As displayed above, this version is extremely crude.

There are several improvements I could make in the coming week:

  1. The entire interface should be more user-friendly; right now it’s just ugly.
  2. It should be a 5×5 grid instead of a 1×5 grid.
  3. The gridline-dragging function should be designed more beautifully.
  4. There should be no written instructions.

However, this version works right now, and I can play with the sound to make some simple music.

 

Week 4: Generative Text

Fullscreen sketch

Main Concept:

My main concept is the first home screen you see when you buy a new phone. There are so many “hello” messages in different languages popping up, and that makes me shiver and feel like I’m really getting a new phone. For this assignment, I wanted to replicate that feeling of “I’m getting a new thing.” I also thought generating “hello” in different languages would symbolize that even though we are divided by languages, we are all connected with each other, and it is important to understand and embrace one another.

 

The part of the code I’m proud of:

The part of the code I am most proud of is the update function. In order to calculate the time that has passed after the word was generated, I had to learn a new function called millis(), which gives you the number of milliseconds that have passed since the program started. I used multiple if-else statements to make the word gradually appear and disappear based on time intervals. For instance, the opacity of the word gradually increases from 0 to 255 within 1 second so that it does not pop up abruptly. This was meant to imitate the iPhone’s way of presenting “hello,” which gradually fades in. I also used the map() function, which we learned in class, to map the 0-to-1-second interval onto 0-to-255 opacity. I am happy that I was able to fully utilize the concepts we learned in class in this update function inside the Hello class.

update() {
    let passed = millis() - this.beginTime;

    if (passed < 1000) {
      // gradually fade in over 1 sec
      this.alpha = map(passed, 0, 1000, 0, 255);
    } else if (passed < 3000) {
      // fully opaque between 1 and 3 secs
      this.alpha = 255;
    } else if (passed < 5000) {
      // gradually fade out over 2 secs
      this.alpha = map(passed, 3000, 5000, 255, 0);
    } else {
      // word has fully faded out
      this.over = true;
    }
  }

Sketch:

 

Reflections & Future Improvements:

For future improvements, I would like to change the color of the word each time it pops up to make it more colorful and enjoyable for viewers. Furthermore, I want to avoid generating the same word two times in a row. I think I will be able to do this by using an if-else statement to control the conditions. Overall, I am happy with the outcome, as I was able to replicate a simpler version of the iPhone’s starting screen.
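The no-repeat idea can be done with a short loop: re-draw until the pick differs from the previous word. A standalone sketch with my own names (in the actual p5.js sketch, random(hellos) would do the picking):

```javascript
// Pick the next greeting, never repeating the previous one.
function pickNext(hellos, last) {
  if (hellos.length < 2) return hellos[0]; // nothing to vary with one word
  let next;
  do {
    next = hellos[Math.floor(Math.random() * hellos.length)];
  } while (next === last);
  return next;
}
```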

Reading Reflection – Week 4

Reading Norman’s chapter on how to design everyday things brought back memories of my experience with poorly designed washing machines. The majority of newer machines have vague buttons, knobs, and symbols that make even a single load of laundry complicated. Some controls are counterintuitive, with icons that are unidentifiable as any standardized symbol. Others have poorly laid-out buttons or poorly readable colors, which make discoverability nearly impossible.

These design errors are just the kind of problems Norman discusses in terms of discoverability and understanding. A washing machine should express its use naturally, so that users know what can be done, where to do it, and what the controls accomplish, without needing instructions. Norman’s principles of affordances and conceptual models apply here: signifiers should point to the possible actions on the machine, and affordances should tell us where and how to perform them naturally. Poorly designed washers fail in all of these ways, keeping their users uninformed and causing errors that better design would have prevented.

Norman indicates that machines should be planned for human error, and I agree that anticipating mistakes is crucial in terms of usability. A properly thought-out machine should anticipate human error and guide the user instinctively through the procedure, without requiring perfect knowledge or precision. I think there is a fine line, however, between assisting users and overcompensating for every mistake. If a design strives to support as many potential errors as it can, it will end up being overcomplicated or even enable sloppy use, rather than teach or lead users to the correct way of using it. Brilliant design, in my opinion, should discourage the most common errors without shutting doors entirely on the freedom to learn from experience and make decisions, rather than trying to predict all potential errors. This equilibrium provides usability without sacrificing structure or clarity.

Reflecting on Norman’s principles, I realize how important it is to design for clear feedback in interactive media, especially in my p5.js projects. For example, if I create a sketch where clicking on shapes triggers animations or changes their color, I can’t assume that a first-time user will immediately understand what to do. To address this, I might add visual cues like hover effects, tooltips, or a brief on-screen instruction to indicate which elements are interactive and what actions will happen. Norman’s arguments remind me that good design means guiding users through interactions intuitively, so they can enjoy the experience without confusion, frustration, or guesswork. By applying this principle, I can make my projects more accessible, engaging, and user-friendly.