Midterm Progress | Week 5

For my midterm I want to create a simple game inspired by the movie “Cloudy with a Chance of Meatballs”, where the avatar has to avoid the meatballs; for each meatball avoided, a point is gained. Using this catching game as a template, I would have to change the code from catching the circles to gain a point to avoiding them.

I would also like to have the player pick their avatar (the girl or the boy) before starting. But I am worried that I might not be able to do it, because the collision between the ball and the avatar might look off: different avatars mean different sizes or shapes.
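
One way to keep the collision fair across avatars would be a circle-based check whose radius scales with the chosen character. Here is a minimal sketch of that idea, with assumed variable names (this is not the template’s code):

// Hedged sketch: size-aware collision check and scoring (names are assumptions)
function checkCollision(meatball, avatar) {
  let d = dist(meatball.x, meatball.y, avatar.x, avatar.y);
  // Treat both as circles; avatar.radius changes with the chosen avatar
  return d < meatball.radius + avatar.radius;
}

// In draw(): a meatball that falls past the bottom without hitting the avatar scores a point
if (meatball.y > height && !meatball.scored) {
  score++;
  meatball.scored = true;
}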

If everything goes well and I end up finishing faster than I think, I could add levels to the game.

This is a rough sketch of what it will look like:

Week 5 – Reading Reflection

After reading Computer Vision for Artists and Designers, I found myself reflecting on how this technology has made its way from industrial and military applications to creative fields like interactive art. One of the points that struck me is how accessible computer vision techniques have become, allowing artists and designers to explore new creative possibilities. It made me think about the similarities and differences between how humans and computers perceive the world. While human vision is complex, combining perception, intuition, and context, computers break down visual input into patterns, often missing the nuances that we take for granted. For example, when we see someone waving, we instantly understand the context — whether it’s a friendly greeting or an attempt to get attention, using cues like their expression and the setting. In contrast, computer vision algorithms like frame differencing detect only the motion of the hand, missing the gesture’s meaning. This shows us how computers rely on measurable inputs, while human vision combines objective information with subjective interpretation.
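
As a concrete illustration of how literal this kind of detection is, frame differencing can be reduced to a few lines in p5.js. This is a minimal sketch of the general technique, not code from the reading: it only sums how much each pixel’s value changed between frames, with no notion of what the motion means.

// Hedged sketch: basic frame differencing in p5.js
let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  let motion = 0;
  // Sum the brightness change of every pixel between this frame and the last
  for (let i = 0; i < video.pixels.length; i += 4) {
    motion += abs(video.pixels[i] - prevFrame.pixels[i]);
  }
  image(video, 0, 0);
  text("motion: " + int(motion / 1000), 10, 20); // a bare number, not a meaning
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240); // remember this frame
}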

When it comes to helping computers track what we’re interested in, I think optimizing the physical environment plays a crucial role. The article discusses techniques like proper lighting and camera placement, which seem necessary for improving accuracy. These considerations remind me that even though we can program computers to “see,” there’s still a lot of fine-tuning involved to get the desired results. The tools mentioned, such as Processing and Max/MSP/Jitter, also seem promising for artists who may not have deep programming knowledge but want to experiment with computer vision.

I believe computer vision’s ability to track and surveil raises important questions about its use in interactive art. It offers exciting possibilities, such as creating responsive environments or immersive installations. However, it also introduces concerns about privacy and surveillance, especially when this technology is used outside artistic contexts. In interactive art, it can feel playful and creative, but the same technology in everyday spaces could feel invasive. This duality makes me wonder if we need to establish boundaries around how this technology is used, both for art and for broader societal purposes.

Reading Response | Week 5

The reading explains how computer vision, originally used in military and law enforcement, is now accessible to artists and designers. It discusses the challenges of making computer vision work effectively and suggests tools to help artists use it in their projects. Computer vision and human vision differ in a few ways. Human vision relies on the brain’s ability to understand complex scenes, recognize patterns, and adapt quickly, while computer vision is a set of algorithms that extracts data from digital images. Human vision is intuitive, recognizing objects in various lighting conditions and angles. Computers, however, require explicit programming to process pixels and identify patterns. While human eyes focus on the most relevant parts of a scene, computers process images as data without any true understanding of what they’re “seeing.” Human visual perception isn’t fully understood, so computer vision isn’t perfect.

In interactive art, computer vision allows artists to track movements and respond to viewers in real time, making the experience more engaging and interactive. But because it can also be used for surveillance, it raises questions about privacy and control. This mix of the creative and the intrusive makes it a double-edged sword.

Reading Reflection – Week 5

I found this article, “Computer Vision,” to be quite fascinating, especially when the author talked about how computer vision technologies used to be limited to military and law-enforcement purposes, whereas they are now also used for programming and creative purposes. Computer vision holds a significant position in generating ideas and designs for areas such as novel artworks, games, and home automation systems. I really liked how this article gave a lot of examples of whatever the author was talking about, such as the first interactive artwork to incorporate computer vision and elementary computer vision techniques, while also showing visual examples of each concept. This really helped me understand the ideas and see how they would play out in real life. I also really enjoyed how much I learned from a relatively small amount of content compared to our other readings, which I’ll talk about later.

One of the ways in which computer vision differs from human vision, as stated in the article, involves “generalization”: computer vision algorithms are not fully general, as they are highly dependent on assumptions about the real-world video scene. Humans, on the other hand, are easily able to adapt and generalize what they’re seeing across a much bigger range of contexts. Another difference is computers’ vs. humans’ ability to differentiate and infer what they’re seeing if the subjects are too similar in color or brightness. Computer vision struggles to work well if there isn’t significant contrast between people and their environments, whereas humans are able to tell people and things apart even if the brightness and color of the backgrounds are very similar. These differences made me think of how easy it is for us to determine what things are. We’re very lucky to be able to process these things so quickly, because otherwise it would harm us not only in terms of survival but also in terms of social life. Imagine your friend sees you and waves at you, only for you not to be able to process or see them because they’re unintentionally matching the wall. It would be embarrassing for them and complicated for you. It also helps that we can generalize enough to tell that our friends are our friends, instead of processing them as a newly discovered bug just because they decided to wear a new bright color palette of clothes one day.

One more difference, which is also a technique humans can use to help the computer see, is the use of infrared. Infrared is invisible to the human eye but helps improve the signal-to-noise ratio of video captured in low-light conditions. This lets computer vision operate in near-to-complete darkness, something that humans can’t quite do. Another technique is the use of retroreflective marking materials, which help ensure high-contrast video of tracked objects because the light illuminating them is reflected straight back. Reading this unintentionally reminded me of, and answered, a question I forgot I had back when I was a kid in elementary school. When crossing the street to get to school, I always wondered why the crossing guards wore bright yellow or orange jackets with silver stripes. This answered why: the high-contrast brightness and reflectiveness enhance their visibility, especially in dark conditions, by reflecting light back to the drivers.

Before reading this article, I thought that when it comes to interactive art, computer vision’s ability to track would simply allow for direct engagement with the artist and/or the audience. It enables immediate feedback from viewers’ actions, as in Myron Krueger’s “Videoplace,” which analyzed participants’ silhouettes and movements to create graphical responses in real time based on their interactions. These real-time interactions and visual responses can enhance the experience and make it more immersive and engaging. I thought it would just be good interactive fun. But after reading this, I realized how many issues could arise from it. It could end up being problematic, like the Bureau of Inverse Technology’s “Suicide Box,” which has led to controversy over the ethics of recording suicides and over whether the recorded suicides are even real. Beyond the good and the bad, it can also be used to shed light on issues and raise awareness. David Rokeby’s “Sorting Daemon,” inspired by concerns over surveillance and profiling, uses computer vision to survey the environment and extract and sort the people in his piece, as a way to question the ethics of using technology for purposes such as profiling. I didn’t realize how complex the uses of computer vision could get, but now I’ve seen several perspectives on how it could be viewed as fun, beneficial, or problematic.

Overall, this was a really good read, and I learned a lot more than I expected from it. I thought I would just learn what computer vision is, along with a couple of examples of how it works and how it could be implemented in interactive art, but I ended up learning way more than that.

Reading Response 3

Something that drives me crazy is the confusion around door handles, especially when it’s unclear whether to push or pull, or whether the door is automatic. Many doors have handles that imply pulling but actually require pushing, or they’re automatic but give no indication of it. This creates unnecessary frustration. Norman’s principles could easily improve this. First, using appropriate affordances would help: push bars for pushing and handles for pulling. This would visually communicate the correct action, reducing hesitation. Additionally, signifiers like clear signs or arrows could guide users, especially in high-traffic areas like stores or airports.

For automatic doors, better feedback could be useful. Subtle lights or sounds could indicate when a sensor has been triggered, letting users know the door will open. For example, in C2, the doors don’t always respond to you, and sometimes you are required to pull the doors so they open. This unpredictability creates unnecessary confusion and frustration.

In this case, enhancing the visibility of door functions and matching the conceptual model to user expectations (handles for pulling, plates for pushing) would make navigating doors far less frustrating. By aligning with Norman’s principles, door design could become far more intuitive and user-friendly.

Assignment 4 – Words and Wizardry

Inspiration

Anyone who knows me also knows that I’m a huge fan of the Harry Potter series, and I decided to use that as the base idea for this project involving text and data. In whatever way I could, I wanted to recreate the sense of wonder and enchantment that I feel I’m part of every time I read one of the books or watch one of the movies. In the series, things often move and react in magical ways, and that gave me the idea to animate words and make them interactive, almost as if they had a life of their own. Instead of static text on the screen, I imagined how fun it would be if the words could scatter, repel, and float, just like magic.

Concept

The idea I finally arrived at is a colorful word collage filled with randomly chosen and placed words from Harry Potter vocabulary (resulting in a different look/arrangement every time the code runs). When the user hovers near a word, it gets “pushed” away as if repelled by a magical force. When the screen is clicked, the words disintegrate, breaking down into smaller particles, as though they’re dissolving or being pulled apart by magic. The disintegration adds an additional layer of visual intrigue, similar to a magical spell or force at work. Once the words have completely broken down, the particles fall to the bottom of the screen and disappear. In their place, a meaningful Harry Potter quote appears on the screen. The quote stays visible for 6 seconds, giving the viewer a moment to absorb the message, before the entire cycle resets, creating a loop that gives the experience a continuous flow.

Implementation

This is how the project turned out!

These are some of the most important parts of the code that I am proud of :))

1. Placing Words on the Canvas

The first part of the project was figuring out how to place the words on the screen randomly. I wrote a function called placeWords() that tries to put each word at a random location. If the word overlaps with another, it shrinks until it fits. This way, each time the page is loaded, the words appear in different spots, making each experience unique.

// Function to place words randomly on the canvas
function placeWords() {
  placedWords = [];
  for (let i = 0; i < words.length; i++) {
    let word = words[i];
    let fontSize = 60;
    let success = false;
    
    // Trying to place the word until successful or until the font size is too small
    while (!success && fontSize >= 10) {
      let x = random(width);
      let y = random(height);
      let angle = random(1) > 0.5 ? 0 : HALF_PI;
      
      // Checking if the word can be placed at the given position, size, and angle without overlapping
      if (canPlaceWord(word, x, y, fontSize, angle)) {
        placedWords.push(createWord(word, x, y, fontSize, angle));
        success = true;
      } else {
        fontSize -= 2;
      }
    }
  }
}
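
The helpers canPlaceWord() and createWord() aren’t shown above. As a rough idea of their shape, here is a minimal sketch assuming a simple axis-aligned bounding-box overlap test; the field names and the exact test are my assumptions, not the original code:

// Hedged sketch of the two helpers referenced above (not the original code).
// Assumes (x, y) is the top-left corner of the word's bounding box.
function canPlaceWord(word, x, y, fontSize, angle) {
  textSize(fontSize);
  // Swap width and height for vertically rotated words
  let w = angle === 0 ? textWidth(word) : fontSize;
  let h = angle === 0 ? fontSize : textWidth(word);
  if (x + w > width || y + h > height) return false; // keep the word on the canvas
  for (let other of placedWords) {
    textSize(other.fontSize);
    let ow = other.angle === 0 ? textWidth(other.text) : other.fontSize;
    let oh = other.angle === 0 ? other.fontSize : textWidth(other.text);
    // Reject the position if the two bounding boxes overlap
    if (x < other.x + ow && x + w > other.x && y < other.y + oh && y + h > other.y) {
      return false;
    }
  }
  return true;
}

function createWord(word, x, y, fontSize, angle) {
  // currentX/currentY track the displaced position used by the repel effect
  return { text: word, x: x, y: y, currentX: x, currentY: y,
           fontSize: fontSize, angle: angle,
           color: color(random(255), random(255), random(255)) };
}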
2. Animating Words

Next, I wanted the words to move away from the mouse, like they were being pushed by an invisible force. I used the mouse’s position to calculate the distance between the mouse and each word. If the mouse gets too close, the word moves away. If it moves far enough, the word slowly returns to its original spot.

// Drawing "Hogwarts" word (stationary)
    drawWord(hogwartsWord);
    
    // Drawing all other placed words with repel effect
    for (let word of placedWords) {
      // Calculating the horizontal/vertical distance between the mouse and the current word's position
      let dx = mouseX - word.currentX;
      let dy = mouseY - word.currentY;
      let distance = sqrt(dx*dx + dy*dy); // calculating the total Euclidean distance between the mouse and the word using Pythagoras' theorem
      let maxDistance = 100;
      let repelStrength = 35; //strength of repelling force
      
      // If the word is within the repel range (closer than maxDistance)
      if (distance < maxDistance) {
        // Calculating the repelling force proportional to the distance (closer = stronger force)
        let force = (1 - distance / maxDistance) * repelStrength;
         // Applying the repelling force to the word's horizontal/vertical position, pushing it away from the mouse
        word.currentX -= dx / distance * force;
        word.currentY -= dy / distance * force;
      } else {
        // If the word is outside the repel range, gradually moving it back to its original position using linear interpolation
        word.currentX = lerp(word.currentX, word.x, 0.1);
        word.currentY = lerp(word.currentY, word.y, 0.1);
      }

 

3. Disintegrating Words

Finally, I wanted the words to break apart and float away, like a spell was cast on them. To do this, I turned each word into a set of particles that would fly off the screen when triggered. I mapped each word’s size to a number of particles and gave them random velocities to make the effect look more natural. This disintegration is accompanied by a famous Harry Potter quote (loaded from the file quotes.txt) being displayed on the screen.

function disintegrateWords() {
  // Converting each word into particles
  for (let word of [hogwartsWord, ...placedWords]) {
    // Calculating the number of particles to generate based on the word's font size
    // Larger font sizes produce more particles (between 5 and 50 particles)
    let particleCount = map(word.fontSize, 10, 120, 5, 50);
    for (let i = 0; i < particleCount; i++) {
      // Pushing a new particle object into the particles array with initial properties
      particles.push({
        x: word.currentX,
        y: word.currentY,
        vx: random(-2, 2),
        vy: random(-5, -1),
        size: random(6, 10),
        color: word.color
      });
    }
  }
  
  // Clearing the words
  placedWords = [];

  // Selecting and displaying a random quote
  currentQuote = random(quotes);
  // Splitting the quote into lines of about 40 characters each
  currentQuote = splitQuoteIntoLines(currentQuote, 40);
  quoteStartTime = millis();
  quoteEndTime = quoteStartTime + quoteDisplayDuration;
}
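
disintegrateWords() only creates the particles; the per-frame update isn’t shown. Here is a minimal sketch of what it could look like, assuming it is called from draw() and that gravity pulls the particles down until they leave the canvas, as described in the concept above (this is my sketch, not the original code):

// Hedged sketch: updating and drawing the particles, called once per frame from draw()
function updateParticles() {
  for (let i = particles.length - 1; i >= 0; i--) {
    let p = particles[i];
    p.vy += 0.1; // gravity gradually pulls each particle downward
    p.x += p.vx;
    p.y += p.vy;
    noStroke();
    fill(p.color);
    ellipse(p.x, p.y, p.size);
    // Remove the particle once it has fallen below the bottom of the canvas
    if (p.y > height + p.size) {
      particles.splice(i, 1);
    }
  }
}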

Reflections and Further Improvements

One thing I’d like to improve is the collision detection when placing words. Right now, the words sometimes get placed in awkward spots if they shrink too much. I’d like to refine the placement system so the words always look balanced on the screen. It might be a good idea to have words be more crowded around the centered word (Hogwarts) and then spread out gradually, as sketched below. Also, I think adding more magical effects, like words changing color or spinning, would enhance the overall experience.
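
A rough idea for that centered layout: instead of sampling positions uniformly, placeWords() could draw them from a Gaussian around the canvas center, so density falls off toward the edges. A sketch of the change, untested against the rest of the code:

// Hedged sketch: replace the uniform random position in placeWords()
// with a Gaussian centered on the canvas, clamped to stay on screen
let x = constrain(randomGaussian(width / 2, width / 5), 0, width);
let y = constrain(randomGaussian(height / 2, height / 5), 0, height);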

This project helped me better understand how interactive elements can transform a simple idea into something engaging and dynamic. I look forward to adding even more magical effects to this canvas in the future!

References

https://www.bloomsbury.com/uk/discover/harry-potter/fun-facts/harry-potter-glossary/

Assignment 4: NYUAD Cookbook

Concept

The other day I was wandering around Daymart with my friend who was trying to pick ingredients to cook soup. Looking at other people in the store, I noticed that some products are more popular than others, so I decided to create a humorous (anti) cookbook. When the reader clicks the mouse, a new recipe is generated with three random ingredients and two random actions from the lists that I have created.  The generated recipe is always absurd, and a funny image on the right complements this vibe.

Disclaimer: do not attempt cooking anything according to the recipe from this cookbook.

Highlight of the code I am proud of

Although the idea is not very complex, I tried adding something that was not included in any of my previous works: an image. It took some time, because I did not know that two preload() functions cannot exist in the same sketch, since the later definition overrides the earlier one. I kept moving the image preload to different segments of the code for a while, but then I managed to figure out how to solve the issue: load both assets inside a single preload().

function preload(){
  // loading text from the file
  strings = loadStrings("words.csv"); 
  // loading image
  img = loadImage('cookbook.jpg');
}

Sketch

Reflection

I am satisfied with the final result because it does make the user smile when reading all the generated recipes. One thing I would want to add is a similar method of randomising the image on the second page – instead of a single photo, it could be an actual illustration of the recipe. I am not sure how to do this yet, but I guess that several images would have to be linked to specific items from the lists in order to create an actual representation of the dish (see the sketch below).
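
One way this could work, sketched here with made-up ingredient and file names: preload one image per ingredient and display the image matching an ingredient from the generated recipe. This is only an illustration of the approach, not working project code.

// Hedged sketch with hypothetical ingredient and file names
let ingredientImages = {};

function preload() {
  strings = loadStrings("words.csv");
  // One image per ingredient, keyed by the ingredient's name
  ingredientImages["tomato"] = loadImage("tomato.jpg");
  ingredientImages["noodles"] = loadImage("noodles.jpg");
}

// Show the image for the recipe's first ingredient on the right-hand page
function drawRecipeImage(firstIngredient) {
  let img = ingredientImages[firstIngredient];
  if (img) {
    image(img, width / 2, 0, width / 2, height);
  }
}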

Assignment #4: Watch Out!

Concept & Inspiration.

For this project, my main goal was to create generative text that transforms everyday, mundane things we’re so used to into something unexpectedly funny and slightly absurd.

When thinking about text we frequently encounter, warning signs came to mind. They are a common part of our daily lives, but we have become so accustomed to them that we barely notice what is written anymore. In this project, I wanted to experiment with something as rigid and conventional as warning signs, turning them into something absurd through generative text.

Perhaps my surreal warning signs might catch the attention of passersby, if only for a moment?..

Highlights.

In this code, I worked with loading files in p5.js to import both my spreadsheet (containing three rows of random text) and a custom font (Heavitas), using the loadFont() and loadStrings() functions we learned in class throughout the week.

This is the first project where I focused on making the code as concise and efficient as possible by identifying repetitive patterns. To achieve this, I used a technique we didn’t cover in class but found intuitive—an array of arrays, let rows = [[], [], []];.

Additionally, I applied concepts from previous lessons to optimize the use of for() loops in various parts of the code, which I’m particularly proud of.

// Splitting each row into separate words and storing them in the 'rows' array
for (let i = 0; i < 3; i++) {
  rows[i] = split(strings[i], ',');
}

// Setting up an array to store the randomly generated text for three rows
let signText = ["", "", ""];

// Building a function to generate a random sign from three rows
function generateRandomSign() {
  for (let i = 0; i < 3; i++) {
    signText[i] = rows[i][int(random(rows[i].length))]; // Random selection for each row
  }
}
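
For context, the loading and interaction code around the snippet above might look roughly like this; the spreadsheet and font file names are assumptions based on the description, not the actual project files:

// Hedged sketch of the surrounding setup; file names are assumed
let strings;
let heavitas;
let rows = [[], [], []];

function preload() {
  strings = loadStrings("warnings.csv"); // spreadsheet with three rows of random text
  heavitas = loadFont("Heavitas.ttf");   // custom font
}

function mousePressed() {
  generateRandomSign(); // a new random warning sign on every click
}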

Reflection.

Although this project may seem simple, the code was challenging to work with as I struggled for a while to shorten it without removing any functionality. Nonetheless, despite these difficulties, I enjoyed the process, especially at the initial stage, when coming up with random words and imagining the crazy possibilities of their combinations.

In the future, I would like to focus more on making the text responsive and adding real interactivity. I would also love to incorporate a warning pattern animation to make the code more visually engaging when I have more time. Overall, working on this assignment was extremely valuable for my coding skills, as I can see myself using generative text for other projects, like creating a mini-game.

Embedded Sketch.

Mouse click to generate a random warning sign.

Assignment 4: Box office fireball

Concept

My goal was to practice data visualization in a playful way. I had a clear idea from the start: I wanted a Marvel character to fire a ball from its hand, with the ball’s size determined by the box office of the movie.

I accessed a Marvel box office dataset from Kaggle. I deleted unnecessary information such as the length of each film, opening weekend gross, release date, etc., and kept only the box office gross and the corresponding movie title (24 in total). I chose the Hulk as my character because he is the easiest to draw.

Unfortunately, a superhero’s screen time is determined not by what cool power they have, but by their ability to make audiences pay $20 to walk into the cinema.

Highlighted code

When the user clicks on the screen, a random movie name from the sheet and a fireball whose size is based on that movie’s box office are generated. For example, The Incredible Hulk has a fireball that is way smaller than Captain America’s.

The font is accessed from here.

Please click on the canvas.

At first, I didn’t set the boolean variable isFiring, so the ball always fired before the mouse was pressed and would not reset once it exited the canvas. I set the variable to false at the beginning, set it to true when the mouse is pressed, and set it back to false when the ball exits the frame so that it can be reset.

let isFiring = false; // boolean to check if the fireball is in motion

// when the mouse is pressed:
isFiring = true;

// reset the fireball once it exits the canvas:
if (circleY < circleSize) {
  isFiring = false; // stop the fireball animation so it can fire again
}

I also used max(float(...)) and min(float(...)) to determine the range of the box office data, so that each fireball can be mapped to fit the size of the screen. Thus, the overall fireball animation looks like this:

function fireballAnimation() {
  // Map the box office size to a reasonable circle size
  let boxOffice = movies.getColumn('gross_us');
  let max_box = max(float(boxOffice));
  let min_box = min(float(boxOffice));
  circleSize = map(boxOfficeSize, min_box, max_box, 1, 50);
  print(boxOfficeSize, selectedMovie); // just testing that they correspond

  // Draw the fireball
  fill(255, 100, 0); // fireball color
  strokeWeight(2);
  ellipse(circleX, circleY, circleSize);

  // Move the fireball upwards (decreasing y), so the reset below can trigger
  circleY -= 5;

  // Reset fireball once it exits the canvas
  if (circleY < circleSize) {
    isFiring = false; // stop the fireball animation
  }
}

Improvements

1. Character variety

It would be great if the character could change based on the movie title on the right. Maybe assigning an if statement to each character might work, as sketched below.
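
A rough sketch of that idea, where drawHulk() and drawCaptainAmerica() are hypothetical drawing functions, not code from this project:

// Hedged sketch: pick the character based on the selected movie title
function drawCharacter(selectedMovie) {
  if (selectedMovie.includes("Hulk")) {
    drawHulk();
  } else if (selectedMovie.includes("Captain America")) {
    drawCaptainAmerica();
  } else {
    drawHulk(); // fall back to the default character
  }
}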

2. Losing stroke

I’m not quite sure why the right arm does not have a stroke weight when the canvas is restarted.


Reading Reflection – Week #4

Personally, I found the first chapter of Don Norman’s The Design of Everyday Things to be an insightful introduction to human-centered design and user experience. However, some aspects of it struck me as somewhat contentious. Having already read some of my peers’ reflections on the infamous “Norman Doors” (which are also found on our campus), I was intrigued by the mixed reactions. Some people found the design intuitive and user-friendly, while others have consistently struggled with it over time.

This divergence reminded me of my early experiences with social media. When I was 9, I had an account on the Russian social network VK, which (even then) I found incredibly easy to navigate. I also had a Facebook account at the same time, but I could never quite figure it out due to its cluttered and complex design. Interestingly, when I later traveled to Armenia, I discovered that many people there had the opposite perspective: they viewed VK’s interface as poor in terms of user experience, while Facebook was considered relatively straightforward. To this day, I still find Facebook extremely annoying to use, overwhelmed by its numerous buttons and pages whenever I open the sidebar or attempt to navigate through it.

This brings me to a broader question: is user experience really as universal as Don Norman suggests? Are conceptual models — mental representations of how objects work, based on users’ past experiences — the only thing that determines the value of a design? How do cultural differences influence or limit the usability of objects? When discussing Norman Doors, whom do we consider the elusive “ideal user”? Is there truly a universal standard by which almost anyone can be considered the model for evaluating a design’s discoverability?