Reading Reflection 4

Computer Vision for Artists and Designers:

In reflecting on this paper on computer vision, I find its potential utility for artists and designers both compelling and distinct from human vision. The difference between computer vision and human vision mostly comes down to how information is taken in: humans interpret what they see intuitively through their senses, while computers need explicit algorithms to make sense of raw pixel data. But once that data is processed, we can program the computer to trigger a specific action based on what it sees.

A lot of the techniques in the paper revolve around pixel tracking, which is basically comparing each pixel's colour against a predefined target colour to find matches. This could be useful in something like a salad-sifting machine, where I could have the system recognize red pixels as tomatoes and remove all the red objects, essentially removing all the tomatoes from my salad.
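
Here is a minimal sketch of that kind of colour matching in p5.js, assuming a webcam feed; the red thresholds are placeholder values I would have to tune.

let video;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  // sample every 4th pixel for speed and mark the ones that look "red"
  for (let y = 0; y < video.height; y += 4) {
    for (let x = 0; x < video.width; x += 4) {
      let i = (y * video.width + x) * 4;
      let r = video.pixels[i];
      let g = video.pixels[i + 1];
      let b = video.pixels[i + 2];
      if (r > 150 && r > g + 60 && r > b + 60) {  // crude "tomato" test: red dominates
        stroke(255, 255, 0);
        noFill();
        rect(x - 2, y - 2, 4, 4);
      }
    }
  }
}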

As for how computer vision’s ability to track and surveil affects its use in interactive art, I think it’s a double-edged sword. On one hand, it’s amazing for creating immersive, responsive art that can change depending on how people interact with it—like tracking movement or emotions to alter the artwork in real-time. But at the same time, the idea of constant surveillance can be slightly problematic, especially in art spaces where people want to feel free and unobserved. So, there’s this tension between using computer vision to enhance interactive experiences and making sure it doesn’t cross any lines when it comes to privacy.

Reading Response 5:

Computer vision and human vision differ in interesting ways. While human vision is natural and intuitive, allowing us to recognize patterns and emotions effortlessly, computers need specific algorithms to make sense of images. For instance, what we instantly understand as motion or objects, computers detect through methods like frame differencing or background subtraction. I honestly find it interesting how rigid and task-specific computer vision is compared to our flexibility. Furthermore, to help computers "see" what we want, artists use techniques like brightness thresholding or background subtraction, and sometimes adjust the physical environment with better lighting or reflective markers.
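
A rough sketch of frame differencing in p5.js, assuming a webcam feed; the threshold value of 40 is just an assumption I would tune by hand.

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);  // keep the canvas pixel array the same size as the video's
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  let threshold = 40;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // compare the brightness of each pixel in the current and previous frame
    let curr = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    let prev = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    let v = abs(curr - prev) > threshold ? 255 : 0;  // white wherever something moved
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
  // remember this frame so the next one can be compared against it
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
}
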
Moreover, in interactive art, computer vision creates exciting new opportunities but also brings up ethical questions. For instance, Videoplace used computer vision to create playful, full-body interactions, while Standards and Double Standards used it to explore themes of authority and surveillance. A question that popped into my mind: when you consider the ability of these systems to track movements and gestures, do you feel like the line between creative interaction and surveillance can sometimes blur? This reminded me of the movie M3GAN, where AI uses computer vision to care for a child, but the surveillance becomes invasive. We might see something similar with interactive art or technology, where systems that are meant to engage us could start to feel more like surveillance. Hence, it's an interesting balance between enhancing the experience and respecting privacy.

MIDTERM PROGRESS

CONCEPT:

I couldn't decide between a game or an artwork for my midterm project. However, while playing on my phone, I remembered a game I still play to this day: "Geometry Dash". My siblings and I grew up around this game and we still love it, and since the game's design is basically built from shapes, I thought it would be the perfect game to try to re-create.

The main idea is to control a character that moves through a series of repeating obstacles. The difficulty increases over time as the player's score rises, with obstacles speeding up and appearing more frequently. There's also a feature where the game randomly switches to an upside-down mode, adding unpredictability and complexity to keep the player engaged.

Design:

So far, I haven't worked much on the visual design, only the layout of my game. I want a concept similar to the actual game, where the background is a gradient that changes colour in time with the beat of the background music and the obstacles. As for the obstacles, I've left them as simple shapes, rectangles and spikes, just to test everything out and see how it flows in the game. For sound, I found the original music used in Geometry Dash online and implemented it in my game, along with a sound effect when the player dies. However, I still need to fix the background music so that it stops when the player dies and only resumes when a new run starts; since I used the loop() function, it currently just keeps playing non-stop.
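
Here is a minimal sketch of how I plan to fix that, assuming the track is loaded with the p5.sound library; the file name and the playerDied()/restartGame() hooks are placeholders for my own game logic.

let bgMusic;

function preload() {
  bgMusic = loadSound('geometry-dash-theme.mp3');  // placeholder file name
}

function playerDied() {
  if (bgMusic.isPlaying()) {
    bgMusic.stop();  // cut the looping track as soon as the player dies
  }
}

function restartGame() {
  if (!bgMusic.isPlaying()) {
    bgMusic.loop();  // start looping again when a new run begins
  }
}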

This is the inspiration for my design and how I would like it to turn out in the end.

User Interaction:

My user interactions are basically the player's input in the game. The player must press the spacebar to jump; if the spacebar is held down, the player keeps jumping until the key is released. I've placed the instructions and text at the beginning, so the game starts when the player presses the spacebar on the start screen, and after a game is over, pressing "1" restarts it. I still need to work on the visual design of the start page; for now, I've left it as plain text. I've also added a score counter displayed at the top of the screen, which increases by one each time the player successfully passes an obstacle. In the game, the obstacles appear from the right side of the screen, and the player must jump to avoid them. The game also randomly switches to an upside-down mode at higher scores, adding an extra challenge, but I still think I need to make it more complex and play around with the obstacles, as I fear the game might be too simple and boring the way it is now.
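
A rough sketch of the input handling described above; variable names like gameState, player, and resetGame() are assumptions about my own structure rather than final code.

function keyPressed() {
  if (key === ' ' && gameState === 'start') {
    gameState = 'playing';  // spacebar starts the game from the start screen
  } else if (key === '1' && gameState === 'gameover') {
    resetGame();  // "1" restarts the game after a game over
  }
}

function draw() {
  // holding the spacebar keeps the player jumping while they are on the ground
  if (gameState === 'playing' && keyIsDown(32) && player.onGround) {
    player.jump();
  }
  // ... rest of the game loop
}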

The Most Difficult Part of the Project:

The hardest part of making this game has been figuring out how to make the difficulty increase smoothly. I want the game to get harder as you play and stay engaging throughout, but I also need to make sure it doesn't become too hard too soon, to the point where it just gets frustrating.

Collision Detection (When the Player Hits an Obstacle):

The other tricky part is making sure the game knows when the player hits an obstacle, especially the spikes. The spikes are drawn as triangles, but I treat each one as if it were inside an invisible rectangle (a bounding box) that surrounds it, which makes collision detection much simpler. I used the collideRectRect() function from the p5.collide2D library, which checks whether two rectangles overlap. So even though the spike is a triangle, the game checks whether the player's rectangle overlaps the rectangle around the spike, and if it does, the game registers a hit. The same goes for the rectangle obstacles.
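
A simplified version of that check; the idea that player and obstacle each store x, y, w, and h for their bounding boxes is an assumption about my own objects.

function hitsObstacle(player, obstacle) {
  // collideRectRect() (from p5.collide2D) returns true if the two boxes overlap
  return collideRectRect(
    player.x, player.y, player.w, player.h,
    obstacle.x, obstacle.y, obstacle.w, obstacle.h
  );
}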

How I Made It Less Risky:

To make sure the game doesn’t get too hard too fast, I tested how quickly the obstacles speed up and how often they appear. By setting limits on both, I made sure that the game gradually gets harder, but not too difficult right away.
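
A sketch of the capped difficulty curve I have in mind; obstacleSpeed, spawnInterval, and the exact numbers are placeholders I am still tuning.

function updateDifficulty() {
  // speed grows with the score but is clamped so it never becomes unplayable
  obstacleSpeed = constrain(5 + score * 0.1, 5, 12);
  // obstacles spawn more often as the score rises, down to a minimum gap (in frames)
  spawnInterval = constrain(90 - score, 45, 90);
}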

 

Code so far:

 

 

Week 5 reading

This reading was instrumental in my understanding of how computer vision techniques can be harnessed in the realm of interactive art and design.

One of the most enlightening aspects of the article was its clear explanation of the fundamental differences between computer and human vision. Understanding these distinctions helped me grasp why certain approaches are necessary when implementing computer vision in artistic contexts. The emphasis on the limitations of computer vision systems, such as their struggle with environmental variability, underscored the importance of thoughtful design in both the physical and digital realms.

The article's discussion of various techniques to optimize computer vision for artistic applications was particularly valuable. Levin's explanations of methods like controlled lighting and detection algorithms provided me with a toolkit of practical approaches. This knowledge feels empowering, as it opens up new possibilities for creating interactive artworks that can reliably detect and respond to elements in a scene.

The ethical considerations raised in the article regarding tracking and surveillance capabilities of computer vision were thought-provoking. Levin’s examples of artists like David Rokeby and the Bureau of Inverse Technology, who have used these technologies to comment on surveillance culture and social issues, inspired me to think about how I might incorporate similar critical perspectives in my own work.

Furthermore, the range of artistic applications presented in the article, from full-body interactions to facial expression analysis, expanded my understanding of what’s possible with computer vision in art. These examples serve as a springboard for imagining new interactive experiences and installations.

In conclusion, this reading has significantly enhanced my understanding of computer vision in the context of interactive art. It has equipped me with technical knowledge, practical approaches, and critical perspectives that I’m eager to apply in my own creative practice.

Reading Reflection 3

The Design of everyday things:

In “The Design of Everyday Things,” Don Norman critiques the reliance on logic in engineering design, arguing that effective design should anticipate and account for human error. While I understand his perspective, I find myself disagreeing with the notion that design flaws are solely to blame for user errors. Given the vast variability in human behavior, it’s nearly impossible to design for every possible error. For example, consider a standard hinged door: while it might pose no issue for an average person, a shorter individual may struggle with a handle positioned too high. Adjusting the handle height to accommodate one group could inadvertently create challenges for another.

That said, I agree that designers should strive to make their products as intuitive as possible for the average user. This brings me to my frustration with mixer grinders, which I find notoriously difficult to manage. Each new brand presents a unique setup process, often leading to confusion and errors. I believe the design of these devices could be greatly improved by using magnetized parts for easier assembly and reducing the number of buttons to just a power switch and perhaps a safety mechanism, as well as one additional button for varying power levels.

Additionally, one of Norman’s design principles that could enhance interactive media projects is the use of intuitive icons on buttons. These icons should visually convey the action triggered by the button, making it easier for users to understand and interact with the interface.

 

Assignment 4: Too much Espresso

Concept:

In this project, I visualized the frequency of Google searches for the word “espresso” since the beginning of 2024. My inspiration stemmed from the popularity of Sabrina Carpenter’s song “Espresso,” which has captured attention and sparked interest since its release early this year. This trend led me to hypothesize that the search volume for “espresso” would similarly experience a notable increase.

To explore this hypothesis, I aimed to create a visual representation that illustrates the correlation between the song’s popularity and the search frequency of the term “espresso.” I envisioned an effect that mimics espresso pouring out of a teacup, with the volume of the pour symbolizing the number of searches. This is accomplished using circles: the larger the circle, the greater the volume of searches.

Highlight:

A key highlight of this project was ensuring that the color of the circles corresponded to the volume of searches for "espresso." I aimed to create a visual gradient where the shades of brown varied in darkness or lightness based on the search frequency. To achieve this, I mapped the espresso values to a color variable and passed that variable to fill() to set each circle's color.

// Color based on espressoValue with brown tones
let colorVal = map(dataPoint.espressoValue, 0, maxValue(), 10, 120); // Adjust the color range for darker tones
fill(colorVal, 40, 20); // More muted brown tones
noStroke();

Finding the right numbers for the brown tones was also a matter of trial and error.

Reflections:

The final sketch of this visualization organizes time in an ascending manner, with the top of the y-axis representing the beginning of 2024 and the lower end depicting the months leading up to the present. The size of the circles indicates the volume of searches, while the shades of brown inversely correlate with search frequency—darker shades represent lower search volumes, and lighter shades signify higher volumes. This relationship may appear counterintuitive to viewers, highlighting one of the significant flaws in this project.

In future iterations, I would aim to reverse this color representation for clearer communication of the data. Additionally, I would like to enhance the aesthetic of the espresso pouring from the cup to create a more natural and visually pleasing effect.
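
If I do revisit it, a small change to the mapping would probably be enough; this is just a sketch of the idea, reusing the variables from the snippet above with the output range flipped.

// higher search volumes now map to darker browns
let colorVal = map(dataPoint.espressoValue, 0, maxValue(), 120, 10);
fill(colorVal, 40, 20);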

READING

After reading, my initial thought was how, in the real world today, we are always competing to make inventions that already function smoothly look fancy and aesthetically futuristic. This causes us to overlook the importance of simple designs that just work. The examples given in the text, like doors, switches, and stoves, show how simple objects can become confusing when they are designed poorly just for the sake of appearance. Personally, I appreciate design that works easily, rather than overcomplicated designs that frustrate me. For example, when he talked about his friend's experience with a door, it really stood out to me because it shows how humans can take something simple and turn it into something people struggle with, which is frustrating.

For me, it’s annoying when designers care more about how things look than how they function. It feels like they’re taking away the object’s purpose. It’s not just me who feels this way, because older generations also struggle with this. They have a hard time catching up with how the world functions now, so adding complexity to something that’s supposed to be simple just makes it harder for them. Sometimes it feels like designers expect us to admire their work without caring if we can actually use it. What’s the point of a nice-looking door if I don’t know how to open it? This applies to all kinds of things, not just objects, but whether it’s a phone app or a media project, it needs to be easy to use.

In today’s world, many products are designed with minimalism in mind. While they may look cool and modern, they often hide important functions. For example, take Kim Kardashian’s sink design in her house. The sink looks cool and ultra-modern, but it’s confusing because it doesn’t function like a regular sink. The surface is completely flat with no visible basin, and the water drains into a small slit. While the design looks unique and futuristic, it takes away from the sink’s practicality. When I saw it for the first time, my first thought was, “How does the tap work?” This is a perfect example of minimalist design taken too far, where the design hides or complicates something as simple as a sink’s basic function. The goal of making something aesthetically pleasing can actually make its usability hard and confusing.

Overall, I think the ideas of “discoverability” and “understanding” are important when it comes to the process of designing something. Discoverability helps someone figure out what to do, while understanding helps them know how it works. To me, these two concepts are essential to good design. This also applies to the media work I want to do. If something isn’t clear, the message gets lost. Good design should be so simple and natural that you don’t even think about it. That’s something I want to keep in mind for any future projects I create, whether in film, media, or design. In the end, I think it’s important not to overlook the balance between how something looks and how well it works.

ASSIGNMENT #4 – GENERATIVE TEXT

CONCEPT:

For this assignment, I wanted to create generative art that involved text. While searching for inspiration, I came across Scott Garner's work on Creative Bloq, where he used p5.js to create an art piece with stars forming text against a night-sky backdrop. This inspired me to explore how stars, randomness, and interaction could be combined to form dynamic, visually engaging text. For my piece, stars scatter and form text shapes based on user interaction, blending order and randomness, just like Garner's example. I'll also give it a space theme and explore Gaussian distribution in the background.

EMBEDDED SKETCH:

HIGHLIGHT OF MY CODE:

The best part of my code is the interactive switching feature that lets people switch between arranged text and scattered stars. The interaction starts when you click a button, creating a dynamic, responsive experience where stars move from a random scatter to forming a word. This is how the code works:

// Function to scatter the white dots randomly across the canvas
function scatterDots(canvasWidth, canvasHeight) {
  whiteDots.forEach(dot => {
    dot.currentX = random(canvasWidth);  // Set random x position within canvas width
    dot.currentY = random(canvasHeight);  // Set random y position within canvas height
  });
}

// Function to reset the white dots and stop forming the text
function resetDots() {
  formingText = false;  // Stop forming the text
  scatterDots(500, 500);  // Scatter the dots randomly across the canvas again
}

// Detect mouse presses and trigger button actions
function mousePressed() {
  buttons.forEach(button => {
    if (button.isClicked(mouseX, mouseY)) {  // Check if the button is clicked
      button.action();  // Execute the action tied to the button
    }
  });
}

In the code snippet shown above, I used three key functions to create interactivity and movement:

  1. scatterDots(): This function scatters the white dots randomly across the canvas by assigning each dot a random x and y position within the canvas width and height. It adds randomness, simulating stars in the sky.
  2. resetDots(): This function resets the dots to scattered mode by setting formingText to false and calling scatterDots() again.
  3. mousePressed(): This function detects mouse presses and triggers button actions when the user clicks on the canvas, adding interactivity.

I used these functions to let users switch between random scatter mode and text formation mode based on button clicks. I figured out how to implement these by looking back at class PowerPoints and just experimenting until I got it right.
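
The part that actually pulls the dots into the word isn't shown above; here is a simplified sketch of it, assuming each dot also stores a targetX and targetY precomputed from the text's pixel positions (those names are my own placeholders).

function updateDots() {
  whiteDots.forEach(dot => {
    if (formingText) {
      // lerp() eases each dot a small step toward its letter position every frame
      dot.currentX = lerp(dot.currentX, dot.targetX, 0.05);
      dot.currentY = lerp(dot.currentY, dot.targetY, 0.05);
    }
    noStroke();
    fill(255);
    ellipse(dot.currentX, dot.currentY, 4, 4);
  });
}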

REFLECTION:

When I think back on this assignment, I like how the stars and the shooting star worked together to make a lively and interactive visual experience. The way the stars scattered and formed the text was just the right mix of randomness and order, just the way I imagined it would be. The shooting star's Gaussian motion made it look more natural and unpredictable.
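
The Gaussian motion amounts to something like the following, where shootingStar is assumed to be an object with x and y, and the numbers are illustrative.

function moveShootingStar(shootingStar) {
  shootingStar.x += 6 + randomGaussian(0, 1.5);  // mostly rightward, with noisy jitter
  shootingStar.y += 3 + randomGaussian(0, 1.5);  // mostly downward, with noisy jitter
}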

For future updates, I’d like to look into adding more complex interactions. For example, users could drag to control how the stars or shooting stars move. In addition, improving the shooting star trail could make it look more real by making it fade more smoothly over time.

REFERENCES:

Garner, S. (2016, August 10). Explore creative code with p5.js. Creative Bloq. https://www.creativebloq.com/how-to/explore-creative-code-with-p5js.

 

SOLAR SYSTEM

Concept:

For this assignment, I honestly had no clue what to create. At first, I wanted to create generative text; however, I couldn't think of a sketch, so I decided to do data visualization instead. My concept was straightforward, as I was still trying to understand the code, so I decided to do a simple solar system. The data isn't accurate; it's just for the sketch and the placements in p5. I first started with a blank black background with the planets orbiting, which was too basic, so I decided to use the lerp() function, inspired by Jheel's assignment from last week, to gradually change the colour to blue. Furthermore, I added shooting stars and normal stars to make it look more appealing.

Highlight:

The highlight of my code is the animation of the planets and setting up their orbits, as that was the hardest part to figure out. However, the PowerPoint and previous in-class work helped a lot, and without them, I would still be trying to make it work.

// Draw and animate planets orbiting the sun
for (let i = 0; i < planets.length; i++) {
  let planet = planets[i];

  // planet position based on orbit
  angles[i] += planet.speed;
  let x = sun.x + cos(angles[i]) * planet.distance;
  let y = sun.y + sin(angles[i]) * planet.distance;

  // Draw the orbit path
  stroke(255, 255, 255, 50);
  noFill();
  ellipse(sun.x, sun.y, planet.distance * 2);

  // Draw the planet
  noStroke();
  fill(planet.color);
  ellipse(x, y, planet.diameter);

  // Display planet name
  fill(255);
  textSize(12);
  text(planet.name, x + planet.diameter / 2 + 5, y);
}

Reflection:

For improvements, as you can see, the planets are going out of the canvas. I tried fixing it by making the orbits smaller, but then everything looked too tight, so I left it as it is. Also, I believe some user interaction would have been a great addition; as of now there isn't any. I should have maybe allowed users to control the orbiting of the planets using the mouse, or maybe the shooting stars.
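
If I added that, one possible approach (just an idea, not finished code) would be to map the mouse position to a speed multiplier shared by all the planets.

function orbitSpeedFactor() {
  // left edge slows the orbits down, right edge speeds them up
  return map(mouseX, 0, width, 0.2, 3);
}

// inside the loop shown above, instead of: angles[i] += planet.speed;
// use: angles[i] += planet.speed * orbitSpeedFactor();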

My design:

Reading Response 4

In Norman's text, he argues for human-centered design, focusing on creating products that are simple and straightforward to use by aligning the design with what users need and can do. In that context, one thing that drove me crazy (probably because I was really hungry) is how confusing digital appliances can be, like the air fryer I tried to use in the dorm. I expected it to be super easy, but instead, both my sister Amna and I had to spend ages trying to figure out the functions because the instructions were just vague images and the digital display wasn't clear. For someone who doesn't cook often, it was frustrating to the point where I had to search TikTok to find out how to use it, and even then it took ages because I had to find a similar air fryer. To fix this, I think appliances like this should have built-in, interactive tutorials. Imagine turning on the air fryer for the first time and having it guide you step-by-step on the screen, showing you exactly how to use it. That way, you wouldn't have to guess or waste time searching for help online.

In terms of applying Norman’s principles to interactive media, his ideas about affordances and signifiers would be super helpful. For example, in apps or websites, using clear icons or buttons that naturally show what they do would make everything easier to navigate. Also, feedback is key, like when you press a button, having a small animation or sound that lets you know the app is working on your request. It’s those little things that make the user experience smoother. Plus, having a simple design that allows users to quickly figure out how everything works without needing a tutorial, would make interactive media way more intuitive, kind of like how appliances should work right out of the box without you needing to look up instructions.