Week 5 – Reading Response (mss9452)

This reading made me realize just how different “seeing” actually is from one observer to the next, whether human or machine.

As a human, I don’t have to think about the process of seeing. I see a person, and I immediately understand a face, an expression, perhaps even an intention. But computer vision doesn’t see meaning; it sees numbers. The article clearly states that high-level image understanding is still a very difficult task, whereas low-level image understanding is much more feasible. A computer doesn’t see “a person walking”; it sees differences in pixel values.

For instance, frame differencing involves subtracting one frame from another to identify movement. This seems very mechanical in contrast to how easily we, as humans, identify movement. Background subtraction involves comparing a live image with a pre-stored image of the background to identify what is out of place. As humans, we can easily identify a person regardless of how much the lighting changes. For a computer, however, lighting, contrast, and physical setup matter enormously.
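Outside of p5.js, the core arithmetic of both techniques can be sketched in a few lines of plain JavaScript. The tiny grayscale arrays and the threshold value below are made-up examples, not anything from the reading:

```javascript
// Frames represented as flat arrays of grayscale pixel values (0–255).
const THRESHOLD = 30; // minimum change that counts as "movement" (arbitrary)

// Frame differencing: compare the current frame to the previous frame.
function frameDifference(prev, curr) {
  return curr.map((v, i) => (Math.abs(v - prev[i]) > THRESHOLD ? 1 : 0));
}

// Background subtraction: compare the current frame to a stored background.
function backgroundSubtract(background, curr) {
  return curr.map((v, i) => (Math.abs(v - background[i]) > THRESHOLD ? 1 : 0));
}

// A tiny 4-pixel "scene": one pixel brightens sharply between frames.
const background = [10, 10, 10, 10];
const previous   = [10, 10, 10, 10];
const current    = [10, 200, 10, 10];

console.log(frameDifference(previous, current));      // [ 0, 1, 0, 0 ]
console.log(backgroundSubtract(background, current)); // [ 0, 1, 0, 0 ]
```

Both functions output a binary mask marking where something changed, which is exactly why lighting shifts are such a problem: a global brightness change flips every pixel past the threshold, not just the moving person.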

One thing I found particularly interesting is that rather than trying to make computers see the world as we do, the reading proposes that we design the physical world so that it is easier for the computer to see: high contrast, controlled lighting, reflective surfaces. This is a reversal of the usual situation: rather than trying to make the algorithm “smarter,” we are trying to make reality more computable. I find this fascinating because interaction design is not just about the digital world; it is also about the physical world.

With interactive art, however, this becomes even more complicated. Tracking technologies can enable very powerful experiences of embodiment, as in early works like Videoplace, where silhouettes and motion become interactive elements. I find it very appealing that meaningful artistic experiences can emerge from such simple detection techniques. The system does not fully “understand” the body; it simply tracks enough information to react to it.

At the same time, the reading points out works such as Sorting Daemon, which emphasize surveillance and profiling. This was somewhat uncomfortable. The same technology that enables playful interaction can also extract, categorize, and analyze individuals. In the context of interactive art, being tracked can be very engaging. In other contexts, it can be very invasive.

I think it’s this tension that makes computer vision so potent in interactive media. It turns the body into data, and that data can be either expressive and interactive or controlling and analytical. As artists and designers, we’re not simply using tracking as a tool; we’re making choices about visibility and power.

This reading has made me more conscious of the fact that computer vision isn’t about simulating human vision. It’s about finding patterns that the machine can calculate. And perhaps interactive art is where human vision and machine vision intersect.

Week 5 – Midterm Progress (mss9452)

Concept:

For my project I plan to redesign the classic game “Snake,” with a few twists of my own.

  1. The snake is represented as glowing circular outlines, moving like a light source.
  2. For the background I intend to add Islamic geometric patterns for the aesthetics, though I haven’t implemented them yet.
  3. More features will be added to make the game more engaging and interesting.

UI Design:

Interactions remain the same as the classic game:

  • Arrow Keys to control direction
  • Mouse click to start/restart game
  • T key to toggle theme

I decided to keep the interactions simple to preserve the familiarity of the game, in the hope that the visual redesign shifts the experience from purely arcade-like to more atmospheric and reflective. Additionally, there’s a time counter to make players more aware of duration rather than just score.

Code Structure:

I tried to make the program more organized by splitting it into different layers, where each part has a specific responsibility:

  1. Game logic: handles movement, growth, collision, and fruit spawning.
  2. Rendering: draws the background, glowing snake, grid display, and UI overlay.
  3. Interaction: handles key input and theme toggling.

I separated these layers to avoid confusion and to make it easier to add features later.
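As a rough illustration of what the game-logic layer does, one movement/growth/collision step can be sketched in plain JavaScript, independent of the p5.js rendering. All the names and the grid size below are placeholders of mine, not the actual project code:

```javascript
const GRID = 20; // hypothetical board size in cells

// One logic step: move the head, check collisions, grow when eating fruit.
// snake is an array of {x, y} grid cells, head first; dir is a unit vector.
function step(snake, dir, fruit) {
  const head = { x: snake[0].x + dir.x, y: snake[0].y + dir.y };

  // Wall collision or self collision ends the game.
  const hitWall = head.x < 0 || head.y < 0 || head.x >= GRID || head.y >= GRID;
  const hitSelf = snake.some(s => s.x === head.x && s.y === head.y);
  if (hitWall || hitSelf) return { snake, gameOver: true, ate: false };

  const ate = head.x === fruit.x && head.y === fruit.y;
  const body = ate ? snake : snake.slice(0, -1); // keep the tail only when eating
  return { snake: [head, ...body], gameOver: false, ate };
}

// Example: a snake moving right onto a fruit grows by one segment.
const result = step([{ x: 5, y: 5 }, { x: 4, y: 5 }], { x: 1, y: 0 }, { x: 6, y: 5 });
console.log(result.ate, result.snake.length); // true 3
```

Keeping this step function pure (state in, new state out) is what makes the rendering layer swappable: the glowing-circle look can change without touching movement or collision rules.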

Uncertain / Complex Parts:

The thing I’m most worried about is adding sound to the game, since several different events need it:

  • fruit collection
  • collision
  • background sound

I’ve identified several issues, including the timing of the sounds, possible browser audio restrictions (sound often can’t start before a user interaction), and making sure the audio is not distracting.

To reduce this risk early on, I experimented with basic p5.js sound playback on its own. I tested playing simple oscillator tones triggered by mouse interaction to make sure that:

  • Audio plays back properly after user interaction.
  • There are no console errors.
  • Sound playback can be controlled in short bursts.
  • Sound playback can synchronize with game events such as fruit pickup.

By testing sound playback independently before wiring it into the full game logic, I avoided the risk of breaking the main system later on.
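The p5.sound playback itself only runs in a browser, but the part I was most unsure about, controlling playback in short bursts synced to game events, boils down to triggering logic that can be sketched on its own. This is just my sketch of that idea; the function names and the cooldown value are invented:

```javascript
// Returns a function that decides whether a sound may fire right now,
// so rapid repeated events (e.g. quick fruit pickups) don't stack noisily.
function makeTrigger(cooldownMs) {
  let lastFired = -Infinity; // timestamp of the last allowed trigger
  return function shouldPlay(nowMs) {
    if (nowMs - lastFired < cooldownMs) return false; // still in cooldown
    lastFired = nowMs;
    return true;
  };
}

const fruitSound = makeTrigger(100); // invented 100 ms gap between triggers

console.log(fruitSound(0));   // true  – first event fires
console.log(fruitSound(50));  // false – inside the cooldown window
console.log(fruitSound(200)); // true  – enough time has passed
```

In the real game, a `true` result would be the point where the oscillator (or a loaded sound file) actually plays; the gate itself has no audio dependency, so it can be tested without a browser.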

Week 1 – Self-Portrait (mss9452)

My concept: My self-portrait is a very simple one; I got the idea for the shape from the koala example we saw in class. I wanted to make it look cool (even though that didn’t go so well), so I added an interaction where a sword appears. I got this idea while playing a video game called “Elden Ring,” where one of the characters makes swords appear.

A section of the code I feel a bit proud of is this:

// drop the sword down until it reaches its resting position
if (hasSword && swordPosY < swordStopY) swordPosY += 6;
// retract the sword back up off-screen when it is put away
if (!hasSword && swordPosY > -200) swordPosY -= 6;

It was quite confusing to get right, and it took time to find a good position and make it move the way I wanted.
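To double-check why this works, the drop can be simulated frame by frame outside of p5.js. The start and stop positions below are example values of mine, not necessarily the ones in the sketch:

```javascript
// Simulate the sword-drop update until it settles at its stop position.
let swordPosY = -200;   // starts off-screen above the canvas (example value)
const swordStopY = 100; // example resting position
const hasSword = true;

let frames = 0;
while (hasSword && swordPosY < swordStopY) {
  swordPosY += 6; // the same per-frame update as in draw()
  frames++;
}
console.log(frames, swordPosY); // 50 100
```

With these numbers the sword travels 300 pixels at 6 pixels per frame, so it lands exactly on `swordStopY` after 50 frames; if the distance weren’t a multiple of 6, it would overshoot slightly and then stop, which is worth knowing when picking the speed.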

How this was made: I tried to keep the portrait simple, so I mostly used basic shapes like ellipse(), arc(), and rect() from the reference section of the p5.js website. I also used AI to look for functions I could use for the sword, since that seemed faster than searching myself; it suggested some functions that were new and complicated to me. From its suggestions I ended up using mousePressed() to move the sword, and I remember seeing something similar used in last week’s class. Another function it suggested was lerp(), which I didn’t use.

Reflection: I’d never used JavaScript before, so this was a new experience for me, especially with the p5.js library, where I learned how to arrange shapes and manage coordinates. Making a portrait from such basic pieces was really fun. If I had more time, I would have tried giving the sword another animation instead of just dropping down. I also could have made the facial expression more personal and less goofy, and built a better background, like a night sky with the moon and stars.