Week 5: Reading Response

The main difference between computer and human vision is that while humans can instantly recognize faces, objects, and contexts, a computer sees only raw pixel data unless explicitly programmed to interpret it. We can recognize people, expressions, colors, animals, and more at a glance, whereas computers have to be taught to do so, and even then only to a limited extent. To help computers “see” or track objects of interest, techniques such as frame differencing, background subtraction, and brightness thresholding are used to extract meaningful features from video input. These methods allow interactive systems to detect movement, isolate subjects, and respond dynamically to user actions, forming the foundation of many interactive media works. However, each of these techniques has limitations, such as sensitivity to lighting changes or reliance on high-contrast visuals, highlighting the need for carefully designed environments to optimize detection accuracy.
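To make one of these techniques concrete, here is a minimal background-subtraction sketch in p5.js. This is my own illustration rather than code from the reading, and the difference threshold of 80 is an assumption that would need tuning to the room: a reference frame of the empty scene is stored on a key press, and any pixel that differs strongly from it is flagged as part of the subject.

let video;
let bg; // snapshot of the empty scene, captured on a key press

function setup() {
  createCanvas(640, 480);
  pixelDensity(1); // keep the canvas pixel array the same size as the video's
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function keyPressed() {
  // Press any key while the scene is empty to store the reference background
  bg = video.get();
  bg.loadPixels();
}

function draw() {
  image(video, 0, 0);
  if (!bg) return;
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // A pixel that differs strongly from the stored background belongs to the subject
    let diff = abs(video.pixels[i] - bg.pixels[i]) +
               abs(video.pixels[i + 1] - bg.pixels[i + 1]) +
               abs(video.pixels[i + 2] - bg.pixels[i + 2]);
    if (diff > 80) { // assumed sensitivity; adjust for the lighting
      pixels[i] = 255;
      pixels[i + 1] = 0;
      pixels[i + 2] = 0;
    }
  }
  updatePixels();
}

Even this toy version shows why the reading stresses controlled environments: if the lighting shifts after the reference frame is captured, the whole scene starts to register as “subject.”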

Still, computer vision has improved vastly over the past few decades, and it has become much more accessible for artists, designers, and even beginner programmers to incorporate vision-based interactivity into their work. Software environments like Processing, Max/MSP/Jitter, and Macromedia Director provide frameworks for integrating computer vision, either through direct coding or via plug-ins and toolkits. Additionally, stand-alone applications like BigEye and EyesWeb expand these capabilities, putting real-time motion tracking and expressive gesture analysis within much easier reach.

In interactive art, computer vision’s capacity for tracking and surveillance introduces both creative potential and ethical concerns. Artists can leverage motion detection and object tracking to create immersive, responsive installations that engage audiences far more than just standing there staring. However, the same tools that enable interactivity can also raise concerns about surveillance, privacy, and data collection. Since interactive art often explores the relationship between technology and human experience, the ability of computer vision to monitor and track movement, especially without explicit user consent, raises questions about autonomy and control. What information is the computer storing? Can it be used against me? These are questions people who are being recorded don’t even have the chance to ask. Ultimately, while computer vision expands the possibilities for dynamic and participatory art, its application must be carefully considered to balance innovation and creativity with ethical responsibility.

Week 5 Reading

What are some of the ways that computer vision differs from human vision?

The biggest distinction is that while it is intuitive for us to distinguish a person from the background in a photo, it is comparatively harder for computers to make that distinction. This meant that early computer vision systems had to use movement to determine which pixels in a frame belonged to the object or person of interest and which were only part of the background. Furthermore, to detect movement, computers had to calculate the change in color or brightness of pixels between frames, whereas all of this is effortless for human vision.

What are some techniques we can use to help the computer see / track what we’re interested in?

frame differencing – comparing each frame with the previous one; pixels that changed indicate movement (see the sketch after this list).

background subtraction – stores a reference image of the original background scene and compares it with each captured frame; whatever differs from the reference is not part of the background and is likely the object of interest.

brightness thresholding – comparing each pixel’s brightness against a threshold value to separate objects that are distinctly lighter or darker than their surroundings.
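As a rough illustration of the first technique (my own minimal p5.js sketch, not code from the reading), frame differencing can be reduced to summing the per-pixel brightness change between consecutive webcam frames and treating the total as a motion signal:

let video;
let prevFrame;

function setup() {
  createCanvas(640, 480);
  pixelDensity(1);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  let totalMotion = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Brightness of this pixel in the current and previous frames
    let curr = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    let prev = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    totalMotion += abs(curr - prev);
  }
  // Remember the current frame for the next comparison
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
  image(video, 0, 0);
  // Draw a circle whose size responds to how much movement was detected
  fill(255, 0, 0);
  noStroke();
  ellipse(width / 2, height / 2, totalMotion / 50000);
}

The divisor 50000 is arbitrary; a real installation would look at which regions changed rather than just the overall total.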

How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

It allows for an extra dimension of interaction. Many artworks use this, including the one with the poem’s falling letters (Text Rain). It means that you can take more than just key or button inputs; by using computer vision to track human movements, you make the artwork more intuitive to operate (consider that people may not know they need to press a button, but if they see their own image on the projector, they already know how to control that shadow).

Reading Response 4 – Computer Vision for Artists and Designers (Week 5)

In his article, Levin delves into the relationship between code and creative expression, illustrating how coding and computation offer a unique medium for artists to explore new forms of interactivity and non-verbal communication. This perspective was particularly eye-opening for me, as it shed light on how computation is not just a tool for efficiency or automation but also a canvas for artistic exploration.

One of the most fascinating aspects discussed in the article was computer vision. While the term itself is somewhat new to me, I was surprised to learn that efforts to advance this field began over half a century ago. It is remarkable to realize that machines can now collect visual data and “interpret” it, mimicking human perception in ways that were once the realm of science fiction. Computer vision models allow computers to identify human features, recognize expressions, and even infer emotions—all of which have groundbreaking implications, not only for fields like surveillance and security but also for art. In interactive media, for instance, artists are using computer vision to create installations that respond dynamically to human presence, movement, or even facial expressions, transforming passive spectators into active participants in digital art.

However, despite its exciting artistic applications, computer vision carries an eerie undertone due to its origins. The fact that this field was initially a military endeavor makes its transition into the realm of creative expression feel somewhat uncanny. The same technology that was once developed for warfare—such as guiding missiles or identifying enemy targets—is now being used to make art installations more immersive. This contrast raises an unsettling question: can a technology born from conflict and control ever be fully dissociated from its original intent?

Beyond its history, the rapid advancement of computer vision presents an undeniable threat to human privacy. Today, no one is truly safe from being recognized, analyzed, and cataloged by ubiquitous surveillance cameras, facial recognition systems, and AI-powered security networks. What was once considered futuristic is now an everyday reality—public spaces are filled with CCTV cameras that can track individuals in real time, while social media platforms use facial recognition to tag people in photos automatically. While some of these applications serve practical or even artistic purposes, they also blur the boundaries between technological progress and ethical concerns. When does interactivity cross into intrusion? At what point does an artistic exploration of human expression become indistinguishable from surveillance?

Week 5 Reading: “Computer Vision for Artists and Designers”

Reading this article felt like taking a peek behind the scenes of how computers “see” the world, which is a process that’s really different from how we see it. Unlike our eyes, which automatically pick up on context, depth, and meaning, computers have to break everything down into pixels and simple data to be able to process it. They use techniques like frame differencing, background subtraction, and brightness thresholding to detect movement and distinguish objects from their surroundings. In other words, while we instantly recognize a face or a smile, a computer needs a lot of help to do even simple things like tell the difference between a moving person and a stationary background.
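To give a sense of how simple some of these techniques are at their core, here is a minimal brightness-thresholding sketch in p5.js (my own example, with an assumed cutoff of 128 that would need tuning to the lighting): every pixel brighter than the threshold is treated as the object, everything else as background.

let video;
let threshold = 128; // assumed cutoff; tune for the lighting in the room

function setup() {
  createCanvas(640, 480);
  pixelDensity(1);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Average the RGB channels to get a brightness value
    let b = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    // Brighter than the threshold means "object"; otherwise background
    let v = b > threshold ? 255 : 0;
    pixels[i] = v;
    pixels[i + 1] = v;
    pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}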

What really stood out to me was how much effort goes into making the computer’s vision work well. It’s much more than just writing code. It’s also about setting up the right physical conditions like using specific lighting, reflective materials, or even special cameras to boost the system’s accuracy. This mix of coding and physical tweaking shows that computer vision is as much an art as it is a science.

I also found it interesting how computer vision’s tracking and surveillance capabilities have shaped interactive media art. On one hand, these systems let artists create installations where your movements become part of the artwork (like in the classic Videoplace), but on the other hand, there’s a darker side: the same technology that can create immersive art experiences can also be used to monitor and profile people. This duality makes me think about the ethical implications and the balance between creating engaging art and respecting personal privacy.

To sum up, the article not only breaks down the technical side of computer vision for beginners but also opens up deeper questions about technology’s role in our lives, both as a creative tool and a way of surveillance.

Week 5 – Reading Response

Given how important sight is to humans when it comes to navigating and interacting with the world around us, granting similar abilities to a machine is a fascinating concept. Of course, it introduces just as many technical issues as it does opportunities, and what little I do know about computer vision and graphics is that it gets complicated very quickly. That aspect also shows just how impressive the human body is, since it takes huge amounts of work to create even a basic emulation of what comes naturally to us. The techniques mentioned in the reading (frame differencing, background subtraction, and brightness thresholding) seem somewhat straightforward, but they each rely on very specific criteria to achieve their purposes.

There were a number of points made throughout the reading that stood out to me. For one, the beginning of the text mentions that most early applications of computer vision were military in nature, since the cost of the technology at the time was prohibitive for anyone else. While the technology is now more available than ever, the potential for misuse is just as high; this has been seen in more general cases in the past few years, like AirTag stalking, Zoombombing, etc. Computer vision is a particularly bad case given how cameras are literally everywhere nowadays, ready to collect PII or otherwise violate our privacy. A less concerning point I liked was how, in order to optimize the digital performance of computer vision, you have to provide the right physical environment for it to work in. While it is rather obvious when you think about the requirements and constraints each technique has, I appreciated the duality of the process and the allusion to how “two in harmony surpasses one in perfection.”

Week 5 – Reading Response

The webpage on Flong.com, “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers”, explores techniques of computer vision for artists and programmers. Computer vision differs from human vision in many aspects. Computers can be made sensitive to wavelengths of light (like infrared) that humans cannot see, and they can be trained to process images and recognise patterns. However, a computer doesn’t have the same intuitive skill humans do, and tends not to understand complex or abstract concepts. We can use techniques like background subtraction, retroreflective materials, and brightness thresholding, mentioned in the reading, to help computers track what we are interested in. Computer vision tracking can significantly influence interactive media. Of course, as with most things, we have to consider ethical issues, especially when tracking might be done in public spaces. Computer vision can be extremely beneficial for data collection and for analysing audience behaviour, and advanced tracking can also be extremely useful for seamless communication in interactive media. Through combinations of physical inputs and well-thought-out creative concepts, we can create more advanced ideas in the field of interactive media. While computer vision has its limitations, we need to carefully understand its advantages and use these characteristics effectively.

Week 3 Project

Inspired by my floor’s theme, I am making Batman. I used object-based coding to make the rain appear to fall, and adding music also gave it the Batman-like mood I was looking for. The way that half of the rain falls in front of him and half falls behind him took me a while to conceptualize, so this was the part of the code I was proud of.

  // draw raindrops that go behind batman
  for (let drop of drops) {
    if (drop.z <= 1.5) {
      drop.fall();
      drop.show();
    }
  }
  // draw batman
  fill(0);
  noStroke();
  ellipse(200, 250, 100, 100);
  quad(400 - 250, 248, 250, 248, 400 - 120, 400, 120, 400);
  triangle(152, 240, 160, 170, 182, 240);
  triangle(400 - 152, 240, 400 - 160, 170, 400 - 182, 240);
  fill(255);
  quad(163, 250, 183, 250, 190, 260, 170, 260);
  quad(400 - 163, 250, 400 - 183, 250, 400 - 190, 260, 400 - 170, 260);

  // then draw the raindrops in front of batman
  for (let drop of drops) {
    if (drop.z > 1.5) {
      drop.fall();
      drop.show();
    }
  }
}
// making the raindrops fall
class RainDrop {
  constructor() {
    this.x = random(width);
    this.y = random(-height, 0);
    // perspective
    this.z = random(1, 3);
    // this makes the background slower than the foreground (map range matches z's range)
    this.speed = map(this.z, 1, 3, 2, 6);
  }
  // fall() and show() sketched in to make the excerpt runnable
  fall() {
    this.y += this.speed;
    if (this.y > height) this.y = random(-50, 0); // recycle the drop at the top
  }
  show() {
    stroke(200);
    line(this.x, this.y, this.x, this.y + this.z * 4);
  }
}
In terms of improvements, I definitely could have made Batman less minimalistic, and doing more with the rain to make it even more real (varying the thickness of the raindrops to account for perspective, and things like that) would add to the artwork. Lastly, adding some sort of more interactive element would have also made it better.

Week 4 – Reading Response

Reading Response: The Psychopathology of Everyday Things

Something That Drives Me Crazy: Confusing Shower Knobs

One of the most frustrating design flaws in everyday life is the lack of a standardized, intuitive shower knob system across different countries. If you’ve ever traveled internationally, you’ve likely encountered showers that seem to require a degree in engineering to operate.

In the United States, many showers have a single-knob system where you turn left for hot water and right for cold. However, some models require you to pull or push the knob, which isn’t always obvious. In the United Kingdom, separate hot and cold taps are still common, forcing you to mix the water manually. In Norway, for example, some showers have buttons that must be pressed in a particular sequence before water flows. The inconsistency means that travellers like me often accidentally scald themselves, turn on the water at full blast, or get an unexpected cold shock.

The biggest issue is discoverability. There’s often no clear indication of how the system works. Some showers even have extra knobs that control water pressure or temperature separately, adding to the confusion. Without obvious instructions, we are left to experiment, sometimes getting drenched unexpectedly in freezing or boiling water.

Applying Norman’s Principles to Improve Shower Design

Don Norman’s principles of discoverability, affordances, and signifiers could help improve shower designs:

  1. Clear affordances. The shape and placement of knobs should signify their function. A lever-style handle naturally invites turning, while a button clearly calls for pressing.
  2. Clear signifiers. Icons or labels could tell us which direction means which temperature, with simple red/blue colors universally suggesting hot and cold.
  3. Natural mapping. A horizontal sliding control that moves left for hot and right for cold would be more intuitive than rotating knobs in random directions.

Another potential hi-tech solution is a digital shower interface with a display, which could get rid of the confusion entirely.

Week 4 – Generative Text Output

Inspiration for This Project

I wanted to create an interactive and visually engaging experience that merges astrology with generative art. The idea was to provide users with a simple yet immersive way to receive a zodiac-based “psychic reading,” followed by animations and visuals. Astrology is often associated with mysticism and magic, so I aimed to reflect that via changing background colours and adding floating particles. For the visual part, I took my inspiration from this website: https://www.horoscope.com/us/index.aspx

Code Highlight I Am Proud Of

One part of the code I’m particularly proud of is the getZodiacColor function, which assigns a unique background color to each zodiac sign:

function getZodiacColor(sign) {
  let colors = {
    "Aries": color(255, 99, 71), "Taurus": color(107, 142, 35), "Gemini": color(255, 215, 0), "Cancer": color(70, 130, 180),
    "Leo": color(255, 165, 0), "Virgo": color(46, 139, 87), "Libra": color(123, 104, 238), "Scorpio": color(148, 0, 211),
    "Sagittarius": color(255, 140, 0), "Capricorn": color(139, 69, 19), "Aquarius": color(0, 191, 255), "Pisces": color(72, 61, 139)
  };
  return colors[sign] || color(240);
}

This function is simple, but it instantly transforms the visual feel of the project based on the user’s selection, creating some sense of personalization.

Reflection

For future projects, I’d love to explore more complex generative animations, such as constellations that change based on the zodiac sign. Things like integrating sound effects or subtle ambient music could enhance the mystical atmosphere. Another direction could be adding more interactive elements, like having particles respond to mouse movement, making the experience feel even more magical and immersive.
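As a rough sketch of that last idea (my own hypothetical tweak, not part of the project), the existing Particle.update() method could be extended to steer particles away from the cursor:

  // Hypothetical extension of Particle.update(): drift away from the mouse
  update() {
    let d = dist(this.x, this.y, mouseX, mouseY);
    if (d > 0 && d < 50) {
      // Push the particle away from the cursor, more strongly when closer
      this.vx += ((this.x - mouseX) / d) * 0.3;
      this.vy += ((this.y - mouseY) / d) * 0.3;
    }
    this.x += this.vx;
    this.y += this.vy;
    if (this.x > width || this.x < 0) this.vx *= -1;
    if (this.y > height || this.y < 0) this.vy *= -1;
  }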

Here is the full code:

let signs = [
  "Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
  "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"
];

// Zodiac readings for each sign
let readings = {
  "Aries": ["Today is a day for bold moves.", "A new adventure awaits you.", "Your energy will attract opportunities."],
  "Taurus": ["Stay grounded, but take a leap of faith.", "Patience will bring unexpected rewards.", "A financial opportunity is coming your way."],
  "Gemini": ["A conversation will spark inspiration.", "Your curiosity leads to a surprising discovery.", "Adaptability is your greatest strength today."],
  "Cancer": ["Your emotions will guide you well.", "A nostalgic moment will bring clarity.", "Trust your intuition—it knows the way."],
  "Leo": ["Your confidence will open doors.", "A bold move will lead to admiration.", "Shine your light and others will follow."],
  "Virgo": ["Your keen eye will catch an important detail.", "Organization will bring unexpected rewards.", "A small habit change will lead to a breakthrough."],
  "Libra": ["Balance is key today.", "A relationship will deepen in an unexpected way.", "Harmony will find you when you least expect it."],
  "Scorpio": ["Mystery surrounds you—embrace it.", "Transformation is closer than you think.", "Your passion will lead you to new heights."],
  "Sagittarius": ["An exciting journey is on the horizon.", "Your optimism will inspire someone today.", "The universe is conspiring in your favor."],
  "Capricorn": ["Hard work pays off—stay focused.", "A disciplined approach will yield results.", "Your perseverance will be rewarded soon."],
  "Aquarius": ["Innovation is your ally today.", "A sudden insight will change your path.", "Your unique perspective is your greatest strength."],
  "Pisces": ["Your dreams hold important messages.", "Creativity will flow effortlessly.", "A moment of solitude will bring deep understanding."]
};

let dropdown, button, output;
let bgColor;
let particles = [];

function setup() {
  createCanvas(400, 300);
  textSize(16);
  textAlign(CENTER, CENTER);
  
  // Create dropdown menu for zodiac signs
  dropdown = createSelect();
  dropdown.position(100, 100);
  
  for (let sign of signs) {
    dropdown.option(sign);
  }
  
  // Create button to generate reading
  button = createButton("Get Your Reading");
  button.position(100, 140);
  button.mousePressed(generateReading);
  
  output = "Select your sign and receive your reading";
  bgColor = color(240);
  
  // Create floating particles for magical effect
  for (let i = 0; i < 50; i++) {
    particles.push(new Particle());
  }
}

function draw() {
  background(bgColor);
  fill(50);
  text("Psychic Zodiac Reading", width / 2, 50);
  text(output, width / 2, 80);
  
  // Update and show floating particles
  for (let p of particles) {
    p.update();
    p.show();
  }
}

// Generate random reading based on selected zodiac sign
function generateReading() {
  let selectedSign = dropdown.value();
  let possibleReadings = readings[selectedSign];
  output = possibleReadings[int(random(possibleReadings.length))];
  bgColor = getZodiacColor(selectedSign);
}

// Assign unique background color for each zodiac sign
function getZodiacColor(sign) {
  let colors = {
    "Aries": color(255, 99, 71), "Taurus": color(107, 142, 35), "Gemini": color(255, 215, 0), "Cancer": color(70, 130, 180),
    "Leo": color(255, 165, 0), "Virgo": color(46, 139, 87), "Libra": color(123, 104, 238), "Scorpio": color(148, 0, 211),
    "Sagittarius": color(255, 140, 0), "Capricorn": color(139, 69, 19), "Aquarius": color(0, 191, 255), "Pisces": color(72, 61, 139)
  };
  return colors[sign] || color(240);
}

// Particle class for floating magic effect
class Particle {
  constructor() {
    this.x = random(width);
    this.y = random(height);
    this.vx = random(-1, 1);
    this.vy = random(-1, 1);
    this.alpha = random(100, 255);
  }

  // Update particle movement
  update() {
    this.x += this.vx;
    this.y += this.vy;
    if (this.x > width || this.x < 0) this.vx *= -1;
    if (this.y > height || this.y < 0) this.vy *= -1;
  }

  // Display particle as a glowing dot
  show() {
    noStroke();
    fill(255, this.alpha);
    ellipse(this.x, this.y, 5, 5);
  }
}


Week 4 – Generative Text

Concept:
This piece is meant to imitate a page turning, but instead of the expected behaviour, it slides a new page down from the top. The text itself is pulled from a .txt file, which contains bits of placeholder text (Lorem ipsum dolor) as well as snippets of text from the p5.js reference that pertain to implementing text.
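For context, loading those lines is a one-liner in p5.js. Here is a minimal sketch of it (the filename is a placeholder, not the project’s actual file):

let txt = [];

function preload() {
  // loadStrings returns the file as an array of lines, one snippet per line
  txt = loadStrings('pages.txt');
}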

Highlight:
My highlight is still fairly simple, but I chose it since I often forget to include bounds checking or error handling in my code and end up causing myself problems down the road. This snippet makes sure that the newly created page is not receiving empty input for its inner text, and cleans up the pages array once a certain number of pages have been created, leaving only the most recent instance.

// Create a new page on click
let newText = txt[floor(random(txt.length - 1))];
let tries = 0;
// Make sure the new page's text is not empty
while (!newText && tries < 10) {
  newText = txt[floor(random(txt.length - 1))];
  tries++;
}
// Cull older pages at some upper limit, keeping only the most recent
if (pages.length >= 10) {
  pages = subset(pages, pages.length - 1, 1);
  console.log("reset");
  console.log(pages);
}

Embed:

Reflection:
I’m not super satisfied with this piece, doubly so since I tried a few different concepts that I was unable to get to work properly. I had been having issues with getting the data to load from a file, and only managed to get it to work once I pivoted to this idea. I had also wanted to have the sliding direction change between the top, sides, and bottom, either changing randomly or going in a clockwise direction. This week was a bit rough in terms of workload so I was unable to spend as much time adding features to this piece as I would have liked.