Week #5 – Reading

How Computer Vision is Different from Human Vision

Computer vision and human vision work in very different ways. People can see and understand images quickly because our brains recognize patterns, shapes, and objects naturally. We also understand depth, colors, and movement easily. Computers, however, need special instructions to “see” things. They analyze images by looking at tiny dots called pixels and using math to find patterns. Unlike humans, computers have trouble with things like bad lighting, unclear backgrounds, or objects that look similar. This means computer vision needs special tricks to work well in different situations.

How We Help Computers See and Track Objects

Since computers do not see like humans, we use special techniques to help them track objects. One method is frame differencing, where the computer looks at two pictures and spots the differences to detect movement. Background subtraction helps the computer ignore things that do not change and focus only on moving objects. Another technique is brightness thresholding, where the computer finds objects based on how bright or dark they are. More advanced methods use color tracking or infrared cameras to improve accuracy. These tools help computers find and follow objects better, even in complex scenes.
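To make the first of these concrete, here is a minimal p5.js sketch of frame differencing: it compares each webcam frame to the previous one and counts how many pixels changed. The webcam setup, the per-pixel cutoff of 40, and the motion threshold of 500 are illustrative assumptions of mine, not values from the reading.

let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height); // stores the previous frame
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();
  let changed = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // compare the red channel of this frame to the previous one
    if (abs(video.pixels[i] - prevFrame.pixels[i]) > 40) changed++;
  }
  // remember this frame for the next comparison
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  // crude motion indicator: red square when enough pixels changed
  fill(changed > 500 ? 'red' : 'green');
  square(10, 10, 20);
}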

How Computer Vision Affects Interactive Art and Surveillance

Computer vision is used in art to make interactive experiences where people can control things with their movements. For example, in Myron Krueger’s Videoplace, people’s shadows on a screen could interact with digital images. Artists also use computer vision to explore serious topics like surveillance. In Sorting Daemon, David Rokeby used it to study how cameras track people in public places. Another example, Suicide Box, recorded people jumping off a bridge to show how some events are ignored by society. While computer vision makes art more exciting, it also raises questions about privacy and how technology watches people without their permission.

Week #5 – Midterm Progress

My Concept:

For my midterm project, I’m attempting to bring Bikini Bottom to life through an engaging, nostalgic digital experience. The project features the three iconic homes from SpongeBob SquarePants: SpongeBob’s pineapple house, Squidward’s stone home, and Patrick’s simple rock. It is a tribute to childhood memories, capturing the essence of the beloved cartoon that many of us grew up watching. I plan to make it more interactive by having users solve different problems using hints, enabling them to interact with different objects.

Design:

For my design, I hope to closely mirror the original aesthetic of SpongeBob SquarePants, using vibrant colors and playful elements to capture the cartoon’s distinct underwater atmosphere. The houses are placed in a scene that resembles Bikini Bottom’s sandy ocean floor, with blue gradients representing the ocean background, as shown below:

The interactive element plays a crucial role in this project. As the user moves the mouse over each house, the door lights up to indicate interactivity. When the user clicks on the door, a passcode input box pops up, and they must type in the correct code (worked out from hints) to unlock it. Once through the door, the user enters the house, which, depending on the house chosen, has a different theme and interactive objects that match the character’s vibe. The genre of music inside each house will change to match as well.

Challenges:

One of the main challenges was making the door-opening interaction simple and easy to use. Instead of just clicking the door, I had to add an input field that appears only when needed. Managing this input box so it doesn’t stay on the screen permanently was tricky. Another challenge was handling incorrect passcodes: I made the program display “Incorrect” on the screen, which required making sure the message appears only when needed and resets when the user tries again.

Finally, I had to ensure that when the correct passcode is entered, the screen updates immediately. This meant making sure the input box and button disappeared at the right time while smoothly transitioning to the next screen.

let doorX, doorY, doorWidth, doorHeight;
let inputBox, submitButton; // created in setup(), hidden until the door is clicked
let isUnlocked = false;
let correctPasscode = "1234"; // the passcode
let userInput = "";
let message = "";

function setup() {
  createCanvas(300, 200);

  // door position
  doorX = width / 2 - 40;
  doorY = height / 2;
  doorWidth = 80;
  doorHeight = 120;

  // input box
  inputBox = createInput('');
  inputBox.position(width / 2 - 50, height / 2 + 50);
  inputBox.hide(); // hidden at the start

  // create submit button
  submitButton = createButton('Enter');
  submitButton.position(width / 2 + 60, height / 2 + 50);
  submitButton.mousePressed(checkPasscode);
  submitButton.hide(); // initially hidden
}
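The checkPasscode function that the submit button calls isn’t shown above; a minimal version consistent with the behavior described (show “Incorrect”, reset the field, hide the UI on success) might look like this — the exact transition to the next screen is my assumption.

function checkPasscode() {
  userInput = inputBox.value();
  if (userInput === correctPasscode) {
    isUnlocked = true;     // draw() can now show the inside of the house
    message = "";
    inputBox.hide();       // remove the input UI at the right time
    submitButton.hide();
  } else {
    message = "Incorrect"; // drawn each frame until the next attempt
    inputBox.value('');    // reset so the user can try again
  }
}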

At the moment the dimensions for clicking the door are not perfect at all, but it’s just to give an idea (click at the bottom of the door).

The passcode is “1234”.

Week 5: Midterm Project Progress

Concept

For my midterm project, I wanted to create something that is 1) funny and 2) inspired by our campus. I thought long and hard, and eventually came up with an idea to create a game based on an “incident” that happened to me at the Palms. I love sitting under the palm trees outside C2 when the weather is nice, and one day I was joyfully sitting at one of the tables doing work on my laptop when I suddenly heard a quick popping sound and felt something splatter in front of me – a bird had pooped on my laptop.

At a loss for words as to why, out of all the open space around me, it chose to land right on me, I questioned my luck and cleaned up intensely. Ever since then, whenever I go to the Palms, I am reminded of that day, laugh at how “lucky” I was, and glance up from time to time to check for birds above me.

Design

The idea of the game is to dodge bird 💩 falling from the sky while trying to finish your homework at the same time. The user needs to protect the computer using the left and right arrow keys, without getting hit by what is falling from above. To complete the homework, the user needs to repeatedly press the space bar until the progress bar fills up; if the user stops pressing space, the progress bar will decrease. The goal is to finish the homework as fast as possible without getting hit. The shortest time will be recorded for the user to see.

Challenging Parts

  • Creating the progress bar and incorporating decay so that the progress decreases upon inactivity
  • Checking for collision between the computer and bird poop (see the sketch after this list)
  • Making bird poop fall randomly from the sky (from random columns at random times with perhaps random speeds)
  • Showing time elapsed and keeping track of minimum time
  • Creating a visually appealing game interface
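For the collision point above, a standard rectangle-overlap test is probably enough; poop and computer here are hypothetical objects with x, y, w, h fields, since the post doesn’t show its data structures yet.

function hitsComputer(poop, computer) {
  // overlap on both axes means a hit
  return poop.x < computer.x + computer.w &&
         poop.x + poop.w > computer.x &&
         poop.y < computer.y + computer.h &&
         poop.y + poop.h > computer.y;
}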

I tried creating a simple progress bar that fills up as the space bar is pressed. It works by drawing a rectangle whose width is based on a progress variable; this variable is incremented when the space bar is pressed and decremented slightly every frame to add to the challenge.
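Here is roughly what that looks like as a self-contained sketch; the decay rate and the increment per press are my guesses, not the post’s actual values.

let progress = 0; // 0–100, how much of the homework is done

function setup() {
  createCanvas(300, 100);
}

function draw() {
  background(220);
  progress = max(0, progress - 0.2);               // decay while the player is idle
  stroke(0);
  noFill();
  rect(50, 40, 200, 20);                           // bar outline
  fill(80, 200, 80);
  rect(50, 40, map(progress, 0, 100, 0, 200), 20); // filled portion
  fill(0);
  if (progress >= 100) text("Homework done!", 50, 30);
}

function keyPressed() {
  if (key === ' ') progress = min(100, progress + 4); // mash space to work
}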

Week 4 Response

Introduction:

When I was reading the book, I noticed how thoughts I had kept internal were being said out loud. Maybe I didn’t know how to word them as well, but the idea was just as clear. The way the author describes the design process – how the designers think of an item versus how we as users experience it – reveals a gap, and that gap is what makes some things uncomfortable to use. There are things on campus that drive me crazy sometimes because of how inefficiently they are made.

Reading Response:

Based on the reading, something that annoys me would be the doors on campus, specifically the doors at Dining Hall 2. They don’t have sensors on entry and require you to pull, but then they open themselves automatically. The harder you pull, the more opposing force they use to prevent you from opening them manually. I prefer doors that simply push open either way. But how do I channel the frustration of these inefficiently made doors into interactive media? I would make my work user-friendly: something so easy to use that it feels intuitive, almost common sense for the everyday layperson. The focus should not be on creating something that’s easy for me as a coder or an artist, but on something that’s easy for my audience.


Data visualization:

Below is a data visualization that’s very simple and not advanced at all – just a basic way to show the data.

let users = [];
let colors = ['#ff0000', '#00ff00', '#0000ff', '#ffff00', '#ff00ff', '#00ffff', '#ffa500', '#800080', '#008080', '#ffd700'];
let timeAwakeInput, timeWorkingInput, submitButton, newUserButton, userSelect;
let currentUser = 0;
let barWidth = 20;

function setup() {
  createCanvas(800, 500);
  background(240);
  drawAxes();

  timeAwakeInput = createSelect();
  for (let i = 2; i <= 24; i += 2) {
    timeAwakeInput.option(i);
  }
  timeAwakeInput.position(10, 10);

  timeWorkingInput = createSelect();
  for (let i = 0.5; i <= 24; i += 0.5) {
    timeWorkingInput.option(i);
  }
  timeWorkingInput.position(10, 40);

  submitButton = createButton('Submit Entry');
  submitButton.position(10, 70);
  submitButton.mousePressed(addEntry);

  newUserButton = createButton('New User');
  newUserButton.position(110, 70);
  newUserButton.mousePressed(createNewUser);

  userSelect = createSelect();
  userSelect.position(10, 100);
  userSelect.changed(changeUser);

  createNewUser();

  textAlign(LEFT, CENTER);
  text('Time Awake (hours):', 150, 25);
  text('Time Working (hours):', 150, 55);
}

function createNewUser() {
  let newUser = {
    id: users.length,
    color: colors[users.length % colors.length],
    entries: []
  };
  users.push(newUser);
  userSelect.option('User ' + (newUser.id + 1), newUser.id);
  userSelect.selected(newUser.id);
  currentUser = newUser.id;
}

function changeUser() {
  currentUser = parseInt(userSelect.value());
}

function addEntry() {
  let timeAwake = parseFloat(timeAwakeInput.value());
  let timeWorking = parseFloat(timeWorkingInput.value());

  if (timeWorking > timeAwake) {
    alert("Time working cannot exceed time awake!");
    return;
  }

  users[currentUser].entries.push({
    timeAwake: timeAwake,
    timeWorking: timeWorking
  });

  updateGraph();
}

function drawAxes() {
  stroke(0);
  line(50, height - 50, width - 50, height - 50); // X-axis
  line(50, height - 50, 50, 50); // Y-axis
  
  textAlign(CENTER);
  text("Time Awake (hours)", width/2, height - 10);
  
  push();
  translate(15, height/2);
  rotate(-HALF_PI);
  text("Time Working (hours)", 0, 0);
  pop();
  
  for (let i = 0; i <= 24; i += 4) {
    let x = map(i, 0, 24, 50, width - 50);
    line(x, height - 50, x, height - 45);
    textAlign(CENTER);
    text(i, x, height - 35);
  }
  
  for (let i = 0; i <= 24; i += 4) {
    let y = map(i, 0, 24, height - 50, 50);
    line(45, y, 50, y);
    textAlign(RIGHT);
    text(i, 40, y);
  }
}

function updateGraph() {
  background(240);
  drawAxes();
  
  let groupWidth = barWidth * users.length;
  
  for (let i = 0; i < users.length; i++) {
    let user = users[i];
    for (let entry of user.entries) {
      let x = map(entry.timeAwake, 0, 24, 50, width - 50);
      let barHeight = map(entry.timeWorking, 0, 24, 0, height - 100);
      
      // Adjust x position based on user index
      let adjustedX = x - groupWidth/2 + i * barWidth + barWidth/2;
      
      fill(user.color);
      rect(adjustedX - barWidth/2, height - 50 - barHeight, barWidth, barHeight);
      
      push();
      fill(0);
      textAlign(CENTER);
      textSize(10);
      text(entry.timeWorking + "h", adjustedX, height - 55 - barHeight);
      pop();
    }
  }
}


Week 5 — Reading Response

As I read about the differences between computer vision and human vision, I couldn’t help but reflect on how fascinating it is that these two systems operate so differently. I thought that the comparison really highlighted the strengths and limitations of both. Human vision is incredibly flexible and context-aware — we can recognize a friend’s face even in poor lighting or from an unusual angle. Computer vision, on the other hand, relies on algorithms that need carefully controlled conditions to function effectively. This made me realize how much work goes into “teaching” computers to see, and it reminded me of my own experiences. I’ve always been intrigued by how technology intersects with human behavior, and this article deepened my appreciation for the challenges of bridging that gap.

To help computers “see” what we’re interested in, the article outlines several techniques, like frame differencing, background subtraction, and brightness thresholding. I thought these methods were clever in how they simplify the problem — breaking down complex visual data into manageable pieces. For example, using brightness thresholding to track a single bright point, like a flashlight, seems almost elegant in its simplicity. I believe that these foundational techniques are a great starting point for anyone looking to experiment with computer vision (like me hehe). They also highlight the importance of designing the physical environment to support the algorithm, which is something I hadn’t fully considered before. It’s not just about writing code; it’s about creating a world that the computer can understand.
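As a quick illustration of how little code the flashlight idea needs, here is a hedged p5.js sketch that scans each webcam frame for its brightest pixel; the canvas size and the marker are my own choices, not the article’s.

let video;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  let brightest = 0, bx = 0, by = 0;
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      let i = (y * video.width + x) * 4;
      // sum of R, G, B as a cheap brightness measure
      let lum = video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2];
      if (lum > brightest) {
        brightest = lum;
        bx = x;
        by = y;
      }
    }
  }
  noFill();
  stroke('yellow');
  circle(bx, by, 20); // mark the brightest spot (e.g. the flashlight)
}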

The article’s discussion of tracking and surveillance in interactive art really resonated with me, especially given my legal background. I think computer vision is a perfect example of a tool that can be both empowering and unsettling. On one hand, it enables incredible artistic experiences, like Myron Krueger’s Videoplace, where participants can interact with virtual elements using their bodies. On the other hand, it raises questions about privacy and control, as seen in projects like Suicide Box or David Rokeby’s Sorting Daemon. These works use computer vision to critique surveillance, and I think they’re a powerful reminder of how technology can shape — and be shaped by — societal values.

Personally, I believe that computer vision’s capacity for tracking and surveillance adds a layer of complexity to its use in art. As a CS major with a legal background, I’m particularly drawn to the ethical dimensions of this technology. I’d love to explore how interactive art can balance the wonder of computer vision with a critical awareness of its implications. For instance, what if an installation invited participants to play with tracking technology while also making them aware of how their data is being used? I think there’s a lot of potential here to create works that are not only engaging but also thought-provoking, encouraging viewers to reflect on the role of technology in their lives.

Midterm Progress Report

Concept/Ideation

I started off by thinking about what is the vibe I wanted to have in my project. I browsed through Pinterest boards to see if I could find some inspiration. This is where I came across this cute photo:

[Inspiration photo: a sky filled with stars and clouds]

I wanted my game’s background photo to have a similar color palette and aesthetic. It also reminded me of the nostalgic feeling I had as a kid when playing simple, food-themed games — like cooking games where you’d make ramen or decorate baked goods. Inspired by that, I decided to create a game with a similar concept but scaled down to fit within my scope. I knew from the start that I wanted it to be candy-themed.

Brainstorming

I quickly drew up the ideas that I had for the design and started to sketch the initial visuals:
I then used Figma to actually create the background of the game in addition to the necessary visual assets:

The idea is that the game involves three boxes, each corresponding to a color of candy falling from the sky. To catch a candy, you move the box of the matching color to the spot where the candy is about to fall. To switch between the boxes, you simply press the spacebar. While there is no traditional winning condition, you score one point for each candy you catch, and there will never be more than one candy falling at the same vertical level at a time.

Code Progress/Concerns

Right now, I’m focused on having two classes: one for the box and one for the candy. These will be central to the gameplay mechanics, as the candy will fall and the player will control the box to catch it. The most uncertain part of my midterm project is ensuring smooth interaction between the boxes and candies, especially when switching between boxes using the spacebar. This could lead to timing issues or bugs that affect the gameplay experience. To minimize this risk, I plan to write some code to test the collision detection and box-switching functionality. Currently I’ve preloaded my assets:

let bg, startButton, mouseSound; // declared here so the snippet stands alone
let boxImages = [];
let candyImages = [];

function preload() {
  bg = loadImage("bg.png");
  startButton = loadImage("start.png");
  mouseSound = loadSound("mouse.mp3"); // requires the p5.sound library

  boxImages[0] = loadImage("box1.png");
  boxImages[1] = loadImage("box2.png");
  boxImages[2] = loadImage("box3.png");

  candyImages[0] = loadImage("candy1.png");
  candyImages[1] = loadImage("candy2.png");
  candyImages[2] = loadImage("candy3.png");
}

in addition to structuring my two main classes for OOP:

class Box {
  constructor(x, y, width, img) {
    this.x = x;
    this.y = y;
    this.width = width;
    this.img = img;
  }

  move(dx) {
    this.x = constrain(this.x + dx, 0, width - this.width);
  }

  display(isActive) {
    image(this.img, this.x, this.y, this.width, 60);
    if (isActive) {
      stroke(255, 0, 0);
      strokeWeight(2);
      noFill();
      rect(this.x, this.y, this.width, 60);
    }
  }
}

class Candy {
  constructor(x, y, img, type) {
    this.x = x;
    this.y = y;
    this.size = 40;
    this.img = img;
    this.type = type;
  }

  fall() {
    this.y += gravity; // gravity is a global fall speed defined in the main sketch
  }

  display() {
    image(this.img, this.x, this.y, this.size, this.size);
  }
}
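To de-risk the box-switching and catching logic mentioned above, a rough wiring of these classes might look like the following; gravity, the arrays, the indexing of boxes by candy type, and the scoring rule are all my assumptions about code the post doesn’t show yet.

let gravity = 3;   // assumed global that Candy.fall() reads
let boxes = [];    // assumed: one Box per candy color, same index as its type
let candies = [];
let activeBox = 0; // which box the player currently controls
let score = 0;

function keyPressed() {
  if (key === ' ') {
    activeBox = (activeBox + 1) % boxes.length; // spacebar cycles the boxes
  } else if (keyCode === LEFT_ARROW) {
    boxes[activeBox].move(-20);
  } else if (keyCode === RIGHT_ARROW) {
    boxes[activeBox].move(20);
  }
}

// Called from draw() after each candy falls: a candy counts as caught when
// its center is horizontally inside the matching-color box at box height.
function checkCatches() {
  for (let i = candies.length - 1; i >= 0; i--) {
    let c = candies[i];
    let b = boxes[c.type];
    if (c.y + c.size >= b.y &&
        c.x + c.size / 2 > b.x &&
        c.x + c.size / 2 < b.x + b.width) {
      score++;
      candies.splice(i, 1); // remove the caught candy
    }
  }
}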

 

Reading Reflection – Week 5

In Computer Vision for Artists and Designers by Golan Levin, the main difference between human and computer vision is that while humans naturally interpret context and meaning, computers analyze images numerically.

People can recognize objects in varying conditions, but computers need specific training to do the same. For example, while we can recognize a face even if it’s partially obscured, computer vision would struggle unless it’s specifically trained to do so. This difference stood out to me because it shows how much more complex human perception is compared to a computer’s mechanical analysis.

To help computers track things we’re interested in, techniques like thresholding (simplifying images), edge detection (finding shapes), and optical flow (tracking movement) are used. More advanced methods, like machine learning, help computers recognize objects by learning from large datasets. These methods don’t give the computer understanding, just the ability to process data.
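Of these, thresholding is the simplest to picture in code: a single pass over the pixels that keeps only the “bright enough” ones. This p5.js helper is a generic sketch of the idea, not code from the article.

// Generic binary-threshold helper: every pixel becomes white or black
// depending on whether its brightness clears the cutoff.
function thresholdImage(img, cutoff) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    let lum = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
    let v = lum > cutoff ? 255 : 0;
    img.pixels[i] = img.pixels[i + 1] = img.pixels[i + 2] = v;
  }
  img.updatePixels();
  return img;
}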

Levin also explores how CV’s tracking abilities are used in both art and surveillance. Artists use CV for interactive installations that respond to movement, but the same technology is used for facial recognition and monitoring. This can be problematic: while CV enhances art, it also enables tracking people, sometimes without their consent. Some artists challenge this by confusing or exposing surveillance systems.

This made me think about how CV has become so embedded in our everyday lives, from facial recognition on our phones to tracking in stores. While these systems often make things more convenient, they also normalize constant monitoring. For artists, I think it’s important to be aware of the implications of using CV, especially for privacy and surveillance.

Midterm Progress Report

My concept

I had several ideas for my midterm, but as spring break is coming up, everyone keeps asking: “Where do you wanna go?” The variety of my friends’ responses made me think: what if people travel not to see a desired destination per se, but to experience certain emotions?

That’s how I came up with my project idea: an airport departure board where, instead of regular destinations, users choose from emotional states like peace, guidance, re-start, or sign.

User Interaction

Here’s how the interaction works:
1. The user sees the departure board with different emotional destinations.
2. Clicking on a mood smoothly zooms in on that selection.
3. After zooming in, a pop-up appears with a random message or affirmation related to their chosen emotion.
4. Users can go back to the main screen and pick another mood.

I might remove the zoom-in feature if it doesn’t work out and let the user receive their message right away. I also want to add different sounds: airport noise in the background and a pop-up sound.

Complex Part

Complexities with this project started before I even got to the coding part. I had a certain picture in my mind, and I realized the only way to bring it to reality was to draw it myself, so I made this:

Then I realized that it would take me too much time to colour it and get the best final look, so I just took another picture and redesigned it a bit:

If the quality worsens or something else goes wrong, I’ll have to redesign this completely by combining several pictures, but hopefully I’ll stick with this one.

Code-wise, the most challenging part of this project is the zoom-in transition when a user selects a mood. If the zoom feels too abrupt, the experience won’t be immersive.

To solve this, I want to try implementing a smooth zoom animation using lerp() and to ensure the quote pop-up appears naturally after zooming in.

if (zooming) {
  zoomFactor = lerp(zoomFactor, targetZoom, 0.08);
  if (zoomFactor > targetZoom - 0.1) {
    zooming = false;
    messageShown = true; 
  }
}
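That snippet only eases zoomFactor toward its target; applying it each frame could look like the sketch below, where zoomTargetX/zoomTargetY (the clicked mood’s position) and the two drawing helpers are assumed names, not code from the project.

function draw() {
  background(0);
  // the easing block above runs first, updating zoomFactor
  push();
  translate(width / 2, height / 2);      // zoom around the canvas center...
  scale(zoomFactor);
  translate(-zoomTargetX, -zoomTargetY); // ...toward the selected mood
  drawDepartureBoard();                  // assumed helper that draws the board
  pop();
  if (messageShown) {
    drawAffirmationPopup();              // assumed helper for the random message
  }
}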


Reading Reflection – Week 5

Computers do not have any general visual understanding unless they are told exactly what to look for, which is quite shocking considering how powerful they are. A task as simple as differentiating between a dog and a cookie can pose an incredible challenge to machines.

[Embedded post from Tim Stearns on X, about using @teenybiscuit’s animal vs. food images to teach first-year undergrads about the challenges AI faces in image-recognition tasks that humans find easy.]

Humans process visual information holistically, incorporating prior knowledge, experience and intuition to interpret the visible world. Meanwhile, computers speak the language of 1s and 0s, and they have to be taught by humans to see and discern certain objects and situations, using specific techniques like frame differencing for detecting motion, background subtraction for detecting presence and brightness algorithms for tracking objects. As it is up to humans to teach computers how to make sense of pixels, I think there is room for creativity in how we translate and represent the visual world to machines.
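Background subtraction, for instance, can be prototyped in p5.js by snapshotting an empty scene and flagging pixels that later differ from it; the key-press trigger, the threshold of 50, and the red highlight are my own assumptions.

let video, bgFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep canvas pixels 1:1 with the loop below
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function keyPressed() {
  // snapshot the current (empty) scene as the reference background
  bgFrame = createImage(video.width, video.height);
  bgFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
}

function draw() {
  image(video, 0, 0);
  if (!bgFrame) return; // wait until a background has been captured
  video.loadPixels();
  bgFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // pixels that differ from the reference are treated as foreground
    if (abs(video.pixels[i] - bgFrame.pixels[i]) > 50) {
      pixels[i] = 255; // paint foreground red
      pixels[i + 1] = 0;
      pixels[i + 2] = 0;
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
}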

In a world where computers are developing a greater sense of “sight” day by day, data privacy and surveillance becomes an important topic of discussion. Those developing and deploying products based on computer vision, including computer scientists, artists and anyone in between, have to ensure fair and ethical use of the data, if any is collected. At the same time, I think it is also important for everyone else to learn to recognize and question the real world applications of computer vision as we go on with our daily lives.

Week 4 – Reading Response

One thing that drives me crazy is modern microwaves. Many microwaves have confusing button layouts, cryptic labels, and inconsistent functionality across brands. Some have dedicated buttons for specific foods like popcorn or pizza, yet bury basic functions like adjusting the power level. Worse, there is often very little feedback: if you press the wrong button, the microwave might beep, but it won’t tell you why. To improve this, designers could apply mapping and feedback more effectively – for example, grouping related buttons together, placing the most important buttons at the top, or making them significantly bigger.

In relation to the author’s principles of design, I could incorporate clearer feedback mechanisms for user actions in my future projects. For instance, if a user hovers over an interactive element, I could add visual signifiers like small animations or color changes to show that the item is clickable. I would also pay close attention to mapping, making sure that the user’s input feels directly connected to what the system displays. If I were creating a game where the user controls a character, for example, I would make sure that the movements are mapped logically to the controls, such as using the arrow keys to move the character in the expected direction (up for up, down for down), as in the small sketch below.
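As a tiny illustration of that direct arrow-key mapping in p5.js (the speed and shape are arbitrary):

let charX = 200, charY = 200;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  // direct mapping: each arrow key moves the character the way it points
  if (keyIsDown(LEFT_ARROW))  charX -= 3;
  if (keyIsDown(RIGHT_ARROW)) charX += 3;
  if (keyIsDown(UP_ARROW))    charY -= 3;
  if (keyIsDown(DOWN_ARROW))  charY += 3;
  circle(charX, charY, 30);
}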