Week 11 – Reading Reflection

Design Meets Disability 

In this reading, the author argues that simplicity can be just as important as universality in design. While universal design aims to include every possible user, trying to add too many features can actually make a product harder to understand and use. The author emphasizes that cognitive accessibility (how easy something is to learn and navigate) is often overlooked because it is harder to measure than physical accessibility. By valuing simplicity over complexity, designers can sometimes create products that are more inclusive in practice, even if they do not meet every theoretical definition of universal design. The author also suggests that good designers learn to balance the ideal design brief with what will actually work best for users, sometimes choosing to remove features for a better overall experience.

I agree with the author’s argument that simplicity can be a powerful form of accessibility. In my experience, many products become confusing or overwhelming when they try to offer every possible option to every type of user. However, I also think that sometimes having more features can be beneficial for users. In many cases, a product with extra options feels like a better deal because it allows for more customization or more advanced use when needed. While simplicity is appealing, I don’t always want to sacrifice functionality for the sake of minimalism. I believe the best approach depends on the context: some products should stay simple, but others can genuinely improve the user experience, and even accessibility, by offering richer features as long as they remain intuitive to use.

Week 10 – Reading Reflection

Bret Victor’s A Brief Rant on the Future of Interaction Design argues that most futuristic designs, like touchscreens and “smart” glass devices, aren’t really futuristic. They still limit how humans can use their bodies. He points out that we interact with the world using our hands, senses, and movement, yet technology reduces all that to swiping and tapping. He calls this “Pictures Under Glass,” where we just touch flat screens instead of truly feeling or manipulating things.

In the follow-up, Victor explains that he wasn’t trying to give solutions but to spark awareness. He doesn’t hate touchscreens; he just wants people to imagine beyond them. He warns that if we keep improving only on what already exists, like adding small features to tablets, we’ll miss the chance to design something that fully connects with our physical abilities. True interaction design, he says, should use our hands’ full potential, our sense of touch, and our natural way of exploring and learning through movement.

What stood out to me most is how Victor connects design to human capability. It made me realize how much of what we call “interaction” today actually leaves our bodies out. I immediately thought about how different it feels to press a real piano key or strike a drum versus tapping a flat screen. When I play piano, I’m not just using my fingers; I’m using my arms, posture, timing, and even breath. There’s weight, resistance, and texture. You feel the sound before you even hear it. That’s something a touchscreen can’t replicate. It’s the same in sports: when you shoot a basketball, your body memorizes the angle, force, and balance, and your muscles learn the rhythm. Victor’s idea reminded me that our body is a part of thinking and learning, not separate from it.

I also really liked how he said that tools should amplify human ability, not narrow it. Imagine if technology worked like that: instead of us adapting to it, it adapts to how we move and feel. A “future” instrument, for example, could let you physically mold sound with your hands, or a learning app could respond to your gestures, rhythm, or even posture, not just clicks. Victor’s message isn’t just about design; it’s about reimagining creativity, learning, and expression in a more human way. It’s like he’s saying the real future of technology isn’t in shinier screens; it’s in rediscovering how alive and capable we already are.

Week 10 – The Diet Drum (Deema and Rawan)

Our Concept: 

Our project drew inspiration from last week’s readings on human-computer interaction, particularly the ways in which technology can respond to subtle human behaviors. We explored how interactive systems often mediate our engagement with the environment and even with ourselves, creating experiences that feel responsive, social, or even playful.

With this perspective, we asked ourselves: what if an instrument didn’t just make sound, but responded directly to human behavior? Instead of rewarding interaction, it could intervene. Instead of passive engagement, it could create a performative, almost social response.

From this idea, the Diet Drum emerged — a device that reacts whenever someone reaches for a snack. The system is both humorous and relatable, externalizing the human struggle of self-control. When a hand approaches the snack bowl, a servo-powered drumstick strikes, accompanied by a short melody from a passive buzzer. The result is a playful, judgmental interaction that transforms a familiar, internal tension into an amusing and performative experience.

How It Works

  • Photoresistor (LDR): Detects hand movements by monitoring changes in light. As a hand blocks the sensor, the reading drops.

  • Servo motor: Moves a drumstick to perform a percussive strike, physically reinforcing the “warning” aspect of the interaction.

  • Passive buzzer: Plays a short melody as a playful, auditory cue.

  • Arduino Uno: Continuously monitors the sensor and triggers both motion and sound.

When the LDR senses that a hand has blocked the light, the Arduino makes the servo play the melody and hit the drum. This creates a clear, immediate connection between what a person does and how the system responds, showing ideas from our readings about how devices can react to gestures and sensor input.

Video Demonstration

assignment10

Challenges

Throughout development, we encountered several challenges that required both technical problem-solving and design adjustments:

  • System reliability: While the setup initially worked smoothly, leaving it for some time caused it to fail. Diagnosing the problem took a while because we didn’t know whether the fault was in the setup or the code. We used AI to help us narrow it down; once we confirmed the wiring was at fault, we partially rebuilt and retuned the system to restore functionality.

  • Mechanical stability: Keeping the drumstick steady during strikes was more difficult than anticipated. Any slight movement or misalignment affected the accuracy and consistency of the strikes, requiring several adjustments.

  • Audio timing: The melody initially played too long, delaying servo motion and disrupting the intended interaction. Shortening the audio ensured that the strike and sound remained synchronized, preserving the playful effect.

  • Code assistance: We also used AI to work through some coding difficulties so the implementation stayed true to our original idea.

Code Highlights

One part of the code we’re especially proud of is how the sensor input is mapped to the servo’s movement.

// Scale the light-level drop into a strike: a bigger drop (closer hand)
// produces a wider swing angle and a faster downstroke.
float d = constrain(drop, MIN_DROP, MAX_DROP);
float k = (d - MIN_DROP) / float(MAX_DROP - MIN_DROP);  // normalized intensity, 0.0-1.0
int hitAngle = SERVO_HIT_MIN + int((SERVO_HIT_MAX - SERVO_HIT MIN == 0 ? 0 : SERVO_HIT_MAX - SERVO_HIT_MIN) * k);
unsigned long downMs = STRIKE_DOWN_MS_MAX - (unsigned long)((STRIKE_DOWN_MS_MAX - STRIKE_DOWN_MS_MIN) * k);

strikeOnce(hitAngle, downMs);

This makes the drumstick respond based on how close the hand is, so each action feels deliberate rather than just an on/off hit. It lets the system capture subtle gestures, supporting our goal of reflecting nuanced human behavior. AI helped us determine exactly when, and how hard, each strike should land.

Future Improvements

Looking forward, we see several ways to expand and refine the Diet Drum:

  • Adaptive audio: Varying the melody or warning tone based on how close the hand is could enhance the playfulness and expressiveness.

  • Mechanical refinement: Improving the stability of the drumstick and optimizing servo speed could create smoother strikes and more consistent feedback.

  • Compact design: Reducing the size of the device for easier placement would make it more practical for everyday use.

  • Visual cues: Adding optional LEDs or visual signals could enhance the feedback, making the system even more engaging.

GitHub Links:

https://github.com/deemaalzoubi/Intro-to-IM/blob/b321f2a0c4ebf566082f1ca0e0067e33c098537f/assignment10.ino

https://github.com/deemaalzoubi/Intro-to-IM/blob/b321f2a0c4ebf566082f1ca0e0067e33c098537f/pitches.h

Reading Reflection – Week 9

Physical Computing’s Greatest Hits (and misses) 

Reading Tom Igoe’s article “Physical Computing’s Greatest Hits (and Misses)” gave me a new appreciation for how simple ideas can be both educational and engaging in physical computing. I found it interesting that recurring themes, like theremin-like instruments, gloves, floor pads, and video mirrors, aren’t just repeated because they’re easy, but because they allow for creativity and experimentation. Even projects that seem simple, like LED displays or mechanical pixels, can produce surprising, beautiful, or playful results when combined with unique designs or gestures. I also liked how the article emphasized that physical computing focuses on human input and experience rather than just the machine’s output, which makes the interaction more meaningful and enjoyable.

I was especially inspired by projects that blend physical interaction with emotional or playful elements, such as remote hugs, interactive dolls, or meditation helpers. These projects show how technology can respond to people in subtle ways, creating experiences that feel alive or personal. I can see how many of these themes, like body-as-cursor or multitouch surfaces, could be adapted to new ideas, highlighting the importance of creativity over originality. Reading this made me think about how I might design my own physical computing projects, focusing on the human experience, interaction, and the joy of discovery rather than trying to invent something completely new from scratch.

Making Interactive Art: Set the Stage, Then Shut Up and Listen 

This reading helped me understand that interactive art is fundamentally different from traditional art. It’s not about presenting a fixed statement; it’s about creating a space or instrument where the audience can explore and participate. I found it interesting that the artist’s role is to suggest actions and provide context without telling people what to think or do. The comparison to directing actors made this clear: just as a director provides props and intentions, interactive artists set up the environment and let participants discover meaning through their actions. I realized that this approach makes the audience an essential co-creator of the artwork.

I was inspired by the idea that interactive art is a conversation between the creator and the audience. It made me think about how designing for discovery and participation can lead to more meaningful and engaging experiences. I liked how the reading emphasized listening to the audience, observing their reactions, and letting the work evolve based on their interaction. This approach feels very open and collaborative, encouraging creativity both from the artist and the participants. It made me consider how I could apply this perspective to projects or experiences I create, focusing on engagement and exploration rather than fixed outcomes.

 

Week 9 – Arduino: analog input & output Assignment

My Concept:

For this project, I wanted to control two LEDs using sensors in a creative way. One LED would respond to a potentiometer, changing its brightness based on how much I turned it. The other LED would respond to a pushbutton, turning on or off each time I pressed it. The idea was to combine analog input (potentiometer) and digital input (pushbutton) to control outputs in two different ways. I also wanted to make sure everything was wired properly on the breadboard with the Arduino so it all worked together. At first, I wanted to make a breathing light that fades on and off gradually, like a person breathing, but after hours of trying it still wasn’t working, which was a big challenge because I couldn’t figure it out. After several attempts that didn’t give me what I wanted, I decided to stick to the simple foundation, because I realized I needed more time to get comfortable with using the potentiometer and switch.

Video Demonstration: 

assignment_week9.mov

Hand-Drawn Schematic: 

Code I’m most proud of:

// Pin Definitions 
const int potPin = A0;       // Potentiometer middle pin connected to A0
const int buttonPin = 2;     // Pushbutton connected to digital pin 2
const int greenLedPin = 8;   // Green LED connected to digital pin 8 
const int blueLedPin = 9;    // Blue LED connected to digital pin 9 

// Variables 
int potValue = 0;           // Stores analog reading from potentiometer
int ledBrightness = 0;      // Stores mapped PWM value for blue LED
int buttonState = HIGH;     // Current reading of the button
int lastButtonState = HIGH; // Previous reading of the button
bool greenLedOn = false;    // Green LED toggle state

void setup() {
  pinMode(buttonPin, INPUT_PULLUP); // Enable internal pull-up resistor
  pinMode(greenLedPin, OUTPUT);
  pinMode(blueLedPin, OUTPUT);
  
  Serial.begin(9600); 
}

void loop() {
  //Read potentiometer and set blue LED brightness
  potValue = analogRead(potPin);                   // Read 0-1023
  ledBrightness = map(potValue, 0, 1023, 0, 255); // Map to PWM 0-255
  analogWrite(blueLedPin, ledBrightness);         // Set blue LED brightness

  // Read pushbutton and toggle green LED 
  buttonState = digitalRead(buttonPin);

  // Detect button press (LOW) that just changed from HIGH
  if (buttonState == LOW && lastButtonState == HIGH) {
    greenLedOn = !greenLedOn;                       // Toggle green LED state
    digitalWrite(greenLedPin, greenLedOn ? HIGH : LOW);
    delay(50);                                     // Debounce delay
  }

  lastButtonState = buttonState;
}

The code I’m most proud of is basically the overall sketch above, where the green LED toggles on and off with the pushbutton while the blue LED changes brightness with the potentiometer. It was tricky at first because the button kept flickering, but I solved it with a debounce, and I asked ChatGPT to help me understand what I had wrong and why it might have been happening.

I’m proud of this code because it solves the switch problem elegantly; the green LED reliably turns on and off without flickering, while the blue LED smoothly responds to the potentiometer. It made me feel like I had full control over both analog and digital inputs in the same circuit.

Reflection and Improvements:
This project taught me a lot about wiring and sensor behavior. One big challenge was the pushbutton: at first, it would flicker, or the LED would turn on/off unexpectedly. Also, when I tried pressing the button with my finger, the breadboard wires kept moving because there were so many of them. Eventually, I realized using a pen to press the button made it much more stable and consistent.

For future improvements, I’d like to:

  • Use more organized jumper wires to make the breadboard less messy
  • Maybe add more sensors or another LED with creative behaviors
  • Explore smoother debouncing techniques in software to make the switch even more responsive

Even with the challenges, seeing both LEDs working together exactly how I wanted was really satisfying. It showed me how analog and digital inputs can interact with outputs, and how careful wiring and coding logic are equally important.

Github Link: 

https://github.com/deemaalzoubi/Intro-to-IM/blob/c9dc75423cd71b8ea21d50a5756c4d5e7f420ba5/assignment_week9.ino

Week 8 Reading – Her Code Got Humans on the Moon

While reading “Her Code Got Humans on the Moon,” I learned what an inspiring person Margaret Hamilton is. She was a computer scientist and software engineer who played a huge role in NASA’s Apollo missions back in the 1960s. What’s so amazing about her is that she didn’t just write code; she created entire systems for software reliability, which was a completely new idea back then. For the Apollo missions, she developed error-detection routines, priority scheduling, and backup systems so that the spacecraft could continue operating even if something went wrong. Basically, she designed software that could think ahead and handle mistakes, which was crucial when human lives were literally on the line. Her work laid the foundation for modern software engineering practices. What’s also incredible is how determined she was: she even brought her daughter to work, proving that nothing stopped her focus and dedication from solving these insanely complex problems.

What really fascinates me is how her technical contributions changed the way people think about software. Before her, software was often seen as just “a bunch of instructions,” but Margaret Hamilton treated it as an engineered system that could prevent disaster and adapt to unpredictable situations. This mindset completely shifted the tech world; today, every piece of software we use, from smartphones to cars, benefits from principles she pioneered. Learning about her makes me realize that software isn’t just about writing code; it’s about designing systems that are safe, reliable, and resilient. Her work shows that combining technical skill with persistence and creativity can literally launch humans to the moon and reshape the way we build technology forever.

Week 8 Reading – Norman,“Emotion & Design: Attractive things work better”

Don Norman’s essay “Emotion and Design: Attractive Things Work Better” really made me think about how much our feelings affect the way we interact with everyday objects. Norman explains that when something looks or feels pleasant, it actually makes it easier to use. Positive emotions make us more creative and patient, so small design flaws don’t bother us as much. On the other hand, stress or frustration narrows our focus and makes problem-solving harder, which is why usability is especially important in critical or high-pressure situations. I love how he uses examples like teapots, some ugly but practical, some beautiful but tricky, to show that the context and mood of the user really change which design works best.

What really stuck with me is how this idea goes beyond just making things look nice. Design isn’t only about functionality or efficiency; it’s also about how the product makes us feel. When we enjoy using something, we naturally perform better and feel more satisfied. It reminded me that the best designs are the ones that balance function, usability, and beauty, things that work well and bring a little joy into our daily routines. For me, that’s what makes a product feel complete: it’s not just useful, it’s also a pleasure to own and use. A perfect example for me is my piano keyboard. I’ve used many different keyboards over the years, and the ones that are pleasant to touch, look sleek, and respond just right make me want to play more and practice longer. Even if a slightly cheaper or simpler keyboard could technically produce the same sound, it just doesn’t feel as inspiring. The way it looks and feels actually affects my creativity and focus, making the music I play feel better.

Week 8 Unusual Switch Assignment

My concept: 

For this project, I wanted to create a switch using my elbow as the activator. I set up two coins on the table and connected them to the Arduino with jumper wires, so that when my elbow touched both coins at the same time, it would complete the circuit. The Arduino reads this connection as a “press” and turns on an LED. I liked how this made the human body part of the circuit, creating a physical and playful way to interact with electronics.

The process of connecting the coins, taping the jumper wires, and adjusting the setup taught me that even small details, like how the wires touch the coins and keeping them stable, really affect whether the circuit works. The project was about exploring how we can rethink simple switches and find unexpected ways to trigger electronics.

Video Demonstration: 

elbow switch.mov

Arduino Code: 

// Elbow-activated switch with LED

const int switchPin = 2;   // Pin connected to Coin A
const int ledPin = 13;     // Pin connected to LED 

void setup() {
  // Set Pin 2 as input with internal pull-up resistor
  pinMode(switchPin, INPUT_PULLUP);

  // Set LED pin as output
  pinMode(ledPin, OUTPUT);

  // Start with LED off
  digitalWrite(ledPin, LOW);
}

void loop() {
  // Read the state of the switch
  int state = digitalRead(switchPin);

  // If coins are bridged (elbow touches), state is LOW
  if (state == LOW) {
    digitalWrite(ledPin, HIGH);  // Turn LED ON
  } else {
    digitalWrite(ledPin, LOW);   // Turn LED OFF
  }

  // Small delay for stability 
  delay(50);
}

Reflection:

Building this project was a lot of trial and error, and I learned so much about Arduino inputs, digitalRead, and resistors along the way. I realized that small details, like how the wires touch the coins and keeping the connections stable, make a huge difference. At first, I used a 10k pull-down resistor, but the LED wouldn’t stay on reliably when I touched the coins with my elbow. Eventually, I removed the external pull-down and switched to Arduino’s internal pull-up resistor, which made the switch much more stable and responsive.

I also loved seeing how simple code can control hardware in an immediate way, and how experimenting with the physical setup really affects the digital outcome. It was a fun reminder of how hands-on hardware projects are a mix of coding, problem-solving, and creativity.

Ideas for future work or improvements: 

Later on, I’d love to try using different parts of the body or gestures as switches, not just elbows. I could also add more LEDs or other outputs to make it feel more interactive and playful. It would be interesting to experiment with pressure-sensitive sensors to make the switch respond more smoothly.

Eventually, I could imagine turning this into a small game or interactive piece, where your body becomes part of the control system. Also, I can imagine how interesting it gets when one could actually use their hands as part of the project where they would have more control on the switches and the whole system.

Github Link: 

https://github.com/deemaalzoubi/Intro-to-IM/blob/8753e3a8fa154b92aa973ab1735085c253a33d30/week%208elbowswitch.ino

Midterm Project – Her Knockout!

Project Concept
For my midterm project, I wanted to create something that’s not only fun but also meaningful and personally significant. I wanted to focus on women’s empowerment, sports, and music—three areas I’m passionate about—and make a project that reflects these interests. My inspiration came from an essay I wrote in my FYWS class analyzing Nike’s “Dream Crazier” ad about women in sports (https://www.youtube.com/watch?v=zWfX5jeF6k4). From there, I decided to focus on MMA as the sport for my game. For the characters, I got them from Canva.

The game begins with the player choosing one of three female MMA characters, then moves into a gameplay section where the player can punch, kick, jump, and move to knock down negative comments. This is meant to symbolize the criticism and negativity that women in sports often face. But I didn’t want the project to end there; I also wanted it to educate. Inspired by typing challenge games like https://typetest.io/, I added a typing game that lets players learn MMA terminology while being entertained and challenged to type each fact correctly. This combination allows the game to be both meaningful and engaging.

How It Works
The game starts with an introduction screen where players can choose from three female MMA characters. Each character has a unique image and a short “thought” bubble that appears when clicked, adding a little personality to the selection. Once a character is chosen, the player enters the main gameplay stage. Here, you can move left and right, jump, punch, and kick to knock down flying negative comments. The goal is to clear enough comments to win, representing overcoming the negativity that women in sports often face.

To make the experience more engaging, I added an upbeat background music track that plays during the game, helping to set the energy and keep players entertained. In the typing game, I also included images for each MMA fact to make it more visual and less plain. These images were sourced from Google, and the music I used is “The Feeling (DNA Rework)” by Massano on SoundCloud. Combining visuals and audio adds another layer of immersion, making the game feel polished and lively.

After reaching a winning score, the game transitions into a typing challenge focused on MMA terminology. Players see a word or phrase with a matching image and must type it correctly. Correct answers earn points and trigger a confetti animation, while mistakes display “Wrong!”. The typing game continues until all the facts are completed, combining both entertainment and education. Through this process, I personally learned not only a lot about coding techniques like key press handling, animations, and typing effects, but also about MMA itself—its terminology, techniques, and the skill it takes to compete. The game also includes a restart function, so players can press “9” at any point to reset and start over, keeping it simple and practical.

What I’m Proud Of
I’m really proud of how I used the key press and input functions to make the game interactive and smooth. For example, movement, punching, and kicking are all tied to specific keys, which allows the gameplay to feel responsive and intuitive. I also made the negative comments move in a dynamic way, so the player has to actively engage with the game rather than just stand still.

Another thing I’m proud of is the combination of gameplay and learning. The transition from the action-based MMA game to the typing challenge was a design decision I think worked well. It keeps the player engaged, reinforces learning about the sport, and adds a sense of progression. I also like the small touches, like character thought bubbles, confetti effects, and the animated victory dance, which make the game feel lively and rewarding.

Code I’m Proud Of
I’m especially proud of this part because it implements a smooth “typing effect” for the characters’ thought bubbles:

function drawThoughtBubble(x, y, textContent, charObj) {
  textSize(width / 60);
  let padding = 20;
  let maxWidth = 300;

  // update typing
  if (charObj.displayedLength < textContent.length) {
    charObj.displayedLength += typingSpeed;
    if (charObj.displayedLength > textContent.length) charObj.displayedLength = textContent.length;
  }

  let displayText = textContent.substring(0, charObj.displayedLength);

  let words = displayText.split(' ');
  let lines = [];
  let currentLine = '';

  for (let word of words) {
    let testLine = currentLine + word + ' ';
    if (textWidth(testLine) > maxWidth) {
      lines.push(currentLine);
      currentLine = word + ' ';
    } else {
      currentLine = testLine;
    }
  }
  lines.push(currentLine);  // don't forget the last line

  // ... (the rest of the function sizes and draws the bubble and its tail)
}

Each word appears gradually using the displayedLength variable, which keeps track of how many letters of the full sentence have been shown. By increasing this number frame by frame, the text “types itself” on the screen instead of appearing all at once, creating that dynamic typing effect.

I also implemented automatic line wrapping, which means that when a line of text gets too wide for the bubble, it automatically moves to the next line. To do this, I check the width of each line using the textWidth() function. This calculates how wide a piece of text will be on the canvas in pixels, letting me break lines before they overflow the bubble.

To calculate the height of the bubble and make sure the text fits neatly, I used textAscent() and textDescent(). These functions tell you the distance the text reaches above and below the baseline, respectively, so by adding them together for each line (and including some padding), I could make the bubble the right size for any number of lines.

Finally, I added a small tail to the bubble with simple ellipse() shapes, giving it a classic comic-style speech bubble appearance. Combining all these elements was tricky but worth it. It ensures that the text always looks clean and readable on the screen, adding personality to the characters and making the game feel more polished and interactive.

Areas for Improvement
While the game works well overall, there are a few areas I’d like to improve. For one, the instructions could be clearer for the player, especially for the controls and how to transition between the punching/kicking game and the typing game. It would also be nice to give the user more options to explore, such as an on-screen button to mute the music, or the ability to navigate directly to either game from a menu without having to restart the whole experience. Adding more interactivity and choices could make the game feel less linear and more engaging, encouraging players to experiment and spend more time exploring the content.

Problems I Ran Into

One of the biggest challenges I faced was merging the different sections of the project into a single p5.js canvas. Initially, I had each part—the intro screen, the punching/kicking game, and the typing game—running in separate sketches, and I thought combining them would be straightforward. In reality, it was tricky to adjust all the positioning, scaling, and interactions so everything fit together smoothly. Because of this, I had to change the design of the typing game to make it easier to merge with the other parts. I spent a lot of time tweaking variables like character positions, text alignment, and animation effects to ensure the full project worked as one seamless game. It was a slower process than expected, but I’m proud that I managed to unify all the parts into a playable and polished experience.

(Here, I’ll include a screenshot of the previous version of the typing game to show the original design before I changed it for merging.)

Computer Vision for Artists and Designers Reading Response

Q1: What are some of the ways that computer vision differs from human vision?

Humans see in a flexible and intuitive way. We can recognize a friend even if they’re in the shadows, wearing different clothes, or drawn as a cartoon. Computers, on the other hand, are much more rigid. They need clear cues, like the right lighting, steady backgrounds, and often lots of training data, just to recognize something we would see instantly. Computers don’t bring context or common sense either. If I see someone running, I might guess they’re late or playing a sport; a computer just sees moving shapes. This difference means that in art, computer vision often works best when the artist designs the environment to make it easier for the machine to see, rather than expecting it to interpret scenes the way humans do.

Q2: What are some techniques we can use to help the computer see / track what we’re interested in? 

On the technical side, artists use things like motion detection (spotting what changes between frames), background subtraction (separating a moving person from a still background), or color filters (tracking a red ball). More advanced tools can follow body joints or estimate a skeleton, which is great for dance or performance. But beyond algorithms, the environment is just as important. If you give the system good lighting, a solid backdrop, or have participants wear bright colors, it can focus on one thing and track it much more easily. It’s less about forcing the computer to be “smart” and more about designing the whole setup so the vision works smoothly.

Q3: How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

Computer vision’s ability to track people is both a strength and a challenge for interactive art. On the positive side, tracking makes the art responsive, installations can change based on where you move, how you gesture, or even how many people are in the space. This creates a playful, engaging experience that feels alive. But because the same technology is often used for surveillance, it can also make people feel watched. That changes how the audience experiences the artwork, sometimes it’s fun, but sometimes it raises concerns about privacy. Many artists lean into this tension: some use tracking purely for interaction, while others use it to make us think critically about how surveillance works in our daily lives.