Week 9 – Reading Response – Shahram Chaudhry

Physical Computing’s Greatest Hits (and misses)

I think my main takeaway from this reading is that having interactions isn't enough; what matters is that those interactions are meaningful. I liked how he pointed out that waving your hand over a sensor “has little meaning by itself.” As creative coders, I think that's such an important reminder. It's easy to get caught up in cool tech and forget the why behind our interactions. If the action doesn't hold some significance, it ends up feeling more like a CS demo than an expressive piece (no shade, I'm a CS major myself).

The part about video mirrors also really resonated. I totally agree: they're super visually engaging (who doesn't love staring at themselves?), but there's not much to do. It reminded me of our early class discussions about high vs. low interaction. Just because something responds doesn't mean it creates depth. I also think mirrors are often more reactive than interactive.

I loved the section about interactive pets, especially since I’m not really into real animals. The idea of a cuddly robot pet that behaves like a dog but doesn’t shed or poop? Count me in.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

I found this reading really refreshing because it reframes the role of the artist in interactive work in a way that feels freeing. The title, Set the Stage, Then Shut Up and Listen, is blunt, but it hits hard. It's such a shift from the traditional idea of art being about your expression, your vision, and instead saying, “Hey, I've built the framework, now let them do something with it.” That resonated with me and also reminded me of the advice Professor Mang gave us when presenting our midterm projects: letting our audience interact with our projects without any additional instructions or explanations. While it can get frustrating if the audience doesn't interact the way we expected, I think that's where we actually find room for improvement in our work. And if they don't interact in the way we expected them to, maybe it's because we didn't design with them in mind.

I agree with the idea that interactive work is the beginning of a conversation rather than the whole message. It means not trying to force people into a specific reaction, but creating space for them to explore and find their own meaning. That kind of openness can be scary, but it’s also really exciting.

I also really liked the part about using space and affordances thoughtfully. Like, if something has a handle, we naturally want to grab it. That kind of design isn’t just about aesthetics, it’s about instinct and behavior. As someone making interactive things, I think the key takeaway for me is the shift in mindset, moving away from a rigid, outcome-driven approach where I expect the audience to engage in a specific way, and instead embracing curiosity about how they actually interact. It’s less about forcing a response and more about observing what they do, learning from it, and letting that shape the work.


Week 9 – Shahram Chaudhry – The Emotional Snap

You know those days when you're going about your day, composed, unfazed, maybe a little affected by how things are going: a compliment here lifts your mood a bit, an awkward text there dims it slightly. Nothing dramatic. That's me, most of the time. Collected. Measured. But then someone says something, or does something, and boom, something inside flips. And I get triggered (only a good squash session can fix that).

That's the idea behind this project, the emotional snap: the flip from calm to intensity. The potentiometer controls a blue LED, which I chose because blue is often associated with calmness (or at least that's the association I have). The idea is: when things are calm, you're still feeling things, but softly. You turn the dial, and the blue LED glows brighter or dimmer depending on how strongly you're feeling. It's gradual and ever-changing, just like most of our emotional states.

But then there’s the toggle switch. 

When flipped UP, it triggers a red LED, one that doesn't fade in or out. It's either ON or OFF. That red LED represents those intense moments of anger, panic, etc. The contrast here is what makes the circuit special. On one hand, you have the blue LED, whose brightness gently flows with the potentiometer, like your emotional depth shifting over time. On the other, the red LED is binary, triggered by the switch, like someone pushing a very specific emotional button.

So this project is a metaphor for the way we, as humans, respond to the world around us.

The code for it:

int potPin = A0;      // potentiometer wiper
int switchPin = 2;    // toggle switch
int redLED = 9;       // digital on/off LED
int blueLED = 10;     // PWM-capable pin, so this LED can fade

void setup() {
  pinMode(redLED, OUTPUT);
  pinMode(blueLED, OUTPUT);
  pinMode(switchPin, INPUT);   // assumes an external pull-down so the pin reads LOW when the switch is open
  Serial.begin(9600);
}

void loop() {
  int potValue = analogRead(potPin);               // 0-1023
  int brightness = map(potValue, 0, 1023, 0, 255); // scale to PWM range
  int switchPosition = digitalRead(switchPin);
  if (switchPosition == HIGH) {
    //Red LED ON 
    digitalWrite(redLED, HIGH);
    analogWrite(blueLED, 0);
  } else {
    // Blue LED with analog control
    digitalWrite(redLED, LOW);
    analogWrite(blueLED, brightness);
  }
}

The schematic design:

Video Demo:

[Video: IMG_0611]

 

 

Week 8 – Shahram Chaudhry – The Pressure Within

I’ve always been fascinated by pressure sensors – the way a simple press or change in force can trigger something to happen. You see them everywhere: in home alarm systems, automatic doors, and even in those dramatic scenes from Mission Impossible, where a character steps on the wrong tile and sets off a trap. That kind of precision and sensitivity has always intrigued me. So, when I realized I could actually build one myself, I thought, why not?

The concept seemed simple enough at first: a pressure-activated switch that lights up an LED when you press on it. But I didn't want to make something tiny that would trigger with a fingertip; I wanted it to react only to real weight or force. My vision was to create a larger pressure pad, something closer to those movie sensors that go off when someone steps on them. I figured, “How hard could it be?” Spoiler: harder than I thought.

In the beginning, I tried to build everything at once: the sensor pad made from cardboard and aluminum foil layers, and the circuit on the breadboard. The problem was, when it didn't work, I had no idea why. Was it a loose connection? Or had I messed up the circuit itself? I went back and forth for a while, trying to fix both at the same time, which honestly just made it more confusing.

To make things worse, I started out using thin copper wire, thinking it would be neat and professional-looking. But those wires didn't hold contact well at all; every time I moved the board a little, the connection would break. After a few frustrating tries, I switched to jumper wires, which made testing much easier. Around that point, I decided to simplify the problem by building a smaller version first. I wanted to prove the circuit worked before investing time into rebuilding the big pressure pad.

I realized the key was to finish and test the circuit first: complete the LED and resistor setup with two extra test wires, make sure it lit up when the wires touched, and only then integrate those wires into the foil pad. Once I took that approach, the small version worked flawlessly, and the larger version came together perfectly afterward.

I first used a green LED just to test the circuit, but later I wanted the project to feel more meaningful. I chose red instead, not just because it looked like an alarm color, but because it represents the anxiety and tension that come with pressure. It's a small change, but it gave the whole project a deeper meaning.

Here’s the video demonstration of both the small and large versions of my pressure sensor project.

[Videos: IMG_0571, IMG_0580]

Shahram Chaudhry – Week 8 – Reading Response

 

Attractive Things Work Better

I found this reading surprisingly relatable (although initially, with the three teapots, I was a little confused). Norman's main point, that beauty and usability aren't opposites and can co-exist, really made me rethink how I view design. He talks about how our emotions directly affect how we perform tasks. For example, negative affect (like anxiety) actually focuses the mind, which I had never thought about before. I used to assume all anxiety was bad, but Norman explains that in situations where quick focus is needed, like immediate problem-solving, that stress can actually help.

What also stood out to me was the idea that people are more forgiving of poor design when they're in a positive mood. I've totally felt that. When I'm calm, I barely notice small glitches on Brightspace, but when I'm stressed, like when submitting an assignment at the last minute, the same delay feels ten times longer and way more frustrating.

I liked his reflection about beauty too, especially the part about how true beauty isn't just surface-level. A product can look good, but to be truly beautiful, it has to work well and make sense to use. That reminded me of how we say “beauty is in the character” for people; Norman's basically saying the same applies to design. Beauty in products has to go deeper than aesthetics; it has to come from function, usability, and how it makes us feel.

Overall, this reading made me realize that emotion is not a distraction in design, it’s actually a tool. How we feel shapes how we interact, and that’s something I’ll keep in mind whenever I evaluate or create something from now on.

Her Code Got Humans On The Moon — And Invented Software Itself

I really enjoyed this reading, especially the part where the author points out that one of the “founding fathers” of software was actually a mother. I thought that was both funny and powerful. It highlights how Margaret Hamilton broke stereotypes in a field that was (and still is) dominated by men. The story captured how she managed to fit into that environment, joking around with her colleagues and saying she was “one of the guys”, but also how she stood out because of her intelligence and persistence. What struck me most was how her higher-ups ignored her idea for error checking, insisting astronauts were “too well-trained to make mistakes.” It reminded me of a previous reading where we discussed how engineers often think so logically that they expect others to be perfect, almost machine-like. But humans aren't machines, and Hamilton proved that. When an astronaut actually made the very mistake she had warned about, it became her “I told you so” moment, except it came with nine hours of problem-solving that could've been avoided.

As a computer science major, I found it fascinating that error checking wasn't considered intuitive back then. Today, we're taught to expect mistakes and build systems that can handle them, but that mindset didn't exist yet. Hamilton's work showed that great engineering isn't just about logic; it's about anticipating imperfection, because humans are imperfect anyway.

MidTerm Project – Shahram Chaudhry

The Sketch

https://editor.p5js.org/sc9425/full/RnrYJ2fls

Concept Development and Final Concept

I originally imagined this project more like a game, where users would have a limited time to quickly label memories as good or bad, and discard the bad ones to “win.” The goal was simple: clean up the mental space by getting rid of what weighs us down. But as I worked on it more, especially while curating the kinds of memories to include, my perspective started to shift.

I realized memories aren't always black or white. They're messy, layered, and often emotionally ambiguous. A single moment can carry joy and pain, nostalgia and regret. So the project evolved. Rather than forcing users to judge a memory under a timer, I wanted to create a quieter, more reflective experience: one where the user has gentle control to reveal, sit with, or discard memories at their own pace.

For instance, I studied abroad in Paris and found it magical: exploring the city, trying new foods, feeling independent. But I recently came across a post by someone who had a completely different experience there. They couldn’t afford daily subway rides, had to walk 6.5 kilometers to class, and got by on snacks. For them, Paris wasn’t the city of love, it was a daily struggle. That contrast stuck with me. Same place, completely different emotional weight. And that’s what Mind Palace became about: subjective memories, and giving people space to decide what they mean and what to do with them.

In terms of the UI, I think I made meaningful improvements during development. Initially, I had a simpler design with a pink color scheme, thinking it would naturally signify the brain or mind because that’s the color of the brain icon. However, when I showed it to classmates, several of them were confused about what it represented. Based on that feedback, I decided to pivot. I found an actual artistic image of a brain online that better communicated the theme, and I reduced its transparency so it wouldn’t overpower the rest of the experience. This way, the background sets the mood and context without distracting from the interactive elements.

The previous design was:


The final design:

How It Works

The Mind Palace starts with a simple instruction screen. Once the user clicks to begin, memories, represented as floating film icons (an image often associated with memories), gently drift across the screen.

The user interacts using just their index finger, tracked by the webcam. Initially, I had a gesture (open palm) to reveal a memory, but after feedback in class, I realized it felt a bit unintuitive. So I simplified it: now just hovering over a memory for 2 seconds reveals it. That made the interaction smoother and avoided asking users to remember too many gestures.

Once a memory is revealed and the user has had a chance to read it, they can discard it with a thumbs-down gesture. I made sure users can't discard a memory just by hovering over it before it's revealed, because then they'd just be discarding random memories. To make the gesture recognition more robust and avoid accidental deletion, I also required the thumbs-down gesture to be held for a second, so it wouldn't trigger accidentally.

For resetting the experience, I originally thought about using an “OK” gesture, like saying “I'm done.” But since reset is a pretty major action, and misfires could be annoying, I decided to keep it simple: users just press the Escape key. It also felt kind of full circle: they press a button to enter and a key to exit. Throughout, I focused on keeping things intuitive and reflective, giving the user space to engage with each memory calmly, without rushing.
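Under the hood, the hover-to-reveal and hold-to-discard rules are the same pattern: a condition has to stay true for a set time before it fires. Here's a minimal p5.js sketch of that timing logic (makeDwellTrigger, hoveredMemory, discardMemory, and resetExperience are illustrative names, not my exact implementation; isThumbsDown is the gesture check shown later in this post):

// A reusable dwell trigger: the condition must stay true for holdMs
// milliseconds before it fires, and it fires once per continuous hold.
function makeDwellTrigger(holdMs) {
  let startedAt = null; // when the condition first became true
  let fired = false;    // whether we already fired for this hold
  return (conditionIsTrue) => {
    if (!conditionIsTrue) {
      startedAt = null; // condition broken: reset the timer
      fired = false;
      return false;
    }
    if (startedAt === null) startedAt = millis();
    if (!fired && millis() - startedAt >= holdMs) {
      fired = true;
      return true;
    }
    return false;
  };
}

const revealTrigger = makeDwellTrigger(2000);  // hover 2 s to reveal
const discardTrigger = makeDwellTrigger(1000); // hold thumbs-down 1 s

let hoveredMemory = null; // set by the hand-tracking code (illustrative)
let landmarks = null;     // latest hand landmarks (illustrative)

function draw() {
  // ...hand tracking updates landmarks and hoveredMemory here...
  if (revealTrigger(hoveredMemory !== null)) {
    hoveredMemory.revealed = true;
  }
  if (discardTrigger(hoveredMemory && hoveredMemory.revealed &&
                     landmarks && isThumbsDown(landmarks))) {
    discardMemory(hoveredMemory); // illustrative helper
  }
}

// Escape is the only keyboard action: reset the whole experience
function keyPressed() {
  if (keyCode === ESCAPE) resetExperience(); // illustrative helper
}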

Each memory is intentionally ambiguous. For example: “The last message I never replied to”

This could evoke very different emotions depending on the person engaging with it. For some, it might feel empowering: a sign of setting boundaries, moving on, or finally letting go of something that no longer serves them. For others, it might bring up guilt, anxiety, or a lingering sense of “what if.” That's the heart of the project: recognizing that memories aren't fixed in meaning. What feels like healing to one person might feel like avoidance to another. By keeping the memories vague yet emotionally charged, I encourage reflection, allowing each user to project their own story onto them.

I’m especially proud of implementing gesture recognition. It’s something I’d seen at IM showcases before, but I didn’t think I’d be able to do it myself. Understanding hand landmarks and translating them into reliable, smooth gestures took time, but I managed to make it functional and fairly intuitive. Here’s the core gesture logic I used:

// Landmark indices follow the 21-point hand model:
// 0 = wrist, 4 = thumb tip, 8/12/16/20 = fingertips.
// Coordinates are normalized, with y increasing downward.
function isThumbsDown(landmarks) {
  const thumbTip = landmarks[4];
  const wrist = landmarks[0];
  return (
    thumbTip.y > wrist.y &&        // thumb below the wrist
    !isFingerUp(landmarks, 8) &&   // index curled
    !isFingerUp(landmarks, 12) &&  // middle curled
    !isFingerUp(landmarks, 16) &&  // ring curled
    !isFingerUp(landmarks, 20)     // pinky curled
  );
}

// A finger counts as "up" if its tip sits clearly above its middle joint
function isFingerUp(landmarks, tipIndex) {
  const midIndex = tipIndex - 2;
  return (landmarks[midIndex].y - landmarks[tipIndex].y) > 0.05;
}

I also made some simple but thoughtful design choices, like placing the webcam feed at the top so users can always see if they're in frame. That helped during testing and made the interaction clearer.

Challenges and Improvements

Gesture recognition was a big concern for me. It's surprisingly tricky to get right: too strict, and gestures feel frustrating to perform (and even to code); too loose, and false positives ruin the experience. One major challenge was simply understanding the hand landmark system. There are 21 tracked points per hand, and it took a while to learn which ones corresponded to each finger joint and how to use them meaningfully in gesture logic.

At first, I tried more complex calculations for gestures, but it quickly stopped feeling intuitive. Users had to “perform” gestures perfectly, and the experience lost its flow. Now I’ve simplified it: instead of complicated checks, I just use the thumb and index finger landmarks in straightforward ways, plus a timing delay. For example, the thumbs-down gesture only triggers if it’s held for one full second. This makes it much harder for it to fire accidentally while still keeping the interaction easy and natural for users.

Another improvement would be adding variety, either by generating new memory phrases dynamically or letting users add their own. Right now, the memory list is static. Adding this level of customization could make each user's Mind Palace feel more personal. I also think sound effects tied to each gesture (reveal, discard, reset) would enhance immersion and make the interactions feel more responsive.

 



Week 5 – Reading Response – Shahram Chaudhry

One thing that really stood out to me from this week's reading is how different computer vision is from human vision. We take it for granted that we can look at a scene and instantly make sense of it. We can tell if it's day or night, if there's someone in the frame, if they're walking or just waving, all without thinking. But to a computer, a video is just a bunch of colored pixels with no meaning. It doesn't “know” what a person or object is unless we explicitly program it to. There are several techniques that help computers track motion. For example, frame differencing, which compares two frames and highlights what changed, could help detect someone walking across a room, while background subtraction reveals new people or objects that appear. These sound simple, but they're super powerful in interactive media.
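To make that concrete, here's a rough frame-differencing sketch in p5.js (my own illustration, not code from the reading): it compares each webcam frame to the previous one, paints changed pixels white, and sums the differences into a crude motion signal.

let video;
let prevFrame; // pixel array from the previous frame

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);              // keep canvas pixels 1:1 with video pixels
  video = createCapture(VIDEO); // webcam input
  video.size(320, 240);
  video.hide();
}

function draw() {
  video.loadPixels();
  if (prevFrame) {
    let motion = 0;
    loadPixels();
    for (let i = 0; i < video.pixels.length; i += 4) {
      // Red-channel difference as a cheap stand-in for brightness change
      const diff = Math.abs(video.pixels[i] - prevFrame[i]);
      motion += diff;
      // Changed pixels show white, static ones black
      const v = diff > 30 ? 255 : 0;
      pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
      pixels[i + 3] = 255;
    }
    updatePixels();
    // motion is now a rough "how much movement happened" number
  }
  prevFrame = video.pixels.slice(); // remember this frame for next time
}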

What makes this especially interesting is how computer vision's ability to track things brings up both playful and serious possibilities. On one hand, it's fun: you can build games that react to your body like a mirror or let users move objects just by waving. But on the other hand, it opens doors to surveillance and profiling. Installations like The Sorting Daemon use computer vision not just to interact, but to critique how technology can be used for control. Or take the Suicide Box, which supposedly tracked suicides at the Golden Gate Bridge. It made me wonder: did it actually alert authorities when that happened, or was it just silently recording? That blurred line between passive tracking and ethical responsibility is something artists can explore in powerful ways.

Also, while humans can interpret scenes holistically and adapt to new contexts or poor lighting, computer vision systems tend to be fragile. If the lighting is off, or the background is too similar to a person's clothes, the system might fail. No algorithm is general enough to work in all cases; it has to be tailored to specific tasks. We process thousands of images and scenes every day without even trying. For a machine to do the same, I'm assuming it would need countless hours (or even years) of training. Nevertheless, clever engineering and artistic intuition mean that we can still make good interactive art with the current state of computer vision.



Week 5 – Midterm Progress

For my midterm, I knew I wanted to incorporate a machine learning library, specifically for gesture recognition. I initially explored building a touchless checkout interface where users could add items to a cart using hand gestures. However, I realized the idea lacked creativity and emotional depth.

I've since pivoted to a more expressive concept: a Mind Palace Experience (not quite a game), where symbolic “memories” float around the screen – some good, some bad. The user interacts with these memories using gestures: revealing, moving, or discarding them. The experience lets users metaphorically navigate someone's inner world and discard unwanted memories, ideally the painful ones. Here's a basic canvas sketch of what the UI could look like.

At this stage, I've focused on building and testing the gesture recognition system using Handsfree.js. The core gestures (index-finger point, pinch, open palm, and thumbs down) are working and will be mapped to interaction logic as I build out the UI and narrative elements next.

The code for the different gestures:

// Landmarks use the 21-point hand model: 0 = wrist, 4 = thumb tip,
// 8 = index tip, 12 = middle tip, 16 = ring tip, 20 = pinky tip.
// Coordinates are normalized, with y increasing downward.

function isPinching(landmarks) {
  const thumbTip = landmarks[4];
  const indexTip = landmarks[8];
  const d = dist(thumbTip.x, thumbTip.y, indexTip.x, indexTip.y);
  return d < 0.05; // thumb and index tips nearly touching
}

function isThumbsDown(landmarks) {
  const thumbTip = landmarks[4];
  const wrist = landmarks[0];
  return (
    thumbTip.y > wrist.y &&        // thumb below the wrist
    !isFingerUp(landmarks, 8) &&   // all other fingers curled
    !isFingerUp(landmarks, 12) &&
    !isFingerUp(landmarks, 16) &&
    !isFingerUp(landmarks, 20)
  );
}

function isOpenPalm(landmarks) {
  return (
    isFingerUp(landmarks, 8) &&    // all four fingers extended
    isFingerUp(landmarks, 12) &&
    isFingerUp(landmarks, 16) &&
    isFingerUp(landmarks, 20)
  );
}

// A finger is "up" if its tip sits clearly above its middle joint
function isFingerUp(landmarks, tipIndex) {
  const midIndex = tipIndex - 2;
  return (landmarks[midIndex].y - landmarks[tipIndex].y) > 0.05;
}

The sketch link:

https://editor.p5js.org/sc9425/full/n6d_9QDTg

Week 4 – Shahram Chaudhry – Reading Response

As a senior actively applying to jobs, one thing that really drives me crazy is Workday applications. I'll spend time filling out a long online application, only for the site to suddenly prompt me (after I click “Next”) to log in or create an account. Sometimes it even realizes I already have an account and redirects me to a different version where I'm signed in. The logical solution would be to sign up or log in first, but the problem is that not all applications require it upfront. Some let you start without logging in, others ask midway, and many don't make it clear at all. This inconsistency forces me to guess the “correct” order of actions every time, which becomes especially frustrating when data gets lost in the process. A better approach would be a more consistent and user-friendly UX design: clearly prompting users at the start to sign in, continue an existing application, or proceed as a guest, with visible consequences for each. Even more importantly, any entered information should be temporarily saved in local storage or a cache, so if a redirect occurs, my progress isn't wiped out. Small design choices like these can make a huge difference in preserving user effort and trust.
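To sketch what I mean (the form id and storage key here are hypothetical, not Workday's actual code), a page could save a draft of the form to localStorage on every change and restore it after a redirect:

// Persist form drafts so a login redirect can't wipe the user's progress
window.addEventListener('DOMContentLoaded', () => {
  const form = document.querySelector('#application-form'); // assumed id
  if (!form) return;

  // Restore any saved draft from before the redirect
  const saved = localStorage.getItem('applicationDraft');
  if (saved) {
    for (const [name, value] of Object.entries(JSON.parse(saved))) {
      const field = form.elements[name];
      if (field) field.value = value;
    }
  }

  // Save the draft on every keystroke or change
  form.addEventListener('input', () => {
    const draft = Object.fromEntries(new FormData(form));
    localStorage.setItem('applicationDraft', JSON.stringify(draft));
  });
});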

As for the reading, it was a fun and informative read. I used to think interaction design and experience design were basically interchangeable. But now I understand that interaction design is more about usability, discoverability, and understanding, while experience design is about the emotional impact and satisfaction we feel when using something. That distinction really clicked for me. I also loved the discussion about how engineers think logically and assume everyone else does too (or at least should), when systems actually need to be intuitive, not just logical. Just because the engineers wrote a manual, and the logical step for users is to read the manual before using the system, doesn't mean they will. So it's the designers' and engineers' job to understand their audience and design accordingly. Humans shouldn't have to learn all the different rules that different machines and systems impose. I can see my microwave has at least 10 different buttons, but I always use the same settings. It's not that I don't want more options; it's that the way they're presented makes them hard to understand. This is where Norman's principles of discoverability and understanding come in. If something is hard to discover or hard to understand, we just stop using it.

In interactive media, principles like affordances, signifiers, and feedback can greatly improve how users interact with a piece. Affordances aren't just properties of an object, but also depend on what the user is capable of doing. A heavy chair might afford sitting, and maybe lifting, depending on who is interacting with it. That highlights how important it is to design with the user in mind, not just the object. For example, a scrollable layout that hints at more content affords downward movement, a pulsing microphone icon acts as a signifier that it's listening for input, and a search bar that updates results in real time offers immediate feedback. These small design choices guide, inform, and respond to user actions in intuitive ways. I applied these ideas in my own p5.js generative art sketch. I included a placeholder that says “Enter a word” to clearly signal that users need to type something in the input box. Once the user presses Enter, the word appears instantly on screen, providing immediate feedback.
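For reference, that placeholder-plus-feedback pattern takes only a few lines in p5.js (a simplified sketch of the idea, not my full generative-art code):

let inp;
let word = '';

function setup() {
  createCanvas(400, 200);
  inp = createInput('');
  inp.attribute('placeholder', 'Enter a word'); // signifier: tells users what to do
}

function keyPressed() {
  if (keyCode === ENTER) {
    word = inp.value(); // read the typed word
    inp.value('');      // clear the box for the next entry
  }
}

function draw() {
  background(240);
  fill(0);
  textAlign(CENTER, CENTER);
  textSize(32);
  text(word, width / 2, height / 2); // immediate feedback on Enter
}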



Week 4 – Shahram Chaudhry – Antakshari

This project started with a simple yet poetic idea: visualizing lines from old Urdu poetry on screen, placing the main verse at the center and surrounding it with words from the next line, like floating thoughts. But then I began replacing those with some of my favorite Urdu song lyrics, the kind I really connect with, trying to merge different verses across songs to create something emotionally layered. That's when I was reminded of Antakshari, a fun game many of us grew up playing. The rules are simple: one person sings a song, and the next person has to sing another song that starts with the last letter of the previous one. I loved the idea of using that structure as the basis for generative text.

I initially wanted to build this fully in Urdu, but I realized it would be tricky to detect and match the last letter accurately in the script. So I switched to English to get the logic working smoothly. Instead of full lyrics, I used individual words, and for each word, the next one is chosen randomly from a list of possible options starting with the last letter. That means each run of the program generates a unique chain of words, but with a meaningful constraint, just like Antakshari. I also added visual flair: a short delay between words, small random rotations, and scaling for different text sizes, so it wouldn't look like a rigid grid and would feel a little unpredictable.

The two functions placeWord() and pickNextWord() are the heart of my sketch. I was particularly proud of how I used my prior knowledge of dictionaries to implement the words dictionary, which acts as a lookup system: it takes the last letter of the previous word and returns a list of possible next words starting with that letter. I then randomly select one, creating branching possibilities that make the output different every time.

In placeWord(), I figured out how to make each word feel organic and unique by adding randomness to its angle and size using random(-PI/16, PI/16) and a scaling factor. I also calculated the text width dynamically so I could position the next word without overlapping the previous one. These choices made the flow of words appear natural.

function placeWord(word) {
  let fontOriginalSize = 24;
  let wordMargin = 8; 

  let angle = random(-PI / 16, PI / 16);
  let scaleFactor = random(0.9, 1.3);
  //find color from colormap
  let firstLetter = word.charAt(0).toLowerCase();
  let wordColor = colorMap[firstLetter];
  
  
  textSize(fontOriginalSize * scaleFactor);
  let wordWidth = textWidth(word);

  placedWords.push({
    word: word,
    x: gridX,
    y: gridY + random(-3, 3),
    size: fontOriginalSize * scaleFactor,
    angle: angle,
    color: wordColor
  });
  
  //move the horizontal position to right for next word
  gridX += wordWidth + wordMargin;
  
  //starts a new line/row
  if (gridX > width - 100) {
    gridX = 50;
    gridY += gridSpacing;
  }
  //canvas not filled checked by seeing if we reach bottom or not
  if (autoRun && gridY < height - 50) {
    pickNextWord();
    lastPlacedWord = nextWord;
    setTimeout(() => placeWord(nextWord), 150);
  }
}
function pickNextWord() {
  let lastLetter = lastPlacedWord.charAt(lastPlacedWord.length - 1).toLowerCase();
  //find the candidate words using lastletter and choose a random word
  let candidates = words[lastLetter];
  nextWord = random(candidates);
}

Here’s the sketch:

Eventually, I'd love to go back to my original vision and bring Urdu fully into the experience. That would mean figuring out last-letter detection in Urdu script and possibly integrating a calligraphic font to preserve the beauty of the language. If I can pull that off, the result would be a truly generative Urdu Antakshari: a perfect blend of nostalgia, music, and generative text.

 



Week 3 – Reading Response

A strongly interactive system, in my view, must first be intuitive. This doesn't mean it has to be entirely self-explanatory, but if users need several pages of instructions just to figure out how to interact, the design has already failed. I mean, who is even going to read two pages of a manual these days, right? The interaction should feel discoverable through exploration.

As Crawford explains, any interaction has three parts: listening, thinking, and speaking. If even one of these elements is weak, the entire interaction becomes unsatisfying. I think that in creative computing or digital art we often overemphasize the input (listening) and the visuals (speaking) but ignore the algorithm or logic (thinking). The system should have enough diversity in its logic to produce interesting, varied outcomes rather than repeating the same response every time. Take Super Mario as an example. Imagine a version where you could only press the spacebar to jump, and Mario never actually moved; only the background scrolled to simulate motion. Technically, that would still be “interactive,” but it would quickly become boring. A good interactive system lets you achieve multiple outcomes: move left, move right, jump, pause, and even fail in different ways, giving players control. This diversity of interaction is what makes it engaging.

I also think interaction should strike a balance between predictability and surprise, as we've discussed in class. Too much predictability leads to boredom; for instance, a game with only one level becomes dull after you memorize where every obstacle is. But if an interaction is completely chaotic, it becomes unintuitive and confusing. The balance is a design where users can learn patterns but still encounter enough new elements to stay curious.

Reflecting on my own p5 sketches, I realize I rely heavily on mouse clicks, which makes them feel reactive rather than truly interactive. To improve, I could add keyboard controls, use microphone input for both visual and audio effects, and introduce randomness so every run feels slightly different. Immediate feedback, through animations or color changes, would also make interactions more engaging. Finally, asking friends to try the sketches and improving them based on their feedback would help me design from the user's perspective, not just my own.
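For example, with the p5.sound library loaded, mapping microphone level to a visual takes only a few lines (a minimal sketch; browsers may require a click on the page before audio input starts):

let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn(); // p5.sound microphone input
  mic.start();
}

function draw() {
  background(20);
  fill(255);
  const level = mic.getLevel();          // amplitude from 0.0 to ~1.0
  const d = map(level, 0, 0.3, 20, 300); // louder sound, bigger circle
  circle(width / 2, height / 2, d);
}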