Week 10 – musical instrument

Concept

For this week’s assignment, Shahram and I made a small musical instrument using an Arduino. We decided to create a pressure-sensitive drum pad that lets you play different drum sounds depending on how hard you press it.

The main part of the project is a force-sensitive resistor (FSR) that acts as the drum pad. When you press on it, the Arduino reads how much pressure you apply and plays a tone through a small piezo buzzer. The harder you hit the pad, the longer the tone lasts, kind of like playing a real drum.

We also added a button that lets you switch between three drum sounds: a kick, a snare, and a hi-hat. So pressing the pad feels interactive, and you can change the type of drum as you play. It’s a really simple setup, but it was fun to experiment with.

Schematic

 

Video Demo

IMG_0808

const int FSR_PIN = A0;        // force-sensitive resistor (the drum pad)
const int BUTTON_PIN = 2;      // push button for switching drum sounds
const int PIEZO_PIN = 8;       // piezo buzzer output

// Drum sounds (tone frequencies in Hz)
int kickDrum = 80;             // low drum
int snareDrum = 200;           // mid drum
int hiHat = 1000;              // high drum

int currentDrum = 0;           // 0 = kick, 1 = snare, 2 = hi-hat
int lastButtonState = HIGH;    // previous reading, for edge detection

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // internal pull-up: pin reads HIGH until the button pulls it to ground
  pinMode(PIEZO_PIN, OUTPUT);
  Serial.begin(9600);                 // handy for checking raw pressure readings
}

void loop() {
  int pressure = analogRead(FSR_PIN);
  if (pressure > 20) {
    // map pressure to note length: harder hit = longer sound
    int duration = map(pressure, 20, 1023, 10, 200);
    // Play the drum sound
    if (currentDrum == 0) {
      tone(PIEZO_PIN, kickDrum, duration);
    } else if (currentDrum == 1) {
      tone(PIEZO_PIN, snareDrum, duration);
    } else {
      tone(PIEZO_PIN, hiHat, duration);
    }
    delay(50);  // brief pause so one hit doesn't retrigger
  }
  int buttonState = digitalRead(BUTTON_PIN);
  // if the button was just pressed, switch to the next drum sound
  if (buttonState == LOW && lastButtonState == HIGH) {
    currentDrum = currentDrum + 1;
    if (currentDrum > 2) {
      currentDrum = 0;
    }
    delay(200);  // crude debounce
  }
  lastButtonState = buttonState;  // store button state for edge detection
}

Future Improvements

For future improvements, we’d like to add a potentiometer for more precise control over the sound, allowing the player to adjust tone or volume in real time while drumming. We could also include LEDs that light up based on which drum sound is active and how hard the pad is hit. These additions would make the drum pad feel more dynamic and visually engaging.

 

Week 10 – reading

A Brief Rant on the Future of Interaction Design – initial article

The part of the reading that I found most interesting was the idea that we should stop limiting human interaction to just the tips of our fingers. It made me question why so many interactive designs and technologies focus almost entirely on finger-based gestures. Why not explore other parts of the body as tools for interaction?

I think part of the reason is that we, as humans, feel we have the most control over our fingers – they’re precise, sensitive, and easy to coordinate. But that doesn’t mean interaction should stop there. How could we design experiences that involve other body parts, ones that are still easy to control, but that go beyond what’s been unimaginatively repeated in current design?

The article emphasises how much of our natural ability to sense and manipulate the world through touch has been lost to “Pictures Under Glass” – flat screens that give us no tactile feedback. That really stood out to me, because it highlights how technology, despite its innovation, can sometimes strip away what makes interaction human.

Overall, the reading made me realise how limited our current designs are compared to the potential of the human body. It challenged me to imagine more creative ways to make my own projects interactive.

Responses

I appreciate his point that the iPad was revolutionary, but that if it still works the same way in 20 years, it won’t be – nothing will have changed enough for it to stay revolutionary. But what are those next steps? Mind control? Hands and arms working together with holograms? It is difficult to come up with answers or suggestions to the rants mentioned, but I also see the finger as a cap on how we interact with everyday tech objects.

Week 9 assignment

For this assignment, I decided to use the potentiometer and the button. I wanted the potentiometer to control how bright the RGB LED is, and to use the push button for two purposes: to change the overall colour of the RGB LED, and to start and stop the blinking of the red alarm LED I used.

I had the code cycle through four different colours, and made the current colour also determine how many blinks the red LED performs.

The push button therefore also acts as a way to turn the red LED on or off, since the white mode performs no blinks.

To be honest, I used the Claude Sonnet 4.5 model to give me ideas on how to make the whole system more interesting and creative. That is where the idea of having the number of blinks depend on the colour came from.

This is the code I used:

// PIN location
const int POT_PIN = A0;           // Potentiometer
const int BUTTON_PIN = 2;         // Push button 
const int RGB_RED_PIN = 9;        // RGB LED red channel (PWM)
const int RGB_GREEN_PIN = 10;     // RGB LED green channel (PWM)
const int RGB_BLUE_PIN = 11;      // RGB LED blue channel (PWM)
const int MODE_LED_PIN = 13;      // Red LED (mode indicator)

int potValue = 0;                 
int brightness = 0;               
bool buttonState = false;         
bool lastButtonState = false;     
int colorMode = 0;               
unsigned long previousMillis = 0; //  non-blocking blink timing
const long blinkInterval = 150;   // blink interval for mode indicator
bool modeLedState = false;        

const char* modeNames[] = {"White", "Red", "Green", "Blue"};

void setup() {
  Serial.begin(9600);
  
  // set pins
  pinMode(POT_PIN, INPUT);
  pinMode(BUTTON_PIN, INPUT);     // plain INPUT: assumes an external pull-down resistor (pressed = HIGH)
  pinMode(RGB_RED_PIN, OUTPUT);
  pinMode(RGB_GREEN_PIN, OUTPUT);
  pinMode(RGB_BLUE_PIN, OUTPUT);
  pinMode(MODE_LED_PIN, OUTPUT);
  
  // initialise LEDs as off
  analogWrite(RGB_RED_PIN, 0);
  analogWrite(RGB_GREEN_PIN, 0);
  analogWrite(RGB_BLUE_PIN, 0);
  digitalWrite(MODE_LED_PIN, LOW);
}

void loop() {
  // potentiometer read
  potValue = analogRead(POT_PIN);
  
  // map the values of the potentiometer to a brightness value
  brightness = map(potValue, 0, 1023, 0, 255);
  
  // button read
  buttonState = digitalRead(BUTTON_PIN);
  
  // when button is pressed, change the colour
  if (buttonState == HIGH && lastButtonState == LOW) {
    colorMode = (colorMode + 1) % 4;  //cycle through 0-3
    Serial.println(modeNames[colorMode]);
    delay(50);  // simple debounce
  }
  
  lastButtonState = buttonState;
  
  // change RGB LED based on selected colour
  switch(colorMode) {
    case 0:  //  white
      analogWrite(RGB_RED_PIN, brightness);
      analogWrite(RGB_GREEN_PIN, brightness * 0.8);  // slightly less green
      analogWrite(RGB_BLUE_PIN, brightness * 0.6);   // and less blue, for a warmer white
      break;
      
    case 1:  // red 
      analogWrite(RGB_RED_PIN, brightness);
      analogWrite(RGB_GREEN_PIN, 0);
      analogWrite(RGB_BLUE_PIN, 0);
      break;
      
    case 2:  // green 
      analogWrite(RGB_RED_PIN, 0);
      analogWrite(RGB_GREEN_PIN, brightness);
      analogWrite(RGB_BLUE_PIN, 0);
      break;
      
    case 3:  // blue 
      analogWrite(RGB_RED_PIN, 0);
      analogWrite(RGB_GREEN_PIN, 0);
      analogWrite(RGB_BLUE_PIN, brightness);
      break;
  }
  
  // Mode 0 = no blinks (off)
  // Mode 1 = 1 blink
  // Mode 2 = 2 blinks 
  // Mode 3 = 3 blinks
  handleModeIndicator();
  delay(10);  // Small delay for stability
}

// handles the mode-indicator LED: blinks once per colour mode number every 2 seconds
void handleModeIndicator() {
  unsigned long currentMillis = millis();
  static int blinkCount = 0;
  static unsigned long patternStartTime = 0;
  static bool isBlinking = false;
  
  // when white, keep LED off
  if (colorMode == 0) {
    digitalWrite(MODE_LED_PIN, LOW);
    blinkCount = 0;
    isBlinking = false;
    return;
  }
  
  // new blink pattern every 2 seconds
  if (!isBlinking && (currentMillis - patternStartTime >= 2000)) {
    isBlinking = true;
    blinkCount = 0;
    patternStartTime = currentMillis;
  }
  
  // perform blinks equal to color mode number
  if (isBlinking) {
    unsigned long timeInPattern = currentMillis - patternStartTime;
    int currentBlink = timeInPattern / (blinkInterval * 2);  // *2 for on+off
    
    if (currentBlink < colorMode) {
      // still within the blink count for this mode
      unsigned long timeInBlink = timeInPattern % (blinkInterval * 2);
      
      if (timeInBlink < blinkInterval) {
        digitalWrite(MODE_LED_PIN, HIGH);  // LED on
      } else {
        digitalWrite(MODE_LED_PIN, LOW);   // LED off
      }
    } else {
      digitalWrite(MODE_LED_PIN, LOW);
      isBlinking = false;
    }
  }
}

 

IMG_2228

 

Week 9 reading

Physical Computing’s Greatest Hits (and misses)

The article highlights that originality doesn’t always mean creating something entirely new from scratch. Many people shy away from existing physical computing ideas because they think someone else has already done them, but that’s not the point. What matters is how we, as artists and engineers, can bring our own perspective, creativity, and purpose to these ideas. The author emphasises that real creativity lies in reimagining how people interact with technology in expressive and human-centred ways.

weird switch

For my project, I was inspired by the gym’s leg adductor machine, where you have to close your legs against weight to work your inner thigh muscles. I created the circuit just like in class, but added two strips of copper tape to my inner thighs so that when my legs closed, electricity would flow through. I also added crocodile clips to extend the length of the circuit.

IMG_2157

I drew a rough schematic of how I wanted the system to work.

 

Week 8 – reading

Her Code Got Humans On The Earth

Margaret Hamilton’s story resonates with me as an aspiring software engineer, especially seeing how she navigated a world that wasn’t built for her. She brought her daughter Lauren to the lab on weekends, letting her sleep on the floor while she coded into the night. That choice wasn’t just about balancing work and family; it showed that both are achievable. It also ended up saving a mission: when Lauren accidentally crashed the Apollo simulator by pressing P01 during flight, Hamilton saw the danger immediately and warned NASA. They brushed her off, insisting astronauts were too perfect to make mistakes, and did not take her concern seriously. But during Apollo 8, astronaut Jim Lovell did exactly what Lauren had done, wiping out all the navigation data, and Hamilton and her team spent nine hours finding a fix to bring the crew home. Hamilton wasn’t just writing code; she was inventing the entire idea of software engineering in real time, creating the practices we still rely on today. Her work reminds me that the best engineers aren’t the ones who assume everything will go perfectly, but the ones who plan for when it doesn’t. Her habit of thinking through every branch of an action is what makes her an incredible software engineer.

Attractive Things Work Better

As someone studying computer science, Norman’s argument that “attractive things work better” initially felt strange, almost like permission to prioritise aesthetics over functionality. But it makes sense: good designs should balance aesthetics and usability, creating experiences that are both functional and resonant. What really resonated was his point about positive affect making us more tolerant of minor difficulties. When I’m working with tools that feel good to use, I don’t rage-quit when I hit a bug. But when I’m already stressed and the interface is terrible, every small friction angers me more. This is why critical systems, like hospital applications, should be completely simple and understandable, while something non-critical like a coffee-ordering app can afford to prioritise delight over efficiency.

However, I’m uncertain whether beauty can truly compensate for poor usability. Norman says “when we feel good, we overlook design faults,” but modern apps lean on this far too often. Apple’s system apps, from the clock to the calculator, are aesthetically beautiful but frustratingly impractical for users who need advanced features.

Still, I agree with his main point: we’re not computers evaluating products on pure utility. We’re emotional beings, and our feelings genuinely affect our performance. As engineers, we should build things that not only work but also make people feel capable and confident.

 

Midterm

Inspiration

For this project, I want to create an interactive digital art piece that explores the true scale of reality by gradually zooming from large, natural environments down to microscopic and atomic levels.

Visual Elements

Flower Screen

  • Add a tree, birds, more flowers, a grass field, and the sun for a fuller composition.

  • Include animations such as swaying grass, apples falling from the tree, and birds flying across the screen to make it feel alive.

Leaf Screen

  • Add details like insects, the stem, and a more zoomed-in view of the leaf.

  • Animate insects crawling across the surface to bring a sense of realism.

Cell Screen

  • Show multiple plant cells floating in a jelly-like substance.

  • Design them to resemble real plant cells, with more detail and fluid animation.

Atom Screen

  • Illustrate atoms with orbiting ellipses that cross over each other.

  • Show the nucleus clearly, with protons and neutrons on display.

Interaction: Zoom Functionality

  • Replace the two-finger pinch with a two-hand gesture for zooming, making it more intuitive and reducing accidental zooms.

  • Add smooth zoom animations between levels instead of abrupt page changes, to create a more immersive transition.

Sound Design

  • Integrate sounds that complement each environment:

    • Flower screen: natural ambient sounds (e.g., wind, birds).

    • Leaf screen: subtle insect sounds.

    • Cell screen: soft “jelly-like” sounds.

    • Atom screen: buzzing or electrical sounds.

  • Add a “zoom-in” sound effect to enhance transitions.

    (All sounds are sourced from Pixabay.com.)

Machine Learning

To enhance user interactivity, I incorporated machine learning using the ml5 library, which integrates well with p5.js and is relatively simple to implement. I set two thresholds, “close” and “far”, based on the distance of the user’s hands. These thresholds determine when the zooming action is triggered, making the interaction feel more natural and responsive.
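
A rough sketch of that check, using ml5’s handPose the way the Coding Train tutorial sets it up. The wrist keypoint (index 0), the 100-pixel threshold, and the call into my startZoomTransition() function are illustrative choices here, not my exact numbers:

let video;
let handPose;
let hands = [];

function preload() {
  handPose = ml5.handPose();  // load the hand-tracking model
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // run detection continuously and keep the latest results
  handPose.detectStart(video, (results) => { hands = results; });
}

function draw() {
  image(video, 0, 0);
  if (hands.length === 2) {
    // distance between the two wrists (keypoint 0), in pixels
    const d = dist(hands[0].keypoints[0].x, hands[0].keypoints[0].y,
                   hands[1].keypoints[0].x, hands[1].keypoints[0].y);
    // illustrative "close" threshold: hands close together triggers the zoom
    if (d < 100 && !isTransitioning) startZoomTransition();
  }
}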

Extra details and screenshots

I added a home page to show users the hand gestures and extra button functionalities.

Screen Recording 2025-10-07 at 00.17.22

Challenges

Coming up with creative ideas for this project was challenging, and implementing the zooming feature was especially difficult since I had never attempted it before. Getting it to work smoothly took a lot of trial and error.

These p5 sketches were helpful – https://editor.p5js.org/mimimimimi/sketches/SOkckqY_r and https://editor.p5js.org/Luxapodular/sketches/rk__bPdcm – but so was simply experimenting with the ease-in and ease-out values to make the zoom feel as natural as possible.

// ===== TRANSITIONS =====
// initiate zoom transition between scenes
function startZoomTransition() {
  isTransitioning = true;        // flag to indicate transition is active
  transitionProgress = 0;        // reset
  
  // Play zoom sound for every transition at 50% volume (if not muted)
  if (zoomSound && !isMuted) {
    zoomSound.setVolume(0.5);
    zoomSound.play();
  }
}

// update for each frame
function updateTransition() {
  if (!isTransitioning) return;  
  
  transitionProgress += 0.03;    // increment by 3% each frame 

  //check if 100% (1)
  if (transitionProgress >= 1) {
    isTransitioning = false;     // stop transition
    transitionProgress = 0;      // reset
    // wrap from the last scene back to scene 1 (index 0 is the home page)
    currentPage = currentPage === SCENES.length - 1 ? 1 : currentPage + 1;
    playSceneSound(); // Play sound for the new scene
  }
}

// applies visual zoom effect during transitions
function applyCameraTransform() {
  // create smooth easing curve: slow start, fast middle, slow end
  const easeT = transitionProgress < 0.5
    ? 4 * transitionProgress ** 3      // first half: cubic ease-in
    : 1 - (-2 * transitionProgress + 2) ** 3 / 2;  // Second half: cubic ease-out
  
  // calculate zoom level: smoothly interpolate from 1x to 100x zoom
  const zoom = lerp(1, 100, easeT);
  
  // get the target point to zoom into for current scene
  const [x, y] = SCENES[currentPage].zoomTarget;
  
  // apply camera transformation:
  translate(x, y);     // move to zoom target point
  scale(zoom);         // apply zoom scaling
  translate(-x, -y);   // move back to keep target centered
}

Final code – https://editor.p5js.org/kk4827/sketches/9CleTb6y1

Week 5 – reading

  • What are some of the ways that computer vision differs from human vision?

As humans, we are able to look at something and classify it no matter the angle or lighting. A computer, on the other hand, just sees pixels of certain colours, and we see the result as a reflection of real life. This is where machine learning gets involved: through hundreds of labelled images, with colours and patterns identified, the computer becomes able to tell what it is looking at. That is exactly how the ML model in my midterm is able to detect which hand is which and which fingers are which.

As humans, we are told what is what by our environment, and we see this come up with blind people in particular. What I see as green, someone else may not see the same way. So in that sense, we are similar.

  • What are some techniques we can use to help the computer see / track what we’re interested in?

Frame differencing – detects motion by comparing each pixel in one video frame with the corresponding pixel in the next. The difference in brightness indicates movement; this requires stable lighting and a stationary camera.
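
A rough p5.js sketch of frame differencing (my own illustration, not code from the article; the noise threshold of 30 and the trigger level of 500 pixels are arbitrary values to tune):

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  let movement = 0;
  // compare each pixel's red channel as a cheap stand-in for brightness
  for (let i = 0; i < video.pixels.length; i += 4) {
    if (abs(video.pixels[i] - prevFrame.pixels[i]) > 30) movement++;
  }
  // remember this frame for the next comparison
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
  background(movement > 500 ? 255 : 0);  // flash white when enough pixels changed
}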

Background subtraction – detects presence by comparing the current frame against a stored image of the empty scene. Areas that differ significantly likely represent objects of interest, but this is sensitive to lighting changes.

Brightness thresholding – distinguishes objects based purely on luminosity, comparing each pixel’s brightness to a threshold value. This works best when you can control illumination, for example through backlighting.

By combining these techniques, we can create more complex art representations such as contact interactions (triggering events when a silhouette touches a graphic object), overlap interactions (measuring shared pixels between silhouette and virtual elements), or reflection interactions (computing angles when objects strike the silhouette). Warren’s research shows that once you’ve identified body pixels, implementing sophisticated interactions requires “little more than counting pixels” – making computer vision accessible for creating responsive installations, games, and performance systems where participants interact with virtual creatures or control visual elements through gesture and movement.
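
As a toy example of that counting-pixels idea, an overlap interaction can be reduced to a helper like this (a hypothetical function of mine, assuming a binary silhouette image where white pixels mark the body):

// count silhouette pixels that fall inside a virtual object's bounding box
function overlapCount(silhouette, box) {
  silhouette.loadPixels();
  let count = 0;
  for (let y = box.y; y < box.y + box.h; y++) {
    for (let x = box.x; x < box.x + box.w; x++) {
      const i = 4 * (y * silhouette.width + x);  // index into the RGBA pixel array
      if (silhouette.pixels[i] > 128) count++;   // bright enough = body pixel
    }
  }
  return count;
}

// e.g. trigger an event once the body covers enough of a virtual object:
// if (overlapCount(silhouette, { x: 100, y: 80, w: 50, h: 50 }) > 200) { ... }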

  • How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

The surveillance aspect is unavoidable: computer vision in art exists in the same technological ecosystem as security systems and facial recognition.

Different artists engage with this differently. Krueger’s Videoplace uses vision technology playfully, where people willingly participate. But Lozano-Hemmer’s Standards and Double Standards explicitly creates “a condition of pure surveillance” using symbols of authority, and that visibility of surveillance is the point. Rokeby’s Sorting Daemon confronts automated profiling by making visible the disturbing implications of computer vision used for racial categorisation, using surveillance tools to critique surveillance itself. Jeremijenko’s Suicide Box is honestly very creepy to me; to record deaths and have them shown really raises questions about who has the right to see those sorts of moments.

This topic raises questions about consent and about where users’ data is stored. If I were to interact with a piece of art, should I assume that it won’t store any input I am feeding it?

Week 5 – midterm progress

So for my midterm, I want to create some form of art that uses machine learning. I want a visualisation of biology: a flower that keeps being zoomed into, all the way down to the atomic level. I want to use the ML model to detect a pinching motion, which triggers the page change.

index.html -> leaf.html -> cell.html -> atom.html

Firstly, I wanted to focus on the ML model and getting the motion detected. I used the ‘Hand Pose Detection with ml5.js’ video from The Coding Train as a foundation, and changed the parameters to detect just the right hand’s index finger and thumb.
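
The pinch check itself boils down to a distance test between two keypoints. A stripped-down version looks something like this, assuming a hands array filled by handPose.detectStart as in the tutorial (keypoints 4 and 8 are the thumb tip and index fingertip in ml5’s handPose; the 20-pixel threshold and nextPage() are stand-ins, not my exact code):

// true when the thumb tip and index fingertip are close together
function isPinching(hand) {
  const thumb = hand.keypoints[4];
  const index = hand.keypoints[8];
  return dist(thumb.x, thumb.y, index.x, index.y) < 20;
}

// inside draw(): only react to the right hand
for (let hand of hands) {
  if (hand.handedness === "Right" && isPinching(hand)) {
    nextPage();  // stand-in for whatever advances the scene
  }
}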

Currently, I have incredibly basic images for the four pages, and I will work on making them more aesthetically pleasing. The last page uses OOP principles for the several atoms and their spinning electrons.

I also want to add some sort of noise to the first three images to represent the environment you could find them in. I am also thinking of making the transition between pages represent some sort of medium between the two images.

 

class Atom {
  constructor(x, y, rotationSpeed = 0.02, innerOrbitRadius = 40, outerOrbitRadius = 60) {
    this.x = x;
    this.y = y;
    this.rotationSpeed = rotationSpeed;
    this.innerOrbitRadius = innerOrbitRadius;
    this.outerOrbitRadius = outerOrbitRadius;
    this.rotation = 0;
    this.nucleusSize = 20;
    this.electronSize = 8;
    this.outerElectronSize = 6;
  }

  // update() and display() methods (rotating and drawing the electrons) follow in the full sketch
}

 

Week 5 – reading

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

I wish Norman’s analysis went deeper into the economic incentives behind poor design. While he touches on cost-cutting measures, like companies using “the least expensive sound device” that can only beep, or single lights with confusing flash patterns instead of clear displays, he doesn’t fully address whether some companies intentionally create poor user experiences.

I suspect many businesses actually profit from user confusion. Think about subscription services with deliberately maze-like cancellation processes, or software that makes basic functions require premium upgrades (dark UI/UX patterns). Norman notes that when people struggle with technology, “the people are blamed for not understanding the machine” – but this blame-shifting can be profitable through tech support fees, extended warranties, or forcing users toward expensive premium versions. Apple is notorious for this.

Norman hints at this when discussing cost reduction forcing designers to use inadequate feedback systems, but I wish he’d been more direct about the perverse economic incentives. Sometimes bad design isn’t just oversight – it’s strategy. Companies know users will adapt and memorise workarounds rather than switch products, especially when switching costs are high.

  • How can you apply some of the author’s principles of design to interactive media?

In my interactive media and web design work, I constantly rely on Norman’s concept of discoverability. Having built apps and websites, I’ve learnt there are established conventions most users already understand – the hamburger menu, colour changes indicating clickable text, or standard navigation patterns.

These conventions work as what Norman calls “signifiers”: they provide clues about what actions are possible and where to perform them. When I use a familiar icon or follow expected layout patterns, I’m leveraging users’ existing mental models rather than forcing them to learn arbitrary new systems, in the hope of a simpler, smoother user experience.

Norman’s principle of natural mapping is also significant. Just as he advocates arranging light switches to match the pattern of the lights they control, I arrange interface elements to match users’ spatial expectations. Navigation goes where people expect it, buttons look like buttons with proper visual hierarchy, and interactive elements behave as they should.

The key insight I take from Norman is that good interactive design should feel invisible – users shouldn’t have to think about how to use it. I should be able to show my work to someone who hardly interacts with websites and be confident that they will be able to navigate it without additional instructions. If I have to add explanatory text for something simple, it’s a good indicator that I didn’t design my solution well.