Week 10 Reading Response

This reading made me rethink what “future technology” really means. I used to think of futuristic interfaces as things like touchscreens or holograms, but the author argues that these are actually limiting, that “Pictures Under Glass” remove the tactile richness that makes human interaction powerful. The way he describes how our hands feel and manipulate objects made me realize how much design has ignored our most natural abilities. We’ve gotten so used to flat screens that we’ve forgotten how much more expressive and intuitive touch and movement can be.

What stood out most to me was the idea that technology should amplify human capabilities, not replace or dull them. The examples about picking up a book or tying your shoes really drove this home. Those actions are effortless because they’re built on feedback, texture, and motion. It made me think about how most digital interfaces feel numb and disconnected by comparison.

I also liked how the author ended with a kind of challenge that the future is a choice. It made me see that innovation shouldn’t just be about what’s new, but about what’s better for human experience. As someone interested in design and tech, this made me want to think beyond screens and imagine tools that respond to our hands, our bodies, and the way we naturally move through the world.

Week 10: Group Project “NYUAD DJ Kit”

Main Concept:

The main concept for our group project is a DJ kit, since we wanted to experience what it feels like to be a DJ. A DJ handles many sounds and instruments, using their unique artistic skills to create music that makes people happy and excited. So we crafted this device, the “NYUAD DJ Kit.” With it, you can switch between different songs, vary their speed, and add a bass beat produced by a wooden stick. This is a unique way to compose new kinds of songs as a DJ.

Demonstration Video

Schematic:

Code we’re particularly proud of:

The part of the code we’re most proud of is shown below. The if/else statement allows us to move to the next song and play it. When the button is pressed, meaning the pin reads LOW, we set buttonPressed to true and noteIndex to 0 so that the song plays from the beginning. We also used the modulo operator to ensure that we always wrap back to the first song after the last one. The else if branch resets buttonPressed to false, so that the next time we press the button, it plays the next song.

//handle music switching using modulo
if (digitalRead(BUTTON_PIN) == LOW && !buttonPressed) {
  buttonPressed = true;
  //move to the next song, wrapping back to song 0 after the last one
  currentSong = (currentSong + 1) % 3;
  //reset the note index so the song plays from the beginning
  noteIndex = 0;
  isPlaying = false;
  noTone(BUZZER_PIN);
  delay(250); //250 ms debounce so one press registers only once
} else if (digitalRead(BUTTON_PIN) == HIGH) {
  //reset buttonPressed so the next press advances to the next song
  buttonPressed = false;
}

The second snippet of code moves the servo every servoDelay milliseconds, controlling its speed and angle. We applied the “non-blocking” concept we learned in class to ensure that this operation does not hold up the rest of the program. Inside the if statement, we use the write() function from the Servo library to set the servo’s angle each time it runs. The angle changes by servoStep each time servoDelay milliseconds have passed, and the direction reverses whenever the servo reaches 0° or 180°. We’re happy that we were able to apply multiple concepts from class, such as non-blocking timing and the modulo operator, to our DJ project.

As references, we used the ToneMelody example from the Digital section and the Knob example from the Servo library. We also used ChatGPT to help us figure out how to apply the non-blocking concept so that the melody can keep advancing without blocking the rest of the program, which lets the servo continue moving smoothly.

//Use non-blocking timing so the servo update doesn't stall the rest of the program
if (currentTime - lastServoUpdate >= servoDelay) { //if servoDelay milliseconds have passed
  lastServoUpdate = currentTime;
  //Move the servo to the current angle
  myservo.write(servoPos);
  servoPos += servoStep;
  //Reverse direction when the servo reaches 0 or 180 degrees
  if (servoPos >= 180 || servoPos <= 0) servoStep = -servoStep;
}

Reflections & Future Improvements:

In terms of reflections, we struggled a lot to make the bass mechanism work because we needed to attach the wooden stick to the servo, and at first it was not stable at all. We attached it with tape, which was the primary reason it was unstable. As a result, every time we ran the servo fast, the stick fell off or the servo stopped working for some reason. We eventually managed to stabilize the stick and servo by placing some weight on top of the setup, so that no matter how fast the servo moved, the stick stayed in place.

As for future improvements, we want to enhance the quality of the bass, because right now we’re just using a wooden stick, and it doesn’t produce a loud enough sound for a party situation. Furthermore, as the stick swings faster, its swing range becomes smaller, so we have to move the bottle manually so the stick can still reach it. We believe this happens because servoDelay becomes too small, reaching about 1 ms, so the servo can’t physically keep up. Therefore, next time we should use constrain() on the mapped value so that servoDelay never drops below a minimum the servo can physically follow, and so electrical noise on the analog reading can’t push the value out of range. This way, the servo can keep up with the speed that we want.
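
A minimal sketch of what that fix could look like, assuming the speed is set by a potentiometer on A0 (the pin and the 15–50 ms range are placeholders):

//Sketch of the planned fix: the pin and delay range are placeholders
int potValue = analogRead(A0);
//Map the reading to an update interval: turning the knob up shortens the delay
int servoDelay = map(potValue, 0, 1023, 50, 15);
//Clamp so noise can never push servoDelay below what the servo can follow
servoDelay = constrain(servoDelay, 15, 50);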

Week 9 Assignment

Concept

This project uses one digital sensor (a toggle switch) and one analog sensor (an LDR) to control two LEDs. The toggle switch turns a green LED fully on or off, and the LDR controls the brightness of a yellow LED. The darker it gets, the brighter the LED becomes.

Demo

Schematic

Code Snippet
void loop() {
  // Switch controls green LED
  int switchState = digitalRead(switchPin);
  if (switchState == LOW) {
    digitalWrite(greenLED, HIGH);
  } else {
    digitalWrite(greenLED, LOW);
  }

  // Analog part: light sensor controls yellow LED brightness
  int lightVal = analogRead(ldrPin);
  int brightness = map(lightVal, 0, 1023, 255, 0); // darker room = brighter LED
  Serial.println(brightness);
  analogWrite(yellowLED, brightness);
}
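
For context, here are the declarations and setup() this snippet assumes; the exact pin numbers are placeholders:

// Assumed wiring; the pin numbers are placeholders
const int switchPin = 2;   // toggle switch to ground, using the internal pull-up
const int greenLED = 9;
const int yellowLED = 10;  // PWM-capable pin so analogWrite() works
const int ldrPin = A0;     // LDR voltage divider into an analog pin

void setup() {
  pinMode(switchPin, INPUT_PULLUP); // reads LOW when the switch is on
  pinMode(greenLED, OUTPUT);
  pinMode(yellowLED, OUTPUT);
  Serial.begin(9600);
}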
Reflection and Future Improvements

This project helped me clearly see the difference between analog and digital inputs: the LDR smoothly controls LED brightness, while the switch simply turns the other LED on or off. Moving forward, I’d like to try adding more outputs, like a buzzer or an RGB LED, to make the reactions more expressive.

Week 9 Reading Reflections

Physical Computing’s Greatest Hits and Misses

Reading “Physical Computing’s Greatest Hits” made me realize that I don’t always have to invent something completely new. I learned that common projects are “hits” for a good reason. They are popular because they create a very clear and intuitive link between what a person does and what happens next. It showed me that these projects are powerful building blocks for learning, not just simple exercises to be done once and forgotten.

This article changed how I think about my own creative work. I used to worry a lot about being totally original, but now I see it’s more important to focus on the feeling or experience I’m creating. It’s okay to use a familiar idea, like a glove that senses movement, as long as I use it in a way that feels surprising or meaningful. It’s a relief to know that the human experience is more important than the technology itself.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

The main lesson I learned from “Making Interactive Art” is that my job isn’t to tell the audience what to think or do. My first instinct is usually to explain everything so people “get it,” but this reading showed me that’s the wrong way. Instead, my role is to “set the stage” and to provide the space, the objects, and maybe a few hints. The real art isn’t just the thing I built. It’s the experience that people have when they explore it for themselves.

This approach means I have to “shut up and listen,” which requires a lot of trust. I have to trust that my design gives enough clues and trust that the audience will be creative. It helped me see that when people use my project in a way I didn’t expect, it’s not a failure, it’s a discovery. This makes the audience a partner in the art.

Unusual Switch

Concept

For my unusual switch, I wanted to use a light sensor to create a simple “password” that isn’t typed or pressed in the usual way. Instead of a button, the user has to shine a flashlight on the sensor in a specific pattern to activate the LED. The idea is inspired by a Morse-code-style signal: the user must turn the light on for a set amount of time, then off for a set amount of time, and repeat this sequence to successfully trigger the switch. This makes the interaction more playful and unusual, relying on timing and attention rather than direct contact.

How it works

The project uses an LDR to detect light intensity and an LED as the output indicator. The Arduino reads the LDR value and compares it to a threshold to decide if the light is “on” or “off.” The user must follow the correct timing sequence: light on for three seconds, then off for three seconds, repeated three times. If the pattern is completed correctly, the green LED turns on for a few seconds to indicate success.

GitHub: https://github.com/MohdAlkhatib/introtoim/blob/main/sketch.c

void loop() {
  int success = 0;

  for (int i = 0; i < 3; i++) {
    if (analogRead(ldrPin) > lightThreshold) {
      delay(3000); // light on for 3 seconds
      if (analogRead(ldrPin) < lightThreshold) {
        success++;
        delay(3000); // light off for 3 seconds
      }
    }
    delay(500); // brief pause between attempts
  }

  // if the user does the signal correctly 3 times in a row, turn on the LED
  if (success == 3) {
    digitalWrite(greenLED, HIGH);
    delay(3000);
    digitalWrite(greenLED, LOW);
  }

  delay(1000); // short pause before restarting
}
Future Improvements

In the future, I could make the switch more flexible by allowing different light patterns as “passwords” or adjusting the timing for each sequence. I could also add multiple LEDs or even a small sound to give feedback for each correct or incorrect step.

Week 8 Reading Reflection

Norman, “Emotion & Design: Attractive Things Work Better”

This reading really made me think about how design doesn’t just shape what we see, but also how we feel and act. Norman’s main idea is that attractive things actually work better because they put us in a good mood. When we feel happy or calm, we think more clearly and creatively, but when we’re stressed or frustrated, we make more mistakes. It’s interesting how something as simple as how a product looks can change the way our brains approach problems.

What I found most interesting is how emotion can affect performance. I’ve definitely felt that before. When something looks well-designed, I automatically assume it’s easier to use, and I’m less frustrated when it isn’t.

This idea made me think differently about my own projects. I usually focus on getting the function right, but Norman reminded me that appearance and emotional impact are just as important. A design that feels good to use can make people more patient, creative, and willing to explore it, which is exactly what good design should do.

Her Code Got Humans on the Moon

This article made me realize how much impact one person’s focus and persistence can have. Margaret Hamilton led the team that wrote the software for the Apollo moon missions, at a time when very few women were in computer science. What stood out to me was how seriously she treated her work, seeing programming not just as a task but as a new discipline worth perfecting. She built systems that could handle errors before they even happened, thinking carefully about every possible failure. While others dismissed her concerns, she stayed confident and made sure the software could handle anything that might go wrong.

The part that impressed me most was when she wanted to include extra code for a mistake that everyone said would never happen, and later it did during Apollo 11. Because of her planning, the astronauts were able to land safely. That moment showed how much preparation and humility matter in great work. Hamilton’s story made me reflect on how important it is to imagine every possibility, even the unlikely ones, and to stay confident in your ideas even when others doubt you. It reminded me that success often comes from quiet persistence and from caring enough to get the details right.

Midterm Report

My Concept

For the midterm, I created Highway Havoc, a fast-paced driving game where the player must weave through incoming traffic and avoid collisions while maintaining control at high speeds. My goal was to build an arcade-style driving experience that feels alive, unpredictable, and immersive. Cars rush toward you, radar traps flash if you’re speeding, and the scrolling environment creates a sense of depth and motion.

To make the gameplay feel as close to real highway driving as possible, I implemented several interactive and behavioral features:

  • Dynamic traffic flow: Cars and buses spawn randomly, sometimes creating congested traffic and other times leaving open gaps.
  • Lane-based speeds: Vehicles on the left lanes drive faster than those on the right, mimicking real traffic patterns.
  • Adaptive driving: Vehicles automatically slow down and match the speed of the car ahead to prevent crashes.
  • Autonomous behavior: Vehicles occasionally use turn signals for three seconds and switch lanes when the adjacent lane is clear of other NPC cars. They can still crash into the player to make the game more challenging.
  • Reactive NPCs: There’s a 10% chance that a vehicle will get “spooked” and change lanes when the player flashes their headlights (a simplified sketch follows this list).
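
A simplified sketch of that spook check inside the enemy car’s update logic; the method and property names here are illustrative, not the exact ones from my code:

// Inside the EnemyCar class — names are illustrative
reactToHeadlights(player) {
  // Only react to a flash from close behind in the same lane
  if (player.flashing && this.lane === player.lane &&
      this.y < player.y && player.y - this.y < 250) {
    if (random() < 0.1 && this.adjacentLaneClear()) { // 10% chance to get "spooked"
      this.changeLane();
    }
  }
}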

Website: https://mohdalkhatib.github.io/highwayhavoc/

Embedded Sketch

 

How it works

The game is built using object-oriented programming to manage all its core components: Player, EnemyCar, SchoolBus, and several static environment classes like LaneLine, Tree, and Radar. To make the player car look like it’s moving forward (up the screen), all the objects on the canvas move downward at a rate determined by the difference between the player’s speed and each object’s own speed. Timed progression is handled using millis(), allowing the game to dynamically scale its difficulty: it begins with just two enemy cars and gradually increases to seven over time. After 30 seconds, a special SchoolBus class appears. This bus inherits most of its behavior from EnemyCar but acts as a larger, slower obstacle, adding variety and challenge to the gameplay.
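
A condensed sketch of that millis()-based scaling (the 30-second bus timing and 2-to-7 car counts match the description above; the 60-second ramp and variable names are placeholders):

// Sketch of the timed difficulty curve — ramp duration is a placeholder
let elapsed = millis() - gameStartTime;

// Grow the fleet from 2 to 7 enemy cars over the first 60 seconds
let targetCount = floor(map(constrain(elapsed, 0, 60000), 0, 60000, 2, 7));
while (enemies.length < targetCount) {
  enemies.push(new EnemyCar());
}

// After 30 seconds, introduce the larger, slower SchoolBus
if (elapsed > 30000 && !busSpawned) {
  enemies.push(new SchoolBus());
  busSpawned = true;
}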

Player movement is handled with smooth lerp() transitions instead of instant lane jumps, creating a realistic sliding motion. The car tilts slightly while switching lanes, giving the animation a sense of weight and momentum.

// Smooth lane movement
this.x = lerp(this.x, this.targetX, 0.15); // 0.15 is speed of slide

// Calculate tilt: proportional to distance from targetX
let tilt = map(this.x - this.targetX, -63, 63, 0.2, -0.2); // max tilt ±0.2 rad 

// Rotate car while sliding
translate(this.x, this.y);
rotate(tilt); 

 

The garage menu is implemented as a separate state in the program. All player cars are stored in a 2D array (matrix) where the first dimension represents car type and the second represents color variants. Selecting a car updates the active type index, and selecting a color updates the corresponding color index. The game then loads the pre-rendered image from playerCarImgs[type][color] for the main gameplay.

let playerCarImgs = [];
let carColors = [
  ["#f10a10", "#b77bc6", "#ffde59"], // Car 1 colors
  ["#737373", "#302d2d", "#df91c1"], // Car 2 colors
  ["#7ed957", "#6f7271", "#df91c1"]  // Car 3 colors
];
let selectedCarIndex = -1;
let selectedColorIndex = 0;
let selectedColorIndices = [0, 0, 0]; // store each car's color choice

// Load player cars (3 types × 3 colors)
for (let i = 0; i < 3; i++) {
  playerCarImgs[i] = [];
  for (let j = 0; j < 3; j++) {
    playerCarImgs[i][j] = loadImage(`player_car_${i}_${j}.png`);
  }
}

 

From a design standpoint, I’m especially happy with how the visual composition is handled. The positions of all road elements (lane markers, trees, etc.) are responsive relative to the center of the road, ensuring that everything remains aligned even if the screen size changes. The gradual introduction of more enemies also creates a natural difficulty curve without overwhelming the player early on.
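
For instance, lane x-positions are derived from the road’s center rather than hard-coded pixels; here is a simplified version of the idea (laneWidth and the lane count are illustrative):

// Lane positions computed relative to the road center, so everything
// stays aligned if the canvas size changes (values illustrative)
let roadCenter = width / 2;
let laneWidth = 90;

function laneX(lane) {
  // Four lanes, indexed 0-3, centered around roadCenter
  return roadCenter + (lane - 1.5) * laneWidth;
}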

One of the features I’m most proud of is the radar system. It actively monitors the player’s speed and position. If the player passes a radar while exceeding the speed limit, the game transitions to a “game over” state. At that moment, p5 captures a snapshot of the player’s car beside the radar, displays it on the game over screen, and plays a camera shutter sound effect. Similarly, collisions trigger a crash sound, giving the sense of danger on the highway.

detect(player) {
  return abs(this.y - player.y) < 10 && playerSpeed > speedLimit;
}

if (radar.detect(player)) {
  gameState = "GAMEOVER";
  engineSound.stop();
  captureGameOver("You were caught speeding!");
  shutterSound.play();
}

// Crashed or caught by radar: take snapshot
let snapWidth = 900;
let snapHeight = 300;

let sx = constrain(player.x - snapWidth / 2, 0, width - snapWidth);
let sy = constrain(player.y - snapHeight / 2, 0, height - snapHeight);

gameOverImage = get(sx, sy, snapWidth, snapHeight);
gameOverMessage = message;

 

Problem I Resolved

One of the biggest challenges I faced was figuring out how to implement realistic car sounds that respond naturally to gameplay. My initial idea was to use separate audio clips for different speeds or for moments when the player accelerates or decelerates. However, this quickly became too complex and didn’t sound seamless in motion.

Instead, I switched to a single looping engine sound and mapped both its playback rate and volume to the player’s current speed. This made the sound naturally increase in pitch and intensity as the car accelerated, without needing multiple clips. To add more realism, I also added a short braking sound effect that triggers automatically when a large drop in speed is detected, simulating tire friction or skidding during sudden stops.

function handleInput() {
  if (keyIsDown(UP_ARROW)) playerSpeed = min(playerSpeed + 0.1, maxSpeed);
  if (keyIsDown(DOWN_ARROW)) {
    let oldSpeed = playerSpeed; // speed before braking, used to compute speedDrop
    playerSpeed = max(playerSpeed - 0.5, 0);
  }

  // Adjust pitch and volume of engine
  let volume = map(playerSpeed, 0, maxSpeed, 0.1, 1.0);
  engineSound.setVolume(volume);
  let pitch = map(playerSpeed, 0, maxSpeed, 0.8, 4.0);
  engineSound.rate(pitch);
}

// If the speed decreased by 5 or more within 1 second
if (speedDrop >= 5) {
  if (!brakeSound.isPlaying()) {
    brakeSound.play();
  }
}

 

Areas for Improvement

One issue I noticed is that enemy cars sometimes merge into the same lane at the same time, leading to overlapping or clipping. Improving their lane-changing logic to better detect other vehicles could make the traffic feel more realistic. I’d also like to expand the variety of vehicles by adding a larger collection of car models and giving each one unique attributes such as top speed, acceleration, and braking ability.

 

 

Midterm Progress

For my midterm project, I’m making a driving game where the player controls a car and navigates through traffic. The car stays in place on the screen, and the world around it moves to create the illusion of motion. Players can control their car’s lane position and adjust their speed with the arrow keys.

The design centers on creating a believable driving experience in which the player is encouraged to drive recklessly. The player manages their speed while avoiding collisions with NPC cars, which follow traffic-like behavior with different speeds depending on their lanes. Headlights add an element of strategy, as flashing them can occasionally cause a slower car in front to switch lanes. Speed cameras introduce a sense of tension by encouraging players to manage speed carefully.

The code is organized with:

    • A Player class for position, speed, and headlights.

    • An EnemyCar class for NPC vehicles with varying lane speeds.

    • A Road and Environment system that handles moving lane markings, roadside trees, and cameras to simulate forward motion.

    • A main game loop that updates movement, detects collisions, and handles interactions between different classes. (e.g. Radar object detecting the Player object, Player object tailgating the EnemyCar object).

The most complex part so far has been creating the illusion of motion. Since the player’s car stays in one place, I needed to make the road, lane markings, trees, and other cars move dynamically to trick the eye into feeling like the player is driving forward. To get this right, I prototyped moving background elements and tested different speeds for roadside objects versus lane lines. These tests helped me establish a convincing sense of motion, which makes the game feel much more immersive.
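
A stripped-down sketch of the approach (the object fields are illustrative): scenery scrolls down at the player’s full speed, while other cars move by the speed difference:

// Everything scrolls downward relative to the player's speed (fields illustrative)
function updateWorld() {
  // Static scenery moves at the full player speed
  for (let tree of trees) tree.y += playerSpeed;
  for (let line of laneLines) line.y += playerSpeed;

  // Other cars move by the speed *difference*:
  // slower cars drift down the screen, faster ones pull away upward
  for (let car of enemyCars) car.y += playerSpeed - car.speed;
}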

Sketch so far

Week 5 Reading Reflection

When I think about how computer vision differs from human vision, what stands out most is how little meaning computers can extract on their own. To a human, even a blurry image is packed with context, emotion, and symbolism. To a computer, it’s just a grid of pixel values that need to be processed before anything useful can be recognized. This is why the article highlights techniques like frame differencing, background subtraction, and brightness thresholding, which are methods that help a computer separate what is important from what is in the background.

To make these systems work, we often need to modify the image so the computer can interpret it. That might mean fixing the lighting, using high-contrast markers, or limiting the scope of what the computer is supposed to track. In a way, we design the world to fit the algorithm rather than expecting the algorithm to fully match the complexity of the world.

In interactive media, I think this capacity to interpret and track movement opens up exciting new directions for creativity. Computer vision gives artists a way to design works that respond directly to a person’s gestures, which is actually something I used in a project for another IM course. It transforms the audience from passive viewers into active participants, making the artwork something dynamic and alive. By combining human imagination with the computer’s ability to detect patterns, interactive art can become more immersive and responsive than ever before.

Week 4 Visualizing Data

My Concept

For this project, I wanted to create a data-driven timelapse showing airports across Europe. Each airport is represented as a circle, and its size corresponds to the number of passengers in a given week. The idea is to make busier airports stand out visually while showing the flow of air traffic over time. I got the idea as I was looking for datasets on Kaggle. The first dataset I found had daily flight information for hundreds of European airports from 2016 to 2022. The problem was that it didn’t contain the coordinates of the airports, which I needed in order to plot them in p5 as I envisioned. Thankfully, I found a second dataset that contains the coordinates of airports from around the world. I decided to remove all the non-European airports from the second dataset, then cross-filter the two datasets by airport code to make a third CSV file with all the information I needed before using p5.

Code Highlight

Since I wanted to create a third dataset, I needed to clean out any irrelevant, redundant, or incomplete columns, then merge the two CSV files. To remove the non-European airports from the coordinates file, I used the C++ program below, which reduced the number of rows from the original 83,000 to a couple hundred.

while (getline(inputFile, line)) {
    stringstream ss(line);
    string ident, lat, lon, continent;
    getline(ss, ident, ',');
    getline(ss, lat, ',');
    getline(ss, lon, ',');
    getline(ss, continent, ',');
    continent = trim(continent);
    cout << continent << endl; // debug: print each continent code

    if (continent == "EU") {
        cout << "Found an EU row" << endl; // debug: confirm the match
        outputFile << line << "\n";
    }
}

I then wrote this code to take only the airports that appear in both flights.csv and airports.csv, and write them to a third file, which I uploaded to p5.

while (std::getline(flightsFile, line)) {
    std::stringstream ss(line);
    std::vector<std::string> cols;
    std::string cell;

    while (std::getline(ss, cell, ',')) {
        cols.push_back(trim(cell));
    }

    if (cols.size() < 6) continue; // skip malformed rows

    std::string icao = cols[3];

    // Skip row if airport not found in airportMap
    auto it = airportMap.find(icao);
    if (it == airportMap.end()) continue;

    // Write row with coordinates
    outFile << cols[0] << "," << cols[1] << "," << cols[2] << ","
            << cols[3] << "," << cols[4] << "," << cols[5] << ","
            << it->second.lat << "," << it->second.lon << "\n";
}

airportsFile.close();
flightsFile.close();
outFile.close();

std::cout << "Merged CSV with coordinates created successfully.\n";
return 0;
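
For reference, the trim() helper and the airportMap that the merge loop relies on look roughly like this (a sketch: the Coord struct and the ident,lat,lon column order of the filtered airports file are assumptions based on the first program):

// Sketch of the helpers used above (struct name and column order assumed)
struct Coord { std::string lat, lon; };
std::map<std::string, Coord> airportMap; // keyed by airport code

// Strip surrounding whitespace, including '\r' from Windows line endings
std::string trim(const std::string &s) {
    size_t start = s.find_first_not_of(" \t\r\n");
    size_t end = s.find_last_not_of(" \t\r\n");
    return (start == std::string::npos) ? "" : s.substr(start, end - start + 1);
}

// Inside main(), before the merge loop: build the map from the EU airports file
while (std::getline(airportsFile, line)) {
    std::stringstream ss(line);
    std::string ident, lat, lon;
    std::getline(ss, ident, ',');
    std::getline(ss, lat, ',');
    std::getline(ss, lon, ',');
    airportMap[trim(ident)] = { trim(lat), trim(lon) };
}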

Finally, in p5, I looped through all the airport data and positioned each airport based on its coordinates mapped to the canvas size. Then I made the sizes of the circles correspond to traffic using power-of-log scaling (squaring the logarithm of the passenger count), which makes the less busy airports extra small and the busy airports extra big so the contrast is clearer.

// Map coordinates and size
for (let i = 0; i < airports.length; i++) {
  airports[i].x = map(airports[i].lng, minLng, maxLng, 50, width - 50);
  airports[i].y = map(airports[i].lat, minLat, maxLat, height - 50, 50); // inverted Y: p5's origin is at the top-left

  // Power-of-log scaling for exaggerated size contrast
  airports[i].size = map(
    pow(log(airports[i].passengers + 1), 2),
    0,
    pow(log(maxPassengers + 1), 2),
    2,
    60
  );
}

Sketch

Reflection

I was impressed by how even large datasets can be visualized effectively with only a few lines of code once the mapping is set up. Using loops and data structures allows complexity to emerge naturally. This project reinforced how math and data can drive visual design, and how small adjustments, like using a logarithmic scale instead of linear mapping, can completely change how readable a visualization is.

For future improvements, I would like to make the timelapse smoother, add color coding by country or region, and include tooltips or labels when hovering over airports so users can see exact values. I could also allow filtering by week or year to explore trends more interactively. These enhancements would make the visualization more engaging and informative.