Week 9 – Foundations of a Mood Lamp

Concept

The idea behind this project is to create a simple mood lamp. You can switch between different colors using buttons and control the brightness with a potentiometer.

An important feature is that when you set a certain brightness for one color, that same brightness carries over when you switch to a different color. This makes the lamp feel intuitive to use. For example, if the lamp is set to 50% brightness on blue, switching to red keeps it at 50% instead of jumping to full brightness. This allows smooth color transitions without having to constantly readjust brightness.

The project combines digital inputs (buttons) and an analog input (potentiometer) to control analog outputs (LED brightness) in a simple but effective way.

https://github.com/kzeina/Intro-To-IM

week9 circuit demo

Code I’m Proud of

//use internal pull-up resistor (button connects the pin to GND when pressed)
//default state is HIGH, pressed = LOW
pinMode(buttonPins[i], INPUT_PULLUP);

I used INPUT_PULLUP, which enables a built-in pull-up resistor on the Arduino’s input pins and helps stabilize button readings. It works by keeping the pin in a known state, HIGH when unpressed and LOW when pressed, preventing it from randomly floating between readings. I discovered it through ChatGPT while debugging an issue where my LEDs were turning on unexpectedly. I learned that because my setup only used buttons as inputs (without any pull-up resistors), the readings were unstable, causing the LEDs to light up when they shouldn’t.

 

void loop() {
  //loop through each button to check if it’s pressed
  for (int i = 0; i < 3; i++) {
    //button pressed (LOW because of INPUT_PULLUP)
    if (digitalRead(buttonPins[i]) == LOW) {
      //set this led as the active one
      activeLED = i;
      //turn off all other leds
      for (int j = 0; j < 3; j++) {
        if (j != i) analogWrite(ledPins[j], 0);
      }
    }
  }

  //if an led is active, adjust its brightness based on the potentiometer
  if (activeLED != -1) {
    //read analog value from potentiometer (0–1023)
    int sensorValue = analogRead(A0);
    //control brightness of the active led
    analogWrite(ledPins[activeLED], sensorValue / 4);
  }
}

Frankly, I’m proud of almost all my code for this project. My background in computer science definitely helped me figure out the logic more easily. I used a nested loop that goes through each button (which I stored in an array, along with the LED pins) and checks its state. When a button is pressed, its corresponding LED turns on while all the others turn off. To let the potentiometer control the brightness of the active LED, I created a variable called activeLED. As long as it’s not set to -1 (meaning no LED is active), the code reads the potentiometer’s value and uses it to adjust the LED’s brightness.

Future Improvements

For future improvements, I’d like to make the interaction feel smoother and more dynamic. One idea is to add a toggle feature so pressing the same button again turns its LED off instead of just switching between them. I’d also love to make the LEDs fade in and out when changing colors to create a softer transition effect. It could be interesting to display which LED is currently active on an LCD screen or even through the Serial Monitor. Finally, adding sound feedback, like a small beep when an LED is switched, would make the experience more interactive and responsive.
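
As a starting point for the toggle idea, here’s a minimal sketch of how pressing the active button again could turn its LED off. It reuses my buttonPins/ledPins/activeLED setup, but the 200 ms debounce delay is an assumption, not something from my current code:

//inside the button-checking loop:
if (digitalRead(buttonPins[i]) == LOW) {
  if (activeLED == i) {
    //same button pressed again: turn its led off and deactivate
    analogWrite(ledPins[i], 0);
    activeLED = -1;
  } else {
    //different button: make this led the active one
    activeLED = i;
  }
  //crude debounce so one press isn’t registered many times
  delay(200);
}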

Week 9 – Reading Reflection

Physical Computing Themes & Making Interactive Art

Reading Tom Igoe’s “Physical Computing’s Greatest Hits (and Misses)” and “Making Interactive Art: Set the Stage, Then Shut Up and Listen” really resonated with me because it reminded me of my own experience designing interactive projects. My first big interactive project was the midterm photobooth, and I remember feeling overwhelmed by all the features I wanted to implement. I was worried that what I thought was straightforward and easy to use might not actually be intuitive for other people. This made me realize how important it is to balance functionality, design, and user experience, something both readings emphasize in different ways.

From the first reading, I was particularly drawn to mechanical pixels, because it reminded me of the Al Bahr Towers in Abu Dhabi, which I like to call the “pineapple towers”, whose scale-like façade panels open and close in response to sunlight. It made me think about how even simple, repeating elements can be visually engaging while also responding to a system or environment. I started to see that originality doesn’t always come from inventing something completely new, but from how you execute and contextualize it.

Igoe’s second reading really made me reflect on my tendency to over-explain. I often want to make sure people ‘get it,’ but the article reminded me that interactive art is a conversation, not a lecture. It’s about providing context, affordances, and suggestions, then letting the audience explore, discover, and even surprise you with unexpected interactions. I like that this perspective treats the audience as active participants rather than passive observers, which matches my belief that both aesthetics and engagement are equally important. If a project doesn’t look appealing, people won’t approach it. If it doesn’t engage them, it loses its purpose.

What stood out most to me is how much trust you have to put in the design and in the audience. Seeing a project unfold in unexpected ways isn’t a failure; it’s part of the collaborative experience. I also realized that while I enjoy seeing people interact in ways I hadn’t anticipated, it only works as long as the interaction isn’t harmful.

Week 8 – Unusual Switch

Concept

For my project, I used the HC-SR04 ultrasonic sensor, which measures distance by sending out a sound wave and timing how long it takes for the echo to bounce back. I used it to detect how close a person’s body is to the sensor. When the person is far (but not too far), a yellow LED lights up. When they get close, a red LED turns on instead. I chose these two colors to mimic a childhood game where the closer you got to a hidden object, the ‘warmer’ you were, so red represents ‘hot,’ and yellow means ‘warm.’

Here’s my wiring and a video of my circuit in action 🙂
https://drive.google.com/drive/folders/1kgAL550ryRCarylolh-Xjpr2KqJABRaU?usp=drive_link

and here’s my GitHub repository

 Code I’m Proud Of

long readDistance() {
 //low for clean start
 digitalWrite(trigPin, LOW);
 delayMicroseconds(2);
 digitalWrite(trigPin, HIGH);
 delayMicroseconds(10);
 digitalWrite(trigPin, LOW);
 long duration = pulseIn(echoPin, HIGH);
 //convert to cm
 //sound speed ≈ 343 m/s → 0.034 cm/µs.
 //div 2 cause roundtrip
 return duration * 0.034 / 2;
}

The part of my code I’m most proud of is the readDistance() function. Through it, I learned how the ultrasonic sensor actually works, sending a pulse, waiting for the echo, and then calculating distance using the speed of sound. I followed a YouTube tutorial to understand the basics, and then used ChatGPT to help debug the issues I ran into. I even got to use some of my physics knowledge to convert time into distance, which made it extra fun since it reminded me of things I had learned before.
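
For context, here’s roughly how readDistance() drives the two LEDs in loop(). The 20 cm and 60 cm thresholds and the pin names are placeholders, not the exact values from my sketch:

void loop() {
  long distance = readDistance();
  if (distance < 20) {
    //close: "hot"
    digitalWrite(redPin, HIGH);
    digitalWrite(yellowPin, LOW);
  } else if (distance < 60) {
    //farther, but still in range: "warm"
    digitalWrite(redPin, LOW);
    digitalWrite(yellowPin, HIGH);
  } else {
    //too far: both off
    digitalWrite(redPin, LOW);
    digitalWrite(yellowPin, LOW);
  }
  delay(50);
}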

Further Improvements

Sometimes the sensor glitches a bit, and I suspect it’s because of my wiring. The HC-SR04 usually needs female-to-male jumper wires to connect properly to the Arduino, but I had to improvise with what I had. Using the Serial Monitor really helped me check if the sensor readings were accurate, but I’d like to clean up my circuit and test again to make it more stable. With proper connections, I think the readings would be much smoother and more consistent.
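
Besides rewiring, one software-side fix I could try is averaging a few readings so a single glitch doesn’t flip the LEDs. A minimal sketch, where the sample count of 5 and the 10 ms gap are arbitrary choices:

//hypothetical smoothing: average several pings to damp outliers
long readDistanceSmoothed() {
  long sum = 0;
  for (int i = 0; i < 5; i++) {
    sum += readDistance();
    delay(10); //short gap between pings
  }
  return sum / 5;
}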

Another improvement I’d like to try is turning this setup into a Morse code interpreter. Instead of just showing colors for ‘close’ and ‘far,’ I could make the distance readings represent dots and dashes, and then have an LCD screen display the translated message. It would make the project more interactive and add a creative twist while still keeping the hands-free concept. I think it’d be really satisfying to build something that turns simple movements into an actual form of communication.

Week 8 – Reading Reflection

Reading about Margaret Hamilton alongside Don Norman’s ideas made me reflect a lot on my own experiences as an IM major and CS minor. Hamilton’s work is astounding, not just in its technical brilliance, but in the stakes involved. I can’t imagine the pressure of knowing that a single error in your code could be catastrophic. That hit home for me when I thought back to my summer internship: I often felt anxious during functional testing, making sure that a feature actually worked before even thinking about optimizing it. Like Hamilton, I had to prioritize reliability over aesthetics at first, knowing that structure and efficiency could come later. Similarly, when working with Professor Moore on designing a robot extension, our creative vision for the robot was limited by its movements and what was functional, which was frustrating but also helped me come to the realization that function always had to come first.
Norman’s ideas about aesthetics and human experience made me notice the other side of this balance. I’ve realized that I learn better and retain information more effectively when it’s presented aesthetically, which is why I rewrite notes or design presentations carefully. I genuinely think it affects focus and engagement. Reflecting on Hamilton through Norman’s lens, I see that even in high-stakes work, there can be an “internal aesthetics” for the creator, like the satisfaction of elegantly structured, reliable code. This reading also made me think about how much balance is a part of my own work. I often want to implement my vision exactly, but my capabilities, time, and technical constraints sometimes get in the way. It can be frustrating, but I see now that navigating this tension, between function, reliability, and human experience, is a universal part of creating, whether you’re sending astronauts to the Moon or designing a robot extension. Hamilton’s example pushes me to aim for excellence under pressure, while Norman reminds me that design is also about how people, whether users or creators, experience the work.

PhotoboothMidterm Project

Concept:

For my project, I wanted to recreate that nostalgic, slightly awkward fun of using a photo booth, but in a digital, interactive way. The idea was to make something playful that still feels personal, where you type your name, pick a filter and a frame, and get a “thank you” screen at the end, almost like a real booth session. It’s simple, but it captures that small moment of anticipation and result. From entering your name to seeing your face appear across different frames and filters, I wanted it to feel like a sequence, a mini experience rather than just a photo being taken.

I was inspired by the photo booths you see at malls, the kind people use with friends, but I also wanted it to reflect me a little more. Since I love using my digital camera, I leaned into that and made it more camera-oriented, blending that nostalgic booth feel with something that connects to my own personality and interests. 

What I’m Proud of:

Not to toot my own horn, but I’m proud of various aspects of this project. I’ll start with the more design-oriented aspects, then get to the more technical parts.

For the frames, I originally wanted to use some camera frames I found on Pinterest, but the resolution wasn’t optimal, so I decided to make them myself using Canva. For reference, here’s a Pinterest frame and mine side by side.

I decided on only three frames to keep the user experience from feeling overwhelming, but I’d love to add more in the future.

Another design aspect I’m proud of is the flow of the project. The path from one screen to another makes for a simple yet effective way to guide the user through the experience.

Now for the technical! There are quite a few, so I’ll narrow it down to my top three.

Firstly, this could count as design, but setting a currentScreen variable made testing out different screens so much easier, without having to click through from the beginning every time. It felt like a shortcut.
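
For instance (simplified, with illustrative screen names), skipping straight to the screen I was testing was just a one-line change:

//normally the flow starts at the booth
let currentScreen = "booth";

//while testing the filter page, I’d swap it to:
//let currentScreen = "filter";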

Secondly, when the project moves from one screen to another, some elements stayed behind and overlapped with the current screen, so learning when to show and hide them was interesting and taught me how important order is in code (even more than before). I think getting the hang of hide and show is very important for large-scale projects, so I’m glad to have learned it through this one.

Lastly, fitting the video feed to the frame. This might’ve been the second most difficult thing I had to deal with. When I looked into different ways to fit the feed into a frame, mostly by consulting ChatGPT, it kept steering me toward inspecting the frame image and making part of it transparent. When that didn’t work, I decided to position the feed manually, which is where frameData came from.

//define video feed positions per frame
//this lets us fit the live video inside each frame nicely
const frameData = {
 camera1: { x: 355, y: 249, w: 635, h: 440 },
 camera2: { x: 255, y: 245, w: 650, h: 480 },
 camera3: { x: 253, y: 215, w: 730, h: 530 },
};

So for every frame, frameData sets the position and size of the video feed.
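
To show how that gets used: a simplified version of the draw call that places the feed inside the selected frame (selectedFrame, video, and frameImage are illustrative names, not my exact variables):

//look up the feed window for the current frame
let f = frameData[selectedFrame];

//draw the live video inside that window,
//then the frame image on top of it
image(video, f.x, f.y, f.w, f.h);
image(frameImage, 0, 0, width, height);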

Areas of improvement:

While I really wanted to use FaceMesh from ml5.js and had originally planned to draw a little red circle on the user’s nose when a face was detected, I couldn’t get it to work properly. I spent a huge amount of time debugging, revisiting documentation, and even consulting AI for help, but it kept interfering with my video feed in unpredictable ways. Eventually, I decided to set that idea aside for now, not because I don’t want to do it, but because I want to do it right.

In the future, I’d also love to add more frame options and maybe even a way for users to design their own frames. I imagine a simple drag-and-drop system where they can choose shapes, colors, or even add text onto the live feed. 

Overall, I really enjoyed working on this project. I think being from a coding background definitely helped me approach it with less hesitation, especially when dealing with the logic and structure. But I also found myself overflowing with ideas and not enough time to bring them all to life, which was both exciting and frustrating. In the end, I’m proud of how it came together. It feels playful and functional, and I think that balance is what makes it work.

References:

I used ChatGPT to debug for the most part, and used the ml5.js documentation when I was attempting to implement face detection to draw a circle on the nose.

P.S. I don’t have the sketch embedded since I worked on the project in VS Code instead of the online p5 editor, but here’s the GitHub website link: https://kzeina.github.io/Photobooth/ .

Midterm Progress

Concept:

I think it’s safe to say that a lot of events on campus have photobooths, and so do some stores in malls. I wanted to do something similar, but fully digital. While my current sketch isn’t very aesthetic yet, I plan to model it after the Urban Outfitters photobooth, adding playful visuals and frames to make the experience interactive and fun, like a real-life booth.

The user interacts with the photobooth through a series of pages. They start at the main booth screen, where they can click a button to enter their name on the Start Page. After submitting their name, they move to the Filter Page to see a live video feed and choose a filter by pressing the arrow keys. Once they select a filter and click “OK,” they reach the Frame Screen, where they can take a photo. Finally, the Thank You Screen appears with an option to restart, which loops them back to the booth, letting them take another photo. Each page guides the user step by step, making the experience clear and interactive.

Code design:

All the pages are in their own class.

    • PhotoBooth: The main screen where users start. Displays the booth and a button to enter the next step.
    • StartPage: Handles the user’s name input and the “Start” button to move to the filter selection.
    • FilterPage: Shows a live video feed and lets users choose a filter using arrow keys, with an “OK” button to proceed.
    • FrameScreen: Applies the selected filter to the video and allows the user to take a photo with a camera button.
    • ThankYouScreen: Confirms the photo has been taken and offers a button to restart the booth.
    • Flow control: currentScreen determines which class is displayed, and button/key interactions manage transitions between screens (a simplified sketch of this dispatch follows below).
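
A simplified version of that dispatch, assuming each screen class exposes a display() method (the method and variable names here are illustrative, not necessarily my exact ones):

function draw() {
  //show whichever screen is currently active
  if (currentScreen === "booth") photoBooth.display();
  else if (currentScreen === "start") startPage.display();
  else if (currentScreen === "filter") filterPage.display();
  else if (currentScreen === "frame") frameScreen.display();
  else if (currentScreen === "thankyou") thankYouScreen.display();
}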

Difficulties:

I think that throughout the implementation, the FilterPage class was the most troubling. I had to use the translate function to mirror my video feed, because I wanted the resulting picture to be mirrored, and I needed this to apply only to the video, not to the text and button.

//mirror the video (so it feels like a mirror)
push();
translate(width, 0);
scale(-1, 1);
image(this.video, 0, 0, width, height);
pop();

Another key part of the project is the filters and how the selection works. Each filter is applied as an overlay on the live video feed, and the user can browse through them using the left and right arrow keys. I also made sure the selection loops continuously, so moving past the last filter brings you back to the first one.

//sepia filter overlay
if (this.filters[this.currentFilter] === "sepia") {
  fill(112, 66, 20, 60);
  noStroke();
  rect(0, 0, width, height);

//black & white filter using canvas filter
} else if (this.filters[this.currentFilter] === "bw") {
  push();
  tint(255, 255);
  drawingContext.filter = "grayscale(100%)";
  translate(width, 0);
  scale(-1, 1);
  image(this.video, 0, 0, width, height);
  drawingContext.filter = "none";
  pop();
}

nextFilter() {
  //go to next filter (wraps around)
  this.currentFilter = (this.currentFilter + 1) % this.filters.length;
}

prevFilter() {
  //go to previous filter (wraps around)
  this.currentFilter =
    (this.currentFilter - 1 + this.filters.length) % this.filters.length;
}

To deal with, or rather minimize, the risks of these two implementations, I first tested them out in a separate sketch on p5.js, and once I made sure they worked how I wanted them to, I added them to my project flow. I also added console logs to help me debug, then removed them once everything was working as expected.

Lastly, this project really tested my understanding of how order matters. I had to use .hide() a lot to make sure buttons and features weren’t carried over between screens.
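
As a small illustration of that ordering (the button names are placeholders, not my actual variables): controls belonging to the old screen get hidden before the new screen’s controls are shown.

//leaving the filter page: hide its controls first,
//otherwise they linger on top of the next screen
okButton.hide();
currentScreen = "frame";
cameraButton.show();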

Reflection:

Now that the structure is solid, I can focus on making it look appealing. I’m open to any suggestions and even suggestions for my current flow!

Here’s my Sketch:

Reading Reflection Week 5

I’ve always thought of digital art and computer vision as really complex concepts, almost intimidating, because they felt so tied to technical expertise. What I appreciated about Levin’s article is how he breaks it down so it feels a little less intimidating. Reading it, I realized that these tools don’t always need to be heavy or advanced to be meaningful; even simple algorithms can create powerful, interactive experiences. 

That point resonated with me because I’ve actually worked on a VR project before for a class final. It was a “build your own concert” type of game, and looking back, I can see how much it connected to the kind of work Levin describes. We weren’t doing anything revolutionary with the code, but the way people could move and interact with the environment really made the experience. Especially since our target audience was those who can’t attend concerts due to health concerns. His emphasis on preparing the environment as much as the software clicked with me here. 

One of the main ways human and computer vision differ is interpretation. We humans can think outside the box, make unexpected connections, see things from various angles, and recognize them in their different forms. Computers see things as a set of pixels and only ‘think’ in terms of structured algorithms; it’s very literal. What happens when we combine both?

This is why I was especially drawn to Krueger’s Videoplace. I’d read about him in Understanding IM before, and I’ve always liked his idea of using the human body itself as the interface. It made me think about how Steve Jobs framed the first iPhone: our fingers are the “best pointers out there,” which is why touchscreens felt so natural. It’s a reminder that sometimes the most effective tech feels the least like “technology” and the most like an extension of ourselves.

At the same time, I couldn’t help but think about the ethical side. While Levin highlights the playfulness and creative potential of computer vision, I also worry about issues like privacy and misuse. As an artist or designer, you can try to account for these risks, but sometimes it’s beyond your control once the technology is out in the world. For me, that overlap between limitless creativity and real-world responsibility is something I want to keep in mind as I continue building my portfolio in IM.

Overall, this reading made computer vision feel less like an intimidating black box and more like a tool I could actually experiment with. It left me curious not just about what I can create, but about how to do it thoughtfully.

Data Visualization

Concept:

My work for this week was inspired by a website called Receiptify, which takes your Spotify data and displays your listening statistics in the format of a receipt. That’s data visualization in itself; while I didn’t make a version of it, I used it to generate my dataset. I asked it for my top 50 songs of the month, then turned those into a dataset, which I uploaded to p5.js. I was racking my brain for different ways music could be represented visually, and then I noticed that one of my suitemates has coasters that look like vinyls, which gave me the idea to represent the songs as vinyls. Most of us have seen people use vinyls as room decor rather than for playing music; this work is sort of a spin on that (pun intended).

Part(s) I’m Proud of:

Note: To be completely honest, I did consult ChatGPT for a few things, just to make the process less overwhelming.

1- I remembered Professor Aya saying to try not to hardcode variables, and I am proud that I thought of this part. I decided not to hardcode the number of rows in my grid, in case I want to edit the number of songs in my dataset in the future.

//compute rows automatically based on number of songs
//not hard coded in case i want to change the number of
//songs in the future
//ceil cause we need to round up
let rows = ceil(records.length / cols);

2- I made it so that the vinyls expand when you click on them, and I am proud of my implementation because it makes the audience experience less static; you’re not just visualizing the data, you’re also interacting with it.

//when mouse is pressed
function mousePressed() {
  //loop through each vinyl
  for (let i = 0; i < records.length; i++) {
    //check if mouse is inside vinyl
    if (dist(mouseX, mouseY, records[i].x, records[i].y) < 25) {
      //deselect if same vinyl clicked
      if (activeIndex === i) activeIndex = -1;
      //otherwise set this as active
      else activeIndex = i;
    }
  }
}
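
For reference, the expansion itself happens in the draw loop; here’s a simplified sketch of the idea (the 50/120 diameters and the fields on each record are approximations of my actual code):

//draw every vinyl small, then the active one enlarged on top
for (let i = 0; i < records.length; i++) {
  if (i !== activeIndex) {
    ellipse(records[i].x, records[i].y, 50, 50);
  }
}
if (activeIndex !== -1) {
  ellipse(records[activeIndex].x, records[activeIndex].y, 120, 120);
}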

Here’s the sketch:

Reflection: 

There’s always room for improvement. If I spent more time on this, I’d probably make the rest of the vinyls disappear when one is clicked. Instead of just a color per artist, I could make it more realistic by adding album covers. A possible full upgrade would be turning it into almost a game: the user chooses one of the vinyls, an animation shows it being played on a record player, and the actual song plays. It would be a different way to interact with music, as opposed to seeing it as a huge list of strings, as we do on Spotify.

Reading Reflection Week#4

When Norman was talking throughout the reading about frustrations with design, the first thought that popped into my head was the very complex, ‘interactive’ online shopping sites. I put ‘interactive’ in quotation marks because the design lags, which defeats the whole point of the experience, and I feel like they value aesthetics more than user experience. As a user, I’m just trying to look for clothes. Why make it so complicated? When Norman began to explain HCD, I remembered the workflow the company I interned at this summer used in its development department. The company was responsible for an Enterprise Resource Planning system. Suppose you’re a client and you raise a ticket. The ticket goes through Presales, then Analysis, where the development time is estimated and the business need is highlighted by a functional consultant; finally, it’s picked up by a developer. After the code is done and reviewed, it’s pushed to functional testing before it goes to the client. This flow ensures the code is fully tested, which minimizes the probability of the client running into errors.

In terms of applying the author’s principles to Interactive Media, I think that, especially while we’re learning the technology (like p5.js), it’s very easy to lose track of aesthetics or forget about user experience. There’s a sacrifice to be made, or maybe a balance to be found, between prioritizing user experience and aesthetics, but aesthetics is part of the user experience as well. Take websites, for example: the likelihood of wanting to use a poorly designed website is slim; it may work perfectly, but it wouldn’t look appealing. At the other end of the spectrum is the example I gave earlier, where aesthetics completely take over and the user experience is no longer enjoyable.

Everybody is Connected – Week 3

Concept:

I always see people talk about how everyone and everything are connected one way or another, and I find that to be very true. I always find odd connections that bring me and others together, and in my classes they seem to overlap more often than not, which is a good thing! In all honesty, at first I was going to attempt an artwork that somehow incorporated the Spiderman suit pattern, but I found myself overcomplicating it and confusing myself, so I went with this idea instead: a web of connections, where every press creates more connections, just like how the more you interact with people or things, the more connections you get, whether to others or to a bigger scope of understanding.

Part I am proud of:

//draw nodes and connections
function drawWeb() {
  background(0);

  //draw connections
  //the further apart the nodes are
  //the fainter the lines
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      let d = dist(nodes[i].x, nodes[i].y, nodes[j].x, nodes[j].y);
      if (d < connectionDistance) {
        let alpha = map(d, 0, connectionDistance, 255, 50);
        stroke(255, alpha);
        line(nodes[i].x, nodes[i].y, nodes[j].x, nodes[j].y);
      }
    }
  }

  //draw all nodes
  for (let node of nodes) {
    node.show();
  }
}

I used ChatGPT to help me with the thought process, but essentially, I wanted to make the closer connections look bolder and the farther ones fainter. I hadn’t used alpha previously; I’d always relied on manually choosing the color instead, so this was a hands-on way for me to learn.

//when mouse is pressed
//add more nodes and draw connections
function mousePressed() {
  // add 5 new nodes at random 
  for (let i = 0; i < 5; i++) {
    nodes.push(new Node(random(width), random(height)));
  }
  
  drawWeb(); //redraw everything including new nodes
}

I also chose to redraw every time new nodes and connections are made because otherwise, we could see a lot of overlap, and the work would lose some of its visual appeal.

The Art Piece:

Reflection:

To reflect, I want to talk about a struggle I faced that isn’t related to the final product or my code: I found myself getting overwhelmed because I had so many ideas, yet neither the time nor the skill to create them yet, so I decided to take a step back and go with a more simplistic route. Simple can sometimes be better, and in hindsight, the work I created is a pretty solid representation of how my brain feels at times with creative projects like these.