Photobooth Midterm Project

Concept:

For my project, I wanted to recreate that nostalgic, slightly awkward fun of using a photo booth, but in a digital, interactive way. The idea was to make something playful that still feels personal, where you type your name, pick a filter and a frame, and get a “thank you” screen at the end, almost like a real booth session. It’s simple, but it captures that small moment of anticipation and result. From entering your name to seeing your face appear across different frames and filters, I wanted it to feel like a sequence, a mini experience rather than just a photo being taken.

I was inspired by the photo booths you see at malls, the kind people use with friends, but I also wanted it to reflect me a little more. Since I love using my digital camera, I leaned into that and made it more camera-oriented, blending that nostalgic booth feel with something that connects to my own personality and interests. 

What I’m Proud of:

Not to toot my own horn, but I’m proud of various aspects of this project. I’ll start with the more design-oriented aspects, then get to the more technical parts.

For the frames, I originally wanted to use some camera frames I found on Pinterest, but the resolution wasn’t optimal, so I decided to make them myself using Canva. For reference, here’s a Pinterest frame and mine side by side.

I decided on only three frames so the user experience doesn’t feel overwhelming, but I would love to add more in the future.

Another design aspect I’m proud of is the project’s flow. The path from one screen to another makes for a simple yet effective way to guide the user through the experience.

Now for the technical! There are quite a few, so I will narrow it down to my top three.

Firstly (this could count as design, too), setting a current-screen variable made testing out different screens so much easier, without having to go back to the beginning every time. It felt like a shortcut.
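A rough sketch of that shortcut (the screen names here are illustrative, not necessarily the exact ones in my code):

```javascript
//the single source of truth for what's on screen
let currentScreen = "booth";

//the normal flow advances screen by screen, but while testing
//you can just set currentScreen to jump anywhere
const screens = ["booth", "start", "filter", "frame", "thankyou"];

function jumpTo(name) {
  //ignore unknown names so a typo doesn't blank the sketch
  if (screens.includes(name)) currentScreen = name;
  return currentScreen;
}
```

During development, a single `jumpTo("filter")` at the top of setup drops you straight onto the screen you’re working on.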

Secondly, when the project moves from one screen to another, some elements stayed behind and overlapped with the current screen’s elements, so learning when to show and hide them was interesting and taught me how much order matters in code (even more than before). I think getting the hang of hide and show is very important for large-scale projects, so I’m glad to have learned it through this project.
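A minimal model of that show/hide idea, using stand-in objects that have the same .show()/.hide() methods as p5’s DOM elements (the button names here are made up for illustration):

```javascript
//stand-ins for p5 DOM elements; real ones come from createButton()
//and also have .show() and .hide()
function makeButton() {
  return {
    visible: true,
    show() { this.visible = true; },
    hide() { this.visible = false; },
  };
}

//one control per screen (illustrative names)
const buttons = {
  start: makeButton(),
  ok: makeButton(),
  snap: makeButton(),
};

//order matters: hide everything first, THEN show the controls
//for the active screen, so nothing carries over between screens
function showOnly(names) {
  for (const key of Object.keys(buttons)) buttons[key].hide();
  for (const name of names) buttons[name].show();
}
```

Hiding everything before showing anything is what prevents the overlap: you never have to remember which screen you came from.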

Lastly, fitting the video feed to the frame. This might’ve been the second most difficult thing I had to deal with. When I looked at different ways to fit the feed into a frame, mostly by consulting ChatGPT, it kept steering me toward inspecting the image type and making part of it transparent. When that didn’t work, I decided to just do it manually, which is where frameData came from.

//define video feed positions per frame
//this lets us fit the live video inside each frame nicely
const frameData = {
 camera1: { x: 355, y: 249, w: 635, h: 440 },
 camera2: { x: 255, y: 245, w: 650, h: 480 },
 camera3: { x: 253, y: 215, w: 730, h: 530 },
};

So for every frame, this sets the position and size of the feed.
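Roughly, the lookup works like this (feedRect is just an illustrative helper, not necessarily how my code names it, and the draw call is shown as a comment since it needs p5):

```javascript
//video feed positions per frame, as in the sketch
const frameData = {
  camera1: { x: 355, y: 249, w: 635, h: 440 },
  camera2: { x: 255, y: 245, w: 650, h: 480 },
  camera3: { x: 253, y: 215, w: 730, h: 530 },
};

//look up the feed rectangle for the active frame,
//falling back to the first frame if the name is unknown
function feedRect(frameName) {
  return frameData[frameName] || frameData.camera1;
}

//inside draw() it would be used roughly like:
//  const fd = feedRect(currentFrame);
//  image(video, fd.x, fd.y, fd.w, fd.h);
```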

Areas of improvement:

While I really wanted to use FaceMesh from ml5.js and had originally planned to draw a little red circle on the user’s nose when a face was detected, I couldn’t get it to work properly. I spent a huge amount of time debugging, revisiting documentation, and even consulting AI for help, but it kept interfering with my video feed in unpredictable ways. Eventually, I decided to set that idea aside for now, not because I don’t want to do it, but because I want to do it right.

In the future, I’d also love to add more frame options and maybe even a way for users to design their own frames. I imagine a simple drag-and-drop system where they can choose shapes, colors, or even add text onto the live feed. 

Overall, I really enjoyed working on this project. I think being from a coding background definitely helped me approach it with less hesitation, especially when dealing with the logic and structure. But I also found myself overflowing with ideas and not enough time to bring them all to life, which was both exciting and frustrating. In the end, I’m proud of how it came together. It feels playful and functional, and I think that balance is what makes it work.

References:

I used ChatGPT to debug for the most part, and used the ml5.js documentation when I was attempting to implement face detection to draw a circle on the nose.

P.S. I don’t have the sketch embedded since I worked on the project in VS Code instead of the online p5 editor, but here’s the GitHub website link: https://kzeina.github.io/Photobooth/ .

Midterm Progress

Concept:

I think it’s safe to say that a lot of events on campus have photobooths, and so do some stores in malls. I wanted to do something similar, but fully digital. While my current sketch isn’t very aesthetic yet, I plan to model it after the Urban Outfitters photobooth, adding playful visuals and frames to make the experience interactive and fun, like a real-life booth.

The user interacts with the photobooth through a series of pages. They start at the main booth screen, where they can click a button to enter their name on the Start Page. After submitting their name, they move to the Filter Page to see a live video feed and choose a filter by pressing the arrow keys. Once they select a filter and click “OK,” they reach the Frame Screen, where they can take a photo. Finally, the Thank You Screen appears with an option to restart, which loops them back to the booth, letting them take another photo. Each page guides the user step by step, making the experience clear and interactive.

Code design:

Each page is in its own class.

    • PhotoBooth: The main screen where users start. Displays the booth and a button to enter the next step.
    • StartPage: Handles the user’s name input and the “Start” button to move to the filter selection.
    • FilterPage: Shows a live video feed and lets users choose a filter using arrow keys, with an “OK” button to proceed.
    • FrameScreen: Applies the selected filter to the video and allows the user to take a photo with a camera button.
    • ThankYouScreen: Confirms the photo has been taken and offers a button to restart the booth.
    • Flow control: currentScreen determines which class is displayed, and button/key interactions manage transitions between screens.
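The flow control can be sketched roughly like this (the page objects here are tiny stand-ins that just return their names; the real classes draw to the canvas in their display methods):

```javascript
//hypothetical minimal stand-ins for the page classes
const pages = {
  booth:    { display: () => "PhotoBooth" },
  start:    { display: () => "StartPage" },
  filter:   { display: () => "FilterPage" },
  frame:    { display: () => "FrameScreen" },
  thankyou: { display: () => "ThankYouScreen" },
};

let currentScreen = "booth";

//p5's draw() would delegate to whichever page is active;
//buttons and key presses only ever change currentScreen
function drawCurrent() {
  return pages[currentScreen].display();
}
```

Keeping transitions as simple assignments to currentScreen means each class only worries about its own screen.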

Difficulties:

I think that throughout implementing this, the filter page class was the most troublesome. I had to use the translate function to mirror my video feed because I wanted a mirrored picture as the result, and I needed the mirroring to apply only to the video, not to the text and button.

//mirror the video (so it feels like a mirror)
push();
translate(width, 0);
scale(-1, 1);
image(this.video, 0, 0, width, height);
pop();

Another key part of the project is the filters and how the selection works. Each filter is applied as an overlay on the live video feed, and the user can browse through them using the left and right arrow keys. I also made sure the selection loops continuously, so moving past the last filter brings you back to the first one.

//sepia filter overlay
if (this.filters[this.currentFilter] === "sepia") {
  fill(112, 66, 20, 60);
  noStroke();
  rect(0, 0, width, height);

//black & white filter using canvas filter
} else if (this.filters[this.currentFilter] === "bw") {
  push();
  tint(255, 255);
  drawingContext.filter = "grayscale(100%)";
  translate(width, 0);
  scale(-1, 1);
  image(this.video, 0, 0, width, height);
  drawingContext.filter = "none";
  pop();
}

nextFilter() {
  //go to next filter (wraps around)
  this.currentFilter = (this.currentFilter + 1) % this.filters.length;
}

prevFilter() {
  //go to previous filter (wraps around)
  this.currentFilter =
    (this.currentFilter - 1 + this.filters.length) % this.filters.length;
}

To deal with, or rather minimize, the risks of these two implementations, I first tested them in a separate sketch on p5.js, and once I was sure they worked the way I wanted, I added them to my project flow. I also added console logs to help me debug and then removed them once everything was working as expected.

Lastly, this project really tested my understanding of how much order matters. I had to use .hide() a lot to make sure buttons and features weren’t carried over between screens.

Reflection:

Now that the structure is solid, I can focus on making it look appealing. I’m open to any suggestions and even suggestions for my current flow!

Here’s my Sketch:

Reading Reflection Week 5

I’ve always thought of digital art and computer vision as really complex concepts, almost intimidating, because they felt so tied to technical expertise. What I appreciated about Levin’s article is how he breaks it down so it feels a little less intimidating. Reading it, I realized that these tools don’t always need to be heavy or advanced to be meaningful; even simple algorithms can create powerful, interactive experiences. 

That point resonated with me because I’ve actually worked on a VR project before for a class final. It was a “build your own concert” type of game, and looking back, I can see how much it connected to the kind of work Levin describes. We weren’t doing anything revolutionary with the code, but the way people could move and interact with the environment really made the experience, especially since our target audience was people who can’t attend concerts due to health concerns. His emphasis on preparing the environment as much as the software clicked with me here.

One of the main ways human and computer vision differ is interpretation. We humans can think outside the box, make different connections, see things from various angles, and recognize objects in their different forms, while computers see things as a set of pixels and only ‘think’ in terms of structured algorithms. It’s very literal. What happens when we combine both?

This is why I was especially drawn to Krueger’s Videoplace. I’d read about him in Understanding IM before, and I’ve always liked his idea of using the human body itself as the interface. It made me think about how Steve Jobs framed the first iPhone: our fingers are the “best pointers out there,” which is why touchscreens felt so natural. It’s a reminder that sometimes the most effective tech feels the least like “technology” and the most like an extension of ourselves.

At the same time, I couldn’t help but think about the ethical side. While Levin highlights the playfulness and creative potential of computer vision, I also worry about issues like privacy and misuse. As an artist or designer, you can try to account for these risks, but sometimes it’s beyond your control once the technology is out in the world. For me, that overlap between limitless creativity and real-world responsibility is something I want to keep in mind as I continue building my portfolio in IM.

Overall, this reading made computer vision feel less like an intimidating black box and more like a tool I could actually experiment with. It left me curious not just about what I can create, but about how to do it thoughtfully.

Data Visualization

Concept:

My work for this week was inspired by a website called Receiptify, which takes your Spotify data, compiles your statistics, and displays them in the format of a receipt. That is data visualization in itself; while I didn’t make my own version of it, I used it to generate my dataset. I asked it for my top 50 songs of the month, then turned those into a dataset, which I uploaded to p5.js. I was racking my brain for different ways music can be represented visually, and then I saw that one of my suitemates got coasters that look like vinyls, which gave me the idea to represent the songs as vinyls. I think most of us have seen people use vinyls as room decor rather than for playing music; this work is sort of a spin on that (pun intended?).

Part(s) I’m Proud of:

Note: To be completely honest, I did consult ChatGPT for a few things, just to make the process less overwhelming.

1- I remembered Professor Aya saying to try not to hardcode variables, and I am proud that I thought of this part. I decided not to hardcode the number of rows in my grid, in case I want to edit the number of songs in my dataset in the future.

//compute rows automatically based on number of songs
//not hard coded in case i want to change the number of
//songs in the future
//ceil cause we need to round up
let rows = ceil(records.length / cols);
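For illustration, here is how the rest of the grid math could follow from that (cols = 5 is an assumption for this sketch; the real project may use a different width):

```javascript
//cols is assumed to be 5 here; records stands in for the
//loaded dataset of 50 songs
const cols = 5;
const records = new Array(50).fill(null);

//rows computed from the data, not hard coded
const rows = Math.ceil(records.length / cols);

//each vinyl's grid cell then follows from its index
function cellOf(i) {
  return { row: Math.floor(i / cols), col: i % cols };
}
```

Because rows is derived, growing the dataset to 60 songs would simply produce two extra rows with no other changes.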

2- I made it so that the vinyls expand when you click on them, and I am proud of my implementation because it makes the audience experience less static; you’re not just visualizing the data, you’re also interacting with it.

//when mouse is pressed
function mousePressed() {
  //loop through each vinyl
  for (let i = 0; i < records.length; i++) {
    //check if mouse is inside vinyl
    if (dist(mouseX, mouseY, records[i].x, records[i].y) < 25) {
      //deselect if same vinyl clicked
      if (activeIndex === i) activeIndex = -1;
      //otherwise set this as active
      else activeIndex = i;
    }
  }
}

Here’s the sketch:

Reflection: 

There’s always room for improvement. If I spend more time on this, I’d probably make the rest of the vinyls disappear when a vinyl is clicked. Instead of just a color per artist, I can make it more realistic by adding album covers. A possible full upgrade would be making it almost a game where the user chooses between these vinyls, then we get an animation of a vinyl actually being played through a player, and the actual song being played. It would be a different way to interact with music, as opposed to seeing it as a huge list of strings, as we do on Spotify.

Reading Reflection Week#4

When Norman talked throughout the reading about frustrations with design, the first thought that popped into my head was the very complex, ‘interactive’ online shopping sites. I put ‘interactive’ in quotation marks because the design lags, which takes away the whole point of the experience, and I feel like they value aesthetics more than user experience. As a user, I’m just trying to look for clothes. Why are you making it so complicated for me? When Norman began to explain HCD, I remembered the workflow used by the development department of the company I interned at this summer. The company was in charge of an Enterprise Resource Planning system. Suppose you’re a client and you raise a ticket. The ticket goes through Presales, then Analysis, where the time for development is estimated and the business need is highlighted by a functional consultant; finally, it’s picked up by a developer. After the code is done and reviewed, it’s pushed to functional testing before it goes to the client. This flow ensures the code is fully tested, which minimizes the probability of the client running into errors.

In terms of applying the author’s principles to Interactive Media, I think that, especially as we are learning the technology (like p5.js), it’s very easy to lose track of aesthetics and forget about user experience. There’s a sacrifice to be made, or maybe a balance to be found, between prioritizing user experience over aesthetics, but aesthetics is part of the user experience as well. Take websites, for example: the likelihood of wanting to use a poorly designed website is slim; it may work perfectly, but it wouldn’t look appealing. At the other end of the spectrum is the example I gave earlier, where aesthetics completely take over and the user experience is no longer enjoyable.

Everybody is Connected – Week 3

Concept:

I always see people talk about how everyone and everything are connected one way or another, and I find that to be very true. I always find weird connections that bring me and others together, and in my classes, they seem to overlap more often than not, which is a good thing! In all honesty, at first I was going to attempt an artwork that incorporated the Spiderman suit pattern somehow, but I found myself overcomplicating it and confusing myself, so I went with this idea instead: a web of connections, where every time you press, more connections form, like how the more you interact with people or things, the more connections you get, whether to others or to a bigger scope of understanding.

Part I am proud of:

//draw nodes and connections
function drawWeb() {
  background(0);

  //draw connections
  //the further apart the nodes are
  //the fainter the lines
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      let d = dist(nodes[i].x, nodes[i].y, nodes[j].x, nodes[j].y);
      if (d < connectionDistance) {
        let alpha = map(d, 0, connectionDistance, 255, 50);
        stroke(255, alpha);
        line(nodes[i].x, nodes[i].y, nodes[j].x, nodes[j].y);
      }
    }
  }

  //draw all nodes
  for (let node of nodes) {
    node.show();
  }
}

I used ChatGPT to help with the thought process, but essentially, I wanted to make the closer connections look bolder and the farther ones fainter. I hadn’t used alpha before; I had always relied on manually choosing colors instead, so this was a hands-on way for me to learn.

//when mouse is pressed
//add more nodes and draw connections
function mousePressed() {
  // add 5 new nodes at random 
  for (let i = 0; i < 5; i++) {
    nodes.push(new Node(random(width), random(height)));
  }
  
  drawWeb(); //redraw everything including new nodes
}

I also chose to redraw every time new nodes and connections are made because otherwise, we could see a lot of overlap, and the work would lose some of its visual appeal.

The Art Piece:

Reflection:

To reflect, I want to talk about a struggle I faced that isn’t related to the final product or my code: I found myself getting overwhelmed because I had so many ideas, yet neither the time nor the skill to create them all yet, so I decided to take a step back and take a more simplistic route. Simple can sometimes be better, and in hindsight, I feel like the work I created is a pretty solid representation of how my brain feels at times with creative projects like these.

Reading Reflection – Week 3

In my opinion, what makes a strongly interactive system is, as Crawford put it, the “cyclic process in which two actors alternately listen, think, and speak”. It reminded me of something I am learning about in another class of mine, where we read Sartre, a French philosopher and a major figure in existentialism. In his work ‘Why Write’, he makes the point that with written forms of art, like books, simply writing them isn’t enough. The reader and the act of reading are what bring the work to life. He describes writing as a creative direction in which the author guides the reader to think in a certain way, but the perception of the reader is still uniquely theirs. In a way, the writer plays the speaker of the two actors, and the reader plays the listener and thinker. Neither actor’s role here is passive engagement; the listener should be thinking about what the speaker is saying to make connections and give it a broader meaning. Likewise, the speaker should think about what they say and how to respond to the listener when the roles are reversed.

I believe that implementing in p5.js what I have in mind would be rather difficult, but a welcome challenge. User input really matters here, and I think adding aspects like microphone access might be a way to address that. While I’m not sure how complex p5 projects can get, I think something that registers voice commands to perform certain tasks would be very interesting and interactive. Also, having seen previous IM showcases, a potential idea I have in mind, which would combine Arduino and p5.js, would be a functioning (or almost functioning) Spiderman web shooter.

 

Reading Reflection – Week#2

As I was watching Casey Reas’ talk, I kept thinking back to the artists we’re talking about in Understanding Interactive Media. His instructions for his art reminded me of Ben Patterson’s ‘Paper Music’ and Sol LeWitt’s ‘Wall Drawing’; both are examples of art where you have a set of instructions, but each performance or piece differs based on how the artists and performers interpret the instructions. I feel like both of these works are also perfect examples of order and chaos coming together. Elements of randomness and control are in use at the same time: you have control in the set of instructions, but you also have randomness in the way the performer will interpret those rules. Works like this make me think that a balance is essential, but what the ratio between order and chaos should be is rather vague. I believe it depends on the work, so the optimal balance shouldn’t be generalized but decided on a case-by-case basis. You have works like Jean Arp’s collages, which are based on the law of chance and which I would say rely mostly on randomness. I don’t believe control would make much sense there.

On the other hand, if I think of my most recent work for this week’s task, I would say randomization would make it better. I could randomize the different colors of the design on the coffee cup, but I’d need to control the color combinations and the colors to choose from. So this would be an example of where control is necessary while also maintaining an element of randomness. Ultimately, I think the amount of chaos and order that goes into one’s work depends on their intention with the work and what experience they want the watcher to have. 

Loops + Art – Zeina Khadem

Concept:

The first cup of coffee in the morning, usually accompanied by the music of Fairuz, is a staple in every Lebanese household. I knew I wanted to do something connected to my culture and identity, so I looked up a picture of the signature morning setup and tried my best to mimic it.

A little bit of background: we call the coffee cup ‘finjan’ and the pot ‘rakwe’. The coffee itself is usually Turkish coffee.


Proud of This Part(s)!
I’m proud of two parts of my code. One is where I used loops to imitate the designs on the cup.

//loop for the oval design of the cup (for)
let ovals = 2;
for (let i = 0; i < ovals; i++) {
  let x = 450 + i * 60;
  let y = 300;
  fill("#3F51B5");
  ellipse(x, y, 20, 70);
}

//loop for the flower (while)
let flowers = 3;
let i = 0;
while (i < flowers) {
  let x = 430 + i * 50;
  let y = 275;
  stroke("#3F51B5");
  strokeWeight(5);
  line(x, y, x, y + 45);
  noStroke();
  fill("#F44336");
  ellipse(x, y, 10, 20);
  i++;
}

It was very fun to figure out the spacing and to use both for and while loops to get used to their separate structures, as I tend to reach for for loops.

The other part of my code that I’m really proud of is the little animation when you press your mouse on the ‘rakwe’: it fills the ‘finjan’ and displays a welcome message.

//little animation for when you press the rakwe
//coffee cup seems to be full
//and a welcome text
if (
  mouseIsPressed &&
  mouseX > 65 &&
  mouseX < 185 &&
  mouseY > 120 &&
  mouseY < 205
) {
  fill("#50211C");
  ellipse(480, 250, 80, 10);
  textAlign(RIGHT);
  textStyle(NORMAL);
  textSize(50);
  fill('black');
  text("Ahla w Sahla", 420,100);
}

My Work:

Reflection:

It’s really fun taking concepts we implemented in class and actually applying them to your own creative work. You also wind up learning new things along the way; for example, this was my first time using text in p5.js.

While I like the final result, I can definitely see that as we learn more, I can make it more interactive and complex. I would like to make the ‘rakwe’ draggable and have the user manually fill the cup with coffee. Also, maybe a possibility of under- or over-filling the cup, with different display messages for each scenario.
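If I were to sketch the drag logic, reusing the same ‘rakwe’ bounds as the click area in my animation code (roughly 65–185 by 120–205), it might look like this — just a sketch, with p5’s mousePressed/mouseDragged left as the callers:

```javascript
//the rakwe's bounding box, matching the existing hit-test
let rakwe = { x: 65, y: 120, w: 120, h: 85 };

//remember where inside the rakwe the press landed,
//so the pot doesn't "jump" to the cursor
let dragOffset = null;

//call from mousePressed(mouseX, mouseY)
function startDrag(mx, my) {
  const inside = mx > rakwe.x && mx < rakwe.x + rakwe.w &&
                 my > rakwe.y && my < rakwe.y + rakwe.h;
  dragOffset = inside ? { dx: mx - rakwe.x, dy: my - rakwe.y } : null;
}

//call from mouseDragged(mouseX, mouseY)
function drag(mx, my) {
  if (!dragOffset) return;
  rakwe.x = mx - dragOffset.dx;
  rakwe.y = my - dragOffset.dy;
}
```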

 

Self Portrait – Zeina Khadem

Concept

From the moment we got this assignment, I knew I wanted to create a self-portrait in a cartoon style that I loved as a kid. The first thing that came to mind was the Toca Boca games my cousins and I played all the time. Those characters had such a simple yet distinct style that really stuck with me, so I decided to take an inspiration picture from the game and model my self-portrait on it as closely as I could. I wanted my version to capture the charm of those characters while also being recognizable as me.

 The Inspiration Picture

I’m Particularly Proud of…

One part of the code I’m especially proud of is the mouth and eyebrows. I used arcs to draw them, which turned out to be trickier than I expected. Getting the curves and angles right took some trial and error, but it was so satisfying to see them finally come together.

//mouth
fill(0);
arc(200, 265, 20, 5, 0, PI, CHORD);

//eyebrows
fill('#D3A45F');
noStroke();
arc(158, 195, 30, 11, PI, TWO_PI, OPEN);
arc(242, 195, 30, 11, PI, TWO_PI, OPEN);

The Portrait

             

Always Room for Improvement

I think my portrait leans toward simplicity, which was partly intentional because I wanted it to reflect that clean, cartoonish style. That said, there’s so much room for improvement! In the future, I’d love to experiment with adding a more dynamic background to give the portrait some life. I’d also like to attempt a full-body version, which would be a fun challenge in balancing proportions and keeping the cartoon aesthetic consistent.

Another thing I want to work on is making my code more efficient and modular. Right now, some parts are repetitive, and I can see ways to organize it better with functions or loops. Overall, this project was a great opportunity to combine coding with creativity, and I’m really proud of the result.
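For example, the two eyebrow arcs differ only in their x position, so a small helper could remove that duplication — just a sketch, with the p5 arc calls shown as comments since they need the canvas:

```javascript
//both eyebrows share every parameter except x, so compute
//the shared arc arguments once (values from the sketch above)
function eyebrowArgs(x) {
  //[x, y, w, h, start, stop] for p5's arc()
  return [x, 195, 30, 11, Math.PI, 2 * Math.PI];
}

//in draw() the eyebrow code would become:
//  fill('#D3A45F');
//  noStroke();
//  arc(...eyebrowArgs(158), OPEN);
//  arc(...eyebrowArgs(242), OPEN);
```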