Selfie4Two (Final Project)

Concept

For my final project, I created a Photo Booth that questions and criticises the act of taking a selfie alone while encouraging the process of documenting memories with others. The purpose of the work is to challenge the ego-centric way we tend to use our smartphone cameras. In the same way that the widespread use of smartphones has increased awareness of responsible online practices, I wanted to create something that speaks to how we use this incredibly accessible method of documenting our lives. Essentially, Selfie4Two is a work that celebrates interpersonal connection through technology.

Technical Approach & Challenges

My initial approach to this Photo Booth concept was to use machine learning through an image classification model. The model would discern whether there were one or two people in the frame and whether they were holding up their phone (suggesting they are taking a picture of themselves). I would then incorporate this model into p5js and load an ASCII webcam rendering based on what was detected, presenting question marks for one person and hearts for groups. Screenshots of the image classification model are shown below.
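The model itself is not shown here, but the decision logic it would feed can be sketched as a plain function. The label names below ("one person", "two people") and the confidence threshold are assumptions for illustration only; a typical ml5/Teachable Machine classifier passes its callback an array of `{ label, confidence }` results sorted by confidence:

```javascript
// Given classifier results like [{ label, confidence }, ...] (highest
// confidence first), decide which rendering the sketch should show.
// The label strings are placeholders for whatever a trained model uses.
function pickRendering(results, threshold = 0.8) {
  if (!results || results.length === 0) return "video"; // nothing detected yet
  const top = results[0];
  if (top.confidence < threshold) return "video"; // not confident enough
  return top.label === "two people" ? "hearts" : "question marks";
}
```

Keeping this decision separate from the drawing code would also have made the slow ASCII rendering easier to debug in isolation.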

I was able to functionally incorporate the model into p5 with some initial testing, but could not do so with the ASCII webcam rendering I had made, which forced me to leave the model out in order to finish the project. This certainly limited the usability of the project and diminished how effectively the concept was conveyed. In an attempt to resolve this, I added emojis (maintaining the theme of internet culture) to the initial webcam display that can be clicked to display the ASCII renderings.
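The emoji click detection in the sketch boils down to a point-in-rectangle test, which can be pulled out into a small helper and tested on its own. The box shape mirrors the `{ x, y, width, height }` objects used in the sketch:

```javascript
// Returns true if the point (px, py) falls inside the box.
// Box format matches the emoji bounding boxes in the sketch:
// { x, y, width, height } with (x, y) as the top-left corner.
function pointInBox(px, py, box) {
  return (
    px > box.x &&
    px < box.x + box.width &&
    py > box.y &&
    py < box.y + box.height
  );
}
```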

For the ASCII webcam rendering itself, I created two variations: one comprised of question marks and the other of hearts. Both are displayed below.

After incorporating these into my final code, however, they ran much more slowly and did not display correctly. Unfortunately, I could not figure out how to fix this and had to present this version of the code as my final project.

// Character ramps for the ASCII renderings: dark pixels map to spaces,
// bright pixels to a heart or a question mark.
const density = "  \u2661"; // hearts (selfie together)
const density1 = "  ?"; // question marks (selfie alone)
let video;
let emojiSize = 32; // Size of the clickable emojis

function setup() {
  createCanvas(window.innerWidth, window.innerHeight);
  video = createCapture(VIDEO); // live webcam feed
  video.hide(); // hide the default video element; we draw it to the canvas instead
  let button = createButton("reset");
  button.mousePressed(resetSketch);
}

function draw() {
  background(0);
  video.loadPixels();

  // Draw the video
  image(video, 0, 0, width, height);

  let emoji = "🤳🧍❔"; // selfie alone → question marks
  let emoji1 = "🤳🧑‍🤝‍🧑❤️"; // selfie together → hearts

  // Set the text style before drawing so the emojis render at the intended size
  textSize(32);
  textAlign(CENTER, CENTER);

  text(emoji1, width / 6, height / 4);
  text(emoji, width / 1.2, height / 4);

  // Bounding boxes around each emoji, matching the positions drawn above
  // (text is centre-aligned, so each box is centred on its draw position)
  let emojiBox = {
    x: width / 6 - textWidth(emoji1) / 2,
    y: height / 4 - emojiSize / 2,
    width: textWidth(emoji1),
    height: emojiSize,
  };

  let emojiBox1 = {
    x: width / 1.2 - textWidth(emoji) / 2,
    y: height / 4 - emojiSize / 2,
    width: textWidth(emoji),
    height: emojiSize,
  };

  // Show the heart rendering while the couple emoji is clicked
  if (
    mouseIsPressed &&
    mouseX > emojiBox.x &&
    mouseX < emojiBox.x + emojiBox.width &&
    mouseY > emojiBox.y &&
    mouseY < emojiBox.y + emojiBox.height
  ) {
    coupleAscii();
  }

  // Show the question-mark rendering while the single-selfie emoji is clicked
  if (
    mouseIsPressed &&
    mouseX > emojiBox1.x &&
    mouseX < emojiBox1.x + emojiBox1.width &&
    mouseY > emojiBox1.y &&
    mouseY < emojiBox1.y + emojiBox1.height
  ) {
    singleAscii();
  }
}

function singleAscii() {
  clear();
  textSize(10);
  background(0);
  video.loadPixels();
  fill(255, 180, 180);
  stroke(255, 180, 180);
  strokeWeight(1);
  for (let j = 0; j < video.height; j++) {
    for (let i = 0; i < video.width; i++) {
      const pixelIndex = (i + j * video.width) * 4;
      const r = video.pixels[pixelIndex + 0];
      const g = video.pixels[pixelIndex + 1];
      const b = video.pixels[pixelIndex + 2];
      const avg = (r + g + b) / 3;
      const len = density1.length;
      // Clamp so a fully bright pixel doesn't index one past the end of the string
      const charIndex = min(floor(map(avg, 0, 255, 0, len)), len - 1);
      const c = density1.charAt(charIndex);
      text(
        c,
        map(i, 0, video.width, 0, width),
        map(j, 0, video.height, 0, height)
      );
    }
  }
}

function coupleAscii() {
  clear();
  textSize(10);
  background(0);
  video.loadPixels();
  fill(255, 180, 180);
  stroke(255, 180, 180);
  strokeWeight(1);
  for (let j = 0; j < video.height; j++) {
    for (let i = 0; i < video.width; i++) {
      const pixelIndex = (i + j * video.width) * 4;
      const r = video.pixels[pixelIndex + 0];
      const g = video.pixels[pixelIndex + 1];
      const b = video.pixels[pixelIndex + 2];
      const avg = (r + g + b) / 3;
      const len = density.length;
      // Clamp so a fully bright pixel doesn't index one past the end of the string
      const charIndex = min(floor(map(avg, 0, 255, 0, len)), len - 1);
      const c = density.charAt(charIndex);
      text(
        c,
        map(i, 0, video.width, 0, width),
        map(j, 0, video.height, 0, height)
      );
    }
  }
}

// Reset button: clear any ASCII rendering and return to the live video view.
// The original version repeated draw()'s click checks here, but mouse-position
// tests make no sense inside a button callback, so this only redraws.
function resetSketch() {
  clear();
  background(0);

  // Draw the video
  image(video, 0, 0, width, height);

  // Redraw the clickable emojis in the same positions as draw()
  push();
  textSize(32);
  textAlign(CENTER, CENTER);
  text("🤳🧑‍🤝‍🧑❤️", width / 6, height / 4);
  text("🤳🧍❔", width / 1.2, height / 4);
  pop();
}
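The heart of both rendering functions is the mapping from pixel brightness to a character in the density string. Pulled out as a pure function, and with the index clamped so a fully white pixel cannot index one past the end of the string, it looks like this:

```javascript
// Map an average brightness (0-255) to a character in the density ramp.
// Dark pixels pick early characters (spaces); bright pixels pick the heart
// or question mark at the end. Without the clamp, avg = 255 would produce
// an index equal to density.length and charAt would return "".
function charForBrightness(avg, density) {
  const len = density.length;
  const index = Math.min(Math.floor((avg / 255) * len), len - 1);
  return density.charAt(index);
}
```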

I also intended to add an Adafruit NeoPixel LED strip, driven by an Arduino, to emphasise the Photo Booth experience. The strip would be attached to the screen and light up whenever the image classification model detected two people. As I was already experiencing issues with the code, I was unable to incorporate this into my final project despite my attempts at wiring and programming it.

Improvements

Clearly, there is a lot of space for improvement within my final project. Other than improving the functionality of all the components I was unable to incorporate properly (or at all), one key area of improvement would have been testing and sharing my ideas with someone more experienced in order to gauge their feasibility. Had I known ml5 libraries tend to run slower in p5js, I may have chosen to pursue a different concept altogether.
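One concrete fix for the slow ASCII rendering, had I found it in time, would have been to build each row of characters into a single string and draw it with one text() call per row instead of one per pixel. The string-building part is pure and can be sketched independently of p5 (the pixel layout below assumes the RGBA array that p5's video.pixels provides):

```javascript
// Build an ASCII image (rows separated by newlines) from RGBA pixel data
// laid out the way p5's video.pixels is: 4 values per pixel, row-major.
// Drawing one string per row is far cheaper than one text() call per pixel.
function asciiFromPixels(pixels, w, h, density) {
  const len = density.length;
  const rows = [];
  for (let j = 0; j < h; j++) {
    let row = "";
    for (let i = 0; i < w; i++) {
      const p = (i + j * w) * 4;
      const avg = (pixels[p] + pixels[p + 1] + pixels[p + 2]) / 3;
      // Clamped brightness-to-character lookup
      row += density.charAt(Math.min(Math.floor((avg / 255) * len), len - 1));
    }
    rows.push(row);
  }
  return rows.join("\n");
}
```

Reducing the capture resolution with video.size() before sampling would compound the saving, since the cost grows with the number of sampled pixels.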

Most importantly, this project is proof of the importance of simplifying ideas and starting work on them ahead of time. The main issue that led to this outcome was that I did not give myself enough time to understand these new concepts (image classification, ASCII rendering), which naturally led to my being unable to incorporate them successfully within a p5js sketch. Going forward, I will set myself realistic goals and give myself a workable timeline by experimenting with ideas earlier on.
