Week 6: Midterm Project

Concept:

Anyone who knows me knows that I love baking and cooking. Being in the kitchen and creating something from scratch is such an amazing experience for me. Even though some people may argue that being in the kitchen is actually stressful, I don’t see it that way. It disconnects me from all the stress and anxiety of the outside world for a while and leaves me feeling relaxed. So for my midterm project, I wanted to create a kitchen simulation to let others feel a semblance of the calmness and joy I get when in the kitchen. 

My program is easy to navigate. At first, the user is prompted to choose one of three recipes: Chocolate Cupcakes, Spaghetti and Meatballs, or Avocado Salad. After clicking on a recipe, the program takes the user through a few steps to prepare it. The user has to follow all the instructions to successfully deliver the recipe, or else they fail and have to start over. After completing a recipe, the user is given two options: follow a link to the actual recipe, if they want to try it in real life, or go back to the homepage to start a new recipe. The program also lets users return to the home page at any time by pressing esc on the keyboard. I tried to make the program as fun and engaging as possible. Additionally, I chose a calm music track for the background to add an extra level of tranquility, as well as a soft palette of colors to maintain the program's overall theme of calm and relaxation.

*open in a new window to hear the music

How it works:

I created my program by writing a function for each screen and calling those functions at the right time. I created a list to store the ingredients being displayed on the screen and used OOP to define an ingredient class that handles the display and hide-when-clicked behaviors. I used clip art images for all the icons and pictures in the program and resized them as needed to fit the layout I wanted. Additionally, I linked a font file to define the font I wanted to use. I chose this font because it looks cute, even cartoonish, which makes the program look more fun and gives it an element of style it would lack with a standard font.
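As a sketch of what such an ingredient class might look like (the names here are illustrative, not my actual code):

```javascript
// Hypothetical ingredient class: each ingredient knows its position,
// whether it is still visible, and how to react to a mouse click.
class Ingredient {
  constructor(img, x, y, size) {
    this.img = img;      // clip-art image for this ingredient
    this.x = x;
    this.y = y;
    this.size = size;
    this.visible = true; // hidden once the user clicks it
  }

  display() {
    if (this.visible) {
      image(this.img, this.x, this.y, this.size, this.size);
    }
  }

  // true if (mx, my) falls inside this ingredient's bounding box
  contains(mx, my) {
    return mx > this.x && mx < this.x + this.size &&
           my > this.y && my < this.y + this.size;
  }

  // called from mouseClicked(); returns true if this click used an ingredient
  hideIfClicked(mx, my) {
    if (this.visible && this.contains(mx, my)) {
      this.visible = false;
      return true;
    }
    return false;
  }
}
```

Each screen's list of ingredients can then loop over its objects, calling display() every frame and hideIfClicked(mouseX, mouseY) on clicks.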

I included dynamic features where possible and tried to make the motion randomized instead of predefined (the dressing drops, the home page animation, and the smoke). I handled all the transitions using the mouseClicked() function to determine which screens are displayed when, and after which conditions are met. I tried to make my code as modifiable as possible, that is, easy to extend with more screens, more ingredients, more functionality, and even more recipes if needed, by using encapsulation and modularity. I found that collapsing the functions in the p5.js editor made the code easier and clearer to navigate.
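The screen-switching pattern can be sketched like this (a minimal, hypothetical version with my own names, not the project's actual code): input handlers only change a screen variable, and draw() decides which screen function runs each frame.

```javascript
// Single source of truth for which screen is showing.
let screen = 0; // 0 = home page, 1 = a recipe screen

function homeScreen()   { /* draw the home page here */ }
function recipeScreen() { /* draw the chosen recipe's current step */ }

function draw() {
  if (screen === 0) {
    homeScreen();
  } else if (screen === 1) {
    recipeScreen();
  }
}

function mouseClicked() {
  // real code would test mouseX/mouseY against each button's bounds
  if (screen === 0) {
    screen = 1; // a recipe was chosen: leave the home page
  }
}

function keyPressed() {
  if (keyCode === ESCAPE) {
    screen = 0; // esc returns to the home page from any screen
  }
}
```

Because only the handlers mutate `screen`, adding a new screen means adding one function and one branch, which is what keeps the code modifiable.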

For the smoke screens, I included an internal timer in my code that records the start time, reads the current time, and compares the two until the required time has elapsed; if no action was taken by then, the smoke screen is displayed. I tried several times to implement this with different logic, but it did not work, and I found this approach the most appropriate for my code.

if (screen === 3) { // screen 3 is the cupcake oven screen
  CupcakeOven();
  currentTime = millis();
  // count the milliseconds elapsed and compare them to the allowed interval;
  // if greater than the time allowed, the cupcakes burnt, so go to the smoke screen
  if (currentTime - startTime > cupcakeinterval) {
    screen = 5; // go to the cupcake smoke screen
  }
}


Problems faced:

The main problem I faced in the beginning was the transitions. I had all the screens defined and working perfectly, but I could not get them to link together without running into many bugs. I tried many different ways to get them to call each other from within the screen functions themselves, but I would get stuck on a single screen forever. I tried linking them using mouseButton; it worked partially, but caused many problems with the ingredients once implemented. When I finally decided to use mouseClicked(), I tried calling the functions from within it and using if statements to identify when and which screen needed to be called, but I also ran into errors 🙁 It was after a lot of trial and error that I realized the way to go was having a screen variable that is controlled through mouseClicked(), with draw() calling the corresponding screen function.

Another major problem I faced was implementing the ingredient objects: how to make them disappear when clicked, and how to link the screen transitions to the ingredients. At first it was mostly trial and error, but then I decided to take a step back, understand every little detail of how I was implementing them, come up with a solution that worked on paper, and then implement it step by step. I am proud of the way I got it to work in the end, because it just makes sense to me and makes the whole process smooth and straightforward, as well as easy to add, remove, or modify ingredients as needed. It is also worth noting that I originally wanted to implement a drag-and-drop feature for the ingredients, but it was too complicated to use within my program, especially since I wanted to use resized images for the ingredients, so I settled on mouse clicks instead.

Last but not least, I faced problems with implementing the smoke screens and making them appear automatically after the time elapses. It was a bit confusing for me to implement at first; it never worked at all, to the point where I decided to completely disregard the idea of smoke screens, but I managed to get it working in the end. I also wanted to display a timer on screen so that the user could visually see how much time they have left, but it created many bugs and did not work effectively. Even though it is a straightforward feature, it clashed with my other functionality, so I decided that users can count to 5 or 7 themselves.
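For reference, the abandoned on-screen countdown could be sketched with a small helper (my names; this is not code from the project):

```javascript
// Given the start time, the current time, and the allowed interval
// (all in milliseconds), return the whole seconds left, never below zero.
function secondsLeft(startTime, currentTime, intervalMs) {
  const remaining = intervalMs - (currentTime - startTime);
  return Math.max(0, Math.ceil(remaining / 1000));
}
```

The oven screen's draw code could then call text(secondsLeft(startTime, millis(), cupcakeinterval), x, y) each frame to render the countdown.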

Future improvements and reflection:

I would love to add more features to make the program more advanced like adding mixing and frosting stages for the cupcakes and adding a stirring stage for the spaghetti as well as some steam while it is cooking. I would also love to add more choices within the recipes so that users can choose what dressing they want for the salad for example. I am overall satisfied with what I have now and I am proud that I got my program to look like what I had initially planned for it.

References:

images:

https://www.clipartmax.com/

https://www.freepik.com/

https://www.vecteezy.com/vector-art/11234689-cocoa-sack-with-seeds

recipes:

https://sallysbakingaddiction.com/super-moist-chocolate-cupcakes/

https://www.onceuponachef.com/recipes/spaghetti-and-meatballs.html

https://www.onceuponachef.com/recipes/summer-avocado-salad.html

music:

https://www.ashamaluevmusic.com/cooking-music

Font:

https://textfonts.net/one-little-font.html#google_vignette

Midterm Project

Introduction

For my midterm project, I created a spooky juggling game called Morbid Juggler. I thought of the concept in week 2, and for the project involving arrays and classes I made a simple game that lets users add balls using the keyboard and juggle them using the cursor. Morbid Juggler version 2 is built on top of the same motion logic, but instead of using standard input (mouse and keyboard), players have to use their webcam and their hands to interact with the game. To add a new ball, the user makes the “🤘” gesture with their left hand. The balls automatically move in a parabolic trajectory, and the goal of the game is to not drop the balls, i.e. catch them before they leave the screen. To catch a ball, users can use any hand and pinch their fingers and drag a ball across the screen. To throw the ball again, they just have to release the pinch.

How it works

The following explains the balls’ motion and is from my documentation for my week 2 project:

I save the initial time when an (eye)ball is created, and measure the time that has passed since. This allows me to use values of elapsedTime as a set of x values, which, when plugged into a quadratic equation, give a parabola. Changing the coefficients in the equation allows me to modify the shape of the parabola and thus the trajectory of the (eye)ball, such as how wide the arc created by its motion is. I played around with the coefficients and decided to use (0.4x)(2-x), which works nicely for a canvas of this size.
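In code, that quadratic can be sketched as follows (the function name is mine; the real project maps this value into canvas coordinates):

```javascript
// Elapsed time is the x plugged into the quadratic (0.4x)(2 - x),
// whose value gives the ball's height along its arc.
function arcHeight(elapsedSeconds) {
  const x = elapsedSeconds;
  return (0.4 * x) * (2 - x); // zero at x = 0 and x = 2, peak of 0.4 at x = 1
}
```

Scaling the 0.4 coefficient changes how tall the arc is, and the 2 sets how long (in the x units) the ball stays airborne.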

A more detailed explanation of how the balls are stored in memory and moved with each frame update can be found here.

For tracking hands using the webcam, I used a JavaScript library called Handsfree.js. I was initially going to use PoseNet, but I realized it wasn't the best for this use case. PoseNet's landmark model tracks 17 points on the human body, but for my game I didn't need all that. I just needed to track the user's fingers, and PoseNet only returns one keypoint for the user's wrist. So I looked up other libraries and found Handsfree.js, which is built on the same framework as PoseNet, Google's MediaPipe, but is more geared toward hand tracking. It tracks all fingers and knuckles and even has built-in gesture recognition to detect pinches, which is what my users would need to do to drag balls around the screen. Furthermore, it was very easy to train the model to recognize new gestures using the library's website: it lets you collect your own data and create a gesture model to plug into your code. For example, this is the code for recognizing the "🤘" gesture.

handsfree.useGesture({
    name: "addBall",
    algorithm: "fingerpose",
    models: "hands",
    confidence: "9",
    description: [
      ["addCurl", "Thumb", "HalfCurl", 1],
      ["addDirection", "Thumb", "DiagonalUpRight", 1],
      ["addDirection", "Thumb", "VerticalUp", 0.19047619047619047],
      ["addCurl", "Index", "NoCurl", 1],
      ["addDirection", "Index", "VerticalUp", 0.3888888888888889],
      ["addDirection", "Index", "DiagonalUpLeft", 1],
      ["addCurl", "Middle", "FullCurl", 1],
      ["addDirection", "Middle", "VerticalUp", 1],
      ["addDirection", "Middle", "DiagonalUpLeft", 0.47058823529411764],
      ["addCurl", "Ring", "FullCurl", 1],
      ["addDirection", "Ring", "VerticalUp", 1],
      ["addDirection", "Ring", "DiagonalUpRight", 0.041666666666666664],
      ["addCurl", "Pinky", "NoCurl", 1],
      ["addDirection", "Pinky", "DiagonalUpRight", 1],
      ["addDirection", "Pinky", "VerticalUp", 0.9230769230769231],
    ],
  });

Then I can use this information like so:

function addBall() {
  const hands = handsfree.data?.hands;
  if (hands?.gesture) {
    if (hands.gesture[0]?.name == "addBall") {
      let x = sketch.width - hands.landmarks[0][9].x * sketch.width;
      let y = hands.landmarks[0][9].y * sketch.height;
      console.log(x, y);
      balls.push(new Ball(x, y));
      canAddBall = false;
      // go to sleep for a second
      setTimeout(() => {
        canAddBall = true;
      }, 1000);
    }
  }
}

The hardest part of this project was working with Handsfree.js. There is criminally limited documentation available on the website, and I had to teach myself how to use it by studying the few demo projects the library's author had created. For hand tracking, there was a piece of code that closely approximated what I wanted to do. It loops through the hands in the object returned by handsfree.data, and for each hand, it loops through all the fingers. Each finger can be identified by its landmark index, and its location and other information can be used elsewhere in the program. For example, in handsfree.data.landmarks[handIndex][fingertips[finger]], handIndex = 0 and finger = 8 represent the tip of the left index finger. In Morbid Juggler, when handsfree.data.pinchState for any hand becomes "held" near a ball, the ball sticks to the tip of the pointer finger. When it becomes "released", the ball restarts its parabolic motion.
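The pinch-drag behavior described above can be sketched as a small state machine. All names here are mine, not the project's actual code; Handsfree.js landmarks are normalized 0–1, so they must be scaled to canvas pixels and mirrored to match the flipped webcam image:

```javascript
// Hypothetical pinch-drag logic. `tip` is a normalized landmark (e.g.
// handsfree.data.landmarks[hand][8], the index fingertip); `pinchState`
// is the reported pinch state for that hand.
function handlePinch(ball, pinchState, tip, canvasW, canvasH) {
  const x = canvasW - tip.x * canvasW; // mirror to match the flipped video
  const y = tip.y * canvasH;
  if (pinchState === "held" && (ball.held || ball.isNear(x, y))) {
    ball.held = true; // stick the ball to the fingertip while pinched
    ball.x = x;
    ball.y = y;
  } else if (pinchState === "released" && ball.held) {
    ball.held = false; // restart the parabolic motion from the drop point
  }
}
```

Keeping `ball.held` in the condition means a fast-moving fingertip cannot "drop" a grabbed ball just because it briefly left the proximity radius.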

To see the full code for the project, refer to the p5 web editor.

Playing the game

The game can be played here. When the project has loaded, click on ‘start game’. It takes a second for Handsfree to load. You can check if the game is fully loaded by waving your hands in front of your webcam. You should see the skeleton for your hand mapped on the canvas.

Make a “🤘” sign with your left hand (tuck in your thumb, and make your index and pinky straight!). You’ll see a new ball pop from between your fingers. Add a few more balls like this, but be quick! Don’t let them leave the screen, or the counter in the corner of the screen will tell you how badly you’re losing even after death. To keep the balls from falling off, pinch your fingers like you would pick up a real juggling ball, drag the ball you’re holding across the screen, and release your fingers somewhere near the bottom left corner of the screen, so the ball can travel again. You can add as many balls as you like — at your own risk.

Final thoughts

I think there’s still room to fine-tune the hand tracking. If I were to upgrade the game, I would probably use a different library than Handsfree.js. The tracking information isn’t the most accurate, nor is it consistent. For example, even when a hand is held up still, the keypoints jitter on screen. Since the balls stick to the tip of the pointer finger during dragging, the balls were jittering badly too. I later added some smoothing using the lerp() function to fix that. I also had to do a lot of trial and error when picking a gesture to trigger the function that adds a new ball. The model wasn’t very confident about many of the other gestures and kept adding balls erroneously. The “🤘” sign was the final choice because it was explicit enough and did not resemble other gestures a user might inadvertently make while playing the game.
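The smoothing idea is simple to illustrate (my function name; p5's built-in lerp(start, stop, amt) computes the same formula): rather than snapping to each new, jittery tracking coordinate, the drawn position moves only a fraction of the way toward it every frame.

```javascript
// Move `current` a fraction `amt` of the way toward `target`.
// Smaller amt = smoother but laggier; larger amt = snappier but jumpier.
function smoothTowards(current, target, amt) {
  return current + (target - current) * amt;
}
```

In the draw loop this might look like `ballX = smoothTowards(ballX, fingertipX, 0.3);`, applied to both coordinates of a held ball.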

One thing worth special mention is the background music used in the game, which was produced by my friend by sampling my other friend’s vocals. I heard the track and liked its spooky yet playful feel and asked if I could use it in my project. My friend agreed, and now I have a bespoke, perfectly fitting track to accompany my project.

Week 5 – Midterm

Concept

“Remy the Rat” is a dark visual novel about a comical and relatable experience related to pets. It tells the story of a teenager who forgets to feed his brother’s rat, and his attempt to fix the situation.

Elements

All of the background drawings were made by me in Paint. They are extremely minimalistic, almost as if they could have been made by a five-year-old, but truthfully that was the aesthetic I was trying to achieve. This rough look is also enhanced by the doodled dialogue boxes of the p5.scribble library. The background music was also done by me: a recording of boiling meat, tweaked in Audacity.

Below are some pictures:


Future Improvements

Unfortunately, I was not able to add some of the game mechanics that I wanted to, such as controlling the sprites of a character, or choosing your own paths, which could have improved the concept and the replayability of the game. Nonetheless, I am satisfied with the final product, and I believe that taking a more minimalistic approach was more effective.

CATSAWAY- Midterm Project

  • some images:

  • concept of my project

As a student at NYUAD, I really love our campus cat, Caramel. She makes being here even better. I enjoy graphic design, and it’s fun for me to draw. I’m also getting into coding, and I want to learn more about it. So, I had this idea for a project I call “Catsaway.” It’s a game inspired by Caramel. In the game, she can fly around the campus without crashing into the pointy palm tree corners. Instead, she gracefully glides through the soft palm leaves. To make her fly, you just press the spacebar. It’s a way to enjoy a little adventure through our campus. This project lets me combine my love for art and my interest in coding, turning my fondness for Caramel into a fun game.

My game starts with a landing page that offers two options: start the game directly, or go to the instructions page.

  • how my project works and what parts you’re proud of (e.g. good technical decisions, good game design)

I’m happy with how I brought Caramel to life in the game. Players get to control Caramel as she gracefully flies around our campus, avoiding obstacles like pointy palm trees. I’ve also made sure the game feels like NYUAD by adding familiar buildings and fluffy clouds.

I’m proud of the technical choices I made. The game has different parts, like the start page and instructions, to make it easy to play. I even added sound effects, like a cheerful jump sound when Caramel takes off. Players can restart the game or go back to the start page with simple key presses.

  • Some areas for improvement and problems that you ran into (resolved or otherwise)

During the development of my project, I encountered a few challenges and areas for improvement. One major issue I faced was optimizing the game’s performance. As the game got more complex with added elements, like the buildings and clouds, I noticed some slowdowns. I had to spend time fine-tuning the code to make sure the game runs smoothly, especially on lower-end devices. This was a valuable lesson in optimizing game design.

I had to work on enhancing the game’s instructions. I realized that players might need clearer guidance on how to play, so I made improvements to ensure that the instructions are more user-friendly.

Another challenge was making sure that the game’s look and feel truly capture the essence of NYUAD and Caramel. It required some iterations to get the graphics just right and create that immersive atmosphere.

While these challenges arose during development, I’m happy to say that I addressed them, resulting in a game I’m proud to share. The process of tackling these issues taught me valuable lessons in game development and design.

  • Future plans:
  • Play on Phones: Right now, you can play the game on a computer P5.js, but it would be great to play it on your phone or tablet. I need to make sure it works well on all kinds of devices.
  • High Score List: Having a list that shows who has the highest scores can make the game competitive. People will want to beat each other’s scores and share their achievements.
  • Listening to Players: I’ll pay attention to what players say and try to make the game better based on their ideas. They’re the ones playing it, so their feedback is important.
  • Making Money: If I want to earn money from the game, I can think about ways like letting people buy things in the game or showing ads. But I’ll be careful not to make the game annoying with too many ads 🙂

My own checklist of the requirements:

  • At least one shape

The background was coded the same way as the first-ever assignment: the NYUAD building, clouds, highline, and trees. I’ve used plenty of shapes.

  • At least one image

The cat is an imported image sketched by hand in Procreate; the NYUAD logo is also an imported image.

  • At least one sound

While the cat is flying, the moment you hit the Up Arrow you will hear a jumping sound.

  • At least one on-screen text

The score and the final score are displayed as on-screen text.

  • Object Oriented Programming

The game is entirely made using OOP.

  • The experience must start with a screen giving instructions and wait for user input (button / key / mouse / etc.) before starting

The landing page offers two options: press the space bar to start playing immediately, or press the letter “i” (which stands for instructions) to access the instructions page.

  • After the experience is completed, there must be a way to start a new session (without restarting the sketch)

Again, there are two options: press the space bar to replay, or press the letter “B” (which stands for begin) to return to the landing page.

Code:

let pipes = []; // An array to store information about the pipes in the game.
let score = 0; // Keep track of the player's score.
let gameOver = false; // A flag to check if the game is over.
let backgroundImage; // Reserved for a background image (its loadImage call is commented out in preload).
let backgroundX = 0; // The horizontal position of the background image.
let gameStarted = false; // Indicates whether the game has started.
let speed = 10; // The speed of the game elements, adjustable.
let gameState = 'landing'; // Tracks the current state of the game.
let landing, instruction, caramel, palm1, palm2, ending, NYUAD; // Images assigned in preload().
let jumpSound; // Jump sound assigned in preload().
let bird; // The player-controlled cat, created in setup().

function preload() {
  // Loading images and sounds used in the game.
  // backgroundImage = loadImage('11.png');
  landing = loadImage('Catsaway(1).png');
  instruction = loadImage('instructions.png');
  caramel = loadImage("caramel.PNG");
  palm1 = loadImage("palm2-1.PNG");
  palm2 = loadImage("palm2 -2.PNG");
  ending = loadImage("score2.png");
  NYUAD = loadImage("NYUAD.png");
  
  // Load the jump sound
  jumpSound = loadSound('sfx_point.mp3');
  // hitSound = loadSound('sfx_hit.mp3');
}

function setup() {
// Set up the initial canvas for the game and create a bird object.
  createCanvas(800, 450);
  bird = new Bird(); 
}

function draw() {

  if (gameState === 'game') {
    // Code for the game state
    // Drawing buildings, clouds, and other elements on the canvas
    // Pipes are also created, and the game logic is implemented
    background(156,207,216,255);
    fill(207,207,207,255);
    rect(150,150, 500, 300);
    ellipse(400,150, 490, 100)
    
    // strokes for the building 
   fill(169, 169, 169)
    rect(150, 220, 500, 10);
    rect(150, 235, 500, 10);
    rect(150, 250, 500, 10);
    rect(150, 265, 500, 10);
    rect(150, 280, 500, 10);
    rect(150, 295, 500, 10);
    // rect(200, 220, 10, 100);
    rect(260, 220, 10, 100);
    rect(320, 220, 10, 100);
    rect(380, 220, 10, 100);
    rect(440, 220, 10, 100);
    rect(500, 220, 10, 100);

    
    image(NYUAD, 55, 0);
    fill(171,109,51,255);
    rect(150, 300, 510, 100);
    
    
    fill(121,68,19,255);
    rect(150, 310, 500, 5);
    rect(150, 317, 500, 5);
    rect(150, 324, 500, 5);
    rect(150, 331, 500, 5);
    
    rect(150, 390, 500, 20);
    

    
    // the 2 buildings
    fill(115,115,115,255);
    noStroke();
    square(0, 250, 200);
    square(600, 250, 200);
    
    // windows 
    fill(207,207,207,255);
    rect(20,260, 40, 50)
    rect(70,260, 40, 50)
    rect(120, 260, 40, 50)
    rect(640, 260, 40, 50)
    rect(690, 260, 40, 50)
    rect(740, 260, 40, 50)
  
        noStroke();
    fill(255);
    // First Cloud
    ellipse(200, 100, 80, 60);
    ellipse(240, 100, 100, 80);
    ellipse(290, 100, 80, 60);
    ellipse(220, 80, 70, 50);
    ellipse(260, 80, 90, 70);

    // Second Cloud
    ellipse(400, 80, 60, 40);
    ellipse(440, 80, 80, 60);
    ellipse(490, 80, 60, 40);
    ellipse(420, 60, 50, 30);
    ellipse(460, 60, 70, 50);

    // Third Cloud
    ellipse(600, 120, 90, 70);
    ellipse(640, 120, 110, 90);
    ellipse(690, 120, 90, 70);
    ellipse(630, 100, 80, 60);
    ellipse(670, 100, 100, 80);
    
    ellipse(0, 80, 60, 40);
    ellipse(40, 80, 80, 60);
    ellipse(90, 100, 60, 40);
    ellipse(140, 150, 50, 30);
    
    fill(15,138,70,255);
    ellipse(100, 420, 90, 70);
    ellipse(60, 450, 90, 70);
    ellipse(140, 420, 110, 90);
    ellipse(190, 420, 90, 70);
    ellipse(130, 500, 80, 60);
    ellipse(170, 500, 100, 80);
    
    ellipse(600, 420, 90, 70);
    ellipse(640, 420, 110, 90);
    ellipse(690, 420, 90, 70);
    ellipse(630, 500, 80, 60);
    ellipse(670, 500, 100, 80);
    
    
    fill(0,166,81,255);
    ellipse(0, 420, 90, 70);
    ellipse(40, 420, 110, 90);
    ellipse(90, 420, 90, 70);
    ellipse(30, 500, 80, 60);
    ellipse(70, 500, 100, 80);
    ellipse(670, 420, 90, 70);
    ellipse(700, 420, 110, 90);
    ellipse(740, 420, 90, 70);
    ellipse(750, 500, 80, 60);
    ellipse(760, 500, 100, 80);
    
    if (!gameOver) {
      bird.update();
      bird.show();
      for (let i = pipes.length - 1; i >= 0; i--) {
        pipes[i].show();
        pipes[i].update();
        if (pipes[i].hits(bird)) {
          gameOver = true;
        }
        if (pipes[i].offscreen()) {
          pipes.splice(i, 1);
        }
      }
      if (frameCount % 15 === 0) {
        pipes.push(new Pipe());
      }
      textSize(32);
      fill(255);
      text(score, 100, 30);
      for (let i = pipes.length - 1; i >= 0; i--) {
        if (pipes[i].pass(bird)) {
          score++;
        }
      }
    } else {
      image(ending, 0, 0, width, height);
      textSize(64);
      fill(255, 0, 0);
      text("", 200, height / 2 - 32);
      textSize(50);
      fill(0);
      text("" + score, 450, height / 2);
      // Provide a restart option

      // Check for restart key press
      if (keyIsDown(32)) { // SPACE key
        restart();
      } else if (keyIsDown(66)) { // 'B' key
        gameState = 'landing';
      }
    }
  } else if (gameState === 'landing') {
    
    // Code for the landing screen state
    // Displays the landing image and checks for keypress to start the game
    background(0);
    image(landing, 0, 0, width, height);
    textSize(32);
    fill(255);
    textAlign(CENTER, CENTER);
    text("", width / 2, height / 2);
    // Check for start key press (SPACE key) or 'i' key press for instructions
    if (keyIsDown(32)) { // SPACE key
      startGame();
    } else if (keyIsDown(73)) { // 'i' key
      gameState = 'instruction';
    }
  } else if (gameState === 'instruction') {
    // Code for the instruction screen state
    // Displays instructions and checks for keypress to start the game
    background(0);
    image(instruction, 0, 0, width, height);
    textSize(32);
    fill(255);
    textAlign(CENTER, CENTER);
    if (keyIsDown(32)) { // Check for SPACE key press
      startGame();
    }
  }
}

function startGame() {
  // Function to start the game
  // Resets game variables and sets the game state to 'game'
  gameState = 'game';
  gameStarted = true;
  bird = new Bird();
  pipes = [];
  score = 0;
  gameOver = false;
}

function restart() {
  // Function to restart the game
  // Calls the startGame function to reset the game
  startGame();
}

function keyPressed() {
  // When the UP_ARROW key is pressed, the bird jumps (if the game is not over)
  // and a jump sound plays.
  if (keyCode === UP_ARROW && !gameOver && gameState === 'game') {
    bird.jump();
    jumpSound.play();
  }
}

function restartGame() {
  bird = new Bird();
  pipes = [];
  score = 0;
  gameOver = false;
}

class Bird {
  // Bird class to handle bird-related functionality
  constructor() {
    this.y = height / 2;
    this.x = 64;
    this.gravity = 0.6;
    this.lift = -15;
    this.velocity = 0;
  }

  show() {
    noFill();
    ellipse(this.x, this.y, 32, 32);
    image(caramel, this.x - 70, this.y - 30, 150, 90);
  }

  update() {
    this.velocity += this.gravity;
    this.velocity *= 0.9;
    this.y += this.velocity;
    if (this.y > height) {
      this.y = height;
      this.velocity = 0;
    }
    if (this.y < 0) {
      this.y = 0;
      this.velocity = 0;
    }
  }

  jump() {
    this.velocity += this.lift;
  }
}

class Pipe {
  // Pipe class to handle pipe-related functionality
  constructor() {
    this.top = random(height / 2.5);
    this.bottom = random(height / 2);
    this.x = width;
    this.w = 20;
    this.speed = speed; // Adjust the speed here as well
    this.highlight = false;
    this.passed = false; // Set once the bird has cleared this pipe
  }

  show() {
    fill(106, 69, 46, 255);
    if (this.highlight) {
      fill(106, 69, 46, 255);
      noStroke();
    }
    rect(this.x - 8, 0, this.w + 6, this.top);
    rect(this.x, height - this.bottom, this.w + 2, this.bottom);
    const palmX = this.x - 82;
    const palmYTop = this.top - 250;
    const palmYBottom = height - this.bottom - 120;
    image(palm2, palmX - 20, palmYTop + 195, 200, 200);
    image(palm1, palmX, palmYBottom, 200, 200);
  }

  update() {
    this.x -= this.speed;
  }

  offscreen() {
    return this.x < -this.w;
  }

  hits(bird) {
    if (bird.y < this.top || bird.y > height - this.bottom) {
      if (bird.x > this.x && bird.x < this.x + this.w) {
        this.highlight = true;
        return true;
      }
    }
    this.highlight = false;
    return false;
  }

  pass(bird) {
    // Count each pipe only once, on the first frame the bird clears it
    if (!this.passed && bird.x > this.x && !this.highlight) {
      this.passed = true;
      return true;
    }
    return false;
  }
}


Week 6 | Midterm Project

The Concept:
In this engaging “ChromaMood” game, players take control of a catching hand, moving it across the width of the game screen. The objective is to collect words falling from the top of the screen. These words encapsulate various human moods, such as “happy,” “calm,” and “excited.” The interesting part of the game lies in its categorization system: the words are organized into distinct categories, each linked to a unique display color and an associated score. As players gather these words, they trigger a series of cascading effects. The words are woven into the live video feed, and the text display dynamically updates, its color shifting to reflect the word the player has just captured.

Why “ChromaMood”?
The name “ChromaMood” is a fusion of two essential elements in the game. “Chroma” relates to color and represents the dynamic background shifts triggered by the words collected, where each mood category is associated with a distinct color. “Mood” encapsulates the central theme of the game, which revolves around collecting words that express various human emotions and moods

How the project works:
The game starts on a welcoming screen with the title “ChromaMood” and two options: “start” and “help.” Clicking “start” leads you to the main game screen. Here, you’ll see a hand-shaped object set against a black background, with words falling from the top. The game’s twist is in the words you catch. Each word represents a different mood, like “happy” or “angry,” and it impacts the background and your score. For example, catching words associated with anger turns the background red and gives you a score of 2. The words on the background change too. The goal is to get the highest score by collecting words that reflect positive moods within a limited time.
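For illustration, the category system could be represented as a simple lookup table. Only the angry → red background, score 2 example comes from the description above; every other value here is an assumption:

```javascript
// Hypothetical mood table (only "angry" reflects the description above;
// the other colors and scores are invented for illustration).
const MOODS = {
  angry: { color: [255, 0, 0], score: 2 },    // red background, score 2
  happy: { color: [255, 220, 0], score: 5 },  // assumed
  calm:  { color: [120, 180, 255], score: 4 } // assumed
};

// Score awarded for catching a word of a given mood category.
function scoreFor(mood) {
  return MOODS[mood] ? MOODS[mood].score : 0;
}
```

When a word is caught, the same table entry supplies both the score increment and the tint for the background text.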

Welcoming Screen
Chromamood In-game Screenshot

Code Highlight I am proud of:
The part of the game I am proud of is the video display background, as I had no idea how to build it. It required somewhat intricate coding to integrate dynamic text updates with the live video feed and keep the experience cohesive. The idea is that I treat the sketch as pixels, updating them with every character of the words the player catches. Honestly, it was challenging, but I watched some tutorials and managed to figure it out.
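One way to sketch the idea (this is my reconstruction, not necessarily the author's exact approach): build a grid of characters from the caught word, then draw each character over the corresponding region of the video, tinted with the mood's color.

```javascript
// Builds a rows × cols grid where each cell holds the next character of
// `word`, cycling through it; this text layer is drawn over the video feed.
function textGrid(word, cols, rows) {
  const grid = [];
  let k = 0;
  for (let r = 0; r < rows; r++) {
    const row = [];
    for (let c = 0; c < cols; c++) {
      row.push(word.charAt(k % word.length));
      k++;
    }
    grid.push(row);
  }
  return grid;
}
```

In draw(), each cell would be rendered with text() at its (col, row) position over the video, using the mood's color as the fill.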

Sketch:
The video and sound effects do not work in the embedded iframe below. Open it in the p5.js editor for a better experience.

Some areas of improvement:
A potential enhancement to consider is shifting the way we control the hand object. Instead of relying on traditional keyboard controls, we could explore the possibility of tracking the player’s movements in the live video feed. By doing so, we could introduce a new level of interactivity, allowing players to physically interact with the game. This change not only simplifies the game’s control mechanics but also aligns perfectly with its fundamental concept. It would create a more immersive experience, where players can use their own motions to catch the falling words, making the game even more engaging and unique.

Reading reflection: Making art is like setting a trap

Philip Larkin, a poet, offers a straightforward perspective on the artistic process, particularly when it comes to creating poems. He simplifies it into three stages:

Step one: Intensely Feel Something
At the start of your creative journey, you need to be deeply connected to a specific emotion. It’s like having a passionate fire burning inside you, driving you to take action.

Step two: Convey that Emotion
The next phase involves using words and other forms of expression to enable others to feel the same emotion that’s so alive within you. However, this part can be quite challenging, as not everyone may easily grasp the feeling you intend to convey.

Step three: Let Others Experience It
Once you’ve crafted your artistic work, it’s time to share it with the world. When people engage with your poetry or observe your artwork, it should ignite the same emotions in them that you originally felt. This is when your art truly succeeds.

Although Larkin presents this process as simple, the creation of art often involves complexities, particularly in the stages between these steps. Crafting the actual artwork can be a time-consuming and mysterious journey.

Larkin suggests that to evoke the emotions in others, you must first find satisfaction within yourself. This means that while creating art, it should resonate with your inner self. If it does, it increases the likelihood that others will experience similar feelings when they interact with your work.

One very interesting concept in the text is that some artists dedicate their entire lives to expressing the same emotion in various ways. It’s as though they possess one profound sentiment they wish to share with the world.

While Larkin’s stages offer a simple framework for art, the true artistry often unfolds in the nuances between these phases. Having that one special feeling you want to convey can propel your creative journey throughout your lifetime.

Week#5 Midterm Project

I wanted to make my midterm project really personal, to motivate me to code and push myself, and the only topic that instantly came to mind was music. I am an avid music lover and have used music as a coping mechanism during times of (lowkey) depression, so I naturally have strong feelings about it. I decided to create my own version of the “coffee shop experience,” which really struck a chord with me. My rendition of the “coffee shop,” as mentioned, centers around music: a vintage radio with on/off, skip, and pause buttons, which requires user interaction, will play a sequence of 5(?) songs – more details listed below. Each song has its own uniquely specific background, each relating to a specific memory, and each background has a dynamic element. It’s important to note that the song title and artist name will be displayed on the radio for each song that plays.

Checklist:
1) create a radio using p5.js (JavaScript).
(on/off button – when “on”, music plays, skip button, pause button)
(off button – sequence of songs randomizes + turns off music).
(make sure songs autoplay on loop)
2) (image and sound) – different songs(x5) have different backgrounds.
3) (list/arrays and on-screen text) – song title shows on display when song is playing.
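The checklist above could be wired up roughly like this (a sketch under assumed file and song names, not the final project): a playlist array, a current index, and skip/shuffle helpers driving p5.sound.

```javascript
// Pure helpers (testable without p5):
function nextIndex(i, n) { return (i + 1) % n; }   // skip wraps back to song 0
function shuffled(arr, rand = Math.random) {       // Fisher-Yates on a copy
  const a = arr.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// p5.js side (browser only); file names and titles are placeholders:
let songs = [], titles = ["Song 1", "Song 2", "Song 3", "Song 4", "Song 5"];
let current = 0;
function preload() {
  for (const f of ["s1.mp3", "s2.mp3", "s3.mp3", "s4.mp3", "s5.mp3"]) {
    songs.push(loadSound(f));
  }
}
function skip() {
  songs[current].stop();
  current = nextIndex(current, songs.length);
  songs[current].play();            // draw titles[current] on the radio display
}
```

Turning the radio off could call `shuffled(songs)` to randomize the sequence, matching point 1 of the checklist.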

Specific worries/ possible challenges:
1) keeping track of specific parts of the code:
it’s absolutely imperative that my code is well organised and commented, especially since I’m still establishing a foundation in coding. Keeping track of variable names, classes, and functions is most important. One solution could be to work in separate sketch.js files for specific parts (e.g., backgrounds), particularly for code sections that will be wrapped in classes. Once I’ve made sure a part works, I can copy and paste it into the main sketch.js file. I’ll also regularly copy my main file’s code into a Google Docs/Word document just so it’s easier for me to keep track and stay organised – I find that the p5.js window size limits readability.

2) localising the “click” function to specific shapes:
throughout my assignments involving interactivity, I’ve used “click” functions that trigger a change, but the click has never been localised to a specific area – it was general, which allowed flexibility. Because of this, I’m naturally worried about the complexity of the code, and about the numerous conditions (if statements and for loops) that depend on how the user decides to interact. To me, this segment looks like a big knotted, balled-up thread I have to unpick strand by strand, so at the moment it seems incredibly daunting. Of course, nothing is impossible with an internet search (in-depth if necessary). If I remember correctly, localising the “click” function to specific shapes has been touched upon in one of the Coding Train videos, and the same applies to the if and for loops. Furthermore, there is always the “coffee shop experience” example I can use for guidance or as a source of reference.
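The localised-click worry above reduces to a point-in-shape test: on `mousePressed()`, check whether `(mouseX, mouseY)` falls inside each button’s bounds. A minimal sketch (the button coordinates are made up):

```javascript
// Is point (px, py) inside an axis-aligned rectangle?
function inRect(px, py, x, y, w, h) {
  return px >= x && px <= x + w && py >= y && py <= y + h;
}

// Is point (px, py) inside a circle centred at (cx, cy) with radius r?
function inCircle(px, py, cx, cy, r) {
  const dx = px - cx, dy = py - cy;
  return dx * dx + dy * dy <= r * r;   // compare squared distances, no sqrt needed
}

// p5.js usage (browser only):
function mousePressed() {
  if (inRect(mouseX, mouseY, 200, 300, 80, 40)) {
    // the on/off button was clicked
  }
}
```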

Week 5 – Midterm Progress

Concept)
Looking around, I see that most people on campus are worn out both mentally and physically. Many students have hit a point where they are so stressed, but don’t even have time to manage and relieve stress. From this, I started thinking that I want to do something with ‘screaming,’ as it’s a good means of letting out stress and frustration.

I believe there are a few games that use the volume or pitch of sound as a way to move characters around. Below is one of the games I have used as a reference.

Most games, I found out, use either volume or pitch of input sounds, not both. Based on this, I decided to use both factors. I plan to use the volume as the speed of the character and the pitch as the location (up and down) of the character. With these controls, the player will have to navigate through a certain map, collecting coins. (Below is a reference of what I’m thinking of.)

Programming Ideas)
There should be functions that create the map with collectables. There should also be a function that takes the input sound and translates it into the player (character)’s location and moving speed, something that detects contact between the player and a wall, and another that keeps track of the score.

Getting sound input and translating it to the player’s position and its moving speed would involve interactivity.
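The sound-to-movement mapping described above might look something like this (a rough sketch with guessed thresholds, not tuned values): mic volume drives speed, and the spectral centroid of the mic signal stands in for pitch to set the vertical position.

```javascript
// Pure helper: linear map with clamping (same idea as p5's map + constrain).
function mapClamped(v, inLo, inHi, outLo, outHi) {
  const t = Math.min(1, Math.max(0, (v - inLo) / (inHi - inLo)));
  return outLo + t * (outHi - outLo);
}

// p5.js side (browser only); 0.01/0.3 and 100/1000 Hz are placeholder ranges:
let mic, fft, playerY = 240, speed = 0;
function setup() {
  createCanvas(640, 480);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
}
function draw() {
  const vol = mic.getLevel();                          // 0..1 amplitude
  const centroid = fft.getCentroid();                  // rough "pitch" in Hz
  speed = mapClamped(vol, 0.01, 0.3, 0, 8);            // louder = faster
  playerY = mapClamped(centroid, 100, 1000, height, 0); // higher pitch = higher up
}
```

Tuning the in-ranges (0.01–0.3, 100–1000 Hz) is exactly the sensitivity problem mentioned below, so they would need testing with real voices.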

Complex Parts)
I think the hardest part will be figuring out how to translate the volume and the pitch of the user’s sound. The controls can’t be too sensitive, or the game will become frustrating, but they also can’t be so insensitive that moving the character becomes too hard.

I also think figuring out when the player has contacted a wall (which should end the game) would be a challenge.
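Wall contact can be checked with a circle-versus-rectangle test, assuming the player is drawn as a circle and the map walls as rectangles (an assumption about the final shapes):

```javascript
// True if a circle at (cx, cy) with radius r overlaps the rectangle
// at (rx, ry) with width rw and height rh.
function circleHitsRect(cx, cy, r, rx, ry, rw, rh) {
  // Find the closest point on the rectangle to the circle's centre...
  const nx = Math.max(rx, Math.min(cx, rx + rw));
  const ny = Math.max(ry, Math.min(cy, ry + rh));
  // ...then check whether that point lies within the circle.
  const dx = cx - nx, dy = cy - ny;
  return dx * dx + dy * dy <= r * r;
}
```

In the draw loop, the game would loop over every wall rectangle and end the run as soon as any call returns true.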

Reducing Risks)
I think I’ll work on two parts separately: the movement of the user using sound and the moving map. I will first work on the movement of the user, testing out different volumes and pitches of the user. If I feel like the sensitivity is just right, I will move on to work separately on the map.

I will have to figure out the shape of the maps and how I want to generate it. I will also think about how to randomly place collectibles. When both parts are done, I can combine the two projects to create a full game.

 

Week 5 – Reading Reflection

The study assigned as our reading covers how the technology of computer vision started and how it evolved through its use by different artists and fields. I think most readers will have already sensed that computer vision’s utility is nearly limitless – we’ve seen so many works using computer vision that suggest the potential for further development and extension.

The reading is a good reminder that as much as computer vision can lead to fascinating works, it has many limitations and needs careful consideration to maintain good accuracy. The fact that how precisely the technology works depends on our decisions is, in my opinion, another of its charms.

I also like how the work Suicide Box sparks questions. I understand that there can be different views (especially ethical ones) on the approach, and I wouldn’t say one side is more ‘correct’ than the other. However, the sole fact that it sparked questions and discussions about an issue people tended to walk away from and ignore is significant on its own.

Week#5 – Reading Reflection

Golan Levin’s reading sheds light on computer vision and its role in interactive art and creative design. It’s intriguing how the seemingly contradictory worlds of computers and art come together to create interactive art that deeply engages people. This irony lies in the fact that digital technology often absorbs individuals, disconnecting them from the physical world. However, when art and computers combine, they captivate and transport individuals into alternate realms that encourage technological interaction. During this interaction, artists embed powerful meanings into their pieces, which become all the more memorable because people physically engage with them.

On a different note, one aspect that stands out is how computer vision has evolved beyond art and found applications in diverse fields, including healthcare, transportation, security, and entertainment. This evolution reflects the pivotal role of computer vision in our modern lives. However, it’s important to note that the reading also raises ethical and societal questions, particularly in the context of surveillance-themed artworks like “Sorting Daemon” and “Suicide Box.” These pieces challenge us to consider the implications of using computer vision for both artistic and surveillance purposes, which ultimately blurs the line between observation and intrusion.

In conclusion, computer vision has undergone a remarkable transformation, becoming a powerful tool for artists, designers, and creators across various fields. Its integration with cleverly designed physical environments showcases the interdisciplinary nature of interactive art and design, bridging the gap between the virtual and physical worlds. While offering incredible creative possibilities that often proffer a deep and meaningful message, it also prompts us to reflect on the ethical and societal implications of this technology.