Reading Reflection – Week 5

In this passage the author discusses computer vision, a technology that allows computers to interpret and understand visual information from the surrounding environment.

What is so cool about this is how computers can understand what they see through computer vision, like movements and objects. Let's take the game “Limbo Time” as an example, where players use their hands and a computer tracks their movements to play the game. It's fascinating how simple techniques like movement tracking can create such intriguing interactive experiences.

Another example that fascinates me is “Messa di Voce,” a performance where voices were transformed into images. It's crazy how voices and sounds themselves transform into images. As a musician, this art piece really caught my attention and showed me the range of possibilities that are achievable using computers.

Lastly, I found it interesting how computer vision is becoming more accessible in multimedia tools. The author mentions plug-ins for programs like Processing and Max/MSP/Jitter that let artists and designers easily add computer vision features to their projects. It's like having a toolbox full of cool gadgets that make it easier to create interactive art or games, something that could be useful for our own future projects.

These examples show how artists use technology to make interactive projects in many different ways. In the age of Artificial Intelligence, it's cool to see how these early ideas helped shape the tech world we have now. They are like the building blocks for the interactive designs we see all the time.


Week 5 – Midterm Progress

Concept

For this project, I’ve drawn inspiration from a childhood game my brother used to play, called the “Fishing game.” It involved catching fish with a fishing hook within a time limit. I’ve adapted this concept into my own version of the game, adding some twists and modifications. Instead of a traditional fishing game with a box of fish, my game features fish scattered randomly across the canvas. The objective remains the same: capture each fish individually using a fishing hook (the mouse) and deposit them into a fish bowl before time runs out. This adaptation allows for a unique gameplay experience while still capturing the essence of the original game.

Design

1. The game begins with an instructions page featuring a button labeled “Click here to continue” at the bottom.

2. Upon clicking, the player is directed to a page where they can select the number of fish to be displayed on the canvas, adjusting the game’s difficulty.

3. After choosing the level of difficulty, the player clicks a button labeled “Click here to start the game.”

4. Upon starting the game, a countdown timer begins, indicating the time limit for completing the game.

5. Background music plays throughout the game, creating an immersive experience.

6. If the player successfully captures all the fish before the timer runs out, they win the game.

7. Upon winning, a new page displays congratulating the player on their victory and showing their completion time.

8. Victory music plays to celebrate the player’s success.

9. If the player fails to capture all the fish before the timer expires, they lose the game.

10. Upon losing, a new page shows that the player has lost the game.

11. End-game music plays to signal the conclusion of the game.

Challenges with the project

1. Implementing the functionality to drag and drop each fish individually into the fish bowl has posed a significant challenge (a rough sketch of one approach follows this list).

2. Creating an on-screen countdown timer that accurately tracks the remaining time and triggers the end game page when it reaches zero has been another obstacle.

3. Providing users with the ability to select their preferred difficulty level by choosing the number of fish in the game initially proved problematic and still requires code refinement.

4. Design challenge: positioning the fish bowl on a table above the water and making users drag the fish from the water surface into the bowl, which would make the scene more aesthetically pleasing.
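
One rough idea for challenge 1, sketched as a minimal standalone drag-and-drop pattern using p5.js mouse events (not the final implementation; bowlX and bowlY stand in for the bowl's center):

let draggedFish = null; // the fish currently being dragged, if any

function mousePressed() {
  for (let fish of fishes) {
    if (dist(mouseX, mouseY, fish.x, fish.y) < 25) {
      draggedFish = fish; // pick up the fish under the cursor
      break;
    }
  }
}

function mouseDragged() {
  if (draggedFish) {
    draggedFish.x = mouseX; // the fish follows the hook while dragging
    draggedFish.y = mouseY;
  }
}

function mouseReleased() {
  // bowlX and bowlY are assumed coordinates of the bowl's center
  if (draggedFish && dist(draggedFish.x, draggedFish.y, bowlX, bowlY) < 50) {
    draggedFish.insideBowl = true; // dropped inside the bowl
  }
  draggedFish = null; // let go of the fish either way
}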

Code

let fishes = [];
let fishBowl;
let fishingHook;
let gameMusic, victoryMusic, losingMusic; // declared here so preload() doesn't create implicit globals
let timer;
let gameStarted = false;
let gameFinished = false;
let numFishes = 5; // default number of fishes


function preload() {
  
  fishBowl = loadImage('bowl.png'); // Load image for fish bowl
  fishingHook = loadImage('hook.png'); // Load image for fishing hook
  gameMusic = loadSound('gamemusic.mp3'); // Load background music
  victoryMusic = loadSound('victorymusic.mp3'); // Load victory music
  losingMusic = loadSound('losingmusic.mp3'); // Load losing music
  
}

function setup() {
  
  createCanvas(800, 600);
  timer = new Timer(numFishes); // timer object with the number of fishes
  
}

function draw() {
  background("#2196F3");

  if (!gameStarted) {
    displayInstructions(); // Display instructions if game hasn't started
  } 
  
  else {
    timer.update(); // Update timer

    if (!gameFinished) {
      // Draw fish bowl
      image(fishBowl, width / 2 - 50, height / 2 - 50, 150, 150);

      // Draw fishing hook
      image(fishingHook, mouseX - 25, mouseY - 25, 50, 50);

      // Display and update fishes
      for (let fish of fishes) { //checks each elements of the "fishes" array
        fish.display();
        
        //hooking the fishes to the hook
        if (dist(mouseX, mouseY, fish.x, fish.y) < 25) {
          fish.hooked = true;
        
        }
        
        if (fish.hooked) {
          fish.move(mouseX, mouseY); //the fish is hooked to the hook
        }
      }

      // Check if all fishes are inside fish bowl
      let allFishesInside = true;
      for (let fish of fishes) {
        if (!fish.insideBowl) {
          allFishesInside = false;
          break;
        }
      }
      if (allFishesInside) {
        gameFinished = true;
        timer.stop();
        
        //play music
        if (timer.timeLeft > 0) {
          
          victoryMusic.play(); // Play victory music if game finished before countdown ends
          gameMusic.stop(); // Stop background music

        } 
        
        else {
          
          losingMusic.play(); // Play losing music if countdown ends before game finished
          gameMusic.stop(); // Stop background music

        }
        
      }
    } 
    
    else {
      fill(255);
      textSize(40);
      textAlign(CENTER, CENTER);
      text("Game Over!", width / 2, height / 2);
      text("Time left: " + timer.getTime() + " seconds", width / 2, height / 2 + 40);
      
    }
  }
}

function displayInstructions() {
  
  // Display instructions
  fill('rgb(0,0,134)'); //color of ellipse
  ellipse(width / 2, height / 2, 650, 150); // ellipse in the center, behind the text
  fill(255); //color of the texts
  textSize(40); //size of texts
  textAlign(CENTER, CENTER); //texts in the center of the canvas
  text("Click to start the game", width / 2, height / 2);
  
}


function mousePressed() {
  if (!gameStarted) { //if the game is not started 
    gameStarted = true; //the variable is set to true and it resets the fishes array
    fishes = []; // Reset fishes array
    
    //adding fishes to the canvas
    for (let i = 0; i < numFishes; i++) {
      fishes.push(new Fish(random(width), random(height)));
    }
    
    timer.start(); //countdown starts
    gameMusic.loop(); // Play background music on loop

    
  }
}

class Fish {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.hooked = false;
    this.insideBowl = false;
    this.fishImage = loadImage('fish.png'); // Load fish image (loaded per fish; preloading once would be more efficient)
  }

  display() {
    image(this.fishImage, this.x - 25, this.y - 25, 50, 50); // Draw fish image
  }

  move(x, y) {
    this.x = x;
    this.y = y;
    
    // Checking if the fish is inside the fish bowl
    if (dist(x, y, width / 2, height / 2) < 50) {

      this.insideBowl = true; // If the distance is less than 50, then the insideBowl property is set to true


    }
  }
}


class Timer {
  constructor(numFishes) {
    this.totalTime = numFishes * 1; // 1 second per fish; adjust the multiplier to tune the difficulty
    this.timeLeft = this.totalTime;
    this.running = false;
  }

  start() {
    this.startTime = millis();
    this.running = true;
  }

  stop() {
    this.endTime = millis();
    this.running = false;
  }

  update() {
    if (this.running) {  //if the timer is running
      
      let timePassed = millis() - this.startTime; //calculates the time passed since the timer started
      
      this.timeLeft = max(0, this.totalTime - floor(timePassed / 1000)); // Calculate remaining time
      
      if (this.timeLeft === 0) {
        this.stop(); // Stop the timer when time runs out
      }
    }
  }

  getTime() {
    return this.timeLeft;
  }
}

Future Improvements and Plans

1. Add more colors and design elements to the overall look of the game to make it more attractive; I want the design to be more aesthetically pleasing.

2. Add an instructions page to guide users on how to play the game.

3. Enable users to drag the fishes individually to the fish bowl for a more interactive experience.

4. Add more audio to the game, like subtle sounds and music cues when the user interacts with it.

5. Refine all aspects of the code to align with the initial plans and ensure smooth functionality.

Week 6 Reading Response

Diving into the article about computer vision for artists and designers felt like opening a door to a room where art meets science in the most fascinating way. I’ve always thought of computer vision as something you’d find in sci-fi movies or high-tech security systems, not something that could be part of creating art. The idea that artists are using this technology to make interactive pieces where the artwork changes based on how people move around it is pretty cool. It’s like the art is alive, reacting to us just as much as we react to it.

Reading about projects like Videoplace and Sorting Daemon really got me thinking. It’s one thing to look at a painting on a wall, but it’s something entirely different to be part of the art itself. The thought that my movements could influence an art piece, creating something unique each time, is both exciting and a bit mind-blowing. It’s not just about observing; it’s about participating, and that changes the whole experience of what art can be.

The technical side of things, like how computer vision can track movements and interpret them into visual changes in an art piece, was a bit complex but also intriguing. I didn’t realize so much went into making these installations work. It’s not just about having a good idea for an art piece; it’s also about solving problems, like making sure the lighting is right so the cameras can see what they need to. This made me appreciate the art even more, knowing the blend of creativity and technical skill that goes into it.

However, what stuck with me the most was thinking about the bigger picture—like what it means for a machine to watch us and then create art based on that. It’s a bit like the machine is an artist too, interpreting our actions in its own digital way. And while it’s amazing, it also raises questions about privacy and how much we’re comfortable with machines “seeing.”

Overall, the article was a peek into a future where art and technology blend in ways that make the audience a part of the artwork. It’s a reminder that creativity knows no bounds, especially when artists start playing around with tech to see what new forms of expression they can discover. It makes me wonder what other surprises the art world has in store as technology keeps advancing.

Midterm Progress

Inspiration

I grew up in a village without a park, a place where families, friends, and pets gather to enjoy nature and each other’s company. To experience the joy of a park, I had to travel to nearby villages or cities. This sparked my love for parks and inspired my midterm project: creating a virtual park. My goal is to recreate the sense of community and fun found in a park, from children playing on swings to people walking their dogs and old friends catching up. This project is my way of bringing the park experience to those who, like me, have always admired it from afar.

Concept

Building on the foundational inspiration, the core idea of my midterm project is to craft an immersive user experience that mimics the tranquility and community feel of a real park. To achieve this, I’ve begun by setting up a serene background image that serves as the canvas for the park’s life. Central to the scene are two swings, each with a figure that moves in harmony with the swing’s motion. To ensure these movements are as natural as possible, I’ve implemented the lerp() function, which allows for smooth transitions and adds a lifelike quality to the animations.
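
As a simple illustration of the easing idea (my own minimal sketch, not the actual project code; the angle values are made up):

let angle = 0;         // current swing angle
let targetAngle = 0.6; // the angle the swing is heading toward

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background('#cdeffd');
  angle = lerp(angle, targetAngle, 0.05); // ease smoothly toward the target
  if (abs(targetAngle - angle) < 0.01) {
    targetAngle = -targetAngle; // reverse direction at the end of each swing
  }
  translate(width / 2, 80);
  rotate(angle);
  stroke(0);
  line(0, 0, 0, 180);      // swing rope
  ellipse(0, 180, 30, 30); // figure on the swing
}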

Adding another layer of interaction to the park, I’ve introduced a jukebox feature. This allows users to engage with the environment actively by selecting and controlling the music to suit their mood, enhancing the personal experience of the park. While the music feature is currently in development, with plans to enable song changes and stops, it promises to be a significant aspect of user engagement.

Looking ahead, I plan to enrich the park’s atmosphere with spritesheets for various people movements, creating a dynamic and bustling park scene. This will include groups of people sitting on the grass or on carpets, adding to the communal vibe. Additionally, to further the immersion, I intend to incorporate ambient sounds, such as birds chirping, which will be a constant backdrop to the park experience, not subject to user control. This sound layer aims to deepen the user’s connection with the virtual environment, making it feel alive and vibrant. Moreover, diversifying the playground equipment, especially with different swings, will offer a variety of interactions for the user, simulating the choice and freedom one finds in a real park. This expansion not only adds visual appeal but also invites users to explore and find their favorite corner of the virtual park.

Challenging Aspects

Drawing Complexities: The first significant challenge is the detailed drawing of park structures and objects using p5.js. Every element, from swings to human figures, requires time-consuming manual drawing. This process demands a high level of precision and artistic skill to ensure the park’s visual appeal and thematic consistency.

Interactive Features: Introducing interactivity, such as clickable buttons and objects that respond to mouse hover, adds another layer of complexity. For example, making the jukebox highlight when hovered over involves sophisticated input detection and dynamic response coding, enhancing user engagement but also increasing development complexity.
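
A rough sketch of the hover idea, assuming the jukebox occupies a simple bounding rectangle (the function name and coordinates are placeholders):

function drawJukebox(x, y, w, h) {
  // check whether the mouse is inside the jukebox's bounding box
  let hovered = mouseX > x && mouseX < x + w &&
                mouseY > y && mouseY < y + h;
  stroke(hovered ? 'gold' : 'black'); // highlight the outline on hover
  strokeWeight(hovered ? 4 : 1);
  fill(110, 60, 30);
  rect(x, y, w, h);
}

Called every frame from draw(), e.g. drawJukebox(500, 320, 80, 140), this gives immediate visual feedback before any click handling is added.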

Animating Characters with Spritesheet: Utilizing spritesheets for character movement presents a formidable challenge. Ensuring these animated figures navigate the park without colliding with objects or wandering off the grass involves intricate collision detection and boundary management.
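
A minimal frame-cycling sketch of the spritesheet idea (the file name and frame layout are assumptions):

let sheet; // assumed: a horizontal strip of walk-cycle frames
const FRAME_W = 64, FRAME_H = 64, NUM_FRAMES = 8;

function preload() {
  sheet = loadImage('walk.png'); // hypothetical spritesheet file
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  let f = floor(frameCount / 6) % NUM_FRAMES; // advance a frame every 6 draws
  // draw one frame from the strip; the source x offset selects the frame
  image(sheet, 100, 100, FRAME_W, FRAME_H, f * FRAME_W, 0, FRAME_W, FRAME_H);
}

Keeping characters on the grass could then be as simple as constrain()ing their positions to the lawn's bounds.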

References:

https://www.youtube.com/watch?v=8uLVnM36XUc

Midterm Project Progress

My midterm project will center around the idea of a digital drum kit. As a drummer, I sometimes want to test new beats and patterns when I am not around a physical kit; in such times I like to use the “musicca” website (https://www.musicca.com/drums), which is where I got the inspiration for my project. Each sound on the drum kit is tied to a specific key, and the digital drum kit is played just like a physical one would be, by combining sequences of keys corresponding to the notes. Accurate and user-friendly implementation of the sounds and keys will be a major part of the project. I will use the image below for the drum kit and label each drum/cymbal with its corresponding key so that new users can navigate and learn it in a matter of seconds. I have taken this image from “musicca”, editing out only the drum sticks present on the website, as my implementation does not require them because of the different visual feedback discussed below.
Additionally, I have found all the essential sounds for each key from the website “freesound” (https://freesound.org/).
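
A rough sketch of how the key-to-sound binding might look (the file names are placeholders for the freesound downloads):

let drumSounds = {}; // maps a keyboard key to its loaded sound

function preload() {
  drumSounds['a'] = loadSound('kick.wav');  // placeholder file names
  drumSounds['s'] = loadSound('snare.wav');
  drumSounds['d'] = loadSound('hihat.wav');
}

function keyPressed() {
  let s = drumSounds[key.toLowerCase()];
  if (s) {
    s.play(); // trigger the sound bound to this key
  }
}
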
Next, I wish to make the project more visual, so that besides the audio feedback users also get visual feedback corresponding to the sound that has been played. For that I will implement circle waves that arise from the center of the drum/cymbal that has been played and propagate across the screen. This is going to be the most challenging part of the project; I am considering implementing a collision effect for those waves, but I will decide on it later depending on the visual effects created. I am still searching for ideas beyond the circle waves that will make appealing visuals, but so far this implementation appeals to me the most.
There are 2 classes for the project so far. A class for drums, each object being a different drum or cymbal, and a class for the waves, each object being the wave tied to a certain drum/cymbal.
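
A minimal sketch of what the wave class could look like (the names and numbers are my assumptions, not the final code):

class Wave {
  constructor(x, y) {
    this.x = x; // center of the drum/cymbal that was hit
    this.y = y;
    this.r = 0; // current radius of the expanding circle
  }

  update() {
    this.r += 4; // propagate outward each frame
  }

  display() {
    noFill();
    stroke(255, map(this.r, 0, width, 255, 0)); // fade as the wave grows
    circle(this.x, this.y, this.r * 2);
  }

  isOffScreen() {
    return this.r > width; // ready to be removed from the waves array
  }
}

Each hit would push a new Wave into an array that draw() updates, displays, and prunes.
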
There will be a button at the bottom of the drum kit image inviting users to start a new session when the previous user is done; its functionality is simply to remove the existing waves from the screen, so that nothing carries over from the previous user to the new one.

Midterm Project Idea | Automated Interactive Kaleidoscope

Concept, Thoughts and Research

For the midterm project, I had initially thought of making a game but decided to challenge myself with something different: interesting generative art with user interaction and customization. I came across a kaleidoscope piece by Sheenest that uses Perlin noise to automatically generate random kaleidoscopes (without user input): https://github.com/sheenest/Perlin-Kaleidoscope-Project. This idea of using symmetry to create generative art has always intrigued me. Kaleidoscopes have inspired a lot of art, and any way of automating them and making them interactive is something I want to work on. The cool kaleidoscope art I found while researching is exactly the kind of thing I am aiming for.
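
Before touching Sheenest's code, here is the core symmetry trick in a minimal sketch of my own (a rough assumption of how I might start, not the final piece): every mouse stroke is redrawn once per wedge around the center, plus a mirrored copy.

function setup() {
  createCanvas(600, 600);
  background(0);
}

function draw() {
  const sides = 8; // order of symmetry
  translate(width / 2, height / 2); // draw relative to the center
  if (mouseIsPressed) {
    // mouse coordinates relative to the center
    const x = mouseX - width / 2, y = mouseY - height / 2;
    const px = pmouseX - width / 2, py = pmouseY - height / 2;
    stroke(255, 160);
    for (let i = 0; i < sides; i++) {
      rotate(TWO_PI / sides);
      line(px, py, x, y);   // the stroke, repeated in each wedge
      line(px, -py, x, -y); // its mirror image within the wedge
    }
  }
}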

For sound effects, I plan on using my favorite YouTuber Melodysheep’s sound library:

Music Library — Melodysheep

This sound library was originally designed for space/astronomy videos, but I believe alternating the tracks will create a very immersive experience for the user.

The user interaction that I want to add to this project would be:

  1. Some way to draw something on the screen
  2. Then play the program; the program uses Perlin noise to change the points and colors of what was drawn to create beautiful patterns
  3. Customizable bars for the user; I found a library on GitHub that automatically generates a GUI for the variables in the program. Here’s the link: https://bitcraftlab.github.io/p5.gui/
  4. A way to save the image
  5. An option to add your own image instead of drawing something
  6. (maybe) use computer vision in some way to use audio to influence the pattern

Identifying Challenging Aspects of the Project

Some key challenges that I expect to overcome are:

  1. Adding user interactivity AND applying Perlin noise is something that hasn’t been done before in such projects. Enabling the user to draw something with the mouse (on a graphics buffer that can be hidden later) is going to be challenging to implement; a minimal buffer sketch follows this list.
  2. Thinking about how noise can be used to vary the user input so that it appears different but not too different. The user input should still significantly affect the final result.
  3. Sound effects associated with the project: when to play and pause sounds, and, if computer vision ends up being a part of it, how should sound influence the noise factor?
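
For challenge 1, a minimal sketch of drawing into a hideable off-screen layer with createGraphics() (a rough starting point, not the final design):

let buffer; // off-screen layer holding the user's drawing
let showBuffer = true;

function setup() {
  createCanvas(600, 600);
  buffer = createGraphics(600, 600);
}

function draw() {
  background(0);
  if (mouseIsPressed) {
    buffer.stroke(255);
    buffer.strokeWeight(5);
    // draw into the buffer, not directly onto the canvas
    buffer.line(pmouseX, pmouseY, mouseX, mouseY);
  }
  if (showBuffer) {
    image(buffer, 0, 0); // hiding the layer is as simple as skipping this call
  }
}

function keyPressed() {
  if (key === 'h') {
    showBuffer = !showBuffer; // toggle the drawing layer
  }
}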

Understanding the Kaleidoscope Code

I started by looking at some classes in Sheenest’s code that I could recycle and modify to integrate user input into. One class that I plan to use after modification is the lin class (which represents a randomly drawn line):

class lin {

  constructor( maxCurl , maxSpeed , size , sides , strokeMax , strokeMin , steps , fluct = 5 , order = 0.6) {

    // this.keyVel = createVector( 0 , 0 );

    // maxSpeed = 30; 
    this.maxCurl = maxCurl;
    this.maxSpeed = maxSpeed ;
    this.size = size ;
    this.sides = sides; 

    this.strokeMax = strokeMax ;
    this.strokeMin = strokeMin ;

    this.steps = steps ;

    this.trail = [] ; 
    this.vel = new polar ( 0 , 0 );

    this.noiseRad = new perlin( 
      'noiseRad',
      random(100),
      random(10),
      0.2 ,
      this.steps , 
      fluct , // more octaves/fluct , more fluctuation in velocities, default value is 5
      0.5 // 
    );

    this.noiseTheta = new perlin( 
       'noiseTheta', //name
       random(100), //seed 
       random(10), //step 
       0.2 , // counts 
       this.steps , 
       4 , // lod
       order //  order referring to the falloff level of the curves
       // more order/falloff , more persistence in the curves (more circles)
    );


    this.cHSB = []; 

    this.noiseHue = new perlin(
      'noiseHue',
      random( 100 ),
      random( 10 )
    );

    this.noiseSat = new perlin(
      'noiseSat',
      random( 100 ),
      random( 10 )
    );

    this.noiseBright = new perlin(
      'noiseBright',
      random( 100 ),
      random( 10 )
    );
    
    let bound = 1/sqrt(2) * boundary/2 ; 
    this.o_pos = createVector( random( - bound , bound ) , random( -bound, bound ) ); //random start position
    //this.pos is the independent position value that syncs with trail[0] after each iteration
    this.pos = this.o_pos.copy(); 

    //this.hueVal and this.cVal is the independent color value that syncs with cHSB[0] after each iteration
    this.hueVal = random(360) ;//random hue value at the start
    this.cVal = color ( this.hueVal , 0 , 0 );

    this.trailMemory = [] ; 
    this.cHSBMemory = [];
    this.memorySize = 1000;
    this.colormemorySize = this.memorySize * 4 ; //coz max inverse_speed = 4, each iteration of update() adds a max of 4 color values into cHSBMemory

    this.reverseCount = 0 ;
    this.reverseColorCount = 0 ;

    this.inverse_speed = [] ; // first value is for syncing of colors when this.vel is low
    // array size is equal to memory size, to save past inverse_speed values to tally with inverseColorCount

  }

  // ... (remaining methods of the lin class are omitted in this excerpt)
}

This class creates a line object. I plan on adding simple user interactivity by creating a line object from user input, making an instance of this class, and then manipulating the lines smoothly.

A very simple and primitive example of the kind of code I can use for user interactivity is:

function draw() {
  if (mouseIsPressed) { // check if the mouse button is pressed (preferably the left click button) 
    stroke(0); // set the stroke color 
    strokeWeight(5); // set the stroke weight
    line(pmouseX, pmouseY, mouseX, mouseY); // draw a line from the previous mouse position to the current mouse position
  }
}

Instead of drawing a line using the line() function, I will use the constructor of the lin class to create an instance of the line or lines drawn by the user.

I have spent much time understanding the code generally used to create kaleidoscopes and thinking about design and user interactivity. I look forward to integrating user interactivity to create an intriguing generative art experience.


Reading Reflection – Week 5

This week’s reading was about computer vision, its types, and its implementation. One thing that stuck out for me was the accessibility of simple computer vision techniques and the emphasis the author put on how even undergraduate or high school students can use some of the techniques described in the text. The author even provided source code for four of the techniques mentioned in the paper, another indicator of the subject’s accessibility. Presenting computer vision to a widespread audience with different interests and proficiencies can extend its reach into different disciplines, producing applications of great variety and creativity.
However, it is not to be mistaken for an easy subject, as it has been regarded in the past, but rather a subject with a low barrier to entry and a high ceiling, as the opportunities and possibilities with computer vision keep increasing indefinitely, especially with novel applications such as ChatGPT, self-driving cars, AI image generators, etc. This seems like the tip of the iceberg as we enter an era of advanced AI and VR; hence this reading can be an excellent starting point for anybody who wants to interact with the roots of the tree that might shape the future.

Midterm Progress

Concept and User Interaction

One of the games that went viral in South Korea is the Ideal Type World Cup. It’s a simple game where several rounds are played, each presenting two options of celebrities, and you have to choose one out of the two. The selected option advances to the next round, and you continue choosing the one you like more until there is only one winner left.

Recently, while I was studying, I wanted to take a break, so I decided to play this game with my friends. I found one on the internet about the ideal Korean male actor and then played another one about food. Then, it occurred to me that it would be very interesting to try making this World Cup with the food we have on campus. The final idea for my midterm project is to create a Campus Food World Cup, where there will be photos of food from places on campus (Marketplace, D2, D1, Mysk, Blacksmith, etc.). Users will click on the photos of the food they prefer to find out what their top campus food is.

Design

Some elements that I would like to add to my project, as part of the design, include a beautiful interface and fonts. Additionally, there will be background music playing during the game and another piece of music at the end of the game. When the user selects the winner, I would like to add a sound effect. The overall interface will feature colors related to our campus, using NYU colors, etc., to establish a connection with our campus. Also, on the starting page, I plan to add some text about our campus and the dining systems we have.

Code

The biggest challenge I faced initially when I started to conceptualize the logic for my project was figuring out how to make my code remember the winners I had chosen from each round and continue the game to the next round until there was only one winner. To understand how the logic should work, I decided to experiment with numbers before loading all the pictures and designing the project. Therefore, I created an array of 32 numbers.

To facilitate several rounds of the game, I introduced a ‘winner’ variable, which would be determined by the mouse location.

if (currentPair.length === 2) {
    let winner = (mouseX < width / 2) ? currentPair[0] : currentPair[1];
    nextRoundCandidates.push(winner);
    // ...
}

If the mouse location is on the left half of the canvas, the ‘winner’ variable would be the element on the left side, and the same logic applies to the right side.

Then, I created an array called ‘nextRoundCandidates’ so that the array would contain all the elements that should pass to the next round.

if (currentRoundCandidates.length === 0) {
    if (nextRoundCandidates.length === 1) { // If only one candidate remains, the game is over
      gameIsOver = true;
      return;
    }
    // ...
}
If there are no more candidates left for the next round, the program recognizes that the game is over.

if (gameIsOver) { // Restart the game if it's over
  gameSetup();
  return;
}

The program then executes the gameSetup() function.

function gameSetup() {
  // Reset and reshuffle the game for a new start
  candidates = Array.from({length: 32}, (_, i) => i + 1);
  currentRoundCandidates = shuffle(candidates);
  
  nextRoundCandidates = [];
  currentPair = [];
  updateCurrentPair();
  roundNumber = 1;
  gameIsOver = false;
  loop(); // Restart the drawing loop
  
  // Show a restart message
  showMessage = true;
  messageStartFrame = frameCount;
}

This code resets the game to its starting conditions, reshuffling the candidates and resetting all relevant variables for a new game.
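
The updateCurrentPair() function called above is not shown; here is a sketch of how it might advance the bracket, under my assumption that it both refills the round from the winners and draws the next pair (shuffle() is p5’s built-in):

function updateCurrentPair() {
  // When the current round is exhausted, promote the winners to a new round
  if (currentRoundCandidates.length === 0 && nextRoundCandidates.length > 1) {
    currentRoundCandidates = shuffle(nextRoundCandidates);
    nextRoundCandidates = [];
    roundNumber++;
  }
  // Take the next two candidates as the pair to display
  currentPair = currentRoundCandidates.splice(0, 2);
}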

Although there are still many aspects to add to this project, I am pleased that I was able to write the code for the main logic of the game. Now, I will focus on the details and design part of the project to make it visually appealing.

Reading response | Week 5 | Aadil Chasmawala

This week’s reading on computer vision was very interesting. The key idea presented was how computer vision is becoming more and more accessible to ordinary people and how we can use its power to create better and more interactive art.

The various uses of computer vision that the author gives in the beginning of the article illustrate how simple concepts can be so effective at generating something intriguing. The author’s discussion of some simple algorithms that enable object tracking was particularly interesting to me. I looked deeper into the code listings and found the algorithms very useful. I hope to include them in some way for my midterm project.

Additionally, the author’s emphasis on cleverly designing the physical environment stood out for me. His demonstration of how code can be simplified by creating the right physical environment was interesting to think about. It reminded me of how sometimes we focus on the wrong problem, and looking at it from a broader perspective may greatly help.

Overall, the idea of computer vision becoming more and more accessible to the general public excites me. With the advent of AI tools like Sora and DALL-E, it seems like it’s only a matter of time until creativity and design take precedence over raw technical skill. These are the skills that I look forward to developing.


Midterm Project Draft: Interactive Hand-Gesture Controlled Game

Project Concept
The core idea behind my midterm project is to develop an interactive game controlled entirely through hand gestures. The game will leverage a hand-tracking library to interpret the player’s hand movements as input commands. I aim to create an engaging and intuitive user experience that does not rely on traditional input devices like keyboards, mice, or game controllers.

Design & User Interaction
Players will interact with the game through various hand gestures. For instance, an open hand gesture will start the game, while making a fist and holding it for a brief moment will pause or resume the game. The game’s mechanics and objectives will be designed around these gestures, ensuring that the player’s physical movements are seamlessly translated into in-game actions.

To detect and interpret these gestures, I will use a hand-tracking library that provides real-time hand position and gesture recognition. The player’s hand movements will be captured through a webcam and processed to identify specific gestures. Based on the detected gestures, the game will execute corresponding actions, such as starting, pausing, or resuming gameplay.


Code Design
Gesture Detection Functions: I have implemented functions like detectOpenHandToStart() and detectHands() to detect specific hand gestures. These functions use the hand-tracking library’s predictions to analyze the hand’s position and orientation.

Hand Highlighting: The highlightHand() function visualizes the player’s hand position on the screen, enhancing user feedback and interaction.

Gesture Recognition Algorithms: isOpenHand() and isClosedFist() distinguish between different hand gestures by analyzing the distances between hand landmarks. These algorithms are crucial for converting physical gestures into game commands.
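
The detectOpenHandToStart() function is not included in the listing below; a plausible shape for it, reusing isOpenHand() from the listing and an assumed gameStarted flag:

let gameStarted = false; // assumed game-state flag

function detectOpenHandToStart() {
  if (!gameStarted && predictions.length > 0) {
    const landmarks = predictions[0].landmarks;
    if (isOpenHand(landmarks)) {
      gameStarted = true; // an open hand starts the game
      console.log("Game Started");
    }
  }
}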

let video;
let handpose;
let predictions = [];
let isGamePaused = false;
let fistDetectedTime = 0;
const fistToggleDelay = 2000;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  handpose = ml5.handpose(video, modelReady);
  handpose.on('predict', results => {
    predictions = results;
  });
}

function modelReady() {
  console.log("Handpose model ready!");
}

function draw() {
  background(255);
  image(video, 0, 0, width, height);
  detectHands();
}

function detectHands() {
  if (predictions.length > 0) {
    const landmarks = predictions[0].landmarks;
    highlightHand(landmarks, 'green');

    if (isClosedFist(landmarks)) {
      let currentTime = millis();
      if (currentTime - fistDetectedTime > fistToggleDelay) {
        isGamePaused = !isGamePaused;
        console.log(isGamePaused ? "Game Paused" : "Game Resumed");
        fistDetectedTime = currentTime;
      }
    }
  }
}

function highlightHand(landmarks, color) {
  fill(color);
  landmarks.forEach(point => {
    ellipse(point[0], point[1], 10, 10);
  });
}

function isOpenHand(landmarks) {
  let minDist = Infinity;
  for (let i = 4; i <= 20; i += 4) {
    for (let j = i + 4; j <= 20; j += 4) {
      let dist = distanceBetweenPoints(landmarks[i], landmarks[j]);
      if (dist < minDist) {
        minDist = dist;
      }
    }
  }
  return minDist > 50;
}

function isClosedFist(landmarks) {
  let maxDist = 0;
  for (let i = 4; i < landmarks.length - 4; i += 4) {
    let dist = distanceBetweenPoints(landmarks[i], landmarks[i + 4]);
    if (dist > maxDist) {
      maxDist = dist;
    }
  }
  return maxDist < 40;
}

function distanceBetweenPoints(point1, point2) {
  return Math.sqrt(Math.pow(point2[0] - point1[0], 2) + Math.pow(point2[1] - point1[1], 2) + Math.pow(point2[2] - point1[2], 2));
}

Challenges & Risk Mitigation
The most challenging aspect of this project was developing reliable gesture recognition algorithms that can accurately interpret the player’s intentions from the hand’s position and movement. Misinterpretation of gestures could lead to a frustrating user experience.

To address this challenge, I focused on refining my gesture recognition algorithms (isOpenHand() and isClosedFist()) to improve their accuracy and robustness. I conducted testing with different hand sizes and lighting conditions to ensure the algorithms’ reliability across a wide range of scenarios. Additionally, I implemented visual feedback mechanisms (via highlightHand()) to help players adjust their gestures for better recognition.

Next Steps
In conclusion, this project represents a significant step towards creating more natural and immersive gaming experiences. I aim to explore new possibilities in game design and interaction by leveraging hand gestures as input.