Midterm project – Cyberpunk Breach

Demo and sketch below:

Concept:

This project is based on a childhood game of mine called Magic Touch. The core concept of that game is that you are a wizard who must stop robots from attacking your castle. The robots fall slowly, carried by balloons marked with symbols; you draw a balloon's symbol to pop it, and once all of a robot's balloons are popped, it falls to its death.

In my case, I made the game almost entirely camera based, with no keyboard at all and a single use of the mouse just to toggle full screen. It is cyberpunk themed: you are being attacked by drones, and you must draw the symbols rotating around each drone with your hand to eradicate them before they breach the system.

Implementation:

The code hierarchy consists of two folders: one for assets and one for all the scripts.

The assets folder is self-explanatory; it contains all my music, sound effects, images, and fonts.

The scripts folder contains 12 JavaScript files (excluding sketch.js, which sits outside the folder). I will summarize what each file does, providing more technical context where needed.

CyberButton.js: This file contains a class called CyberButton, which takes the button's position, width, height, and label (the text inside the button).

Most of the code, however, designs the button itself: an outer blue outline with a transparent interior and a "filled" cyan color, plus two purple trapezoids extending from diagonally opposite corners of the button.

HandTracking.js: This is where the magic happens. This file contains all the hand-tracking code and its optimizations. It includes a class that stores the Kalman filter settings for each hand shown on screen. I will quote my midterm progress post to explain what a Kalman filter is.

To explain the core concept:

The filtering has 3 steps:

– Predict

– Update

– Estimate

The Kalman filter works in a simple loop. First, it predicts what the system should look like next based on what it already knows. Then, it checks that prediction against a new (noisy) measurement and corrects itself.

Because of this, the Kalman filter has two main steps. The prediction step moves the current estimate forward in time and guesses how uncertain that estimate is. The correction step takes in a new measurement and uses it to adjust the prediction, giving a more accurate final estimate.
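As an illustration of that predict-and-correct loop, here is a minimal one-dimensional Kalman filter in plain JavaScript. This is a sketch for explanation only, not the project's actual filter; the noise values q and r are arbitrary placeholders.

```javascript
// Minimal 1D Kalman filter: constant-position model with process noise q
// and measurement noise r. Each call to filter() predicts, then corrects
// against the new (noisy) measurement z.
class Kalman1D {
  constructor(q = 0.01, r = 4) {
    this.q = q; // process noise: how much we expect the state to drift
    this.r = r; // measurement noise: how jittery the sensor is
    this.x = 0; // current estimate
    this.p = 1; // estimate uncertainty
  }
  filter(z) {
    this.p += this.q;                     // predict: uncertainty grows over time
    const k = this.p / (this.p + this.r); // Kalman gain: how much to trust z
    this.x += k * (z - this.x);           // correct: nudge estimate toward z
    this.p *= 1 - k;                      // uncertainty shrinks after correction
    return this.x;
  }
}
```

Feeding it a jittery stream of x coordinates produces a stream that drifts smoothly toward the true position; a larger r makes the output smoother but laggier, which is the latency-smoothness tradeoff mentioned later in the options section.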

This file also calculates the distance between your thumb and index finger to determine when you are pinching and when you are not.

The pinching logic is, admittedly, a bit overcomplicated for the gameplay. I am sure there is a better way, but this is the way I figured out, and if it works, it works.

When drawing with your hand, the detector is very sensitive, and sometimes your drawings just stop midway, which ruins the gameplay. So pinching becomes true when the thumb-index distance drops below 30, but it only becomes false once the distance exceeds 60 (this can be changed in options). This leeway gives you some grace: you need your index finger and thumb really close together to start a pinch, but to end it you have to move them quite far apart (60, double the pinch threshold).

    if (pinchd < 30) { 
      isPinching = true;
    }

---------------------------------

let isActuallyPinching = pinchd < pinchThreshold;
  // Gives the user a 30 pixel buffer for when drawing to reduce the probability of accidentally stopping drawing.

  // When we are drawing, we push the point of our cursor to the current path
  if (isActuallyPinching) {....}
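Put together, the two thresholds form a small hysteresis state machine. The sketch below is a self-contained illustration of that logic (the helper name is made up; the real code keeps isPinching as a plain variable):

```javascript
// Pinch hysteresis: engage when the thumb-index distance drops below 30 px,
// release only once it exceeds 60 px. In between, keep the previous state.
function makePinchDetector(onThreshold = 30, offThreshold = 60) {
  let pinching = false;
  return function update(dist) {
    if (dist < onThreshold) pinching = true;
    else if (dist > offThreshold) pinching = false;
    // Distances between the two thresholds leave `pinching` unchanged
    // (the grace zone that stops drawings from cutting out midway).
    return pinching;
  };
}
```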

OnBoarding.js: This contains all the information the user needs before starting the game: how to play, how to navigate the menu, and how to make your experience as good as it can be.

drones.js: This file contains a class called Drone. Three types of drones spawn during gameplay: a normal drone, a miniboss drone, and a boss drone. What differentiates them is how many symbols you must draw to eradicate them: a normal drone takes 1-2 symbols, a miniboss 5-8, and a boss 15. There are 5 different symbols to draw, so symbols will repeat. Each drone uses a sprite with an idle animation for the falling and a death animation. The miniboss drone is tinted purple and slightly bigger, while the boss drone is tinted red and very large.
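As a sketch of how the per-type symbol counts could be assigned (a hypothetical helper, not the project's Drone code; only "W" and "S" are symbol names confirmed by the post, the other three are placeholders):

```javascript
const SYMBOLS = ["W", "S", "circle", "check", "zigzag"]; // 5 glyphs total

// Hypothetical helper: pick the symbols a drone must display, by its type.
function symbolsFor(type) {
  const count = {
    normal: 1 + Math.floor(Math.random() * 2),   // 1-2 symbols
    miniboss: 5 + Math.floor(Math.random() * 4), // 5-8 symbols
    boss: 15,                                    // always 15
  }[type];
  const picked = [];
  for (let i = 0; i < count; i++) {
    // Only 5 distinct symbols exist, so repeats are inevitable for bosses.
    picked.push(SYMBOLS[Math.floor(Math.random() * SYMBOLS.length)]);
  }
  return picked;
}
```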

global.js: This was mainly to clean everything up; it contains all the global variables used in the project.

// Path of the drawing
let currentPath = [];
// The variable that will hold the stroke recognizer class.
let recognizer;
// Keep track of the state of the game (start with the splash screen)
let state = "menu";
// Hand model, will become true when it is initialized and ready
let modelReady = false;
// Variable for the camera feed
let video;
// Split sprite sheets into animations
let animations = {};
// Raw data of the sprite sheets
let sheets = {};
// Background photo of the menu
let menubg;
// Master volume default at 50%
let masterVolume = 50;
// Pinch release threshold (adjustable in options)
let pinchThreshold = 60;
// Distance between thumb and index
let pinchd = 0;
// CyberPunk font
let cyberFont;
// Store the buttons
let btns = [];
// Store the hands
let hands = [];
// miniboss timer
let minibossTimer = 0;
// For ml5js, contains hand data
let handPose;
// Holds the value of the estimated x position from the Kalman filter
let smoothX = 0;
// Same as above but for y
let smoothY = 0;
// Kalman filter ratio
let kf;
// Timer before the user can return to the menu
let gameOverTimer = 0;
// Sync level (0-100)
let syncLevel = 0;
// Last boss spawn
let lastBossMilestone = 0;
// Duration of the onboarding screen
let duration = 8000;
// Array to hold the drones
let drones = [];
// Timer to keep track of when to spawn drones
let spawnTimer = 0;
// Keep track when the boss is on screen
let bossMode = false;
// Variables to store music & sound effects
let syncmusic;
let game1music;
let game2music;
let onboardingmusic;
let breachedmusic;
let mainmenumusic;
// Holds all gameplay music to loop it
let gameplaymusic = [];
// Tracks which song in the gameplaymusic array is up next
let currentTrackIndex = 0;
// Keep track of how long the onboard screen has been going on for.
let onboardingStartTime = 0;
// Score of the current run
let score = 0;
// Store in browser memory or 0 if first time
let highscore = localStorage.getItem("breachHighscore") || 0;


// Draw cursor
function drawCursor(x, y) {
  push();
  fill(0, 255, 255);
  noStroke();
  ellipse(x, y, 20);
  fill(255);
  ellipse(x, y, 8);
  pop();
}

Menu.js: This file draws the menu, putting our background image, and our 3 buttons (play, options, quit).

Option.js: This file draws the options page, which is accessed by clicking the options button. There are three things you can change in options: the pinch threshold we talked about earlier, the Kalman filter smoothing (a latency-smoothness tradeoff), and the master volume of the game.

Play.js: This file contains the play page, where the background is drawn, the score is handled, and the spawning of the drones is done. The neat thing about the score system is that the saved high score persists across sessions: even if you close the p5.js game and re-open it, or close your browser entirely, your high score from any previous session will remain as long as you don't clear your cookies and site data. This works because the score is stored locally in your browser, where it stays until deleted manually.

localStorage.setItem("breachHighscore", highscore);
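A testable way to express this read-compare-write cycle (a sketch: the project calls localStorage directly, while this hypothetical wrapper accepts any storage-like object so it can run outside a browser):

```javascript
// Read the stored high score (or 0 on first run), update it if beaten,
// and return the current best. `store` is anything with getItem/setItem,
// e.g. window.localStorage in the browser.
function updateHighscore(store, score) {
  const prev = Number(store.getItem("breachHighscore")) || 0;
  if (score > prev) store.setItem("breachHighscore", String(score));
  return Math.max(prev, score);
}
```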

A normal drone spawns every 9 seconds, a mini boss drone will spawn every 20 seconds, and a boss drone will spawn every 1500 points.

This is all monitored by the function handleSpawning:

function handleSpawning() {
  if (!bossMode) {
    // Stop normal and miniboss spawns 100 points before the boss threshold,
    // so existing drones can clear before the boss arrives
    let nextThreshold = lastBossMilestone + 1500;
    if (score < nextThreshold - 100) {
      // Warning: Red pulse if Miniboss is 3 seconds away
      let nextMinibossTime = minibossTimer + 20000;
      if (millis() > 5000 && nextMinibossTime - millis() < 3000) {
          drawWarning("MINIBOSS INBOUND");
      }

      // Check for Miniboss spawn every 20 seconds, avoiding start of game
      if (millis() > 20000 && millis() - minibossTimer > 20000) { 
          drones.push(new Drone("miniboss"));
          minibossTimer = millis();
      }

      // Spawn a drone when the game starts, then a normal drone every 9 seconds.
      if (spawnTimer === 0 || millis() - spawnTimer > 9000) {
        drones.push(new Drone("normal"));
        spawnTimer = millis();
      }
    }

    // Warning: Final Boss warning when close to 1500 points
    if (score >= nextThreshold - 300 && score < nextThreshold) {
        drawWarning("CRITICAL SYSTEM BREACH DETECTED");
    }

    // Check for Final Boss trigger at 1500 points
    // Ensure the screen is actually clear of other drones before spawning
    if (score >= nextThreshold && drones.length === 0) {
        bossMode = true;
        lastBossMilestone = nextThreshold;
        let finalBoss = new Drone("boss");
        finalBoss.x = width / 2; // SPAWN CENTER
        drones.push(finalBoss);
    }
  }
}

When a miniboss or a boss is about to appear, red flashing lines appear on the screen to warn the user that it is inbound:

// Visual warning effect function
function drawWarning(msg) {
    push();
    let alpha = map(sin(frameCount * 0.2), -1, 1, 50, 200);
    fill(255, 0, 50, alpha);
    textFont(cyberFont);
    textAlign(CENTER, CENTER);
    textSize(width * 0.032);
    text(msg, width / 2, height / 2);
    
    // Glitch line effect
    stroke(255, 0, 50, alpha);
    line(0, random(height), width, random(height));
    pop();
}

Recognizer.js: This is open-source code I took that allows for symbol detection, as well as drawing and adding your own custom symbols. I edited the code slightly to delete every symbol I won't be using, so the detector doesn't waste time claiming a drawn symbol is something that isn't in the game, and I added two custom symbols, "W" and "S".

Score.js: This screen pops up after you die; it shows your final score and explains how to get back to the menu so you can play again.

Splash.js: This is where the game begins, and it allows for the initialization of everything: the game asks you to raise your hand and keep it raised while it "syncs" before moving to the onboarding screen.

Sprite.js: This file contains the code that handles the sprite sheets: splitting them up and animating them so they are used properly during gameplay.

// Slices a sheet into an array of images
function extractFrames(sheet, cols, rows) {
  let frames = [];
  let w = sheet.width / cols;
  let h = sheet.height / rows;

  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      let img = sheet.get(x * w, y * h, w, h);
      frames.push(img);
    }
  }
  return frames;
}

// Draws and cycles through the frames
function drawAnimatedSprite(category, action, x, y, w, h, speed = 0.15, startFrame = 0) {
  if (animations[category] && animations[category][action]) {
    let frames = animations[category][action];
    
    let index;
    if (action === "death") {
      // Calculate frames passed since death began
      let elapsed = frameCount - startFrame;
      index = min(floor(elapsed * speed), frames.length - 1);
    } else {
      index = floor(frameCount * speed) % frames.length;
    }
    
    push();
    imageMode(CENTER); 
    image(frames[index], x, y, w, h);
    pop();
  }
}

We provide the image and how many columns and rows it has; extractFrames slices the image along those columns and rows so that each frame is extracted. Once all the frames are extracted, we can start drawing them with our second function, which loops through the frames using the formula:

index = floor(frameCount * speed) % frames.length;

The formula for death is different: when a drone dies we want the animation to stop at the last frame, so we use min as a clamp that forces the index to stop at the last frame of the animation and stay there, preventing it from looping back to the beginning.
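Both branches reduce to one small pure function over frameCount (a sketch of the indexing logic above, separated from the p5.js drawing calls):

```javascript
// Looping index for idle animations; clamped index for the death animation.
function frameIndex(frameCount, speed, totalFrames, deathStartFrame = null) {
  if (deathStartFrame !== null) {
    const elapsed = frameCount - deathStartFrame;
    // min() clamps at the last frame so the death pose freezes instead of looping
    return Math.min(Math.floor(elapsed * speed), totalFrames - 1);
  }
  return Math.floor(frameCount * speed) % totalFrames; // wraps around forever
}
```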

With all these separated files, we get a pretty clean sketch.js file that comes in just under 100 lines.

function preload() {
  // Variable declared in handTracking.js
  handPose = ml5.handPose(() => {
    modelReady = true;
  });
  menubg = loadImage("assets/menu.jpeg");
  cyberFont = loadFont("assets/Cyberpunk.ttf");
  syncmusic = loadSound("assets/sync.mp3");
  game1music = loadSound("assets/game1.mp3");
  game2music = loadSound("assets/game2.mp3");
  breachedmusic = loadSound("assets/breach.mp3");
  mainmenumusic = loadSound("assets/mainmenusoundtrack.mp3");
  onboardingmusic = loadSound("assets/onboarding.mp3");
  sheets.normalIdle = loadImage("assets/mobidle.png");
  sheets.normaldeath = loadImage("assets/mobdeath.png");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  recognizer = new DollarRecognizer();
  gameplaymusic = [game1music, game2music];
  let constraints = {
    video: { width: 640, height: 480 },
    audio: false,
  };
  animations.normal = {
    idle: extractFrames(sheets.normalIdle, 4, 1),
    death: extractFrames(sheets.normaldeath, 6, 1)
  };
  video = createCapture(constraints);
  video.hide();

  handPose.detectStart(video, gotHands);
  textFont(cyberFont);
  
  for (let track of gameplaymusic) {
    track.setVolume(0.2); 
    track.playMode('untilDone'); 
  }
  

  if (state == "menu") {
    makeMenuButtons();
  }
}


function draw() {
  background(0);
  let { pointerX, pointerY, clicking, rawDist } = handTracking();

  if (state === "splash") {
    drawSplashScreen();
    if (hands.length > 0) drawHandIndicator(pointerX, pointerY, rawDist);
  } else if (state === "onboarding") {
    drawOnboarding();
  } else if (state === "menu") {
    menu();
    for (let btn of btns) {
      btn.update(pointerX, pointerY, clicking);
      btn.draw();
    }
  } else if (state === "play") {
    runGameplay(pointerX, pointerY, clicking);
  } else if (state === "gameover") {
    drawGameOver(pointerX, pointerY, clicking);
  } else if (state === "quit") {
    // Stop the sketch and quit
    remove();
  } else if (state === "options") {
    drawOptions(pointerX, pointerY, clicking);
  }

  if (hands.length > 0 && state !== "onboarding") {
    drawCursor(pointerX, pointerY);
  }
  
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  if (state == "menu") {
    makeMenuButtons();
  }
}

function mousePressed() {
  let fs = fullscreen();
  fullscreen(!fs);
}

I am pretty happy with how it turned out: all the interactions use only the camera, and I am happy with how the aesthetics of the game came out overall.

Reflection:

A lot of the errors I ran into stemmed from figuring out how to get both symbol recognition and smooth hand tracking, which I resolved using the open-source recognizer code for symbol recognition and Kalman filtering for smooth hand tracking.

Improvements that could be made: the general aesthetics of the game could be more detailed, and more game modes could be added for variety.

References:

Free Drones Asset Pack by Free Game Assets (GUI, Sprite, Tilesets)

Simple Kalman filter for tracking using OpenCV 2.2 [w/ code] – More Than Technical

Cyberpunk 2077 Neon Cityscape – 4K Ultra HD Cyberpunk Wallpaper

(1) Understand & Code a Kalman Filter [Part 1 Design] – YouTube

(1) Understand & Code a Kalman Filter [Part 2, Python] – Youtube

Recognizer

V – CyberPunk 2077 OST

PixaBay (Sound Effects)

Assignment 5 – Midterm Progress

Concept Demo Below!

 

Sketch Below (Press mouse button for full screen)

 

Concept:

The name of the game right now is Cyberpunk Breach (tentative), and as you can see in the demo above, I am going for a cyberpunk-themed game!

The gameplay is currently a work in progress; I have started on it, but there is no character sprite implementation as of yet. The concept is as follows:

I took inspiration from a game called Magic Touch, which is on the App Store. The gist of the game: you are a wizard, and you need to stop robots from attacking you. You do that by popping the balloons the robots hang from; each balloon has a specific glyph that you need to draw, and if you draw it correctly, the balloon containing that glyph will pop.

Now I have added my own twist. I am making it cyberpunk themed, with drones instead, and the biggest functional change is that this entire game uses neither your keyboard nor your mouse. It is entirely based on hand tracking: you use your hand to navigate the menus and play the game.

There are multiple issues with this that I have tackled or am in the process of tackling.

The problem with hand tracking in browsers is that it is really, really, REALLY latent and jittery. Latency is a hard problem to fix since it is a browser issue, but jitter I can fix. This is where Kalman filtering comes into play.

To explain the core concept:

The filtering has 3 steps:

– Predict

– Update

– Estimate

The Kalman filter works in a simple loop. First, it predicts what the system should look like next based on what it already knows. Then, it checks that prediction against a new (noisy) measurement and corrects itself.

Because of this, the Kalman filter has two main steps. The prediction step moves the current estimate forward in time and guesses how uncertain that estimate is. The correction step takes in a new measurement and uses it to adjust the prediction, giving a more accurate final estimate.

And finally, using a threshold, we can choose between using the estimated path, or the camera path.

Using this we can have pretty smooth hand tracking.

The next issue was recognizing gestures, and even adding my own custom gestures, which I solved using a library called the $1 Unistroke Recognizer.

Alternate sketch just to test out the library:

The library has built-in strokes, so if we try to draw a triangle, for example, the algorithm guesses what we drew along with how confident it is:

 

You can also add your own custom gestures:

The tracking and the gesture recognition were what I was worried about before I got started on this project.

For the final stages of the game:
I will need to work on the gameplay itself and the process of implementing all of this into the game.

Week 5 Reading – Computer Vision

Computer vision has always been something I've been interested in; I used it in my third assignment and I am currently using it in my midterm project. The article gave me answers to questions I have had while working with computer vision.

So far I have really only worked with hands, and it got me curious: how does the AI model what is a hand and what isn't, to the point that it can assign so many keypoints to a single hand, knowing where each fingertip is, the middle, the base, and so on? I know this article doesn't fully answer that, but it gave me an idea of what exactly computer vision is. To a computer with no inherent context, anything it "sees" is just a bunch of pixels with absolutely no relation whatsoever. It relies on mathematical calculations to build its own context for what is happening and what is what. But that is just an abstract definition; honestly, the techniques provided seem to only work in really specific cases, and the author says there is no computer vision algorithm that is "completely" general.

I am going to have to disagree with that on the basis that it is not specific enough. Hand detection algorithms seem to work in almost any environment: they can detect when a hand is on screen or not, even multiple hands. Now if we take a hand algorithm and complain that it won't detect some other object in any environment? Of course it won't. When we say general, we need some context for what general means! A lot of hand detection algorithms can be considered general at detecting hands, no matter the environment.

There is a tracking technique that I had to learn to improve my hand detection in the midterm project, called Kalman filtering. To briefly describe it: the algorithm tries to predict the location of what it is tracking in the next frame, compares that to what the location actually is, and, depending on a threshold we give it, the visualization of the tracking will follow either our predicted calculation or the camera's calculation. I found this algorithm quite intuitive in how it works, and I have noticed a considerable difference in my hand tracking since implementing it.

Honestly, computer vision's potential in interactive art is extremely untapped. I do not see many people implementing it besides a very few, and considering how accessible it is now, that is a shame. We can have true interaction with our artwork if we have the computer make decisions based on what it sees, giving us a new piece not just every time the program is run, but every time the background changes or the person does something.

Week 4 Reading – Don Norman

One of the reasons products fail in real life is over-engineering. I know this wasn't explicitly mentioned in the chapter, but it fits the description of designing a product that solves a simple problem in a complicated way. However, besides the engineer's perspective creeping into product design, there is another factor at play: control. Many don't realize it, but over-engineering is often done on purpose to control people. It's not that these designers don't understand how humans work; rather, they understand exactly how humans work. What do I mean by that?

Let us take printers, for example. Modern printers are so frustrating to work with. I absolutely hate dealing with them: you have to download their specific app, then press some button twice for five seconds or some nonsense to turn on Bluetooth mode, and honestly half the buttons on the printer you will never end up using in your life. But that's not all; you also need all cartridges filled to print anything.

Say you want to print a document in black and white and you don't have any colored ink cartridges: the printer won't let you print unless everything is filled. Not to mention you need the printer brand's specific cartridges, which are most probably overpriced. All this is done so that the customer keeps buying only their products for the printer. My venting aside, it is true that most of the time products are over-engineered because engineers don't take the perspective of the average Joe. For the midterm project specifically, I plan on implementing proper feedback and instructions so that the user feels in control the entire time and doesn't have to second-guess anything they do while playing the game.

Week 4 Assignment – 4 bit adding visualization

Sketch and demo video below!

Concept:

This took a while to make, but I am quite happy with how it turned out! My reference was a Reddit post I saw visualizing a 4-bit adder. I thought it would be a perfect idea for data visualization, going back to the roots of how a computer performs calculations. That idea gave birth to this p5.js digital version.

Implementation:

The core logic of this code is simply drawing paths and gates at the times I specified. I will show an example of what I mean.

The coordinates are drawn manually, in segments.

let b0_pA_to_X1 = [
  { x: 420, y: 630 },
  { x: 420, y: 600 },
  { x: 210, y: 600 },
  { x: 70, y: 600 },
  { x: 70, y: 450 },
];

let b0_pX2_to_sum = [
  { x: 165, y: 320 },
  { x: 165, y: 200 },
  { x: 810, y: 200 },
  { x: 810, y: 100 },
];

let b1_pX2_to_sum = [
  { x: 320, y: 320 },
  { x: 320, y: 250 },
  { x: 770, y: 250 },
  { x: 770, y: 100 },
];

By segments I mean the areas highlighted with a red circle: any part of the path that switches direction is a segment point I note down, and a drawPath function draws a straight line from one segment point to the next.

These are the paths we set, but how do we draw them? Once we click the start button, a variable keeps track of this, and when it is true we start drawing everything. Example:

if (timer > 1250 + 9500 * 0.5) {
  drawPath(b2_pA_to_An1, getP(1250 + 9500 * 0.5, 1500), cB2);
  PathStart = 1250 + 9500 * 0.5;
  PathDur = 1500;
  if (timer > PathStart + PathDur) {
    drawAND(
      917,
      550,
      50,
      40,
      cAnd,
      getP(PathStart, PathDur),
      startT + PathDur + PathStart
    );
    if (timer > PathStart + PathDur + 1000) {
      drawPath(b2_pAn1_to_Or, getP(PathStart + PathDur + 1000, 5000), cB2);
    }
  }
}

Here we check whether a certain time has passed before drawing the path of the 3rd bit that goes from the first point to the first AND gate. We use our drawPath function to draw it with the duration of 1.5 seconds that we specified. And since this is the last path into that gate, we draw the AND gate right after! Repeat this with every single path and gate and the sketch is done... (absolutely tedious).

Before moving on to the code I am proud of, one helper function is worth explaining.

The nToB function converts a number no into its binary representation with n+1 bits. It works by checking each bit from the most significant to the least significant:

– no >> i shifts the number i bits to the right, bringing the target bit to the least significant position.

– & 1 extracts that bit (0 or 1).

– b.push(…) adds the bit to the array.

The result is an array of 0s and 1s representing the binary digits of the number, with the most significant bit first.
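Following those bullet points, nToB can be written as below (a sketch reconstructed from the description; the project's exact signature may differ):

```javascript
// Convert `no` into its binary representation with n+1 bits, MSB first.
function nToB(no, n) {
  const b = [];
  for (let i = n; i >= 0; i--) {
    b.push((no >> i) & 1); // shift bit i down to position 0, then mask it out
  }
  return b;
}
```

For example, nToB(5, 3) yields [0, 1, 0, 1], the 4-bit binary representation of 5.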

I am quite proud of the function I wrote that draws the paths and here is why.

// We use a signal here to keep track of everything. Uses progress (pr) to interpolate coordinates between wire nodes
function drawPath(pts, pr, col) {
  push();
  drawingContext.shadowBlur = 15; // Creating neon glow through canvas shadow context
  drawingContext.shadowColor = color(col);
  stroke(col);
  strokeWeight(2);
  noFill();
  let segs = pts.length - 1;
  let cur = floor(pr * segs); // Identifies which wire segment the signal is on
  let sPr = (pr * segs) % 1; // Progress within that specific segment (0.0 - 1.0)

  // Draws all wire segments that the signal has already "charged"
  for (let i = 0; i < cur; i++)
    line(pts[i].x, pts[i].y, pts[i + 1].x, pts[i + 1].y);

  if (cur < segs) {
    // lerp() calculates the point between A and B based on sPr percentage
    let x = lerp(pts[cur].x, pts[cur + 1].x, sPr);
    let y = lerp(pts[cur].y, pts[cur + 1].y, sPr);
    line(pts[cur].x, pts[cur].y, x, y);
    fill(col);
    circle(x, y, 4); // The "head" pulse of the path flow
  }
  pop();
}

The function takes 3 parameters: the points (an array of segment objects), the current progress along the path, and the color of the path.

What do I mean by segment objects? An object in JavaScript can be written like {x: 400, y: 500}, where x and y are properties of the object and 400 and 500 are their values (quite similar to how dictionaries work in Python; if you want to read more on JavaScript objects, go here). Each path is an array of these segment objects. For the progress, a small function takes in the start time and the duration of the path and calculates the progress based on how much time has passed. Finally, the color simply takes a hex value. After that we define the look of the path and then get into the main logic of the function.
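The progress helper mentioned here is not shown in the post, but a plausible reconstruction looks like this (the real getP presumably reads a global timer; here the current time is passed in so the function is self-contained):

```javascript
// Fraction of a path completed at time `now` (ms), clamped to [0, 1],
// for a path that starts at `start` and lasts `dur` milliseconds.
function getP(now, start, dur) {
  return Math.min(Math.max((now - start) / dur, 0), 1);
}
```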

The function itself is pretty well commented so I will briefly explain how the logic goes.

We first check how many segments the path has, identify which segment we are currently on, and compute how far we have progressed through that specific segment (calculated from the duration given in the code). A loop draws every segment we have completed on screen, but how do we draw the incomplete segment to get that smooth animation? Simple! That is where linear interpolation comes into play (really look into this function if you are doing animations; it is splendid).

If our current segment is before the last segment, we draw the path of the current segment, but we don't want to draw the full thing if we have not reached its end yet, right? That is where we use the variable that tracks how much of the segment we have traversed. For example, if we have traveled 60% of the way through segment 2, we fully draw segment 1 and draw up to 60% of segment 2. We draw a circle at the current point to get that leading-circle animation along the path. And boom! That is how we get the smooth animation of the paths being drawn. Pretty nifty, no?
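The segment-walking math can be isolated from the p5.js drawing calls into a pure function (a sketch; lerp() is replaced by its definition so it runs anywhere):

```javascript
// Given polyline points and overall progress pr in [0, 1], return the
// coordinates of the glowing "head" travelling along the path.
function pathHead(pts, pr) {
  const segs = pts.length - 1;
  const cur = Math.min(Math.floor(pr * segs), segs - 1); // segment we are on
  const sPr = pr >= 1 ? 1 : (pr * segs) % 1;             // progress inside it
  const a = pts[cur];
  const b = pts[cur + 1];
  // Same math as p5's lerp(a, b, sPr), applied to each coordinate
  return { x: a.x + (b.x - a.x) * sPr, y: a.y + (b.y - a.y) * sPr };
}
```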

Anyway, besides that, the gates were simply a matter of messing around with bezier curves and a lot of trial and error until I got the shapes I wanted.

Reflection:

I am very happy with how it turned out; however, I wish to eventually remake this with more... automated logic, since drawing everything manually was a pain, and with more than 4 bits it would simply take too long. So if there is something I would want to improve, it is to lessen the manual aspect of writing this code and instead try to automate it; maybe random segment points within a certain boundary could work.

 

Week 3 Reading – Chris Crawford

The first thing that came to mind when the author talked about how the word interactivity is used loosely was AI. Funnily enough, it goes hand in hand with interactivity in a lot of tech products: you will see "INTERACTIVE AND AI!!" plastered all over the advertisements when the product really uses neither; they are just buzzwords to attract customers. I had a similar idea of what interactivity is: to me it is a spectrum; we never say something is simply interactive or not. We say things like "oh, this is really interactive" or "this is somewhat interactive," and so on. We have always treated it as a spectrum, knowingly or unknowingly.

The problem with classifying something as highly interactive is that it's all based on perspective. What I mean is: treat the spectrum of interactivity as a battery with a fixed maximum capacity. A fully charged battery would mean it's highly interactive. The object defines the battery size, while the person's perspective determines how much of that capacity is meaningful to them. There are objective traits that make something interactive, but how interactive it is will be subjective. For example, take the Nintendo fridge example with the kids and adults: the kids would tap into that battery supply and use most of it, while the adults would leave most of the battery unused.

Interactivity needs an initiator, a responder, and a balance between the two that keeps the interaction in a meaningful interpretive cycle. This is true and can be objectively stated; however, it is impossible to write a definition of what makes something highly or barely interactive, as this changes from person to person.

One thing this made me realize is that reactivity is not interactivity, and the sketches we call interactive are often just reactive to our inputs. To truly implement interactivity, I am thinking of making the program's inputs less obvious, meaning the same input might lead to different actions depending on the context, making the user think about what to do.

Week 3 Assignment – Computer Vision

Sketch and video demo below!

(I think you might have to open this in a new tab, as it doesn't seem like computer vision sketches are supported by WordPress…)

Concept:

Well, I have always been interested in learning and testing out computer vision, because I have seen some really cool effects; my main inspiration is an artist who goes by the.poet.engineer, and she has absolutely incredible computer vision work.
To make the tracking as smooth as I could, I used code from a GitHub post, which I will talk about later in this post. This took me a ridiculously long time to complete since I started learning computer vision the same day I made this, but the end result was worth it.

Implementation:

There is a lot that goes into this, 500+ lines of code. Everything is well commented if you would like to go line by line; here I will explain the main concepts behind it.

A bit of background: this was done using ml5.js's HandPose, whose hand tracker provides specific keypoints for parts of your hand, which I used to calculate distances, positions, rotations, and so on for all the shapes in this sketch.

I did not use every connection shown in the picture above; I manually selected the keypoints that I wanted to use in these 2 variables:

let fingerconnections = [
  [17, 18],
  [18, 19],
  [19, 20],
  [13, 14],
  [14, 15],
  [15, 16],
  [9, 10],
  [10, 11],
  [11, 12],
  [5, 6],
  [6, 7],
  [7, 8],
  [0, 1],
  [1, 2],
  [2, 3],
  [3, 4],
];
// Using ml5js classification I make a list of the connections I want for a curved line
let singlefinger = [
  [4, 5],
  [17, 16],
  [13, 12],
  [9, 8],
];

The way this works is that every hand seen on screen is added to artInstances. I manually made just 2, but this could be scaled and automated for more than 2 hands… (alien?)

Each hand is assigned an instance of our skeletal class, which is where quite literally everything happens.

// Function for smoothing, uses linear interpolation to shorten how much we want the detector to move, since I used 0.8 for smoothing that means it only moves 80% of the way from original location to next location.
function gotHands(results) {
  if (activateSmoothing && hands.length > 0 && results.length > 0) {
    for (let i = 0; i < results.length; i++) {
      if (hands[i] && results[i]) {
        let oldHand = hands[i];
        let newHand = results[i];
        for (let j = 0; j < newHand.keypoints.length; j++) {
          newHand.keypoints[j].x = lerp(
            oldHand.keypoints[j].x,
            newHand.keypoints[j].x,
            smoothing
          );
          newHand.keypoints[j].y = lerp(
            oldHand.keypoints[j].y,
            newHand.keypoints[j].y,
            smoothing
          );
        }
      }
    }
  }
  hands = results;
}

This is the smoothing function that I took, which I tried to simplify in the comments. It is simply linear interpolation, so the distance the drawn skeleton moves each frame is only a fraction of the distance the detection actually jumped.

for (let i = 0; i < hands.length; i++) {
    if (artInstances[i]) {
      // Calling our inbuilt functions for every hand we have on screen
      artInstances[i].drawline(hands[i], singlefinger);
      artInstances[i].drawdoubleline(hands[i], fingerconnections);
      artInstances[i].drawcircle(hands[i], fingerconnections);
      artInstances[i].drawHUD(hands[i]);
      artInstances[i].drawSpeedHUD(hands[i]);
    }
  }
  
  drawMusicButton();
}

There are 6 methods in our skeletal class:
– drawline
– drawdoubleline
– drawcircle
– drawHUD
– drawSpeedHUD
– drawPopupHUD (this is used inside drawHUD, so you won't see it called outside the class)

class skeletal {
  constructor(linecolor, circlecolor, hudcolor) {
    // We take the color we want for the line, circle and the hud, as well as initialize the angle and last angle to 0
    this.linecolor = linecolor;
    this.circlecolor = circlecolor;
    this.hudcolor = hudcolor;
    this.hud1angle = 0;
    this.rotationSpeed = 0;
    this.lasthandangle = 0;
    this.popupActive = false;
    this.popupTimer = 0;
  }

There are 8 attributes assigned to each hand. Only the first 3 are ones you choose; the rest are set to 0 / false by default, as they will be changed and used in the calculations inside our methods.

Before I get into the function I am most proud of, I will briefly talk about what each function does.

drawdoubleline: There was no inbuilt shape that satisfied what I wanted, so I ended up drawing my own using beginShape and endShape to get that glitchy double line between most of the connections.
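As a rough illustration of the idea (not the sketch's exact code), the parallel "glitch" line can be produced by shifting both endpoints perpendicular to the segment; offsetPoints below is pure math, and the 4px gap is an assumed value:

```javascript
// Illustrative sketch of the "glitchy double line" idea, assuming p5.js.
// offsetPoints shifts both endpoints perpendicular to the segment by
// `gap` pixels, giving the second, parallel line.
function offsetPoints(ax, ay, bx, by, gap) {
  let dx = bx - ax;
  let dy = by - ay;
  let len = Math.sqrt(dx * dx + dy * dy) || 1;
  let nx = -dy / len; // unit normal to the segment
  let ny = dx / len;
  return [ax + nx * gap, ay + ny * gap, bx + nx * gap, by + ny * gap];
}

// Inside the method, the drawing part might look like this (a and b are
// keypoint objects with x/y; the 4px gap is an assumption):
function drawDoubleLineSketch(a, b) {
  let [ox1, oy1, ox2, oy2] = offsetPoints(a.x, a.y, b.x, b.y, 4);
  beginShape(LINES);
  vertex(a.x, a.y);
  vertex(b.x, b.y); // main line
  vertex(ox1, oy1);
  vertex(ox2, oy2); // parallel "glitch" line
  endShape();
}
```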

drawline: I used the bezier function here (calculating anchor points 1/3 and 2/3 of the way from keypoint A to keypoint B), and this connects the base of each finger to the tip of the next finger with a curved line.
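A minimal sketch of that construction, assuming the keypoints are `{x, y}` objects; the 20px lift on the control points is an illustrative value, not the sketch's exact one:

```javascript
// lerpPoint is a stand-in for p5's lerp() applied per coordinate.
function lerpPoint(a, b, t) {
  return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
}

function drawCurvedConnection(a, b) {
  // Control points at 1/3 and 2/3 of the way from A to B
  let c1 = lerpPoint(a, b, 1 / 3);
  let c2 = lerpPoint(a, b, 2 / 3);
  // Lifting the control points off the segment bows the curve; the 20px
  // lift here is an assumed value for illustration.
  noFill();
  bezier(a.x, a.y, c1.x, c1.y - 20, c2.x, c2.y - 20, b.x, b.y);
}
```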

drawHUD: This has 2 elements, the palm HUD and the exterior HUD. The palm HUD is centered on the midpoint between the base of the middle finger (keypoint 9) and the wrist (keypoint 0), and the shapes of the HUD are drawn around that point. We use the map function so that the HUD scales with your hand: bigger when your hand is closer to the camera, smaller when it is further away. This has constraints, so the HUD can never be too small or too big.
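A hedged sketch of that placement math, assuming ml5.js HandPose keypoint 0 (wrist) and keypoint 9 (base of the middle finger) as `{x, y}` objects; the 40 to 160 palm-length range and 30 to 90 radius range are assumed values, not the sketch's exact ones:

```javascript
function palmHudParams(wrist, palmBase) {
  // HUD center = midpoint of wrist and finger base
  let cx = (wrist.x + palmBase.x) / 2;
  let cy = (wrist.y + palmBase.y) / 2;
  // On-screen palm length doubles as a depth cue
  let palmLen = Math.hypot(palmBase.x - wrist.x, palmBase.y - wrist.y);
  // Map palm length to a HUD radius, clamped like p5's map(..., true)
  let t = Math.min(Math.max((palmLen - 40) / (160 - 40), 0), 1);
  let radius = 30 + t * (90 - 30);
  return { cx: cx, cy: cy, radius: radius };
}
```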

drawPopupHUD: This is the exterior HUD, a circle with sound waves along a quarter of it. It only pops up if the hand rotates at a fast enough speed, then disappears after a short while. I used Perlin noise to give the waves that organic, random-sound feel.
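The wave bars can be thought of as short radial lines whose endpoints are pure trigonometry; the sketch below separates that math (`waveBar`, a hypothetical helper, not the sketch's actual code) from the p5.js `noise()` call that would animate the bar lengths:

```javascript
// Endpoints of one sound-wave bar sitting on a circle of radius r around
// (cx, cy), pointing outward by `len` pixels.
function waveBar(cx, cy, r, angle, len) {
  return {
    x1: cx + Math.cos(angle) * r,
    y1: cy + Math.sin(angle) * r,
    x2: cx + Math.cos(angle) * (r + len),
    y2: cy + Math.sin(angle) * (r + len),
  };
}

// In p5.js draw code this might be used like (all values illustrative):
//   for (let i = 0; i < 20; i++) {
//     let a = map(i, 0, 19, 0, HALF_PI); // spread bars over a quarter circle
//     let len = map(noise(i * 0.3, frameCount * 0.05), 0, 1, 4, 18);
//     let b = waveBar(cx, cy, r, a, len);
//     line(b.x1, b.y1, b.x2, b.y2);
//   }
```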

drawcircle: This simply draws a small circle at each keypoint that I saved in the beginning of the program.

Finally the function I am proud of:

drawSpeedHUD(singleHandData) {
    if (!singleHandData) return;

    // We map the keypoints once again to their respective parts.
    let thumbTip = singleHandData.keypoints[4];
    let indexTip = singleHandData.keypoints[8];
    let wrist = singleHandData.keypoints[0];
    let palmBase = singleHandData.keypoints[9];

    if (!thumbTip || !indexTip || !wrist || !palmBase) return;

    // We calculate the palm size and the gap between our thumb and index finger
    let palmSize = dist(wrist.x, wrist.y, palmBase.x, palmBase.y);
    let fingerGap = dist(thumbTip.x, thumbTip.y, indexTip.x, indexTip.y);

    // Check if it is left or right hand to adjust rotation accordingly
    let isLeftHand = singleHandData.handedness === "Left";

    // Calculate the angle of the finger for rotation of the speedometer
    let fingerAngle = atan2(indexTip.y - thumbTip.y, indexTip.x - thumbTip.x);
    // Calculate the rotation and adjust based on left or right hand
    let sideRotation = isLeftHand ? fingerAngle + PI : fingerAngle;

    // Conditions for when the speedometer pops up
    let closeToCamera = palmSize > 80;
    let fingersSpread = fingerGap > 0 && fingerGap < 140;

    if (closeToCamera && fingersSpread) {
      // Map the finger gap to a speed reading: pinched (gap 0) reads 16, fully spread (gap 130) reads 50.
      let speed = map(fingerGap, 0, 130, 16, 50, true);
      let needleAngle = map(speed, 0, 140, PI, -PI);

      let midX = (thumbTip.x + indexTip.x) / 2;
      let midY = (thumbTip.y + indexTip.y) / 2;

      push();
      translate(-width / 2 + midX, -height / 2 + midY);

      // Rotate with our hand
      rotate(sideRotation);

      noFill();
      stroke(this.hudcolor);
      strokeWeight(2);
      // Draw the speedometer arc and ticks.
      arc(0, 0, 110, 110, PI + QUARTER_PI, TWO_PI - QUARTER_PI);

      for (let i = 0; i < 10; i++) {
        let a = map(i, 0, 9, PI + QUARTER_PI, TWO_PI - QUARTER_PI);
        line(cos(a) * 45, sin(a) * 45, cos(a) * 55, sin(a) * 55);
      }

      // Draw the needle
      push();
      rotate(needleAngle - HALF_PI);
      stroke(255, 50, 50);
      strokeWeight(3);
      line(0, 0, 0, -50);
      pop();

      // Draw the speed bar which changes with speed and needle.
      strokeWeight(1);
      rectMode(CENTER);
      noFill();
      rect(0, 25, 60, 5);
      fill(this.hudcolor);
      rectMode(CORNER);
      // Using your speed mapping to fill the 60px wide bar
      rect(-30, 22.5, map(speed, 16, 50, 0, 60, true), 5);

      pop();
    }
  }

This is the code for the interactive aspect of this computer vision sketch: the speedometer that moves as you pinch your thumb and index finger. The speedometer only pops up if your hand is close enough to the camera and the pinch isn't too wide, and we use the map function to spin the needle and fill the progress bar. The speedometer also rotates with your hand, so it rarely clips through it. This took so much trial and error: checking which angle works, the needle going under the speedometer or pointing completely opposite of where I wanted it, the speedometer rotating into the hand and clipping through it. Overall it was a painful process of trial and error.

A small addition is some background music that you can enable or disable. The song is "Color Your Night," part of the Persona soundtrack, and it is an amazing song.

Reflection:

It was a painful but incredibly fun experience. I am really happy I challenged myself and dived into computer vision, because if I hadn't done it now I might never have, and I am really happy with the end result. There are endless possibilities for interactive art with computer vision, such as animating characters or showcasing fluid dynamics, and so much more that I would like to explore beyond a simple futuristic interactive HUD.

Week 2 Casey Reas Reading

What is randomness? The way people describe it will always differ from one person to another, yet it boils down to the same general boilerplate phrase: "it is unpredictable." We say it is, but at the same time, randomness can be predicted over a large sample. We say "random" often in day-to-day life, intentionally or unintentionally, like "oh! this randomly happened today!" or "I just thought to do this today randomly!" It never really is random, is it? There is always something behind an event. This brings me to something Casey Reas said, and that is true randomness. He mentions it multiple times, saying that it has been used; however, at most it is pseudo-random. We could delve deeper into this and discuss causation and so on, but that is not my point here. I am aware that I might be nitpicking; however, to me, "random" implies a lack of a proper explanation relative to a model or an event. Or to simplify it, it's lawful unpredictability.

For my second assignment, my sketch was a 3D version of Conway's Game of Life, which, funnily enough, is about how cells interact when they are given a certain set of rules. I created this before watching the lecture, so while I was watching it, I kept thinking about the sketch. The result of running such a sketch may seem random to a person you pick off the street, but at a deep level the cells are following a set of rules that the person may not know. And that is what randomness is to me: a "random" event is never truly random, as it always follows a set of rules; we call it random because we do not know what those rules are. The lecture made me realize that I was already treating randomness as epistemic, but I hadn't articulated it this clearly until seeing generative systems framed this way.

Week 2 Assignment – 3D Conway game of life

Conway game of life 3D sketch below!

Concept:

The inspiration for this is Conway's Game of Life. Computer-generated art is an umbrella term, so I started by searching up old computer-generated art from the 50's and 60's, and there were some cool concepts, like using computer parts to make some sort of art out of them, but it wasn't exactly what I was looking for. Then it hit me: Conway's Game of Life, but not just 2D. That would not work for me; I had to make it 3D, and that's how this sketch started.

I had to tweak the rules a bit to give this more of a generative-art feel, as well as pre-determining the spawn positions for the starting cells.

Implementation:

Before we get into the technical implementation, let me cover the theory and the rules the sketch runs by.

Of course, for 3D we had to use WebGL to get our 3rd axis (z), and also use orbitControl to let us move around the scene.

createCanvas(400, 400, WEBGL);
// Start with a view of the entire resolution
camera(0, 400, 4900, 0, 0, 0, 0, 1, 0);

orbitControl();

A dead cell becomes alive only if it has exactly 3 or 6 neighbours. (Neighbours are the living cells adjacent to the cell we are checking.)

A living cell stays alive only if it has exactly 5 or 6 neighbours; otherwise it dies.
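The two rules above, written as a small pure function (a sketch of the logic, not the sketch's exact code):

```javascript
// state is 1 (alive) or 0 (dead); n is the number of living neighbours.
function nextState(state, n) {
  if (state === 1) {
    // A living cell survives only with exactly 5 or 6 neighbours
    return n === 5 || n === 6 ? 1 : 0;
  }
  // A dead cell is born only with exactly 3 or 6 neighbours
  return n === 3 || n === 6 ? 1 : 0;
}
```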

For the starting spawn, I first tried giving each cell a random 2% chance of becoming alive on initialization. It would work, but sometimes the cells would all die, or I would not get a good-looking design, so I decided to spawn the cells at each corner instead.

// Create the arrays holding every cell's state
  for (let x = 0; x < res; x++) {
    grid[x] = [];
    next[x] = [];
    for (let y = 0; y < res; y++) {
      grid[x][y] = [];
      next[x][y] = [];
      for (let z = 0; z < res; z++) {
        // Seed with 3x3x3 clusters at corners
        let inCorner1 = x < 3 && y < 3 && z < 3;
        let inCorner2 = x >= res - 3 && y < 3 && z < 3;
        let inCorner3 = x < 3 && y >= res - 3 && z < 3;
        let inCorner4 = x >= res - 3 && y >= res - 3 && z < 3;
        let inCorner5 = x < 3 && y < 3 && z >= res - 3;
        let inCorner6 = x >= res - 3 && y < 3 && z >= res - 3;
        let inCorner7 = x < 3 && y >= res - 3 && z >= res - 3;
        let inCorner8 = x >= res - 3 && y >= res - 3 && z >= res - 3;

        grid[x][y][z] =
          inCorner1 ||
          inCorner2 ||
          inCorner3 ||
          inCorner4 ||
          inCorner5 ||
          inCorner6 ||
          inCorner7 ||
          inCorner8
            ? 1
            : 0;
        next[x][y][z] = 0;
      }
    }
  }

To check the neighbours of each cell, we use a triple nested loop with offsets from -1 to 1, checking behind, center, and forward along each axis.

A couple of things to note: I run the rule-checking code once every 30 frames, so that the cells don't update too fast and we can actually see what is happening and enjoy the chaos.

A couple of things I am proud of are my optimization and my coordinate calculation.

Context: there are 2 grids in use, our "present" grid and the "next" grid. To not confuse the computer, we apply the rules to the present grid but store the results in the next grid. Originally, to switch between the two, I would turn grid into a long JSON string and then parse it to assign it to next, but that meant thousands of operations. What I figured out instead was actually something from C++ and dealing with pointers: simply swapping the names. For context, in JavaScript you can't just assign one grid to the other, because then both names would reference the same data, and affecting one grid would affect the other, which defeats the purpose of having 2 grids.

let temp = grid;
grid = next;
next = temp;

These 3 lines may not seem like much, but they save our computers from doing thousands of operations every 30 frames, reducing them to a few reference assignments.

Now, I had to limit the cells to a specific area of our canvas, otherwise the calculations would become too heavy, and WordPress would not be able to handle them.

let cellSize = 50;
let res = 20;

Resolution is basically how many cubes we want; in this case a 20 by 20 by 20 volume, which is 8000 cubes, and 8000 checks every 30 frames. Any more and the browser would slow down tremendously, so 20 was the sweet spot.

for (let x = 0; x < res; x++) {
    for (let y = 0; y < res; y++) {
      for (let z = 0; z < res; z++) {
        // Check if the cell is alive
        if (grid[x][y][z] === 1) {
          // Calculate the position of the cell in the canvas using the res and size as reference.
          let xPos = x * cellSize - (res * cellSize) / 2;
          let yPos = y * cellSize - (res * cellSize) / 2;
          let zPos = z * cellSize - (res * cellSize) / 2;
          push();
          translate(xPos, yPos, zPos);

          // Map scales our rgb colors based on the location so the cube looks like a spectrum.
          let r = map(x, 0, res, 0, 255);
          let g = map(y, 0, res, 0, 255);
          let b = map(z, 0, res, 0, 255);
          fill(r, g, b);
          stroke(0, 50);
          box(cellSize);
          // Take the pointer back to 0,0,0
          pop();
        }
      }
    }
  }

Our resolution's coordinate system is different from the canvas coordinate system, so with a little bit of math we take our raw x, y, z indices and convert them into canvas coordinates, giving us a properly bounded area.

Finally, to get that beautiful color spectrum, we use the map function to scale the resolution indices to RGB values; for example, at x = 10 (halfway into the resolution) the r value would be about 127 (half of 255).

Reflection and Improvements:

Honestly, I am particularly happy with how this turned out. I thought it would be quite difficult to implement, but it turned out a lot better and easier than I expected, and I am happy that I went through with it.

A couple of ways I would think about improving it: adding a gradient background color, and maybe implementing more shapes for the cells rather than simply cubes.

Week 1 – Self Portrait

Concept:

I dove into the assignment blind, with no initial sketches or anything of the sort. Instead of conveying an idea through the portrait beyond simply portraying myself, I thought it would be fun to try to implement other features of p5.js into it, which led to a color-changing shirt, moving clouds, and an automatic blinking animation!

Implementation:

Honestly, I was not sure how to make my curly hair. At first I thought I could use the curve() function of p5.js to maybe simulate "curl" layers over a circle layer on the head; however, it did not turn out realistic whatsoever. So my next step was to add a rectangular layer behind the head (plus a small rectangle on top of the forehead so that it looks like the hair is covering it). I then added a bunch of circles centered along the hair layer so that it looks like curls on top of my head. There was an emoji I used as a reference, actually, and it was this: 👨‍🦱. Not exactly the same, but I took the top part of the hair as reference.

// hair
fill(0);
noStroke();
rect(210, 50, 180, 80);
circle(220, 60, 30);
circle(220, 80, 30);
circle(220, 100, 30);
circle(220, 120, 30);
circle(240, 50, 30);
circle(260, 50, 30);
circle(280, 50, 30);
circle(300, 50, 30);
circle(320, 50, 30);
circle(340, 50, 30);
circle(360, 50, 30);
circle(380, 60, 30);
circle(380, 80, 30);
circle(380, 100, 30);
circle(380, 120, 30);

// face
noStroke();
fill(223, 170, 139);
ellipse(300, 150, 175, 200);

// hair over face
fill(0);
rect(236, 50, 125, 20);

Next is the automatic blinking. There are 3 variables used in this process: lastBlink, blinkInterval, and the boolean blinking. The logic works like this: the built-in function millis() keeps track of how much time has passed since the sketch started running. We subtract lastBlink from it and check whether the result is greater than blinkInterval, which I set to 3 seconds. When the sketch starts running, lastBlink is 0, so when millis() reaches 3001 milliseconds we get 3001 - 0, which is greater than 3000, meaning it is time for the character to blink. This sets blinking to true, which "disables" the eye and pupil code, giving the illusion of blinking. However, we want the character's eyes to open up after a bit, so we use setTimeout(), which waits a set time before executing a command; here we wait 300 ms / 0.3 seconds before setting blinking back to false and opening the character's eyes.

let lastBlink = 0;
let blinkInterval = 3000; 
let blinking = false;

  // Check if time to blink
  if (millis() - lastBlink > blinkInterval) {
    blinking = true;
    setTimeout(() => blinking = false, 300); 
    lastBlink = millis();
  }

  // eyes and pupils
  stroke(223, 170, 139);
  strokeWeight(2);
  if (!blinking) {
    fill(255);
    ellipse(260, 125, 50, 40);
    ellipse(335, 125, 50, 40);
    noStroke();
    fill(0);
    circle(260, 125, 15);
    circle(335, 125, 15);
  }

The color-changing shirt logic is pretty simple. For a smooth transition I use sin on our shirtcolor variable. sin goes between -1 and 1, but RGB takes 0 to 255, so we multiply by 127 to get -127 to 127 and then add 128 to get a range of roughly 0 to 255. For r we just use sin(shirtcolor); for g and b we offset the sin by TWO_PI / 3 and 2 * TWO_PI / 3 respectively; and finally we add 0.03 to shirtcolor every time draw runs.

let shirtcolor = 0;
  // shirt color changing
  let r = 128 + 127 * sin(shirtcolor);
  let g = 128 + 127 * sin(shirtcolor + TWO_PI / 3);
  let b = 128 + 127 * sin(shirtcolor + 2 * TWO_PI / 3);
  shirtcolor += 0.03;

  // shirt
  noStroke();
  fill(r, g, b);
  rect(200, 290, 200, 250);

The final thing worth mentioning is the cloud movement in the background. I use a function that takes in x, y, and s (where s defaults to 1 unless specified otherwise) and draws a cloud as an ellipse with those parameters. I initialize 14 different "clouds" with different x, y, and s values in an array. To draw and move the clouds, I loop through the cloud array and call drawCloud for each one. To move a cloud along the background, I add 1 to its x value each frame, and to make the clouds bob up and down I add sin(shirtcolor) to its y value; since shirtcolor constantly changes, it's a convenient variable to reuse here. Finally, to make the clouds wrap around, we check whether each cloud's x value has gone 25 pixels past the width of the canvas; if it has, we set its x value to -50! This gives the illusion of clouds wrapping around without them "disappearing" from one side and "appearing" at the other.

let clouds = [];
clouds = [
    {x: 80, y: 80, s: 1},
    {x: 200, y: 60, s: 1.3},
    {x: 350, y: 90, s: 0.9},
    {x: 500, y: 70, s: 1.2},
    {x: 120, y: 140, s: 0.8},
    {x: 300, y: 160, s: 1.1},
    {x: 450, y: 140, s: 0.7},
    {x: 80, y: 200, s: 1},
    {x: 200, y: 260, s: 1.3},
    {x: 350, y: 290, s: 0.9},
    {x: 500, y: 270, s: 1.2},
    {x: 120, y: 340, s: 0.8},
    {x: 300, y: 460, s: 1.1},
    {x: 450, y: 340, s: 0.7},
  ];

  // draw and move clouds
  for (let c of clouds) {
    drawCloud(c.x, c.y, c.s);
    c.x += 1;
    c.y += sin(shirtcolor);
    if (c.x > width+25) c.x = -50;
  }


function drawCloud(x, y, s = 1) {
  stroke(255);
  strokeWeight(1);
  fill(255);
  ellipse(x, y, 50 * s, 24 * s);
}

Overall code:

let shirtcolor = 0;
let clouds = [];
let lastBlink = 0;
let blinkInterval = 3000; 
let blinking = false;

function setup() {
  createCanvas(600, 550);

  // initialize clouds
  clouds = [
    {x: 80, y: 80, s: 1},
    {x: 200, y: 60, s: 1.3},
    {x: 350, y: 90, s: 0.9},
    {x: 500, y: 70, s: 1.2},
    {x: 120, y: 140, s: 0.8},
    {x: 300, y: 160, s: 1.1},
    {x: 450, y: 140, s: 0.7},
    {x: 80, y: 200, s: 1},
    {x: 200, y: 260, s: 1.3},
    {x: 350, y: 290, s: 0.9},
    {x: 500, y: 270, s: 1.2},
    {x: 120, y: 340, s: 0.8},
    {x: 300, y: 460, s: 1.1},
    {x: 450, y: 340, s: 0.7},
  ];
}

function draw() {

  // Check if time to blink
  if (millis() - lastBlink > blinkInterval) {
    blinking = true;
    setTimeout(() => blinking = false, 300); 
    lastBlink = millis();
  }
  // shirt color changing
  let r = 128 + 127 * sin(shirtcolor);
  let g = 128 + 127 * sin(shirtcolor + TWO_PI / 3);
  let b = 128 + 127 * sin(shirtcolor + 2 * TWO_PI / 3);
  shirtcolor += 0.03;

  // background
  background(178, 237, 232);

  fill(0);
  strokeWeight(1);
  text(`${mouseX}, ${mouseY}`, 20, 20);

  // draw and move clouds
  for (let c of clouds) {
    drawCloud(c.x, c.y, c.s);
    c.x += 1;
    c.y += sin(shirtcolor);
    if (c.x > width+25) c.x = -50;
  }

  // hair
  fill(0);
  noStroke();
  rect(210, 50, 180, 80);
  circle(220, 60, 30);
  circle(220, 80, 30);
  circle(220, 100, 30);
  circle(220, 120, 30);
  circle(240, 50, 30);
  circle(260, 50, 30);
  circle(280, 50, 30);
  circle(300, 50, 30);
  circle(320, 50, 30);
  circle(340, 50, 30);
  circle(360, 50, 30);
  circle(380, 60, 30);
  circle(380, 80, 30);
  circle(380, 100, 30);
  circle(380, 120, 30);

  // face
  noStroke();
  fill(223, 170, 139);
  ellipse(300, 150, 175, 200);

  // hair over face
  fill(0);
  rect(236, 50, 125, 20);

  // jaw and neck
  noStroke();
  fill(223, 170, 139);
  quad(224, 200, 377, 200, 340, 250, 260, 250);
  rect(260, 250, 80, 40);

  // shirt
  noStroke();
  fill(r, g, b);
  rect(200, 290, 200, 250);

  // eyebrows
  fill(0);
  stroke(0);
  noFill();
  strokeWeight(10);
  arc(265, 100, 25, 7, PI, -0.2);
  arc(332, 100, 25, 7, PI, -0.2);

  // eyes and pupils
  stroke(223, 170, 139);
  strokeWeight(2);
  if (!blinking) {
    fill(255);
    ellipse(260, 125, 50, 40);
    ellipse(335, 125, 50, 40);
    noStroke();
    fill(0);
    circle(260, 125, 15);
    circle(335, 125, 15);
  }

  // nose
  stroke(194, 132, 103);
  noFill();
  line(297.5, 150, 295, 170);
  line(302.5, 150, 305, 170);
  arc(305, 170, 10, 10, -0.5, PI / 2);
  arc(295, 170, 10, 10, PI / 2, PI + 0.5);

  // mouth
  fill(180, 13, 61);
  noStroke();
  arc(300, 200, 50, 50, 0, PI);

  // glasses
  stroke(0);
  noFill();
  ellipse(260, 125, 40, 30);
  ellipse(335, 125, 40, 30);
  line(275, 115, 320, 115);
  line(355, 125, 385, 130);
  line(240, 125, 210, 130);

  // ears
  stroke(194, 132, 103);
  fill(194, 132, 103);
  arc(385, 145, 30, 30, -PI / 2, PI / 2);
  arc(214, 145, 30, 30, PI / 2, -PI / 2);
}

function drawCloud(x, y, s = 1) {
  stroke(255);
  strokeWeight(1);
  fill(255);
  ellipse(x, y, 50 * s, 24 * s);
}

Reflection:

I am not a very artistic person, so honestly I did not know how this would go; I did not know how I'd draw myself using just 2D shapes, and to be honest I did struggle, especially with the hair. However, I am pretty happy with how it turned out. In the future I am hoping to make more realistic animations; for example, for the blinking animation, the eye slowly closing and opening would be a cool thing to experiment with and add. Maybe also a scene where my character is doing some sort of activity! I love wall climbing, so something where my character is climbing would be cool to do in the future.