Week 8: Unusual Switch

Concept:

Although very impractical in its current form, the concept of this switch was to tell the user whether their mixing spoon is making contact with the liquid inside the cup while they stir. The switch turns on when the spoon touches the liquid, completing the circuit.

Components:

The following components were used:

  • A spoon
  • A cup filled with an ionic solution (water and table salt)
  • 4x wires
  • 330 Ohm Resistor
  • LED light
  • Metal straw (to mask the wire)

Demonstration:

Reflection

This was a very fun project, particularly because this was a no-coding project. Therefore, I really had to stretch my imagination and come up with an interesting switch. I think I was inspired to make the mixing switch because I was making coffee for friends that day.

Midterm Project: Soundscapes//Landscapes

Concept

Initially, I had planned on making a project with gravity manipulation as a core mechanic. However, I did not particularly like the ideas that I came up with. One particular idea was to create a voxel-based game where players could create objects that would fall to the ground. Upon impact, the land would be destroyed based on the momentum of impact. However, this proved to be difficult. I might attempt this for a future project, but the idea I settled on took root in this idea of a voxel-based landscape, where a voxel is a 3-dimensional pixel.

My idea was to implement a landscape constructed with voxels that the player could play around with. For ease of computation and aesthetic preferences, I decided to make the voxels change height instead of stacking voxels to create the terrain. Additionally, I gave the players the ability to change from 3D to 2D and vice versa.

Lastly, to complete the experience, I added sounds that vary based on the nature of the landscape. The user can control different attributes of the landscape, and experience what it sounds like.

Note: Please enter Fullscreen mode to view the controls.

P.S: Headphones are recommended!

Design

I really enjoy pixel art, which is why I wanted my landscape to be pixel-based instead of being a continuous plane. Some of my previous projects have had the same style, so I wanted to stick to something that I knew design-wise.

To move from 3D to 2D and vice versa, I built the landscape out of boxes whose heights vary according to Perlin noise. The noise depends on the spatial coordinates of each box in the space, as well as its height in the previous frame. The noise also depends on time, so the landscape evolves right in front of your eyes.

I particularly like the way I transition from the 2D view to the 3D view. The plane rotates as it grows and morphs into a landscape, which is a sleek effect and adds to the experience in my opinion.

The above video is an early sample from the project, where the project was supposed to be a landscape editor with physical interactions.

Code Highlights

Although I’m proud of many parts of the code, a few pieces add functionality that ties the entire project together.

For example, the following code changes the soundscape by calculating the percentage of each type of terrain on the current landscape and sets the volume of each corresponding sound based on how much of that terrain is present.

//set soundscape according to the composition of the landscape
if (timeEvolution) {
  playSounds();
  //set audio volume based on the number of tiles of each type
  waterSound.setVolume(
    map(landscape.waterCount, 0, landscape.totalGridPoints, 0, 0.2)
  );
  sandSound.setVolume(
    map(landscape.sandCount, 0, landscape.totalGridPoints, 0, 0.2)
  );
  landSound.setVolume(
    map(landscape.landCount, 0, landscape.totalGridPoints, 0, 0.2)
  );
  mountainSound.setVolume(
    map(landscape.mountainDirtCount, 0, landscape.totalGridPoints, 0, 0.6)
  );
  mountainRockSound.setVolume(
    map(landscape.mountainRockCount, 0, landscape.totalGridPoints, 0, 0.1)
  );
  snowSound.setVolume(
    map(landscape.snowCount, 0, landscape.totalGridPoints, 0, 0.2)
  );
} else {
  stopSounds();
}

Despite it being a very simple use of the map() function, the depth this code snippet creates for the overall piece is amazing, giving the whole experience a much-needed atmospheric vibe.

Project Design — The Good

The land is created via a Landscape object, a container that manages multiple Tile objects. Each Tile is represented as a box with a particular height. The height is determined using Perlin noise, which takes in the spatial coordinates of each tile as well as the point in time and spits out a number between 0 and 1. This number is then mapped between a minimum and maximum height scale, and so we get a landscape comprised of multiple tiles that vary in height.
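To illustrate the idea (a minimal sketch; the helper names, noise stand-in, and height range are mine, not the project's actual code), a tile's height can be computed from a noise value like this:

```javascript
// Stand-ins for p5's map() and noise(); any noise function returning [0, 1] works.
const map = (v, a, b, c, d) => c + ((v - a) / (b - a)) * (d - c);
const noise = (x, y, t) =>
  Math.sin(x * 12.9898 + y * 78.233 + t * 3.1) * 0.5 + 0.5; // hypothetical stand-in

const MIN_HEIGHT = 5; // illustrative height scale
const MAX_HEIGHT = 120;

// Nearby tiles (and nearby moments in time) get similar noise values,
// so the terrain comes out smooth and evolves gradually.
function tileHeight(col, row, t, scale = 0.1) {
  const n = noise(col * scale, row * scale, t);
  return map(n, 0, 1, MIN_HEIGHT, MAX_HEIGHT);
}
```

In the real sketch, p5's noise() would replace the stand-in, and each Tile would call something like this once per frame.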

Perlin noise was the perfect tool for generating the landscape. Landscapes are typically very smooth, with areas close to each other having similar elevation. Conveniently, Perlin noise outputs similar values for similar inputs, giving us some areas of higher elevation and others of lower elevation.

The last part that makes this project come together is using the calculated noise value for different things. With some thresholds, you can specify which heights should be water, which should be grass, and so on. With this, one can color the landscape differently based on elevation, which produces a natural-looking landscape.
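As a sketch of that thresholding logic (the cutoff values below are invented for illustration; the project's actual thresholds may differ):

```javascript
// Classify a noise value in [0, 1] into a terrain type (illustrative thresholds).
function terrainType(n) {
  if (n < 0.3) return "water";
  if (n < 0.4) return "sand";
  if (n < 0.6) return "land";
  if (n < 0.8) return "mountain";
  return "snow";
}
```

Each type can then get its own color, and, as in the snippet above, its own sound volume based on the tile counts.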

Also, I feel that making sliders the only controls was a good idea. Sliders are very intuitive, and one can figure out how they integrate with the project in no time. I wanted to have other controls in the beginning — like a brush that allows you to morph the canvas — but realized that those controls were just muddying the experience. Instead, I decided to focus on aesthetic choices like the sound design, which I believe added far more than the brush tool would have.

Project Design — The Not So Good

Working on this project was frustrating. Funnily enough, creating the landscape and everything related to that took no time. However, what I did not realize at that time was how difficult it would be to integrate 2D portions, like the menu and buttons, back into the project.

Firstly, I decided to use a library called EasyCam. It let me easily create a camera object that could be animated to shift smoothly from one perspective to another at the click of a button. However, as I started working on the 2D elements, I realized what I had gotten myself into.

It turns out you cannot really create 2D portions when using the WEBGL renderer. My first thought was to create multiple canvases and swap them when switching to the menu, but that does not work, as the renderer is still WEBGL, which is 3D.

Then I tried to experiment with something known as instance mode. Basically, everything that runs in a p5.js sketch is part of a larger p5 object. You can create multiple p5 objects to get namespaces, which are just separate scopes you specify when declaring each object. I did try that, and everything ran, but I was still confused about how to create the menu using instance mode. Thinking back, I realize that I should have stuck with that route, as it would have been much more elegant than the solution I came up with.
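For reference, the instance-mode pattern looks roughly like this (a generic sketch of the p5.js pattern, not my actual project code):

```javascript
// Each sketch is a function that receives its own p5 instance, so the
// 3D scene and the 2D menu live in separate scopes with separate renderers.
const sketch3D = (p) => {
  p.setup = () => p.createCanvas(600, 600, p.WEBGL);
  p.draw = () => {
    p.background(0);
    p.box(100);
  };
};

const sketchMenu = (p) => {
  p.setup = () => p.createCanvas(600, 600); // default 2D renderer
  p.draw = () => {
    p.background(255);
    p.text("Menu", 20, 20);
  };
};

// In the browser: new p5(sketch3D, "gameDiv"); new p5(sketchMenu, "menuDiv");
```

With this, the menu could have lived in a plain 2D sketch instead of inside the WEBGL world.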

In the end, I decided to use an idea that I learnt from one of Daniel Shiffman’s videos: turning the menu into a texture and applying it to a plane. Here is where the use of EasyCam proved to be a bad choice. I basically had two states, a game state and a menu state, with the latter showing either the instructions menu or the main game menu. Since the plane lives in a 3D world, the camera being in a different position when entering the menu means the menu plane is viewed from a different angle. To top it off, the mouse coordinates are skewed compared to the world coordinates, because the world is seen through the lens of the camera. All of this made it hard to implement a menu with the right orientation and buttons that were clickable.

However, I came up with a crude solution: resetting the camera every time the game state changes. This way, the camera returns to the neutral position when in the menu, and goes back to being movable in the game state.
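In p5.EasyCam terms, that reset could look something like this (getState() and setState(state, duration) are the library's documented methods; the helper function and state name are my own illustration):

```javascript
let initialState = null; // the neutral camera pose, captured once

function resetCameraFor(cam, gameState) {
  if (initialState === null) {
    initialState = cam.getState(); // remember where the camera started
  }
  if (gameState === "MENU") {
    cam.setState(initialState, 500); // animate back to neutral over 500 ms
  }
}
```

The duration argument is what makes the snap back to the menu feel like an intentional transition instead of a jump cut.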

Overall, I’m proud of the end result, but looking at the guts of the project I see that there are many aspects that can be improved. Still, it was a project that I spent many hours working on, and I’m glad I had fun along the way.

Resources

How to set up JavaScript namespace and classes properly? — Stack Overflow

createGraphics() p5.js Tutorial — The Coding Train

createGraphics() as WebGL Texture — The Coding Train

3D Terrain Generation with Perlin Noise in Processing — The Coding Train

How to Code Procedural Terrain with Perlin Noise (JavaScript & p5.js) — RachelfTech

Font Used

Overbyte


Midterm Progress

Concept

Initially, I had planned on making a project with gravity manipulation as a core mechanic. However, I did not particularly like the ideas that I came up with. One particular idea was to create a voxel-based game where players could create objects that would fall to the ground. Upon impact, the land would be destroyed based on the momentum of impact. However, this proved to be difficult. I might attempt this for a future project, but the idea I settled on took its roots in this idea of a voxel-based landscape, where a voxel is a 3-dimensional pixel.

My idea was to implement a landscape constructed with voxels that the player could play around with. Additionally, I wanted to give players the ability to change the view from 3D to 2D and vice versa. What I have so far is the project below:

I really enjoy pixel art, which is why I wanted my landscape to be pixel-based instead of being a continuous plane. Some of my previous projects have had the same style, so I wanted to stick to something that I knew design-wise.

I particularly like the way I transition from the 2D view to the 3D view. The 2D plane rotating as it grows and morphs into a 3D landscape gives the experience a sleek look.


Assignment 4: Crafting a Nighttime Scene with Text

Concept: Converting Names into Visual Forms

The main idea behind this project was to create imagery by using the names of objects to represent their form. I wanted to implement water, as I felt I could create a cool wave pattern by moving the letters. The final result was this sketch:

Process

I began by sketching out different ideas on combining text and imagery:

Water as text spilling from a point and filling up the screen, revealing “WATER” written in the background

In the end, I ended up liking this idea the most:

Sun drawn with the word “sun”, with rays coming out that also say “sun”

To implement this, I looked up functions that would allow me to manipulate text as objects. I found the textToPoints function, which converts text into an array of points that trace the shape of the text. With this array of points, I could use the beginShape function to create a closed shape of the text. Then, by altering the positions of the points, I could manipulate the shape of the text.

However, I found that converting the entire string into a shape at once didn’t give me a clean result. So, I decided to split the text into individual characters and convert each of those into a separate shape.
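The per-character approach can be sketched like this (wordToShapes is an illustrative helper of mine; textToPoints and textBounds are real p5.Font methods):

```javascript
// Split a word into one point array per character, advancing a cursor
// by each character's measured width.
function wordToShapes(font, word, x, y, size) {
  const shapes = [];
  let cursorX = x;
  for (const ch of word) {
    // one point set per character gives a much cleaner closed shape
    const pts = font.textToPoints(ch, cursorX, y, size, { sampleFactor: 0.25 });
    shapes.push(pts);
    cursorX += font.textBounds(ch, cursorX, y, size).w; // advance the cursor
  }
  return shapes;
}
```

Each entry in the returned array can then be drawn with beginShape()/vertex()/endShape(CLOSE).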

After adding some rippling effect and moving each letter like a wave, I ended up with the following result:

Since I went with an object-oriented approach, I could just replicate the water across a grid to generate a wave of water. I repeated the same steps to create a moon and added points that twinkled to add stars in the background.

Code Highlights

My favorite part of the code was creating the rippling and wave effect. It was so simple, but I believe it added amazing detail to the sketch.

//make the water ripple
let rippleFactor = map(this.textSize, MINFONTSIZE, MAXFONTSIZE, 0, 2);
y += noise(frameCount * 0.02 + j * 0.02) * rippleFactor;

//make the water wave
let waveHeight = map(this.textSize, MINFONTSIZE, MAXFONTSIZE, 1, 2);
y += waveHeight * sin(frameCount * 0.02 * (i + 1) + this.offset);

I added some noise and used the frameCount variable to make the letters move in a wave pattern. The crucial part was to apply these changes while drawing rather than modifying the points directly; otherwise, the offsets would accumulate frame after frame and push the letters out of place, and I would have to reset their positions each time. Applying the offsets at draw time was a simple way to get the effect.
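A minimal sketch of that draw-time displacement (the function and parameter names are illustrative; in the real sketch, noise() and frameCount come from p5.js):

```javascript
// Compute a displaced y for drawing, leaving the stored point untouched.
function displacedY(baseY, i, j, frame, noiseFn, rippleFactor, waveHeight, offset) {
  let y = baseY; // the stored point is only read, never written
  y += noiseFn(frame * 0.02 + j * 0.02) * rippleFactor; // ripple
  y += waveHeight * Math.sin(frame * 0.02 * (i + 1) + offset); // wave
  return y;
}
```

Because displacedY only reads baseY, the stored point never drifts, no matter how many frames pass.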

Reflection

For this project, I feel like I really pushed my creativity. Although my previous projects might have been more technical, I really enjoyed the planning and ideation of this project.

Sources

Reference | p5.js

Font used: Super Funky Font | dafont.com


Reading Reflection – Week 4

The world we live in is designed. We interact with many things in our day-to-day life, from doors to mugs, to the screen you’re reading this off of right now.

Great design often blends into our lives so well that we don’t realize how well things are designed. Take an ordinary pair of scissors. When you pick one up, you know where your fingers have to go. When you open and close the scissors, you have an intuition about their purpose. How is this information communicated to us? Do we all have an innate understanding of what scissors are?

The answer lies in great design. In chapter 1 of “The Design of Everyday Things”, Don Norman highlights some concepts that together explain what makes a design great. These include affordances, the possible relationships between two entities: paper affords cutting for a pair of scissors, but a chunk of metal does not. Another important concept is that of signifiers, indicators that signal where an action is supposed to take place.

Lastly, feedback is another aspect of design that—if implemented correctly—can help a person understand how to use something. There are two examples of machines with feedback, one good and one bad, that I use in my daily life. One is an electronic toothbrush that beeps violently when I apply too much pressure. I didn’t have to read any manual to understand what it was telling me: whenever I brushed too hard, it would beep to let me know I was being a bit harsh. On the other hand, I have an espresso machine that has three buttons: two for espresso and one for steaming milk. The milk-steaming button had been blinking orange for a while. I didn’t notice it until a friend who owns the same machine saw it and told me that the machine needs descaling. How would I have known that? The toothbrush is an example of good design, where the feedback is conveyed without needing a manual. The espresso machine, on the other hand, has much poorer feedback design.

After reading this chapter, I instantly thought of the doors we have installed on campus. The automatic doors should open when a sensor detects a person, and they usually do. Except when they stop working:

Which happens more often than it should. But that’s fine, machines break. After seeing these signs so often, I’ve made it a habit of using every door manually. You would expect that to work all the time, but something weird happens with the automatic doors that we have on campus. When I push on a “functional” automatic door, it decides to resist. It will not move until I stop pushing, even if it was perfectly keen on doing so before. What does this feedback even mean? Is the door saying that I should let it do its job?

Door with handles that make you want to use them manually

Then why are the handles and push bars there? The handles are affordances—they allow me to pull on the door—and also signifiers, as they tell me where to pull. Yet when I listen to the design, I am misled; I am met with resistance that leaves me confused. Why call to me to be pulled, then push against me as if to say I never should have pulled? It is deception. Bad design is deception.

Assignment 3 – Insects

Concept: Mimicking Organic Patterns

For this assignment, I wanted to implement insects and add behaviors to them that would give rise to organic patterns that are found in nature. For example, slime mold exhibits fascinating patterns when searching for food.

Slime Mold – Audrey Dussutour, CNRS

I wanted to recreate this effect. However, I was unable to achieve this. Still, I managed to create a class-based visual that is interesting in its own way.

Process

After watching some videos on simulating ants and slime mold, I wanted to work on a similar project, as I loved the results. I played around with the visuals I could achieve, but I wanted them to be based on the movement of some insects.

I decided to create an insect class that would leave trails behind it as it moved. Initially, I wanted to create trails that would allow ants to seek out the best route to food, and back to their home. However, the logic that I implemented did not achieve this. Instead, I decided to make the trails look aesthetically pleasing and added random movement to give the insects some life-like feeling.

Code Highlights

My favorite part of the code was making the trail diffuse. Since I implemented the trail on a grid layer, all I had to do was to create weaker trails on the surrounding grid cells. The code that does this is as follows:

diffuse() {
    //if the trail is too weak, don't diffuse
    if (this.life <= 9) {
      return;
    }

    //grid cell containing this trail
    let i = floor(this.x / gridSize);
    let j = floor(this.y / gridSize);

    let intensity = this.life / 4;

    //add weaker trails to the surrounding cells
    for (let x = i - 1; x <= i + 1; x++) {
      for (let y = j - 1; y <= j + 1; y++) {
        if (x >= 0 && x < trails.length && y >= 0 && y < trails[0].length) {
          let weakerTrail = new Trail(
            x * gridSize,
            y * gridSize,
            gridSize,
            this.type,
            intensity
          );

          //add the trail to the cell
          trails[x][y] = weakerTrail;
        }
      }
    }
    this.life = intensity;
  }

Not only do the trails fade over time, but they also spread out if they are strong enough, which is what allows the fading-away effect to emerge.
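A hypothetical companion to diffuse() that handles the fading could look like this (the real Trail class's fields and decay rate may differ):

```javascript
// Per-frame decay so that every trail fades out on its own.
class FadingTrail {
  constructor(life) {
    this.life = life; // trail strength, also usable as draw opacity
  }
  update() {
    this.life = Math.max(0, this.life - 1); // decay one unit per frame
  }
  isDead() {
    return this.life <= 0; // dead trails can be removed from the grid
  }
}
```

Combined with diffuse()'s rule of only spreading when life is above a threshold, strong trails bloom outward while weak ones simply fade.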

Reflection

While working on this project, I realized that I need to gauge the scale of a project better beforehand. This one was a massive undertaking, and I had to give up some of my goals due to time constraints. Going forward, I need to plan projects around the time frame so that I can achieve what I set out to do.

However, I love the actual effect that was produced in the end. It reminds me of The Game of Life, and maybe my grid-based implementation for the trail markers is why such a similar effect emerged. Nonetheless, I’m happy with the way this project turned out.

Sources

Slime Mold – Audrey Dussutour

Coding Adventure: Ant and Slime Simulations – Sebastian Lague

Reading Reflection – Week 3

What exactly is interactivity?

Chris Crawford's chapter on “What is Interactivity” was interesting because I had never thought about formally defining interactivity before. Since Crawford was trying to do so, I approached the reading as a skeptic, trying to think of a counterargument to anything that Crawford proposed.

Crawford insists that interaction is the name of the exchange between two actors as they listen, think, and reply to one another. I was about to ask what exactly an “actor” is to Crawford, but he answered soon after: for him, an actor is a “purposeful creature”. This is still a bit unclear to me. He then mentions that a fridge programmed to “react” to someone opening the door by turning on its light is not what he considers an actor; rather, what he wants is “interactivity that has some blood in its veins.” To me this just means that he does not want to include entities whose decision-making is too simplistic in his notion of interactivity. Humans are one type of actor that Crawford would accept, and the replies they can generate sit on a spectrum compared to a refrigerator that can only turn its light on or off in response to an input. As such, Crawford’s definition of interactivity requires the actors to have a certain degree of complexity.

Thinking about this for a little bit, I did not find myself disagreeing with this notion. He insists that interaction is different from reaction, and even though I’m taking the skeptic’s position, I will have to agree with him on that. So far, I’ve accepted the notion that interaction needs two actors to listen, think, and reply to one another. However, I thought of moments when I’ve had small talk and conversations without purpose. By this definition, it would seem that such conversations would not be interactive. It seemed a bit counterintuitive that a conversation is non-interactive, but as I thought of this I read through Crawford’s position on the subjectivity of interaction, and his proposed solution to it. He insists that interaction can vary on a spectrum, and that interactivity is just a measure of how much thinking, listening, and replying occurs during an exchange. As if he had read my mind, the example of trivial conversations was one that Crawford used to explain the spectrum of interactivity.

Was the book interactive, then? It certainly felt like it. However, by Crawford’s proposed definition it was definitely lower on the scale than an engaging conversation. Although it felt like I was conversing with someone who had just read my mind, the ideas in the chapter were stagnant, unchanging in response to my thoughts. For a moment they seemed to be a reply to my thoughts, but I know better than that. Still, this gave Crawford's definition merit in my book.

After going through the entire chapter, I wonder what the epitome of interactivity would look like. One technology that comes to mind is artificial intelligence. I think the reason AI fascinates us is that it is a technology much higher on the interactivity scale: when interacting with it, it feels as if more complex thinking is involved in the exchange between us humans and the technology. If interactivity is one measure of how good a program is, then is a perfect program one that uses AI algorithms to tailor its function and form to our needs? If such tailoring is done, then what is the need for specialized programs? When I imagine the peak of interactivity, I think of something without a predetermined form: a single program that is an amalgamation of every possible program but has no form until it is interacted with.

Reading Reflection – Week 2

After watching Casey Reas’ talk at the 2012 Eyeo festival, I was left questioning certain beliefs I held. In my mind, the distinction between digital and organic systems was like night and day. For example, Casey Reas’ earlier work is what I would have associated with computer graphics. Before watching this video, we had to make some computer graphics ourselves, and all of the ideas that came to my mind had one thing in common: order.

Casey Reas’ Early Work

While working on the second assignment, all I kept thinking about was how to recreate, in code, the vision I had in my mind. The fact that I never thought of generating something I had no control over makes me realize that I was limiting myself to the idea that computer graphics had to have order.

Casey Reas’ Early Organic Work

Secondly, after watching Reas’ talk, I’m questioning what exactly it means to be alive. When he shows the grid of randomly generated pixels and adds symmetry to the randomness, faces and other things I associate with living beings suddenly start to emerge. I do feel an urge to resist this association. Maybe it’s because my definition of life is tied to organic matter. I don’t have the answer to that yet.

Symmetry – Casey Reas

However, when he shows the paths of the vehicles and mentions that they would be perfectly similar to one another if not for the error, it made me think: is that not what life is? Living things fail and some win, sometimes through sheer luck. Without random mutations changing us, we could still be single-celled organisms floating across the oceans. All the complexity we see today arose from that little bit of randomness causing some helpful instability.

With this in mind, I think I’ll let some randomness into my upcoming work. I’ve been trying to mold it to fit my vision so far, so it will be a fun change to incorporate an element of uncertainty into it.

Sources

Casey Reas – Eyeo 2012

Retro ASCII Art

Concept: Retro 3D Art

Retro Computer Graphics from Behance

I was inspired by old school computer graphics that you would see in movies like The Matrix. Because of this, I knew that I wanted to make some ASCII art in the signature green color that most retro graphics used. After some experimenting, I decided to make an ASCII representation of a Menger Sponge, which is a fractal geometry that I thought would be very interesting to look at.

Process

I began by creating a sample video that I could use to turn into ASCII art. To do this, I created a 3D cube in Processing, the predecessor of p5.js. I attempted this in p5.js but found the saveFrame() function too limiting. I created a simple box using the 3D renderer in Processing and added some lighting to give the sketch some dynamic range. This is important, as I needed the differences in brightness later on when converting the video to ASCII; the greater the dynamic range, the easier it is to perceive the ASCII video.

void setup() {
  size(600, 600, P3D);
}

float a = 0;
void draw() {
  background(0);
  noStroke();
  spotLight(10, 80, 240, width/2, height/2, 400, 0, 0, -1, PI/4, 2);
  pointLight(255, 0, 0, width/2, height/2, height/2);
  ambientLight(100, 0, 100);
  fill(255);

  translate(width/2, height/2);
  rotateX(a/2);
  rotateY(a/2);
  rotateZ(a/3);
  box(280);

  a+=PI/100;

  saveFrame("src/box-######.png");
}

I incremented the rotation angle by a fraction of pi because I wanted to be able to count when the cube resets to its original position. This made it easier to create a video that could be looped seamlessly.
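For instance, the loop length can be worked out like this (my own back-of-the-envelope check, not part of the sketch):

```javascript
// With a += PI/100 per frame, when does the cube return to its start orientation?
const PI = Math.PI;
const step = PI / 100; // per-frame increment of the angle a
// rotateX(a/2) and rotateY(a/2) repeat when a is a multiple of 4*PI;
// rotateZ(a/3) repeats when a is a multiple of 6*PI.
// The full loop closes at their least common multiple, a = 12*PI.
const loopFrames = (12 * PI) / step;
console.log(Math.round(loopFrames)); // → 1200
```

So every 1200 frames the rotation lines up again, which is what makes a seamless cut possible.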

Once I had the output frames, I combined them together using Microsoft Movie Maker. The final result was this video:

Next, I wanted to work on converting this footage to ASCII art. I followed Daniel Shiffman’s coding challenge on creating ASCII text images. After experimenting with the video size and character density arrays, this was the resulting sketch:

However, I wanted to create something a bit more complex. This is when I remembered an old project I had worked on by following another of The Coding Train’s challenges: the Menger Sponge coding challenge. After generating the frames and compiling them into a video, this was the result:

All I had to do then was insert this video into the original code and play around with different parameters until I got the desired result.

Code Highlights

I’m really proud of the code that makes the animation look like it’s being built up slowly out of ASCII characters. I achieved this basically by filtering out the highlights on the Menger sponge. When I compiled the video, I saw that the lower right corner of the sponge had a bright, blocky highlight on it.

//finding the right character based on brightness of pixel
let len = charArray.length;
let charIndex;

//playing with map values for the building up effect
charIndex = floor(map(apparentBrightness, 0, 100, len, 0));

When I filtered the brightest points of the sponge out, I essentially removed the lower left corner until it got a bit darker later in the animation, which created the building-up effect.
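An illustrative version of the full brightness-to-character mapping (the character set and the clamp are mine; the map() helper mirrors p5's map()). Without a clamp, dark pixels map past the end of the array and nothing gets drawn, which is presumably the drop-out behind the building-up effect:

```javascript
const charArray = ["@", "#", "+", ":", "."]; // dense → sparse (illustrative)
const map = (v, a, b, c, d) => c + ((v - a) / (b - a)) * (d - c);

function charFor(brightness) { // brightness in [0, 100]
  const len = charArray.length;
  let idx = Math.floor(map(brightness, 0, 100, len, 0));
  idx = Math.min(Math.max(idx, 0), len - 1); // clamp into the array
  return charArray[idx]; // brightest pixels get the densest character
}
```

Dropping the clamp (as in the snippet above) lets the darkest pixels vanish entirely instead of rendering a sparse character.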

Reflection

Compared to the first assignment, I had a more solid idea of what I wanted to achieve. Because of this, I planned out my workflow beforehand, which streamlined the entire creative process: I knew I had to create the source animation and then convert it to ASCII characters. This made my code more readable, and I had better control of the sketch overall.

However, the building-up animation that I am most proud of is dependent on the source video. It looks the way it is because in the source animation the highlights are blocky as well. If I wanted to recreate this project, I want to work on some logic that allows the effect to be more generalizable. Maybe I could filter out sections of the object based on a distance function instead of the brightness levels. That way I can substitute different source videos and still get the cool effect.

Sources


Self Portrait — With a Surprise!

Concept

I wanted to capture as much detail as possible in my self-portrait. Since we needed to use simple shapes to draw our portraits, I decided to work from a reference photo of myself and simplify my face into the most basic shapes possible. For an animated component, I added a blinking effect. I also wanted an interactive element, which is why I added some cats that pop out when you click the mouse!


Code Highlights:

My favorite part was creating a cat class. It’s the highlight of the project, since I can create a cat anywhere on the canvas and rotate it as well. You can even customize the base, iris, mouth, and nose colors for each cat. This flexibility made my code much cleaner and allowed me to have fun creating different cats.

class Cat {
  constructor(x, y, angle, baseColor, eyeColor, mouthColor = color(255), noseColor = color(202, 137, 155)) {
    this.x = x;
    this.y = y;
    this.angle = angle;
    this.baseColor = baseColor;
    this.eyeColor = eyeColor;
    this.noseColor = noseColor;
    this.mouthColor = mouthColor;
  }

  show() {
    push();
    translate(scaledSize/this.x, scaledSize/this.y);
    rotate(this.angle);

    strokeWeight(2);

    //face
    push();
    fill(this.baseColor);
    triangle(
      -scaledSize / 25,
      -scaledSize / 120,
      -scaledSize / 21.81,
      -scaledSize / 17.14,
      -scaledSize / 100,
      -scaledSize / 26.66
    );
    triangle(
      scaledSize / 25,
      -scaledSize / 120,
      scaledSize / 21.81,
      -scaledSize / 17.14,
      scaledSize / 100,
      -scaledSize / 26.66
    );
    circle(0, 0, scaledSize / 12);

    pop();

    //eyes
    push();
    strokeWeight(2);
    fill(255);
    circle(-scaledSize / 60, -scaledSize / 120, scaledSize / 40);
    circle(scaledSize / 60, -scaledSize / 120, scaledSize / 40);
    pop();

    push();
    noStroke();
    fill(this.eyeColor);
    circle(-scaledSize / 60, -scaledSize / 120, scaledSize / 48);
    circle(scaledSize / 60, -scaledSize / 120, scaledSize / 48);
    pop();

    push();
    noStroke();
    fill(this.noseColor);
    triangle(
      0, scaledSize / 120,
      -scaledSize / 150, scaledSize / 300,
      scaledSize / 150, scaledSize / 300
    );
    pop();

    //mouth
    push();
    stroke(this.mouthColor);
    strokeWeight(1);
    line(0, scaledSize / 120, 0, scaledSize / 80);
    pop();

    push();
    stroke(this.mouthColor);
    strokeWeight(1);
    translate(-scaledSize / 1200, scaledSize / 80);

    rotate(PI / 3);
    line(0, 0, 0, scaledSize / 120);
    pop();

    push();
    stroke(this.mouthColor);
    strokeWeight(1);
    translate(scaledSize / 1200, scaledSize / 80);

    rotate(-PI / 3);
    line(0, 0, 0, scaledSize / 120);
    pop();

    pop();
  }
}

To add the blinking animation, I created open and closed states for the eyes. I implemented the state change using the following code:

//blink animation
if (eyeOpen) {
  //toggle roughly every 150 frames (2.5 s at 60 fps)
  if (frameCount % 150 == 0) {
    eyeOpen = !eyeOpen;
  }
} else {
  //keep the eyes closed for 11 frames
  blinkFrames++;
  if (blinkFrames % 11 == 0) {
    eyeOpen = !eyeOpen;
  }
}

The time between blinks is calculated using the frameCount variable. I want the eyes to blink every 2.5 seconds, and since the sketch runs at 60 frames per second, 150 frames correspond to 2.5 seconds.
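Spelled out as a quick check (assuming the 60 fps mentioned above):

```javascript
// frames = seconds × frames-per-second
const FPS = 60;
const framesFor = (seconds) => Math.round(seconds * FPS);
console.log(framesFor(2.5)); // → 150
```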

I also use a blinkFrames variable to count the number of frames that pass while the eyes are closed, which keeps the duration of each blink constant. For the time between blinks, however, I rely on frameCount, which keeps counting while the eyes are closed; this makes the gaps between blinks slightly irregular, which is exactly the bit of variation I wanted.


Approach and Reflection

After an hour of experimenting, I came up with the following face shape using the rect, arc, triangle, createShape, and curveVertex functions:

One particular tool that I used extensively throughout this project was Microsoft PowerToys, which comes with a Color Picker utility. It helped me figure out the color scheme I wanted to go with, e.g., when picking the color for the shadow under my face.


I struggled with creating the hair, as I wanted to create curls. After no luck with that, I changed the plan to feature me with short hair. Using triangles and arcs, I created my hairline:

The line in the picture above is a reference line I drew, since early on I translated the canvas to the middle. However, that made it hard to keep track of coordinates, as the mouse coordinates I printed no longer matched the drawing coordinates after the translate call. Removing the translation would have been tedious after making so much progress, so I continued working like this. This is something I would choose not to do next time.

After finishing the hair, I quickly gave my portrait a body and then created the cat class. I worked in a separate file and added the class to the main project once I was satisfied.

I also wanted a way to control the scale of the picture, so I added a factor to every coordinate and length, controlled by the scaleFactor variable. The idea was to keep the size of the portrait fixed relative to the canvas size and to allow me to zoom in and out of the portrait. However, this implementation was not perfect, and I suspect the translation at the beginning caused some minor problems.

In retrospect, it would have been beneficial to plan out the project more thoroughly. I did not have a clear idea in the beginning of what I wanted, and added features as I went along. Still, I enjoyed the process and am happy with the end result!


Resources