Midterm “Ladushki”

Sketch

* In order to play the game, go to the p5 sketch and give it access to the camera!

Concept

I created a game controlled by the user’s video input. In Russia, we play a clapping game called “Ладушки” (ladushki; in English it’s called Patty Cake), where you need to match the rhythm of the other person clapping, as well as their hands (right to right, left to left, two hands to two hands). A cute, kind girl in the room welcomes the player to play this game with her, starting the game after a short tutorial.

However, if the player fails to match the girl’s rhythm and handpose, she gets more and more upset. With more mistakes, the girl claps faster, and her anger distorts the environment and sound around her. What happens if you manage to fail so many times that she reaches the boiling point? Play and find out.

Process of Development & Parts I’m Proud of
Sprites & Design

To create the sprites, I first created a character on Picrew, so I could later edit an image of a complete character designed in one consistent style. I chose a cute-classy design for the girl to fit the mood of the game.

After that, I fed the image to Nano Banana to pixelate it to a 16-bit style and extend it. Then I edited the image in Canva so that all the faces and hand positions are properly aligned, and the image contains all 4 poses with 4 different faces.

Sound

The sounds in the game were generated and/or taken from open-source, copyright-free resources. The background music was created with Suno AI using the following prompt:

Dreamy chiptune instrumental, midtempo, modular sections built for easy tempo shifts, Playful square leads carry a singable main motif, doubled an octave up on repeats, Soft, bouncy drum kit with rounded kicks and brushed snares; bubbly sub-sine/square bass locks to a simple walking pattern, Light 8-bit arps and gentle pitch bends sparkle at phrase ends while warm, detuned pad layers smear the edges for a cozy, nostalgic arcade glow, Occasional breakdowns thin to arps and pad swells before the full groove pops back in with extra countermelodies for an intensifying, joyful loop, playful, nostalgic, light, warm, soft, gentle, bright

Other sounds, such as the clapping sounds and the screaming sound, were taken from Pixabay.

I did a lot of sound manipulation, speeding it up and distorting it for a creepy effect.

update() {
  //for sounds
  let current_rate = map(this.level, 50, 100, 1.0, 1.3, true);
  soundtrack.rate(current_rate);
  if (this.level >= 70) {
    let intensity = map(this.level, 70, 100, 0, 0.3);
    distortion.set(intensity); // set the distortion amount
    distortion.drywet(map(this.level, 70, 100, 0, 0.2));
  } else {
    distortion.drywet(0); // keep it clean under level 70
  }
}

Here, I use a few methods from the p5.js sound reference. The background soundtrack is connected to the distortion variable seen in the code. By mapping the rate (the speed of the soundtrack), the intensity (the distortion amount), and the drywet value (the mix between the clean and distorted signal), and connecting all these values to the background soundtrack, a slow but noticeable change in the sound effects and background music is created.

ml5

The fundamental part of my project is hand tracking, which was implemented using the ml5.js HandPose model.

The implementation process was explained in detail in my previous post, since it was the first step of development. I haven’t changed this part since then, but I built on the palm pose detection (open vs. closed) by adding the following condition:

//DISTANCE BETWEEN THUMB AND PINKY is also counted for state of the hand
//define what means when hand is open and set status of the user's hand positions
if (hand.keypoints && hand.keypoints.length >= 21) {
  let isHandOpen = (
    hand.keypoints[4].y < hand.keypoints[2].y &&   
    hand.keypoints[8].y < hand.keypoints[5].y &&   
    hand.keypoints[12].y < hand.keypoints[9].y &&  
    hand.keypoints[16].y < hand.keypoints[13].y && 
    hand.keypoints[20].y < hand.keypoints[17].y &&
    abs(hand.keypoints[4].x - hand.keypoints[20].x) > abs(hand.keypoints[5].x - hand.keypoints[17].x));

  if (isHandOpen) {
    if (hand.handedness === "Right" && hand.keypoints[20].x - hand.keypoints[4].x > 0) {
      leftOpen = true;  
    } else if (hand.handedness === "Left" && hand.keypoints[20].x - hand.keypoints[4].x < 0) {
      rightOpen = true; 
    }
  }
}

The condition abs(hand.keypoints[4].x - hand.keypoints[20].x) > abs(hand.keypoints[5].x - hand.keypoints[17].x) compares the distance between the thumb tip and the pinky tip with the distance between the index-finger knuckle and the pinky knuckle, ensuring that the palm is fully open and not tilted. The condition hand.keypoints[20].x - hand.keypoints[4].x < 0 checks the sign of the horizontal distance between the pinky tip and the thumb tip, ensuring that the user shows the inner side of the palm to the camera, not its back.

Other parts

One part that I’m proud of in this code is the typewriter text effect in pixel dialogue window.

//draw text like a typewriter
function draw_text(t, anger_level) {
  //add shaking for higher anger levels
  let shakeAmount = 0;
  if (anger_level > 40 && anger_level < 100) {
    shakeAmount = map(anger_level, 40, 99, 0, 5, true); 
  }
  // random offset
  let offsetX = random(-shakeAmount, shakeAmount);
  let offsetY = random(-shakeAmount, shakeAmount);

  let currentIndex = floor(text_counter / text_speed);
  if (currentIndex < t.length) {
    text_counter++;
  }
  let displayedText = t.substring(0, currentIndex);

  push();
  translate(offsetX, offsetY);
  
  textFont(myFont);
  textSize(19);
  noStroke();
  
  fill(0);
  textAlign(CENTER, CENTER);
  rect(width/2, height*0.9, width*0.6+15, 40); //lines from side
  rect(width/2, height*0.9, width*0.6, 55); //lines from up/down
  //dialogue window
  fill(237, 240, 240);
  rect(width/2, height*0.9, width*0.6, 40);
  fill(0);
  text(displayedText, width/2, height*0.9);
  pop();
}

Here, the if-condition checks which index in the text we are currently on (the default is 0, since text_counter = 0) and whether it is less than the length of the desired output string. If it is, the counter is incremented. The counter is divided by the text speed (set to 2 frames), and the current index is the result of this division rounded down with the floor() function. The substring() function then returns the part of the string between the starting index (0) and the current index, which is re-evaluated every frame. This way, a small pause (of 2 frames) is created between drawing each letter, producing the typewriter effect.

In the final part of the function, black rectangles are drawn under the main gray dialogue window, creating a pixel-style border around it.

Another valuable part of the code here is the shaking. In other parts of the code the shaking technique is almost the same: x and y offsets that depend on the anger level are passed to the translate() function, shifting the coordinate origin. Because of that, the whole dialogue window is drawn from a slightly different origin each time the function runs (if the condition is satisfied), creating the shaking effect.
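The shaking technique above can be sketched in isolation. This is a minimal, hedged version: mapClamped stands in for p5’s map() with its clamping flag, and shakeOffset is an illustrative helper name, not the actual function in my sketch.

```javascript
// A minimal stand-in for p5's map(value, in1, in2, out1, out2, true):
// linearly remaps a value and clamps it to the output range.
function mapClamped(v, inMin, inMax, outMin, outMax) {
  const t = Math.min(Math.max((v - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
}

// Returns a random {x, y} offset whose magnitude grows with the anger level,
// mirroring the shakeAmount logic in draw_text(). In the sketch, this offset
// is passed to translate() before drawing the dialogue window.
function shakeOffset(angerLevel) {
  const amount =
    angerLevel > 40 ? mapClamped(angerLevel, 40, 99, 0, 5) : 0;
  return {
    x: (Math.random() * 2 - 1) * amount,
    y: (Math.random() * 2 - 1) * amount,
  };
}
```

Because the offset is recomputed every frame, the jitter never repeats, which is what makes it read as shaking rather than a fixed displacement.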


Apart from that, the core of my code is the class Girl, which controls almost everything connected to the girl character, from her speech to comparing handpose states. I also have some independent functions, like detect(), which recognizes and returns the player’s handpose state, and the tutorial logic, which explains the rules of the game to the player (by controlling and modifying some of the class’s public variables as well).
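Conceptually, detect() boils down to mapping the open-hand flags to a single state string. This is a hedged simplification (the function name detectState and the state labels are illustrative, not my actual code, which also handles confidence and keypoint checks):

```javascript
// Collapse the two open-hand flags (computed from HandPose keypoints)
// into one state string the Girl's comparison method can check against.
function detectState(leftOpen, rightOpen) {
  if (leftOpen && rightOpen) return "BOTH";
  if (leftOpen) return "LEFT";
  if (rightOpen) return "RIGHT";
  return "CLOSED"; // no open palm detected
}
```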

To control the game state (when the tutorial should run, when the main part is being played, and when it’s over and needs a restart), I use game states. To reset, the player is prompted to press ENTER on the final screen, which triggers the resetting function: it sets all global variables back to their default state and creates a new Girl object with new default attributes:

//reset the whole game upon calling this function
function resetGame() {
  // reset global variables
  game_state = "START";
  state = "CLOSED";
  text_counter = 0;
  screenFlash = 0;
  girlImages = [];
  
  girl = new Girl();
  
  // reset girl's variables
  girl.current_state = 0; 
  girl.level = 0;
  girl.change_state();
  endStage = 0;
  endTimer = 60;

  
  // reset the audio
  soundtrack.stop();
  soundtrack.rate(1.0);
  soundtrack.setVolume(1.0);
  distortion.set(0);
  distortion.drywet(0);
  soundtrack.loop();
}

...

function keyPressed() {
  ...
  if (keyCode === ENTER) {
    if (game_state === "GAME_OVER") {
      resetGame();
    }
  }
...
}

My code is pretty big, but I feel like the parts explained here are the most interesting ones. I believe there are some inefficient parts in my code (such as the hardcoded ending speech and its progression), but they all work without lagging or long load times, so I believe that, at least for this project, it’s fine to leave them as they are.

While writing the code, I used the following resources:

    1. p5.js reference
    2. ml5.js reference
    3. The Coding Train Handpose video
    4. Gemini (Guided Learning Mode) for debugging and finding p5.js functions (such as the substring() function in the typewriter, for example)

+just googling some methods and clarifications

Problems

Throughout the development of the project, I ran into a lot of problems and small bugs, but I will describe one that actually taught me a very useful trick.

I had a lot of visual parts that required precise positioning, each with different effects applied. The offsets limiting an object’s shaking, the display mode (rectMode, imageMode), the alignment, the translation conditions, etc. differed between many parts. However, when you set imageMode globally in one place, then set another imageMode somewhere else, and in a third place use it without setting anything, expecting the default mode, the whole sketch turns to complete chaos. As you can see in the photos, the video was aligned to a different part of the screen, the text alignment was set to some weird value, the font style disappeared, and the textbox moved off the screen. I learned how to isolate the styles (with the help of Gemini), as in this example:

function draw_video() {
  push();
  imageMode(CORNER);
  image(bg_img, 0, 0, width, height);
  
  //layer that gets the room darker as the anger level rises
  rectMode(CORNER);
  let mask_level = map(girl.level, 20, 100, 0, 180);
  noStroke();
  fill(0, mask_level);
  rect(0, 0, 640, 480);
  pop();
}

By surrounding the code block with push() and pop(), the style and code inside the block becomes isolated and doesn’t impact other parts of the code. It was really helpful, so I used it almost everywhere in my project!

Areas for Improvement

There are some parts of my project that could be significantly improved, and parts I don’t really like.

First of all, the final screamer: I feel like it is not scary enough to really make a great impact on the user. The concept was to have a cuteness vs. creepiness contrast. So, against the small children’s game and cutesy design, I wanted a really impactful and creepy screamer at the end, on top of the other glitch/creepy effects. It turned out that making a scary screamer is actually a very hard job. I tested a few versions of the screamer, asking my friends to play the game and tell me which one was scarier. I settled on the current version because it was more unexpected: it appears mid-sentence and uses a freeze-frame picture rather than a zoomed video or something similar. Still, I feel there are ways to make this part much more surprising and scary that I wasn’t able to come up with.

Another part I could work on more is the design. I can’t draw, so to create the visual assets I used Picrew and AI editing (described earlier). However, I think the sprites could be more fitting, and maybe I could have added extra sprites for smoother pose changes, plus sprites for a “still” pose. That is a bit hard to do given the time constraints and my lack of drawing skill, but it’s something I can think about in the future.

Also, I believe I could introduce a more unified control system. While playing, the user doesn’t touch the keyboard and only shows their hands to the screen, but to progress through the tutorial and ending scene they need to press some buttons. I believe it isn’t great to mix these two control systems, so one improvement could be introducing some additional poses (like a peace sign, maybe?) instead of keyboard presses.

Midterm Progress

Concept & Interaction

What I love even more than horror games is psychological, story-based visual novels. Games that hold you in place, extremely focused and afraid to even blink in case you miss something important (or something that will get you in trouble). Also, I really love when innocent, soft, childlike things are framed in a way that makes you really uncomfortable, creating a two-sided feeling of nostalgia and comfort mixed with unsettling disturbance.

More than that, for a very long time I have wanted to experiment with computer vision and body capture, so I decided to combine these two things in my midterm.

What I want to make is a game controlled by the user’s video input. In Russia, we play a clapping game called “Ладушки” (ladushki; I believe in English it’s called Patty Cake), where you need to match the rhythm of the other person clapping, as well as their hands (right to right, left to left, two hands to two hands). I want the user to play this game with the computer. There will be a girl in a room who welcomes the player to play this game with her. Her clapping will be sequential, and the player just has to match her hands’ state with their own.

The twist is that if the player fails to match the girl’s rhythm and hand state, she will get angry, and the more mistakes the user makes, the angrier she gets. As the anger level increases, the whole picture and game become distorted: the video glitches (in later phases: disappears), the rhythm becomes unstable and/or much faster, hand detection turns unfair, with intentional mistakes in detecting the state of the hands, the sound becomes distorted, the phrases the girl says after mistakes turn aggressive, and her appearance shifts as well. If the user reaches anger level 100, there will be a jumpscare with their own video distorted (I figured that, of all jumpscares, this would make the most impact).
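One way to keep all these distortions coherent is to derive every effect parameter from the single 0–100 anger value. This is only a planning sketch with made-up names and curves, not final code:

```javascript
// Illustrative only: derive a few effect parameters from one anger level,
// so tempo, glitch strength, and video dropout all scale together.
function effectParams(anger) {
  const clamp01 = (v) => Math.min(Math.max(v, 0), 1);
  const t = clamp01(anger / 100);
  return {
    tempoMultiplier: 1 + 0.5 * t,   // clap rhythm speeds up toward 1.5x
    glitchStrength: t * t,          // visual glitches ramp in late (quadratic)
    // video only starts dropping out in the last phase (anger >= 80)
    videoDropoutChance: anger >= 80 ? ((anger - 80) / 20) * 0.5 : 0,
  };
}
```

Keeping one source of truth like this also makes it easy to tune the “phases” later by changing a single curve.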

Some details about the concept can be found as comments in my code, and the general outline is planned to be like the picture below. To create a creepy atmosphere, I plan to use really soft colors and a cute art style that won’t match the gameplay and plot.

Code Outline

Right now I have decided to focus on the technical side of the project and make the algorithm work, so that afterwards I will have the core mechanic. After that, I will focus on visual and sound design: drawing sprites, finding suitable sounds, creating glitching effects, scales, text, etc.

This is the plan of what I (as of now) think the code should include:

  • class Girl with one object initiated and the following methods:
    • talking method (reaction when user fails to match the girl’s tempo)
    • changing hands states method
    • comparing Girl’s to user’s states method
    • drawing girl method (sprites needed)
    • anger level scale draw method
  • Functions in the general code block:
    • detecting user’s handpose function
    • displaying user’s video
    • video distortion function
    • sound implementation + sound distortion methods
    • final video screamer function
  • Assembled game in setup() and draw() with restart option (maybe pause + exit buttons ?)
Code Made

Sketch requires camera access!

This is my sketch so far and the code I made. As I said, I’m focusing on the technical part for now. Currently, the code can:

  • detect the user’s handpose
  • display the user’s video in the corner
  • run the talking method (reaction when the user fails to match the girl’s tempo) (Girl class)
  • run the hands-state-changing method (Girl class)
  • run the method comparing the Girl’s state to the user’s (Girl class)

Instead of drawing, the code currently just outputs the anger level and the state of the girl’s hands. The code compares the video input and the user’s hand positions with the girl’s hand state. When the user makes a mistake, the anger level increases by 10 and text is displayed on the screen (3 phrases for each sector; 4 sectors depending on the anger level). However, the text doesn’t stay on screen yet (to be fixed). Also, the game loop stops once the anger level reaches 100.
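The sector-based phrase lookup can be sketched like this, assuming 4 sectors of 25 anger points with 3 phrases each. The function name and the placeholder phrase arrays are illustrative, not the game’s actual dialogue:

```javascript
// Pick a random phrase for the current anger sector.
// With 4 sectors over a 0-100 scale, each sector spans 25 points;
// anger 100 is clamped into the last sector.
function phraseFor(anger, phrases) {
  const sector = Math.min(Math.floor(anger / 25), phrases.length - 1);
  const options = phrases[sector];
  return options[Math.floor(Math.random() * options.length)];
}

// Placeholder dialogue: 4 sectors x 3 phrases (stand-ins, not real lines).
const placeholderPhrases = [
  ["s0a", "s0b", "s0c"],
  ["s1a", "s1b", "s1c"],
  ["s2a", "s2b", "s2c"],
  ["s3a", "s3b", "s3c"],
];
```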

The base code for the hand-detection function is adapted from the ml5 HandPose reference page. I also used The Coding Train video about HandPose.

Complex Part

I believe that the most difficult part of my midterm is working with video input and hand detection. It’s a pretty new concept for me, and it’s also pretty hard to use it not just as a small interactive component but as the core concept the game is built around. The risk of improper pose detection, poor video input, and glitching is quite high. However, I built this part first, and it turned out not to be too difficult. After testing my code for some time, I defined three poses the computer should recognize: two hands open, left hand open, right hand open. Ideally, to fit my concept and the Ladushki gameplay, I would also need a pose for the clap, but the problem is that when the hands are clapped and face the camera edge-on, hand detection disappears. Since this could break the game and unfairly register a mistake when there isn’t one, I decided to ignore this state entirely and only check for claps facing the camera, when the palm faces the computer.

Also, to avoid random poses being detected, I added a confidence level: if the detection confidence for the hands is lower than this set level, the computer won’t register it as a pose. This really helps avoid identifying some random pose or accidental movement as a user mistake.
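The confidence gate amounts to filtering the detected hands before any pose logic runs. A hedged sketch (the property name `confidence` and the threshold value are assumptions here; check the exact field your ml5 version returns on each hand object):

```javascript
// Assumed threshold: hands detected with less certainty than this are ignored.
const MIN_CONFIDENCE = 0.8;

// Drop low-confidence detections so accidental movements never reach
// the pose-comparison logic and can't be counted as player mistakes.
function confidentHands(hands) {
  return hands.filter((h) => (h.confidence ?? 0) >= MIN_CONFIDENCE);
}
```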

Now, the most challenging part for me will be the visual design. Unlike coding, I don’t have much experience in this type of creative work, so creating sprites, building an environment that serves the goal of the game and suits its atmosphere, and arranging everything properly without overloading the screen will be a bit hard for me. To track my progress and the aesthetic impact, I will ask my friends for feedback and consult AI for some theoretical guidance on how things should be arranged in the final outcome according to basic design rules.

I believe this project is really fun and much easier to make than I expected, since the hardest part is mostly complete already!

Some of my inspirations for design, concept and aesthetic are: Needy Streamer Overload and DDLC

Week 5 – Reading Reflection

It’s easy to forget that computers don’t actually see anything. When we look at a video feed, we instantly recognize a person walking across a room. A computer just registers a grid of numbers where pixel values shift over time. Because of this, computer vision is incredibly fragile. Every tracking algorithm relies on strict assumptions about the real world. If the lighting in a room changes, a tracking algorithm might completely break. The computer doesn’t see the “general” picture with context; it only knows the math it was programmed to look for.

Basic Tracking Techniques

To work around this blindness, several techniques are used to track or react to the things developers are interested in.

    • Frame differencing: comparing the current video frame to the previous one. If the pixels changed, the software assumes motion happened in that exact spot.

    • Background subtraction: memorizing an image of an empty room. When a person walks in, it subtracts the “empty” image from the live feed to isolate whatever is new.

    • Brightness thresholding: tracking a glowing object in a dark room by telling the software to ignore everything except the brightest pixels.

    • Simple object tracking: looking at the color or pixel arrangement of a specific object and searching for those same values as they move across the screen.
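The first technique above, frame differencing, is simple enough to sketch directly. This toy version compares two grayscale frames represented as flat arrays of 0–255 brightness values; a real sketch would run it over a webcam’s pixel array:

```javascript
// Mark a pixel as "motion" (1) when its brightness changed by more than
// the threshold between the previous and current frame, else 0.
function motionMask(prevFrame, currFrame, threshold) {
  return currFrame.map((v, i) =>
    Math.abs(v - prevFrame[i]) > threshold ? 1 : 0
  );
}
```

The threshold is the "strict assumption" in miniature: set it too low and lighting flicker reads as motion, too high and slow movement disappears.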

Surveillance in Art

I find it very interesting that people use technology made for surveillance and the military to create art. Using technology built for control to create art is truly impressive: it flips the understanding of this technology, or even makes it very double-sided. The interactivity that comes with such tracking technology is hugely varied and sometimes feels magical and extremely emotional, yet it comes from the computer tracking, analyzing, and reacting to every move of the person in front of it. Such art turns the invisible, unsettling surveillance we experience every day into a work of art that makes it extremely present.

Honestly, this military baggage explains a lot of computer vision’s blind spots. If you’re designing a system just to monitor crowds or track moving targets, you don’t need it to understand the whole scene and all details. You just need fast analysis of tiny differences, like a shift in pixels.

However, I feel that in interactive media details are very important, and that art runs on them. So, while computer vision has not yet reached the state where it can analyze everything at once, artists have to come up with algorithms that try to do it instead.

Week 4 – Reading Reflection

One thing that always confuses me is the variety of modes on some household items. When using an iron, I see that spinning the circle increases the steam production, and for people who have no idea which level is needed for which clothes, they write the names of the materials on the same circle respectively. What drives me mad is that washing machines and dryers are NEVER intuitive. What’s the difference between Cupboard Dry and Cupboard Dry+ if they take the same time and operate at the same temperature? What is the difference between Gentle and Hygiene, and why is the time difference there 3 hours? And to actually figure out the difference, you have to find the name of the machine (which will never match its actual name), look it up in some 2008 PDF file on the very last Google page, and it still won’t answer the question. I always use Mixed washing and Cupboard Dry just because it works, and I have no idea how the other regimes work. And as Norman says, it’s not me being stupid, but the design allowing for these mistakes.

“The same technology that simplifies life by providing more functions in each device also complicates life by making the device harder to learn, harder to use”

I think my example perfectly supports this idea, since the bad design of all these items (no signifiers, no clear affordances, and no clear conceptual model formed either through life experience or through using the item) just creates more confusion and makes the user always choose one method instead of the huge variety of (probably) useful and functional ones.

I think one way to fix it is to provide some sort of manual, even a tiny table on the edge of the machine would help so much to at least understand which method does what and what the difference between them is. Another way is to display something on the small screen that almost every machine has, like all the characteristics and statistics that are unique to each method, or some short warnings/instructions. Another way to solve this problem is to at least make small illustrations near each method that actually depict what the method does. Genuinely, it would help unleash the potential of these machines and help people use them.

Talking about interactive media, I think the principles Norman talks about are really applicable and foundational.

Sometimes great art pieces with very interesting and complex interactions can be overlooked just because people can’t figure out how to interact with them. I believe that it is very important to design the piece in a very intuitive or guiding way, a way that encourages the user to make the interaction that the author created. As Norman says, humans are really predictable, and in this way, some silent guiding design (not notes, not manuals, but the design itself) should trigger the interaction that is meant to be done in order to experience the art.

Week 4 – Data Visualization

Concept

I am from Moscow, and recently I somehow had a lot of conversations about the metro. The Moscow metro has a lot of different lines and more than 300 stations, and I wanted to visualize it in some way.

© Transport.mos.ru, 2025

I found data on all the stations and lines in an official dataset by the Moscow Government and used it (the link doesn’t open without a Russian VPN). Sadly, even this, the fullest dataset, didn’t have coordinates for the stations, so I decided to adjust the concept. Now, my piece displays all the stations, each in its line’s color, at random places on the canvas, and then slowly draws connections between them. This way, it shows how complex the system is and how many connections there are, because at some point it becomes impossible to even tell which line a connection belongs to, and I only draw one connection per dot. I think it perfectly serves the idea of showing the complexity of the metro in my hometown.

Highlight Code

The part I struggled with the most in the coding was extracting the data: in the raw file, it was divided by semicolons, not commas, the language was Russian, and the writing system was Cyrillic. I struggled to understand what was written where in the original document and to clean the data so I could properly extract the parts I needed.

//clear the data from garbage that prevents from proper extraction of data
function cleanCell(dirtyData) {
  if (!dirtyData) return "";
  let str = String(dirtyData);
  let match = str.match(/value=(.*?)\}/); //regex from Gemini: capture the text between "value=" and the closing curly brace
  if (match && match[1]) return match[1]; //return the actual data captured by the regex
  return str.replace(/^"|"$/g, '').trim(); //strip quotation marks from the beginning and end of the line and trim extra spaces
}

This was the part I learned from the most. It uses the regexes /value=(.*?)\}/ and /^"|"$/g to check the data. Basically, these characters represent a pattern the computer checks for. For instance, in /^"|"$/g: ^" matches a quotation mark at the beginning of the line, | serves as a logical OR, "$ matches a mark at the end of the line, and the g flag lets the program keep matching after the first match. I didn’t know about this method before, but I find it really useful for working with data.
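To make the two patterns concrete, here are worked examples on made-up cell contents (the station name and surrounding junk are illustrative):

```javascript
// Captures everything between "value=" and the next "}".
const valueRe = /value=(.*?)\}/;
// Matches a double quote at the start OR at the end of the string.
const quoteRe = /^"|"$/g;

const m = 'junk {value=Сокольники}'.match(valueRe);
// m[1] === "Сокольники"

const cleaned = '"Red line"'.replace(quoteRe, "").trim();
// cleaned === "Red line"
```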

Overall, I created a class for stations, so each station object holds the line name and the station name, and has a method for drawing its node as a small circle in the color of its line. All stations are added to a dictionary as values under the key of their line name. Then, by line name, the computer accesses all the stations on that line and slowly draws connections one by one in random order using the lerp() function. Also, the user can access the code and toggle the station names: choose to show them or view the sketch without them at all. On click, the user can restart the sketch thanks to the mouseClicked() function.
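The grouping step can be sketched like this. It is a hedged simplification with illustrative names (groupByLine, the `line` field) rather than my actual class code; lerpValue just restates the formula behind p5’s lerp():

```javascript
// Build a dictionary keyed by line name, each value being the array
// of station objects on that line.
function groupByLine(stations) {
  const lines = {};
  for (const s of stations) {
    (lines[s.line] = lines[s.line] || []).push(s);
  }
  return lines;
}

// Same formula as p5's lerp(a, b, t): the value t of the way from a to b.
// Applying it to both x and y each frame animates a connection growing
// from one station toward another.
function lerpValue(a, b, t) {
  return a + (b - a) * t;
}
```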

Reflection

I think it would be really nice to find data on the stations with actual coordinates, so I could display them on the canvas as they are positioned in real life. After that, I believe it’s possible to experiment with the way of connecting them all: from the real layout to sharper or more abstract styles. I believe that this way it would look more like data representation and come closer to my original idea.

I also thought about grouping the stations by district and sorting them that way, allocating each district a position on the canvas close to its position on the real map.

Week 3 – OOP Generative Art

Concept

Once again, I was searching for inspiration in tech aesthetics. I came across this photo on Pinterest, and the idea for my work sparked in my head:

I thought it would be nice to create something that looks like a keyboard but is less predictable (and less functional, apparently). Therefore, I created a small piece where keyboard tiles of random color appear all over the screen, stay for a random time between 1 and 4 seconds, and, when clicked, display a random emoji face from a list I provided.

Highlight code

Due to lack of time, my code is pretty simple this time, but implementing the color scheme into the piece is something I struggled with for a while.

class Tile {
  constructor(x, y, size) {
    this.keyHue = random(0, 360);
    ...
  draw_tile() {
      ...
  
      // Lower tile
      fill(this.keyHue, 90, 80); 
      square(this.x, this.y, this.keySize, this.cornerRadius);

      // Upper tile
      let innerSize = this.keySize * 0.8;
      let offset = (this.keySize - innerSize) / 2;
      
      fill(this.keyHue, 60, 100);
      square(this.x + offset, this.y + offset - 3, innerSize, 8);

I switched to colorMode(HSB), which stands for Hue, Saturation, Brightness. Basically, the computer randomly chooses only the hue, while the saturation and brightness settings persist across all tiles. This way, the lightness and the intensity of the color pigment stay the same, but the actual color changes depending on the value the computer chose. It made all the tiles look stylistically the same while being different colors.
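The palette rule boils down to one randomized channel and two fixed ones. A minimal illustration (tileColors is a made-up helper; in the sketch the hue comes from p5’s random(0, 360)):

```javascript
// Only the hue varies per tile; saturation and brightness are fixed,
// mirroring the two fill() calls in draw_tile().
function tileColors(hue) {
  return {
    lower: { h: hue, s: 90, b: 80 },  // fill(this.keyHue, 90, 80)
    upper: { h: hue, s: 60, b: 100 }, // fill(this.keyHue, 60, 100)
  };
}
```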

Overall, for this piece I created one Tile class, with methods for activating a tile based on a timer and then resetting it; checking if the mouse is within its borders when clicked, so it can display a random emoji; and drawing the tile itself. I used for loops and some built-in p5.js functions like mouseClicked() in this sketch. I also preloaded a pixel-like font from Google Fonts so the text matches the overall aesthetic.

Reflection

I find the sketch very colorful, with a bit of a Tetris vibe, which I really like. I didn’t have much time for this piece; otherwise, I would have liked to come up with a unified color scheme to create something less colorful but more cohesive in style.

Also, after today’s reading, I thought it might be great to implement more varied ways of interaction. For instance, having a text line that displays the user’s input and gives some response to it, like emoji reactions or simple replies like “Yay!” and “Oh no”. I think the piece would then turn out to be much more fun and way more interactive.

Week 3 – Reading reflection

What do you consider to be the characteristics of a strongly interactive system?

After the reading, I feel that the main and most important characteristic of a strongly interactive system is thinking. As explained, there are three steps of interaction, and while many objects can listen (register an action, like a fridge opening) and speak (perform an action, like a book telling a story), very few can actually think: analyze the action and then respond in accordance with it. As in a conversation, where the answer to someone’s words can’t always be the same no matter what they say, the answer of a strongly interactive system should differ depending on the interaction made with it.

I stress the value of thinking in an interactive system because, as the writing says, participation and reaction are not the same as interaction. Yes, the lamp turns on when you press the button (it reacts to the action), and yes, you participate in a dance with some music, but you don’t interact with them, because 1) the interaction doesn’t involve any thinking, and 2) no matter how you press the button or dance, the music and the lamp will always be the same and have the same response to your actions.

What ideas do you have for improving the degree of user interaction in your p5 sketches?

I realized that right now my sketches are not really interactive, since they don't really engage the user's thinking either. Clicks and mouse movement are probably not the best interactions: they are very low-level and involve very little analysis of the user's actions. I realized that to come up with something more interactive, I have to consider what the user might think of doing with my sketches. For instance, if they see some object on the screen, what might they possibly do with it? If I come up with various answers, I will be able to make my system respond differently in accordance with the user's actions. This way, the system will adopt the "thinking" that is so important to interactivity.

Week 2 – Reading reflection

Casey Reas says that instead of directly painting a picture or sculpting an object, the artist becomes a builder of systems. The final art is just the result of the system the artist built. I strongly agree with this claim. I believe that having direct control over every single detail kills the magic of digital art. Straightforward instructions that create totally predictable, controlled pieces feel useless to me.

However, the same applies to pure randomness. While we might find some meaning in things that are fully random, it’s hard to feel an emotional connection to something that has zero structure. Casey Reas mentions that he finds “white noise” or total randomness boring because it has no intent, and I can really relate to that.

I believe the perfect balance happens when the artist creates a structure that allows for randomness, resulting in art that is controlled yet creative. The artist writes the instructions, but the outcome still has some natural variation. Reas talks about algorithms like Perlin noise as a way to do this: it has randomness, but it creates smooth, “alive” motion within a set of rules, rather than just chaotic jittering.

For me, the beauty of computer art is in the power of this controlled randomness. Sometimes the best part is knowing exactly what algorithms you are using, but being surprised by the result every time you run it. Computers are much better than humans at creating something that is random but still follows a system. This reminds me of Reas’ “Process” series shown in the talk, where simple elements react to each other to create complex, organic forms.

I really want to use this kind of randomness in my work. For example, in my assignment for this week, the code is the same every time I run it, but the shape of the figures, the connections between them, and their speed are different. I can never get the exact same picture twice. I find it extremely cool to create something that looks super “mathematical” but, at the same time, never turns out to be the same thing twice.

Week 2 – Generative Art

Concept

Generative art seems very modern and new to me: creating art using a computer and an element of randomness sounds like something with no association with older times.

I wanted to create a minimalistic piece that carries the atmosphere of the age when a laptop at home was a rare find, and when old Windows 7 and DVD players were considered super cool.

I found my inspiration in a Klim Type Foundry art piece and wanted to recreate something similar.

I decided to make my art minimalistic and more "mathematical": only two colors (blue and white), strokes and lines, shapes, and a lot of numbers displaying the coordinates of the figures.

The computer randomly chooses the type and coordinates of a figure and draws it, then randomly connects it with other figures by making it either an endpoint of a Bézier curve or the endpoint of one of its control lines. The figures and lines appear slowly and move around, displaying their coordinates on top.

Code

Implementing the animation of line drawing was the most difficult part for me, so I want to highlight it:

//draw connections between objects
for (let c of connections) {
  // animate t from 0 to 1 
  // existing lines stay at 1 (fully drawn)
  c.t = min(c.t + 0.02, 1); 

  noFill();
  stroke(255);
  strokeWeight(1);

  beginShape();
  // draw the curve up to the current value of t;
  // as the loop increments t, (x, y) moves toward the second endpoint, giving a smooth animation
  for (let t = 0; t <= c.t; t += 0.02) {
    let x = bezierPoint(c.a0.x, c.p0.x, c.p1.x, c.a1.x, t);
    let y = bezierPoint(c.a0.y, c.p0.y, c.p1.y, c.a1.y, t); // coordinates of the current point on the curve being drawn
    vertex(x, y); // add the point up to which the curve is drawn
  }
  // add the exact point at c.t so floating-point stepping doesn't leave the curve short
  vertex(bezierPoint(c.a0.x, c.p0.x, c.p1.x, c.a1.x, c.t),
         bezierPoint(c.a0.y, c.p0.y, c.p1.y, c.a1.y, c.t));
  endShape();
  
  // do the same for the control lines: they are also drawn gradually based on t
  let h1x = lerp(c.a0.x, c.p0.x, c.t);
  let h1y = lerp(c.a0.y, c.p0.y, c.t);
  line(c.a0.x, c.a0.y, h1x, h1y);

  let h2x = lerp(c.a1.x, c.p1.x, c.t);
  let h2y = lerp(c.a1.y, c.p1.y, c.t);
  line(c.a1.x, c.a1.y, h2x, h2y);
}

To animate the connections smoothly, I used a normalized time variable, t. Here, t = 0 is the start of the path and t = 1 is the end. Every frame, each connection's t is incremented by 0.02, revealing 2% more of the path.

For the straight lines I used lerp(), and for the curves I used bezierPoint(). In the loop, the parameter of the current point slowly increases from 0 to 1, eventually connecting the first point with the last one as the coordinates of the moving point reach the endpoint's. This creates the illusion of the curve being drawn over time.
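To make the math concrete, here are plain-JavaScript equivalents of the two p5.js helpers, written from the standard formulas (not taken from p5's source):

```javascript
// Linear interpolation: the point a fraction t of the way from a to b.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Cubic Bézier evaluation, the formula behind p5's bezierPoint():
// B(t) = (1-t)^3*a + 3(1-t)^2*t*c1 + 3(1-t)*t^2*c2 + t^3*b,
// where a and b are the anchors and c1, c2 are the control points.
function bezierPoint(a, c1, c2, b, t) {
  const u = 1 - t;
  return u * u * u * a + 3 * u * u * t * c1 + 3 * u * t * t * c2 + t * t * t * b;
}
```

At t = 0 the formula returns the first anchor and at t = 1 the second, which is why stepping t from 0 up to c.t traces the curve from its start toward its end.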

Besides that, the structure of the code is pretty simple. I created a class for the figures, with methods to create, draw, and move them around. After that I connected the figures with lines and stored all the objects and the connections between them in arrays. There's a lot of randomness in the code: the choice of a figure's velocity, its shape, and which figures it will be connected to lies with the computer, not the user.
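As a sketch of what this structure might look like (the class, field, and function names here are my own guesses, not the actual project code), the random choices could live in the figure's constructor, with connections stored as small objects holding two figures, two random control points, and the animation progress t:

```javascript
// Hypothetical Figure class: shape and velocity are chosen randomly,
// so the user has no control over them. Drawing is omitted here.
class Figure {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    const shapes = ['circle', 'square', 'triangle'];
    this.shape = shapes[Math.floor(Math.random() * shapes.length)];
    this.vx = (Math.random() - 0.5) * 2; // random velocity in [-1, 1]
    this.vy = (Math.random() - 0.5) * 2;
  }

  // Move by the velocity, bouncing off the canvas edges.
  move(w, h) {
    this.x += this.vx;
    this.y += this.vy;
    if (this.x < 0 || this.x > w) this.vx *= -1;
    if (this.y < 0 || this.y > h) this.vy *= -1;
  }
}

// A connection: two anchor figures, two random control points,
// and t, the animation progress incremented each frame.
function connect(a, b, w, h) {
  return {
    a0: a,
    a1: b,
    p0: { x: Math.random() * w, y: Math.random() * h },
    p1: { x: Math.random() * w, y: Math.random() * h },
    t: 0,
  };
}
```

In the sketch, all figures would live in one array and all connections in another, with draw() looping over both every frame.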

Reflection

I find this minimalistic and simple art piece very hypnotizing and interesting. Even though it’s super simple, I can feel the emotions and atmosphere I wanted it to have.

However, I think that improving the connections between figures, making them smoother and less jerky, would be great. Also, finding an algorithm that lets the figures move around without overlapping would make the art less messy.

For further improvement, I think that adding sound effects and gravitating the points and lines in a certain pattern that creates some clear shapes of animals/objects would be extremely cool.

Week 1 – Self-portrait

Concept

When I think about portraits, I believe personal characteristics should always be included to reveal who the person really is. For me, my favorite hobby, sports, immediately came to mind. I currently play for the NYUAD volleyball team, and I really love being there and doing my best at practice.

When I study, I usually feel sleepy and I don’t enjoy doing boring homework, but I always have energy for volleyball. So, I decided to make an interactive self-portrait that reflects this contrast. In the sketch, an item (either a book or a ball) is tied to the mouse’s position. When the user double-clicks, my facial expression, clothes, the item itself, and the background all change.

Highlight Code

One part of the code that I particularly liked is the integration of double-clicking:

function doubleClicked() {
  if (thing.chosen_type === 'book' && bg.type === 'library' && character.type === 'sad') {
    thing.chosen_type = 'ball';
    bg.type = 'volleyball';
    character.type = 'happy';
  } else {
    thing.chosen_type = 'book';
    bg.type = 'library';
    character.type = 'sad';
  }
}

Since I have some prior coding experience, I used OOP principles to make the object move with the mouse and change states on a double-click. Even though the code itself is quite simple and straightforward, these interactive elements make a big difference to the overall picture. In this snippet, a double-click changes the attributes of the background, character, and item: it switches each type to the other one, so in the next frame the sketch displays the picture with the changed characteristics.

While creating the self-portrait, I relied on the p5.js reference page and Googled RGB codes for specific colors. I also had to look up the syntax for class and function creation, since I know how to do this in Python, but not JavaScript.

I used simple shapes in my portrait, but the most difficult ones were the arcs. It was hard to find the exact angles to make the arcs look the way I wanted while maintaining the right shape. I had to experiment with different angle values and ellipse diameters to create bangs that looked correct. I also spent time figuring out the angles for the main hair so it would look like a short haircut and not just a squished circle.

Reflection

Reflecting on my work, I definitely want to improve the overall design. I want to add more specific details so the portrait becomes less simplistic. In terms of interaction, adding more complex animations, such as the eyes tracking the ball or book as it moves with the mouse, would be great. Finally, making a 3D version, where the character is a detailed 3D model with depth, shading, and textures, and the object actually rotates in 3D space, would be awesome.