Assignment 3: Ocean – Farah Shaer

Here is my final sketch:

Concept:

For this assignment, I wanted to create an ocean-inspired sketch. It has clouds in the sky, moving waves, a floating boat, and birds flying across the sky. My goal was to make all of the animations smooth and calm rather than random.

Code Highlight:

class Waves {
  constructor(y, noiseLevel, noiseScale) {
    this.y = y; // the vertical position of the wave
    this.noiseLevel = noiseLevel; // the maximum height of the waves
    this.noiseScale = noiseScale; // the smoothness of the waves
    this.t = 0; // the time variable for the animation
  }

  move() {
    this.t += 1; // advance the wave slightly each frame
  }

  display() {
    stroke("rgba(63,63,198,0.45)"); // color of the wave (a bit transparent)
    strokeWeight(4); // thickness of the wave

    // loop from left to right across the canvas
    for (let x = 0; x < width; x += 1) {
      let nx = this.noiseScale * x; // scale the x position so neighboring columns get similar noise values
      let nt = this.noiseScale * this.t; // scale the time variable so the wave changes shape slowly and smoothly

      // noise() returns a value between 0 and 1; multiplying by noiseLevel
      // scales it up to the actual wave height in pixels
      let waveHeight = this.noiseLevel * noise(nx, nt);

      line(x, this.y, x, this.y - waveHeight); // draw one vertical slice of the wave
    }
  }
}


I’m particularly proud of the wave animation. This was my first time working with Perlin noise. I followed the p5.js noise() reference and watched The Coding Train video to understand how Perlin noise creates smooth, natural movement. I think using noise instead of random values made the water feel more realistic and continuous.
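The key property that makes noise() feel smooth is that nearby inputs give nearby outputs. As a rough illustration of that idea (this is a simplified value-noise stand-in in plain JavaScript, not p5's actual Perlin implementation):

```javascript
// Tiny 1D value-noise approximation (NOT p5's actual noise() implementation):
// fixed random values at integer points, smoothly interpolated in between.
const lattice = Array.from({ length: 256 }, () => Math.random());

function smoothNoise(t) {
  const i0 = Math.floor(t) % 256;
  const i1 = (i0 + 1) % 256;
  const frac = t - Math.floor(t);
  // smoothstep easing so the curve has no sharp corners at integer points
  const eased = frac * frac * (3 - 2 * frac);
  return lattice[i0] * (1 - eased) + lattice[i1] * eased; // always in [0, 1]
}

// Like the wave loop: small steps in t give small changes in height,
// which is what keeps the water continuous instead of jittery.
let maxJump = 0;
for (let t = 0; t < 10; t += 0.01) {
  maxJump = Math.max(maxJump, Math.abs(smoothNoise(t + 0.01) - smoothNoise(t)));
}
console.log(maxJump < 0.1); // true: adjacent samples stay close
```

With Math.random() instead, consecutive heights would be completely unrelated, which is exactly the jitter that noise avoids.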

Reflection/future work:

I built the sketch using object-oriented programming with separate classes for the waves, boat, clouds, and birds. I also used arrays to animate multiple objects at once. At the beginning, I started with the clouds, and I tried using the random function to place them, but it looked too messy and chaotic, so I decided to just space them out evenly with a little variation to make it look natural. That way, the clouds felt more organized and intentional. 

Originally, I planned to animate a person with a surfboard moving back and forth, but I decided a floating sailboat would fit the scene better. So I added a boat that floats gently up and down with the waves. I used sin for the floating motion and push and pop with translate to make it easy to move the boat without changing its shape (this is a game-changer). 
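The boat's float reduces to one line of math: a vertical offset that oscillates with time. A minimal sketch of that idea in plain JavaScript, with illustrative names and amplitude/speed values (not the exact ones from the sketch):

```javascript
// Vertical bobbing offset: oscillates between -amplitude and +amplitude.
// Names and numbers are illustrative, not the actual sketch variables.
function bobOffset(frame, amplitude = 10, speed = 0.05) {
  return amplitude * Math.sin(speed * frame);
}

// In draw() this would be used roughly like:
//   push();
//   translate(boatX, boatY + bobOffset(frameCount));
//   ...draw the boat around (0, 0)...
//   pop();
// so the boat's shape code never changes; only where it is drawn does.

console.log(Math.abs(bobOffset(123)) <= 10); // true: offset stays within ±amplitude
```

This is why push/pop with translate is such a game-changer: the boat is always drawn around (0, 0), and only the translation moves.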

Then I felt as if the sketch was too empty, so I added birds that fly from left to right. Their y position changes a little, so they do not move in a straight line. The bird shapes were inspired by another P5 sketch I found online, but I changed the movement and made them fit my sketch.

I think the hardest part was positioning things so everything looked intentional and nice together. The clouds and boat were tricky at first. I also spent a lot of time figuring out how to use push() and translate() to make the boat move without messing up the coordinates. Once I got it down, it was actually really simple using objects and classes. I also felt that object-oriented programming was really useful, since it made it easier for me to create multiple objects without constantly repeating code. For my future work, I do want to lean more towards interactive elements, but I used this assignment to really focus on understanding OOP.

Here are the tutorials I watched and the references/examples I used. I found inspiration for the sailboat and birds in other p5 sketches; I liked the shapes, so I used them as a starting point and modified the code to fit my scene by changing the motion and colors and integrating them into my own object-oriented program:

https://editor.p5js.org/cs105/sketches/iCmF693Ps 

https://editor.p5js.org/zielinsj/sketches/NAzOThoLl#:~:text=Sketch%20Files,37 

https://www.youtube.com/watch?v=Qf4dIN99e2w 

https://www.youtube.com/watch?v=YcdldZ1E9gU 

https://p5js.org/reference/p5/noise/#:~:text=noise()%20is%20used%20to,x%2C%20y%2C%20t)%20

Week 3 Assignment

In terms of concept, I created a 600×500 animation with bubbles of random sizes, each bubble at a different width and height. If you drag the mouse along the canvas, more and more bubbles form, a bit like what you would see in a chemistry reaction. I also added a gimmick where a click stamps "hello" in different languages. Using let x = random(width) and let y = random(height), I spawn bubbles at random positions across the canvas. I commented on the trickier parts of the code, such as the Bubble class and the interaction functions, which makes the logic easy to follow.

In terms of the code, I am particularly proud of the Bubble class section, which is lines 42-50 and 53-60. I set variables x, y, and r so that bubbles form at random positions and sizes. Anything inside setup() runs only once (generate 30 bubbles, then no more), while anything after background(220) in draw() runs again every frame. I also set each bubble to move randomly within (-2, 2) per frame, so the bubbles react to each other at about the right pace, not too fast and not too slow. I chose to fill the bubbles with random colours and removed the stroke so they look more beautifully designed. I originally wanted to use (-5, 5), but that would have made the bubbles react to each other too rapidly, so I went back and watched some of the object-oriented programming examples from Dan Shiffman's coding tutorials, linked below:

https://youtu.be/rHiSsgFRgx4?si=_Fz-81v_ZoDdLj7G

https://youtu.be/T-HGdc8L-7w?si=pm0hI9p7K852YnL0

https://youtu.be/fBqaA7zRO58?si=55BSv8u6SYXY80y0

// ----------------- Bubble and Text Array -----------------
// cite YT Dan Shiffman 7.3
let bubbles = []; // empty store for incoming bubbles

// Greetings: mousePressed stamps greetings on the canvas
let greetings = ["สวัสดี", "Marhaba", "привет", "Hola", "Ciao", "Bonjour", "你好", "Olá"];
let greetIndex = 0; // start with สวัสดี
let texts = []; // empty store for incoming greetings

function setup() {
  createCanvas(600, 500);

  // create 30 bubbles at the start
  for (let i = 0; i < 30; i++) {
    let x = random(width);
    let y = random(height);
    let r = random(10, 60);
    bubbles[i] = new Bubble(x, y, r);
  }
}

function draw() {
  background(220);

  // update + show all bubbles (including new ones from mouseDragged)
  for (let i = 0; i < bubbles.length; i++) {
    bubbles[i].move();
    bubbles[i].show();
  }

  // show all stamped greeting texts
  for (let i = 0; i < texts.length; i++) {
    texts[i].show();
  }

  // instruction text
  fill(0);
  textSize(12);
  text("Drag mouse to add bubbles | Click to stamp greetings", 20, 20);
}

// ----------------- Bubble Class -----------------
// cite youtube Dan Shiffman 6.2
class Bubble {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.col = color(random(255), random(255), random(255)); // random (R, G, B)
  }

  move() {
    this.x += random(-2, 2);
    this.y += random(-2, 2);
  }

  show() {
    noStroke();
    fill(this.col);
    ellipse(this.x, this.y, this.r); // Dan YT 6.3
  }
}

// ----------------- Greeting Text Class -----------------
class GreetingText {
  constructor(x, y, msg) {
    this.x = x;
    this.y = y;
    this.msg = msg;

    // style (random)
    this.size = random(14, 26);
    this.col = color(random(255), random(255), random(255));
  }

  show() {
    fill(this.col);
    textSize(this.size);
    text(this.msg, this.x, this.y);
  }
}

// ----------------- Interaction -----------------
// Drag mouse: add more bubbles
function mouseDragged() {
  let r = random(10, 60);
  let b = new Bubble(mouseX, mouseY, r);
  bubbles.push(b);
}

// Click mouse: stamp greeting words (store as objects)
function mousePressed() {
  let msg = greetings[greetIndex];
  greetIndex = (greetIndex + 1) % greetings.length;
  let t = new GreetingText(mouseX, mouseY, msg);
  texts.push(t);
}
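The greeting cycling in mousePressed() is just modular arithmetic: after the last greeting, (index + 1) % length wraps back to 0. Here it is in isolation, in plain JavaScript:

```javascript
// Cycle through the greetings array; the modulo wraps the index back to 0
// once it reaches the end of the array.
const greetings = ["สวัสดี", "Marhaba", "привет", "Hola", "Ciao", "Bonjour", "你好", "Olá"];
let greetIndex = 0;

function nextGreeting() {
  const msg = greetings[greetIndex];
  greetIndex = (greetIndex + 1) % greetings.length;
  return msg;
}

// Nine clicks: the ninth wraps around to the first greeting again.
const stamped = [];
for (let i = 0; i < 9; i++) stamped.push(nextGreeting());
console.log(stamped[8] === "สวัสดี"); // true
```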

<iframe src="https://editor.p5js.org/po2127/full/D7dM3ayTa"></iframe>

https://editor.p5js.org/po2127/full/D7dM3ayTa

I think what I could improve in the future would be separating the different features, such as bubbles, greetings, and UI text, into their own files and clearly labeled sections, so the sketch stays clean as more code is added. I would also add more interaction logic, such as limiting how many bubbles can appear, letting users delete objects, or animating the greeting text instead of just stamping it in place.


Week 3 Reading – Chris Crawford

The first thing that came to mind when the author talked about how the word "interactivity" is used loosely was AI. Funnily enough, it goes hand in hand with interactivity in a lot of tech products: you will see "INTERACTIVE AND AI!!" plastered all over the advertisements when the product really uses neither; they are just buzzwords to attract customers. I had a similar idea of what interactivity is: to me it is a spectrum; we never say something is simply interactive or not. We say things like "oh, this is really interactive" or "this is somewhat interactive," and so on. We have always treated it as a spectrum, knowingly or unknowingly.

The problem with classifying whether something is highly interactive is that it's all based on perspective. What I mean by that is: treat the spectrum of interactivity as a battery with a fixed maximum capacity. A fully charged battery would mean it's highly interactive. The object defines the battery size, while the person's perspective determines how much of that capacity is meaningful to them. There are objective traits that make something interactive, but how interactive it is will be subjective. For example, take the fridge example with the kids and adults: the kids would tap into that battery supply and use most of it, while the adults would leave most of the battery unused.

Interactivity needs an initiator, a responder, and a balance between the two that keeps the interaction in a meaningful interpretive cycle. This is true and can be objectively stated; however, it is impossible to write a definition of what makes something highly or barely interactive, as this will change from person to person.

One thing this made me realize is that reactivity is not interactivity, and the sketches that we call interactive are simply just reactive to our inputs. To truly implement interactivity, I am thinking of making the program inputs less obvious, meaning the same input might lead to different actions depending on the context, making the user think about what to do.

The Art of Interactive Design – Reading Reflection

After reading Chris Crawford’s perspective on what is considered interactive, I still don’t believe it changed my definition of interactivity. Yes, his idea of interactivity, including listening, speaking, and thinking, was striking; however, it wasn’t enough to change my idea of interactivity. Throughout the text, I was feeling confused about what his true definition of interactivity is. Initially, he mentioned listening, thinking, and speaking; then he went on to describe ideas beyond that scope, and I was left confused. The only thing I would say changed in my perspective is that now I expanded my understanding of the word interactive, and things I wouldn’t have considered to be “interactive” now are (such as conversations).

Yes, I think the fridge lights turning on once the fridge is opened is an interactive element. Perhaps I do agree with Crawford that there are extents to interactivity, since the fridge light turning on and off is not necessarily an intentional interactive element. With that being said, my definition of interactivity is a reciprocated exchange between two parties, and how striking it was to both of them.

In my opinion, I would improve the interactivity of my p5 sketches through expanding the users’ ability to choose exactly what it is they are interacting with. For example, in my balloon-popping sketch, I would like for the users to choose exactly which balloons they want to pop.


Week 3 Assignment – Dina

My Concept:

I won’t lie, I went into the assignment completely blind. This was probably the most challenging assignment for me conceptually. I struggled with understanding the concept of arrays/how they function. After calling four people to come and explain it to me, I got the hang of it.

For my assignment, I was inspired by the idea of popping balloons at a party, and that seemed like something I could replicate through building an array and building a class.

Here is the finished product:


The Process:

Honestly, the process of creating the project began way before I opened the p5 website. I still had a lot of confusion about arrays and how they function, especially with the addition of the “i.” I re-read the lesson’s slides to try to grasp the concept to no avail. I then decided to get help from others who managed to successfully explain the topic.

Since I just got the hang of the topic, I decided that I wanted to create something that encapsulates all of what we did in class this week, from arrays, classes, to interactivity.  I first wanted to create a Sudoku game/grid, then a coloring page, but in the end, I decided that the one thing I could do to incorporate the three main things we took in class is through a balloon-popping experience.

I first started by creating a class for my balloons, creating a motion, display, and bouncing function for them. Afterwards, I created an empty array for my balloons and built a for loop to initialize and create my balloons, their parameters, and details. In order to allow my balloons to appear, bounce, and move as intended, I made a for loop within my draw function. Within the same for loop, I made an if statement to allow a text that says “POP!” to appear whenever the user pops a balloon. I incorporated the mousePressed() function as well as mouseX, to boost interactivity and functionality.

Snippet of Code I’m Proud Of:

There isn’t one particular part of my code that I’m proud of. I’m honestly more proud of the entire code in general, because it was a challenge for me to grasp the concepts and integrate them into one idea. For that, I will paste my main sketch’s code:

let balloons = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 9; i++) {
    balloons[i] = new Balloon(50 * i, random(600), 4 + 1 * i, 4 + 1 * i);
  }
}

function draw() {
  background('pink');
  for (let i = 0; i < balloons.length; i++) {
    balloons[i].display();
    balloons[i].move();
    balloons[i].bounce();

    if (mouseIsPressed) {
      fill('red');
      textSize(100);
      text('POP!', mouseX, mouseY);
    }
  }
}
function mousePressed() {
  // Array.pop() takes no arguments; it removes the last balloon in the array
  balloons.pop();
}
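One limitation worth noting: mousePressed() pops the last balloon in the array no matter where the click lands. A click-accurate alternative (a sketch with hypothetical names, using a plain-JS stand-in for p5's dist() so it runs outside p5) would hit-test each balloon against the mouse position:

```javascript
// Plain-JS stand-in for p5's dist(): Euclidean distance between two points.
function dist(x1, y1, x2, y2) {
  return Math.hypot(x2 - x1, y2 - y1);
}

// Remove only the balloons whose center is within their radius of the click.
// Iterate backwards so splice() doesn't skip elements after a removal.
function popAt(balloons, mx, my) {
  for (let i = balloons.length - 1; i >= 0; i--) {
    const b = balloons[i];
    if (dist(mx, my, b.x, b.y) < b.r) {
      balloons.splice(i, 1);
    }
  }
}

const balloons = [{ x: 100, y: 100, r: 20 }, { x: 300, y: 300, r: 20 }];
popAt(balloons, 105, 95); // click near the first balloon
console.log(balloons.length); // 1: only the clicked balloon was popped
```

Inside a p5 sketch, mousePressed() would simply call popAt(balloons, mouseX, mouseY), which would give the user exactly the "choose which balloon to pop" behavior described in the reflection.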


Reflections:

I'm satisfied with the end product; however, for future work I hope to push and challenge myself even more and create the specific projects I have in mind, like an interactive coloring book, which I know could be created using arrays and OOP.


Reading Reflection- Week #3

The reading has challenged the way I usually think about the word "interactive." Before this, I used the term in a very loose way, often assuming that anything involving a computer or screen was automatically interactive. The author argues that true interactivity is more like a meaningful conversation, where both sides listen, think, and respond within a cycle. This idea made me reflect on my own previous experiences with technology. Many apps and websites that are "interactive" actually feel very one-sided; they simply deliver information rather than genuinely responding to the user. In my own creative coding projects, I sometimes focus more on visual effects than on whether the program is actually engaging in a meaningful exchange with the user. The author's definition helps me see why some digital experiences feel alive and others feel dead. I can think of examples from my classes where a project looked impressive but offered little real interaction, supporting the author's argument that interactivity is about the quality of the exchange, not just the use of buttons or animations.

At the same time, I noticed that the author seems somewhat biased against newer interactive technologies. He writes as if confident that his definition is the correct one and dismisses other common uses of the term as misunderstandings. While I find his argument persuasive, I wonder if he overlooks the fact that interactivity might exist on a spectrum rather than as a strict category. For example, even simple interfaces can create emotional or imaginative engagement, which might also count as a form of interaction. The reading changed my belief that interactivity is a technical feature; I now see it more as a communication process. However, it also raises questions for me: who gets to decide what counts as a meaningful interaction? Can a passive experience still be interactive if it affects the user's thinking? And in art or design, should emotional response matter as much as logical back-and-forth exchange?

Reading Reflection – Week 3

After reading this article, I realized that I had been using the word interactive without fully questioning what it actually means. I mostly associated it with responsiveness and just assumed that if something reacted to my input, it was interactive. The author's examples and definition of interactivity helped me clearly distinguish between interaction, reaction, and even participation. For example, the branch example showed how it was reactive rather than interactive because there was only one active actor. I also really liked how the author argues that reaction, no matter how intense, never becomes interaction. The example with the dancers made it clear to me that, yes, the dancers were dancing with the music (giving a response), but they did not provide any feedback that changed the music or affected it in any way. This also made me question my own work and whether my sketches are more reactive than truly interactive. For example, in my work I created sketches where the system responds immediately to the participant (like pressing a button), but I did not make it actually change or adapt based on their behavior. So I think my previous sketches were more animations than truly interactive pieces, because, yes, they respond to input, but they do not meaningfully listen, think, and respond in the way he describes.

I also like how interactivity was discussed more as a continuous variable rather than binary, especially with the use of the fridge example. I think framing interactivity as high or low rather than interactive or not interactive helped me understand how people can experience the same system differently. I agree with the author’s argument in this case, because it gives designers a more practical way to evaluate the interaction on a scale from high to low, rather than just dismissing the concept altogether. But this has me questioning who gets to define that standard of high or low interactivity, especially when we all can experience systems so differently. 

Also, the difference between the user interface and interactivity became clear to me; at first, I had a hard time distinguishing between them. I usually think visually first and then add interaction later. But now it is clear that a strong interactive system needs to be strong on both fronts, combining form and function, and they should not be separate steps. While I do agree with the author, I think he is somewhat biased towards interaction models that resemble human conversation. This bias most likely comes from his own experience as an interactivity designer, but I think it also limits how other forms of interaction might be perceived and valued. In future work, I want to design my sketches more intentionally, making sure each element has a functional role in the interaction so it does not look like it was all just thrown together.

Week 3 Assignment – Computer Vision

Sketch and video demo below!

(I think you might have to open this in a new tab, as it doesn't seem like computer vision sketches are supported by WordPress…)

Concept:

Well, I have always been interested in learning and testing out computer vision, because I have seen really cool effects; my main inspiration is a woman who goes by the.poet.engineer, and she has absolutely incredible computer vision work.
To make the tracking as smooth as I could, I used code from a GitHub post, which I will talk about later in this post. This took me a ridiculously long time to complete, since I started learning computer vision the same day I made this, but the end result was worth it.

Implementation:

There is a lot that goes into this, 500+ lines of code; however, everything is well commented if you would like to go line by line. Here I will explain the main concepts behind it.

A bit of background: this was done using ml5.js's HandPose. Their hand tracker provides specific keypoints for parts of your hand, which I used to calculate distances, positions, rotation, and so on for all the shapes in this sketch.

I did not use every connection shown in the picture above; I manually selected the keypoints I wanted to use in these two variables:

let fingerconnections = [
  [17, 18],
  [18, 19],
  [19, 20],
  [13, 14],
  [14, 15],
  [15, 16],
  [9, 10],
  [10, 11],
  [11, 12],
  [5, 6],
  [6, 7],
  [7, 8],
  [0, 1],
  [1, 2],
  [2, 3],
  [3, 4],
];
// Using ml5js classification I make a list of the connections I want for a curved line
let singlefinger = [
  [4, 5],
  [17, 16],
  [13, 12],
  [9, 8],
];

The way this works is that every hand seen on screen is added to artInstances. I manually made only 2, but this could be scaled and automated for more than 2 hands… (alien?)

Each hand is assigned an object of our skeletal class, which is where quite literally everything happens.

// Function for smoothing: uses linear interpolation to shorten how much we
// let the detector move per frame. Since I used 0.8 for smoothing, each
// keypoint only moves 80% of the way from its old location to the new one.
function gotHands(results) {
  if (activateSmoothing && hands.length > 0 && results.length > 0) {
    for (let i = 0; i < results.length; i++) {
      if (hands[i] && results[i]) {
        let oldHand = hands[i];
        let newHand = results[i];
        for (let j = 0; j < newHand.keypoints.length; j++) {
          newHand.keypoints[j].x = lerp(
            oldHand.keypoints[j].x,
            newHand.keypoints[j].x,
            smoothing
          );
          newHand.keypoints[j].y = lerp(
            oldHand.keypoints[j].y,
            newHand.keypoints[j].y,
            smoothing
          );
        }
      }
    }
  }
  hands = results;
}

This is the smoothing function that I took and tried to simplify in the comments; it's simply linear interpolation, so the distance moved per frame isn't equal to the full distance the detector reported.
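To see why this damps jitter, here is lerp in isolation (plain JavaScript; the target is held fixed here for clarity, whereas in the sketch it is the live detector output):

```javascript
// Linear interpolation: p5's lerp() computes the same thing.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

let x = 0;          // smoothed keypoint position
const target = 100; // raw detector output (held fixed for this demo)
for (let frame = 0; frame < 5; frame++) {
  // With smoothing = 0.8, each frame closes 80% of the remaining gap,
  // so sudden jumps in the raw keypoints get spread over several frames.
  x = lerp(x, target, 0.8);
}

// After 5 frames the remaining gap is (1 - 0.8)^5 = 0.032% of the distance.
console.log(Math.abs(target - x) < 0.1); // true
```

Because each frame only closes a fraction of the gap, one noisy detection barely moves the drawn skeleton, while a sustained real movement still gets there within a few frames.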

  for (let i = 0; i < hands.length; i++) {
    if (artInstances[i]) {
      // Calling our built-in methods for every hand we have on screen
      artInstances[i].drawline(hands[i], singlefinger);
      artInstances[i].drawdoubleline(hands[i], fingerconnections);
      artInstances[i].drawcircle(hands[i], fingerconnections);
      artInstances[i].drawHUD(hands[i]);
      artInstances[i].drawSpeedHUD(hands[i]);
    }
  }

  drawMusicButton();
}

There are 6 methods in our skeletal class:
- drawline
- drawdoubleline
- drawcircle
- drawHUD
- drawSpeedHUD
- drawPopupHUD (this is used inside drawHUD, so you won't see it called outside the class)

class skeletal {
  constructor(linecolor, circlecolor, hudcolor) {
    // We take the color we want for the line, circle and the hud, as well as initialize the angle and last angle to 0
    this.linecolor = linecolor;
    this.circlecolor = circlecolor;
    this.hudcolor = hudcolor;
    this.hud1angle = 0;
    this.rotationSpeed = 0;
    this.lasthandangle = 0;
    this.popupActive = false;
    this.popupTimer = 0;
  }

There are 8 attributes assigned to each hand; only the first 3 are ones you can choose, and the rest are initialized to 0 / false by default, as they will be changed and used in the calculations inside our methods.

Before I get into the function I am most proud of, I will briefly talk about what each function does.

drawdoubleline: There was no built-in shape that satisfied what I wanted, so I ended up drawing my own shape using beginShape() and endShape() to get that glitchy double line between most of the connections.

drawline: I used the bezier() function here (I calculated anchor points by getting the points 1/3 and 2/3 of the way from keypoint A to keypoint B), and this was used to connect the base of one finger to the top of the next finger with a curved line.

drawHUD: This has 2 elements, the palm HUD and the exterior HUD. The palm HUD is centered by calculating the midpoint between the base of the middle finger (keypoint 9) and the wrist (keypoint 0), and the shapes of the HUD are drawn around that point. I use the map() function so the HUD becomes bigger the further your hand is from the camera, or smaller if it is closer. This is constrained so the HUD can never be too small or too big.
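The scaling-with-constraints idea boils down to p5's map() with its clamping flag. Here is that logic in plain JavaScript, with illustrative numbers rather than the actual values from the sketch:

```javascript
// Plain-JS equivalent of p5's map(value, inMin, inMax, outMin, outMax, clamp).
function map(value, inMin, inMax, outMin, outMax, clamp = false) {
  let out = outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
  if (clamp) {
    const lo = Math.min(outMin, outMax);
    const hi = Math.max(outMin, outMax);
    out = Math.min(Math.max(out, lo), hi);
  }
  return out;
}

// Palm size (pixels between wrist and finger base) drives the HUD radius,
// clamped so the HUD is never too small or too big. Values are illustrative.
const hudRadius = (palmSize) => map(palmSize, 40, 200, 30, 120, true);

console.log(hudRadius(120)); // 75: mid-range palm -> mid-range HUD
console.log(hudRadius(500) === 120); // true: clamped at the maximum
```

Without the clamp, an extreme palm-size reading would send the HUD off to a huge or negative radius for a frame, which is exactly the glitch the constraint prevents.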

drawPopupHUD: This is the exterior HUD, which contains a circle with sound waves along a quarter of it. It only pops up if the hand rotates fast enough, then disappears after a short while. I used Perlin noise to give the waves that random sound-wave effect.

drawcircle: This simply draws a small circle at each keypoint that I saved in the beginning of the program.

Finally the function I am proud of:

drawSpeedHUD(singleHandData) {
    if (!singleHandData) return;

    // We map the keypoints once again to their respective parts.
    let thumbTip = singleHandData.keypoints[4];
    let indexTip = singleHandData.keypoints[8];
    let wrist = singleHandData.keypoints[0];
    let palmBase = singleHandData.keypoints[9];

    if (!thumbTip || !indexTip || !wrist || !palmBase) return;

    // We calculate the palm size and the gap between our thumb and index finger
    let palmSize = dist(wrist.x, wrist.y, palmBase.x, palmBase.y);
    let fingerGap = dist(thumbTip.x, thumbTip.y, indexTip.x, indexTip.y);

    // Check if it is left or right hand to adjust rotation accordingly
    let isLeftHand = singleHandData.handedness === "Left";

    // Calculate the angle of the finger for rotation of the speedometer
    let fingerAngle = atan2(indexTip.y - thumbTip.y, indexTip.x - thumbTip.x);
    // Calculate the rotation and adjust based on left or right hand
    let sideRotation = isLeftHand ? fingerAngle + PI : fingerAngle;

    // Conditions for when the speedometer pops up
    let closeToCamera = palmSize > 80;
    let fingersSpread = fingerGap > 0 && fingerGap < 140;

    if (closeToCamera && fingersSpread) {
      // Map the thumb-index gap to a speed: pinched reads 16, fully spread reads 50
      let speed = map(fingerGap, 0, 130, 16, 50, true);
      let needleAngle = map(speed, 0, 140, PI, -PI);

      let midX = (thumbTip.x + indexTip.x) / 2;
      let midY = (thumbTip.y + indexTip.y) / 2;

      push();
      translate(-width / 2 + midX, -height / 2 + midY);

      // Rotate with our hand
      rotate(sideRotation);

      noFill();
      stroke(this.hudcolor);
      strokeWeight(2);
      // Draw the speedometer arc and ticks.
      arc(0, 0, 110, 110, PI + QUARTER_PI, TWO_PI - QUARTER_PI);

      for (let i = 0; i < 10; i++) {
        let a = map(i, 0, 9, PI + QUARTER_PI, TWO_PI - QUARTER_PI);
        line(cos(a) * 45, sin(a) * 45, cos(a) * 55, sin(a) * 55);
      }

      // Draw the needle
      push();
      rotate(needleAngle - HALF_PI);
      stroke(255, 50, 50);
      strokeWeight(3);
      line(0, 0, 0, -50);
      pop();

      // Draw the speed bar which changes with speed and needle.
      strokeWeight(1);
      rectMode(CENTER);
      noFill();
      rect(0, 25, 60, 5);
      fill(this.hudcolor);
      rectMode(CORNER);
      // Using your speed mapping to fill the 60px wide bar
      rect(-30, 22.5, map(speed, 16, 50, 0, 60, true), 5);

      pop();
    }
  }

This is the code for the interactive aspect of this computer vision piece: the speedometer that responds to the pinching of your thumb and index finger. The speedometer only pops up if your hand is close enough to the camera and the pinch isn't too wide, and the map() function spins the needle and fills the progress bar. The speedometer also rotates with the hand, so it rarely clips through it. This took a huge amount of trial and error: checking which angles work, losing time to the needle pointing under the speedometer or completely opposite of where I wanted it, and the speedometer rotating into the hand and clipping through it. Overall, it was a painful but instructive process.

A small addition is some background music that you can enable or disable. The song is "Color Your Night," part of the Persona soundtrack, and it is an amazing song.

Reflection:

It was a painful but incredibly fun experience. I am really happy I challenged myself and dived into computer vision, because if I had not started now, I might never have, and I am really happy with the end result. There are endless possibilities for interactive art with computer vision, such as animating characters or showcasing fluid dynamics, and so much more that I would like to explore beyond a simple futuristic interactive HUD.

Week 2 — Reading Response

In reflecting on Casey Reas’ discussion of chance operations, I found my fundamental assumptions about artistic intent deeply challenged. I’ve always associated “intent” with manual, top-down control, but Reas’ argument for “jitter” or “noise” as a tool to keep a system from becoming “homogeneous” offers a much more interesting path. I am particularly struck by the idea of “homeostasis” within a system — where a work stays dynamic and shifting while maintaining a consistent texture. This shifts my perspective of the computer from an “obedient tool” to a creative collaborator. By utilizing randomness as a “jumping-off point,” I want to see if I can bypass my own aesthetic biases and “logical nonsense,” similar to how Reas and Tal Rosner used dice-rolls to spark compositions for their Chronograph project.

However, Reas’ mention of Rosalind Krauss’s critique of the “grid” leaves me with a lingering, uncomfortable question: if a work is generated through chance, can it still hold onto the “pure feeling” or “humane quality” that artists like Mondrian or Malevich aimed for? If I follow John Cage’s lead and use chance to “remove myself from the activity,” I worry that the work might lose its connection to social discourse and humanity. Yet, I am increasingly drawn to Gerhard Richter’s provocation that chance is “better” than the artist because it can “destroy what I’ve worked out wrong.” This suggests that the “human” element might actually reside in my willingness to let a system disrupt my own ego.

All these connections reinforced my goal to find an “optimum balance” between total randomness and complete control. Moving forward, I want to experiment with defining “minimal structures” or constraints that allow for “surprising visual qualities,” testing whether a system-based approach can actually feel more organic than one born of purely determined geometry.

Assignment 2: looped in love

For my concept, the repeated shapes in the attached PDFs reminded me of wallpapers and patterned backgrounds. Given that it’s February, I decided to create a static heart wallpaper using a grid of mini hearts. When the hearts are pressed, the background turns red and the hearts begin to move.

I initially started by coding the static wallpaper, but quickly realized that if I wanted to incorporate animation, I needed to plan for it early on. I also added a mouse press function so that I could toggle between the still wallpaper and the moving hearts.

For the hearts, I made them using two ellipses and a triangle, and I used nested loops to place the hearts in a grid, which created the wallpaper effect.

To add animation, I created a variable that updates every frame to move the hearts together. When the hearts are moving, this value is added to their positions, which shifts the grid. I also used a boolean variable with if statements to control when the animation occurs.

if (moving) {
  heartshift += speed;
  if (heartshift > 15 || heartshift < -15) {
    speed *= -1;
  }
}

I'm proud of this part of my code because it controls the movement of the hearts. When the variable (moving) is true, (heartshift) updates every frame, which makes the hearts move. And by adding the conditional statement, the speed reverses when the movement reaches the limit (15), so the hearts go back and forth instead of moving in only one direction.
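Here is that bounce logic run in isolation (plain JavaScript, with an illustrative frame count) to show the shift never escapes the band around ±15:

```javascript
// The heartshift logic by itself: a value that walks up and down,
// reversing its speed at ±15, so the grid oscillates instead of drifting off.
let heartshift = 0;
let speed = 1;

function step() {
  heartshift += speed;
  if (heartshift > 15 || heartshift < -15) speed *= -1;
}

// Run many frames: the shift overshoots to at most 16 before turning back.
let maxAbs = 0;
for (let frame = 0; frame < 1000; frame++) {
  step();
  maxAbs = Math.max(maxAbs, Math.abs(heartshift));
}
console.log(maxAbs <= 16); // true: one overshoot step, then it reverses
```

Note the one-step overshoot: the check happens after the move, so the value briefly reaches 16 before the sign of speed flips. For a hard cap at exactly 15, the value would need to be clamped as well as reversed.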


Overall, I’m happy with the sketch, and I enjoyed experimenting with different variables to see how they affect the hearts. The most challenging part was combining the animation with the nested loops and learning how to incorporate them with the boolean variable and if statements all together. For future ideas, I would like to add more interactions, or make it more intricate.

Reflection Response

Casey’s talk made a great point about randomness vs. order and the idea of losing total control. Reflecting on my past work, especially my heart wallpaper sketch, everything is very structured and controlled. Casey’s talk made me realize that adding small random elements in my sketch, like changes in color or even the movement of the hearts, could make the sketch feel less rigid. I could have created a static wallpaper of mini hearts and then, when pressed, made the hearts move randomly, but I think it is challenging to balance the randomness of the movement, given how rigid the static grid is when the sketch is pressed again.