I won’t lie, I went into the assignment completely blind. This was probably the most challenging assignment for me conceptually. I struggled with understanding the concept of arrays/how they function. After calling four people to come and explain it to me, I got the hang of it.
For my assignment, I was inspired by the idea of popping balloons at a party, and that seemed like something I could replicate through building an array and building a class.
Here is the finished product:
The Process:
Honestly, the process of creating the project began way before I opened the p5 website. I still had a lot of confusion about arrays and how they function, especially the role of the loop index “i.” I re-read the lesson’s slides to try to grasp the concept, to no avail. I then decided to get help from others, who managed to explain the topic successfully.
Since I had just gotten the hang of the topic, I decided that I wanted to create something that encapsulates everything we did in class this week, from arrays and classes to interactivity. I first wanted to create a Sudoku game/grid, then a coloring page, but in the end I decided that the best way to incorporate the three main things we covered in class was a balloon-popping experience.
I first started by creating a class for my balloons, giving it motion, display, and bouncing functions. Afterwards, I created an empty array for my balloons and built a for loop to initialize them with their parameters and details. To make my balloons appear, bounce, and move as intended, I made a for loop within my draw function. Within the same for loop, I added an if statement that displays the text “POP!” whenever the user pops a balloon. I incorporated the mousePressed() function as well as mouseX to boost interactivity and functionality.
Snippet of Code I’m Proud Of:
There isn’t one particular part of my code that I’m proud of. I’m honestly more proud of the entire code in general, because it was a challenge for me to grasp the concepts and integrate them into one idea. For that, I will paste my main sketch’s code:
let balloons = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 9; i++) {
    balloons[i] = new Balloon(50 * i, random(600), 4 + 1 * i, 4 + 1 * i);
  }
}

function draw() {
  background('pink');
  for (let i = 0; i < balloons.length; i++) {
    balloons[i].display();
    balloons[i].move();
    balloons[i].bounce();
    if (mouseIsPressed) {
      fill('red');
      textSize(100);
      text('POP!', mouseX, mouseY);
    }
  }
}

function mousePressed() {
  // Array.pop() takes no argument; it removes the last balloon
  balloons.pop();
}
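The Balloon class itself isn’t shown above. Here is a rough sketch of what such a class could look like; the property names, the balloon size, and the 600×600 bounce limits are my own assumptions, not the original code (the display() method uses p5.js drawing calls and only runs inside the p5 environment):

```javascript
// Hypothetical Balloon class matching the sketch above: constructor
// arguments mirror new Balloon(x, y, xspeed, yspeed).
class Balloon {
  constructor(x, y, xspeed, yspeed) {
    this.x = x;
    this.y = y;
    this.xspeed = xspeed;
    this.yspeed = yspeed;
    this.size = 60; // assumed balloon diameter
  }

  move() {
    // Drift by the current speed each frame
    this.x += this.xspeed;
    this.y += this.yspeed;
  }

  bounce() {
    // Reverse direction at the assumed 600x600 canvas edges
    if (this.x < 0 || this.x > 600) this.xspeed *= -1;
    if (this.y < 0 || this.y > 600) this.yspeed *= -1;
  }

  display() {
    // p5.js drawing calls: balloon body plus a short string
    fill('lavender');
    ellipse(this.x, this.y, this.size, this.size * 1.2);
    line(this.x, this.y + this.size * 0.6, this.x, this.y + this.size * 1.4);
  }
}
```

With this shape, the for loops in setup() and draw() above work as written: each balloon moves, bounces off the edges, and is drawn every frame.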
Reflections:
I’m satisfied with the end product; however, for future work I hope to push and challenge myself even more and create specific projects I have in mind, like an interactive coloring book that I know could be built using arrays and OOP.
The reading has challenged the way I usually think about the word “interactive.” Before this, I used the term in a very loose way, often assuming that anything involving a computer or screen was automatically interactive. The author argues that true interactivity is more like a meaningful conversation, where both sides listen, think, and respond within a cycle. This idea made me reflect on my own previous experiences with technology. Many apps and websites that are “interactive” actually feel very one-sided; they simply deliver information rather than genuinely responding to the user. In my own creative coding projects, I sometimes focus more on visual effects than on whether the program is actually engaging in a meaningful exchange with the user. The author’s definition helps me see why some digital experiences feel alive and others feel dead. I can think of examples from my classes where a project looked impressive but offered little real interaction, supporting the author’s argument that interactivity is about the quality of the exchange, not just the use of buttons or animations.
At the same time, I noticed that the author seems somewhat biased against newer interactive media. He writes as if confident that his definition is the correct one and dismisses other common uses of the term as misunderstandings. While I find his argument persuasive, I wonder if he overlooks the fact that interactivity might exist on a spectrum rather than as a strict category. For example, even simple interfaces can create emotional or imaginative engagement, which might also count as a form of interaction. The reading changed my belief that interactivity is merely a technical feature; I now see it more as a communication process. However, it also raises questions for me: who gets to decide what really counts as a meaningful interaction? Can a passive experience still be interactive if it affects the user’s thinking? And in art or design, should emotional response matter as much as logical back-and-forth exchange?
After reading this article, I realized that I had been using the word interactive without fully questioning what it actually means. I mostly associated it with responsiveness and just assumed that if something reacted to my input, it was interactive. The author’s examples and definition of interactivity helped me clearly distinguish between interaction, reaction, and even participation. For example, the branch example showed how it was more reactive rather than interactive because there was only one active actor. I also really liked how the author argues that reaction, no matter how intense, never becomes interaction. The example with the dancers made it clear to me that, yes, the dancers were dancing with the music (giving a response), but they did not provide any feedback that changed the music or affected it in any way. This also made me question my own work and whether my sketches are more reactive than truly interactive. For example, in my work, I created sketches where the system responds immediately to the participant (like if they press a button), but I did not make it actually change or adapt based on their behavior. So I think my previous sketches were more about animations rather than interactive ones because, yes, it responds to input, but does not meaningfully listen, think, and respond in the way he describes.
I also like how interactivity was discussed more as a continuous variable rather than binary, especially with the use of the fridge example. I think framing interactivity as high or low rather than interactive or not interactive helped me understand how people can experience the same system differently. I agree with the author’s argument in this case, because it gives designers a more practical way to evaluate the interaction on a scale from high to low, rather than just dismissing the concept altogether. But this has me questioning who gets to define that standard of high or low interactivity, especially when we all can experience systems so differently.
Also, the difference between the user interface and interactivity became clear. At first, I had a hard time distinguishing between them. I usually think visually first, and then add interaction later. But now it is clear that a strong interactive system needs to combine both form and function together; they should not be separate steps. While I do agree with the author, I think he is somewhat biased toward interaction models that resemble human conversation. This bias most likely comes from his own experience as an interactivity designer, but I think it also limits how other forms of interaction might be perceived and valued. In future work, I want to design my sketches more intentionally, making sure each element has a functional role in the interaction so it does not look like it was just thrown together.
(I think you might have to open this in a new tab, as it doesn’t seem like computer vision sketches are supported by WordPress…)
Concept:
Well, I have always been interested in learning and testing out computer vision, because I have seen some really cool effects. My main inspiration is an artist who goes by the.poet.engineer, who has absolutely incredible computer vision work.
To make the tracking as smooth as I could, I used code from a GitHub post, which I will talk about later in this post. This took me a ridiculously long time to complete, since I started learning computer vision the same day I made this, but the end result was worth it.
Implementation:
There is a lot that goes into this (500+ lines of code); everything is well commented if you would like to go line by line, but I will explain the main concepts for creating this here.
A bit of background: this was done using ml5.js’s HandPose. Its hand tracker provides specific keypoints for parts of your hand, which I used to calculate distances, positions, rotation, and so on for all the shapes used in this.
I did not use every connection shown in the picture above; I manually selected the keypoint pairs I wanted to use in these two variables:
let fingerconnections = [
  [17, 18],
  [18, 19],
  [19, 20],
  [13, 14],
  [14, 15],
  [15, 16],
  [9, 10],
  [10, 11],
  [11, 12],
  [5, 6],
  [6, 7],
  [7, 8],
  [0, 1],
  [1, 2],
  [2, 3],
  [3, 4],
];

// Using ml5js classification I make a list of the connections I want for a curved line
let singlefinger = [
  [4, 5],
  [17, 16],
  [13, 12],
  [9, 8],
];
The way this works is that every hand seen on screen is added to artInstances. I manually made just two, but this could be scaled and automated for more than two hands (alien?).
Each hand is assigned an instance of our skeletal class, which is where quite literally everything happens.
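As a hedged sketch of how the per-hand instances could be automated rather than made by hand, here is one way to grow the list to match however many hands the tracker reports. The helper name ensureInstances and the color palette are my own inventions, not the original code:

```javascript
// Hypothetical helper: grow artInstances until there is one instance
// per detected hand, cycling through a palette for the three colors
// the skeletal constructor takes (line, circle, HUD).
const palette = [
  ["cyan", "magenta", "white"],
  ["lime", "orange", "yellow"],
];

function ensureInstances(artInstances, hands, SkeletalClass) {
  while (artInstances.length < hands.length) {
    const [lineC, circleC, hudC] =
      palette[artInstances.length % palette.length];
    artInstances.push(new SkeletalClass(lineC, circleC, hudC));
  }
  return artInstances;
}
```

Calling this at the top of draw() would mean a third (alien?) hand automatically gets its own colored skeleton.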
// Function for smoothing: uses linear interpolation to shorten how much
// we let the detector move. With smoothing = 0.8, a keypoint moves only
// 80% of the way from its old location toward the newly detected one.
function gotHands(results) {
  if (activateSmoothing && hands.length > 0 && results.length > 0) {
    for (let i = 0; i < results.length; i++) {
      if (hands[i] && results[i]) {
        let oldHand = hands[i];
        let newHand = results[i];
        for (let j = 0; j < newHand.keypoints.length; j++) {
          newHand.keypoints[j].x = lerp(
            oldHand.keypoints[j].x,
            newHand.keypoints[j].x,
            smoothing
          );
          newHand.keypoints[j].y = lerp(
            oldHand.keypoints[j].y,
            newHand.keypoints[j].y,
            smoothing
          );
        }
      }
    }
  }
  hands = results;
}
This is the smoothing function that I borrowed and tried to simplify in the comments. It is simply linear interpolation, so the drawn position moves only a fraction of the distance the detected keypoint actually covered.
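Outside of p5.js the same idea fits in a few lines. This standalone sketch re-implements lerp (p5 provides its own) to show how a smoothing factor of 0.8 moves a tracked point 80% of the way toward each new detection:

```javascript
// Plain re-implementation of p5's lerp: a + (b - a) * t
function lerp(a, b, t) {
  return a + (b - a) * t;
}

const smoothing = 0.8;

// Blend a keypoint's new detected position toward its previous one
function smoothKeypoint(oldPt, newPt) {
  return {
    x: lerp(oldPt.x, newPt.x, smoothing),
    y: lerp(oldPt.y, newPt.y, smoothing),
  };
}

// A point detected jumping from (0, 0) to (100, 100) is drawn at (80, 80);
// the remaining 20% is covered over the next frames, which is what
// removes the jitter.
smoothKeypoint({ x: 0, y: 0 }, { x: 100, y: 100 }); // → { x: 80, y: 80 }
```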
for (let i = 0; i < hands.length; i++) {
  if (artInstances[i]) {
    // Calling our class methods for every hand we have on screen
    artInstances[i].drawline(hands[i], singlefinger);
    artInstances[i].drawdoubleline(hands[i], fingerconnections);
    artInstances[i].drawcircle(hands[i], fingerconnections);
    artInstances[i].drawHUD(hands[i]);
    artInstances[i].drawSpeedHUD(hands[i]);
  }
}
drawMusicButton();
} // end of draw()
There are six methods in our skeletal class:
– drawline
– drawdoubleline
– drawcircle
– drawHUD
– drawSpeedHUD
– drawPopupHUD (this is used inside drawHUD, so you won’t see it called outside the class)
class skeletal {
  constructor(linecolor, circlecolor, hudcolor) {
    // Store the colors we want for the line, circle, and HUD, and
    // initialize the angle and last angle to 0
    this.linecolor = linecolor;
    this.circlecolor = circlecolor;
    this.hudcolor = hudcolor;
    this.hud1angle = 0;
    this.rotationSpeed = 0;
    this.lasthandangle = 0;
    this.popupActive = false;
    this.popupTimer = 0;
  }
There are eight attributes assigned to each hand; only the first three are ones you choose, and the rest default to 0 or false, since they will be changed and used in calculations by our methods.
Before I get into the function I am most proud of, I will briefly talk about what each function does.
drawdoubleline: There was no built-in shape that satisfied what I wanted, so I ended up drawing my own shape using beginShape() and endShape() to get that glitchy double line between most of the connections.
drawline: I used the bezier() function here (computing anchor points 1/3 and 2/3 of the way from keypoint A to keypoint B), which connects the base of each finger to the top of the next finger with a curved line.
drawHUD: This has two elements, the palm HUD and the exterior HUD. The palm HUD is centered on the midpoint between the base of the middle finger (keypoint 9) and the wrist (keypoint 0), and the HUD’s shapes are drawn around that point. I use the map() function to scale it with the hand’s distance from the camera, with constraints so the HUD can never be too small or too big.
drawPopupHUD: This is the exterior HUD, a circle with sound waves along a quarter of it. It only pops up if the hand rotates fast enough, then disappears after a short while. I used Perlin noise to give the waves that random sound-wave effect.
drawcircle: This simply draws a small circle at each keypoint that I saved in the beginning of the program.
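The scale-with-distance trick used in drawHUD can be sketched on its own. Here map() and constrain() are re-implemented (p5 has both built in), and the palm-size range and HUD radius limits are invented numbers for illustration, not the values from the actual sketch:

```javascript
// Plain versions of p5's constrain() and map()
function constrain(v, lo, hi) {
  return Math.min(Math.max(v, lo), hi);
}

function map(value, inLo, inHi, outLo, outHi, clamp = false) {
  const out = outLo + ((value - inLo) / (inHi - inLo)) * (outHi - outLo);
  if (!clamp) return out;
  return constrain(out, Math.min(outLo, outHi), Math.max(outLo, outHi));
}

// Hypothetical sizing rule: a palm measuring 40-200px on screen maps to
// a HUD radius of 30-120px, clamped so the HUD can never be too small
// or too big no matter where the hand is.
function hudRadius(palmSize) {
  return map(palmSize, 40, 200, 30, 120, true);
}

hudRadius(40);   // → 30 (smallest allowed)
hudRadius(120);  // → 75 (halfway)
hudRadius(1000); // → 120 (clamped at the maximum)
```

The final boolean argument is the same "constrain the output" flag the sketch passes to map() when computing the speed.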
Finally the function I am proud of:
drawSpeedHUD(singleHandData) {
  if (!singleHandData) return;
  // Map the keypoints once again to their respective parts
  let thumbTip = singleHandData.keypoints[4];
  let indexTip = singleHandData.keypoints[8];
  let wrist = singleHandData.keypoints[0];
  let palmBase = singleHandData.keypoints[9];
  if (!thumbTip || !indexTip || !wrist || !palmBase) return;
  // Calculate the palm size and the gap between thumb and index finger
  let palmSize = dist(wrist.x, wrist.y, palmBase.x, palmBase.y);
  let fingerGap = dist(thumbTip.x, thumbTip.y, indexTip.x, indexTip.y);
  // Check if it is a left or right hand to adjust rotation accordingly
  let isLeftHand = singleHandData.handedness === "Left";
  // Angle between the fingertips, used to rotate the speedometer
  let fingerAngle = atan2(indexTip.y - thumbTip.y, indexTip.x - thumbTip.x);
  // Flip the rotation for the left hand
  let sideRotation = isLeftHand ? fingerAngle + PI : fingerAngle;
  // Conditions for when the speedometer pops up
  let closeToCamera = palmSize > 80;
  let fingersSpread = fingerGap > 0 && fingerGap < 140;
  if (closeToCamera && fingersSpread) {
    // Map the thumb-index gap to a speed between 16 and 50
    let speed = map(fingerGap, 0, 130, 16, 50, true);
    let needleAngle = map(speed, 0, 140, PI, -PI);
    let midX = (thumbTip.x + indexTip.x) / 2;
    let midY = (thumbTip.y + indexTip.y) / 2;
    push();
    translate(-width / 2 + midX, -height / 2 + midY);
    // Rotate with the hand
    rotate(sideRotation);
    noFill();
    stroke(this.hudcolor);
    strokeWeight(2);
    // Draw the speedometer arc and ticks
    arc(0, 0, 110, 110, PI + QUARTER_PI, TWO_PI - QUARTER_PI);
    for (let i = 0; i < 10; i++) {
      let a = map(i, 0, 9, PI + QUARTER_PI, TWO_PI - QUARTER_PI);
      line(cos(a) * 45, sin(a) * 45, cos(a) * 55, sin(a) * 55);
    }
    // Draw the needle
    push();
    rotate(needleAngle - HALF_PI);
    stroke(255, 50, 50);
    strokeWeight(3);
    line(0, 0, 0, -50);
    pop();
    // Draw the speed bar, which changes with the speed and needle
    strokeWeight(1);
    rectMode(CENTER);
    noFill();
    rect(0, 25, 60, 5);
    fill(this.hudcolor);
    rectMode(CORNER);
    // Use the speed mapping to fill the 60px-wide bar
    rect(-30, 22.5, map(speed, 16, 50, 0, 60, true), 5);
    pop();
  }
}
This is the code for the interactive aspect of this computer vision piece: the speedometer that moves as you pinch your thumb and index finger. The speedometer only pops up if your hand is close enough to the camera and the pinch isn’t too wide, and the map() function spins the needle and fills the progress bar. The speedometer also rotates with the hand, so it rarely clips through it. This took so much trial and error: checking which angle works, the needle going under the speedometer or pointing the complete opposite of where I wanted, the speedometer rotating toward the hand and clipping through it. Overall, it was a painful process of trial and error.
A small addition is some background music that you can enable or disable. The song is “Color Your Night,” part of the soundtrack for Persona, and it is an amazing song.
Reflection:
It was a painful but incredibly fun experience. I am really happy I challenged myself and dived into computer vision, because if I hadn’t pushed myself I never would have tried it, and I am really happy with the end result. There are endless possibilities for interactive art with computer vision, such as animating characters or showcasing fluid dynamics, and so much more that I would like to explore beyond a simple futuristic interactive HUD.
Watching Casey Reas’ talk influenced the way I think about randomness in interactive artwork. It made me reflect on how, in traditional art, we often try to control every aspect of every element. I typically do not plan every aspect of my own art pieces; sometimes I go with the flow, and sometimes I see where my thoughts take me. However, Reas’ discussion of “systems” rather than “images” challenges my creative process.
This raises the question of when computational art is truly random, or whether it is our own thoughts and decisions that contribute to the apparent randomness of the work. I wonder how much control I actually have over “chance” in a digital system and whether the unpredictability is generated by the algorithm or by the choices I make in setting up the system. This makes me reconsider the balance between control and randomness in creative processes.
So, I want to say that randomness is still a type of control, because I think we will never achieve full randomness. This is evident in chaos theory: up close it looks chaotic, but as we zoom out to see the full picture, it becomes evident that it is a longer form of a pattern, more complex, yet still with some type of control.
Watching this talk, what came to my mind was how deeply rooted the tension between chaos and order is in both art and human history. I found myself reflecting on how, for centuries, artists were expected to create structure and clarity in response to a world perceived as chaotic. This made sense within a Newtonian worldview, where the universe felt stable, predictable, and governed by clear rules. As the speaker moved through moments of scientific disruption and historical trauma, especially the impact of World War I, I began to see how those events made strict order feel insufficient. It became clear to me that the shift toward chance and randomness in art was not a rejection of meaning, but an attempt to better reflect the uncertainty and complexity of lived experience.
What struck me most was the idea that chance is never truly blind. As I watched the examples of algorithmic and generative work, I kept thinking about how these systems are carefully designed yet allowed to behave unpredictably. This balance between control and surrender felt deeply human to me. I realized that using randomness is not about giving up responsibility as an artist, but about trusting processes to reveal patterns that cannot be forced. In that sense, chaos becomes a collaborator rather than a threat. It allows art to remain alive over time, continuously changing while still holding a coherent identity.
My concept draws inspiration from the interactive projections often seen in galleries and museums. As I mentioned in my previous post, I wanted to include an interactive art piece that allows the audience to both enjoy and engage with it.
How was it made:
I used the ML5.js body segmentation example as a source, which captures live video from the camera. I then looped through the mask to keep the shapes inside the person mask and drew shapes only where a person was detected. If the mask pixel was black (not a person), it looped to draw random shapes instead.
I used ML5.js documentation and example for my source body segmentation used in my code.
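The mask-scanning step described above can be sketched independently of the camera. This hedged example treats the mask as a flat RGBA pixel array (the layout p5 exposes after loadPixels()) and collects coarse grid positions where the mask marks a person. The 10px grid step and the "non-zero alpha means person" test are my assumptions for illustration; the real ml5 mask encoding may differ:

```javascript
// Hypothetical mask scan: walk a width x height RGBA pixel array on a
// coarse grid and return the points where the mask says "person".
// Each pixel occupies 4 bytes (R, G, B, A); offset +3 is the alpha.
function personPoints(pixels, width, height, step = 10) {
  const points = [];
  for (let y = 0; y < height; y += step) {
    for (let x = 0; x < width; x += step) {
      const alpha = pixels[(y * width + x) * 4 + 3];
      if (alpha > 0) points.push({ x, y });
    }
  }
  return points;
}
```

In the sketch, each returned point would become a shape drawn on the canvas, and the positions where nothing was detected would get the random shapes instead.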
A highlight of some code that you’re particularly proud of:
let bodySegmentation;
let video;
let segmentation;

let options = {
  maskType: "person",
};

function preload() {
  bodySegmentation = ml5.bodySegmentation("BodyPix", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodySegmentation.detectStart(video, gotResults);
}

// Callback from detectStart: store the latest segmentation result
function gotResults(result) {
  segmentation = result;
}

function draw() {
  background(255);
  // Draw the video
  image(video, 0, 0);
  if (segmentation) {
    // Load the pixels of the mask to check each pixel
    segmentation.mask.loadPixels();
    // … (the rest of the loop reads the mask pixels and draws shapes)
  }
}
This part of the code taught me a lot about body segmentation and how interactive projections work from behind the scenes.
Embedded sketch:
Note: For it to work, please click the link and allow camera access.
Reflection and ideas for future work or improvements:
I am happy that I learned how to use ML5.js, and I look forward to using it in future projects. For this project, I would like to implement more details, using something like FaceMesh, which could affect the shape or color depending on your visible motions.
Upon watching Casey Reas’ talk, I became interested in his approach of using artificial, computational processes to create something that feels “organic,” as he says. Another thing that resonated with me was when he mentioned how he was more interested in the paths that elements were taking while moving rather than in their individual locations at specific moments. I plan on keeping these two ideas in mind for my future works. I want to apply this thinking by designing systems where the journey and behavior of elements matter more than their fixed positions, letting randomness guide their movements while maintaining an intentional overall direction or purpose, creating something with that “organic” feeling, where the work doesn’t look rigidly programmed but instead has a natural, living quality to it.
I think the optimum balance lies in incorporating random elements by establishing controlled boundaries while allowing freedom within those constraints. Simply put, controlling where something can exist or move, but letting how it moves or which specific direction it takes be determined by chance. There would be a defined space or ruleset, with randomness operating freely within those parameters. In this way, everything still feels intentional and cohesive because it’s all happening within a thoughtfully designed framework.
“One must still have chaos in oneself to be able to give birth to a dancing star.” — Friedrich Nietzsche
Concept:
For this assignment, I decided to go for a piece about how I view life. In my opinion, life is quite chaotic, and we feel like we don’t really have any control over it. But at some point, we stand back and stop for a moment to get our lives in order. We do this through scheduling, persistence in our work, and even making sure that nothing will surprise us. But no matter how much we try, there will always be at least a little bit of chaos in the order, which is why the piece below represents that very feeling.
How it’s made:
This actually reuses some code from my last assignment, namely the random ellipses moving in different directions. It uses a simple for loop that constantly picks a random color for the ellipses and places them randomly on the canvas. I made them small; if they were bigger, they would be denser and you practically couldn’t see much.
That in itself is meant to represent chaos. There’s no specific order or movement that the ellipses follow, no specific size they must have and no specific color they need to embody. To me, this represents pure chaos.
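A hedged sketch of what that chaos loop could look like, written here so the circles are generated rather than drawn (in p5, each entry would become a fill() and ellipse() call); the canvas size, circle count, and maximum size are illustrative values, not the originals:

```javascript
// Hypothetical version of the chaos loop: random position, random size,
// random color, no pattern anywhere.
function randomCircles(count, width, height, maxSize = 30) {
  const circles = [];
  for (let i = 0; i < count; i++) {
    circles.push({
      x: Math.random() * width,
      y: Math.random() * height,
      size: Math.random() * maxSize,
      color: [Math.random() * 255, Math.random() * 255, Math.random() * 255],
    });
  }
  return circles;
}
```

Even here, the "chaos" is bounded: every circle still lands inside the canvas, which hints at the order-within-chaos theme of the piece.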
But I then wanted to represent the logic and order our lives embody, so I thought the best way to do that was to get the ellipses under control. I was thinking about the best way to represent order, and I thought:
“To make order from chaos, there needs to be a base line structure to follow.”
Wait, a line? Yeah, that could work. But to actually showcase more ellipses, I needed to make a 2D grid built only from ellipses. So I tapped into some coding knowledge from before and made a nested for loop.
A highlighted bit of code I’m proud of:
I really like the nested for loop that I made to represent order. It uses the width and height of the canvas and sorts the ellipses into rows and columns. It’s quite simple, honestly, when you think about it. I also made sure to offset them a bit so they don’t stick to the edges of the canvas.
if (pause == true) {
  background(255);
  for (let i = 0; i < widthCanvas; i = i + 50) {
    for (let n = 0; n < heightCanvas; n = n + 50) {
      fill(random(255), random(255), random(255), random(255));
      ellipse(i + 25, n + 25, random(50));
    }
  }
}
But I thought about adding a bit more to this. Of course, order can be easily represented with rows of ellipses in the same color. But I feel that no matter how much order exists, there’s still a little element of chaos inside.
Which is why I made the ellipses have different colors and different sizes, while still staying in line with the order created. To me, it represents exactly that: a little bit of chaos can still exist.
Reflection:
I enjoyed making this simple effect. Originally I wanted to be ambitious and go for an effect where if I click the mouse at a certain point in the canvas, then all of the circles would come towards the mouse. And when the user would let go, they would go back to the random movements. Unfortunately I struggled and couldn’t think of a way to go about it.
But that’s not to say this wasn’t good. I think that with more time to tinker and play around, perhaps someday I could make it have cool animations and effects.
How are you planning to incorporate random elements into your work? Where do you feel is the optimum balance between total randomness and complete control?
Before answering these questions, I wanted to implement the concepts the video was talking about myself: randomness. One quote by Casey Reas really stayed with me: “change it by a little so that movement creates chaos in an ordered way.” And I think that perfectly represents what I was trying to do in my work. The things I changed to create this element of randomness were data, small pieces of data that might seem insignificant at first, but machines are built on that data, so even the smallest thousandth can make a difference.
This brings me to another quote from the video, about Dadaism: “Dada wished to replace the logical nonsense of the men of today with an illogical nonsense.” To what point is something considered logical? Computers are supposed to be based on pure logic. And yet, artists find ways to turn that logic into illogical sense. And yet, it still has meaning. It’s like deconstructing the logic embedded in the machine with the purpose of illogically creating something that has meaning behind it. Or the other way around, creating something without meaning by using the logic of the computer. Either way, I feel this can be applied to the artworks shown in the video.
A quote by Gerhard Richter captures this idea very well: “Above all, it’s never a blind chance; it’s a chance that is always planned but also always surprising.” These artists construct from chance as a base. It’s about bringing in disorder that has been curated with intention, in a way that even surprises the artist themselves. I think that finding this balance between total randomness and complete control is about using logic and repetition, patterns and algorithms that allow you to repeat a process, while the outcome is completely different, but the essence of it remains present.
Something that really caught my attention in the video was the idea of the million random digitized values and the 1,000 normal deviates. It honestly amazed me to think about the power, the expectation, and the importance that randomness has in our lives when we understand that art imitates or simulates the real world, and the real world is chaotic and not curated by anyone, but rather filled with randomness. Without a doubt, this video and this work opened my eyes and helped me understand even better the importance of chaos within order, and order within chaos.