After reading Chris Crawford’s perspective on what counts as interactive, I don’t believe it changed my definition of interactivity. Yes, his idea of interactivity as listening, thinking, and speaking was striking; however, it wasn’t enough to change my own. Throughout the text I felt confused about what his true definition of interactivity really is: initially he mentioned listening, thinking, and speaking, then he went on to describe ideas beyond that scope, and I was left confused. The one thing that did change is that my understanding of the word interactive has expanded, and things I wouldn’t previously have considered “interactive” now are (such as conversations).
Yes, I think the fridge light turning on once the fridge is opened is an interactive element. Perhaps I do agree with Crawford that interactivity comes in degrees, since the fridge light turning on and off is not necessarily an intentional interactive element. With that said, my definition of interactivity is a reciprocated exchange between two parties, weighed by how striking that exchange is to both of them.
In my opinion, I would improve the interactivity of my p5 sketches by expanding the users’ ability to choose exactly what they are interacting with. For example, in my balloon-popping sketch, I would like users to choose exactly which balloons they want to pop.
I won’t lie, I went into this assignment completely blind. It was probably the most conceptually challenging assignment for me. I struggled to understand what arrays are and how they function. After calling four people to come and explain it to me, I got the hang of it.
For my assignment, I was inspired by the idea of popping balloons at a party, and that seemed like something I could replicate through building an array and building a class.
Here is the finished product:
The Process:
Honestly, the process of creating the project began well before I opened the p5 website. I still had a lot of confusion about arrays and how they function, especially the role of the “i” index. I re-read the lesson’s slides to try to grasp the concept, to no avail. I then decided to get help from others, who managed to explain the topic successfully.
Since I had just gotten the hang of the topic, I decided I wanted to create something that encapsulates everything we did in class this week, from arrays and classes to interactivity. I first wanted to create a Sudoku game/grid, then a coloring page, but in the end I decided the one project that could incorporate all three main concepts from class was a balloon-popping experience.
I first started by creating a class for my balloons, with move, display, and bounce methods. Afterwards, I created an empty array for my balloons and built a for loop to initialize them with their parameters and details. To allow my balloons to appear, bounce, and move as intended, I made a for loop within my draw function. Within the same loop, I added an if statement so that the text “POP!” appears whenever the user pops a balloon. I incorporated the mousePressed() function as well as mouseX and mouseY to boost interactivity and functionality.
Snippet of Code I’m Proud Of:
There isn’t one particular part of my code that I’m proud of. I’m honestly more proud of the entire code in general, because it was a challenge for me to grasp the concepts and integrate them into one idea. For that, I will paste my main sketch’s code:
let balloons = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 9; i++) {
    balloons[i] = new Balloon(50 * i, random(600), 4 + 1 * i, 4 + 1 * i);
  }
}

function draw() {
  background('pink');
  for (let i = 0; i < balloons.length; i++) {
    balloons[i].display();
    balloons[i].move();
    balloons[i].bounce();
    // Show a "POP!" label at the cursor while the mouse is held down
    if (mouseIsPressed) {
      fill('red');
      textSize(100);
      text('POP!', mouseX, mouseY);
    }
  }
}

function mousePressed() {
  // Array.pop() takes no argument; it removes the last balloon in the array
  balloons.pop();
}
Reflections:
I’m satisfied with the end product; however, for future work I hope to push and challenge myself even more and create specific projects I have in mind, like an interactive coloring book that I know could be built using arrays and OOP.
The reading has challenged the way I usually think about the word “interactive.” Before this, I used the term in a very loose way, often assuming that anything involving a computer or screen was automatically interactive. The author argues that true interactivity is closer to a meaningful conversation, where both sides listen, think, and respond in a cycle. This idea made me reflect on my own previous experiences with technology. Many apps and websites that are “interactive” actually feel very one-sided; they simply deliver information rather than genuinely responding to the user. In my own creative coding projects, I sometimes focus more on visual effects than on whether the program is actually engaging in a meaningful exchange with the user. The author’s definition helps me see why some digital experiences feel alive and others feel dead. I can think of examples from my classes where a project looked impressive but offered little real interaction, supporting the author’s argument that interactivity is about the quality of the exchange, not just the use of buttons or animations.
At the same time, I noticed that the author seems somewhat biased against newer interactive media. He writes with full confidence that his definition is the correct one and dismisses other common uses of the term as misunderstandings. While I find his argument persuasive, I wonder if he overlooks the possibility that interactivity exists on a spectrum rather than as a strict category. For example, simple interfaces can create emotional or imaginative engagement, which might also count as a form of interaction. The reading changed my belief that interactivity is merely a technical feature; I now see it more as a communication process. However, it also raises questions for me: who gets to decide what really counts as a meaningful interaction? Can a passive experience still be interactive if it affects the user’s thinking? And in art or design, should emotional response matter as much as logical back-and-forth exchange?
After reading this article, I realized that I had been using the word interactive without fully questioning what it actually means. I mostly associated it with responsiveness and assumed that if something reacted to my input, it was interactive. The author’s examples and definition of interactivity helped me clearly distinguish between interaction, reaction, and even participation. For example, the branch example showed how it was reactive rather than interactive because there was only one active actor. I also really liked how the author argues that reaction, no matter how intense, never becomes interaction. The example with the dancers made it clear to me that, yes, the dancers were dancing with the music (giving a response), but they did not provide any feedback that changed the music or affected it in any way. This also made me question my own work and whether my sketches are more reactive than truly interactive. For example, I have created sketches where the system responds immediately to the participant (like when they press a button), but I did not make it actually change or adapt based on their behavior. So I think my previous sketches were more like animations than interactive pieces: yes, they respond to input, but they do not meaningfully listen, think, and respond in the way he describes.
I also like how interactivity was discussed more as a continuous variable rather than binary, especially with the use of the fridge example. I think framing interactivity as high or low rather than interactive or not interactive helped me understand how people can experience the same system differently. I agree with the author’s argument in this case, because it gives designers a more practical way to evaluate the interaction on a scale from high to low, rather than just dismissing the concept altogether. But this has me questioning who gets to define that standard of high or low interactivity, especially when we all can experience systems so differently.
Also, the difference between the user interface and interactivity became clear. At first, I had a hard time distinguishing between them. I usually think visually first and then add interaction later. But now it is clear that a strong interactive system needs to be strong on both aspects, combining form and function rather than treating them as separate steps. While I do agree with the author, I think he is somewhat biased towards interaction models that resemble human conversation. This bias most likely comes from his own experience as an interactivity designer, but I think it also limits how other forms of interaction might be perceived and valued. In future work, I want to design my sketches more intentionally, making sure each element has a functional role in the interaction so it does not look like it was just thrown together.
(I think you might have to open this in a new tab, as it doesn’t seem like computer vision sketches are supported by WordPress…)
Concept:
Well, I have always been interested in learning and testing out computer vision, because I have seen really cool effects. My main inspiration is a woman who goes by the.poet.engineer, and she has absolutely incredible computer vision work.
To make the tracking as smooth as I could, I used code from a GitHub post, which I will talk about later in this post. This took me a ridiculously long time to complete, since I started learning computer vision the same day I made this, but the end result was worth it.
Implementation:
There is a lot that goes into this (500+ lines of code); however, everything is well commented if you would like to go line by line. Here I will explain the main concepts behind it.
A bit of background: this was done using ml5.js’s HandPose, whose hand tracker provides specific keypoints for parts of your hand. These were used to calculate distances, positions, rotations and so on for all the shapes used here.
I did not use every connection shown in the picture above; I manually selected the keypoints that I wanted to use in these 2 variables:
let fingerconnections = [
  [17, 18],
  [18, 19],
  [19, 20],
  [13, 14],
  [14, 15],
  [15, 16],
  [9, 10],
  [10, 11],
  [11, 12],
  [5, 6],
  [6, 7],
  [7, 8],
  [0, 1],
  [1, 2],
  [2, 3],
  [3, 4],
];

// Using ml5js classification I make a list of the connections I want for a curved line
let singlefinger = [
  [4, 5],
  [17, 16],
  [13, 12],
  [9, 8],
];
The way this works is that every hand seen on screen is added to artInstances. I manually made just 2, but this could be scaled up and automated for more than 2 hands… (alien?)
Each hand is assigned an object of our class skeletal, which is where quite literally everything happens.
// Function for smoothing, uses linear interpolation to shorten how much we want the detector to move, since I used 0.8 for smoothing that means it only moves 80% of the way from original location to next location.
function gotHands(results) {
  if (activateSmoothing && hands.length > 0 && results.length > 0) {
    for (let i = 0; i < results.length; i++) {
      if (hands[i] && results[i]) {
        let oldHand = hands[i];
        let newHand = results[i];
        for (let j = 0; j < newHand.keypoints.length; j++) {
          newHand.keypoints[j].x = lerp(
            oldHand.keypoints[j].x,
            newHand.keypoints[j].x,
            smoothing
          );
          newHand.keypoints[j].y = lerp(
            oldHand.keypoints[j].y,
            newHand.keypoints[j].y,
            smoothing
          );
        }
      }
    }
  }
  hands = results;
}
This is the smoothing function I took, which I tried to simplify in the comments. It’s just linear interpolation, so the distance the detector moves on screen isn’t equal to the distance the hand actually covered.
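To see what smoothing = 0.8 does numerically, here is the same linear interpolation outside of p5 (lerp() is a p5 function, so it is re-implemented here; the numbers are illustrative, not from the sketch):

```javascript
// p5's lerp(a, b, t) is just a + (b - a) * t; re-implemented here so the
// example runs outside a sketch.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// With smoothing = 0.8, each new detection moves the tracked keypoint 80%
// of the way from its old position toward the newly detected position.
let x = 0;           // previous keypoint x
const target = 100;  // newly detected x
const smoothing = 0.8;
for (let frame = 0; frame < 3; frame++) {
  x = lerp(x, target, smoothing);
}
// x is now roughly 99.2: close to the target, eased rather than snapped
```

Each frame covers a large fraction of the remaining gap, which is why the tracking looks smooth instead of jittery.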
for (let i = 0; i < hands.length; i++) {
  if (artInstances[i]) {
    // Calling our inbuilt functions for every hand we have on screen
    artInstances[i].drawline(hands[i], singlefinger);
    artInstances[i].drawdoubleline(hands[i], fingerconnections);
    artInstances[i].drawcircle(hands[i], fingerconnections);
    artInstances[i].drawHUD(hands[i]);
    artInstances[i].drawSpeedHUD(hands[i]);
  }
}
drawMusicButton();
}
There are 6 methods in our skeletal class:
– drawline
– drawdoubleline
– drawcircle
– drawHUD
– drawSpeedHUD
– drawPopupHUD (this is called inside the drawHUD function, so you won’t see it being used outside the class)
class skeletal {
  constructor(linecolor, circlecolor, hudcolor) {
    // We take the color we want for the line, circle and the HUD, as well as
    // initialize the angle and last angle to 0
    this.linecolor = linecolor;
    this.circlecolor = circlecolor;
    this.hudcolor = hudcolor;
    this.hud1angle = 0;
    this.rotationSpeed = 0;
    this.lasthandangle = 0;
    this.popupActive = false;
    this.popupTimer = 0;
  }
There are 8 attributes assigned to each hand, where only the first 3 are ones you can choose; the rest are initialized to 0 / false by default, as they will be changed and used in the calculations inside our methods.
Before I get into the function I am most proud of, I will briefly talk about what each function does.
drawdoubleline: There was no inbuilt shape that satisfied what I wanted, so I ended up writing my own shape using beginShape() and endShape() to get that glitchy double line between most of the connections.
drawline: I used the bezier() function here (I calculated anchor points by getting the points 1/3 and 2/3 of the way from keypoint A to keypoint B), and this was used to connect the base of each finger to the top of the next finger with a curved line.
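The anchor-point calculation described above can be sketched as a small helper (the function name here is mine, not from the original code):

```javascript
// Hypothetical helper returning the two bezier anchor points described
// above: the points 1/3 and 2/3 of the way from keypoint a to keypoint b.
function curveAnchors(a, b) {
  return [
    { x: a.x + (b.x - a.x) / 3, y: a.y + (b.y - a.y) / 3 },
    { x: a.x + (2 * (b.x - a.x)) / 3, y: a.y + (2 * (b.y - a.y)) / 3 },
  ];
}

// In the sketch these would feed p5's bezier():
// bezier(a.x, a.y, c1.x, c1.y, c2.x, c2.y, b.x, b.y);
```

Because both anchors sit on the straight line from A to B, the resulting curve stays close to the finger while still bending smoothly.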
drawHUD: This has 2 elements, the palm HUD and the exterior HUD. The palm HUD is centered by calculating the midpoint between the base of the middle finger (keypoint 9) and the wrist (keypoint 0), and the shapes of the HUD are drawn around that point. We use the map() function so that the HUD becomes bigger the further your hand is from the camera and smaller the closer it is, with constraints so it can never be too small or too big.
drawPopupHUD: This is the exterior HUD, a circle with sound waves along a quarter of it. It only pops up if the hand rotates at a fast enough speed, then disappears after a short while. I used Perlin noise to give the waves that random sound-wave effect.
drawcircle: This simply draws a small circle at each keypoint I saved at the beginning of the program.
Finally the function I am proud of:
drawSpeedHUD(singleHandData) {
  if (!singleHandData) return;
  // We map the keypoints once again to their respective parts.
  let thumbTip = singleHandData.keypoints[4];
  let indexTip = singleHandData.keypoints[8];
  let wrist = singleHandData.keypoints[0];
  let palmBase = singleHandData.keypoints[9];
  if (!thumbTip || !indexTip || !wrist || !palmBase) return;
  // We calculate the palm size and the gap between our thumb and index finger
  let palmSize = dist(wrist.x, wrist.y, palmBase.x, palmBase.y);
  let fingerGap = dist(thumbTip.x, thumbTip.y, indexTip.x, indexTip.y);
  // Check if it is left or right hand to adjust rotation accordingly
  let isLeftHand = singleHandData.handedness === "Left";
  // Calculate the angle of the finger for rotation of the speedometer
  let fingerAngle = atan2(indexTip.y - thumbTip.y, indexTip.x - thumbTip.x);
  // Calculate the rotation and adjust based on left or right hand
  let sideRotation = isLeftHand ? fingerAngle + PI : fingerAngle;
  // Conditions for when the speedometer pops up
  let closeToCamera = palmSize > 80;
  let fingersSpread = fingerGap > 0 && fingerGap < 140;
  if (closeToCamera && fingersSpread) {
    // Map the finger gap to a speed: a fully pinched hand reads the
    // minimum speed of 16, spreading the fingers raises it toward 50.
    let speed = map(fingerGap, 0, 130, 16, 50, true);
    let needleAngle = map(speed, 0, 140, PI, -PI);
    let midX = (thumbTip.x + indexTip.x) / 2;
    let midY = (thumbTip.y + indexTip.y) / 2;
    push();
    translate(-width / 2 + midX, -height / 2 + midY);
    // Rotate with our hand
    rotate(sideRotation);
    noFill();
    stroke(this.hudcolor);
    strokeWeight(2);
    // Draw the speedometer arc and ticks.
    arc(0, 0, 110, 110, PI + QUARTER_PI, TWO_PI - QUARTER_PI);
    for (let i = 0; i < 10; i++) {
      let a = map(i, 0, 9, PI + QUARTER_PI, TWO_PI - QUARTER_PI);
      line(cos(a) * 45, sin(a) * 45, cos(a) * 55, sin(a) * 55);
    }
    // Draw the needle
    push();
    rotate(needleAngle - HALF_PI);
    stroke(255, 50, 50);
    strokeWeight(3);
    line(0, 0, 0, -50);
    pop();
    // Draw the speed bar which changes with speed and needle.
    strokeWeight(1);
    rectMode(CENTER);
    noFill();
    rect(0, 25, 60, 5);
    fill(this.hudcolor);
    rectMode(CORNER);
    // Using your speed mapping to fill the 60px wide bar
    rect(-30, 22.5, map(speed, 16, 50, 0, 60, true), 5);
    pop();
  }
}
This is the code for the interactive aspect of this computer vision piece: the speedometer that moves as you pinch your thumb and index finger. The speedometer only pops up if your hand is close enough to the camera and the pinch isn’t too big, and the map() function spins the needle and fills the progress bar. The speedometer also rotates with the hand, so it rarely clips through it. This took a lot of trial and error: checking which angles work, watching the needle swing under the speedometer to the complete opposite of where I wanted it, or the speedometer rotating into the hand and clipping through it. Overall it was a painful process of trial and error.
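The constrained map() call does the heavy lifting for the pinch-to-speed conversion. To make that behavior testable outside a sketch, here is p5’s map(value, inMin, inMax, outMin, outMax, withinBounds) re-implemented in plain JS, applied to the same 0–130 pixel gap and 16–50 speed range used above:

```javascript
// Plain-JS re-implementation of p5's map() with the withinBounds flag,
// so this example runs outside a sketch.
function mapRange(v, inMin, inMax, outMin, outMax, clamp) {
  let out = outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
  if (clamp) {
    const lo = Math.min(outMin, outMax);
    const hi = Math.max(outMin, outMax);
    out = Math.min(Math.max(out, lo), hi);
  }
  return out;
}

// A fully pinched hand (fingerGap = 0) reads the minimum speed of 16,
// and a wide pinch (fingerGap = 130) reads the maximum of 50.
const pinched = mapRange(0, 0, 130, 16, 50, true);   // 16
const spread = mapRange(130, 0, 130, 16, 50, true);  // 50
```

The same clamped mapping drives the 60px progress bar, just with speed as the input and pixels as the output.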
A small addition is some background music that you can enable or disable. The song is “Color Your Night,” part of the soundtrack for Persona, and it is an amazing song.
Reflection:
It was a painful but incredibly fun experience. I am really happy I challenged myself and dived into computer vision, because if I hadn’t done it now, I might never have, and I am really happy with the end result. There are endless possibilities for interactive art with computer vision, such as animating characters or showcasing fluid dynamics, and so much more that I would like to explore beyond a simple futuristic interactive HUD.
In reflecting on Casey Reas’ discussion of chance operations, I found my fundamental assumptions about artistic intent deeply challenged. I’ve always associated “intent” with manual, top-down control, but Reas’ argument for “jitter” or “noise” as a tool to keep a system from becoming “homogeneous” offers a much more interesting path. I am particularly struck by the idea of “homeostasis” within a system — where a work stays dynamic and shifting while maintaining a consistent texture. This shifts my perspective of the computer from an “obedient tool” to a creative collaborator. By utilizing randomness as a “jumping-off point,” I want to see if I can bypass my own aesthetic biases and “logical nonsense,” similar to how Reas and Tal Rosner used dice-rolls to spark compositions for their Chronograph project.
However, Reas’ mention of Rosalind Krauss’s critique of the “grid” leaves me with a lingering, uncomfortable question: if a work is generated through chance, can it still hold onto the “pure feeling” or “humane quality” that artists like Mondrian or Malevich aimed for? If I follow John Cage’s lead and use chance to “remove myself from the activity,” I worry that the work might lose its connection to social discourse and humanity. Yet, I am increasingly drawn to Gerhard Richter’s provocation that chance is “better” than the artist because it can “destroy what I’ve worked out wrong.” This suggests that the “human” element might actually reside in my willingness to let a system disrupt my own ego.
All these connections reinforced my goal to find an “optimum balance” between total randomness and complete control. Moving forward, I want to experiment with defining “minimal structures” or constraints that allow for “surprising visual qualities,” testing whether a system-based approach can actually feel more organic than one born of purely determined geometry.
For my concept, the repeated shapes in the attached PDFs reminded me of wallpapers and patterned backgrounds. Given that it’s February, I decided to create a static heart wallpaper using a grid of mini hearts. When the hearts are pressed, the background turns red and the hearts begin to move.
I initially started by coding the static wallpaper, but quickly realized that if I wanted to incorporate animation, I needed to plan for it early on. I also added a mouse press function so that I could toggle between the still wallpaper and the moving hearts.
For the hearts, I made each one out of two ellipses and a triangle, and I used nested loops to place the hearts in a grid, which created the wallpaper effect.
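A rough sketch of how such a nested-loop grid could be computed (the names, spacing, and canvas size here are illustrative assumptions; the post’s actual code isn’t shown):

```javascript
// Hypothetical nested-loop grid: collect the (x, y) centre of each mini
// heart; `shift` is the animation offset added so the grid moves together.
const spacing = 60; // assumed gap between hearts

function heartGridPositions(w, h, shift) {
  const positions = [];
  for (let x = spacing / 2; x < w; x += spacing) {
    for (let y = spacing / 2; y < h; y += spacing) {
      positions.push({ x: x + shift, y: y });
    }
  }
  return positions;
}

// In draw(), each position would get two ellipses and a triangle drawn
// around it to form a heart.
```

Separating position computation from drawing like this also makes the later animation step simple: only the shift changes each frame, not the loop.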
To add animation, I created a variable that updates every frame to move the hearts together. When the hearts are moving, this value is added to their positions, which shifts the grid. I also used a boolean variable with if statements to control when the animation occurs.
if (moving) {
  heartshift += speed;
  if (heartshift > 15 || heartshift < -15) {
    speed *= -1;
  }
}
I’m proud of this part of my code because it controls the movement of the hearts. When the variable moving is true, heartshift updates every frame, which makes the hearts move. And with the conditional statement, the speed reverses when the movement reaches the limit (15), so the hearts go back and forth instead of only moving in one direction.
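Running that logic as a plain-JS loop (with the p5 frame loop stubbed out as a for loop) shows the back-and-forth: heartshift climbs one step past the limit, flips direction, and then oscillates:

```javascript
// Plain-JS simulation of the oscillation logic above; the for loop stands
// in for p5 calling draw() once per frame.
let heartshift = 0;
let speed = 1; // illustrative step size
let moving = true;
const trace = [];

for (let frame = 0; frame < 40; frame++) {
  if (moving) {
    heartshift += speed;
    if (heartshift > 15 || heartshift < -15) {
      speed *= -1; // reverse direction just past the limit
    }
  }
  trace.push(heartshift);
}
// heartshift rises 1, 2, ..., 16, then falls back down toward -16 and
// repeats, so the grid sways within roughly +/-16 pixels
```

Note the shift peaks at 16, one step beyond the limit of 15, because the check happens after the increment.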
Overall, I’m happy with the sketch, and I enjoyed experimenting with different variables to see how they affect the hearts. The most challenging part was combining the animation with the nested loops and learning how to incorporate them with the boolean variable and if statements all together. For future ideas, I would like to add more interactions, or make it more intricate.
Reflection Response
Casey’s talk made a great point about randomness vs. order and the idea of losing total control. Reflecting on my past work, especially my heart wallpaper sketch, everything is very structured and controlled. Casey’s talk made me realize that adding small random elements in my sketch, like changes in color or even the movement of the hearts, could make the sketch feel less rigid. I could have created a static wallpaper of mini hearts and then, when pressed, made the hearts move randomly, but I think it is challenging to balance the randomness of the movement, given how rigid the static grid is when the sketch is pressed again.
Before watching the video, I assumed that randomness = messy or chaotic. I had never thought of controlled randomness before, which initially sounds contradictory but begins to make sense as the video goes on. I was fascinated by the artwork produced by programming randomness, and it opened my eyes to different ways randomness can be used in my own work. In a way, some of the artworks Reas showed in the video, specifically the music pieces, reminded me of modern AI. The music sounded exactly like what it was: random, yet oddly structured. It reminded me of AI-generated images and songs, because AI tends to create some very messy-looking images when you give it a specific prompt, and the randomly generated music somewhat mimics that idea. More importantly, I was most impressed with the artwork that can be produced through computer graphics and code. Coming from a computer science background, most of my coding has involved creating algorithms, so seeing a whole new world of coding used to create abstract art pieces was captivating.
In my own artwork, I definitely plan to incorporate randomness by generating random colors and sizes for objects when needed, and especially for generating random positions and velocities for objects. I believe the optimum balance between total randomness and complete control is having almost complete control yet incorporating some elements of randomness when necessary. The control comes from writing the code yourself and deliberately inserting when you want randomness to be used. This helps create more complex art pieces because sometimes it is difficult to draw each element individually in the sketch and create more abstract pieces. So, the element of randomness allows for the creation of art pieces one might not have completely imagined in their mind.
Thinking of randomness brings the idea of disorganization and something more instinctive that is done with no purpose to me. I came to realize that this is not always the case, and that randomness can rather be a more purposeful approach when working on a piece of art. The way Casey Reas presents and discusses randomness makes it seem like a much more important element to consider incorporating, and I was surprised to find myself relating what he was saying with what I had done so far and what I could do next. I completely agreed with his point that randomness in unplanned things can in fact improve your work, especially as I reflected on my last assignment while listening. I created a looped artwork of a drawing made of shapes, and at first I was too focused on organizing how they moved. However, I ended up using randomizing code, which allowed the shapes to move around more freely and resulted in a stronger outcome.
Along with randomness, I feel that the control and manipulation of it also really matters, as I keep adjusting how randomly I want each piece of code to behave and asking, “how random is too random?” Casey mentioned a quote that made me think more about this concept: “the computer is a unique device for the arts since it can function as an obedient tool with vast capabilities for controlling complicated processes.” This reflects how the computer actually works in my practice; it is just “obedient,” meaning that I am still controlling what it does no matter how random it gets. I plan to incorporate random elements into my work, especially in complex pieces, for movement or timing, but I believe there should always be a balance between randomness and control. Complete randomness can lead to a messy display and an overwhelming number of actions, while complete control, though it does exactly what is planned, can sometimes result in a less engaging or satisfying outcome. Therefore, when using random elements, I would control the randomness to some extent.
How long can something stay random when we are always looking for patterns in things? Does pattern-seeking help simplify the world around us? Probably yes. Just now, in the week2 assignment, what started as random colliding circles turned into something else: if I grey them out, I just see two giant Xes. Also, when we manipulate randomness, is it still random? This is what the speaker called “controlled randomness.” But what would be something truly random? A roll of a die, perhaps, but it is bound by 6 choices; that too is a form of controlled randomness. By extension, all physical systems are bound by the laws of physics, which makes all random events not so random. If by random we mean a lack of knowledge about the outcome (the choice of the outcome, not the outcome itself), then to increase randomness is to increase the options available for the system to choose from.
For an artwork, my balancing point between chaos and control will be the total number of choices available at a moment. I see one element with 100 choices and 10 elements with 10 choices each the same way. Surely I can play around with this by introducing biases and triggers to shift the balancing point, but that comes at the cost of increased complexity, and I need to look out for the chaos coming from that complexity. There is beauty in both order and chaos. But ordered beauty and chaotic beauty represent opposite ends of a spectrum, and where a work lies on that spectrum becomes its balancing point.