Week 3 – OOP Assignment

 

My concept:

I was inspired by my house cats, who always bring joy and happiness when I play with them or when they decide to sit on my lap. I wanted to include them but wasn't sure how, so I decided to base the piece on Nyan Cat, a game I used to play when I was younger: while waiting for friends to arrive, when I didn't have access to the internet, or when I just wanted to give my brain a break.

Image: Nyan Cat (2011 music video) – IMDb

Code:

I prompted Gemini to add the particles; it used blendMode(ADD), which is what makes the magic particles look like they are made of light.

let segments = 8;
let segLength = 12;
let x = [], y = [];
let fireParticles = [];

const COLORS = {
  fur: [255, 235, 245],
  ears: [255, 180, 200],
  eyes: [100, 200, 255],
  magic: ['#FFD1DC', '#FFECB3', '#B2E2F2', '#D1FFD7', '#E0BBE4']
};

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Initialize segment positions
  for (let i = 0; i < segments; i++) {
    x[i] = mouseX;
    y[i] = mouseY;
  }
  background(0);
  angleMode(RADIANS);
}

function draw() {
  // Semi-transparent black rect creates the generative trail
  blendMode(BLEND);
  fill(0, 0, 0, 30);
  rect(0, 0, width, height);

  let mouseSpeed = dist(mouseX, mouseY, pmouseX, pmouseY);

  // Follow logic
  dragSegment(0, mouseX, mouseY);
  for (let i = 0; i < x.length - 1; i++) {
    dragSegment(i + 1, x[i], y[i]);
  }

  let headAngle = atan2(mouseY - y[1], mouseX - x[1]);
  let emissionRate = map(mouseSpeed, 0, 50, 1, 8);

  if (mouseIsPressed || mouseSpeed > 2) {
    for (let i = 0; i < emissionRate; i++) {
      fireParticles.push(new MagicParticle(x[0], y[0], headAngle, mouseSpeed));
    }
  }

  // Tail
  drawTail(x[segments - 1], y[segments - 1]);

  // Body
  for (let i = x.length - 1; i > 0; i--) {
    drawCatBody(x[i], y[i], i);
  }

  // Glowing particles
  // We use ADD blend mode to make them pop on black
  blendMode(ADD);
  for (let i = fireParticles.length - 1; i >= 0; i--) {
    fireParticles[i].update();
    fireParticles[i].display();
    if (fireParticles[i].isDead()) fireParticles.splice(i, 1);
  }
  blendMode(BLEND);

  // Head always on top
  drawCatHead(x[0], y[0], headAngle);
}

function dragSegment(i, xin, yin) {
  let dx = xin - x[i];
  let dy = yin - y[i];
  let angle = atan2(dy, dx);
  x[i] = xin - cos(angle) * segLength;
  y[i] = yin - sin(angle) * segLength;
}

function drawCatBody(posx, posy, index) {
  push();
  translate(posx, posy);
  fill(COLORS.fur);
  noStroke();
  ellipse(0, 0, 50 - index, 45 - index);
  pop();
}

function drawTail(tx, ty) {
  push();
  translate(tx, ty);
  stroke(COLORS.fur);
  strokeWeight(12);
  noFill();
  let wag = sin(frameCount * 0.2) * 25;
  bezier(0, 0, -15, wag, -30, -wag, -45, 0);
  pop();
}

function drawCatHead(hx, hy, angle) {
  push();
  translate(hx, hy);
  rotate(angle);

  fill(COLORS.fur);
  noStroke();
  ellipse(10, 0, 55, 50); // Face

  // Ears
  fill(COLORS.ears);
  triangle(-5, -20, 5, -45, 20, -20);
  triangle(10, -20, 25, -45, 40, -20);

  // Mouth
  fill(255, 150, 150);
  arc(30, 5, 22, 22, 0, PI);

  // Eyes
  fill(COLORS.eyes);
  ellipse(15, -5, 10, 12);
  ellipse(35, -5, 10, 12);
  fill(255);
  ellipse(17, -7, 4, 4);
  ellipse(37, -7, 4, 4);

  // Whiskers
  stroke(255, 200);
  strokeWeight(1);
  line(40, 2, 60, -5);
  line(40, 5, 60, 5);
  line(40, 8, 60, 15);
  pop();
}

class MagicParticle {
  constructor(x, y, angle, speed) {
    this.pos = createVector(x, y);
    // Spread the magic out
    this.vel = p5.Vector.fromAngle(angle + random(-0.5, 0.5));
    this.vel.mult(speed * 0.2 + random(1, 4));
    this.lifespan = 255;
    this.c = color(random(COLORS.magic));
    this.size = random(2, 7);
  }

  update() {
    this.pos.add(this.vel);
    this.vel.mult(0.96);
    this.lifespan -= 4;
  }

  display() {
    noStroke();
    let alpha = map(this.lifespan, 0, 255, 0, 200);
    fill(red(this.c), green(this.c), blue(this.c), alpha);
    ellipse(this.pos.x, this.pos.y, this.size);

    // Sparkle effect
    if (random(1) > 0.95) {
      fill(255, 255, 255, alpha);
      ellipse(this.pos.x, this.pos.y, this.size * 0.6);
    }
  }

  isDead() { return this.lifespan < 0; }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  background(0);
}
Embedded sketch:

Reflection and ideas for future work or improvements:

In the future, I might add interactions or obstacles to turn it into a fun, interactive game, but I want it to be unique and not something that has been done before.
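
As a rough illustration of the obstacle idea, here is a minimal sketch of what a single obstacle check could look like on top of the existing cat code (the obstacle object and drawObstacle() are hypothetical names, not part of my current sketch):

// Hypothetical obstacle the cat must avoid
let obstacle = { x: 200, y: 200, r: 25 };

function drawObstacle() {
  fill(255, 220, 0);
  noStroke();
  circle(obstacle.x, obstacle.y, obstacle.r * 2);

  // If the cat's head (x[0], y[0]) touches the obstacle, flash the
  // screen red; 25 roughly approximates the head's radius
  if (dist(x[0], y[0], obstacle.x, obstacle.y) < obstacle.r + 25) {
    background(255, 0, 0, 80);
  }
}

Calling drawObstacle() from draw() before the head is drawn would be enough to prototype a simple avoid-the-obstacle game.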

Week 3 – Reading Response

I do agree with the author that interactivity is a two-way process. It is something I interact with, and something that interacts with me. However, does it have to include all three components the author mentions—speaking, listening, and thinking? Are we really interacting with computers in the same way? They do not think the way humans do. They are trained using mathematical equations and algorithms, yet they can sometimes outperform human thinking.

In my opinion, a strong interactive system allows me to manipulate things in real time—for instance, typing on a computer right now or playing a virtual reality (VR) game. There is not necessarily a listening, speaking, or even thinking component. According to the author, this would not be considered interactive. But for me, interaction means getting a reaction to my actions.

One example the author gives is an interactive rug, which I would not consider interactive from the rug’s side because it does not interact with me in the same way I interact with it—I simply play with cars on it. However, I would consider rugs that teach prayer through step-by-step instructions, or similar designs, to be interactive because I interact with them and they interact with me.

In the future, I would add things that get the user more involved in my p5 sketches, creating a real interaction between the user and the sketch. For example, I could use ml5 and other machine learning libraries to make the sketches respond to gestures, sounds, or movements. I think a strong interactive system should give feedback to the user, so their actions actually change what happens on the screen in real time. This way, the interaction doesn’t need to include thinking or speaking like the author says—it’s still interactive because the sketch reacts to me. I also wonder, though, does a system need intention or intelligence to be truly interactive, or is responsiveness enough?
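
If I were to try the gesture idea, a minimal sketch could look something like this, based on the standard ml5 handPose example (I have not built this yet; the model name and callback follow the ml5 documentation pattern):

let handPose;
let video;
let hands = [];

function preload() {
  // Load the handPose model
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Start detecting hands; results arrive through the callback
  handPose.detectStart(video, gotHands);
}

function gotHands(results) {
  hands = results;
}

function draw() {
  image(video, 0, 0);
  // React to the user's hand: draw a dot on every detected keypoint
  for (let hand of hands) {
    for (let keypoint of hand.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(keypoint.x, keypoint.y, 10);
    }
  }
}

Every frame the sketch redraws from the latest hand positions, which is exactly the kind of real-time feedback loop I described above.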

 

Week 2 – Reading Response

Watching Casey Reas’ talk influenced the way I think about randomness in interactive artwork. It made me reflect on how, in traditional art, we often try to control every aspect of every element. I typically do not plan every aspect of my own art pieces; sometimes I go with the flow, and sometimes I see where my thoughts take me. However, Reas’ discussion of “systems” rather than “images” challenges my creative process.

This raises the question of when computational art is truly random, or whether it is our own thoughts and decisions that contribute to the apparent randomness of the work. I wonder how much control I actually have over “chance” in a digital system and whether the unpredictability is generated by the algorithm or by the choices I make in setting up the system. This makes me reconsider the balance between control and randomness in creative processes.

So I would argue that randomness is still a type of control, because I think we will never achieve full randomness. Chaos theory illustrates this: up close, a system looks chaotic, but as we zoom out to see the full picture, it becomes evident that it is a longer, more complex form of a pattern, and some type of control remains.



Week 2 – Generative Art

 

My concept draws inspiration from the interactive projections often seen in galleries and museums. As I mentioned in my previous post, I wanted to include an interactive art piece that allows the audience to both enjoy and engage with it.

How it was made:

I used the ml5.js body segmentation example as a starting point, which captures live video from the camera. I then looped through the mask so the shapes stay inside the person mask, drawing random shapes only where a person was detected; where a mask pixel was black (not a person), the loop skipped drawing.

I used the ml5.js documentation and its body segmentation example as the source for the segmentation code in my sketch.

A highlight of some code I am particularly proud of:

let bodySegmentation;
let video;
let segmentation;
let options = {
  maskType: "person",
};

function preload() {
  bodySegmentation = ml5.bodySegmentation("BodyPix", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodySegmentation.detectStart(video, gotResults);
}

function draw() {
  background(255);

  // Draw the video
  image(video, 0, 0);

  if (segmentation) {
    // Load the pixels of the mask to check each pixel
    segmentation.mask.loadPixels();

    // Sample the mask on a coarse grid: if a mask pixel is not black,
    // a person was detected there, so draw a random shape (this
    // completion follows the description above; adjust the test if
    // your mask polarity differs)
    for (let gy = 0; gy < height; gy += 10) {
      for (let gx = 0; gx < width; gx += 10) {
        let i = (gx + gy * width) * 4;
        if (segmentation.mask.pixels[i] > 0) {
          noStroke();
          fill(random(255), random(255), random(255));
          circle(gx, gy, random(3, 8));
        }
      }
    }
  }
}

// Callback: store the latest segmentation result
function gotResults(result) {
  segmentation = result;
}

This part of the code taught me a lot about body segmentation and how interactive projections work from behind the scenes.

Embedded sketch:

Note: For it to work, please click the link and allow camera access.

Reflection and ideas for future work or improvements:

I am happy that I learned how to use ml5.js, and I look forward to using it in future projects. For this project, I would like to implement more details, using something like FaceMesh, which could change the shapes or colors depending on the viewer's facial movements.
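
A rough sketch of that FaceMesh idea, assuming the current ml5 faceMesh API (the color mapping is a hypothetical illustration of motion affecting color, not something I have built yet):

let faceMesh;
let video;
let faces = [];

function preload() {
  faceMesh = ml5.faceMesh();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  faceMesh.detectStart(video, gotFaces);
}

function gotFaces(results) {
  faces = results;
}

function draw() {
  image(video, 0, 0);
  for (let face of faces) {
    for (let keypoint of face.keypoints) {
      // Map the keypoint's horizontal position to a color channel,
      // so the dots shift color as the face moves
      fill(map(keypoint.x, 0, width, 0, 255), 120, 200);
      noStroke();
      circle(keypoint.x, keypoint.y, 3);
    }
  }
}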

Week 1 – Self-portrait

My concept:

I added eye movement inspired by the Mona Lisa, where the eyes follow the viewer’s gaze. I chose pink as the background color since it is my favorite. I also added a touch of glitter and shimmer to the background because I grew up loving Barbie dolls, movies, and TV shows. Finally, I included my house cats, who love my family unconditionally.

“The most personal is the most creative.” — Martin Scorsese

A highlight of some code I am particularly proud of:

The code I am most proud of is the eye movement:

// Map the mouse position into a small range for each iris, then
// constrain it so the iris never leaves the white of the eye,
// even if the mouse goes off-canvas
let leftEyeX = constrain(map(mouseX, 0, width, -75, -55), -75, -55);
let leftEyeY = constrain(map(mouseY, 0, height, -8, 8), -8, 8);
let rightEyeX = constrain(map(mouseX, 0, width, 55, 75), 55, 75);
let rightEyeY = constrain(map(mouseY, 0, height, -8, 8), -8, 8);

// Draw the irises at the mapped positions
circle(leftEyeX, leftEyeY, 38);
circle(rightEyeX, rightEyeY, 38);

How it was made:

I used basic shapes for the portrait of me, including circles, ellipses, rectangles, and arcs. I made the eye movement and the cats’ tail movements follow the mouse using mouseX and mouseY, with constraints to keep the irises within the eye area. For the background glitter, I used a loop to create a sparkling effect by randomly placing tiny circles with different alpha values. In addition, I used functions to keep everything organized and easy to edit.
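
The glitter loop is essentially this (a minimal sketch; the function name and exact numbers are illustrative rather than copied from my sketch):

function drawGlitter() {
  noStroke();
  // Scatter tiny circles with random positions, sizes, and alpha
  // values so some sparkles look brighter than others
  for (let i = 0; i < 100; i++) {
    fill(255, 255, 255, random(50, 255));
    circle(random(width), random(height), random(1, 4));
  }
}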

Embedded sketch:

 

Reflection and ideas for future work or improvements:

I really enjoyed the process. I learned a lot from this project, such as utilizing the for loop for adding glitter and shimmer. In the future, I would like to learn how to utilize JavaScript computer vision and detection libraries, such as ml5.js and face-api.js, to make the sketch more interactive for the audience. I would also want to make it more detailed and add a bit more shadow.