Assignment 5: Midterm Progress

CONCEPT

For my midterm project, I am planning to create an active time-management puzzle game where the player must balance quick math with physical character movement. The goal is to hang exactly 10 pieces of clothing on a clothesline before a 1:30 timer runs out, without exceeding the line’s strict 25-gram weight limit. The player’s character hangs clothes in the backyard, and she must carefully manage the weight on the line throughout the round.

The player will not know in advance which piece of clothing they will draw from the laundry basket. Each piece has a different weight: hoodies weigh 4 grams, pants 3 grams, shirts and shorts 2 grams, and tank tops 1 gram. If the player is unsure, they can temporarily place a piece of clothing in the “for later” basket. However, this basket holds at most 6 pieces, and items placed there cannot be retrieved until later in the game.
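To keep myself honest about the arithmetic, here is a minimal sketch of the weight and capacity rules in plain JavaScript. All names and the exact structure are placeholders, not final game code:

```javascript
// Placeholder rules for the midterm game (names are my assumptions).
const CLOTHING_WEIGHTS = { hoodie: 4, pants: 3, shirt: 2, shorts: 2, tankTop: 1 };
const LINE_LIMIT = 25;    // grams the clothesline can carry
const LATER_CAPACITY = 6; // max pieces in the "for later" basket

// True if hanging this piece keeps the line within its limit.
function canHang(currentLineWeight, type) {
  return currentLineWeight + CLOTHING_WEIGHTS[type] <= LINE_LIMIT;
}

// True if the "for later" basket still has room.
function canStoreForLater(laterBasket) {
  return laterBasket.length < LATER_CAPACITY;
}
```

Writing the rules as pure functions like this should make them easy to test before any drawing code exists.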

I also plan to integrate a rain effect in the middle of the game, adding another layer of challenge. At a certain point, the player will receive a 10-second warning to pick up the laundry basket and move to a shaded area on the screen. If they fail to do so, every piece of clothing will gain an extra +1 gram. If the total weight on the line exceeds 25 grams, the line will snap, and the player will have to start over.
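The rain penalty and the snapping condition can also be sketched as two small functions (again, the names and fields are my assumptions, not final code):

```javascript
// Rain penalty: every piece still outside gains +1 g (fields are placeholders).
function applyRainPenalty(pieces) {
  for (let p of pieces) p.weight += 1;
  return pieces;
}

// The line snaps when the total hung weight exceeds the 25 g limit.
function lineSnaps(hungPieces, limit = 25) {
  const total = hungPieces.reduce((sum, p) => sum + p.weight, 0);
  return total > limit;
}
```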

DESIGN 

I first designed my concept in Canva so I could actually see what the game would look like. I started with the intro page, then the instructions page, and finally worked on the game UI itself. I found a background on Pinterest and added some extra elements from Canva to make it look more complete. I struggled a bit at first with how to design the game UI because I wasn’t sure how everything would fit together, but after trying different layouts, I finally got it to work. I also asked Claude to help me generate different game maps, which gave me a bunch of ideas and styles. Looking at those really helped me figure out what I liked and guided me in deciding the final layout for the game.

UNCERTAIN PART

One of the uncertainties in this game is the rain event. Without the rain, players could win too easily, which is why I want to include it to add challenge. However, the rain event is complicated to implement because it interrupts normal gameplay for about 40 to 50 seconds and requires multiple conditions to work at the same time. During this event, the player needs to drag the basket and hold it while walking, but the basket movement should only be allowed within a 10-second window. At the same time, the system also needs to add +3g to the clothesline weight. The most confusing part for me is adding +1g to all remaining basket items if they are not placed in the shade.

I think this part requires careful state management, such as handling dragging, carrying, placing, walking, and temporarily locking or allowing actions, along with proper timing so that nothing breaks, overlaps, or conflicts during the rain event.
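One way I am considering structuring that state management is a small state machine that tracks the event phase and a timer. The states and the exact timings below are assumptions for illustration, not a final design:

```javascript
// Minimal state-machine sketch for the rain event (states/timings assumed).
const RainState = { NORMAL: 0, WARNING: 1, RAINING: 2 };

function makeRainEvent() {
  return { state: RainState.NORMAL, timer: 0 };
}

// Called once per frame with elapsed seconds; returns the current state.
function updateRain(ev, dt) {
  ev.timer += dt;
  if (ev.state === RainState.NORMAL && ev.timer >= 40) {
    ev.state = RainState.WARNING; // 10-second warning begins
    ev.timer = 0;
  } else if (ev.state === RainState.WARNING && ev.timer >= 10) {
    ev.state = RainState.RAINING; // rain starts; hanging is locked
    ev.timer = 0;
  } else if (ev.state === RainState.RAINING && ev.timer >= 45) {
    ev.state = RainState.NORMAL;  // rain ends, normal play resumes
    ev.timer = 0;
  }
  return ev.state;
}
```

Keeping all transitions in one update function should prevent the overlapping-timer conflicts I am worried about.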

REDUCING RISK

To reduce the risk, the controls during the rain event will be simplified. The player will press the spacebar to pick up the laundry basket and use the arrow keys to move the character. Once the rain starts, +3g will be added to the clothesline weight and the line will become unavailable, so the player will no longer be able to hang clothes. Instead, the player just needs to carry the basket to the shaded area and wait there until the rain stops.

The part I am still unsure about is how to add +1g to each clothing item if they are not placed in the shade. If this becomes too complicated to implement, I will simplify the mechanic. Instead of adding +1g to each clothing item inside the basket, I will just add an extra +2g to the total weight. If that still does not work well, another option is to change the setup so that the laundered clothes are already placed in the shaded area. In that case, the player’s task would simply be to carry the clothes from the shaded area to the clothesline and hang them once the rain stops.
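Both penalty options reduce to small pure functions, which should make it easy to swap one for the other while playtesting (names are placeholders):

```javascript
// Option 1: +1 g to every piece in the basket.
function perItemPenalty(itemWeights) {
  return itemWeights.map(w => w + 1);
}

// Option 2 (simpler fallback): a flat +2 g on the total weight.
function flatPenalty(totalWeight) {
  return totalWeight + 2;
}
```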

Week 5: Reading Response

During the pandemic, I was really amazed by the process people had to go through before entering a space. Many places installed thermal face recognition systems at their entrances, and I remember lining up outside a mall, feeling confused about how it actually worked. While reading the article, that memory came back to me and resolved the confusion I had back then. This experience made me realize the differences between computer vision and human vision. Instead of relying on perception, judgment, and context like a human would, computer vision processes visual information through algorithms that detect specific patterns, such as facial features and temperature readings. The system does not interpret situations the way humans do; it reads measurable data and produces a result based on programmed criteria. People had to stop, face the camera, and stand at the right distance so the system could read them accurately. In this case, the environment and people’s behavior were adjusted to be more legible to the algorithm, showing that while human vision is flexible and adaptable to different situations, computer vision relies on structured data and optimized conditions to function efficiently and consistently.

Computer vision’s ability to track and monitor people also changes how interactive art functions. Because the technology can detect movement, faces, or body position, it allows artworks to respond directly to the audience’s presence. However, because people differ from one another, the system produces varied responses, creating multiple forms of interaction and monitoring. We can help the computer see or track what we are interested in by providing more labeled data so it can learn the patterns we want it to detect, improving visibility with good lighting and clear angles, using visual markers or cues, and controlling the environment to reduce background clutter and maintain proper distance.

Week 4: Reading Response

While I was reading the text, I did see the point the author is trying to make. I also get confused and frustrated whenever instructions for something are sparse or unclear, or when there are no instructions at all. I remember when I first used the washers at uni: I was confused about how they worked because they were different from what I had used before, and I even tried to open one from a different direction, since the handle was so sleek that you couldn’t tell it was a handle at all. This connects to the failure of discoverability and signifiers mentioned in the reading, because the design did not clearly show where the action should take place, which led to trial and error and frustration.

Additionally, when I got back to uni after the spring semester, they had replaced both the washers and dryers with models whose options were very limited (heavy, normal, delicates) compared to the multiple options before (eco, quick, mixed, delicates, heavy, wool, etc.). Without any proper instructions or signifiers explaining what counts as a heavy, normal, or delicate wash, washing clothes could go wrong. This confused my understanding of how the machine worked, since my experience with washers no longer matched the new system image, and I was left unsure about what each setting did. Situations like this could be improved by adding clearer labels, brief explanations, visible signifiers, and better feedback so users can understand how things work without feeling lost.

Applying the author’s principles of design to interactive media, especially Human-Centered Design (HCD), helps me think more about how users actually experience what I create. I realized that it’s not enough for an interface to just look nice; it needs to be easy to understand and navigate. The readings reminded me how important it is to include clear labels and signifiers so users don’t have to guess what a button, icon, or feature does. Actions should be easy to discover, and feedback should appear immediately when something works, so users don’t feel lost or frustrated.

Week 4: Assignment

CONCEPT

For this week’s assignment, I decided to make generative text based on a floating lantern event. I was listening to Tangled’s “I See the Light” when the idea came to me. As far as I know, people usually make wishes or set intentions before releasing their lanterns. To show this, I added different verbs and objects into my code and had p5.js generate them into random sentences, so each lantern shows a different kind of wish or intention.

I kept the visuals simple and warm to match the feeling I imagined. The lanterns float slowly upward, like in a real event, and I added a soft flicker inside so they feel alive. When you hover your mouse over a lantern, text appears below showing the wish. I hid the wishes at first so people can only see them when interacting with the lanterns.

HIGHLIGHT

The piece of code I’m most proud of is this:

// ------------- text hover ------------- //
function hover(lanterns) {
  for (let l of lanterns) {
    // is the mouse within 25 px of this lantern's center?
    if (dist(mouseX, mouseY, l.x, l.y) < 25) {
      stroke("rgb(23,23,55)");
      strokeWeight(3);
      fill(255);
      textFont(myFont);
      textSize(28);
      textAlign(CENTER, BOTTOM);
      text(l.phrase, width / 2, height - 20); // show the wish at the bottom
      break; // only one phrase at a time
    }
  }
}

This function checks if the mouse is close to any of the lanterns. If it is, it displays a phrase at the bottom of the screen. I like this part because it adds interaction instead of just having a static scene. It makes the viewer move their mouse around and discover different messages. I’m especially proud of it because it connects the visual part (the lanterns) with the text in a simple but meaningful way, making it more engaging and giving each lantern its own small story.

REFLECTION

Overall, I really enjoyed making this assignment and seeing how it developed throughout the process. One thing I would like to explore further is adding more movement and animation to the lanterns. At first, I planned to make them gently sway, like real lanterns do when they float up into the sky. However, because of time constraints, I wasn’t able to add that effect. If I had more time, I would definitely include the sway to make it feel more natural and complete.

I would also like to experiment with making the movement slightly different for each lantern, so they don’t all move the exact same way. Small changes like that could make the scene feel more realistic and alive. Overall, I’m proud of what I created, and I can see a lot of potential to keep improving and building on this idea.
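One simple way to vary the motion might be a sine-based sway where each lantern gets its own random phase and amplitude. This is a plain-JS sketch of the idea; the helper name and constants are my assumptions:

```javascript
// Per-lantern sway: each lantern would store its own phase (e.g. random(TWO_PI))
// and amplitude, so no two lanterns drift in sync.
function swayX(baseX, frame, phase, amplitude = 6) {
  return baseX + Math.sin(frame * 0.02 + phase) * amplitude;
}
```

In draw(), each lantern's x would be computed as `swayX(l.baseX, frameCount, l.phase)` instead of a fixed value.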

USAGE OF AI

Most of my code came from previous in-class exercises and what I already understood about p5.js. However, I did use ChatGPT to help me with a few parts that I wasn’t fully confident about yet: the gradient background, the lantern movement, and the twinkling stars. I mainly used it to understand the logic behind those effects so I could apply them in my own way.

For the gradient background, I wanted something that reflects what we see in the sky at night. I asked how to create a gradient effect, and I learned that I could draw horizontal lines and slightly change the color value based on the y position, adjusting the blue value little by little:

for (let y = 0; y < height; y++) {
  stroke(23, 23, 55 + y * 0.1); // blue channel rises slightly with each row
  line(0, y, width, y);
}

This code helped me create a soft transition that makes the sky feel more atmospheric.

For the lantern movement, I needed help remembering how to animate them moving upward and then reappearing at the bottom after leaving the screen. I then remembered that subtracting from the y value makes them move up, and a conditional statement resets their position:

l.y -= 0.3;         // move the lantern upward
if (l.y < -50) {    // once it drifts past the top of the screen...
  l.y = height + 5; // ...reset it just below the bottom
}

This made the animation feel continuous, like an endless lantern release.

For the twinkling stars, I asked how to make them look like they were softly flickering instead of staying the same brightness. I learned that adding a small random change to the brightness each frame and then constraining it within a range creates that effect:

s.brightness += random(-10, 10);                  // small random flicker each frame
s.brightness = constrain(s.brightness, 150, 255); // keep it within a soft glow range

This made the sky feel more alive and less static, which matched the mood I was trying to create.

Overall, I used AI mostly when I got stuck. It gave me a better understanding of how to create those effects, but I adjusted everything to match the look and feeling I wanted for my project. It supported my process rather than replacing it, and it helped me feel more confident experimenting with movement and atmosphere.

Source: https://p5js.org/reference/ and Class Slide decks

 

Week 3: OOP and Array

CONCEPT

After looking at examples from previous students and the sample car sketch in the syllabus, I was reminded of the Filipino children’s fable Ang Kuneho at ang Pagong, or in English, The Rabbit and the Turtle. The story is about a fast but overconfident rabbit who challenges a slow turtle to a race, only to lose after falling asleep halfway through. Inspired by this, I created a race animation where the turtles move forward at their own slow and steady pace, while the rabbit stays asleep unless the user wakes it up by clicking the mouse.

I used object-oriented programming to better organize my code. Each turtle is treated as its own object, with individual position and speed values, allowing them to move smoothly and independently across the screen. I also used arrays to control the turtles’ speeds and the rabbit’s hopping positions. Every time the mouse is pressed, the rabbit moves to the next position in the array, reinforcing the idea that the user’s input directly affects what happens in the sketch.
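A hedged reconstruction of that structure, with my own placeholder names, might look like this:

```javascript
// Each Turtle object carries its own position and speed, so turtles move
// independently; the rabbit steps through an array of hop positions per click.
class Turtle {
  constructor(x, y, speed) {
    this.x = x;
    this.y = y;
    this.speed = speed;
  }
  update(trackWidth) {
    this.x += this.speed;                       // slow and steady progress
    if (this.x > trackWidth + 40) this.x = -40; // loop back to the start
  }
}

let rabbitIndex = 0;
function nextHop(positions) {
  rabbitIndex = (rabbitIndex + 1) % positions.length; // advance one hop per click
  return positions[rabbitIndex];
}
```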

HIGHLIGHT

In creating my code, I mainly referenced the class slide decks and my previous p5.js sketches from earlier exercises. These helped guide both the structure and logic of my work. I also used ChatGPT as a support tool, especially when I struggled with connecting another JavaScript file to my sketch.js. Because of that issue, I decided to keep everything in a single file, which made the project easier to manage. Whenever my code did not work as expected, I also used ChatGPT to help me identify where things were going wrong and how to fix them, which helped me better understand my mistakes instead of just copying solutions.

One part of the code that I am particularly proud of is how I handled the movement of the rabbit and the turtles. For the rabbit, I created a function that stores possible hop positions in an array, allowing the rabbit to move step by step each time the user interacts. For the turtles, I used a loop to update each turtle’s position based on its own speed, making them move independently across the screen and loop back once they reach the end.

//---------------- Moving the Rabbit ----------------//

function hop() {
  rabbitPositions = [];
  // hops are spaced n + 5 apart (n is a value defined elsewhere in the sketch)
  for (let x = 33; x < width; x += n + 5) {
    rabbitPositions.push(x);
  }
}


//---------------- Moving the Turtle ----------------//

function moveTurtles() {
  for (let t of turtles) {
    t.x += t.speed;           // each turtle advances at its own speed
    turtle(t.x, t.y, 50, 35); // redraw the turtle at its new position

    if (t.x > width + 40) t.x = -40; // loop back once it leaves the screen
  }
}

REFLECTION

Overall, I really enjoyed making this mini game and seeing how it developed through the process. One thing I would like to explore further is adding more movement and animation to both the turtle and the rabbit as they move along the track. Once I feel more confident with the code, I want to push the sketch further and make the characters more expressive.

I am also interested in expanding the game logic, such as clearly showing who is winning by tracking how many laps each character has completed and adding a visual response when a winner is determined. Adding small details like the rabbit blinking, hopping in a more natural way, or the turtles moving their legs as they walk would make the sketch feel more alive and engaging for the user.

Sources:

https://p5js.org/reference/ and Class Slide decks

 

Week 3: Reading Response

When I first read the opening lines of the text, I immediately thought of interactivity as a conversation. To me, a strong interactive system works as a two-way exchange, not just a machine waiting for commands. The user does something, and the system responds in a way that affects what happens next. This back and forth is what keeps the experience active. I agree with the author that participation and interactivity are not the same; simply taking part does not mean interactivity is happening. Just as dancing to a CD is not a conversation because the music does not change, an experience is not truly interactive if it continues without responding to the user.

This idea strongly influences how I approach my p5.js sketches. I want to move away from making works that are only meant to be watched and create pieces that respond to the viewer. In my current turtle and rabbit sketch, I avoided letting the animation play on its own. The rabbit only hops when the user clicks the mouse, which turns the action into an exchange. The click becomes the user’s input, and the movement becomes the system’s response. As I continue working, I want to explore deeper forms of interactivity where users can influence not only what appears on the screen but also how the sketch behaves, making the experience feel more like an ongoing conversation between the person and the code.

Week 2: Reflection

I was inspired by the video presented by Casey Reas to think about the computer not only as a means to create art but also as a means to look outside my own creative habits. Randomness helps us create results we wouldn’t normally think of, even if we’re the ones stating the rules. Reas’ talk made me recognize that there’s something about that unpredictability that draws me in, and it makes me want to look more closely at every detail. In my future works, I want to incorporate randomness so that each time I press play in p5.js, my artwork will be slightly different. Instead of placing the object in a specific location on the screen, I could allow the computer to have a range, such as allowing a circle to be placed anywhere between the left and right sides of the screen. It’s a simple way to stay within my rules while still letting the outcome surprise me every time the code runs.
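The rule-plus-range idea can be captured in one tiny helper (the name is mine, shown here in plain JS rather than p5's random()):

```javascript
// Controlled randomness: the rule (stay within [min, max)) is fixed,
// but the exact placement is left to the computer.
function randomInRange(min, max) {
  return min + Math.random() * (max - min);
}
// e.g. a circle placed anywhere between the left and right margins:
// const x = randomInRange(50, width - 50);
```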

I believe that the best combination of total randomness and total control is to have a clear structure and still allow for unexpected events. Before viewing Reas’ presentation, I believed that art had to be carefully planned, and I did not understand how randomness could be meaningful or beautiful. But seeing how small and unpredictable changes could make a piece come alive changed my perspective. It interested me and engaged me. I enjoy knowing that I am in control but also that I can be surprised by the unexpected result. This is similar to the concept of the computer being an ‘obedient tool,’ a phrase from Michael Noll that Reas quotes at 24:07 in the video. It follows my rules flawlessly but uses its power for ‘controlled randomness’ to create something entirely new that I could not have made myself.

 

Week 2: Loops

CONCEPT

While I was looking up tutorials and creative coding examples, I found a YouTube tutorial by Professor Chris. I used his tutorial for building the basic grid as my starting point. I used nested for loops to generate a grid of rows and columns, then added an interactive “spotlight” effect that measures the distance between the moving light and each shape (you can see it clearly through the trail it leaves behind). As the spotlight gets closer, I use the map() function to make the shapes grow, so the grid responds more dynamically, similar to the tutorial that I watched.
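The core of that effect is just a distance measurement plus a remapping. Here is a plain-JS sketch with stand-ins for p5's dist() and map(); the size range and falloff distance are assumptions, not the tutorial's exact values:

```javascript
// Stand-ins for p5's dist() and map().
function distBetween(x1, y1, x2, y2) {
  return Math.hypot(x2 - x1, y2 - y1);
}
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) * (outMax - outMin)) / (inMax - inMin);
}

// A grid cell grows as the spotlight gets closer to it.
function cellSize(cellX, cellY, lightX, lightY, maxDist = 200) {
  const d = Math.min(distBetween(cellX, cellY, lightX, lightY), maxDist);
  return mapRange(d, 0, maxDist, 30, 5); // near the light → big, far away → small
}
```

In the sketch itself, draw() would loop over every row and column and call something like cellSize() for each shape.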

I added more by testing different values and adding my own features. I brought in what we did from our Week 2 exercise by making the spotlight move on its own and bounce when it hits the edges of the canvas. To make the motion feel smoother and more interesting, I adjusted the background opacity, which left a ghost trail effect, and I also experimented with other values to see how they changed the overall look. This effect helps create a smoother motion and makes the spotlight’s path more visible as it moves around.

HIGHLIGHT

One thing I accidentally did in my code was adding two values to background(), and I was surprised when it created a ghosting effect that followed my circle everywhere. But when I saw it happening, I decided to keep it because it honestly looked better and more interesting than my previous version.

// background & ghost effect
function draw() {
  background(255, 8); // low alpha: old frames fade slowly, leaving the trail
  // ...rest of the drawing code
}

REFLECTION

Overall, even though I didn’t have a specific concept at first and was mostly just playing around, I really loved how it turned out. I honestly didn’t expect the final result, so I’m proud of it. It surprised me how something that started as random experimenting ended up looking intentional and visually satisfying. At the same time, this project took me a while because I kept testing and adjusting things until it felt right. Sometimes when I experiment with different values, my code ends up breaking, so I used ChatGPT to help me figure out why my changes weren’t working. While I enjoyed the process, I learned that it would probably be easier to start with at least a simple concept or direction in mind so I don’t get too lost while experimenting.

One improvement I still want to try is adding more color, like a gradient trail, because I feel it would make the piece look even more alive and give it more personality. I tried thinking of ways to do it, but it felt a bit complicated this time, especially while I was still figuring out the main logic of the animation. Maybe next time, once I’m more confident with the code, I’ll push it further and make my work more colorful and expressive.

Sources: https://p5js.org/reference/ and https://youtu.be/MzD7W6Vt6LA?si=EnNemvdZ7Goaw21a

 

 

Week 1: Self Portrait using p5.js

CONCEPT

Before I started working on my self-portrait, I looked for inspiration on Pinterest and found this photo.

I wanted to create something fun and bubbly, so I began with a simple circle for the head, following the form of my inspiration, and added more shapes to create the features. Initially, I only planned to create a basic head and body figure. However, as I continued working, I struggled with how to design the body and make it visually cohesive with the head. Because of this, I decided to hide half of the character’s body by adding a table and placing the arms on top of it. This solution simplified the problem, but it also led to another challenge: I found it difficult to create the hands between the forearms, the ellipses I had placed on top of the table. To resolve this, I added a phone to fill the space, which made the pose feel more natural.

This decision led to a figure that looks like a girl watching something on her phone, an activity that reflects something I do every day. After adding the phone, I felt that the figure was still lacking something, which led me to add animation to the piece. I realized that when people watch content on their phones, light from the screen reflects onto their faces. To create this effect, I used an ellipse as a light source and hid parts of it under the bangs so that the glow would appear to come directly from the phone. To make it more realistic, I researched whether it was possible to create a flickering light effect on a loop. I found a video on YouTube that showed me how frame count works, and I asked an AI for guidance on applying this concept to the ellipse representing the phone’s light.
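The flicker ultimately comes down to a single expression: an alpha value that oscillates with the frame count. This plain-JS helper mirrors the fill() call in my final code:

```javascript
// Alpha oscillates with the frame count: 50 ± 20, so it swings between 30 and 70.
function flickerAlpha(frameCount) {
  return 50 + Math.sin(frameCount * 0.09) * 20;
}
// In p5: fill(255, 255, 255, flickerAlpha(frameCount));
```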

HIGHLIGHT

One of the biggest challenges I faced with this project was creating the hair. Initially, I intended to recreate my current hairstyle, but I found it too complex. As a result, I chose to recreate a previous version of my hairstyle when I had bangs and wavy hair. To achieve this, I experimented with layering ellipses on top of a rectangle to suggest waves and texture in the hair, and I adjusted the rotation of the ellipses to shape the bangs more naturally. Another challenge was animating the phone light so that it would blink continuously on a loop. Even after adding the ellipse that represents the phone’s light, it took me a lot of time to figure out how to animate it in a way that it would really look like the character was watching something on her screen.

 

//hair
  noStroke();
  fill(0);
  arc(300,242,236,210,360,QUARTER_PI);
  
  fill(0);
  rect(181.5,240,237);
  
  fill(0);
  ellipse (195,305,50,145);
  
  fill(0);
  ellipse (195,405,50,145);
   
  fill(0);
  ellipse (402,305,50,145);
  
  fill(0);
  ellipse (402,405,50,145);

  push();

// bangs
  translate(250, 230);
  rotate(-PI / 6);
  fill(0);
  ellipse(0, 0, 150, 100);

  translate(80, 40);
  rotate(-PI / 6);
  fill(0);
  ellipse(0, 0, 100, 170);

  fill(0);
  triangle(-130,-110,-80,-100,-160,-57);

  fill(0);
  triangle(-70, 100, -2, 50, 10, 87);

  pop();
//light from the phone
  noStroke();
  fill(255, 255, 255, 50 + sin(frameCount * 0.09) * 20);
  ellipse(300, 340, 227, 280);

REFLECTION

After completing the final touches, I felt relieved and satisfied with how the project turned out. Although the final outcome differed from what I originally expected, I believe it was successful, and I genuinely enjoyed the process of creating it.

Here’s the final code:

function setup() {
  createCanvas(600, 600);
}

function draw() {
  background(130);
  
  //window
  fill(192);
  stroke(255);
  strokeWeight(30);
  rect(100,50,400,300,0)
  
  fill(255);
  noStroke();
  rect(275,50,50,150,0)
  
  fill(255);
  noStroke();
  rect(100,180,390,50,0)
  
  //hair
  noStroke();
  fill(0);
  arc(300,242,236,210,360,QUARTER_PI);
  
  fill(0);
  rect(181.5,240,237);
  
  fill(0);
  ellipse (195,305,50,145);
  
  fill(0);
  ellipse (195,405,50,145);
   
  fill(0);
  ellipse (402,305,50,145);
  
  fill(0);
  ellipse (402,405,50,145);
 
  //neck
  noStroke();
  fill('#D5C29B');
  rect(270,400,60);
  
  //head
  fill('rgb(248,229,190)');
  ellipse (300,300,200,240);
  noStroke();
  
  //left ear
  fill('#EBDBB8');
  ellipse (200,310,35,60)
  noStroke();
  
  //right ear
  fill('rgb(248,229,190)');
  ellipse (400,310,35,60)
  noStroke();
  
  //shirt
  fill('#784E80');
  ellipse(300,480,470,100);
  
  fill('#E5D1A6');
  ellipse(300,434,60,27);
  
  //table
  fill(23);
  rect(0,490,600);
  
  //light from the phone
  noStroke();
  fill(255, 255, 255, 50 + sin(frameCount * 0.09) * 20);
  ellipse(300, 340, 227, 280);
  
  //left thumb
  fill('#E5D1A6');
  rect(263,430,20,37,22);
  
  // right thumb
  fill('#E5D1A6');
  rect(323,430,20,37,22);
  
  //phone
  fill(40);
  rect(275,420,55,90,10);
  
  //forearms
  fill('#EBDBB8');
  ellipse(173,480,230,70);
  
  fill('#EBDBB8');
  ellipse(430,480,230,70);
  
  //hands
  fill('#F8E5BE');
  rect(233,450,55,70,22);
  
  fill('#F8E5BE');
  rect(313,450,55,70,22);
  
  push();
  
  // bangs
  translate(250, 230);
  rotate(-PI / 6);
  fill(0);
  ellipse(0, 0, 150, 100);
  
  translate(80, 40);
  rotate(-PI / 6);
  fill(0);
  ellipse(0, 0, 100, 170);
  
  fill(0);
  triangle(-130,-110,-80,-100,-160,-57);
  
  fill(0);
  triangle(-70, 100, -2, 50, 10, 87);
  
  pop();
  
  //right eye
  fill(0);
  circle(252,325,45);
  
  fill(255);
  circle(252,329,45);
  
  //left eye
  fill(0);
  circle(345,325,45);
  
  fill(255);
  circle(345,329,45);
  
  //right pupil
  fill(190);
  circle (343,334,35);
  
  fill (70);
  circle(342,339,25);
  
  fill (255)
  circle(350,337,10);
  
  //left pupil
  fill(190);
  circle (254,334,35);
  
  fill (70);
  circle(255,338,25);
  
  fill (255)
  circle(262,337,10);
  
  //nose
  fill ("#F1DABD")
  triangle (310,370,298,329,290,370)
  
  //left eyelash
  stroke(0);
  strokeWeight(3);
  line(236, 309, 228, 305);
  
  stroke(0);
  strokeWeight(3);
  line(230, 315, 220, 312);
  
  stroke(0);
  strokeWeight(3);
  line(228, 324, 220, 324);
  
  //right eyelash
  stroke(0);
  strokeWeight(3);
  line(364, 314, 375, 308);
  
  stroke(0);
  strokeWeight(3);
  line(368, 320, 382, 316);
  
  stroke(0);
  strokeWeight(3);
  line(369, 327, 379, 326);
  
  //left earring
  fill('yellow');
  noStroke();
  circle(200,345,20);

  fill(0)
  noStroke();
  circle(203,347,11);
  
  //right earring
  fill('yellow');
  noStroke();
  circle(400,345,20);

  fill(0);
  noStroke();
  circle(396,347,11);
  
  //upper lip
  fill('rgb(242,174,186)');
  ellipse(295,388,17,10);
  
  fill('rgb(242,174,186)');
  ellipse(307,388,17,10);
  
  //lower lip
  fill('rgb(242,174,186)');
  ellipse(301,393,35,15);
  
  //lip line
  stroke('rgb(216,146,159)');
  strokeWeight(2);
  line(285,392,316,392);
  
  //left eyebrows
  stroke(20);
  strokeWeight(4);
  line(230,295,276,293);
  
  //right eyebrows
  stroke(20);
  strokeWeight(4);
  line(320,293,370,295);
  
}