Week 8: Reading 1 – Norman, “Emotion & Design: Attractive things work better”

Norman’s article made me rethink what makes a design successful. I always assumed that usability was the most important goal, but he shows how our emotions can change the way we interact with objects. When we feel good, we tend to be more flexible, more creative, and more forgiving of small flaws. On the other hand, when we are stressed or anxious, even simple tasks can become difficult. This means that good design is not just about how something works, but also about how it makes us feel while using it.

His story about the three teapots really brings this idea to life. One is intentionally useless, another looks awkward but works well, and the last one is both practical and thoughtful. What stood out to me is how he chooses which teapot to use based on his mood. Sometimes he wants efficiency, other times he wants elegance or creativity. This shows that design is not just about solving a problem in one way, but about understanding the different contexts in which people live and make decisions.

In the end, Norman is not saying beauty is more important than function, or the other way around. He argues for a balance where usability, emotion, and aesthetics all work together. That idea stuck with me because it feels true not just for products, but for how we make choices in general. We are not purely logical or purely emotional. We are a mix of both, and the best designs (and maybe even the best ideas) are the ones that recognize that.

Week 8: Reading 2 – Her Code Got Humans on the Moon

Reading about Margaret Hamilton made me realize how much of the space race story we usually miss. We often focus on the astronauts and the rockets, but here was someone working behind the scenes, writing the code that actually made the missions possible. What is fascinating is that she was doing all of this before “software engineering” was even considered a real field. She helped define it as a serious discipline, at a time when software was seen as secondary to hardware.

One moment that really stood out was when Hamilton tried to build in error protection after her daughter accidentally caused a crash in a simulation. She was told it would never happen during a real mission, but it did. Because of the precautions she took, the astronauts were able to land safely. It makes me think about how we often overlook the importance of preparing for the unexpected. Her work was not just about getting things to function, but about thinking ahead and designing systems that could handle failure.

There is also something meaningful in the way she worked: methodical, thoughtful, and persistent, even when her contributions were underestimated. She showed that innovation is not just about new inventions, but also about foresight and responsibility. Her story raises an important question: how many people are quietly shaping the world through careful, often invisible work, and how can we do better at recognizing their impact?

Week 8: Unusual switch – Stanky Leg

Concept:

I love dancing, so I wanted to create a switch that incorporates my hobby by doing the Stanky Leg! To do this, I used copper tape and attached strips to my knees. As I move and dance, my legs bend and the copper tapes on each knee come into contact. When they touch, the circuit is completed, which activates the switch and lights up the LED. This way, the LED lights up through the movement of dance, making it both interactive and fun.

https://drive.google.com/file/d/1TfbptN2TEvhUywdUC8JOCoeZR8rrbgjd/view?usp=sharing

Challenges:

It turned out to be more challenging than I expected because the wires had to maintain solid contact with the copper tape. As I moved my legs while dancing, the tape started to come loose, which broke the connection between the wire and the copper tape and disrupted the circuit.

Midterm Project – The Magic Studio

Concept

The game is called The Magic Studio. The original idea was to create a highly interactive room where users could generate anything they wanted—sounds, objects, and more—giving them full creative freedom. However, due to time constraints, I scaled the project down to a painting game. In the current version, users are given a reference painting, and they must recreate it by generating objects that match the original as closely as possible. The goal is to maintain an element of creativity while also challenging players to be precise in their recreations.

How it works

The game starts with an instructions screen. It then moves to the scene where the user can generate objects to match the reference painting. A new object can be generated by prompting again, and the previous one is removed automatically. Once the user is satisfied with their design, they can click the Next button to move to the next painting. After all the painting tasks are completed, the user can restart the game.
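
A minimal way to structure this flow in p5.js is a plain state variable. The sketch below is illustrative, and the function names in it are placeholders rather than the ones in my actual code:

let gameState = "instructions"; // "instructions" -> "painting" -> "finished"
let currentPainting = 0;
const TOTAL_PAINTINGS = 3; // placeholder count

function draw() {
  if (gameState === "instructions") {
    drawInstructions();                     // how-to-play screen
  } else if (gameState === "painting") {
    drawReferencePainting(currentPainting); // the target to recreate
    drawGeneratedObject();                  // latest LLM-generated object
  } else if (gameState === "finished") {
    drawRestartScreen();                    // offer to restart the game
  }
}

// called when the Next button is pressed
function nextPainting() {
  currentPainting++;
  if (currentPainting >= TOTAL_PAINTINGS) {
    gameState = "finished";
  }
}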

Challenges

One of the biggest challenges, and the part I am most proud of, was getting the large language model (LLM) to generate objects as accurately as possible. Initially, I used a model with limited capabilities, which struggled to render detailed objects correctly. This led to some frustrating results, as the generated objects often didn’t match the intended design. Eventually, I switched to the Gemini model, which significantly improved performance: it generated objects more accurately, making the gameplay experience much smoother and more enjoyable.

Another challenge was ensuring that the interaction between the user and the game felt intuitive. Since p5.js is primarily a visual tool, integrating AI-based object generation in a way that seamlessly fit into the game mechanics took a lot of trial and error.

  // Create input field and Gemini button, but hide them until the game starts
  promptInput = createInput("Provide Object Description");
  promptInput.position(350, 100);
  promptInput.hide();
  
  button = createButton("Create your painting!");
  button.position(promptInput.x + 20, promptInput.y + promptInput.height + 10);
  button.mousePressed(() => {
    let userText = promptInput.value();
    let geminiPrompt = `
You are an AI code generator working within an online p5.js editor environment. In this environment, the following conditions apply:
- The p5.js library is loaded along with its DOM addon (p5.dom), so functions like createCanvas, createButton, and createInput are available.
- A canvas of 800x600 pixels is created as part of a "dream room" simulation.
- The dream room maintains an array of dream objects. Each dream object must be defined as a JavaScript object with the following properties:
    - x: a numeric value representing the horizontal coordinate (default value: 100)
    - y: a numeric value representing the vertical coordinate (default value: 100)
    - size: a numeric value representing the object’s size (default value: 50)
    - draw: a function that uses p5.js drawing commands (for example, ellipse) to render the object at (x, y) using its size. Ensure you combine multiple shapes for a rich rendering.
    - move: a function that accepts two parameters, dx and dy, and updates the x and y coordinates respectively

Your task is to generate a valid p5.js code snippet that creates a new dream object (named \`dreamObj\`) with these properties. The object must be defined using "let".

Output Requirements:
- Your output must be a valid JSON object with exactly two keys: "code" and "description".
- The "code" key’s value must be a string containing the p5.js code that defines the dream object as described.
- The "description" key’s value must be a concise explanation of what the code does.
- Do not include any additional keys or text; output only the JSON.

Now, generate the object: 
`;
    let prompt = geminiPrompt + userText;
    generateDreamObject(prompt);
  });
  button.hide();
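
The handler above passes the combined prompt to generateDreamObject(), which isn’t shown here. As a rough sketch of what such a function could look like, assuming the Gemini REST endpoint, a global apiKey, and a global dreamObjects array (the model name, URL, and response shape are assumptions worth checking against the current API docs):

// Hypothetical sketch of generateDreamObject(); endpoint and parsing are assumptions
async function generateDreamObject(prompt) {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=" + apiKey;
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  const data = await response.json();
  // The prompt instructs the model to answer with JSON holding "code" and "description"
  const text = data.candidates[0].content.parts[0].text;
  const result = JSON.parse(text.replace(/```json|```/g, "").trim());
  // Run the generated snippet and capture the dreamObj it declares with "let".
  // Executing model-generated code is only reasonable in a sandboxed class project.
  const dreamObj = new Function(result.code + "\nreturn dreamObj;")();
  dreamObjects = [dreamObj]; // swap out the previous object so only the newest shows
}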

 

Future Improvements

These are some ways I could enhance the project in the future:

  1. AI-Based Scoring System – One idea is to allow players to take a screenshot of their generated painting, and then use AI to analyze it and give a score based on accuracy and similarity to the reference image.
  2. AI-Generated Reference Objects – Instead of only providing a static reference painting, we could allow AI to generate a new image based on the original reference. The AI could create a new rendition of the image in a slightly altered style, and players could then attempt to recreate that version using p5.js.
  3. Comparing AI vs. Player Renderings – We could take screenshots of both the AI-generated image and the player-generated image, then compare them using an AI model to determine which one is a better match to the original reference. This would add another layer of challenge and gamification to the experience.
  4. More Creative Freedom – To bring the project closer to the original concept, I could add more interactive elements, such as sound generation or more diverse object creation tools, allowing users to express their creativity beyond just painting.

Week 5: Reading Response

Computer vision is really different from how humans see the world. While we naturally process images, depth, and context with our brains, computers rely on algorithms and sensors to make sense of what they’re “seeing.” Humans can instantly recognize faces, emotions, and even artistic meaning without much effort, but computers need tons of data and training to even get close to that level of understanding. Plus, human vision adjusts easily to different lighting or angles, while computers often struggle unless the conditions are just right.

To help computers track or recognize what we want, we use techniques like edge detection, motion tracking, and pattern recognition. Edge detection helps separate objects from the background, while motion tracking follows movement across frames in a video. Pattern recognition is also huge—by training a model with a bunch of images, it can learn to recognize faces, shapes, or even specific objects. But to get accurate results, we usually have to clean up the data first, removing noise or adjusting lighting so the system doesn’t get confused.
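
To make the motion-tracking idea concrete, here is a minimal frame-differencing sketch in p5.js. This is a generic illustration of the technique from the reading, not code from any particular project, and the motion threshold is an arbitrary value to tune per camera and lighting:

let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();
  let motion = 0;
  // Sum the brightness change of every pixel between consecutive frames
  for (let i = 0; i < video.pixels.length; i += 4) {
    motion += abs(video.pixels[i] - prevFrame.pixels[i]); // red channel as rough brightness
  }
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  fill(motion > 500000 ? "red" : "green"); // arbitrary threshold; tune it
  ellipse(width - 20, 20, 20, 20);         // indicator: red when motion is detected
}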

The ability of computer vision to track and monitor things has a big impact on interactive art. Artists use it to create pieces that respond to movement or presence, making the experience more immersive and engaging. But at the same time, it raises ethical concerns—these same tools can be used for surveillance, sometimes without people knowing. So while computer vision opens up exciting possibilities for art, it also forces us to think about privacy and how we balance creativity with responsibility.

Week 5: Midterm Progress

Concept:

I decided on a calming spa game where the user (or viewer) sees a person lying in a clinic setting, complete with subtle animations like steam or aroma particles. Even though I haven’t fully finalized all interactivity elements, my plan is to allow some simple interactions, such as choosing different spa treatments or changing certain visual elements. For now, the main focus is creating the environment and making it look professional and aesthetically pleasing.

Code and Design:

I started laying out the main structure in p5.js. I separated the code into different functions to keep things organized:

  • drawClinicBackground(): Sets the scene with the walls, floor, decorations, and additional details like a window or posters.
  • drawTreatmentBed(): Draws the bed and pillow for the patient.
  • drawPatient(): Renders the patient’s upper body and face, including minimal facial features.
  • drawSteam(): Handles the animation of steam or aroma particles rising around the face.

I’m also planning to introduce classes if the animation or interactivity becomes more complex, especially if I need multiple interactive objects or more sophisticated animations. This modular approach helps keep things clean. If I need to expand later—maybe adding sound effects, more interactive objects, or advanced animations—I can easily integrate new code.

This is what the design is supposed to look like:


Frightening/Challenging aspects:

One of the most uncertain parts of my project is making the environment feel truly interactive and alive. I’m worried about how performance might be affected if I add a lot of animations or interactive elements at once. Another concern is making sure the art style and animations blend nicely so that the scene doesn’t look disjointed.

To reduce this risk, I wrote some test code to experiment with particle systems and layering. Specifically, I tested out how many steam particles I can animate in real-time without causing a slowdown. I also experimented with gradient backgrounds, images, and more detailed drawings to see how far I could push the visuals before I start seeing performance drops.
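
A simple version of that stress test just ramps up the particle count while drawing the live frame rate. A minimal sketch of the idea (the thresholds and counts here are arbitrary):

// Minimal stress test (illustrative): add particles until the frame rate drops
let particles = [];

function setup() {
  createCanvas(400, 300);
  noStroke();
}

function draw() {
  background(230);
  for (let p of particles) {
    fill(255, p.alpha);
    ellipse(p.x, p.y, p.size);
  }
  // Add 10 more particles each frame while performance holds up
  if (frameRate() > 50) {
    for (let i = 0; i < 10; i++) {
      particles.push({ x: random(width), y: random(height), size: random(4, 12), alpha: random(100, 150) });
    }
  }
  fill(0);
  text(nf(frameRate(), 2, 1) + " fps / " + particles.length + " particles", 10, 20);
}

The drawSteam() function I ended up with is shown below: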

function drawSteam() {
  // Draw and update each steam particle
  noStroke();
  for (let i = 0; i < steamParticles.length; i++) {
    let p = steamParticles[i];
    
    fill(255, 255, 255, p.alpha);
    ellipse(p.x, p.y, p.size, p.size);
    
    // Move the particle upwards
    p.y -= p.speed;
    // Small horizontal "drift"
    p.x += map(noise(p.y * 0.01, p.x * 0.01), 0, 1, -0.2, 0.2);
    // Gradually fade out
    p.alpha -= 0.2;
    
    // Reset the particle when it goes out of range
    if (p.y < height / 2 - 80 || p.alpha < 0) {
      steamParticles[i] = createSteamParticle();
      steamParticles[i].y = height / 2 + random(0, 30);
      steamParticles[i].alpha = random(100, 150);
    }
  }
}
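
drawSteam() relies on createSteamParticle(), which isn’t shown above. Judging from the fields the loop reads (x, y, size, speed, alpha), a consistent minimal version would look something like this; the exact ranges and spawn position are guesses:

// Sketch of createSteamParticle(), inferred from the fields drawSteam() uses
function createSteamParticle() {
  return {
    x: width / 2 + random(-15, 15), // spawn near the patient's face (position is a guess)
    y: height / 2 + random(0, 30),
    size: random(5, 12),
    speed: random(0.3, 1),
    alpha: random(100, 150),
  };
}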

 

Things to prevent:

  • Overcomplicating the Code: I’m trying not to throw everything in one giant file without structure. By using separate functions (and potentially classes), I’ll keep my code organized and easier to debug.
  • Performance Bottlenecks: Adding too many particles or large images could slow down the sketch. I’m keeping an eye on frame rates and testing on different devices so I can catch performance issues early.
  • Poor User Experience: If I add too many clickable elements or extra features, it might overwhelm the user and make the scene less relaxing. I want a balanced level of interaction that doesn’t feel cluttered.
  • Lack of Testing: I plan to test small sections of the code often, rather than waiting until the end. This way, I can catch bugs and performance issues as soon as they pop up.

Week 4: Reading Response

One thing that drives me crazy is the poor design of public bathroom sinks. Many of them have automatic sensors for water flow, but the sensors are inconsistent. Sometimes, you have to wave your hands around for the water to turn on, or it shuts off too soon, forcing you to repeat the process. It’s frustrating, especially when you’re in a hurry. This could be improved by using better sensors that recognize motion more accurately or by designing a simple manual option as a backup.

Norman’s principles of design, such as discoverability and feedback, can be applied to interactive media in many ways. For example, in app design, buttons and navigation menus should be clearly visible and intuitive. A good interface should guide the user naturally, without confusion. Norman also talks about signifiers, which are essential in digital design. Websites and apps should have clear indicators—like highlighted buttons or hover effects—to show what actions are possible.

Another principle that applies to interactive media is mapping, ensuring that controls relate logically to their effects. For example, when adjusting volume on a screen, a vertical slider is often more intuitive than a horizontal one, because we associate “up” with “increase” and “down” with “decrease.” Norman’s ideas remind me that good design is not just about looks but about usability. A well-designed interface should be easy to understand and not require a user manual to figure out.

Week 4: Generative Text

Concept:

This project is a gentle companion for women across the different phases of their cycle. Each phase (Follicular, Ovulation, Luteal, and Menstrual) reveals its own short, reflective text. The more difficult phases, Luteal and Menstrual, include extra motivating lines to offer comfort and encouragement. Overall, it acts like a quote page, providing small bursts of support and understanding. The goal is to create a sense of connection and help women feel acknowledged wherever they are in their cycle.

Highlight:

I consider the input parsing and validation a highlight because it makes sure the user’s day number is always valid. It was tricky to get the latest number when users changed the value without pressing Enter; moving the number conversion into the button-click handler ensured we always use the newest value. Handling invalid numbers and showing clear messages was also tough, but it is key to a smooth experience.

  // if valid, choose a random entry based on phase
  let entry = "";
  switch (currentPhase) {
    case "Follicular":
      entry = random(follicularEntries); // pick random phrase
      break;
    case "Ovulation":
      entry = random(ovulationEntries);
      break;
    case "Luteal":
      entry = random(lutealEntries);
      entry += " " + random(motivationalPhrases); // add extra motivation
      break;
    case "Menstrual":
      entry = random(menstrualEntries);
      entry += " " + random(motivationalPhrases); // add extra motivation
      break;
  }

  diaryEntry = entry; // store the chosen entry
  isGenerated = true; // mark as generated
}

//validates if the day is within the correct range for the phase

function validateDayRange(phase, day) {
  if (phase === "Follicular") {
    // allowed days: 1 to 13
    return day >= 1 && day <= 13;
  } else if (phase === "Ovulation") {
    // allowed days: 14 to 16
    return day >= 14 && day <= 16;
  } else if (phase === "Luteal") {
    // allowed days: 17 to 28
    return day >= 17 && day <= 28;
  } else if (phase === "Menstrual") {
    // allowed days: 1 to 5
    return day >= 1 && day <= 5;
  }
  return false; // default false
}
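
Since the fix hinged on converting the day number at click time, here is a sketch of how the button handler could tie parsing and validation together. The element and function names (dayInput, generateButton, generateEntry, errorMessage) are illustrative, not the exact ones in my sketch:

// Illustrative handler: read and parse the freshest input value only on click
generateButton.mousePressed(() => {
  let day = int(dayInput.value()); // convert here, not on Enter, to get the newest value
  if (isNaN(day) || !validateDayRange(currentPhase, day)) {
    errorMessage = "Please enter a valid day for the " + currentPhase + " phase.";
    isGenerated = false; // keep showing the error instead of a stale entry
    return;
  }
  errorMessage = "";
  generateEntry(); // runs the switch statement shown above
});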

Reflections, Ideas for Future Work & Improvements:

For future work, I plan to add more customization options and richer animations. I want to explore saving user entries so that they can track their mood over time. I also plan to refine the validation process and introduce more advanced error handling. These improvements would make the project even more useful and appealing as a supportive quote page for women.

Week 3: Reading Response

Reading Crawford’s thoughts on interactivity made me realize how often we misuse or misunderstand the term. He argues that true interaction is more than just clicking buttons or watching something respond—it’s a continuous cycle of listening, thinking, and speaking. This made me question a lot of things I previously considered interactive, like video games or even websites. Are they really interactive, or am I just reacting to what they already decided for me? Crawford’s example of a refrigerator light turning on when you open the door really stuck with me. We say it “responds,” but does it actually interact? Probably not. It makes me wonder—how many things in digital media are truly interactive, and how many just give an illusion of interactivity? I’m also curious if we, as users, even want true interactivity all the time. If everything required deep engagement, wouldn’t that be exhausting?

His perspective also makes me rethink my approach to designing p5.js sketches. If interactivity isn’t just about reacting, but about engaging in a meaningful back-and-forth, then my sketches should create that kind of loop. Right now, they mostly rely on simple user inputs like clicks or movement, but how can I make them listen better? Maybe adding more adaptive responses or making the program “think” before reacting could help. A strongly interactive system, in my view, would be one where the user feels like they are having a conversation with the system, rather than just pressing buttons to see what happens. Overall, Crawford’s ideas push me to design experiences that aren’t just visually engaging but intellectually engaging too.

Week 3: OOP assignment

Concept:

In the spirit of Valentine’s week, I made this interactive scene to celebrate Rose Day with my boyfriend. It’s inspired by our favorite trip to Japan, where we fell in love with the beauty of cherry blossom trees and the iconic torii gate. The scene aims to capture those sweet memories by blending gently falling flowers, a warm sunset, and the romantic gesture of offering roses. It’s meant to feel cozy, nostalgic, and filled with the magic of our shared experiences. Since we are in a long-distance relationship, I gave this as a gift.

Highlight:

The highlight of this scene is definitely the cherry blossom trees. I spent a lot of time studying pictures of real sakura trees to capture their delicate look and subtle branching. I wanted them to appear natural, so I used a fractal-like approach: starting with a trunk and randomly growing multiple smaller branches at slightly varied angles and lengths. Each branch gets a cluster of blossoms at its tip, mimicking the way real cherry blossoms bloom in bunches. Striking a balance was tricky: too few blossoms looked sparse, but too many quickly looked overly dense. Eventually, layering different pink tones and carefully adjusting the random growth patterns helped create a softer, more realistic effect.

  // add a random cluster of flowers at a branch tip
  addFlowerCluster(xcenter, ycenter) {
    let numflowers = floor(random(4, 8));
    for (let i = 0; i < numflowers; i++) {
      let offsetx = random(-20, 20);
      let offsety = random(-20, 20);
      let size = random(6, 10);
      let rot = random(TWO_PI);
      this.flowers.push(new flower(xcenter + offsetx, ycenter + offsety, size, rot));
    }
  }
  
  // display all branches and flowers
  display() {
    push();
    translate(this.x, this.y);
    scale(this.scalefactor);
    
    // draw branches
    for (let b of this.branches) {
      b.display();
    }
    
    // draw flowers
    noStroke();
    for (let f of this.flowers) {
      f.display();
    }
    
    pop();
  }
}
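
The branch growth itself isn’t shown above. The fractal-like approach described earlier (a trunk that recursively sprouts shorter branches at varied angles, with blossoms at the tips) can be sketched like this; it is a simplified stand-in for my actual class structure:

// Simplified recursive branch growth; angle and length jitter mimic the real tree
function growBranch(x, y, angle, len, depth) {
  let x2 = x + cos(angle) * len;
  let y2 = y + sin(angle) * len;
  stroke(90, 60, 50);
  strokeWeight(max(1, depth)); // thicker near the trunk
  line(x, y, x2, y2);

  if (depth === 0) {
    // branch tip: a single blossom stands in for addFlowerCluster() above
    noStroke();
    fill(255, 185, 200);
    ellipse(x2, y2, random(6, 10));
    return;
  }
  // two or three child branches, slightly rotated and shortened
  let children = floor(random(2, 4));
  for (let i = 0; i < children; i++) {
    growBranch(x2, y2, angle + random(-PI / 5, PI / 5), len * random(0.6, 0.8), depth - 1);
  }
}

// usage: growBranch(width / 2, height, -HALF_PI, 80, 5) draws a tree from the bottom center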

Improvements:

I could add more details to the roses, like multiple petals or soft shading, to make them look richer and more lifelike. Similarly, refining the branching pattern and blossom clusters of the cherry trees could bring even more realism by adding subtle color variations and layered petals. Another improvement could be animating the actual gesture of handing out the rose, so the flower doesn’t just glide through the air. Seeing a little motion in the characters’ arms or a gentle pass of the rose would make the scene feel more personal and sweet.