Midterm Project – Barbie Dreamhouse

Barbie’s Dreamhouse 

Link to sketch: https://editor.p5js.org/rma9603/full/y2y3-M4zC

Whenever I play with interactive art, I try to build something that invites slow exploration instead of a single-goal game. For my midterm project I built Barbie’s Dreamhouse: a small interactive world with rooms to explore (Outside → Inside → Closet, Kitchen, Bedroom, Living Room), each containing subtle objects the user can click or interact with. The goal was to create a calm, nostalgic environment that encourages clicking, discovering, and lingering.

Concept

The Dreamhouse is not a “win/lose” game; it’s an exploratory scene. The idea was to capture the cozy, pastel vibe you expect from a dreamhouse and layer in small interactive details:

  • An exterior view with a theme song and a door that rings a real doorbell when clicked.

  • An interior view with hotspots for the closet, kitchen, bedroom, and living room.

  • A Closet (Wardrobe) with multiple outfit sets and selectable frames, accompanied by sparkle effects and sound.

  • A Kitchen where you pick a cupcake base, bake it, and then decorate it with frosting.

  • A Bedroom with a clickable book that opens a reader overlay.

  • A Living Room with a TV area and a remote control that flips through channel-like images, each paired with music that matches the picture shown.

On a personal note: I loved Barbie as a kid, and some of my favorite Barbie movies directly inspired the look and feel of the living room — the pastel decor, playful props, and the idea of a tiny TV full of different “channels” came straight from that nostalgia.

The focus was on atmosphere: soft pastel visuals, gentle audio, and small surprises that reward clicking around.

Here is a rough sketch of what I envisioned:

Key features and interactions

Room transitions

  • Click the door from the exterior to enter the house.

  • From the interior, click room hotspots to open that room full-screen.

  • Scene state management makes switching easy.

Closet 

  • Pages of outfit frames sliced from larger sprite sheets (it took me a while to slice them correctly).

  • Click dots to switch pages/sets, arrows to cycle frames, and a ✓ button to confirm selection.

  • Sparkle overlay + sparkle sound when an outfit selection is confirmed.

  • I added a wardrobe reset so pressing R reliably restarts the closet to its initial state.

Kitchen

  • A mini workflow: pick base → bake (progress bar) → confirm → decorate (frosting).

  • Cupcake base and frosting are separate sprites; frosting is aligned to the base bottom using computed offsets so different frosted overlays sit properly.

  • Tweaked the base preview size so the cupcake base doesn’t dominate the scene.
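The offset-based alignment can be sketched in a few lines. This is my simplified reconstruction of the idea (the function and parameter names are assumptions, not the project’s actual helpers): each trimmed frame records where its visible content starts and ends vertically, and the frosting is shifted until its content bottom meets the base’s content top.

```javascript
// Simplified reconstruction of the alignment math (assumed names, not
// the project's real code). Each trimmed frame knows the first and last
// visible pixel rows inside it.
function frostingDrawY(baseDrawY, baseContentTop, frostingContentBottom) {
  // baseDrawY: y position where the base frame is drawn
  // baseContentTop: first visible row inside the base frame
  // frostingContentBottom: last visible row inside the frosting frame
  return baseDrawY + baseContentTop - frostingContentBottom;
}
```

With this, swapping in a taller or shorter frosting sprite still lands it on the cupcake, because only the content bounds change.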

Bedroom

  • A book hotspot that opens a reader overlay with pages.

  • Prev/Next page buttons and R to close.

Living room

  • The living room is where my childhood Barbie inspiration shows most — pastel furniture, framed photos, and a playful TV nook. I built a small TV area with channel-like images and a responsive remote so users can flip through visuals like changing channels in a cozy movie night.

  • TV image area and remote hotspots scale responsively with the canvas; the living room’s color and props riff off my favorite Barbie movies.

Audio

  • Background theme for the exterior (looping Barbie Life in the Dreamhouse theme song).

  • Doorbell sound that plays when clicking the door — the theme song stops when the door is pressed so the bell is audible and the audio doesn’t overlap.

  • Special audio for sparkle, baking ding, closet music, and bedroom lullaby — all are conditionally played/stopped when entering/exiting rooms or selecting items.

Workflow & what went well

  1. Scene manager + state machine
    Making a small SceneManager (global state variable with states like outside, inside, closet, bedroom, living room) made it trivial to manage transitions and keep room-specific logic isolated.
  2. Drawing and assets
    I drew assets in Canva and used AI to generate some of the visuals at the same size as the p5 canvas where possible — this hugely simplified positioning and saved time. For sprite sheets (like cupcake bases/frostings and outfits) I sliced frames programmatically so I could treat them like tiled sprites.
  3. Small polish details
  • Preventing continuous hover sounds (doorbell) by gating the knock with a boolean.

  • Ensuring music doesn’t layer (check .isPlaying() and .pause() before starting a new track).

  • Adding a “sparkle” overlay and stopping closet music when confirming a selection so the sparkle sound can be heard.
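The no-overlapping-music rule is simple but easy to get wrong. A minimal sketch of the check, with stub objects standing in for p5.sound’s SoundFile (only isPlaying/pause/loop are modeled):

```javascript
// Stub standing in for a p5.sound SoundFile, just enough to show the logic.
function makeTrack() {
  let playing = false;
  return {
    isPlaying: () => playing,
    pause: () => { playing = false; },
    loop:  () => { playing = true; },
  };
}

// Pause whatever is playing before starting the next track, and never
// restart a track that is already going.
function switchTrack(current, next) {
  if (current && current.isPlaying()) current.pause();
  if (!next.isPlaying()) next.loop();
  return next;
}
```

The same gate works for the doorbell: stop the theme song first, then play the bell.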

What coding I’m proud of

The piece of code I’m proudest of is honestly the whole kitchen, specifically the sprite-slicing and alignment system. I wrote utilities that trim transparent pixels from sprite frames, compute each frame’s visual center/bottom, and then use those offsets to automatically align frosting to the cupcake base across many different sprite sheets and sizes. That makes wildly different art assets behave as a single cohesive object without manual per-frame positioning. It also required careful handling of canvas scaling, timing (bake/ding), and audio overlap — a lot of little edge cases that had to work together cleanly.

 // slice sprites  — trims each frame and computes offsets on trimmed images
  _sliceSprites() {
    // base frames
    this.baseFrames = [];
    this.baseOffsets = [];
    if (
      this.cupcakeImg &&
      this.cupcakeImg.width &&
      this.cupcakeImg.height &&
      this.baseCount > 0
    ) {
      const g = this._bestGridFor(this.cupcakeImg, this.baseCount);
      const fw = Math.round(this.cupcakeImg.width / g.cols);
      const fh = Math.round(this.cupcakeImg.height / g.rows);
      let idx = 0;
      for (let r = 0; r < g.rows; r++) {
        for (let c = 0; c < g.cols; c++) {
          if (idx >= this.baseCount) break;
          const sx = c * fw,
            sy = r * fh;
          try {
            const raw = this.cupcakeImg.get(sx, sy, fw, fh);
            const trimmed = this._trimTransparent(raw) || raw;
            this.baseFrames[idx] = trimmed;
            this.baseOffsets[idx] = this._computeContentBounds(trimmed);
          } catch (e) {
            this.baseFrames[idx] = null;
            this.baseOffsets[idx] = {
              xOffset: 0,
              yOffset: 0,
              maxY: Math.floor(fh / 2),
            };
          }
          idx++;
        }
      }
    }

    // frosting frames
    this.frostingFrames = [];
    this.frostingOffsets = [];
    if (
      this.frostingImg &&
      this.frostingImg.width &&
      this.frostingImg.height &&
      this.frostingCount > 0
    ) {
      const g = this._bestGridFor(this.frostingImg, this.frostingCount);
      const fw = Math.round(this.frostingImg.width / g.cols);
      const fh = Math.round(this.frostingImg.height / g.rows);
      let idx = 0;
      for (let r = 0; r < g.rows; r++) {
        for (let c = 0; c < g.cols; c++) {
          if (idx >= this.frostingCount) break;
          const sx = c * fw,
            sy = r * fh;
          try {
            const raw = this.frostingImg.get(sx, sy, fw, fh);
            const trimmed = this._trimTransparent(raw) || raw;
            this.frostingFrames[idx] = trimmed;
            this.frostingOffsets[idx] = this._computeContentBounds(trimmed);
          } catch (e) {
            this.frostingFrames[idx] = null;
            this.frostingOffsets[idx] = {
              xOffset: 0,
              yOffset: 0,
              maxY: Math.floor(fh / 2),
            };
          }
          idx++;
        }
      }
    }
  }
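The _bestGridFor helper is called in the slicing code above but not shown. For completeness, here is one plausible way such a helper could work — a guess at the idea, not the project’s actual implementation: try each column count and keep the grid whose cells come out closest to square.

```javascript
// Hypothetical stand-in for _bestGridFor (an assumption, not the real
// helper): choose cols/rows so the resulting cells are as square as
// possible while still holding `count` frames.
function bestGridFor(img, count) {
  let best = { cols: count, rows: 1 };
  let bestScore = Infinity;
  for (let cols = 1; cols <= count; cols++) {
    const rows = Math.ceil(count / cols);
    const score = Math.abs(img.width / cols - img.height / rows);
    if (score < bestScore) {
      bestScore = score;
      best = { cols, rows };
    }
  }
  return best;
}
```

A 400×100 strip of four frames resolves to 4 columns × 1 row, while a 200×200 sheet of four resolves to a 2×2 grid.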

 

Areas for improvement / future work

  • Add instructions or an optional guided mode (right now the experience is intentionally exploratory, but an in-game menu could help some users).

  • Refine click detection for non-rectangular images (pixel-perfect hit testing for PNGs with transparency).

  • Add more kitchen interactions: coffee machine, more decoration options, or an inventory for outfits.


What I learned

  • Breaking the app into small room controllers (Wardrobe, Kitchen, Bedroom, LivingRoom) makes the codebase much easier to maintain and debug.

  • Small details matter: gating hover sounds, preventing overlapping music, and subtle visual feedback (sparkle, dots) make the experience feel much more polished.

  • Drawing assets at canvas scale saves tons of time when positioning interactive pieces.

Closing

I loved Barbie when I was a kid, and designing this project felt like a grown-up, interactive love letter to those movies, especially when building out the living room. I enjoyed making something soft and low-pressure that rewards clicking and exploration. The Dreamhouse was a great exercise in scene management, responsive layout, and polishing interactions that make users want to hang out in a piece of art.

Barbie’s Dream House – Midterm Progress

Concept + Interaction: 
I’m making a small interactive Barbie Dream House experience with mini-games in each room. You start outside the house, and it tells you to “click the door.” When you do, a doorbell sound plays and you go inside. The interior is a divided view with hotspots for rooms: closet, kitchen, bedroom, and living room.
Each room has a tiny activity:
Closet: wardrobe selector (browse outfits with ← / → and hit ✓ to confirm).
Kitchen: cupcake maker (choose base, pick frosting color, “bake” in an oven).
Bedroom: open a book to see a short quote or page.
Living room: click the TV to change pictures (channel changer). 

Code design:
Each interactive room is its own class/file.
Main sketch holds a simple state machine: state = "outside" | "inside" | "closet" | "kitchen" | …. When state === "closet", the sketch delegates to wardrobe.draw() and forwards clicks to wardrobe.handleClick().
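That dispatch can be sketched without p5 at all. The room objects below are stubs standing in for the real controllers (Wardrobe, Kitchen, etc.), which would do the actual drawing:

```javascript
// Minimal state-machine dispatch. Rooms are stubs; in the real sketch
// each would be a controller class with p5 drawing inside.
let state = "outside";

const rooms = {
  closet:  { handleClick: () => "closet clicked" },
  kitchen: { handleClick: () => "kitchen clicked" },
};

function handlePress() {
  if (state === "outside") {
    state = "inside";        // entering the house triggers the doorbell
    return "doorbell";
  }
  const room = rooms[state];
  return room ? room.handleClick() : "pick a room hotspot";
}
```

Because every click funnels through one function keyed on state, adding a room is just another entry in the map.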

I made a Wardrobe class that:

  • loads outfit sprite-pages and slices them into frames,
  • centers and scales the current outfit into a fixed box,
  • draws left/right arrow buttons and a confirm (✓) button,
  • exposes handleClick() for arrows, page switches, and confirm.

Other rooms (Kitchen, Bedroom, LivingRoom) follow the same class pattern so swapping and testing is easy.

Here is my current progress:

https://editor.p5js.org/rma9603/full/5s96PtOnM

 

Biggest risk & what I did about it:
The scariest part was making outfit switching reliable: slicing sprite-sheets, handling transparency, and aligning click hitboxes. If that failed the closet would be unusable.

How I reduced the risk:
I wrote a tiny test that loads one outfit sprite-page and uses page.get() to draw each frame side-by-side so I could verify cropping and transparency immediately. I exported/uploaded transparent PNGs (no white bg) and re-tested them.

Centralized coordinates: arrows, the confirm button, and hitboxes all compute from the same this.box values, so visuals and clicks always line up.
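That single-source-of-truth layout can be sketched like this (the names and offsets are illustrative, not the project’s real values):

```javascript
// Every visual element and its hitbox derive from one box, so they
// cannot drift apart when the box moves or resizes.
function layoutFrom(box) {
  return {
    arrowLeft:  { x: box.x - 40,         y: box.y + box.h / 2,  r: 20 },
    arrowRight: { x: box.x + box.w + 40, y: box.y + box.h / 2,  r: 20 },
    confirm:    { x: box.x + box.w / 2,  y: box.y + box.h + 30, r: 22 },
  };
}

// Circular hit test shared by drawing code and click handling.
function hitCircle(c, mx, my) {
  const dx = mx - c.x, dy = my - c.y;
  return dx * dx + dy * dy <= c.r * c.r;
}
```

Resizing the canvas only means recomputing the box; everything else follows.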

Result: the sprite-sheet method works. I can slice pages reliably and the outfit switching is stable, so I’m confident I can implement the cupcake and other rooms in the same modular way.

Next steps:

  • Finish the kitchen cupcake game (choose base, frosting, bake timer).
  • Create the bedroom interaction.
  • Create the living room interaction.
  • Add a start/instructions screen and a “new session” reset button so people can restart without refreshing.
  • Add hover feedback + polish (shadows, small tweaks).
  • Make it full screen.

Week 5 – Reading Reflection

What stood out to me in the reading is how limited computer vision really is compared to human vision. As humans, we don’t think twice about recognizing objects, adjusting to poor lighting, or making sense of what we see in context. A computer, on the other hand, needs specific rules and conditions to function. It does not actually understand meaning but instead works through pixels, patterns, and features. If something changes in the environment, like lighting or background, the system can easily fail. That made me realize how much of computer vision is not about “seeing” the world the way we do but about narrowing down what the computer is expected to detect.

To make computer vision work, artists and designers often shape the environment so the system has fewer obstacles. This can be done by using clear contrasts, better lighting, or markers that help the camera distinguish what matters. There are also tools like background subtraction and motion tracking that simplify what the computer needs to follow. Hardware choices such as certain cameras, filters, or infrared technology also help in making the vision system more reliable.

In interactive art, I think this creates both opportunities and challenges. On the positive side, computer vision allows art to respond to the presence and movements of people, turning viewers into active participants. It makes installations feel alive and immersive in ways that would not be possible without tracking. At the same time, it carries the same logic as surveillance because the system is always watching and recording behavior. For me, this makes computer vision powerful but also a little unsettling, since it forces us to think about what it means to be observed and how that shapes the experience of art.

Week 4: Reflection

Something That Drives Me Crazy
One thing that really frustrates me is when digital buttons or interactive elements look clickable but actually aren’t. For example, I’ll see a button that’s styled like it should do something, click it, and… nothing happens. Sometimes I double-check, hover around, or even click other areas, thinking maybe I missed something. It’s confusing, annoying, and honestly breaks the flow of using the site or app. I’ve seen it in everything from websites to apps, and even small projects can suffer if users aren’t given clear cues. It’s such a simple thing to fix, too. Adding hover effects, subtle animations, shadows, or clear visual cues can immediately signal that something is interactive, saving users time and preventing frustration.

How I Can Apply Norman’s Principles to Interactive Media
Norman’s ideas about affordances and signifiers really clicked for me. Affordances tell users what actions are possible, and signifiers indicate where they can take those actions. In my interactive media work, I try to make every clickable element obvious. For instance, in my p5.js sketches like my floating dad joke bubble project, I make sure bubbles are visually distinct, move gently to draw attention, and respond when hovered or clicked. These small cues let users know exactly what to do without guessing. I also pay attention to natural mapping. Just like Norman talks about arranging light switches to match the lights they control, I place interface elements in locations that feel intuitive, buttons go where people expect them, and interactive elements behave like their real-world counterparts would.

Reflection and Ideas for Improvement
Working on this has made me realize how tiny details in design can have a huge impact. Even something as simple as whether a bubble looks “poppable” can completely change how a user experiences the project. In the future, I’d like to experiment with multiple interactive elements at once and make sure each is clearly signaled. I’m also thinking about adding feedback for users, like subtle animations when they hover or click, so the system feels alive and responsive. Another idea is giving users more control over interactions; for example, letting them drag or rearrange elements, while still keeping things intuitive. Ultimately, I want my interactive media to be fun, obvious, and frustration-free, where users can explore naturally and enjoy the experience without ever feeling lost.

Week 4 – Dad Joke Aquarium

Concept
I love dad jokes, so I wanted to give them a playful home. My project is a digital aquarium where each joke floats inside a bubble. Click on a bubble and it pops to reveal a new joke. The goal was to combine humor with a visually appealing theme and make the jokes feel alive in their own little environment.

Highlight of Code I am Proud Of
The hardest part was making sure the joke text always fits neatly inside the bubble. I created a custom function called drawWrappedText that splits the joke into lines and centers them vertically. This required measuring text widths, calculating line spacing, and dynamically adjusting positions so the text always looks clean and balanced. I also added gentle floating and rotation to the bubble, giving the jokes a lively, buoyant feel.

// draw wrapped and vertically centered text inside bubble
function drawWrappedText(txt, x, y, maxWidth, lineSpacing) {
  let words = txt.split(" "); // split text into words
  let lines = [];
  let line = "";

  // build lines that fit within maxWidth
  for (let i = 0; i < words.length; i++) {
    let testLine = line + words[i] + " ";
    if (textWidth(testLine) > maxWidth && line.length > 0) {
      lines.push(line);
      line = words[i] + " ";
    } else {
      line = testLine;
    }
  }
  lines.push(line);

  // calculate vertical centering
  let totalHeight = lines.length * lineSpacing;
  let startY = y - totalHeight / 2 + lineSpacing / 2;

  // draw each line
  for (let i = 0; i < lines.length; i++) {
    text(lines[i], x, startY + i * lineSpacing);
  }
}

Reflection and Future Improvements
I had a lot of fun combining humor with interactive design. In the future, I would like to add multiple bubbles at once, each with a different joke, and animate the fish reacting to the bubbles for extra playfulness. Another idea is letting users submit their own dad jokes to make the aquarium more personalized and community-driven.

Week 3 – Reflection post

When I first read that activities like reading or dancing are not really interactive, I was confused. I always thought that simply reacting to something counted as interacting, so I had to question my own assumptions. Crawford explains that strong interaction requires three elements: listening, thinking, and speaking. Listening is when the system notices what the user does, thinking is when it figures out how to respond, and speaking is when it actually shows that response. I realized that many of my sketches are more reactive than interactive. They respond to clicks or mouse movement, but they do not really “think.” Without all three elements, interaction feels one-sided or unsatisfying. Thinking about this helped me see that interactivity is really a two-way conversation, where the program and the user are both active participants. The reading also made me reflect on the balance between predictability and surprise. If the sketch always responds the same way, it quickly becomes boring. If it responds randomly without logic, it can feel confusing. Strong interactivity happens when the user feels like their actions truly matter, but the system also has its own personality and reasoning.

For my p5 sketches, I want to build this type of interactivity. In my most recent mosaic-style sketch, the program could listen to which shapes the user clicks or where the mouse moves, think by changing the lines, colors, or arcs, and speak by updating the patterns on the canvas. I also want to add keyboard functions so pressing keys can change the color of something or the layout. By including randomness or subtle variations, each interaction would feel unique while still making sense. This way, the user is not just observing the sketch; they are influencing it, experimenting, and finding out how different interactions can lead to different outcomes. Adding these layers of responsiveness would make the sketch feel alive and engaging, showing the strongest form of interactivity that Crawford describes.

Week 3 – Generative Artwork

“We are all mosaics of the people we’ve met.”

This artwork started as an idea I couldn’t stop thinking about: what if all the people and experiences in our lives were shapes in a giant, messy mosaic, connected by invisible lines we carry with us? So I made it happen. Each shape, circles, squares, triangles, and some random oddballs, represents a person or a memory. The lines connecting them are the invisible threads of influence that never really go away, no matter where you move or who you become.

I wanted this mosaic to feel alive. It’s not static. The shapes can move around when you click on them, but the connections always stay. It’s a simple interactive twist, but I love how it visually represents the idea that we carry pieces of each other wherever we go. You can almost imagine it breathing, like a community of little tiles that are all related in some way.

 

How I Built It: 

I used Object-Oriented Programming because I wanted each shape to feel like its own little character, with its own position, size, rotation, and even “friends,” aka connections to other shapes. All the shapes live in an array, which makes it super easy to loop through them to draw, animate, and update lines.

The movement was trickier than I thought. I used lerp() to make shapes glide smoothly toward new spots instead of jumping around like they were hyperactive. And here’s a cool tip I learned from The Coding Train: push() and pop() are lifesavers for isolating transformations like rotation and translation. Without them, every rotation or move would have messed up all the other shapes. Honestly, watching that video was a game-changer. I finally understood why isolating transformations is so important in generative art.
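The glide is just moving a fixed fraction of the remaining distance each frame, which is exactly what p5’s lerp() computes. A framework-free version of one step (shape fields here are illustrative):

```javascript
// One animation step: cover `amt` of the remaining distance toward the
// target, equivalent to x = lerp(x, targetX, amt) in p5.
function glideStep(shape, amt = 0.1) {
  shape.x += (shape.targetX - shape.x) * amt;
  shape.y += (shape.targetY - shape.y) * amt;
  return shape;
}
```

Repeated every frame, the shape decelerates smoothly as it approaches the target instead of jumping there.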

The hardest part was keeping the lines connected while shapes moved independently. At first, I tried drawing straight lines, but it looked rigid and didn’t feel like a living mosaic. I solved it by using curved lines with quadraticVertex() and a little randomness at the midpoint, which made the connections feel fluid and unique. I also had to carefully loop through each shape’s connections array so that every connection stayed accurate even when shapes moved. I must have spent hours fiddling with arrays and connections, trying to get it to feel smooth but not rigid. I think it ended up working, and honestly, I kind of love the little quirks. The lines stretch and bend in ways that feel alive. The code snippet:

// Draw connecting lines
stroke(255, 100);
strokeWeight(2);
for (let i = 0; i < shapes.length; i++) {
  for (let target of shapes[i].connections) {
    // curved line for mosaic effect
    let mx = (shapes[i].x + shapes[target].x) / 2 + random(-20, 20);
    let my = (shapes[i].y + shapes[target].y) / 2 + random(-20, 20);
    noFill();
    beginShape();
    vertex(shapes[i].x, shapes[i].y);
    quadraticVertex(mx, my, shapes[target].x, shapes[target].y);
    endShape();
  }
}

The Interactivity:

You can click on any shape, and it will move to a new random location. The lines stretch and follow, showing that even if things move around in life, the connections remain. I considered adding drag-and-drop or color changes, but I kind of like it simple. Letting the shapes shift on their own feels like they have a mind of their own.

Reflection:

This project taught me a few things:

  • Classes and arrays are magical once you get them to work together 
  • lerp() is a lifesaver for smooth motion 
  • push() and pop() are your best friends for rotations and translations (thanks, Coding Train) 
  • Generative art can be messy and chaotic, but sometimes that’s exactly the point 

If I did this again, I’d probably add more personality to each shape, maybe tiny patterns or colors that reflect “who” each piece is. For now, I like how it’s simple, a little messy, and interactive, like a mini community on the screen.

Week 2 – Loops

Inspiration: Sound Waves and Chaos

When I first heard the word loops, the first thing that came to mind was sound waves. I imagined smooth, rhythmic waves moving across the canvas, like music frozen in space. I even found a little example of what I hoped to create, but my coding skills weren’t quite ready to match the vision.

 Scrolling through Instagram one day, I stumbled on an image that sparked a new idea. I thought: what if I could combine sound waves with the chaotic little voice in your head when you overthink?  

Concept Development: From Chaos to Calm

The top right of my canvas is extremely chaotic: dense, loud, overwhelming. But as the lines move toward the bottom left, they thin out and eventually fade into empty space. This represents the calm that comes when that inner chatter finally quiets.

This idea lets me visualize the process of overthinking, moving from noise to quiet, using loops and arcs to represent both emotion and structure. 

Coding Process: Adding Randomness

Here’s a snippet I’m proud of:

//arc

for (let x = 250; x <= width; x += 40) {
  stroke(255, random(150, 255), random(100, 255)); // random color
  noFill();
  let randY = 80 + random(-20, 20); // random vertical position
  let randW = 300 + random(-50, 50); // random width
  let randH = 400 + random(-50, 50); // random height
  arc(x, randY, randW, randH, 250, 450);
}

//line

for (let x = 1; x <= width; x += 20) {
  stroke(255, random(150, 255), random(100, 255)); // random green/pink/blue tint
  let endX = width + random(-50, 50);
  let endY = height + random(-50, 50);
  line(x, 1, endX, endY);
}

After watching Casey Reas’ video on randomness, I was inspired to incorporate a bit of chaos into my work. I added randomness to the color and the shape of the arcs, so every frame looks slightly different. This gives the piece a sense of movement and chaos, like thoughts bouncing around in your head—but still following a hidden structure underneath.

Reflection: What I Learned

This project helped me realize that coding can be just as expressive as traditional art—it’s just another medium to visualize thoughts and emotions. I also learned the value of adding randomness; it brought the project to life in ways my original plan didn’t anticipate. And I noticed that art doesn’t always have to be “pretty”; it can also be beautifully chaotic.

Future Improvements

For the future, I’d love to make the sketch more dynamic and interactive, so the patterns could respond to the user’s input or feel more like actual sound waves. I also want to experiment with layering animations to capture the feeling of overthinking versus quiet peace more vividly. I wish one day I could make something as cool as my original inspiration.

Week 2 – Video Reflection

I had to reconsider my approach to structure after watching Casey Reas’ presentation on chance operations and randomness. I have a background in business, so I’ve always thought that control and planning are crucial for projects, schedules, and data analysis. I assumed the same would apply to coding: clearly define the rules and follow them. The 11×11 grid of dots, one of Reas’ examples, showed how controlled randomness functions: each dot moves according to precise rules, but minor variations or “random” steps result in patterns that change from orderly to chaotic. The thought of chaotic work always scared me. He demonstrated how a system can be both rule-based and unpredictable at the same time by describing instructions like moving in a straight line, bouncing off edges, or slightly deviating from a path. It helped me understand that randomness need not imply a lack of control but can be incorporated into a well-organized framework to produce entertaining and captivating results.

I want everything to be flawlessly structured, so as a beginner coder, I frequently question my work. I enjoy doing things the “right” way and abiding by the rules. Reas’s ability to strike a balance between chance and rules has piqued my interest in trying new things. While allowing randomness to dictate specifics like placement, size, or interactions, I hope to establish boundaries in my own projects, such as establishing a color scheme, the quantity of elements, or grid positions. This method seems applicable not only to coding or art but also to business: stability is provided by a clear structure, but creativity and innovation can be stimulated by flexibility and unpredictability.  Reas’ talk taught me that embracing chance within rules doesn’t undermine order; it makes it more interesting, alive, and full of potential surprises.

Week 1 – Self-Portrait

Project Overview

For this assignment, I created an interactive self-portrait that represents me both visually and personally. The portrait includes:

  • My cartoonish face with long straight hair, eyebrows, eyelashes, nose, and mouth.
  • Interactive eyes that follow the user’s mouse, which I am especially proud of.
  • A background of colorful circles that symbolize “yarn cakes,” reflecting my love for crocheting.
  • A small bunny plushie on the side, inspired by one of my crochet projects, including tiny stitched details to mimic real crochet stitches.

Visual Documentation

I started with an extremely rough sketch to plan what I wanted my self-portrait to look like and the placement of facial features, hair, and background elements. I also noted a couple of interactive elements I hoped to achieve.

I watched a YouTube tutorial to learn the mouse-following pupil technique, which became the most interactive and technically exciting part of my sketch. Honestly, I struggled a lot with this part, but I was determined to get it right. I tried to make the pupils move without leaving the eyes, and after lots of trial and error, I finally figured it out.

The circles in the background represent yarn cakes, inspired by my hobby. Crocheting is similar to knitting, but instead of using two needles, I use a hook to create loops and form fabric.

The little bunny in the corner represents one of my actual crochet projects, and I tried to include stitch details to mimic crochet.

Interactive Features

  • The eyes follow the user’s mouse, giving the portrait a sense of life.
  • Users can move their mouse around the canvas to see how the pupils track it.

Technical Notes

  • Eyes: Pupil positions are calculated by taking the difference between the mouse coordinates and the center of each eye, multiplied by a scaling factor to keep pupils inside the eyeball. This took me some time to get the hang of.
  • Hair & Face: Used basic shapes (rectangles, ellipses, arcs).
  • Background Yarn Cakes: Multiple overlapping circles of different sizes and colors, inspired by real crochet yarn balls.
  • Bunny Plushie: Ellipses for the head and ears, with short lines to simulate crochet stitches.

Research, Tutorials, and Inspirations

  • YouTube tutorials helped with the eye-following technique.
  • Crocheting inspired the yarn cakes and bunny, making the project personal.
  • I was inspired by examples given in class and wanted to create cartoon-style self-portraits similar to them.

Coding I Am Proud Of

I am most proud of the eye-following interaction, which made the portrait feel alive. My code for this feature is well-commented and included below.

// eyes
stroke(0);
fill("#FCC7D9");
ellipse(170, 120, 30, 35); // left eye
ellipse(230, 120, 30, 35); // right eye
fill(255);
ellipse(170, 130, 25, 20); // left eye white
ellipse(230, 130, 25, 20); // right eye white

fill(0);
let leftPupilX = 170 + (mouseX - 170) / 40; // small movement toward mouse
let leftPupilY = 130 + (mouseY - 130) / 40;
ellipse(leftPupilX, leftPupilY, 10, 15); // left pupil

let rightPupilX = 230 + (mouseX - 230) / 40;
let rightPupilY = 130 + (mouseY - 130) / 40;
ellipse(rightPupilX, rightPupilY, 10, 15); // right pupil

Reflection

I am really proud of myself because this is my first time ever coding. I learned a lot of new things and hope to improve along the way.

I wish I could have animated the yarn cakes so they floated gently in the background while the eyes moved around, making the portrait even more dynamic. Linking the yarn cake movement to the eye-following interaction could create a more immersive experience, where the background reacts to the user.

This would enhance the interactive and playful feel of the portrait, connecting personal elements to user engagement.