Midterm Project Documentation: All Day Breakfast

Sketch (press f for fullscreen): https://editor.p5js.org/joyzheng/full/tb0uwj2nP

Overall Concept

As a visiting student at NYUAD, I found the made-to-order dining system, particularly at the All Day Breakfast counter, very confusing. Unlike the pre-made options I was used to, the text-only menus made it difficult to visualize my order. Without pictures, I kept forgetting what and how many items I had ordered (some are Arabic dishes I don’t know), and I often found myself pulling out a calculator to see if my selections added up to a full meal plan.

These frictions made me want to digitize the experience into an interactive game that gamifies the ordering process. The core goal is to provide a more intuitive and visual way for players to assemble a meal, manage inventory, understand the costs, and manage their spending. By turning the process into a game with clear steps and rewards (badges), the project transforms a problem and demand I discovered in my own life into an engaging and replayable experience.

How It Works

The game guides the player through a six-scene narrative that mirrors the real-life process and menu of getting food at the A LA BRASA All Day Breakfast counter in the D2 dining hall.

UI Prototype:

UE:

Scene 1:

Start Screen: The player is presented with the All Day Breakfast counter and the prompt “Ready to Order?”. Clicking the triangle button begins the game. The badge board is also displayed here, showing the player’s progress.

Scene 2:

Choose Food: The player is shown a grill with all available food items. They must first click to pick up a pair of tongs, which then attaches to their mouse. They can then click on food items to pick them up and click on the plate to add them to their meal. The total cost is updated in real-time.

Scene 3:

Scan Items: The player takes their plate to the cashier. They must pick up the scanner tool and move it over each food item on the plate. As each item is scanned, a beep sound plays, and the item is added to a virtual receipt.
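The scanning mechanic itself is essentially a hit test. Below is a rough sketch of how the check could work, treating the scanner tip as a point and beeping on first overlap; the names here (beepSound, receipt, checkScan) are illustrative, not my exact code.

// checks the scanner position against every unscanned item on the plate
function checkScan(plateItems, scannerX, scannerY) {
  for (let item of plateItems) {
    // simple point-in-rectangle test against the item's bounding box
    let inside = scannerX > item.x && scannerX < item.x + item.size &&
                 scannerY > item.y && scannerY < item.y + item.size;
    if (!item.scanned && inside) {
      item.scanned = true;
      beepSound.play();        // assumes a p5.sound file loaded in preload()
      receipt.push(item.name); // add the item to the virtual receipt
    }
  }
}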

Scene 4:

Payment: The cashier opens, revealing a coin tray. The player must pay the total amount shown on the receipt by clicking on coins from a palette and dropping them into the tray.
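Under the hood, the payment step only needs a running total. A minimal sketch, assuming coin values in dirhams (paidSoFar and dropCoin are illustrative names):

let paidSoFar = 0;

// called when a coin lands in the tray; returns true once paid in full
function dropCoin(coinValue, totalDue) {
  paidSoFar += coinValue;
  // round to 2 decimals so the 0.25/0.5 coins don't accumulate float drift
  paidSoFar = round(paidSoFar * 100) / 100;
  return paidSoFar >= totalDue;
}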

Scene 5:

Eat: The player sits down to eat. They must pick up a fork, use it to pick up food from their plate, and bring it to the character’s (the NYUAD Girl’s) mouth to “eat” it, which plays a sound and makes the food disappear.

Scene 6:

End Screen & Badges: After the meal, the game checks if the player’s actions have met the conditions for any new badges. If so, a special animation plays. The player is then given the option to “Dine in AGAIN!”, which resets the game and starts a new session.
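Conceptually, the badge check compares simple session statistics against a list of conditions. A hypothetical sketch in that spirit; the badge names, fields, and thresholds below are made up for illustration:

const BADGES = [
  // spent exactly one full 33.6 meal plan
  { name: "Meal Plan Maximalist",
    earned: (s) => Math.abs(s.totalSpent - 33.6) < 0.001 },
  // ate at least two vegetable items
  { name: "Fiber Friend",
    earned: (s) => s.veggieCount >= 2 }
];

// returns the badges earned this session that weren't unlocked before
function checkNewBadges(sessionStats, unlockedBadges) {
  let newlyEarned = [];
  for (let b of BADGES) {
    if (!unlockedBadges.has(b.name) && b.earned(sessionStats)) {
      unlockedBadges.add(b.name);
      newlyEarned.push(b.name); // each of these triggers the badge animation
    }
  }
  return newlyEarned;
}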

Technical Decisions & Game Design I’m Proud of

I am proud of completing a fully functional and well-designed game within the project timeline, especially after iterating on the initial idea. A key technical challenge was to build the entire game to be fully responsive. The core of the responsive design is a set of helper functions (updateLayoutDimensions, scaleRectangle, scaleValue) that calculate scaling factors based on the current window size versus the original 700×500 design grid. This allows every element to reposition and resize dynamically, ensuring the game is playable on any screen.
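In spirit, the helpers look something like this; the function names are from my project, but the simplified bodies below are a sketch rather than the exact implementation:

const DESIGN_W = 700;  // original design grid width
const DESIGN_H = 500;  // original design grid height

let scaleX = 1;
let scaleY = 1;

// recompute the scale factors whenever the window size changes
function updateLayoutDimensions() {
  scaleX = width / DESIGN_W;
  scaleY = height / DESIGN_H;
}

// map a rectangle defined on the 700x500 grid to current canvas coordinates
function scaleRectangle(r) {
  return { x: r.x * scaleX, y: r.y * scaleY, w: r.w * scaleX, h: r.h * scaleY };
}

// scale a single length (e.g., an item size) by the smaller factor
// so elements never overflow the shorter screen dimension
function scaleValue(v) {
  return v * min(scaleX, scaleY);
}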

Discussing the project with Professor Mang also helped improve its interactivity and replayability. Together we came up with the ideas of a stock management system and humorous badge rewards that any NYUAD student who has been to this dining hall could resonate with (e.g., never being able to spend a whole meal plan; why is 1 meal plan 33.6? Is that 0.1 a service fee?). I designed the inventory to match how the counter usually works: there are always only a few avocado toasts, and to this day I have never managed to get the tofu omelet. Overall, the game is also meditative and, in some sense, educational: it reminds people to feed themselves well at the dining hall even when they are rushing between classes, and encourages a balanced meal with enough fiber every day.
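The stock system itself can be as simple as a lookup table that runs out. A hypothetical sketch of the idea (foodStock and takeFood are illustrative names, with counts mimicking the real counter):

let foodStock = {
  "Avocado Toast": 2,  // scarce, just like in real life
  "Tofu Omelet": 0,    // perpetually sold out
  "Hash Brown": 10
};

// decrements the stock and reports whether the item could be taken
function takeFood(name) {
  if (foodStock[name] > 0) {
    foodStock[name]--;
    return true;
  }
  return false; // sold out: the item stays greyed out on the grill
}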

// =======================================
// SCENE 2: CHOOSE FOOD
// this function calculates the responsive positions for all food items in scene 2
function buildScene2FoodGrid() {
  // clears the array of food objects to ensure a fresh start each time the grid is rebuilt (e.g., window resize)
  scene2FoodObjects = [];

  // constants that define the original pixel dimensions of the background art and the specific rectangular area within it where the food is displayed
  const sourceImageSize = { w: 1536, h: 1024 };
  const sourceFoodArea = { x: 124, y: 138, w: 1284, h: 584 };
  
  // responsive calculation
  // current on-screen position and size of the food area
  // by finding the scaling ratio between the current canvas and the original background image
  // so the grid always perfectly overlays the correct part of the background art
  const foodGridRect = {
      x: sourceFoodArea.x * (canvasWidth / sourceImageSize.w),
      y: sourceFoodArea.y * (canvasHeight / sourceImageSize.h),
      w: sourceFoodArea.w * (canvasWidth / sourceImageSize.w),
      h: sourceFoodArea.h * (canvasHeight / sourceImageSize.h)
  };
  
  // the calculated grid area is then divided into cells (8 columns by 2 rows) to position each food item
  const columns = 8;
  const rows = 2;
  const cellWidth = foodGridRect.w / columns;
  const cellHeight = foodGridRect.h / rows;
  
  // the size of each food item is based on the smaller dimension (width or height) of a grid cell
  // this prevents the food images from looking stretched
  // scaled by 70% to add padding
  const itemSize = min(cellWidth, cellHeight) * 0.7;
  
  // this loop iterates through every food item defined
  for (let i = 0; i < ALL_FOOD_ITEMS.length; i++) {
    // Math.floor() and % convert the 1D loop index (i) into a 2D (row, col) grid coordinate
    let row = Math.floor(i / columns);
    let col = i % columns;
    
    // calculates the final top left (x, y) coordinate for each food item
    // starts at the grid's origin
    // adds the offset for the column/row
    // adds a centering offset
    let itemX = foodGridRect.x + col * cellWidth + (cellWidth - itemSize) / 2;
    let itemY = foodGridRect.y + row * cellHeight + (cellHeight - itemSize) / 2;
    
    // a new food object is created with its calculated position and size
    // added to the array to be drawn
    scene2FoodObjects.push(new FoodItem(ALL_FOOD_ITEMS[i], itemX, itemY, itemSize));
  }
}


The most complex piece of code, and the one I’m most proud of, is the logic in the buildScene2FoodGrid() function. Unlike other elements that scale relative to the canvas, this grid must scale relative to the background image itself to ensure the food items are perfectly aligned with the artwork.

This logic calculates a scaling ratio based on how the background image has been stretched to fit the screen, and then applies that same ratio to the coordinates of the food grid. It’s a powerful piece of code that makes the experience feel seamless.

Challenges & Improvements

The development process was a valuable relearning of game development. I was surprised by the amount of free asset resources and tutorials for game development available online. I was also inspired by the Coffee Shop Experience example and how it uses p5.js to manage a game and toggle between scenes.

One of the most surprisingly time-consuming challenges was a debugging session that lasted hours, only for me to discover I had misspelled “diarrhea” as “diarreah” or “diareah” in different locations. This taught me the importance of meticulous checking and of creating simple debugging tools to isolate issues early.

I also got the opportunity to explore AI-generated assets through this project. Given the huge number of assets required, AI generation was probably the only way for me to finish on time. Even so, I still spent at least half of the development time going back and forth with the generators, hoping to “draw a good card” of images. To be honest, Nano Banana wasn’t worth the hype for image creation. For game asset development, ChatGPT turned out to be the best choice after I tried a few alternatives like Midjourney and Canva: it is lightweight and supports transparent PNG backgrounds, so its output can be used directly without manually removing the background.

For the future, I have several ideas for improvement:

  1. Expand to Other Counters: I would like to implement a similar ordering system for the D1 dining hall, which also has a confusing menu.
  2. UI Enhancements: I plan to add a toggle to hide or show the badge board, giving the player more control over their screen space.
  3. More Badges: Adding more creative badges would further increase the incentive for players to try different food combinations and spending strategies.
  4. Scene Refinement: Some scenes are quite dense with assets. In a future version, I might split complex steps into more scenes to make the layout feel cleaner and less cluttered.
  5. Real Implementation: After the midterm, I will demo this to the dining hall manager to see if they want to adopt this ordering system, or simply use a more intuitive and interactive menu to run the dining hall more efficiently.

Week 5 Midterm Progress

Concept

For my midterm project, I came up with this dining hall idea at the last minute. I had originally been inspired by music interactivity in p5.js and considered continuing with my earlier idea of a meditation game. But while eating lunch, I came up with a new idea that felt both playful and relevant to my experience here at NYUAD. So I mostly worked on replanning my idea and preparing assets this week.

As a visiting student from the New York campus, I was used to the dining hall’s pre-made meals. But at NYUAD, the on-demand menus were at first a little overwhelming. Without pictures, I often had no idea what I had ordered (especially with Arabic dishes I wasn’t familiar with), and I even found myself pulling out a calculator to check how much I had left in my meal plan and how much I had ordered. Counters like All Day Breakfast felt especially confusing.

So my concept is to digitize the experience of eating at NYUAD’s D2 All Day Breakfast counter. The project will let users visualize the ordering process, making it more interactive and hopefully reducing the friction that comes with navigating the real-life menu.

User Interaction

Planned Scenes (prototype):

1. Entering the A LA BRASA counter and tapping into the menu

2. Picking up the clamp to get food from the grill to the plate

3. Scanning food on the plate at the cashier’s scanner

4. Paying with coins in the cashier tray (display receipt?)

5. Eating!!

6. Burping to finish the meal


Assets:

Audio:

Dining hall ambient background

Cashier scanner beep

Cash register “kaching”

Burp sound

Yumyum sound


Pixelated images:

A LA BRASA counter background

All Day Breakfast menu

Grill plate

Clamp

Plate

Cashier scanner

Cashier with coins tray

Coins (D5, D3, D2, D1, D0.5, D0.25)

Fork


Pixel art food items:

Avocado fried egg toast

Avocado toast

French toast

Fried egg

Scrambled egg

Plain omelet

Cheese omelet

Mixed vegetable omelet

Tofu omelet

Hash brown

Chicken sausage

Beef bacon

Turkey bacon

Classic pancake

Coconut banana pancake

Small bowl salad


The Most Frightening Part & How I’m Managing It

The biggest challenge I anticipate is gathering and aligning all these assets into a coherent game within the midterm timeframe. Real-life food images can be messy and hard to unify visually. To reduce this risk, I’ve decided to make everything in pixel art style. Not only does this match the “breakfast game” aesthetic, but it also makes it much easier to align items consistently.

Since Professor Mang mentioned we can use AI to help generate assets, I’ve been experimenting with transforming photos of my own plates and my friends’ meals into pixelated versions. This approach makes asset creation more manageable and ensures I’ll be able to integrate everything smoothly into the game.


Week 5 Reading Response

Computer vision isn’t really “vision” in the way humans experience it; it’s more like a giant calculator crunching patterns in pixels. Where we see a friend’s smile and immediately read context, emotion, and memory, the computer just sees light values and tries to match them against models. It’s fast and can process way more images than a person ever could, but it lacks our built-in common sense. That’s why artists and developers often need to guide it using things like face detection, pose estimation, background subtraction, or optical flow to help the machine focus on what’s actually interesting. Tools like MediaPipe, which can map out your skeleton for gesture-based games, or AR apps that segment your hand so you can draw in mid-air, could let us bridge the gap between human intuition and machine literalism.

But once you start tracking people, you’re also borrowing from the world of surveillance. That’s a double-edged sword in interactive art. On one hand, it opens up playful experiences. On the other, the same tech is what powers CCTV, facial recognition in airports, and crowd analytics in malls. Some artists lean into this tension: projects that exaggerate the red boxes of face detection, or that deliberately misclassify people to reveal bias, remind us that the machine’s gaze is never neutral. Others flip it around, letting you “disappear” by wearing adversarial patterns or moving in ways the system can’t follow. So computer vision in art isn’t just about making the computer “see”, it’s also about exposing how that seeing works, what it misses, and how being watched changes the way we move.

You can also invert the logic of surveillance: instead of people being watched, what if the artwork itself is under surveillance by the audience? The camera tracks not you but the painting, and when you “stare” at it too long, the work twitches as if uncomfortable. Suddenly, the power dynamics are reversed.

Week 4 Reading Response

One everyday object that always drives me crazy, though it has little to do with interactive media, is my water bottle. Every bottle I’ve owned is either too tight to open or so loose it leaks everywhere. With the one I’m currently using, I always end up struggling as if it were a test of grip strength. The most baffling example is the viral Stanley cup: people on social media complain nonstop about leaks, and yet it still became a must-have item. I’ve even seen TikToks where the “fixing the leak” process itself becomes a kind of trend. From Norman’s perspective, that feels like a failure of design: the affordance (a secure, portable cup) clashes with signifiers (the lid/the cap) that don’t clearly tell you how to tighten it enough. The fact that millions of users are improvising and complaining is proof that the “system image” isn’t communicating what the designer intended.

Norman’s principles also made me think about my own experiments in p5.js. When I build interactive sketches, I realize I need to add clear signifiers, not just rely on “hidden” affordances. For example, I once made a generative text piece where users could drag characters around, but unless I showed some visual cue, like a subtle highlight or a cursor change, no one discovered the interaction. It’s exactly what Norman warns about: affordances exist, but if they’re invisible, they fail.

Another part of the article I found really interesting was Norman’s discussion of mapping. He uses the example of car seat adjustment buttons shaped like the actual seat: you just push the part of the button that matches the part of the seat you want to move. That struck me because it feels so obvious and natural, yet it makes you realize how many objects miss this. I think about the confusing array of stove knobs in my apartment, where I’m constantly turning on the wrong burner. If those knobs had a layout that mapped directly to the burners, I’d never make that mistake. It made me realize that whether it’s an app, a sketch, or just a stove, people shouldn’t need labels and trial-and-error to figure out something so basic.

Week 4: Generative Text Art

Concept

My brainstorming for generative text art started with the Tibetan font we saw in class. I was amazed at how this niche and complicated language could be incorporated into Unicode and redesigned into various new formats. Then I came up with the idea of generating typographic rearrangements inspired by the Chinese Wubi input method, which decomposes characters into strokes that can then be recombined into full forms. I wanted to apply this same principle of decomposition and recomposition to my favorite language of all time: Nüshu (the direct keyboard shortcut for ü is v, so I refer to it as Nvshu in the following).

Nvshu was historically used in Jiangyong County, Hunan, China, as a secretive women’s writing system. It is the only known script in the world designed and used exclusively by women, and was transmitted through sisterhood groups, generational teaching, and playful exchanges. Today, Nvshu often appears on carvings, embroidered fans, handbags, or even tattoos, celebrated as both a form of sisterhood heritage and an aesthetic design language.

My goal was to design a digital generative “Nvshu board” where users can appreciate the beauty of these glyphs and play with them (dragging, rearranging, and composing their own patterns). The resulting forms resemble ornaments, motifs, or even tattoo-like compositions, highlighting how a script can carry both linguistic and visual meaning.

Sketch (Written Horizontally, same as historical Chinese Characters)

Highlight Code

The code I am most proud of is the section that generates random Nvshu glyphs from the Unicode range and places them into a 10×10 board. To make sure the characters display properly, I had to research Unicode code points and how to convert them into visible strings in p5.js. The snippet below is the most rewarding part of my code:

// randomize and round down to an integer so it matches a valid code point
let nscode = floor(random(0x1B170, 0x1B2FB + 1));
// take that code and create the string from the Unicode code point
// saw this way of locating Unicode in chinese-fonts p5.js works by tytyshi
let nsglyph = String.fromCodePoint(nscode);
// store it into the array with parameters

Here, random() generates a floating-point number within the Unicode range for Nvshu (0x1B170 to 0x1B2FB inclusive). floor() ensures the result is an integer, since Unicode code points must be whole numbers. Then, String.fromCodePoint() converts that integer into the actual Nvshu glyph. This was a breakthrough moment for me, because it connected abstract Unicode numbers to visible characters and allowed me to fill the canvas with living Nvshu script.

On top of this, I added drag-and-drop interactivity: when the user presses a glyph, the program tracks it and updates its coordinates as the mouse moves. This simple interaction lets users create their own custom compositions from the script.
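The drag logic follows the usual press-drag-release pattern. A minimal sketch, assuming each glyph is stored as an object with x/y fields (draggedGlyph and glyphs are illustrative names):

let draggedGlyph = null;

function mousePressed() {
  // pick up the glyph under the cursor, if any (20px hit radius assumed)
  for (let g of glyphs) {
    if (dist(mouseX, mouseY, g.x, g.y) < 20) {
      draggedGlyph = g;
      break;
    }
  }
}

function mouseDragged() {
  // the held glyph follows the mouse
  if (draggedGlyph) {
    draggedGlyph.x = mouseX;
    draggedGlyph.y = mouseY;
  }
}

function mouseReleased() {
  draggedGlyph = null; // drop the glyph in place
}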

Future Improvements and Problems

While researching fonts, I first discovered a brush-style Nvshu font package on GitHub. However, its size (18.3MB) exceeded the 5MB upload limit of the p5.js online editor, which left Noto Sans Nushu on Google Fonts as my only option. Its 0.2MB size really surprised me; seeing how powerful a unifying typeface can be made me think about how technology preserves cultural memory. I also consulted Lisa Huang’s design reflections on the challenges of Noto Sans Nüshu (https://www.lisahuang.work/noto-sans-nueshu-type-design-with-multiple-unknowns) to better understand the typographic logic of the font.

I would also like to add a save feature in the future, so that once users have composed a design, they can export it as an image (to use as a print, tattoo design, or ornament).
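In p5.js this could be as small as a key binding around saveCanvas(); the ‘s’ shortcut below is just a sketch of the idea:

function keyPressed() {
  // press 's' to export the current composition as a PNG
  if (key === 's' || key === 'S') {
    saveCanvas('nvshu-board', 'png');
  }
}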

This project combined cultural research, typographic exploration, and interactive coding. By bringing Nvshu into a playful digital space, I wanted to highlight both its fragility as an endangered script and its resilience through Unicode and modern typography. Users can engage with scripts not only as carriers of language but also as visual and cultural materials.

Reading Reflection – Week#3 – Joy Zheng

I find the final review questions delightfully absurd, so I really want to start my reflection by pondering over them.

    1. Are rugs interactive?
      • Before I read Crawford’s clear division of interactivity, I had never really thought about this question before, and I probably would have said yes because of its interactive tactility. But by Crawford’s definition, no—rugs don’t “listen” or “think.” However, if we glue Arduino sensors to one, making it play synth sounds when stepped on, it becomes interactive. While his definition is useful, interactivity’s boundaries blur with creativity; anything has the possibility to be interactive when reimagined with tech. Socrates’ fear of silent text feels ironic in the age of AI. What if books “could” argue back? Maybe that’s the next Kindle update.
    2. My own damn definition of interactivity:

      • Interactivity is when a system not only reacts to me but also alters how I react the next time. It’s a loop of influence, not just a one-off cause and effect. If the system surprises me or teaches me something about myself, that’s when it’s truly interactive.
    3. Throw this book across the room. Measure the distance it traveled and the angle of impact:
      •  I didn’t actually throw the book (my roommate might object), but imagining it, I realize the question is poking fun at how we measure “outcomes.” Distance and angle are just the physical data. The real interaction is in the frustration, laughter, or satisfaction I’d feel as the book hits the floor.

When I think about the characteristics of a strongly interactive system, what strikes me most is not the sophistication of the technology but the reciprocity of the relationship. An interactive system is strong when it feels alive: it doesn’t just accept my input but adapts, resists, or even surprises me in return. Instead of a flat cause-and-effect chain, it’s more like a conversation with another mind. I see this quality echoed in the text’s emphasis on feedback loops—interactivity emerges from cycles of action and response rather than one-time events.

For my p5 sketches, I want to push beyond the “click to change color” type of interactions. One idea is to give the sketch a form of “mood” that evolves depending on how the user treats it. If the mouse lingers gently, shapes might bloom and expand; if the cursor jerks around aggressively, the system could retreat or glitch out. In other words, I’d like to design sketches that don’t just follow orders but perform a little interpretation of the user’s intent. This would make interaction less like pressing buttons and more like building an emotional relationship with the code.
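A toy version of this idea could derive a mood value from cursor speed; everything below is a hypothetical sketch, not existing code:

let mood = 0; // 0 = calm, 1 = agitated

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(240);
  // cursor speed since the last frame
  let speed = dist(mouseX, mouseY, pmouseX, pmouseY);
  // ease mood toward the current speed so it evolves rather than jumps
  mood = lerp(mood, constrain(speed / 50, 0, 1), 0.05);
  // calm: a large soft bloom; agitated: a small jittery retreat
  let r = lerp(120, 30, mood);
  let jitter = mood * 10;
  noStroke();
  fill(100, 150, 255, 180);
  circle(width / 2 + random(-jitter, jitter),
         height / 2 + random(-jitter, jitter), r);
}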

Week 3 – OOP Array Production – Joy Zheng

Concept

This project is inspired by Yayoi Kusama’s immersive polka-dot installations at the Queensland Gallery of Modern Art in Australia and her collaborations with Louis Vuitton. I’ve always admired how her work visualizes the flow of her inner world and allows viewers to enter an environment of rhythm and playfulness. My goal was to adapt this experience into a digital dot space, where users are not only observers but active participants.

Yayoi Kusama’s Collab with Louis Vuitton

Yayoi Kusama’s Interactive Installation at the Queensland Gallery of Modern Art in Australia

Embedded Sketch

Highlighted Code

The part I am most proud of is the growth logic of my Dot:

function mouseDragged() {
  // lets the user draw their own dots on top of the background:
  // each drag event adds a new colored dot at the cursor
  let colorpick = random(colors);
  let d = random(5, 20);
  let newDot = new Dot(mouseX, mouseY, d, colorpick);
  dots.push(newDot);

  // expand all existing dots within 10px of the cursor
  for (let i = 0; i < dots.length; i++) {
    if (dist(mouseX, mouseY, dots[i].xPos, dots[i].yPos) <= 10) {
      dots[i].grow();
    }
  }
}

This method became powerful when paired with mouse interactivity and repetition. In mouseDragged(), every drag event adds a new dot at the cursor, and any existing dot within 10 pixels of the mouse grows. By continuously dragging the mouse, viewers can keep adding new dots and expanding nearby ones, creating a swirling, psychedelic canvas that evokes Kusama’s sense of infinity.

This small combination of object-oriented design created an effect where the user’s movement actively reshapes the artwork. It transformed the piece from a static background into a living, evolving canvas.
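For context, here is a minimal Dot class consistent with the calls above; my actual class may differ, so treat this as just the shape the code assumes:

class Dot {
  constructor(xPos, yPos, diameter, dotColor) {
    this.xPos = xPos;
    this.yPos = yPos;
    this.diameter = diameter;
    this.dotColor = dotColor;
  }

  grow() {
    // cap the size so a single dot never swallows the canvas
    this.diameter = min(this.diameter + 2, 80);
  }

  display() {
    noStroke();
    fill(this.dotColor);
    circle(this.xPos, this.yPos, this.diameter);
  }
}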

Reflection and Future Work 

Working with OOP in p5.js helped me think in a modular way. By separating Dot objects and their behaviors, I avoided repetitive code and could focus on experimenting with interactivity. One challenge was deciding which mouse functions to use: mousePressed() felt more like stamping stickers, while mouseDragged() created a more immersive, “infinite swirl” effect that matched Kusama’s style better.

For future work, I want to experiment with distributing the dots to match the real-life exhibit of the show. I’m also excited to explore Kusama’s “Infinity Nets” in a 3D or VR environment, where users could walk through an endless dot universe instead of only drawing on a flat canvas. I want to keep exploring how interactive code can reinterpret her immersive art (e.g., Pumpkin) in digital spaces.

Reading Reflection – Week#2

When Casey Reas presented on the section of the interplay between order and chaos in art, I was struck by the way geometry became a central thread in the works he showed. What first appears as random begins with a point, then extends into a line, a plane, a pyramid, and beyond. This progression made me reflect on how art can serve as a medium for visualizing different dimensions, not just one or two, but conceptually even higher dimensions that are difficult to visualize mathematically.

These pieces also reminded me of a saying that what we perceive as “fixed reality” is often the outcome of countless random events in the past. This prompted me to reconsider the very idea of randomness. In the digital world, randomness is never truly random but rather pseudo-random, generated by algorithms. If randomness is always mediated by machines, code, and computational logic, then perhaps chance itself is never pure but always carefully curated within larger systems, or even by higher-dimensional “creatures.”

In the last works that Reas showed, randomized pixels can be flipped into recognizable icons and numbers. This reminded me that the rules and symbolic systems we rely on every day, language, notation, even code itself, are not inevitable truths but constructed layers that emerged from countless iterations, decisions, and constraints. What seems stable is in fact the result of “layered randomness,” structured into order through history and standardization. Even today, the simple random() function in p5.js, which feels effortless to call, is built upon decades of infrastructural layering: from punch cards and military research to modern standards like Unicode. Each of these conveniences conceals a history of constraints, distilling complex philosophies and technologies into modular tools that allow digital artists to create with apparent spontaneity.

To answer the question of the optimum balance between total randomness and complete control, my mind drifted toward a more philosophical side. Can there ever be an autonomous “optimal point,” or is every balance ultimately surveilled and regulated? I find myself leaning toward a more pessimistic answer, suspecting that what appears to be freedom within randomness is always already framed by invisible structures of control. 

Week 2 Loop Assignment – Joy Zheng

Concept

My concept for this work came from an in-class moment. When Professor Mang was demonstrating looping horizontal and vertical lines, my eyes were tricked into seeing illusory diagonal lines that weren’t actually there.

After chatting with a psychology-major friend who is taking a course on visual perception, I learned this effect is a kind of optical illusion (our brain “fills in” missing shapes or creates patterns that aren’t there). She also introduced me to a set of famous illusions, and I was especially fascinated by the Cafe Wall Illusion, where parallel lines look slanted (though they really are parallel) when rows of black and white tiles are offset. I decided to recreate that illusion using nested loops, because it is simple yet powerful in showing how math and art can merge to create perceptual and artistic effects.

The effect turned out to be amazing. You can actually see the tiles appear to slope when you look at them from a certain distance. I noticed that the illusion doesn’t work as well when you are too close.

Embedded sketch

The interactivity of p5.js also inspired me to experiment with different ways of breaking the illusion, so I found that adding bold lines between the tiles is a surprisingly effective method.

A highlight of some code that you’re particularly proud of

My favorite pieces of code are the row-offset logic and the interactive show-lines feature:

for (let y = 0; y < rows; y++) {
  // every odd row is pushed in by half a tile
  let offset = 0;
  if (y % 2 == 1) {
    offset = tileSize / 2;
  }
  // for every column
  for (let x = 0; x < columns; x++) {
    // even columns are black, odd columns are white
    if (x % 2 == 0) {
      fill(0);
    } else {
      fill(255);
    }
    // draw the tile, shifted half a tile on offset rows
    rect(x * tileSize + offset, y * tileSize, tileSize, tileSize);
  }
}
// display the illusion-breaking lines when clicked
if (showLines) {
  fill(150);
  for (let y = 1; y < rows; y++) {
    rect(0, y * tileSize, width, 5);
  }
}
This small conditional offset, embedded in a loop, shifts every other row by half a tile, which is what makes the illusion possible. The simple show-lines feature gives me an immediate way to break the illusion. I especially love these parts because they show how a very minimal algorithm can have a huge impact on the final visual. Without the offset, the grid reverts to an ordinary checkerboard.

Reflection and ideas for future work or improvements

This project inspired me to think about how simple loops and conditions can recreate complex psychological effects. It reminded me that coding is not only technical but also artistic—it can play with human perception in surprising ways. In the future, I would like to expand this piece by making the illusion interactive: for example, letting the user adjust the tile size or the offset with a slider, or animating the tiles so that the illusion shifts dynamically. I would also like to try building other illusions, like the Zollner illusion or the Kanizsa triangle, to see how far p5.js can push visual trickery.
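The slider idea maps naturally onto p5’s createSlider(); a possible starting point (tileSlider is a hypothetical name):

let tileSlider;

function setup() {
  createCanvas(600, 400);
  // tile size from 20px to 80px, starting at 40px, in steps of 5
  tileSlider = createSlider(20, 80, 40, 5);
}

function draw() {
  background(255);
  let tileSize = tileSlider.value();
  // ...redraw the cafe wall grid here with the nested loops above...
}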

Week 1 Self-portrait Joy

Concept

I started the idea of pixel art from the piece of wooden pixel art we mentioned in our first class (Wooden Segment Mirror). I’ve also recently been obsessed with Perler beads (plastic fusible beads that are arranged into patterns and fused together with an iron). So, I pixelated one of my self-portraits and created a canvas of 10 × 12 colored squares.

To make it more interactive, I also used the mouseClicked() function to add a more immersive version that removes the pixel strokes.

Code Highlight

I’m particularly proud of the color list. When I started, I wanted an automatic color-picker program to create the list for me. However, after a few trials, none of the available automatic tools could handle this many colors: they either gave fake hex codes or a single palette capped at 10 colors. So I researched manual color pickers and found this one (https://colorpickerfromimage.com/) to be the best hex color-picking shortcut of all. Since I had to pick up all the colors manually, it also pushed me to optimize the rest of my code to avoid repeated manual calls and instead use a loop to create all the squares.

let colors = ["#9194a6","#7d7d8f","#c29c8e","#e8bc9e","#ebbe9e","#dcac8c","#a38572","#5b585a","#5b585a","#c0cc66","#767787","#736f79","#c19a88","#e7b691","#edbb93","#daa884","#a5846d","#5b5350","#685d56","#dfe1b5","#676672","#6c6266","#ab8273","#ab8273","#e5af83","#927466","#7c6258","#544944","#665a52","#bea7a9","#5c5258","#6c5554","#a17d68","#db9f7e","#e8b08a","#5e4540","#785145","#544139","#665c56","#cecccf","#5c4b4a","#b57b64","#de9b79","#e5aa8a","#e6a68d","#d09375","#c18067","#5c443a","#6a5c57","#ccc7c9","#674e46","#cd8e6a","#eaa67d","#e1a286","#e7ab93","#e6a880","#d8946e","#5c4035","#544b47","#cdcdd3","#543d36","#d7996a","#e9ae7e","#e0a17b","#d08b5d","#e0a46d","#d39560","#3b2922","#49423b","#ccc7c4","#372623","#c28b5c","#d5936e","#c6735c","#c67255","#d59761","#7f5535","#34231c","#3a2c23","#625c43","#33221e","#644430","#e4a976","#da8667","#da7c5e","#af794d","#341d13","#39261d","#3b281e","#3b281e","#39251e","#5d402d","#d29e74","#d59d6d","#a6754b","#2b1810","#2b1b14","#422c1e","#4c3626","#28190a","#4d372d","#694735","#d09c74","#d29d72","#7c593e","#362822","#3a2b24","#442f21","#78614d","#4d3b26","#4e382f","#6a4833","#d69a74","#c99874","#49362d","#2f221f","#3c2f2b","#402d20","#876e51","#83735c"]
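The loop I mention reads straight through that array; a simplified sketch of it, assuming a 10 × 12 grid with an illustrative cell size:

const COLS = 10;
const ROWS = 12;
const CELL = 30; // on-screen size of each pixel square

function setup() {
  createCanvas(COLS * CELL, ROWS * CELL);
  noLoop(); // the portrait is static, so draw it once
}

function draw() {
  for (let i = 0; i < colors.length; i++) {
    // convert the 1D index into grid coordinates, row by row
    let col = i % COLS;
    let row = floor(i / COLS);
    fill(colors[i]);
    rect(col * CELL, row * CELL, CELL, CELL);
  }
}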

I also think the way p5.js displays these hex codes is another piece of art that breaks down the color of me into hex code pieces.

Reflection

For my future works, this project reminded me of the importance of planning before starting. Digital art is a more carefully planned process compared to free-form drawing on paper. This time, I began by creating all the squares manually, but soon realized how redundant that was, so I turned to a draft note to jot down the ratio calculations. Overall, the process of choosing colors and planning turned out to be quite meditative.