Week 4 – Generative text

Concept:
I was inspired by the idea of an “answer book.” When people feel confused or overwhelmed, they sometimes speak their question out loud and then randomly open a book to any page. Even though the page is chosen by chance, a sentence on that page can feel meaningful or comforting, and it can help the person feel calmer or more encouraged. Based on this idea, I created a simple online version of the answer book. The user brings their own question, then clicks once to receive a randomly generated sentence. The goal is not to give a perfect or “correct” answer, but to offer a small prompt that can shift the user’s mood, support reflection, and help them move forward.
How this was made:
I made this project by following a simple p5.js workflow and using a mix of learning resources to solve specific problems as they came up. I started by watching beginner YouTube tutorials on p5.js to understand the basic structure of a sketch, especially how `setup()` and `draw()` work, and how to display text on the canvas. After I had a basic template running, I used the official p5.js reference website to check the correct syntax for functions like `text()`, `textAlign()`, `textSize()`, `mousePressed()`, and `saveCanvas()`.

Next, I built the “answer book” logic using online examples about randomness and arrays. I created several word banks (sentence starters, subjects, verbs, adjectives, and endings) and then used `random()` to pick one item from each list. I combined those parts into a full sentence, so every click generates a new “page” from the book.

Finally, I tested the output many times and edited the word lists to make sure the sentences sounded smooth and encouraging, not broken or repetitive. When I got stuck or wanted to improve the design, I also used generative AI as a helper. I asked it for simple code patterns, like how to organize arrays, how to avoid repeating the same quote too often, and how to make the layout look cleaner.

The part I am proud of:
Honestly, the whole thing was fun and interesting, though it took me some time to get the part working that joins all the words together into a sentence. But I would say the word banks themselves were the most challenging part, since I had to go through many answer books and collect short phrases so that the assembled sentences (somehow) make sense.

// Word banks 
let starters = ["Today I learned", "I keep forgetting", "I wish", "I noticed", "It feels like", "Sometimes", "I hope", "I confess"];
let subjects = ["my phone", "the app", "the door", "my brain", "the algorithm", "the website", "this class", "the thoughts"];
let verbs = ["is", "acts", "behaves", "works", "fails", "pretends", "keeps trying", "destroys"];
let adjectives = ["confusing", "too fast", "too slow", "strangely helpful", "kinda broken", "overdesigned", "simple"];
let endings = [
  "and I blame the design.",
  "but I'm still alive.",
  "so I take a deep breath.",
  "and somehow it makes sense.",
  "and then I start over.",
  "and that’s the whole story."
];
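The assembly step itself can be sketched like this. This is a plain-JavaScript sketch of the same logic, not the exact sketch code: in p5.js, `random(list)` does the picking in one call, and `Math.random` is used here only so the snippet runs anywhere. The shortened banks and the re-roll guard against immediate repeats are illustrative.

```javascript
// Minimal sketch of the "one item from each bank" assembly,
// plus a simple guard so the same page never shows twice in a row.
// Shortened banks; the full lists live in the sketch above.
const starters = ["Today I learned", "I wish", "I noticed"];
const subjects = ["my phone", "the app", "my brain"];
const verbs = ["is", "acts", "keeps trying"];
const adjectives = ["confusing", "too fast", "strangely helpful"];
const endings = ["and I blame the design.", "so I take a deep breath."];

// p5's random(list) does this in one call
function pick(list) {
  return list[Math.floor(Math.random() * list.length)];
}

let lastAnswer = "";

function newAnswer() {
  let sentence;
  do {
    sentence = [pick(starters), pick(subjects), pick(verbs),
                pick(adjectives), pick(endings)].join(" ");
  } while (sentence === lastAnswer); // re-roll immediate repeats
  lastAnswer = sentence;
  return sentence;
}

console.log(newAnswer());
```

In the actual sketch, a call like this would sit inside `mousePressed()`, so each click turns a new "page."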

Conclusion:
I think I did a good job on this assignment. If I had more time, though, I would add more phrases to the word banks so the generated text repeats itself less.

Week 4 – Reading

One thing that drives me crazy is QR-code restaurant ordering. I scan the code and it opens a slow website that is hard to use. Important things are hidden, like how to add notes (“no ice,” “no peanuts”), and the buttons are often small or confusing. Sometimes the page refreshes and my order disappears, so I have to start over. It makes me feel like I’m doing extra work just to buy food.

Using Norman’s design ideas, this could be improved in a few simple ways. First, the main actions should be obvious, like “Order,” “Ask for help,” and “Pay.” Second, the steps should match how people actually think: pick items, customize, review, then submit. Third, the system needs clear feedback, like “Added to cart,” “Order sent,” and a confirmation screen, so I know it worked. It should also prevent common mistakes by reminding me if I forgot a required choice, like size or toppings. Finally, it should save my progress if the internet is bad, instead of deleting everything.

We can apply the same principles to interactive media, like games, apps, VR, or interactive art. The most important thing is discoverability: people should quickly understand what they can do. Interactive objects should have clear signs, like a highlight, an icon, or a small animation that hints “touch me” or “pick me up.” Controls should also feel natural. If I move something left, the result should move left, not the opposite, unless the experience clearly explains why.

Feedback is also essential in interactive media. When I tap, grab, or press something, the system should respond right away with sound, vibration, movement, or a visual change. This makes the experience feel reliable. Good design also helps users recover from errors. There should be a clear way to undo, restart, or exit, so users don’t get stuck feeling confused. Overall, Norman’s principles remind us that if many people struggle, the design should change—not the user.

Assignment 3 – The Night

Concept:
I imagined a night view while making this piece. This weekend I went to teamLab, and some of the installations reminded me of the night scenery back in my hometown. The windows turning on and off represent the feeling of being in a moving car at night. When you look out from the car, the city lights seem to flicker and flash past, and the view is not always clear. That memory is what inspired the changing lights in my artwork.
How this was made:
I made this through p5.js. Firstly, I drew the background, including the sky and the stars, and also the ground. This part was not very challenging. I just used basic shapes like rectangles and circles. For the sky, I drew many thin rectangles to make a simple gradient. For the stars, I used random positions, so the stars look different each time.
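The strip-by-strip gradient can be sketched as follows. This is a plain-JavaScript version of the idea, with made-up example colors: it computes one interpolated RGB triple per strip, and in the p5.js sketch each triple would be passed to `fill()` before drawing the corresponding thin rectangle (p5's `lerpColor()` can do the same mixing).

```javascript
// Build one interpolated RGB color per horizontal strip,
// blending from a top color down to a bottom color.
function gradientStrips(top, bottom, strips) {
  const colors = [];
  for (let i = 0; i < strips; i++) {
    const t = strips === 1 ? 0 : i / (strips - 1); // 0 at top, 1 at bottom
    colors.push(top.map((c, k) => Math.round(c + (bottom[k] - c) * t)));
  }
  return colors;
}

// e.g. 120 strips from a deep blue to a night purple (example values)
const sky = gradientStrips([5, 5, 30], [40, 20, 70], 120);
```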
After that, I started to create the city. I wanted the buildings to fill up the whole canvas, so I used a loop that keeps adding buildings until the x position reaches the right side of the screen. I also needed the buildings to be placed from left to right, and not overlap. For this part, I asked AI to give me an idea of how to organize the code.
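The fill-until-the-edge loop described above might look roughly like this. The widths and gap are made-up example values, and in the real sketch each entry would become a Building object rather than a plain record; advancing `x` past each building's width is what keeps them from overlapping.

```javascript
// Place buildings left to right until the canvas is filled,
// stepping x past each building so none overlap.
function layoutBuildings(canvasWidth) {
  const buildings = [];
  let x = 0;
  while (x < canvasWidth) {
    const w = 40 + Math.random() * 60;  // random width per building
    const h = 80 + Math.random() * 120; // random height per building
    buildings.push({ x, w, h });
    x += w + 6; // step past this building plus a small gap
  }
  return buildings;
}

const skyline = layoutBuildings(800);
```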
Then I used Object-Oriented Programming to make the code cleaner. I made a Building class and a Window class. Each building is an object, and it contains an array of window objects. When a building is created, it automatically creates a grid of windows inside it. In the draw loop, every building updates its windows, and then displays itself. The windows turn on and off because each window has a small random chance to switch states each frame. This creates the flickering light effect, like windows flashing when you look at a city at night. I found this part challenging, so I went through some YouTube videos to work it out.
Finally, I went back to the background details to make it feel more alive. The stars use randomness so their locations are not all the same, and the small twinkle effect makes the night view feel more natural. Overall, the main idea is a simple night skyline, but the random buildings and changing windows make each version look slightly different.

The part that I am proud of:
Honestly, I am proud of the whole project. But if I have to choose one part I love most, it would be the most challenging one: the Building and Window classes.

// building class
class Building {
  constructor(x, y, w, h) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;

    // building color
    this.bodyColor = color(random(20, 60));

    // array of Window objects for this building
    this.windows = [];

    // decide window grid size based on building size
    this.makeWindows();
  }

  makeWindows() {
    this.windows = [];

    // window size and spacing (kept simple)
    let winW = 12;
    let winH = 16;
    let gapX = 8;
    let gapY = 10;

    // margins inside the building so windows don't touch edges
    let marginX = 10;
    let marginY = 14;

    // how many columns/rows fit?
    let cols = floor((this.w - marginX * 2 + gapX) / (winW + gapX));
    let rows = floor((this.h - marginY * 2 + gapY) / (winH + gapY));

    // create Window objects in a grid
    for (let r = 0; r < rows; r++) {
      for (let c = 0; c < cols; c++) {
        let wx = this.x + marginX + c * (winW + gapX);
        let wy = this.y + marginY + r * (winH + gapY);

        // store a Window object in the building’s windows array
        this.windows.push(new Window(wx, wy, winW, winH));
      }
    }
  }

  update() {
    // update each window (some will randomly toggle)
    for (let w of this.windows) {
      w.update();
    }
  }

  display() {
    // draw building body
    fill(this.bodyColor);
    rect(this.x, this.y, this.w, this.h, 3);

    // draw windows
    for (let w of this.windows) {
      w.display();
    }
  }
}

// window class
class Window {
  constructor(x, y, w, h) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;

    // whether this window is lit (the probabilities and colors below
    // are example values; tune them to taste)
    this.isOn = random(1) < 0.3; // some windows start turned on
  }

  update() {
    // each frame, a small random chance to flip between on and off
    if (random(1) < 0.01) {
      this.isOn = !this.isOn;
    }
  }

  display() {
    noStroke();
    // warm yellow when lit, near-black when dark
    fill(this.isOn ? color(255, 220, 120) : color(15));
    rect(this.x, this.y, this.w, this.h, 1);
  }
}

This part was killing me, because before I went through the videos I didn’t know how to write the classes for the buildings and the lights. After watching them I had some ideas, but I still had to try many times to find the exact numbers and placements I wanted.

Reflection:

I am satisfied with the project this time. But if I had more time, I would add more interactive elements, for example shooting stars whenever the user clicks the screen.

Week 3 – Reading Response

I found this reading really interesting because the author’s writing style is amusing and engaging. It almost made me feel like I was interacting with the ideas while I was reading. The reading also gave me a new, clearer definition of what “interactive art” means. The examples helped me realize that I didn’t fully understand interactive art before. In the author’s view, an interactive event needs three parts: speaking, listening, and responding. All three have to be present. And the response has to be meaningful; if the response is too weak or shallow, then the event does not really count as interactive in a strong sense.

Before, I assumed that if an artist designed something “interactive,” it would still be interactive even if there was no audience, or even if people were not paying attention. After reading, I understand the author’s point that the audience is not just watching the art—the audience becomes part of the artwork. In interactive art, the artist and the audience “make” the experience together. This is the first time I have really noticed how important the audience is to the final meaning of an art piece.

I also like the idea of “degrees of interactivity.” Some artworks create strong interaction because the audience thinks carefully and responds in a thoughtful way. Other situations feel less interactive because the audience may not notice what the work is asking, or they may respond without thinking much. This raises questions for me: how can we tell whether an interaction is “high” or “low” in interactivity? Who gets to decide that? Also, does the level of interactivity change how an artwork is interpreted, or can the meaning stay the same even when audience responses are shallow?

Week 2: Loop

Concept:

My concept is to create a simple “inside the hive” scene where order and randomness happen at the same time. The honeycomb background is built from repeated hexagons in a grid, which represents the hive’s structure—tight, organized, and predictable. In contrast, the bee moves with a random flying motion that changes slightly every frame, so it feels alive and spontaneous rather than mechanical. I also made the bee bounce within the canvas so it stays inside the hive space, like it is exploring but still contained by its environment. Also, during class, when I was thinking about what to draw for the project, I felt really hungry and wanted to eat honey cake.


The code that I am proud of:
  // --- Bee motion (random flying) ---
  beeVX += random(-0.3, 0.3);
  beeVY += random(-0.3, 0.3);
  beeVX = constrain(beeVX, -2.5, 2.5);
  beeVY = constrain(beeVY, -2.5, 2.5);

  beeX += beeVX;
  beeY += beeVY;

  // Keep bee inside canvas (bounce)
  let margin = 30;
  if (beeX < margin || beeX > width - margin) beeVX *= -1;
  if (beeY < margin || beeY > height - margin) beeVY *= -1;
  // Clamp the position too, so the bee can't drift past the margin
  // and get stuck flipping its velocity every frame
  beeX = constrain(beeX, margin, width - margin);
  beeY = constrain(beeY, margin, height - margin);

  // Draw bee on top
  drawBee(beeX, beeY, 0.35);
}

At the beginning, I wasn’t sure how to create this project, so I watched several YouTube tutorials to learn how to animate an object with “random” speed and changing positions. I borrowed the basic motion idea from those videos, but when I copied it into my own sketch, it didn’t fit well at first because most examples were things like spaceships or tanks, not a bee flying inside a hive. I wanted the bee to feel a little “drunk” and overworked—like it’s been collecting too much honey—so I tried lowering the speed. After that, I ran into another issue: the bee sometimes flew past the canvas boundaries instead of bouncing back. I returned to YouTube to look for a solution, but I couldn’t find an explanation that matched my situation. I then asked ChatGPT, but the first version of the code was too complicated for me to fully understand. In the end, I asked a friend on campus for help, and the final motion code in my project was revised with my friend’s support so it works properly and still matches the bee-in-a-hive idea.

How this was made:

In the beginning, I was frustrated with drawing the honeycomb. My first draft didn’t tile correctly—the shapes overlapped and collapsed into each other, so it didn’t look like a real hive at all. I made many adjustments and learned a lot from YouTube tutorials, especially about changing the way I structured the code so the hexagons could repeat cleanly in a grid. For the color choices, I also struggled because it would take too many trials to find the exact shade I wanted, so I asked ChatGPT for suggestions and used that as a starting point. After the hive finally worked, the next challenge was the bee’s movement, which I mentioned earlier. Once the motion was fixed, I focused on drawing the bee with more detail. The wings and stripes were honestly the hardest parts because small changes in position and size could make the bee look “off.” I spent a lot of time adjusting these details until the proportions felt right and the bee matched the hive scene more naturally.
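The trick that makes hexagons tile cleanly in a grid can be sketched like this. This is a plain-JavaScript sketch of one common layout, not the exact sketch code: it assumes pointy-top hexagons and an example radius, offsets every other row by half a hex width, and packs rows at three quarters of the hex height so they interlock instead of overlapping.

```javascript
// Compute hexagon center positions for a clean honeycomb grid.
// Pointy-top hexagons assumed: width across the flats is sqrt(3)*r,
// height point-to-point is 2*r.
function hexCenters(cols, rows, r) {
  const w = Math.sqrt(3) * r; // horizontal spacing between columns
  const h = 2 * r;            // full hexagon height
  const centers = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      const x = col * w + (row % 2) * (w / 2); // stagger odd rows
      const y = row * 0.75 * h;                // rows interlock vertically
      centers.push([x, y]);
    }
  }
  return centers;
}

const grid = hexCenters(10, 6, 20); // 10 columns, 6 rows, radius 20
```

In the sketch, each center would then be passed to a drawHexagon(x, y, r)-style helper inside the draw loop.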

Conclusion and reflection:

I learned a lot throughout the process of making this project. Compared to the beginning, I felt much more comfortable writing the code to draw the bee, mainly because the self-portrait assignment helped me get used to building shapes step by step and adjusting details. Next time, I think I can improve by creating a more surprising or interesting background and adding more animation—so the scene feels more dynamic rather than just a moving bee on a static hive.

Reading Reflection – Week 2

Before watching the video, I assumed that “randomness” in art meant something purely spontaneous—an idea that suddenly appears in an artist’s mind. After watching it, I started to understand randomness differently. In generative work, the artist often writes a set of rules or instructions for a computer (or a system) to follow. The computer then runs those rules, and the final image can turn out in ways the artist didn’t fully predict. That surprise is part of the point.

But this also makes me wonder: is this still “randomness”? If the artist designs the rules, then the outcome isn’t completely uncontrolled. At the same time, the result can still feel unpredictable. The projects shown in the video seem to hold both chaos and structure at once—patterns emerging from something that still looks messy—and that combination is what confuses me.

This connects to a larger debate about whether AI-generated work counts as “real” art, and if it does, who the artist actually is. Is it the person who sets the prompts and constraints, or the AI that produces the final output? Generative art raises a similar question. The human chooses the system, the limits, and the type of randomness, but the machine carries out the process and generates the final form. In a way, the artwork is created through collaboration between human intention and machine unpredictability.

At the same time, I keep coming back to the purpose of this kind of work. Are we amazed mainly because a system can produce something that looks complex and meaningful from simple rules? And what happens if nothing surprising appears—if the output feels flat, repetitive, or uninteresting? Would it still count as art, or would it just feel like a technical exercise? These questions make me realize that “randomness” in generative art isn’t the absence of control—it’s more like a tool the artist uses to invite uncertainty, so that structure and surprise can exist together.

In my project, I think I have to set up some boundaries, but at the same time leave the computer some space where it can actually make choices. But I am still confused about what counts as success in this kind of art project. The process itself is paradoxical: we do something logical and follow the instructions, yet we expect something unpredictable and surprising.

Week 1 – Self-portrait

Concept:

For this self-portrait assignment, I drew Anpanman, one of my favorite cartoon characters from childhood. Many of my friends say I look like him, so I chose him as a playful version of my self-portrait. I also admire his optimistic, outgoing personality, and I want to reflect that feeling in my work. My goal for this assignment was to learn how to use p5.js, since I had never used it before. Through drawing with code, I practiced working with coordinates and basic programming logic, and I learned how to build an image step by step using simple shapes. Since my birthday is coming up, I also designed this piece as a small birthday gift card for myself.

How this was made:

I created this image using p5.js. I started by drawing Anpanman’s face. For the face color, I asked ChatGPT to help me find the exact color values, and I used the p5.js reference to learn how to make the shape and remove the stroke. Next, I drew the nose and cheeks. Since they are also circles, that part was fairly straightforward. After that, the eyebrows were the hardest part. Even after checking the reference image, I was still confused about how to code the correct shape. I asked ChatGPT for help, but the code it suggested was too complicated for me to understand. So I went back to the p5.js reference and experimented on my own. After about 15 minutes of trial and error, I finally got the eyebrow shape I wanted. For the mouth, I followed the reference image for the shape, but I asked ChatGPT for help with the mouth color because I tried several times and still couldn’t match the shade I wanted. For the remaining details—highlights, clothing, and buttons—I mainly relied on the p5.js reference and built them using simple shapes.

The part I’m proud of:

The part I’m most proud of is the eyebrows. They weren’t a simple shape, and using arcs was harder than I expected. I had to manage a lot of details—especially the angle and the coordinates—and I kept getting confused about where everything should go. After many tries and small adjustments, I finally found the right position, curve, and angle that looked correct and made me feel satisfied.

// left eyebrow
noFill();
stroke(20);
strokeWeight(3);
arc(117, 135, 50, 65, PI, TWO_PI);

// right eyebrow
arc(183, 135, 50, 65, PI, TWO_PI);

Reflection and ideas for future work or improvements:

I think I could improve this project by adding more detail to the clothing and including some interactive animation. Since this was my first time using p5.js, there are still many skills I want to develop. In future work, I hope to incorporate more of my own creative ideas and make my drawings more detailed, polished, and well organized.