W9: Assignment

Concept

Parking lots can often be a frustrating experience, especially when it’s hard to tell whether a spot is free or occupied without driving around aimlessly. I wanted to create a simple, interactive system using Arduino that mimics real-world parking indicators: a yellow light that changes brightness when a car is moving in or out, and a red light that turns on when a spot is occupied. This way, drivers can quickly see which spots are available and which are taken, making the parking process smoother and more intuitive.

Implementation

To achieve this, I used an ultrasonic sensor to detect the movement of cars. The sensor works by sending out a pulse from the trigger pin, which bounces off an object and returns to the echo pin. The Arduino then calculates the distance based on the time it takes for the pulse to return. I mapped this distance to the brightness of a yellow LED, so that the closer a car gets to the parking spot, the brighter the yellow light becomes. A slide switch allows us to manually indicate when a car is parked: flipping the switch turns on a red LED and turns off the yellow light, clearly showing that the spot is occupied. Two 330-ohm resistors ensure the LEDs operate safely without drawing too much current.
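A minimal sketch of this logic; the pin numbers, the 200 cm mapping range, and the switch wiring are assumptions for illustration rather than the exact circuit.

// parking-spot indicator: ultrasonic distance -> yellow brightness, switch -> red "occupied" light
const int trigPin = 9;
const int echoPin = 10;
const int yellowPin = 5;   // PWM pin for the yellow LED
const int redPin = 6;
const int switchPin = 2;   // slide switch marking the spot as occupied

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(yellowPin, OUTPUT);
  pinMode(redPin, OUTPUT);
  pinMode(switchPin, INPUT);
}

void loop() {
  // measure distance with the ultrasonic sensor
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  float distance = duration * 0.0343 / 2.0;

  if (digitalRead(switchPin) == HIGH) {
    // spot occupied: red on, yellow off
    digitalWrite(redPin, HIGH);
    analogWrite(yellowPin, 0);
  } else {
    // closer car -> brighter yellow (assumed 0-200 cm working range)
    int brightness = map(constrain(distance, 0, 200), 0, 200, 255, 0);
    digitalWrite(redPin, LOW);
    analogWrite(yellowPin, brightness);
  }
  delay(50);
}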

Video demonstration

Code I’m proud of

// Trigger pulse 
digitalWrite(trigPin, LOW);
delayMicroseconds(2);
digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);

// Read echo
duration = pulseIn(echoPin, HIGH);
distance = duration * 0.0343 / 2.0;

I’m particularly proud of the code I wrote for this project. Writing it taught me a lot about how ultrasonic sensors work and how to use the trigger and echo functionality effectively.

Future Developments

For future development, the system could be expanded to include a green LED, which would light up to indicate available parking spots. In that scenario, the green light would show availability, the yellow LED would indicate movement, and the red LED would signal when a spot is taken. Eventually, this could be automated further so that the sensor alone detects whether a car is parked, eliminating the need for the manual switch. Overall, this project was a great exercise in combining sensors, outputs, and user interaction to create a functional and visually intuitive system.

W9: Reading Reflections

Physical Computing’s Greatest Hits (and misses)

While reading this piece, I found myself fascinated by how imagination can stretch beyond the limits of what we typically perceive as possible. The example of the waves of leaves particularly resonated with me. It was such a beautiful and unexpected way to translate nature into sound and movement. I would have never imagined something like that, yet it reminded me that creativity often begins with seeing the ordinary through a new lens. This concept really reflects what this course encourages us to do: to move beyond traditional boundaries and explore how abstract ideas can become tangible experiences. It even made me think about how we could merge this with technology, perhaps building something like a domino-inspired instrument that creates a tune from a movement.

Another concept that stood out to me was Dance Dance Revolution. I’ve always loved dancing and have even enjoyed playing this type of game in fun zones, where timing and coordination create a sense of both challenge and joy. Reading about it made me think of how such ideas could evolve into more interactive art experiences. We could probably utilise this concept to build a “Twister”-style game where a buzzer sounds every time someone is out.

Overall, this reading reminded me that creativity is not confined to art or technology alone; it lies in how we connect the two. The examples encouraged me to think more experimentally and to consider how imagination can be designed into playful, sensory experiences that engage both mind and body.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

I completely agree with what the author is saying in this reading. If you are creating an immersive, interactive experience, you need to let the audience truly be part of it: to explore, engage, and form their own interpretations. That process of interaction is what reveals how deeply people are willing to think about your project and how many different meanings it can evoke. Each person’s response becomes part of the artwork itself, showing you perspectives you may never have considered.

An immersive experience, in a way, is like an open-ended question. There can be multiple interpretations, each valid in its own context. You can build theories around what you intend to express, but you should always leave your audience curious about what the ground truth really is. That curiosity is what keeps the experience alive even after the interaction ends. As a creator, you can guide emotions subtly through design and environment, but once you begin instructing the audience, it stops being interactive and becomes prescriptive. True interactivity lies in that delicate balance between guidance and freedom where the audience feels both engaged and uncertain.

W8: Her Code Got Humans On The Moon Reflection

It’s revolutionary to see the scale of what Margaret Hamilton achieved. She didn’t just break gender stereotypes, she essentially founded an entire discipline that grew into a billion-dollar industry: software engineering. While we all remember Neil Armstrong as the first man to step on the moon, we rarely think about the person who made that step possible. Reading about Hamilton made me realise how much unseen effort lies behind every historic moment.

As a woman in computer science, a field still largely dominated by men, her story feels deeply personal and inspiring. It’s empowering to see someone who not only challenged norms but also redefined “engineering.”

One part of the reading that resonated with me on a technical level was Hamilton’s insistence on anticipating and handling errors. When I first started learning to code, I used to find “try,” “except,” and “catch error” statements frustrating and unnecessary. I would think, why not just tell users not to make mistakes? But Hamilton’s experience showed the flaw in that thinking. Even an astronaut, among the most trained and intelligent individuals, made an oversight that could have led to mission failure. That moment completely reframed my understanding: robust systems are not built on the assumption that people won’t make mistakes, but on the expectation that they inevitably will.

This reading reminded me that testing, error handling, and designing for failure are not tedious parts of coding; they’re acts of responsibility and necessity. Margaret Hamilton’s story shows that great engineering is not just about writing functional code but about preventing failure, protecting people, and thinking ahead. It’s a mindset I want to carry into every project I work on.

W8: Emotion and Design writing response

I strongly agree with Norman’s idea that beautiful things often appear more usable than others. His argument immediately reminded me of a simple economic distinction: need vs want. The “need” reflects a product’s functionality, while the “want” represents the emotional desire or aesthetic appeal that creates the illusion of greater usability.

A recent experience illustrates this perfectly. My phone’s screen-guard and cover had broken, making it look worn out and, to me, almost unusable. I even considered buying a new phone, not because it stopped working, but because it looked unattractive. However, as soon as I replaced the cover, the phone suddenly felt smooth, neat, and functional again. Nothing changed technically, yet my perception of usability improved. This small incident made Norman’s point about emotional design feel remarkably real. It underscores how positive affect can shape our judgment of an object’s performance.

This also made me wonder: why do we, as humans, lean so strongly toward attractiveness over function? Is it instinctive, a natural response to seek pleasure in what pleases the eye? Consider the popular Longchamp tote bags that have taken over university campuses. They are stylish and easily recognisable, yet lack practical compartments, making it difficult to organise essentials like a laptop or documents. Despite this, they remain a trend. Perhaps this reflects what Norman calls the emotional pull of design. We forgive functional flaws when an object evokes a certain feeling or identity.

Yet, aesthetics are subjective; what one finds beautiful, another may not. This raises an important question for designers: how should one balance usability with aesthetics when beauty itself cannot be universally defined? Norman suggests that effective design lies in harmonising both, where aesthetic pleasure enhances, but does not replace, functionality. Maybe it is acceptable, to some degree, for design to create an illusion of usability through beauty, as long as that illusion inspires engagement rather than deception.

In the end, I believe the power of design lies in its ability to connect both heart and mind to make people feel good while helping them do well. Beauty without function is momentary, but function without beauty rarely delights. The challenge, as Norman describes, is to design for both.

W8: Creative Switch

Inspiration

 I drew inspiration from interactive toys and physics-based games where an object’s motion triggers an outcome. I wanted to create a setup where the physical action of dropping an object could directly influence the visual cues.

Concept

The idea is simple yet engaging: dropping a conductive ball into a cup lights up an LED. The cup contains two strips of conductive material that act as a switch. When the ball touches both strips, it completes the circuit, sending a signal to the Arduino, which then lights the LED. This concept transforms a small, physical action into a clear electronic response, demonstrating the connection between the real world and digital systems.

Implementation

To bring the concept to life:

I attached two strips of copper tape inside a cup, positioned on opposite sides so that the ball could touch both when dropped.

Wires were connected from the copper tape to the Arduino circuit: one to 5V, the other to a digital input pin with a pull-down resistor to stabilize readings.

The LED was connected to another Arduino pin with a series 10K ohm resistor to prevent overcurrent.

Finally, I wrote a simple Arduino sketch that uses digitalRead to read the input from the cup and digitalWrite to light the LED whenever the ball completed the circuit. This setup allowed the system to react instantly when the ball was dropped, making it interactive.
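A minimal sketch of that logic; the pin numbers are assumptions for illustration.

// cup switch: the ball bridging both copper strips pulls the input HIGH
const int cupPin = 2;  // digital input from the copper strips (external pull-down to GND)
const int ledPin = 8;

void setup() {
  pinMode(cupPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (digitalRead(cupPin) == HIGH) {
    digitalWrite(ledPin, HIGH);  // ball completed the circuit
  } else {
    digitalWrite(ledPin, LOW);
  }
}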

Pictures

fig 1: circuit schematic

fig 2: circuit

Video Demonstration

Challenges

During development, several practical challenges arose:

Material availability: Initially, I planned to use aluminum foil for the conductive ball, but it was unavailable. I switched to copper as an alternative conductor, which worked well.

Maintaining electrical contact: Attaching wires to the copper tape and ensuring consistent contact was difficult. The wires would often become loose, causing the LED to light only at certain angles. To resolve this, I maximized the contact area between the ball and copper strips and reinforced the connection from the cup to the breadboard. This ensured reliable detection whenever the ball was dropped.

Future Improvements

A possible future enhancement would be to introduce multiple cups or targets, each connected to a different colored LED and sound output. For instance, when the ball drops into a specific cup, it could trigger that cup’s LED to light up in a unique color and play a corresponding sound. This would make the interaction more dynamic and visually appealing.

To make it more game-like, the cups could be arranged in a triangular formation, with each cup representing a different score or difficulty level. For example, the front cup might be worth fewer points while those at the back are worth more. The Arduino could then keep track of the scores based on which input pin was triggered, creating an engaging mini electronic scoring game.

Midterm Project

Inspiration

The idea for Blackhole was born from one of my all-time favorite casual games: hole.io. The core concept of “the more you eat, the more you grow” fascinated me. However, my goal was never to recreate an existing game, but to reinterpret that mechanic in an entirely different environment.

In hole.io, the objective is to have the largest area among all players, consuming the environment and other players to maintain your crown. So I thought: what if I reimagined that concept in space? What if the hole was a blackhole preying on everything in its path? This became the foundation of my project: a space-themed, single-player survival game centered around growth, gravity, and chaos.

Concept

Blackhole is a single-player arcade-style game set in deep space. The player controls a blackhole that grows by consuming asteroids and powerups, while avoiding larger threats.

To bring the theme to life, I envisioned asteroids as natural “food” for the blackhole (blending the space aesthetic with the satisfying loop of consuming and expanding). To make the gameplay more dynamic, I added powerups that appear on screen at random intervals, offering temporary boosts or effects to make the experience more engaging.

The main goal is simple: survive as long as possible while growing your blackhole and collecting stars for bonus points. The longer you last, the higher your score.

Implementation

Developing Blackhole was an iterative process. I began by prioritizing functionality, ensuring the mechanics worked before focusing on visuals or audio.

Blackhole

The foundation of the game lies in the Blackhole class, responsible for handling player growth, shrinking, and interaction with other entities. For simplicity, I represented all objects (asteroids and powerups) as circles. This made collision detection intuitive and computationally efficient. Using the dist() function in p5.js, I determined when the player’s blackhole had consumed another object or collided with a threat.

Blackhole Class:

class Blackhole{
  constructor(){
    this.size=25;
    this.speed=3;
    this.x = 100;
    this.y = 100;
  }
  
  // call when eats star or expand powerup
  grow(){
    this.size *= 1.2;
  }
  
  shrink(){
    this.size *= 0.8;
  }
  
  move() {
    if (keyIsDown(UP_ARROW)) {
      // guarding against boundaries
      if (this.y - this.speed > 0){
        this.y -= this.speed;
      }
    }
    if (keyIsDown(DOWN_ARROW)) {
      // guarding against boundaries
      if (this.y + this.speed < height){
        this.y += this.speed;
      }
    }
    if (keyIsDown(RIGHT_ARROW)) {
      // guarding against boundaries
      if (this.x + this.speed < width){
        this.x += this.speed;
      }
    }
    if (keyIsDown(LEFT_ARROW)) {
      // guarding against boundaries
      if (this.x - this.speed > 0){
        this.x -= this.speed;
      } 
    }
  }
  
  display(){
    stroke('white');
    strokeWeight(2);
    fill('black');
    circle(this.x, this.y, this.size);
  }
  
  // returns boolean
  collides(obj) {
    let d = dist(this.x, this.y, obj.x, obj.y);
    return d <= this.size/2;
  }
}

Powerups and Movement

Initially, I designed the powerups to spawn randomly and stay on screen indefinitely. However, I quickly realized that this cluttered the visuals and made it hard to differentiate between background stars and powerups. Even adding a blinking effect before disappearance didn’t feel natural.

Taking inspiration from Tetris Rush, I redesigned the mechanic. Now, powerups drop from the top of the screen and disappear at the bottom, creating a natural movement cycle. This solved both visibility and clutter issues while keeping the game lively and visually coherent.

Powerup Class:

class Powerups {
  constructor(_x, _y, _type, _img) {
    this.x = _x;
    this.y = _y;
    this.type = _type;
    this.img = _img;
    this.fallSpeed = random(1, 3); // random falling speed
  }

  // to have powerups fall from up to down
  move() {
    this.y += this.fallSpeed;
    if (this.y > height) {
      this.y = height;
    }
  }

  display() {
    textSize(20);
    
    if (this.type == "fast") {
      this.size = textWidth(''); // for detecting collisions
      text('', this.x, this.y);
    } 
    else if (this.type == "freeze") {
      this.size = textWidth('❄️'); // for detecting collisions
      text('❄️', this.x, this.y);
    } 
    else if (this.type == "grow") {
      // image for grow
      push();                   
      imageMode(CENTER);
      this.size = 75; // for detecting collisions
      image(this.img, this.x, this.y, this.size, this.size)
      pop();
      
    } 
    else if (this.type == "shrink") {
      // image for shrink
      push(); 
      this.size = 40; // for detecting collisions
      imageMode(CENTER);
      image(this.img, this.x, this.y, this.size, this.size)
      pop();
      
    } 
    else if (this.type == "star") {
      this.size = textWidth('⭐️'); // for detecting collisions
      text('⭐️', this.x, this.y);
    } 
    else if (this.type == "bomb") {
      this.size = textWidth(''); // for detecting collisions
      text('', this.x, this.y);
    }
  }
}

Asteroids

The difficult part of this class was preventing the asteroids from moving in predictable or repetitive patterns. To make the game feel organic and chaotic, I implemented an Asteroid class that allows each asteroid to move in random directions and change course periodically.

Each asteroid picks a random direction from a list of possible vectors, including diagonal and straight-line movement, and updates its velocity every few seconds. This ensures the movement feels spontaneous, mimicking the drifting, floating nature of space.

Asteroid class

class Asteroid {
  constructor(_x, _y, _img, _size) {
    this.x = _x;
    this.y = _y;
    this.img = _img;
    this.size = _size;

    // separate velocities
    // this.vx = random([-1, 1]); // left or right
    // this.vy = random([-1, 1]); // up or down
    this.speed = 1.5;
    
    this.setRandomDirection();
  }

  // asteroids move in any direction randomly
  setRandomDirection() {
    let choices = [
      [1, 0],   // right
      [-1, 0],  // left
      [0, 1],   // down
      [0, -1],  // up
      [1, 1],   // diagonal down-right
      [-1, 1],  // diagonal down-left
      [1, -1],  // diagonal up-right
      [-1, -1]  // diagonal up-left
    ];

    let dir = random(choices);
    this.vx = dir[0]; // x-axis dir
    this.vy = dir[1]; // y-axis dir
  }

  move() {
    
    // every 2s change direction
    if (frameCount % 120 === 0) {
      this.setRandomDirection();
    }
    
    this.x += this.vx * this.speed;
    this.y += this.vy * this.speed;

    // Bounce off edges
    if (this.x < 0 || this.x > width) this.vx *= -1;
    if (this.y < 0 || this.y > height) this.vy *= -1;
  }
  
  display() {
    push();                   
    imageMode(CENTER);
    image(this.img, this.x, this.y, this.size, this.size);
    pop();
  }
}

Code I’m Proud of

One of the features I’m most proud of implementing is the freeze and speed-up powerups. Increasing or decreasing the speed on collision was easy; the real question was how to revert those changes back to their original state after a fixed duration (around 4 seconds).

I knew I could use frameCount to track time progression, but the question was where and how to integrate it efficiently without breaking the game loop. The solution came through two global variables: freezeTimer and fastTimer.

Whenever the player collided with a powerup, the corresponding timer was assigned a specific value. In each frame of the game, these timers would decrement until they reached zero. Then the original state was restored.

// timers
if (freezeTimer > 0) {
  freezeTimer--; // keep decreasing with every frame
} else if (fastTimer > 0) {
  player.speed = 5;
  fastTimer--; // keep decreasing with every frame
} else {
  // resetting speed for player and asteroids
  player.speed = 3;
  for (let asteroid of asteroids) {
    asteroid.speed = 2;
  }
}
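For context, here is a hedged sketch of how the timers might be set when a powerup is collected. The 240-frame value (roughly 4 seconds at 60 fps), the asteroid freeze, and the handler shape are illustrative assumptions rather than the exact project code.

// illustrative pickup handler (assumed names and values)
if (player.collides(powerup)) {
  if (powerup.type == "freeze") {
    freezeTimer = 240; // ~4 seconds at 60 fps
    for (let asteroid of asteroids) {
      asteroid.speed = 0; // asteroids hold still while frozen
    }
  } else if (powerup.type == "fast") {
    fastTimer = 240; // the speed boost lasts ~4 seconds
  }
}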

Design and Visuals

For the backdrop, the choice was obvious: a starry night sky. I later replaced my simple circular asteroids with actual asteroid images to enhance immersion.

Sound design also became an important aspect. I added distinct sound cues for eating asteroids and activating powerups. Interestingly, while I was coding, the song “All the Stars” started playing and it immediately felt like the perfect fit. So I included it as background music, giving the game a cinematic atmosphere.

Instructions Screen

Game Screen

Game Over screen

Key Challenges and Solutions

  1. Resizing Mechanics
    One of the more subtle but tricky challenges I faced was maintaining a consistent visual layout across different screen sizes. Whenever I resized the window on the instructions or Game Over screen, the sketch would throw an error and paint an empty black backdrop. This was because I used noLoop() to freeze the game on both of those screens. To solve this, I made the program call redraw() after resizing (which runs even in a noLoop() state) to refresh the layout and correctly reposition all visual elements according to the new width and height (see the sketch after this list).

  2. Blackhole and Asteroid size

Another major design challenge came from the scaling mechanics of the Blackhole and the Asteroids. Since the central idea of the game revolves around “eating to grow,” the player’s size could increase infinitely if left unchecked, eventually covering the entire canvas and breaking the gameplay. Similarly, large asteroids could spawn that were nearly impossible to avoid or defeat, making the game feel unfair or chaotic.

To address this, I implemented maximum and minimum size constraints for both the Blackhole and the Asteroids. The Blackhole’s growth is now capped at a defined MAX_SIZE, preventing it from expanding beyond the playable area, while asteroids are spawned within a limited size range relative to the player’s current size. This balancing ensures that gameplay remains challenging yet visually coherent.
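A minimal sketch of both fixes, assuming a global gameState variable, a MAX_SIZE constant of 300, and these particular screen names; all of the identifiers and values here are illustrative rather than the project’s actual code.

// assumed cap on the blackhole's diameter
const MAX_SIZE = 300;

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  // redraw() runs draw() once even while noLoop() is active,
  // so the frozen Instructions / Game Over screens repaint correctly
  if (gameState === "instructions" || gameState === "gameover") {
    redraw();
  }
}

// inside the Blackhole class, growth is clamped to the cap
grow() {
  this.size = min(this.size * 1.2, MAX_SIZE);
}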

Potential Developments

While I’m proud of the current version of Blackhole, there’s plenty of room for evolution.

  • Improved Scoring System:
    Currently, the score depends on survival time and the number of stars collected. In the future, I’d like to integrate the player’s size into the scoring formula (though my earlier attempts caused the score to grow too rapidly to track meaningfully).

  • Explosive Powerups:
    Introducing a special powerup that destroys all asteroids within a radius could add an exciting burst element and reward risky gameplay.

  • Dynamic Difficulty:
    Over time, the game could progressively increase asteroid speed or bomb spawn rate, keeping the challenge alive as the player grows.

Week 5 Reading Reflection

When I compare computer vision to human vision, I realize how differently they work. For me, vision is immediate and layered with meaning, but for a computer, an image is nothing more than an array of numbers, vectors, and pixels. This reading reminded me of the summer I worked on an image understanding project with our faculty in the cybersecurity domain. I saw firsthand how computers don’t “see” an image the way I do. They only interpret numerical values and computations. Because of this, it’s even possible to generate two different sets of numbers that look the same to a computer but, when drawn out, are far from identical. That gap between what humans and computers perceive really fascinated me.

To help computers track what we want, techniques like frame differencing, background subtraction, and brightness thresholding come into play. Through this reading, I learned how important the setup is: lighting, camera placement, and reflective markers can make or break the system’s accuracy. It’s almost like computers need us to simplify reality so they can process it.

Where I feel conflicted is in the area of surveillance. In my cybersecurity project, I came to appreciate how powerful these systems can be, but also how much risk comes with that power. Since computers only “understand” numbers, they can be tricked, but they can also be used to monitor, record, and categorise people in ways that feel invasive. In interactive art, I see these tools as playful, engaging, and thought-provoking. But outside of art, they can become a threat. Surveillance powered by computer vision has the capacity to invade privacy, consolidate control, and even manipulate what we think of as reality. That tension makes me think harder about not just what computers can see, but what we allow them to see.

Week 5- Midterm Progress

Concept

For my midterm project, I wanted to build something interactive, competitive, and continuously evolving in difficulty. Inspired by the mechanics of hole.io, I decided to create a Blackhole game in p5 where the player controls a growing blackhole that consumes objects scattered across the canvas. The more the player consumes, the larger and stronger the blackhole becomes.

The concept behind this project is to give the user an engaging challenge: survive, grow, and adapt. I wanted to capture the thrill of growth mechanics where every interaction (consuming objects, avoiding hazards, gaining powerups) feels rewarding. Unlike my previous projects, this one leans toward a fast-paced arcade experience rather than something calm or exploratory.

To keep the game fun yet playable, I plan to design mechanics that gradually increase difficulty by reducing screen size. The goal is for the user to feel both challenged and accomplished as they make progress.

Design

The overall design will remain simple but dynamic, with a clear focus on gameplay elements. The blackhole will start small, almost unnoticeable, and will grow bigger as it absorbs objects.

The background will replicate the feeling of outer space, dark and starry, while asteroids and floating debris will act as obstacles and targets. Power-ups will be distinguishable by their shapes and colors, with each type offering unique abilities: faster movement, temporary freezing of asteroids, shrinking/growing effects, or invincibility.

The controls will be intuitive: the blackhole follows the player’s movement across the canvas, keeping the learning curve minimal so the focus remains on reaction and survival.

Frightening / Challenging Aspects

  • Considering the extreme conditions: what if the blackhole becomes larger than the screen? At that point, gameplay might break or lose its intended challenge.
  • Object Spawn Locations: How do I make sure that newly spawned objects do not appear inside the blackhole space, where they would be instantly consumed without interaction?
  • Minimum Distance Between Objects: Ensuring that two spawned objects are not too close to each other, which would make the game feel cluttered and unrealistic.
  • Shrinking Screen Space: The problem arises when objects fall outside the shrinking boundaries; how should those be handled?

 

Risk Prevention

  • Extreme Growth Prevention: To prevent the blackhole from growing uncontrollably, I will implement a mechanic where the hole begins to shrink if it hasn’t consumed anything in the past few seconds (e.g., 5 seconds). This ensures balance and keeps the game playable even in longer sessions.
  • Object Spawn Locations: While spawning new objects, I will check their coordinates against the blackhole’s current position and radius. If an object is detected within the hole’s space, it will be re-spawned at another random location.
  • Minimum Distance Between Objects: I plan to use a distance check between all newly spawned objects, ensuring that each object maintains a minimum distance from the others. This can be achieved with a simple loop comparing coordinates and re-spawning objects if they are too close (a small sketch combining this check with the spawn-location check follows after this list).
  • Shrinking Screen Space: I am exploring the use of constraint and filter functions to dynamically remove or reposition any objects that fall outside the shrinking boundaries. This way, the playable area remains fair and uncluttered as it reduces in size.
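As mentioned above, here is a small sketch of the planned spawn checks. It assumes a global player (the blackhole) and an objects array, and the 50-pixel margin is an arbitrary illustrative value rather than a decided parameter.

// pick a spawn point that is outside the blackhole and not too close to other objects
function randomSpawn(minGap = 50) {
  let x, y, valid;
  do {
    x = random(width);
    y = random(height);
    // reject positions inside (or hugging) the blackhole
    valid = dist(x, y, player.x, player.y) > player.size / 2 + minGap;
    // reject positions too close to existing objects
    for (let obj of objects) {
      if (dist(x, y, obj.x, obj.y) < minGap) valid = false;
    }
  } while (!valid);
  return { x: x, y: y };
}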

 

Week 4 Reading Reflection


Reading Don Norman’s reflections on the “psychopathology of everyday things” immediately pulled me back to my first-year CADT course on redesign with Professor Geffredo Puccetti. In that course we studied the idea of a “nudge”, a subtle element that steers users in the right direction without needing explicit instructions. It is a simple yet powerful thing to have in a design. Norman’s principles of affordances and signifiers echo this beautifully. He reminded me that design should make the right action feel almost self-evident, sparing users the awkward trial and error of guessing.

I see this all the time in everyday spaces. At NYUAD, the plain steel strips on push doors are a perfect example. Without a single word, they tell you what to do. Yet, recently, they pasted “push” and “pull” stickers on doors, a sign of design overcompensating for its own ambiguity. Digital design isn’t so different. Minimalist interfaces often leave users hunting for functionality, hiding navigation behind icons. Sometimes I find myself clicking around blindly, wondering if something is interactive or just static.

Norman’s framework helps me think through why. Affordances, like shading, button shapes, or a small animation, hint at what’s possible and invite us to try. Signifiers, like a microphone icon or a heart symbol, work almost instinctively, cutting down the need for extra instructions.

But the rise of minimalism has complicated things. I think often of the infamous “Kim Kardashian sink”. A perfect example of how design can privilege beauty over usability. The sink looks striking, but newcomers can’t figure out how to use it. I’ve had similar frustrations with everyday objects, like awkwardly designed shopping baskets that seem almost painful to the human hand. Having the handle in the middle makes it uncomfortable to carry and increases the probability of things falling out. This is a clear example of poor design.

For me, that tension between minimalism and intuitiveness is the heart of the matter. Designers are often tempted towards beauty at the expense of comfort and clarity. I’ve realised that the best designs aren’t the ones that impress me visually at first glance but the ones that quietly work. In this course I would try to implement buttons that clearly indicate they are to be pressed, using a simple shadow or glow effect. I would prioritise intuitive visual cues, making every function discoverable, understandable, and, ideally, a little delightful. I would also follow a consistent key: for instance, all glowing objects are interactable and all non-glowing ones are static. I would use icons to denote powerups and add a small animation as an extra signifier. Overall, I hope these changes simplify discoverability and understanding.

Week 4 Generative Text

Concept:
For this week’s coding assignment, I wanted to experiment with something more on the creative side. I decided to work on generative text, with the idea of making the appearance of words reflect different moods. My goal was to have the text evoke an emotion not just through what it says, but how it looks and behaves on the screen.

The concept I explored was simple: each time a user clicks, the mood changes, and the text morphs to visually represent that emotion. To achieve this, I combined techniques we covered in class, like sine functions and noise. I also experimented with movement mechanics, such as vertical speed (gravity), bouncing off edges, and the dynamic effect of writing a word.
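As an example of the sine-based movement, here is a minimal sketch of one possible mood; the function name, wave amplitude, and per-letter phase offsets are illustrative assumptions rather than the project’s actual values.

// draw a word whose letters bob gently on individual sine waves
function drawCalm(word) {
  textSize(60);
  let x = width / 2 - textWidth(word) / 2;
  for (let i = 0; i < word.length; i++) {
    // each letter gets its own phase so the word ripples across the screen
    let y = height / 2 + sin(frameCount * 0.05 + i * 0.5) * 15;
    text(word[i], x, y);
    x += textWidth(word[i]);
  }
}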

Code I’m most proud of:

if (!excitedInitialized) {
      // convert text into points (vector outlines of letters)
      points = font.textToPoints(current_mood, width / 2, height / 2, 60, {
        sampleFactor: 0.16,       // density of points
        simplifyThreshold: 0      // no simplification
      });

      // create particles starting at random positions moving toward text points
      particles = [];
      for (let p of points) {
        particles.push({
          x: random(width),
          y: random(height),
          targetX: p.x,
          targetY: p.y
        });
      }
      excitedInitialized = true; // mark as initialized
    }

    // animate particles moving toward their target text points
    for (let p of particles) {
      p.x = lerp(p.x, p.targetX, 0.05); // smooth movement toward targetX
      p.y = lerp(p.y, p.targetY, 0.05); // smooth movement toward targetY

      ellipse(p.x, p.y, 4, 4); // draw particle as a bubble
    }

This snippet stands out to me because it uses two functions I learned during this assignment: textToPoints and lerp.

textToPoints breaks down a word into a set of points based on the chosen font, giving me the flexibility to manipulate text at the particle level.

lerp (linear interpolation) was the key to achieving the effect I wanted. It allowed particles to smoothly move from random positions on the canvas to their designated target points. As a result, the word takes shape out of multiple “bubbles,” giving the text an energetic, almost playful quality.

This was exactly the kind of interaction I wanted. The text doesn’t just appear, it comes alive.

Future Improvements:
While I’m happy with how the project turned out, there’s still plenty of room to push it further. A key next step would be to make the generative text more interactive, so that it doesn’t just display moods but actively responds to the user. I imagine scenarios where hovering over the text could cause particles to scatter and fall apart, or where words might sparkle, ripple, or shift dynamically in response to movement on the screen.