Week 8 — Reading Response

This week’s readings gave me a lot of food for thought, especially the article on Margaret Hamilton. I had heard of her before reading it, but only that she had done pioneering work in computer science. One part that stood out to me in particular was the story of her daughter playing with the command simulator and accidentally starting P01. Hamilton wanted to add code to guard against this error scenario, but NASA blocked it as unnecessary overhead, reasoning that all astronauts would be “perfect.” Then, during the Apollo 8 mission, an astronaut mistakenly triggered that very sequence mid-flight, erasing the navigation data, and Hamilton’s team had to scramble to recover it. This showed me that even when a woman is exceptionally intelligent and has made significant contributions, her expertise can still be undervalued or dismissed. Hamilton’s insistence on prioritizing safety and anticipating human error, noting the risk in the documentation even after her fix was overruled, demonstrated to me how women in STEM have often had to fight harder to have their opinions recognized, even when their ideas ultimately prove crucial. It resonated with me personally, as someone who has also witnessed this type of discrimination within my major, asymmetrically affecting my female peers, myself included. This story inspires me to be more vocal about my opinions, to be experimental and creative, and most importantly, to trust myself more.

The other reading, Attractive Things Work Better, also gave me a new perspective on design and usability. It discussed how aesthetics influence not just our perception of a product but also our ability to use it effectively. The idea that people are more tolerant of minor usability flaws in visually appealing designs was really intriguing to me. It made me think about how often we associate beauty with functionality, even when the two aren’t directly related. This reading helped me see how emotional responses to design can impact user experience in ways I hadn’t considered before, especially for physical objects. In software, I can imagine that a UI’s visual design is closely tied to its usability, since clear signals have to be designed (aesthetically AND functionally) in a way that doesn’t disrupt the user experience.

Week 8 — Unconventional Switch

Ideation

I was initially very intimidated by the idea of having to create a switch that doesn’t use our hands, especially after seeing the trash-can example, since it seemed relatively complicated. However, after looking at a few more examples from other students, I realized that the creativity was meant to be in the idea itself rather than in advanced technical work. I almost immediately knew I wanted to work with food utensils, and then I thought of chopsticks. In Korea, chopsticks are the main utensil for eating, but I had always thought they would make an interesting pair of drumsticks.

Process

I replicated the switch circuit we made in class, then used alligator clips to connect a wire to each of the chopsticks. At first I thought of using conductive tape, but I had a hard time finding it in the lab, so I went with the clips. After confirming the circuit worked with one of the LED bulbs, I thought something more could be added to make the project more musical, which is when I decided to integrate a speaker. I watched this tutorial to get a grasp of how to connect and play audio from an Arduino, then took the example sketch 02.Digital > toneMelody and integrated it into my switch code so a note plays whenever I hit the chopsticks together.

Code Highlight / Obstacles

One part of my code that I’m particularly proud of is how I worked out the correct melody notes to play on the speaker. Unlike conventional musical notation, Arduino represents notes as frequency values passed to its tone() function rather than as note names on sheet music, so converting a melody from traditional sheet music requires mapping each note to the right frequency.

To start, I searched for the chorus notes of We Will Rock You on the piano and found a beginner-friendly sheet music version. The melody consisted of the notes F, E, D, C, D, and D in the fourth octave. Since Arduino uses a different notation system, I used ChatGPT to convert these piano notes into corresponding Arduino frequency values.

After generating the initial translation, I decided that the melody sounded too low, so I asked ChatGPT to shift the notes up by one octave. This adjustment resulted in the notes F5, E5, D5, C5, D5, and D5, which gave the melody a brighter and more distinct sound when played through the speaker.

const int melody[] = {698, 659, 587, 523, 587, 587}; // F5, E5, D5, C5, D5, D5 (Hz)

In addition, one part I struggled with was making the note advance by one each time the switch closed. The loop kept cycling through the same-ish notes because I had initially put the noteIndex increment directly in the else statement, so it incremented on every loop iteration while the switch was open. Instead, I created a boolean variable, first, which starts as false and is set to true the moment I hit the chopsticks together. The else branch then checks whether first is true, and only then increments noteIndex and sets first back to false. This way, the note advances exactly once per release, preventing the loop from rapidly cycling through all the notes while the chopsticks are held together.

if (switchPosition == HIGH) {
  first = true;
  digitalWrite(ledPin, HIGH);
  tone(speakerPin, melody[noteIndex]);
} else {
  digitalWrite(ledPin, LOW);
  noTone(speakerPin);

  if (first) {
    noteIndex = (noteIndex + 1) % 6;
    first = false;
  }
}

Watch my live demo in class hehe 🙂

Midterm Project — Candy Collect

Introduction

This project was inspired by the food-themed (specifically, candy/sweets) games I used to play as a kid. I wanted to re-create the “vibe” of those games, taking aesthetic references from Pinterest images. As I mentioned in my previous progress report, I drew up the idea on my iPad, then created the custom assets in Figma, such as the background image, boxes, buttons, logos, and so forth. The game I created is called “CandyCollect”: there are three boxes, each corresponding to a candy color. As differently colored candies fall from the sky, you have to move the matching box to catch them. You switch boxes by pressing the spacebar, and the actively movable box glows yellow; you then move it with the left and right arrow keys. You have a total of 30 seconds to catch as many candies as possible, and depending on how you perform, the game gives you different feedback.

Custom Assets/Sketch
(on a real note, I actually ended up creating the candy shape with an ellipse and a rounded rectangle instead of using the image, as I needed a drawable shape haha.)

Code Highlights 

Glow effect on the box when active:

if (isActive) {
  for (let i = 0; i < 10; i++) {
    stroke(255, 255, 0, 255 - (i * 30)); // fade the yellow stroke with each pass
    strokeWeight(4);
    noFill();

    // glow: each pass draws a slightly larger, fainter rectangle
    rect(this.x - i, this.y - i, this.width + 2 * i, 60 + 2 * i);
  }
}

I used a loop to create a glowing effect around the active box. The idea is pretty simple: I draw multiple rectangles on top of each other, each with a slightly lower opacity than the last. By tweaking stroke(255, 255, 0, 255 - (i * 30)), I make sure the glow fades out smoothly, giving the box a soft, dynamic highlight.

Collision detection mechanism:

let box = boxes.find((box) => box.colorName === candies[i].colorName);
if (
  candies[i].y + candies[i].size > box.y &&
  candies[i].x > box.x &&
  candies[i].x < box.x + box.width &&
  boxes[activeBoxIndex] === box
) {
  plop.play();
  score++;
  candies.splice(i, 1); // removing caught candy
}

Even though this snippet is pretty short and intuitive, I liked how simple it made adding new features to the candy collision. First, it uses boxes.find() to match the candy’s color with the correct box, and then it checks whether the candy’s position falls within the box’s bounds. If everything lines up, the plop sound plays, the score goes up, and the candy disappears. Adding features as I went was easy because the collision logic is handled largely in this one section.

Problems/Struggles

1. Active Box Collisions:

At first, the collisions were not that accurate. For example, even if a box wasn’t active, it would still catch the candy, and I didn’t want to leave that in as a bug/feature. To fix this, I simply added an additional check, boxes[activeBoxIndex] === box, to the collision-detecting if statement.

2. Obstacles Spawning in Wrong Places:

There was this frustrating issue where multiple, and I mean MULTIPLE, candies would spawn when the game was restarted, in a way that made them impossible for the player to catch. To fix this, I cleared the previous candy-spawn interval, if one still existed, before starting a new one.
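
Concretely, the fix looked something like this (a minimal sketch; spawnCandy and the variable names here are stand-ins for whatever the real sketch uses):

let spawnInterval; // ID of the current candy-spawning interval

function startGame() {
  // clear any leftover interval from a previous round before starting a new one
  if (spawnInterval) {
    clearInterval(spawnInterval);
  }
  spawnInterval = setInterval(spawnCandy, 1000); // spawnCandy: assumed spawn helper
}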

3. Audio and Performance Issues

Towards the end of the project, for some reason, some of my files (e.g. images, audio) kept getting corrupted, and I had a hard time making sure everything was accessible to sketch.js. The fix itself was simple: I double-checked my assets folder to confirm that all the necessary image and audio files were present.
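
A more robust way to catch this early (a hedged sketch, not what I actually shipped) is to use the optional failure callback that p5.js provides on loaders like loadImage(), so a missing or corrupted file is logged immediately by name:

function preload() {
  bg = loadImage(
    "bg.png",
    () => console.log("bg.png loaded"),          // success callback
    () => console.error("bg.png failed to load") // failure callback
  );
}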

Here’s the demo:

Demo

Week 5 — Reading Response

As I read about the differences between computer vision and human vision, I couldn’t help but reflect on how fascinating it is that these two systems operate so differently. I thought that the comparison really highlighted the strengths and limitations of both. Human vision is incredibly flexible and context-aware — we can recognize a friend’s face even in poor lighting or from an unusual angle. Computer vision, on the other hand, relies on algorithms that need carefully controlled conditions to function effectively. This made me realize how much work goes into “teaching” computers to see, and it reminded me of my own experiences. I’ve always been intrigued by how technology intersects with human behavior, and this article deepened my appreciation for the challenges of bridging that gap.

To help computers “see” what we’re interested in, the article outlines several techniques, like frame differencing, background subtraction, and brightness thresholding. I thought these methods were clever in how they simplify the problem, breaking down complex visual data into manageable pieces. For example, using brightness thresholding to track a single bright point, like a flashlight, seems almost elegant in its simplicity. I believe that these foundational techniques are a great starting point for anyone looking to experiment with computer vision (like me hehe). They also highlight the importance of designing the physical environment to support the algorithm, which is something I hadn’t fully considered before. It’s not just about writing code; it’s about creating a world that the computer can understand.
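
To make brightness thresholding concrete for myself, here’s a minimal p5.js sketch (my own illustration, not code from the article): every webcam pixel brighter than a cutoff turns white, everything else black.

let video;
const threshold = 200; // brightness cutoff (0-255); tune for the room's lighting

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  video.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // average R, G, B as a simple brightness measure
    let bright = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    let v = bright > threshold ? 255 : 0;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255; // fully opaque
  }
  updatePixels();
}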

The article’s discussion of tracking and surveillance in interactive art really resonated with me, especially given my legal background. I think computer vision is a perfect example of a tool that can be both empowering and unsettling. On one hand, it enables incredible artistic experiences, like Myron Krueger’s Videoplace, where participants can interact with virtual elements using their bodies. On the other hand, it raises questions about privacy and control, as seen in projects like Suicide Box or David Rokeby’s Sorting Daemon. These works use computer vision to critique surveillance, and I think they’re a powerful reminder of how technology can shape — and be shaped by — societal values.

Personally, I believe that computer vision’s capacity for tracking and surveillance adds a layer of complexity to its use in art. As a CS major with a legal background, I’m particularly drawn to the ethical dimensions of this technology. I’d love to explore how interactive art can balance the wonder of computer vision with a critical awareness of its implications. For instance, what if an installation invited participants to play with tracking technology while also making them aware of how their data is being used? I think there’s a lot of potential here to create works that are not only engaging but also thought-provoking, encouraging viewers to reflect on the role of technology in their lives.

Midterm Progress Report

Concept/Ideation

I started off by thinking about the vibe I wanted my project to have. I browsed through Pinterest boards to see if I could find some inspiration, which is where I came across this cute photo:

[Pinterest reference image: a sky filled with stars and clouds]

I wanted my game’s background photo to have a similar color palette and aesthetic. It also reminded me of the nostalgic feeling I had as a kid when playing simple, food-themed games — like cooking games where you’d make ramen or decorate baked goods. Inspired by that, I decided to create a game with a similar concept but scaled down to fit within my scope. I knew from the start that I wanted it to be candy-themed.

Brainstorming

I quickly drew up the ideas that I had for the design and started to sketch the initial visuals:
I then used Figma to actually create the background of the game in addition to the necessary visual assets:

The idea is that the game involves three boxes, each corresponding to a color of candy falling from the sky. To catch a candy, you move the box of the matching color to the spot where it is about to land. To switch between the boxes, you simply press the spacebar. While there is no traditional win condition, you score one point for each candy you catch, and there will never be more than one candy falling at the same vertical level at a time.
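
The box-switching part, at least, should be simple. A minimal sketch of what I’m planning (the variable names are placeholders at this stage):

let activeBoxIndex = 0; // which of the three boxes the player controls

function keyPressed() {
  if (key === " ") {
    // spacebar cycles through the boxes: 0 -> 1 -> 2 -> 0
    activeBoxIndex = (activeBoxIndex + 1) % 3;
  }
}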

Code Progress/Concerns

Right now, I’m focused on having two classes: one for the box and one for the candy. These will be central to the gameplay mechanics, as the candy will fall and the player will control the box to catch it. The most uncertain part of my midterm project is ensuring smooth interaction between the boxes and candies, especially when switching between boxes with the spacebar, which could lead to timing issues or bugs that affect the gameplay experience. To minimize this risk, I plan to write some code to test the collision detection and box-switching functionality. Currently, I’ve preloaded my assets:

function preload() {
  bg = loadImage("bg.png");
  startButton = loadImage("start.png");
  mouseSound = loadSound("mouse.mp3");

  boxImages[0] = loadImage("box1.png");
  boxImages[1] = loadImage("box2.png");
  boxImages[2] = loadImage("box3.png");

  candyImages[0] = loadImage("candy1.png");
  candyImages[1] = loadImage("candy2.png");
  candyImages[2] = loadImage("candy3.png");
}

in addition to structuring my two main classes for OOP:

class Box {
  constructor(x, y, width, img) {
    this.x = x;
    this.y = y;
    this.width = width;
    this.img = img;
  }

  move(dx) {
    this.x = constrain(this.x + dx, 0, width - this.width);
  }

  display(isActive) {
    image(this.img, this.x, this.y, this.width, 60);
    if (isActive) {
      stroke(255, 0, 0);
      strokeWeight(2);
      noFill();
      rect(this.x, this.y, this.width, 60);
    }
  }
}

class Candy {
  constructor(x, y, img, type) {
    this.x = x;
    this.y = y;
    this.size = 40;
    this.img = img;
    this.type = type;
  }

  fall() {
    this.y += gravity;
  }

  display() {
    image(this.img, this.x, this.y, this.size, this.size);
  }
}


Week 4 — Reading Response

One thing that drives me crazy is poorly designed shower faucets, especially the ones with a single handle where it’s difficult to tell how to adjust temperature or water pressure. Many of these designs lack clear signifiers, making it frustrating to figure out whether turning the handle left increases temperature or pressure, or if pulling or pushing it has any effect at all. Sometimes, the hot and cold indicators are either missing, faded, or counterintuitive, leading to a trial-and-error process that often results in an unexpected blast of freezing or scalding water.

In interactive media, Norman’s principles can be applied similarly to enhance usability. Just as a well-designed shower faucet should make its function immediately clear, interactive elements in digital experiences should signal their purpose intuitively. For example, buttons and interactive objects should provide visual and/or tactile feedback, such as glowing when hovered over or vibrating when clicked (the type of feedback varying depending on the intended users). Proper mapping in digital interfaces—like ensuring a swipe-down gesture naturally leads to scrolling down—mirrors the idea of making physical controls feel intuitive. Without such authorial intent and thought in the design, interactive media can be confusing and can often lose its original intended meaning as a result. By focusing on basic HCD principles, interactive media can create smoother, more engaging experiences that minimize user frustration.
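
As a quick illustration (my own toy example, not from the reading), here’s how a button in p5.js might glow on hover to signal that it’s clickable:

function setup() {
  createCanvas(400, 300);
}

function draw() {
  background(240);
  let hovering =
    mouseX > 100 && mouseX < 220 && mouseY > 100 && mouseY < 150;
  // a brighter fill acts as a visual signifier of interactivity
  fill(hovering ? color(255, 230, 100) : color(200));
  rect(100, 100, 120, 50, 8); // rounded-corner button
}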

Week 4 — Assignment

Ideation / Research

I started off wanting to pursue an idea that was artistic but also made sense in terms of its value as data visualization. I personally believe that data visualization is a great way to tell a story and send a message, whether sociopolitical or otherwise. First, I had to find a dataset (a .csv file) to work with. For this, I went to Kaggle, a website with plenty of fun and interesting datasets that users can download for free. After browsing for a while, I came across one with the nutritional information of 80 different cereals, presumably from the US. In another class, a fellow student had told me that in the US, many food products, despite appearing diverse, are owned by only a handful of companies, making it essentially a market full of monopolies. Given that the US prides itself on how democracy fosters free markets, I was surprised to learn that monopolies exist within this system, as they seem to contradict the principles of a free market. To explore this contradiction, I decided to use the cereal.csv dataset to create a visualization that highlights the monopoly in the cereal industry. Each animation and effect is designed with a particular purpose, conveying meaningful insights about this issue.

Process / Code Review

For the cereal nodes, I wanted their colors to reflect their sugar content, making it easy to visually interpret how much sugar each cereal contains. To do this, I used the map() function to scale the red intensity by sugar level, so that low-sugar cereals appear nearly black and higher-sugar cereals appear increasingly red. This creates an intuitive color gradient that emphasizes the nutritional differences between cereals.

  for (let cereal of cereals) {
    let sugarLevel = cereal.sugar; // scale from 0-15
    let redColor = map(sugarLevel, 0, 15, 0, 255); // more sugar = more red
    fill(redColor, 50, 50); // red tones based on sugar content
    ellipse(cereal.x, cereal.y, 10, 10);
  }
}

To connect each cereal to its parent company, I created animated lines that gradually extend from the manufacturer to each cereal. Instead of appearing all at once, the lines animate over time, emphasizing the idea of hidden connections between brands that aren’t immediately obvious.

animationProgress = min(animationProgress + 0.02, 1); 

// drawing animated connections
stroke(0, 0, 0, 100); 
for (let link of links) {
  let midX = lerp(link.start.x, link.end.x, animationProgress);
  let midY = lerp(link.start.y, link.end.y, animationProgress);
  line(link.start.x, link.start.y, midX, midY);
}

Finally, I added subtle movement to the cereal nodes. Each one has a slight floating effect, created by modifying its x and y positions with sine and cosine functions. This gives the visualization a dynamic, organic feel rather than a static diagram. The manufacturers remain fixed, symbolizing their stronghold over the market, while the cereals drift slightly, reinforcing the idea that individual brands may seem independent but ultimately remain tied to their parent companies.

// floating effect for cereals
 for (let cereal of cereals) {
   cereal.x += sin(frameCount * 0.01 + cereal.offset) * 0.5;
   cereal.y += cos(frameCount * 0.01 + cereal.offset) * 0.5;
 }

Additionally, when hovering over a manufacturer, its abbreviated initials expand to reveal the full company name. This design choice symbolizes how large corporations often remain anonymous at first glance, their domineering presence only becoming recognizable when we, as consumers, dig deeper.
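
In rough terms, the hover reveal works like this (a simplified sketch, assuming each manufacturer object stores a position, its initials, and a fullName field; the real code’s names may differ):

// manufacturers: assumed array of company node objects
for (let m of manufacturers) {
  let hovering = dist(mouseX, mouseY, m.x, m.y) < 20; // cursor within 20 px
  textAlign(CENTER, CENTER);
  fill(0);
  // reveal the full company name only while hovered; initials otherwise
  text(hovering ? m.fullName : m.initials, m.x, m.y);
}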

Reflection

Surprisingly, the hardest part of this assignment was actually loading and properly parsing the CSV file; as you can see in the code, there’s a good amount of debugging around it. Initially I used loadString(), but I quickly realized that loadString() treats the CSV as a single string rather than structured data, making it difficult to extract individual values. To fix this, I switched to loadTable() with the “csv” and “header” options enabled, which let me access the data in a more organized way using column names. Even after switching to loadTable(), I ran into issues where some rows had missing or improperly formatted values. To handle this, I added debugging statements, such as console.log(table.columns); to check the structure, and console.warn() messages to flag any problematic rows. I also included a check for undefined or null manufacturer values to prevent errors from breaking the program. Overall, I really enjoyed this project because I was able to incorporate a lot of meaning into it. Through the process, I realized that I’m particularly interested in embedding social commentary in my work. Adding a bit more interactivity could be the next step.
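
A condensed sketch of that loading-and-validation flow (assuming the Kaggle file’s mfr and sugars column names; my actual checks were a bit messier):

let table;

function preload() {
  table = loadTable("cereal.csv", "csv", "header");
}

function setup() {
  createCanvas(800, 600);
  console.log(table.columns); // confirm the header row parsed correctly

  for (let r = 0; r < table.getRowCount(); r++) {
    let mfr = table.getString(r, "mfr");     // assumed manufacturer column name
    let sugar = table.getNum(r, "sugars");   // assumed sugar column name
    if (!mfr || isNaN(sugar)) {
      console.warn("Skipping malformed row " + r); // flag problematic rows
      continue;
    }
    // ...build the cereal and manufacturer nodes here...
  }
}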

Final Work

Week 3 — Reading Reflection

For me, Crawford’s take on interactivity cuts through the hype, offering me a clear framework for what true engagement should look like. His breakdown of listening, thinking, and speaking makes me reflect on why so many so-called “interactive experiences” can feel unsatisfying. The example of yelling at a movie that doesn’t listen resonated with me — it highlights how real interactivity isn’t just about reaction, but about meaningful, adaptive engagement. A system isn’t truly interactive if it only delivers pre-programmed responses; instead, it should feel like a conversation, where user input shapes the experience rather than just triggering pre-set outcomes. This reminds me of choice-based games, where decisions genuinely alter the course of the experience rather than funneling players down a predictable, linear path.

Reflecting on my p5.js sketches, I see areas for improvement. Right now, they react in simple, binary ways — a change in speed, color, or position based on mouse movement, but nothing deeper. This reading pushes me to think bigger. How could I implement real-time feedback loops or adaptive behaviors that evolve based on user interactions? Maybe I could introduce patterns of response, where the sketch remembers past inputs and gradually modifies its behavior, making the experience feel more alive. Crawford’s perspective forces me to ask: Am I designing true interaction, or just a glorified switch? If I want meaningful interactivity, I need to think beyond cause and effect and design systems that feel more fluid, responsive, and dynamic, creating experiences that engage users in a richer, more participatory way.

Assignment 3

Ideation

The goal of this project was to create a dynamic, interactive orbiting system that simulates celestial movement, building on my previous assignment, which used oscillating, vibrant circles. Using object-oriented programming (OOP) and arrays, the system generates multiple orbits, each consisting of particles that rotate around a central point.

To add interactivity, the speed of orbiting particles changes based on mouse movement, and the color intensity varies depending on the mouse’s vertical position. Additionally, users can click to add new orbits dynamically, making the system more engaging and expandable.

Code Highlight

A key challenge in this project was achieving a more natural, organic movement of the particles while still allowing user interaction. To address this, I gave the Particle class quite a few attributes, such as its radius, speed, angle, and color properties.

In the constructor below, each particle is assigned a randomized orbit speed and color offset to create diversity in motion and color. The update() function adjusts each particle’s speed based on mouse movement (mouseX), while the display() function continuously updates the position and color.

class Particle {
  constructor(radius, speed, angle, colorOffset) {
    this.radius = radius;
    this.speed = speed;
    this.angle = angle;
    this.colorOffset = colorOffset;
  }

  update() {
    let speedFactor = map(mouseX, 0, width, 0.5, 2); // mouseX changes speed
    this.angle += this.speed * speedFactor; 
  }

  display() {
    let x = this.radius * cos(this.angle);
    let y = this.radius * sin(this.angle);

    // mouseY modifies color intensity
    let colorFactor = map(mouseY, 0, height, 0.5, 2);
    let r = (sin(this.colorOffset + frameCount * 0.01) * 127 + 128) * colorFactor;
    let g = (cos(this.colorOffset + frameCount * 0.015) * 127 + 128) * colorFactor;
    let b = (sin(this.colorOffset + frameCount * 0.02) * 127 + 128) * colorFactor;

    stroke(r, g, b, 180);
    strokeWeight(2);
    point(x, y);
  }
}

The mousePressed() function allows dynamic expansion of the system by generating new orbits when the user clicks, making the artwork feel infinite and ever-changing.
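
Roughly, that handler looks like this (a simplified sketch, assuming orbits are stored as an array of Particle arrays; the constants are illustrative, not my exact values):

function mousePressed() {
  // each click adds a new orbit of particles at a random radius
  let newOrbit = [];
  let radius = random(50, 300);
  for (let i = 0; i < 40; i++) {
    // Particle(radius, speed, angle, colorOffset), matching the class above
    newOrbit.push(
      new Particle(radius, random(0.005, 0.02), random(TWO_PI), random(100))
    );
  }
  orbits.push(newOrbit); // orbits: assumed global array of orbits
}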

Reflection

I’m happy with how this project turned out, especially the dynamic orbits and interactive elements like mouse-controlled speed and color variations. That said, I see several ways to improve it. I could introduce random variations in particle paths, like elliptical or wobbly orbits, to make the motion feel more natural. Adding a subtle trail effect would enhance the visuals, making the movement more fluid. I’d also love to experiment with gravity-based interactions, where particles respond to the mouse, either pulling toward it or being repelled. Well, it kind of seems like this could be a new project all on its own haha.

Check out the final work:


Week 2 — Reading Reflection

When I was watching Casey Reas’ Eyeo talk on chance operations, I tried to bear in mind the question: “Where do you feel is the optimum balance between total randomness and complete control?” Because I watched the talk after finishing my artwork, I felt that it really aligned with the concepts I had envisioned for my assignment. For example, one piece he showed that left a lasting impression on me was Project Signals:

Basically, he explained that this artwork was a result of the negative and positive communications that proteins send within a cancer cell, essentially a visualization of biological data. Without this context, I would have assumed it was professional work by a painter because of how beautiful it looked. Catching myself making that assumption made me realize that there’s art everywhere, and that rather than replacing artists, computers can serve as a tool to help us turn naturally occurring phenomena into art.

Referencing back to the initial question, I think there’s an important point to be made about how much randomness we incorporate into an artwork. More specifically, I don’t think it actually matters whether an artwork is created totally randomly or with complete control. I believe that the beauty of creating art lies in artist autonomy, even with the use of computer technology. Of course, this leads to the conversation about AI ethics (e.g., how much of the artwork is truly the artist’s), but that might be a conversation for another day. Ultimately, I believe that what matters is the intention behind the creation; whether an artist carefully designs every element or embraces randomness as a creative tool, both approaches are valid. In my opinion, the interplay between control and unpredictability can lead to unexpected beauty, and that, in itself, is an artistic choice.