Response

Reading 1

Norman’s idea that “attractive things work better” really resonated with me, especially as someone who loves design. I’ve always felt drawn to beautifully designed objects, but I hadn’t fully realized how much their aesthetics affect the way I use them. Norman’s explanation made me reflect on how, when something looks good, I’m more patient with it, more curious, and even more forgiving when it doesn’t work perfectly. It’s like beauty creates a kind of emotional buffer that makes me feel more connected to the object. That connection isn’t just superficial—it actually helps me think more clearly and solve problems better, just like he describes. It reminded me why I care so much about thoughtful design: it’s not just about how something looks, but how it makes people feel, and ultimately, how well it works because of that feeling.

Reading 2

Reading Margaret Hamilton’s story made me reflect on how often history overlooks the people who work behind the scenes, especially women. Hamilton’s brilliance lay not just in her technical knowledge but in the way she foresaw failures and built resilience into the software itself. It is amazing to think that her code saved the Apollo 11 mission at a time of crisis, yet for a long time, she was not a household name. What impressed me most was how she fought to get software recognized as real engineering: it reminded me of how many fields that we now take seriously had to be legitimized by people who were initially dismissed. I find her story inspiring because it combines human intuition, systems thinking, and a deep belief in the importance of “invisible” work. It makes me think of how many other anonymous innovators shaped the world we live in today.


Newton’s Cradle Switch

Concept

This circuit demonstrates a simple, hands-free switch mechanism built with copper tape and an Arduino Uno. The switch is activated by light contact rather than by pressing a mechanical button: two strips of copper tape act as the contacts of a touch-activated switch, with a Newton’s Cradle serving as the physical interface.

Video

Materials

  • Arduino Uno
  • Breadboard
  • Copper Tape
  • Jumper Wires
  • Newton’s Cradle (for physical interaction mechanism)
  • LED
  • Resistors (330Ω)
  • USB Cable

Setup

The circuit is a basic LED switching system. Instead of a traditional push button, the switch is implemented with two strips of copper tape placed strategically on the Newton’s Cradle to detect contact.

  • One strip of copper tape connects to a digital input pin on the Arduino.
  • The other strip connects to ground (GND).
  • When a person touches or closes the circuit by interacting with the Newton’s Cradle, the circuit registers a state change.

Based on this input, the LED turns on or off, simulating a touch-activated switch.
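The project itself presumably runs a plain Arduino sketch; purely as an illustration of the same switch logic in JavaScript, here is a minimal version using the johnny-five library (this assumes the board runs StandardFirmata, which is not part of the original setup, and the pin numbers are illustrative):

// Equivalent switch logic via johnny-five (an assumption, not the project's actual code)
const { Board, Button, Led } = require("johnny-five");

const board = new Board();

board.on("ready", () => {
  // One copper strip on pin 2 with the internal pull-up enabled;
  // the other strip goes to GND, so contact pulls the pin LOW.
  const touch = new Button({ pin: 2, isPullup: true });
  const led = new Led(13); // could be the external LED wired through the 330Ω resistor

  touch.on("press", () => led.toggle()); // each contact toggles the LED
});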


I am happy with the result; a Newton’s Cradle never fails to make something engaging.

Midterm – Interactive Party

My project is an interactive music visualizer designed to create a party-like experience. It integrates sound, visuals, and interactivity to simulate a dynamic party environment. The motivation behind this project stems from my love for music and aesthetics, aiming to bring both together in a visually engaging way. The experience features different sound modes, including a salsa mode, and reacts dynamically to audio amplitude to enhance the party atmosphere.

Link to Fullscreen: https://editor.p5js.org/Genesisreyes/full/okpJ6jCqf

It features a background DJ image, a GIF animation for the salsa mode, and dynamically changing colors and light effects. OOP is implemented in the LiquidShape class, which generates butterfly-like abstract shapes that move and react to the music’s amplitude. The project also includes button interactions, allowing users to start the party, switch to salsa mode, switch between songs by clicking, or return to the main page. The use of p5.Amplitude ensures that elements like background flashing and shape movements are synchronized with the audio intensity, creating a smooth experience.

Key Highlights

1. Object-Oriented Programming (OOP) for Animation

The LiquidShape class is responsible for creating and animating butterfly-like figures that move fluidly across the screen. These shapes:

    • Respond to the music amplitude by changing motion and size.
    • Have two states: floating (trapped) and moving (liberated), which the user can toggle by clicking.
    • Implement procedural animation using noise-based randomization to ensure organic movement.
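A minimal sketch of how such a class might be structured (names and parameters here are illustrative, not the project’s actual code):

class LiquidShape {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.baseSize = random(20, 60);
    this.tOffset = random(1000); // per-shape noise offset for organic movement
    this.liberated = false;      // toggled when the user clicks
  }

  update(level) {
    // level is amplitude.getLevel(), between 0 and 1
    let speed = this.liberated ? 3 : 0.5;
    this.x += map(noise(this.tOffset + frameCount * 0.01), 0, 1, -speed, speed);
    this.y += map(noise(this.tOffset + 5000 + frameCount * 0.01), 0, 1, -speed, speed);
    this.size = this.baseSize * (1 + level * 2); // grow with the music
  }

  display() {
    noStroke();
    fill(210, 120, 255);
    ellipse(this.x, this.y, this.size);
  }
}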

2. Audio Amplitude Analysis for Visual Synchronization

A p5.Amplitude object (from the p5.sound library) is used to analyze the loudness of the currently playing audio and link it to the visual elements. The butterfly image pulses with the music, making it feel more immersive.

let level = amplitude.getLevel(); // amplitude is a p5.Amplitude instance; 0 = silence, 1 = loudest

let pulseSize = lerp(width * 0.2, width * 0.4, level); // map loudness to 20-40% of canvas width
image(butterflyBg, 0, 0, pulseSize, pulseSize); // draw the butterfly image at the pulsing size

How this works:

    • amplitude.getLevel() returns a value between 0 (silence) and 1 (loudest part of the track).
    • lerp() smoothly transitions the butterfly image size between 20% and 40% of the screen width based on the music volume.
    • This creates a natural pulsing effect that visually represents the beat.

3. Smooth Audio Transitions Between Tracks

Instead of abruptly switching sounds,  fade-out and fade-in effects were used when transitioning between songs to create a smoother experience.

function mousePressed() {
  if (!started) return; // ignore clicks before the experience starts

  let fadeDuration = 1.5; // Fade out in 1.5 seconds
  if (sounds[currentSoundIndex].isPlaying()) {
    sounds[currentSoundIndex].setVolume(0, fadeDuration); // ramp volume to 0 over fadeDuration seconds
    setTimeout(() => {
      sounds[currentSoundIndex].stop(); // stop only after the fade completes
      switchTrack();
    }, fadeDuration * 1000);
  } else {
    switchTrack();
  }
}

    • The current song fades out smoothly over 1.5 seconds.
    • After fading, it stops completely and switches to the next track.
    • The new track fades in, avoiding harsh audio jumps.
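switchTrack() is not shown here; presumably it advances the track index and fades the new song in, along these lines (an assumption, using p5.sound’s setVolume ramp):

function switchTrack() {
  currentSoundIndex = (currentSoundIndex + 1) % sounds.length;
  let next = sounds[currentSoundIndex];
  next.setVolume(0);      // start silent
  next.loop();            // begin playback
  next.setVolume(1, 1.5); // fade in over 1.5 seconds
}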

A major challenge was working with WEBGL, especially in handling 2D elements like buttons alongside 3D rendering. The solution involved creating a separate graphics layer (bgGraphics) for the background image.
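For context, bgGraphics is presumably created once in setup() as a 2D off-screen buffer via createGraphics(); a reconstruction under that assumption:

let bgGraphics;

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);          // main 3D canvas
  bgGraphics = createGraphics(windowWidth, windowHeight);  // separate 2D layer for the DJ image
  updateBackground();
}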

function windowResized() {
  resizeCanvas(windowWidth, windowHeight); // Resize the WEBGL canvas
  bgGraphics.resizeCanvas(windowWidth, windowHeight); // Resize 2D graphics buffer
  updateBackground(); // Redraw the background image
}

function updateBackground() {
  bgGraphics.background(0); // Fallback if the image doesn't load

  let imgWidth, imgHeight;
  let canvasAspect = windowWidth / windowHeight;
  let imgAspect = djImage.width / djImage.height;

  if (canvasAspect > imgAspect) {
    imgWidth = windowWidth;
    imgHeight = windowWidth / imgAspect;
  } else {
    imgHeight = windowHeight;
    imgWidth = windowHeight * imgAspect;
  }

  bgGraphics.imageMode(CENTER);
  bgGraphics.image(djImage, windowWidth / 2, windowHeight / 2, imgWidth, imgHeight);
}

One of the best aspects of the project is its ability to capture the energy of a party through interactive visuals and sound. The combination of image overlays and reactive shapes enhances the user’s engagement.

There are areas that could be improved. One issue is optimizing the responsiveness of visual elements when resizing the window, as some components may not scale perfectly. Another challenge was ensuring seamless audio transitions between different sound modes without abrupt stops. Additionally, clear on-screen instructions would have been ideal, so users know that clicking not only changes the music but also collects and liberates the butterflies.

Midterm Progress

I want to create a personalized DJ experience that allows users to choose different music genres, with the environment adapting accordingly. The idea is to present an interactive space where visuals, lighting, and animations react dynamically to the music and make it feel like a real party.

When the experience starts, a button launches the animation, and clicking anywhere switches songs while updating the environment to match the new song. The visuals rely on p5.Amplitude() to analyze the music’s intensity and adjust the movement of butterfly-like shapes accordingly (I reused my previous code to draw the butterflies).

One of the biggest challenges was managing these transitions without them feeling too sudden or chaotic. Initially, switching between songs resulted in jarring color and lighting changes, breaking the immersion. To fix this, I used lerpColor() to gradually shift the background and object colors rather than having them change instantly. Another issue was synchronizing the visuals with the audio in a meaningful way: at first, the amplitude mapping was too sensitive, making the animations look erratic. This still needs improvement; I may try modifying the amplitude scaling.
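A minimal sketch of the lerpColor() approach (variable names are illustrative, not the project’s actual code): each frame, the background eases a small fraction toward the new song’s target color instead of jumping.

let currentColor;
let targetColor;

function setup() {
  createCanvas(400, 400);
  currentColor = color(20, 20, 60);
  targetColor = currentColor;
}

function draw() {
  // move 5% of the remaining distance toward the target each frame
  currentColor = lerpColor(currentColor, targetColor, 0.05);
  background(currentColor);
}

function mousePressed() {
  // a track change just sets a new target; the transition happens gradually
  targetColor = color(random(255), random(255), random(255));
}

For the over-sensitive amplitude mapping, p5.Amplitude’s smooth() method, which averages recent levels, might be one way to calm the motion.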

Moving forward, I plan to expand the genre selection with more styles and refine how users interact with the interface. I want each environment to reflect the music’s vibe.

Week 5 Response

Computer vision differs from human vision in that humans perceive the world holistically, interpreting visual cues through experience and context, whereas computers work with quantitative, pixel-based representations of images. Instead of recognizing things through mental processes, machines rely on algorithmic pattern-recognition techniques.

Thus, compared to humans, computers also have difficulty identifying objects under different illumination and orientations unless they are trained on highly varied datasets. Just as humans estimate depth and motion from vision and general knowledge, computer programs need specific methods such as optical flow, edge detection, or machine learning algorithms to deduce similar information.

The power of computer vision to capture motion and analyze visual information has a profound effect on interactive art. Artists can take advantage of these technologies to create installations that respond dynamically to a viewer’s movements, gestures, or even facial expressions, producing immersive, interactive experiences. However, these technologies also raise ethical issues related to privacy and surveillance, particularly around the use of facial recognition and motion detection in interactive artworks. Consequently, artists working with computer vision must carefully weigh its creative possibilities against the ethical implications linked to surveillance culture.

Week 4 Visualization

I created an interactive galaxy simulation where users can draw stars onto the canvas by clicking the mouse. The stars are generated based on data provided in a CSV file, and the galaxy background moves and spirals as part of the effect.

One of the most challenging aspects of this project was working with dynamic data in the form of a CSV file and integrating it into the interactive star drawing. I wanted to make a galaxy where the background constantly moved while the user could add stars by clicking on the canvas. The CSV file had to be loaded, parsed, and used to generate star properties such as position, size, and color. Managing the data flow, especially ensuring that the properties were being applied correctly to each star object, was tricky.

The file contains predefined star data such as position (x, y), size, and color (r, g, b). In the preload() function, the CSV is loaded using loadTable(), which makes the data accessible within the program. After that, in the setup() function, I loop through each row of the CSV and extract these values to create star objects. These stars are then pushed into the stars array, which holds all the stars that will be drawn on the canvas.
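The loading step might look like this (the file name and the 'header' option are assumptions; the header row is what makes getNum(i, 'x') lookups by column name work):

let starData;
let stars = [];

function preload() {
  // CSV with columns: x, y, size, r, g, b
  starData = loadTable('stars.csv', 'csv', 'header');
}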


for (let i = 0; i < starData.getRowCount(); i++) {
  let x = starData.getNum(i, 'x');
  let y = starData.getNum(i, 'y');
  let size = starData.getNum(i, 'size');
  let r = starData.getNum(i, 'r');
  let g = starData.getNum(i, 'g');
  let b = starData.getNum(i, 'b');
  stars.push({
    x: x,
    y: y,
    size: size,
    color: color(r, g, b),
    brightness: random(180, 255)
  });
}

Right now, all the stars are drawn on the same layer, which can make the galaxy feel flat. I would like to add a sense of depth: farther stars could be smaller and move more slowly than closer stars, and different layers of stars could simulate foreground and background elements. I would also like to improve the galaxy background; it could be enhanced with a texture or glowing nebula-like shapes to give it more life and movement.

Week 4 Response

One thing that drives me crazy is the automatic doors at the D2 Dining Hall. These doors tend to close too quickly, often shutting in people’s faces and making them difficult to reopen. This creates frustration, especially during busy meal times when students are entering and exiting frequently. The problem seems to stem from poor sensor placement or improper programming of the doors’ timing mechanisms.

This is us:
Tom Trying To Open The Door for 2 minutes - YouTube

To improve this, the sensors should be recalibrated to ensure they detect approaching individuals from a greater distance and remain open long enough for safe passage. A more intuitive design could also incorporate clear signifiers—such as visible sensors or a light indicator—to communicate to users when the doors are about to close. These adjustments would align with Don Norman’s design principles, particularly the importance of discoverability, ensuring that users can easily and safely navigate the entryway.

How it feels to enter just before the door closes:
ArtStation - THE ULTIMATE TOM AND JERRY MEME COLLECTION in Native 4K

In Interactive Media, clear conceptual models help users predict how a system functions. For example, a digital audio workstation (DAW) for music production should visually resemble a real mixing console, helping users transfer their real-world knowledge to the digital interface. If a system’s structure contradicts user expectations, such as a website with illogical navigation, it creates confusion. A well-designed user interface (UI) ensures that users do not have to guess where to tap or click. This aligns with Norman’s principles of design, particularly discoverability, feedback, and conceptual models, which are highly relevant to interactive media.

Week 3 Response

Chris Crawford stresses that interactivity is fundamentally a two-way exchange in which both the user and the system actively listen, think, and respond. He defines interactivity as a cyclical process of input, processing, and output, using conversation as a concrete example and distinguishing it from passive experiences such as watching television.

A strongly interactive system must give priority to verbs (what the user can do) over nouns or static elements. For example, Crawford criticizes poorly designed software that overwhelms users with visual complexity but offers little meaningful action. To improve user interaction in p5 sketches, one could focus on creating dynamic feedback loops: rather than simply displaying shapes, a sketch should let users manipulate them through mouse or voice input, with immediate visual and auditory responses. Crawford also highlights the importance of clarity and simplicity: interactive systems should avoid overwhelming users with unnecessary complexity. Applying this principle, a p5 sketch could use intuitive controls, such as drag-and-drop mechanics or touch interactions, to ensure accessibility.
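As a toy example of such a feedback loop (a sketch of the idea, not taken from the reading): a circle that can be grabbed and dragged, with an immediate color change as feedback.

let x = 200;
let y = 200;
let dragging = false;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(240);
  if (dragging) {
    // immediate visual feedback: the shape tracks the user's action
    x = mouseX;
    y = mouseY;
  }
  fill(dragging ? 'tomato' : 'steelblue');
  circle(x, y, 50);
}

function mousePressed() {
  // the drag only starts when the press lands on the circle
  dragging = dist(mouseX, mouseY, x, y) < 25;
}

function mouseReleased() {
  dragging = false;
}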

In addition, Crawford’s insistence on designing for the user experience suggests incorporating playful experimentation into sketches, such as allowing users to “paint” with random colors or shapes that evolve based on their inputs. These ideas are in keeping with his core philosophy: interactivity thrives on meaningful participation and responsiveness.

Week 3 – Butterflies

This project is a fusion of randomness and intent—an attempt to create an organic yet controlled motion that feels like butterflies fluttering. I applied Object-Oriented Programming (OOP) to manage multiple independent shapes, each moving uniquely while trying to maintain a cohesive, natural effect. The idea was to make the shapes feel alive, responding to the mouse but still capable of drifting freely when “frozen.”

One of the most challenging aspects of this project was balancing chaos with grace. Randomness is great, but too much of it just looks messy. The trick was tuning the movement—using sinusoidal oscillations for a floating effect, lerp() for smooth transitions, and controlled distortions in the shape formation to make them feel organic rather than purely geometric. The rotation was another challenge; at first, it spun too fast, making everything feel frantic instead of serene. Slowing it down helped create a more elegant, butterfly-like glide.
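A compact illustration of that tuning (values assumed, not the project’s exact code): a slow sine wave drives the float, and a small rotation increment keeps the spin serene.

let phase;

function setup() {
  createCanvas(400, 400, WEBGL);
  phase = random(TWO_PI);
}

function draw() {
  background(20);
  let bob = sin(frameCount * 0.02 + phase) * 20; // gentle vertical oscillation
  translate(0, bob, 0);
  rotateZ(frameCount * 0.005); // slowed-down rotation, no frantic spin
  fill(200);
  ellipse(0, 0, 60, 40);
}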

// Ease toward the mouse; in WEBGL mode the origin is the canvas center, hence the offsets
this.x = lerp(this.x, mouseX - width / 2, 0.1);
this.y = lerp(this.y, mouseY - height / 2, 0.1);

Visually, the metallic look was inspired by an image I saw on Canva. That liquid-metal aesthetic felt perfect for making these abstract shapes shimmer as they rotate. Using specularMaterial() and proper lighting gave them depth.
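In p5, that look typically comes from pairing specularMaterial() with lights; a minimal illustration (the values here are assumptions):

function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(0);
  ambientLight(60);                        // soft base light
  pointLight(255, 255, 255, 0, -200, 300); // white key light from above
  specularMaterial(180);                   // reflective gray surface
  shininess(50);                           // tight highlight for a metallic feel
  rotateY(frameCount * 0.01);
  torus(80, 25);
}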

In the future, I would like to better replicate those 3D metallic shapes and have them appear smoothly while the user presses the mouse.

Week 2 – Response

What I found interesting about the talk is that it shows a compelling exploration of the intersection between order and chaos in art. The idea that the role of the artist is to maintain order in the face of nature’s chaos resonates with my own experiences in creative work. I have often found that introducing random elements can lead to unexpected and exciting results, as in Reas’s Tissue work, inspired by Valentino Braitenberg’s hypothetical vehicles. This approach of using biological references and physical properties as a basis for artistic exploration challenges me to reconsider my creative process. How could I incorporate more natural and chaotic elements into my work without losing a sense of artistic control?

Moreover, the concept of using randomness as a starting point, for example in John Cage’s chance-based compositions, raises questions about the nature of creativity itself. I wonder about the ethical implications of using AI-generated randomness in art: does this introduce a new form of bias or remove human intuition from the equation? Also, the observation that patterns trigger the imagination makes me reflect on how we perceive and interpret randomness. Perhaps what we see as random is simply a pattern we have not yet recognized. This talk made me want to experiment with new approaches that balance control and unpredictability in my work.