Midterm Project: The “SpectroLoom”

Concept

For my midterm project, I wanted to make something that reads as art to the casual viewer but, on closer inspection, reveals itself as a form of data. I decided to make a circular spectrogram, i.e., a visualization of sound in circular patterns. The idea crystallized when I saw an artwork on Vimeo that visualized sound in a unique way:

Using FFT analysis and the concept of rotating layers, I decided to recreate this artwork in my own style and to go beyond it, creating the SpectroLoom. And since most people sing along or hum to their favourite tunes, why not include them in the loop too?

At its core, SpectroLoom offers two distinct modes: “Eye of the Sound” and “Black Hole and the Star.” The former focuses solely on the auditory journey, presenting a circular spectrogram that spins and morphs in harmony with the music. The latter introduces a dual-layered experience, allowing users to sing along via microphone input, effectively merging their voice with the pre-loaded tracks, thus creating a sense of closeness with the song.

The Code/Science

Apart from FFT analysis, the project surprisingly relied on a lot of concepts related to relative angular velocity to make the sketch behave the way I wanted. FFT analysis gives me the amplitude of every frequency band at any given point in time, and I use these values to draw a linear visualizer on a graphics layer. The background canvas rotates anti-clockwise at one revolution per song duration, while the visualizing layer rotates in the opposite direction (clockwise), so the linear visualizer appears stationary because the relative angular velocity between the two is zero. The user layer, which holds the user’s waveform, does the same but uses the mic input as the source for its FFT analysis (and exists only in the second mode).
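
The counter-rotation trick can be shown in isolation with a minimal, self-contained sketch (illustrative only, not the project code; the random vertical line stands in for one FFT bar, and the 30-second duration is a stand-in for song.duration()):

// Minimal counter-rotation demo (illustrative sketch, not the project code)
let layer;                  // off-screen buffer that accumulates the imprint
let currentAngle = 0;       // degrees rotated so far
let angularVelocity;        // degrees per second
const SONG_DURATION = 30;   // seconds; stand-in for song.duration()

function setup() {
  createCanvas(600, 600);
  layer = createGraphics(width, height);
  angularVelocity = 360 / SONG_DURATION; // one full revolution over the duration
}

function draw() {
  background(0);

  // Stamp a new "visualizer bar" onto the buffer, rotated backwards (clockwise)
  layer.push();
  layer.translate(width / 2, height / 2);
  layer.rotate(radians(-currentAngle));
  layer.stroke(255);
  layer.line(0, 50, 0, 50 + random(100)); // stand-in for an FFT bar
  layer.pop();

  // Display the buffer rotated forwards (anti-clockwise); the two rotations cancel
  // for the newest bar, so the "pen" looks stationary while older bars sweep around
  push();
  translate(width / 2, height / 2);
  rotate(radians(currentAngle));
  imageMode(CENTER);
  image(layer, 0, 0);
  pop();

  currentAngle += angularVelocity * deltaTime / 1000;
}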

Also, once the song finishes, the user can left-click again to restart the same track. This is done by resetting the layer’s rotation angle to zero after a complete revolution and clearing both the song visualization layer and the user input layer.

// Visualizer screen drawing function for "Black Hole and the Star" mode
function drawBlackHoleAndStar() {
  if (song.isPlaying()) {
    background(0);

    // Get the frequency spectrum for the song
    let spectrumA = fft.analyze();
    let spectrumB = spectrumA.slice().reverse();
    spectrumB.splice(0, 40); // Drop the first 40 bins of the reversed spectrum (the highest frequencies)

    blendAmount += colorBlendSpeed;
    if (blendAmount >= 1) {
      currentColor = targetColor;
      targetColor = color(random(255), random(255), random(255));
      blendAmount = 0;
    }

    let blendedColor = lerpColor(currentColor, targetColor, blendAmount);

    // Draw song visualizer
    push();
    translate(windowWidth / 2, windowHeight / 2);
    noFill();
    stroke(blendedColor);
    beginShape();
    for (let i = 0; i < spectrumB.length; i++) {
      let amp = spectrumB[i];
      let x = map(amp, 0, 256, -2, 2);
      let y = map(i, 0, spectrumB.length, 30, 215);
      vertex(x, y);
    }
    endShape();
    pop();

    // Stamp the song's imprint onto the counter-rotating layer
    layer.push();
    layer.translate(windowWidth / 2, windowHeight / 2);
    layer.rotate(radians(-currentAngle));
    layer.noFill();
    layer.colorMode(RGB);

    for (let i = 0; i < spectrumB.length; i++) {
      let amp = spectrumB[i];
      layer.strokeWeight(0.02 * amp);
      layer.stroke(amp, amp, 255 - amp, amp / 40);
      layer.line(0, i, 0, i); // A zero-length line draws a dot; its size comes from the stroke weight
    }
    layer.pop();
    
    let userSpectrum = micFFT.analyze(); // Spectrum of the microphone input

    userLayer.push();
    userLayer.translate(windowWidth / 2, windowHeight / 2);
    userLayer.rotate(radians(-currentAngle));
    userLayer.noFill();
    userLayer.colorMode(RGB);

    for (let i = 0; i < userSpectrum.length; i++) {
      let amp = userSpectrum[i];
      userLayer.strokeWeight(0.02 * amp);
      userLayer.stroke(255 - amp, 100, 138, amp / 40);
      userLayer.line(0, i + 250, 0, i + 250); // Place the user imprint after the song imprint
    }

    userLayer.pop();

    push();
    translate(windowWidth / 2, windowHeight / 2);
    rotate(radians(currentAngle));
    imageMode(CENTER);
    image(layer, 0, 0);
    image(userLayer, 0, 0);
    pop();
  
    currentAngle += angularVelocity * deltaTime / 1000;

    if (currentAngle >= 360) {
      currentAngle = 0;
      
      userLayer.clear();
      layer.clear();
    }

    let level = amplitude.getLevel();
    createSparkles(level);

    drawSparkles();
  }
}

There is also a way for the user to start over entirely: a “Back” button that returns them to the instruction screen.

function setup(){
...
  // Create back button
  backButton = createButton('Back');
  backButton.position(10, 10);
  backButton.mousePressed(goBackToInstruction);
  backButton.hide(); // Hide the button initially
...
}

// Function to handle returning to the instruction screen
function goBackToInstruction() {
  // Stop the song and rewind it; stop() also resets playback to the beginning
  song.stop();
  
  // Clear all layers
  layer.clear();
  userLayer.clear();

  // Reset mode to instruction
  mode = "instruction";
  countdown = 4; // Reset countdown
  countdownStarted = false;

  // Show Go button again
  goButton.show();
  blackHoleButton.show();
  songSelect.show();
}

The user also has the option to save the imprint of their song via the “Save Canvas” button.

// Save canvas action
function saveCanvasAction() {
  if (mode === "visualizer") {
    saveCanvas('rotating_visualizer', 'png');    
  }
  if (mode === "blackhole") {
    saveCanvas('user_rotating_visualizer', 'png');
  }
}

Sketch

Full Screen Link: https://editor.p5js.org/adit_chopra_18/full/v5S-7c7sj

Problems Faced

Synchronizing Audio with Visualization:
    • Challenge: Ensuring that the visual elements accurately and responsively mirror the nuances of the audio was paramount. Variations in song durations and frequencies posed synchronization issues, especially when dynamically loading different tracks.
    • Solution: Implementing a flexible angular velocity calculation based on the song’s duration helped maintain synchronization (a short sketch of this calculation follows the list). However, achieving perfect alignment across all tracks remains an area for refinement, potentially through more sophisticated time-frequency analysis techniques.
Handling Multiple Layers and Performance:
    • Challenge: Managing multiple graphics layers (layer, userLayer, tempLayer, etc.) while maintaining optimal performance was intricate. Rendering complex visualizations alongside real-time audio analysis strained computational resources, leading to potential lag or frame drops.
    • Solution: Optimizing the rendering pipeline by minimizing unnecessary redraws and leveraging efficient data structures can enhance performance. Additionally, exploring GPU acceleration or WebGL-based rendering might offer smoother visualizations.
Responsive Resizing with Layer Preservation:
    • Challenge: Preserving the state and content of various layers during window resizing was complex. Ensuring that visual elements scaled proportionally without distortion required meticulous calculations and adjustments.
    • Solution: The current approach of copying and scaling layers using temporary buffers serves as a workaround (a rough sketch of this also follows the list). However, implementing vector-based graphics or adaptive scaling algorithms could provide more seamless and distortion-free resizing.
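
As mentioned in the first point, the duration-based calculation itself is small (a sketch only; song is assumed to be an already-loaded p5.SoundFile, and currentAngle and deltaTime behave as in the project code):

// One full revolution per song, regardless of track length (illustrative sketch)
let angularVelocity;   // degrees per second
let currentAngle = 0;  // degrees rotated so far

function startSong() {
  // song is assumed to be a p5.SoundFile loaded in preload()
  angularVelocity = 360 / song.duration(); // shorter tracks simply spin faster
  currentAngle = 0;
  song.play();
}

// then, inside draw():
//   currentAngle += angularVelocity * deltaTime / 1000;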
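
And the layer-preservation workaround from the last point could look roughly like this (again a sketch, assuming layer is the project’s main graphics buffer):

// Preserve the existing imprint when the window is resized (illustrative sketch)
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);

  // Copy the old buffer into a temporary one, then redraw it scaled to the new size
  let tempLayer = createGraphics(layer.width, layer.height);
  tempLayer.image(layer, 0, 0);

  layer = createGraphics(windowWidth, windowHeight);
  layer.image(tempLayer, 0, 0, windowWidth, windowHeight);
}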

Midterm Project Progress: Eye of the Sound

Concept

The “Eye of the Sound” is a project which transforms sound into mesmerizing visual art using the principles of physics. There are many others before me who have used music as a way of generating artwork, like:

However, one artwork on Vimeo was particularly interesting: a circular spectrogram. This project builds on that visualization at its core and expands on it.

That spectrogram appears to be driven mostly by the loudness of the sound, and though it is an intriguing artwork, it has no user interaction, no generative features, and so on. That is where my project, “Eye of the Sound,” comes in. It is built around the paired senses of sound and sight; the circular flow is inspired by the flow of life, and the finished image resembles the iris of an eye, a symbol of life.

Implementation

I have a basic idea of what must be done. The central part of the project is FFT (Fast Fourier Transform) analysis to separate the different frequencies in the song, which a linear visualizer then displays. The visualizer’s displacement from its mean position is stored in an array and drawn onto a rotating graphics buffer layer. The layer rotates in one direction and the visualizer in the opposite direction, cancelling the rotation so the visualizer appears stationary.

The color of the imprints on the layer is based on the loudness, and their intensity is determined by the FFT analysis. The subtitles are displayed below.

The user also has a second option in which they can sing along with the music being played; their voice imprints are calculated and displayed in the same manner, but in a different color scheme so the two can be told apart (a sketch of the microphone input path appears below).

This means providing a menu screen that shows the instructions, with separate buttons leading to whichever mode the user wants.
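
A minimal version of that microphone path, assuming the p5.sound library is loaded (the sketch is illustrative; only the micFFT name mirrors the project):

// Microphone-driven FFT for the sing-along imprint (illustrative sketch)
let mic, micFFT;

function setup() {
  createCanvas(600, 400);
  mic = new p5.AudioIn();
  mic.start();          // the browser will ask for microphone permission
  micFFT = new p5.FFT();
  micFFT.setInput(mic); // analyze the mic instead of the loaded song
}

function draw() {
  background(0);
  let userSpectrum = micFFT.analyze(); // 1024 amplitude values in the 0-255 range
  noFill();
  stroke(255, 100, 138);
  beginShape();
  for (let i = 0; i < userSpectrum.length; i++) {
    vertex(map(i, 0, userSpectrum.length, 0, width),
           map(userSpectrum[i], 0, 255, height, 0));
  }
  endShape();
}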

Improvements which can be made

  • Adding a song-selection screen, where the user can pick from a few songs in both modes
  • Adding a running-sprite loading bar that shows how much of the song is left
  • Adding some generative effects on the subtitles, or some sparkle effects on the “highs” of a song
  • Adding a performance score in the second mode to see how well the user has done with the song
  • A “Save Canvas” option for the user to store their experiences.

Week 2 Assignment: Rotating Optical Illusion

Concept

I was influenced by the optical illusions I encountered during my childhood, so I resolved to develop something similar. The code uses the way motion, shape, and color interact to create a dynamic visual experience that reads as an optical illusion.

Optical illusions use the way our brains handle patterns, color changes, and movement to make us feel things like depth, movement, or distortion that aren’t really there.

The arrangement of the squares makes the image appear to spin and pulse in the opposite direction when the user “pauses” the rotation by pressing the Space key, because the speed of the movement changes abruptly. The slowly shifting colors and the borders of the many concentric squares also play with our sense of depth, drawing attention inward while the background colors change smoothly and throw us off.
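
The core mechanic can be sketched separately in a few lines (illustrative only; the actual grid-of-squares drawing code is shown further below):

// Rotation with a Space-bar pause (illustrative sketch)
let angle = 0;
let paused = false;

function setup() {
  createCanvas(600, 600);
  rectMode(CENTER);
}

function draw() {
  background(30);
  translate(width / 2, height / 2);
  rotate(angle);

  // A simple ring of squares stands in for the full grid pattern
  for (let i = 0; i < 12; i++) {
    rotate(TWO_PI / 12);
    noFill();
    stroke(255);
    rect(150, 0, 40, 40);
  }

  if (!paused) {
    angle += 0.02; // stopping this abruptly produces the after-motion illusion
  }
}

function keyPressed() {
  if (key === ' ') {
    paused = !paused; // Space toggles the "pause"
  }
}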

Sketch

Code I am proud of

Nested “for()” loops are central to this sketch because they generate the grid of concentric squares. The loops step through every grid position and place each square automatically; doing this by hand would be impractical. The structure they produce is both organized and intricate, which strengthens the illusion itself.

// Loop over the center of every square pattern in the grid
// (r, g, b, t, and changeCol are assumed to be globals defined elsewhere in the sketch)
for (let x = -500 + 50; x < 500; x += 100) {
  for (let y = -500 + 50; y < 500; y += 100) {
    // Variable stroke color based on a sine wave
    changeCol = sin(t * 100);
    let newCol = map(changeCol, -1, 1, 0, 255);

    // Draw multiple concentric squares with decreasing sizes and random RGB values
    stroke(newCol);
    strokeWeight(3);
    fill(r, g, b, 60); // Random RGB with fixed alpha
    rect(x, y, 80, 80);

    stroke(newCol / 2);
    fill(r, g, b, 80); // RGB for smaller square
    rect(x, y, 60, 60);

    stroke(newCol / 4);
    fill(r, g, b, 120); // RGB for even smaller square
    rect(x, y, 40, 40);

    stroke(newCol / 6);
    fill(r, g, b, 140); // RGB with different alpha
    rect(x, y, 20, 20);

    stroke(newCol / 8);
    fill(r, g, b, 160); // RGB with a different alpha
    rect(x, y, 10, 10);
  }
}

These loops also make the sketch scalable and easy to edit in the future.

Future Improvements

  • There should be a live control panel with settings for things like spinning speed, background color transition speed, and square size. This would give people more control over the illusion and let them make the experience their own.
  • Adding a z-axis or depth-based motion would add another layer of complexity. A sense of 3D space could be created by making the squares larger in the center and smaller toward the background.
  • Random colors add variety, but a coordinated color palette could make the result more visually cohesive.

Week 5 Reading Response: Computer Vision for Artists and Designers

Computer vision has long been a prominent subject among programmers. It is a principal catalyst in the AI sector today, finding applications in healthcare and security. This reading provides historical context for computer vision as a cold, technical niche used by the military and mainstream industry, and shows how it has since become an accessible tool for artists, paving the way for deeper integration between computing and art. This seems especially relevant today, when the lines separating engineering and art are increasingly blurry and anyone can create interactive, visually striking experiences.

The recurring theme in the reading is how artists have used computer vision to build interactive exhibits that engage spectators in real time. Myron Krueger’s “Videoplace” is a fitting example, turning a cold, impersonal technology into something deeply human-centered by using full-body interaction as dynamic input to create immersive environments. In computer engineering, this is closely tied to evolving user interfaces, which today power technologies like motion sensors and gesture-based controls in gaming (such as VR).

One important difference between computer and human vision is that computers use algorithms to interpret images, while human vision is intuitive and contextual. A computer can “see” what we want it to by using techniques like frame differencing, background subtraction, and brightness thresholding, but these approaches are often inflexible and task-specific compared with the flexibility of human vision. Another key difference is the number of input channels: humans combine cues like color, contrast, and visual acuity into a cohesive perception, while a computer takes only the limited input required for the task it is designed to perform.
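
Frame differencing, for example, can be prototyped in p5.js in a handful of lines (a simplified, brightness-only sketch using the webcam; the threshold of 30 is arbitrary):

// Simplified frame differencing: highlight pixels that changed since the last frame
let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare the brightness of the current and previous frame at this pixel
    let curr = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    let prev = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    let changed = abs(curr - prev) > 30;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = changed ? 255 : 0;
    pixels[i + 3] = 255;
  }
  updatePixels();
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
}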

In interactive art, this rigidity can actually be useful: it simplifies the interaction to specific movements or gestures, allowing a clearer, more focused experience for viewers. However, as seen in David Rokeby’s “Sorting Daemon,” such automated systems can profile people, enabling intrusive monitoring and raising other ethical concerns. As computer vision technology develops further, it is imperative to guarantee accountability, equity, and transparency in its application.

Week 2 Reading Response: Casey Reas

In the realm of digital art, Casey Reas’s perspectives on order, chaos, and controlled randomness are intriguing. Art has often been perceived as a rigorous field demanding meticulous, well-planned execution. Reas, however, proposes broadening the definition of digital art: treating algorithms and randomization as creative mechanisms allows unanticipated patterns and themes to emerge. He draws on historical examples from the 20th century and shows how randomness can be incorporated into a creative process.

Reas’s “Process 18” is a fitting example. Using parameters to control the extent of randomness is a subtle masterstroke: it preserves the artist’s intentions while still leaving room for the creative factor. His work raises the question of whether “randomness” is genuinely random or a more complex form of order that we have yet to fully comprehend. Similar ideas appear in other domains, such as Cloudflare’s wall of lava lamps used to seed encryption, Google’s quantum random number generator, and the thermal-noise-based random number generator in Intel’s Ivy Bridge processors.

I intend to integrate these perspectives into my own work by using parameters to produce unpredictable variations in my projects. Such variations would help me explore a range of aesthetics, patterns, themes, and interactions prompted by user input, ensuring that my projects offer a distinctive experience to every user.

Week 3 Reading Response: “Art of Interactive Design”

“The Art of Interactive Design” by Chris Crawford is a fascinating book that explores the distinction between a truly interactive system and basic “reactive” gimmicks. Crawford characterizes interaction as a dynamic process with three fundamental elements: listening, thinking, and speaking. He emphasizes how all three are crucial for meaningful interaction between technology and user, and discusses each one separately. This is why, in my opinion, every single interaction the system has with the user needs to be carefully designed so that all three ways it engages with us (listening, thinking, and speaking) align with the intended use.

To apply the reading’s insights to my own work, I could increase the degree of interaction by paying more attention to how the system “listens” and “speaks” to the user. Right now my p5.js sketches react to basic inputs like mouse movements or clicks, but they don’t necessarily provide a deliberate or meaningful response. For example, in the pixelated text sketch (Week 4: Generative Text), I could have added more nuance to the ways the user interacts with it: perhaps a random song plays when the user presses the Enter key, or an inviting home screen offers brief instructions. This would require placing myself in the user’s shoes and making design choices accordingly.

Week 4 Assignment: Generative Text

Concept

This assignment is mainly based on the text patterns of early games like “Space Invaders” or “Tetris,” where the text and images were made out of square pixels. The way the blocks move and respond to user input is like the jumpy, random effects we see in old games where graphics were powered by simple hardware.

The key interactions involve typing any word you want to display and pressing “Generate” to form it in the grid. The grid can be toggled on and off with a separate “Toggle Grid” button. When the mouse hovers over the text, the blocks near the cursor shift to more varied colors. Holding down the “Enter” key causes the blocks representing the text to scatter randomly, simulating jumpy movements; as soon as the key is released, the boxes return to their original positions.

Sketch

Code I am proud of

// Setup Function
function setup() {
  // Create the canvas
  createCanvas(600, 400);
  cols = floor(width / gridSize);
  rows = floor(height / gridSize);
  
  // Taking the input and generating the other buttons
  input = createInput();
  input.position(20, height + 20);
  button = createButton('Generate');
  button.position(input.x + input.width + 10, height + 20);
  button.mousePressed(startAnimation);

  toggleButton = createButton('Toggle Grid');
  toggleButton.position(button.x + button.width + 10, height + 20);
  toggleButton.mousePressed(toggleGrid);

  textSize(fontSize);
  textAlign(CENTER, CENTER);

  pg = createGraphics(width, height);
  pg.pixelDensity(1);
}

// Function to show/hide the grid
function toggleGrid() {
  showGrid = !showGrid;
}

// Draw function
function draw() {
  // Smooth background blending
  bgColor.r = lerp(bgColor.r, targetBgColor.r, 0.01);
  bgColor.g = lerp(bgColor.g, targetBgColor.g, 0.01);
  bgColor.b = lerp(bgColor.b, targetBgColor.b, 0.01);
  background(bgColor.r, bgColor.g, bgColor.b);

  if (showGrid) {
    drawGrid();
  }

  if (animating && word.length > 0) {
    fillLetterPixels();
  }

  for (let i = 0; i < pixelIndex; i++) {
    let px = letterPixels[i].x * gridSize;
    let py = letterPixels[i].y * gridSize;
    
    // Jumpy explosion effect
    if (exploding) {
      // Randomize position for the explosion effect
      px += random(-10, 10);
      py += random(-10, 10);
    } 
    else {
      // Smooth return to original positions
      px = lerp(px, originalPositions[i].x * gridSize, 0.1);
      py = lerp(py, originalPositions[i].y * gridSize, 0.1);
    }

    let d = dist(mouseX, mouseY, px + gridSize / 2, py + gridSize / 2);
    if (d < gridSize * 4) {
      let blockColor = blockColors[i];
      fill(blockColor.r, blockColor.g, blockColor.b, map(d, 0, gridSize * 4, 255, 50));
    } 
    else {
      fill(0);
    }

    noStroke();
    rect(px, py, gridSize, gridSize);

    if (random(1) < sparkleChance) {
      fill(random(255), random(255), random(255));
      rect(px, py, gridSize, gridSize);
    }
  }
}
  • The setup() function creates the canvas, the input field and buttons, and the off-screen graphics buffer, and initializes the background color.
  • The draw() function is the main loop that runs continuously. It handles the background color blending, the sparkle effect, the jumpy explosion effect, and checks whether the grid is toggled on or off, among other things.
// Startup animation after the generate button is clicked
function startAnimation() {
  word = input.value();
  pixelIndex = 0;
  animating = true;
  getLetterPixels();

  blockColors = letterPixels.map(() => {
    return { r: random(255), g: random(255), b: random(255) };
  });

  originalPositions = letterPixels.map((p) => ({ ...p })); // Store original positions

  targetBgColor = {
    r: random(255),
    g: random(255),
    b: random(255)
  };
}

  • This function kicks off the animation that builds the letters of the word when the “Generate” button is pressed.

function getLetterPixels() {
  letterPixels = [];

  // Use the off-screen buffer (pg) to render the text
  pg.background(255);
  pg.textSize(fontSize);
  pg.textAlign(CENTER, CENTER);
  pg.fill(0);
  pg.text(word, pg.width / 2, pg.height / 2);

  pg.loadPixels();
  for (let y = 0; y < pg.height; y += gridSize) {
    for (let x = 0; x < pg.width; x += gridSize) {
      let index = (x + y * pg.width) * 4;
      let r = pg.pixels[index];
      if (r < 128) {
        let gridX = floor(x / gridSize);
        let gridY = floor(y / gridSize);
        letterPixels.push({ x: gridX, y: gridY });
      }
    }
  }
}
  • This function uses the off-screen buffer (pg) to render the input text and identifies the grid positions where the pixels are black (indicating a letter).
function fillLetterPixels() {
  if (pixelIndex < letterPixels.length) {
    let px = letterPixels[pixelIndex].x * gridSize;
    let py = letterPixels[pixelIndex].y * gridSize;
    fill(0);
    noStroke();
    rect(px, py, gridSize, gridSize);
    pixelIndex++;
  } 
  else {
    animating = false;
  }
}
  • This function simulates the pixels that form the letters. It gradually reveals the blocks one by one.
// Run the function if ENTER key is pressed
function keyPressed() {
  if (keyCode === ENTER) {
    exploding = true; // Start the explosion effect
  }
}

// Run the function if ENTER key is released
function keyReleased() {
  if (keyCode === ENTER) {
    exploding = false; // Stop the explosion and return to original positions
  }
}
  • When the Enter key is pressed or released, it starts or stops the jumpy explosion effect. When the Enter key is pressed, the exploding flag is set to true, causing the blocks to scatter randomly. When the Enter key is released, the exploding flag is set to false, and the blocks return to their original positions.

Challenges/ Future Improvements

Some of the major challenges I had faced were:

  • Creating the random “jumpy” explosion effect and making sure that the boxes making up the letters return to their original positions once the key is released.
  • The “sparkle” effect was another challenge as it required creating intermittent flashes of random color across specific blocks.
  • The color change on mouse hover was another challenge. By measuring the distance (dist()) between the mouse and each block’s center, the program smoothly blends colors based on proximity to the cursor, giving the effect of interacting with the text.

The code could be improved further by introducing collisions between blocks, adding layers to create a 3D pixelated-text feel, and so on.

Week 4 Reading Response: Ch 1, “The Design of Everyday Things”

In Chapter 1 of “The Design of Everyday Things,” titled “The Psychopathology of Everyday Things,” Norman emphasizes how everyday objects can confuse and fail their users because their design cannot be understood. Throughout the chapter, he introduces key concepts like:

  • “Discoverability,” the degree to which users can figure out how to work something just by looking at it, and
  • “Feedback,” a signal to the user that the task they intended to perform has succeeded or failed along the way.

He gives the example of doors (the so-called ‘Norman doors’, named after him) that leave people puzzling over how to use them. His main argument, which I agree with, is that design should be human-centered, ensuring that an object is easily usable by everyone, no matter how simple or complex it is.

“Human-Centered Design” is a term which I have also previously encountered when I took a J-term class “Interactive Media in the World”. I learnt that this principle can be easily applied to various situations ranging from how things are arranged (like electric tools in a lab) to flow of traffic in a city (to prevent accidents), smart home technologies, human-computer interactions, and so on.

If I had to think of a product that annoys me a lot, it would be badly designed electrical adapters. There is a particular one that I always keep with me as a reminder.

Similar products with the same design can be found on Amazon and elsewhere. While mobile devices charge without any issue, the problem shows up with laptop chargers and desktop power cords (especially in the UAE): there is not enough clearance between the socket and the wall, rendering the adapter unusable for devices with bigger plugs.

In terms of interactive media, I believe Norman’s principles can significantly improve user experiences. When designing websites, apps, or interactive displays, discoverability should be a primary goal so that users immediately understand how to engage with the system; obvious signifiers such as labels or icons can guide them through the interface. Feedback matters just as much: a button changing color when clicked, or a sound signaling that a task has completed, are visual and auditory cues that boost user confidence and enhance the overall experience.
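
A toy p5.js example of that feedback idea (purely illustrative, with a hypothetical “Save” action):

// Visual feedback: the button visibly confirms that the click "worked"
let clicked = false;

function setup() {
  createCanvas(300, 200);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(240);
  fill(clicked ? color(80, 180, 80) : color(200)); // green signals completion
  rect(100, 75, 100, 50, 8);
  fill(0);
  text(clicked ? 'Saved!' : 'Save', 150, 100);
}

function mousePressed() {
  // Only respond when the cursor is over the button
  if (mouseX > 100 && mouseX < 200 && mouseY > 75 && mouseY < 125) {
    clicked = true;
  }
}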

Week 3 Assignment: Generative Artwork using OOP: Screensavers and Physics

Concept and Inspiration

For this assignment, I drew inspiration from the era of the late 1990s and early 2000s. At that time, Windows 98 and Windows XP had some unique screensavers that I still remember and that evoke a sense of nostalgia in me. These screensavers (link) often featured geometric shapes and repetitive, hypnotic patterns, such as oscillating lines, spirals, and pendulum-like movements.

Fig: Some of the Windows Screensavers

For this, I used robotic arms and “Inverse Kinematics” as my inspiration. I saw this as the perfect opportunity to blend computational techniques into this visual style. The robot arms represent the pendulums of the past, but with a unique twist. Instead of being merely simple lines, these arms demonstrate the principles of Object-Oriented Programming (OOP), where each pendulum is treated as an independent object, following specific behaviors such as oscillation and length growth. Moreover, inverse kinematics allows each arm to respond dynamically to changing positions, mimicking the flexibility and precision of robotic motion. The result is a digital artwork that blends the nostalgia of retro visuals with the sophistication of modern computational design.

Code Explanation

“Arm” class

This class features a constructor that initializes each pendulum’s amplitude, angle, angular velocity, and length growth. It also has an “update()” method that updates the pendulum’s length (amplitude) and oscillation (angle). The “display()” method calculates the pendulum’s current position and draws a line to it from the previous position.

// Defining the Arm class
class Arm {
  constructor(amplitude, angle, angleVel, lengthGrowth) {
    this.amplitude = amplitude;      // Initial length of the arm
    this.angle = angle;              // Starting angle
    this.angleVel = angleVel;        // Angular velocity
    this.lengthGrowth = lengthGrowth; // How fast the arm grows in length
  }

  // Method to update the arm's properties (growth and oscillation)
  update() {
    this.amplitude += this.lengthGrowth; // Increase length over time
    this.angle += this.angleVel;         // Update angle for oscillation
  }

  // Method to display the arm
  display(prevX, prevY) {
    let x = sin(this.angle) * this.amplitude;  // Calculate x position
    let y = cos(this.angle) * this.amplitude;  // Calculate y position
    line(prevX, prevY, x, y); // Draw line from previous position to current
    return { x, y };          // Return current x, y for the next arm
  }
}
“setup()” function

The “setup()” function initializes the canvas size and prepares the environment. It disables fills for the shapes and sets default stroke properties. It randomizes the number of pendulum arms (num_arms) and each arm’s properties, with every arm receiving random values for amplitude, angular velocity, and growth rate. The arms are stored in an array, each represented as an object with properties for oscillation and growth.

function setup() {
  createCanvas(800, 800);
  noFill();
  stroke(255); // Initial stroke color
  strokeWeight(1); // Initial stroke weight

  // Randomize the number of arms between 2 and 10
  num_arms = int(random(2, 10));

  // Initialize the Arm objects with random values
  for (let i = 0; i < num_arms; i++) {
    let amplitude = random(70, 150);
    let angleVel = random(0.01, 0.05);
    let lengthGrowth = random(0.1, 0.5);

    // Create new Arm and push to the arms array
    arms.push(new Arm(amplitude, 0, angleVel, lengthGrowth));
  }

  // Initially set the center to an off-canvas position
  centerX = -1000;
  centerY = -1000;
}
“draw()” function

This function draws a semi-transparent background overlay to maintain the fading trails without fully erasing the canvas: “rect()” draws a slightly transparent rectangle over the entire canvas, producing the trailing effect. The “translate()” function shifts the origin of the canvas to the clicked point (centerX, centerY), which acts as the center of the pendulum system. A loop then iterates over each arm, calculating its new position from its current angle and amplitude using “Inverse Kinematics.” The arms are drawn as lines connecting one pendulum to the next, simulating a robotic arm whose length increases with time.

// Draw Function
function draw() {
  if (hasStarted) {
    fill(0, 10); // Semi-transparent background to maintain trails
    rect(0, 0, width, height);

    // Set the center of the arms to the clicked position
    translate(centerX, centerY);

    let prevX = 0;
    let prevY = 0;

    // Loop through each arm and update and display them
    for (let i = 0; i < arms.length; i++) {
      let arm = arms[i];

      // Update arm properties
      arm.update();

      // Display the arm and update the previous position for the next one
      let newPos = arm.display(prevX, prevY);
      prevX = newPos.x;
      prevY = newPos.y;
    }
  }
}
“mousePressed()” function

The “mousePressed()” function updates the center of the pendulum system to wherever the user clicks on the canvas (mouseX, mouseY). This triggers the pendulum animation by setting “hasStarted” to true. Upon clicking, it randomizes the stroke color, weight, and number of arms, creating variety and making each user interaction unique. It also reinitializes the pendulum arms with new random values, ensuring a different pattern is generated with every click.

// This function will run when the mouse is pressed
function mousePressed() {
  // Set the new center of the arm system to the clicked location
  centerX = mouseX;
  centerY = mouseY;
  hasStarted = true;

  // Randomize background and stroke properties
  stroke(random(0, 255), random(0, 255), random(0, 255));
  strokeWeight(random(1, 10));

  // Randomize the number of arms between 2 and 6
  num_Arms = int(random(2, 6));

  // Reinitialize the arms array
  arms = [];
  for (let i = 0; i < num_Arms; i++) {
    let amplitude = random(80, 150);
    let angleVel = random(0.01, 0.05);
    let lengthGrowth = random(0.1, 0.5);

    // Create new Arm objects with random values
    arms.push(new Arm(amplitude, 0, angleVel, lengthGrowth));
  }
}

Sketch

Further Improvements which can be made

Smoother Transitions: Currently, the background might change too quickly when clicking. Adding a smooth transition effect between pendulum sets can make the animation more fluid and visually appealing.

Scaling up to 3D space: I had originally thought of introducing a responsive 3D canvas using “WEBGL” mode in p5.js, but that made the user interaction a little too complex, so I dropped it for now.

Damping: Currently, the pendulums keep swinging until someone clicks again. Introducing damping would be another way to add realism (a possible tweak is sketched below).
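
A possible version of that damping tweak (a sketch only; it modifies the Arm.update() method shown above with a hypothetical damping factor):

  // Inside the Arm class: slow the oscillation and growth a little every frame
  update() {
    this.amplitude += this.lengthGrowth;
    this.angle += this.angleVel;
    this.angleVel *= 0.999;       // hypothetical damping factor
    this.lengthGrowth *= 0.999;   // arms gradually stop growing
  }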

Collision: Having arms change their path or length when they come into contact with one another is another aspect that could be explored.

Reflection

This project modernizes the retro screensaver aesthetic using modern programming techniques such as OOP and inverse kinematics, combined with user interactivity. The code is modular, making it easy to add new features or improvements, and the possibilities for further customization and expansion are vast.