Reading reflection – week 8a

Diving into Donald Norman’s ideas and Margaret Hamilton’s moon code journey got me thinking in ways I didn’t expect. Norman’s talk about how good design blends aesthetics with function kind of hovered in the background for me, until Hamilton’s story of crunching code for the Apollo mission brought it all home. It wasn’t about the looks; it was about making things work when it really mattered. Hamilton’s tale felt like finding clarity in a complex puzzle, showing that true genius in design can sometimes be all about the nitty-gritty of making things foolproof.

This mash-up of thoughts was more than just an academic exercise; it felt personal. It made me question my own take on what design really means and its impact. Going through their stories, I started seeing design not just as something that pleases the eye but as something deeply intertwined with solving real-world puzzles.

I realized my own moments of ‘aha’ often came when I was least expecting them, not while chasing some ideal of creativity but simply trying to work through a problem. Like Hamilton, the beauty of what I was doing often lay in the solution’s elegance and simplicity, not in how it looked. What I grasp now is that creativity isn’t just about coming up with something flashy; sometimes, it’s about the grind, the clever fixes, and making things work when the pressure’s on. It’s a reminder that there’s creativity in the chaos of problem-solving, a side of design I’ve come to appreciate in its own unique way.

Midterm project – ?sound?

While immersed in the captivating sounds of Giorgio Moroder, hailed as the pioneer of synthesizer disco and electronic dance music, I was struck by the profound realization that synthesizers are not just musical instruments; they are time machines, capable of transporting us to the soundscapes of the future. This revelation sparked an idea in my mind: to create not just any sound generator, but a sound visualizer that encapsulates the essence of music and visual artistry into one cohesive experience. This project is not a synthesizer in the traditional sense, but it’s a homage to the concept of generating “music” in a form that is both simple and visually engaging. It’s an interactive canvas where every user’s interaction weaves a unique auditory and visual narrative, ensuring that no two experiences are alike.

Full screen: https://editor.p5js.org/MarwanWalid2/full/tYNAJRvFm

The interactive sound visualizer I developed is a testament to P5’s versatility, allowing users to manipulate sound and visuals in real-time through mouse movements. Users can control the amplitude and reverb effects of the sounds produced by pressing different keys, each mapped to a distinct frequency resembling a note. The visual component—a series of organic shapes generated using p5.js’s noise() function—evolves in real-time, mirroring the auditory input for a truly synesthetic experience.

let amp = map(mouseY, 0, height, 1, 0); // Control volume with mouse Y
let dryWet = map(mouseX, 0, width, 0, 1); // Control reverb with mouse X
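The mapping logic can be checked outside p5 by reimplementing `map()` (a simple linear rescale). The key-to-frequency table below is an illustrative assumption, not the sketch's actual values; the helper name `controlsFromMouse` is mine:

```javascript
// Stand-alone sketch of the mouse-mapping logic. p5's map() is
// reimplemented here since this runs outside the p5 environment.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Hypothetical key-to-frequency table (equal-tempered scale, C4 upward);
// the real sketch maps each key to its own note in a similar way.
const noteFreqs = { a: 261.63, s: 293.66, d: 329.63, f: 349.23 };

// Derive both controls from the mouse position, as in the sketch:
// vertical position sets amplitude, horizontal sets the reverb mix.
function controlsFromMouse(mouseX, mouseY, width, height) {
  return {
    amp: map(mouseY, 0, height, 1, 0),   // top of canvas = loudest
    dryWet: map(mouseX, 0, width, 0, 1), // right edge = full reverb
  };
}
```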

I’m particularly proud of how the project leverages the noise() function to create visual representations of the sound. This function generates organic, ever-changing patterns, ensuring that the visual output is as dynamic and unique as the auditory one. The decision to allow users to influence the sound’s amplitude and reverb through simple mouse movements was driven by a desire to make the experience as intuitive and engaging as possible, removing barriers to creativity and exploration.

Every creative endeavor comes with its set of challenges, and this project was no exception. Balancing the responsiveness of the visual output with the fluidity of the sound was a delicate task. Ensuring that the system could handle real-time input without significant lag required careful optimization and testing. Additionally, while the current implementation offers a novel experience, it barely scratches the surface of what’s possible with sound synthesis and real-time visual generation.

One area ripe for exploration is the integration of more complex sound synthesis techniques, moving closer to the capabilities of a full-fledged synthesizer. Expanding the range of user controls to include different waveforms, modulation effects, and perhaps even a sequencer could enrich the user experience significantly. Moreover, the visual aspect could be enhanced by introducing more variables influenced by the sound, such as color changes or shape transformations.

This project stands as a bridge between the past and the future, drawing inspiration from the pioneers of electronic music while inviting users to explore the boundless possibilities of digital creativity. It’s a celebration of the unpredictability and individuality inherent in artistic expression, encapsulated in an experience where no two interactions are the same. I am proud of the foundation laid by this project and excited about the potential it holds for further exploration and development in the realms of sound visualization and interactive art.

Midterm progress

The core idea of this project is to develop an interactive sound visualizer that not only generates music based on keyboard inputs but also visualizes this music with unique, noise-driven shapes on the screen. Each key press corresponds to a different note, and the shapes evolve in real-time to reflect the pitch, volume, and timbre of the sounds being produced. The project aims to blur the lines between music creation and visual art, offering users a multi-sensory experience of composing and visualizing music in real time.

Users interact with the application primarily through the keyboard, where each key triggers a distinct musical note or sound. The visual component consists of evolving shapes whose characteristics (e.g., size, color, noise level) change in response to the music. The proposed enhancement involves integrating hand detection technology, potentially through a webcam and machine learning models, to allow users to manipulate sound characteristics (like volume and pitch) and visual aspects (like shape complexity) with hand movements and gestures.

The main component is the visual class:

class ShapeVisualizer {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.angle = 0; // animation phase, advanced each frame
  }

  draw(size, alpha, weight) {
    strokeWeight(weight);
    push();
    translate(this.x, this.y);
    beginShape();
    for (let a = 0; a < TWO_PI; a += 0.02) {
      // sample 2D noise around the circle, offset by the animation phase
      let xoff = map(cos(a), -1, 1, 0, 5) + this.angle;
      let yoff = map(sin(a), -1, 1, 0, 5) + this.angle;
      let r = map(noise(xoff, yoff), 0, 1, size * 0.5, size);
      let x = r * cos(a);
      let y = r * sin(a);
      // hue cycles with the phase; assumes colorMode(HSB) is set in setup()
      let hue = (this.angle * 50) % 360;
      stroke(hue, 100, 100, alpha);
      vertex(x, y);
    }
    endShape(CLOSE);
    pop();
    this.angle += 0.01; // keep the shape evolving between frames
  }
}
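The geometry of the class can be verified outside p5: every vertex radius must land between `size * 0.5` and `size`. In this stand-alone check, `map()` is reimplemented and `noise()` is replaced by a deterministic stand-in (any function returning values in [0, 1) works for the geometry test):

```javascript
// Plain-JS check of the radial noise geometry used by ShapeVisualizer.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Stand-in for p5's noise(): deterministic, bounded in [0, 1).
const fakeNoise = (x, y) => (Math.sin(x * 12.9898 + y * 78.233) + 1) / 2;

// Compute the shape's vertices the same way draw() does.
function shapeVertices(size, angle, step = 0.02) {
  const pts = [];
  for (let a = 0; a < 2 * Math.PI; a += step) {
    const xoff = map(Math.cos(a), -1, 1, 0, 5) + angle;
    const yoff = map(Math.sin(a), -1, 1, 0, 5) + angle;
    const r = map(fakeNoise(xoff, yoff), 0, 1, size * 0.5, size);
    pts.push([r * Math.cos(a), r * Math.sin(a)]);
  }
  return pts;
}
```

Because the radius never collapses below half of `size`, the shape always reads as a wobbling blob rather than degenerating into a point.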


In the future, I hope to make this more of an interactive synthesizer, ideally, as I said, using hand motion.

Assignment 5 – reading reflection

Reading through Golan Levin’s musings on computer vision feels like sitting down with a mad scientist who’s decided to use his powers for art instead of world domination. It’s like he’s taken this super complex tech thing and turned it into a playground. Imagine computers squinting at us, trying to make sense of what they’re seeing, and then artists stepping in saying, “Let’s make it fun.” Levin essentially tells us that with a bit of creativity, even the geekiest tech can be as expressive and unpredictable as a paint-splattered canvas. Who knew algorithms could be party tricks for the digital age? It’s a reminder that innovation isn’t just about serious business applications; it’s also about making cool stuff just because we can.

Assignment 4 – reading reflection

Diving into “The Design of Everyday Things,” particularly the chapter about the everyday challenges we face with seemingly simple objects, was like turning on a light in a room I didn’t realize was dimly lit. It struck me how something as mundane as a door could become a puzzle. Don Norman eloquently sheds light on this, making me nod in agreement and chuckle at the universal struggle against “Norman doors” – those doors we push when we need to pull and vice versa, named humorously after Norman himself for highlighting these design missteps.

In interactive media, users should grasp not only what they can do within an interactive piece but also have some insight into the cause and effect of their interactions. An interactive narrative created in p5.js, for example, should provide clear feedback in response to user inputs, making the narrative's branching paths understandable and meaningful. Often, when we create "interactive art," the interactivity is there and looks amazing, but it can be confusing or unclear to explore.

Assignment 4 – Mood clock

For this assignment, I was inspired by the fluid nature of emotions and how our surroundings can influence our mood throughout the day. It got me thinking about the concept of time and its relentless, cyclical pattern. This reflection led me to ponder the idea of visualizing time not just as a sequence of numbers but as an ever-changing spectrum of emotions. What if we could see the time of day reflected through our moods?

**double click on the sketch to interact**

It incorporates a real-time clock, showing hours, minutes, and seconds, alongside a pulsating ellipse that grows and shrinks with each passing second. The background color shifts through various hues—each representing a different mood such as “Dreamy,” “Energetic,” “Calm,” and “Reflective”—as the day progresses. But here’s the twist: users can manually switch between these moods with the press of a key, creating a playful interaction between the digital environment and the user’s current emotional state.
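One way the per-second pulse could work (my guess at the approach, not necessarily the sketch's exact code) is to map the fractional part of the current second onto a sine wave, so the ellipse swells and shrinks smoothly once per second:

```javascript
// Hypothetical pulse helper: maps elapsed milliseconds to an ellipse
// diameter that completes one smooth grow-shrink cycle per second.
function pulseDiameter(millis, minD = 50, maxD = 100) {
  const phase = (millis % 1000) / 1000;        // 0..1 within each second
  const s = Math.sin(phase * 2 * Math.PI);     // -1..1 over the cycle
  return minD + ((s + 1) / 2) * (maxD - minD); // rescale to [minD, maxD]
}
```

In p5, this would be called from `draw()` as `ellipse(width / 2, height / 2, pulseDiameter(millis()))`.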

// Change mood based on time if the user didn't select any
if (mood == "") {
  if (hours < 6) {
    def = "Dreamy";
  } else if (hours < 12) {
    def = "Energetic";
  } else if (hours < 18) {
    def = "Calm";
  } else {
    def = "Reflective";
  }
  mood = def;
}

// Change the background color based on the selected mood
switch (mood) {
  case "Dreamy":
    bgColor = color(120, 113, 255);
    break;
  case "Energetic":
    bgColor = color(255, 213, 0);
    break;
  case "Calm":
    bgColor = color(0, 255, 183);
    break;
  case "Reflective":
    bgColor = color(255, 105, 180);
    break;
  default:
    bgColor = color(0);
}
background(bgColor);

Adding more interactive elements, such as allowing users to input their current mood or introducing more complex animations that respond to the time of day, are just a couple of ideas I'm exploring. I really hope the mood clock will let people sit and reflect on their emotions throughout the day.

Assignment 3 – what exactly is interactivity

I totally disagree with the definition of interactivity that is provided. If I did agree with it, then the IM major should have a different name, since most of what we do, according to the definition in the readings, is not interactive. By that definition, all the 2D (and other) "interactive" art we make is just us participating in something. This does make sense, because the art we created is not talking back, forming a thought, or having a meaningful interaction. Someone could argue that some interactive art does react to what we do, and that if we both react back and forth, that is an interaction. But would this be an interaction, or just a lifeless programmed reaction that the program will perform no matter what? It is the same as in a movie: no matter what, the actor is going to follow the script.

If we look at the modern world, social media would also be just a form of participation under the definition provided. But when I think about it, we do interact with each other while using social media as a medium or tool to do so. Same with music; the writer said we don't interact with music but with each other through music. The only form of interactivity with something that is not human, per the definition again, would be AI, because AI is currently the only thing that doesn't require a real human in front of me in real time to have an interaction. But other than that, should we even keep calling everything else interactive?

Assignment 3 – Around the world

For this assignment, I really wanted to do something fun. I was listening to the song “Around the World” by Daft Punk and thought about how the song is a hit while they just repeat the same sentence.

So, why not create a repetitive pattern that uses that repetitive sound? I created a Robot class that generates a new robot every 60 frames. Each robot has a random head size, a random body size, random colors, and either a visor or normal eyes. Each robot also has an angle and a speed so it can move in a circle.

class Robot {
  constructor(x, y, angle) {
    this.centerX = x; // Center of circular motion
    this.centerY = y; // Center of circular motion
    this.angle = angle; // Starting angle for circular motion

    this.headSize = random(12, 25);
    this.bodySize = random(this.headSize + 5, this.headSize + 25);

    this.headColor = color(random(255), random(255), random(255));
    this.bodyColor = color(random(255), random(255), random(255));

    // Choose between a visor or normal eyes
    this.eyeType = floor(random(2));

    this.orbitRadius = 200; // Radius of circular motion
    this.speed = 0.02; // Speed of rotation
  }

  // update() and display() methods omitted here for brevity
}

 

The song will be playing in the background and keeps repeating forever. So the song will keep saying “Around the World” while the robots are moving around the world (the center being the mouse).
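The circular motion itself is plain polar-to-Cartesian math. A minimal stand-alone sketch of the per-frame update (the function name is mine, not the sketch's):

```javascript
// Per-frame orbit update, mirroring the Robot fields above: advance the
// angle by `speed`, then convert the polar coordinates (orbitRadius,
// angle) back to screen x/y around the orbit center.
function orbitStep(robot) {
  robot.angle += robot.speed;
  return {
    x: robot.centerX + robot.orbitRadius * Math.cos(robot.angle),
    y: robot.centerY + robot.orbitRadius * Math.sin(robot.angle),
  };
}
```

In the sketch, the orbit center is simply the mouse position, so the whole swarm follows the cursor "around the world."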

 

**click on the sketch**

**if WordPress doesn’t make the sound work, check the sketch itself**

I really enjoyed doing this because I love Daft Punk, but I hope to maybe add some more interaction from the user to do something with these robots honestly.

Assignment 2 – Structured chaos

So basically, after watching the lecture about art and randomness, I wanted to create something chaotic. It seems as if nothing makes sense and everything is random, while in reality everything moves using calculated noise that produces a grey-scale artwork.

To anyone who would say "oh, it's random stuff that doesn't make sense," I will totally say: yes, it is. Art sometimes never makes sense and has no purpose other than being chaotic. But every time you run the code, you will get a totally different output: a different set of squares, different numbers, and of course different offsets. There is even a chance the squares fill the screen completely.

To create the noisy squares that appear like segments, I used the noise function.

for (let i = 0; i < 500; i += 50) {
  for (let j = 0; j < 500; j += 50) {
    // noise sample driven by the grid position and the moving offsets
    n = noise((i + xOffset) * 0.005, (j * yOffset) * 0.005);
    n *= j;
    rect(n, j, 20, 20);
  }
}
The noise function tries to create randomness in a way that is connected to the randomness that was generated before and after. So as I said, all this nonsense is literally structured and calculated.
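This "connected randomness" property can be demonstrated with a tiny 1D value-noise stand-in. This is not p5's Perlin noise, just the same idea: random values on a lattice, smoothly interpolated, so nearby inputs give nearby outputs, unlike pure `random()`:

```javascript
// Minimal 1D value noise: a deterministic pseudo-random value at each
// integer lattice point, smoothly interpolated between points.
function makeValueNoise(seed = 1) {
  // deterministic pseudo-random value in [0, 1) per lattice point
  const lattice = (i) => {
    const s = Math.sin(i * 127.1 + seed * 311.7) * 43758.5453;
    return s - Math.floor(s);
  };
  const smooth = (t) => t * t * (3 - 2 * t); // smoothstep easing
  return (x) => {
    const i = Math.floor(x);
    const f = smooth(x - i);
    return lattice(i) * (1 - f) + lattice(i + 1) * f; // interpolate
  };
}
```

Sampling this function along a line produces a wandering but continuous curve, which is exactly why noise-driven shapes drift rather than jitter.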


Assignment 2 – reading response – structured chaos

When we talk about chaos, we often mention computers, wars, or art. We see a lot of chaos happening in our lives but never notice it. Chaos is usually random, unexpected, and sometimes even destructive. Some of the art seen in the video shows complete chaos and random movements that would lead the public to say, "I can do that." They could theoretically recreate such a simple random piece, but in reality chaos cannot be re-experienced or regenerated. No matter how many times we try, we won't get the same art piece; there will always be details that only chaos created.

What we forget when talking about chaos and randomness is that they are actually not random. Chaos and randomness are both very structured and calculated. For example, when the game Minecraft creates a "random" map for each player, everything still looks connected and makes sense. It is both at once: pure randomness structured to look random, using particular math equations to randomize everything in an interconnected way. We saw this too in some of the generated artwork that looked like pure random shapes but was really just a class of lines moving away from each other in a specific, mathematical way that, after a certain time, produced that "random" pattern. Chaos was never random, and random was never random; we always have control over how much chaos we want and in what direction.