Week 11- Reflection

The author states that, in the past, people thought it was best to hide disabilities when designing things, and that this carries a troubling message. It doesn’t just hide a part of the person; it can also make them feel they should be embarrassed to show or talk about their disability, which is actually a part of who they are.

They give the example of glasses as a product designed for people with a disability. Glasses show how designers have moved past that old way of thinking: there are many kinds and styles, so people can choose what feels comfortable and looks good on them. Nowadays some people even wear glasses just because they like how they look, even if they don’t need them to see better. But with another example from the reading, a prosthetic leg, some people might still feel it doesn’t look good and hide it, even though it works perfectly well. I really think this should change. No one should feel bad about having a disability.

I think that disability itself isn’t the primary challenge; rather, it’s the environment that often creates difficulties for individuals with disabilities. When surroundings lack accommodations or accessibility features, it becomes harder for people with disabilities to navigate and participate fully. By creating inclusive environments and removing barriers, we can empower individuals with disabilities to engage more effectively and comfortably in various aspects of life, promoting equality and inclusivity for all.

The last thing I want to say is that I agree these devices should be simple. The main point of designing things for people with disabilities is to help them, so those things should be easy to use without adding stress.

Final Project Idea

For my final project, I envision an automatic trash can that transforms the way we sort and manage our garbage. Imagine a single, sleek unit that categorizes different types of waste on its own, eliminating the need for multiple bins and the hassle of manual sorting. This intelligent trash can detects and separates materials like plastic, paper, and general waste as items are discarded, streamlining the recycling process.

The system relies on sensors to identify the type of incoming waste in real time. As items are tossed in, the trash can recognizes the material type and directs each piece to its designated compartment within the unit. With this sorting mechanism in place, the trash can organizes disposal automatically, making eco-conscious living far more convenient for users.
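To make the sorting idea a little more concrete, here is a rough Arduino-style sketch of the decision flow I have in mind. Everything in it is hypothetical: the pin numbers, the assumption that material type arrives as simple digital signals from classification sensors, and the servo-driven chute that routes items to a compartment. Real material detection would be far more involved.

#include <Servo.h>

Servo chute;                        // hypothetical servo that rotates a chute over three compartments
const int plasticSensorPin = 2;     // hypothetical sensor output: HIGH when plastic is detected
const int paperSensorPin = 3;       // hypothetical sensor output: HIGH when paper is detected
const int PLASTIC_POS = 0;          // servo angles for the three compartments
const int PAPER_POS = 90;
const int GENERAL_POS = 180;

void setup() {
  chute.attach(9);                  // servo signal pin (assumed)
  pinMode(plasticSensorPin, INPUT);
  pinMode(paperSensorPin, INPUT);
}

void loop() {
  if (digitalRead(plasticSensorPin) == HIGH) {
    chute.write(PLASTIC_POS);       // route the item to the plastic compartment
  } else if (digitalRead(paperSensorPin) == HIGH) {
    chute.write(PAPER_POS);         // route the item to the paper compartment
  } else {
    chute.write(GENERAL_POS);       // everything else goes to general waste
  }
  delay(500);                       // give the chute time to move before the next reading
}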

This isn’t just a disposal unit; it’s a sustainable design aimed at promoting efficient recycling in households and public spaces. The simplicity of tossing items into a single receptacle while knowing they’ll be automatically sorted for proper recycling is not just convenient, it’s a step toward a cleaner, greener future.

Week 11- Serial Communication

Exercise 1: ARDUINO TO P5 COMMUNICATION

Schematic

Circuit Diagram 

P5.js Code
We used the same example provided in class; we just added this part to the code:

function draw() {
  
  background('#6FA9B0')

  if (!serialActive) {
    fill("rgb(255,255,255)")
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {

    noStroke()
    // draw a circle, alpha value controls the x-position of the circle
    circle(map(alpha, 0, 1023, 0, 640), 240, 50)

  }
}

 

Arduino

We used the same one provided in class.
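For reference, the Arduino side simply streams one analog value over serial, which the p5 sketch reads as alpha. Below is a minimal sketch along those lines, assuming a potentiometer on A0; the exact class example may differ and may include the handshake pattern used in the later exercises.

int sensorPin = A0;                 // assumed: potentiometer wired to A0

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(sensorPin);  // 0-1023, matching the map() range in the p5 code
  Serial.println(sensorValue);              // one value per line, read as "alpha" on the p5 side
  delay(10);                                // short pause so p5 isn't flooded with data
}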

Video

 

Exercise 2: P5 TO ARDUINO COMMUNICATION

Schematic

Circuit Diagram 

P5.js Code

let brightness = 0; 
let slider;
let img;
let img2;

//preload images
function preload(){
  img = loadImage('sun.png');
  img2 = loadImage('moon.png');
}

function setup() {
  createCanvas(400, 400);
  //create slider
  slider = createSlider(0, 255, 100);
  slider.position(width/2-50,height/2+25);
  slider.style('width', '80px');
}

function draw() {
  background('#85CCEC');
  image(img,235,130,150,180); 
  image(img2,30,140,100,160);
  
  let val = slider.value();
  brightness = val;
  
  // instructions
  textAlign(CENTER,CENTER);
  textSize(16);
  textStyle(BOLD)
  text("Control the brightness using the slider below!",width/2,100);
  
  //connects serial port
  if (!serialActive) {
    textSize(10);
    text("Press Space Bar to select Serial Port", 100, 30);
  } else {
    textSize(10);
    text("Connected",100,30);
  }
  
  
  
}

function keyPressed() {
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
}


function readSerial(data) {

  //READ FROM ARDUINO HERE
  
  if (data != null) {
    // if there is a message from Arduino, continue
    
    //SEND TO ARDUINO HERE (handshake)
    
    let sendToArduino = brightness + "\n";
    writeSerial(sendToArduino);
  }
}

 

Arduino Code

int LED = 5;
void setup() {
  Serial.begin(9600);
  pinMode(LED, OUTPUT);
  // start the handshake
  while (Serial.available() <= 0) {
    Serial.println("Wait");  // send a starting message
    delay(300);               // wait 1/3 second
  }
}
void loop() {
  // wait for data from p5 before doing something
  while (Serial.available()) {
    int brightness = Serial.parseInt(); 
    if (Serial.read() == '\n') {
      analogWrite(LED, brightness); // turn on LED and adjusts brightness
      Serial.println("LIT"); 
    }
  }
}

 

Video

Exercise 3: BI-DIRECTIONAL COMMUNICATION

Schematic

Circuit Diagram 

P5.js Code

let hit = 0;  // whether the ball hit the ground
let reset = 0;  // whether Arduino sent a reset argument (a button press)

// Ball physics
let velocity;
let gravity;
let position;
let acceleration;
let wind; // wind direction is controlled by Arduino (potentiometer)
let drag = 0.99;
let mass = 50;

function setup() {
  createCanvas(600, 600);
  noFill();
  position = createVector(width / 2, 0);
  velocity = createVector(0, 0);
  acceleration = createVector(0, 0);
  gravity = createVector(0, 0.5 * mass);
  wind = createVector(0, 0);
}

function draw() {
  background('pink');
  applyForce(wind);
  applyForce(gravity);
  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity);
  acceleration.mult(0);
  fill(255)
  ellipse(position.x, position.y, mass, mass);
  if (position.y > height - mass / 2) {
    velocity.y *= -0.9; // A little dampening when hitting the bottom
    position.y = height - mass / 2;
    hit = 1;
  } else {
    hit = 0;
  }

  if (!serialActive) {
    console.log("Press Space Bar to select Serial Port");
  } else {
    // 
    // console.log("Connected");
    if (reset == 1) { // if reset signal is sent and flagged (button press)
      reset = 0; // clear the flag
      
      // reset ball with some random mass
      mass = random(15, 80);
      position.x = width / 2;
      position.y = -mass;
      velocity.mult(0);
    }
  }
}

function applyForce(force) {
  // Newton's 2nd law: F = M * A
  // or A = F / M
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}

function keyPressed() {
  if (key == " ") {
    // important to start the serial connection!
    setUpSerial();
  }
}

function readSerial(data) {
  
  //READ FROM ARDUINO HERE
  

  if (data != null) {
    // split the message
    let fromArduino = split(trim(data), ",");
    // if the right length, then proceed
    if (fromArduino.length == 2) {
      reset = int(fromArduino[0]);   // convert the incoming strings to numbers
      wind.x = int(fromArduino[1]);
    }

    
    //SEND TO ARDUINO HERE (handshake)
    
    let sendToArduino = hit + "\n";
    writeSerial(sendToArduino);
  }
}

 

Arduino Code

int buttonSwitch = A2;
int potentiometer = A0;
int ledOut = 11;

void setup() {
  Serial.begin(9600);
  pinMode(ledOut, OUTPUT);  // set the LED pin (11) as an output
  digitalWrite(ledOut, LOW);  // in the case of reconnection while p5 is running
  // start the handshake
  while (Serial.available() <= 0) {
    Serial.println("-1,-1");  // send a starting message
    delay(300);               // wait 1/3 second
  }
}

void loop() {
  // wait for data from p5 before doing something
  while (Serial.available()) {
    int hit = Serial.parseInt(); // receives 1 argument, whether the ball hit the ground
    if (Serial.read() == '\n') {
      digitalWrite(ledOut, hit); // turn on LED if the ball is in contact with the ground (1 -> HIGH) turn off LED if not (0, -> LOW)
      int sensor = digitalRead(buttonSwitch); // read button
      delay(1);
      int sensor2 = analogRead(potentiometer); // read potentiometer
      delay(1);
      Serial.print(sensor); // button
      Serial.print(',');
      if (sensor2 < 512) { // potentiometer; depending whether the value is over or below half, direction of the wind is set
        Serial.println(1);
      } else {
        Serial.println(-1);
      }
    }
  }
}

 

Video

Week 10- Reflection

In his blog post “A Brief Rant on the Future of Interactive Design,” Bret Victor talks about the need to create a dynamic medium that people can interact with in a way that’s similar to how they interact with physical objects. What really struck me was his point that the technology behind tablets, smartphones, and other similar devices, known as Pictures Under Glass, doesn’t offer genuine touchable interfaces. Victor believes that technologies that prioritize sleek visuals over tactile experiences are just a passing phase. 

The first post and the follow-up response both emphasize that researchers and developers should look into haptic feedback to make devices easier to use. I agree with the author’s concerns about the future of interaction design. Touchscreens are great, but they’re not the only way to interact with computers. We need to explore new technologies that let us interact with computers in a more natural and intuitive way, like haptic feedback, which can make our interactions more immersive and engaging. Imagine feeling the texture of a virtual object or manipulating it with your hands; that would be pretty cool. But we shouldn’t ignore other forms of interaction, like voice or visual cues. Instead, we should find ways to combine different interaction methods to create the best possible user experience.

Week 10- EchoGuard: Your Personal Parking Assistant

Concept

Our assignment idea was sparked by a common scenario we all encounter – parking a car in reverse. In discussing the challenges of accurately judging the distance, my partner and I realized the potential hazards and the lack of a reliable solution. Considering how much we rely on the beeping sensor in our own cars for safe parking, we envisioned a way to bring this convenience to everyone. Imagine a situation where you can’t live without that reassuring beep when you’re reversing. That’s precisely the inspiration behind our assignment: a beeping sensor and a light that mimic the safety feature we’ve come to depend on, implemented with a toy car to illustrate its practical application.

Required Hardware

– Arduino
– Breadboard
– Ultrasonic distance sensor
– Red LED
– 10k resistor
– Piezo speaker
– Jumper wires

Schematic Diagram:

Circuit Diagram:

 

Setting Up the Components

Ultrasonic Distance Sensor Connections:
VCC to 5V
TRIG to digital pin 9
ECHO to digital pin 10
GND to GND on the Arduino

Speaker Connections:
Positive side to digital pin 11
Negative side to GND

LED Connections:
Cathode to GND
Anode to digital pin 13 via a 10k resistor

Coding the Logic

// defines pins numbers
const int trigPin = 9;
const int echoPin = 10;
const int buzzerPin = 11;
const int ledPin = 13;

// defines variables
long duration;
int distance;
int safetyDistance;

// Define pitches for the musical notes
int melody[] = {262, 294, 330, 349, 392, 440, 494, 523}; 

void setup() {
  pinMode(trigPin, OUTPUT); // Sets the trigPin as an Output
  pinMode(echoPin, INPUT);  // Sets the echoPin as an Input
  pinMode(buzzerPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600); // Starts the serial communication
}

void loop() {
  // Clears the trigPin
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);

  // Sets the trigPin on HIGH state for 10 microseconds
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Reads the echoPin, returns the sound wave travel time in microseconds
  duration = pulseIn(echoPin, HIGH);

  // Calculating the distance
  distance = duration * 0.034 / 2;

  safetyDistance = distance;
  if (safetyDistance <= 5) {
    // Play a musical note based on distance
    int index = map(safetyDistance, 0, 5, 0, 7); // Map distance to array index
    tone(buzzerPin, melody[index]); // Play the note
    digitalWrite(ledPin, HIGH);
  } else {
    noTone(buzzerPin); // Stop the tone when not close
    digitalWrite(ledPin, LOW);
  }

  // Prints the distance on the Serial Monitor
  Serial.print("Distance: ");
  Serial.println(distance);
}

 

Hardware Implementation:

 

Video Illustration:

Video Illustration 2 (using melody)

Working Explanation and Conclusion
The ultrasonic distance sensor measures the gap between the car and the sensor on the breadboard: it reports the echo’s round-trip time in microseconds, and since sound travels at roughly 0.034 cm per microsecond, the code computes distance = duration * 0.034 / 2 (the division by 2 accounts for the pulse traveling to the obstacle and back), giving a value in centimeters. When the distance drops below a predefined threshold (5 cm in our design), the buzzer emits a warning tone whose pitch depends on how close the obstacle is, and the red LED lights up as a clear visual cue for the driver to stop. This Arduino-based system combines hardware and software into a simple solution to a common problem. In creating this assignment, we’ve not only simplified the process of reverse parking but also contributed to overall safety, turning our initial conversation into a tangible, practical project.

Week 9- Sunset Effect

Concept:

The concept involves creating an Arduino-based project that combines analog and digital sensors to control two LEDs in a creative way.  The goal is to simulate a sunset effect using the LDR to detect ambient light levels. The system uses the button to sequentially turn on two LEDs and the potentiometer to adjust the brightness, providing a visually appealing sunset simulation.

Sunset Simulation Sequence:

    1. Initialization:
      • Upon activation, both yellow and red LEDs light up simultaneously, representing the starting point of the sunset effect.
    2. First Button Press:
      • When the button is pressed for the first time, the yellow LED dims or turns off, symbolizing the initial phase of the sunset where daylight begins to fade.
    3. Second Button Press:
      • Upon another press of the button, the red LED dims or turns off, indicating the advanced stage of the sunset where the sky adopts warmer hues.
    4. Potentiometer Control:
      • Throughout the simulation, the potentiometer allows users to adjust the overall brightness, offering a real-time customization of the sunset effect.

https://youtu.be/DqOrlSnHzus

Code: 

const int buttonPin = 2;     // Pin number for the push button (connected to ground with pull-up resistor)
const int led1Pin = 3;       // Pin number for the first LED
const int led2Pin = 5;       // Pin number for the second LED
const int potPin = A0;       // Pin number for the potentiometer

int buttonState = HIGH;      // Variable to store the current state of the button
int lastButtonState = HIGH;  // Variable to store the previous state of the button
int led1State = LOW;         // Variable to store the state of the first LED
int led2State = LOW;         // Variable to store the state of the second LED
int potValue = 0;            // Variable to store the potentiometer value
int brightness = 0;          // Variable to store the LED brightness
int flag = 0;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(led1Pin, OUTPUT);
  pinMode(led2Pin, OUTPUT);
}

void loop() {
  buttonState = digitalRead(buttonPin);

  // Check if the button is pressed (LOW) and was not pressed before
  if (buttonState == LOW && lastButtonState == HIGH) {
    // Toggle the state of the first LED
    led1State = !led1State;
    digitalWrite(led1Pin, led1State);

    // If the first LED is turned on, wait for the second button press to turn on the second LED
    if (led1State == HIGH) {
      delay(50); // Debounce delay
      buttonState = digitalRead(buttonPin);
      if (buttonState == LOW) {
        // Toggle the state of the second LED
        led2State = !led2State;
        digitalWrite(led2Pin, led2State);
        flag = 1;
      }
    }
  }

  if (flag == 1) {
    // Read the potentiometer value and map it to the LED brightness range (0-255)
    potValue = analogRead(potPin);
    brightness = map(potValue, 0, 1023, 0, 255);

    // Apply the brightness to both LEDs
    analogWrite(led1Pin, led1State ? brightness : 0);
    analogWrite(led2Pin, led2State ? brightness : 0);
  }

  // Save the current button state for the next iteration
  lastButtonState = buttonState;
}

 

Improvements:

For future upgrades, I want to make the lights in my project more interesting by creating complex patterns. I’m thinking of making the sunset simulation more dynamic and captivating with intricate lighting sequences inspired by nature. Also, I’d like to get better at drawing schematics.
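As a first step toward those patterns, something like the cross-fade below could be layered onto the existing circuit. This is only a sketch of one possible direction: it reuses the same LED pins (3 and 5) from the code above but is not yet integrated with the button and potentiometer logic.

const int yellowPin = 3;   // same yellow LED pin as in the sketch above
const int redPin = 5;      // same red LED pin as in the sketch above
float phase = 0.0;

void setup() {
  pinMode(yellowPin, OUTPUT);
  pinMode(redPin, OUTPUT);
}

void loop() {
  // Cross-fade: yellow peaks while red is dim, then they slowly trade places
  int yellowBrightness = (int)(127.5 * (1 + sin(phase)));
  int redBrightness = (int)(127.5 * (1 + sin(phase + PI)));
  analogWrite(yellowPin, yellowBrightness);
  analogWrite(redPin, redBrightness);
  phase += 0.02;   // smaller step = slower, more gradual "sunset"
  delay(20);
}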

Week 9- Reading Response

In his blog posts, Tom Igoe analyzes the importance of innovative thinking for those who work on physical computing projects and considers the performative dimension of digital products and creative artworks. What strikes me about his approach is that Igoe encourages students to stop thinking of specific ideas as not original and consider how to create variations of them instead. While dance floor pads, electronic instruments controlled by players’ gestures, and touch kiosks may seem overused, such projects bring about a wealth of learning opportunities. Interactive art projects enable creators to engage their audience and discover new interpretations.

Instead of scripting users’ actions, creators should prioritize giving them the freedom to express themselves through works of art. I find this strategy especially useful for product developers, artists, and creative professionals looking for ways to grab the interest of their audience. As it is impossible to predict how a target user may react to an end product without conducting thorough research, we can test out physical computing and creative art projects in real time to get feedback. I think that by analyzing users’ and viewers’ reactions, web developers and artists can increase the value of their projects.

Week 8 – Saltwater Conductivity

For this assignment, I prepared a cup of saltwater and attached two wires to two pieces of aluminum foil, which were positioned inside an empty cup. Next, I configured the Arduino board, connecting an LED and a resistor on the breadboard. After powering the Arduino from the computer, we poured the saltwater into the empty cup. As the saltwater allowed current to flow between the foil electrodes, the circuit was completed and the LED lit up.

Saltwater lights up the LED by serving as a conductive medium. When salt dissolves in water, it separates into positively and negatively charged ions, making the water conductive. With the two foil electrodes placed in the saltwater, current can flow between them, completing the electrical pathway and powering the LED so it emits light.

https://youtube.com/shorts/xvapKBzMCXU?feature=share

One practical application of the saltwater conductivity experiment could be in the field of emergency lighting systems. In remote or disaster-prone areas where access to conventional power sources is limited, this saltwater-based circuit could serve as an emergency lighting solution. For instance, in regions prone to natural disasters such as hurricanes or earthquakes, where power outages are common, individuals could use this simple saltwater-powered LED setup as an alternative lighting source. This cost-effective and simple solution could significantly contribute to enhancing safety and visibility in challenging circumstances where traditional power sources are unavailable.

 

Week 8- Reflection

In “Emotion & Design: Attractive Things Work Better,” Donald Arthur Norman puzzles over the question of why attractively designed things work better. Similarly, Robert McMillan recounts how Margaret Hamilton wanted to add error-checking code to guard against the mistake that later wiped out the navigational data used by the Apollo 8 crew. While reading these articles, I could not help but think that we consider a design attractive if it enables us to use a product without risk or extra effort. When Norman argues that attractive things allow people to use their creative abilities to the fullest, this implies that well-thought-out design enables us to make the most of a product.

As I often find myself dissatisfied when an app or a program crashes, I believe that thorough product testing is crucial for creating an attractive design. Had NASA approved Hamilton’s suggestions, she could have written more robust code and minimized the risks for the astronauts. These articles demonstrate that researchers and scientists have realized the importance of attractive design and now see it as something that brings enjoyment, enhances people’s cognitive and creative abilities, and improves a product’s usability.

 

Final Midterm: Generative Gallery: A Digital Art Experience

Concept: 

Whenever I step into an art gallery, I am captivated by the stories each artwork tells, the vibrant strokes on the canvas, and the interplay of light and shadow. Inspired by these immersive experiences, I wanted to recreate a similar atmosphere, merging the tranquility of a gallery with the excitement of interaction. My vision for the p5.js Art Gallery centered around creating a dynamic and immersive digital space that embodies the essence of generative art. Influenced by the interplay of light and space, I aimed to design an environment that encourages exploration and creative expression. Extensive research into contemporary digital art installations and interactive galleries inspired the overall concept, guiding me in the creation of an engaging and visually captivating experience for visitors.

The central idea behind my project is to encourage users to explore the art at their own pace and immerse themselves in the diverse creations on display. To achieve this, I plan to incorporate interactive elements such as virtual paintings that users can admire, virtual sculptures they can examine from all angles, and informational pop-ups that offer insights into each artwork. By subtly integrating these interactive features, I aim to evoke curiosity and invite users to engage with the art, thereby fostering a unique and personalized gallery experience for each visitor.

Wireframes I drew for my project:


In planning the sketch, my approach involved careful consideration of user experience and interaction design. By integrating logical operations, including conditional statements, loops, and rendering functions, I aimed to ensure a seamless and engaging user journey within the gallery. Hand sketches and diagrams were instrumental in mapping out the flow of user interactions and visualizing the logical structure of the code, facilitating a more systematic development approach.

The pictures of the art gallery were all drawn by me using online tools. I carefully made the inside and outside views, paying close attention to the lights, the light switch, the floor, and the exit. Using my creativity and computer skills, I worked hard to create a detailed and accurate representation of the gallery.

Parts I’m Proud of and Challenges Overcome:

  • Figuring out when the mouse touches the artwork:

I’m proud of this code because it lets the artwork respond when I click near it: it draws the spiral artwork on the screen and checks whether my click lands close to it.

// Function to render the spiral
function renderSpiral(x, y, r) {
  push(); // Save the current drawing style settings and transformations
  translate(x + sposX, y); // Move the origin to the specified position
  image(frameIMG[1], -r / 1.8, -r / 1.8, r * 1.1, r * 1.1); // Display the image with the specified dimensions
  pop(); // Restore the previous drawing style settings and transformations

  // If mouse pressed
  if (mouseX < x + sposX + r/2 && // Check if the mouse's X position is within the specified range
      mouseX > x + sposX - r/2 && // Check if the mouse's X position is within the specified range
      mouseY < y + r/2 && // Check if the mouse's Y position is within the specified range
      mouseY > y - r/2 && // Check if the mouse's Y position is within the specified range
      mouse) { // Check if the mouse is pressed ('mouse' is presumably a flag maintained elsewhere in the full sketch)
      rect(0, 0, width, height); // Draw a rectangle covering the entire canvas
      for (var i = 0; i < spiral.length; i++) {
        push(); // Save the current drawing style settings and transformations
          spiral[i].render(width / 2.5, height / 3, r / 5); // Call the render function for the spiral object
        pop(); // Restore the previous drawing style settings and transformations
      }
  }
}

 

This code checks if I’m clicking near the artwork by looking at where I click and comparing it with the artwork’s position and size. If I click close to the artwork, it shows more spirals on the screen, giving a cool effect that changes when I interact. Making this work well was a bit tough. I had to make sure the program could understand where I’m clicking and when it’s close to the artwork. Getting the right balance between showing the spirals and detecting my clicks accurately was important to make sure the interaction feels smooth and fun.

  • Switching the user perspective from scene-to-scene.

I’m proud of this code because it helps smoothly switch what I see in the program, going from one scene to another without any sudden jumps. This part, called ‘Screen,’ takes care of showing different pictures and managing the sounds in the background.

// Check if the screen is outside or inside
if (screen == 0) {
  push(); // Save the current drawing style settings and transformations
  // Display image outside screen
  image(sIMG[0], 0, 0, width, height);
  pop(); // Restore the previous drawing style settings and transformations

  // Display principal text
  push();
  translate(width / 2, height / 1.1);
  image(sIMG[2], -100, -15, 200, 30); // Display the image with specified dimensions and position
  pop();

  // Render Art Gallery
  agRender(); // Call function to render the art gallery
} else {
  push(); // Save the current drawing style settings and transformations
  // Display image inside the screen with dynamic x-position
  image(sIMG[1], sposX, 0, width * 2.5, height);
  pop(); // Restore the previous drawing style settings and transformations
}

 

The above code makes sure that the sound keeps playing smoothly in the background while I move between scenes, making the whole experience more immersive and enjoyable. When I’m outside in the program (screen == 0), it shows the right picture and places the main text in the correct spot, so everything looks organized. Making this work well was a bit tricky: I had to make sure the images changed in a way that didn’t feel sudden or jarring, and that the background sound kept playing without any interruptions. Finding the right balance between the pictures and the sound was important to make the whole experience feel smooth and natural.

  • I made sure to focus on small details. For instance, when you hover over the door, it creates a knocking sound, but it won’t keep making that sound repeatedly as long as you stay hovered over it. The knocking only happens again if you move your cursor off the door and then back onto it. I also made sure that the intro music only plays the first time the door is opened, not every time after that, so the music doesn’t overlap with itself. In addition, I added sound effects for the light switch, for moving to the right and left, and so on.

 

Areas for Improvement and Future Work:

Moving forward, there are several areas in my project that could be refined to enhance the overall user experience and elevate the sophistication of the artworks. One key aspect for improvement lies in enriching the interactive elements within the gallery. For instance, I plan to incorporate more diverse and dynamic interactions with the displayed artworks, enabling users to manipulate shapes, colors, and other visual elements for a more engaging and immersive experience, and to introduce additional interactive features such as the ability for users to leave virtual notes or comments on specific art pieces.

Looking ahead, my future work on the project will revolve around integrating more advanced generative art techniques to create intricate and visually captivating artworks. I am particularly interested in applying complex algorithms and procedural generation to produce detailed patterns, textures, and visual effects, adding depth and sophistication to the pieces on display. I am also keen on exploring machine learning to develop art that dynamically adapts and evolves in response to user interactions or external stimuli, creating a highly immersive and personalized art experience.

 

Final Project:

 

References:

  • https://editor.p5js.org/mk7592/sketches/Q3_SYFuO6