Assignment #9 – Code – ☆Shine On☆

For this assignment, I didn’t have many ideas at first, so I thought about how to make the switch useful in some way. I came up with a few things, but then saw that my classmates had already done them, so I tried thinking of something else. That’s when it hit me! There is nothing I hate more than forgetting to wear my jewelry, particularly my rings. When I leave my room without them, I just feel… naked? However, one thing I never forget to do is turn off the lights. I will always notice when a light is still on, unfortunately much more readily than I notice my missing rings. So I thought, why not make a switch that is on when my jewelry is resting on it, and off when it isn’t? Obviously, this is a very small-scale prototype, but it was worth a try.

 

For the sake of the assignment, I only used one ring. It had to be a conductive material, so I picked a gold ring. Then, I created my circuit:

 

And here is a video demonstration:

IMG_1580

Finally, here is my circuit diagram:

Although it’s not a very complex circuit, I enjoyed creating it, particularly because I had missed two classes and wasn’t sure I had understood the process well. But I’m glad it worked out!

 

Assignment #7 – Reading Response – The Ends Justify The Means?

The end justifies the means. In other words, if the final result is what is desired, then any means of getting to that result is justified. In the case of aesthetics (or lack thereof), then, this theory of consequentialism can be applied to justify ugly design. At the end of the day, if the product or element works with a bad design, it still works. Even if physically or visually unattractive, as long as it serves its purpose, then any means of achieving the final prototype/item is justified. For instance, the “ugly” teapot by Michael Graves is justified in its “ugliness” because it achieves what it is meant to do. On the other hand, Jacques Carelman’s teapot, which is arguably more attractive than Graves’, is not effective – so we can’t really say that its beauty is justified by its end. But then, are the ends everything? Should one forfeit beauty for the sake of usability? Not always, as shown through the author’s very own collection. Obviously, a lot of things need a usable, working design for the world to function. But these same things can also be appreciated solely for their beauty. So perhaps the ends do not matter as much. Now, the author argues that attractive design makes things easier to use, but I think we must distinguish between ugly and ineffective. The door example argues that in a frantic, urgent environment (i.e. a fire), one will be stressed and not thinking straight, and therefore more likely to fumble opening a door with a bad design. But I still think that a bad design is not necessarily an ugly design. If anything, a beautiful design may be ineffective, while an ugly one may be effective. This goes back to the “ugly” teapot. Besides, how attractive can a fire exit door really be?

Midterm – ☆Psychedelic Rapture☆

Sketch

This sketch works best in fullscreen; here is the link: https://editor.p5js.org/alexnajm/full/D8FoFUtc6

Concept and Inspiration

For my midterm, I decided to create some sort of audio-reactive artwork. In other words, I wanted the artwork to be controlled by the preloaded sounds.

Essentially, I created a playlist of 12 songs. I uploaded these songs and their matching cover images, as well as each title and artist. The idea is that the user can switch between songs with the left and right arrow keys, and the visuals in the background change depending on which song is playing.

Here are the songs I picked:

  1. Heart-Shaped Box – Nirvana
  2. Cool Colorado – La Femme
  3. Weak For Your Love – Thee Sacred Souls
  4. Spooky – Dusty Springfield
  5. Karma Police – Radiohead
  6. Buddy’s Rendezvous – Lana Del Rey
  7. Althea – Grateful Dead
  8. Naive – The Kooks
  9. Drink Before The War – Sinead O’Connor
  10. Right Down The Line – Sam Evian
  11. She – The Blaze
  12. Belong In The Sun – ¿Téo?

My inspiration comes from the feelings I get when I listen to music. Sometimes, I just lie in bed, close my eyes, and start feeling the energy of the music. So I simply wanted to create an artwork that matches visuals to music, in a way where the former are controlled by the latter.

How it Works, and Code Snippets

1) The sketch begins with a start page. It has brief instructions. The user must click to start.

function displayStartPage() {
  background(0);
  image(star2, -150, 200, 620, 620);
  image(star3, 800, 200, 520, 520);
  fill(255);
  textSize(32);
  textAlign(CENTER, CENTER);
  textFont(chosenFont);
  text(
    "Switch between songs\nwith the left and right arrows\n\n\n\nClick anywhere to start vibing!",
    width / 2,
    height / 2
  );
}

2) Once on the main page, the first song starts playing with the corresponding image, text, and visuals.

function mousePressed() {
  if (currentState === "startPage") {
    currentState = "mainPage";
    song1.play();
  }
}
function displayMainPage() {
  background(0);

  let colorPalette = [
    color(112, 2, 2),
    color(2, 34, 152),
    color(228, 121, 155),
    color(203, 172, 53),
    color(162, 227, 232),
    color(255),
    color(146, 111, 55),
    color(191, 66, 38),
    color(84, 45, 151),
    color(178, 157, 202),
    color(39, 100, 151),
    color(76, 128, 93),
  ]; // color palette array in order to change colors with every track

  let currentColor = colorPalette[displayedImages.currentIndex]; // setting the current color as the color with the current index, from the color palette array
  stroke(currentColor);
  fill(255, 0.5);

  //   getting the amplitude level of the playing song and mapping it to then plug into the shape
  let volume = amplitude.getLevel();
  let heightMultiplier = map(volume, 0, 1, -2, height * 1.5);

  //   Setting the melting lines in the background
  let lineSpacing = 3; // line spacing variable to set the distance between each line
  let noiseScale = 0.005; // noise scaling variable to determine the smoothness of the noise

  for (let y = 0; y < height; y += lineSpacing) {
    // for loop which draws the parallel lines with a spacing of 3
    beginShape();
    for (let x = 0; x <= width; x += 120) {
      // nested for loop that iterates the points along a horizontal line
      let noiseVal = noise((x + frameCount) * noiseScale, y * noiseScale); // perlin noise value for each vertex point -- the x-coordinate is adjusted with the noise scale and the frame count, the y-coordinate only with the noise scale
      let meltingEffect = map(noiseVal, 0, 1, -heightMultiplier / 2, heightMultiplier / 2); // map the noise value, between 0 and 1, to a wider range in order to amplify the melting effect. The range is set by heightMultiplier, so the song's amplitude controls the movement
      curveVertex(x + meltingEffect * 1.2, y + meltingEffect * 1.2); // offset each vertex horizontally and vertically by the noise value to create the dynamic melting movement
    }
    endShape();
  }

  //   display images
  displayedImages.display();
  amplitude.setInput(displayedImages.songs[displayedImages.currentIndex]);
}

I had a lot of fun creating this part, as I got to play around with the visuals and the amplitude. It took some time, but I think it was worth it.

3) The user can move to the next song by pressing the right arrow key, or to the previous song by pressing the left arrow key.

//   keyPressed function to allow the user to switch between songs (and their images)
function keyPressed() {
  if (currentState === "mainPage") {
    if (keyCode === RIGHT_ARROW) {
      displayedImages.nextImage();
    } else if (keyCode === LEFT_ARROW) {
      displayedImages.previousImage();
    }
  }
}

4) There are 12 songs. If the user is at the last song and presses the right arrow key, it will go back to the first song. If the user is at the first song and presses the left arrow key, it will go back to the twelfth song.

5) There is a “Track 1” button. If it is pressed, no matter which song the user is on, it takes them back to the first song. This is the “reset” button.

//   creating a button to return to track 1
  song1Button = createButton("Track 1");
  song1Button.position(1300, 20);
  song1Button.style("background-color", "black"); // CSS color values, not p5 numbers
  song1Button.style("border", "none");
  song1Button.style("font-size", "20px");
  song1Button.style("color", "white"); // CSS color values, not p5 numbers
  song1Button.mousePressed(index1);
  
  amplitude = new p5.Amplitude();
  frameRate(20);
}

// index1 function incorporated into the button function, in order to get the first image, text, and sound from the array

function index1() {
  if (currentState === "mainPage") {
    displayedImages.currentIndex = 0;
    displayedImages.updateImage();
    displayedImages.stopSongs();
    displayedImages.playCurrentSong();
    amplitude.setInput(displayedImages.songs[displayedImages.currentIndex]);
  }
}

Other than these functions, I am really proud of the visuals. I have re-included the code snippet here:

 //   getting the amplitude level of the playing song and mapping it to then plug into the shape
  let volume = amplitude.getLevel();
  let heightMultiplier = map(volume, 0, 1, -2, height * 1.5);

  //   Setting the melting lines in the background
  let lineSpacing = 3; // line spacing variable to set the distance between each line
  let noiseScale = 0.005; // noise scaling variable to determine the smoothness of the noise

  for (let y = 0; y < height; y += lineSpacing) {
    // for loop which draws the parallel lines with a spacing of 3
    beginShape();
    for (let x = 0; x <= width; x += 120) {
      // nested for loop that iterates the points along a horizontal line
      let noiseVal = noise((x + frameCount) * noiseScale, y * noiseScale); // perlin noise value for each vertex point -- the x-coordinate is adjusted with the noise scale and the frame count, the y-coordinate only with the noise scale
      let meltingEffect = map(noiseVal, 0, 1, -heightMultiplier / 2, heightMultiplier / 2); // map the noise value, between 0 and 1, to a wider range in order to amplify the melting effect. The range is set by heightMultiplier, so the song's amplitude controls the movement
      curveVertex(x + meltingEffect * 1.2, y + meltingEffect * 1.2); // offset each vertex horizontally and vertically by the noise value to create the dynamic melting movement
    }
    endShape();
  }

  //   display images
  displayedImages.display();
  amplitude.setInput(displayedImages.songs[displayedImages.currentIndex]);
}

I used ChatGPT to help me get the “melting” effect, which ultimately just turned into a dynamic effect. I used noise to create the movement, and then mapped it to a range set by the height multiplier (which includes the amplitude), tying the movement to the song currently playing.

Another part I am proud of is the set of functions inside my DisplayedImages class, which enable the user to switch between songs (and, simultaneously, between images and texts):

//   function to stop the previous song when going to the next one
  stopSongs() {
    this.songs.forEach((song) => song.stop());
  }
  
//   function to play the current song
  playCurrentSong(){
    this.songs[this.currentIndex].play();
    amplitude.setInput(this.songs[this.currentIndex]); // this allows us to get the amplitude for the song that is currently playing
  }
  
//   function to update the image and text to the current index; needed for the button I created, otherwise they wouldn't change
  updateImage() {
    this.currentImage = this.images[this.currentIndex];
    this.currentText = this.texts[this.currentIndex];
  }

  //   set what the next image, text, and song will be - also stops the current song and plays the next one
  nextImage() {
    this.songs[this.currentIndex].stop();
    this.currentIndex = (this.currentIndex + 1) % this.images.length;
    this.updateImage(); // sets this.currentImage and this.currentText
    this.songs[this.currentIndex].play();
  }

  //   set what the previous image, text, and sound will be - also stops the current song and plays the previous one
  previousImage() {
    this.songs[this.currentIndex].stop();
    this.currentIndex =
      (this.currentIndex - 1 + this.images.length) % this.images.length;
    this.updateImage(); // sets this.currentImage and this.currentText
    this.songs[this.currentIndex].play();
  }
}

Problems and Potential Improvements

The main problem I ran into was uploading the music. For some reason, when I uploaded the files, the sketch would be stuck on loading forever (and never actually load). I tried everything I could for a good three hours, and nothing worked. Eventually, I tried redownloading all the files from scratch, and through some weird manipulation it worked, thankfully. I am just sad that I lost so much time on this when I could have added more things instead.

Another problem I ran into was that when the “Track 1” button was pressed, the amplitude wouldn’t be extracted anymore, resulting in almost no movement in the visuals (and that was the same for every song playing). I ended up having to add the amplitude input in the class as well, inside the playCurrentSong function.

As for improvements, I would have liked to extract the pixels from the images and have them create a gradient for the shape. I tried, but couldn’t really figure it out, so I just opted for one color per song.
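
For the pixel idea, one possible starting point is averaging an image’s pixels into a single color (a plain-JavaScript sketch over a flat RGBA array, the layout p5 exposes as img.pixels after img.loadPixels(); this illustrates the idea, it is not code from my sketch):

```javascript
// pixels is a flat [r, g, b, a, r, g, b, a, ...] array,
// the same layout p5 exposes as img.pixels after img.loadPixels().
function averageColor(pixels) {
  let r = 0, g = 0, b = 0;
  const count = pixels.length / 4; // one pixel per 4 entries
  for (let i = 0; i < pixels.length; i += 4) {
    r += pixels[i];
    g += pixels[i + 1];
    b += pixels[i + 2]; // the alpha channel (i + 3) is ignored
  }
  return [Math.round(r / count), Math.round(g / count), Math.round(b / count)];
}
```

The resulting triplet could seed one end of a gradient; a fuller version might cluster the pixels into a small palette instead of averaging them all into one color.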

Something else I would like to add eventually is a pause button so that the user can pause/play the song currently playing.

Finally, I would also like to add a function so that when one song ends it automatically moves on to the next. Currently, when one song ends, it just ends. It doesn’t start over nor does it go to the next song, so no music plays. I think changing that could make the process smoother!
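
The auto-advance could be sketched like this: the wraparound is just a modulo step, and p5.SoundFile’s onended() callback could trigger it. The wiring below is an assumption on top of my DisplayedImages setup, and note that onended also fires on a manual stop(), so a real version would need a guard flag:

```javascript
// Pure helper: the index of the track after `current`, wrapping the last back to 0.
function nextTrackIndex(current, total) {
  return (current + 1) % total;
}

// Hypothetical p5.js wiring (not runnable standalone):
// displayedImages.songs.forEach((song) => {
//   song.onended(() => {
//     if (displayedImages.changingManually) return; // skip manual stops
//     displayedImages.nextImage(); // already wraps and plays the next song
//   });
// });
```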

Conclusions

Overall, I really enjoyed creating this. It definitely wasn’t easy as there were a lot of new functions as well as elements we didn’t even talk about in class. When the sound wasn’t working it made me anxious and I honestly thought I would never finish this project. But I’m glad I figured it out, and I really did have fun making it and learning new things along the way. While there is a lot of room for improvement, I am quite proud of the way it looks!

 

Assignment #5 – Code – ☆Midterm Progress☆

Concept and Inspiration

For my midterm, I want to create some sort of audio-reactive (?) artwork. I am unsure about the audio-reactive part because I don’t want the piece to react to live audio; rather, I want the artwork to depend on the preloaded sounds.

Essentially, I created a playlist of 12 songs. I want to upload these songs and their matching cover images, as well as each title and artist. The idea is that the user can switch between songs with the left and right arrow keys, and the visuals in the background will change depending on which song is playing.

Here are the songs I picked:

  1. Heart-Shaped Box – Nirvana
  2. Cool Colorado – La Femme
  3. Weak For Your Love – Thee Sacred Souls
  4. Spooky – Dusty Springfield
  5. Karma Police – Radiohead
  6. Buddy’s Rendezvous – Lana Del Rey
  7. Althea – Grateful Dead
  8. Naive – The Kooks
  9. Drink Before The War – Sinead O’Connor
  10. Right Down The Line – Sam Evian
  11. She – The Blaze
  12. Belong In The Sun – ¿Téo?

My inspiration comes from the feelings I get when I listen to music. Sometimes, I just lie in bed, close my eyes, and start visualizing the energy of the music. So I simply wanted to create an artwork that matches visuals to music.

Back to my first uncertainty: I think I want the visuals to depend on the characteristics of the song itself. In other words, I would like to extract the amplitude and frequencies of each song and feed them into the functions that draw the visuals, so that they control what the visuals look like.

For the visuals, I want to have some sort of melting effect. That is what will be controlled by the characteristics of the music. I am also thinking, for the color, whether to also have it controlled by one of the music’s characteristics or whether it should be based on the colors of the cover image (by extracting the pixels).

Risks and Problems
  1. I tried loading the songs as mp3 files, and most of them didn’t work. The only two that worked were ones I had compressed through a website because they were over 5 MB. The others wouldn’t load for some reason, so I need to figure that out.
  2. I tried playing with the visuals a bit, but I am not satisfied. As I am still exploring how to make them, I think it will take some time to achieve something close to what I want. Basically, I tried to create lines that have a melting effect by using Perlin noise, in order to make the effect dynamic.
  3. I think the hardest part will be having the amplitude/frequency of the music control the visuals. I will be working on that next!
Sketch Progress

Anyway, here is what my sketch looks like so far (I hate!!!! the way the background currently looks):

You can use the left arrow and right arrow keys to switch between songs, though no music will play :/

What’s Left to Add and Change
    • At least one sound (12, technically)
    • A screen giving instructions and waiting for user input (button / key / mouse / etc.) before starting
    • A way to start a new session (without restarting the sketch), though I’m not sure what exactly this would entail
    • The shape, perhaps, and to have it controlled by the music
    • The colors, and to have them controlled either by the music or by the cover image – then the text will be more readable hehe

Assignment #5 – Reading Response – Surveillance Against Surveillance

I really enjoyed reading this text, particularly discovering the artworks on surveillance. I find it quite ironic (but in a good way) that a lot of artworks that address surveillance employ some sort of surveillance themselves. Of course, here, surveillance has to be looked at in the wider sense. Usually, when we think of surveillance we imagine CCTV or hidden cameras. However, as the text mentions, human tracking is also included under surveillance. So, even artworks that simply track your body movements, as basic as the Cheese installation by Möller, can be said to be «surveilling» you in a way. There are also other forms of human tracking, like Lozano-Hemmer’s Standards and Double Standards, which involve more intense forms of surveillance and tracking. To have items following you takes the «scrutiny» to a different level. Reading about this artwork reminded me of the Manar Abu Dhabi exhibition which took place last fall. It included 35 artworks, many of which were installations. The one that Lozano-Hemmer’s work reminded me of is Abu Dhabi Dots by Carsten Höller, which was exhibited on the shore of the Corniche beach. Essentially, spotlights are projected on the sand, and when you stand on them and move, they move with you.

After reading about it now, I found out that it is actually a game. Here is the description:

“Operating as a reward and punishment game, you can upgrade from red to blue by overlapping your dot with a blue dot, and can then move to green, and white. People you are playing with who are overlapped are downgraded. 

But if a series of red dots join an interrupted chain or cluster, the dots all become white. If not, the game continues until 7 or more white dots don’t overlap with other white dots any more.”

Before reading this text, I hadn’t thought much about it, but now I realize that it is a form of tracking after all. And although it seems like a game for children, the whole «reward and punishment» part of it actually reminds me a bit of surveillance and scrutiny, where your actions can favor you but also disadvantage you, while also impacting other people around you. It is therefore interesting to see how computer-based tracking artworks feed into the idea of surveillance, and potentially how they use our data (and store them?).

Assignment #4 – Reading Response – Mundane Acts Made Hard

This text reminded me of the book «Thinking, Fast and Slow» by Daniel Kahneman, in which he talks about the two modes of thinking. The first is «System 1», the intuitive and fast way of thinking, and the other is «System 2», which takes more effort and consciousness. Often, when we learn new things, we think with System 2. Eventually, it becomes more «automatic» and therefore turns into System 1. One example of this is how we learn to multiply as children. For instance, to solve «2 x 3», at first we count on our fingers. It is a more conscious effort, therefore involving System 2. Over time, we learn the solution to «2 x 3». Recalling the solution to that multiplication becomes automatic, therefore turning into a System 1 way of thinking. The bottom line is that through repetition, we do things more intuitively, without thinking much about them. In relation to Norman’s descriptions of poor design, there seem to be some things which will never become systematic, even though they should be easy to use. The example of the doors is fitting. We push and pull through a number of doors every day, yet there will still be moments in which we push instead of pull, or push on the wrong side of the door. An example that came to mind when I read this was SD cards and SD card readers. As a film major, I cannot begin to count the number of times I have had to insert an SD card into a reader. Yet, I never manage to put it in the right way. The worst part is: it gets stuck if you put it in the wrong way. For something that should be so easy, it takes a lot of mental effort to do. Even when the SD card symbol is on the reader to indicate which way to insert it, it just does not come to me. I would think that maybe it is just a me issue, but I have seen this happen with so many people, whether they are filmmakers who do this repeatedly or not.

At the end of his text, Norman talks about how technology is supposed to make things easier yet sometimes manages to make them harder. He gives the example of the phone, to which so many unnecessary functions are added, making it complicated. But this also happens with much simpler things. Automatic doors, for instance, rely on sensors. Yet there have been so many times when the sensors don’t work, leaving you stranded outside or locked inside. Poor design and some bad technology therefore make mundane acts so complicated sometimes…

Assignment #4 – Code – ☆Meticulous Misleading☆

When we learned how to do data visualization, I immediately knew that this is what I wanted to do for this assignment. I find data very interesting, but probably not for the reasons you think. Ever since I was little, my dad has told me to think about and interpret data critically. In fact, data is very nuanced in the media. In many aspects of journalism, digital or not, the data may be honest. However, it is not about the data itself. It is about how it is presented.

I remember one instance two years ago: my dad showed me a two-sided graph comparing deaths due to two different factors over time (I can’t remember what the factors were, nor can I find the picture, unfortunately). At first glance, the two graphs seemed pretty similar. But looking closely, the axis for factor A went from 0 to 10, whereas the axis for factor B went from 0 to 100. The visual representation was therefore very misleading. So, data really is about how it is presented, and often it will be presented in a way that favors the author’s argument despite being deceitful – whether by withholding or skewing information.
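
That scale trick is easy to reproduce with p5-style mapping: the same value lands at wildly different heights depending on the axis range. Here is a plain-JavaScript reimplementation of p5’s map(), with made-up numbers:

```javascript
// Plain-JS version of p5's map(): rescale v from [inLo, inHi] to [outLo, outHi].
function mapRange(v, inLo, inHi, outLo, outHi) {
  return outLo + ((v - inLo) / (inHi - inLo)) * (outHi - outLo);
}

// The same figure (8 deaths) drawn on a 300px-tall chart:
const barA = mapRange(8, 0, 10, 0, 300);  // axis runs 0-10  -> 240px, looks dramatic
const barB = mapRange(8, 0, 100, 0, 300); // axis runs 0-100 -> 24px, looks negligible
```

Identical data, a 10x difference in visual weight: that is the whole deception.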

For this assignment, then, I wanted to visualize data in a way that would say ABSOLUTELY NOTHING to the reader. I just wanted to make a sort of artwork with it to prove my point. I searched for “art” on Kaggle, and found a sheet containing the artists whose works are featured in the MoMA collection. Here is the link to the file: https://www.kaggle.com/datasets/momanyc/museum-collection

And here is my sketch:

The data I first wanted to visualize was the gender of the artists, in order to create a visual comparison between the number of male and female artists featured in the collection. For that, I mapped the birth and death years to the canvas, which I then used as circleX and circleY respectively (I ended up switching from circles to points, so I just used circleX and circleY as the coordinates for the latter). I then used an if/else statement to assign green to female artists and orange to male artists.
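
A minimal sketch of that color step might look like this (the field value and the exact RGB triplets are assumptions for illustration, reusing the green/orange endpoints from the background-transition code further down; my actual sketch may differ):

```javascript
// Hypothetical: map an artist's gender (as stored in the CSV) to an RGB color.
function artistColor(gender) {
  return gender === "Female"
    ? [215, 234, 193] // green for female artists
    : [232, 103, 42]; // orange for male artists
}
```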

Then, I started working on turning the visualization into an inaccurate/incomplete one:

  1. I changed the mapping to include only birth years after 1850, although some artists were born as far back as 1730. Similarly, I only included the artists who died after 1900, though some died as early as 1795.
  2. I added 50 to both the width and the height in the mapping in order to “enlarge” the position of the sketch.
  3. I added a random component to circleX and circleY, which slightly randomized the position of each point, making it dynamic.
  4. I used the sin function to vary the size of each point. Some are therefore bigger and some are smaller, creating some sort of “illusion” when it comes to the data.
  5. I animated the background to transition from orange to green to black continuously. This way, you sometimes see all the points, sometimes only the green ones, and sometimes only the orange ones. It all depends on the moment at which you look at the sketch: different moments show different things.
  6. Finally, I set the alpha of the background to 70 in order to have a trail. This gives the impression of having more points than there actually are.
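
For parts 3 and 4, the jitter and the size variation can be sketched like this (a reconstruction of the idea, not my exact code; the constants are illustrative):

```javascript
// Part 3: nudge a coordinate by up to +/- amount each frame.
function jitter(coord, amount) {
  return coord + (Math.random() * 2 - 1) * amount;
}

// Part 4: vary a point's size with sin(), out of phase per point,
// oscillating between 2 and 8 pixels.
function pointSize(index, frame) {
  return 5 + 3 * Math.sin(frame * 0.05 + index);
}
```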

For parts 5 and 6, here is the code:

//   mapping the sin value to different colors
let sinValue = (sin(angle) + 1) / 2;

let r, g, b;

if (sinValue < 1 / 3) {
  // transitions from black to green
  r = map(sinValue, 0, 1 / 3, 0, 215);
  g = map(sinValue, 0, 1 / 3, 0, 234);
  b = map(sinValue, 0, 1 / 3, 0, 193);
} else if (sinValue < 2 / 3) {
  // transitions from green to orange
  r = map(sinValue, 1 / 3, 2 / 3, 215, 232);
  g = map(sinValue, 1 / 3, 2 / 3, 234, 103);
  b = map(sinValue, 1 / 3, 2 / 3, 193, 42);
} else {
  // transitions from orange back to black
  r = map(sinValue, 2 / 3, 1, 232, 0);
  g = map(sinValue, 2 / 3, 1, 103, 0);
  b = map(sinValue, 2 / 3, 1, 42, 0);
}

background(r, g, b, 70);

I feel like I’ve achieved what I wanted to, and as usual, it has been a learning process. I would say I didn’t really encounter any difficulties; I just had fun practicing data visualization, especially in my own little way 🙂

 

Assignment #3 – Reading Response – Are All Interactions Interactive?

After reading this text, it seems to me that there are two views on interactivity. The first is the overused, under-understood meaning of interaction or interactivity, which Crawford criticizes. In that definition, interactivity rests on the premise that any person or object coming into contact with another object forms an interaction. The “interactive” rug, for example, is fitting, because the child logically “interacts” with the rug. Yet the rug doesn’t return anything. The same goes for the fallen tree. In fact, in those cases, humans act or react, but do not interact. The other meaning of interaction, which Crawford defines, requires that both entities “listen, think, and speak” (literally or metaphorically). But even when the two entities are capable of doing so, it doesn’t mean that they are doing it well. So, can “bad” interactivity be disregarded as interactivity altogether? Though bad, doesn’t it already warrant that interactivity is present?

Another point that follows is that of interactivity design. Based on the aforementioned definitions, it is interesting to think about the role of the interaction designer. The latter, in “interactivizing” their design, acts on a computer. But then, can that be called interaction? Or is the interaction merely the exchange between the viewer and the final design? I believe that in a lot of cases, yes, we can say that the designer creating the design forms an interaction with the computer, but are there cases in which one of the three components of interaction is missing?

Assignment #3 – Code – ☆Manic! At The Disco☆

For this assignment, I wanted to create a visual experience inspired by orange, purple, pink, and yellow club lights:

I love how the lights all stem from one source, but I did not want to limit myself to that, so I thought I could create some sort of three-dimensional effect with trailing that could better represent the “feeling” of experiencing those club lights.

So I created a line class and made 10 lines (with bezier curves) that all begin at (0,0) with varying speeds. I also lowered the background alpha to 10, as transparency allows for trails, which I wanted in order to create this dynamic effect. I tried multiple shapes, but ended up opting for bezier curves. The points for the bezier curves all follow the same line, which is why there is no “curve” per se. However, the movement of the bezier curves looked much nicer than that of the lines, which is why I chose that. The bezier curves also emphasized the 3D effect better than the lines.

Every once in a while, then, the lines meet around the top corner, reproducing the effect of disco lights, before each going their own way once again:

For the colors, I created an array of five different colors which I then incorporated into a linear interpolation function in order to create a gradient:

With the help of ChatGPT, I set two properties, “this.currentColorIndex” and “this.nextColorIndex”. For the first, I picked a random index from the array so that the first color of each line varies. The second property sets the following color by incrementing the index of the previous “current” color by 1, ensuring that the “next” color depends on the previous one.

this.currentColorIndex = floor(random(colors.length)); // set the starting color of each line as a random color from the "colors" array
this.nextColorIndex = (this.currentColorIndex + 1) % colors.length; // ensure the "next" color for each line depends on the initial randomized color
this.lerpAmount = 0; // begin with no interpolation

Then, in the display function inside the class, I set three variables: “currentColor”, “nextColor”, and “displayColor”, in which I set the data from above. The first two variables are then implemented in the third, which is itself incorporated in the stroke function. Altogether, this is what allows the lines to be displayed in the different colors.

display() {
  let currentColor = color(colors[this.currentColorIndex]); // the "current" color
  let nextColor = color(colors[this.nextColorIndex]); // the "next" color
  let displayColor = lerpColor(currentColor, nextColor, this.lerpAmount); // interpolated display color
  stroke(displayColor);
  strokeWeight((this.x + this.y) / 40);
  // ... the bezier drawing follows
}
Finally, I created a last function called “updateColor” in which I set the linear interpolation amount and use an “if” statement to reset it (and hence keep the colors varying).

updateColor() {
    this.lerpAmount += 0.04; // linear interpolation amount to define the speed of the transition

    if (this.lerpAmount >= 1) { // when the interpolation amount reaches 1, reset it to 0
      this.lerpAmount = 0;
      this.currentColorIndex = this.nextColorIndex;
      this.nextColorIndex = (this.nextColorIndex + 1) % colors.length;
    }
  }

Overall, I really enjoyed creating this code. It was definitely hard as last week’s material was a bit dense, but it was good practice. I wanted to incorporate an “if mousePressed” function for the colors to change to a different color palette, but for some reason, it would lag after some time and the sketch would just freeze. That is something I would like to work on for another time, though!

 

Assignment #2 – Reading Response

This talk was very interesting to me, particularly the parts about chance in art and code.

The moment Reas started talking about Jean Arp’s artwork, the one where he dropped pieces of paper and left them where they fell, I immediately thought of the concept of “happenings”. Allan Kaprow introduced “happenings” in the 1950s as artistic events employing randomness and improvisation, particularly in performance art. They typically required the participation of the audience, so the spontaneity of the happenings and the random environments in which they unfolded created an unpredictable result – one that Arp’s work resonates with. I was happy to see that Reas mentioned John Cage and Marcel Duchamp, as they are the predecessors of happenings. John Cage invented the technique of the “prepared piano”, in which he would place different items on the piano’s strings in order to change the way they sound. Marcel Duchamp emphasized the role of the viewer in art, shaping the way audience participation contributed to the artwork. The idea, which Reas evokes, that this kind of randomness and chance allows us in a way to take a stand against authority, power, and order, and to come back to our chaotic nature, is very striking to me.

Moving on to chance in code, I had actually never thought about it. Although I cannot say that chance in code is a happening, I really felt like it resonated with this “fight” against order and rationality. Reas says: “the idea of using a rational machine to do something unexpected was at the time very subversive”. In the demos he shows next, it is interesting to see how the slightest randomization in code impacts the images generated. Although code is very structured, it is very easy to create something completely unpredictable. This echoes the unpredictable nature of the results that happenings tried to achieve. In one of the demos, he mentions how one of the sketches, once mirrored, creates images that we start giving meaning to, whether we see a face, a skull, an animal… It reveals how we as a society can make meaning out of art that perhaps was meant to have another meaning, or no meaning at all.