Reading Reflection – Week #3

When I started reading the assigned material, I was entertained by the author’s voice, as it was not what I expected from a book. Opening the chapter by challenging the idea of what “interaction” means made me think: “What do video games and productivity share?” For example, we could argue that both require a certain level of concentration to be completed fully, but while the former applies all three degrees of interactivity (listening, thinking, and speaking), the latter depends heavily on the activity being done. My homework can tell me what instructions to follow, but it cannot think on its own, whereas a video game can give me feedback because its nature is more communicative; without this, video games would be unsatisfying.

Furthermore, I never imagined myself looking at a fridge as a medium of interactivity, since for me the concept of “interaction” was always tied to technology: the user performs an action and the computer communicates back. The design aspect of interactivity is more complicated. While a graphic designer is important for ensuring visual consistency throughout a project, as the author says, it is not all eye candy. As in video games, there needs to be someone who understands what steps to follow to achieve a good level of interaction and engagement. A video game designer would not put the hardest level at the start of the game, as it would create a lot of frustration; likewise, a designer focused on interactivity would not force users to type commands on a website just to enter a page. All graphic and interactive elements need to coexist to allow the best user engagement possible.

Assignment 3 – what exactly is interactivity

I totally disagree with the definition of interactivity that is provided. If I did agree with it, then the IM major should have a different name, since most of what we do, according to the definition in the readings, is not interactive. By that definition, all the 2D (and other) “interactive” art we make is merely our participation in something. This does make sense, because the art we created is not talking back, forming a thought, or having a meaningful interaction. Someone could argue that some interactive art does react to us after we do something, and that if we both react back and forth, it is an interaction. But would this be an interaction, or just a lifeless programmed reaction that the program will produce no matter what? It is the same as in a movie: no matter what, the actor is going to follow the script.

If we look at the modern world, social media should also count as mere participation under the provided definition. But when I think about it, we do interact with each other while using social media as a medium or a tool. The same goes for music; the writer said we don’t interact with music, but we interact with each other through music. The only form of interactivity with something that is not human, again per the definition, would be AI, because AI is currently the only thing that does not require a real human in front of me in real time to have an interaction. But other than that, should we even keep calling everything else interactive?

Assignment 3 – Around the world

For this assignment, I really wanted to do something fun. I was listening to the song “Around the World” by Daft Punk and thought about how the song is a hit even though they just repeat the same sentence.

So, why not create a repetitive pattern that uses that repetitive sound to create something? I made a Robot class and generate a new robot every 60 frames. Each robot gets a random head size, a random body size, random colors, and either a visor or normal eyes. Each robot also has an angle and a speed so it can move in a circle.

class Robot {
  constructor(x, y, angle) {
    this.centerX = x; // Center of circular motion
    this.centerY = y; // Center of circular motion
    this.angle = angle; // Starting angle for circular motion

    this.headSize = random(12, 25);
    this.bodySize = random(this.headSize + 5, this.headSize + 25);

    this.headColor = color(random(255), random(255), random(255));
    this.bodyColor = color(random(255), random(255), random(255));
    
    // choose between a visor or normal eyes
    this.eyeType = floor(random(2));

    this.orbitRadius = 200; // Radius of circular motion
    this.speed = 0.02; // Speed of rotation
  }

  // advance the angle so the robot keeps circling its center
  move() {
    this.angle += this.speed;
  }
}

 

The song plays in the background and loops forever, so it keeps saying “Around the World” while the robots move around the world (the center being the mouse).
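The circular motion itself is plain polar-to-Cartesian math. Here is a minimal, browser-free sketch of it (the `orbitPosition` helper name is my own, not part of the original sketch; `Math.cos`/`Math.sin` stand in for p5’s `cos`/`sin`):

```javascript
// Where a robot on a circle of radius r centred on (cx, cy)
// should be drawn for a given angle. In the sketch, (cx, cy)
// is the mouse position.
function orbitPosition(cx, cy, angle, r) {
  return {
    x: cx + Math.cos(angle) * r,
    y: cy + Math.sin(angle) * r,
  };
}

// Each frame the sketch would advance the angle by the robot's
// speed (0.02 radians), so the robots loop "around the world":
//   robot.angle += robot.speed;
//   const p = orbitPosition(mouseX, mouseY, robot.angle, robot.orbitRadius);
```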

 

**click on the sketch**

**If WordPress doesn’t play the sound, check the sketch itself.**

I really enjoyed making this because I love Daft Punk, but honestly, I hope to add more user interaction with these robots at some point.

Week 3 Reading Response: The Art of Interactive Design

In this chapter, Chris Crawford sets about defining the murky concept of ‘interactivity’. As Crawford rightly mentions, interactivity has become the buzzword of the Web age, prompting much corporate marketing built on the notion of interactivity even when it doesn’t make sense. Thus, it is important to set a clear definition of interactivity and of what counts as interactive.

I feel Crawford’s “Listening, Thinking, Speaking” definition is definitely a good place to start, but in trying to exclude things that are definitely not interactive, it may also exclude things conventionally seen as interactive. After all, so-called smart lamps, for example, do not “think” much (I am talking about the most basic ones, such as those that respond to clapping), yet they could be classified as interactive. The argument can be made that there is some level of signal processing involved in differentiating a clap from background noise, and I won’t claim to be an expert on the matter, but I believe it is still simpler than the thinking Crawford calls for. This definition also excludes things like “interactive fiction”, because no thinking goes on in deciding between pre-coded paths in an interactive novel, and the reader doesn’t have free rein over the responses they can communicate to the characters of the story.

In this regard, I found that looking through the lens of degrees of interactivity makes more sense. Thus, things like refrigerator doors are low on the interactivity scale. Smart lamps, as well as many beginner Interactive Art projects, could be classified as medium interactivity. Medium-high interactivity might include video games. And the highest tiers of interactivity are reserved for LLM chatbots and actual people. Thus, interactivity is a spectrum and, much to Crawford’s dislike, is inherently subjective.

Week 3 Assignment: Closer Look

The overall concept for this week’s assignment is that I wanted to portray what things look like up close. I wanted to capture the vibrating movements of various atoms and molecules, driven by the user’s interaction. Mouse pressing and mouse dragging each trigger different actions: if the mouse is pressed, a random shape subclass is painted on the canvas, while if the mouse is dragged, the background color and the sizes change and the shapes gain erratic movement. I used a parent class Shape and a subclass for each shape, with move, display, and update methods. Although I encountered no visible problems, it was refreshing to make colorful and interactive artwork.

Shape classes code:

class Shape {
  constructor(x, y, size, color) {
    this.x = x;
    this.y = y;
    this.size = size;
    this.color = color;
    this.offsetX = random(-1, 1);
    this.offsetY = random(-1, 1);
  }
  
  display() {
    // Abstract method 
  }
  
  update(x, y) {
    // Abstract method 
  }
  
  move(){
    // Abstract method 
  }
}
//star subclass
class Star extends Shape {
  constructor(x, y) {
    super(x, y, random(20, 100), color(random(255), random(255), random(255), 100));
  }
  
  display() {
    fill(this.color);
    beginShape();
    for (let i = 0; i < 10; i++) {
      let angle = TWO_PI * i / 10;
      let r = this.size * (i % 2 === 0 ? 0.5 : 1);
      let x = this.x + cos(angle) * r;
      let y = this.y + sin(angle) * r;
      vertex(x, y);
    }
    endShape(CLOSE);
  }
  
  update(x, y) {
    let d = dist(this.x, this.y, x, y);
    this.size = map(d, 0, width, 20, 100);
  }
  
  move() {
    // Vibrate the molecule in place
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }
}

//diamond subclass
class Diamond extends Shape {
  constructor(x, y) {
    super(x, y, random(20, 100), color(random(255), random(255), random(255), 100));
  }
  
  display() {
    fill(this.color);
    beginShape();
    vertex(this.x, this.y - this.size / 2);
    vertex(this.x - this.size / 2, this.y);
    vertex(this.x, this.y + this.size / 2);
    vertex(this.x + this.size / 2, this.y);
    endShape(CLOSE);
  }
  
  update(x, y) {
    let d = dist(this.x, this.y, x, y);
    this.size = map(d, 0, width, 20, 100);
  }
  
  move() {
    // Vibrate the molecule in place
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }
}

//atom subclass
class Atom extends Shape {
  constructor(x, y) {
    super(x, y, random(20, 100), color(random(255), random(255), random(255), 100));
  }
  
  display() {
    fill(this.color);
    ellipse(this.x, this.y, this.size, this.size);
    fill(255);
    ellipse(this.x - this.size / 4, this.y - this.size / 4, this.size / 2, this.size / 2);
  }
  
  update(x, y) {
    let d = dist(this.x, this.y, x, y);
    this.size = map(d, 0, width, 20, 100);
  }
  
  move() {
    // Vibrate the molecule in place
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }
}

//molecule subclass
class Molecule extends Shape {
  constructor(x, y) {
    super(x, y, random(20, 100), color(random(255), random(255), random(255), 100));
  }
  
  display() {
    fill(this.color);
    ellipse(this.x, this.y, this.size, this.size);
    fill(255);
    ellipse(this.x - this.size / 4, this.y - this.size / 4, this.size / 2, this.size / 2);
    ellipse(this.x + this.size / 4, this.y + this.size / 4, this.size / 2, this.size / 2);
  }
  
  update(x, y) {
    let d = dist(this.x, this.y, x, y);
    this.size = map(d, 0, width, 20, 100);
  }
  
  move() {
    // Vibrate the molecule in place
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }
}

//water molecule subclass
class WaterMolecule extends Shape {
  constructor(x, y) {
    super(x, y, random(20, 100), color(0, 0, 255, 100)); // Blue color for water
  }
  
  display() {
    fill(this.color);
    ellipse(this.x, this.y - this.size / 4, this.size / 2, this.size / 2); // Oxygen atom
    fill(255);
    ellipse(this.x - this.size / 4, this.y + this.size / 4, this.size / 3, this.size / 3); // Hydrogen atom 1
    ellipse(this.x + this.size / 4, this.y + this.size / 4, this.size / 3, this.size / 3); // Hydrogen atom 2
  }
  
  update(x, y) {
    let d = dist(this.x, this.y, x, y);
    this.size = map(d, 0, width, 20, 100);
  }
  
  move() {
    // Vibrate the molecule in place
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }
}

// CO2 subclass
class CarbonDioxideMolecule extends Shape {
  constructor(x, y) {
    super(x, y, random(20, 100), color(255, 0, 0, 100)); // Red color for carbon dioxide
  }
  
  display() {
    fill(this.color);
    ellipse(this.x, this.y - this.size / 4, this.size / 2, this.size / 2); // Carbon atom
    fill(255);
    ellipse(this.x - this.size / 3, this.y + this.size / 4, this.size / 3, this.size / 3); // Oxygen atom 1
    ellipse(this.x + this.size / 3, this.y + this.size / 4, this.size / 3, this.size / 3); // Oxygen atom 2
  }
  
  update(x, y) {
    let d = dist(this.x, this.y, x, y);
    this.size = map(d, 0, width, 20, 100);
  }
  
  move() {
    // Vibrate the molecule in place
    this.x += random(-1, 1);
    this.y += random(-1, 1);
  }

}
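The sketch-level glue is not shown above. As a hedged sketch of the dispatch (the handler body and the `pickShapeKind` helper are my assumptions, not the original code), a mouse press could pick one of the subclasses at random; the `rng` parameter is injected here only so the selection logic can run outside p5.js:

```javascript
// Names of the six subclasses defined above; in the real sketch
// this array could hold the class constructors themselves.
const shapeKinds = [
  'Star', 'Diamond', 'Atom',
  'Molecule', 'WaterMolecule', 'CarbonDioxideMolecule',
];

// Pick a random kind, as the write-up describes happening on
// mousePressed. rng defaults to Math.random (p5's random() in-sketch).
function pickShapeKind(rng = Math.random) {
  return shapeKinds[Math.floor(rng() * shapeKinds.length)];
}

// In the sketch, mousePressed() would then construct the picked
// class at the mouse position, e.g.:
//   shapes.push(new Star(mouseX, mouseY));
```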

 

The Art of Interactive Design Reading Response

To a certain extent I agree with the definition of interactivity: a cyclic process in which two actors alternately listen, think, and speak. There should definitely be threads of action that take input (listen), process it (think), and produce output (speak); however, I believe the process does not have to be limited to two ‘actors’, nor does it have to follow a strictly alternating order. For instance, when someone interacts with a website, it may seem like there are two actors, but if you divide the website into two parts, the frontend and the backend, the interaction is between three entities. In this sense, interactivity can involve multiple entities. Furthermore, I believe there needs to be some sort of correlation between the threads. In other words, the output of one actor must be directly or indirectly taken as input by another actor; otherwise, there would be no relation between the two.

Another aspect that piqued my interest was the sentence “The refrigerator does interact with a user but it does so at a low level”. When it comes to measuring the level of interactivity, besides the three requirements, the time it takes for the interaction to occur could be an important parameter. Many micro-interactions can happen simultaneously in a short time, which could be argued to constitute high-level interaction. On the other hand, there could be complex interactivity that spans a long period, thereby lowering its level. The word interactivity covers a large area of our society, and depending on the context in which it is used, its definition changes.

Reading Reflection Week 3 – Dachi Tarughishvili

“The Art of Interactive Design” by Chris Crawford goes in depth refining a buzzword – interactivity – that we often use without understanding all its intricacies. According to the author, interactivity is not just an additional layer of programming but a core aspect of design. This has also been true in my experience: having worked on several projects, I find it is often the interactive layer that makes the difference between a solid and a lacking experience – for example, making everything user-friendly and easy to understand, but also dynamic enough that the user does not get bored of repeating the same action again and again.
He makes a clear divide between interaction and reaction, and argues that the cyclic nature of two systems defines whether something is interactive or not. While I understand his idea of having two actors, I don’t think the branch example is only about having one actor. Even though the branch is an inanimate object, it still plays a major role. The better distinction is that the branch, while being an actor, is not something that can modify its behavior based on our (the first actor’s) actions.
Nevertheless, in the following paragraphs, the author gets more concrete in his definition, and I have to agree that having two living organisms interact is much more authentic than any kind of computer interaction we try to imitate in real life.
The author further continues to define different levels of interaction and how it is a variable that can have different strengths based on how many components it has and how advanced they are (the ability to think, speak, listen, etc.). I would argue, however, that it is important to define individual aspects, since while something may be interactive by all those definitions, a user (who is good at listening, speaking, and thinking) might still find it lackluster based on their personal experience. For example, imagine an app that teaches you a skill. On paper it is quite interactive: the user listens, speaks, and inputs answers after deliberate thinking; the app, in turn, responds, speaks when needed, and analyzes answers. However, if the user is already fluent in that skill, this interaction will seem trivial and more of a hassle, unless the app is designed to be tailored to their fields of interest or behavioral patterns.
I agree with his book example (it is more of a reaction than an interaction). However, some movies can indeed be interactive. For example, Black Mirror: Bandersnatch is a 2018 movie where viewers make choices that determine which scenarios play out. Even here, though, you can argue that this is not true interaction, since those scenes have already been pre-shot and there is nothing an individual can do to change their script.
His final point lies in differentiating the user experience designer from the interactivity designer. The latter is less concerned with technical aspects and more with how a function makes the user feel. As such, there is an integration of “form with function”.
All in all, the author was very forward-looking in his statements. Crawford emphasizes interactivity’s core role in user experience, distinguishing between interaction and reaction. The science of interactivity has advanced a lot since this book was written, especially with the rise of VR, where new definitions can be formed every day based on discoveries and new ways to create immersive experiences. Ultimately, his work serves as a good foundation in this ever-evolving field.

Eye of Sauron – Dachi Tarughishvili – Assignment 3

While looking for ideas for generative art, I stumbled upon The Coding Train playlist by Daniel Shiffman. It was super interesting to see just how many things you can create using the simple idea of randomness, and how adding different variables generates seemingly organic patterns. The last video in the playlist, Polar Perlin Noise Loops, intrigued me the most. I followed the tutorial and ran the default code, then started messing with it to get interesting results. For example, I changed the z-offset and phase values to make the shape spin faster or with greater amplitude; additionally, there is a slider to increase the randomness, which visually translates to increasing speed. Once I saw the outline, I was interested not just in the changing shape but in what would happen if the background did not override it every frame. That way a figure slowly comes to life as the code plays out. Adding transparency made the animation seem smoother and more organic. I changed the color to red and saw how it morphed into a circle with spikes after changing the amplitude to 10 * cos(a + phase). It instantly reminded me of the Eye of Sauron from The Lord of the Rings. Thus I had a picture in mind and a rough plan to execute it. I added more shapes driven by the same Perlin noise logic but with different parameters – for example, a different phase, color, or shape entirely (the latter being very important for the black vertical pupil in the middle of the eye). I then added arcs to imitate the Dark Tower.

I decided to change the color (transparency) of the eye, as well as increase its size, if the volume is loud enough. After 6.8 seconds, the camera also begins to work, and 0.4 seconds before the audio ends, the inner pupil starts to expand. There is also a transparency aspect (commented in the code) that makes the eye more transparent as it comes closer to the subject; the camera overlay is also semi-transparent, which gives a sort of slow-motion effect using the tint function.

I followed articles on the web to get the capture feed. I had some difficulty getting the camera to work in sync with the audio, but in the end, simple boolean checks did the trick. The values for the audio were trial and error, as were the eye-size scaling and volume levels. I am really happy with how it turned out, especially how the audio dialogue matches what is actually happening on screen (camera = seen, and death = engulfed in darkness by the gradually expanding inner pupil).
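As a hedged sketch of those boolean checks (the function names are my assumptions; the 6.8 s and 0.4 s constants come from the description above), the timing logic could be as simple as:

```javascript
// Camera overlay turns on 6.8 seconds into the audio.
function cameraActive(t) {
  return t >= 6.8;
}

// Inner pupil starts expanding 0.4 seconds before the audio ends.
// Both t and duration are in seconds.
function pupilExpanding(t, duration) {
  return t >= duration - 0.4;
}
```

In the draw loop, `t` and `duration` would presumably come from the p5.sound track (`currentTime()` and `duration()` on the `p5.SoundFile`).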

I changed a few colors and parameters to make it look like the source material, and in the end got exactly what I wanted. Perhaps this is the most impressive part of the whole code, because this is where the Perlin noise eye animation takes place:

// draw outer shape
  stroke(255, 10, 0, alphaValue);
  noFill();
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    let xoff = map(10 * cos(a + phase), -1, 1, 0, noiseMax);
    let yoff = map(sin(a + phase), -1, 1, 0, noiseMax);
    let r = map(noise(xoff, yoff, zoff), 0, 1, 100, 220) * (eyeSize / 20); // scale based on eyeSize
    let x = r * cos(a);
    let y = r * sin(a);
    vertex(x, y);
  }
  endShape(CLOSE);

  // orange glow for the first outer shape
  fill(255, orange, 0, alphaValue * 0.5); // lower transparency
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    let xoff = map(8 * cos(a + phase), -1, 1, 0, noiseMax);
    let yoff = map(8 * sin(a + phase), -1, 1, 0, noiseMax);
    let r = map(noise(xoff, yoff, zoff), 0, 1, 0, size_t) * (eyeSize / 20); // Scale based on eyeSize
    let x = r * cos(a);
    let y = r * sin(a);
    vertex(x, y);
  }
  endShape(CLOSE);

  // second glow
  fill(255, 165, 0, alphaValue * 0.5);
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    let xoff = map(10 * cos(a + phase + 1), -1, 1, 0, noiseMax); // different phase
    let yoff = map(10 * sin(a + phase + 1), -1, 1, 0, noiseMax);
    let r = map(noise(xoff, yoff, zoff), 0, 1, 50, 220) * (eyeSize / 20); // Scale based on eyeSize
    let x = r * cos(a);
    let y = r * sin(a);
    vertex(x, y);
  }
  endShape(CLOSE);

  // inner black pupil, drawn as a vertical ellipse
  fill(0); // black
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    let xoff = map(5 * cos(a + phase), -1, 1, 0, noiseMax);
    let yoff = map(5 * sin(a + phase), -1, 1, 0, noiseMax);
    let rx = map(noise(xoff, yoff, zoff), 0, 1, 5, 20) * (eyeSize / 20); // Scale based on eyeSize
    let ry = map(noise(yoff, xoff, zoff), 0, 1, 50, 120) * (eyeSize / 20); // Scale based on eyeSize
    let x = rx * cos(a);
    let y = ry * sin(a);
    vertex(x, y);
  }
  endShape(CLOSE);

  // update zoff and phase
  zoff += 0.008;
  phase += 0.008;

All in all, I had lots of fun working on this project and I am very happy with the results. I hope you like it too! Here is the final version (I would highly suggest opening it in an actual browser and granting camera/microphone permissions for the full animation): https://editor.p5js.org/dt2307/full/krd4mZZqJ

 

 

Reading Reflection – Week #2

Having recently watched Casey Reas’ video, I find it fascinating how he navigates through concepts like art, randomness, order, chaos, algorithms, and aesthetics. One key takeaway that stood out to me is how randomness, when guided by certain rules or parameters, can lead to order – a concept that reminded me of an old game called “The Game of Life.”

In this game, lights follow a simple algorithm: if they’re lonely or surrounded by too many others, they go out; under ideal conditions, a new light emerges, creating a continuous cycle of life and death. Upon further exploration, one realizes that specific conditions, or parameters, play a crucial role. These act like guidelines, turning what might seem like randomness into a more structured and even aesthetically pleasing pattern. Imagine it as a kind of artistic recipe: the right mix of chaos and order, with a sprinkle of algorithms, can result in something visually appealing. It’s about making sense out of what initially seems random, and that’s pretty cool.
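The rules described here are those of Conway’s Game of Life. A minimal sketch of one generation step on a finite grid (the hard grid edges are a simplification of my own; the classic formulation uses an unbounded grid):

```javascript
// Advance a grid of 0s (dead) and 1s (alive) by one generation:
// a live cell with fewer than 2 or more than 3 live neighbours dies,
// and a dead cell with exactly 3 live neighbours comes to life.
function step(grid) {
  const rows = grid.length, cols = grid[0].length;
  const next = grid.map(row => row.slice());
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      let n = 0; // count live neighbours in the 8 surrounding cells
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const rr = r + dr, cc = c + dc;
          if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) n += grid[rr][cc];
        }
      }
      next[r][c] = grid[r][c]
        ? (n === 2 || n === 3 ? 1 : 0) // survival rule
        : (n === 3 ? 1 : 0);           // birth rule
    }
  }
  return next;
}
```

A horizontal bar of three live cells flips to a vertical bar and back – the classic “blinker” oscillator, a small example of order emerging from simple rules.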

In a nutshell, Reas’ video taught me that even in the seemingly chaotic world of art and algorithms, there’s an underlying order waiting to be uncovered. It’s like solving a puzzle, understanding how randomness can actually contribute to creating beautiful and ordered patterns.

Week 2 Reading Response – Dachi Tarughishvili

Casey Reas is a co-creator of Processing (software I often used in my freshman year of university) and is well known for many digital illustration projects. In his Eyeo 2012 talk he discussed chance operations, which I found very intriguing for several reasons. First of all, his discussion of random systems with seemingly random parameters that, after some time, tend toward order and become homogeneous (Process 18) is very similar to what I did for Assignment 2, where I used random functions yet, after many iterations, ended up with clear shapes depending on different parameters.
His work based on “Tissue Work” highlights how small changes in parameters largely influence the paths his figures follow. I also found this true when modifying the variables of my project, which completely changed the shape. More surprisingly, the way the change worked started making sense in a nonmathematical context as well: without thinking about the equations, I could visually deduce how one parameter (percent) influences the resolution of the shape and the other (n) influences how it spreads.
I found it inspiring how many fields chance operations can be used in: physics, music, and obviously art. One might wonder whether all these fields are related by underlying mathematical equations, which might even explain how our entire universe works.
Seeing the random number book got me thinking about whether anything is truly random. As a computer science student, I have often been told that you can’t program true randomness. I started wondering whether we even need true randomness, as controlled randomness already helps us in many ways – be it in encryption, algorithms, or, in this case, creating art.
The demo about triggering our imagination was truly fascinating, as it got me thinking about how our brains try to declutter chaos and do their best to make sense of what we see based on our preconceived notions.
I think Casey Reas’s presentation and his personal experience show how we might find meaning and beauty in the most unexpected (random) places, which in turn should inspire us to look for order in chaos, in both our fields and our personal lives.