Reading Response – Week 3

After reading the excerpt from Chris Crawford’s “The Art of Interactive Design,” I am excited to explore my own personal definition of what interactivity is and how I can apply it to what we create in class. I think I would also use the three categories Crawford gave us to determine whether or not something is interactive. Listening, speaking, and thinking seem to be the standard for ‘quality interaction,’ so I believe we should also apply them in our design process. I think the listening part might be the most difficult to implement because I’m not exactly sure how that would be done creatively.

To improve the user interaction in my p5 sketches, I think even just consciously asking myself whether the program is listening, thinking, and speaking will show results in one way or another. To go further, though, I’d like to let my imagination wander a bit and consider what I’d actually enjoy interacting with. So far, I have been limiting myself to rather simple projects because I was afraid of the more daunting aspects of implementation. Using these three key standards, I think even implementing my ideas will be easier because I can ask myself: how do I do this? If the program is ‘thinking’ about what the user wants, maybe I can code a few options for it to choose from based on the user input it hears.
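
To make this concrete for myself, here is a minimal p5.js sketch of that listen–think–speak loop: it listens for a key press, ‘thinks’ by choosing among a few pre-coded options, and ‘speaks’ back with a visual response. The keys, option names, and colors are placeholders I made up for illustration.

// Minimal listen-think-speak sketch; keys, options, and colors are placeholders.
let response = "press 1, 2, or 3";
let bg = 220;

function setup() {
  createCanvas(400, 200);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(bg);                         // "speaking": show the chosen mood as color
  fill(0);
  text(response, width / 2, height / 2);  // and as text
}

function keyPressed() {
  // "listening": read the key; "thinking": pick one of a few coded options
  if (key === '1') {
    response = "calm";
    bg = color(180, 210, 255);
  } else if (key === '2') {
    response = "energetic";
    bg = color(255, 200, 120);
  } else if (key === '3') {
    response = "playful";
    bg = color(200, 255, 180);
  }
}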

Similar to what Reas seemed to be advocating for, I think Crawford wants us to think a bit more abstractly. Whether or not something is interactive might be a harder question to answer than one might expect, and it can be debatable in many different contexts.

Reading Reflection – Week 3

Chris Crawford provided, in my opinion, a great explanation of what people think interactivity is and what it should look like in reality. Although some might find the author’s claims surprising, I personally have always had pretty much the same thoughts on the matter. I am not sure when Chris Crawford wrote this, but it seems that the word “interactivity” is still quite often overused and, perhaps, even misused or confused with the words “react” or “participate.”

The thing that I see differently, though, is that I believe we should clearly separate interactivity with people from interactivity with machines. While the former should obviously “listen, think, and speak,” the latter can be limited to “taking in and responding,” with perhaps “processing” added in between. A strongly interactive system, as I see it, would include all three of these features and would obviously allow us, the users, to influence the outcome. Nowadays, with the rise of generative AI, interactivity with machines is moving to a completely different, more advanced level, and the goal of people working in the interactivity design industry is to keep up with the fast pace of new technologies so they can integrate them into the work they create for users.

Another idea from the reading that I liked is that of mixing the “art” people and the “code” people, as it can indeed enhance the experience we get from using technology. It is important to have a diverse set of skills and knowledge, and back-end programmers who also understand the goals of graphic/UX/UI designers (and vice versa) can significantly improve the level of the work. It is interesting to see such a mix even in our class – people who have been majoring in Computer Science or other technical disciplines throughout their time at university come to an Interactive Media class to learn about art.

As I mentioned in the conclusion of my Assignment 3 blog post, my goal for the next assignments will be to add interactivity to what I create. Of course, I will not be able to do it at an advanced level yet, but I certainly want my work to give some kind of response – both audio and visual. For my midterm, I am considering creating a game, which is an example of interactivity, at least by my own definition of the word.

Reading Reflection – Week 3

There are two key questions that arise after reading Chapter 1 of ‘The Art of Interactive Design’. The first is: What are the characteristics of a strongly interactive system? For the second question, stay with me until the end of this response to find out; I promise it will be worth it. Before addressing these, I’d like to share my analysis of Chapter 1 and my thoughts after completing it.

Although the book was first published in 2002, its concepts remain relevant in 2024. I fully agree with the author’s argument that technology buzzwords are often overused and misunderstood. At the time of the book’s writing, “interactivity” was the buzzword in question. When we examine this closely, it becomes clear that this is largely driven by the capitalistic nature of the tech industry. Buzzwords are used because they help sell products, even if their meanings are diluted. This is equally true today with terms like “innovation” and “AI,” which are frequently misapplied to make a profit, often without a deep understanding of their implications.

This chapter offered insightful ideas, and I was particularly drawn to the concept of viewing interactivity as a conversation between two actors, where listening, speaking, and thinking all play critical roles in designing an interactive experience. The author’s approach is notable in that he shifts the definition of interactivity from a simple yes-or-no concept to a hierarchical structure, allowing us to assess interactivity from a subjective perspective. I personally agree that interactivity is subjective, given the diversity of the world’s population. What may be considered highly interactive in one part of the world might be seen as only moderately or minimally interactive elsewhere. The author strengthens his argument by clarifying what is not interactive, such as mere reaction or participation, and provides strong examples that are difficult to refute.

While the author’s arguments might seem somewhat biased towards interactive design, I find myself in agreement with most of them. He argues that a true interactive designer incorporates all three key elements—listening, speaking, and thinking—while a user interface designer typically excludes the thinking aspect, reducing interactivity. Many technologies that we use today, such as Spotify, smartphones, and gaming consoles, do not include all three aspects of interactivity. For instance, while Siri can listen and speak, it doesn’t truly “think” independently yet. Despite this, humans continue to interact with these technologies frequently! This raises the question of whether these technologies were designed by user interface designers or whether these designers have somehow adopted the three key elements of interactivity. It also prompts a deeper examination of the author’s critique of designers who may have less expertise in the arts and humanities but are still creating interactive systems.

Now, the question you’ve been waiting for: What ideas do I have for improving user interaction in my p5 sketches? After reading this chapter, I plan to focus on integrating the elements of listening and speaking to make my sketches more interactive. Once I have mastered these two aspects, I will work on incorporating the third element—thinking. This will undoubtedly be a challenge, as figuring out how to make a p5 sketch “think” is no small task. However, it will be an exciting journey to explore how this can lead to truly interactive creations.

Week 3 – Reading on Interactivity

Crawford’s paper was remarkable in reshaping my understanding of what makes something interactive and how to approach the definition of the term itself.

At first, I also agreed with his statement about seeing interaction as a conversation between two entities. This reminded me of one of my literature classes, where we discussed how reading a paper is like the author trying to have a conversation with us. However, as I am writing this, and after reading the paper, perhaps I was mistaken to agree with this argument, as a conversation requires some degree of back-and-forth, which is something that papers and books do not do. Unless you consider it from a technical sense – if you are reading a book on a Kindle, there are tons of buttons and opportunities to interact with the text; e.g., you can highlight any word, and it will look up the definition for you. That could be seen as interactive, but reading from physical paper does not allow readers to have that sort of exchange.

This prompted me to question: what is the distinction between a reaction and an interaction? The author conveniently brings this up at just the right time with the situation of a tree branch falling and the way he responds to it – his response does not prompt the fallen branch to suddenly get up and start flying around; it stays still. This example helped me understand the difference: once the user reacts to a prompt or situation, the other entity must also react and continue according to the user’s actions and words.

What also deepened my understanding of interactivity is Crawford’s example comparing Nintendo games with the fridge. He argues earlier that interactivity should be entertaining, but there are nuances to this statement. He explains that though adults may find a fridge mundane, children would like it, as they can “play” with its light by opening and closing the door. The fridge is still interactive, albeit in its own way. I learnt that interactivity comes in different grades: high, moderate, low, and none. An item like a fridge has low interactivity, while reading on a Kindle has high interactivity. I also found Cicero’s statement on the notion of interactivity very appealing. I discovered the notion of “imitation” in the interactive sense — reading is an activity that imitates interactivity, since our emotional capacity is exercised there, alluding to the idea of interactivity without being truly interactive. Additionally, Cicero believes that “fuller nourishment comes from the living voice” – another (biological) entity provides you with the intimacy that deepens your correspondence and actions with someone or something. This made me think about the difference between interactive technical projects and interactive human projects.

When Crawford moves on to performance, it helped me understand how important the role of the audience is. I discovered that the larger the audience, the more challenging it can be to interact with them. In the theatrical world, it takes a large cast to execute this – you would have to break the fourth wall in order to deem a play interactive. This instantly reminded me of interactive still lifes, where an actor or the cast pose as part of an environment, allowing the audience to manipulate the scenery using the actors’ bodies. For example, if the scenery is a forest, one actor may pose as a tree — and the audience ‘walks’ through this forest (keeping in mind that this is a still life, so the actors don’t move on their own) and plays around with the environment. The audience shapes the scenery by moving an actor’s arms to make the tree appear wider, or perhaps by having them lie on the ground to show that it has fallen. That is an example of interactivity in performance, which Crawford argues barely exists.

He also argues at the end that “good interactive design integrates form with function,” which I struggled to grasp somewhat, as I did not understand exactly what he meant by “form” – it felt slapped on at the end. I assume he means that those in charge of interactivity must step outside convention to make good interactive design.

Assignment 3: The Paths

My goal for Assignment 3 was to combine the knowledge about loops, OOP, and arrays with the idea of controlled randomness while still using basic shapes (the art of simplicity :)). At first, my main plan was to try to replicate one of the most famous scenes from the movie “Interstellar”, where the main character finds himself inside the so-called Tesseract – a four-dimensional space that looks like a huge library made of strings (check the video if interested).

So it was decided: I would use lines as the main part of my artwork. I wanted the lines to form something similar to what I saw in that movie scene. However, pretty soon I realized that I was not getting the result I wanted – probably because I like the movie so much that I can’t look at any replica that isn’t as good as the movie itself. So I abandoned this idea.

Working Process

Nevertheless, I did not abandon the idea of using lines, or the idea of them crossing each other on the canvas. This time I decided to use a black canvas, and instead of completely random colors like in Assignment 2, I created a palette from which the colors would be chosen. A black background matches neon colors perfectly, so I googled and asked ChatGPT for RGB codes and chose the ones I liked most. As for movement, I also decided to introduce more order compared to my previous assignment, so the lines emerge from the left side and the top and travel towards the right side and the bottom, respectively. The challenge I faced straight away was finding a way for the lines to draw themselves smoothly from beginning to end without ‘teleporting’. Using the internet, the p5.js Reference page, and a TheCodingTrain video, I implemented lerp().

Next, I decided to make the lines disappear over time – again, to avoid the kind of chaos on the canvas that happened with “Going Through Life”. To make this happen, I used the already familiar alpha value to increase the transparency of the lines over time. I also used the red(), green(), and blue() functions to keep the colors consistent while fading.
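
Put together, a stripped-down version of these two ideas might look like the sketch below; the variable names and numbers here are my own placeholders rather than the actual class from this project.

// A bare-bones version of the two techniques above: lerp() draws the line
// gradually, and an alpha value is decreased each frame so the finished
// line fades out. Names and numbers are placeholders.
let progress = 0;    // 0 to 1: how much of the line has been drawn
let fadeAlpha = 255; // transparency of the stroke

function setup() {
  createCanvas(400, 200);
}

function draw() {
  background(0);

  if (progress < 1) {
    progress += 0.01;      // the line draws itself smoothly
  } else if (fadeAlpha > 0) {
    fadeAlpha -= 3;        // once finished, it fades away
  }

  let x = lerp(50, 350, progress); // current end point of the line
  stroke(57, 255, 20, fadeAlpha);  // neon green, fading via alpha
  strokeWeight(2);
  line(50, 100, x, 100);
}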

The most difficult part

Since I gave up on replicating the scene from “Interstellar”, I clearly needed to come up with something else. Referring back to the Casey Reas video I watched last week, I decided that my lines should change direction. Once again, I implemented the idea of controlled randomness: my lines change direction within a limited range of values after they pass the middle of the canvas. This was the challenging part because I was not sure how to approach the code. At first, I was thinking of simply drawing two separate lines, but that involved too much hardcoding, so I searched the internet once again. I found the amazing map() function, which is used to translate values from one scale or range to another. Thanks to the Reference page, a TheCodingTrain video, and a lot of debugging, I finally reached the desired result.

drawSelf() {
    if (this.alpha > 0) { // line disappears when alpha goes below 1
      stroke(red(this.color), green(this.color), blue(this.color), this.alpha); // fading color is the same as the line's
      strokeWeight(2); // you can play with it to make line bigger/smaller

      // drawing horizontal line
      if (this.vert_vs_horiz === "horizontal") {
        let xMovement = lerp(this.x1, this.x2, this.lifespan);  // using lerp to implement smooth drawing 

        // controlling the randomness - drawing straight before middle of canvas
        if (xMovement < width / 2) {
          line(this.x1, this.y, xMovement, this.y); 
        } else {
          // after middle of canvas is reached, can change the direction
          let curveY = this.y + map(xMovement, width / 2, this.x2, 0, this.change_dir_angle);  // using map to project the initial path of line on the change in direction starting in the middle of the canvas
          line(this.x1, this.y, width / 2, this.y);  // before middle
          line(width / 2, this.y, xMovement, curveY);  // after middle
        }
      }
      // (vertical lines are handled analogously in the full sketch)
    }
  }

The Meaning of my Art Piece – The Path

Halfway through writing my code, I started to think about what it reminded me of in a more philosophical way. For me, the drawn lines look a lot like people. The set of colors represents characters and types of personalities. We have similarities, yet we all have different paths in our lives; we travel different roads and chase different dreams and goals. At the same time, very few people actually stick to their dreams and keep going down the path they believe they belong to. In my code, change_dir_angle represents the deviation from the initial path a person was pursuing. If this deviation is too big, it means the person abandoned their dream or goal. If it stays within an “adequate” range, it means the person did not give up and successfully reached it. That is why I decided to add the animation of a circle at the end: it marks the achievement of “success”. Of course, this is oversimplified, but these are the thoughts and the meanings that I put into my work.

Just for the fun and the beauty of it, perhaps, I added a mouseIsPressed check to give the option of freezing the screen and looking at how the lines were drawn.
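
A rough sketch of how that freeze can work inside draw(), assuming the line objects live in a global array (called lines here for illustration) with their own update and draw methods:

function draw() {
  // while the mouse is held down, skip clearing and updating,
  // so the current state of the lines stays frozen on screen
  if (mouseIsPressed) {
    return;
  }

  background(0);
  for (let l of lines) {
    l.update();   // advance the lerp progress / fade the alpha
    l.drawSelf(); // draw the line in its current state
  }
}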

Thank you for your attention!

Conclusion

I really enjoyed working on this assignment and was glad to see the result I achieved. It is interesting to notice how the things we like can inspire us to bring ideas and meaning into something we create. In contrast to the previous assignment, I did not try to superficially plug a philosophical context into my artwork – it came by itself.

As I have mentioned before, my primary goal for the Intro to IM class is to learn how to think outside the box and expose myself to art. I feel like this assignment brought me a little bit closer to this, so I am satisfied with the result.

Reflecting on my code, I think I did a good job of keeping it simple while still using functions I encountered for the first time. As for other things I could implement, I was thinking about making additional lines go from right to left and from bottom to top, but I decided that it would be too chaotic and less minimalistic to look at. Other than that, I could probably add an effect for lines crossing each other – something similar to the circles, but maybe slightly different.

In my future projects, I will try to stick to the same level of randomness I created today, or maybe even less. I will try to create more interactivity, especially in my midterm project. I am not quite sure what I will do yet, but I still have time to think about it. Can’t wait to see what my classmates and I will come up with!

Week 3 – Reading Reflection on The Art of Interactive Design, Ch. 1

By all means, this passage intrigued me first by its plain and candid tone, by its approach of starting by ‘trying’ to define rather than throwing around jargon, and by the way the author concedes potential counterarguments and circumvents them. As a person who loves to define terms and concepts – for a particular context, of course – I found that this very much echoes my own habit.

Hence, even though I do not 100 percent agree with the statements (although I appreciate the brave attempt to disentangle old paradigms and establish something beyond them – a new paradigm), at least for now the author has successfully captured my willingness to ‘listen’ to the message.

Ironically, despite the author denying that words are an ‘interactiveable’ medium, the words of this passage did evoke some sort of interactive process in my reading, according to the definition given (although it could again be argued that reaction is not interaction, etc.).

Indeed, I strongly concur with the idea that definitions should serve as guidance rather than constraints, particularly for concepts like interactivity that are swiftly evolving, as mentioned by the author. Sometimes, I hesitate to offer definitions in conversations for fear of appearing imposing to others, when in reality, my aim is not to assert an absolute truth but to establish a ground for my understanding.

While the author’s spectrum for evaluating interactivity from low to high, placed in the context of the interactors involved, makes much sense to me, I’d still like – not to argue, but to bring up – an idea from theater, given that the passage denies it is decently interactive. Starting again with definitions, a common academic definition of theater is ‘a relation between the actor and the spectator.’ This approach to constructing a definition actually resembles the author’s. Just as the author tackles the tricky question of what counts as an actor in an interaction by dividing interactivity into different levels, it reminds me that I could use the same approach to address the question within the definition of theater: what constitutes a spectator or an actor; do they have to be aware of their actions or not; and so on.

On top of that, I found Socrates’ words very relatable. The inactive nature of words – the creator cannot protect or explain them, or themselves, after the creation – is something that has troubled my own expression in many cases as well, whether in poetry or speech, whether with a real person or anonymously online.

Nevertheless, when it comes to what can be regarded as a strongly interactive system, the answer seems hidden within the definition already – it only depends on how we interpret those three stages. In terms of ‘listening,’ it could concern how much it can listen, how fast it can listen, how many kinds of things it can listen to, how overt or covert the listening is, plus when, where, and the other Ws. Similarly, ‘thinking’ really touches the black box of unlimited possibilities, but in the simplest words I would put it as ‘the extent to which the information listened to can be used and processed to present and serve as a whole that reflects the message of the system.’ Finally, for ‘speaking,’ it seems to me to be a matter of picking and developing the right method to communicate information – in other words, the role of media. When all three stages are carefully designed in a well-rounded manner, the system may qualify as strongly interactive.

‘Show but not tell’ goes a long way in theater and other performative arts, and maybe this time we should not only be satisfied with showing but reach beyond it – with whatever we have.

Week 3 — Reading Reflection on Interactivity

I thought the way Crawford wrote this paper was really interesting because it was structured in a relaxed, reflective manner to describe what interactivity means. To Crawford, interactivity is a spectrum based on the principle of two parties listening, thinking, and speaking. Initially, I was a bit taken aback by his definition, especially when he brought up the examples of books and dancing, but towards the end of his article I came to understand and agree with his argument. I feel interactivity should evoke an experience for the individual, and the program itself should react to what the user has input, whether through speech or physical action.

Reading his paper reminded me of modern pop-up art installations where individuals can walk through an exhibition and the art itself is what the user creates by pressing buttons or generates through movement. I feel like interactivity has come a long way, and it seems like artists are starting to take notice and incorporate it into their designs. Also, his structure of writing almost felt intentional, because he wanted the article itself to be interactive – suggesting the reader contact him by email and leaving a personal touch with the reader.

Week 3 — Water Lily Pond OOP

SHORT DESCRIPTION: 

In week 3 of Intro to IM, we were introduced to the concepts of functions, arrays, and object-oriented programming. For my project, I decided to generate a lily pad pond with a unique set of flowers, where each object bounces off the walls or off the other objects. I created two classes: one for the lily pads and one for the flowers, whose locations depend on the lily pads.

Design Concept 

Last week, I created an abstract water lily pond, and you can call me uncreative, but I just REALLY, REALLY wanted to create an aesthetically pleasing water lily pond. In last week’s assignment, the art changed rapidly, giving a sense of discomfort and urgency, and I wanted this week’s project to have a slower, more relaxed feeling. Especially for me, studying away in Abu Dhabi, life has been chaotic: settling into a new environment, experiencing new cultures and lifestyles, and adjusting to a new academic routine has been overwhelming.

As such, I am glad I chose to do a water lily pond because it kept me grounded and allowed me to relax. While I am happy with the end result, I do wish I could have added more elements to my project (rain, ripples, fish, etc.) and made it more interactive. However, due to the time constraints of my schedule, I did the best I could and added as much detail as possible with shading and colors to the elements currently on the canvas.

Coding Processes

I began my project by referencing the Ball class program we had gone over in class, since my lily pads would also be circular and stay within the boundaries of the screen. Collision functions were initially the most confusing part for me, and taking the time to digest the logic ultimately allowed me to implement a function that checks for object collisions, shown below. The function uses the distance formula to calculate the distance between the center of one object and the center of another. If that distance is less than the sum of the objects’ radii, there is an overlap, and the objects must change direction. To change direction, I simply exchange the x-y speeds between the two objects, so they move apart in opposite directions.

checkLilyCollision(otherLily) {
   // compute the distance between the current lily with the other lily 
   let distance = dist(this.lilyX, this.lilyY, otherLily.lilyX, otherLily.lilyY);
   
   // if the distances between two lilys are less than the sum of the radius 
   if (distance < this.radius + otherLily.radius){
     let tempSpeedX = this.lilySpeedX;
     let tempSpeedY = this.lilySpeedY; 
     
     // exchange direction and speed of the colliding lilies 
     this.lilySpeedX = otherLily.lilySpeedX; 
     this.lilySpeedY = otherLily.lilySpeedY;
     otherLily.lilySpeedX = tempSpeedX;
     otherLily.lilySpeedY = tempSpeedY; 
   }
 }

Additionally, I started by playing around with the arc() function in order to achieve the tiny slit in the lily pad. Below is a code sketch of the angle I wanted my lily pads to have. From there, I dove into randomizing the attributes of the lily pad class. Most properties, such as speed, rotation, location, and initial angle, were randomized using the random() or Math.random() function because I wanted each execution of the program to create a different, unique portrait.

function waterLily() {
   fill('rgb(42,140,42)'); // color of the lilypad
   arc(100, 100, 80, 80, 0, 11 * PI/6); 
 } 
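
A rough sketch of what such a randomized constructor could look like is below; I am reusing the property names from the collision function above, but the rotation property and the value ranges are placeholders of my own, not the project’s exact code.

class LilyPad {
  constructor() {
    // attributes are randomized once, so each run of the program looks different
    this.lilyX = random(50, width - 50);  // starting position
    this.lilyY = random(50, height - 50);
    this.radius = random(30, 60);         // size of the pad
    this.lilySpeedX = random(-1, 1);      // drifting speed
    this.lilySpeedY = random(-1, 1);
    this.rotation = random(TWO_PI);       // orientation of the slit
  }

  display() {
    push();
    translate(this.lilyX, this.lilyY);
    rotate(this.rotation);
    fill('rgb(42,140,42)');
    noStroke();
    // same arc as above, leaving a small slit open
    arc(0, 0, this.radius * 2, this.radius * 2, 0, 11 * PI / 6);
    pop();
  }
}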

Admittedly, I had a problem with lily pads overlapping on each execution of the program and had to rely on ChatGPT to help me resolve the issue. It suggested testing up to 1,000 attempts, checking each time whether a new lily pad object would overlap with an existing one. I changed and deleted a few lines of the code ChatGPT provided, but below is what I ended up with in my code.

function generateLily(){
  for (let i = 1; i < numLily + 1; i+= 1){
    let validPosition = false;
    let attempts = 0;
    while (attempts < maxAttempts && !(validPosition)){
      ...

      if (isLilyPositionValid(newLilyObject)){
        gLilyArr.push(newLilyObject);
        validPosition = true;
      }
      
      attempts += 1;
    }
  }
}

function isLilyPositionValid(newLilyObject){
  let initalSpacing = random(3,9)  
  // for each existing lily within the lily array 
  for (let existingLily of gLilyArr) {
    let distance = dist(newLilyObject.lilyX, newLilyObject.lilyY, existingLily.lilyX, existingLily.lilyY);
    if (distance < newLilyObject.radius + existingLily.radius + initalSpacing) {
      return false; // Overlap detected
    }
  }
  return true; // No overlap detected
}

When I completed my lily pads, I moved on to the flowers, which were my favorite and proudest section of my program. I created another class for the flowers, placed at the center of each lily pad, with a random number of petals, rotation, and combination of colors. I used the push() and pop() functions that I learned in my first assignment to save and restore the state of the canvas, alongside translate(), which moved the origin to the center of the lily pad I was currently working on. I did have a problem with the layers stacking on top of each other and the color and angle of the layers changing every frame, but in the end I was able to fix it by storing the flower’s colors, rotations, and layers in their own arrays.
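
A simplified sketch of that fix is below: the petal count, color, and rotation are picked once in the constructor so they stay the same across frames, and push()/translate()/rotate()/pop() place the petals around the pad’s center. The names and ranges are illustrative, not the exact code from the project.

class Flower {
  constructor(centerX, centerY) {
    this.x = centerX;                     // center of the host lily pad
    this.y = centerY;
    this.numPetals = floor(random(5, 9)); // chosen once, stays fixed
    this.petalColor = color(random(200, 255), random(100, 200), random(150, 220));
    this.baseRotation = random(TWO_PI);   // chosen once, stays fixed
  }

  display() {
    push();
    translate(this.x, this.y);            // move the origin to the pad's center
    rotate(this.baseRotation);
    fill(this.petalColor);
    noStroke();
    for (let i = 0; i < this.numPetals; i++) {
      rotate(TWO_PI / this.numPetals);    // step around the circle
      ellipse(0, 15, 10, 25);             // one petal
    }
    pop();                                // restore the canvas state
  }
}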

Final Design

Below is the final program for this assignment. Overall, I am happy with the movement of the lily pads and how each of the flowers turned out. I like how it is unique for each execution of the program while not being too overwhelming to the user. There were a lot of technical problems throughout the project, but the vision of a calm, pretty final piece kept me motivated to continue working.

Week 3 – Object Life Sim

final product

For this project, I would like to start by presenting the final product.

Instruction: To change the starting condition, edit the initial parameters in ‘sketch.js’

Description (this part of the text was summarized by GPT-4o from my own notes and code):

The simulation involves various instances, such as objects, foods, and sites, each with distinct behaviors. It emphasizes resource management (hunger), spatial awareness (movement and separation), and lifecycle dynamics (aging and reproduction), creating a dynamic system where objects interact with each other and their environment.

  1. Objects: These are the primary entities that move around the simulation. They have attributes like position, age, size, speed, hunger, and status (e.g., doodling, mating, eating, working). Objects can interact with other objects and food sources.
  2. Movement: Objects move based on their speed and direction. They can either follow a target (another object or food or site) or move randomly. If they encounter the edges of the simulation area, they reverse direction. They also avoid crowding by maintaining a separation distance from others.
  3. Hunger and Status: Objects experience hunger, which affects their status and behavior. When hungry, they look for food. If they consume food, their hunger decreases, and they may reproduce if conditions are favorable. Different statuses trigger different actions (e.g., eating, mating, working).
  4. Aging: Objects age over time, with their aging rate influenced by their status. For example, being full while mating speeds up aging, while being hungry slows it down. If an object’s age exceeds its maximum, it dies (a rough sketch of this hunger-and-aging bookkeeping appears after this list).
  5. Reproduction: When certain conditions are met (like being sufficiently hungry), objects can reproduce. New objects are created with attributes based on the parent object.
  6. Interaction with Food and Sites: Objects can consume food to reduce hunger and may interact with sites to produce extra food on the canvas. Reaching food or sites changes their status and can trigger further actions.
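
A rough sketch of the hunger-and-aging bookkeeping described in items 3 and 4 might look like the snippet below; the thresholds, rates, and property names are illustrative placeholders rather than the project’s actual values.

// Illustrative per-frame update for hunger, status, and aging.
// Thresholds, rates, and names are placeholders, not the project's values.
class SimObject {
  constructor() {
    this.age = 0;
    this.maxAge = 1000;
    this.hunger = 0;          // builds up over time, reset by eating
    this.status = 'doodling';
  }

  update() {
    this.hunger += 0.5;                     // hunger grows every frame

    if (this.hunger > 50) {
      this.status = 'eating';               // hungry objects go look for food
    }

    // aging rate depends on status: being hungry slows aging,
    // being full (e.g. while mating) speeds it up
    let agingRate = this.hunger > 50 ? 0.5 : 1.5;
    this.age += agingRate;

    return this.age < this.maxAge;          // false means the object dies
  }
}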

concept

While there are certainly many inspirations, including Simmiland (a card-based god game), John Conway’s Game of Life, the path-drawing project from Reas’ presentation, and p5 reference projects (bouncing balls, flocking behavior, Perlin noise, etc.), the idea first came to me as a reminder to think about the nature of simulation as well as how routine life has alienated humans into objects subjected to rules – hence the title “Object Life Sim.”

Figure 1: Simmiland (The playing God idea and the color scheme reference)

The paradox lies here: the nature of simulation suggests that it tries to imitate something superior, something intricate and more complex – so if life itself is already institutionalized, what’s the point of simulating it? Isn’t it going to result in an Ouroboros? Yet there’s an understated allure in simulating our surroundings and engaging with them at minimal cost, which has given rise to this very basic simulation of a life reduced to objects. Or perhaps these are, in a way, the most crucial simulations?

Figure 2: Game of Life (Resource and reproduction reference)

Another motivation to do so – to play God for a moment – emerged during our first class discussion. As we delved into the role of randomness in art, I held the belief that randomness could be an intrinsic element, present even in Mondrian’s analytical paintings or the precisely proportioned sculptures of ancient Greece. However, I was surprised by a classmate’s question of how far it is possible for art to be random. This prompted me to reconsider whether the prevalent randomness in today’s generative art detracts from its legitimacy as art. Then I came up with the analogy of the creation of the world – if the world was created by a deity with a singular act (akin to the First Cause) and then left to evolve independently, can it still be considered the deity’s creation? Similarly, if the set of algorithms behind a piece is designed by a human, and the initial set of parameters is decided by a human, is it our creation? While my stance is affirmative – I believe the eventual tangible ‘piece’ is not the art itself but separate from it, or only serves as a conduit that could be reached however we want – I would still like to pose this question for your contemplation.

CODE & Production SNIPPETS

Again, as it would be tedious to go through the structures and details in the code, I will only introduce some of the sources I used and some interesting parts of them from my perspective.

First, when it comes to reading keyboard input with keyCode, it is very useful to have this website to see how keys map to key codes. This enabled me to set up different conditions by combining the keyboard and mouse to create the control logic:

function mouseClicked() {
  // Spawn new instances at the mouse location when clicked with different keys pressed
  if (keyCode === 79) { // If the last pressed button is 'O'
    initiateObject(mouseX, mouseY); 
  } else if (keyCode === 70) { // If the last pressed button is 'F'
    foodArray.push(new Foods(mouseX, mouseY, setMaxUtility)); 
  } else if (keyCode === 83) { // If the last pressed button is 'S'
    siteArray.push(new Sites(mouseX, mouseY, setMaxUtility));
  } else {
    // If the simulation hasn't started, initiate it and create initial objects
    if (simStart === false) {
      simStart = true; // Set the simulation start flag to true
      for (let i = 0; i < initialObjectNum / 2; i++) {
         // Spawn initial objects off-screen
        initiateObject(random([0 - initialSize / 2, windowWidth + initialSize / 2]), random(windowHeight));
        initiateObject(random(windowWidth), random([0 - initialSize / 2, windowHeight + initialSize / 2]));
      }
    } 
  }
}

Another useful source is the Unicode list for emojis (yes, I learned to use emojis to draw things this time!). For example, I used it to set up random food emojis for my Foods class:

let foodIcon = ['\u{1F35E}', '\u{1F950}', '\u{1F956}', '\u{1FAD3}', '\u{1F968}', '\u{1F96F}', '\u{1F95E}', '\u{1F9C7}', '\u{1F9C0}', '\u{1F356}', '\u{1F357}', '\u{1F969}', '\u{1F953}', '\u{1F354}', '\u{1F35F}', '\u{1F355}', '\u{1F32D}', '\u{1F96A}', '\u{1F32E}', '\u{1F32F}']

class Foods {
  constructor(tempX, tempY, maxUtility, 
               tempSize = 10) {
    this.x = tempX;
    this.y = tempY;
    this.size = tempSize; // Set the initial size
    this.type = 'food';
    this.utility = random(0.5, maxUtility)
    this.status = null;
    this.icon = random(foodIcon)
  }
  
  // Display the object on canvas
  display() {
    fill('#ffd7a0'); // light background plate drawn behind the emoji
    noStroke();
    circle(this.x, this.y, this.size * this.utility + 10);
    
    textSize(this.size * this.utility);
    textAlign(CENTER, CENTER);
    text(this.icon, this.x, this.y);
  }
  
}

Next, I’d like to show two of the core functions that make my Objects move. The first finds the closest target of a given kind on the canvas, and the second does the actual math to calculate the movement. It is rather easy to have objects move directly towards a target (I only had to copy-paste a bit from my first portrait project), while including the collision algorithm and the strategies to maneuver around other objects is certainly more difficult.

  find(arrayToFind) {
    let closestPoint = null; // Placeholder for the closest point
    let minDistance = Infinity; // Start with a very large distance
    let distance; // Variable to store calculated distance
    let ix, iy; // Coordinates of items in the array

    // Function to calculate the distance between two points
    const calculateDistance = (x1, y1, x2, y2) => {
      return Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2); // Return Euclidean distance
    };

    // Iterate through the array of inquiry to find the closest object
    for (let item of arrayToFind) {
      
      ix = item.x; 
      iy = item.y;
      
      if ((ix === this.x) && (iy === this.y)) { 
        distance = Infinity; // Set distance to infinity if it's the same object
      } else {
        distance = calculateDistance(this.x, this.y, ix, iy); // Calculate distance to the item
      }
      
      // Update the closest point if the current distance is smaller
      if (distance < minDistance) {
        minDistance = distance; // Update minimum distance
        this.destObject = item; // Set the closest object as the destination
      }
    }
  }

Initially, my strategy after a collision was to let the objects nudge a bit randomly, which resulted in them sticking in place with jerky behavior. Then I set up a strategy to let the objects escape in the direction opposite to the collision – an idea borrowed from bouncing balls. However, since in my simulation moving toward the target is still a necessity after escaping, this resulted in the objects sticking together in a line. So I modified the strategy to slide around the collided objects, but that still didn’t work, leading to the objects rotating in place. At the end of the day, I worked through the flocking algorithm mentioned in class and borrowed its separation behavior to combine with my sliding behavior, which finally made the piece work.

  move(arrayToFind) {
    this.find(arrayToFind); // Find the target object
    
    // Setup destination coordinates from the target object
    this.destX = this.destObject.x;
    this.destY = this.destObject.y;

    // Calculate the distance to the destination
    let dx = this.destX - this.x;
    let dy = this.destY - this.y;
    let distance = Math.sqrt(dx * dx + dy * dy);
    
    // Normalize the direction vector
    if (distance > 0) {
        this.directionX = dx / distance;
        this.directionY = dy / distance;
    } else {
        this.directionX = 0;
        this.directionY = 0;
    }

    // Calculate the next position
    let nextX = this.x + this.directionX * this.speed;
    let nextY = this.y + this.directionY * this.speed;
    
    // Check for collision with the destination object
    if (this.destObject) {
      let targetCombinedRadius = (this.size + this.destObject.size) / 2; // Adjust based on size
      let distToTarget = Math.sqrt((nextX - this.destObject.x) ** 2 + (nextY - this.destObject.y) ** 2);

      // If colliding with the target object, invoke reach
      if (distToTarget < targetCombinedRadius) {
        this.reach(); // Call reach() if colliding with the target
            
        // Slide away from the target
        let targetNormalX = (this.x - this.destObject.x) / distToTarget; // Normal vector
        let targetNormalY = (this.y - this.destObject.y) / distToTarget;

        // Calculate the sliding direction (perpendicular to the normal)
        let targetSlideX = -targetNormalY; // Rotate normal to find tangential direction
        let targetSlideY = targetNormalX;

        // Introduce a small random adjustment to sliding direction
        let targetRandomAdjustment = random(-0.1, 0.1); // Adjust as needed
        targetSlideX += targetRandomAdjustment;
        targetSlideY += targetRandomAdjustment;

        // Normalize the sliding direction
        let targetSlideDistance = Math.sqrt(targetSlideX * targetSlideX + targetSlideY * targetSlideY);
        if (targetSlideDistance > 0) {
            targetSlideX /= targetSlideDistance;
            targetSlideY /= targetSlideDistance;
        }

        // Move along the sliding direction away from the target
        this.x += targetSlideX * this.speed * 0.3; // Slide from the target
        this.y += targetSlideY * this.speed * 0.3;

        return; // Stop further movement after reaching
      }
    }
    
    // Maintain separation distance from other objects
    let separationDistance = this.size * 1.25; // Desired separation distance
    let separationForceX = 0;
    let separationForceY = 0;

    for (let other of objectArray) {
      // Skip if it's the same object or the target object
      if (other === this || other === this.destObject || other.status === 'mate') continue;

      // Calculate distance to the other object
      let distToOther = Math.sqrt((nextX - other.x) ** 2 + (nextY - other.y) ** 2);

      // If the distance is less than the desired separation distance, calculate a separation force
      if (distToOther < separationDistance) {
        let diffX = nextX - other.x;
        let diffY = nextY - other.y;
        
        // Normalize the difference vector
        if (distToOther > 0) {
            separationForceX += (diffX / distToOther) * (separationDistance - distToOther);
            separationForceY += (diffY / distToOther) * (separationDistance - distToOther);
        }

        // Sliding behavior
        let slideFactor = 0.3; // Adjust as needed for sliding strength
        let slideX = -diffY; // Perpendicular to the normal
        let slideY = diffX;

        // Normalize sliding direction
        let slideDistance = Math.sqrt(slideX * slideX + slideY * slideY);
        if (slideDistance > 0) {
            slideX /= slideDistance;
            slideY /= slideDistance;
        }

        // Apply sliding movement
        nextX += slideX * this.speed * slideFactor;
        nextY += slideY * this.speed * slideFactor;
      }
    }

    // Apply the separation force to the next position
    nextX += separationForceX;
    nextY += separationForceY;

    this.x = nextX;
    this.y = nextY;
    
    if (frameCount % 10 === 0) {
      // After updating the position
      this.positionHistory.push({ x: this.x, y: this.y });

      // Maintain the history size
      if (this.positionHistory.length > this.historyLimit) {
        this.positionHistory.shift(); // Remove the oldest position
      }
    }
    
  }

OBSERVATION

Lastly, since the project is a simulation, I believe observing its behavior matters a lot. While I did not have much time to fully explore the parameters and settings, here are a few general observations:

Figure 3: It is evident that the sites, as the source of food, attract the most paths toward them and end up surrounded.

Figure 4: As the simulation goes on, the larger objects can start to hinder the movement of the others.

Figure 5: Towards the end of a simulation, regardless of whether the objects are in a healthy state, their behavior becomes more aimless as there is no incentive left to interact.

Figure 6: The greater the average resources per area (in other words, the smaller the canvas with the same amount of resources), the longer the simulation lasts.

Week 3 – Reading Response

Based on Chris Crawford’s “The Art of Interactive Design,” a strongly interactive system should be responsive, intuitive, engaging, personalized, and provide meaningful feedback. It should react promptly to user input, be easy to understand and use, encourage active participation, adapt to individual preferences, and offer informative feedback.

To improve user interaction in my p5 sketches, I will incorporate dynamic elements, provide interactive controls, personalize experiences, give meaningful feedback, and tell a story. By introducing elements that respond to user input, offering intuitive controls, allowing users to customize their experience, giving clear and informative feedback, and creating a narrative to guide the user’s experience, I can create more engaging and interactive p5 sketches.