Week 5 – Reading Response

Reading through the passage by Golan Levin and others, I was re-introduced to the world of computer vision without being overwhelmed by technical jargon. As someone who used to engage with CV only at the library level, i.e., using OpenCV in Python, rediscovering some of the rudimentary algorithms behind it from the perspective of technology in the arts brought me genuinely fascinating ideas, compared to the old-school path-finding tasks.

The paper acknowledges how CV has traditionally been the realm of expert researchers, with applications limited to the military and law enforcement (and yes, of course, teenagers’ robotics tournaments). With advancements in software development tools, the rise of open-source communities, and the affordability of digital video hardware, CV is no longer an exclusive playground. Artists and designers are now empowered to experiment and innovate, integrating vision-based interactions into their creative projects with relative ease.

A particularly insightful section for me is the one emphasizing the importance of the physical environment in enhancing the performance of CV systems. The authors argue that optimizing lighting conditions, using retroreflective materials, and selecting appropriate camera lenses can significantly improve the accuracy and reliability of vision algorithms. In my own projects that use a webcam and process the video pixels, even though the ‘algorithm’ is simply a grayscale conversion, it is already noticeable how much the physical environment affects the outcome. The collaboration between the software and the physical setup underscores a holistic approach to designing interactive media, where both technological and artistic considerations play crucial roles.

Reflecting on the questions posed by the paper, it becomes clear how CV fundamentally differs from human vision. While humans effortlessly interpret context, recognize patterns, and infuse semantic meaning into visual data, CV relies on meticulously crafted algorithms to process pixel information without inherent understanding, creating both opportunities and challenges for us. The opportunities lie in the possibility of fine-tuning, altering, and utilizing what the computer has ‘seen’ with algorithms (in other words, to ‘think’ after the ‘listening’ and before the ‘speaking’), whereas it is hard to change the optical structure of human vision. On the other hand, this ‘thought-precedes-realization’ approach to harnessing CV could hinder artists’ imagination, as the fine line between envisioning and then realizing, versus the more intuitive manner of mixing creation and design at the same time, can easily be blurred by lines of code.

Besides, this powerful capability also introduces ethical considerations, especially when used in interactive art. The capacity for tracking and surveillance can enhance the immersive quality of art installations, making them more responsive and engaging. Yet, it also raises concerns about privacy and consent. For web-based instances like the games I’m currently working with, it is easy and necessary to ask for permission from the user, while installations and broader observational systems could skip the users’ first-hand consent. How do we balance the creative potential of CV with the need to respect individual autonomy and privacy? These questions are crucial as artists and technologists continue to push the boundaries of what interactive art can achieve.

 

Week 5 – Midterm Progress Report

intro

For my midterm project, although the Object Life Sim project seems promising to me, I do not have many ideas for furthering it, and I would rather incorporate a higher degree of interactivity. Therefore, I would like to integrate the elements that appeared in my previous attempts – both conceptually and technologically – into a new game. For this progress report, I will start by composing a standard game design document and then touch on the technical and design difficulties.

game design documentation

1. Game Overview

Title: Motion Ships
Concept:
Motion Ships will be a space-themed action game where players control a spaceship using head movements detected via a webcam. Players navigate through space environments, combat obstacles, and achieve high scores through maneuvering and sound-activated attacks.

2. Game Structure

2.1. Game Flow

  1. Start Screen
  2. Configuration Menu
  3. Gameplay
  4. End Screen
  5. Loop Back or Exit

3. User Interface

3.1. Start Screen

Title Display: “Motion Ships”
Introduction: “Control the spaceship with your head.”
Prompt: “Press any key to start.”
Functionality: Waits for any key press to transition to the Configuration Menu.

3.2. Configuration Menu

Customization Options:
    Spaceship Selection: Choice between two spaceship models.
    Background Selection: Choice among four distinct battle backgrounds.

Controls:
    Selection Interface: Allows players to select their preferred spaceship and background before starting the game.
    Functionality: After configuration, initiates gameplay.

3.3. End Screen

Score Display: Shows the player’s final score.
Functionality: Offers players the choice to restart or modify settings post-game.

4. Gameplay Mechanics

4.1. Player Controls

Input Methods:
    Webcam-Based Head Movement Detection: Controls spaceship movement along the X and Y axes.
    Sound Input Detection: Monitors ambient sound levels to trigger beam launches.

Control Scheme:
    Movement: The spaceship moves along the X and Y axes based on the player’s head position detected by a face-detection machine learning model.
    Rotation: The spaceship rotates in real-time to mirror the direction of movement, simulating realistic aircraft maneuvers.
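
As a rough sketch of how this mapping could work (assuming ml5.js v1’s faceMesh model with its detectStart callback and box fields; the exact API differs between ml5 versions, and the smoothing factor is an arbitrary placeholder), the detected face center can be mapped into the ship’s X/Y range and smoothed with lerp():

let faceMesh, video;
let faces = [];
let shipX = 0, shipY = 0;

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 }); // assumed ml5 v1 API
}

function setup() {
  createCanvas(640, 480, WEBGL);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  faceMesh.detectStart(video, results => { faces = results; }); // keep the callback light
}

function draw() {
  background(0);
  if (faces.length > 0) {
    // Use the bounding box center of the first face as the head position
    let box = faces[0].box; // box fields assumed from ml5 faceMesh output
    let headX = box.xMin + box.width / 2;
    let headY = box.yMin + box.height / 2;
    // Map from video coordinates to the ship's movement range (mirrored on X)
    let targetX = map(headX, 0, video.width, width / 2, -width / 2);
    let targetY = map(headY, 0, video.height, -height / 2, height / 2);
    // Smooth the motion so the ship doesn't jitter with detection noise
    shipX = lerp(shipX, targetX, 0.1);
    shipY = lerp(shipY, targetY, 0.1);
  }
  push();
  translate(shipX, shipY, 0);
  box(30); // placeholder for the spaceship model
  pop();
}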

4.2. Player Character

Spaceship Model:
Formats Supported: 3D .obj files.
Positioning:
    Fixed Z-Axis: The spaceship remains on a fixed plane along the Z-axis.
    Dynamic X and Y Axes: Movement is controlled via head tracking.

Energy System:
Energy Meter: Displayed on the bottom-left of the screen, indicating available energy.
Energy Consumption: Firing laser beams depletes the energy meter.
Energy Recovery: Collecting energy ores replenishes the energy meter based on the ore’s size.

4.3. Movement Mechanics

Forward Motion:
    Illusion of Movement: The spaceship appears to move forward by keeping its Z-position fixed while other objects move towards the spaceship.
    Relative Speeds: All in-game objects move at a base speed relative to the spaceship’s “forward” movement. Objects capable of independent movement have their own speed vectors in addition to the base speed.
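
A minimal sketch of this relative-motion idea (worldObjects, baseSpeed, and the per-object speed fields are placeholder names, not the project’s actual code): every object advances toward the camera each frame while the ship’s Z stays fixed.

const baseSpeed = 4;   // the ship's "forward" speed
let worldObjects = []; // meteoroids, energy ores, enemy ships, etc.

function updateWorld() {
  for (let obj of worldObjects) {
    // Everything drifts toward the camera to fake the ship's forward motion
    obj.z += baseSpeed + (obj.speedZ || 0); // independent movers add their own speed
    obj.x += obj.speedX || 0;
    obj.y += obj.speedY || 0;
  }
}

function drawWorld() {
  for (let obj of worldObjects) {
    push();
    translate(obj.x, obj.y, obj.z);
    sphere(obj.size / 2); // placeholder geometry
    pop();
  }
}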

4.4. Combat Mechanics

Attacking:
    Weapon: Laser beams automatically launched based on ambient sound levels.

Trigger Mechanism:
    Sound Input Detection: Continuously monitors the player’s microphone input.
    Threshold-Based Activation: When the ambient sound level exceeds a predefined threshold, the spaceship automatically fires laser beams.
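
A minimal sketch of this trigger using p5.sound’s AudioIn (the threshold and cooldown values are arbitrary placeholders, and fireLaser() is a hypothetical helper):

let mic;
let fireThreshold = 0.2; // to be replaced by a calibrated value
let lastFireFrame = 0;

function setup() {
  createCanvas(640, 480, WEBGL);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  // Fire only when the input level exceeds the threshold, with a short cooldown
  if (mic.getLevel() > fireThreshold && frameCount - lastFireFrame > 10) {
    fireLaser(); // hypothetical helper: spawns a beam and deducts energy
    lastFireFrame = frameCount;
  }
}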

Laser Behavior:
    Trajectory: Beams travel along the negative Z-axis from the spaceship’s X and Y positions when launching.
    Energy Cost: Each launched beam consumes a small amount of the spaceship’s energy.

Feedback Indicators:
    Visual Cue: Brief animation or flashing effect indicating that a beam has been fired due to sound input.
    Energy Indicator: Real-time update of the energy meter showing depletion upon beam firing.

4.5. Obstacles and Enemies

4.5.1. Meteoroids

Appearance: Rock-textured spherical objects.
Behavior: Typically remain fixed relative to the background.
Size Variation: Random sizes within predefined limits.

Interactions:
    Destruction: Requires a number of laser hits proportional to size to be destroyed. Upon destruction, the meteoroid shakes briefly before disappearing.
    Collision: If a meteoroid collides with the player’s spaceship, both are destroyed, resulting in game over.

4.5.2. Energy Ores

Appearance: Crystal-textured spherical objects.
Behavior: Typically remain fixed relative to the background.
Size Variation: Random sizes within predefined limits.

Interactions:
    Destruction: Requires a number of laser hits proportional to size to be destroyed. Upon destruction, the energy ore shakes briefly before disappearing, and the player’s energy is replenished based on the ore’s size.
    Collision: If an energy ore collides with the player’s spaceship, both are destroyed, resulting in game over.

4.5.3. Enemy Ships

Appearance: Alternate spaceship models not selected by the player during configuration.
Behavior: Dynamic movement across all X, Y, and Z axes relative to the base speed. Change direction randomly every few seconds.
Collision Avoidance: Actively avoid colliding with other objects except the player’s spaceship.

Interactions:
    Destruction: Requires a number of laser hits to be destroyed. Upon destruction, the enemy ship shakes briefly before disappearing.
    Collision: If an enemy ship collides with the player’s spaceship, both are destroyed, resulting in game over.

4.6. Object Spawning and Lifecycle

Spawn Mechanics:
Spawn Rate: Objects spawn at regular intervals.
Spawn Probability:
    Meteoroids and Enemy Ships: Equal likelihood.
    Energy Ores: Lower likelihood compared to meteoroids and enemy ships.

Despawn Conditions:
Objects are removed from the game once their actual Z-position surpasses the camera’s Z-position by a specified distance, ensuring optimal performance and resource management.
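
A sketch of that cleanup pass, assuming the worldObjects array from the earlier sketch and placeholder cameraZ / despawnMargin globals:

function despawnObjects() {
  // Keep only objects that haven't passed the camera by more than the margin
  worldObjects = worldObjects.filter(obj => obj.z < cameraZ + despawnMargin);
}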

5. Environment and Visual Design

5.1. Backgrounds

Selection: Players can choose from four distinct battle backgrounds during the configuration phase.
Parallax Effect: Backgrounds subtly move in response to the spaceship’s movement to create a dynamic and immersive sense of space.
Camera Perspective: Fixed camera position to maintain consistent player orientation.
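
The parallax could be sketched as a small background offset proportional to the ship’s position (shipX/shipY come from the head-tracking sketch above; the 0.05 factor and distances are placeholders):

function drawBackground(bgImage) {
  push();
  // Shift the background slightly as the ship moves, far behind the gameplay objects
  translate(-shipX * 0.05, -shipY * 0.05, -2000);
  noStroke();
  texture(bgImage);
  plane(width * 2.5, height * 2.5); // oversized so its edges never show
  pop();
}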

5.2. Lighting

Position: Fixed behind the camera.
Effect: Illuminates all objects within the 3D space, ensuring visibility and depth perception.

5.3. Graphics

3D Models:
    Spaceships: .obj models for both player and enemy ships.
    Obstacles: Spherical models for meteoroids and energy ores, with varying textures based on type (grey for meteoroids, green for energy ores).

Animations:
    Destruction Effects: Shaking animations upon object destruction before removal.
    Movement Animations: Smooth transitions and rotations for moving objects.

6. Technical Specifications

6.1. Technology Stack

Machine Learning Integration: Utilize a face-detection library (ml5.js) for webcam-based head movement tracking.
Sound Processing Library: Integrate a sound analysis library (p5.sound) to process and analyze ambient sound levels for beam activation.

6.2. Performance Considerations

Object Management: Efficient spawning and despawning of objects to minimize memory usage and ensure smooth gameplay.
Collision Detection: Algorithms for real-time detection between laser beams and obstacles/enemies, as well as between the player’s spaceship and other objects.

6.3. User Input Handling

Face Detection: Implement real-time face and head movement tracking for responsive spaceship control through webcam access.
Sound Threshold Detection: Continuously monitor ambient sound levels to trigger beam launches when thresholds are met through microphone access.

7. Additional Features and Considerations

7.1. Scoring System

Score Incrementation: Points awarded based on actions such as destroying obstacles, collecting energy ores, and surviving over time.
High Score Tracking: Persistent tracking and display of the highest score achieved across game sessions.
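
Persistence across sessions could use p5’s storeItem()/getItem() helpers (which wrap localStorage); a minimal sketch, with the key name chosen arbitrarily:

let highScore = 0;

function loadHighScore() {
  highScore = getItem('motionShipsHighScore') || 0; // getItem returns null on first run
}

function saveHighScore(finalScore) {
  if (finalScore > highScore) {
    highScore = finalScore;
    storeItem('motionShipsHighScore', highScore);
  }
}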

7.2. Audio

Background Music: Ambient space-themed music to enhance immersion.
Sound Effects:

  1. Laser Firing: Audio cues when beams are launched.
  2. Destruction: Sounds for destroying objects.
  3. Collisions: Impact sounds for collisions leading to game over.
  4. Game Over: Sound indicating the end of the game.

7.3. Accessibility

Instructions: Clear in-game instructions or tutorials to guide new players on using head movements and sound inputs for control.
Visual Indicators: HUD elements like energy meters and score displays designed for clarity and ease of reading.
Adjustable Sensitivities: Allow players to adjust head movement and sound input sensitivities to accommodate different playstyles and environments.

8. Implementation Notes

8.1. Sound-Based Beam Launching

Calibration: Provide an initial calibration phase where the game adjusts the sound input threshold based on the player’s environment to prevent unintended beam firing.
Feedback Mechanism: Visual and audio feedback when beams are launched due to sound input, helping players associate their actions with in-game events.

potential challenges

1. Real-Time Face Detection and Head Movement Tracking

Real-time video processing and face detection can be computationally intensive, leading to lag or reduced frame rates. So, implementing throttling mechanisms to control the rate of face detection processing could help overall efficiency, as sketched below.
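
One way to sketch the throttling (runFaceDetection() and updateShipFromFace() are hypothetical stand-ins for whatever detection call and mapping are actually used; with ml5’s detectStart, the same idea applies to how often the results are consumed):

const DETECTION_INTERVAL = 3; // process face data only every 3rd frame
let latestFace = null;

function draw() {
  if (frameCount % DETECTION_INTERVAL === 0) {
    runFaceDetection(face => { latestFace = face; }); // hypothetical detection call
  }
  if (latestFace) {
    updateShipFromFace(latestFace); // hypothetical mapping helper
  }
  // ...rest of the frame...
}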

2. Sound Input Detection and Processing

Differentiating intentional sound inputs (e.g., a ‘biu-biu-biu’ voice command) from ambient noise, so that beam launches are triggered without false positives, may require amplitude-based or frequency-based thresholding. It might be helpful to record the input level (RMS, LUFS, etc.) for a short period before gameplay to calculate a baseline noise level, as sketched below.
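
A sketch of such a calibration pass, reusing the mic and fireThreshold from the earlier sketch and deriving the threshold from the measured baseline plus a margin (all constants are placeholders):

let noiseSamples = [];
const CALIBRATION_FRAMES = 120; // roughly two seconds at 60 fps

function calibrate() {
  if (noiseSamples.length < CALIBRATION_FRAMES) {
    noiseSamples.push(mic.getLevel()); // collect ambient levels before gameplay
    return false;                      // still calibrating
  }
  let mean = noiseSamples.reduce((a, b) => a + b, 0) / noiseSamples.length;
  fireThreshold = mean * 3 + 0.05;     // baseline noise plus a safety margin
  return true;                         // calibration finished
}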

3. Collision Detection in 3D Space

3D collision detection is more complex and resource-intensive than the 2D checks I used in previous projects, especially with multiple moving objects and interactions. Potential solutions include the following (a minimal check combining them is sketched after this list):

  • Early Exit Strategies:
    Implement early exit conditions where collision checks are skipped if objects are too far apart to interact, conserving resources.
  • Simplify Collision Shapes – Axis-Aligned Bounding Boxes (AABB):
    For certain objects, AABB can offer a balance between simplicity and collision detection accuracy.
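
A minimal sketch combining the early-exit idea with a bounding-sphere test (object fields x, y, z, and size are assumed; a true AABB test would compare per-axis extents instead):

function isColliding(a, b) {
  // Early exit: skip the full check when the objects are clearly far apart on any axis
  let maxReach = a.size + b.size; // generous bound
  if (abs(a.x - b.x) > maxReach || abs(a.y - b.y) > maxReach || abs(a.z - b.z) > maxReach) {
    return false;
  }
  // Bounding-sphere test: collide when the centers are closer than the summed radii
  let d = dist(a.x, a.y, a.z, b.x, b.y, b.z);
  return d < (a.size + b.size) / 2;
}
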
4. Object Spawning and Lifecycle Management

Without proper management, the continuous spawning of objects can lead to memory leaks and degraded performance over time. At the same time, the spawn rate and probabilities matter for maintaining a balanced difficulty that neither overwhelms players nor underutilizes the game mechanics.

Therefore, adjusting spawn rates and object probabilities dynamically based on the player’s score or progression can be used to maintain engaging and balanced gameplay.
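
A possible sketch of such dynamic tuning (the interval formula, the probabilities, and the spawn helpers are all placeholders):

function spawnTick(score) {
  // Spawn faster as the score grows, but never faster than every 20 frames
  let spawnInterval = max(20, 90 - floor(score / 100));
  if (frameCount % spawnInterval !== 0) return;

  let r = random();
  if (r < 0.15) {
    spawnEnergyOre();   // rarer pickup (hypothetical helpers)
  } else if (r < 0.575) {
    spawnMeteoroid();   // meteoroids and enemy ships share the remaining probability equally
  } else {
    spawnEnemyShip();
  }
}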

demo version

With all that said, I developed an early demo version without fine-tuning the parameters and visual effects.

Fundamentally, the first two technical difficulties are tackled at this stage, while the rest remain untouched. Personally speaking, as this project expanded in scale, I learned about game design structure and built a highly object-oriented program with the code kept as modular as possible.

Week 4 – Ghost of Words

Intro

This week, I adapted my production from week 2 – the dot silhouette driven by the webcam input – to create a representation of the figure reflected in the webcam as well as the soul behind it in the background. It seems to me that the week 2 product lacked a message or meaning: what is the point of mirroring the webcam with dots? Then, with this week’s text generation, the answer appeared to me to be the combination of text and my existing mechanism – the mirroring mechanism symbolizes the entity, the phantom, the ghost, and the creator as well as the user, while the poems floating across the canvas reflect a piece of my soul. By doing so, the ghost of me (or you) becomes the exact pathway to discovering that piece of soul, adding incentive to the interaction.

process

I started simply by replacing the dot drawing in my week 2 product with text drawing – and obviously, the flashing words could not carry any further meaning; they only blinded the user even more than the dots did, since we intuitively try to read whatever words are presented in front of us.

Therefore, I tried another approach: displaying the poem’s lines in the background and letting the probability pixel matrix act as an alpha value matrix, this time overlaid on the text, resulting in the ghostly effect.

In the preload function, I ensure that all external resources are loaded before the sketch runs – using loadTable, I import lines of text from textLines.csv, which are used to generate the floating texts dynamically.

function preload() {
  // Load the CSV file
  textLines = loadTable('textLines.csv', 'csv', 'header'); // Adjust the path and options as needed
}

This time, I directly use the grayscale value as the alpha value since they share the same range:

function drawAlphaFilter() {
  noStroke();
  
  // Iterate through each cell in the grid
  for (let y = 0; y < k; y++) {
    for (let x = 0; x < j; x++) {
      let index = x + y * j;
      let grayValue = pixelArray[index];
      
      // Calculate alpha value
      // Ensure alphaValue is within 0-250 for better visibility
      let alphaValue = constrain(grayValue, 0, 250); 
      
      // Set fill color to background color with the calculated alpha for overlay effect
      fill(17, 38, 56, alphaValue);
      
      // Calculate the position and size of each rectangle
      let rectWidth = windowWidth / j;
      let rectHeight = windowHeight / k;
      let rectX = x * rectWidth;
      let rectY = y * rectHeight;
      
      rect(rectX, rectY, rectWidth, rectHeight);
    }
  }
}

The RGB values used in this product are extracted from my personal website: Sloth’s Slumber | Xiaotian Fan’s Collection (sloth-slumber.com).

Then, the floating texts are managed through both a class and helper functions, including:

function updateFloatingTexts() {
  // Update and display existing floating texts
  for (let i = floatingTexts.length - 1; i >= 0; i--) {
    let ft = floatingTexts[i];
    ft.update();
    ft.display();
    
    // Remove if off-screen
    if (ft.isOffScreen()) {
      floatingTexts.splice(i, 1);
      
      // Also remove from its slot
      let s = ft.slot;
      slots[s] = null; // Mark the slot as free
    }
  }
  
  // Iterate through each slot to manage floating texts
  for (let s = 0; s < totalSlots; s++) {
    if (slots[s] === null) {
      // If the slot is free, add a new floating text
      let newText = getNextText();
      if (newText) {
        let ft = new FloatingText(newText, s);
        floatingTexts.push(ft);
        slots[s] = ft; // Assign the floating text to the slot
      }
    } else {
      // If the slot is occupied, check if the tail has entered the screen
      let lastText = slots[s];
      
      if (lastText.direction === 'ltr') { // Left-to-Right
        // Check if the tail has entered the screen (x + width >= 0)
        if (lastText.x + lastText.getTextWidth() >= 0) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      } else { // Right-to-Left
        // Check if the tail has entered the screen (x - width <= windowWidth)
        if (lastText.x - lastText.getTextWidth() <= windowWidth) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      }
    }
  }
}
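
For reference, a FloatingText class that satisfies the interface used above (slot, direction, x, update(), display(), isOffScreen(), getTextWidth()) could look roughly like this – an illustrative sketch, not the exact class in my project:

class FloatingText {
  constructor(content, slot) {
    this.content = content;
    this.slot = slot;
    this.direction = slot % 2 === 0 ? 'ltr' : 'rtl'; // alternate directions per slot
    this.speed = random(0.5, 1.5);
    this.y = map(slot, 0, totalSlots - 1, 5, windowHeight - 5);
    // Start off-screen on the side the text enters from
    this.x = this.direction === 'ltr' ? -this.getTextWidth() : windowWidth;
  }

  getTextWidth() {
    textSize(24);
    return textWidth(this.content);
  }

  update() {
    this.x += this.direction === 'ltr' ? this.speed : -this.speed;
  }

  display() {
    textSize(24);
    noStroke();
    fill(255);
    text(this.content, this.x, this.y);
  }

  isOffScreen() {
    // Off-screen once the whole line has left the canvas
    return this.direction === 'ltr'
      ? this.x > windowWidth
      : this.x + this.getTextWidth() < 0;
  }
}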

Another important function concatenates lines so that the combined text spans the full windowWidth:

function getNextText() {
  // Reset index if end is reached
  if (currentLineIndex >= textLines.getRowCount()) {
    currentLineIndex = 0; // Reset to start
  }
  
  let combinedText = '';
  let estimatedWidth = 0;
  let tempIndex = currentLineIndex;
  let concatenationAttempts = 0;
  let maxAttempts = textLines.getRowCount(); // Prevent infinite loops
  
  // Loop to concatenate lines until the combined text is sufficiently long
  while (estimatedWidth < windowWidth * TEXT_MULTIPLIER && concatenationAttempts < maxAttempts) {
    let textLine = textLines.getString(tempIndex, 0);
    if (!textLine) break; // If no more lines available
    
    combinedText += (combinedText.length > 0 ? ' ' : '') + textLine;
    tempIndex++;
    
    // Reset if at the end of the table
    if (tempIndex >= textLines.getRowCount()) {
      tempIndex = 0;
    }
    
    // Estimate text width using p5.js's textWidth
    textSize(24); // Set a default size for estimation
    estimatedWidth = textWidth(combinedText);
    
    concatenationAttempts++;
    
    // Break if the same index is on loop to prevent infinite concatenation
    if (tempIndex === currentLineIndex) break;
  }
  
  // Update the currentLineIndex to tempIndex
  currentLineIndex = tempIndex;
  
  return combinedText;
}

Finally, since the canvas fills the whole window this time, I added a resize handler to respond to window resizing:

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  
  // Update Y positions of floating texts based on new window size
  for (let ft of floatingTexts) {
    let padding = 5; // Padding from top and bottom
    ft.y = map(ft.slot, 0, totalSlots - 1, padding, windowHeight - padding);
  }
}

To do & reflection

While this product germinated from my previous one, I believe it has the potential to be further polished, including varying the text aesthetics, making the text respond to the webcam (or the audio level), etc.

On the other hand, I would say that this product is indeed an improvement compared to week 2 as I started to incorporate my own message and character into the code instead of creating fancy (or not) demos.

Week 4 – Reading Response on The Design of Everyday Things Ch.1

Norman’s work strikes me as a meditation on the tension between what we create and how we live with it. That scissors example – seemingly mundane yet profound in its clarity and simplicity – is where the conversation begins, where the hands meet the object, and where intention aligns with ease. The holes for fingers, the sharpness of the blades, everything speaks in a language we already know, though we might not realize it. There’s something beautiful in that – that design could be so intuitive that it bypasses the conscious mind altogether. It’s as if the object whispers directions without uttering a word.

But then, there’s the watch. The digital one, with its five buttons and its cryptic functions, each one a mystery waiting to unravel or, more likely, frustrate. It’s a watch that asks for patience, for time spent reading manuals and pressing buttons in some uncertain combination. And here’s where the dissonance begins: the more we strive for mastery over our creations, the more we seem to lose the very thing we sought – simplicity, elegance, understanding.

I find it ironic, or perhaps inevitable, that as we design for greater utility, we inch closer to confusion. The refrigerator controls, for example, promise autonomy – adjust the freezer, adjust the fridge – but they deceive in their independence. They are not two but one, a system entangled with itself, a metaphor, perhaps, for how we often see ourselves within the systems we create: in control, but really, not quite. Is it possible to design something so complex that its very design dissolves into meaninglessness? I wonder.

Norman’s “system image” feels like a ghost, a presence that’s there but not fully. It’s the half-told story, the incomplete map, the set of instructions that leaves you more puzzled than when you began. And yet, we rely on it. It’s not just the object – the watch, the refrigerator, the scissors – but the way the object speaks to us or fails to. When the system image falters, we falter. It’s as though the object has lost its voice, or worse, it speaks in riddles.

There’s something almost tragic in this “paradox of technology” that Norman concludes with. We build and build, adding more features, more functions, more buttons, and yet the more we add, the more distant we become from the original purpose. The watch, once a simple device for telling time, now demands that we learn a new language just to access its many offerings. And I wonder – at what cost?

In the end, Norman gestures toward a truth that resonates deeply: design is not a solitary act. It’s a conversation, a collaboration – between engineers, marketers, manufacturers, and yes, the users, the humans who will ultimately live with these objects. And it is here, in this collaboration, where the balance must be struck. Not in the pursuit of complexity for its own sake but in the creation of objects that serve us, not enslave us.

Week 3 – Reading Reflection on The Art of Interactive Design, Ch. 1

By all means, this passage intrigued me first by its plain and candid tone, its approach of starting by ‘trying’ to define instead of throwing out jargon, and the way the author concedes to potential counterarguments and circumvents them. As a person who loves to define terms and concepts – for a particular context, of course – I found that it much echoes my own habit.

Hence, even though I do not 100 percent agree with the statements (although I appreciate that brave attempt to disentangle and establish something beyond old paradigms, a new paradigm), at least for now, the author successfully grasped my willingness to ‘listen’ to the message.

Ironically, despite the author dismissing words as an ‘interactable’ medium, the words of the passage did evoke some sort of interactive process in my reading, according to the definition given (although it could again be argued that reaction is not interaction, etc.).

Indeed, I strongly concur with the idea that definitions should serve as guidance rather than constraints, particularly for concepts like interactivity that are swiftly evolving, as mentioned by the author. Sometimes, I hesitate to offer definitions in conversations for fear of appearing imposing to others, when in reality, my aim is not to assert an absolute truth but to establish a ground for my understanding.

While the author’s spectrum for evaluating interactivity from low to high, placed in the context of the interactors involved, makes much sense to me, I’d still like – not to argue, but – to bring up an idea from theater, given that the passage denies it is decently interactive. Again starting with definitions, an academically common definition of theater is ‘a relation between the actor and the spectator.’ This approach to constructing the definition actually resembles the author’s. As the author tackles the tricky question of what counts as an actor in an interaction by distinguishing different levels of interactivity, it reminds me that the same approach might explain the questions in the definition of theater: what constitutes a spectator or an actor; do they have to be aware of their action or not; etc.

On top of that, I found Socrates’ words very much relatable. That inactive nature of words – the creator cannot protect or explain them or themselves after the creation – is something that bothered my expression in many cases as well, whether in poetry or speech, whether with a real person or anonymously online.

Nevertheless, when it comes to what can be regarded as a strongly interactive system, the answer seems hidden within the definition already – it only depends on how we interpret those three stages. In terms of ‘listening,’ it could concern how much it can listen, how fast, to how many types of things, how overtly or covertly, plus when, where, and the other Ws. Similarly, with ‘thinking,’ it really touches the black box of unlimited possibilities. But in the simplest words, I would put it as ‘the extent to which the information listened to can be used and processed to present and serve as a whole that reflects the message of the system.’ Eventually, for the ‘speaking,’ it seems to me a matter of picking and developing the niche method of communicating information, i.e., the role of the medium. When all three stages are carefully designed in a well-rounded manner, it may qualify as a strongly interactive system.

‘Show, don’t tell’ goes a long way in theater and other performative arts, and maybe this time we should not only be satisfied with showing but reach beyond it – with whatever we have.

Week 3 – Object Life Sim

final product

For this project, I would like to start by presenting the final product.

Instruction: To change the starting condition, edit the initial parameters in ‘sketch.js’

Description (This part of the text is summarized by GPT-4o from my other words and codes):

The simulation involves various instances, such as objects, foods, and sites, each with distinct behaviors. It emphasizes resource management (hunger), spatial awareness (movement and separation), and lifecycle dynamics (aging and reproduction), creating a dynamic system where objects interact with each other and their environment.

  1. Objects: These are the primary entities that move around the simulation. They have attributes like position, age, size, speed, hunger, and status (e.g., doodling, mating, eating, working). Objects can interact with other objects and food sources.
  2. Movement: Objects move based on their speed and direction. They can either follow a target (another object or food or site) or move randomly. If they encounter the edges of the simulation area, they reverse direction. They also avoid crowding by maintaining a separation distance from others.
  3. Hunger and Status: Objects experience hunger, which affects their status and behavior. When hungry, they look for food. If they consume food, their hunger decreases, and they may reproduce if conditions are favorable. Different statuses trigger different actions (e.g., eating, mating, working).
  4. Aging: Objects age over time, with their aging rate influenced by their status. For example, being full while mating speeds up aging while being hungry slows it down. If an object’s age exceeds its maximum, it dies.
  5. Reproduction: When certain conditions are met (like being sufficiently hungry), objects can reproduce. New objects are created with attributes based on the parent object.
  6. Interaction with Food and Sites: Objects can consume food to reduce hunger and may interact with sites to produce extra food on the canvas. Reaching food or sites changes their status and can trigger further actions.
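
As a schematic of the hunger/aging step described in points 3–5 (the numbers and field names here are illustrative rather than the project’s exact parameters):

  // Inside the Objects class: one simulation step of hunger, aging, and death
  step() {
    this.hunger += 0.2;                            // hunger grows every frame
    if (this.hunger > 50) this.status = 'eating';  // go look for food when hungry

    // Status-dependent aging: mating while full ages faster, being hungry slows aging down
    let agingRate = this.status === 'mate' ? 1.5 : (this.hunger > 50 ? 0.5 : 1);
    this.age += agingRate;

    if (this.age > this.maxAge) {
      this.dead = true;                            // removed from objectArray elsewhere
    }
  }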

concept

While there are certainly many inspirations, including Simmiland (a playing-god card game), John Conway’s Game of Life, the path-drawing project from Reas’ presentation, and p5 reference projects (bouncing balls, flocking behavior, Perlin noise, etc.), the idea first came to me as a reminder to think about the nature of simulation as well as how routinized life has alienated humans into objects subjected to rules – hence the title “Object Life Sim.”

Figure 1: Simmiland (The playing God idea and the color scheme reference)

The paradox lies here: the nature of simulation suggests that it is trying to imitate something superior, something intricate and more complex, so if life itself is already institutionalized, what is the point of simulating it? Isn’t it going to result in an Ouroboros? Yet, there’s an understated allure in simulating our surroundings and engaging with them at minimal cost, which has given rise to this very basic simulation of a life reduced to objects. Or perhaps these are, in a way, the most crucial simulations?

Figure 2: Game of Life (Resource and reproduction reference)

Another motivation to do so – to play God for a moment – emerged during our first class discussion. As we delved into the role of randomness in art, I held the belief that randomness could be an intrinsic element, present even in Mondrian’s analytical paintings or the precisely proportioned sculptures of ancient Greece. However, I was surprised by a classmate’s idea of how far art can be random. This prompted me to reconsider whether the prevalent randomness in today’s generative art detracts from its legitimacy as art. Then I came up with the analogy of the creation of the world – if the world was created by a deity with a singular act (akin to the First Cause) and then left to evolve independently, can it still be considered the deity’s creation? Similarly, if the set of algorithms behind a piece is designed by a human, and the initial set of parameters is decided by a human, is it our creation? While my stance is affirmative – I believe the eventually tangible ‘piece’ is not the art itself but separate from it, or only acts as a conduit and could be reached however we want – I would still like to pose this question for your contemplation.

CODE & Production SNIPPETS

Again, as it would be tedious to go through the structures and details in the code, I will only introduce some of the sources I used and some interesting parts of them from my perspective.

First, when it comes to reading the keyboard inputs with keyCode, it is very useful to have this website to know how the keys are linked to key codes. This enables me to set up different conditions by combining keyboard and mouse together to create the control logic:

function mouseClicked() {
  // Spawn new instances at the mouse location when clicked with different keys pressed
  if (keyCode === 79) { // If the last pressed button is 'O'
    initiateObject(mouseX, mouseY); 
  } else if (keyCode === 70) { // If the last pressed button is 'F'
    foodArray.push(new Foods(mouseX, mouseY, setMaxUtility)); 
  } else if (keyCode === 83) { // If the last pressed button is 'S'
    siteArray.push(new Sites(mouseX, mouseY, setMaxUtility));
  } else {
    // If the simulation hasn't started, initiate it and create initial objects
    if (simStart === false) {
      simStart = true; // Set the simulation start flag to true
      for (let i = 0; i < initialObjectNum / 2; i++) {
         // Spawn initial objects off-screen
        initiateObject(random([0 - initialSize / 2, windowWidth + initialSize / 2]), random(windowHeight));
        initiateObject(random(windowWidth), random([0 - initialSize / 2, windowHeight + initialSize / 2]));
      }
    } 
  }
}

Another useful source is the Unicode list of emojis (yes, I learned to use emojis to draw things this time!). For example, I used it to set up random food emojis for my Foods class:

let foodIcon = ['\u{1F35E}', '\u{1F950}', '\u{1F956}', '\u{1FAD3}', '\u{1F968}', '\u{1F96F}', '\u{1F95E}', '\u{1F9C7}', '\u{1F9C0}', '\u{1F356}', '\u{1F357}', '\u{1F969}', '\u{1F953}', '\u{1F354}', '\u{1F35F}', '\u{1F355}', '\u{1F32D}', '\u{1F96A}', '\u{1F32E}', '\u{1F32F}']

class Foods {
  constructor(tempX, tempY, maxUtility, 
               tempSize = 10) {
    this.x = tempX;
    this.y = tempY;
    this.size = tempSize; // Set the initial size
    this.type = 'food';
    this.utility = random(0.5, maxUtility)
    this.status = null;
    this.icon = random(foodIcon)
  }
  
  // Display the object on canvas
  display() {
    fill('#ffd7a0'); // Background circle color behind the food emoji
    noStroke();
    circle(this.x, this.y, this.size * this.utility + 10);
    
    textSize(this.size * this.utility);
    textAlign(CENTER, CENTER);
    text(this.icon, this.x, this.y);
  }
  
}

Next, I’d like to show two of the core functions that make my Objects move. The first finds the closest target of a given kind on the canvas, and the second does the exact math to calculate the movements. It is rather easy to have the objects move directly towards a target (I only had to copy-paste a bit from my first portrait project), while adding the collision algorithm and the strategies to maneuver around obstacles was certainly more difficult.

  find(arrayToFind) {
    let closestPoint = null; // Placeholder for the closest point
    let minDistance = Infinity; // Start with a very large distance
    let distance; // Variable to store calculated distance
    let ix, iy; // Coordinates of items in the array

    // Function to calculate the distance between two points
    const calculateDistance = (x1, y1, x2, y2) => {
      return Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2); // Return Euclidean distance
    };

    // Iterate through the array of inquiry to find the closest object
    for (let item of arrayToFind) {
      
      ix = item.x; 
      iy = item.y;
      
      if ((ix === this.x) && (iy === this.y)) { 
        distance = Infinity; // Set distance to infinity if it's the same object
      } else {
        distance = calculateDistance(this.x, this.y, ix, iy); // Calculate distance to the item
      }
      
      // Update the closest point if the current distance is smaller
      if (distance < minDistance) {
        minDistance = distance; // Update minimum distance
        this.destObject = item; // Set the closest object as the destination
      }
    }
  }

Initially, my strategy after a collision was to let the objects nudge a bit randomly, which resulted in them sticking in place with jerky behavior. Then, I set up a strategy to let the objects escape in the direction opposite to the collision – an idea borrowed from bouncing balls. However, since in my simulation moving toward the target is still a necessity after escaping, this resulted in the objects sticking in a line. So, I modified the strategy to slide around the collided objects, but it still didn’t work, leading to the objects rotating in place. At the end of the day, I worked through the flocking-behavior algorithm mentioned in class and borrowed its separation force to combine with my sliding behavior and put together the piece.

  move(arrayToFind) {
    this.find(arrayToFind); // Find the target object
    
    // Setup destination coordinates from the target object
    this.destX = this.destObject.x;
    this.destY = this.destObject.y;

    // Calculate the distance to the destination
    let dx = this.destX - this.x;
    let dy = this.destY - this.y;
    let distance = Math.sqrt(dx * dx + dy * dy);
    
    // Normalize the direction vector
    if (distance > 0) {
        this.directionX = dx / distance;
        this.directionY = dy / distance;
    } else {
        this.directionX = 0;
        this.directionY = 0;
    }

    // Calculate the next position
    let nextX = this.x + this.directionX * this.speed;
    let nextY = this.y + this.directionY * this.speed;
    
    // Check for collision with the destination object
    if (this.destObject) {
      let targetCombinedRadius = (this.size + this.destObject.size) / 2; // Adjust based on size
      let distToTarget = Math.sqrt((nextX - this.destObject.x) ** 2 + (nextY - this.destObject.y) ** 2);

      // If colliding with the target object, invoke reach
      if (distToTarget < targetCombinedRadius) {
        this.reach(); // Call reach() if colliding with the target
            
        // Slide away from the target
        let targetNormalX = (this.x - this.destObject.x) / distToTarget; // Normal vector
        let targetNormalY = (this.y - this.destObject.y) / distToTarget;

        // Calculate the sliding direction (perpendicular to the normal)
        let targetSlideX = -targetNormalY; // Rotate normal to find tangential direction
        let targetSlideY = targetNormalX;

        // Introduce a small random adjustment to sliding direction
        let targetRandomAdjustment = random(-0.1, 0.1); // Adjust as needed
        targetSlideX += targetRandomAdjustment;
        targetSlideY += targetRandomAdjustment;

        // Normalize the sliding direction
        let targetSlideDistance = Math.sqrt(targetSlideX * targetSlideX + targetSlideY * targetSlideY);
        if (targetSlideDistance > 0) {
            targetSlideX /= targetSlideDistance;
            targetSlideY /= targetSlideDistance;
        }

        // Move along the sliding direction away from the target
        this.x += targetSlideX * this.speed * 0.3; // Slide from the target
        this.y += targetSlideY * this.speed * 0.3;

        return; // Stop further movement after reaching
      }
    }
    
    // Maintain separation distance from other objects
    let separationDistance = this.size * 1.25; // Desired separation distance
    let separationForceX = 0;
    let separationForceY = 0;

    for (let other of objectArray) {
      // Skip if it's the same object or the target object
      if (other === this || other === this.destObject || other.status === 'mate') continue;

      // Calculate distance to the other object
      let distToOther = Math.sqrt((nextX - other.x) ** 2 + (nextY - other.y) ** 2);

      // If the distance is less than the desired separation distance, calculate a separation force
      if (distToOther < separationDistance) {
        let diffX = nextX - other.x;
        let diffY = nextY - other.y;
        
        // Normalize the difference vector
        if (distToOther > 0) {
            separationForceX += (diffX / distToOther) * (separationDistance - distToOther);
            separationForceY += (diffY / distToOther) * (separationDistance - distToOther);
        }

        // Sliding behavior
        let slideFactor = 0.3; // Adjust as needed for sliding strength
        let slideX = -diffY; // Perpendicular to the normal
        let slideY = diffX;

        // Normalize sliding direction
        let slideDistance = Math.sqrt(slideX * slideX + slideY * slideY);
        if (slideDistance > 0) {
            slideX /= slideDistance;
            slideY /= slideDistance;
        }

        // Apply sliding movement
        nextX += slideX * this.speed * slideFactor;
        nextY += slideY * this.speed * slideFactor;
      }
    }

    // Apply the separation force to the next position
    nextX += separationForceX;
    nextY += separationForceY;

    this.x = nextX;
    this.y = nextY;
    
    if (frameCount % 10 === 0) {
      // After updating the position
      this.positionHistory.push({ x: this.x, y: this.y });

      // Maintain the history size
      if (this.positionHistory.length > this.historyLimit) {
        this.positionHistory.shift(); // Remove the oldest position
      }
    }
    
  }

OBSERVATION

Lastly, since the project is a simulation, I believe observing its behaviors matters a lot. While I did not have much time to fully explore the parameters and settings, here are a few general observations:

Figure 3: It is evident that the sites, as the source of food, attract the most paths toward them and end up surrounded.

Figure 4: As the simulation goes on, the larger objects can start to hinder the movement of the others.

Figure 5: Towards the end of a simulation, whether or not the objects are in a healthy state, their behavior becomes more aimless as there is no incentive to interact.

Figure 6: The greater the average resources per area (in other words, a smaller canvas with the same amount of resources), the longer the simulation lasts.

Week 2 – Reading Reflection ‘Randomness and Control on Generative Arts and Beyond’

By all means, what does ‘generative’ stand for? And what about ‘random’, ‘chaos’, ‘definitive’, ‘rules’, ‘control’, ‘freedom’, ‘certainty’, and so on?

Before getting into the details of Reas’ presentation, these questions occupied my mind. I would say that this week’s production had already raised these questions for me, and the philosophy Reas presents resonates exactly with them. Binary systems can be hazardous in their tendency to oversimplify, while remaining a very convenient tool for us humans to reference the world and reality. That applies to the balance between art and artist (as I touched on a bit in the first week’s production), the balance between randomness and certainty, the balance between aesthetics and technicalities (which I have encountered in many roles – as a guitarist, a mixing engineer, a calligrapher, and here as a coder-artist), etc.

One of the observations I made from the examples concerns the human physicality in that Mondrian painting, which presents the well-known seeming (or actually, as intended) abstraction of objectiveness. As a musician, I found it very interesting to learn that boogie-woogie music served as an inspiration for it. However, the great representation of something objective emerged exactly from the human strokes’ incapability to reach definite control. In that painting, the refined thoughts and emotions are spiritual products of the mind rather than of the body. On the other hand, the common lack of spiritual physicality these days may encourage us to also gaze from the opposite perspective and harness a to-be-decided set of definitiveness to praise the chance in our physicalities. In simpler words (or rather, in this specific context), the rules of computation and programming are grounds for flourishing randomness in the arts – in other words, for a physicality we humans could imagine, appreciate, adapt to, and reiterate.

This can be seen in many other examples in the presentation. From the deconstruction and reconstruction of figurative, symbolic, representative fragments or captures of architecture following definitive rules, to music derived from the Book of Changes, chance built on rules and rules built on chance intersect with each other. In fact, the most amusing and providential discovery I made there is the irony of the Book of Changes in this particular context. While named the Book of Changes (Yi Jing, 易经), this ‘book’ is often accused of superstition, yet it essentially represents the ancient wisdom of finding patterns, trends, and rules of the world in order to make predictions and guide life. That being said, the juxtaposition of chance and rule has, in a sense, been ubiquitous ever since.

Chance that is always planned and always surprising.

I believe this quote, said earlier by Reas, goes perfectly with the latest examples demonstrated in the presentation. While we recognize the power of randomness in certainties and of certainties in randomness, to what extent something is random or certain may stretch our attention further – as my opening wonderings suggest – and the question ‘where is the optimum balance between total randomness and complete control?’ lies in this middle ground. That being said, there is no way to maneuver around the definitions of randomness and control – or at least, while maneuvering, their multiplicity should be embraced. For example, in their second-to-last recent project, the increase in randomness was realized by introducing more ‘randomizable parameters.’ Regarding merely the result, the randomness was certainly raised from the viewpoint of human perception; however, this result was achieved by adding more control (more controllable code) to the program. In this case, I would rather perceive this ‘conflict’ as a mutual succession from both sides – our technical attempts granting ideological and artistic fruits.

To end this reflection, I’d like to leave a question about ‘randomness in isolation.’ To my current understanding, there is no such thing as isolation in this field of mutual interaction – whichever side we pick at the beginning. Eventually, the two merge and serve a broader intention, purpose, or collective, and this will probably become one of my rules of thumb in the future.

So, on top of that, where could isolation stand?

Week 2 – Maneuvering Around Difficulties

Intro

Sometimes, unexpected pop-ups may not be serendipities but discoveries of limits that propel us to set off on a new adventure.

My endeavor this time didn’t turn out to be very smooth in terms of realizing my initial vision, and the result deviates from my expectations. On the other hand, I believe this sort of experience is quite beneficial as a lesson learned to guide future planning.

As my initial vision (or product concept) changed many times along the way, it’s rather difficult to start this documentation with a definitive concept stated up front. Therefore, I will briefly touch on the concept of each prototype alongside my process description.

process

A Definitive Heart in 3D Space

In the first stage of development, my goal was to construct a framework of basic interactions that could satisfy my needs later on – which I attempted, but which eventually did not end up in my final product. Hence, I started from the idea of generating a visible coordinate system in 3D space and wrote a basic ‘arrow generating and shooting’ mechanism.

The loops in this prototype are mostly for displaying every ‘arrow’ I stored in an array and the axis plane in each frame.

let arrowArray = [];

function draw() {
...
// Display all arrows in arrowArray for 1 frame
  // Iterate backwards so splicing doesn't skip elements
  for (let i = arrowArray.length - 1; i >= 0; i--) {
    let arrow = arrowArray[i];
    if (arrow._3Dz < -50) {
      arrowArray.splice(i, 1); // Remove the arrow that is behind the plane
    } else {
      arrow.arrowDisplay(); // A method in the Arrow class
    }
  }
}

// Draw axis plane on a Graphics object
function drawAxis() {
  // translate to 2D coordinate system
  axis.translate(-axis.width / 2, -axis.height / 2);
  
  axis.background(10);
  
  // draw the axises
  axis.stroke('white');
  axis.line(0, axis.height / 2, axis.width, axis.height / 2);
  axis.line(axis.width / 2, 0, axis.width / 2, axis.height);
  
  // draw the marks on the y axis
  for (let i = 0; i < axis.height; i += axis.height / 20) {
    axis.line(axis.width / 2, 
         0 + i, 
         axis.width / 2 + 5, 
         0 + i);
  }
  
  // draw the marks on the x axis
  for (let i = 0; i < axis.width; i += axis.width / 20) {
    axis.line(0 + i, 
         axis.height / 2,
         0 + i,
         axis.height / 2 - 5);
  }
}

Here, I adopted a mathematical function whose graph has the appearance of a heart to test out the prototype. As the heart shape can have multiple y values for one x value, I used a parameter on the unit circle to generate the coordinates of points on the curve.

// Heart shape function
function functionHeartX(t) {
  return 160 * pow(sin(t), 3);
}

function functionHeartY(t) {
  return 150 * cos(t) - 55 * cos(2 * t) - 30 * cos(3 * t) - 10 * cos(4 * t);
}

Besides, I also tried some basic lighting to tint the ‘arrows’ with a reddish-pink color. It was quite interesting to see how the ambient (‘gamma’) factor affects the actual light color.

lights();
ambientLight(50); // as a gamma factor
pointLight(
  255, 0, 0, // color
  0, 0, 0 // position
);

A Probability Heart in 2D Space

Then, I started a new prototype specifically to generate random positions based on a probability distribution in 2D space (in order to adapt it later into 3D space). Multiple probability functions were tested at this stage, including:

// Gaussian probability function
function gaussianProbability(distance) {
  if (distance > maxDistance) {
    return 0; // No points beyond maxDistance
  }
  return exp(-pow(distance, 2) / (2 * pow(sigma, 2))); // Gaussian decay
}

// Quadratic probability function
function quadraticProbability(distance) {
  if (distance > maxDistance) {
    return 0; // No points beyond maxDistance
  }
  return max(0, 1 - pow(distance / maxDistance, 2)); // Quadratic decay
}

It is also worth noting that the default random() function can be used like this to realize the probability:

// Add a point based on probability
if (random() < probability) {
  points.push(createVector(x + width / 2, -y + height / 2)); // Store the point
}

A Probability Heart in 3D Space

After that, it was much easier to adapt the probability parts into the 3D version (to be presented as a cliché gift in a long-distance relationship):

A Grayscale Webcam Downgrader

# Please access the p5js page directly from the instance to allow webcam/mic usage for the following demonstration.

On top of that, I started to experiment with the webcam as an input:

// Create a video capture from the webcam
video = createCapture(VIDEO, { flipped:true });
video.hide(); // Hide the default video element that appears under the canvas

My intentions at this stage were: 1. to convert the video into grayscale values instead of RGB (as the grayscale values can later be mapped to probabilities); 2. to downgrade the video resolution into a mosaic (since, for probability generation purposes, it saves a lot of time and is even more beneficial for depicting the overall shape that I planned to deliver with a collection of ‘arrows’).

To achieve the first goal, we can operate directly on the pixel array that holds the RGB values of each pixel of each video frame, in O(n) time (instead of calling the filter() method, which only applies a display effect to the video stream when it is shown).

function videoToGray() {
  // Load the pixels from the video
  video.loadPixels();

  // Check if the video has pixels loaded
  if (video.pixels.length > 0) {
    // Convert to grayscale
    for (let i = 0; i < video.pixels.length; i += 4) {
      let r = video.pixels[i];     // Red
      let g = video.pixels[i + 1]; // Green
      let b = video.pixels[i + 2]; // Blue

      // Calculate grayscale value
      let gray = (r + g + b) / 3;

      // Set the pixel color to the grayscale value
      video.pixels[i] = gray;       // Red
      video.pixels[i + 1] = gray;   // Green
      video.pixels[i + 2] = gray;   // Blue
      // pixels[i + 3] stays the same (Alpha)
    }

    // Update the video pixels
    video.updatePixels();
  }
}

As for the second goal, however, it was also at this stage that I did a lot of redundant technical work. I started writing my own method for downgrading the video resolution right away, without checking what the video.size() method offers, and ended up with a function that loops through the pixel array block by block with four nested loops (which became very time-consuming when the original video resolution was relatively high):

function videoToMosaic(mosaicSize) {
  // Load the pixels from the video
  video.loadPixels();
  
  // Check if the video has pixels loaded
  if (video.pixels.length > 0) {
    // Clear the mosaicPixels array
    mosaicPixels = new Uint8ClampedArray(video.pixels.length);
    
    // Loop through the canvas in blocks
    for (let y = 0; y < height; y += mosaicSize) {
      for (let x = 0; x < width; x += mosaicSize) {
        // Calculate the average color for the block
        let r = 0, g = 0, b = 0;
        let count = 0;

        // Loop through the pixels in the block
        for (let j = 0; j < mosaicSize; j++) {
          for (let i = 0; i < mosaicSize; i++) {
            let pixelX = x + i;
            let pixelY = y + j;

            // Check if within bounds
            if (pixelX < width && pixelY < height) {
              let index = (pixelX + pixelY * video.width) * 4;
              r += video.pixels[index];       // Red
              g += video.pixels[index + 1];   // Green
              b += video.pixels[index + 2];   // Blue
              count++;
            }
          }
        }

        // Calculate average color
        if (count > 0) {
          r = r / count;
          g = g / count;
          b = b / count;
        }

        // Set the color for the entire block in the mosaicPixels array
        for (let j = 0; j < mosaicSize; j++) {
          for (let i = 0; i < mosaicSize; i++) {
            let pixelX = x + i;
            let pixelY = y + j;

            // Check if within bounds
            if (pixelX < width && pixelY < height) {
              let index = (pixelX + pixelY * video.width) * 4;
              mosaicPixels[index] = r;        // Set Red
              mosaicPixels[index + 1] = g;    // Set Green
              mosaicPixels[index + 2] = b;    // Set Blue
              mosaicPixels[index + 3] = 255;   // Set Alpha to fully opaque
            }
          }
        }
      }
    }
  }
}

At the end of the day, it turned out that I could simply lower the capture resolution with the video.size() method:

video.size(50, 50); // Set the size of the video

Random Points Generated Based on Probability

Next, I wrote a prototype to see how the mosaic of probabilities could guide the random generation. The core code at this stage maps the x-y coordinates on the canvas to the mosaic pixel array and maps the grayscale value to a probability (the greater the grayscale value, the lower the probability – the brighter the mosaic cell, the fewer the points):

function mapToPixelIndex(x, y) {
  // Map the centered canvas coordinates to mosaic grid indices
  // (j and k hold the mosaic's width and height)
  let pixelX = Math.floor((x + windowWidth / 2) * j / windowWidth);
  let pixelY = Math.floor((y + windowHeight / 2) * k / windowHeight);
  
  // Ensure the pixel indices are within bounds
  pixelX = constrain(pixelX, 0, j - 1);
  pixelY = constrain(pixelY, 0, k - 1);
  
  // Convert 2D indices to 1D index
  return pixelX + pixelY * j;
}

function shouldDrawPoint(x, y) {
  let index = mapToPixelIndex(x, y);
  let grayscaleValue = pixelArray[index];
  
  // Convert grayscale value to probability
  let probability = 1 - (grayscaleValue / 255);
  
  // Decide to draw the point based on probability
  return random() < probability; // Return true or false based on random chance
}
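To illustrate how these helpers might be called each frame, here is a small, hypothetical wrapper (scatterPoints() is not part of the project; j, k, and pixelArray are assumed to hold the mosaic width, height, and grayscale values, as in the snippets above):

function scatterPoints(attempts) {
  // Try a batch of random candidate positions each frame; keep only those
  // that pass the probability test derived from the mosaic's grayscale values
  for (let n = 0; n < attempts; n++) {
    // Sample coordinates in the centered range expected by mapToPixelIndex()
    let x = random(-windowWidth / 2, windowWidth / 2);
    let y = random(-windowHeight / 2, windowHeight / 2);

    if (shouldDrawPoint(x, y)) {
      point(x, y); // darker mosaic cells end up receiving more points
    }
  }
}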

A Probability Webcam in 3D space

‘Eventually’, I incorporated all the components to try out my initial vision: a fluid, responsive, and vague-yet-solid representation of the webcam image, made of arrows generated according to the probabilities and flying towards an axis plane.

Unfortunately, although a vague representation could be captured after much tuning of variables – the number of arrows, their spread and speed, how long they linger on the axis plane, and so on – aided by re-mapping and stretching the probabilities towards more contrasting grayscale values, the huge number of 3D objects required to shape a figure significantly undermined the experience of running the product.

function applyHighContrast(array) {
  // Stretch the grayscale values to increase contrast
  let minVal = Math.min(...array);
  let maxVal = Math.max(...array);

  // Prevent division by zero if all values are the same
  if (minVal === maxVal) {
    return array.map(() => 255);
  }

  // Apply contrast stretching with a scaling factor
  const contrastFactor = 6; // Increase this value for more contrast
  return array.map(value => {
    // Apply contrast stretching
    let stretchedValue = ((value - minVal) * (255 / (maxVal - minVal))) * contrastFactor;
    
    // Clip the value to ensure it stays within bounds
    return constrain(Math.round(stretchedValue), 0, 255);
  });
}

Besides, the 3D space did not benefit the demonstration of this idea but hindered it: under perspective projection, arrows farther away from the focus occupy more visual space and disturb the probability distribution.

A Probability Webcam in 2D space

At the end of the day, after trying the ortho() method in the 3D version (which renders objects without perspective distortion), I realized that reconstructing a 2D version was the right choice to better achieve my goals.
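For reference, that perspective-free test boils down to a single projection call in a WEBGL sketch – a minimal, hypothetical example (not the project code):

function setup() {
  createCanvas(400, 400, WEBGL);
  // Orthographic projection: objects keep the same on-screen size
  // regardless of their distance from the camera
  ortho(-width / 2, width / 2, -height / 2, height / 2, 0, 1000);
}

function draw() {
  background(0);

  // Two boxes at different depths occupy the same visual area under ortho()
  push();
  translate(-60, 0, -300);
  box(40);
  pop();

  push();
  translate(60, 0, 0);
  box(40);
  pop();
}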

In this latest 2D version, I gave up on drawing the axis plane and introduced the idea of a probability distribution modulated by the microphone level.

function setup() {
  ...

  // Create an audio from mic
  audio = new p5.AudioIn();
  audio.start(); // start mic
}

function draw() {
  ...

  // Map the audio level to contrast factor
  let level = map(audio.getLevel(), 0, 1, 1.05, 30);
  // Apply high contrast transformation
  pixelArray = applyHighContrast(pixelArray, level);
  ...
}

function applyHighContrast(array, contrastFactor) {
  // Stretch the grayscale values to increase contrast
  let minVal = Math.min(...array);
  let maxVal = Math.max(...array);

  // Prevent division by zero if all values are the same
  if (minVal === maxVal) {
    return array.map(() => 255);
  }

  // Apply contrast stretching with a scaling factor
  return array.map(value => {
    // Apply contrast stretching
    let stretchedValue = ((value - minVal) * (255 / (maxVal - minVal))) * contrastFactor;
    
    // Clip the value to ensure it stays within bounds
    return constrain(Math.round(stretchedValue), 0, 255);
  });
}

reflection

TBH, the technical explorations did consume much of my time this week, and I only later realized that I could have thought more about the theme ‘loop’ before getting started – it now appears to be a very promising topic to delve deeper into. Nevertheless, I believe our production workflow can be like this from time to time, and it is crucial to maintain this balance throughout.

Week 1 – Self-Portrait: The Sword of Damocles

Intro

Sometimes I wonder if there is indeed some force behind coincidence – especially when those coincidences work in my favor.

Right before I set off for my adventurous journey to Abu Dhabi, I visited an exhibition in Shanghai featuring a collection of portraits and self-portraits on loan from the Centre Pompidou. From those who devoted their efforts to capturing their subjects faithfully and replicating their physical appearance, to those who poured their artistic preferences, streaks of character, habits, and egos into the paint, the portraits collectively reminded me of that recurrent theme in, I presume, every art form – the tension between the art and the artist. Aside from the cliché saying from Oscar Wilde (although his definitive statement is, even for an ambivalent person like me, attractive enough), I would stick to leaving the answer open for now.

On the other hand, even if I cannot answer the big question for now (and, in fact, perhaps forever), I can indeed establish a bit of myself within this daunting side-taking game. One of my takeaways from my first acting class today is that as narrators, despite our storytelling habits and the manner we eventually pick, it is our responsibility to Stay True to this present moment, the soul of our story. I'll not delve into the other takeaway – being courageous enough to leave others the right to tell our stories – for now, but will make use of the first one promptly.

That being said, when it comes to composing my own self-portrait, which I’ve never tried, I’d be glad to take full responsibility for representing myself at this moment. To me, this is something that demands great courage as well. Looking back at how I could have gained this courage, I’d say it was finally admitting my lack of it for so many years – in representing myself, playing music, speaking up when I’d love to, dealing with interpersonal relationships, and in living and loving – that brought me the serenity and tranquility to seek courage. It seems that for many of us, at least many people I know, as soon as the concept of being courageous and brave was instilled into our heads, we started to exert on ourselves, as well as receive from outside, the expectation to “be courageous and brave” right away. And simply, it is impossible. It is impossible to develop true courage and bravery merely by being told how a courageous person would behave, instead of honing oneself step by step, again and again, in the world of reality.

Product Concept

Despite my rationale (or, more like, my self-pep talk), it’s time to tackle the problem (if not the question), and I’ll try to put it as plainly as possible. At this moment, I’m willing to conceptualize myself into a set of symbols, figures, or whatever you may call figurative representations that could be found in real life. Although I could list the symbols I’m about to include in this portrait right here, I’d like to leave it till the end and see if my conceptualization and abstraction make sense to you.

Nevertheless, the overall concept is MEs versus an ideological black hole, which I’ve found to be the Sword of Damocles in my life. That being said, whenever the mouse is pressed, my hovering incarnations in this blended world of 2D and 3D shapes initiate their attack on the black hole despite their shaky, timid, and hyped state, setting off on a journey with no way back.

Technically speaking, using 2D shapes as my personal representatives while letting the black hole be 3D results from the simple attempt to utilize both 2D and 3D drawing/shaping functions somewhere in this project (plus, it would be quite time-consuming to abstract the symbols I’d like to include out of basic 3D shapes). However, I can also justify this choice in terms of its symbolic connotation: to reflect the nature of this colossal hazard, bane of insecurity, and ineffable abyss that lies completely out of my mortal control, I placed the threat one dimension higher than myself.

Metaphorically, the MEs’ attacking the black hole embodies the courage I’d like to attain, despite what comes next. As for the ending of this one-act play, maybe it’s the MEs being gone from this world? Or maybe it’s the MEs entering a brand-new universe. At least, if the black hole could be gone from this particular universe, it would be a happy ending for both me and the rest.

coding highlight

Despite the fact that I was not familiar with JS, I would reckon myself as someone used to product-oriented coding; in other words, I tried to utilize as much as possible from available sources to get as close to my ideal product as I could. Although, from time to time, this mindset can lead me to hasty organization and poor-looking code, I do believe it’s beneficial for realizing (at least part of) my vision quickly.

In this product, I took the black hole as a starting point. While calling the function sphere() is rather an easy deal, it did take me some time to figure out how to maintain both 2D and 3D shapes on the same canvas. The answer turned out to be using push()/pop() to create an individual drawing group. Another takeaway is the general approach to creating animation with p5.js – thinking from the perspective of frame generation, specifying the animation frame by frame. This led me to use variables to store the current status and then update them to achieve animation (see the usage of rotateX()/rotateY()). On top of that, to add texture() to the sphere, I used loadImage() in the preload() function (and an image as the background as well).

*: all image files used in this project are generated with DALL-E3

// draw the sphere with texture
function blackhole() {
  // Increment the rotation angles
  angleXrotate += 0.005; // Adjust the speed by changing this value
  angleYrotate += 0.005; // Adjust the speed by changing this value

  push(); // save the current state
  
  translate(0, -200, 0); // translate the coordinate system temporarily to place the sphere
  
  rotateX(angleXrotate); // Rotate around X-axis
  rotateY(angleYrotate); // Rotate around Y-axis
  
  texture(imgBlackhole); // adding texture to the sphere
  
  stroke('white'); // contour the sphere
  strokeWeight(0.25);
  
  sphere(100); // placing the sphere on the canvas with r = 100
  
  pop(); // return to the previous state

}
let imgBlackhole; // variable to hold image texture for the sphere
let imgUniverse; // variable to hold image for the background

function preload() {
  // load image files
  imgBlackhole = loadImage('blackhole.jpg');
  imgUniverse = loadImage('the universe.jpg');
}

As for the ME representatives, I broke them down into two parts: the static drawing and the animation. Initially, I adopted the animation approach mentioned in the black hole section, simply using functions to hold the drawing operations and calling them in the draw() function. However, as my drawings are rather complex (despite their abstract appearance), the code’s performance started to decline. Therefore, I reorganized the code by placing the drawings into p5.Graphics objects (which provide a dedicated drawing surface) to improve efficiency. This also enabled me to process each object as a whole later on. Below is one example of using a p5.Graphics object to draw an abstract guitar.

function drawGuitar() {
  let c = color('#bd6d36');
  let c1 = color('#592c0e');
  let c2 = color('#814116');
  
  // Create a graphics object guitar
  guitar = createGraphics(100, 150);

  // Draw to the graphics object guitar
  // Body of the guitar
  guitar.fill(c); // Brown color
  guitar.noStroke();
  guitar.beginShape();
  guitar.vertex(25, 100); // Start at bottom left
  guitar.bezierVertex(10, 75, 10, 50, 25, 25); // Left curve
  guitar.bezierVertex(75, 0, 75, 50, 25, 25); // Right curve
  guitar.bezierVertex(90, 50, 90, 75, 25, 100); // Bottom curve
  guitar.endShape(CLOSE);

  // Neck of the guitar
  guitar.fill(c2); // Mid-dark brown color
  guitar.rect(25, 5, 8, 80); // Neck
  
  // Sound hole
  guitar.fill(c1); // Dark sound hole
  guitar.ellipse(25, 50, 10, 25); // Draw sound hole

  // Strings
  guitar.stroke('white'); // White color for strings
  guitar.strokeWeight(1);
  for (let i = 0; i < 6; i++) {
    guitar.line(25 + 2 * i, 0, 25 + 2 * i, 60); // Draw strings
  }
  
}
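For context, a buffer created this way only pays the vertex-drawing cost once; afterwards it can be stamped onto the main canvas with image() every frame. A minimal sketch of that usage, assuming drawGuitar() has already been called (e.g., in setup()):

function draw() {
  background(20);

  // The guitar buffer is drawn as a single image – the bezier and line calls
  // inside drawGuitar() are not re-executed every frame
  image(guitar, mouseX, mouseY);
}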

That being said, as I started working on the animation for the MEs, p5.Graphics objects made it easier to instantiate them through a class in order to define different behaviors (i.e., hovering around the mouse and attacking the black hole). While I will not delve into every function I wrote for the MEs, I would like to briefly outline the operation logic. As the centerpiece, a class called Hoveringme includes six instance variables and two methods. The X/Y variables track the object’s coordinates relative to the mouse and its absolute coordinates on the canvas. The show() method is my version of the draw() function for this particular class and is called in draw() every frame. The attack() method only updates the instance’s status (whether it’s ‘hovering’ or ‘attacking’), which, as a condition, determines what show() actually does.

// the hovering MEs' class
class Hoveringme {
  constructor(relativeX, relativeY, shape) {
    this.relativeX = relativeX;
    this.relativeY = relativeY;
    this.shape = shape;
    this.status = 'hovering'; // every ME starts out in the hovering state
    this.displayedX = 0;
    this.displayedY = 0;
  }
  
  // show the MEs
  show() {
    push();
    /*
    ...
    */
    pop();
    
  }
  
  // ME to attack the blackhole
  attack() {
    this.status = 'attacking';
    
  }
  
}
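To make the operation logic concrete, here is a rough sketch (not the exact project code) of how the class could be driven from the main loop, with objectsme being the instance array introduced just below:

function draw() {
  background(0);
  blackhole(); // the textured 3D sphere from earlier

  // show() is called on every ME instance once per frame
  for (let me of objectsme) {
    me.show();
  }
}

// Pressing the mouse switches every ME from 'hovering' to 'attacking'
function mousePressed() {
  for (let me of objectsme) {
    me.attack();
  }
}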

Just to add a bit about my animations, which, as mentioned, break down into two statuses. First, the hovering status adds a small random() jitter to the coordinates offset from mouseX/mouseY:

if (this.status === 'hovering') {
  // Update the ME's position around the mouse
  this.displayedX = mouseX + this.relativeX + random(-10, 10) * 0.05;
  this.displayedY = mouseY + this.relativeY + random(-10, 10) * 0.05;
}

On the other hand, to achieve the attacking effect, the mouseX/Y has to be excluded (otherwise, the translation/movement of the object will always be relative to the mouse’s position).

else if (this.status === 'attacking') {
  let spherePosition = createVector(0, -200, 0); // Sphere's position
  let targetX = spherePosition.x + width / 2;    // Sphere's X position in 2D
  let targetY = spherePosition.y + height / 2;   // Sphere's Y position in 2D

  // Update the ME's position towards the target
  this.displayedX += (targetX - this.displayedX) * 0.05; // Move towards target X
  this.displayedY += (targetY - this.displayedY) * 0.05; // Move towards target Y
}

The other part of the attacking process is to decide whether an object has collided with the black hole and, if so, remove it from the canvas. I achieved this by holding all ME instances in an array and then, when needed, removing them from the canvas along with removing elements from the array (splice()). (For a clearer demonstration, I’ve also placed the instantiating function callingme() here.) In fact, whenever the code deals with the instances, it accesses them via the first item in the array objectsme.

let objectsme = []; // array to hold ME graphics objects

// create ME instances, add to array objectsme, and create graphics objects
function callingme() {
  guitarObject = new Hoveringme(20, 20, 'guitar');
  racketObject = new Hoveringme(-40, -60, 'racket');
  rectObject1 = new Hoveringme(-70, -10);
  rectObject2 = new Hoveringme(-80, 90);
  rectObject3 = new Hoveringme(60, -60);
  objectsme.push(guitarObject, racketObject, rectObject1, rectObject2, rectObject3); 
  drawGuitar();
  drawRacket();
 }

// in Hoveringme.show() method
if (((this.displayedX - targetX) ** 2 < 200) && ((this.displayedY - targetY) ** 2 < 200)) {
  // remove the first ME in the array once the black hole is reached
  objectsme.splice(0, 1);
}

reflections, and so on

I would say I’m quite satisfied with this production for now, both in terms of technical tryouts and basically achieving my message-conveying intention. Still, there is much room for improvement – both technically and beyond. I could have tried more 3D features, the alpha channel, typography, etc. I could have also designed a better disappearing effect for the black hole once the MEs have made their attacks. But overall, I believe this has been quite an interesting exploration.
