Week 8 – Sleep No More

intro

How tough can designing a switch be? Presumably not very – but the contrary turned out to be true. While the circuit to read a switch may be one of the simplest in the electronics world, to what extent can that switch ‘make sense’ while also being creative and ergonomically intuitive? From the distance-triggered switch I rushed in class to a coin classifier that exploits the different diameters of coins, both of the prototypes I made in the course felt too ‘basic’ in being so literal (i.e., boring). At the end of the day, starting from the concept of ‘using body parts,’ I came up with the idea of a ‘Sleepiness Detector’ that acts as an automatic alarm (visually with LEDs and sonically with a buzzer) when the user (if there is one) closes their eyes.

process

Although the first two prototypes were discarded, I would still give them credit here as a chronicle:

Still, the second prototype did give me some inspiration about using conductive fabric to form the later artificial eyelids. While the code below is relatively simple, I would say the most difficult part of this product was ‘installing’ the artificial eyelids onto my eyes – maybe this is where collaboration was a necessity.

/*
+---------+               +--------+
| Switch  |  eyes         | LED R  |
|         |               |        |
| Terminal|--- Pin 2 ---  | Anode  |--- Pin 9 (Arduino)
|         |               |        |
| Terminal|--- GND        | Cathode|--- GND (through 220Ω resistor)
+---------+               +--------+
                         
                         +--------+
                         | LED G  |
                         |        |
                         | Anode  |--- Pin 10 (Arduino)
                         |        |
                         | Cathode|--- GND (through 220Ω resistor)
                         +--------+
                         
                         +---------+
                         | Buzzer  |
                         |         |
                         | Positive|--- Pin 11 (Arduino)
                         |         |
                         | Negative|--- GND
                         +---------+
*/

#define SWITCH_PIN 2   // Pin connected to the switch
#define RED_LED_PIN 9  // Pin connected to the red LED
#define GREEN_LED_PIN 10 // Pin connected to the green LED
#define BUZZER_PIN 11   // Pin connected to the buzzer

void setup() {
    pinMode(SWITCH_PIN, INPUT_PULLUP); // Set switch pin as input with pull-up resistor
    pinMode(RED_LED_PIN, OUTPUT);       // Set red LED pin as output
    pinMode(GREEN_LED_PIN, OUTPUT);     // Set green LED pin as output
    pinMode(BUZZER_PIN, OUTPUT);        // Set buzzer pin as output
}

void loop() {
    // Read the state of the switch
    int switchState = digitalRead(SWITCH_PIN);

    if (switchState == LOW) { // Switch is ON (active low)
        digitalWrite(RED_LED_PIN, LOW);      // Turn off the red LED
        digitalWrite(GREEN_LED_PIN, HIGH);   // Turn on the green LED
        tone(BUZZER_PIN, 1000);              // Play sound at 1000 Hz
    } else { // Switch is OFF
        digitalWrite(RED_LED_PIN, HIGH);     // Turn on the red LED
        digitalWrite(GREEN_LED_PIN, LOW);    // Turn off the green LED
        noTone(BUZZER_PIN);                   // Turn off the buzzer
    }
}

schematics & illustration

Both diagrams were generated with Tinkercad.

Product Demo

Hindsight

Obviously, the method I used to install the eyelids was not wise enough to replicate easily. If this were actually to become a product, there must be some more mature way to detect whether the eyes are closed.
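
One more mature direction, as a hedged sketch (an idea, not something I built): use a webcam face-landmark model such as ml5’s faceMesh and derive an eye-openness ratio from landmark distances instead of a physical eyelid switch. The landmark indices and the threshold below are assumptions that would need calibration.

// Hypothetical sketch: decide 'eyes closed' from face-mesh keypoints
function eyesClosed(keypoints) {
  const upper = keypoints[159]; // assumed: upper-eyelid landmark
  const lower = keypoints[145]; // assumed: lower-eyelid landmark
  const left  = keypoints[33];  // assumed: one eye corner
  const right = keypoints[133]; // assumed: the other eye corner
  // Eye-aspect ratio: vertical opening normalized by eye width
  const openness = dist(upper.x, upper.y, lower.x, lower.y) /
                   dist(left.x, left.y, right.x, right.y);
  return openness < 0.15; // threshold is an assumption to calibrate
}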

Week 8 – Reading Response

It’s all about the balance – right?

Aesthetics vs. function, durability vs. lightness, artistic vs. technical – it’s so easy to fall into a binary mindset. To some extent, my intuitive retort would be: ‘Yes, that indeed makes life easier – but easier in terms of what?’ While we humans like clear frameworks and roadmaps to navigate complexity, it seems that at the end of the day, despite the urge to simply pick a side (or to polarize, to put it fancily), things – in many cases – turn out most favorably when we take a step back and find the middle ground.

In fact, it seems quite contradictory to me, especially considering Norman’s arguments as a whole: yes, we tend to be hard-headed and pick sides intuitively, and the ‘facts’ seem to praise the balanced; but to perceive that balanced beauty/harmony/coherence or whatever is induced, it comes back to our first-hand perception system (or affect, in Norman’s words).

Okay, I’ll try to step back from the philosophical rabbit hole. More practically, though: how can these inquiries benefit our – or my – practice? From poetry to music to tennis, I constantly struggle along these fine lines. And now, not surprisingly, ‘coding’ turns out to be the same. If I were to name a takeaway, then although I may not be able to point out a solution for any of these fields now or perhaps ever, it must be, again, transforming the mindset:

When we use the term balance, what is its connotation? If two or more things need to ‘be balanced,’ it sounds – at least to me – like we are assuming those things intrinsically conflict to some extent. But what if they do not conflict with each other in the first place? I’m not saying this is a new version of the ‘facts,’ but suggesting that maybe we should try to drop that juggling mindset and try as hard as possible to really, really look at what we are dealing with as a whole.


Midterm Project – Motion Ship

#####LINK TO THE GAME#####
#####CODE OF THE GAME#####
(Unfortunately, my p5 seems to be down for no reason;
the project is temporarily hosted on GitHub.)

intro

First of all, as the game design documentation in my progress report already covers the essence of my project at large, I will focus more on the improvements and specifics I made in this second stage of developing my NEXT-GEN-SOMATIC-MOTION-SENSING-SPACE-SHOOTING-ACTION game, Motion Ship.

tech and concept DEVELOPMENT

When it comes to the development of the project, I would say that there are two parts to the story: 1. To realize and polish my initial vision; 2. To make decisions in terms of removing elements from the plan or adding flavors to it (e.g., removing the audio level & mouse control to reduce the complexity of commands to only head motion and keyboard inputs).

1. Interactive Experience

As the centerpiece of the game, the realization of the concept of ‘controlling the spaceship with the player’s head motion’ was my primary objective. Although at the end of the first stage, I had achieved the basic mapping relationship between the head position detected by the ML model and the displayed position of the player spaceship in the game, there were still several awkward shortcomings, including:

  1. The ship respawned at the detected head position every frame instead of moving towards it smoothly. This was later tackled by introducing the smooth-approach logic I used in my first project.
  2. The ship’s responsiveness to head motion was too ‘authentic,’ meaning the player literally had to move drastically to control the ship instead of intuitively directing it with slight movements. This was tackled by adding motion-sensitivity factors to the mapping relationship.
  3. The ship appeared to be ‘translating’ through space (which, in terms of programming, it is) instead of reflecting the aerodynamic behavior of a real aircraft. Thus, rotations on all three axes were introduced to simulate such effects.
update(headPos) {
  if (this.toDestroy === false) {
    // Update position based on head movement (-1 to 1 mapped to screen space)
    let targetX = map(headPos.x, -1, 1, -width, width);
    let targetY = map(headPos.y, -1, 1, -height * 1.5, height * 1.5);
    this.x += (targetX - this.x) * 0.15;
    this.y += (targetY - this.y) * 0.15;
    
    // Constrain the position within the gaming zone (2.87 approx. 3 calculated from triangular perspective: fovy = 0.5, camZ = 800, shipZ = 280)
    this.x = constrain(this.x, -gamingZone.width / 3, gamingZone.width / 3);
    this.y = constrain(this.y, -gamingZone.height / 3, gamingZone.height / 3);
    
    // Update rotation based on head movement
    this.rotationX = map(-headPos.y, -1, 1, -PI / 3, PI / 3);
    this.rotationY = map(headPos.x, -1, 1, -PI / 10, PI / 10);
    this.rotationZ = map(headPos.x, -1, 1, -PI / 1.25, PI / 1.25);
    
    // Tactic engine reset
    if (this.tacticEngineOn === true) {
      let currentTime = millis();
      if (this.model === assets.models.playerShip1) {
        this.health = 100;
      } else {
        this.energy = 100;
      }
      if (currentTime - this.tacticEngineStart > 15000) {
        this.tacticEngineOn = false;
        if (this.model === assets.models.playerShip1) {
          this.health = 100;
        } else {
          this.energy = 100;
        }
      }
    }
  }
}
2. Gameplay Aspect & UI

One major awkwardness I spotted was that when the canvas aspect followed the window, the 3D spatial relationships between objects and the visual distortion tended to become uncontrollable – for example, an enemy ship could seem to be on the player’s laser trajectory in the distance, when in fact that was an illusion introduced by perspective. As a result, I devised several mechanisms to smooth out the experience (see the sketch after this list), including:

  1. Define a definite gaming zone with a constant aspect ratio (1:1) on the window (regardless of whether the window is in portrait or landscape orientation).
  2. Trigonometrically calculate and confine the objects in the 3D space in relation to the camera position.
  3. Enlarge the collision box & the speed of the lasers fired to reduce difficulty when hitting moving enemies.
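
As a rough reconstruction of the first two mechanisms (my sketch, not the project’s exact code; the fovy, camZ, and shipZ values follow the comment in the player-update snippet above):

let gamingZone;

// Keep the gaming zone square regardless of window orientation
function computeGamingZone() {
  const side = min(windowWidth, windowHeight); // constant 1:1 aspect
  gamingZone = { width: side, height: side };
}

// Half of the extent visible at depth objZ, from triangular perspective;
// e.g., visibleHalfExtent(0.5, 800, 280) is what motivates the roughly
// gamingZone.width / 3 constraint in the player's update() above.
function visibleHalfExtent(fovy, camZ, objZ) {
  return tan(fovy / 2) * (camZ - objZ);
}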

On top of that, other improvements besides the gaming zone include allowing enemy ships to launch lasers, incorporating different meteoroid models, displaying pilot logs and other info in the margin outside the gaming zone, displaying the laser and health bars within the gaming zone, etc.

3. Visual Effects

To further improve the immersiveness of the gameplay, I made four major changes:

  1. Space dust randomly generates and flies towards the player’s ship, creating a sense of speed (compared to the insufficient indication of speed when there were only enemies and obstacles flying slowly towards the player).
    class SpaceDust {
      constructor(maxParticles = 50) {
        this.maxParticles = maxParticles;
        this.particles = [];
        this.spawnRate = 2; // Number of particles to spawn each frame
        this.initParticles();
      }
    
      // Initializes the particles array with empty particles.
      initParticles() {
        for (let i = 0; i < this.maxParticles; i++) {
          this.particles.push(this.createParticle());
        }
      }
    
      /*
      Creates a single dust particle with random properties.
      @returns {Object} A particle with position, velocity, size, and lifespan.
      */
      createParticle() {
        return {
          pos: createVector(random(-gamingZone.width / 2, gamingZone.width / 2), random(-gamingZone.height / 2, gamingZone.height / 2), -random(1000, 1500)),
          vel: createVector(0, 0, random(80, 100)), // random Z speed
          size: random(2, 4),
          lifespan: random(50, 200) // Frames the particle will live
        };
      }
    
      // Updates all particles: moves them forward and resets them if necessary.
      update() {
        for (let i = 0; i < this.maxParticles; i++) {
          let p = this.particles[i];
          p.pos.add(p.vel);
          p.lifespan --;
    
          // If the particle has passed the player or its lifespan ended, reset it
          if (p.pos.z > 300 || p.lifespan <= 0) {
            this.particles[i] = this.createParticle();
          }
        }
      }
    
       // Renders all particles onto the screen.
      render() {
        push();
        // Enable additive blending for a glowing effect
        blendMode(ADD);
        for (let p of this.particles) {
          push();
          translate(p.pos.x, p.pos.y, p.pos.z);
          noStroke();
          fill(255, 255, 255, map(p.lifespan, 0, 200, 50, 255)); // Fade out based on lifespan
          sphere(p.size);
          pop();
        }
        blendMode(BLEND); // Reset to default blending
        pop();
      }
    }
  2. Vignette effect in the background to create depth instead of having all the objects floating on a plane.
    loadBackgroundWithVignette(key, path) {
      loadImage(path, (img) => {
        const vignettedImg = this.applyVignette(img);
        this.textures[key] = vignettedImg;
      });
    }
    
    applyVignette(img) {
      // Create a graphics buffer the same size as the image
      let gfx = createGraphics(img.width, img.height);
      gfx.clear();
    
      // Parameters for the vignette
      let centerX = img.width / 2;
      let centerY = img.height / 2;
      let maxDiameter = max(img.width, img.height) * 1.25;
    
      gfx.noFill();
      gfx.background(0, 0, 0, 0); // Ensure transparency
    
      gfx.blendMode(BLEND);
    
      // Draw multiple concentric ellipses to create a radial gradient
      for (let r = maxDiameter / 2; r > 0; r -= 20) {
        // Adjust alpha based on radius
        let alpha = map(r, 0, maxDiameter / 2, 40, 0); // alpha rises toward the center (max 40)
        gfx.noStroke();
        gfx.fill(0, 0, 0, alpha);
        gfx.ellipse(centerX, centerY, r, r);
      }
    
      // Convert gfx (p5.Graphics) to p5.Image
      let vignetteImage = gfx.get();
    
      // Create a copy of the original image to avoid modifying it directly
      let processedImg = img.get();
    
      // Blend the vignette image onto the processed image using MULTIPLY mode
      processedImg.blend(vignetteImage, 0, 0, vignetteImage.width, vignetteImage.height, 0, 0, processedImg.width, processedImg.height, MULTIPLY);
    
      return processedImg;
    }
  3. Parallax effect of the background to increase the responsiveness of the environment to the player’s motion.
    class Background {
      constructor(texture) {
        this.texture = texture;
        this.xOffset = 0;
        this.yOffset = 0;
        this.playerPreviousX = null;
        this.playerPreviousY = null;
        this.parallaxFactor = 250; // Adjust for parallax strength
      }
    
      update(playerX, playerY) {
        // On the first update, just record the position to avoid a huge
        // initial jump (playerPreviousX/Y start as null)
        if (this.playerPreviousX === null) {
          this.playerPreviousX = playerX;
          this.playerPreviousY = playerY;
          return;
        }

        let playerMovementX = playerX - this.playerPreviousX;
        let playerMovementY = playerY - this.playerPreviousY;
        
        // Calculate the background offset
        this.xOffset += playerMovementX * this.parallaxFactor;
        this.yOffset += playerMovementY * this.parallaxFactor;
        
        this.playerPreviousX = playerX;
        this.playerPreviousY = playerY; 
      }
    
      render() {
        push();
        translate(-this.xOffset, -this.yOffset, -5000); // Positioned far in the background
        noStroke();
        texture(this.texture);
        // Render a large plane to cover the background area
        plane(width * 7.5, height * 7.5);
        pop();
      }
    }
  4. The windshield (although only its frame) around the gaming zone to enhance the sense of an FPP piloting experience.

4. Game Flow

After hearing feedback from several friends, I decided to add an instruction page before entering the gameplay to make life easier for the players.

In addition, I also enabled the player to restart the game immediately instead of having to restart from scratch or reconfigure the game.
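
A hypothetical sketch of how such a flow can be wired (the state names and the resetEntities() helper are assumptions, not the project’s actual identifiers):

let gameState = 'START'; // 'START' -> 'CONFIG' -> 'INSTRUCTION' -> 'PLAY' -> 'END'

function keyPressed() {
  if (gameState === 'INSTRUCTION') {
    gameState = 'PLAY';   // enter gameplay once the instructions are read
  } else if (gameState === 'END' && key === 'r') {
    resetEntities();      // assumed helper: clear lasers, enemies, score
    gameState = 'PLAY';   // restart immediately, keeping the same config
  }
}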

5. Storytelling

Last but not least, one of the most illuminating takeaways from developing this project is learning to recognize and accommodate the gap between a developer’s understanding/assumptions and the players’ ‘infinite’ possibilities for approaching the product. For example, displaying variable names on the screen or using them in the instructions seemed clear enough to me during development, while a player may not have enough experience or interest to distinguish and follow them.

Therefore, I replaced the variable names with terms that carry more meaning within the space-action worldview, creating more intuitive guidance for the player with the aid of visual indications.
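
For illustration only (these labels are hypothetical, not the game’s actual terms), the idea boils down to a display-name mapping:

// Hypothetical mapping from internal variable names to in-world HUD labels
const hudLabels = {
  health: 'HULL INTEGRITY',
  energy: 'WEAPON CHARGE',
  tacticEngineOn: 'TACTIC ENGINE ENGAGED',
};

function hudLabel(name) {
  return hudLabels[name] || name; // fall back to the raw variable name
}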

Some words, in hindsight

It is true that there is no ‘perfection’ in terms of finishing a project – at this point, I still have many ideas to add to the game if I were to regard it as something to publish, including level design, more value balancing, more storytelling, enemy and obstacle varieties, bosses, more tactic engines (special skills for each ship), more consistent aesthetics, and so on. On the other hand, I find myself quite satisfied with the current presentation – in terms of utilizing the wheels and knowledge learned in the process, trying to think not only from a developer’s perspective, and establishing coherent storytelling through a product. And it made me more excited to get into physical programming.

Week 5 – Reading Response

Reading through the passage by Golan Levin and others, I was re-introduced to the world of computer vision without being overwhelmed by technical jargon. As someone who used to touch CV only on a library-reliant basis, i.e., using OpenCV in Python, rediscovering some of the rudimentary algorithms behind it from the perspective of technology in the arts brought me some really fascinating ideas compared to the old-school path-finding tasks.

The paper acknowledges how CV has traditionally been the realm of expert researchers, with applications limited to the military and law enforcement (and yes, of course, teenagers’ robotics tournaments). With advancements in software development tools, the rise of open-source communities, and the affordability of digital video hardware, CV is no longer an exclusive playground. Artists and designers are now empowered to experiment and innovate, integrating vision-based interactions into their creative projects with relative ease.

A particularly insightful section for me is the emphasis on the importance of the physical environment in enhancing the performance of CV systems. The authors argue that optimizing lighting conditions, using retroreflective materials, and selecting appropriate camera lenses can significantly improve the accuracy and reliability of vision algorithms. In my own projects that used the webcam and processed the video pixels, although the ‘algorithm’ amounts to simple grayscale conversion, it is already noticeable how much the physical environment affects the outcome. The collaboration between software and physical setup underscores a holistic approach to designing interactive media, where both technological and artistic considerations play crucial roles.

Reflecting on the questions posed by the paper, it becomes clear how CV fundamentally differs from human vision. While humans effortlessly interpret context, recognize patterns, and infuse semantic meaning into visual data, CV relies on meticulously crafted algorithms to process pixel information without inherent understanding, creating both opportunities and challenges for us. The opportunities lie in the possibility of fine-tuning, altering, and utilizing what the computer has ‘seen’ with algorithms (in other words, ‘thinking’ after the ‘listening’ and before the ‘speaking’), whereas it is hard to change the optical structure of human vision. On the other hand, this thought-precedes-realization approach to harnessing CV could hinder the imagination of artists, as the fine line between envisioning-then-realizing and the more intuitive manner of mixing creation and design can be easily blurred by lines of code.

Besides, this powerful capability also introduces ethical considerations, especially when used in interactive art. The capacity for tracking and surveillance can enhance the immersive quality of art installations, making them more responsive and engaging. Yet it also raises concerns about privacy and consent. For web-based instances like the games I’m currently working on, it is easy and necessary to ask the user for permission, while installations and broader observational systems could skip the users’ first-hand consent. How do we balance the creative potential of CV with the need to respect individual autonomy and privacy? These questions are crucial as artists and technologists continue to push the boundaries of what interactive art can achieve.


Week 5 – Midterm Progress Report

intro

For my midterm project, although the Object Life Sim project seems promising to me, I do not have many ideas for furthering it, and I’d rather incorporate a higher degree of interactivity into this project. Therefore, I would like to integrate the elements that appeared in my previous attempts – both conceptually and technologically – into a new game. For this progress report, I will start by composing a standard game design document and then touch on the technical and design-wise difficulties.

game design documentation

1. Game Overview

Title: Motion Ships
Concept:
Motion Ship will be a space-themed action game where players control a spaceship using head movements detected via a webcam. Navigate through space environments, combat obstacles, and achieve high scores through maneuvering and sound-activated attacks.

2. Game Structure

2.1. Game Flow

  1. Start Screen
  2. Configuration Menu
  3. Gameplay
  4. End Screen
  5. Loop Back or Exit

3. User Interface

3.1. Start Screen

Title Display: “Motion Ships”
Introduction: “Control the spaceship with your head.”
Prompt: “Press any key to start.”
Functionality: Waits for any key press to transition to the Configuration Menu.

3.2. Configuration Menu

Customization Options:
    Spaceship Selection: Choice between two spaceship models.
    Background Selection: Choice among four distinct battle backgrounds.

Controls:
    Selection Interface: Allows players to select their preferred spaceship and background before starting the game.
    Functionality: After configuration, initiates gameplay.

3.3. End Screen

Score Display: Shows the player’s final score.
Functionality: Offers players the choice to restart or modify settings post-game.

4. Gameplay Mechanics

4.1. Player Controls

Input Methods:
    Webcam-Based Head Movement Detection: Controls spaceship movement along the X and Y axes.
    Sound Input Detection: Monitors ambient sound levels to trigger beam launches.

Control Scheme:
    Movement: The spaceship moves along the X and Y axes based on the player’s head position detected by a face-detection machine learning model.
    Rotation: The spaceship rotates in real-time to mirror the direction of movement, simulating realistic aircraft maneuvers.

4.2. Player Character

Spaceship Model:
Formats Supported: 3D .obj files.
Positioning:
    Fixed Z-Axis: The spaceship remains on a fixed plane along the Z-axis.
    Dynamic X and Y Axes: Movement is controlled via head tracking.

Energy System:
Energy Meter: Displayed on the bottom-left of the screen, indicating available energy.
Energy Consumption: Firing laser beams depletes the energy meter.
Energy Recovery: Collecting energy ores replenishes the energy meter based on the ore’s size.

4.3. Movement Mechanics

Forward Motion:
    Illusion of Movement: The spaceship appears to move forward by keeping its Z-position fixed while other objects move towards it.
    Relative Speeds: All in-game objects move at a base speed relative to the spaceship’s “forward” movement. Objects capable of independent movement have their own speed vectors in addition to the base speed (see the sketch below).
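
A minimal sketch of this illusion (the field names and constant are assumptions):

const BASE_SPEED = 8; // assumed shared 'forward' speed

function updateWorldObjects(objects) {
  for (const obj of objects) {
    obj.z += BASE_SPEED;             // the world drifts toward the fixed ship
    if (obj.velZ) obj.z += obj.velZ; // independent movers add their own speed
  }
}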

4.4. Combat Mechanics

Attacking:
    Weapon: Laser beams automatically launched based on ambient sound levels.

Trigger Mechanism:
    Sound Input Detection: Continuously monitors the player’s microphone input.
    Threshold-Based Activation: When the ambient sound level exceeds a predefined threshold, the spaceship automatically fires laser beams.

Laser Behavior:
    Trajectory: Beams travel along the negative Z-axis from the spaceship’s X and Y positions when launching.
    Energy Cost: Each launched beam consumes a small amount of the spaceship’s energy.

Feedback Indicators:
    Visual Cue: Brief animation or flashing effect indicating that a beam has been fired due to sound input.
    Energy Indicator: Real-time update of the energy meter showing depletion upon beam firing.

4.5. Obstacles and Enemies

4.5.1. Meteoroids

Appearance: Rock-textured spherical objects.
Behavior: Typically remain fixed relative to the background.
Size Variation: Random sizes within predefined limits.

Interactions:
    Destruction: Requires a number of laser hits proportional to size to be destroyed. Upon destruction, the meteoroid shakes briefly before disappearing.
    Collision: If a meteoroid collides with the player’s spaceship, both are destroyed, resulting in game over.

4.5.2. Energy Ores

Appearance: Crystal-textured spherical objects.
Behavior: Typically remain fixed relative to the background.
Size Variation: Random sizes within predefined limits.

Interactions:
    Destruction: Requires a number of laser hits proportional to size to be destroyed. Upon destruction, the energy ore shakes briefly before disappearing, and the player’s energy is replenished based on the ore’s size.
    Collision: If an energy ore collides with the player’s spaceship, both are destroyed, resulting in game over.

4.5.3. Enemy Ships

Appearance: Alternate spaceship models not selected by the player during configuration.
Behavior: Dynamic movement across all X, Y, and Z axes relative to the base speed. Change direction randomly every few seconds.
Collision Avoidance: Actively avoid colliding with other objects except the player’s spaceship.

Interactions:
    Destruction: Requires a number of laser hits to be destroyed. Upon destruction, the enemy ship shakes briefly before disappearing.
    Collision: If an enemy ship collides with the player’s spaceship, both are destroyed, resulting in game over.

4.6. Object Spawning and Lifecycle

Spawn Mechanics:
Spawn Rate: Objects spawn at regular intervals.
Spawn Probability:
    Meteoroids and Enemy Ships: Equal likelihood.
    Energy Ores: Lower likelihood compared to meteoroids and enemy ships.

Despawn Conditions:
Objects are removed from the game once their Z-position passes the camera’s Z-position by a specified distance, ensuring optimal performance and resource management (see the sketch below).
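
Sketched under assumed values (the camera Z and margin are placeholders):

const CAMERA_Z = 800;       // assumed camera position on the Z-axis
const DESPAWN_MARGIN = 200; // assumed distance past the camera

function despawnPassedObjects(objects) {
  // Keep only objects that have not passed the camera by the margin
  return objects.filter(obj => obj.z <= CAMERA_Z + DESPAWN_MARGIN);
}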

5. Environment and Visual Design

5.1. Backgrounds

Selection: Players can choose from four distinct battle backgrounds during the configuration phase.
Parallax Effect: Backgrounds subtly move in response to the spaceship’s movement to create a dynamic and immersive living space effect.
Camera Perspective: Fixed camera position to maintain consistent player orientation.

5.2. Lighting

Position: Fixed behind the camera.
Effect: Illuminates all objects within the 3D space, ensuring visibility and depth perception.

5.3. Graphics

3D Models:
    Spaceships: .obj models for both player and enemy ships.
    Obstacles: Spherical models for meteoroids and energy ores, with varying textures based on type (grey for meteoroids, green for energy ores).

Animations:
    Destruction Effects: Shaking animations upon object destruction before removal.
    Movement Animations: Smooth transitions and rotations for moving objects.

6. Technical Specifications

6.1. Technology Stack

Machine Learning Integration: Utilize a face-detection library (ml5.js) for webcam-based head movement tracking.
Sound Processing Library: Integrate a sound analysis library (p5.sound) to process and analyze ambient sound levels for beam activation.

6.2. Performance Considerations

Object Management: Efficient spawning and despawning of objects to minimize memory usage and ensure smooth gameplay.
Collision Detection: Algorithms for real-time detection between laser beams and obstacles/enemies, as well as between the player’s spaceship and other objects.

6.3. User Input Handling

Face Detection: Implement real-time face and head movement tracking for responsive spaceship control through webcam access.
Sound Threshold Detection: Continuously monitor ambient sound levels to trigger beam launches when thresholds are met through microphone access.

7. Additional Features and Considerations

7.1. Scoring System

Score Incrementation: Points awarded based on actions such as destroying obstacles, collecting energy ores, and surviving over time.
High Score Tracking: Persistent tracking and display of the highest score achieved across game sessions.

7.2. Audio

Background Music: Ambient space-themed music to enhance immersion.
Sound Effects:

  1. Laser Firing: Audio cues when beams are launched.
  2. Destruction: Sounds for destroying objects.
  3. Collisions: Impact sounds for collisions leading to game over.
  4. Game Over: Sound indicating the end of the game.

7.3. Accessibility

Instructions: Clear in-game instructions or tutorials to guide new players on using head movements and sound inputs for control.
Visual Indicators: HUD elements like energy meters and score displays designed for clarity and ease of reading.
Adjustable Sensitivities: Allow players to adjust head movement and sound input sensitivities to accommodate different playstyles and environments.

8. Implementation Notes

8.1. Sound-Based Beam Launching

Calibration: Provide an initial calibration phase where the game adjusts the sound input threshold based on the player’s environment to prevent unintended beam firing.
Feedback Mechanism: Visual and audio feedback when beams are launched due to sound input, helping players associate their actions with in-game events.

potential challenges

1. Real-Time Face Detection and Head Movement Tracking

Real-time video processing and face detection can be computationally intensive, leading to lag or reduced frame rates. So, implementing throttling mechanisms to control the rate of face-detection processing could be conducive to overall efficiency.
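
One possible throttle, as a sketch (detectHeadPosition() stands in for the actual ML-model call and is an assumption):

const DETECT_EVERY_N_FRAMES = 5; // assumption: tune per machine
let lastHeadPos = { x: 0, y: 0 };

function throttledHeadPos() {
  if (frameCount % DETECT_EVERY_N_FRAMES === 0) {
    lastHeadPos = detectHeadPosition(); // assumed wrapper around the model
  }
  return lastHeadPos; // in-between frames reuse the cached result
}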

2. Sound Input Detection and Processing

Differentiating intentional sound inputs (e.g., a biu-biu-biu voice command) from ambient noise to trigger beam launches without false positives may need amplitude-based or frequency-based thresholding. It might be helpful to record continuously (RMS, LUFS, etc.) before gameplay to calculate a base noise level.
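
A sketch of amplitude thresholding with a short calibration phase, using p5.sound’s AudioIn (getLevel() returns a smoothed amplitude in 0..1; the sample count, margin, and fireLaser() hook are assumptions):

let mic;
let baseNoise = 0, samples = 0, calibrating = true;

function setup() {
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  const level = mic.getLevel(); // smoothed amplitude, 0..1
  if (calibrating) {
    baseNoise += level;
    if (++samples >= 120) {  // ~2 seconds at 60 fps
      baseNoise /= samples;  // mean of the samples = base noise level
      calibrating = false;
    }
  } else if (level > baseNoise + 0.1) { // margin of 0.1 is an assumption
    fireLaser(); // assumed game hook
  }
}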

3. Collision Detection in 3D Space

3D collision detection is more complex and resource-intensive than the 2D detection I used in previous projects, especially with multiple moving objects and interactions. Potential solutions (sketched after the list) could include:

  • Early Exit Strategies:
    Implement early exit conditions where collision checks are skipped if objects are too far apart to interact, conserving resources.
  • Simplify Collision Shapes – Axis-Aligned Bounding Boxes (AABB):
    For certain objects, AABB can offer a balance between simplicity and collision detection accuracy.
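
A minimal sphere-sphere sketch of the early-exit idea (the object fields mirror the classes shown elsewhere in this post):

function collides(a, b) {
  const combined = (a.size + b.size) / 2; // treat both objects as spheres

  // Early exit: a cheap per-axis check before the full distance math
  if (Math.abs(a.z - b.z) > combined) return false;

  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  // Compare squared distances to avoid the sqrt entirely
  return dx * dx + dy * dy + dz * dz < combined * combined;
}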
4. Object Spawning and Lifecycle Management

Without proper management, the continuous spawning of objects can lead to memory leaks and degraded performance over time, while the spawn rate and probabilities are important for maintaining balanced difficulty without overwhelming players or underutilizing game mechanics.

Therefore, adjusting spawn rates and object probabilities dynamically based on the player’s score or progression can be used to maintain engaging and balanced gameplay.
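
For instance, a hypothetical difficulty ramp (all constants are assumptions to tune):

function spawnIntervalFrames(score) {
  const base = 90;                              // frames between spawns at score 0
  const speedup = Math.floor(score / 500) * 10; // spawn faster every 500 points
  return Math.max(30, base - speedup);          // floor keeps spawning manageable
}

function maybeSpawn(score) {
  if (frameCount % spawnIntervalFrames(score) === 0) {
    spawnRandomObject(); // assumed helper picking meteoroid / enemy / ore
  }
}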

demo version

With all that said, I developed an early demo version without fine-tuning the parameters and visual effects.

Fundamentally, the first two technical difficulties are tackled at this stage, while the rest remain untouched. Personally speaking, as this project expanded in scale, I learned about game design structure and built a highly object-oriented program with code as modular as possible.

Week 4 – Ghost of Words

Intro

This week, I adapted my production from week 2 – the dot silhouette driven by the webcam input – to create a representation of the figure reflected in the webcam as well as the soul behind it in the background. It seems to me that the week 2 product did lack some message/meaning: what is the point of mirroring the webcam with dots? Then, when it came to this week’s text generation, the answer appeared to me to be the combination of text and my existing mechanism – the mirroring mechanism symbolizes the entity, the phantom, the ghost, and the creator as well as the user, while the poems floating across the canvas reflect a piece of my soul. By doing so, the ghost of me (or you) becomes the exact pathway to discovering that piece of soul, adding incentive to the interaction.

process

I started simply by replacing the dot drawing in my week 2 product with text drawing – and obviously, the flashing of words cannot stand for further meaning; it only blinds the user even more than the dots did, as we intuitively tend to read whatever words are presented in front of us.

Therefore, I tried another approach: display the poem’s lines in the background anyway and let the probability pixel matrix act as an alpha-value matrix, this time overlaid on the text, resulting in the ghostly effect.

In the preload function, I ensure that all external resources are loaded before the sketch runs – using loadTable, I import lines of text from textLines.csv, which will be used to generate the floating texts dynamically.

function preload() {
  // Load the CSV file
  textLines = loadTable('textLines.csv', 'csv', 'header'); // Adjust the path and options as needed
}

This time, I directly use the grayscale value as the alpha value, as they share the same range:

function drawAlphaFilter() {
  noStroke();
  
  // Iterate through each cell in the grid
  for (let y = 0; y < k; y++) {
    for (let x = 0; x < j; x++) {
      let index = x + y * j;
      let grayValue = pixelArray[index];
      
      // Calculate alpha value
      // Ensure alphaValue is within 0-250 for better visibility
      let alphaValue = constrain(grayValue, 0, 250); 
      
      // Set fill color to background color with the calculated alpha for overlay effect
      fill(17, 38, 56, alphaValue);
      
      // Calculate the position and size of each rectangle
      let rectWidth = windowWidth / j;
      let rectHeight = windowHeight / k;
      let rectX = x * rectWidth;
      let rectY = y * rectHeight;
      
      rect(rectX, rectY, rectWidth, rectHeight);
    }
  }
}

The RGB values used in this product are extracted from my personal website: Sloth’s Slumber | Xiaotian Fan’s Collection (sloth-slumber.com).

Then, the floating texts are managed through both a class and helper functions, including:

function updateFloatingTexts() {
  // Update and display existing floating texts
  for (let i = floatingTexts.length - 1; i >= 0; i--) {
    let ft = floatingTexts[i];
    ft.update();
    ft.display();
    
    // Remove if off-screen
    if (ft.isOffScreen()) {
      floatingTexts.splice(i, 1);
      
      // Also remove from its slot
      let s = ft.slot;
      slots[s] = null; // Mark the slot as free
    }
  }
  
  // Iterate through each slot to manage floating texts
  for (let s = 0; s < totalSlots; s++) {
    if (slots[s] === null) {
      // If the slot is free, add a new floating text
      let newText = getNextText();
      if (newText) {
        let ft = new FloatingText(newText, s);
        floatingTexts.push(ft);
        slots[s] = ft; // Assign the floating text to the slot
      }
    } else {
      // If the slot is occupied, check if the tail has entered the screen
      let lastText = slots[s];
      
      if (lastText.direction === 'ltr') { // Left-to-Right
        // Check if the tail has entered the screen (x + width >= 0)
        if (lastText.x + lastText.getTextWidth() >= 0) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      } else { // Right-to-Left
        // Check if the tail has entered the screen (x - width <= windowWidth)
        if (lastText.x - lastText.getTextWidth() <= windowWidth) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      }
    }
  }
}

Another important function concatenates lines in order to fill across the windowWidth:

function getNextText() {
  // Reset index if end is reached
  if (currentLineIndex >= textLines.getRowCount()) {
    currentLineIndex = 0; // Reset to start
  }
  
  let combinedText = '';
  let estimatedWidth = 0;
  let tempIndex = currentLineIndex;
  let concatenationAttempts = 0;
  let maxAttempts = textLines.getRowCount(); // Prevent infinite loops
  
  // Loop to concatenate lines until the combined text is sufficiently long
  while (estimatedWidth < windowWidth * TEXT_MULTIPLIER && concatenationAttempts < maxAttempts) {
    let textLine = textLines.getString(tempIndex, 0);
    if (!textLine) break; // If no more lines available
    
    combinedText += (combinedText.length > 0 ? ' ' : '') + textLine;
    tempIndex++;
    
    // Reset if at the end of the table
    if (tempIndex >= textLines.getRowCount()) {
      tempIndex = 0;
    }
    
    // Estimate text width using p5.js's textWidth
    textSize(24); // Set a default size for estimation
    estimatedWidth = textWidth(combinedText);
    
    concatenationAttempts++;
    
    // Break if the same index is on loop to prevent infinite concatenation
    if (tempIndex === currentLineIndex) break;
  }
  
  // Update the currentLineIndex to tempIndex
  currentLineIndex = tempIndex;
  
  return combinedText;
}

Eventually, when dealing with the full-window canvas this time, I added a canvas-resize handler to respond to window resizing:

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  
  // Update Y positions of floating texts based on new window size
  for (let ft of floatingTexts) {
    let padding = 5; // Padding from top and bottom
    ft.y = map(ft.slot, 0, totalSlots - 1, padding, windowHeight - padding);
  }
}

To do & reflection

While this product germinated from my previous one, I believe it has the potential to be further polished, including varied text aesthetics, a more responsive relation between the text and the webcam (or audio level), etc.

On the other hand, I would say that this product is indeed an improvement over week 2, as I started to incorporate my own message and character into the code instead of creating fancy (or not) demos.

Week 4 – Reading Response on The Design of Everyday Things Ch.1

Norman’s work strikes me as a meditation on the tension between what we create and how we live with it. That scissors example – seemingly mundane yet profound in its clarity and simplicity – is where the conversation begins, where the hands meet the object, and where intention aligns with ease. The holes for fingers, the sharpness of the blades, everything speaks in a language we already know, though we might not realize it. There’s something beautiful in that – that design could be so intuitive that it bypasses the conscious mind altogether. It’s as if the object whispers directions without uttering a word.

But then, there’s the watch. The digital one, with its five buttons and its cryptic functions, each one a mystery waiting to unravel or, more likely, frustrate. It’s a watch that asks for patience, for time spent reading manuals and pressing buttons in some uncertain combination. And here’s where the dissonance begins: the more we strive for mastery over our creations, the more we seem to lose the very thing we sought – simplicity, elegance, understanding.

I find it ironic, or perhaps inevitable, that as we design for greater utility, we inch closer to confusion. The refrigerator controls, for example, promise autonomy – adjust the freezer, adjust the fridge – but they deceive in their independence. They are not two but one, a system entangled with itself, a metaphor, perhaps, for how we often see ourselves within the systems we create: in control, but really, not quite. Is it possible to design something so complex that its very design dissolves into meaninglessness? I wonder.

Norman’s “system image” feels like a ghost, a presence that’s there but not fully. It’s the half-told story, the incomplete map, the set of instructions that leaves you more puzzled than when you began. And yet, we rely on it. It’s not just the object – the watch, the refrigerator, the scissors – but the way the object speaks to us or fails to. When the system image falters, we falter. It’s as though the object has lost its voice, or worse, it speaks in riddles.

There’s something almost tragic in this “paradox of technology” that Norman concludes with. We build and build, adding more features, more functions, more buttons, and yet the more we add, the more distant we become from the original purpose. The watch, once a simple device for telling time, now demands that we learn a new language just to access its many offerings. And I wonder – at what cost?

In the end, Norman gestures toward a truth that resonates deeply: design is not a solitary act. It’s a conversation, a collaboration – between engineers, marketers, manufacturers, and yes, the users, the humans who will ultimately live with these objects. And it is here, in this collaboration, where the balance must be struck. Not in the pursuit of complexity for its own sake but in the creation of objects that serve us, not enslave us.

Week 3 – Reading Reflection on The Art of Interactive Design, Ch. 1

By all means, this passage intrigued me first by its plain and candid tone, its approach of starting by ‘trying’ to define instead of throwing jargon, and how the author concedes to potential counterarguments and circumvents them. As a person who loves to define terms and concepts – for a particular context, of course – I found that this much echoes my own habit.

Hence, even though I do not 100 percent agree with the statements (although I appreciate the brave attempt to disentangle and establish something beyond old paradigms – a new paradigm), at least for now, the author successfully grasped my willingness to ‘listen’ to the message.

Ironically, despite the author denying words as an ‘interactable’ medium, the words of the passage did evoke some sort of interactive process during my reading, according to the definition given (although it could again be argued that reaction is not interaction, etc.).

Indeed, I strongly concur with the idea that definitions should serve as guidance rather than constraints, particularly for concepts like interactivity that are swiftly evolving, as mentioned by the author. Sometimes, I hesitate to offer definitions in conversations for fear of appearing imposing to others, when in reality, my aim is not to assert an absolute truth but to establish a ground for my understanding.

While the author’s spectrum for evaluating interactivity from low to high, placed in the context of the interactors involved, makes much sense to me, I’d still like to – not argue, but – bring up an idea from theater, given that the passage denies it is decently interactive. Starting with definitions again, a common academic definition of theater could be ‘a relation between the actor and the spectator.’ This approach to constructing the definition actually resembles the author’s. As the author tackles the tricky argument of what counts as an actor in the interaction by dividing different levels of interactivity, it similarly reminds me to maybe use this approach for the definitional questions in theater: what constitutes a spectator or an actor; do they have to be aware of their action or not; etc.

On top of that, I found Socrates’ words very much relatable. That inactive nature of words – the creator cannot protect or explain them or themselves after the creation – is something that bothered my expression in many cases as well, whether in poetry or speech, whether with a real person or anonymously online.

Nevertheless, when it comes to what can be regarded as a strongly interactive system, the answer seems hidden within the definition already – it only depends on how we interpret those three stages. In terms of ‘listening,’ it could deal with how much it can listen, how fast, how many types of things, how overt or covert the listening is, plus when, where, and the other Ws. Similarly with thinking: it really touches the black box of unlimited possibilities. In the simplest words, I would put it as ‘the extent to which the information listened to can be used and processed to serve the message of the system as a whole.’ Eventually, for the speaking, it seems to me a matter of picking and developing a niche method to communicate information, i.e., the role of the medium. When all three stages are carefully designed in a well-rounded manner, the result may qualify as a strongly interactive system.

‘Show, don’t tell’ goes a long way in theater and other performative arts, and maybe this time we should not only be satisfied with showing but reach beyond it – with whatever we have.

Week 3 – Object Life Sim

final product

For this project, I would like to start by presenting the final product.

Instruction: To change the starting condition, edit the initial parameters in ‘sketch.js’

Description (This part of the text is summarized by GPT-4o from my other words and codes):

The simulation involves various instances, such as objects, foods, and sites, each with distinct behaviors. It emphasizes resource management (hunger), spatial awareness (movement and separation), and lifecycle dynamics (aging and reproduction), creating a dynamic system where objects interact with each other and their environment.

  1. Objects: These are the primary entities that move around the simulation. They have attributes like position, age, size, speed, hunger, and status (e.g., doodling, mating, eating, working). Objects can interact with other objects and food sources.
  2. Movement: Objects move based on their speed and direction. They can either follow a target (another object or food or site) or move randomly. If they encounter the edges of the simulation area, they reverse direction. They also avoid crowding by maintaining a separation distance from others.
  3. Hunger and Status: Objects experience hunger, which affects their status and behavior. When hungry, they look for food. If they consume food, their hunger decreases, and they may reproduce if conditions are favorable. Different statuses trigger different actions (e.g., eating, mating, working).
  4. Aging: Objects age over time, with their aging rate influenced by their status. For example, being full while mating speeds up aging while being hungry slows it down. If an object’s age exceeds its maximum, it dies.
  5. Reproduction: When certain conditions are met (like being sufficiently hungry), objects can reproduce. New objects are created with attributes based on the parent object.
  6. Interaction with Food and Sites: Objects can consume food to reduce hunger and may interact with sites to produce extra food on the canvas. Reaching food or sites changes their status and can trigger further actions.

concept

While there are certainly many inspirations, including Simmiland (a God-like card game), John Conway’s Game of Life, the path-drawing project from Raes’ presentation, and p5 reference projects (bouncing balls, flocking behavior, Perlin noise, etc.), the idea first came to me as a reminder to think about the nature of simulation, as well as how routinized life has alienated humans into objects subjected to rules – hence the title “Object Life Sim.”

Figure 1: Simmiland (The playing God idea and the color scheme reference)

The paradox lies here: since the nature of simulation suggests imitating something superior, something intricate and more complex, it is strange that if life itself is already institutionalized, what’s the point of simulating it? Isn’t it going to result in an Ouroboros? Yet, there’s an understated allure in simulating our surroundings and engaging with them at minimal cost, which has given rise to this very basic simulation of a life reduced to objects. Or perhaps these are, in a way, the most crucial simulations?

Figure 2: Game of Life (Resource and reproduction reference)

Another motivation to do so – to play God for a moment – emerged during our first class discussion. As we delved into the role of randomness in art, I held the belief that randomness could be an intrinsic element, present even in Mondrian’s analytical paintings or the precisely proportioned sculptures of ancient Greece. However, I was surprised by the idea, brought up by a classmate, of how possible it is for art to be random. This prompted me to reconsider whether the prevalent randomness in today’s generative art detracts from its legitimacy as art. Then I came up with the analogy of the creation of the world – if the world was created by a deity with a singular act (akin to the First Cause) and then left to evolve independently, can it still be considered the deity’s creation? Similarly, if the set of algorithms behind a piece is designed by a human, and the initial set of parameters is decided by a human, is it our creation? While my stance is affirmative – I believe the eventual tangible ‘piece’ is not the art itself but separate from it, or only a conduit that could be reached however we want – I would still like to pose this question for your contemplation.

CODE & Production SNIPPETS

Again, as it would be tedious to go through the structures and details in the code, I will only introduce some of the sources I used and some interesting parts of them from my perspective.

First, when it comes to reading keyboard inputs with keyCode, it is very useful to have this website to know how keys map to key codes. This enabled me to set up different conditions, combining keyboard and mouse to create the control logic:

function mouseClicked() {
  // Spawn new instances at the mouse location when clicked with different keys pressed
  if (keyCode === 79) { // If the last pressed button is 'O'
    initiateObject(mouseX, mouseY); 
  } else if (keyCode === 70) { // If the last pressed button is 'F'
    foodArray.push(new Foods(mouseX, mouseY, setMaxUtility)); 
  } else if (keyCode === 83) { // If the last pressed button is 'S'
    siteArray.push(new Sites(mouseX, mouseY, setMaxUtility));
  } else {
    // If the simulation hasn't started, initiate it and create initial objects
    if (simStart === false) {
      simStart = true; // Set the simulation start flag to true
      for (i = 0; i < initialObjectNum / 2; i ++) {
         // Spawn initial objects off-screen
        initiateObject(random([0 - initialSize / 2, windowWidth + initialSize / 2]), random(windowHeight));
        initiateObject(random(windowWidth), random([0 - initialSize / 2, windowHeight + initialSize / 2]));
      }
    } 
  }
}

Another useful source is the Unicode list for emojis (yes, I learned to use emojis to draw stuff this time!). For example, I used it to set up random food emojis for my Foods class:

let foodIcon = ['\u{1F35E}', '\u{1F950}', '\u{1F956}', '\u{1FAD3}', '\u{1F968}', '\u{1F96F}', '\u{1F95E}', '\u{1F9C7}', '\u{1F9C0}', '\u{1F356}', '\u{1F357}', '\u{1F969}', '\u{1F953}', '\u{1F354}', '\u{1F35F}', '\u{1F355}', '\u{1F32D}', '\u{1F96A}', '\u{1F32E}', '\u{1F32F}']

class Foods {
  constructor(tempX, tempY, maxUtility, 
               tempSize = 10) {
    this.x = tempX;
    this.y = tempY;
    this.size = tempSize; // Set the initial size
    this.type = 'food';
    this.utility = random(0.5, maxUtility)
    this.status = null;
    this.icon = random(foodIcon)
  }
  
  // Display the object on canvas
  display() {
    fill('#ffd7a0'); // Set the brightness of the object based on the age
    noStroke();
    circle(this.x, this.y, this.size * this.utility + 10);
    
    textSize(this.size * this.utility);
    textAlign(CENTER, CENTER);
    text(this.icon, this.x, this.y);
  }
  
}

Next, I’d like to show two pieces of the core movement functions for my Objects. The first finds the closest target of its kind on the canvas, and the second is the exact math for calculating the movements. It is rather easy to have objects move directly towards a target (I only had to copy-paste a bit from my first portrait project), while including the collision algorithm and the maneuvering strategies is certainly more difficult.

  find(arrayToFind) {
    let closestPoint = null; // Placeholder for the closest point
    let minDistance = Infinity; // Start with a very large distance
    let distance; // Variable to store calculated distance
    let ix, iy; // Coordinates of items in the array

    // Function to calculate the distance between two points
    const calculateDistance = (x1, y1, x2, y2) => {
      return Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2); // Return Euclidean distance
    };

    // Iterate through the array of inquiry to find the closest object
    for (let item of arrayToFind) {
      
      ix = item.x; 
      iy = item.y;
      
      if ((ix === this.x) && (iy === this.y)) { 
        distance = Infinity; // Set distance to infinity if it's the same object
      } else {
        distance = calculateDistance(this.x, this.y, ix, iy); // Calculate distance to the item
      }
      
      // Update the closest point if the current distance is smaller
      if (distance < minDistance) {
        minDistance = distance; // Update minimum distance
        this.destObject = item; // Set the closest object as the destination
      }
    }
  }

Initially, my strategy after a collision was to let the objects nudge a bit randomly, which resulted in them sticking in place with jerking behaviors. Then I set up a strategy to let the objects escape in the opposite direction from the collision – an idea borrowed from bouncing balls. However, since in my simulation moving toward the target is still a necessity after escaping, this resulted in the objects sticking in a line. So I modified the strategy to slide around the collided objects, but that still didn’t work, leading to the objects rotating in place. At the end of the day, I worked through the flocking-behavior algorithm mentioned in class, borrowed its separation force, combined it with my sliding behavior, and put the piece together.

  move(arrayToFind) {
    this.find(arrayToFind); // Find the target object
    
    // Setup destination coordinates from the target object
    this.destX = this.destObject.x;
    this.destY = this.destObject.y;

    // Calculate the distance to the destination
    let dx = this.destX - this.x;
    let dy = this.destY - this.y;
    let distance = Math.sqrt(dx * dx + dy * dy);
    
    // Normalize the direction vector
    if (distance > 0) {
        this.directionX = dx / distance;
        this.directionY = dy / distance;
    } else {
        this.directionX = 0;
        this.directionY = 0;
    }

    // Calculate the next position
    let nextX = this.x + this.directionX * this.speed;
    let nextY = this.y + this.directionY * this.speed;
    
    // Check for collision with the destination object
    if (this.destObject) {
      let targetCombinedRadius = (this.size + this.destObject.size) / 2; // Adjust based on size
      let distToTarget = Math.sqrt((nextX - this.destObject.x) ** 2 + (nextY - this.destObject.y) ** 2);

      // If colliding with the target object, invoke reach
      if (distToTarget < targetCombinedRadius) {
        this.reach(); // Call reach() if colliding with the target
            
        // Slide away from the target
        let targetNormalX = (this.x - this.destObject.x) / distToTarget; // Normal vector
        let targetNormalY = (this.y - this.destObject.y) / distToTarget;

        // Calculate the sliding direction (perpendicular to the normal)
        let targetSlideX = -targetNormalY; // Rotate normal to find tangential direction
        let targetSlideY = targetNormalX;

        // Introduce a small random adjustment to sliding direction
        let targetRandomAdjustment = random(-0.1, 0.1); // Adjust as needed
        targetSlideX += targetRandomAdjustment;
        targetSlideY += targetRandomAdjustment;

        // Normalize the sliding direction
        let targetSlideDistance = Math.sqrt(targetSlideX * targetSlideX + targetSlideY * targetSlideY);
        if (targetSlideDistance > 0) {
            targetSlideX /= targetSlideDistance;
            targetSlideY /= targetSlideDistance;
        }

        // Move along the sliding direction away from the target
        this.x += targetSlideX * this.speed * 0.3; // Slide from the target
        this.y += targetSlideY * this.speed * 0.3;

        return; // Stop further movement after reaching
      }
    }
    
    // Maintain separation distance from other objects
    let separationDistance = this.size * 1.25; // Desired separation distance
    let separationForceX = 0;
    let separationForceY = 0;

    for (let other of objectArray) {
      // Skip if it's the same object or the target object
      if (other === this || other === this.destObject || other.status === 'mate') continue;

      // Calculate distance to the other object
      let distToOther = Math.sqrt((nextX - other.x) ** 2 + (nextY - other.y) ** 2);

      // If the distance is less than the desired separation distance, calculate a separation force
      if (distToOther < separationDistance) {
        let diffX = nextX - other.x;
        let diffY = nextY - other.y;
        
        // Normalize the difference vector
        if (distToOther > 0) {
            separationForceX += (diffX / distToOther) * (separationDistance - distToOther);
            separationForceY += (diffY / distToOther) * (separationDistance - distToOther);
        }

        // Sliding behavior
        let slideFactor = 0.3; // Adjust as needed for sliding strength
        let slideX = -diffY; // Perpendicular to the normal
        let slideY = diffX;

        // Normalize sliding direction
        let slideDistance = Math.sqrt(slideX * slideX + slideY * slideY);
        if (slideDistance > 0) {
            slideX /= slideDistance;
            slideY /= slideDistance;
        }

        // Apply sliding movement
        nextX += slideX * this.speed * slideFactor;
        nextY += slideY * this.speed * slideFactor;
      }
    }

    // Apply the separation force to the next position
    nextX += separationForceX;
    nextY += separationForceY;

    this.x = nextX;
    this.y = nextY;
    
    if (frameCount % 10 === 0) {
      // After updating the position
      this.positionHistory.push({ x: this.x, y: this.y });

      // Maintain the history size
      if (this.positionHistory.length > this.historyLimit) {
        this.positionHistory.shift(); // Remove the oldest position
      }
    }
    
  }
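
The positionHistory recorded above is presumably what makes the paths in the figures below visible. Here is a minimal p5.js sketch of how such a history might be rendered as a trail; the method name drawTrail and the styling values are my assumptions, not the project's exact code.

  // Hypothetical trail renderer based on the recorded positionHistory
  drawTrail() {
    noFill();
    stroke(200, 120); // Assumed gray value and transparency
    beginShape();
    for (let p of this.positionHistory) {
      vertex(p.x, p.y); // Connect the sampled positions into a path
    }
    endShape();
  }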

Observation

Lastly, since the project is a simulation, I believe observing its behaviors matters a lot. While I did not have much time to fully explore the parameters and settings, here are a few general observations:

Figure 3: It is evident that the sites, as the sources of food, attract the most paths toward them and end up the most surrounded.

Figure 4: As the simulation goes on, the larger objects can start to hinder the movement of the others.

Figure 5: Toward the end of a simulation, regardless of whether the objects are in a healthy state, their behavior becomes more aimless, as there is no longer any incentive to interact.

Figure 6: The greater the average resources per unit of area (in other words, a smaller canvas with the same amount of resources), the longer the simulation lasts.

Week 2 – Reading Reflection ‘Randomness and Control in Generative Art and Beyond’

After all, what does ‘generative’ stand for? And what about ‘random’, ‘chaos’, ‘definitive’, ‘rules’, ‘control’, ‘freedom’, ‘certainty’, and so on?

Before getting into the details of Reas’ presentation, these questions occupied my mind. This week’s production had already raised them for me, and the philosophy Reas presented resonates with them exactly. Binary systems can be hazardous in their tendency to oversimplify, even though they are a preferred tool for us humans to reference the world and reality. This applies to the balance between art and artist (which I touched on a bit in the first week’s production), the balance between randomness and certainty, the balance between aesthetics and technicality (which I have encountered in many roles: as a guitarist, a mixing engineer, a calligrapher, and here as a coder-artist), and so on.

One observation I made from the examples concerns the human physicality in the Mondrian painting, which presents the well-known, seemingly (or actually, as intended) objective abstraction. As a musician, I found it very interesting to learn that boogie-woogie music served as an inspiration for it. However, this great representation of something objective emerged exactly from human strokes incapable of reaching definite control. In that painting, the refined thoughts and emotions are a spirituality of the mind rather than a spirituality of the body. On the other hand, the common lack of spiritual physicality these days may encourage us to gaze from the opposite perspective as well and harness that to-be-decided set of definitiveness to praise the chance in our physicality. In simpler words (or rather, in this specific context), the rules of computation and programming are the grounds on which randomness flourishes in art; through them, the physicality we humans can imagine, appreciate, adapt to, and reiterate becomes something we can actually pursue.

This can be seen in many other examples from the presentation. From the deconstruction and reconstruction of figurative, symbolic, representative fragments of architecture following definitive rules, to music composed from the Book of Change, chance built on rules and rules built on chance intersect with each other. In fact, the most amusing and providential discovery I made there is the irony of the Book of Change in this particular context. While named the Book of Change (Yi Jing 易经), this ‘book’ is often accused of superstition, yet it essentially represents the ancient wisdom of finding the patterns, trends, and rules of the world in order to make predictions and guide life. In that sense, the juxtaposition of chance and rule has been ubiquitous all along.

Chance that is always planned and always surprising.

I believe this quote from Reas goes perfectly with the latest examples demonstrated in the presentation. While we recognize the power of randomness within certainties and of certainties within randomness, the question of to what extent something is random or certain may stretch our attention further. As my opening wonderings suggest, the question ‘where is the optimum balance between total randomness and complete control?’ lies in this middle ground. That being said, there is no way to maneuver around the definitions of randomness and control; or, at least while maneuvering, their multiplicity should be embraced. For example, in their second-to-last project, an increase in randomness was realized by introducing more ‘randomizable parameters’. Judging merely by the result, the randomness certainly increased as far as human perception goes; however, this result was achieved by adding more control (more controllable code) to the program. In this case, I would rather perceive this ‘conflict’ as a mutual success for both sides: our technical attempts granting ideological and artistic fruits.

To end this reflection, I’d like to leave a question about ‘randomness in isolation.’ To my current understanding, there is no such thing as isolation in this field of mutual interaction, whichever side we picked at the beginning. Eventually, the two merge and serve a broader intention, purpose, or collective, and this will probably become one of my rules of thumb in the future.

So, on top of that, where could isolation stand?