Assignment 3 & Reading Response – Shereena AlNuaimi

For this assignment, I drew inspiration from abstract paintings in general, mostly because abstract paintings are essentially shapes layered on top of shapes, or at least that’s what we initially believed. The audience can discover that there is more to an abstract artwork than meets the eye, which is the idea I attempted to incorporate into my work.

If I were to improve anything, it would be making this project more realistic in some manner and pushing myself to replicate at least one of Picasso’s paintings. But that will have to wait. The ellipses and circles that emerge when the mouse is clicked, which are larger than the squares themselves, are something I’m proud of. Overall, I’m fairly happy with the outcome.

// Global variables to store the canvas and arrays of shapes and colors
let canvas;
let shapes = [];
let colors = [];
let numColors = 7; // Number of colors to generate

function setup() {
  createCanvas(600, 600);
  canvas = createGraphics(600, 600); // Create off-screen graphics buffer matching the main canvas
  canvas.background(255); // Set background color of off-screen canvas

  // Draw background and shapes, generate colors, and apply colors to the off-screen canvas
  drawBackground();
  drawShapes();
  generateColors();
  tintColors();
}

// Function to draw the background of the off-screen canvas
function drawBackground() {
  canvas.noStroke(); 
  canvas.fill(255); 
  canvas.rect(0, 0, 600, 600); // Draw white rectangle covering the canvas
}

// Function to randomly generate and draw shapes onto the off-screen canvas
function drawShapes() {
  let numShapes = 50; // Number of shapes to draw
  let maxSize = 400; // Maximum size of shapes

  // Loop to create and draw random shapes
  for (let i = 0; i < numShapes; i++) {
    let x = random(canvas.width); 
    let y = random(canvas.height); 
    
    let size = random(50, maxSize); // Random size for the shape

    // Generate random fill and stroke colors for the shape
    let fillColor = color(random(255), random(255), random(255));
    let strokeColor = color(random(255), random(255), random(255));

    // Creates an object to represent the shape and adds it to the shapes array
    let shape = {
      x: x,
      y: y,
      size: size,
      fillColor: fillColor,
      strokeColor: strokeColor,
      shapeType: int(random(3)) // Randomly choose shape type (0: ellipse, 1: rectangle, 2: polygon)
    };

    shapes.push(shape); // Add the shape object to the shapes array
  }

  // Loop through shapes array and draw each shape onto the off-screen canvas
  for (let shape of shapes) {
    canvas.fill(shape.fillColor); 
    canvas.stroke(shape.strokeColor);
    canvas.strokeWeight(2); 

    // Draw different types of shapes based on their shapeType property
    if (shape.shapeType === 0) {
      canvas.ellipse(shape.x, shape.y, shape.size, shape.size); // Draw ellipse
    } else if (shape.shapeType === 1) {
      canvas.rect(shape.x - shape.size / 2, shape.y - shape.size / 2, shape.size, shape.size); // Draws rectangle
    } else {
      // Draws polygon with random number of vertices
      let numVertices = int(random(3, 8)); // Random number of vertices between 3 and 7
      let angle = TWO_PI / numVertices; // Angle between vertices
      let halfSize = shape.size / 2; // Half the size of the shape
      canvas.beginShape(); // Begin drawing a custom shape
      for (let a = 0; a < TWO_PI; a += angle) {
        let sx = shape.x + cos(a) * halfSize; // Calculate x-coordinate of vertex
        let sy = shape.y + sin(a) * halfSize; // Calculate y-coordinate of vertex
        canvas.vertex(sx, sy); // Add vertex to the shape
      }
      canvas.endShape(CLOSE); // End drawing the custom shape
    }
  }
}

// Function to generate random colors and store them in the colors array
function generateColors() {
  colors = []; // Clear the colors array
  for (let i = 0; i < numColors; i++) {
    colors.push(color(random(255), random(255), random(255))); // Generate random color and add it to the colors array
  }
}

// Function to apply tinted colors to the off-screen canvas
function tintColors() {
  let colorIndex = 0; // Initialize color index
  // Loop through the canvas in a grid pattern
  for (let y = 0; y < canvas.height; y += 50) {
    for (let x = 0; x < canvas.width; x += 50) {
      let c = colors[colorIndex % colors.length]; // Get the color from the colors array
      canvas.tint(c); // Apply tint with the color
      canvas.image(canvas, x, y); // Draw the off-screen canvas onto itself with the applied tint
      colorIndex++; // Increment color index
    }
  }
}

// Function called when mouse is clicked
function mouseClicked() {
  generateColors(); // Regenerate colors
  canvas.clear(); // Clear off-screen canvas
  drawShapes(); // Redraw shapes on off-screen canvas
  tintColors(); // Apply tinted colors
}

// Function called when mouse is moved
function mouseMoved() {
  generateColors(); // Regenerate colors
  canvas.clear(); // Clear off-screen canvas
  drawShapes(); // Redraw shapes on off-screen canvas
  tintColors(); // Apply tinted colors
}

// Main draw function to display the off-screen canvas onto the main canvas
function draw() {
  image(canvas, 0, 0); // Display off-screen canvas on the main canvas
}

Reading Response:

In the book “The Art Of Interactive Design”, Crawford delves into the concept of interactivity by addressing the misinterpretations of the term “interactivity” itself. He emphasizes the importance of the quality of each subtask for successful interaction, distinguishing between genuine interactivity and instances where the term is diluted. Moreover, Crawford introduces the concept of an “interactivity designer,” highlighting the specialized skills and considerations required for designing interactive experiences. The chapter concludes by addressing the resistance faced by interactivity designers.

In essence, this chapter provides a thought-provoking and insightful exploration of interactivity, challenging conventional understandings and advocating for a more comprehensive approach to designing interactive experiences. Crawford’s perspective not only redefines interactivity but also emphasizes the need for a paradigm shift and the integration of diverse expertise in the evolving field of interactivity design.

Week 3 Assignment – Khalifa Alshamsi

 

For this assignment, I drew inspiration from the rainfall in the UAE this past weekend, a truly cherished occurrence. Given my lack of color perception, I aimed to infuse this natural phenomenon with a vibrant spectrum of colors.

The Sketch:

Description:

The sketch simulates colorful raindrops falling against a light lavender background, which is meant to be a blue-tinted window on a foggy morning; at least, that’s the color I see when I look at it. Each ‘raindrop’ object tracks its position, speed, and color. The raindrops reset once they fall off the bottom of the canvas, simulating continuous rainfall.

Script:

// Defining the Raindrop class
class Raindrop {
  constructor() {
    this.x = random(width);
    this.y = 0; // Start from the top
    this.length = random(10, 20);
    this.speed = random(1, 5);
    this.color = color(random(255), random(255), random(255), 200); 
    // Semi-transparent
  }

  // Method to update raindrop properties for each frame
  fall() {
    this.y += this.speed;
    // Reset the drop when it goes off screen
    if (this.y > height) {
      this.y = 0;
      this.x = random(width);
      this.color = color(random(255), random(255), random(255), 200);
    }
  }

  // Method to draw the raindrop
  show() {
    stroke(this.color);
    line(this.x, this.y, this.x, this.y + this.length);
  }
}

let raindrops = []; // Array to hold Raindrop objects
const numDrops = 400; // Total number of raindrops

function setup() {
  createCanvas(640, 684);
  // Initialize raindrops
  for (let i = 0; i < numDrops; i++) {
    raindrops.push(new Raindrop());
  }
}

function draw() {
  background(230, 230, 250); // Light lavender background
  // Update and draw each raindrop
  raindrops.forEach(drop => {
    drop.fall();
    drop.show();
  });
}

Problems Encountered:

The challenge was ensuring the raindrops’ continuous flow and dynamic color changes without creating visual clutter. To address this, I implemented a reset mechanism for each raindrop once it fell off the screen and used semi-transparent colors to soften the visual impact.

Reading Response – Interactivity and Others

How can we define interactivity? I realize that the word “interactive” has been overused to describe many things that involve some kind of human action. However, similar to the reading, I believe that the human’s interaction with any object must also result in a response from the object; only then is the object interactive. In some sense, this is similar to having a conversation, where the speaker expects a response from the listener. Furthermore, as the reading mentions, interaction is not simply meaningless communication between communicators; it must involve a process of thinking related to the given subject. It is also worth noting that the subjects in the interactive conversation should be at a similar level.

Another aspect that interests me is the notion of whether interactivity is subjective. While the reading tries to define the term interactivity universally, I would like to think that interactivity is subjective. It is not limited to the functionality of the interaction; I would rather argue that some interactive designs are made to target a certain group of people. For example, any interaction that requires a response to text-based questions without audio support could not be considered interactive by young children who have not yet learned to read. Each design for interaction seems to target a certain group, or all, but it is clear to me that interaction is not universal.

Reading response 2: The Art of Interactive Design

Even though we are studying “Interactive Media”, I didn’t ponder much about how interactivity should be defined until I read this article. Crawford’s definition, framing it as a cyclic process where two actors alternately listen, think, and speak, seemed interesting but not all-encompassing. His skepticism about labeling everything as interactive, especially in the context of books or movies, got me thinking. It nudged me to consider a spectrum of interactivity rather than a black-and-white definition.

Low interactivity, for me, could be akin to interacting with a traffic light. While it responds to user input (pressing the button to cross the street), the interaction is limited to a predefined set of responses (changing the signal). Medium interactivity might resemble using a smartphone. While navigating through apps, users can input commands, receive feedback, and customize settings. The smartphone’s interface allows for a degree of personalization and responsiveness to user actions, but it still operates within the confines of preprogrammed functionalities. High interactivity can be exemplified by AI LLM chatbots since their capacity to comprehend intricate language inputs, showcase contextual understanding, respond coherently, and even generate creative content reflects a higher level of engagement. They can generate human-like text and personalized responses yet still lack the consciousness of a human being. However, it is starting to get borderline difficult to differentiate their responses from genuine understanding with bots like Character.ai.

Furthermore, Crawford’s distinction between user interface design and interactivity design struck a chord. It made me reflect on projects where the interface might be visually appealing but lacks the holistic experience that interactivity design aims to achieve. It aligns with my belief that interactive design should engage users not just visually but also cognitively. True interactivity is like a dance of ideas, not a one-way street.

Assignment 3: Dynamic Blend


(Click on screen)

For this assignment, I wanted to create a simple yet appealing and interactive artwork. When the user clicks on the canvas, a few balls of random color appear. These balls move independently, colliding and combining with other balls to form a new ball with the additive color of the two original balls, gradually filling the canvas with a visually striking pattern. I wanted to experiment with the Additive color theory and show how simple colors can create unexpected results.

I created a BouncingBall Class so that each ball is an object with the same properties. The part of the code I am most proud of is initializing the speed and direction for the ball using random and the ternary operator.

constructor(x, y, color) {
    this.x = x;
    this.y = y;
    this.color = color;
    this.radius = 20;
    //varying speed and direction
    this.speedX = random(2, 4) * (random() > 0.5 ? 1 : -1); //50% chance each; if true, assign 1, otherwise -1
    this.speedY = random(2, 4) * (random() > 0.5 ? 1 : -1);
  }

Initially, only one ball was formed using each click, but to make it more interesting, I decided to create balls around the click location in a triangular formation. I used polar coordinates for this, similar to my previous assignment.

function mouseClicked() {
  let randomColor = color(random(255), random(255), random(255));
  //creating multiple balls around location
  let formationRadius = 60; //distance from click
  for (let angle = 0; angle < 360; angle += 120) { //3 balls at equal angles
    //polar coordinates for position
    let x = mouseX + cos(radians(angle)) * formationRadius; 
    let y = mouseY + sin(radians(angle)) * formationRadius;
    let backgroundBall = new BouncingBall(x, y, randomColor);
    ball_array.push(backgroundBall);
  }
}

The main technical challenge I initially faced was making sure that two colliding balls create a new ball with their combined color and that the original balls disappear. The other challenge was to make it look aesthetically pleasing, which is why I decided to add the blurred trails for the balls.
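
For reference, here is a rough sketch of that collide-and-combine step, not my exact code: it assumes the ball_array of BouncingBall objects from above (each with x, y, radius, and a p5.Color in color), and the additive mix via red(), green(), and blue() plus the midpoint spawn are simplifications for illustration.

// Rough illustration of the collide-and-combine step (not the exact code of this sketch).
// Assumes ball_array holds BouncingBall objects with x, y, radius, and a p5.Color in .color.
function handleCollisions() {
  for (let i = ball_array.length - 1; i >= 0; i--) {
    for (let j = i - 1; j >= 0; j--) {
      let a = ball_array[i];
      let b = ball_array[j];
      // Two balls touch when the distance between centers is less than the sum of their radii
      if (dist(a.x, a.y, b.x, b.y) < a.radius + b.radius) {
        // Additive color: add the channels and cap each at 255
        let mixed = color(
          min(255, red(a.color) + red(b.color)),
          min(255, green(a.color) + green(b.color)),
          min(255, blue(a.color) + blue(b.color))
        );
        // Spawn the combined ball at the midpoint, then remove the two originals
        ball_array.splice(i, 1);
        ball_array.splice(j, 1);
        ball_array.push(new BouncingBall((a.x + b.x) / 2, (a.y + b.y) / 2, mixed));
        return; // handle one merge per frame to keep the illustration simple
      }
    }
  }
}
// handleCollisions() would be called once per frame from draw().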

 

I also wanted to highlight that the final result was inspired by my initial idea: red and blue balls combine to make a growing purple ball.

(click on screen)
This idea was inspired by the same anime character in my portrait from the first assignment.

Overall, I am satisfied with my final output. It might not be as unique as I wanted it to be since it was also inspired by previous assignments we did in class. However, it looks visually pleasing and was fun to make. I would like to improve the artwork by creating a pattern in the background using the combined colors.

Week 3: [Object Oriented] Coding my own p5js Game Engine Part 1

Below is the p5 sketch, hover 👆 anywhere to interact with the robot.

In case the p5js Editor website is down, below is a recording of the working demo on YouTube.

TL;DR : Conceptualization

Pi’s Practicality Walker is an Inverse Kinematics-powered, procedurally animated simulation of a giant mechanical walker in p5.js. None of the animations are hard-coded by Pi; they are calculated on the spot, on demand. You can hover the mouse pointer to move the body of the walker robot around, and the leg movements will adjust as they should.

1) 🤔 Long-Winded Conceptualization

I was watching Doctor Strange in the Multiverse of Madness, and having a good time fantasizing myself deep in the movie… as Dr. Stephen Strange Pi, the Sorcerer Engineer Supreme, the Master of the Mystic Arts Engineering Arts.

The only difference is that unlike Doctor Strange, I am the perfect boyfriend to all my ex-girlfriends.

And then I suddenly saw this delicious octopus.

In the class, we are learning Object Oriented Programming, and I am feeling the urge to write my own little mini game engine in p5js (in preparation for my midterm project). And I love mechanical things soooo sooo much. Hence, a giant mechanical octopus walking over the land, controllable with the mouse is a perfect idea.

Hence the piece “Pi’s Practicality Walker” is born.

To get such a walking animation, Japanese animation master Hayao Miyazaki would pour his heart and soul into his artwork and draw every frame of the animation (that is, 24 frames for 1 second of motion). But I am not Hayao Miyazaki.

But I am not Hayao Miyazaki.

~ Pi (2024)

Hence, I need to utilize my super lazy sneaky hacks to make this happen. Luckily, if you have a robotics background, the Inverse Kinematics and Procedural Animation techniques come in handy. Instead of going through the blood, sweat, and tears of drawing or hard-coding the animations, we can use these mathematical goodies to generate animation automatically in real time, allowing for a more diverse range of actions than would be practical with tediously predefined animations.

2) ⚙️ Technical Plan of Attack & Implementation

The part of the code I am very proud of is, of course, objectifying my octopus/spider 🕷️. Since it has a central body composed of multiple legs, I can easily define the Mechanical Leg class and the Body class as follows.

//This is the class for the individual legs
class MechanicalLeg {
  constructor(numSegments, segmentLength, isRightFacing = true) {
    this.numSegments = numSegments;
    this.segmentLength = segmentLength;
    this.isRightFacing = isRightFacing; // New parameter to determine the facing direction
    this.angleX = 0;
    this.angleY = 0;
    this.points = [];
    this.totalLength = this.segmentLength * (this.numSegments - 1);
  }

  update(targetX, targetY, canvasWidth, canvasHeight) {
    this.totalLength = this.segmentLength * (this.numSegments - 1);
    this.angleX = 0;
    this.angleY = 0;
    this.legLength = max(
      dist(targetX, targetY, canvasWidth / 2, canvasHeight / 2),
      2
    );

    let initialRotation = atan2(
      targetY - canvasHeight / 2,
      targetX - canvasWidth / 2
    );
    let rotation
// ... and so on

Then you just spawn the legs on the body, which covers the object and instance creation.

//Then, attach the legs to the body instance from the body class below
//Spider is walking and draggable
class SpiderBody {
  constructor(x, y) {
    this.position = createVector(x, y);
    this.baseY = y; // Base y-position to oscillate around
    this.dragging = false;
    this.dragOffset = createVector(0, 0);
    this.oscillationAmplitude = 30; // Amplitude of the up-and-down movement
    this.oscillationSpeed = 0.05; // Speed of the up-and-down movement
  }

  update() {
    this.position.x = mouseX - 50;
    // Apply a sin motion when not dragging
    this.position.y =
      mouseY +
      sin(frameCount * this.oscillationSpeed) * this.oscillationAmplitude;
  }
//...

As per the project requirements, arrays were used to hold the leg objects for the walker robot body.

// Line 436
function setup() {
  createCanvas(windowWidth, windowHeight);
  gaitHorizontalDistance = windowWidth / 0.7;
  spiderBody = new SpiderBody(width / 2, height / 2 + 100);
  // Initialize leg instances and add them to the legs array
  legs.push(new MechanicalLeg(4, 180, true)); // Right-facing leg
  legs.push(new MechanicalLeg(4, 180, false)); // Left-facing leg
  legs.push(new MechanicalLeg(5, 150, true)); // Another right-facing leg
  legs.push(new MechanicalLeg(5, 150, false)); // Another left-facing leg
  legs.push(new MechanicalLeg(4, 200, true)); // And so on...
  legs.push(new MechanicalLeg(4, 200, false));

Now we have a giant machine with legs, and the code is reusable and modular, but it is not moving yet. Inverse Kinematics is the art of calculating joint angles: given a particular end-effector coordinate in space, the robot knows which joint angles to move to in order to reach that target point. Hence, the animations can be automated this way.

Inverse Kinematics & Procedural Animation

I stole the mathematical model from the University of Illinois lecture slides here: Introduction to Robotics Lecture 11: Inverse Kinematics (https://publish.illinois.edu/ece470-intro-robotics/files/2021/10/ECE470Lec11-2.pdf)

The key idea is an algorithm that iteratively adjusts the angle of each segment so that the end effector reaches, or points towards, the target. The mathematics primarily involves trigonometry to calculate the angles and positions of each segment in 2D space. The model I am using is outlined below, and a minimal code sketch of the iterative loop follows the symbol definitions.

Step 1: Initial Tangent Alignment
Point the leg towards the target with an initial rotation φ_0 = atan2(y_t − h/2, x_t − w/2).

Step 2: Desired Leg Length Calculation
Set the desired total leg length L = max(dist((x_t, y_t), (w/2, h/2)), 2).

Step 3: Iterative Angle Adjustments
  • Initialize θ_i = 0 for all segments.
  • Iteratively adjust the angles to stretch or contract the leg.
  • Apply an incremental angle change Δθ per iteration.
  • Update θ_i for each segment.

Step 4: Segment Position Calculation
Compute the end coordinates (x_i, y_i) of each segment from the accumulated angles and the segment length ℓ.

Step 5: Check Total Leg Length
The iterative process continues until the total length of the leg aligns with the desired length L, so we keep checking. This is the stopping condition.

Step 6: Rotation Adjustment
If the legs are at the back, we have to measure the angle in the mirrored way, so mirror the angles.

The formal definitions of the funky symbols above are:

  • n – number of segments in the leg (numSegments).
  • ℓ – length of each segment (segmentLength).
  • θ_i – angle of the i-th segment.
  • (x_i, y_i) – coordinates of the end of the i-th segment.
  • (x_t, y_t) – target coordinates (mouse position).
  • w, h – width and height of the canvas (canvasWidth, canvasHeight).
  • L – desired total length of the leg (legLength).
  • φ_0 – initial rotation angle to point towards the target.
  • φ – total rotation angle of the leg.
  • Δθ – incremental angle change per iteration.
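
To make the recipe concrete, below is a minimal standalone sketch of the iterative adjustment loop. It borrows the variable names from MechanicalLeg (numSegments, segmentLength, legLength, initialRotation), but the step size, tolerance, and the alternating zig-zag fold are simplifying assumptions of this illustration, not the exact logic of my walker.

// Illustrative only: a simplified version of the iterative IK loop described above.
// The tolerance, step size, and alternating fold pattern are assumptions.
function solveLegAngles(numSegments, segmentLength, legLength, initialRotation) {
  let angles = new Array(numSegments).fill(0);          // Step 3: initialize all segment angles to 0
  const maxReach = segmentLength * numSegments;
  const targetLength = Math.min(legLength, maxReach);   // the leg cannot stretch beyond full extension
  const deltaTheta = 0.01;                              // incremental angle change per iteration (assumed value)

  for (let iter = 0; iter < 1000; iter++) {
    // Step 4: accumulate angles to find the end of each segment (and the end effector)
    let x = 0, y = 0, heading = initialRotation;
    for (let i = 0; i < numSegments; i++) {
      heading += angles[i];
      x += segmentLength * Math.cos(heading);
      y += segmentLength * Math.sin(heading);
    }
    const currentLength = Math.hypot(x, y);

    // Step 5: stop once the total leg length is close enough to the desired length
    if (Math.abs(currentLength - targetLength) < 5) break;

    // Step 3 (continued): fold or unfold the joints a little each iteration
    const sign = currentLength > targetLength ? 1 : -1;
    for (let i = 1; i < numSegments; i++) {
      angles[i] += sign * deltaTheta * (i % 2 === 0 ? 1 : -1); // alternate directions to zig-zag like a leg
    }
  }
  return angles; // Step 6 (mirroring for left-facing legs) would flip these angles
}

The returned angles can then be used by whatever code draws the segments.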

 

3) 🎨 Artistic Plan of Attack & Implementation

Once the robot is working, we enhance the aesthetics by adding parallax grass, bringing back the swarms and gears from previous pieces, and playing a good old western slide-guitar cowboy song, visualized through a fast Fourier transform (FFT) in the style of Ryoichi Kurokawa.
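
For anyone curious about the FFT part, a stripped-down, standalone version using the p5.sound library looks roughly like the sketch below; 'cowboy.mp3' is a placeholder file name, and this is a simplification, not the exact visualizer used in the piece.

// Simplified standalone p5.sound FFT visualizer (illustrative, placeholder file name).
let song, fft;

function preload() {
  song = loadSound('cowboy.mp3'); // placeholder file name
}

function setup() {
  createCanvas(windowWidth, 200);
  fft = new p5.FFT(); // analyzes whatever audio is currently playing
  song.loop();        // note: browsers may require a user gesture (e.g. userStartAudio()) before audio plays
}

function draw() {
  background(0);
  let spectrum = fft.analyze(); // 1024 amplitude values between 0 and 255
  noStroke();
  fill(255);
  for (let i = 0; i < spectrum.length; i += 8) {
    let x = map(i, 0, spectrum.length, 0, width);
    let h = map(spectrum[i], 0, 255, 0, height);
    rect(x, height - h, width / (spectrum.length / 8), h);
  }
}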

4) 💪 Challenges

Again, no challenge. This was an enjoyable exercise.

5) 💡 Potential Improvements

To make the movements of the walker more realistic, as always, I could have used proportional–integral–derivative (PID) controllers; my current model simply moves at constant speed.
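
For illustration only, a bare-bones PID-style controller in JavaScript might look like the sketch below; the class name, gains, and usage are hypothetical and not taken from (or tuned for) the actual walker.

// Hypothetical PID controller for a single coordinate; gains are made up for illustration.
class PidController {
  constructor(kp, ki, kd) {
    this.kp = kp;
    this.ki = ki;
    this.kd = kd;
    this.integral = 0;
    this.prevError = 0;
  }

  // Returns a velocity that eases the value towards the target instead of moving at constant speed.
  update(current, target, dt) {
    const error = target - current;
    this.integral += error * dt;
    const derivative = (error - this.prevError) / dt;
    this.prevError = error;
    return this.kp * error + this.ki * this.integral + this.kd * derivative;
  }
}

// Hypothetical usage inside the body update, using p5's deltaTime (milliseconds):
// const pid = new PidController(0.08, 0.0, 0.02);
// body.x += pid.update(body.x, mouseX, deltaTime / 1000);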

6) 🖥️ Source code

🖥️ Source code is just a single sketch.js file at : https://github.com/Pi-31415/Intro-To-IM/blob/main/Assignment-3/assignment3.js

📖 References :

Good artists copy, great artists steal. Of course, I stole everything by googling the publicly available stuff below 🤫😉.

Assignment #3 – Reading Response – Are All Interactions Interactive?

After reading this text, it seems to me that there are two views on interactivity. The first is the overused, under-understood meaning of interaction or interactivity, which Crawford criticizes. In that definition, interactivity rests on the premise that any person or object coming into contact with another object forms an interaction. The “interactive” rug, for example, is fitting, because the child logically “interacts” with the rug. Yet the rug doesn’t return anything. The same goes for the fallen tree. In fact, in those cases, humans act or react, but do not interact. The other meaning of interact, which Crawford defines, claims that both entities should “listen, think, and speak” (literally or metaphorically). But even when the two entities are capable of doing so, it doesn’t mean that they are doing it well. So, can “bad” interactivity be disregarded as interactivity altogether? Though bad, doesn’t it already warrant that interactivity is present?

Another point that follows is that of interactivity design. Based on the aforementioned definitions, it is interesting to think about the role of the interactive designer. The latter, in “interactivizing” their design, acts on a computer. But then, can that be called interaction? Or is the interaction merely the exchange between the viewer and the final design? I believe that in a lot of cases, yes, we can say that the designer creating the design forms an interaction with the computer, but are there cases in which one of the three components of interaction is lacking?

Assignment 3 – “The Other World” by Sara Al Mehairi

Overview

As a starting point, I decided to experiment with our class codes and test color palettes, and through that, I stumbled upon shades of blue and pink that immediately reminded me of a scene from the famous movie “Coraline.” With that in mind, I decided to recreate, or at least attempt to recreate, the tunnel to the other world. In the movie, when Coraline is trying to escape the Other World, she notices that the tunnel feels longer and steeper each time she uses it. That was my goal for this assignment, and using functions helped me achieve it.

Highlight

class RotatingPolygon {
  constructor(centerX, centerY, x, y) { 
    this.centerX = centerX; //center x coordinate
    this.centerY = centerY; //center y coordinate
    this.x = x; //starting x coordinate
    this.y = y; //starting y coordinate
    this.angle = atan2(this.y - this.centerY, this.x - this.centerX); //calculate initial angle
    this.speed = random(0.01, 0.03); //set random rotation speed
    this.radius = dist(centerX-300, centerY-100, x+20, y-20); //calculate radius
    this.sides = int(random(5, 10)); //randomize number of sides for each polygon
  }

An aspect of the code that I take pride in is the usage of cosine, sine, tangent, and pi, along with the radius; they were very useful. Something new I learned was that atan() gives an angle between -90° and 90°, whereas atan2() gives an angle between -180° and 180°. I am also proud of how easy it has become to change the outcome with a simple adjustment, given that I created a class for rotating polygons (I initially had circles/ellipses, but that wasn’t accurate enough). Though I must admit that many of the elements are the result of trial and error, and I am still exploring this very interesting idea of using OOP to create graphics.
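
As a quick standalone illustration of that difference (not part of my sketch):

// Standalone demo: the same point, two different answers.
function setup() {
  let x = -1, y = -1;              // a point in the third quadrant
  print(degrees(atan(y / x)));     // 45   -> atan() only sees the ratio, so it loses the quadrant
  print(degrees(atan2(y, x)));     // -135 -> atan2() takes y and x separately and keeps the quadrant
}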

Reflection

Attempting to replicate the colors from the original scene was definitely challenging, especially since I had already included transparency/opacity components. I believe there is room for improvement in that aspect. In addition, it took some time to understand where and how to position all the shapes accordingly and in line with the original scene. Overall, I believe that I’ve created something worth sharing!

Assignment #3 – Code – ☆Manic! At The Disco☆

For this assignment, I wanted to create a visual experience inspired by orange, purple, pink, and yellow club lights:

I love how the lights all stem from one source, but I did not want to limit myself to that, so I thought I could create some sort of tridimensional effect with trailing that could better represent the “feeling” of experiencing those club lights.

So I created a line class and made 10 lines (with bezier curves) that all begin at (0,0) with varying speeds. I also lowered the background alpha to 10, as transparency allows for trails, which I wanted in order to create this dynamic effect. I tried multiple shapes, but ended up opting for bezier curves. The points for the bezier curves all follow the same line, which is why there is no “curve” per se. However, the movement of the bezier curves looked much nicer than that of the lines, which is why I chose that. The bezier curves also emphasized the 3D effect better than the lines.
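
To show the trailing idea in isolation, here is a simplified standalone sketch (not the code of this piece), where a moving dot stands in for the bezier lines and the translucent background creates the trails:

// Minimal illustration of the trailing technique: a low-alpha background leaves ghosts of previous frames.
let x = 0;

function setup() {
  createCanvas(400, 400);
  background(0);
}

function draw() {
  background(0, 0, 0, 10); // alpha 10: old frames fade out slowly, producing trails
  noStroke();
  fill(255, 150, 0);
  circle(x, height / 2 + sin(frameCount * 0.05) * 100, 20);
  x = (x + 3) % width;
}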

Every once in a while, then, the lines meet around the top corner, reproducing the effect of disco lights, before each going their own way once again:

For the colors, I created an array of five different colors which I then incorporated into a linear interpolation function in order to create a gradient:

With the help of ChatGPT, I set two data points, “this.currentColorIndex” and “this.nextColorIndex”. For the first, I randomized a value from the array so that the first color of each line varies. The second data point sets the following color, incrementing the index of the previous “current” color by 1, ensuring that the “next” color that appears depends on the previous one.

this.currentColorIndex = floor(random(colors.length)); // set the starting color of each line as a random color from the "colors" array
this.nextColorIndex = (this.currentColorIndex + 1) % colors.length; // ensure the "next" color for each line depends on the initial randomized color
this.lerpAmount = 0; // begin with no interpolation

Then, in the display function inside the class, I set three variables: “currentColor”, “nextColor”, and “displayColor”, in which I set the data from above. The first two variables are then implemented in the third, which is itself incorporated in the stroke function. Altogether, this is what allows the lines to be displayed in the different colors.

display() {
  let currentColor = color(colors[this.currentColorIndex]); // setting the "current" color variable
  let nextColor = color(colors[this.nextColorIndex]); // setting the "next" color variable
  let displayColor = lerpColor(currentColor, nextColor, this.lerpAmount); // setting the "display" color variable
  stroke(displayColor);
  strokeWeight((this.x + this.y) / 40);

Finally, I created a last function called “updateColor” in which I set the linear interpolation amount and created an “if” statement for it to reset (and hence for the colors to keep varying).

updateColor() {
    this.lerpAmount += 0.04; // linear interpolation amount to define the speed of the transition

    if (this.lerpAmount >= 1) { // when linear interpolation amount exceeds 1, reset it to 0
      this.lerpAmount = 0;
      this.currentColorIndex = this.nextColorIndex;
      this.nextColorIndex = (this.nextColorIndex + 1) % colors.length;

Overall, I really enjoyed creating this code. It was definitely hard as last week’s material was a bit dense, but it was good practice. I wanted to incorporate an “if mousePressed” function for the colors to change to a different color palette, but for some reason, it would lag after some time and the sketch would just freeze. That is something I would like to work on for another time, though!

 

Week 3 : Chris Crawford Reading – “Redefining Interactivity” by Pi

Once, a very thirsty traveler came to the bar and asked, “Water!”

The bartender, raising an eyebrow, says “Sure, sire, would you like room temperature or icy chill water 🥛?”

“Uh, cold please”

“Do you lean towards distilled, mineral, or perhaps a sparkling variety?”

The traveler scratching his head says, “Just regular water is fine”.

The bartender goes “In terms of regular water, we have classic spring regular water or purified tap…”


Judging by Chris Crawford’s Interactive Design definition, this is an interactive process.

  • There are two actors – the bartender and the traveler
  • They actively listen to each other,
  • and think (Whether bartender thinks or not is debatable)
  • and speak

The only catch here is that this interactivity “did not solve the problem.” They did interact, and there is a flow of information between them, but the problem remains unsolved.

Just like how Crawford ranted about people of his day re-branding “The Same Old Stuff” as “New Interactive Technology” with hype, and criticized how “plays” rank about a 0.01 on a 10-point Crawford Scale of Interactivity, I am also going to use this writing to rant about how his interactivity definition ranks pretty low (say around 3.14ish) on the 100-point Pi’s Scale of Aesthetic Practicality. This definition of [Interactivity = “two actors” AND “listen” AND “think” AND “speak”] ought to be, at the very least, expanded to be applicable.

Expanding the definition of Interactivity

Personally, when I encounter “Interactivity,” I see it not as a “process” (unless you are dealing with human-to-human problems, where you have to Talk It Out Loud ™). Normally, in the context of Human-Software Interaction and UI/UX design, interactivity is about how efficiently you can give the user the complete tutorial so that they can utilize the system with minimal guidance.

In more formal terms, if we ignore the video game industry (because, by definition, games have to be interactive), I see interactivity as a measure of “the rate of transfer of information between two agents (i.e. Human-Computer), where this transferred information helps solve human problems using the computer with minimal input in minimal time,” just as in the diagram below.

Note that my definition explicitly states that the more interactive the system is,

  1. the more time it saves and
  2. the less guidance it needs to give the user in the future.

Otherwise, if we go by Crawford’s definition, we fall into the danger of an “Impractical Interactivity Deadlock Situation,” where two parties keep on interacting without results, just like in the bartender joke above.

In short, the holy grail of “Interactivity” is, ironically, to minimize more “interactivity” in the future. Because if you have to keep interacting… if you have to keep going to the bank because customer service keeps tossing you back and forth between departments and your credit card issue is still not solved, the “interactivity” is simply… not “interacting”.

In short, the holy grail of “Interactivity” is, ironically, to minimize more “interactivity” in the future.

~ Pi

Best Interactivity is Implicit Interactivity, change my mind

Personally, I agree with Crawford that “Good Interactivity Design integrates form with function.” However, the only pebble in my shoe is that Crawford has the explicit “speak” component in his definition. In well-designed software, you don’t necessarily have to explicitly speak. Good design speaks for itself; the learning curve is so smooth that users enlighten themselves without any guidance or hitting F1 for the user manual.

There was a good old time in UI design when “Skeuomorphism” reigned – a user interface approach that adds real-world design cues to virtual objects to make those objects more understandable.

This is the perfect marriage of form and function.

For instance, just look at the Garage Band guitar User Interface.

Super short, sweet, and simple. Anyone who has intimately slid their fingers up a fretboard does not need an additional tutorial in order to play the Garage Band guitar. It is intuitive. There is no need for an expanding speech bubble explicitly saying “In order to use the overdrive/flanger pedal, tap here.”

Also, the interface is just beauty in its purest form 😌👌.

The design itself is already intuitive and interactive.

However, just like the average American marriage [source], after 8 years the form and function got a divorce ☠️… and the entire world caught the minimalism/flat-design virus, to the extent that intuition is murdered (yes, I am looking at you, Material Design and Neumorphism).

The best example of such a UI nightmare is the audio mute/unmute icon.

 

After years of experience during COVID, and after using Zoom in countless professional settings, my dad still cannot tell whether the audio is muted or not just by looking at the button. (Does the red color mean that it is already muted? Or do I click the red and it will mute?)

Whereas a sensible, more intuitive audio on/off button would look more like this.

(Flip the switch: light means it is currently on, no light means it is currently off… Everyone knows this from interacting with other electronic gadgets; there is no need to specially train the user.)

Hence, when you don’t have this intuition built into the original design, explicit interactivity (a.k.a. helper texts here and there, or your IT support guy) has to step in unnecessarily. This interactivity is just bloat in the system, and a waste of space and resources.

Well, as they say, “communication is key,” so I appreciate the importance of interactivity in human-software interactions. However, in the context of good software, such “talk-back” explicit interactivity should be the last resort a good designer falls back to.

A good doctor doesn’t easily prescribe antibiotics… this is supposed to be the last resort.

Hence, from the artistic engineer’s point of view: when designing anything, intuitive function has to come first, then the form, and only then should you throw in the “explicit” when there is no other way out.

Perhaps it is time we rebrand Crawford’s definition as Practically-Aesthetic-Interactivity (abbreviated to PAI? Hahaha, very punny, Pi, very punny), and we may… just may… see more intuitive software in the future.