Week 11: In-Class Exercises

Final Video Demonstration can be found here: https://youtu.be/CTLXGrMEBxU

Cover image for this week’s production.
Exercise 1:

“make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on Arduino is controlled by p5”

The following p5.js code was used for this exercise:

let sensorValue = 0; // To store the sensor value from Arduino

function setup() {
  createCanvas(640, 480);
  textSize(18);
}

function draw() {
  background(220);

  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);

    // Display the sensor value
    text('Sensor Value: ' + sensorValue, 20, 50);

    // Map the sensor value to the horizontal position of the ellipse
    let ellipseX = map(sensorValue, 0, 1023, 0, width);

    // Draw the ellipse in the middle of the canvas vertically
    fill(255, 0, 0);
    ellipse(ellipseX, height / 2, 50, 50);
  }
}

function keyPressed() {
  if (key == " ") {
    setUpSerial(); // Start the serial connection
  }
}

// This function is called by the web-serial library
function readSerial(data) {
  if (data != null) {
    let fromArduino = trim(data); // Trim any whitespace
    if (fromArduino !== "") {
      sensorValue = int(fromArduino); // Convert the sensor value to an integer
    }
  }
}

The following code was used in the Arduino IDE for this exercise:

// make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis,
// in the middle of the screen, and nothing on Arduino is controlled by p5

int sensorPin = A0; // Single sensor connected to A0

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(sensorPin); // Read sensor value
  Serial.println(sensorValue); // Send sensor value to p5.js
  delay(50); // Short delay for stability
}

 

Exercise 2:

“make something that controls the LED brightness from p5”

The following p5.js code was used to bring this exercise to life:

let brightness = 0; // Brightness value to send to Arduino
let slider; // Slider element for setting the brightness

function setup() {
  createCanvas(640, 480);
  textSize(18);

  // Create a slider to control brightness
  slider = createSlider(0, 255, 0);
  slider.position(20, 50);
}

function draw() {
  background(220);

  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);

    // Display brightness value
    text("Brightness: " + brightness, 20, 90);

    // Update brightness from the slider
    brightness = slider.value();
    
    // Send brightness to Arduino
    writeSerial(brightness + "\n");

  }
}

function keyPressed() {
  if (key == " ") {
    setUpSerial(); // Start the serial connection
  }
}

function readSerial(data) {
  // The Arduino echoes the brightness back; ignore empty lines
  if (data != null) {
    let fromArduino = trim(data); // Trim whitespace
    if (fromArduino !== "") {
      brightness = int(fromArduino); // Parse the echoed value into an integer
    }
  }
}

The following Arduino code was used for this particular exercise:

//make something that controls the LED brightness from p5

int ledPin = 3;

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (Serial.available()) {
    int brightness = Serial.parseInt();
    if (Serial.read() == '\n') {
      brightness = constrain(brightness, 0, 255);
      analogWrite(ledPin, brightness);
      Serial.println(brightness); // Send brightness to p5.js
    }
  }
}

 

Exercise 3:

This exercise takes the gravity-wind example, lights an LED every time the ball bounces, and lets an analog sensor control the wind. The following code, an alteration of Professor Aaron Sherwood’s example, was used:

let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;

let windSensorValue = 0; // Value from the wind sensor
let connectButton; // Button for connecting to the serial port

function setup() {
  createCanvas(640, 360);
  noFill();

  position = createVector(width / 2, 0); // Initial position of the ball
  velocity = createVector(0, 0); // Initial velocity
  acceleration = createVector(0, 0); // Initial acceleration
  gravity = createVector(0, 0.5 * mass); // Gravity force
  wind = createVector(0, 0); // Initial wind force

  // Create a button to initiate the serial connection
  connectButton = createButton("Connect to Serial");
  connectButton.position(10, 10);
  connectButton.mousePressed(setUpSerial); // Trigger serial connection on button press
}

function draw() {
  background(255);

  if (!serialActive) {
    text("Click 'Connect to Serial' to start", 20, 50);
    return; // Exit the draw loop until the serial connection is established
  }

  // Map wind sensor value to wind force (affects horizontal movement)
  wind.x = map(windSensorValue, 0, 1023, -1.5, 1.5); // Adjust force range as needed

  // Apply forces
  applyForce(wind); // Apply wind force
  applyForce(gravity); // Apply gravity force

  // Update velocity and position
  velocity.add(acceleration);
  velocity.mult(drag); // Apply drag (friction)
  position.add(velocity);
  acceleration.mult(0); // Reset acceleration

  // Ball bounce logic (vertical boundary)
  if (position.y > height - mass / 2) {
    position.y = height - mass / 2; // Place the ball on the ground
    velocity.y *= -0.9; // Reverse and dampen vertical velocity

    // Notify Arduino to toggle the LED when the ball touches the ground
    writeSerial("1\n"); // Send '1' to Arduino
  } else {
    // Ensure the LED is off when the ball is not touching the ground
    writeSerial("0\n"); // Send '0' to Arduino
  }

  // Draw the ball
  ellipse(position.x, position.y, mass, mass);
}

function applyForce(force) {
  // Newton's 2nd law: F = M * A -> A = F / M
  let f = p5.Vector.div(force, mass); // Scale force by mass
  acceleration.add(f); // Add force to acceleration
}

// Reset the ball to the top of the screen when the space key is pressed
function keyPressed() {
  if (key === " ") {
    position.set(width / 2, 0); // Reset position to top center
    velocity.set(0, 0); // Reset velocity to zero
    mass = random(15, 80); // Randomize mass
    gravity.set(0, 0.5 * mass); // Adjust gravity based on new mass
  }
}

// Serial communication: Read sensor value from Arduino
function readSerial(data) {
  if (data != null) {
    let trimmedData = trim(data);
    if (trimmedData !== "") {
      windSensorValue = int(trimmedData); // Read wind sensor value
    }
  }
}

The following code was used in the Arduino IDE to bring this to life:

//gravity wind example
int ledPin = 2;     // Pin connected to the LED
int windPin = A0;   // Analog pin for the potentiometer (A0)

void setup() {
  Serial.begin(9600);        // Start serial communication
  pinMode(ledPin, OUTPUT);   // Set the LED pin as an output
  digitalWrite(ledPin, LOW); // Turn the LED off initially
}

void loop() {
  // Read the analog value from the potentiometer
  int windValue = analogRead(windPin);

  // Send the wind value to p5.js over serial
  Serial.println(windValue);

  // Check if a signal is received from p5.js for the LED
  if (Serial.available()) {
    char command = Serial.read(); // Read the signal from p5.js
    if (command == '1') {
      digitalWrite(ledPin, HIGH); // Turn on the LED when the ball touches the ground
    } else if (command == '0') {
      digitalWrite(ledPin, LOW);  // Turn off the LED
    }
  }

  delay(5); // Small delay for stability
}


The following schematic, provided in class, was used for all three exercises with slight modifications:

Week 11: Design Meets Disability

One of the main arguments I extracted from this week’s reading is the interplay between fashion and discretion in design, particularly in the context of disability. Whether a design should blend in or stand out is subjective and depends on the user. For instance, dental implants and dentures were initially functional medical solutions meant to resemble natural teeth. Over time, however, these appliances have become fashion statements, with materials like gold being used to signify wealth or spirituality. This shift exemplifies how functional designs can appeal to broader audiences and evolve into tools of self-expression. Similarly, the example of the athlete in the reading, who embraced her prosthetic legs as a fashionable part of her identity, demonstrates how design choices can transcend functionality to reflect individuality. This underscores the idea that the line between utility and self-expression is fluid and often shaped by societal influences.

The reading also provokes thought about the ethics of design, particularly when it comes to medical appliances. While designers from unrelated fields might bring fresh perspectives, their lack of specialized knowledge can lead to unintended consequences. For example, the trend of hearing aids resembling earphones doesn’t address how excessive earphone use may itself lead to hearing loss, creating a harmful cycle. This highlights the risk of prioritizing aesthetics or profit over the users’ actual needs. These insights also apply to interactive design, reminding us that functionality and user experience must take precedence over superficial appeal. Thoughtful design must strike a balance, respecting the user’s needs and individuality while avoiding exploitation or unnecessary commercialization.

Week 10: The Arduino Piano (Takudzwa & Bismark)

The final product, for your convenience, is here: https://youtu.be/62UTvttGflo

Concept:

The motivation behind our project was to create a unique piano-like instrument using Arduino circuits. By utilizing two breadboards, we had a larger workspace, allowing for a more complex setup. We incorporated a potentiometer as a frequency controller—adjusting it changes the pitch of the sounds produced, making the instrument tunable. To enhance the experience, we added synchronized LED lights, creating a visual element that complements the sound. This combination of light and music adds a fun, interactive touch to the project. Here’s the project cover:

The tools used for this project were: a potentiometer, a piezo speaker, LEDs, 10k and 330 ohm resistors, push buttons, and jumper wires.

Execution:

The following was the schematic for our project, which served as the foundation that allowed us to successfully execute this project:

The following Arduino code snippet brought our project to life, controlling both sound and light to create an interactive musical experience:

void setup() {
  // Set button and LED pins as inputs and outputs
  for (int i = 0; i < 4; i++) {
    pinMode(buttonPins[i], INPUT);       // Button pins as input
    pinMode(ledPins[i], OUTPUT);         // LED pins as output
  }
  pinMode(piezoPin, OUTPUT);             // Speaker pin as output
}

void loop() {
  int potValue = analogRead(potPin);                    // Read potentiometer value
  int pitchAdjust = map(potValue, 0, 1023, -100, 100);  // Map pot value to pitch adjustment range

  // Check each button for presses
  for (int i = 0; i < 4; i++) {
    if (digitalRead(buttonPins[i]) == HIGH) {         // If button is pressed
      int adjustedFreq = notes[i] + pitchAdjust;      // Adjust note frequency based on potentiometer
      tone(piezoPin, adjustedFreq);                   // Play the adjusted note
      digitalWrite(ledPins[i], HIGH);                 // Turn on the corresponding LED
      delay(200);                                     // Delay to avoid rapid flashing
      noTone(piezoPin);                               // Stop the sound
      digitalWrite(ledPins[i], LOW);                  // Turn off the LED
    }
  }
}
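
One note on the snippet: it references globals (buttonPins, ledPins, piezoPin, potPin, notes) that are declared at the top of the full sketch. A minimal version of those declarations might look like this (the exact pin numbers and base frequencies here are placeholders, not necessarily the values we used):

int buttonPins[] = {2, 3, 4, 5};    // Push buttons, wired with the 10k pull-down resistors
int ledPins[] = {8, 9, 10, 11};     // One LED per button, through 330 ohm resistors
int piezoPin = 12;                  // Piezo speaker pin
int potPin = A0;                    // Potentiometer for pitch adjustment
int notes[] = {262, 294, 330, 349}; // Approximate frequencies for C4, D4, E4, F4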

Once again, the final demonstration can be found here: https://youtu.be/62UTvttGflo

Reflection:

Although our project may seem simple, we encountered several challenges during its development. Initially, we accidentally placed the digital pins incorrectly, preventing the project from functioning as expected. After hours of troubleshooting, we sought help to identify the issue. This experience turned into a valuable teamwork activity, helping us grow as students and problem-solvers. I view challenges like these as opportunities to build skills I can apply to future projects, including my final one. To enhance this project further, I would improve its visual design and sound quality to make it more appealing to a wider audience. That’s all for now!

Week 10: Reading Response

Bret Victor – A Brief Rant on the Future of Interaction Design

My initial instinct upon reading Bret Victor’s article was to push back and think, “But you’re not really improving anything.” This reaction softened after reading his direct responses to such criticisms. Victor’s defensiveness, in some ways, protects his vision—arguably so. But beyond that, the true point of his article lies in his challenge to our current conception of interactivity. He questions why we’ve limited ourselves to “single-finger” interaction, arguing that we’re barely scratching the surface of what interactive technology could become. I found myself agreeing, especially when he mentioned that if, 20 years down the line, all we had were glorified iPads, it would be a sign of stagnation. Now, over a decade since the article was written, we’ve indeed developed more advanced interfaces—like VR, AR, and even some early-stage holographic tech—but these technologies haven’t become mainstream, and they haven’t revolutionized interaction to the extent Victor imagined, which proves his point to a degree.

Reflecting on his perspective today, it’s clear he raises valid points. For the field of interactive design to truly evolve, we need critical voices like his, highlighting what’s lacking and pushing the boundaries of what’s possible. Yet, while I appreciate his vision, I also think it’s worth noting that the demand for fully immersive tech isn’t universal. Not everyone wants full-body interaction, and realistically, innovations often emerge only if there’s sufficient market interest. While technologies like VR and AR are groundbreaking, they remain largely inaccessible to many—especially those in marginalized or economically disadvantaged communities. In contrast, iPads and similar devices, while more limited, have found a place even in lower-income communities. Victor’s perspective is compelling and reminds us of the potential for interactive design, but it also underscores the need for accessibility and practical applications.

Thoughts on the Video:

The video accompanying Victor’s article showcases futuristic and visually stunning technologies, like transparent phones and computers. These concepts seem efficient, fast, and intuitive, presenting a vision of an accessible tech-forward society. But this vision quickly becomes complicated when you consider the societal implications. The choice to illustrate this world in a city like Johannesburg, for instance, inadvertently sidelines the broader realities of poverty and inequality. The technology depicted is only accessible to specific demographics, creating an illusion of widespread accessibility that falls short in practice. Can such tech really deliver on its promise of happiness if it only deepens capitalist divides?

Moreover, there’s an unsettling irony in the interactions depicted in the video. People engrossed in advanced technology appear disconnected and isolated, as though their devices detract from meaningful social interactions. This tension isn’t new; many feared that the rise of technology would eventually isolate us, even as it ostensibly connects us. The video seems to highlight this concern, making me reflect on whether such advancements genuinely enhance human happiness or merely serve to reinforce certain fears about a tech-saturated future.

Week 9: Day & Night Simulator

Video Link to final project: https://youtu.be/lS588oI_GPU

Concept

For this assignment, my concept was to simulate a day and night state using 2 LEDs and a photosensor. The digital sensor would be the switch, while the photosensor would be the analog sensor. One light should turn on while the other is off during each state. I was also inspired by this week’s readings, which discussed the idea of not being afraid to draw inspiration from what other artists have created. The articles by Tigoe helped me gather inspiration from various sources on the internet, such as our GitHub page. From there, I was able to develop this day and night simulation prototype.

Design and Execution

The following schematic represents the foundation on which I built this day and night simulator:

Schematic for day and night simulation

After drawing the schematic, I carefully assembled the entire circuit to create the desired effect. The following code is the loop that allowed the entire idea to come to life:

void loop() {
  int lightLevel = analogRead(photoPin); // Read photosensor value
  bool isDay = lightLevel > lightThreshold; // Determine if it's daytime
  Serial.println(lightLevel); // Print light level for debugging

  // Read switch positions
  bool switch1 = !digitalRead(switchPin1); // True if position 1 is active
  bool switch2 = !digitalRead(switchPin2); // True if position 2 is active

  if (switch1) {
    // Position 1: Both LEDs off
    digitalWrite(dayLed, LOW);
    analogWrite(nightLed, 0);
  } else if (switch2) {
    // Position 2: Daytime/Nighttime mode based on light sensor
    if (isDay) {
      // Daytime behavior
      digitalWrite(dayLed, HIGH); // Daytime LED on
      analogWrite(nightLed, 0);   // Nighttime LED off
    } else {
      // Nighttime behavior
      digitalWrite(dayLed, LOW);       // Daytime LED off
      analogWrite(nightLed, lightLevel / 4); // Nighttime LED brightness based on light level
    }
  }

  delay(100); // Small delay to smooth transitions
}
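
The loop above depends on a few globals and a setup() that are not shown. A minimal sketch of what they might look like (the pin numbers and threshold are assumptions; the switch pins use the internal pull-ups, which is why the loop negates digitalRead()):

int photoPin = A0;        // Photosensor voltage divider on A0
int switchPin1 = 2;       // Switch position 1
int switchPin2 = 3;       // Switch position 2
int dayLed = 4;           // Daytime LED (simple on/off)
int nightLed = 5;         // Nighttime LED (must be a PWM pin for analogWrite)
int lightThreshold = 500; // Day/night cutoff for the 0-1023 photosensor reading

void setup() {
  Serial.begin(9600);
  pinMode(switchPin1, INPUT_PULLUP); // Active low: !digitalRead() is true when closed
  pinMode(switchPin2, INPUT_PULLUP);
  pinMode(dayLed, OUTPUT);
  pinMode(nightLed, OUTPUT);
}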

Finally, the project works, and here is a link that demonstrates the final outcome: https://youtu.be/lS588oI_GPU

Final Thoughts and Reflection:

This project allowed me to utilize previous knowledge while learning and applying new concepts. I believe a great way to improve this project would be to have multiple lights that react differently to the day and night states. The challenge behind this would be wire management, and I believe there is a solution to this problem that I have yet to encounter. I am, however, curious about how this could manifest. This exercise is pivotal to my knowledge basket, as it will all contribute to my final project. That’s all for now!

Week 9: Reading Response

Articles:

  • Physical Computing’s Greatest Hits (and Misses)
  • Making Interactive Art: Set the Stage, Then Shut Up and Listen

 

In Tigoe’s two articles, he explores the ways in which artists can find inspiration from the work of others, offering both insights and reassurance about the process of building on existing ideas. A particularly compelling aspect of these articles is how Tigoe illustrates the concept of ‘copying’ among artists—not as mere replication but as transformation. He shows that while artists may draw from each other’s work, they often create something entirely new and unique, reshaping the borrowed elements into original expressions. This idea aligns with the well-known notion that artists often remix various sources, juxtaposing them to create fresh, unique works. In interactive art, this approach can ease the burden of starting from scratch, which can be overwhelming and anxiety-inducing. Without inspiration from existing works, artists may struggle to bring their ideas to life fully—unless they are among the rare few who don’t rely on external inspiration at all.

Tigoe discusses various interactive pieces that appear to share certain traits but yield vastly different experiences for the audience, such as ‘Video Mirrors’ and ‘Body and Hand Cursors’. One of my favorite examples is the gloves that produce music when tapped on a surface. This design is not only interactive but fun and highly engaging, embodying the playful spirit of interactive art.

One critical reflection I have is about Tigoe’s reference to the “Scooby Doo painting” example, where he highlights a common mistake among designers: confusing presence with attention. He states, “Presence is easy to sense… but it’s harder to tell whether someone’s paying attention.” I think that in cases where artwork detects a person’s presence and responds by moving, it naturally draws the viewer’s attention, fostering interaction. For instance, in a crowded space, artwork that moves autonomously could spark collective interest, showing how even unintended effects can enhance user experience.

This concept connects with Tigoe’s advice in the second article about avoiding interference with a finished product while users engage with it. I wholeheartedly agree with this perspective, and I believe it’s an essential practice to adopt in designing interactive experiences. Even if I don’t incorporate this approach all the time, it’s a valuable insight I’ll certainly keep in mind.

Week 8 Assignment: Head Switch

Concept:

 

Final results, for your convenience: https://youtu.be/6M-4nbYk2Is

 

Initially, the idea of using a switch that didn’t require hands felt challenging to execute. However, after some contemplation, the thought process shifted: if not manually, perhaps turning on the switch wirelessly would be ideal. My initial idea was to see if I could use my laptop to turn on the light with a clap. This, however, didn’t work for two main reasons: 1) it still required using my hands, and 2) the claps were too soft, as sound is typically best detected in a controlled setting. I then considered if I could control the light by turning my head left or right. Once this idea settled, the execution began.

Design and Execution:

The following schematic represents the electrical connection for the Arduino Uno board:

Schematic image painfully made using Photoshop.

The physical connection represented by the schematic above can be seen in the image below:

Connection Image for the head switch LED control

Finally, the magic that brought everything together was not only the Arduino code but also a Python script, with a bit of help from everyone’s favorite chatbot. The following code was used in the Arduino IDE:

const int ledPin = 13;  // Pin connected to the LED

void setup() {
  Serial.begin(9600);       // Initialize serial communication
  pinMode(ledPin, OUTPUT);  // Set the LED pin as output
}

void loop() {
  if (Serial.available() > 0) {  // Check if data is available on the serial port
    char command = Serial.read();  // Read the incoming byte

    if (command == '1') {
      digitalWrite(ledPin, HIGH);  // Turn LED on
    } else if (command == '0') {
      digitalWrite(ledPin, LOW);   // Turn LED off
    }
  }
}

I then ran the Python code in my terminal, which activated the camera. Head tracking began, and from that point, turning my head to the left switched the light on, while turning it to the right switched it off. The following portion of the code made this possible:

while True:
    # Capture a frame from the camera
    ret, frame = cap.read()
    if not ret:
        break

    # Convert frame to RGB
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Process the frame with Mediapipe
    results = face_mesh.process(rgb_frame)

    # If a face is detected, analyze head direction
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        direction = calculate_turn_direction(landmarks)

        if direction < LEFT_THRESHOLD and not led_on:
            print("Head turned left - Turning LED on")
            arduino.write(b'1')  # Send signal to Arduino to turn LED on
            led_on = True

        elif direction > RIGHT_THRESHOLD and led_on:
            print("Head turned right - Turning LED off")
            arduino.write(b'0')  # Send signal to Arduino to turn LED off
            led_on = False

    # Display the frame (optional)
    cv2.imshow("Head Movement Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
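
The excerpt assumes the camera, Mediapipe, and serial connection were already initialized, and that calculate_turn_direction() was defined earlier in the script. A rough sketch of that setup, with the serial port, thresholds, and landmark choices as assumptions:

import cv2
import mediapipe as mp
import serial

LEFT_THRESHOLD = -0.03   # Tune these for your camera and seating position
RIGHT_THRESHOLD = 0.03

arduino = serial.Serial("/dev/tty.usbmodem14101", 9600)  # Your Arduino's port here
cap = cv2.VideoCapture(0)                                # Default webcam
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
led_on = False

def calculate_turn_direction(landmarks):
    # Compare the nose tip (landmark 1) against the midpoint of the left and
    # right sides of the face (landmarks 234 and 454): a negative offset means
    # the head is turned one way, a positive offset the other.
    nose_x = landmarks[1].x
    mid_x = (landmarks[234].x + landmarks[454].x) / 2
    return nose_x - mid_x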

 

Final Project:

Have a look at how the final project turned out in this short YouTube video:

https://youtu.be/6M-4nbYk2Is

Here is a progression of what happens when the user turns their head left and right:

Progress of head turns
Final Thoughts & Reflection:

This homework exercise was both fun and rewarding. It pushed me to learn Arduino while thinking creatively about solving problems. Throughout the project, I kept considering how it might be integrated into my final project. So, instead of making this exercise long and complex, I approached it as a potential feature for the final project. That’s where I envision improvements and a broader application of this single project. That’s all for now!

Reading Response: Week 8

Her Code Got Humans on the Moon—And Invented Software Itself

After reading the article on Margaret Hamilton, I can genuinely say her work left me inspired. One of the most fascinating aspects of her journey in software engineering is that the career path was not taught formally at the time; pioneers like Hamilton navigated complex, unprecedented problems relying solely on their resourcefulness and intellect. To me, this makes her story all the more exciting and inspiring. In the article, Hamilton reflects, saying, “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West.” This captures an intriguing concept: how something entirely unfamiliar can eventually grow into a field recognized by the world. It emphasizes the invaluable contributions of innovators like Hamilton, whose dedication transformed this “Wild West” of programming into a modern, booming industry.

Today, what was once unknown is now a leading field, with software engineering being a top choice for bachelor’s degree pursuits. Interestingly, there’s a marked contrast between Hamilton’s era and ours: while she and her peers pursued this path out of passion and a pioneering spirit, the $400-billion industry today is often seen as a gateway to financial stability, with passion sometimes secondary. Despite the fact that software wasn’t even included in the Apollo mission’s budget or official documentation, its impact is undeniably felt across industries today. This article brilliantly highlights the work of women in technology and helps bridge the gender gap by celebrating these overlooked pioneers.

 

Norman, “Emotion & Design: Attractive things work better”

When I first began reading Norman’s essay on “Attractive Things Work Better,” I was initially skeptical, believing functionality should outweigh aesthetics—why prioritize appearance if it compromises performance? Yet, as I delved deeper, I found myself agreeing with Norman’s insights on how attractive designs can enhance usability. One key point he raises is that while aesthetics sometimes need to be sacrificed in contexts where functionality is paramount, there are cases where the two can harmoniously coexist. This perspective helped clarify any initial doubts I had about the essay’s premise.

Norman’s explanation of positive and negative affect, and their impact on user experience, is particularly compelling. His argument on how emotions like anxiety and fear can be leveraged to overcome procrastination, especially when working within time constraints, feels remarkably applicable. At first, I was hesitant about his idea that a product’s utility can be influenced by mood, and the suggestion of owning multiple versions of a product seemed inefficient to me. It felt impractical to rely on mood when selecting functional items, especially considering cases where only the “undesirable” product might be available, potentially leading to a poor experience. This extends beyond the teapot example he provides. However, his quote, “When we feel good, we overlook design faults. Use a pleasing design, one that looks good and feels, well, sexy, and the behavior seems to go along more smoothly, more easily, and better. Attractive things work better,” presents a thought-provoking angle that I acknowledge has merit, though I feel it doesn’t apply universally.

I do disagree with Norman’s claim that color displays offer no additional value compared to black-and-white screens. This seems particularly outdated in today’s digital world; for example, working in design software like Photoshop often requires accurate color information, and relying on a black-and-white display could lead to unintended, muted results. This is just one of several cases where Norman’s argument might fall short.

Finally, Norman’s emphasis on products being affordable, functional, and pleasurable resonates in our present context. This balance is essential, as seen in decisions by major companies like Apple, which might discontinue products like the Vision Pro if they fail to meet these criteria, particularly in terms of affordability.

The Flame Boy: Becoming a Robinson (Midterm Project)

Concept

Link to the full sketch: https://editor.p5js.org/takuthulani/full/gdL68sjHc

This project is an adaptation of a fictional world I’ve created, designed to offer an immersive storytelling experience. Instead of passively observing, users actively engage with the narrative, making it more like an interactive movie or book. The story centers around the protagonist and key events from his life. To preserve the element of surprise and keep the experience enjoyable, I encourage you to explore the story firsthand! The narrative also takes place in a sci-fi setting, featuring an alien planet inhabited by human-like beings with enhanced abilities. This concept influenced the design, with fire playing a central role and red as the dominant color theme.

How the Game Works:

The game starts by greeting the user with a cover page that plays The Flame Boy’s theme song (it’s catchy, so you might get lost in it—don’t forget to progress, though you’re welcome to just listen!). The theme song was created using Suno AI (credit to them). On the cover image, there are instructions to click on the screen, which takes the user to a menu page (with its own theme song as well from Pixabay). The menu presents two options: “His Story” and “His Home,” and the user is expected to click on one.

If the user clicks on “His Home,” the page transitions to an image showing his home—simple and straightforward. This image was made using DALL-E. To exit this view and return to the menu, the user can press the escape button. If the user clicks on “His Story,” a video begins to play, so sit back and enjoy! After the video, the user must make a choice: either “Trust” or “Don’t Trust” the character they interact with. Clicking “Don’t Trust” progresses the game to the conclusion and eventually to a thank you message. The user can return to the cover screen by pressing the mouse key.

If the user chooses “Trust,” the game transitions to a section where they need to earn 100 points to reach the story’s conclusion. The experience continues until the user decides to stop the sketch.
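
Behind the scenes, this page-to-page flow is easiest to manage as a small state machine driven by mouse and key input. The snippet below is a simplified illustration of the pattern rather than the exact code from my sketch (the state names and the split-screen menu hit test are assumptions):

let state = 'cover'; // cover -> menu -> story or home -> ... -> conclusion

function setup() {
  createCanvas(400, 300);
  textAlign(CENTER, CENTER);
  fill(255, 60, 0);
}

function draw() {
  background(20);
  text('current state: ' + state, width / 2, height / 2);
}

function mousePressed() {
  if (state === 'cover') {
    state = 'menu'; // Any click on the cover opens the menu
  } else if (state === 'menu') {
    state = mouseX < width / 2 ? 'story' : 'home'; // Pick based on where the click landed
  }
}

function keyPressed() {
  if (keyCode === ESCAPE && state === 'home') {
    state = 'menu'; // Escape exits "His Home" back to the menu
  }
}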

Parts I’m Proud of:

There are several aspects of this project that I’m truly proud of. First and foremost is the video element, which made everything possible. Initially, I planned to use images and recordings to narrate the story, but p5.js kept crashing, so integrating video became the best solution. Even though it’s technically “out of scope” based on what we covered in class, it was a practical way to bring the project to life. I turned to resources like Stack Overflow, the p5.js help section, and ChatGPT to guide me through the video implementation. Below is an example of the code I used to incorporate video into the project:

// Video elements
let storyVideo; // Video for the story sequence
let conclusionVideo; // Video for the conclusion sequence
let decisionImage; // Image for the decision screen where the player chooses their path
...
function playStory() {
  // Stop the story video if it's currently playing to reset the playback
  storyVideo.stop(); 
  
  // Reset the playback time of the story video to the beginning (0 seconds)
  storyVideo.time(0);  
  
  // Start playing the story video from the beginning
  storyVideo.play();  
}
...
function playConclusion() {
  // Stop the conclusion video if it is currently playing
  conclusionVideo.stop();
  
  // Reset the video playback time to the start (0 seconds)
  conclusionVideo.time(0);
  
  // Start playing the conclusion video from the beginning
  conclusionVideo.play();
}
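
The loading step is not shown in the excerpt above; a minimal sketch of how the videos might be brought in (the file names are assumptions) would be:

let storyVideo, conclusionVideo;

function setup() {
  createCanvas(640, 480);
  // createVideo returns a p5.MediaElement; hide() removes the raw DOM
  // element from the page so the frames can be drawn onto the canvas
  storyVideo = createVideo('story.mp4');
  storyVideo.hide();
  conclusionVideo = createVideo('conclusion.mp4');
  conclusionVideo.hide();
}

function draw() {
  // Paint the current frame of whichever video is active
  image(storyVideo, 0, 0, width, height);
}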

Secondly, I’m particularly proud of the shooting game and its mechanics. I’ll include snippets of the code for the parts I’m most proud of below, and I’ll also provide a link to the full sketch so you can explore the many amazing functions I’ve implemented. Here’s a sample of the code for the game within the game:

function playShootingGame() {
  // Draw the background night sky for the shooting game
  drawNightSky();
  
  // Set the fill color to white for the score text
  fill(255);
  
  // Set the text size for the score display
  textSize(16);
  
  // Align text to the right and top of the canvas
  textAlign(RIGHT, TOP);
  
  // Display the current score at the top-right corner of the canvas
  text("Score: " + score, width - 10, 10);

  // Check if 2 seconds have passed since the last star was spawned
  if (millis() - starTimer > 2000) {
    // Spawn new stars for the shooting game
    spawnGameStars();
    
    // Update the star timer to the current time
    starTimer = millis();
  }

  // Loop through the stars array in reverse order to avoid issues with splicing
  for (let i = stars.length - 1; i >= 0; i--) {
    // Get the current star object from the stars array
    let star = stars[i];
    
    // Set the fill color to yellow for larger stars, white for smaller stars
    fill(star.size === 40 ? 'yellow' : 'white'); 
    
    // Draw the star as an ellipse at its specified x and y coordinates with its size
    ellipse(star.x, star.y, star.size);

    // Check if the star has been on screen for more than 2 seconds
    if (millis() - star.appearTime > 2000) {
      // Remove the star from the array if it has been displayed long enough
      stars.splice(i, 1);
    }
  }

  // Check if the score has reached 100 points
  if (score >= 100) {
    // Change the game state to 'conclusion'
    state = 'conclusion';
    
    // Play the conclusion video or sequence
    playConclusion();
  }
}
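
Two pieces referenced above but not shown are the star spawning and the hit detection. A sketch of both might look like this (the spawn counts, sizes, and point values are assumptions; the full sketch linked below has the real numbers):

// stars, score, and starTimer are declared once in the main sketch
function spawnGameStars() {
  // Add a couple of stars at random positions; the larger 40px stars are rarer
  for (let i = 0; i < 2; i++) {
    stars.push({
      x: random(40, width - 40),
      y: random(40, height - 80),
      size: random() < 0.3 ? 40 : 20,
      appearTime: millis(),
    });
  }
}

function mousePressed() {
  // Count a hit when a click lands inside a star; bigger stars are worth more
  for (let i = stars.length - 1; i >= 0; i--) {
    if (dist(mouseX, mouseY, stars[i].x, stars[i].y) < stars[i].size / 2) {
      score += stars[i].size === 40 ? 10 : 5;
      stars.splice(i, 1);
      break; // One star per click
    }
  }
}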

Link to the sketch: https://editor.p5js.org/takuthulani/full/gdL68sjHc

Problems I encountered and possible solutions:

The main challenge I faced was implementing video into the sketch after my original plan didn’t work out. I found solutions using the online resources mentioned earlier. Another problem was integrating the mini-game within the story. The best approach was to treat the game as a separate entity and then integrate it into the sketch, rather than seeing it as one large game (which added unnecessary stress while coding). Additionally, I encountered performance issues, as I used too many audio and visual files. Optimizing the game speed and performance became a key focus. Below is a screenshot showing some of the media I used:

files and resources used to generate the story for the game

This includes various assets like fonts, PowerPoint presentations, a mini script for the game, and a massive 500MB PSD file (trust me, you don’t want to know how many images are in that!). I also went through multiple iterations of the menu and cover page designs.

Since I’m not the best at drawing, I used Meta AI to generate the images, which I then manually edited in Photoshop. I recorded the vocal narrations using Audacity and assembled the video using CapCut before uploading it into my program. Some of the images, sounds, and music were sourced from Pixabay.com, which allows usage for projects like this.

Areas needing improvement:

The overall workflow of the game could use some enhancements. For example, the “His Home” area could feature more interactive functions to increase engagement and enjoyment. Additionally, the story’s flow would benefit from a clearer rationale for why the user needs to earn 100 points to advance to the conclusion. While some of these creative flaws may have been overlooked, they can be incorporated as features in future updates.

Overall, I thoroughly enjoyed working on this project and was fully invested, as I was creating something I loved, among other reasons. It provided a fun and technical way to learn, and I am excited for you all to see what I build as time progresses.

The following images are alternative designs that did not make the cut:

According to a few people, the guy on the right did not fit in well so this menu did not make it.
Although my overall theme was red and black, I did not think this would benefit the aesthetics of the overall story.
This was a test I did when I tried integrating mock-up text with my overall design.

 

 

 

 

Midterm Project Progress – Week 5

Concept and Introduction

Like a movie teaser, this would serve as the very first visual to be seen for this project. I designed it using Adobe Photoshop. Additional images are from pixabay.com.

For my midterm project, I wanted to explore something I deeply love—a concept that excites me to bring to life. This project combines elements from my Week 1 and Week 3 assignments into an interactive piece that I’m thrilled to work on. My love for sci-fi began with watching Dragon Ball Z and was later fueled by shows like Naruto, Marvel and DC animated and live-action films, Star Wars, and many more. From this inspiration, I created a universe that’s too vast to fully explain here, but I can say that this project represents a small piece of something much larger. ‘The Flame Boy’ is a character I’ll be exploring through interactive storytelling, which excites me because it allows me to experiment with a different medium than the filmmaking I’m most accustomed to.

In short, ‘The Flame Boy’ is about a young boy who lives with his grandfather. He was abandoned by the royal family (the reasons are explained in the project) and left on the side of the planet where the sun never rises. He meets someone special who inspires him to trace his roots, as he never felt he truly belonged to this side of the world. The interactive story allows the user to explore this world and learn more about this character. Eventually, he discovers the truth about his family, specifically the Robinsons.

The concept of the interactive artwork offers a choice: you can either explore The Flame Boy’s world or uncover the truth (think ‘red pill, blue pill,’ if you will). Choosing to explore his home lets the user interact with his room and discover his personality. On the other hand, choosing to know the truth allows the user to experience the story through interactive storytelling, which will unfold as you continue the journey.

 

User Interaction

  1. The interactive artwork begins with an opening splash screen. On this screen, there are visual cues guiding the user to enter and continue. This is the starting point every time the user explores this world. A soundtrack plays in the background whenever this screen appears. The following images are rough sketches of how I envisioned the splash screen before moving into Photoshop:
Cover screen idea 1
Cover screen idea 2

2. Once the user presses any button, they are transported to a menu screen. This screen presents them with the option to either explore The Flame Boy’s world/home or learn about his story.

If they choose to explore his home, the screen transitions to a scene resembling his house. Users will be able to interact with various objects within his space, allowing them to learn more about him through this interaction. This will be created using a combination of shapes in p5.js, along with a few images, music, and sounds. The experience will be simple and intuitive.

 

If the user chooses to learn about his story, they are transported into a movie/book-like environment. Here, a narrator introduces the protagonist, explaining how he was born, how he received his name and powers, and why he is where he is now. The user can advance to the next page by clicking the screen. As the story progresses, they meet a magician who guides the protagonist in discovering his identity.

The user is then presented with another choice: either ‘shoot the stars’ using The Flame Boy’s fire powers to earn 100 star coins, or navigate their way through a dark maze using The Flame Boy’s fire as a light source. The maze changes each time the user selects this option, creating an unpredictable and ‘random’ experience. Once the user completes these mini-games, they witness The Flame Boy meeting his parents for the first time. The experience then concludes, prompting the user to start over.

The following image is a simple node sketch I made in Photoshop, which depicts the flow of the program in its entirety:

This was the 3rd version of this visual. A more sophisticated version exists. The nodes represent the structure explained above.

The following is a brief progress report on the program as of the time of writing. The music was made in Suno AI:

 

Most Frightening Part and Its Solution

Problem: The main challenge of my interactive artwork lies in implementing the two mini-games within the story segment. This project feels like three projects combined into one, which makes me concerned about whether users will find it as rewarding as I hope. Specifically, I felt apprehensive about how to implement the maze game and the shooting stars feature.

Solution: At the time of writing this report, I am researching ways to integrate these features using predefined algorithms available on GitHub. I will discuss these algorithms in my final update, whether they work or if there’s a need to change the creative and technical approach. For now, this is my progress update.

The following is an image of the splash screen as a thank you for reading this report in its entirety (and because I’m excited to share this image I made in Photoshop!).

Image made in Photoshop for the interactive piece titled The Flame Boy: Becoming a Robinson. Further developments will be made. Images were downloaded from Pixabay and Meta AI.