Week 11: Design Meets Disability

One of the main arguments I extracted from this week’s reading is the interplay between fashion and discretion in design, particularly in the context of disability. Whether a design should blend in or stand out is subjective and depends on the user. For instance, dental implants and dentures were initially functional medical solutions meant to resemble natural teeth. Over time, however, these appliances have become fashion statements, with materials like gold used to signify wealth or spirituality. This shift exemplifies how functional designs can appeal to broader audiences and evolve into tools of self-expression. Similarly, the athlete in the reading who embraced her prosthetic legs as a fashionable part of her identity demonstrates how design choices can transcend functionality to reflect individuality. This underscores the idea that the line between utility and self-expression is fluid and often shaped by societal influences.

The reading also provokes thought about the ethics of design, particularly when it comes to medical appliances. While designers from unrelated fields might bring fresh perspectives, their lack of specialized knowledge can lead to unintended consequences. For example, the trend of hearing aids resembling earphones doesn’t address how excessive earphone use may itself lead to hearing loss, creating a harmful cycle. This highlights the risk of prioritizing aesthetics or profit over the users’ actual needs. These insights also apply to interactive design, reminding us that functionality and user experience must take precedence over superficial appeal. Thoughtful design must strike a balance, respecting the user’s needs and individuality while avoiding exploitation or unnecessary commercialization.

Week 10: The Arduino Piano (Takudzwa & Bismark)

The final product, for your convenience, is here: https://youtu.be/62UTvttGflo

Concept:

The motivation behind our project was to create a unique piano-like instrument using Arduino circuits. By utilizing two breadboards, we had a larger workspace, allowing for a more complex setup. We incorporated a potentiometer as a frequency controller—adjusting it changes the pitch of the sounds produced, making the instrument tunable. To enhance the experience, we added synchronized LED lights, creating a visual element that complements the sound. This combination of light and music adds a fun, interactive touch to the project. Here’s the project cover:

The tools used for this project were: a potentiometer, a piezo speaker, LEDs, 10kΩ and 330Ω resistors, push buttons, and jumper wires.

Execution:

The following was the schematic for our project, which served as the foundation that allowed us to successfully execute this project:

The following Arduino code snippet brought our project to life, controlling both sound and light to create an interactive musical experience:

// Pin assignments and note frequencies (example values; adjust to match your wiring)
const int buttonPins[4] = {2, 3, 4, 5};    // Push buttons, one per note
const int ledPins[4] = {8, 9, 10, 11};     // One LED per button
const int piezoPin = 12;                   // Piezo speaker
const int potPin = A0;                     // Potentiometer wiper
const int notes[4] = {262, 294, 330, 349}; // C4, D4, E4, F4 in Hz

void setup() {
  // Set button and LED pins as inputs and outputs
  for (int i = 0; i < 4; i++) {
    pinMode(buttonPins[i], INPUT);       // Button pins as input
    pinMode(ledPins[i], OUTPUT);         // LED pins as output
  }
  pinMode(piezoPin, OUTPUT);             // Speaker pin as output
}

void loop() {
  int potValue = analogRead(potPin);                    // Read potentiometer value
  int pitchAdjust = map(potValue, 0, 1023, -100, 100);  // Map pot value to pitch adjustment range

  // Check each button for presses
  for (int i = 0; i < 4; i++) {
    if (digitalRead(buttonPins[i]) == HIGH) {         // If button is pressed
      int adjustedFreq = notes[i] + pitchAdjust;      // Adjust note frequency based on potentiometer
      tone(piezoPin, adjustedFreq);                   // Play the adjusted note
      digitalWrite(ledPins[i], HIGH);                 // Turn on the corresponding LED
      delay(200);                                     // Delay to avoid rapid flashing
      noTone(piezoPin);                               // Stop the sound
      digitalWrite(ledPins[i], LOW);                  // Turn off the LED
    }
  }
}

Once again, the final product can be found here: https://youtu.be/62UTvttGflo

Reflection:

Although our project may seem simple, we encountered several challenges during its development. Initially, we connected some of the digital pins incorrectly, which prevented the project from functioning as expected. After hours of troubleshooting, we sought help to identify the issue. This experience turned into a valuable teamwork exercise, helping us grow as students and problem-solvers. I view challenges like these as opportunities to build skills I can apply to future projects, including my final one. To enhance this project further, I would improve its visual design and sound quality to make it more appealing to a wider audience. That’s all for now!

Week 10: Reading Response

Bret Victor – A Brief Rant on the Future of Interaction Design

My initial instinct upon reading Bret Victor’s article was to push back and think, “But you’re not really improving anything.” This reaction softened after reading his direct responses to these criticisms. Victor’s defensiveness, in some ways, protects his vision—arguably so. But beyond that, the true point of his article lies in his challenge to our current conception of interactivity. He questions why we’ve limited ourselves to “single-finger” interaction, arguing that we’re barely scratching the surface of what interactive technology could become. I found myself agreeing, especially when he said that if, twenty years down the line, all we had were glorified iPads, it would be a sign of stagnation. Now, over a decade since the article was written, we have indeed developed more advanced interfaces—VR, AR, and even some early-stage holographic tech—but these technologies haven’t become mainstream, and they haven’t revolutionized interaction to the degree Victor imagined, which proves his point to an extent.

Reflecting on his perspective today, it’s clear he raises valid points. For the field of interactive design to truly evolve, we need critical voices like his, highlighting what’s lacking and pushing the boundaries of what’s possible. Yet, while I appreciate his vision, I also think it’s worth noting that the demand for fully immersive tech isn’t universal. Not everyone wants full-body interaction, and realistically, innovations often emerge only if there’s sufficient market interest. While technologies like VR and AR are groundbreaking, they remain largely inaccessible to many—especially those in marginalized or economically disadvantaged communities. In contrast, iPads and similar devices, while more limited, have found a place even in lower-income communities. Victor’s perspective is compelling and reminds us of the potential for interactive design, but it also underscores the need for accessibility and practical applications.

Thoughts on the Video:

The video accompanying Victor’s article showcases futuristic and visually stunning technologies, like transparent phones and computers. These concepts seem efficient, fast, and intuitive, presenting a vision of an accessible tech-forward society. But this vision quickly becomes complicated when you consider the societal implications. The choice to illustrate this world in a city like Johannesburg, for instance, inadvertently sidelines the broader realities of poverty and inequality. The technology depicted is only accessible to specific demographics, creating an illusion of widespread accessibility that falls short in practice. Can such tech really deliver on its promise of happiness if it only deepens capitalist divides?

Moreover, there’s an unsettling irony in the interactions depicted in the video. People engrossed in advanced technology appear disconnected and isolated, as though their devices detract from meaningful social interactions. This tension isn’t new; many feared that the rise of technology would eventually isolate us, even as it ostensibly connects us. The video seems to highlight this concern, making me reflect on whether such advancements genuinely enhance human happiness or merely serve to reinforce certain fears about a tech-saturated future.

Week 9: Day & Night Simulator

Video Link to final project: https://youtu.be/lS588oI_GPU

Concept

For this assignment, my concept was to simulate day and night states using two LEDs and a photosensor. The switch acts as the digital sensor, while the photosensor serves as the analog sensor. One light should turn on while the other is off during each state. I was also inspired by this week’s readings, which discussed not being afraid to draw inspiration from what other artists have created. The articles by Tom Igoe encouraged me to gather inspiration from various sources on the internet, such as our GitHub page. From there, I was able to develop this day and night simulation prototype.

Design and Execution

The following schematic represents the foundation on which I built this day and night simulator:

Schematic for day and night simulation

After drawing the schematic, I carefully assembled the entire circuit to produce the desired effect. The following code is the loop that allowed the entire idea to come to life:

// Pin assignments and threshold (example values; adjust to match your wiring)
const int photoPin = A0;        // Photosensor voltage divider
const int switchPin1 = 2;       // Switch position 1 (active LOW with INPUT_PULLUP)
const int switchPin2 = 3;       // Switch position 2 (active LOW with INPUT_PULLUP)
const int dayLed = 8;           // Daytime LED
const int nightLed = 9;         // Nighttime LED (on a PWM-capable pin)
const int lightThreshold = 512; // Readings above this count as daytime

void setup() {
  Serial.begin(9600);
  pinMode(switchPin1, INPUT_PULLUP);
  pinMode(switchPin2, INPUT_PULLUP);
  pinMode(dayLed, OUTPUT);
  pinMode(nightLed, OUTPUT);
}

void loop() {
  int lightLevel = analogRead(photoPin); // Read photosensor value
  bool isDay = lightLevel > lightThreshold; // Determine if it's daytime
  Serial.println(lightLevel); // Print light level for debugging

  // Read switch positions
  bool switch1 = !digitalRead(switchPin1); // True if position 1 is active
  bool switch2 = !digitalRead(switchPin2); // True if position 2 is active

  if (switch1) {
    // Position 1: Both LEDs off
    digitalWrite(dayLed, LOW);
    analogWrite(nightLed, 0);
  } else if (switch2) {
    // Position 2: Daytime/Nighttime mode based on light sensor
    if (isDay) {
      // Daytime behavior
      digitalWrite(dayLed, HIGH); // Daytime LED on
      analogWrite(nightLed, 0);   // Nighttime LED off
    } else {
      // Nighttime behavior
      digitalWrite(dayLed, LOW);       // Daytime LED off
      analogWrite(nightLed, lightLevel / 4); // Nighttime LED brightness based on light level
    }
  }

  delay(100); // Small delay to smooth transitions
}

Finally, the project works, and here is a link that demonstrates the final outcome: https://youtu.be/lS588oI_GPU

Final Thoughts and Reflection:

This project allowed me to apply previous knowledge while learning new concepts. I believe a great way to improve this project would be to have multiple lights that react differently to the day and night states. The challenge behind this would be wire management, and I believe there is a solution to this problem that I have yet to encounter. I am, however, curious about how this could manifest. This exercise is pivotal to my knowledge basket, as it will all contribute to my final project. That’s all for now!

Week 9: Reading Response

Articles:

  • Physical Computing’s Greatest Hits (and misses)
  • Making Interactive Art: Set the Stage, Then Shut Up and Listen

 

In his two articles, Tom Igoe explores the ways in which artists can find inspiration in the work of others, offering both insights and reassurance about the process of building on existing ideas. A particularly compelling aspect of these articles is how Igoe illustrates the concept of ‘copying’ among artists—not as mere replication but as transformation. He shows that while artists may draw from each other’s work, they often create something entirely new and unique, reshaping the borrowed elements into original expressions. This idea aligns with the well-known notion that artists often remix various sources, juxtaposing them to create fresh, unique works. In interactive art, this approach can ease the burden of starting from scratch, which can be overwhelming and anxiety-inducing. Without inspiration from existing works, artists may struggle to bring their ideas fully to life—unless they are among the rare few who don’t rely on external inspiration at all.

Igoe discusses various interactive pieces that appear to share certain traits but yield vastly different experiences for the audience, such as ‘Video Mirrors’ and ‘Body and Hand Cursors’. One of my favorite examples is the gloves that produce music when tapped on a surface. This design is not only interactive but fun and highly engaging, embodying the playful spirit of interactive art.

One critical reflection I have is about Igoe’s “Scooby-Doo painting” example, where he highlights a common mistake among designers: confusing presence with attention. He states, “Presence is easy to sense… but it’s harder to tell whether someone’s paying attention.” I think that in cases where an artwork detects a person’s presence and responds by moving, it naturally draws the viewer’s attention, fostering interaction. For instance, in a crowded space, an artwork that moves autonomously could spark collective interest, showing how even unintended effects can enhance the user experience.

This concept connects with Igoe’s advice in the second article about avoiding interference with a finished piece while users engage with it. I wholeheartedly agree with this perspective, and I believe it’s an essential practice to adopt in designing interactive experiences. Even if I don’t incorporate this approach all the time, it’s a valuable insight I’ll certainly keep in mind.

Week 8 Assignment: Head Switch

Concept:

 

Final results, for your convenience: https://youtu.be/6M-4nbYk2Is

 

Initially, the idea of using a switch that didn’t require hands felt challenging to execute. However, after some contemplation, the thought process shifted: if not manually, perhaps turning on the switch wirelessly would be ideal. My initial idea was to see if I could use my laptop to turn on the light with a clap. This, however, didn’t work for two main reasons: 1) it still required using my hands, and 2) the claps were too soft, as sound is typically best detected in a controlled setting. I then considered if I could control the light by turning my head left or right. Once this idea settled, the execution began.

Design and Execution:

The following schematic represents the electrical connection for the Arduino Uno board:

Schematic image, painfully made in Photoshop.

The physical connection represented by the schematic above can be seen in the image below:

Connection Image for the head switch LED control

Finally, the magic that brought everything together was not only the Arduino code but also a Python script, with a bit of help from everyone’s favorite chatbot. The following code was used in the Arduino IDE:

const int ledPin = 13;  // Pin connected to the LED

void setup() {
  Serial.begin(9600);       // Initialize serial communication
  pinMode(ledPin, OUTPUT);  // Set the LED pin as output
}

void loop() {
  if (Serial.available() > 0) {  // Check if data is available on the serial port
    char command = Serial.read();  // Read the incoming byte

    if (command == '1') {
      digitalWrite(ledPin, HIGH);  // Turn LED on
    } else if (command == '0') {
      digitalWrite(ledPin, LOW);   // Turn LED off
    }
  }
}

I then ran the Python code in my terminal, which activated the camera. Head tracking began, and from that point, turning my head to the left switched the light on, while turning it to the right switched it off. The following portion of the code made this possible:

while True:
    # Capture a frame from the camera
    ret, frame = cap.read()
    if not ret:
        break

    # Convert frame to RGB
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Process the frame with Mediapipe
    results = face_mesh.process(rgb_frame)

    # If a face is detected, analyze head direction
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        direction = calculate_turn_direction(landmarks)

        if direction < LEFT_THRESHOLD and not led_on:
            print("Head turned left - Turning LED on")
            arduino.write(b'1')  # Send signal to Arduino to turn LED on
            led_on = True

        elif direction > RIGHT_THRESHOLD and led_on:
            print("Head turned right - Turning LED off")
            arduino.write(b'0')  # Send signal to Arduino to turn LED off
            led_on = False

    # Display the frame (optional)
    cv2.imshow("Head Movement Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
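The snippet above is only the main loop; the camera, the Mediapipe face mesh, the serial connection, and the calculate_turn_direction() helper are set up earlier in the script. Here is a minimal sketch of what those missing pieces could look like (the port name, threshold values, and landmark choices are my assumptions, not necessarily the script’s actual values):

import cv2
import mediapipe as mp
import serial

arduino = serial.Serial('/dev/cu.usbmodem14101', 9600)  # Assumed port name; adjust per machine
cap = cv2.VideoCapture(0)                               # Default webcam
face_mesh = mp.solutions.face_mesh.FaceMesh()           # Mediapipe face tracker
LEFT_THRESHOLD, RIGHT_THRESHOLD = -0.03, 0.03           # Assumed tuning values
led_on = False

def calculate_turn_direction(landmarks):
    # One plausible implementation: compare the nose tip's horizontal position
    # to the midpoint between the outer eye corners. In Mediapipe's Face Mesh,
    # landmark 1 is the nose tip and landmarks 33 and 263 are the eye corners.
    eyes_mid = (landmarks[33].x + landmarks[263].x) / 2
    return landmarks[1].x - eyes_mid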

 

Final Project:

Have a look at how the final project turned out in this short YouTube video:

https://youtu.be/6M-4nbYk2Is

Here is the progression of what happens when the user turns their head left and right:

Progress of head turns
Final Thoughts & Reflection:

This homework exercise was both fun and instructive. It pushed me to learn Arduino while thinking creatively about solving problems. Throughout the project, I kept considering how it might be integrated into my final project. So, instead of making this exercise long and complex, I approached it as a potential feature for the final project. That’s where I envision improvements and a broader application of this single exercise. That’s all for now!

Reading Response: Week 8

Her Code Got Humans on the Moon—And Invented Software Itself

After reading the article on Margaret Hamilton, I can genuinely say her work left me inspired. One of the most fascinating aspects of her journey in software engineering is that the career path was not taught formally at the time; pioneers like Hamilton navigated complex, unprecedented problems relying solely on their resourcefulness and intellect. To me, this makes her story all the more exciting and inspiring. In the article, Hamilton reflects, saying, “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West.” This captures an intriguing concept: how something entirely unfamiliar can eventually grow into a field recognized by the world. It emphasizes the invaluable contributions of innovators like Hamilton, whose dedication transformed this “Wild West” of programming into a modern, booming industry.

Today, what was once unknown is now a leading field, with software engineering being a top choice for bachelor’s degree pursuits. Interestingly, there’s a marked contrast between Hamilton’s era and ours: while she and her peers pursued this path out of passion and a pioneering spirit, the $400-billion industry today is often seen as a gateway to financial stability, with passion sometimes secondary. Despite the fact that software wasn’t even included in the Apollo mission’s budget or official documentation, its impact is undeniably felt across industries today. This article brilliantly highlights the work of women in technology and helps bridge the gender gap by celebrating these overlooked pioneers.

 

Norman, “Emotion & Design: Attractive things work better”

When I first began reading Norman’s essay on “Attractive Things Work Better,” I was initially skeptical, believing functionality should outweigh aesthetics—why prioritize appearance if it compromises performance? Yet, as I delved deeper, I found myself agreeing with Norman’s insights on how attractive designs can enhance usability. One key point he raises is that while aesthetics sometimes need to be sacrificed in contexts where functionality is paramount, there are cases where the two can harmoniously coexist. This perspective helped clarify any initial doubts I had about the essay’s premise.

Norman’s explanation of positive and negative affect, and their impact on user experience, is particularly compelling. His argument on how emotions like anxiety and fear can be leveraged to overcome procrastination, especially when working within time constraints, feels remarkably applicable. At first, I was hesitant about his idea that a product’s utility can be influenced by mood, and the suggestion of owning multiple versions of a product seemed inefficient to me. It felt impractical to rely on mood when selecting functional items, especially considering cases where only the “undesirable” product might be available, potentially leading to a poor experience. This extends beyond the teapot example he provides. However, his quote, “When we feel good, we overlook design faults. Use a pleasing design, one that looks good and feels, well, sexy, and the behavior seems to go along more smoothly, more easily, and better. Attractive things work better,” presents a thought-provoking angle that I acknowledge has merit, though I feel it doesn’t apply universally.

I do disagree with Norman’s claim that color displays offer no additional value compared to black-and-white screens. This seems particularly outdated in today’s digital world; for example, working in design software like Photoshop often requires accurate color information, and relying on a black-and-white display could lead to unintended, muted results. This is just one of several cases where Norman’s argument might fall short.

Finally, Norman’s emphasis on products being affordable, functional, and pleasurable resonates in our present context. This balance is essential, as seen in decisions by major companies like Apple, which might discontinue products like the Vision Pro if they fail to meet these criteria, particularly in terms of affordability.

The Flame Boy: Becoming a Robinson (Midterm Project)

Concept

Link to the full sketch: https://editor.p5js.org/takuthulani/full/gdL68sjHc

This project is an adaptation of a fictional world I’ve created, designed to offer an immersive storytelling experience. Instead of passively observing, users actively engage with the narrative, making it more like an interactive movie or book. The story centers on the protagonist and key events from his life. To preserve the element of surprise and keep the experience enjoyable, I encourage you to explore the story firsthand! The narrative takes place in a sci-fi setting, featuring an alien planet inhabited by human-like beings with enhanced abilities. This concept influenced the design, with fire playing a central role and red as the dominant color theme.

How the Game Works:

The game starts by greeting the user with a cover page that plays The Flame Boy’s theme song (it’s catchy, so you might get lost in it—don’t forget to progress, though you’re welcome to just listen!). The theme song was created using Suno AI (credit to them). On the cover image, there are instructions to click on the screen, which takes the user to a menu page (with its own theme song as well from Pixabay). The menu presents two options: “His Story” and “His Home,” and the user is expected to click on one.

If the user clicks on “His Home,” the page transitions to an image showing his home—simple and straightforward. This image was made using DALL-E. To exit this view and return to the menu, the user can press the Escape key. If the user clicks on “His Story,” a video begins to play, so sit back and enjoy! After the video, the user must make a choice: either “Trust” or “Don’t Trust” the character they interact with. Clicking “Don’t Trust” progresses the game to the conclusion and eventually to a thank-you message. The user can return to the cover screen by clicking the mouse.

If the user chooses “Trust,” the game transitions to a section where they need to earn 100 points to reach the story’s conclusion. The experience continues until the user decides to stop the sketch.
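Under the hood, this screen flow amounts to a small state machine: one variable records which screen is active, and mouse or key events move it to the next state. Here is a minimal, self-contained p5.js sketch of the idea (the state names and placeholder screens are mine, not the project’s actual code):

let state = 'cover'; // 'cover' -> 'menu' -> 'home' or 'story' -> 'decision' -> ...

function setup() {
  createCanvas(400, 300);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(20);
  fill(255);
  text('Current screen: ' + state, width / 2, height / 2); // Stand-in for the real screens
}

function mousePressed() {
  if (state === 'cover') state = 'menu';
  else if (state === 'menu') state = mouseX < width / 2 ? 'home' : 'story';
  else if (state === 'story') state = 'decision'; // The real project waits for the video to end
}

function keyPressed() {
  if (state === 'home' && keyCode === ESCAPE) state = 'menu'; // Escape exits "His Home"
}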

Parts I’m Proud of:

There are several aspects of this project that I’m truly proud of. First and foremost is the video element, which made everything possible. Initially, I planned to use images and recordings to narrate the story, but p5.js kept crashing, so integrating video became the best solution. Even though it’s technically “out of scope” based on what we covered in class, it was a practical way to bring the project to life. I turned to resources like Stack Overflow, the p5.js help section, and ChatGPT to guide me through the video implementation. Below is an example of the code I used to incorporate video into the project:

// Video elements
let storyVideo; // Video for the story sequence
let conclusionVideo; // Video for the conclusion sequence
let decisionImage; // Image for the decision screen where the player chooses their path
...
function playStory() {
  // Stop the story video if it's currently playing to reset the playback
  storyVideo.stop(); 
  
  // Reset the playback time of the story video to the beginning (0 seconds)
  storyVideo.time(0);  
  
  // Start playing the story video from the beginning
  storyVideo.play();  
}
...
function playConclusion() {
  // Stop the conclusion video if it is currently playing
  conclusionVideo.stop();
  
  // Reset the video playback time to the start (0 seconds)
  conclusionVideo.time(0);
  
  // Start playing the conclusion video from the beginning
  conclusionVideo.play();
}

Secondly, I’m particularly proud of the shooting game and its mechanics. I’ll include snippets of the code for the parts I’m most proud of below, and I’ll also provide a link to the full sketch so you can explore the many amazing functions I’ve implemented. Here’s a sample of the code for the game within the game:

function playShootingGame() {
  // Draw the background night sky for the shooting game
  drawNightSky();
  
  // Set the fill color to white for the score text
  fill(255);
  
  // Set the text size for the score display
  textSize(16);
  
  // Align text to the right and top of the canvas
  textAlign(RIGHT, TOP);
  
  // Display the current score at the top-right corner of the canvas
  text("Score: " + score, width - 10, 10);

  // Check if 2 seconds have passed since the last star was spawned
  if (millis() - starTimer > 2000) {
    // Spawn new stars for the shooting game
    spawnGameStars();
    
    // Update the star timer to the current time
    starTimer = millis();
  }

  // Loop through the stars array in reverse order to avoid issues with splicing
  for (let i = stars.length - 1; i >= 0; i--) {
    // Get the current star object from the stars array
    let star = stars[i];
    
    // Set the fill color to yellow for larger stars, white for smaller stars
    fill(star.size === 40 ? 'yellow' : 'white'); 
    
    // Draw the star as an ellipse at its specified x and y coordinates with its size
    ellipse(star.x, star.y, star.size);

    // Check if the star has been on screen for more than 2 seconds
    if (millis() - star.appearTime > 2000) {
      // Remove the star from the array if it has been displayed long enough
      stars.splice(i, 1);
    }
  }

  // Check if the score has reached 100 points
  if (score >= 100) {
    // Change the game state to 'conclusion'
    state = 'conclusion';
    
    // Play the conclusion video or sequence
    playConclusion();
  }
}
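The helpers drawNightSky() and spawnGameStars(), along with the click handling that actually awards points, aren’t shown above; they live in the full sketch linked below. As a rough sketch of how the latter two could work, assuming the star objects and score variable used in playShootingGame() (the spawn rate and point values here are my guesses):

function spawnGameStars() {
  stars.push({
    x: random(20, width - 20),       // Keep stars fully on the canvas
    y: random(20, height - 20),
    size: random() < 0.3 ? 40 : 20,  // Occasional big (yellow) star
    appearTime: millis()             // Used for the 2-second lifetime check
  });
}

function mousePressed() {
  // Award points when a click lands on a star; big stars could be worth more
  for (let i = stars.length - 1; i >= 0; i--) {
    if (dist(mouseX, mouseY, stars[i].x, stars[i].y) < stars[i].size / 2) {
      score += stars[i].size === 40 ? 20 : 10;
      stars.splice(i, 1); // Remove the star once it has been hit
    }
  }
}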

Link to the sketch: https://editor.p5js.org/takuthulani/full/gdL68sjHc

Problems I encountered and possible solutions:

The main challenge I faced was implementing video into the sketch after my original plan didn’t work out. I found solutions using the online resources mentioned earlier. Another problem was integrating the mini-game within the story. The best approach was to treat the game as a separate entity and then integrate it into the sketch, rather than seeing it as one large game (which added unnecessary stress while coding). Additionally, I encountered performance issues, as I used too many audio and visual files. Optimizing the game speed and performance became a key focus. Below is a screenshot showing some of the media I used:

files and resources used to generate the story for the game

This includes various assets like fonts, PowerPoint presentations, a mini script for the game, and a massive 500MB PSD file (trust me, you don’t want to know how many images are in that!). I also went through multiple iterations of the menu and cover page designs.

Since I’m not the best at drawing, I used Meta AI to generate the images, which I then manually edited in Photoshop. I recorded the vocal narrations using Audacity and assembled the video using CapCut before uploading it into my program. Some of the images, sounds, and music were sourced from Pixabay.com, which allows usage for projects like this.
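On the performance problem mentioned above, a common p5.js mitigation is to load every heavy asset exactly once in preload() and reuse the objects, rather than re-creating them on each state change. A minimal sketch of the pattern (the file names are placeholders, and loadSound() requires the p5.sound library):

let coverImg, themeSong, storyVideo;

function preload() {
  // preload() finishes before setup() runs, so nothing draws until assets are ready
  coverImg = loadImage('assets/cover.png');
  themeSong = loadSound('assets/theme.mp3');
}

function setup() {
  createCanvas(800, 600);
  // Videos are DOM elements, so they are created once here and reused
  storyVideo = createVideo('assets/story.mp4');
  storyVideo.hide(); // Draw frames manually with image(storyVideo, 0, 0) when needed
}

function draw() {
  image(coverImg, 0, 0, width, height);
}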

Areas needing improvement:

The overall workflow of the game could use some enhancements. For example, the “His Home” area could feature more interactive functions to increase engagement and enjoyment. Additionally, the story’s flow would benefit from a clearer rationale for why the user needs to earn 100 points to advance to the conclusion. While some of these creative flaws slipped through this version, they can be addressed as features in future updates.

Overall, I thoroughly enjoyed working on this project and was fully invested, as I was creating something I loved, among other reasons. It provided a fun and technical way to learn, and I am excited for you all to see what I build as time progresses.

The following images are alternative designs that did not make the cut:

According to a few people, the guy on the right did not fit in well, so this menu did not make it.
Although my overall theme was red and black, I did not think this would benefit the aesthetics of the overall story.
This was a test I did when I tried integrating mock-up text with my overall design.


Midterm Project Progress – Week 5

Concept and Introduction

Like a movie teaser, this would serve as the very first visual to be seen for this project. I designed it using Adobe Photoshop. Additional images are from pixabay.com

For my midterm project, I wanted to explore something I deeply love—a concept that excites me to bring to life. This project combines elements from my Week 1 and Week 3 assignments into an interactive piece that I’m thrilled to work on. My love for sci-fi began with watching Dragon Ball Z and was later fueled by shows like Naruto, Marvel and DC animated and live-action films, Star Wars, and many more. From this inspiration, I created a universe that’s too vast to fully explain here, but I can say that this project represents a small piece of something much larger. ‘The Flame Boy’ is a character I’ll be exploring through interactive storytelling, which excites me because it allows me to experiment with a different medium than the filmmaking I’m most accustomed to.

In short, ‘The Flame Boy’ is about a young boy who lives with his grandfather. He was abandoned by the royal family (the reasons are explained in the project) and left on the side of the planet where the sun never rises. He meets someone special who inspires him to trace his roots, as he never felt he truly belonged to this side of the world. The interactive story allows the user to explore this world and learn more about this character. Eventually, he discovers the truth about his family, specifically the Robinsons.

The concept of the interactive artwork offers a choice: you can either explore The Flame Boy’s world or uncover the truth (think ‘red pill, blue pill,’ if you will). Choosing to explore his home lets the user interact with his room and discover his personality. On the other hand, choosing to know the truth allows the user to experience the story through interactive storytelling, which will unfold as you continue the journey.

 

User Interaction

  1. The interactive artwork begins with an opening splash screen. On this screen, there are visual cues guiding the user to enter and continue. This is the starting point every time the user explores this world. A soundtrack plays in the background whenever this screen appears. The following images are rough sketches of how I envisioned the splash screen before moving into Photoshop:
Cover screen idea 1
Cover screen idea 2

2. Once the user presses any button, they are transported to a menu screen. This screen presents them with the option to either explore The Flame Boy’s world/home or learn about his story.

If they choose to explore his home, the screen transitions to a scene resembling his house. Users will be able to interact with various objects within his space, allowing them to learn more about him through this interaction. This will be created using a combination of shapes in p5.js, along with a few images, music, and sounds. The experience will be simple and intuitive.

 

If the user chooses to learn about his story, they are transported into a movie/book-like environment. Here, a narrator introduces the protagonist, explaining how he was born, how he received his name and powers, and why he is where he is now. The user can advance to the next page by clicking the screen. As the story progresses, they meet a magician who guides the protagonist in discovering his identity.

The user is then presented with another choice: either ‘shoot the stars’ using The Flame Boy’s fire powers to earn 100 star coins, or navigate their way through a dark maze using The Flame Boy’s fire as a light source. The maze changes each time the user selects this option, creating an unpredictable and ‘random’ experience. Once the user completes these mini-games, they witness The Flame Boy meeting his parents for the first time. The experience then concludes, prompting the user to start over.
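For the “maze changes each time” idea, a standard approach is a randomized depth-first search that carves a fresh maze on every run. The following self-contained p5.js sketch illustrates the algorithm (my own illustrative code, not the project’s):

const COLS = 10, ROWS = 10, CELL = 40;
let grid = [];

function setup() {
  createCanvas(COLS * CELL, ROWS * CELL);
  for (let y = 0; y < ROWS; y++)
    for (let x = 0; x < COLS; x++)
      grid.push({ x, y, walls: [true, true, true, true], visited: false }); // top, right, bottom, left
  carve(grid[0]); // Carving from the top-left cell produces a new maze each run
  noLoop();
}

function cellAt(x, y) {
  return (x < 0 || y < 0 || x >= COLS || y >= ROWS) ? null : grid[y * COLS + x];
}

function carve(cell) {
  cell.visited = true;
  // Each entry: x offset, y offset, wall to remove here, wall to remove in the neighbor
  const dirs = shuffle([[0, -1, 0, 2], [1, 0, 1, 3], [0, 1, 2, 0], [-1, 0, 3, 1]]);
  for (const [dx, dy, w, ow] of dirs) {
    const next = cellAt(cell.x + dx, cell.y + dy);
    if (next && !next.visited) {
      cell.walls[w] = false; // Knock down the wall between the two cells
      next.walls[ow] = false;
      carve(next);
    }
  }
}

function draw() {
  background(0);
  stroke(255, 100, 0); // Fire-colored walls to match the theme
  for (const c of grid) {
    const px = c.x * CELL, py = c.y * CELL;
    if (c.walls[0]) line(px, py, px + CELL, py);
    if (c.walls[1]) line(px + CELL, py, px + CELL, py + CELL);
    if (c.walls[2]) line(px, py + CELL, px + CELL, py + CELL);
    if (c.walls[3]) line(px, py, px, py + CELL);
  }
}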

The following image is a simple node sketch I made in Photoshop, which depicts the flow of the program in its entirety:

This was the 3rd version of this visual. A more sophisticated version exists. The nodes represent the structure explained above.

The following is a brief progress report on the program as of the time of writing. The music was made with Suno AI:

 

Most Frightening Part and Its Solution

Problem: The main challenge of my interactive artwork lies in implementing the two mini-games within the story segment. This project feels like three projects combined into one, which makes me concerned about whether users will find it as rewarding as I hope. Specifically, I felt apprehensive about how to implement the maze game and the shooting stars feature.

Solution: At the time of writing this report, I am researching ways to integrate these features using predefined algorithms available on GitHub. I will discuss these algorithms in my final update, whether they work or if there’s a need to change the creative and technical approach. For now, this is my progress update.

The following is an image of the splash screen as a thank you for reading this report in its entirety (and because I’m excited to share this image I made in Photoshop!).

Image made in Photoshop for the interactive piece titled The Flame Boy: Becoming a Robinson. Further developments will be made. Images were downloaded from Pixabay and Meta AI.

Reading Reflection – Week 5

Exploring the Pioneering World of Videoplace and Computer Vision

One of the most intriguing things about technology before the 2010s is how projects like Myron Krueger’s Videoplace managed to harness ‘artificial reality’ between 1969 and 1975. This was long before smartphones and personal computers became mainstream, making the achievement feel ahead of its time. In fact, Videoplace was a pioneer in creating a VR-like experience, and this early attempt at computer vision offers us a glimpse into how computers and humans “see” differently.

Computer Vision vs. Human Vision

While human vision is holistic and context-driven, computer vision is all about analyzing images pixel by pixel. For example, Krueger’s Videoplace used techniques like brightest-pixel detection to track users’ movements. This means the computer focused on changes in brightness and motion rather than understanding the full scene as a human would. Despite this difference, computers excel at tracking minute details or movements that humans might miss, making them ideal for tasks requiring precision.
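To make the difference concrete, brightest-pixel tracking can be written in a few lines. Here is a minimal p5.js sketch in the spirit of Krueger’s technique (my own illustrative code, not from the article):

let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  let bx = 0, by = 0, brightest = -1;
  for (let y = 0; y < video.height; y += 4) {   // Sample every 4th pixel for speed
    for (let x = 0; x < video.width; x += 4) {
      const i = 4 * (y * video.width + x);
      const b = video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2];
      if (b > brightest) { brightest = b; bx = x; by = y; }
    }
  }
  noFill();
  stroke(255, 0, 0);
  circle(bx, by, 30); // Mark the brightest point, e.g. a flashlight held by the user
}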

Techniques for Helping Computers “See”

The article discusses techniques like motion detection, which helps computers identify changes in pixel values to track movement. These early algorithms now form the foundation of modern visual effects software like Adobe After Effects and Nuke, allowing them to “see” and track objects with high accuracy. As someone interested in filmmaking, I’ve realized how powerful this technology is for enhancing video quality and adding visual effects, making it possible to create dynamic, interactive experiences.

The Impact on Interactive Art and Surveillance

Videoplace demonstrated how computer vision can transform interactive art, allowing viewers to influence digital experiences in real time. This ability to track movement made art installations more immersive, blurring the boundaries between the physical and digital worlds. However, the same technology also lends itself to surveillance, as seen with systems like the Suicide Box, which monitored human behavior. This dual nature of computer vision—both as an artistic tool and as a means of surveillance—reveals its vast potential and its challenges in shaping interactive media. It shows how interactive media is not limited to entertainment but also touches sociopolitical issues within society.

Final Thoughts 

Reflecting on Krueger’s belief that human interaction is vital in developing computer systems, it’s clear that his vision holds true today. Despite their differences, computer vision and human vision can work together to create compelling, interactive experiences. Understanding how to leverage this technology allows us to not just create art but to redefine how we engage with the digital world. It’s fascinating to realize that the more we harness computer vision, the more limitless our creative potential becomes.

 

Sources Used
  • http://www.flong.com/archive/texts/essays/essay_cvad/index.html (the article)
  • Grammarly and ChatGPT for grammar and paragraph formatting.