Reading Response: Week 8

Her Code Got Humans on the Moon—And Invented Software Itself

After reading the article on Margaret Hamilton, I can genuinely say her work left me inspired. One of the most fascinating aspects of her journey in software engineering is that the career path was not taught formally at the time; pioneers like Hamilton navigated complex, unprecedented problems relying solely on their resourcefulness and intellect. To me, this makes her story all the more exciting and inspiring. In the article, Hamilton reflects, saying, “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West.” This captures an intriguing concept: how something entirely unfamiliar can eventually grow into a field recognized by the world. It emphasizes the invaluable contributions of innovators like Hamilton, whose dedication transformed this “Wild West” of programming into a modern, booming industry.

Today, what was once unknown is now a leading field, with software engineering among the most popular choices for a bachelor’s degree. Interestingly, there’s a marked contrast between Hamilton’s era and ours: while she and her peers pursued this path out of passion and a pioneering spirit, the $400-billion industry today is often seen as a gateway to financial stability, with passion sometimes secondary. Despite the fact that software wasn’t even included in the Apollo mission’s budget or official documentation, its impact is undeniably felt across industries today. This article brilliantly highlights the work of women in technology and helps bridge the gender gap by celebrating these overlooked pioneers.

 

Norman, “Emotion & Design: Attractive things work better”

When I first began reading Norman’s essay on “Attractive Things Work Better,” I was initially skeptical, believing functionality should outweigh aesthetics—why prioritize appearance if it compromises performance? Yet, as I delved deeper, I found myself agreeing with Norman’s insights on how attractive designs can enhance usability. One key point he raises is that while aesthetics sometimes need to be sacrificed in contexts where functionality is paramount, there are cases where the two can harmoniously coexist. This perspective helped clarify any initial doubts I had about the essay’s premise.

Norman’s explanation of positive and negative affect, and their impact on user experience, is particularly compelling. His argument on how emotions like anxiety and fear can be leveraged to overcome procrastination, especially when working within time constraints, feels remarkably applicable. At first, I was hesitant about his idea that a product’s utility can be influenced by mood, and the suggestion of owning multiple versions of a product seemed inefficient to me. It felt impractical to rely on mood when selecting functional items, especially considering cases where only the “undesirable” product might be available, potentially leading to a poor experience. This extends beyond the teapot example he provides. However, his quote, “When we feel good, we overlook design faults. Use a pleasing design, one that looks good and feels, well, sexy, and the behavior seems to go along more smoothly, more easily, and better. Attractive things work better,” presents a thought-provoking angle that I acknowledge has merit, though I feel it doesn’t apply universally.

I do disagree with Norman’s claim that color displays offer no additional value compared to black-and-white screens. This seems particularly outdated in today’s digital world; for example, working in design software like Photoshop often requires accurate color information, and relying on a black-and-white display could lead to unintended, muted results. This is just one of several cases where Norman’s argument might fall short.

Finally, Norman’s emphasis on products being affordable, functional, and pleasurable resonates in our present context. This balance is essential, as seen in decisions by major companies like Apple, which might discontinue products like the Vision Pro if they fail to meet these criteria, particularly in terms of affordability.

The Flame Boy: Becoming a Robinson (Midterm Project)

Concept

link to full sketch: https://editor.p5js.org/takuthulani/full/gdL68sjHc

This project is an adaptation of a fictional world I’ve created, designed to offer an immersive storytelling experience. Instead of passively observing, users actively engage with the narrative, making it more like an interactive movie or book. The story centers on the protagonist and key events from his life. To preserve the element of surprise and keep the experience enjoyable, I encourage you to explore the story firsthand! The narrative takes place in a sci-fi setting, featuring an alien planet inhabited by human-like beings with enhanced abilities. This concept influenced the design, with fire playing a central role and red as the dominant color theme.

How the Game Works:

The game starts by greeting the user with a cover page that plays The Flame Boy’s theme song (it’s catchy, so you might get lost in it—don’t forget to progress, though you’re welcome to just listen!). The theme song was created using Suno AI (credit to them). On the cover image, instructions prompt the user to click the screen, which takes them to a menu page (with its own theme song, sourced from Pixabay). The menu presents two options: “His Story” and “His Home,” and the user clicks one to proceed.

If the user clicks on “His Home,” the page transitions to an image showing his home—simple and straightforward. This image was made using DALL-E. To exit this view and return to the menu, the user can press the escape key. If the user clicks on “His Story,” a video begins to play, so sit back and enjoy! After the video, the user must make a choice: either “Trust” or “Don’t Trust” the character they interact with. Clicking “Don’t Trust” progresses the game to the conclusion and eventually to a thank-you message. The user can then return to the cover screen by clicking the mouse.

If the user chooses “Trust,” the game transitions to a section where they need to earn 100 points to reach the story’s conclusion. The experience continues until the user decides to stop the sketch.
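To make this flow concrete, the skeleton below shows how such screen transitions can be handled in p5.js with a single state variable. This is a minimal sketch with illustrative state names, not the project’s actual code, which layers images, video, and sound on top of this structure.

let state = 'cover'; // 'cover' -> 'menu' -> 'story' or 'home'

function setup() {
  createCanvas(600, 400);
  textAlign(CENTER, CENTER);
  textSize(20);
}

function draw() {
  background(20);
  fill(255, 60, 0); // Red, to match the story's fire theme
  text('Current screen: ' + state, width / 2, height / 2);
}

function mousePressed() {
  if (state === 'cover') {
    state = 'menu'; // Any click on the cover leads to the menu
  } else if (state === 'menu') {
    // The left half stands in for "His Story", the right half for "His Home"
    state = mouseX < width / 2 ? 'story' : 'home';
  }
}

function keyPressed() {
  // The escape key returns the player from "His Home" to the menu
  if (state === 'home' && keyCode === ESCAPE) state = 'menu';
}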

Parts I’m Proud of:

There are several aspects of this project that I’m truly proud of. First and foremost is the video element, which made everything possible. Initially, I planned to use images and recordings to narrate the story, but p5.js kept crashing, so integrating video became the best solution. Even though it’s technically “out of scope” based on what we covered in class, it was a practical way to bring the project to life. I turned to resources like Stack Overflow, the p5.js help section, and ChatGPT to guide me through the video implementation. Below is an example of the code I used to incorporate video into the project:

// Video elements
let storyVideo; // Video for the story sequence
let conclusionVideo; // Video for the conclusion sequence
let decisionImage; // Image for the decision screen where the player chooses their path
...
function playStory() {
  // Stop the story video if it's currently playing to reset the playback
  storyVideo.stop(); 
  
  // Reset the playback time of the story video to the beginning (0 seconds)
  storyVideo.time(0);  
  
  // Start playing the story video from the beginning
  storyVideo.play();  
}
...
function playConclusion() {
  // Stop the conclusion video if it is currently playing
  conclusionVideo.stop();
  
  // Reset the video playback time to the start (0 seconds)
  conclusionVideo.time(0);
  
  // Start playing the conclusion video from the beginning
  conclusionVideo.play();
}
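For context, these video elements have to be created before playStory() or playConclusion() can run. A minimal loading sketch might look like the following; the file paths and canvas size are placeholders, not the project’s actual assets.

let storyVideo;
let conclusionVideo;
let state = 'story'; // Simplified stand-in for the game's state variable

function setup() {
  createCanvas(640, 360);
  // createVideo returns a p5.MediaElement with play(), stop(), and time()
  storyVideo = createVideo('assets/story.mp4');           // hypothetical path
  conclusionVideo = createVideo('assets/conclusion.mp4'); // hypothetical path
  storyVideo.hide();      // Hide the default DOM elements so the frames
  conclusionVideo.hide(); // can be drawn onto the canvas instead
}

function draw() {
  background(0);
  if (state === 'story') {
    image(storyVideo, 0, 0, width, height); // Draw the current video frame
  } else if (state === 'conclusion') {
    image(conclusionVideo, 0, 0, width, height);
  }
}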

Secondly, I’m particularly proud of the shooting game and its mechanics. I’ll include snippets of the code for the parts I’m most proud of below, and I’ll also provide a link to the full sketch so you can explore the many amazing functions I’ve implemented. Here’s a sample of the code for the game within the game:

function playShootingGame() {
  // Draw the background night sky for the shooting game
  drawNightSky();
  
  // Set the fill color to white for the score text
  fill(255);
  
  // Set the text size for the score display
  textSize(16);
  
  // Align text to the right and top of the canvas
  textAlign(RIGHT, TOP);
  
  // Display the current score at the top-right corner of the canvas
  text("Score: " + score, width - 10, 10);

  // Check if 2 seconds have passed since the last star was spawned
  if (millis() - starTimer > 2000) {
    // Spawn new stars for the shooting game
    spawnGameStars();
    
    // Update the star timer to the current time
    starTimer = millis();
  }

  // Loop through the stars array in reverse order to avoid issues with splicing
  for (let i = stars.length - 1; i >= 0; i--) {
    // Get the current star object from the stars array
    let star = stars[i];
    
    // Set the fill color to yellow for larger stars, white for smaller stars
    fill(star.size === 40 ? 'yellow' : 'white'); 
    
    // Draw the star as an ellipse at its specified x and y coordinates with its size
    ellipse(star.x, star.y, star.size);

    // Check if the star has been on screen for more than 2 seconds
    if (millis() - star.appearTime > 2000) {
      // Remove the star from the array if it has been displayed long enough
      stars.splice(i, 1);
    }
  }

  // Check if the score has reached 100 points
  if (score >= 100) {
    // Change the game state to 'conclusion'
    state = 'conclusion';
    
    // Play the conclusion video or sequence
    playConclusion();
  }
}
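The snippet above spawns, draws, and expires the stars, but the hit detection that awards points lives elsewhere in the sketch. My best guess at that mechanic, reusing the stars and score variables from above (the point value per star is an assumption), is a click handler along these lines:

function mousePressed() {
  // Walk the array in reverse so splicing doesn't skip elements
  for (let i = stars.length - 1; i >= 0; i--) {
    let star = stars[i];
    // Count a hit when the click lands inside the star's circle
    if (dist(mouseX, mouseY, star.x, star.y) < star.size / 2) {
      score += 10;        // Assumed points per star
      stars.splice(i, 1); // Remove the star that was shot
      break;              // One star per shot
    }
  }
}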

link to the sketch: https://editor.p5js.org/takuthulani/full/gdL68sjHc

Problems I encountered and possible solutions:

The main challenge I faced was implementing video into the sketch after my original plan didn’t work out. I found solutions using the online resources mentioned earlier. Another problem was integrating the mini-game within the story. The best approach was to treat the game as a separate entity and then integrate it into the sketch, rather than seeing it as one large game (which added unnecessary stress while coding). Additionally, I encountered performance issues, as I used too many audio and visual files. Optimizing the game speed and performance became a key focus. Below is a screenshot showing some of the media I used:

files and resources used to generate the story for the game

This includes various assets like fonts, PowerPoint presentations, a mini script for the game, and a massive 500MB PSD file (trust me, you don’t want to know how many images are in that!). I also went through multiple iterations of the menu and cover page designs.

Since I’m not the best at drawing, I used Meta AI to generate the images, which I then manually edited in Photoshop. I recorded the vocal narrations using Audacity and assembled the video using CapCut before uploading it into my program. Some of the images, sounds, and music were sourced from Pixabay.com, which allows usage for projects like this.

Areas needing improvements:

The overall workflow of the game could use some enhancements. For example, the “His Home” area could feature more interactive functions to increase engagement and enjoyment. Additionally, the story’s flow would benefit from a clearer rationale for why the user needs to earn 100 points to advance to the conclusion. While some of these creative flaws slipped through in this version, they can be addressed as features in future updates.

Overall, I thoroughly enjoyed working on this project and was fully invested, as I was creating something I loved, among other reasons. It provided a fun and technical way to learn, and I am excited for you all to see what I build as time progresses.

The following images are alternative designs that did not make the cut:

According to a few people, the guy on the right did not fit in well, so this menu did not make it.
Although my overall theme was red and black, I did not think this would benefit the aesthetics of the overall story.
This was a test I did when I tried integrating mock-up text with my overall design.


Midterm Project Progress – Week 5

Concept and Introduction

Like a movie teaser, this would serve as the very first visual to be seen for this project. I designed it using Adobe Photoshop. Additional images are from pixabay.com

For my midterm project, I wanted to explore something I deeply love—a concept that excites me to bring to life. This project combines elements from my Week 1 and Week 3 assignments into an interactive piece that I’m thrilled to work on. My love for sci-fi began with watching Dragon Ball Z and was later fueled by shows like Naruto, Marvel and DC animated and live-action films, Star Wars, and many more. From this inspiration, I created a universe that’s too vast to fully explain here, but I can say that this project represents a small piece of something much larger. ‘The Flame Boy’ is a character I’ll be exploring through interactive storytelling, which excites me because it allows me to experiment with a different medium than the filmmaking I’m most accustomed to.

In short, ‘The Flame Boy’ is about a young boy who lives with his grandfather. He was abandoned by the royal family (the reasons are explained in the project) and left on the side of the planet where the sun never rises. He meets someone special who inspires him to trace his roots, as he never felt he truly belonged to this side of the world. The interactive story allows the user to explore this world and learn more about this character. Eventually, he discovers the truth about his family, specifically the Robinsons.

The concept of the interactive artwork offers a choice: you can either explore The Flame Boy’s world or uncover the truth (think ‘red pill, blue pill,’ if you will). Choosing to explore his home lets the user interact with his room and discover his personality. On the other hand, choosing to know the truth allows the user to experience the story through interactive storytelling, which will unfold as you continue the journey.

 

User Interaction

1. The interactive artwork begins with an opening splash screen. On this screen, there are visual cues guiding the user to enter and continue. This is the starting point every time the user explores this world. A soundtrack plays in the background whenever this screen appears. The following images are rough sketches of how I envisioned the splash screen before moving into Photoshop:
Cover screen idea 1
Cover screen idea 2

2. Once the user presses any button, they are transported to a menu screen. This screen presents them with the option to either explore The Flame Boy’s world/home or learn about his story.

If they choose to explore his home, the screen transitions to a scene resembling his house. Users will be able to interact with various objects within his space, allowing them to learn more about him through this interaction. This will be created using a combination of shapes in p5.js, along with a few images, music, and sounds. The experience will be simple and intuitive.

 

If the user chooses to learn about his story, they are transported into a movie/book-like environment. Here, a narrator introduces the protagonist, explaining how he was born, how he received his name and powers, and why he is where he is now. The user can advance to the next page by clicking the screen. As the story progresses, they meet a magician who guides the protagonist in discovering his identity.

The user is then presented with another choice: either ‘shoot the stars’ using The Flame Boy’s fire powers to earn 100 star coins, or navigate through a dark maze using The Flame Boy’s fire as a light source. The maze changes each time the user selects this option, creating an unpredictable, ‘random’ experience; one standard way to achieve this is sketched below. Once the user completes the chosen mini-game, they witness The Flame Boy meeting his parents for the first time. The experience then concludes, prompting the user to start over.
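Since the maze must differ on every visit, a randomized generation algorithm is the natural fit. The sketch below shows one standard approach, a randomized depth-first search (the “recursive backtracker”); it is an illustration of the idea, not necessarily what the finished project will use.

const COLS = 10, ROWS = 10, CELL = 40;
let grid = [];

function setup() {
  createCanvas(COLS * CELL, ROWS * CELL);
  // Each cell starts with all four walls: top, right, bottom, left
  for (let r = 0; r < ROWS; r++) {
    for (let c = 0; c < COLS; c++) {
      grid.push({ c: c, r: r, walls: [true, true, true, true], visited: false });
    }
  }
  carve(grid[0]); // Knock down walls recursively, starting from the first cell
  noLoop();       // The maze is static once generated
}

function cellAt(c, r) {
  if (c < 0 || r < 0 || c >= COLS || r >= ROWS) return undefined;
  return grid[r * COLS + c];
}

function carve(cell) {
  cell.visited = true;
  // Visit neighbors in random order; each entry pairs a direction
  // with the wall indices shared by the two cells
  let dirs = shuffle([
    { dc: 0, dr: -1, mine: 0, theirs: 2 }, // up
    { dc: 1, dr: 0, mine: 1, theirs: 3 },  // right
    { dc: 0, dr: 1, mine: 2, theirs: 0 },  // down
    { dc: -1, dr: 0, mine: 3, theirs: 1 }  // left
  ]);
  for (let d of dirs) {
    let next = cellAt(cell.c + d.dc, cell.r + d.dr);
    if (next && !next.visited) {
      cell.walls[d.mine] = false;   // Remove the wall between the two cells
      next.walls[d.theirs] = false;
      carve(next);
    }
  }
}

function draw() {
  background(0);      // A dark maze, to be lit by The Flame Boy's fire
  stroke(255, 80, 0); // Fire-colored walls to match the theme
  for (let cell of grid) {
    let x = cell.c * CELL, y = cell.r * CELL;
    if (cell.walls[0]) line(x, y, x + CELL, y);               // top
    if (cell.walls[1]) line(x + CELL, y, x + CELL, y + CELL); // right
    if (cell.walls[2]) line(x, y + CELL, x + CELL, y + CELL); // bottom
    if (cell.walls[3]) line(x, y, x, y + CELL);               // left
  }
}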

The following image is a simple node sketch I made in photoshop which depicts the flow of the program in its entirety:

This was the 3rd version of this visual. A more sophisticated version exists. The nodes represent the structure explained above.

The following is a brief progress report on the program as of the time of writing. The music was made in Suno A.I:

 

Most Frightening Part and Its Solution

Problem: The main challenge of my interactive artwork lies in implementing the two mini-games within the story segment. This project feels like three projects combined into one, which makes me concerned about whether users will find it as rewarding as I hope. Specifically, I felt apprehensive about how to implement the maze game and the shooting stars feature.

Solution: At the time of writing this report, I am researching ways to integrate these features using predefined algorithms available on GitHub. I will discuss these algorithms in my final update, whether they work or if there’s a need to change the creative and technical approach. For now, this is my progress update.

The following is an image of the splash screen as a thank you for reading this report in its entirety (and because I’m excited to share this image I made in Photoshop!).

Image made in Photoshop for the interactive piece titled The Flame Boy: Becoming a Robinson. Further developments will be made. Images were downloaded from Pixabay and Meta AI.

Reading Reflection – Week 5

Exploring the Pioneering World of Videoplace and Computer Vision

One of the most intriguing things about technology before the 2010s is how projects like Myron Krueger’s Videoplace managed to harness ‘Artificial Reality’ between 1969 and 1975. This was long before smartphones and personal computers became mainstream, making such achievements feel ahead of their time. In fact, Videoplace pioneered a VR-like experience, and this early attempt at computer vision offers us a glimpse into how computers and humans “see” differently.

Computer Vision vs. Human Vision

While human vision is holistic and context-driven, computer vision is all about analyzing images pixel by pixel. For example, Krueger’s Videoplace used algorithms like brightest-pixel detection to track users’ movements. This means the computer focused on changes in brightness and motion rather than understanding the full scene as a human would. Despite this difference, computers excel at tracking minute details or movements that humans might miss, making them ideal for tasks requiring precision.

Techniques for Helping Computers “See”

The article discusses techniques like motion detection, which helps computers identify changes in pixel values to track movement. These early algorithms now form the foundation of modern visual effects software like Adobe After Effects and Nuke, allowing them to “see” and track objects with high accuracy. As someone interested in filmmaking, I’ve realized how powerful this technology is for enhancing video quality and adding visual effects, making it possible to create dynamic, interactive experiences.
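As a small illustration of the motion-detection idea Levin describes, the p5.js sketch below compares each webcam frame against the previous one and paints white the pixels that changed. The threshold value and variable names are my own choices, not code from the article.

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // Keep the pixel array the same size as the canvas
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare the red channel of this frame with the previous one
    let diff = abs(video.pixels[i] - prevFrame.pixels[i]);
    let v = diff > 30 ? 255 : 0; // White wherever motion occurred
    pixels[i] = pixels[i + 1] = pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
  // Remember the current frame for the next comparison
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
}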

The Impact on Interactive Art and Surveillance

Videoplace demonstrated how computer vision can transform interactive art, allowing viewers to influence digital experiences in real time. This ability to track movements made art installations more immersive, blurring the boundaries between the physical and digital worlds. However, the same technology also lends itself to surveillance, as seen with systems like the Suicide Box, which monitored human behavior. This dual nature of computer vision—as both an artistic tool and a means of surveillance—reveals its vast potential and its challenges in shaping interactive media. It shows how interactive media is not limited to entertainment but also touches on sociopolitical issues within society.

Final Thoughts 

Reflecting on Krueger’s belief that human interaction is vital in developing computer systems, it’s clear that his vision holds true today. Despite their differences, computer vision and human vision can work together to create compelling, interactive experiences. Understanding how to leverage this technology allows us to not just create art but to redefine how we engage with the digital world. It’s fascinating to realize that the more we harness computer vision, the more limitless our creative potential becomes.

 

Sources Used
  • http://www.flong.com/archive/texts/essays/essay_cvad/index.html (the article)
  • Grammarly and ChatGPT for grammar and paragraph formatting.

Assignment 4 – The Lyrical Video

Concept

For this project, I wanted to explore something simple yet engaging with text. My initial idea involved allowing user input, where text would fall to the ground. While that was a good starting point, I felt it needed more interactivity. Then, inspiration struck while I was listening to music: why not create a lyric video? And that’s how this project took shape – a lyric video with the text fading in and out, synchronized to the music playing in the background.

 

Code I’m Particularly Proud Of

In this simple project, the code I’m most proud of is the part that handles the fade-in and fade-out effect of the text. Normally, this would require a loop, but since the draw() function in p5.js acts as a natural loop, I managed it using a simple if statement combined with a counter that gradually changes the opacity of the text until it fully fades out. Here’s the core code snippet:

// Display the current line with a fade-out effect
  fill(255, fadeValue);
  text(lyrics[currentLine], width / 2, lineY); // Display the current line of lyrics at the center of the canvas

  // Gradually fade the text out by decreasing its opacity each frame
  fadeValue -= 1;

  // When the text is fully faded, move to the next line
  if (fadeValue <= 0) 
  {
    currentLine = (currentLine + 1) % lyrics.length; // Move to the next line, looping back to the start if at the end
    
    currentColor = (currentColor + 1) % colors.length; // Change to the next background color, looping through the array
    
    fadeValue = 255; // Reset the fade value to fully opaque
  }
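For completeness, here is a self-contained reconstruction of the effect with placeholder lyrics and colors. The real sketch also plays the music and advances lines on mouse press; those parts are omitted here.

let lyrics = ['First placeholder line', 'Second placeholder line', 'Third placeholder line'];
let colors;
let currentLine = 0;
let currentColor = 0;
let fadeValue = 255; // 255 is fully opaque; 0 is fully faded

function setup() {
  createCanvas(600, 200);
  textAlign(CENTER, CENTER);
  textSize(24);
  colors = [color(40, 20, 60), color(60, 20, 20), color(20, 60, 40)];
}

function draw() {
  background(colors[currentColor]);
  fill(255, fadeValue);
  text(lyrics[currentLine], width / 2, height / 2);
  fadeValue -= 1; // draw() runs ~60 times a second, so each line fades over ~4 seconds
  if (fadeValue <= 0) {
    currentLine = (currentLine + 1) % lyrics.length;   // Loop through the lyrics
    currentColor = (currentColor + 1) % colors.length; // Cycle the background color
    fadeValue = 255;
  }
}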

 

Final Product

The final product is available to experience, and you can interact with it by pressing the mouse button to move the lyrics forward. Otherwise, you can simply watch it as a lyric video with music in the background. Just a heads-up: the video includes audio, so be mindful before playing it.

 

 

Final Thoughts and Reflection

Working on this project was both intriguing and challenging. It was a lesson in embracing simplicity, as my initial ideas were quite ambitious. However, I realized that there’s a unique power in crafting something straightforward yet effective. While I’m pleased with the outcome, there are a few areas for improvement: I would like to synchronize the lyrics with the music more precisely, enhance the background visuals, and add more interactive elements to make it more engaging. This project has sparked new ideas, and I look forward to applying these insights to something even bigger and better. Stay tuned!

Reading Reflection – Week 4

Don Norman’s arguments on affordance, discoverability, and communication offer a compelling and analytical perspective on how interactive objects should be designed. Unlike readings such as “The Art of Interactive Design,” which tend to focus more on the finished product, Norman delves into the intricacies of poorly designed objects, analyzing the root causes and suggesting improvements. His credibility as an engineer strengthens his arguments, as he understands both the technical and human aspects of design. This raises an interesting question: if a professional fiction writer presented the same solutions, would they carry the same weight? This reflects the importance of practical experience in validating design solutions. I relate to Norman’s critique of poorly designed objects; for example, VR/AR goggles often suffer from an imbalance that makes them front-heavy. Redesigning them to be lighter would significantly enhance the immersive experience. This demonstrates how affordance and ergonomics directly impact the user experience.

In discussing the “Paradox of Technology,” Norman argues that as technology becomes more functional, it becomes harder to learn. I agree but question the extent to which this should be solely a designer’s responsibility. While Norman emphasizes that engineers should approach design from a non-logical, human perspective, this principle should also apply to the Paradox of Technology. Designers face limitations in simplifying complex functionality, suggesting that the solution lies in ensuring engineers design with the average human in mind, not just other engineers. This point is particularly relevant when considering advanced technologies like Artificial Super Intelligence (ASI). While intended to serve humanity, ASI must be designed to accommodate the everyday user, not just the logical thinker, ensuring it remains accessible and understandable.

Feedback, another of Norman’s key points, is essential in design, as humans are naturally impatient with technology. However, the expectation of rapid feedback, especially in digital interactions, can have unintended psychological consequences, such as increased anxiety and impatience seen in young people’s heavy cellphone use. While Norman rightly argues that feedback should be provided correctly and appropriately, I contend that excessively fast responses can condition users to expect instant gratification, potentially harming mental well-being. Designers and engineers must be mindful of balancing efficiency with psychological impact, aligning with the principles of Human-Centered Design (HCD). Norman’s insights on affordance, feedback, and human-centered design are not just applicable but essential in shaping interactive media. Especially in contexts like end-of-semester showcases, where media installations encounter a diverse audience, applying Norman’s principles ensures intuitive and meaningful human-machine interactions.

Assignment 3 – The Alternate Universe

Concept

Imagine a universe where two planets exist on the same orbit. For many years, I’ve been working on fictional stories, and I wanted to bring one of these concepts to life using p5.js for this assignment. The idea emerged a while ago, and although it was inspired by sci-fi media, I can’t say I’m directly mimicking anyone else’s work. The concept for this project is to make the two planets move 180 degrees apart, as if they are locked in orbit, to prevent them from crashing into each other (I’m aware that real planets don’t behave like this). This is how the project began. During the planning stage, I used Adobe Photoshop to composite some images, which helped me visualize how I would approach the project.

This is the image I used to plan how I would like the final project to look.

The Procedure

First, I began by creating functions to generate the stars. I didn’t want the stars to move, but they needed to be positioned randomly. The following code snippet shows how I randomly generated and stored the stars, along with a second function that draws them from the stored positions.

// Array holding each star's position, size, and color
let starPositions = [];

// Function to generate stars and store their positions, size, and color
function generateStars(numStars) {
  for (let i = 0; i < numStars; i++) {
    let x = random(0, width);  // Random x position
    let y = random(0, height);  // Random y position
    let size = random(1, 3); // Random size
    
    // Colors for stars (magenta, blue, yellow, and white)
    let colr = [color('magenta'), color('blue'), color('yellow'), color(255)];
    
    // Store each star's position, size, and color
    starPositions.push({
      x: x,
      y: y,
      size: size,
      colr: random(colr)
    });
  }
}

// Function to draw stars from stored positions
function drawStars() {
  for (let i = 0; i < starPositions.length; i++) {
    let star = starPositions[i];
    
    stroke(star.colr);  // Set color for each star
    strokeWeight(star.size); // Set size for each star
    point(star.x, star.y);   // Draw the star
  }
}

In this project, I used various functions, including one to detect comet collisions with planets and another for collisions with the sun. The comets were created using a class and stored in a dynamic array, making memory management simpler compared to other programming languages. The project involved a lot of mathematical concepts, especially for the comet class, and I drew inspiration from p5.js projects and AI-assisted planning. I experimented through trial and error to execute certain parts.
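As one example of how such a collision check can work, two circular bodies overlap when the distance between their centers is smaller than the sum of their radii. A minimal version, with illustrative names rather than my exact function signatures:

// Returns true when a comet and a planet overlap.
// Both objects are assumed to have x, y, and size (diameter) properties.
function cometHitsPlanet(comet, planet) {
  return dist(comet.x, comet.y, planet.x, planet.y) < (comet.size + planet.size) / 2;
}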

Code I’m most proud of:

The code I’m most proud of is the Planet class, which brings the entire concept to life using trigonometry. By applying cosine and sine to convert an angle and orbit radius into x-y positions, I was able to make the planets behave as intended. This idea came from Google searches. Here’s a snippet of my Planet class:

class Planet {
  constructor(size, colr, strokecolor) {
    this.size = size;
    this.colr = color(colr);  // Fill color
    this.strokecolor = strokecolor;  // Outline color
    this.x = 0;  // X position (calculated later)
    this.y = 0;  // Y position (calculated later)
    
  }

  // Update the planet's position based on the angle and radius
  update(angle, radius) {
    this.x = width / 2 + cos(angle) * radius;  // X position
    this.y = height / 2 + sin(angle) * radius; // Y position
  }

  // Show the planet
  show() {
    stroke(this.strokecolor);  // Outline color
    strokeWeight(2);  // Outline thickness
    fill(this.colr);  // Fill color
    ellipse(this.x, this.y, this.size);  // Draw the planet
  }
}

In my draw function, I used the following code, applying the principles of Object-Oriented Programming to bring this idea to fruition.

// Update and draw the planets
 planet1.update(angle, radius);
 planet1.show();
 
 planet2.update(angle + PI, radius);  // The PI angle ensures planets are always opposite each other
 planet2.show();
 
 // Increment the angle for continuous rotation
 angle += 0.01;

 

The Final Project:

 

Reflection:

For this project, I was a bit ambitious when I first started. I prioritized realism over functionality to give the viewer a great experience. However, I learned that this wasn’t necessary and settled for a simpler approach using Object-Oriented Programming. After reading The Art of Interactive Design, I realized my final project lacked key elements of interactivity, like even a simple mouse click. In my defense, I removed interactivity because, in the storyline, humans don’t have the power to move objects in space. However, I would improve the project by adding music, moving stars, mouse-click-generated comets, and perhaps better visuals. I faced challenges in ensuring the planets always rotated 180 degrees apart, but after solving this, I was amazed at what code can achieve. I’m excited to see what it can do with more advanced graphics in the future. For now, this is it.

 

Reading Reflection – Week 3

There are two key questions that arise after reading Chapter 1 of ‘The Art of Interactive Design’. The first is: What are the characteristics of a strongly interactive system? For the second question, stay with me until the end of this response to find out; I promise it will be worth it. Before addressing these, I’d like to share my analysis of Chapter 1 and my thoughts after completing it.

Although the book was first published in 2002, its concepts remain relevant in 2024. I fully agree with the author’s argument that technology buzzwords are often overused and misunderstood. At the time of the book’s writing, “interactivity” was the buzzword in question. When we examine this closely, it becomes clear that this is largely driven by the capitalistic nature of the tech industry. Buzzwords are used because they help sell products, even if their meanings are diluted. This is equally true today with terms like “innovation” and “AI,” which are frequently misapplied to make a profit, often without a deep understanding of their implications.

This chapter offered insightful ideas, and I was particularly drawn to the concept of viewing interactivity as a conversation between two actors, where listening, speaking, and thinking all play critical roles in designing an interactive experience. The author’s approach is notable in that he shifts the definition of interactivity from a simple yes-or-no concept to a hierarchical structure, allowing us to assess interactivity from a subjective perspective. I personally agree that interactivity is subjective, given the diversity of the world’s population. What may be considered highly interactive in one part of the world might be seen as only moderately or minimally interactive elsewhere. The author strengthens his argument by clarifying what is not interactive, such as mere reaction or participation, and provides strong examples that are difficult to refute.

While the author’s arguments might seem somewhat biased towards interactive design, I find myself in agreement with most of them. He argues that a true interactive designer incorporates all three key elements—listening, speaking, and thinking—while a user interface designer typically excludes the thinking aspect, reducing interactivity. Many technologies that we use today, such as Spotify, smartphones, and gaming consoles, lack all three aspects of interactivity. For instance, while Siri can listen and speak, it doesn’t truly “think” independently yet. Despite this, humans continue to interact with these technologies frequently! This raises the question of whether these technologies were designed by user interface designers or whether these designers have somehow adopted the three key elements of interactivity. It also prompts a deeper examination of the author’s critique of designers who may have less expertise in the arts and humanities but are still creating interactive systems.

Now, the question you’ve been waiting for: What ideas do I have for improving user interaction in my p5 sketches? After reading this chapter, I plan to focus on integrating the elements of listening and speaking to make my sketches more interactive. Once I have mastered these two aspects, I will work on incorporating the third element—thinking. This will undoubtedly be a challenge, as figuring out how to make a p5 sketch “think” is no small task. However, it will be an exciting journey to explore how this can lead to truly interactive creations.

Assignment 2 – Move Like Water

Project Concept

For this project, I spent considerable time thinking about a fun and interactive way to use loops, while keeping the task manageable. Eventually, I was inspired by the idea of creating water ripples, envisioning someone walking on a dark night and stepping into a puddle, triggering a ripple effect. This concept motivated me to explore a way for users to interact with the screen, allowing them to click anywhere to generate ripples. By varying the click speed, users could create randomized yet controlled ripple patterns, enhancing the experience. I decided to limit the ripple colors to red, green, and blue (RGB), as this simple palette would still make the effect visually appealing and enjoyable for users.

 

Code I’m Proud Of

In this project, I’m proud of how I utilized two for loops to bring the ripple effect to life. The loop that progressively grows the ripples is particularly noteworthy, as it effectively simulates the natural ripple motion. Additionally, I optimized the code by storing each ripple as an object in a single array, avoiding the separate collections of values I had initially considered. Below is the section of code that I’m most proud of:

for (let i = 0; i < ripples.length; i++) 
  {
    let ripple = ripples[i];
    stroke(ripple.col);  // Set stroke color to ripple's assigned color
    
    // Draw a circle for each ripple
    ellipse(ripple.x, ripple.y, ripple.radius * 2);
    
    // Increase the radius to create the growing effect, adjusting the speed
    ripple.radius += expansionSpeed;  // Increase radius faster
  }
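Here is a compact, self-contained reconstruction of how the ripples are spawned and drawn together. The variable names match the snippet above, while the expansion speed and canvas size are placeholders, and I have added a cleanup step so old ripples don’t accumulate.

let ripples = [];
const expansionSpeed = 2; // Placeholder growth rate per frame

function setup() {
  createCanvas(400, 400);
  noFill();        // Ripples are outlines only
  strokeWeight(2);
}

function draw() {
  background(0); // A dark night for the puddle scene
  for (let i = ripples.length - 1; i >= 0; i--) {
    let ripple = ripples[i];
    stroke(ripple.col);
    ellipse(ripple.x, ripple.y, ripple.radius * 2);
    ripple.radius += expansionSpeed;
    if (ripple.radius > width) ripples.splice(i, 1); // Drop ripples once they outgrow the canvas
  }
}

function mousePressed() {
  // Each click spawns a ripple in red, green, or blue, as in the concept
  let palette = [color(255, 0, 0), color(0, 255, 0), color(0, 0, 255)];
  ripples.push({ x: mouseX, y: mouseY, radius: 0, col: random(palette) });
}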

 

Final Outcome

You can try out the interactive ripple effect below. Simply click or tap on the screen, and watch the ripples form. Feel free to experiment by clicking multiple times to see how the loops overlap and interact.

Reflection and Future Improvements

This project was a rewarding challenge. It allowed me to transform an idea into a tangible, interactive experience using loops. While I’m pleased with the current result, there are several areas I would like to improve in the future. I envision adding more color variations, enhancing the background for greater interactivity, and introducing additional, smaller ripples beneath the main one for a smoother, more dynamic transition. These improvements would create more intricate, overlapping ripple patterns, making the experience even more engaging and visually captivating.

Reading Reflection – Week 2

One of the most intriguing aspects of coding is how a few lines can produce unexpected creative outcomes. Casey Reas demonstrated this in his video by using minimal code to create compelling artwork. Initially, I saw coding as writing on a text editor and getting feedback through a simple console, but the idea that code could generate art opened a new perspective for me. Reas emphasized the importance of balancing randomness and order in art. Too much randomness leads to computer-generated results, while introducing structure creates something unique. I agree with his point that art can emerge from blending chaos with control, making it not just a product of machines but a collaboration between human creativity and computational processes.

The video also raised thought-provoking questions about the evolving definition of art in the age of technology, particularly with tools like text-to-image generation. Traditionally, artists have played a central role in shaping chaos into order, but as computers become more advanced, capable of simulating both chaos and structure, the lines between human and machine contributions blur. At what point does the creative process shift from being driven by human intention to being shaped by the algorithms and systems that generate these works? Reas touched on this when quoting Michael Noll, who suggested that computers are not just assistants but active participants in creating new artistic mediums. This is especially relevant today, with AI art becoming a legitimate form of expression, as machines are now generating images, music, and even literature with minimal human input.

This raises deeper questions about control and authorship in the creative process. If computers can generate artwork from chaotic prompts, to what extent can we still claim that the final product is “human” art? Moreover, as AI systems evolve, there may come a time when they independently balance chaos and order, leading to entirely new forms of creativity without human intervention. This shifts the role of the artist from creator to curator, selecting and guiding the machine’s output rather than crafting the work directly. Reas’ observation about the natural world mirrors this dynamic: just as humans bring order to nature’s inherent chaos, AI could bring order to the randomness of creative prompts. This creates a paradox: we attempt to control the chaos in our own creations while relying on machines to navigate the very chaos we introduce into the creative process. As AI art grows, it will continue to challenge traditional notions of what it means to be an artist and where the balance between chaos and order lies.