Week 9: Reading Response

This week’s first reading, Physical Computing’s Greatest Hits (and Misses), made me think a lot about the different physical computing themes around me and the ways I could integrate them into my own work. A sentence that stood out to me was at the start: “Sometimes when people learning about physical computing hear that a particular idea has been done before, they give up on it, because they think it’s not original.” I related to it both during our p5 sketches and now with Arduino, where I pressure myself to create something completely different. As I read on, I recognized several of the themes from my own experience, such as the floor pad games found in many arcades today in different styles, and the “Fields of Grass” theme used in places like TeamLab and museums, where you run your hand over sensors and experience an interaction. Interestingly, the dolls and pets theme reminded me of the Build-A-Bear teddies I made when I was younger, where I would press a part of the bear and hear recorded audio. While reading, I found myself thinking about my Arduino and all the different sensors I have; the ideas mentioned are great inspiration and encouraged me to look deeper for new ways to integrate them into my projects.

This week’s second reading, Making Interactive Art: Set the Stage, Then Shut Up and Listen, drew my attention to an aspect of interactive artworks that I had not considered before, one that differs from the usual advice. It made a lot of sense to me that users who encounter something interactive should feel comfortable giving inputs based on their instincts. What I took away from the reading is that it is important to guide the user and give them context about what they are experiencing, but not to tell them exactly what to do, in order to maintain the purpose of “interactivity” in the artwork. The concluding example of a director working with actors really clarified this idea and made it more convincing.

Altogether, I feel like both readings align well with each other and added to my knowledge and understanding of interactive artworks and programming. The first reading felt more like a collection of examples and structures of how a project could be, while the second added to it by focusing on the user experience and interaction within that same project. The knowledge I gained connects clearly to the work I have done and will continue to do in this course, and both readings gave me ideas, inspiration, and awareness of what to consider moving forward.

Week 8: Unusual Switch

Concept:

For my first Arduino assignment, I had to create an LED switch that works without using the hands. I decided to take the idea further and build one that needs no contact from any part of the body at all. I had been curious about the ultrasonic sensor since we were introduced to the kit, so I chose to experiment with it and see if I could turn it into a switch.

My switch works like a distance-check system: when something is detected within 10 cm of the sensor, the green light turns on, as if it is in the perfect position. When something is detected farther than 10 cm away, the red light turns on, signaling that it is too far from the sensor. Although there is no physical contact, the ultrasonic sensor acts as a switch by measuring distance and controlling whether the circuit behaves as open or closed.

This idea brings to mind different real-world applications, such as mall parking lots, where you see a red light on top if a space is occupied and green if the parking spot is empty.

Link to Code:

Arduino File on GitHub

Demonstration:

unusualswitch

Setup:

For the Arduino setup, I started by placing the sensor and connecting four wires, one for each of its pins: the red wire to 5 volts, the black to ground, and the blue and green wires to assigned pins that send the trigger pulse and receive the echo for the lights. Then I placed the two LEDs; for each one, a wire of the matching color connects one leg to an assigned Arduino pin, and a 330-ohm resistor connects to the other leg. Finally, I added a red wire connecting the negative rail to ground, and placed one side of both resistors on that same rail.

Code:

My code is straightforward overall: I assign variables, read the sensor input, and set the LED outputs accordingly. All explanations are included directly in the code itself.

One part of the code I found particularly important is using digitalWrite with LOW to ensure that the lights turn off when they are not called by the commands. Without this, the lights would stay on the whole time, no matter where I placed my hand, since the program wasn’t being instructed to turn them off again.

digitalWrite(trigPin, HIGH);  // Start the trigger pulse
delayMicroseconds(10);        // Hold the trigger HIGH briefly (the sensor needs at least 10 microseconds)
digitalWrite(trigPin, LOW);   // End trigger pulse

digitalWrite(led1,LOW); // Return green light to initial state before going into if else condition
digitalWrite(led2,LOW); // Return red light to initial state before going into if else condition

Additionally, the if-else statement is the most crucial part, as it tells the lights when to turn on and off based on the distance. I can also adjust it to suit whatever distances and measurements I want.

if (distance <= 10) {
  digitalWrite(led1, HIGH); // If distance from sensor is 10 cm or less, turn on green light
} else if (distance > 10) {
  digitalWrite(led2, HIGH); // If distance is more than 10 cm, turn on red light
}
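
For context on where the distance value comes from: the ultrasonic sensor reports how long the echo pulse took to return, and distance is derived from the speed of sound (about 0.0343 cm per microsecond), halved because the pulse travels out and back. Here is a quick illustration of that conversion in plain JavaScript; the Arduino sketch would apply the same math to the microseconds value returned by pulseIn():

```javascript
// Convert an ultrasonic echo duration (microseconds) to a distance in cm.
// Sound travels ~0.0343 cm/us; divide by 2 because the pulse goes out and back.
function echoToCentimeters(durationMicroseconds) {
  return (durationMicroseconds * 0.0343) / 2;
}

// An echo of about 583 microseconds corresponds to roughly 10 cm,
// the threshold used by the switch.
console.log(echoToCentimeters(583).toFixed(1)); // ~10.0
```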

Diagram:

Reflection:

Overall, I am satisfied with my work on this first assignment. Before starting, I was really overwhelmed and thought I would struggle a lot to make it work. However, as soon as I began understanding what each wire and piece of code does, I started to actually enjoy it and had fun experimenting with different colors.

I did get a little confused during the wiring process because of all the colors of the wires. The sensor and LEDs were placed on opposite sides from their wires, so I moved them, and everything made more sense, both on the board and in the code.

For future improvements, I feel like I could add more LEDs at different distances, or even make it more advanced. For example, instead of sensing simple hand waves, it could sense a specific area of walking.

References:

I first went over the class slides to recap what we learned, and I also reviewed parts of the Zoom recording for guidance on plugging in and uploading code.

I found a tutorial on using a motion sensor, which I followed along with, making sure I understood every step and the code needed to make it work:

https://youtu.be/7zPbEEPAwQA?si=mkxxcjrJ8M2CU19C 

I also used links to specific code references on the official Arduino website to recap what each function does:

https://docs.arduino.cc/language-reference/en/functions/digital-io/pinMode/

https://docs.arduino.cc/language-reference/en/functions/advanced-io/pulseIn/

Week 8: Reading Response

This week’s first reading, Emotion and Design: Attractive Things Work Better by Norman, gave me a new perspective on how and when things can work better in different situations. At first, the title Attractive Things Work Better made me expect to disagree with the reading, since it suggested prioritizing appearance, which does not sound right when trying to create something functional and useful. However, as I read through it, I was impressed by the number of ideas that made me realize things I had not considered before, especially that we often use things based on how we feel, and those feelings are strongly influenced by design. This idea was clear in the example of the car, where we would prefer to drive it when it looks clean rather than dirty. It also made me think about how designers should consider the situations the user might be in, so they can adjust the design to create a smoother experience, like the idea of opening a push or pull door while relaxed versus stressed. It also made me reflect on the things I create in this course: I should make sure my work functions well, but also has a clear and attractive appearance that supports the user’s experience. It is almost like saying design is not only about physical appearance, but also about the psychological aspect of the work.

This week’s second reading, Her Code Got Humans On The Moon and Invented Software Itself by McMillan, was very interesting to me because it was so different from the other readings we have done, including the one above. I was fascinated by how Margaret Hamilton was able to succeed and prove the importance of software and correct programming at a time when it was not fully developed or taken seriously. I was struck by her quote, “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West. There was no course in it. They didn’t teach it,” as it made me realize that they had to create code and commands without references or guidance like we have today. It felt like coding from scratch, which is especially intense considering it was connected to astronauts and people’s lives. I was also inspired by her dedication, especially in the way she worked on preventing errors and thinking through different possible scenarios. This made me connect it to the work we do in our projects, for example when using if-else statements to control different outcomes.

Both readings were very different from each other, but they added a lot to my understanding of this course and my own work. One connection I was able to make was between Hamilton’s focus on error-preventing systems and Norman’s explanation of how people use things differently depending on their situation. For example, in a programmed website or interactive project, if a user is overwhelmed, they might start clicking the wrong things, so error-preventing code could help guide or correct their actions. I feel like these readings encouraged me to try new coding techniques and explore different ideas, while also paying attention to the appearance of my work, since it can affect how it is experienced and used.

Midterm Project – The Maze Captures

Concept:

My midterm project is designed as a new interactive game that combines both an artistic experience and an enjoyable gameplay element. The Maze Captures is inspired by the Sheikh Zayed Museum and the Qasr Al Hosn Festival in Abu Dhabi. The artistic aspect comes from the museum, which is modern in design while maintaining a strong historical and cultural connection to the United Arab Emirates. I then developed the game aspect based on the Qasr Al Hosn Festival, which featured a culturally themed escape room, but I adapted this idea into my own concept.

Here is an image that demonstrates my inspiration from the Sheikh Zayed National Museum in Abu Dhabi:

The concept of the game is to allow the player to walk around and experience a museum-like environment filled with Emirati cultural artworks. These artworks are AI-generated interpretations of real pieces found in the Sheikh Zayed Museum. Each time an object is spotted, the player must position it within their camera lens frame and capture it within a specified time to increase their score. Below are two of the eight actual references used to create the artworks in my game:

Embedded Sketch:

How it works:

The game begins with an instruction screen, allowing the user to understand what is expected for the game to function properly. When the player presses the space bar, they are positioned at the start of the maze. The player uses the arrow keys to move smoothly and the mouse pad to look around, allowing for easier navigation and a more immersive experience. The maze is structured to provide multiple paths, similar to a traditional maze, with various types of objects, ranging from artworks to cultural items, placed throughout the space. I also included background music inspired by a song about Sheikh Zayed to reinforce the cultural theme and atmosphere of the museum. When an object is spotted, it must be centered within the camera lens frame and captured using the Enter key. A capture sound effect plays to enhance interactivity and create a sense of realism. Each time an object is captured, the player’s score increases depending on how much time remains: the earlier the capture, the higher the score. Once the timer ends, the player can view their final score and press the space bar to restart the game.
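
Because the score awarded for each capture depends on how much time remains, the scoring rule can be isolated as a small pure function. The following is only a hedged sketch in plain JavaScript: the point values, the linear scaling, and the name captureScore are assumptions for illustration, not the project’s actual code:

```javascript
// Award more points the earlier an object is captured.
// timeRemaining is in seconds; the game described above runs on a 30-second timer.
function captureScore(timeRemaining, totalTime = 30, maxPoints = 100) {
  if (timeRemaining <= 0) return 0;           // timer already expired
  const fraction = timeRemaining / totalTime; // 1.0 at the start, 0.0 at the end
  return Math.round(maxPoints * fraction);    // early captures earn close to maxPoints
}

console.log(captureScore(30)); // 100 (captured immediately)
console.log(captureScore(15)); // 50  (captured halfway through)
console.log(captureScore(0));  // 0   (timer ended)
```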

In terms of code organization, I used Object-Oriented Programming by creating two classes: one for walls and one for objects. Since these elements are repeated multiple times in different positions, using classes allowed me to keep the code organized and reusable by simply calling them in the sketch and adjusting their positions as needed. I then structured the main sketch using the three game states we covered: a start screen, the gameplay screen, and an end screen where the user can return to the beginning. I handled user interaction through functions such as keyPressed() for keyboard input and mouseDragged() for camera movement. Additionally, I incorporated audio elements, including a capture sound effect and background music, to enhance the realism and overall experience of the game. I also implemented collision detection to ensure that the player cannot walk through walls, which helps maintain the structure and realism of the maze. The project was built using WEBGL mode, which allowed me to create a 3D environment and simulate a first-person perspective using a camera system.
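
As a rough sketch of this class-based structure, here is a minimal, hypothetical Wall class in plain JavaScript, with a constructor matching the new Wall(x, z, w, h, d) calls used in the maze layout; the project’s actual class also contains collision logic and drawing code:

```javascript
// A wall is defined once as a class, then reused at many positions.
class Wall {
  constructor(x, z, w, h, d) {
    this.x = x; // center x position
    this.z = z; // center z position
    this.w = w; // width
    this.h = h; // height
    this.d = d; // depth
  }
}

// The maze is then just a list of positioned instances.
const walls = [];
walls.push(new Wall(-150, 500, 20, 280, 1000)); // left wall of the starting hallway
walls.push(new Wall(150, 500, 20, 280, 1000));  // right wall of the starting hallway

console.log(walls.length); // 2
console.log(walls[0].w);   // 20
```

Keeping all the geometry in one list like this is what makes the collision loop and the layout code later in this post so compact.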

Parts I am proud of:

I am particularly proud of creating the movement of the player, as it initially started as a problem, but I was able to overcome it in the best way possible. The issue was that the arrow keys were not aligned with the movement and would keep switching directions, meaning that sometimes the forward key would move the player backwards and vice versa. I also couldn’t properly look around to decide which direction I was walking in. I then realized that I needed to use sine and cosine to control the direction, and carefully adjust the positive and negative values for each key. This allowed each movement key to function correctly without conflicting with the others. I also used the mouseDragged() function, which we covered in class, to allow the player to look around left and right.

// Allow user to look around left and right, using mouse
function mouseDragged() {
  angle += movedX * 0.01;
}

  // Move forward using the up arrow key
  if (keyIsDown(UP_ARROW)) {
    nextX += sin(angle) * speed;
    nextZ -= cos(angle) * speed;
  }

  // Move backward using the down arrow key
  if (keyIsDown(DOWN_ARROW)) {
    nextX -= sin(angle) * speed;
    nextZ += cos(angle) * speed;
  }

  // Move left using the left arrow key
  if (keyIsDown(LEFT_ARROW)) {
    nextX -= cos(angle) * speed;
    nextZ -= sin(angle) * speed;
  }

  // Move right using the right arrow key
  if (keyIsDown(RIGHT_ARROW)) {
    nextX += cos(angle) * speed;
    nextZ += sin(angle) * speed;
  }

Another part I am proud of, which was also challenging at first, was preventing the player from walking through walls. This issue made the game feel unrealistic and not function as intended. By following tutorials on movement in WEBGL, I was able to implement collision detection using a loop and a detection function to stop the player from passing through walls. I also applied this logic to prevent the player from moving outside the boundaries of the game floor. However, I acknowledge that this system is not 100% accurate, as moving too close to a wall can sometimes allow the player to slightly see through it. This is something I would improve in the future.

In the walls class:

//Setup collision detection code to check if user is near a wall
  detect(px, pz, size) {
    return (
      px + size > this.x - this.w / 2 &&
      px - size < this.x + this.w / 2 &&
      pz + size > this.z - this.d / 2 &&
      pz - size < this.z + this.d / 2
    );
  }

In sketch:

let blocked = false;

  // For loop to check collision with all walls
  for (let wall of walls) {
    if (wall.detect(nextX, nextZ, playerSize)) {
      // check position
      blocked = true; // if blocked by wall, stop movement
      break;
    }
  }

  //Prevent moving outside game floor area

  // Floor is centered
  let half = floorSize / 2;

  if (nextX < -half + playerSize || nextX > half - playerSize) blocked = true; // check left and right movement on floor
  if (nextZ < -half + playerSize || nextZ > half - playerSize) blocked = true; // check forward and backward movement on floor

  // Update position of player if no collision
  if (!blocked) {
    player.x = nextX;
    player.z = nextZ;
  }

The part I am most proud of is placing the walls and objects within the maze. Since the project is built in a 3D environment, I had to continuously move around the space to check the positioning, orientation, and visibility of each element. This included making sure that walls were correctly aligned, objects were facing the right direction, and everything was visible and accessible to the player. After several adjustments and iterations, I became much more confident and was able to place elements exactly where I wanted them.

//Layout of the maze

  //Starting hallway
  walls.push(new Wall(-150, 500, 20, 280, 1000)); // left wall
  walls.push(new Wall(150, 500, 20, 280, 1000)); // right wall

  //Split to right and left
  walls.push(new Wall(-290, -10, 300, 280, 20)); //connected to the left
  walls.push(new Wall(290, -10, 300, 280, 20)); //connected to the right
  walls.push(new Wall(0, -190, 530, 280, 20)); // far end at the front

  //Top walls
  walls.push(new Wall(-450, -90, 20, 280, 180)); // left end
  walls.push(new Wall(450, -90, 20, 280, 180)); // right end

  //First room on the left
  walls.push(new Wall(-525, -380, 150, 280, 20)); // bottom wall
  walls.push(new Wall(-590, -280, 20, 280, 200)); // back wall
  walls.push(new Wall(-525, -170, 150, 280, 20)); // top wall

  //First room on the right
  walls.push(new Wall(525, -170, 150, 280, 20)); // bottom wall
  walls.push(new Wall(590, -280, 20, 280, 200)); // back wall
  walls.push(new Wall(525, -380, 150, 280, 20)); // top wall

  //Inner hallways - connected to the centered wall
  walls.push(new Wall(260, -405, 20, 280, 450)); //right hallway
  walls.push(new Wall(-260, -405, 20, 280, 450)); //left hallway

  //Far right section
  walls.push(new Wall(450, -610, 20, 280, 480)); // extended wall

  //Far left section
  walls.push(new Wall(-590, -550, 20, 280, 590)); // long back wall
  walls.push(new Wall(-525, -600, 150, 280, 20)); // divide that wall

  //Inner center section
  walls.push(new Wall(-90, -500, 20, 280, 350)); //vertical wall towards the right
  walls.push(new Wall(0, -330, 200, 280, 20)); // horizontal wall to divide that wall
  walls.push(new Wall(200, -620, 100, 280, 20)); // separate horizontal wall from the other side

  //Far back wall at the end of the maze
  walls.push(new Wall(70, -850, 1500, 280, 20));

  //Layout of the artworks in the maze

  //Each image placed at a specific (x,z); the last value sets the facing direction of the artwork

  artworks.push(new ArtWork(0, -179, img1, PI)); // center front wall
  artworks.push(new ArtWork(-500, -300, img2, HALF_PI)); // first left room
  artworks.push(new ArtWork(249, -400, img3, HALF_PI)); // inner center, left wall
  artworks.push(new ArtWork(0, -341, img4, PI)); // inner center, horizontal wall
  artworks.push(new ArtWork(-510, -500, img5, -HALF_PI)); // second left room
  artworks.push(new ArtWork(-510, -700, img6, -HALF_PI)); // third left room
  artworks.push(new ArtWork(-155, -220, img7, 0)); // inner center, main wall towards the right
  artworks.push(new ArtWork(500, -280, img8, HALF_PI)); // first right room
}


Another simple but meaningful aspect I am proud of is my design decisions. I chose to use actual artworks from the museum as references, recreate them as visual elements in the game, and combine them with background music related to Sheikh Zayed. I believe these choices added cultural value and depth to the overall experience.

Problems I ran into:

This project was a real challenge, and I had to work through most of the steps independently. As mentioned above, I struggled with player movement, interaction within the 3D space, and the placement of objects throughout the maze, but I was able to resolve these issues through testing and adjustments.

However, there were two features from my initial plan that I decided to leave for future development, as they were slowing down my progress. The main challenge was adding a 2D camera frame that would include the timer and score. I wanted these elements to remain stable and fixed on the player’s screen, but since the project uses WEBGL, it was difficult to achieve. Every attempt resulted in issues such as the player moving through the frame or the timer and score appearing unstable or misplaced. I experimented with different approaches, including using resetMatrix() and attempting to switch between 2D and 3D rendering based on available resources, but I was unable to achieve the desired result within the time I had. Instead, I adapted this limitation into a design decision by encouraging players on the start screen to move quickly in order to achieve a higher score within the 30-second limit, and by displaying the final score at the end of the game.

Areas for improvement:

Overall, I am proud of my project. Working with WEBGL required a significant amount of effort, and I believe I successfully challenged myself and managed to achieve a strong outcome. However, there are several areas I would like to improve in the future, particularly in terms of visual design and graphics. 

One key improvement would be implementing the features I originally planned, such as a camera frame with a visible timer and score during gameplay. In addition, I would like to enhance the environment by adding more detailed and advanced visual elements, such as decorative features or more refined structures, to make the experience feel more realistic and immersive. I would also like to expand the environment beyond the interior of the maze by designing an exterior view of the museum, allowing players to explore both inside and outside spaces. Another potential improvement would be adding more interactive elements, such as objects that trigger sounds or actions when interacted with, to further engage the player.

References:

I began by learning about WEBGL mode in order to create my project in the way I envisioned, using the following resources:

https://p5js.org/reference/p5/WEBGL/ 

https://youtu.be/nqiKWXUX-o8?si=tSUSM77-Ie64E13u 

To meet the assignment requirements, I also used the slides covered in class, particularly those from Week 6, which included useful concepts for the project. I referred back to these materials whenever I had specific questions.

Go into fullscreen:

https://p5js.org/reference/p5/fullscreen/ 

Since I worked in WEBGL mode, I followed tutorials and reference links for different parts of the project:

Setting up the maze, with the floor and walls:
https://youtu.be/6TPVoB4uQCU?si=LFsx9b4IhB0HOYoW 

Allowing the player to move around:

https://youtu.be/BW3D9WwalQE?si=144ULyiZDN_HCjMr

https://p5js.org/reference/p5/camera/

https://p5js.org/reference/p5/sin/

https://p5js.org/reference/p5/cos/

Add images and objects in the maze:

https://youtu.be/O1mYw-3Wl_Q?si=4ENHn11BepkQGdqX

https://p5js.org/reference/p5/texture/

https://p5js.org/reference/p5/rotate/

https://p5js.org/reference/p5/rotateY/

https://p5js.org/reference/p5/translate/

Through these videos, I found a sample 3D game, which I used as a reference for specific aspects such as object placement and movement. I discovered it through the following video:

https://youtu.be/VoFsB0toORk?si=TB3XRVKkrED6CbsE

I found this code:

https://github.com/jrc03c/queasycam/blob/master/examples/MazeRunner/Maze.pde

I used Gemini to generate the visual assets in my project, including the start page and the artworks. For the artworks, I uploaded images of real pieces and asked for regenerated versions in a specific style, aiming for a modern, clean, anime-inspired aesthetic. For the start page, I specified the text, font, colors, and overall layout I wanted.

I also used ChatGPT throughout my development process to assist with challenges I faced when I could not find solutions through other resources, particularly for debugging and problem-solving as the code became more complex. Below are some clear examples of how I used it:

When working on the placement of walls in the maze, I became confused about positive and negative coordinates. I was guided using the following diagram to better understand the WEBGL coordinate system:

              -Z (forward)
                   ↑
                   |
-X (left) ←——— (0,0) ———→ +X (right)
                   |
                   ↓
              +Z (back)

I also wanted to view the maze from above to better understand the layout and avoid duplicating paths. A temporary solution was suggested, which allowed me to toggle between a first-person and top-down view using a key. I implemented this feature during development and removed it afterward. This helped me understand how to switch perspectives in WEBGL using the keyPressed() function. The code used was:

let topView = false;

In keyPressed():
if (key === 'T' || key === 't') {
  topView = !topView;
}

In runGame():
if (topView){
  camera(0, -800, 0, 0, 0, 0, 0, 0, -1);
} else {
  camera(
    player.x,
    -50,
    player.z,
    player.x + sin(angle) * 100,
    -50,
    player.z - cos(angle) * 100,
    0,
    1,
    0
  );
}

Finally, when completing the project, I encountered an issue where pressing the space bar at the end of the game would immediately restart gameplay instead of returning to the start screen. I learned that this was because both game states were using the same key input. I resolved this by restructuring the condition to first check for the key press, and then use if and else if statements to assign different outcomes depending on the current game state.
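
The fix described here, checking the key press first and then branching on the current game state, can be sketched as a small pure function in plain JavaScript. The state names and the function name nextState are illustrative assumptions, not the project’s actual identifiers:

```javascript
// Decide the next game state from a key press and the current state.
// Checking the key first, then branching on state, keeps one key press
// from triggering two transitions at once.
function nextState(key, state) {
  if (key === " ") {
    if (state === "start") return "play";     // space starts the game
    else if (state === "end") return "start"; // space returns to the start screen
  }
  return state; // any other key, or mid-game space, changes nothing
}

console.log(nextState(" ", "start")); // "play"
console.log(nextState(" ", "end"));   // "start"
console.log(nextState(" ", "play")); // "play"
```

Because each key press now passes through this single decision point, the end screen can never skip straight back into gameplay.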

Week 5: Midterm Project Progress

Concept:

For my midterm project, I knew I wanted to create an interactive game, something creative, original, and fun, while also incorporating everything required for this project. I came across multiple ideas but ended up choosing a 3D Maze Capturing game, where players walk around a maze holding a camera and must find objects placed within and capture them within a set duration. Each captured object increases their score, and the earlier they capture it, the higher the points.

While developing and improving my idea, I wanted a specific theme to add a personal touch. I was inspired by an escape room I visited this year at Qasr Al Hosn, which reminded me of a maze, as well as the new Sheikh Zayed Museum, with its many Emirati historical artworks and objects. I decided to go with a Historical Emirati Theme, where players first choose their character, either an Emirati girl or boy, and the maze will feature modern walls and decorations, while the objects the player needs to capture are Emirati cultural items, such as traditional clothing, historical artworks, and objects used in the past.

I used AI to generate sample images to illustrate my idea and show what I want my game to look like, not exactly, but to convey the concept:

Design:

My initial plan is to have an introductory page displaying the game title, Maze of History, along with instructions on how to play. For interactivity, the player will click a start button, then select their preferred character, and the game will begin with a camera lens displayed on the screen, as if held in their hands.

Players navigate the maze using the arrow keys, moving through hallways and spaces. When they spot an object, they center it in the camera lens and press the Enter key to capture it, accompanied by sound effects of the camera. Each captured object increases the player’s score, with higher points awarded for capturing objects earlier. When the 40-second timer ends, the game stops, the score is displayed in the center of the screen, and players can press the Space bar to return to the main menu and start a new session.

I planned out the structure of my code to ensure it would be manageable and easy to work with. My game will use Object-Oriented Programming, with classes for the characters to track their positions and movements, classes for the objects placed around the maze that can be captured by the characters to manage their placement and display captured state, and a class for the maze structure itself, including walls and decorations. I will also include functions for the game to handle the start screen, gameplay interactions such as clicking and key pressing, and the score screen after the game ends.

For the visual design, I would like to create a modern museum aesthetic, with clean white walls and subtle decorations, along with objects such as traditional Emirati clothing, historical artworks, and artifacts placed throughout the maze. From the first-person perspective, the camera lens simulates the player holding a camera, adding immersion to the exploration experience. Sound effects for capturing objects and potentially background music will further enhance interactivity.

Frightening Parts and Plan to Reduce Risks:

In this project, I’m going with a more advanced idea compared to what I have already done, and I have identified some parts that could be challenging to make work. I am concerned about the whole “3D” aspect, since everything I have done so far is 2D, and I’ll need a 3D setting with hands and a camera. To reduce this risk, I have already started going through different tutorials on YouTube by The Coding Train about how to create and work with 3D objects in p5.js, and I have started experimenting by creating some simple shapes.

Another risk I am considering is ensuring that the players can move smoothly within the maze without walking through walls or objects. To reduce this risk, I plan to start by testing a small maze prototype with a few walls and one character, while also watching tutorials before expanding the maze.

Moreover, I am also concerned about implementing the score system, where points increase based on how quickly the player captures objects. Calculating and updating the score dynamically while the timer counts down could be tricky to get right. To reduce this risk, I plan to prototype the timer and score separately first, testing that the points update correctly for each capture before integrating it into the full maze.

Week 5: Reading Response

This week’s reading truly improved and clarified my understanding of computer vision and how interactions between humans and computers actually work. Initially, I knew computer vision differed from human vision, but I did not expect the difference to be that significant, as I assumed it still “sees” scenes in a similar way to us. I realized that computer vision operates in fundamentally different ways. Human vision naturally interprets what is in front of us and allows us to focus on what we want to see almost effortlessly. In contrast, computer vision relies on specific algorithms and processes images as numerical data including pixels, brightness, color values, distance, and other measurable characteristics. Rather than simply recognizing an object, the computer analyzes patterns within many pixels and numbers. While human vision can adapt easily to complex environments, computer vision often struggles unless it is carefully programmed and assigned clear, straightforward tasks. This made me think of everyday examples such as social media face filters and games like Just Dance, where tracking our movements seems simple but actually depends on structured computational processes behind the scenes.

I found the reading especially beneficial as a beginner in interactive media because it clearly explained techniques that help computers “see” or track what we are interested in. For example, the background subtraction technique allows the computer to detect people or objects within a frame by analyzing differences in pixels, color, and brightness, while the frame differencing technique helps detect motion by comparing changes between frames and locations of objects. These techniques, along with many others, show that computer vision depends on controlled conditions and specific goals. While the author explains the technical and creative potential of computer vision clearly, I feel that the perspective leans more toward its artistic possibilities than fully addressing its broader social risks. At the same time, thinking about computer vision’s capacity for tracking and surveillance made me question how far its use should go, or be allowed. Although it can enhance interactive art by increasing engagement and allowing systems to respond to users’ movements in real time, it can also raise privacy concerns and create discomfort, as users may feel constantly watched. Overall, the reading helped me better understand both the technical complexity and the ethical implications of computer vision, and it encouraged me to think more about how I might apply it responsibly in my own interactive media work.
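To make sure I understood frame differencing, I tried sketching its core idea in plain JavaScript, away from any camera code: compare two grayscale frames pixel by pixel and count how many pixels changed by more than a threshold. This is only my own minimal interpretation of the technique from the reading; the frames are plain arrays, and the function name and threshold are arbitrary choices for illustration.

```javascript
// Count pixels whose brightness changed by more than `threshold`
// between the previous frame and the current frame.
// Both frames are flat arrays of grayscale values (0–255).
function motionPixels(prevFrame, currFrame, threshold) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) {
      changed++;
    }
  }
  return changed;
}
```

A high count would suggest motion in the scene, which matches the reading's point that the computer is not "seeing" movement at all, just measuring numerical differences between pixel values.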

Week 4: Arts and Crafts Generative Text

My Concept:
For this week’s assignment, I was interested in creating a generative text for something specific and engaging. I explored different ideas, such as movie recommendations, a recipe cookbook, and motivation cards, but then I realized I wanted to create something playful and useful, something that’s fun for the user and easy to benefit from. That’s when I came up with the idea of an arts and crafts guide, since it’s simple, enjoyable, and something many people could try, and I personally like it as well. I created a note-like arts and crafts guide that gives the user multiple random ideas they could experiment with, whether using paper, paint, or other materials. It has a stable template of familiar instructions but allows the user to explore many ideas by simply clicking, changing the materials, actions, and the craft they will end up with. My vision was a board with a note pad on it, where other notes get stuck on top, so I chose a pastel brown background to resemble a pin board and a yellow note pad for the text. I also made the text color and the pins on the top corners change each time, creating a more realistic effect of a new note being stuck on top of an older one.

Embedded Sketch:

A Code I’m proud of:
A piece of code I’m particularly proud of is using the split function to allow one position of my sentence to generate multiple options. This way, one material could have more than one possible action and craft. I used the | character in the CSV file to separate multiple choices within a single row, and then used it in the sketch when calling the variables into the sentence, allowing the program to select one option for each category.

CSV file:

paper,fold|cut|glue,flower|card|plane
cardboard,fold|cut|glue,box|poster|house
clay,mold|shape|paint,bowl|vase|sculpture
cloth,sew|stitch|cut,shirt|bag|scarf
yarn,knit|weave|stitch,scarf|blanket|bracelet

Sketch file:

// These lines read the material, actions, and objects from the chosen row
  // The actions and objects are split, since each row stores multiple options for them
  let material = singleRow[MATERIAL];
  let actions = split(singleRow[ACTIONS], "|");
  let objects = split(singleRow[OBJECTS], "|");

  // This allows only one action and object to be chosen randomly
  let action = random(actions);
  let object = random(objects);

Another part of the code I’m satisfied with is the fonts. I enjoyed applying what we learned in class to change the font of my text, which made it look much better. I was also able to include more than one font in the same sketch.

// Define font variables to store the different ones used
let myFont;
let myFont2;
let myFont3;

// preload() loads all the font files before the sketch starts, to ensure they appear
function preload() {
  myFont = loadFont("titlefont.ttf");
  myFont2 = loadFont("textfont1.ttf");
  myFont3 = loadFont("textfont2.ttf");
}

// Draw and adjust the title of the note pad
  fill("black");
  textSize(32);
  textFont(myFont);
  text("Arts and Crafts", 6, -50);

  // Draw and adjust the text of instructions
  fill(textColor);
  textSize(16);
  textFont(myFont2);
  textWrap(WORD);
  text(idea, -90, 20, 190);

  // Provide text instructions at the bottom of the frame
  fill("rgb(103,85,79)");
  textFont(myFont3);
  textSize(16);
  text("Click for another idea!", width / 2, height - 20);

A minor piece of code I initially struggled with was the rotation of the note. At first, the note moved too slowly and didn’t look like a realistic new note being placed. I then figured out that I needed to adjust the positioning and randomize the range of radians, which gave the note the movement I wanted.

// Tilt the note pad
  noteAngle = radians(random(-5, 5));

Reflection and ideas for future work or improvements:
Overall, I feel that I was able to apply what we have learned over the past four weeks while also exploring additional aspects to create my project, and I am satisfied with the result. I personally enjoyed designing a set framework that can generate many different outcomes, while also allowing me to add personal touches. I find this concept very fascinating, and I think many different ideas, whether fun or practical, could be implemented in a neater and more accessible way using this approach. However, while I initially found the project interesting, I began to feel that it might be a bit basic as I worked on it. For future work, I feel I could improve the visual quality to make it more appealing to the viewer or user, for example, by adding elements like scissors or paint around the note pad, or including an icon for each craft. I could also make it more interactive, either by allowing the notes to move or by creating a book that actually flips pages, with photos and longer instructions for more advanced crafts, to give a more realistic and engaging experience.

References:
I mainly used the class slides to remind myself of the key concepts needed for creating generative text. In particular, I referred to loading fonts and lists from external files, such as .csv and .ttf, along with the preload() function.

I also used the p5.js website to review and better understand specific functions, including:
https://p5js.org/reference/p5/translate/
https://p5js.org/reference/p5/rotate/
https://p5js.org/reference/p5/split/
https://p5js.org/reference/p5/int/ 

I was introduced to the textWrap() function using:
https://p5js.org/reference/p5/textWrap/ 

I used ChatGPT when I faced a setback with the movement of the note pad. As mentioned, I defined the noteAngle variable and used it in the mouseClicked() function to tilt the note. Initially, the note moved only a very small distance with each click, which made it look static and took away from the effect of new notes being added on top. ChatGPT helped me realize that my mistake was writing noteAngle = radians(-1, 1) with fixed numbers, which made the movement predictable. Instead, I learned to use the random() function to randomly select an angle within the desired range on each click, producing the realistic and dynamic note movement I wanted.

Week 4: Reading Response

This week’s reading made me notice many things around me that I had not thought about before, even though they are right in front of me. When using objects, we usually try to figure out how they work no matter how confusing they are until we reach the desired outcome, but I never really stopped to ask why they were designed that way or to critique them in terms of how well they meet user needs. One thing that drives me crazy is the light control system in hotel rooms, where there is a pad of lightbulb icons but they rarely indicate which lights they turn on or off. What usually happens is that whenever I want to turn off the lights, I have to press random buttons until the ones I want finally switch off, often mixing bedside lights, main lights, entrance lights, and others. This connects to Norman’s ideas because these systems have poor signifiers, since they do not clearly indicate which light each control operates, as well as weak mapping and mismatched mental models, since the arrangement of controls does not reflect the layout of the room. Therefore, this design could be improved by including clearer labels for each icon and arranging each control closer to the light or area it affects.

After reading and reflecting, I can apply Norman’s design principles to interactive media by ensuring that I always include clear signifiers and efficient mapping in my work, whether it is a p5 sketch or a physical device. There should always be signs, icons, or instructions that clearly communicate what the user should do in order to achieve the intended outcome. For instance, in interactive sketches such as games, I would provide clear guidance about where to click or what actions are possible, and in tools or interfaces I would make sure that labels and controls are easy to understand. I would also arrange functions logically so that controls and outcomes match efficiently. Ultimately, I need to ensure that everything in my work clearly communicates what the user can do and what they can expect, allowing them to reach their goals without confusion. The reading helped me understand how many things are designed to look more modern or visually appealing but still fail to meet user expectations that we are not always aware of, which made me realize that future designs should aim to improve functionality while remaining clear and understandable to users.

Week 3: Object-Oriented Programming Constellation

My Concept:
When I first started thinking about this assignment based on what we did in class, I came up with simple ideas such as a balloon pop, but I felt like I wanted something more interesting and interactive. I got inspired when I was sitting outside at night and watching the stars above, which gave me the idea to recreate that image using code and add animation by making the stars move. Throughout the process, I came up with more ideas, such as adding a glow so they appear as “shining stars,” and turning the piece into something interactive by allowing users to connect the stars together. This way, users can create their own drawings by forming constellations and remove the lines using their keyboard. In the end, I created a constellation-inspired artwork of shining stars that float around a midnight-sky background, where the stars connect through the mouse and the connections are fully controlled by the user, allowing them to create different drawings or patterns.
Inspiration:

Embedded Sketch:

A Code I’m proud of:
In this assignment, I explored the code more deeply, and I’m proud of most of my work. However, a particular part I’m especially proud of is the mouse interaction I added. When I first implemented it, I couldn’t make it fully controlled by the user. Instead, when clicking on a star, it would randomly select another star to connect to, and eventually all the lines would connect to the same one. After looking at the references and experimenting with the code, I was able to use the if and else statements, along with the null value to reset the selection, which allowed the user to freely choose which stars to connect.

//Interactions in the Sketch

//Loop to find first star
//When the mouse is pressed on a star
function mousePressed() {
  if (!chosenStar) {
    for (let star of stars) {
      if (dist(mouseX, mouseY, star.x, star.y) < 6) {
        //Select it as the first star and stop loop
        chosenStar = star;
        break;
      }
    }
  }
  //When the first star is already chosen, loop to find the second star
  else {
    //Check that it's a different star
    for (let star of stars) {
      if (dist(mouseX, mouseY, star.x, star.y) < 6 && star !== chosenStar) {
        //Create a line that connects the two stars and store it in the connections array
        connections.push(new Connection(chosenStar, star));
        //Reset so user can choose new stars
        chosenStar = null;
        break;
      }
    }
  }
}

Another part of the code I’m proud of is the temporary line that appears when selecting stars before finalizing the connection. At first, the line simply appeared, but I wanted it to clearly show where the connection was going. After researching and following tutorials, I was able to add this feature using an if statement along with mouseX and mouseY, making the interaction clearer and more natural for the user.

//Allow a temporary line to be created when selecting a star and moving to select the second one, controlled by mouse
  if (chosenStar) {
    stroke("white");
    strokeWeight(0.2);
    line(chosenStar.x, chosenStar.y, mouseX, mouseY);
    noStroke();
  }


Reflection and ideas for future work or improvements:
Overall, I feel satisfied with the outcome of this assignment. I tried new things and experimented much more with effects such as the glow and breathing animations for the stars. I also added more interactivity for the first time and learned many new things throughout the process. I really enjoyed seeing the idea I had in mind actually work the way I wanted it to. However, there is always room for improvement, and I feel like these are just my first steps. For future work, I would love to add more details to the aesthetics of the piece and possibly introduce even more interactivity. Since the theme is stars, I could add elements such as flying rockets, shooting stars, or even a moon that moves or reacts when something specific happens.

References:
I met with a peer tutor, Mariam Alkhoori, who taught me how to create the glowing, floating stars as a starting point for this project.
I used the p5.js reference page (https://p5js.org/reference/) to ensure I was using the functions correctly. Some of the specific references I used include:
https://p5js.org/reference/p5/line/
https://p5js.org/reference/p5/mousePressed/
https://p5js.org/reference/p5/sin/
I used the Happy Coding tutorial on array functions to better understand how to use arrays and remove elements using pop() (https://happycoding.io/tutorials/p5js/array-functions#pop)
I used ChatGPT to help navigate and understand bugs in my code when it did not work. When working with the Star class, I sometimes mixed up how object properties are referenced. I mistakenly used ‘star.x’ inside the class instead of ‘this.x’, which caused errors because ‘star.x’ is not defined inside the class. This caused the sketch to not properly access or draw the stars. By reviewing the code with ChatGPT, I was able to identify how properties inside a class must be accessed, as we practiced in class. I applied the same logic when working with the connection class to ensure consistency across all classes and the main sketch. 

Week 3: Reading Response

Thinking of interactivity in general, I understand it as an object or device that actively responds to the user, allowing engagement, or, as the word itself suggests, interaction. However, reading the first chapter of The Art of Interactive Design made me realize how much deeper this concept can be. Although I sometimes found the author too critical, especially when they seemed to give less importance to actions that are not considered interactive, the examples used in the text to demonstrate what is and is not considered “interactive” expanded my understanding. The idea that interactivity involves listening, thinking, and responding helped me better grasp the characteristics of a strongly interactive system. Such a system should not simply respond, but rather fully process what the user is inputting, without disregarding any part of it, correctly fulfill the demands, and respond in a complete and effective way.

One part of the text that strongly connected to my own work stated, “The designer does a slam-bang job with two of the three steps but blows the third step … But one weak area in an interactive product is like a weak link in a chain. The chain breaks regardless of the strength of the rest of the links.” This made me think of how I could avoid similar weaknesses in my own code. When considering how to enhance the degree of user interaction in my future P5 sketches, I began thinking of ways to ensure that my code can accept a wider range of user input and respond effectively. Some ideas I would like to try include allowing users multiple possible commands rather than limiting their options, as well as incorporating audio interaction, whether responding with sound or to the voices of the users, and using cameras to influence and engage with the sketch. I now understand that a strong interactive system depends on multiple well-developed elements working together, and I hope to be able to create work that reaches that level of interaction.