Week 4 – Reading Reflection

This reading is probably my favorite so far. It prompted me to reflect on the many objects we encounter daily that are not very efficient and whose mechanisms have not changed in years. Reading about mapping lights to their switches reminded me of my own house, where we have a set of switches near the main door.

Even after living in this house for around 10 years, I still flick multiple switches until I find the one I need. The placement of the switches is especially inefficient because some of these switches are for lights outside the house in the front yard, and some are for lights inside the house.

It took me a while to think of something that functions inefficiently beyond what was already mentioned in the reading, because I feel Norman covered the core examples. Then I remembered my electric toothbrush. It is the most basic Oral-B model, and it has only one button (as do most electric toothbrushes). The word “timer” is written on the button, indicating that it is a timer, yet it is also the same button I use to turn the toothbrush on. I was always confused about how to trigger the timer, and I did not have access to the manual to check. It took me around a year to realize that the toothbrush vibrates in a specific pulsing pattern for about two seconds after two minutes of being on. I always wondered why the toothbrush would randomly vibrate mid-brush, and then I connected the dots. In Norman’s terms, this is an issue with the system’s feedback: how was I to know that the timer had started? While I personally think a light or a sound would have been a better indicator, this design is arguably the most accessible one, since deaf and blind users can still feel the toothbrush’s vibrations to know the timer is up. So designs can seem inefficient to some people, when in reality they are made this way to stay functional for everyone, no matter their abilities.

This is a clear example of an item that affords timing your brushing, but whose signifiers for that feature are poorly designed. Norman’s argument that products should be designed with human complexity in mind, without assuming that how to use a feature will be obvious to its users, can be applied to our interactive media projects. Interactive media is all about experience and interactivity. Without clear guidance from the artist on how to interact with their artwork or game, users cannot get the most out of the experience. Imagine how frustrating it would be to play a game with no instructions on its controls or its main goal: the game loses its purpose to entertain, because the user is stuck figuring out how to work it.

Week 4 Assignment

For the concept of my work, I built on what I partially developed in week 3, where I made a generative bubble-text piece in which dragging the mouse made lots of bubbles react. This time, I generated greetings and simple phrases from each of the languages I speak. After watching video tutorial 1.4 on JSON data, I broke the data into three sections: language, punctuation, and decoration. For each greeting, the output randomly picks an entry from each section, combining a language phrase, a punctuation mark, and a decoration. Once again, all of the options are arranged as arrays within square brackets.

The key pieces of code I am proud of are the stamped texts. First, I store each phrase in an array (let text = [...]), and the function renders the main text (line 56) and the emoji (line 61) from that array. Second, I added a length check: once the array holds more than 20 entries, the 21st click after running the output removes the first phrase that appeared on the first click, so the oldest stamp disappears. Lastly, my output originally blinked, so I ran the opacity through a sin(frameCount) function to make it pulse as a smooth wave instead.
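The three techniques above could be sketched like this in plain JavaScript; the `stamps`, `addStamp`, and `waveAlpha` names are hypothetical stand-ins for my sketch’s variables, and `Math.sin` stands in for p5’s `sin()`:

```javascript
// Hypothetical array of stamped phrases, capped at 20 entries.
let stamps = [];

// Add a phrase; past 20 stamps, drop the oldest (index 0),
// so the first click's phrase disappears on the 21st click.
function addStamp(phrase) {
  stamps.push(phrase);
  if (stamps.length > 20) {
    stamps.shift();
  }
}

// Smooth pulsing alpha: map sin(frameCount * speed) from [-1, 1]
// to [0, 255] so the text fades in a wave instead of blinking.
function waveAlpha(frameCount) {
  return ((Math.sin(frameCount * 0.05) + 1) / 2) * 255;
}
```

In the p5 draw loop, `waveAlpha(frameCount)` would feed the alpha channel of `fill()` before drawing each stamp.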

<iframe src="https://editor.p5js.org/po2127/full/s0dQdQGk0"></iframe>

https://editor.p5js.org/po2127/full/s0dQdQGk0


References:


https://p5js.org/reference/p5/loadJSON/

Dan Shiffman, Arrays, YouTube tutorials 7.1–7.5

In terms of what I could improve in the future, I would like to play with functions so that each text “bursts up” every time we click. Overall, I want to explore more variety, different animations, and so on.

Week 4 – Data Visualization

Here is my final sketch:


Concept:

I really liked the map idea that we did in class, so I wanted to do something similar but on a smaller scale. I chose Abu Dhabi and decided that I wanted the map to show something useful, like public resources. I chose 10 police stations and plotted them on a map image I found on Google. The goal was to make the map simple and interactive, so that people could click on the locations and see the name and phone number of each station.

Code Highlight:

  //draw popup if active
  if (popup) {
    drawPopup(); // it calls the function to draw the popup overlay and the text
  }
}

function mousePressed() {
  let clicked = false; // to track if the circle was clicked
  for (let i = 0; i < stations.getRowCount(); i++) {
    let x = stations.getNum(i, "X"); //to get the circle x
    let y = stations.getNum(i, "Y"); //to get the circle y
    let name = stations.getString(i, "Name"); //to get the station names
    let phone = stations.getString(i, "Phone"); //to get the station phone numbers

    //check if mouse is inside the circles
    if (dist(mouseX, mouseY, x, y) < 10) {
      popup = true; //show popup
      popupText =
        name +
        "\nPhone: " +
        phone +
        "\n" +
        "click anywhere on the outside to return to map"; //set popup text, with /n to set up the phone number on a new line underneath it, same with the insturctions to leave the popup
      clicked = true; //mark that a circle was clicked
    }
  }
  // If clicked outside any circle to hide the popup
  if (!clicked) {
    popup = false;
  }
}

//to draw the popup with the info
function drawPopup() {
  // Semi-transparent overlay
  fill(0, 0, 0, 150); //transparent grey overlay for the background
  rect(0, 0, width, height); //to cover the entire canvas

  // White popup box
  fill("white"); //color for box
  rect(100, 150, 300, 150, 10); //rect in center with rounded corners

  // Text inside box
  fill(0); //black text
  textFont(myFont); //outside font
  textSize(18); //size
  textAlign(CENTER, CENTER); //to center the text horizontally and vertically
  text(popupText, width / 2, height / 2); //to display the popup text
}

The part of the code I’m particularly proud of is the interactive circles, where you click on the circle, and a pop-up box with information appears. I created a function called drawPopup() to organize my popup drawing code, where I call it manually when the popup boolean is true.

Reflection/future work:

I started this assignment using the same concept as the map visualization we did in class. In the class example, we used latitude and longitude values, which are good for geographic visualizations. But I switched to manual X/Y coordinates because I am using a custom image map, so I needed precise visual placement instead of mathematical mapping. I manually assigned x and y coordinates for each police station in my Excel sheet so that they would visually match their positions on the map. I also added interaction logic so users can click the circles to see information, which turns the project from a static visualization into something interactive using the mousePressed() and dist() functions. Finally, I added a grey transparent overlay behind the pop-up to make it clearer and easier to read.
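For contrast, the “mathematical mapping” from the class example is just linear interpolation from longitude/latitude into pixel coordinates, which is what p5’s `map()` does under the hood. This is a minimal standalone sketch with hypothetical function names and made-up bounds:

```javascript
// Map a longitude to an x pixel position, given assumed lon bounds
// for the map image (linear interpolation, like p5's map()).
function lonToX(lon, lonMin, lonMax, imgWidth) {
  return ((lon - lonMin) / (lonMax - lonMin)) * imgWidth;
}

// Map a latitude to a y pixel position. Screen y grows downward,
// so latitude is flipped: latMax lands at the top (y = 0).
function latToY(lat, latMin, latMax, imgHeight) {
  return ((latMax - lat) / (latMax - latMin)) * imgHeight;
}
```

With a custom image map, though, hand-placed X/Y values avoid having to calibrate these bounds to the image, which is why I went the manual route.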

I used ChatGPT mainly for debugging, because I would make silly mistakes like writing CVS instead of CSV and had a hard time finding why my sketch wouldn’t load. I also used it when I added the title and instructions text in the middle of the canvas, but every time I interacted with the circles, the text would jump to the left. ChatGPT explained this was because of my textAlign(CENTER, CENTER); and told me to set the alignment only for the popup, resetting it to LEFT, BASELINE before drawing any other text. That helped me better understand how textAlign works as well. I also used a Google font for my text: https://fonts.google.com/selection

I also searched on Google for how to make a pop-up using a boolean, and I took inspiration from this sketch I found:

https://editor.p5js.org/YifanLiu/sketches/HkcS9wXo-

But overall, I mainly used the example we did in class and the lecture slides for reference. I also took some of the concepts from this week’s reading into consideration and provided clear instructions on how to manage the map. For future improvements, I would like to make my animations smoother and maybe add zooming or panning for better navigation on the map, but this assignment definitely helped me understand the different ways we can portray data. 

Reading Reflection – Week 4

One thing that really drives me crazy is when websites log me out suddenly while I’m still working on something. I could be in the middle of typing a really long text or filling out a form that’s like 20 questions, and suddenly the website takes me back to the login page and everything I wrote is gone. It makes me feel so frustrated and annoyed at myself and the website. After reading Norman, however, I realized this is a failure of feedback and signifiers. My mental model assumes that if I’m actively typing, the system will know that I’m there. But clearly it’s running on some invisible timer, and there’s nothing to signal that I’m about to lose everything I worked so hard on. This could be improved so easily by adding a simple message like “You will be logged out in 2 minutes,” or by automatically saving drafts: anything that respects the fact that users are human and not always aware of the system’s hidden rules.
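A minimal sketch of that warning idea, assuming a hypothetical 15-minute session with a 2-minute warning window (a real site would reset `lastActivity` on every keystroke and wire these states to a banner and a logout):

```javascript
// Assumed session length and warning window, in milliseconds.
const SESSION_MS = 15 * 60 * 1000;
const WARNING_MS = 2 * 60 * 1000;

// Given when the user last typed or clicked, decide what the UI
// should do: stay quiet, show a "you will be logged out" banner,
// or actually log out.
function sessionState(lastActivity, now) {
  const elapsed = now - lastActivity;
  if (elapsed >= SESSION_MS) return "logged-out";
  if (elapsed >= SESSION_MS - WARNING_MS) return "warn";
  return "active";
}
```

The point is that the timer stops being invisible: the “warn” state gives the user feedback before anything is lost.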

If I apply Norman’s design principles to interactive media, I would focus on making those hidden rules visible. I would make sure there are clear signifiers showing what’s happening and clear feedback, so users don’t feel confused or surprised by sudden system outputs. I would also design while keeping in mind that people make mistakes and are prone to getting distracted, because that’s just human nature. The reading honestly made me rethink how often we blame ourselves for doing things wrong when really the design just didn’t communicate its rules clearly. Overall, a really eye-opening read; it made me realize that good systems should make the user feel capable rather than lost and confused.

Reading Reflection-Week#4

I&M Reading
Reading this chapter made me really think about how I interact with everyday objects in a very personal way. I used to assume that if I struggled with something simple, such as a confusing door, a complicated appliance, or a strange interface, it was my own fault. The reading argues that these frustrations usually come from poor design, not user incompetence, which felt astonishing and eye-opening. The examples about doors and household devices made the ideas feel very realistic, because they reflect small annoyances we experience almost daily without questioning them. It made me realize how much of my behavior is led by intuitive signals from objects, and how frustrating it becomes when those signals are unclear or misleading. What drives me crazy is when everyday interfaces have too many unclear buttons and hidden functions; this could be improved with clearer signifiers, simpler layouts, and immediate feedback so users do not have to guess what to do.

What I found especially meaningful is the emphasis on human-centered design and the idea that designers should adapt to human psychology, not expect humans to adapt to machines. This perspective feels very relevant in today’s technology-heavy world, where many products are powerful but not intuitive. The discussion about discoverability, signifiers, and feedback made me more aware that good design should communicate naturally without needing long instructions. Overall, I think the reading is insightful because it shifts the way we see mistakes as instead of blaming users for the confusion, it encourages us to critically evaluate the design itself, which feels like a more human and realistic approach to technology and everyday life.

Week 4 Reading Psychopathology of Everyday Things

One thing that drives me crazy that was not mentioned in the text is when I go to a bathroom at a restaurant or any other place and try to use the sink, but the sink does not turn on, or I cannot tell how it turns on. One way this could be fixed is by having clear symbols on the handles, or visible handles that indicate where and what motion to make, or even a poster on the wall showing how to turn on the sink.

One way I would incorporate some of Norman’s principles of design into my work is by having more indicators of what the user is supposed to do to activate my sketch’s interactive element. For instance, in my bubble-pop design, it is unclear for my users what to click or what to try to do as there are no indicators that the bubbles are meant to pop. I would therefore include more explicit instructions.

Assignment 3: Dancing hearts

For this assignment, I decided to continue developing the heart theme from my previous sketch. In Assignment 2, I created a structured heart wallpaper where all the hearts moved together when pressed. This time, I wanted to test my abilities more and make the interaction feel more refined and independent rather than controlling everything at once.

Instead of using one shared movement variable, I used arrays and object-oriented programming to create a grid of Heart objects. Each heart now has its own properties, such as position, size, and animation phase. I used nested loops again to build the grid, but instead of shifting the entire grid together, each heart updates individually using its own update() function.

To create smoother movement, I used sin() and cos() to make the hearts pulse and slightly wiggle when the mouse hovers over them. The hearts remain structured in a grid, but when the mouse gets close, they move in a controlled sinusoidal motion. I also created the heart shape using beginShape() and bezierVertex() instead of ellipses and a triangle, which made the hearts look more developed and smooth.

The part of my code that I am most proud of is the section where the hearts move when hovered over:

if (this.isMouseNear()) {
  this.x = this.baseX + sin(frameCount * 0.18 + this.phase) * this.wiggleAmount;
  this.y = this.baseY + cos(frameCount * 0.18 + this.phase) * this.wiggleAmount;
}

This part controls the interactive movement. I learned that frameCount acts like time, and using sin() and cos() creates smooth back-and-forth motion instead of random movement. Adding phase also prevents all the hearts from moving in sync, which makes the animation feel more natural.
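A minimal, hedged version of the Heart class that snippet comes from; only `baseX`, `baseY`, `phase`, and `wiggleAmount` appear in my original code, so the rest of the structure is an assumption (and `Math.sin`/`Math.cos` stand in for p5’s `sin()`/`cos()`):

```javascript
// One heart in the grid: it remembers its grid slot (baseX/baseY)
// and wiggles around it when the mouse is near.
class Heart {
  constructor(baseX, baseY, phase, wiggleAmount) {
    this.baseX = baseX; // grid position, never changes
    this.baseY = baseY;
    this.x = baseX;     // current drawn position
    this.y = baseY;
    this.phase = phase; // offset so hearts don't move in sync
    this.wiggleAmount = wiggleAmount;
  }

  // update() only changes state; display() (not shown) only draws.
  update(frameCount, mouseNear) {
    if (mouseNear) {
      this.x = this.baseX + Math.sin(frameCount * 0.18 + this.phase) * this.wiggleAmount;
      this.y = this.baseY + Math.cos(frameCount * 0.18 + this.phase) * this.wiggleAmount;
    } else {
      // snap back to the grid slot when the mouse moves away
      this.x = this.baseX;
      this.y = this.baseY;
    }
  }
}
```

Keeping `baseX`/`baseY` separate from `x`/`y` is what lets each heart wiggle yet always know where its grid position is.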


Overall, I’m happy with how this sketch developed from the previous one. It feels more structured and intentional, especially since each heart now behaves independently. The most challenging part was understanding how to separate update and display logic, and how to keep the original grid position while still allowing movement. For future improvements, I would like to experiment with how nearby hearts respond to interaction, and create a ripple effect instead of only affecting one/two hearts at a time.

Reading Reflection

In the reading, Crawford does not directly state that interactivity must include technology, but he clearly frames it within digital systems and computational design. Most of his examples revolve around computers, simulations, and structured input-response systems. While I understand his definition of interactivity as a “cyclic process of listening, thinking, and speaking,” I think his focus on digital media slightly narrows how we understand the concept.

Personally, I don’t think interactivity necessarily has to involve technology. I think something can be interactive simply by changing how we perceive or respond to it. For example, a physical artwork that shifts depending on where you stand, or a space that makes you question your perspective, still creates interaction, even without code. This reading made me question whether interactivity is really about technology, or if it is more about creating a meaningful exchange between a system and a participant, regardless of medium.

Reading Reflection – Week 4

One thing that drives me crazy is the car touch screens and buttons that are used to control things like the air conditioning and volume. I have to literally look away from the road just to figure out which button to press. I get so frustrated and end up pressing everything just to see what would change. From what I understood in Norman’s reading, it is the failure of signifiers and mapping. I used to think they were the same thing, but with all of his examples, he made it clear that affordances are what actions are possible, while signifiers are what show those actions to the user. The car buttons do not communicate their function, and their icons are quite confusing. I think this could be improved by adding clear labels, rearranging icons, or even having words instead of icons, so I could predict what will happen before I press it. 

I can apply Norman’s design principles by making sure the buttons actually look clickable and that the interactive elements and directions stand out visually so users do not have to guess. I will incorporate mapping, where the controls match their results spatially and conceptually, so that people can predict what will happen. I noticed that when mapping is unclear, I hesitate before acting because I do not trust what will happen. I would also probably prioritize feedback or make sure to show a loading animation or confirmation message so people know their action worked. I think I would also design my sketches with the assumption that users will make mistakes, by including undo options or very clear instructions. The reading made me realize that good design isn’t about the visual aesthetics of something, but about making sure people understand how to use it; otherwise, it would be really annoying to use.

Week 3 — Reading Response

Reading the first chapter of Chris Crawford’s The Art of Interactive Design really made me stop and rethink how I’ve been using the word “interactive.” Honestly, I realized I’ve been throwing that term around for years to describe anything with a screen and a few buttons. Crawford’s metaphor of a conversation — a tight, messy loop of listening, thinking, and speaking—sets a much higher bar than I ever considered. It hit me that most of my “interactions” with technology are actually just reactions. Like his example of dodging a falling branch, I’m often just responding to a trigger rather than engaging in a real exchange. This definitely challenged my assumption that user agency automatically equals interactivity.

However, as much as I appreciate the clarity of his model, I couldn’t help but feel that Crawford is being incredibly restrictive — almost to the point of being a gatekeeper. By insisting that a system must “think” (process data in a complex way) to be interactive, he’s basically dismissive of a huge chunk of digital art and media that I love. I thought about atmospheric games or simple creative tools where the “thinking” is minimal, but the emotional impact is huge. Does a digital instrument not count as interactive just because it doesn’t deliberate before making a sound? This rigid, almost elitist definition feels like it prioritizes computer science logic over the actual human experience of a medium. It makes me wonder if he’s so focused on the “cycle” that he misses the beauty of simple, high-quality reaction.

One passage that really stuck with me was his critique of the “gauchely techie” replacement of human terms with “input, process, and output.” It’s a bit ironic because, while he wants to keep the human element (the “conversation”), his requirements for what qualifies as “thinking” feel very mechanical. This leaves me with a lot of questions: Has my own definition of interactivity been too lazy, or is Crawford’s just too narrow for the modern world? This reading didn’t just give me a new framework; it actually made me more defensive of the “simpler” digital experiences that he seems to look down on, while also making me want to strive for more depth in the things I build myself.

Week 3: OOP and Array

CONCEPT

After looking at examples from previous students and the sample car sketch in the syllabus, I was reminded of the Filipino children’s fable Ang Kuneho at ang Pagong, or in English, The Rabbit and the Turtle. The story is about a fast but overconfident rabbit who challenges a slow turtle to a race, only to lose after falling asleep halfway through. Inspired by this, I created a race animation where the turtles move forward at their own slow and steady pace, while the rabbit stays asleep unless the user wakes it up by clicking the mouse.

I used object-oriented programming to better organize my code. Each turtle is treated as its own object, with individual position and speed values, allowing them to move smoothly and independently across the screen. I also used arrays to control the turtles’ speeds and the rabbit’s hopping positions. Every time the mouse is pressed, the rabbit moves to the next position in the array, reinforcing the idea that the user’s input directly affects what happens in the sketch.

HIGHLIGHT

In creating my code, I mainly referenced the class slide decks and my previous p5.js sketches from earlier exercises. These helped guide both the structure and logic of my work. I also used ChatGPT as a support tool, especially when I struggled with connecting another JavaScript file to my sketch.js. Because of that issue, I decided to keep everything in a single file, which made the project easier to manage. Whenever my code did not work as expected, I also used ChatGPT to help me identify where things were going wrong and how to fix them, which helped me better understand my mistakes instead of just copying solutions.

One part of the code that I am particularly proud of is how I handled the movement of the rabbit and the turtles. For the rabbit, I created a function that stores possible hop positions in an array, allowing the rabbit to move step by step each time the user interacts. For the turtles, I used a loop to update each turtle’s position based on its own speed, making them move independently across the screen and loop back once they reach the end.

//---------------- Moving the Rabbit ----------------//

function hop() {
  rabbitPositions = [];
  // n is the hop spacing set elsewhere in the sketch
  for (let x = 33; x < width; x += n + 5) {
    rabbitPositions.push(x);
  }
}


//---------------- Moving the Turtle ----------------//

function moveTurtles() {
  for (let t of turtles) {
    t.x += t.speed;           // each turtle advances at its own speed
    turtle(t.x, t.y, 50, 35); // redraw the turtle at its new position

    if (t.x > width + 40) t.x = -40; // loop back once off the right edge
  }
}
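The click interaction described above (each press moves the rabbit to the next slot in the array) could be sketched like this; `rabbitIndex` and `advanceRabbit` are hypothetical helper names, and the positions shown are placeholders for what hop() actually generates:

```javascript
// Placeholder positions; in the sketch hop() fills this array.
let rabbitPositions = [33, 88, 143, 198];
let rabbitIndex = 0;

// Called from p5's mousePressed(): advance one slot per click,
// stopping at the finish line (the last position).
function advanceRabbit() {
  if (rabbitIndex < rabbitPositions.length - 1) {
    rabbitIndex++;
  }
  return rabbitPositions[rabbitIndex];
}
```

Tying the rabbit’s progress to clicks is what makes the user literally “wake up” the rabbit, echoing the fable.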
REFLECTION

Overall, I really enjoyed making this mini game and seeing how it developed through the process. One thing I would like to explore further is adding more movement and animation to both the turtle and the rabbit as they move along the track. Once I feel more confident with the code, I want to push the sketch further and make the characters more expressive.

I am also interested in expanding the game logic, such as clearly showing who is winning by tracking how many laps each character has completed and adding a visual response when a winner is determined. Adding small details like the rabbit blinking, hopping in a more natural way, or the turtles moving their legs as they walk would make the sketch feel more alive and engaging for the user.

Sources:

https://p5js.org/reference/ and Class Slide decks