Reading Reflection_Week4

The author states that a product must offer discoverability and understanding to be usable; without these it is not human-centered and causes frustration. Beyond the examples mentioned, like doors and remote controls, one that often bothers me is the shower knob in hotels. We usually assume that twisting a knob to the left gives hot water and to the right cold water. I have encountered ones that are the opposite, ones that control temperature and water flow at the same time (it definitely isn't a water pressure problem, it is a true design flaw), ones that need a button to be pressed before they can be turned, and other confusing designs. This leads to me standing in front of the shower head for at least half a minute every time, trying to figure out how things work, and often getting blasted by cold water accidentally.

The same applies to a lot of electric cars; at least my experience back in China was sometimes frustrating. Electric cars almost all aim for a futuristic design, many following Tesla in using flat door handles. Some need to be pressed to pop out, some require the driver to release them, some are automatic, and some need to be pressed on one side, grabbed, and then pulled to open the door. The open-door buttons inside the doors are just as annoying, as they come in knobs, buttons, and handles.

The author's idea that design should be human-centered can serve as a central rule for interactive design. The experience of the participant is of the utmost importance. For example, signifiers should be present almost everywhere. The interaction is most likely strange to the user, so they must be shown what to do. Clear instructions make an experience far better than a confusing one.

Feedback is also a key component. Feedback should be fast and accurate to the action so the user truly feels they are engaging in an interactive activity. It needs to be clear about what it is responding to, creating logical connections between the user's actions and the design's response and keeping the interaction smooth.

Assignment_Week4

Concept

The inspiration for the homework came easily, as the Spring Festival is around the corner. In China everyone heads back home before the Spring Festival and the whole country is on the move; we call this Chunyun (literally "Spring travel"). Also inspired by the world population map that we made in class, I wanted to make a map that shows the great migration of people that happens every New Year.

The most difficult part was actually looking for the data. Because this is a very Chinese phenomenon and is not really recorded in databases in other countries, I first tried a government site in China, but it required filling out a lot of forms and emails and waiting a few working days for a reply, so I gave up. Eventually, I asked Gemini to dig through the internet to see if there was any Chunyun data, and it turned up some datasets and crawling code on GitHub. I decided to use the data Gemini found, but it was recorded in a way that completely didn't suit my needs, and I was unable to transform it into an easily understandable format. So I gave up on that and picked the top 50 most-traveled intercity routes and put them on the map.

Code that I am proud of

//define the real bounds of the Chinese map so the cities are mapped to the shape of the country
  //West
  let mapLeft = 71.0;
  //East
  let mapRight = 136.0;
  //North
  let mapTop = 57.0;
  //South
  let mapBottom = 18.0;
  //for loop to draw all the dots
  for (let i = 0; i < numRows; i++) {
    //map the coordinates according to the outer rims of the country defined earlier
    let x1 = map(float(originLon[i]), mapLeft, mapRight, 0, width);
    //y coordinates are upside down because the origin is in the top left, so swap height and 0 to flip the map
    let y1 = map(float(originLat[i]), mapBottom, mapTop, height, 0);
    let x2 = map(float(destLon[i]), mapLeft, mapRight, 0, width);
    let y2 = map(float(destLat[i]), mapBottom, mapTop, height, 0);
    //draw the route between the origin and destination cities
    line(x1, y1, x2, y2);
  }

This is how I mapped the locations of the cities to the actual shape of China instead of just spreading them across the canvas. I identified the four outer rims of China and kept the city dots within those limits so they would be proportional to their real locations.
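Since the mapping is just linear interpolation, the same idea can be checked outside p5. This is a minimal sketch, assuming an 800x600 canvas and Beijing at roughly 116.4 E, 39.9 N; `mapRange` is a stand-in for p5's `map()`:

```javascript
// p5's map() is plain linear interpolation; mapRange mirrors it so the
// technique can be shown outside p5. Canvas size and Beijing's coordinates
// are illustrative assumptions, not values from the sketch.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

const width = 800, height = 600;
const mapLeft = 71.0, mapRight = 136.0;  // West / East bounds of China
const mapBottom = 18.0, mapTop = 57.0;   // South / North bounds of China

const x = mapRange(116.4, mapLeft, mapRight, 0, width);
const y = mapRange(39.9, mapBottom, mapTop, height, 0); // flipped so north is up

console.log(Math.round(x), Math.round(y)); // roughly (559, 263) on an 800x600 canvas
```

Swapping `height` and `0` in the output range is what flips the y-axis, since latitude grows northward but pixel coordinates grow downward.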

Sketch

The cities are either at their actual locations or close to them. I was unable to be very exact because the margins and stretch of the background map were adjusted manually.

Reflection and improvements

The result of the map is not very good, because only 50 routes were chosen, and because of how China is populated they are mostly on the east side of the country. This makes the map look unbalanced. Secondly, 50 routes are far from enough to create something like a flight map that outlines the country with arcs. This was difficult because I had to process the data manually, and even 50 took a huge amount of time. I will see if there are further ways to use Excel functions or formulas to easily transform the data into something that can be processed automatically. That would also deal with the problem of dots not fitting the map properly, because the dots and lines would form the map itself.

Reading week 4

While reading, I kept thinking that the author was being too sensitive about the intricacies: "If I were in his place I would have ignored it and moved on." But after giving it some thought later and looking at the things around me, I kind of see the point. I would obviously appreciate it if the buttons on my headphones were easily navigable. I understand that there should be ease alongside utility, but that is kind of contrary to modern development. Technology wants to pack everything into one box and hand it to humanity, like a phone. Will it be easy to use? Probably not, because of its range of functionalities. Let's take the example of the washing machine with all those buttons. Now what would be human-centered design: all functions in one button, or one button for each function? It clicked for me when the author said "accumulation of poorly designed interactions." This is indeed evident in the realm of computer science. For example, I open ChatGPT and it gives me a list of models to choose from, marketed as "choose the one based on your work." I believe this to be an inconvenience, as the engineers assume that I have memorized up-to-date knowledge of Large Language Models.

This makes me question whether human-centered design is always about reducing complexity, or whether it is more about organizing complexity in a way that makes sense. In interactive media especially, I think designers should focus less on adding features and more on clear mapping and feedback, so users understand what their actions are doing. The reading changed my perspective in that I have started noticing how often modern technology expects me to adapt to machines instead of machines adapting to me. At the same time, as an engineer I wonder: is bad design inevitable, or is it just a result of the race era we are in, where we focus on developing? When we get a chance to pause, we will start working on the design.

Week 4: Reading Response

This week’s reading made me notice many things around me that I had not thought about before, even though they are right in front of me. When using objects, we usually try to figure out how they work no matter how confusing they are until we reach the desired outcome, but I never really stopped to ask why they were designed that way or to critique them in terms of how well they meet user needs. One thing that drives me crazy is the light control system in hotel rooms, where there is a pad of lightbulb icons but they rarely indicate which lights they turn on or off. What usually happens is that whenever I want to turn off the lights, I have to press random buttons until the ones I want finally switch off, often mixing bedside lights, main lights, entrance lights, and others. This connects to Norman’s ideas because these systems have poor signifiers, since they do not clearly indicate which light each control operates, as well as weak mapping and mismatched mental models, since the arrangement of controls does not reflect the layout of the room. Therefore, this design could be improved by including clearer labels for each icon and arranging each control closer to the light or area it affects.

After reading and reflecting, I can apply Norman’s design principles to interactive media by ensuring that I always include clear signifiers and efficient mapping in my work, whether it is a p5 sketch or a physical device. There should always be signs, icons, or instructions that clearly communicate what the user should do in order to achieve the intended outcome. For instance, in interactive sketches such as games, I would provide clear guidance about where to click or what actions are possible, and in tools or interfaces I would make sure that labels and controls are easy to understand. I would also arrange functions logically so that controls and outcomes match efficiently. Ultimately, I need to ensure that everything in my work clearly communicates what the user can do and what they can expect, allowing them to reach their goals without confusion. The reading helped me understand how many things are designed to look more modern or visually appealing but still fail to meet user expectations that we are not always aware of, which made me realize that future designs should aim to improve functionality while remaining clear and understandable to users.

Week 4 Assignment- Generative Text

Concept:

At the beginning of this project, I was deciding whether I should create a truth or dare style generative text. After researching different types of questions, I came across “Who’s Most Likely To,” which has a similar concept but felt more suitable for the project. Because the project focuses on generative text, I chose to create an interactive “Who’s Most Likely To” generator. Each click produces a new randomly generated question using words taken from a CSV file.

I wanted the interaction to feel playful and social, similar to a board game night activity. To support this, the background color changes every time the user clicks, so the experience feels dynamic rather than static. I also designed a bold border, a clear title, and a short prompt explaining how to interact with the page so the user immediately understands what to do. I used a playful font to match the tone of the game and make the interface feel light and inviting. The goal of the project is to create a simple interactive experience where the user repeatedly clicks and receives a new question each time.

Embedded code:

Highlight of the code:


push(); //saves the current drawing settings
stroke(3);
strokeWeight(20);
noFill();
rect(15, 15, width - 29, height - 29); //draws the rect framing the canvas
noStroke();
pop(); //restores the settings; what is between push and pop won't affect the other lines of code

push(); //saves the current drawing settings
textSize(30);
text("Whos Likely Game", width / 2, height / 4);
pop(); //restores the settings

push(); //saves the current drawing settings
textSize(17);
// textAlign(CENTER, CENTER);
text("Click the screen to generate a question!", width / 2, height / 3.5);
pop(); //restores the settings

translate(width / 2, height / 2); //move the origin to the center so the question can be drawn at (0, 0)

function randomColor() {
  bgColor = color(random(255), random(255), random(255)); //every time the mouse is clicked a new color is given to the background
}

I used push() and pop() to isolate the styling of the title and prompt. This prevents their font size and styling from affecting the generated question. It helped me understand that p5.js keeps drawing settings active unless they are reset, so push and pop act like saving and restoring the canvas settings. Instead of calculating the exact position of the generated text, I used translate() to move the coordinate system to the center of the canvas. This allowed me to draw the question at (0,0) and keep it centered regardless of canvas size. I implemented randomness visually by generating a new RGB color on every click. The background change works as feedback, letting the user know their interaction triggered the system while also making every interaction visually unique.

How it was made:

For the process, I first began by implementing the CSV file. I created a CSV containing the word groups I was going to use, specifically the determiners and the verbs. I then loaded the CSV file using loadStrings() and tested that the data was being imported correctly. I mostly used the same structure as the code we used in class, but modified it so it would fit my sentence structure and generate a "Who's Most Likely To" question.
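As a p5-free sketch of that sentence assembly (the column layout and example rows here are assumptions for illustration, not the actual CSV contents):

```javascript
// Hypothetical stand-in for the rows loadStrings() would return:
// each line holds a determiner and a verb phrase, comma-separated.
const rows = [
  "secretly,adopt ten cats",
  "accidentally,text the wrong person",
  "always,show up late to brunch"
];

// Pick a random row and assemble a "Who's Most Likely To" question.
function generateQuestion(list) {
  const line = list[Math.floor(Math.random() * list.length)];
  const [determiner, verb] = line.split(",");
  return `Who's most likely to ${determiner} ${verb}?`;
}

console.log(generateQuestion(rows));
```

In the actual sketch the same assembly would run inside mousePressed(), with the rows coming from the loaded CSV instead of a hard-coded array.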

After that, I added a custom font to improve the visual style. I placed the font file as a .ttf file and loaded it inside the preload() function so it would be available before the sketch starts. Then I applied the font to the text elements.

Next, I created the stationary text elements, including the title "Who's Most Likely Game" and the prompt "Click the screen to generate a question." I positioned them using width/2 and height/4, and a similar proportional placement for the prompt. This way, if the canvas size changes, the text stays centered and aligned above the generated question. I then added a border using rect() and adjusted the dimensions so it scales with the canvas size. For the stationary text, I wrapped the styling inside push() and pop() so the text settings would not affect the rest of the sketch.

I also implemented interaction using mousePressed(). Each click refreshes the question and changes the background color to make the experience more playful. I referenced the p5.js example (link in references) for this part and adapted it to my project. To properly position the generated question, I used translate() to center the text on the canvas. Throughout the process, I relied on the examples from class to build the basic structure first, and then gradually added variations and visual improvements.

Reflection and Future ideas:

For reflection and future ideas, I am proud that I was able to understand how to implement important p5.js features such as push() and pop(), the use of translate(), and how to properly structure the draw() function. I also learned how to work with external files by importing a custom font and a CSV dataset into p5.js. Through this project I improved my understanding of transformations and how randomness can be used to generate a dynamic canvas. The background color changes helped me better understand how interaction and feedback can make a simple system feel more engaging. I also became more aware of layout and visual clarity, and how small design choices affect how easily the user understands the interface.

In the future, I would like to expand the project by animating the generated text so it fades in or slightly bounces when a new question appears. I would also like to allow users to input their own names so the game feels more personalized, and possibly add a point system to make it feel closer to an actual game. Additionally, adding small icons or visual elements could make the interface more visually appealing and reinforce the playful board-game atmosphere.

References:

For the background color changing whenever the mouse is pressed:

https://p5js.org/reference/p5.Element/mousePressed/

For text randomizer :

https://editor.p5js.org/maa9946/sketches/-4EPAewPV

For refreshing my understanding of transformations:

https://p5js.org/tutorials/coordinates-and-transformations/

Background color labeling and randomizer:

https://editor.p5js.org/aferriss/sketches/B1oYHcN9W

For text font:

https://editor.p5js.org/maa9946/sketches/FUnL3noV1

https://fonts.google.com/specimen/Barriecito

For who’s most likely to questions:

https://www.teenvogue.com/story/most-likely-to-questions#how-to-play

Week 4- Reading Response

The first example that came to mind that drives me insane is the Apple Magic Mouse. Although it isn't mentioned in the reading, it is an outside example that illustrates several of Norman's design principles in action, and the consequences of failing to achieve them. The mouse cannot be used while charging because the charging port is placed on the bottom, a placement that received a lot of backlash for its inconvenience. When the battery dies, you are forced to stop using it entirely until it charges, which makes the interaction extremely inconvenient. This design breaks several of Don Norman's design principles. First, it demonstrates poor mapping. The location of the charging port forces the mouse to be flipped upside down, which physically prevents interaction: the physical layout maps directly onto the user's inability to use the product. A mouse is meant to support continuous use, but the placement of the port makes its core function temporarily impossible.

Second, it breaks the conceptual model. A user expects a rechargeable device to continue working while plugged in, similar to a phone or laptop. When the mouse suddenly becomes unusable during charging, the user realizes their mental model was wrong. This mismatch between expectation and reality causes frustration. It also violates discoverability. The user does not learn this limitation during purchase or setup; they discover it only at the worst possible moment: when the battery dies. At that point, the design interrupts workflow and negatively impacts the entire user experience. The issue appears to come from prioritizing aesthetics over usability. The bottom placement hides the charging port and keeps the mouse visually minimal, but it sacrifices function. This overall poor design is a result of the designer prioritizing appearance over functionality. It is similar to Norman's friend's case, where he got stuck between a set of poorly designed swinging doors and Norman stated that the "designer aimed for beauty, not utility," which is essentially what happened with the Apple Magic Mouse.

A better design solution would be to move the charging port to the front or side of the mouse. The cable could lay flat while the mouse remains usable, improving mapping and maintaining a clean appearance. Another improvement would be a swappable battery, similar to camera batteries, allowing users to replace a depleted battery instantly instead of waiting. By relocating the charging port and optionally adding a replaceable battery, the mouse would better support continuous interaction, align with the user’s conceptual model, and improve the overall user experience.

Some of the author’s principles that caught my attention, and that I think strongly apply to interactive media, are discoverability, understanding, signifiers, and conceptual models. I feel these are the minimum standards that any interactive project should meet, especially when creating an interactive website.

First, discoverability and understanding are essential. When a user opens a website, they should immediately understand what the website is and how they are supposed to interact with it. The user should not have to guess where to click or what is interactive. For example, on the NYU Brightspace website there are clear icons for settings, notifications, and the user profile. These icons communicate visual cues, and because the symbols are simple and familiar, the user can quickly understand their function. The elements are clearly mapped, easy to find, and obviously clickable, which allows the user to comfortably navigate the interface.

Another important principle is signifiers, which Norman describes as indicators that communicate where the action should take place. In interactive media, signifiers guide the user through the experience. For example, on a game website there might be a “Start Here” button that glows while the background is dimmed. This immediately captures attention and shows the user where to begin. Similarly, a button might be a different color, such as red, or use motion like a pulsing animation. In many games, especially during tutorials, an arrow icon hovers over an object to show the player what to tap and how to interact. These visual cues reduce confusion and help the user learn the system naturally.

Another principle is the conceptual model, which explains how something works. This is especially important in interactive media because users need to form a mental understanding of the system before they can fully engage with it. Tutorials are a good example: they teach the user the rules of interaction before the experience begins, and once users understand the logic of the system, they feel more confident navigating it.

Overall, these principles help make an interactive project intuitive rather than frustrating. When discoverability, clear signifiers, and a strong conceptual model are present, the user does not have to think about how to use the interface and can instead focus on the experience itself. This is ultimately the goal of good interaction design.

Data in Graph

This weekend I was working on a replication study where I had to deal with data from multiple repos. Somewhere in the process I encountered a CSV which had two columns: release version and the last pull request associated with it. I needed to figure out, on average, how many pull requests the repo used per release. When I sat down for the IM assignment I thought, why not display that CSV on a graph using p5? This was the motivation for my work.

Initially I wanted to make a tree in which the branches represent the version numbers and the leaves represent the pull requests. But after some hassle I figured out that the data I have is not hierarchical or evenly distributed enough to create a presentable tree. After that I settled on a graph representation of the data.

The first and hardest part was figuring out how to represent it, because there were a lot of values. I decided to group the major and minor versions together and make bins of the numbers of pull requests. Even though it gave less information, the output was at least readable and understandable. In coding, the grouping was the hardest part, because I had to read the array, keep track of what I had already read, and match it with the next incoming value. That is also the part I am most proud of figuring out. The AI assistance for that is mentioned at the end.

// Loop through data row by row
  let lastGroup = "";   // keep track of the last major.minor group
  let y = paddingY;     // starting Y position
  let groupCount = 0;   // how many PRs in the current group
  let maxBins = 0;      // track maximum number of bins for the X-axis

  for (let r = 0; r < data.getRowCount(); r++) {
    let pr = int(data.getString(r, 0));
    let version = data.getString(r, 1);

    // Major.Minor for grouping
    let parts = version.replace("v", "").split(".");
    let group = `v${parts[0]}.${parts[1]}`;

    // New group discovered: start a fresh row on the graph
    if (group !== lastGroup) {
      y += yStep;
      groupCount = 0;
      lastGroup = group;
    }
    groupCount++; // count this PR toward the current group
    // ... (the rest of the loop draws the bins for the group)
  }

Sketch

AI Assistance

I asked for functions like replace() and split() to manipulate the data values. I also asked AI to help me tidy up the code in terms of formatting and variable names. The most major contribution that is not mine is the following lines of code:

// Update maxBins if needed
let bins = groupCount / binSize;
if (bins > maxBins) maxBins = bins;

This didn't change anything in the output, but it is useful if the file size increases significantly. I let it stay, as I deem it "nice to have" or "good coding practice."

Week 4 — Reading Reflection

In reading “The Psychopathology of Everyday Things,” I was struck by how often I have blamed myself for “operator error” when, in reality, I was just a victim of bad design. Don Norman’s breakdown of the disconnect between engineers — who often design with a logic that doesn’t match human behavior — and actual users was eye-opening. One thing that drives me crazy, which Norman doesn’t specifically detail, is the modern design of “smart” appliances that replace physical knobs with touchscreens.

Trying to set a timer on a touch-based stove while my hands are wet or covered in flour is an exercise in pure frustration. The lack of a physical affordance (something to grab and turn) and the absence of immediate haptic feedback make the interface feel fragile and unresponsive. This could be vastly improved by following Norman’s principle of discoverability –incorporating tactile controls like physical dials that naturally signal how they should be operated without me having to look away from my cooking to navigate a sub-menu.

When applying Norman’s principles to interactive media, I see a clear path forward through the use of signifiers and natural mapping. In a digital environment, we don’t always have physical objects, so signifiers become our primary tool for communication. For example, in a complex mobile app, instead of hiding features behind ambiguous icons, designers should use signifiers that clearly communicate “where the action should take place”. Furthermore, applying natural mapping—like having a volume slider that moves vertically to represent “more” or “less” — takes advantage of biological standards that lead to immediate, intuitive understanding. By focusing on human-centered design and assuming that users will make mistakes, we can build interactive media that feels like a helpful collaboration rather than a “fight against confusion”.

Week 4 — Generator

1. Sketch and Code


Code

2. Overview

For this assignment, I moved away from abstract shapes toward a conceptual exploration. The project uses Object-Oriented Programming to manage a dense grid of hundreds of independent “Bit” objects. The artwork functions as a digital ecosystem that transitions from a chaotic, flickering state of binary data to a structured, symbolic heart shape when the user provides “attention” via a mouse click.

3. Concept

My goal was to visualize the friction between cold machine logic and human emotion. I wanted to create a “hidden” layer of data that only reveals itself through interaction. The aesthetic is inspired by classic terminal interfaces — matrix-green on a deep black background — but the focus shifts to how human interaction (the mouse) acts as a lens, organizing random 0s and 1s into meaningful language and iconography.

4. Process and Methods
    • I populated the canvas using a nested loop that instantiates Bit objects which allowed every single character on the screen to have its own internal state, managing whether it currently displays a binary digit or a human word.
    • To make the background feel “alive,” I implemented a probability check within each object. By giving each bit a 2% chance to flip its value every frame, I created a shimmering effect.
    • Instead of a simple radius, I utilized a heart curve equation which creates a high-definition mathematical boundary that filters the grid, determining which objects should “corrupt” their binary state into human text.
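The 2% flicker rule can be isolated into a tiny testable function. This is a simplified stand-in, not the sketch's actual Bit class; passing the random draw in as an argument replaces p5's random(1):

```javascript
// Simplified stand-in for a Bit's flicker logic. The function name and the
// string-based binary state are illustrative assumptions.
const FLIP_CHANCE = 0.02; // each bit has a 2% chance to flip per frame

function nextBinaryState(current, randomDraw) {
  // Flip "0" <-> "1" only when the draw falls under the threshold.
  if (randomDraw < FLIP_CHANCE) {
    return current === "0" ? "1" : "0";
  }
  return current;
}
```

Because each object rolls independently every frame, roughly 2% of the grid changes at any moment, which is what produces the shimmering effect.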
5. Technical Details
    • Unlike a standard distance check, which creates a simple circle, I implemented an algebraic heart curve. The challenge was that p5 coordinates are pixel-based and grow downwards from the top-left (0,0). To make the equation work, I had to translate every point in the grid into a normalized local space relative to the cursor.
// 2. COORDINATE NORMALIZATION:
// Transforms pixel coordinates into a math-friendly scale (-x to +x) centered on the mouse position.
let x = (this.pos.x - mouseX) / 150; 
let y = (this.pos.y - mouseY) / 150;

// 3. ALGEBRAIC HEART CURVE:
// Formula: (x^2 + y^2 - 1)^3 - x^2 * y^3 <= 0
// If the result is negative, the coordinate is inside the heart shape.
let heartEquation = pow(x*x + pow(-y, 2) - 1, 3) - pow(x, 2) * pow(-y, 3);

// INTERACTION LOGIC
if (mouseIsPressed && heartEquation < 0) {
  // If inside the heart boundary, reveal a human word
  // 'frameCount % 20' creates a shifting glitch effect within the heart
  if (this.currentText.length === 1 || frameCount % 20 === 0) {
    this.currentText = random(humanWords);
  }
} else {
  // Revert to background binary logic
  this.currentText = this.binaryState;
}
    • The background’s “breathing” light effect is driven by a sine wave. By adding the x coordinate of the object to the frameCount inside a sin() function, I created a horizontal phase-shift, resulting in a light ripple that moves gracefully across the grid.
// 4. TRIGONOMETRIC PULSING:
// Uses a sine wave to cycle brightness based on time and X-position
let pulse = sin(frameCount * 0.05 + this.pos.x * 0.02);
let brightness = map(pulse, -1, 1, 60, 200);
6. Reflection

This project was a bit of a shift in my style of work, where I dipped more into the artistic side rather than just the technical. My main challenge was mastering the balance between the machine code and the heart. Initially, the heart appeared distorted because I hadn't accounted for the rectangular aspect ratio of the canvas. By learning to normalize the X and Y scales independently of the pixel count, alongside changes in sizes, I achieved a crisper silhouette. Using push() and pop() to isolate the font styles of individual objects allowed me to create a dual-identity system where "Machine" and "Human" layers exist in the same space without visual conflict.
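The normalization and heart test can be sketched as one standalone check. The function name, default scales, and the explicit screen-to-math Y flip are illustrative assumptions rather than the sketch's exact code:

```javascript
// Standalone version of the heart-boundary test with independent X/Y scales,
// so a non-square canvas doesn't distort the shape. Scale values are illustrative.
function insideHeart(px, py, mouseX, mouseY, scaleX = 150, scaleY = 150) {
  const x = (px - mouseX) / scaleX;
  const y = -(py - mouseY) / scaleY; // flip: screen Y grows downward
  // (x^2 + y^2 - 1)^3 - x^2 * y^3 <= 0 means the point is inside the heart
  return Math.pow(x * x + y * y - 1, 3) - x * x * Math.pow(y, 3) <= 0;
}
```

A point at the cursor itself satisfies the inequality (the left side evaluates to -1), while a point a few hundred pixels away does not, which is exactly the filter that decides which Bits "corrupt" into human words.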

7. Resources

Week 4 – Reading Reflection

When I read Chapter 1 of The Design of Everyday Things, the first thing that came to my mind was how confused I felt using Brightspace when I first came to NYUAD. The author explains that when something is designed poorly, people usually blame themselves instead of the system that created it. That stood out to me because that is exactly how I felt. I kept thinking I was the problem and that I just was not understanding how to use it; I had trouble uploading my assignments and finding professors' feedback. Before NYUAD, I used Google Classroom, and everything there felt simple, clear, and easy to use and understand. Brightspace felt crowded and all over the place, which made it hard to navigate. After reading this chapter, I realized my frustration was not only about me. It was also about the design itself.

The chapter also talks about how designers sometimes assume everyone thinks the same way they do. That made me think about how different people use technology in different ways. Some people grow up using all kinds of apps and websites, so they get used to complicated layouts and things that could be considered “confusing” to other people. Others do not have that same experience, so they might feel lost more easily. I noticed this with WordPress too. The first time I used it, I had no idea where anything was. It felt overly crowded and confusing. After using it many times, I slowly got more comfortable. That helped me understand why designers need to think about all kinds of users, not just people who already know similar platforms.

There are also websites that feel easy right away. To me, Amazon is a good example. The search bar is easy to find when looking for a product, the cart is always in the same place and shows what you added when you click it, and checking out or tracking an order is simple. You do not have to guess what to do next. Even simple games like Wordle are easy to understand and use. This suggests that the interactions they offer suit any type of person, even people who barely know how to use technology. When you open them, you already know what to do without instructions. That shows how good design can make something feel natural.

Overall, this reading made me pay more attention to the design of the things I use every day. I started noticing how much design affects whether something feels stressful or simple. It also made me think that simple design is usually better. I am still wondering how designers decide how much is too much. Adding more features can be helpful, but it can also make things more confusing. Finding the right balance seems difficult, and I am curious about how designers figure that out.