Week 4 Reading Reflection

Something that drives me crazy, and I noticed it a lot here during Professor Goffredo Puccetti’s wayfinding class, is when signs are misplaced or point the wrong way. We used to find arrows that looked like they were telling you to turn when you were actually supposed to keep going straight, or signs that were posted too early or too late to actually help. For some reason, it always felt like they were almost mocking us, because instead of guiding, they just created hesitation. I think the only real fix is testing them in the real moment. You can’t design a hallway sign by looking at a blueprint in an office. You have to walk that path, see where people naturally pause, and place the sign right where the choice needs to be made.

Norman’s ideas connect to this really well, and I think they translate to interactive media almost directly. A misplaced sign feels like a button on a website that looks clickable but isn’t, or a menu gesture that does something different than you expect. Norman talks about affordances and signifiers, and those matter so much online. Buttons should actually look tappable, gestures should be hinted at with little cues, and feedback should come right away. Even a tiny animation or sound can reassure someone that the system heard them, the same way a well-placed arrow reassures you that you’re still going the right way in a building.

For me, applying Norman’s principles to interactive media is really about remembering that people use things while distracted, busy, or in motion. Just like we tested signs while walking, I think apps and websites should be tested while people are juggling other tasks. Mapping should feel natural—swiping left should move left, clicking save should clearly save—and the system’s “model” should always be consistent. What I really took from both the chapter and that class is that when people get lost, whether in a hallway or an app, it’s almost never their fault. It’s the design that failed to guide them. That reminder makes me want to design things that feel clear, forgiving, and human.

Week 4: Reading Reflection

Intuition guides our everyday lives. Most applications and devices that we run into are ones we are not familiar with, yet we are able to interact with them and intuitively figure out how to use them. This can be attributed to unspoken rules we have gotten used to over the years: a button is to be pushed, and a knob is to be turned. These are affordances derived from our intuition, supported by signifiers and feedback, which together make devices usable by everyone. An interaction that drives me crazy, because it lacks all of the above, is trying to find my way around public transportation in some places, especially underground trains where there are not enough signs indicating which train is coming or which platform to take. Often the maps are cluttered or outdated, which makes it even harder. This lack of clear signifiers and feedback makes it difficult to form a reliable mental model of the system, so it’s impossible to wayfind intuitively without searching things up or asking someone around you.

When it comes to interactive media, this reading opened my eyes to the lack of intuitive cues, clear instructions, and signifiers in my own work so far. Most of my pieces depend on the user having prior knowledge, either from reading the concept or from speaking to me. For example, this week my project has an interaction where the user can switch a book by clicking it, which can only be discovered by chance if they haven’t read the concept. However, the switching of the book does count as feedback, since it confirms that clicking makes a change. Moving forward I’d like to make my work more intuitive to interact with, so that the user can identify the purpose and the next move as soon as they come across a piece of mine. I’m looking to explore strong signifiers through visual cues or micro-interactions that lead the user naturally, without their having to be told beforehand. Such cues would create a seamless and engaging experience where users can independently explore, play, and interact with the system. At the end of the day, the works we are creating are user-focused most of the time, and keeping the user’s perspective in mind might matter more here than in most other art forms.

Data Visualization

Concept:

My work for this week was inspired by a website called Receiptify, which takes your Spotify data, compiles your listening statistics, and displays them in the format of a receipt. That is data visualization in itself; while I didn’t make a version of it, I used it to generate my dataset. I asked it for my top 50 songs of the month, turned those into a dataset, and uploaded it to p5.js. I was racking my brain for different ways music could be represented visually, and then I saw that one of my suitemates had coasters that look like vinyls, which gave me the idea to represent the songs as vinyls. Most of us have seen people use vinyls as room decor rather than for playing music; this work is sort of a spin on that (pun intended?).

Part(s) I’m Proud of:

Note: To be completely honest, I did consult ChatGPT for a few things, just to make the process less overwhelming.

1- I remembered Professor Aya saying to try not to hardcode variables, and I am proud that I thought of this part. I decided not to hardcode the number of rows in my grid, in case I want to edit the number of songs in my dataset in the future.

  //compute rows automatically based on number of songs
  //not hard coded in case i want to change the number of
  //songs in the future
  //ceil cause we need to round up
  let rows = ceil(records.length / cols);

2- I made it so that the vinyls expand when you click on them, and I am proud of my implementation because it makes the audience experience less static; you’re not just visualizing the data, you’re also interacting with it.

//when mouse is pressed
function mousePressed() {
  //loop through each vinyl
  for (let i = 0; i < records.length; i++) {
    //check if mouse is inside vinyl
    if (dist(mouseX, mouseY, records[i].x, records[i].y) < 25) {
      //deselect if same vinyl clicked
      if (activeIndex === i) activeIndex = -1;
      //otherwise set this as active
      else activeIndex = i;
    }
  }
}

Here’s the sketch:

Reflection: 

There’s always room for improvement. If I spent more time on this, I’d probably make the rest of the vinyls disappear when one is clicked, as sketched below. Instead of just a color per artist, I could make it more realistic by adding album covers. A possible full upgrade would be turning it into almost a game, where the user chooses one of the vinyls, we get an animation of it actually being played on a record player, and the song itself starts playing. It would be a different way to interact with music, as opposed to seeing it as a huge list of strings, as we do on Spotify.
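Here’s a minimal sketch of that first idea, reusing the records and activeIndex variables from above; the circle call is just a simplified stand-in for my actual vinyl drawing:

function draw() {
  background(240);
  for (let i = 0; i < records.length; i++) {
    //if one vinyl is selected, skip drawing all the others
    if (activeIndex !== -1 && i !== activeIndex) continue;
    fill(20);
    //expanded when active, normal size otherwise (stand-in for the real vinyl drawing)
    circle(records[i].x, records[i].y, activeIndex === i ? 120 : 50);
  }
}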

Assignment 4 – Generative Text

Concept

Well, so I wanted to create something that would bring words to life based on their emotional weight. The idea came to me while I was writing an essay and thinking about how certain words just hit differently, like how “thunder” feels more intense than “whisper.” So I built this visualization that reads emotion data from a CSV file and animates words according to their intensity levels. Each word gets its own personality through movement: high-intensity words explode across the screen in chaotic circles, medium-intensity ones flow like waves, and gentle words just float peacefully. The colors and sizes change too, so you can literally see and feel the emotion in each word. Every 3 seconds it switches to a new word automatically, but you can also hit the spacebar to jump ahead if you’re impatient (like me).

Highlight of the code I’m proud of

The part that really clicked for me was figuring out how to make each letter move independently while still keeping the word readable. At first, all my letters were just clumped together or flying off in random directions, but then I realized I needed to treat each letter as its own little character with its own animation offset.

// Animate each letter individually
for(let i = 0; i < word.length; i++) {
  let x, y, size;
  
  if(intensity >= 80) {
    // High energy - explosive circular movement
    let angle = time * speed + i * PI/3;
    let radius = sin(time * 6 + i) * intensity * 0.5;
    x = cos(angle) * radius * 0.4 + i * 40 - (word.length * 20);
    y = sin(angle) * radius * 0.3;
    size = 45 + sin(time * 8 + i) * energy * 15;
  }
  // ... more animation types
}

Using i * PI/3 and i * 0.6 as offsets made each letter follow the same pattern but at slightly different phases, so they stay connected as a word but each one has its own rhythm.
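As a rough illustration (not my exact constants), the medium-intensity “wave” branch that sits where the // ... more animation types comment is would look something like this, with i * 0.6 doing the same phase-offset job as i * PI/3 above:

  else if (intensity >= 40) {
    // Medium energy - letters ride a gentle wave, offset in phase
    x = i * 40 - (word.length * 20);                   // keep the letters spaced as a word
    y = sin(time * speed + i * 0.6) * intensity * 0.3; // each letter lags the previous one slightly
    size = 40 + sin(time * 4 + i * 0.6) * 10;          // subtle pulsing per letter
  }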

I’m also pretty happy with how I handled the CSV parsing. I decided to do it manually instead of using a library.

// Parse CSV manually - split into lines then extract values
const lines = csvText.trim().split('\n');
const headers = lines[0].split(',');

// Process each data row (skip header row)
for (let i = 1; i < lines.length; i++) {
  const values = lines[i].split(',');
  csvData.push({
    word: values[0],
    intensity: parseInt(values[1]),
    color: [
      parseInt(values[2]),  // red
      parseInt(values[3]),  // green
      parseInt(values[4])   // blue
    ]
  });
}
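For reference, the parser assumes a header row followed by a word, an intensity value, and three RGB components, so a few hypothetical rows of the CSV would look like this (column names and values made up for illustration):

word,intensity,r,g,b
thunder,95,220,60,40
whisper,20,150,190,255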

Sketch

The visualization cycles through emotional words automatically, or press spacebar to advance manually.

 

Future Improvements

There are definitely some things I want to tackle next. First, I’d love to add sound. Another idea is to let users upload their own word lists or even type in words manually to see how the system interprets them. Right now it’s limited to my CSV file, but it would be cool to make it more interactive. I also want to experiment with particle effects – maybe letters could leave trails or break apart into smaller pieces for really explosive words.

The color system could be smarter too. Right now I’m manually assigning colors, but it would be interesting to generate them automatically based on the word’s emotional category or intensity. Maybe cooler colors for calm words and warmer ones for energetic words, or even colors that shift gradually as the intensity changes.
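A rough sketch of how that could work in p5.js, assuming intensity still runs from 0 to 100 like in my CSV (this isn’t in the current sketch, just the direction I’m imagining):

// map intensity (0-100) to a color between a cool blue and a warm red
function colorForIntensity(intensity) {
  let calm = color(90, 140, 255);      // cool blue for gentle words
  let energetic = color(255, 80, 40);  // warm red for intense words
  return lerpColor(calm, energetic, constrain(intensity / 100, 0, 1));
}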

Week 4: Data Visualization

Concept

Through this work I wanted to create a virtual library where a dataset of books is visualized in a dynamic way. I explored the possible datasets in search of one that includes the genre of every book, then integrated that into the visualization by giving every genre its own color, which makes the format more effective and reveals more about each book at a glance. To accommodate a larger number of books within the dimensions of the work, the piece is interactive: clicking a book swaps it out for another, updating the title, author, and color to convey the details of the newly selected book (a rough sketch of this swap is below). The screen displays nine books at a time, enough not to overwhelm the user but not so few that too many books are excluded. The main goal is an interactive and engaging visualization that helps a user explore the dataset for recommendations or discover the range of books available.
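To give a rough idea of how that swap works, here is a simplified sketch; the 3×3 grid math and the books, displayed, and nextBookIndex names are stand-ins for illustration, not my exact code:

let displayed = [0, 1, 2, 3, 4, 5, 6, 7, 8]; // indices into the books dataset, nine visible slots
let nextBookIndex = 9;                       // the next book to swap in

function mousePressed() {
  let cols = 3, cellW = width / cols, cellH = height / cols;
  for (let slot = 0; slot < displayed.length; slot++) {
    let x = (slot % cols) * cellW;
    let y = floor(slot / cols) * cellH;
    //check whether the click landed inside this slot
    if (mouseX > x && mouseX < x + cellW && mouseY > y && mouseY < y + cellH) {
      displayed[slot] = nextBookIndex % books.length; //swap in the next book
      nextBookIndex++;
    }
  }
}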

Highlight Code

wrapText(txt, cx, cy, maxW, maxH) {
  let words = txt.split(" "); //split text into words
  let lines = [""]; //initialize lines array

  //build lines until they exceed width
  for (let w of words) {
    let test = lines[lines.length - 1] + (lines[lines.length - 1] ? " " : "") + w; //add word to current line
    if (textWidth(test) <= maxW) lines[lines.length - 1] = test; //fits line, add it
    else lines.push(w); //doesn't fit, start new line
  }

  //limit lines to available height
  let lineH = textAscent() + textDescent(); //line height
  let maxLines = floor(maxH / lineH); //max number of lines
  if (lines.length > maxLines) {
    lines = lines.slice(0, maxLines); 
    lines[maxLines - 1] += "..."; //truncate last line with ellipsis
  }

  //draw centered vertically
  let startY = cy - ((lines.length - 1) * lineH) / 2; //calculate starting y position
  for (let i = 0; i < lines.length; i++) {
    text(lines[i], cx, startY + i * lineH); //draw each line
  }
}

I’m most proud of the wrapText function within the book class, because it solves a formatting problem I faced with lengthy book titles not fitting into the small space allocated on each book. It tokenizes the input string by splitting it on spaces, then builds lines that respect the maximum width. For titles that were too long to fit, I initially tried using a smaller font, but the lines became too difficult to read. So instead, when the wrapped text exceeds the available height, I cut it off and end the last line with an ellipsis, which avoids shrinking the font to an unreadable size. This ensures that all the titles fit on the book cover and look visually coherent and uniform.

Embedded Sketch 

Reflection 

Through this assignment I’d say I learnt a lot about data visualization and about accommodating a large amount of data that comes with its own attributes, like the different lengths and formats of the book titles. It taught me how to alter code so it handles a larger, more diverse dataset rather than a small set I determined myself, which in turn taught me the importance of dynamic code that can adapt to different inputs. For further assignments and projects I’d like to take the lessons I learnt here and integrate them with more interactivity and storytelling, so there is a more dynamic path to take than the current one, where only a single aspect changes when the user interacts with it.

Reading Reflection Week#4

When Norman was talking throughout the reading about frustrations with design, the first thought that popped into my head was those very complex, ‘interactive’ online shopping sites. I put ‘interactive’ in quotation marks because the design lags, which takes away the whole point of the experience, and I feel like they value aesthetics more than user experience. As a user, I’m just trying to look for clothes. Why are you making it so complicated for me? When Norman began to explain HCD, I remembered the workflow that the company I interned at this summer used for its development department. The company was in charge of an Enterprise Resource Planning system. Suppose you’re a client and you raise a ticket. The ticket goes through Presales, then Analysis, where the development time is estimated and the business need is laid out by a functional consultant; finally it’s picked up by a developer. After the code is done and reviewed, it’s pushed to functional testing before it goes to the client. This flow ensures the code is fully tested before it reaches the client, which minimizes the probability of the client running into errors.

In terms of applying the author’s principles to Interactive Media, I think that especially as we are learning the technology (like p5.js), it’s very easy to lose track of the aesthetics and forget about user experience. There’s a sacrifice to be made, or maybe a balance to be found, between user experience and aesthetics, but aesthetics is part of the user experience as well. Take websites, for example: the likelihood of wanting to use a poorly decorated website is slim; it may work perfectly, but it wouldn’t look appealing. At the other end of the spectrum is the example I gave earlier, where aesthetics completely take over and the user experience is no longer enjoyable.

Assignment 4 Data Visualization

My Concept

I wanted to make a visualization that’s both fun and interactive, inspired by something I love: music. I picked the top 50 Spotify songs of 2023 because I thought it would be exciting to explore popular tracks visually. The idea was to take a simple concept, songs with different features like energy, mood, popularity, and danceability, and show it in a more creative and engaging way instead of a boring, standard chart. I wanted people to notice patterns and interact with the data while keeping it simple and playful. In my visualization, the size of each circle represents popularity, and the color represents danceability (pink is the least danceable, blue is the most, and purple is in between). The higher a circle sits, the happier the song’s mood, and the further to the right it is, the more energy the song has. A rough sketch of how those mappings could be computed is below. The dataset I used is from Kaggle: https://www.kaggle.com/datasets/yukawithdata/spotify-top-tracks-2023.
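Here’s a minimal p5.js sketch of how those mappings might be computed with map() and lerpColor(); the column names and value ranges are assumptions, and my actual code differs in the details:

// hypothetical helper: turn one row of the songs table into circle properties
function circleFor(i) {
  let popularity = songs.getNum(i, 'popularity');    // assumed column names
  let energy = songs.getNum(i, 'energy');
  let valence = songs.getNum(i, 'valence');          // Spotify's "mood" measure
  let danceability = songs.getNum(i, 'danceability');
  return {
    x: map(energy, 0, 1, 50, width - 50),            // further right = more energy
    y: map(valence, 0, 1, height - 50, 50),          // higher up = happier mood
    size: map(popularity, 0, 100, 10, 60),           // bigger circle = more popular
    col: lerpColor(color(255, 120, 200), color(80, 120, 255), danceability) // pink -> purple -> blue
  };
}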

A Highlight of Some Code That I’m Particularly Proud Of

One part I’m really proud of is the hover effect that shows the song title and artist. It makes the visualization feel more interactive and personal because you can actually explore the songs instead of just seeing circles on a canvas. The code works by checking the distance between the mouse position (mouseX, mouseY) and the center of each circle (c.x, c.y). If the distance is smaller than the circle’s radius (size / 2), it means the mouse is hovering over that circle, and then the text appears above it. I used fill(0) to make the text black, textSize(12) to set the size, and textAlign(CENTER) so it’s centered nicely over the circle.

The ${} syntax is called a template literal in JavaScript. It lets you insert variables directly into a string, so ${songs.getString(i, 'track_name')} pulls the song name from the table and ${songs.getString(i, 'artist_name')} pulls the artist. Using it this way makes it easy to combine them into one neat line of text without writing extra code to join them. I like this part because it’s simple, but it really makes the visualization interactive and fun to use.

// if mouse is hovering over the circle, show the song name and artist
if (dist(mouseX, mouseY, c.x, c.y) < size / 2) {
  fill(0);
  textSize(12);
  textAlign(CENTER);
  text(`${songs.getString(i, 'track_name')} - ${songs.getString(i, 'artist_name')}`, c.x, c.y - size);
}

Reflection and Ideas for Future Work or Improvements

I’m happy with how the visualization turned out because it’s simple, colorful, and interactive, and you can immediately see patterns in the data. In the future, I’d love to explore adding more interactivity, like filtering songs by genre or letting users compare two songs directly. I could also experiment with different visual encodings, like using different shapes to represent additional variables. Another idea is adding key press options to trigger effects or even including sound, so the circles could play a snippet of the song when clicked or hovered over. Overall, I think it’s a fun way to combine data and creativity, and it shows how you can turn something as tricky as data into a playful visual experience. 

The Design of Everyday Things Reading Response

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

One thing that can be a bit frustrating is microwaves with a lot of buttons and a confusing dial. You just want to heat your food, but some have many buttons, like “Popcorn,” “Defrost,” “Reheat,” and then there’s the round knob for setting the timer. Sometimes it’s hard to tell which way to turn it, how to stop it, or how to restart it. It’s not always clear, and you end up taking extra time just figuring it out.

Norman explains in the reading that when things are confusing, it’s usually the design, not the user. This microwave has bad mapping, because the buttons and dial don’t clearly show what will happen, and it gives little feedback, so it’s hard to know if it’s working. A simpler design, with fewer buttons, clear labeling on the dial, and a display showing the timer and status, would make it much easier to use.

How can you apply some of the author’s principles of design to interactive media?

I think I could apply some of Norman’s principles of design to interactive media by being organized and guiding the user through the steps. For example, if I’m creating a sketch or an interactive project, I’d make sure it’s clear what the user should do at each stage. I’d also keep in mind that the user might not be familiar with the work, so I’d include hints, feedback, and cues to make it easier to understand. Basically, I’d try to make the experience intuitive while staying open-minded, remembering that just because something makes sense to me doesn’t mean it’s obvious to someone else.

Week 4 Reading Reflection


Reading Don Norman’s reflections on the “psychopathology of everyday things” immediately pulled me back to my first-year CADT course on redesign with Professor Goffredo Puccetti. In the course we studied the idea of a “nudge,” which is basically a subtle element that steers users in the right direction without needing explicit instructions. It is a simple yet powerful thing to have in a design. Norman’s principles of affordances and signifiers echo this beautifully. He reminded me that design should make the right action feel almost self-evident, sparing users from the awkward trial and error of guessing.

I see this all the time in everyday spaces. At NYUAD, the plain steel strips on push doors are a perfect example. Without a single word, they tell you what to do. Yet, recently, they pasted “push” and “pull” stickers on doors, a sign of design overcompensating for its own ambiguity. Digital design isn’t so different. Minimalist interfaces often leave users hunting for functionality, hiding navigation behind icons. Sometimes I find myself clicking around blindly, wondering if something is interactive or just static.

Norman’s framework helps me think through why. Affordances, like shading, button shapes, or a small animation, hint at what’s possible and invite us to try. Signifiers, like a microphone icon or a heart symbol, work almost instinctively, cutting down the need for extra instructions.

But the rise of minimalism has complicated things. I often think of the infamous “Kim Kardashian sink,” a perfect example of how design can privilege beauty over usability. The sink looks striking, but newcomers can’t figure out how to use it. I’ve had similar frustrations with everyday objects, like awkwardly designed shopping baskets that seem almost painful to the human hand. Having the handle in the middle makes them uncomfortable to carry and increases the probability of things falling out. This is a clear example of poor design.

For me, that tension between minimalism and intuitiveness is the heart of the matter. Designers are often tempted towards beauty, but at the expense of comfort and clarity. I’ve realised that the best designs aren’t the ones that impress me visually at first glance but the ones that quietly work. In this course I will try to implement buttons that clearly indicate they are to be pressed, using a simple shadow or glow effect (a rough sketch of that idea is below). I would prioritise intuitive visual cues, making every function discoverable, understandable, and, ideally, a little delightful. I would also follow a consistent key: for instance, all glowing objects are interactable and all non-glowing ones are static. I would use icons to indicate power-ups and add small animations as signifiers. Overall, I hope these changes simplify discoverability and understanding.
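Here is a minimal p5.js sketch of that glow-on-hover idea, assuming a single rectangular button; the glow is just a soft canvas shadow drawn when the mouse is over an interactable shape:

let btn = { x: 150, y: 150, w: 120, h: 48 }; // a single example button

function setup() {
  createCanvas(400, 300);
}

function draw() {
  background(30);
  let over = mouseX > btn.x && mouseX < btn.x + btn.w &&
             mouseY > btn.y && mouseY < btn.y + btn.h;
  // the glow acts as a signifier: only interactable things light up
  if (over) {
    drawingContext.shadowBlur = 25;
    drawingContext.shadowColor = 'rgba(120, 200, 255, 0.9)';
  } else {
    drawingContext.shadowBlur = 0;
  }
  fill(over ? 80 : 60);
  rect(btn.x, btn.y, btn.w, btn.h, 8);
  drawingContext.shadowBlur = 0; // keep the glow off the label text
  fill(255);
  textAlign(CENTER, CENTER);
  text('Press me', btn.x + btn.w / 2, btn.y + btn.h / 2);
}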

Reading Reflection – Week 4

 

I have always thought some things could’ve been simpler. At times, I have had the thought that I was “dumb” for not being able to figure out how to use everyday appliances, but that’s not true. In this reading, Don Norman explains how the fault isn’t in the user, but in the design of things.

Something that drives me crazy is the variety of shower controls. Every time I go to a hotel or a friend’s house, the shower is always different from the one I’m used to. Some showers have two buttons for an overhead shower and a handheld one, but it’s not always clear which button controls which. I’ve even ended up soaked once because I accidentally turned on the overhead shower. You basically have to figure it out through trial and error, since there are rarely any instructions. This could be improved by creating a more universal design for shower controls, or by adding clear markings so it’s obvious what each button does.

Norman mentions feedback as a key principle of design, which is especially important in interactive media. Feedback lets users know their actions have been recognized by producing clear results. For example, an elevator button lights up when it’s pressed to show the action has been registered. In interactive media, feedback can be shown in many ways: a button changing color when clicked, a sound confirming an action, etc. Without feedback, users would feel lost or even think the system isn’t working.
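As a small illustration of that idea in p5.js (a generic sketch, not tied to any particular assignment), a circle can flash for a few frames after being clicked so the user immediately sees that the click registered:

let flashFrames = 0; // counts down after each click

function setup() {
  createCanvas(300, 300);
}

function draw() {
  background(240);
  // green while flashing = "the system heard you", grey otherwise
  fill(flashFrames > 0 ? color(80, 200, 120) : color(180));
  circle(width / 2, height / 2, 100);
  if (flashFrames > 0) flashFrames--;
}

function mousePressed() {
  // only respond when the circle itself is clicked
  if (dist(mouseX, mouseY, width / 2, height / 2) < 50) {
    flashFrames = 15; // flash for about a quarter of a second at 60 fps
  }
}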