Week 4 Reading Response: Ch 1, “The Design of Everyday Things”

In Chapter 1 of “The Design of Everyday Things,” titled “The Psychopathology of Everyday Things,” Norman emphasizes how everyday objects can confuse and fail their users when their design cannot be understood. Throughout the chapter, he introduces key concepts like:

  • “Discoverability,” which is whether users can figure out, just by looking at an object, what actions are possible and how to perform them, and,
  • “Feedback,” which is some signal to the user that the task they intended to perform has been carried out successfully or has failed along the way.

He gives the example of doors (like the “Norman Doors,” named after himself) that leave people puzzling over how to use them. His main argument, which I agree with, is that designs should be “Human-Centered” and easily usable by everyone, no matter how simple or complex the object is.

“Human-Centered Design” is a term I had previously encountered when I took a J-term class, “Interactive Media in the World.” I learnt that this principle can be applied to a wide range of situations, from how things are arranged (like electric tools in a lab) to the flow of traffic in a city (to prevent accidents), smart home technologies, human-computer interaction, and so on.

If I had to name a product that annoys me a lot, it would be badly designed electrical adapters. There is a particular one I always keep with me as a reminder.

Similar products with the same design can be found on Amazon and elsewhere. While mobile devices charge without any issues, the problem arises with laptop chargers and desktop power cords (especially in the UAE): there is not enough clearance between the charging cord and the wall, rendering the adapter unusable for devices with bigger plugs.

In terms of interactive media, I believe Norman’s principles can significantly improve user experiences. When designing websites, apps, or interactive displays, discoverability should be a primary goal: prioritizing it helps users immediately understand how to engage with the system. Obvious signifiers like labels or icons can lead the user through the interface. Feedback matters just as much when interacting with digital elements; a button’s color changing when clicked or a sound signaling the completion of a task are visual and auditory signals that boost user confidence and enhance the overall experience.

Week 4 – Reading Reflection

Don Norman’s “The Design of Everyday Things” highlights how poor design impacts our daily interactions with common objects. In the first chapter, he introduces the concept of discoverability, which emphasizes that people should be able to figure out how something works simply by looking at it. Norman explains that objects like doors or light switches can become confusing when they lack clear visual cues, often leading users to make mistakes. The idea of feedback is also essential, ensuring users receive confirmation that their actions—such as opening a door or turning off a light—are completed successfully. Norman’s focus is on making designs intuitive and straightforward, so users can easily understand them without needing instructions.

One frustration I experience, which Norman’s work sheds light on, is how stressful modern smart devices like home assistants or thermostats are to use. These devices often overwhelm users with too many options and hidden functions, making them difficult to use. Applying Norman’s principles of signifiers and mapping could make these systems more user-friendly. For example, clearer labels, icons, or simple gestures could improve usability. Additionally, feedback through visual or sound cues would help users feel confident that they’ve completed a task correctly. Norman’s focus on human-centered design highlights the importance of keeping the user in mind, ensuring that products are approachable and simple to navigate. This approach could significantly improve our interactions with modern technology, reducing frustration and making these tools more accessible.

Week 4 – Ghost of Words

Intro

This week, I adapted my Week 2 production – the dotted silhouette driven by webcam input – to create a representation of the figure reflected in the webcam as well as the soul behind it in the background. It seemed to me that the Week 2 product lacked a message or meaning: what is the point of mirroring the webcam with dots? Then, when it came to this week’s text generation, the answer appeared to be the combination of text with my existing mechanism – the mirroring symbolizes the entity, the phantom, the ghost, the creator as well as the user, while the poems floating across the canvas reflect a piece of my soul. The ghost of me (or you) becomes the exact pathway to discovering that piece of soul, adding incentive to the interaction.

Process

I started simply by replacing the dot drawing in my Week 2 product with text drawing – and, obviously, the flashing of words could not carry any further meaning; it only blinded the user even more than the dots did, since we instinctively try to read whatever words are presented in front of us.

Therefore, I tried another approach: display the poem’s lines in the background and let the probability pixel matrix act as an alpha-value matrix overlaid on the text, resulting in the ghostly effect.

In the preload function, I ensure that all external resources are loaded before the sketch runs. Using loadTable, I import lines of text from textLines.csv, which are used to generate the floating texts dynamically.

function preload() {
  // Load the CSV file
  textLines = loadTable('textLines.csv', 'csv', 'header'); // Adjust the path and options as needed
}

This time, I directly use the grayscale value as the alpha value, since they share the same range:

function drawAlphaFilter() {
  noStroke();
  
  // Iterate through each cell in the grid
  for (let y = 0; y < k; y++) {
    for (let x = 0; x < j; x++) {
      let index = x + y * j;
      let grayValue = pixelArray[index];
      
      // Calculate alpha value
      // Ensure alphaValue is within 0-250 for better visibility
      let alphaValue = constrain(grayValue, 0, 250); 
      
      // Set fill color to background color with the calculated alpha for overlay effect
      fill(17, 38, 56, alphaValue);
      
      // Calculate the position and size of each rectangle
      let rectWidth = windowWidth / j;
      let rectHeight = windowHeight / k;
      let rectX = x * rectWidth;
      let rectY = y * rectHeight;
      
      rect(rectX, rectY, rectWidth, rectHeight);
    }
  }
}

The RGB value used in this product is extracted from my personal website: Sloth’s Slumber | Xiaotian Fan’s Collection (sloth-slumber.com).

Then, the floating texts are managed through a class and helper functions, including:

function updateFloatingTexts() {
  // Update and display existing floating texts
  for (let i = floatingTexts.length - 1; i >= 0; i--) {
    let ft = floatingTexts[i];
    ft.update();
    ft.display();
    
    // Remove if off-screen
    if (ft.isOffScreen()) {
      floatingTexts.splice(i, 1);
      
      // Also remove from its slot
      let s = ft.slot;
      slots[s] = null; // Mark the slot as free
    }
  }
  
  // Iterate through each slot to manage floating texts
  for (let s = 0; s < totalSlots; s++) {
    if (slots[s] === null) {
      // If the slot is free, add a new floating text
      let newText = getNextText();
      if (newText) {
        let ft = new FloatingText(newText, s);
        floatingTexts.push(ft);
        slots[s] = ft; // Assign the floating text to the slot
      }
    } else {
      // If the slot is occupied, check if the tail has entered the screen
      let lastText = slots[s];
      
      if (lastText.direction === 'ltr') { // Left-to-Right
        // Check if the tail has entered the screen (x + width >= 0)
        if (lastText.x + lastText.getTextWidth() >= 0) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      } else { // Right-to-Left
        // Check if the tail has entered the screen (x - width <= windowWidth)
        if (lastText.x - lastText.getTextWidth() <= windowWidth) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      }
    }
  }
}
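Stripped of the p5.js specifics, the slot bookkeeping above can be exercised on its own. Below is a minimal model of the same idea – each slot holds at most one active text, and freeing a slot lets the next update fill it again. (The `SlotManager` name and the string stand-ins for floating texts are my own, not the sketch’s actual code.)

```javascript
// Simplified model of the slot logic: each slot holds at most one
// active text; freeing a slot lets the next fill() reuse it.
class SlotManager {
  constructor(totalSlots) {
    this.slots = new Array(totalSlots).fill(null);
  }

  // Fill every free slot with the next queued text; return the texts placed.
  fill(queue) {
    const placed = [];
    for (let s = 0; s < this.slots.length; s++) {
      if (this.slots[s] === null && queue.length > 0) {
        this.slots[s] = queue.shift();
        placed.push(this.slots[s]);
      }
    }
    return placed;
  }

  // Mark a slot free once its text has scrolled off-screen.
  free(s) {
    this.slots[s] = null;
  }
}

const mgr = new SlotManager(3);
mgr.fill(['a', 'b']);   // slots: ['a', 'b', null]
mgr.free(0);            // slots: [null, 'b', null]
mgr.fill(['c']);        // slot 0 is reused: ['c', 'b', null]
console.log(mgr.slots);
```

The real sketch layers the “has the tail entered the screen yet?” check on top of this, but the recycle-on-free pattern is the core of it.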

Another important function concatenates lines so that the combined text fills the window width:

function getNextText() {
  // Reset index if end is reached
  if (currentLineIndex >= textLines.getRowCount()) {
    currentLineIndex = 0; // Reset to start
  }
  
  let combinedText = '';
  let estimatedWidth = 0;
  let tempIndex = currentLineIndex;
  let concatenationAttempts = 0;
  let maxAttempts = textLines.getRowCount(); // Prevent infinite loops
  
  // Loop to concatenate lines until the combined text is sufficiently long
  while (estimatedWidth < windowWidth * TEXT_MULTIPLIER && concatenationAttempts < maxAttempts) {
    let textLine = textLines.getString(tempIndex, 0);
    if (!textLine) break; // If no more lines available
    
    combinedText += (combinedText.length > 0 ? ' ' : '') + textLine;
    tempIndex++;
    
    // Reset if at the end of the table
    if (tempIndex >= textLines.getRowCount()) {
      tempIndex = 0;
    }
    
    // Estimate text width using p5.js's textWidth
    textSize(24); // Set a default size for estimation
    estimatedWidth = textWidth(combinedText);
    
    concatenationAttempts++;
    
    // Break if the same index is on loop to prevent infinite concatenation
    if (tempIndex === currentLineIndex) break;
  }
  
  // Update the currentLineIndex to tempIndex
  currentLineIndex = tempIndex;
  
  return combinedText;
}

Finally, since this piece uses a full-window canvas, I added a resize handler to respond to window resizing:

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  
  // Update Y positions of floating texts based on new window size
  for (let ft of floatingTexts) {
    let padding = 5; // Padding from top and bottom
    ft.y = map(ft.slot, 0, totalSlots - 1, padding, windowHeight - padding);
  }
}

To Do & Reflection

While this product germinated from my previous one, I believe it could be polished further: varying the text aesthetics, making the relation between text and webcam (or audio level) more responsive, etc.

On the other hand, I would say this product is indeed an improvement over Week 2, as I have started to incorporate my own message and character into the code instead of creating fancy (or not) demos.

Assignment 4 – The Lyrical Video

Concept

For this project, I wanted to explore something simple yet engaging with text. My initial idea involved allowing user input, where text would fall to the ground. While that was a good starting point, I felt it needed more interactivity. Then, inspiration struck while I was listening to music: why not create a lyric video? And that’s how this project took shape – a lyric video with the text fading in and out, synchronized to the music playing in the background.

 

Code I’m Particularly Proud Of

In this simple project, the code I’m most proud of is the part that handles the fading of the text. Normally this would require a loop, but since the draw() function in p5.js acts as a natural loop, I managed it with a simple if statement combined with a counter that gradually decreases the opacity of the text until it fully fades out. Here’s the core code snippet:

// Display the current line with the current fade value as its opacity
fill(255, fadeValue);
text(lyrics[currentLine], width / 2, lineY); // Draw the current lyric line at the center of the canvas

// Gradually fade the text out by decreasing its opacity
fadeValue -= 1;

// When the text is fully faded, move to the next line
if (fadeValue <= 0) 
{
  currentLine = (currentLine + 1) % lyrics.length; // Move to the next line, looping back to the start at the end
  
  currentColor = (currentColor + 1) % colors.length; // Change to the next background color, looping through the array
  
  fadeValue = 255; // Reset the fade value to fully opaque
}
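Stripped of the drawing calls, the counter logic can be checked in isolation. Here is a standalone model of the same idea (the `stepFade` helper is illustrative, not part of the sketch itself):

```javascript
// Standalone model of the fade counter: each "frame" decrements fadeValue;
// when it reaches 0 the line and color indices advance and the fade resets.
function stepFade(state, lyricsCount, colorsCount) {
  state.fadeValue -= 1;
  if (state.fadeValue <= 0) {
    state.currentLine = (state.currentLine + 1) % lyricsCount;
    state.currentColor = (state.currentColor + 1) % colorsCount;
    state.fadeValue = 255; // reset to fully opaque
  }
  return state;
}

let state = { fadeValue: 255, currentLine: 0, currentColor: 0 };
for (let frame = 0; frame < 255; frame++) stepFade(state, 4, 3);
console.log(state.currentLine); // → 1 (advanced exactly once after 255 frames)
```

At 60 fps this gives each line roughly 4.25 seconds on screen, which is why the fade feels slow and steady.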

 

Final Product

The final product is available to experience, and you can interact with it by pressing the mouse button to move the lyrics forward. Otherwise, you can simply watch it as a lyric video with music in the background. Just a heads-up: the video includes audio, so be mindful before playing it.

 

 

Final Thoughts and Reflection

Working on this project was both intriguing and challenging. It was a lesson in embracing simplicity, as my initial ideas were quite ambitious. However, I realized that there’s a unique power in crafting something straightforward yet effective. While I’m pleased with the outcome, there are a few areas for improvement: I would like to synchronize the lyrics with the music more precisely, enhance the background visuals, and add more interactive elements to make it more engaging. This project has sparked new ideas, and I look forward to applying these insights to something even bigger and better. Stay tuned!

Week 4: Reading response

One thing that drives me crazy in design, and was not directly discussed in the reading, is the struggle to create works with meaningful intentions or justifications. For me, a good design, besides its aesthetics, interactivity, and good user experience, is defined by what it carries. I am usually interested in why the creator of an art piece wanted to make it in the first place, even beyond art. As I think about my future works, I often ask myself what challenges I want to tackle or what messages I want my designs and works to convey, as I believe that impactful art and work should emerge from genuine reasons.

Many principles from the reading are directly relevant to interactive media, as the field is rooted in design. Concepts such as discoverability, understanding, affordances, and signifiers, as discussed in the reading, can be directly applied to create more engaging experiences with interactive designs. One thing I hope to take seriously in my future works is developing human-centered designs. Inspired by the reading, I aim to create interactive artworks that first and foremost have a clear and easy-to-understand purpose. I can achieve this by using simple design principles the user can intuit directly, or by giving simple and clear instructions if need be. Second, I also intend to consider the significance of feedback, as the reading reminded me of scenarios where even feedback given subconsciously makes a great difference. I hope that with artworks that provide feedback, users will want to continue exploring to find more interactions, thus enriching their experience with the designs.

Week 4 – Generative Text

Concept

In this project, I created an interactive tool that lets users see how emotions can be expressed through emojis. Inspired by my brother’s misuse of emojis, each emoji represents a specific feeling, and by clicking on them, users can see a contextual message that describes when to use that emoji. 

Implementation

The implementation of this interactive tool uses p5.js to create an engaging experience for users. It starts with an array of emoji objects that include their symbols, emotions, and positions on the canvas. The `setup()` function initializes the canvas size and text settings. In the `draw()` function, emojis are displayed along with a gradient background that transitions from light blue to dark slate blue for visual appeal. When users click on an emoji, the `mousePressed()` function checks if the click is near an emoji, then shows a message above it explaining its meaning. Overall, this simple structure effectively helps users understand emoji meanings in a fun and interactive way.

Highlight

One aspect of the code that I’m particularly proud of is the text generation feature that dynamically displays contextual messages when users click on an emoji. By using simple logic in the `mousePressed()` function, the code checks the position of the mouse relative to each emoji and identifies the selected one. This triggers a specific message that appears above the emoji, explaining its meaning and when to use it. 

This dynamic text generation not only enhances interactivity but also provides an educational element, helping users understand the emotional context behind each emoji. I appreciate how this feature brings the project to life, making it more engaging and informative. The clarity of the messages ensures that users leave with a better understanding of emojis, which is the core goal of this tool. This is how I implemented it:

function mousePressed() {
  // Check if an emoji is clicked
  for (let emoji of emojis) {
    let d = dist(mouseX, mouseY, emoji.x, emoji.y);
    if (d < 30) {  // Check if the mouse is close enough to the emoji
      selectedEmoji = emoji;  // Set the selected emoji
      break;
    }
  }
}

When a user clicks near an emoji, the code checks the distance between the mouse pointer and the emoji’s position. If the distance is small enough, it sets that emoji as the selected emoji, allowing the following code in the draw() function to display the corresponding message:

if (selectedEmoji != null) {
  textSize(24);  // Smaller text for the message
  fill(0);
  text(`Use this emoji when you are ${selectedEmoji.emotion}.`, selectedEmoji.x, selectedEmoji.y - 50);  // Display above the emoji
}

Embedded Sketch

 

Future Reflections and Ideas for Future Work

Looking ahead, I plan to enhance this project by adding more emojis, including diverse options, and allowing users to submit their own. Additionally, I would like to incorporate a quiz feature to make learning about emojis more fun and engaging. These improvements will help create a more comprehensive tool for understanding emojis and their meanings.

SOLAR SYSTEM

Concept:

For this assignment, I honestly had no clue what to create. At first, I wanted to create generative text; however, I couldn’t think of a sketch, so I decided to do data visualization instead. My concept was straightforward, as I was still trying to understand the code, so I made a simple solar system; the data isn’t accurate and was chosen just for the sketch and the placements in p5. I first started with a blank black background with the planets orbiting, which was too basic, so I used the lerp function – inspired by Jheel’s assignment last week – to gradually change the color to blue. Furthermore, I added shooting stars and normal stars to make it look more appealing.
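The gradual color change described here comes down to a per-channel linear interpolation, which is the idea behind p5.js’s lerp/lerpColor. A minimal sketch in plain JavaScript (the start and end colors below are placeholders, not the sketch’s actual palette):

```javascript
// Per-channel linear interpolation between two RGB colors,
// the same idea as p5.js's lerpColor(c1, c2, amt).
function lerpChannel(a, b, t) {
  return a + (b - a) * t;
}

function lerpRGB(c1, c2, t) {
  return c1.map((ch, i) => Math.round(lerpChannel(ch, c2[i], t)));
}

const black = [0, 0, 0];
const blue = [20, 40, 120]; // placeholder target color
console.log(lerpRGB(black, blue, 0.5)); // → [ 10, 20, 60 ]
```

Stepping `t` from 0 toward 1 a little each frame produces the smooth background transition.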

Highlight:

The highlight of my code is the animation and placement of the planets, as that was the hardest part to figure out. The PowerPoint and previous in-class work helped a lot; without them, I would still be trying to make it work.

// Draw and animate planets orbiting the sun
for (let i = 0; i < planets.length; i++) {
  let planet = planets[i];

  // Update the planet's position along its orbit
  angles[i] += planet.speed;
  let x = sun.x + cos(angles[i]) * planet.distance;
  let y = sun.y + sin(angles[i]) * planet.distance;

  // Draw the orbit path
  stroke(255, 255, 255, 50);
  noFill();
  ellipse(sun.x, sun.y, planet.distance * 2);

  // Draw the planet
  noStroke();
  fill(planet.color);
  ellipse(x, y, planet.diameter);

  // Display the planet name
  fill(255);
  textSize(12);
  text(planet.name, x + planet.diameter / 2 + 5, y);
}
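The placement logic in that loop is plain circular motion: advance the angle each frame, then project with cos and sin. A tiny standalone version (the center and radius values below are made up):

```javascript
// Position of a body orbiting a center point at a given angle and radius.
function orbitPosition(center, angle, distance) {
  return {
    x: center.x + Math.cos(angle) * distance,
    y: center.y + Math.sin(angle) * distance,
  };
}

const sun = { x: 200, y: 200 }; // made-up center
const p = orbitPosition(sun, Math.PI / 2, 100);
console.log(Math.round(p.x), Math.round(p.y)); // → 200 300
```

Because the distance is fixed per planet, every frame lands exactly on the orbit circle, which is why the drawn orbit path and the planet never drift apart.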

Reflection:

For improvement, as you can see, the planets are going out of the canvas. I tried fixing it by making the orbits smaller, but then everything looked too tight, so I left it as it is. I also believe some user interaction would have been a great addition; as of now there isn’t any. I should perhaps have let users control the orbiting of the planets with the mouse, or maybe the shooting stars.

My design:

Reading Response 4

In Norman’s text, he argues for human-centered design, focusing on creating products that are simple and straightforward to use by aligning the design with what users need and can do. In that context, one thing that drove me crazy (probably because I was really hungry) is how confusing digital appliances can be – like the air fryer I tried to use in the dorm. I expected it to be super easy, but instead my sister Amna and I had to spend ages trying to figure out the functions, because the instructions were just vague images and the digital display wasn’t clear. For someone who doesn’t cook often, it was frustrating to the point where I had to search TikTok to find out how to use it, and even then it took ages because I had to find a similar air fryer. To fix this, I think appliances like this should have built-in, interactive tutorials. Imagine turning on the air fryer for the first time and having it guide you step-by-step on the screen, showing you exactly how to use it. That way, you wouldn’t have to guess or waste time searching for help online.

In terms of applying Norman’s principles to interactive media, his ideas about affordances and signifiers would be super helpful. For example, in apps or websites, using clear icons or buttons that naturally show what they do would make everything easier to navigate. Also, feedback is key, like when you press a button, having a small animation or sound that lets you know the app is working on your request. It’s those little things that make the user experience smoother. Plus, having a simple design that allows users to quickly figure out how everything works without needing a tutorial, would make interactive media way more intuitive, kind of like how appliances should work right out of the box without you needing to look up instructions.

Week 4 – A web of words

Concept

For this week’s assignment, I knew right off the bat that I wanted to do something with poetry. Inspired by Camille Utterback’s Text Rain piece, I thought an unexpected-poem concept would be interesting, and I originally wanted to use a database of Google searches to generate poetry. However, after not finding any databases that struck my interest, I decided to take a break and come back to the assignment later. It was while writing a poem inspired by the feeling of being homesick that I ended up with my idea. In the poem, I jumped around a bit as I dissected how my homesickness portrayed itself in my life, and I landed on the idea of a word web. From there, I decided that I wanted lines from my poem to connect to other random lines from poems I’d gathered on the internet.

Components 

Because I was somewhat combining the database option with the generative text output, I decided to keep the design of both aspects relatively simple so as to keep my expectations realistic. However, even what I thought would be simple took a lot of trial and error and restarting. Generating the typewriter-like output is pretty simple in the code, but it took a lot of planning and replanning on my part, as I kept getting stuck on the line breaks and on how the words were being stored.

Once I got that rolling, I focused on how to incorporate the database of poetry. I ended up creating a few different methods so that I could randomly pick information from my poem and then locate it in the provided poems. Then, just for an aesthetic touch, I imported a font so that each line of poetry had a bit more uniqueness and visual appeal. You can also click the mouse to generate more words with different excerpts of the poems, which I think further instills the idea of how everything is connected in some mega-web (even in ways we could never imagine).
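The typewriter-like output mentioned earlier boils down to revealing one more character per frame; a simplified sketch of that idea (this helper is illustrative, not the project’s actual code):

```javascript
// Reveal a string one character per "frame"; line breaks arrive
// naturally because any newline characters are part of the string.
function typewriterFrames(text) {
  const frames = [];
  for (let i = 1; i <= text.length; i++) {
    frames.push(text.slice(0, i));
  }
  return frames;
}

const frames = typewriterFrames('web');
console.log(frames); // → [ 'w', 'we', 'web' ]
```

In a p5.js draw() loop, the same effect comes from drawing `text.slice(0, frameCount)` each frame instead of precomputing the array.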

An Aspect I Am Proud Of 

I think in writing this program I put more of my ‘programming skills’ to the test than I have in the past. I needed to spend a lot of time on the design process because after hitting a few dead ends I realized the importance my algorithm played into the functionality of my code. Although this is not the part I struggled the most with, I’d like to highlight my functions to generate the random words and phrases seen below.

function randomWord() {
  let randLine = lines[int(random(0, lines.length))]; // choose a line from my poem
  let words = split(randLine, ' '); // make each word an element in an array
  let chosenWord = words[int(random(0, words.length))];
  // choose a random word from the random line
  wordInData(chosenWord); // find it in the data set
}

function wordInData(chosenWord) {
  let attempts = 0; // preventing a max call stack error
  while (attempts < 1000) {
    let randomDataLine = dataPoems[int(random(0, dataPoems.length))];
    let words = split(randomDataLine, ' ');
    // doing the same thing as with my poem
    if (words.includes(chosenWord)) { // if the random word is found
      magazineLine = randomDataLine;
      return; // stop the function from continuing
    }
    attempts++;
  }
  // If no match is found after the maximum attempts, try a different word
  randomWord();
}

I chose to highlight this code because it took me quite a long time to figure out how to parse the data from both text files into arrays I could work with (although in the end it was quite simple; I just didn’t understand the structure of loading files).

Final Product (double click for the random poetry!)


(Also, I know from this week’s reading that users shouldn’t need explicit instructions for it to be good, user-friendly design, but Rome wasn’t built in a day!!!)

Reflection

This excerpt, however, also brings me to what I’d like to improve about the program. I’m still not sure whether it’s an issue with my data set or just poor design on my part, but I was running into a maximum call stack error when searching for words in the data set of poems. Because the program could iterate over 1,000 times trying to find a word, it would end up crashing, which created a lot of stress and wasted time.

Therefore, in the future I’d like to find the root of the issue and redesign the program to solve it. Although I am proud of what I designed, given how far I came from my hardcoded version, I know this algorithm is far from ideal and definitely want to tweak it. Furthermore, I’d love to make it more interactive, perhaps with the user choosing a word to search for instead of the program generating it, but I just hadn’t gotten that far conceptually with this version.
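One possible redesign, assuming the data set fits in memory, is to build a word-to-lines index once at load time so that lookup becomes a single map access with no retry loop or recursion. A hedged sketch of that alternative (the helper name and sample lines are mine):

```javascript
// Build an index from each word to the data lines containing it, once.
// Lookup then never loops or recurses: either the word is a key or it isn't.
function buildWordIndex(dataPoems) {
  const index = new Map();
  for (const line of dataPoems) {
    for (const word of line.split(' ')) {
      if (!index.has(word)) index.set(word, []);
      index.get(word).push(line);
    }
  }
  return index;
}

const poems = ['the sea remembers', 'home is the sea'];
const index = buildWordIndex(poems);
console.log(index.get('sea').length); // → 2
console.log(index.has('mountain'));  // → false
```

With the index in place, a missing word can be detected immediately and a new word drawn, so the 1,000-attempt loop and the recursive fallback both disappear.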

Week 4 : Data Visualisation

Concept

This week I decided to implement a simple data visualisation. The simplest visualisation techniques that came to mind were graphs. Out of the many ways to visualise data, I decided to implement four basic ones: bar graph, pie chart, scatter plot, and line graph. I was motivated to make my design interactive, so I allowed the user to manipulate the default data through input.

Implementation 

I implemented my design by creating four classes, one for each of the four visualisation methods I chose. In each class, I defined data and category attributes and implemented a display function that uses the stored data to decide how the graph is drawn. I also implemented a function outside all the classes to decide which graph is plotted depending on the user input, and added an input window where the user can add data and see it visualised.
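The function outside the classes that picks a graph can be modeled as a simple dispatch on the user’s choice. A minimal sketch (the class stubs and option strings below are assumptions, since the post doesn’t show this part of the code):

```javascript
// Dispatch from a user's choice string to the matching chart object.
// The chart classes are stubbed; only the selection logic is shown.
class BarChartStub { constructor(data) { this.kind = 'bar'; this.data = data; } }
class PieChartStub { constructor(data) { this.kind = 'pie'; this.data = data; } }

function makeChart(choice, data) {
  switch (choice) {
    case 'bar': return new BarChartStub(data);
    case 'pie': return new PieChartStub(data);
    default: throw new Error(`unknown chart type: ${choice}`);
  }
}

console.log(makeChart('pie', [1, 2, 3]).kind); // → pie
```

Keeping the dispatch in one place means new chart types only need a class and one extra case, without touching the draw loop.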

Sketch

Below is my final sketch:

Piece of Code I am proud of

I am particularly proud of the design of the pie chart, as it was hard to write labels and colour each pie section. Initially I used random colour generation, but the colours appeared to blink, so I added a colour attribute so that each section keeps its own colour. Below is the pie chart class definition:

class PieChart 
{
  constructor(data, categories) 
  {
    this.data = data;
    this.categories = categories;
    this.total = 0;
    // Get the sum of all data
    for (let i = 0; i < data.length; i++) 
    {
      this.total += data[i];
    }
    this.colors = [];
    for (let i = 0; i < this.data.length; i++) 
    {
      // Generate a color for each data section (red channel capped for contrast)
      this.colors.push(color(random(255) % 200, random(255), random(255)));
    }
  }
  
  display() 
  {
    let angleStart = 0;
    let radius = min(rectW, rectH) * 0.4; 
    let centerX = rectX + rectW / 2;
    let centerY = rectY + rectH / 2;

    for (let i = 0; i < this.data.length; i++) 
    {
      let angleEnd = angleStart + (this.data[i] / this.total) * TWO_PI;
      fill(this.colors[i]);
      arc(centerX, centerY, radius * 2, radius * 2, angleStart, angleEnd, PIE);
      
      // Position the category label at the middle of the slice
      let midAngle = angleStart + (angleEnd - angleStart) / 2;
      let labelX = centerX + cos(midAngle) * (radius * 0.5); 
      let labelY = centerY + sin(midAngle) * (radius * 0.5);

      fill(0);  
      textAlign(CENTER, CENTER); 
      text(this.categories[i], labelX, labelY); 

      angleStart = angleEnd;
    }
  }
}

Reflection and Improvements for future work

For future work, I would like to add more options with multiple kinds of data, such as multi-category data – for example, the temperatures of two cities – along with visualisation techniques that allow visual comparison between the two sets of data.