Reading Reflection – Week 5

Computer vision differs from human vision in several key ways, primarily in its struggle with environmental variability, lack of semantic understanding, and limited field of view. While humans can easily adapt to changes in lighting, perspective, and context, computer vision systems process images as raw pixel data without inherent meaning. This fundamental difference presents both challenges and opportunities for artists and designers working with computer vision technologies.

To help computers see and track objects of interest, several techniques have been developed. These include controlled lighting to create consistent illumination, background subtraction to identify moving objects, brightness thresholding to detect significant differences, frame differencing to identify motion, and object tracking to maintain focus on specific elements. These methods, as highlighted in Golan Levin’s article, provide a toolkit for novice programmers and artists to incorporate computer vision into their work, enabling the creation of interactive experiences that respond to movement, gestures, and objects in real time.
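As a rough illustration of one of these techniques, here is a minimal frame-differencing sketch in p5.js. It is not code from Levin's article, and the capture size and threshold are assumptions to be tuned per environment.

// Minimal frame differencing: mark pixels whose brightness changed noticeably
// between the current and previous webcam frames.
let video;
let prevFrame;
const threshold = 40; // assumed value; how much change counts as "motion"

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);               // keep the pixels[] array the same size as the canvas
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    const currB = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    const prevB = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    const moved = abs(currB - prevB) > threshold;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = moved ? 255 : 0; // white = motion
    pixels[i + 3] = 255;
  }
  updatePixels();
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240); // remember this frame for next time
}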

I find it interesting how artists navigate ethical considerations regarding privacy and surveillance while also leveraging these technologies to create immersive and responsive installations. Some artists use computer vision as a medium for critical commentary on surveillance culture and social issues, turning the technology's capabilities into a subject for artistic exploration. This dual nature of computer vision in art, as both a tool and a topic, encourages artists to think deeply about the societal impact of their work.

As computer vision tools become more accessible, there’s a growing tension between the democratisation of technology and the depth of understanding required to use it effectively. While user-friendly interfaces and AI-powered tools (like DALL-E and SORA) make it easier for artists to incorporate computer vision into their work, there’s a risk of oversimplification and a potential loss of the underlying principles that drive these technologies. This evolution in the artistic landscape offers exciting new avenues for creativity but also raises questions about the role of human ingenuity and technical literacy in art creation. As the field continues to advance rapidly, artists are challenged to balance the use of cutting-edge tools with a thoughtful approach to their application, ensuring that technology enhances rather than replaces human creativity.

Midterm Progress Report

Concept:

For my midterm project, I'm creating a grid-based version of a Color by Number game. I've always loved these types of games because they help me relax, focus, and feel super satisfied as the artwork slowly comes to life. My goal is to bring that same experience to users by making a game that's easy to use, fun, and visually appealing.

The game gives users a color palette, where each color is linked to a letter or number. The image itself is made up of a grid, with each cell labeled to show which color should go there. All users have to do is click on a grid cell, and the color fills in automatically. It's designed to be simple and intuitive to use.

Challenging Aspects and Risk Prevention

1. One of the biggest challenges was ensuring precise click detection on the image grid. Since the image itself doesn’t inherently support click interactions, I had to create an invisible grid and lay it on top of the image. This allowed the game to register mouse clicks accurately while keeping the visual presentation clean. However, this was difficult because I had to align the overlay perfectly so that the user wouldn’t see it, but it would still work effectively for detecting clicks and filling in colors.

2. Another challenge was allowing users to select colors from the palette and ensuring the correct color was applied to the chosen grid cell. I had to implement logic that detected which color was clicked on and then stored that color as the "active" selection. The game then needed to apply this color to any grid cell the user clicked until a new color was chosen. The difficulty came in precisely mapping click positions to the correct palette color while keeping the interface responsive and user-friendly. (A rough sketch of this click-mapping logic appears after the class code below.)

class ColoringPage {
  constructor(name, imagePath, rows, cols, palette) {
    this.name = name;
    this.img = loadImage(imagePath);
    this.rows = rows;
    this.cols = cols;
    this.cellSize = 600 / this.cols;
    this.grid = Array.from({ length: this.rows }, () => Array(this.cols).fill(null));
    this.palette = palette;
    this.selectedColor = Object.values(palette)[0].color;
  }

  display() {
    this.drawPalette();
    image(this.img, 100, 90, 600, 600);
    this.drawGrid();
    this.drawColoredGrid();
  }

  drawGrid() {
    stroke(0, 50);
    noFill();
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
      }
    }
  }

  drawColoredGrid() {
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        if (this.grid[row][col]) {
          fill(this.grid[row][col]);
          rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
        }
      }
    }
  }

  drawPalette() {
    let keys = Object.keys(this.palette);
    let x = (width - keys.length * 60) / 2; // center the palette, 60px per swatch
    let y = 20;

    noStroke();

    for (let i = 0; i < keys.length; i++) {
      let colorValue = this.palette[keys[i]].color;

      fill(colorValue);
      ellipse(x + i * 60 + 30, y + 25, 50, 50);
      let c = color(colorValue);
      let brightnessValue = red(c) * 0.299 + green(c) * 0.587 + blue(c) * 0.114; // standard luminance formula

      fill(brightnessValue < 128 ? 255 : 0); // white text on dark colors, black text on light colors

      textSize(14);
      textAlign(CENTER, CENTER);
      text(this.palette[keys[i]].label, x + i * 60 + 30, y + 25); // label centered on the circle
    }
  }
}
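The click handling itself isn't part of the class above; here's a rough sketch of how the palette selection and cell filling might be wired together in mousePressed(). The 100/90 image offset, the 60-pixel palette spacing, and the swatch radius mirror the constants in the class, but the global page variable and the overall wiring are assumptions rather than the final code.

// Hypothetical p5.js mousePressed() wiring for the ColoringPage class above.
// Assumes a global `page` holding the current ColoringPage instance.
function mousePressed() {
  // 1. Palette: did the click land on one of the swatch circles?
  let keys = Object.keys(page.palette);
  let startX = (width - keys.length * 60) / 2;
  for (let i = 0; i < keys.length; i++) {
    let cx = startX + i * 60 + 30;
    let cy = 20 + 25;
    if (dist(mouseX, mouseY, cx, cy) < 25) {
      page.selectedColor = page.palette[keys[i]].color; // new "active" color
      return;
    }
  }

  // 2. Grid: convert the click position into a row/column index.
  let col = floor((mouseX - 100) / page.cellSize);
  let row = floor((mouseY - 90) / page.cellSize);
  if (row >= 0 && row < page.rows && col >= 0 && col < page.cols) {
    page.grid[row][col] = page.selectedColor; // fill the clicked cell
  }
}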

 

Week 5: Midterm Update

For my midterm, I decided to do a spin-off of a classic maze game that is also heavily inspired by two other things I love: mystery and cats. The story involves a person and their cat admiring an abandoned tomb that is said to hold a large treasure. Suddenly, something spooks the cat and it runs off into the tomb. It is then revealed that the tomb is like a maze, and the user must navigate through it. The catch? It's pitch black apart from the small light from their flashlight, and there are booby traps all around them. My idea is to let the user call for the cat by pressing a key like the spacebar; when the cat responds, a larger part of the maze is temporarily revealed, as if the cat is just ahead of the user, running away or perhaps even guiding them through the maze until they reach the end and the treasure. Running into a booby trap will result in "death", and the user will need to start over. I'm kind of imagining the layout looking like story pages or a comic book, with the interactive game in the middle.

This task is certainly difficult, as it requires a number of different elements: object-oriented programming, design/aesthetics, object animation, and even the possible use of sprites. The sprites, and getting the character and cat to move smoothly through the maze, are the most challenging parts, along with animating the booby traps and handling the character's collisions with them. Before I could even begin coding, it was really important that I knew what everything I would be designing looked like, so I tried to make it easier by finding GIFs for specific animations I needed that could not be done in the program. That way, I knew exactly what I needed to try to create in the program myself and what I already had. For example, I found this GIF of a cat being scared, which I can use in the beginning scene when the cat is spooked and runs (I'm not sure if this website supports GIFs, but you can kind of get the idea). I'm also working on creating the classes for all of the elements in the game right now. Here's an example of what I have for the player, the traps, and even the cat:

class Player {
    constructor(x, y) {
        this.x = x;
        this.y = y;
        this.size = 20;
    }
    move() {
        if (keyIsDown(LEFT_ARROW)) this.x -= 2;
        if (keyIsDown(RIGHT_ARROW)) this.x += 2;
        if (keyIsDown(UP_ARROW)) this.y -= 2;
        if (keyIsDown(DOWN_ARROW)) this.y += 2;
    }
    display() {
        fill(255);
        ellipse(this.x, this.y, this.size);
    }
}

class Cat {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
}

class Trap {
    constructor(x, y) {
        this.x = x;
        this.y = y;
        this.size = 20;
    }
    display() {
        fill(255, 0, 0);
        rect(this.x, this.y, this.size, this.size);
    }
    checkCollision(player) {
        return dist(player.x, player.y, this.x, this.y) < this.size;
    }
}
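The "call the cat" reveal isn't implemented yet, but the rough idea is a darkness overlay drawn on top of the maze each frame. Below is a minimal sketch of that mechanic; the radii, the two-second timer, and the graphics-layer approach are placeholders rather than final game code.

// Flashlight/darkness overlay: only a circle around the player is visible;
// pressing the spacebar ("calling the cat") widens the circle briefly.
let darkness;                 // offscreen layer holding the black overlay
let revealRadius = 60;        // normal flashlight radius (placeholder)
let revealUntil = 0;          // millis() timestamp when the wide reveal ends

function setupDarkness() {
  darkness = createGraphics(width, height); // call once from setup()
}

function drawDarkness(player) {
  // Wider radius while the cat is "answering", otherwise the normal beam.
  let r = millis() < revealUntil ? 220 : revealRadius;
  darkness.clear();
  darkness.background(0);
  darkness.erase();                           // punch a transparent hole
  darkness.circle(player.x, player.y, r * 2);
  darkness.noErase();
  image(darkness, 0, 0);                      // lay the darkness over the maze
}

function keyPressed() {
  if (key === ' ') {                          // spacebar: call the cat
    revealUntil = millis() + 2000;            // reveal more for two seconds
  }
}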

 

 

Week 5 : Reading Response

Humans recognize faces and objects effortlessly because we rely on memory and perception, without needing to crunch huge amounts of data. Computers, on the other hand, have to go through an entire process—analyzing pixel data, running algorithms, and comparing what they see to stored information just to identify something. Plus, we’re naturally good at adapting to different environments, while computers can struggle when conditions change from what they were programmed for, often leading to glitches or poor performance.

To help computers track objects better, there are a few go-to techniques. Brightness thresholding boosts contrast by filtering colors and intensities, making objects stand out more clearly. Background subtraction helps by removing the static background so the system can focus only on what’s moving. These methods make it easier for computer vision to detect and track what matters, whether in art, security, or interactive installations.
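As a small illustration of what background subtraction can look like in p5.js (a hedged sketch, not code from the reading; the capture size, the key used to store the background, and the threshold are all assumptions):

// Minimal background subtraction: press 'b' to store the current webcam frame
// as the background, then mark any pixel that differs from it noticeably.
let video;
let backgroundFrame;
const threshold = 50; // assumed value; how different a pixel must be to count as foreground

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep pixels[] the same size as the canvas
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  backgroundFrame = createImage(320, 240);
}

function keyPressed() {
  if (key === 'b') {
    // remember the empty scene so anything that appears later stands out
    backgroundFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);
  }
}

function draw() {
  video.loadPixels();
  backgroundFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    const diff =
      abs(video.pixels[i] - backgroundFrame.pixels[i]) +
      abs(video.pixels[i + 1] - backgroundFrame.pixels[i + 1]) +
      abs(video.pixels[i + 2] - backgroundFrame.pixels[i + 2]);
    const foreground = diff / 3 > threshold;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = foreground ? 255 : 0; // white = moving/new object
    pixels[i + 3] = 255;
  }
  updatePixels();
}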

In interactive art, computer vision takes audience engagement to the next level by allowing people to actively shape the artwork with their movements and gestures. Unlike traditional static art, these installations respond in real-time, making the experience more dynamic and immersive. By tracking participants as they move, computer vision can trigger visual, auditory, or even tactile reactions, turning viewers into active contributors rather than just passive observers. This creates a deeper connection with the artwork, giving people a sense of agency in the creative process. Whether it’s motion-triggered visuals, gesture-controlled projections, or body-driven games, computer vision is expanding the possibilities of creativity in interactive media.

Week 5 – Midterm Progress

For my midterm project, I decided to make a game about baking cookies! As a passionate baker, I consider cookies one of my specialties (ask my friends, trust me).

The concept: a simple game to bake chocolate chip cookies. If time permits, and if my skill set can handle it, I also want to include a timer. This is a very basic sketch of what I'm planning:

  1. Start page
  2. Ingredients with a bowl; the users will drag the ingredients into the bowl, and then click on the egg to crack it. This was my "scary" part, as I wanted to try using sprite sheets, a concept I had never heard of or tried before this.
  3. A mix button will bring a whisk, which is animated to turn, and then an animation will switch to a tray filled with cookie dough.
  4. The tray will be pushed into an “oven” (which I’ll draw using shapes)
  5. A screen shows up saying "Cookies are ready!!". There will be a restart button and a recipe button for my actual original recipe.

So, the user interaction would essentially be their role as the baker, dragging the ingredients into the bowl, putting the tray into the oven and so on.

The scary part: using a sprite sheet for the egg cracking. I drew the stages of an egg being cracked and then created a sprite sheet of it using Canva. I then coded the egg breaking, which was a bit confusing. At first, the egg would "crack" wherever I clicked on the screen, but then I figured out how to make it confined to just the egg picture.

I was very happy with the result, and it made me less afraid of approaching the midterm project, since the scariest part is solved now. Here's an embedded sketch of my "cracking the egg" test. I might need to update the sprite sheet and code to get the egg to ACTUALLY fall into the bowl, but so far, I'm pretty confident in my project concept and excited to code it into reality!
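Roughly, the sprite-sheet logic looks something like the sketch below; the filename, frame count, and egg position here are placeholders rather than the exact values from my test.

// Sprite-sheet egg crack: the sheet is a horizontal strip of equal-width
// frames; each click on the egg advances one crack stage.
let eggSheet;
let frameIndex = 0;
const totalFrames = 4;          // placeholder: number of crack stages on the sheet
const eggX = 200, eggY = 150;   // placeholder position of the egg
const eggW = 100, eggH = 120;   // placeholder drawn size

function preload() {
  eggSheet = loadImage('egg_spritesheet.png'); // placeholder filename
}

function setup() {
  createCanvas(400, 300);
}

function draw() {
  background(240);
  const fw = eggSheet.width / totalFrames;   // width of one frame on the sheet
  // draw only the current frame, cropped out of the sheet
  image(eggSheet, eggX, eggY, eggW, eggH, frameIndex * fw, 0, fw, eggSheet.height);
}

function mousePressed() {
  // only advance the crack if the click lands on the egg itself
  const onEgg = mouseX > eggX && mouseX < eggX + eggW &&
                mouseY > eggY && mouseY < eggY + eggH;
  if (onEgg && frameIndex < totalFrames - 1) {
    frameIndex++;
  }
}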

 

Week 4 Project

For this week’s project I decided to make something somewhat ironic in an attempt to create something funny (I think it is).

I made a quote generator that generates a quote and refreshes it for the user's motivational needs. Needless to say, I was the one who inspired my own design, as I found myself, at certain points in the day, struggling to gather the motivation to do anything remotely productive.

One part of this project that was challenging for me was figuring out how to load my .csv file into the project and then retrieving the correct string to display and pass to print().

function keyPressed() {
  if (keyCode === 13) {                         // 13 is the keyCode for the Enter key
    let currentQuote = int(random(1, 20));      // pick a random row index
    print(currentQuote);
    displayText = quotes.getString(currentQuote, 0); // first column of that row
  }
}
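For reference, the table itself is loaded before setup() runs; a minimal sketch of that side of it might look like the following, assuming a quotes.csv with one quote per row (the filename, canvas size, and styling are guesses):

// Loads the quotes table up front so getString() can pull a row later.
let quotes;            // p5.Table holding the quote strings
let displayText = '';

function preload() {
  // 'csv' tells p5 the file format; add 'header' if the file has a header row
  quotes = loadTable('quotes.csv', 'csv');
}

function setup() {
  createCanvas(600, 400);
  textAlign(CENTER, CENTER);
  textSize(18);
}

function draw() {
  background(30);
  fill(255);
  text(displayText, width / 2, height / 2); // show the current quote
}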

 

Assignment 4: Fortune Cookies

This is my text-based art piece, inspired by a recent trip to a Chinese restaurant. My friends and I enjoyed opening our cookies and reading each other's fortunes. I wanted to give others this experience too, so I created this piece to do exactly that.

 

📋Assignment Brief

  • Create an interactive piece of artwork by loading data or displaying text

💭Conceptualisation

The idea for this project emerged from a delightful dining experience at a Chinese restaurant. As my friends and I cracked open our fortune cookies and shared our fortunes with laughter and anticipation, I was struck by the simple joy this tradition brought to our meal. I wanted to capture and recreate this experience, allowing others to enjoy the whimsical pleasure of fortune cookies from anywhere. The idea evolved to combine the visual aesthetics of a Chinese restaurant with the interactive element of opening a fortune cookie. I envisioned a scene where users could click to “open” a virtual fortune cookie, revealing one of many possible fortunes. This concept aimed to blend cultural elements, visual art, and interactivity into a single, engaging piece. By digitizing this experience, I hoped to make it accessible to a wider audience, allowing people to enjoy the surprise and wisdom of fortune cookies without the need for a physical restaurant visit.

💻Process

The coding process for this project began with creating the foundational elements of the Chinese restaurant scene using p5.js. I started by establishing the basic structure, including the background, tables, wall decorations, and lanterns, to set the ambiance. The next crucial step was designing the fortune cookie itself. I used bezier curves to craft a realistic cookie shape, complete with a subtle shadow and a visible fold line. To add depth to the experience, I implemented two states for the cookie: closed and open. This required creating separate functions for drawing each state.
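As an illustration of the bezier approach (a simplified sketch, not the exact control points used in the piece), a closed cookie shape can be built from a pair of curves plus a fold line:

// Simplified closed fortune cookie: two bezier edges plus a fold line.
// The coordinates and colours are illustrative placeholders.
function drawClosedCookie(x, y) {
  push();
  translate(x, y);
  noStroke();
  fill(210, 160, 90);                      // cookie colour
  beginShape();
  vertex(-80, 0);
  bezierVertex(-60, -70, 60, -70, 80, 0);  // rounded top edge
  bezierVertex(60, 50, -60, 50, -80, 0);   // flatter bottom edge
  endShape(CLOSE);
  stroke(160, 110, 50);                    // visible fold line
  strokeWeight(3);
  noFill();
  bezier(-80, 0, -30, -20, 30, -20, 80, 0);
  pop();
}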

The interactive element was introduced by implementing a mousePressed() function, allowing users to toggle between the closed and open states of the cookie. To bring the fortune-telling aspect to life, I created an array of ten different fortune messages. Each time the cookie is opened, a random fortune is selected from this array and displayed on the ‘paper’ inside the cookie.

function mousePressed() {
  isOpened = !isOpened;
  if (isOpened) {
    currentFortune = random(fortunes);
    drawOpenedFortuneCookie(openedFortuneCookie, currentFortune);
  }
}

One of the more challenging aspects was ensuring the fortune text was properly centered and legible within the opened cookie. This required careful adjustment of text positioning and size. Throughout the process, I continually refined the visual elements, adjusting colors, sizes, and positions to create a harmonious and visually appealing scene.
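One small trick that helps here (shown as a sketch; the paper coordinates are placeholders) is giving text() a bounding box so longer fortunes wrap inside the paper strip instead of overflowing it:

// Keeps the fortune legible: the 5-argument text() call wraps the message
// within the given box instead of letting it run off the paper.
function drawFortuneText(msg, paperX, paperY, paperW, paperH) {
  fill(0);
  textSize(14);
  textAlign(CENTER, CENTER);
  text(msg, paperX, paperY, paperW, paperH);
}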

The final touch involved fine-tuning the user experience, ensuring smooth transitions between states and readable fortune messages. This iterative process of coding, testing, and refining resulted in an interactive digital representation of the classic fortune cookie experience, nestled within a charming Chinese restaurant setting.

🚩Challenges

One of the main challenges I faced while writing this code was achieving a realistic representation of a fortune cookie while keeping the code simple and efficient. Creating the curved shape of the cookie using bezier curves required careful adjustment of control points to achieve a natural-looking form.

Another significant challenge was implementing the random fortune selection mechanism. While the concept seems straightforward, ensuring that the fortunes were truly random and didn’t repeat too frequently required careful consideration. I had to strike a balance between maintaining a diverse selection of fortunes and avoiding predictability in the user experience.
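One simple way to reduce immediate repeats (a sketch of the idea rather than the code in the piece) is to re-draw whenever the new pick matches the previous one:

// Avoids showing the same fortune twice in a row; with only ten fortunes this
// keeps the selection feeling less predictable without tracking full history.
let lastFortune = null;

function pickFortune(fortunes) {
  let choice = random(fortunes);
  while (fortunes.length > 1 && choice === lastFortune) {
    choice = random(fortunes); // re-draw until it differs from the last one
  }
  lastFortune = choice;
  return choice;
}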

📶Potential Improvements

While the current version successfully creates an interactive fortune cookie experience within a Chinese restaurant setting, there are several areas for potential improvement and expansion:

  • Animation Enhancements: Adding subtle animations, such as a smooth opening animation for the fortune cookie or a gentle floating effect for the lanterns, could make the scene feel more engaging

  • Sound Integration: Incorporating audio elements, like a soft cracking sound when opening the cookie or ambient Chinese restaurant background noise, could enhance the immersive experience

  • Multiple Cookie Types: Introducing different types of fortune cookies with varying shapes, colors, or even flavors could add variety and replay value to the interaction

 

Startup Funding Visualization

Concept

This project visualizes startup funding data by representing different startups as interactive elements on the screen. Users can hover over a startup to see its funding amount and valuation, and click on a startup to view more detailed information. The goal is to create an intuitive and engaging way to explore startup funding rounds.

Code Highlight

One of the key sections of the code is how the information is displayed when a startup is hovered over. The following snippet effectively creates a tooltip-like interaction:

text(`${this.emoji} ${this.name}`, this.x, this.y - 15);

if (hovered) {
  fill(255, 200);
  rect(mouseX, mouseY, 160, 50, 10);
  fill(0);
  textSize(12);
  text(`💰 $${this.amountRaised}M`, mouseX + 80, mouseY + 20);
  text(`📈 $${this.valuation}M`, mouseX + 80, mouseY + 40);
}

 


This block dynamically positions the tooltip near the cursor and provides a quick summary of key financial metrics.
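The snippet assumes a hovered flag has already been computed; a minimal sketch of how that check might be done for a circular marker (the radius value and function name are assumptions):

// True when the cursor is within `r` pixels of a startup's marker.
// In the class, this result would set the `hovered` flag each frame.
function isHovered(startup, r) {
  return dist(mouseX, mouseY, startup.x, startup.y) < r;
}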

Embedded Sketch

 

Reflection and Future Work

While the current implementation effectively visualizes funding data, there are several areas for improvement:

  • Scalability: The current approach might become inefficient with a large dataset. Optimizing how data is rendered could enhance performance.
  • More Interactivity: Adding filtering options for different funding rounds (Seed, Series A, B, etc.) could improve user experience.
  • Enhanced Visualization: Implementing different shapes or colors to represent different funding rounds would make distinctions clearer.
  • Data Integration: Connecting to a live API to fetch real-time funding data would make this visualization more dynamic and useful.

Overall, this project provides an engaging way to explore startup funding data, and with future iterations, it can be expanded into a more powerful analytical tool.

Week 4 : Reading response

While reading the text, one thing I felt Norman did not explicitly mention is the lack of instructions for devices and electronics, especially when their functionality is not immediately obvious. Many products assume that users will either figure things out through trial and error or consult a manual, but in reality, unclear controls often lead to confusion and frustration. For example, I own a digital camera, but I still have no idea how to zoom properly or what the different shooting modes actually do. There are no clear signifiers on the buttons, and the camera does not provide immediate feedback or guidance. Similarly, the projectors we have in the Baraha rooms can be difficult to use because the wall-docked interface offers multiple input options with no clear instructions on which mode is appropriate for what. Users are left guessing, which can lead to delays and unnecessary trial and error.

This lack of guidance can also be an issue in interactive media. Many systems expect users to learn how to navigate and use features on their own, but this often results in important functions being left undiscovered, reducing the overall usability of the system. This is where Norman’s principles of discoverability, feedback, and conceptual models become essential. If interfaces and products clearly communicated how to use them, users would not have to struggle with unnecessary complexity. By applying Norman’s ideas, interactive media could be improved by incorporating better signifiers, immediate feedback, and clearer mappings to ensure users understand what actions are possible. Whether in physical devices like cameras and projectors or in digital interfaces, design should guide users intuitively rather than forcing them to figure everything out on their own.

Reading Reflection – Week 4

This has been my favourite reading yet! I am a massive ‘good-design’ junkie and love to appreciate when something is made with the user in mind. In fact, I'm so enthusiastic about this that I spent last summer teaching design principles to high school students in Japan. It was incredibly rewarding to share my knowledge and watch young minds grasp the importance of user-centered design.

One aspect of design that particularly excites me is analyzing the ergonomics of products. I love examining how objects interact with the human body and how they can be optimized for comfort and efficiency. This interest led me to develop my own acronym for designing things for people: SUPREME FAM. Each letter represents a crucial aspect of the design process:

S: Stakeholder
U: Usability
P: Primary User Needs
R: 6Rs (Rethink, Refuse, Reduce, Reuse, Recycle, Repair)
E: Ergonomics
M: Machining
E: Environment
F: Function
A: Aesthetic
M: Materials

I find that using this acronym helps me approach design challenges in a comprehensive and user-centered way, much like the principles Norman discusses in his work.

Norman's emphasis on the importance of affordances, signifiers, and feedback in design struck a chord. I recall a recent visit to our Art Gallery, where I hesitated to touch an exhibit because there were no clear indications of how to engage with it. The art piece was about a jelly-like lizard, and there was one on display. However, the lack of signifiers left me unsure whether I was allowed to interact or whether I might accidentally damage the artwork. This experience made me appreciate how crucial it is for designers to provide clear cues that guide user behavior. I had a similar experience of bewilderment when I stood in front of a high-tech coffee machine at a friend's house, unable to figure out how to make a simple cup of coffee. The machine's sleek design offered no clues about its functionality, leaving me feeling embarrassed and caffeine-deprived.

One thing that drives me crazy is the design of many public restroom sinks. I often find myself waving my hands frantically under the faucet, trying to trigger the sensor to start the water flow. Sometimes I’m left wondering if the sink is broken or if I’m simply not performing the correct “hand dance” to activate it. To improve this, I imagine a design where the sensor area is clearly marked with a simple icon or color, and perhaps a small LED light that illuminates when hands are detected. This would provide clear signifiers and immediate feedback, reducing user frustration and water waste.

Applying Norman’s principles to Interactive Media, I believe designers should focus on creating intuitive interfaces that provide clear feedback and match users’ conceptual models. For example, a website’s navigation should use familiar speech patterns and provide visible cues about the user’s current location and available actions. Interactive elements should offer immediate feedback to confirm user actions, much like how a well-designed physical button provides tactile and visual feedback when pressed. By incorporating these principles, designers can create digital experiences that are both engaging and frustration-free.