Week #4 – Reading

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?
One everyday design failure that drives me crazy is the poorly designed TV remote. Many remotes today have an overwhelming number of buttons with unclear labels, making simple tasks like changing the volume or input source frustrating. This could be improved by applying Don Norman’s principles of discoverability and mapping. For example, remotes should group related buttons more logically and use tactile or color-coded signifiers to indicate key functions. A minimalist design, similar to Apple’s Siri Remote, which relies on fewer buttons and intuitive gestures, would make the experience much more user-friendly.

How can you apply some of the author’s principles of design to interactive media?
Interactive media, such as mobile apps and websites, can greatly benefit from Norman’s principles of affordances and signifiers. For instance, buttons in an app should clearly indicate their function through visual cues like color, shape, or animations when hovered over. Poor feedback, such as a lack of confirmation when submitting a form, can leave users confused about whether their action was successful. Applying clear feedback mechanisms, like progress indicators or subtle vibrations for mobile interactions, would improve usability and user satisfaction, making digital experiences more intuitive and engaging.
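
To make this concrete, here is a minimal p5.js sketch (my own illustration, not from the reading) of a button that signals its affordance with a hover state and confirms a click with immediate feedback:

let clicked = false;

function setup() {
  createCanvas(300, 150);
}

function draw() {
  background(240);
  let over = mouseX > 100 && mouseX < 200 && mouseY > 55 && mouseY < 95;
  // Signifier: the button brightens on hover, hinting that it is clickable
  fill(over ? color(80, 160, 255) : color(120, 120, 200));
  rect(100, 55, 100, 40, 8);
  fill(255);
  textAlign(CENTER, CENTER);
  text(clicked ? "Done!" : "Submit", 150, 75);
}

// Feedback: the label changes immediately, confirming the action succeeded
function mousePressed() {
  if (mouseX > 100 && mouseX < 200 && mouseY > 55 && mouseY < 95) {
    clicked = true;
  }
}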



Week 4- Audio Visualizer

Music has always had a profound effect on me—it can transform my mood, inspire creativity, and even transport me to another place. But what if we could see music, not just hear it? That was the inspiration behind my music visualization project. I wanted to create something that would take raw audio data and turn it into a living, breathing visual experience.

I set out to design a program that listens to a song and dynamically generates visuals based on its frequencies. The goal was to make the visualization feel fluid, immersive, and organic—like the music itself. With a gradient background, floating particles, and an animated frequency bar display, the result is a captivating blend of movement and sound.


Code Highlight: Crafting the Visual Bars

At the core of this project is the drawVisualizer function. This function takes Fast Fourier Transform (FFT) data from the song and maps it to visual elements, essentially transforming audio frequencies into colorful, dynamic bars.

The function first analyzes the sound spectrum using fft.analyze(), which returns an array of frequency amplitudes. These values determine the height of the bars, making them respond dynamically to the beat of the song.

// Maps FFT frequency data onto 60 animated bars along the bottom of the canvas.
// Assumes colorMode(HSB, 360, 100, 100, 100) was set in setup().
function drawVisualizer() {
  let spectrum = fft.analyze(); // array of frequency amplitudes (0-255)
  let barWidth = width / 60;
  let maxHeight = height / 4;
  for (let i = 0; i < 60; i++) {
    // louder frequencies produce taller bars
    let barHeight = map(spectrum[i], 0, 255, 10, maxHeight);
    let hue = map(i, 0, 60, 180, 360); // sweep from cyan toward red across the bars
    let yPos = height - barHeight; // top of the bar
    let xPos = i * barWidth + barWidth / 2; // horizontal center of the bar
    fill(hue % 360, 80, 80, 80);
    rect(xPos - barWidth / 2, yPos, barWidth * 0.8, barHeight, 5); // rounded bar
    // sparkle effect: occasionally decorate the tallest bars
    if (barHeight > maxHeight * 0.7 && frameCount % 5 === 0) {
      fill(0, 0, 100, random(50, 100)); // white in HSB (zero saturation, full brightness)
      ellipse(xPos, yPos, random(3, 8));
    }
  }
}


One of my favorite touches is the conditional statement that adds “sparkles” to the taller bars. This subtle effect makes the visualization feel more alive, as if the bars are bursting with energy at their peaks.
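
For context, the snippet assumes an FFT object from the p5.sound library and an HSB color mode. A minimal setup() consistent with those fill() calls might look like this (the filename and FFT parameters are assumptions on my part):

let song, fft;

function preload() {
  song = loadSound("song.mp3"); // placeholder filename
}

function setup() {
  createCanvas(800, 400);
  colorMode(HSB, 360, 100, 100, 100); // matches fill(hue, 80, 80, 80)
  fft = new p5.FFT(0.8, 1024); // smoothing factor, number of frequency bins
}

// Toggle playback with a mouse click
function mousePressed() {
  if (song.isPlaying()) {
    song.pause();
  } else {
    song.play();
  }
}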


Self-Reflection and Areas for Improvement

One of the biggest challenges was fine-tuning the balance between randomness and control. The bars needed to feel reactive but not chaotic, structured but not rigid.
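
One common way to strike that balance in p5.js (a sketch of the general technique, not necessarily what this project does) is to ease each bar toward its target height with lerp(), so the bars still track the beat but stop jittering:

let smoothedHeights = new Array(60).fill(0);

function smoothBarHeights(spectrum, maxHeight) {
  for (let i = 0; i < 60; i++) {
    let target = map(spectrum[i], 0, 255, 10, maxHeight);
    // 0.2 controls responsiveness: lower = smoother, higher = snappier
    smoothedHeights[i] = lerp(smoothedHeights[i], target, 0.2);
  }
  return smoothedHeights;
}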

Another area I’d like to explore is incorporating more interactive elements. Right now, the only interaction is toggling the song on and off with a mouse click. Adding features like changing color themes based on the song’s mood or allowing users to manipulate the bars with the mouse could make the experience even more engaging.

Sketch

Week #4- Text display

Your concept:

My project is an interactive genie lamp simulation where bubbles of smoke rise from the spout while the user holds the “R” key. As the bubbles grow, they form a speech bubble that displays a random percentage between 1% and 100%. This percentage acts as a playful response from the genie lamp, making the interaction fun and unpredictable. The bubbles disappear after a few seconds, allowing for continuous engagement.

Code highlight:

I thought this part would be easy: just make bubbles appear when “R” is pressed. But at first they wouldn’t stop generating, and they’d fill the whole screen. The diagonal movement also felt off, no matter how I adjusted the values.

After some trial and error, I fixed it by limiting the bubbles and refining the movement formula. It was frustrating at first, but seeing the end result made it worth the effort:

function draw() {
  background(190, 195, 255);
  drawGenieBottle();

  // start generating when the R key is pressed
  // (toLowerCase so both "r" and "R" work)
  if (keyIsPressed && key.toLowerCase() === "r" && !generating) {
    generating = true; // starts the generation process
  }

  // generate bubbles only while under the limit
  if (generating && bubbles.length < maxBubbles) {
    let x = width / 1.35 + bubbles.length * 5; // slight diagonal movement
    let y = height - 100 - bubbles.length * 15;
    bubbles.push({ x, y, size: 10 + bubbles.length * 3 });
  }

  // ... (rest of draw: rendering the bubbles and the percentage)
}
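
The rest of draw() isn’t shown above. For completeness, here is a hedged sketch of how the rendering and the timed reset might work; the answer and answerFrame variables and the 180-frame delay are my assumptions, not the original code:

let answer = null; // assumed globals for the percentage display
let answerFrame = 0;

function drawBubblesAndAnswer() {
  // render every bubble rising from the spout
  for (let b of bubbles) {
    fill(255, 255, 255, 180);
    ellipse(b.x, b.y, b.size);
  }
  if (bubbles.length >= maxBubbles) {
    let top = bubbles[bubbles.length - 1];
    if (answer === null) {
      answer = floor(random(1, 101)); // random 1-100%
      answerFrame = frameCount;
    }
    fill(0);
    textAlign(CENTER, CENTER);
    text(answer + "%", top.x, top.y);
    // after ~3 seconds at 60 fps, clear everything for the next wish
    if (frameCount - answerFrame > 180) {
      bubbles = [];
      answer = null;
      generating = false;
    }
  }
}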


Press “R” key:

Reflection and future improvement:

Overall, I am happy with how my project turned out, especially the animation of the bubbles and the random percentage display. However, I think it could be improved by adding glowing effects, sound, and different animations based on the percentage shown. Adding more visuals and variety would make the experience even more engaging.

Assignment #4 – Data Visualization

1. Concept

Faced with the decision between working on data visualization and creating generative text output, I chose data visualization: I already had a good understanding of how to create generative text from class, and working on data visualization could help me gain valuable new skills.

I was inspired by the data visualization of flights in the U.S. I was amazed that a flight network could convey so much information, such as its resemblance to U.S. geography and where the large airports are.

I was interested in applying data visualization mapping techniques to astronomical data. I explored Kaggle datasets and decided the Gaia Astronomical Data would be very appropriate for my project, as I could visualize the data in 3D space using its x, y, and z coordinates (all in parsecs).

I also explored other space visualization projects. In particular, I was inspired by the European Space Agency’s Star Mapper, a visualization based on data from the European Space Agency’s Hipparcos star mapper. Not only were the views under the ‘Explore’ tab beautiful and breathtaking (see images below), but I especially enjoyed the interactivity of the website: being able to control the view of the stars in space by pressing and moving the mouse. I intended to incorporate this into my p5.js animation.

The project involves manipulating the Gaia Astronomical Data to map stellar objects into 3D space and enabling users to zoom in, zoom out, and move across the space using mouse controls. This would be my very first time animating in 3D space, as well as controlling camera and perspective through code! How exciting.

I used the code template for the wind turbine data that was discussed in class and made many revisions to obtain this outcome.

2. Code Highlights

I encountered several major challenges, which I resolved with the help of AI.

First, my goal included being able to move across the space using mouse controls, which was new to me; watching the Coding Train’s WebGL and p5.js tutorial on camera and perspective helped me understand these unfamiliar concepts. I learned important ideas including field of view (fov), perspective(), orbitControl(), and camera([x], [y], [z], [centerX], [centerY], [centerZ], [upX], [upY], [upZ]). It was still very challenging, and with the help of AI, I learned to incorporate astronomical measurements:

  • in mapping the mouse X-position to a 30°-90° zoom range. A wider FOV (90°) shows more galactic context, while a narrower one (30°) focuses on specific star formations. The bounds are expressed in radians (π/6 = 30°, π/2 = 90°).
  • in perspective(), which sets a viewing frustum matching typical astronomical scales: a 1-parsec near clip (prevents nearby masers from clipping) and a 10000-parsec far clip (accommodates distant OB associations).

At some point in development, the stellar objects weren’t visible in the initial view, though if I moved my mouse to around mouseX = 395 and mouseY = 340, they became visible. This posed a problem, because users might not realize at first glance that there are stars to explore. So I used target mouse coordinates to bring the stellar objects into the initial view: const targetMouseX = 397.5 and const targetMouseY = 340, together with camera(initialCamX, initialCamY, initialCamZ, initialLookAtX, initialLookAtY, 0, 0, 1, 0). Since camera() runs whenever the mouse is not pressed, the view resets to a position where the objects are visible in case the user loses sight of them.

function draw() {
  background(10); // darker background for better contrast

  // Field of view - only update while the mouse is pressed.
  let fov;
  if (mouseIsPressed) {
    // Map the mouse X-position (0 to width) to a 30°-90° zoom range
    // (π/6 to π/2 radians). Wider FOV (90°) shows more galactic context;
    // narrower (30°) focuses on specific star formations.
    fov = map(mouseX, 0, width, PI / 6, PI / 2);
  } else {
    fov = map(targetMouseX, 0, width, PI / 6, PI / 2); // use targetMouseX
  }
  // Viewing frustum matching typical astronomical scales: 1 parsec near clip
  // (prevents nearby masers from clipping); 10000 parsec far clip
  // (accommodates distant OB associations).
  perspective(fov, width / height, 1, 10000);

  // Camera controls - orbitControl handles movement while pressed.
  if (!mouseIsPressed) {
    camera(initialCamX, initialCamY, initialCamZ, initialLookAtX, initialLookAtY, 0, 0, 1, 0);
  } // orbitControl handles the rest

  print(mouseX, mouseY);
  ...
}

Second, since I needed to move across space, it is important that all stellar objects are drawn every frame, so that when the user moves the cursor, the view of all stellar objects adjusts accordingly. The original code, where each data point was drawn once per frame, needed to be adjusted. Processing the coordinates of every stellar object was moved into a function called from setup() instead of draw(). An array of stellar objects holds the data, with each element being an object storing that object’s position (xpos, ypos, zpos) and a size determined by its distance from the viewer.

function findMinMaxXYZAndProcessData() {
  let singleRow = [];

  // First pass: find the min and max of each coordinate, so the
  // mapping below uses the full, final range of the data.
  for (let csvRowNumber = 1; csvRowNumber < strings.length; csvRowNumber++) {
    // get a single row and split it into individual fields
    singleRow = split(strings[csvRowNumber], ",");

    // Fields 5, 6, and 7 hold the x, y, and z coordinates
    // (in parsecs), so they should be numerical:
    let x = float(singleRow[5]);
    let y = float(singleRow[6]);
    let z = float(singleRow[7]);

    // The file may be missing a field, in which case
    // the conversion to a float might have failed
    if (isNaN(x) || isNaN(y) || isNaN(z)) {
      print("conversion to float failed; skipping row " + csvRowNumber);
      continue;
    }

    // Initialize min/max on the first valid row.
    if (minX === undefined || x < minX) minX = x;
    if (maxX === undefined || x > maxX) maxX = x;
    if (minY === undefined || y < minY) minY = y;
    if (maxY === undefined || y > maxY) maxY = y;
    if (minZ === undefined || z < minZ) minZ = z;
    if (maxZ === undefined || z > maxZ) maxZ = z;
  }

  // Second pass: map each row into canvas space now that the
  // min/max values are final, and store the results.
  for (let csvRowNumber = 1; csvRowNumber < strings.length; csvRowNumber++) {
    singleRow = split(strings[csvRowNumber], ",");
    let x = float(singleRow[5]);
    let y = float(singleRow[6]);
    let z = float(singleRow[7]);
    if (isNaN(x) || isNaN(y) || isNaN(z)) continue;

    // Calculate mapped positions *here* (during setup).
    let xpos = map(x, minX, maxX, -width / 2, width / 2);
    let ypos = map(y, minY, maxY, -height / 2, height / 2);
    let zpos = map(z, minZ, maxZ, -width / 2, width / 2);
    let screenZ = map(z, minZ, maxZ, 8, 1); // size based on distance

    // Create an object to store the data for this stellar object.
    stellarObjects.push({
      x: xpos,
      y: ypos,
      z: zpos,
      size: screenZ,
      // Color, type, or other attributes from the CSV could be added here.
    });
  }

  print("X (min, max) = (" + minX + "," + maxX + ")");
  print("Y (min, max) = (" + minY + "," + maxY + ")");
  print("Z (min, max) = (" + minZ + "," + maxZ + ")");
} // end of findMinMaxXYZAndProcessData
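
The per-frame rendering that consumes this array isn’t shown in the excerpt. A minimal sketch of what that loop might look like inside draw(), after the camera and perspective setup (the sphere shape and coloring are assumptions on my part):

  // inside draw(): render every precomputed stellar object
  noStroke();
  fill(255);
  for (let s of stellarObjects) {
    push();
    translate(s.x, s.y, s.z); // position in mapped parsec space
    sphere(s.size); // size was precomputed from distance
    pop();
  }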

Third, my text in WebGL rendered backwards. I made a logical guess that this was because of the view (I was behind the stellar objects rather than in front, or my z-coordinates were flipped the other way). I made an attempt to fix this, but it didn’t work. So with the help of AI, I learned to reset to an orthographic projection and the default camera just for the text, after which the text read left to right.

// --- 2D Overlay (Text) ---
// Disable depth test for 2D overlay
drawingContext.disable(drawingContext.DEPTH_TEST);

// Reset the camera for 2D drawing (orthographic projection)
ortho();
camera(); // Reset to default camera

// Draw the textScreen
image(textScreen, -width / 2 + 20, -height / 2 + 20);

// Re-enable depth test for 3D rendering in the next frame
drawingContext.enable(drawingContext.DEPTH_TEST);
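
For reference, textScreen here is presumably an off-screen p5.Graphics buffer created during setup(); drawing the text into a 2D buffer and overlaying it with image() is what keeps it unmirrored. A sketch of how it might be created (the dimensions, name, and label are placeholders):

let textScreen;

function createTextScreen() {
  textScreen = createGraphics(300, 80); // 2D off-screen buffer
  textScreen.fill(255);
  textScreen.textSize(14);
  textScreen.text("Drag to orbit the stars", 10, 20); // placeholder label
}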


3. Embedded Sketch

4. Reflection and Areas for Future Improvement

This project allowed for vast improvement in my 3D data visualization skills and was a valuable opportunity to use WebGL for 3D animation in a personal project for the first time. An area for future improvement could be animating the stellar objects in motion, as if in orbit.

Week 4 – Reading Response

Week 4 – The Design of Everyday Things: The Psychopathology of Everyday Things

In Chapter 1, Don Norman introduces the idea of human-centered design (HCD) and claims that a well-designed object should be intuitive and easy to use. He emphasizes how poor design may lead to confusion and frustration, which highlights that when people are struggling to use a product, it’s often the fault of the designer, not the user. Additionally, there are a few key principles that were introduced including affordances, signifiers, feedback, and conceptual models. Affordances refer to the properties of an object that indicate how it should be used (a handle affords pulling). Signifiers give clues about how an object functions, such as labels or icons. Feedback ensures that users receive a response when they interact with a design (a light turning on when a button is pressed). Lastly, conceptual models help users predict how an object will behave based on previous experiences. Norman critiques some common usability failures, such as confusing doors (push vs. pull) and complicated appliances. He believes that a well-thought-out design prioritizes clarity, simplicity, and usability rather than just aesthetics.

His valuable insights point out how much bad design impacts daily life. Usually, people will blame themselves for struggling with an object when, in reality, the issue lies in poor design choices. This made me think about how many times I’ve encountered objects that don’t work the way I expect them to, especially digitally, where buttons are unresponsive. The chapter reinforces the idea that usability should always come before aesthetics in design.

One thing that I find annoying is some modern car systems. Many car manufacturers have replaced physical buttons with touchscreen controls for functions like climate control. The reason this is frustrating is that they often have poor menu structures, which require you to open multiple tabs for simple tasks like changing the AC temperature. There are various ways to improve this, one being to simply bring back physical buttons for frequently used functions like temperature, volume, and hazard lights. Another is the use of haptic feedback on touchscreens so users can feel when they’ve selected something.

Applying Norman’s principles to interactive media can significantly improve usability and user experience. For example, buttons should look clickable, and links should be distinguishable. Loading animations, confirmation messages, or vibrations on a touchscreen can signal that an action has been completed. Similarly, error messages should be clear and instructive, telling users what went wrong and how to fix it. By prioritizing usability over aesthetics, designers are able to create interactive media that feels intuitive rather than frustrating.

Week 4 – Generative Text (Infinite Rain)

Concept

For this project, I chose generative text but wanted to take a unique approach. Instead of displaying standard text, I decided to present the Greek alphabet in an interactive and visually engaging way. Finding all the symbols online was surprisingly easy. The inspiration for the rain-like effect came from the current rainy weather, adding a dynamic element to the design. I also incorporated interactivity and animation while making use of arrays, classes, and loops to enhance the experience.

Code Highlights

Working on this assignment introduced me to new JavaScript features and functions, such as splice() for managing arrays and the fourth argument to background(), which allows for transparency effects. These discoveries helped refine the animation and improve the overall visual aesthetic. Some key blocks of code demonstrating these techniques are included below.

// Remove symbols that reach the bottom of the canvas
if (particles[i].y > height) {
  particles.splice(i, 1); // take the finished particle out of the array
}

function draw() {
  // the fourth argument makes the background translucent,
  // leaving fading trails behind the falling symbols
  background(10, 10, 30, 100);
  // ...
}
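
For context, each falling symbol could be an instance of a small class along these lines (a hypothetical sketch, since the full class isn’t shown; the Greek letters come from a simple string):

const GREEK = "αβγδεζηθικλμνξοπρστυφχψωΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩ";

class GreekSymbol {
  constructor() {
    this.x = random(width);
    this.y = random(-height, 0); // start above the canvas
    this.speed = random(2, 6);
    this.char = GREEK.charAt(floor(random(GREEK.length)));
  }
  update() {
    this.y += this.speed; // fall at a constant per-particle speed
  }
  show() {
    text(this.char, this.x, this.y);
  }
}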

Reflections & Future Improvements

I enjoyed working on this assignment and gained valuable insights into generative art. However, I originally wanted to explore data visualization but faced challenges in loading data effectively. Moving forward, I plan to practice more with data handling so I can confidently incorporate it into future projects. Additionally, I aim to experiment with more complex interactivity to further enhance the user experience.

PRESS C TO CHANGE COLOR, G TO INCREASE SPEED, CLICK MOUSE TO ENJOY🙂


Week 4: Reading Response

While reading the text, one thing I felt that Norman did not explicitly mention is the lack of instructions for devices and electronics, especially when their functionality is not immediately obvious. Many products assume that users will either figure things out through trial and error or consult a manual, but in reality, unclear controls often lead to confusion and frustration. For example, I own a digital camera, but I still have no idea how to zoom properly or what the different shooting modes actually do. There are no clear signifiers on the buttons, and the camera does not provide immediate feedback or guidance. Similarly, the projectors we have in the Baraha rooms can be difficult to use because the wall-docked interface offers multiple input options with no clear instructions on which mode is appropriate for what. Users are left guessing, which can lead to delays and unnecessary trial and error.

This lack of guidance can also be an issue in interactive media. Many systems expect users to learn how to navigate and use features on their own, but this often results in important functions being left undiscovered, reducing the overall usability of the system. This is where Norman’s principles of discoverability, feedback, and conceptual models become essential. If interfaces and products clearly communicated how to use them, users would not have to struggle with unnecessary complexity. By applying Norman’s ideas, interactive media could be improved by incorporating better signifiers, immediate feedback, and clearer mappings to ensure users understand what actions are possible. Whether in physical devices like cameras and projectors or in digital interfaces, design should guide users intuitively rather than forcing them to figure everything out on their own.

Week 4 – Reading Reflection

One thing that frustrates me is the complexity of using ACs on campus. Despite using them regularly, I have never fully mastered their functions beyond adjusting the temperature. This might seem minor, but it highlights a larger issue—many devices include extra features without clear instructions or intuitive design. Manufacturers often add numerous buttons and settings beyond a device’s core function, but without proper labeling or guidance, users are left guessing. For example, dryers come with multiple buttons beyond basic power and fabric selection, yet their purposes are unclear. Without clear signifiers, users may misuse the device, leading to frustration or even damage. A simple improvement would be better labeling, intuitive icons, or brief explanations to enhance usability and ensure users get the most out of these everyday appliances.

With respect to interactive media, one beneficial concept from the reading is the importance of contributions from people in other disciplines. As Norman states toward the end of the reading, “Design requires the cooperative efforts of multiple disciplines.” I could seek feedback from non-CS students or those without an interactive design background to determine whether my interface is easy to navigate without prior technical knowledge. For example, art students may provide insights on clarity and aesthetic appeal, ensuring the design is both functional and visually engaging. By actively involving diverse perspectives, I can create interactive media that is more intuitive, inclusive, and user-friendly, making my designs accessible to a wider audience rather than just functional for me.

Week 4 – Generative Text

Concept:
This piece is meant to imitate a page turning, but instead of the expected behaviour it slides a new page down from the top. The text itself is pulled from a .txt file, which contains bits of placeholder text (Lorem ipsum dolor) as well as snippets of text from the p5.js reference that pertain to implementing text.

Highlight:
My highlight is still fairly simple, but I chose it since I often forget to include bounds checking or error handling in my code and end up causing myself problems down the road. This snippet checks to make sure that the newly created page is not receiving empty input for its inner text, and cleans up the pages array once a certain number of pages have been created while leaving only the most recent instance.

// Create a new page on click
let newText = txt[floor(random(txt.length - 1))];
let tries = 0;
// Make sure the new page's text is not empty
while (!newText && tries < 10) {
  newText = txt[floor(random(txt.length - 1))];
  tries++;
}
// Cull older pages at some upper limit, keeping only the newest
if (pages.length >= 10) {
  pages = subset(pages, pages.length - 1, 1);
  console.log("reset");
  console.log(pages);
}

Embed:

Reflection:
I’m not super satisfied with this piece, doubly so since I tried a few different concepts that I was unable to get working properly. I had issues getting the data to load from a file, and only managed to get it to work once I pivoted to this idea. I had also wanted the sliding direction to change between the top, sides, and bottom, either randomly or in a clockwise order. This week was a bit rough in terms of workload, so I was unable to spend as much time adding features to this piece as I would have liked.

Week 4 – Generative Text + Reading

For the task of creating generative text, I thought of representing historical conceptual frameworks of the machine, creating a journey through the development of this conceptualization, drawing on literature such as Turing’s ‘Computing Machinery and Intelligence’ (1950) as well as Isaac Asimov’s exploration of social concepts through machines and robots as a literary medium.

I aimed to extend this literature into an interactive journey, where the user’s mouse movements influence the tone of the sentences generated, transitioning the text toward a more hectic state.


I am particularly proud of being able to base the generative text on a ‘mood’ value, which changes how the text is displayed. This required several iterations to nail down:

choosePhrase(type) {
  if (this.mood >= 0.9) {
    // very pessimistic phrases at high mood values
    return random(this[`veryPess${type}`]);
  } else if (this.mood > 0) {
    // regular pessimistic phrases
    return random(this[`pess${type}`]);
  } else {
    // optimistic phrases by default
    return random(this[`opt${type}`]);
  }
}

The optimistic phrases display by default, but the method lets the text change dynamically, which serves the purpose and intent described above. Other challenges I faced were keeping the text within the correct boundaries, and simply deciding which sentences to formulate and how they would be perceived by the user, to improve the interaction as much as I could.
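
The mood value itself could be driven by the mouse in many ways; one plausible mapping (an assumption on my part, since the actual formula isn’t shown; generator stands for the object that owns mood and choosePhrase()) eases the mood toward the cursor’s horizontal position each frame:

  // inside draw(): mood ranges from 0 (optimistic, left edge)
  // to 1 (very pessimistic, right edge)
  let target = constrain(mouseX / width, 0, 1);
  generator.mood = lerp(generator.mood, target, 0.05); // ease for smooth tonal shifts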


READING

Something that still intrigues me, and is of particular interest, is the conceptual framework of interaction and the mediums by which we express it. There is no agreement on one single concept of interaction. As a consequence, the mediums through which interaction can be facilitated are limited by our agreement on, and interpretation of, the conceptual framework at hand. I believe it is difficult to reason about the other fundamental notions presented by Norman without a detailed analysis of the causality in human-centered design, its reception, and, as a consequence, the dynamic improvement of this concept in the literature.

I can apply the principles and notions presented by Norman in two ways. First, by utilizing the methodology with which he discusses these concepts. Second, by using his critical analysis of psychological concepts of human interaction and perception to gain a deeper understanding of how the end-user interacts with my generative artwork or projects. In fact, this part of Chapter 1 is of key interest: the analysis of design takes a methodological approach, viewing instances of design and how humans perceive them, while hinting at psychological notions and referring to psychological literature. This also points toward a more experiential view of this principle of design (whether it is human-centered or centered on something else).