Week 4 Visualizing Data

My Concept

For this project, I wanted to create a data-driven timelapse showing airports across Europe. Each airport is represented as a circle whose size corresponds to the number of passengers in a given week. The idea is to make busier airports stand out visually while showing the flow of air traffic over time. I got the idea while looking for datasets on Kaggle. The first dataset I found contained daily flight information for hundreds of European airports from 2016 to 2022. The problem was that it didn’t contain the coordinates of the airports, which I needed to plot them in p5 as I envisioned. Thankfully, I found a second dataset that contains the coordinates of airports from around the world. I decided to remove all the non-European airports from the second dataset, then cross-filter the two datasets by airport code to make a third CSV file with all the information I needed before moving to p5.

Code Highlight

Since I wanted to create a third dataset, I needed to clean out any irrelevant, redundant, or incomplete columns, then merge the two CSV files. To remove the non-European airports from the coordinates file, I used the C++ program below, which reduced the row count from the original 83,000 to a couple of hundred.

// Helper to strip surrounding whitespace (including stray \r from Windows
// line endings). The original helper isn't shown; this is one possible version.
string trim(const string &s) {
    size_t start = s.find_first_not_of(" \t\r\n");
    size_t end = s.find_last_not_of(" \t\r\n");
    return (start == string::npos) ? "" : s.substr(start, end - start + 1);
}

// Keep only rows whose continent column is "EU"
while (getline(inputFile, line)) {
    stringstream ss(line);
    string ident, lat, lon, continent;
    getline(ss, ident, ',');
    getline(ss, lat, ',');
    getline(ss, lon, ',');
    getline(ss, continent, ',');
    continent = trim(continent);

    if (continent == "EU") {
        outputFile << line << "\n";
    }
}

I then wrote this code to take only the airports that appear in both flights.csv and airports.csv and write them to a third file, which I uploaded to p5.

while (std::getline(flightsFile, line)) {
    std::stringstream ss(line);
    std::vector<std::string> cols;
    std::string cell;

    while (std::getline(ss, cell, ',')) {
        cols.push_back(trim(cell));
    }

    if (cols.size() < 6) continue; // skip malformed rows

    std::string icao = cols[3];

    // Skip row if airport not found in airportMap
    auto it = airportMap.find(icao);
    if (it == airportMap.end()) continue;

    // Write row with coordinates
    outFile << cols[0] << "," << cols[1] << "," << cols[2] << ","
            << cols[3] << "," << cols[4] << "," << cols[5] << ","
            << it->second.lat << "," << it->second.lon << "\n";
}

airportsFile.close();
flightsFile.close();
outFile.close();

std::cout << "Merged CSV with coordinates created successfully.\n";
return 0;

Finally, in p5, I looped through all the airport data and positioned the airports based on their coordinates mapped to the canvas size. Then I made each airport’s size correspond to its traffic using power-of-log scaling, which makes the less busy airports extra small and the busy airports extra big so the contrast is clearer.

// Map coordinates and size
for (let i = 0; i < airports.length; i++) {
  airports[i].x = map(airports[i].lng, minLng, maxLng, 50, width - 50);
  airports[i].y = map(airports[i].lat, minLat, maxLat, height - 50, 50); // Inverted Y because p5 coordinates run top to bottom

  // Power-of-log scaling for exaggerated size
  airports[i].size = map(
    pow(log(airports[i].passengers + 1), 2),
    0,
    pow(log(maxPassengers + 1), 2),
    2,
    60
  );
}

Sketch

Reflection

I was impressed by how even large datasets can be visualized effectively with only a few lines of code once the mapping is set up. Using loops and data structures allows complexity to emerge naturally. This project reinforced how math and data can drive visual design, and how small adjustments, like using a logarithmic scale instead of linear mapping, can completely change how readable a visualization is.

For future improvements, I would like to make the timelapse smoother, add color coding by country or region, and include tooltips or labels when hovering over airports so users can see exact values. I could also allow filtering by week or year to explore trends more interactively. These enhancements would make the visualization more engaging and informative.

Week 4 Reading Response

I totally agree with the author’s essential idea that design is all about balancing competing priorities, including usability, attractiveness, cost, reliability, manufacturability, marketability, and so on. I realized through the reading that nowadays people care more about aesthetics than functionality, which undermines the usability of a product. Although aesthetics are quite important for attracting a larger audience, I truly think that preserving usability is a must and should never be compromised; otherwise, we, as producers, risk confusing users about how to use the product. As someone who likes minimalistic design, in my future projects in interactive media I would like to keep the design simple and modern while still being attractive to users. Muji is a great example: its product designs are super simple, yet their usability is pretty good. Thus, I would like to balance usability and attractiveness the way Muji does.

I also realized that what’s being discussed in this reading is quite similar to the software development life cycle I learned in my software engineering class. Back in the old days, communication between clients and producers was not prioritized. However, in modern days, we use the agile method, where communication among users, developers, and business people is the most important factor in increasing usability and satisfying users’ expectations. I drew a connection here with the reading in the sense that we put more emphasis on facilitating communication to better understand and design the product itself. 

Something that drives me crazy is a washlet. As a Japanese person, I’m proud that so many foreigners are fond of it. But I don’t know how to use it even to this day. It has too many buttons, and sometimes there are buttons called “massage,” which I’m scared to press because I cannot imagine how a toilet would massage me while sitting. Also, there are multiple buttons to spray water, so I don’t know from which part of the toilet the water will spray. I’m just scared of getting wet and confused. I wish it could reduce the number of buttons for usability and add some simple text to explain what will happen when I press a certain button.

Week 4 Reading Reflection

One everyday thing that drives me crazy is elevators with touch-sensitive buttons instead of physical ones. At first glance they look sleek, but they are often unresponsive. You tap once, nothing happens, so you tap harder, and suddenly it registers twice, which cancels your selection. Sometimes there is no clear feedback on whether your press was accepted, especially when the light is dim or there is no sound. In moments of hurry, like trying to catch the elevator before the doors close, that lack of immediate response is frustrating. Norman reminds us that confusion is usually the fault of design, not the user. In this case, the interface lacks both clear mapping (what action triggers what response) and adequate feedback. A simple return to tactile buttons, or at least a haptic or sound cue with instant visual confirmation, would solve most of these issues.

Applying Norman’s principles to interactive media, it is important to always consider feedback and signifiers. In an interactive project, the user should never feel uncertain about whether their action had an effect. For example, if I build a media sketch where users drag an object, the object should move right away with a slight animation or glow to show it is active. If there is a clickable option, it should visibly change on hover or after being pressed. These cues help show users what they can do without needing instructions. In interactive media, even subtle details like sound effects, small motion, or color changes can provide clarity. I believe the best digital designs are the ones where users do not have to stop and think, wait, did that work, but instead feel guided naturally from one step to the next.

Week 4 Reading Reflection

Something that drives me crazy, and I noticed it a lot here during Professor Goffredo Puccetti’s wayfinding class, is when signs are misplaced or point the wrong way. We used to find arrows that looked like they were telling you to turn when you were actually supposed to keep going straight, or signs that were posted too early or too late to actually help. For some reason, it always felt like they were almost mocking us, because instead of guiding, they just created hesitation. I think the only real fix is testing them in the real moment. You can’t design a hallway sign by looking at a blueprint in an office. You have to walk that path, see where people naturally pause, and place the sign right where the choice needs to be made.

Norman’s ideas connect to this really well, and I think they translate to interactive media almost directly. A misplaced sign feels like a button on a website that looks clickable but isn’t, or a menu gesture that does something different than you expect. Norman talks about affordances and signifiers, and those matter so much online. Buttons should actually look tappable, gestures should be hinted at with little cues, and feedback should come right away. Even a tiny animation or sound can reassure someone that the system heard them, the same way a well-placed arrow reassures you that you’re still going the right way in a building.

For me, applying Norman’s principles to interactive media is really about remembering that people use things while distracted, busy, or in motion. Just like we tested signs while walking, I think apps and websites should be tested while people are juggling other tasks. Mapping should feel natural—swiping left should move left, clicking save should clearly save—and the system’s “model” should always be consistent. What I really took from both the chapter and that class is that when people get lost, whether in a hallway or an app, it’s almost never their fault. It’s the design that failed to guide them. That reminder makes me want to design things that feel clear, forgiving, and human.

Week 4: Reading Reflection

Intuition guides our everyday lives. Most applications and devices that we run into are ones we are not familiar with, yet we are able to interact with them and intuitively figure out how to use them. This can be attributed to unspoken rules that we have gotten used to over the years: a button is to be pushed, and a knob is to be turned. These are affordances derived from our intuition and guided by signifiers and feedback, which together let designers make devices usable by everyone. An interaction that drives me crazy, due to its lack of all of the above, is trying to find my way around the public transportation of some locations, especially underground trains where there are not enough signs indicating which train is coming or which platform to take. Often the maps are cluttered or outdated, making it even harder to find my way. This lack of clear signifiers and feedback makes it difficult to form a reliable mental model of the system, making it impossible to wayfind intuitively without searching things up or asking someone around you.

When it comes to interactive media, this reading opened my eyes to the lack of intuition, clear instructions, and signifiers in how to use and navigate my work so far. Most of my pieces depend on the user having prior knowledge, gained by reading the concept or speaking to me. For example, this week my project has an interaction where the user can switch a book by clicking it, which can only be discovered by chance if the user hasn’t read the concept. However, the switching of the book could count as feedback, clarifying that the action of clicking does make a change. Moving forward I’d like to make my work more intuitive to interact with, so that the user can identify the purpose and the next move upon coming across a piece of mine. I’m looking to explore strong signifiers through visual cues or micro-interactions that lead the user naturally without having to be told beforehand. Such cues would create a seamless and engaging experience where users can independently go play and interact with the system. At the end of the day, the works we are creating are very user-focused most of the time, and keeping the user’s perspective in mind might be more important here than in most other art forms.

Data Visualization

Concept:

My work for this week was inspired by a website called Receiptify, which takes your Spotify data, compiles your listening statistics, and displays them in the format of a receipt. That is data visualization in itself; while I didn’t make a version of it, I used it to generate my dataset. I asked it for my top 50 songs of the month, then made those into a dataset, which I uploaded to p5.js. I was racking my brain for different ways music can be represented visually, and then I saw that one of my suitemates got coasters that look like vinyls, which gave me the idea to represent the songs as vinyls. Most of us have seen people use vinyls as room decor rather than for playing music; this work is sort of a spin on that (pun intended).

Part(s) I’m Proud of:

Note: To be completely honest, I did consult ChatGPT for a few things, just to make the process less overwhelming.

1- I remembered Professor Aya saying to try not to hardcode variables, and I am proud that I thought of this part. I decided not to hardcode the number of rows in my grid, in case I want to edit the number of songs in my dataset in the future.

//compute rows automatically based on number of songs
//not hard coded in case i want to change the number of
//songs in the future
//ceil cause we need to round up
let rows = ceil(records.length / cols);

2- I made it so that the vinyls expand when you click on them, and I am proud of my implementation because it makes the audience experience less static; you’re not just visualizing the data, you’re also interacting with it.

//when mouse is pressed
function mousePressed() {
  //loop through each vinyl
  for (let i = 0; i < records.length; i++) {
    //check if mouse is inside vinyl
    if (dist(mouseX, mouseY, records[i].x, records[i].y) < 25) {
      //deselect if same vinyl clicked
      if (activeIndex === i) activeIndex = -1;
      //otherwise set this as active
      else activeIndex = i;
    }
  }
}

Here’s the sketch:

Reflection: 

There’s always room for improvement. If I spent more time on this, I’d probably make the rest of the vinyls disappear when a vinyl is clicked. Instead of just a color per artist, I could make it more realistic by adding album covers. A possible full upgrade would be making it almost a game where the user chooses between these vinyls, then we get an animation of a vinyl actually being played through a player, with the actual song playing. It would be a different way to interact with music, as opposed to seeing it as a huge list of strings, as we do on Spotify.

Assignment 4 – Generative Text

Concept

Well, so I wanted to create something that would bring words to life based on their emotional weight. The idea came to me while I was writing an essay and thinking about how certain words just hit differently: “thunder” feels more intense than “whisper,” for example. So I built a visualization that reads emotion data from a CSV file and animates words according to their intensity levels. Each word gets its own personality through movement: high-intensity words explode across the screen in chaotic circles, medium-intensity ones flow like waves, and gentle words just float peacefully. The colors and sizes change too, so you can literally see and feel the emotion in each word. Every 3 seconds it switches to a new word automatically, but you can also hit the spacebar to jump ahead if you’re impatient (like me).
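The auto-advance logic can be sketched as a couple of pure helpers. This is a minimal sketch, not my exact code: `WORD_INTERVAL_MS`, `shouldAdvance`, and `nextIndex` are hypothetical names, and in p5 you would feed in `millis()` rather than a raw timestamp.

```javascript
// Advance to the next word every 3 seconds, or instantly on spacebar.
const WORD_INTERVAL_MS = 3000; // hypothetical name for the 3-second interval

// Pure helper: given the current time and the time of the last switch,
// decide whether it's time to advance to the next word.
function shouldAdvance(nowMs, lastSwitchMs, intervalMs = WORD_INTERVAL_MS) {
  return nowMs - lastSwitchMs >= intervalMs;
}

// Cycle through the word list, wrapping around at the end.
function nextIndex(currentIndex, wordCount) {
  return (currentIndex + 1) % wordCount;
}
```

In the p5 `draw()` loop you would check `shouldAdvance(millis(), lastSwitch)` every frame, and in `keyPressed()` force an advance when the spacebar is hit.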

Highlight of the code I’m proud of

The part that really clicked for me was figuring out how to make each letter move independently while still keeping the word readable. At first, all my letters were just clumped together or flying off in random directions, but then I realized I needed to treat each letter as its own little character with its own animation offset.

// Animate each letter individually
for(let i = 0; i < word.length; i++) {
  let x, y, size;
  
  if(intensity >= 80) {
    // High energy - explosive circular movement
    let angle = time * speed + i * PI/3;
    let radius = sin(time * 6 + i) * intensity * 0.5;
    x = cos(angle) * radius * 0.4 + i * 40 - (word.length * 20);
    y = sin(angle) * radius * 0.3;
    size = 45 + sin(time * 8 + i) * energy * 15;
  }
  // ... more animation types
}

Using i * PI/3 and i * 0.6 as offsets made each letter follow the same pattern but at slightly different phases, so they stay connected as a word but each one has its own rhythm.

I’m also pretty happy with how I handled the CSV parsing. I decided to do it manually instead of using a library.

// Parse CSV manually - split into lines then extract values
const lines = csvText.trim().split('\n');
const headers = lines[0].split(',');

// Process each data row (skip header row)
for (let i = 1; i < lines.length; i++) {
  const values = lines[i].split(',');
  csvData.push({
    word: values[0],
    intensity: parseInt(values[1]),
    color: [
      parseInt(values[2]),  // red
      parseInt(values[3]),  // green
      parseInt(values[4])   // blue
    ]
  });
}

Sketch

The visualization cycles through emotional words automatically; press the spacebar to advance manually.

Future Improvements

There are definitely some things I want to tackle next. First, I’d love to add sound. Another idea is to let users upload their own word lists or even type in words manually to see how the system interprets them. Right now it’s limited to my CSV file, but it would be cool to make it more interactive. I also want to experiment with particle effects – maybe letters could leave trails or break apart into smaller pieces for really explosive words.

The color system could be smarter too. Right now I’m manually assigning colors, but it would be interesting to generate them automatically based on the word’s emotional category or intensity. Maybe cooler colors for calm words and warmer ones for energetic words, or even colors that shift gradually as the intensity changes.

Week 4: Data Visualization

Concept

Through this work I wanted to create a virtual library where a dataset of books is visualized in a dynamic way. I explored the available datasets in search of one that includes the genre of every book, then integrated that into the visualization: every genre has its own representative color, making the format more effective at revealing something about each book at a glance. To accommodate a larger number of books within the dimensions of the work, the piece is interactive: pressing a book swaps it out for another, changing the title, author, and color to convey the details of the newly selected book. The screen displays nine books at a time, enough not to overwhelm the user, but not so few that too many books are excluded. The main plan is to create an interactive and engaging visualization of a book dataset that helps a user explore it for recommendations or discover the range of books available.
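The swap idea can be sketched like this (a minimal sketch with hypothetical names, not the actual class): each of the nine on-screen slots holds an index into the full dataset, and a click replaces that slot’s book with the next one not already displayed.

```javascript
// Nine on-screen slots, each holding an index into the full book dataset.
// Clicking a slot swaps in the next book that isn't already on screen.
function swapBook(slots, slotIndex, totalBooks) {
  // start searching just past the highest index currently shown
  let candidate = (Math.max(...slots) + 1) % totalBooks;
  // skip candidates already displayed (wraps around the dataset)
  while (slots.includes(candidate)) {
    candidate = (candidate + 1) % totalBooks;
  }
  const updated = slots.slice(); // don't mutate the original array
  updated[slotIndex] = candidate;
  return updated;
}
```

In the sketch itself, the click handler would find which book cover contains the mouse position and call something like `swapBook` for that slot, then redraw the title, author, and genre color.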

Highlight Code

wrapText(txt, cx, cy, maxW, maxH) {
  let words = txt.split(" "); //split text into words
  let lines = [""]; //initialize lines array

  //build lines until they exceed width
  for (let w of words) {
    let test = lines[lines.length - 1] + (lines[lines.length - 1] ? " " : "") + w; //add word to current line
    if (textWidth(test) <= maxW) lines[lines.length - 1] = test; //fits line, add it
    else lines.push(w); //doesn't fit, start new line
  }

  //limit lines to available height
  let lineH = textAscent() + textDescent(); //line height
  let maxLines = floor(maxH / lineH); //max number of lines
  if (lines.length > maxLines) {
    lines = lines.slice(0, maxLines); 
    lines[maxLines - 1] += "..."; //truncate last line with ellipsis
  }

  //draw centered vertically
  let startY = cy - ((lines.length - 1) * lineH) / 2; //calculate starting y position
  for (let i = 0; i < lines.length; i++) {
    text(lines[i], cx, startY + i * lineH); //draw each line
  }
}

I’m most proud of the wrapText function within the book class, due to its ability to solve a formatting problem I faced: lengthy book titles not fitting into the small space allocated on the book. It tokenizes the input string by splitting on spaces, then builds lines that respect the maximum width. For titles that were still too long to fit, I first tried using a smaller font, but the lines became too difficult to read. So instead I decided to truncate the text with an ellipsis once it exceeds the available height, avoiding an unreadably small font size. This ensured that all the titles fit into the book cover and remain visually coherent and uniform.

Embedded Sketch 

Reflection 

Through this assignment I’d say I learnt a lot about data visualization and about accommodating a large amount of data that comes with its own attributes, like the different lengths and formats of the book titles. It taught me how to alter code written for a small, hand-picked set of data so that it works with a larger, more diverse one, and in turn the importance of dynamic code that adapts to different inputs. For further assignments and projects I’d like to build on these lessons by integrating more interactivity and storytelling, so there is a more dynamic path to take rather than the current one, where only a single aspect can be changed by the user.

Reading Reflection Week #4

When Norman was talking throughout the reading about frustrations with design, the first thought that popped into my head was those very complex, ‘interactive’ online shopping sites. I put ‘interactive’ in quotation marks because the design lags, which defeats the whole point of the experience, and I feel like they value aesthetics more than user experience. As a user, I’m just trying to look for clothes; why make it so complicated for me? When Norman began to explain HCD, I remembered the workflow the company I interned at this summer used for its development department. The company was in charge of an Enterprise Resource Planning system. Suppose you’re a client and you raise a ticket. The ticket goes through Presales, then Analysis, where the development time is estimated and the business need is highlighted by a functional consultant; finally, it’s picked up by a developer. After the code is written and reviewed, it’s pushed to functional testing before it goes to the client. This flow ensures the code is fully tested, minimizing the probability of the client encountering errors.

In terms of applying the author’s principles to interactive media, I think that especially while we are learning the technology (like p5.js), it’s very easy to lose track of the aesthetics or forget about user experience. There’s a sacrifice to be made, or maybe a balance to be found, in prioritizing user experience over aesthetics, since aesthetics are part of the user experience as well. Take websites, for example: the likelihood of wanting to use a poorly decorated website is slim; it may work perfectly, but it wouldn’t look appealing. At the other end of the spectrum is the example I gave earlier, where aesthetics completely take over and the user experience is no longer enjoyable.

Assignment 4 Data Visualization

My Concept

I wanted to make a visualization that’s both fun and interactive, inspired by something I love: music. I picked the top 50 Spotify songs of 2023 because I thought it would be exciting to explore popular tracks visually. The idea was to take a simple concept, songs with different features like energy, mood, popularity, and danceability, and show it in a creative and engaging way instead of a boring, standard chart. I wanted people to notice patterns and interact with the data while keeping it simple and playful. In my visualization, the size of each circle represents popularity; the color represents danceability (pink is the least danceable, blue the most, and purple in between); the higher the circle, the happier the song’s mood; and the further to the right, the more energy the song has. The dataset I used was from Kaggle: https://www.kaggle.com/datasets/yukawithdata/spotify-top-tracks-2023.
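The four encodings can be sketched as a pure function. This is a minimal sketch with hypothetical names (`mapRange`, `encodeSong`), assuming the audio features are normalized to 0–1 and popularity to 0–100; in p5 you would use the built-in `map()` and `lerpColor()` instead.

```javascript
// Linear re-mapping of a value from one range to another (like p5's map()).
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Encode one song as a circle: x = energy, y = mood (valence),
// size = popularity, color = danceability (pink -> purple -> blue).
function encodeSong(song, width, height) {
  const pink = [255, 105, 180]; // least danceable
  const blue = [65, 105, 225];  // most danceable
  const t = song.danceability;  // assumed already in 0..1
  return {
    x: mapRange(song.energy, 0, 1, 50, width - 50),
    // invert y so happier songs sit higher on the canvas
    y: mapRange(song.valence, 0, 1, height - 50, 50),
    size: mapRange(song.popularity, 0, 100, 10, 60),
    // component-wise blend from pink to blue (purple in between)
    color: pink.map((c, i) => Math.round(c + (blue[i] - c) * t)),
  };
}
```

With this in place, drawing the chart reduces to calling `encodeSong` for each row of the table and passing the result to `ellipse()` and `fill()`.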

A Highlight of Some Code That I’m Particularly Proud Of

One part I’m really proud of is the hover effect that shows the song title and artist. It makes the visualization feel more interactive and personal because you can actually explore the songs instead of just seeing circles on a canvas. The code works by checking the distance between the mouse position (mouseX, mouseY) and the center of each circle (c.x, c.y). If the distance is smaller than the circle’s radius (size / 2), it means the mouse is hovering over that circle, and then the text appears above it. I used fill(0) to make the text black, textSize(12) to set the size, and textAlign(CENTER) so it’s centered nicely over the circle.

The ${} syntax is called a template literal in JavaScript. It lets you insert variables directly into a string, so ${songs.getString(i, 'track_name')} pulls the song name from the table and ${songs.getString(i, 'artist_name')} pulls the artist. Using it this way makes it easy to combine them into one neat line of text without writing extra code to join them. I like this part because it’s simple, but it really makes the visualization interactive and fun to use.

// if mouse is hovering over the circle, show the song name and artist
if (dist(mouseX, mouseY, c.x, c.y) < size / 2) {
  fill(0);
  textSize(12);
  textAlign(CENTER);
  text(`${songs.getString(i, 'track_name')} - ${songs.getString(i, 'artist_name')}`, c.x, c.y - size);
}

Reflection and Ideas for Future Work or Improvements

I’m happy with how the visualization turned out because it’s simple, colorful, and interactive, and you can immediately see patterns in the data. In the future, I’d love to explore adding more interactivity, like filtering songs by genre or letting users compare two songs directly. I could also experiment with different visual encodings, like using different shapes to represent additional variables. Another idea is adding key press options to trigger effects or even including sound, so the circles could play a snippet of the song when clicked or hovered over. Overall, I think it’s a fun way to combine data and creativity, and it shows how you can turn something as tricky as data into a playful visual experience.