Reading Reflection – Week 5

The reading made me think back on how invisible software work used to be, and how easily important contributions can still be overlooked today, especially when they do not fit dominant expectations of who a “technical innovator” should be. The article highlights that software was not even considered important in early Apollo mission planning, which aligns with how many modern technological systems still undervalue behind-the-scenes digital labor. From my own experience studying technology and creative coding, I see a similar pattern: people often praise visible outputs (design, hardware, the final product) while ignoring the programming logic that makes everything function. This supports the author’s point that Hamilton’s work was revolutionary not only technically but conceptually, because she helped establish software as a legitimate engineering discipline. At the same time, the reading challenges my previous assumption that space exploration was mainly about hardware and astronauts; it made me reconsider how much critical decision-making and problem-solving actually happens in code and systems design.

However, the author may show some bias by framing Hamilton as a singular heroic figure, which risks simplifying the collaborative nature of large-scale scientific projects. While the article acknowledges teams and engineers, it still centers a narrative of individual genius, which is common in technical journalism and can overlook collective labor and institutional structures. This raises questions for me about how history chooses which contributors to highlight and which to marginalize. I also wonder whether the article’s emphasis on gender barriers, while important, might shape the story to fit a modern narrative about women in tech rather than fully exploring the technical debates and engineering processes of the time. The reading ultimately makes me question how innovation is thought of. Do we celebrate people based on their actual impact, or based on how well their story fits contemporary social values and narratives about progress and inclusion?

Reading Reflection – Week 5

I used to assume computer vision worked like human vision, just less advanced, but I realized the difference is much bigger. Human vision automatically understands meaning, like recognizing someone’s face, while digital video is “computationally opaque”: a camera image is just a pixel buffer with no meaning unless an algorithm like frame differencing, background subtraction, or brightness thresholding interprets it. I was surprised that simple techniques can detect motion just by comparing pixels between frames. Vision systems do not have to be complex; even basic detection can be powerful if the physical environment and the code are designed well together.
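Frame differencing really is that simple. Here is a minimal sketch of the idea in plain JavaScript (not code from the reading; detectMotion is a hypothetical helper operating on grayscale pixel buffers represented as arrays of 0–255 values):

```javascript
// Minimal frame differencing: motion is "detected" wherever the
// absolute difference between two frames exceeds a threshold.
function detectMotion(prevFrame, currFrame, threshold) {
  let changedPixels = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) {
      changedPixels++;
    }
  }
  // Report the fraction of the frame that changed
  return changedPixels / currFrame.length;
}

// Two tiny 2x2 "frames": only one pixel changes brightness sharply
const prev = [10, 10, 200, 200];
const curr = [12, 10, 90, 200];
console.log(detectMotion(prev, curr, 30)); // 0.25 — one of four pixels moved
```

The threshold is exactly where the environment matters: with controlled lighting, small camera noise stays below it, so only real movement registers.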

One example that stuck with me was Myron Krueger’s Videoplace. I found it really interesting that early interactive art already used vision tracking to let people draw with their bodies. It made me realize how computer vision can expand the ways we interact with technology. At the same time, Rafael Lozano-Hemmer’s work shows a more critical side. His belt-tracking piece turns surveillance into art, which made me wonder whether interactive work that tracks its viewers is also training us to accept being watched.

The reading left me questioning: if computer vision works best in environments made specifically so that the computer can easily detect things, does that mean future spaces will be designed more for machines than for us humans? Will we see more controlled lighting, infrared illumination, and retroreflective materials? This text definitely shifted my perspective from seeing computer vision as just a technical tool to seeing it as a cultural force that affects art and even social power.

Reading Reflection – Week 5: The Difference Between Computer and Human Vision

I found it quite interesting to see how computer vision actually differs from human vision. Initially, I assumed that computer vision, backed by all the knowledge we provide through AI, would be able to at least analyze what an image is. However, I was surprised to find that computers only really see grids of pixels and rely fully on mathematical algorithms to get a cleaner picture of what is on screen. Whereas we humans can distinguish an object from its background under different lighting, computers have a hard time telling that a shadow is just passing across a room.

However, with regards to tracking and surveillance, I would say it honestly opens up a world of possibilities: body tracking can serve as a controller for games and loads of interactive media artworks. The coolest example I’ve personally seen so far is Just Dance. It uses a camera for motion tracking so that it can accurately assess whether your dance moves match the computer’s example. Motion tracking isn’t just a gimmick there; it’s the crux of the game’s main functionality. The implementation gives you an accurate assessment of whether you followed the dance moves and provides instant feedback through sound effects, which is very useful. With regards to interactive media, this will allow people to interact with our art in a deeper way, so that they can genuinely feel immersed in the work.

Assignment 4: Reaction Times Experiment

For this assignment, I wanted to create something interactive but still simple enough to fully understand and explain. Instead of focusing on complex visuals, I decided to make a reaction time test, where the user presses the space bar as quickly as possible when the screen turns green. I liked this idea because the interaction itself generates the data, instead of relying on an external file.

The sketch starts with a short instruction screen. Once the user presses the space bar, the screen switches to a “wait” state for a random amount of time, and then turns green to signal the user to respond. The reaction time is recorded for each trial, and after a set number of trials, the sketch displays the results. I kept the visuals very minimal and focused more on clarity than decoration.

Embedded Sketch:


For the results, I used three images (a rabbit, a human, and a turtle) to represent fast, average, and slow reaction times. I also added a simple bar chart where each bar represents one trial. Faster reaction times result in shorter bars, while slower reaction times create taller bars. This helped make the differences between trials easier to see.

// Map the reaction time (ms) into a bar height, then clamp it
let barH = map(rt, 200, 2000, 20, 160);
barH = constrain(barH, 20, 160);

fill(255, 120, 170);
// One bar per trial, offset horizontally by the trial index i
rect(chartX + i * barW, chartY - barH, 45, barH);

I’m proud of this section of the code because it takes the reaction time value and turns it into something visual. By using map(), I was able to convert reaction times into bar heights, which made the results much easier to interpret.

Most of the sketch was built using variables, arrays, conditional statements, functions, and simple state management. Using different states (intro, wait, go, and results) made the interaction much easier to organize. I also used millis() to measure reaction time, which I learned about from the p5.js reference and from looking at simple reaction-time examples online.
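A stripped-down version of that state logic looks roughly like this. This is a simplified sketch, not the full code: the `now` parameter stands in for p5’s millis(), and the wait duration is fixed instead of random so the logic is easy to follow on its own:

```javascript
// Simplified model of the sketch's state machine: intro -> wait -> go.
// `now` stands in for p5's millis().
function makeTest() {
  return { state: "intro", goTime: 0, times: [] };
}

function press(t, now) {
  if (t.state === "intro") {
    // Start a trial; in the real sketch the wait duration is random
    t.state = "wait";
    t.goTime = now + 1500;
  } else if (t.state === "wait") {
    // Early press: send the user back instead of recording a fake time
    t.state = "intro";
  } else if (t.state === "go") {
    t.times.push(now - t.goTime);
    t.state = "intro";
  }
}

function update(t, now) {
  if (t.state === "wait" && now >= t.goTime) {
    t.state = "go"; // this is where the screen turns green
  }
}
```

The key detail is that a press during “wait” never touches the times array, which is how early presses are prevented from being counted.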

Overall, I’m happy with how this sketch turned out. It helped me understand how user input can be treated as data and then visualized in a clear way. The most challenging part was making sure the input behaved correctly in each state, especially preventing early presses from being counted. For future assignments, I would like to experiment with more detailed feedback per trial or different ways of visualizing reaction time beyond a bar chart.

Reading Reflection:

Norman’s reading made me realize that many everyday frustrations come from poor design rather than user error. He explains how objects often fail to clearly communicate how they should be used, which causes confusion even in simple situations like opening a door. This made me think about how important clarity and feedback are in interactive design.

Relating this to my own work, I realized that even small design choices in my sketches affect how intuitive they feel. If an interaction is not clear, users may not know what to do or may assume something is broken. Norman’s ideas encouraged me to think more carefully about how my sketches guide the user, and how simple visual cues or feedback can make interactions feel more natural and less frustrating.

Week 4 – Generative Text

For this assignment I created a kinematic typography sketch using the word “MADINA.” I wanted the word to feel like it is in motion. My main inspiration was Patt Vira’s kinetic typography work, where letters shift in rhythm. I liked how those examples use simple motion to give a word a stronger presence, so I focused on one word and explored movement across time.

I used p5.js together with opentype.js and geomerative. First I loaded the font “BebasNeue-Regular.ttf” and converted the word “MA D I NA” into a vector path. Then I resampled the outlines into many points. In draw, I repeated those points multiple times in vertical layers. I applied a sine function to the x position and a gradual offset to the y position, so each layer moves like a wave. I kept the color palette minimal with a dark blue background, white strokes, and semi transparent blue fills. Patt Vira’s kinetic typography guided my decisions about rhythm and repetition.

I wrote the sketch in p5.js and used geomerative to work with vector text. In setup, I created the canvas, set the angle mode to degrees, and loaded the font file “BebasNeue-Regular.ttf” with opentype.load. After the font loaded, I called font.getPath on the string “MA D I NA” with a large font size, then wrapped the commands in a geomerative Path object. I resampled this path by length so the letters turned into a dense list of points. I looped through the commands and, whenever I encountered a move command “M,” I started a new sub-array in points. For each drawing command that was not “Z,” I pushed the x and y coordinates into the current sub-array as p5 vectors.

In draw, I cleared the background to a dark blue color, set stroke weight and stroke color, and translated the origin so the word appears centered on the canvas. I used a nested loop. The outer loop moves through the number of layers, from num down to zero. The inner loop moves through each group of points for each letter. For some letter indices I used noFill to keep only outlines, and for others I used a semi transparent blue fill. Inside beginShape and endShape, I looped over the points and applied a sine based offset to the x coordinate with r * sin(angle + k * 20), and a vertical offset of k * 10 to the y coordinate. This creates layered copies of the word that shift in x and y as angle increases. At the end of draw, I incremented angle by 3 so the sine function changes over time and the typography keeps moving.

let font;
let msg = "MA D I NA";
let fontSize = 200;
let fontPath;
let path;
let points = [];

let num = 20;
let r = 30;
let angle = 0;

function setup() {
  createCanvas(700, 400);
  angleMode(DEGREES);
  opentype.load("BebasNeue-Regular.ttf", function(err, f){
    if (err) {
      // Stop here if the font failed to load, so font.getPath
      // is never called on an undefined font
      console.log(err);
      return;
    }
    font = f;

    fontPath = font.getPath(msg, 0, 0, fontSize);
    path = new g.Path(fontPath.commands);
    path = g.resampleByLength(path, 1);

    // Split the resampled path into one array of points per contour:
    // "M" starts a new contour, "Z" closes one
    for (let i = 0; i < path.commands.length; i++) {
      if (path.commands[i].type == "M") {
        points.push([]);
      }
      if (path.commands[i].type != "Z") {
        points[points.length - 1].push(createVector(path.commands[i].x, path.commands[i].y));
      }
    }
  });
  
}

function draw() {
  background(0, 0, 139);
  strokeWeight(3);
  stroke(255);
  translate(40, 170);
  
  for (let k = num; k > 0; k--) {
    for (let i = 0; i < points.length; i++) {
      // Contours 1 and 3 stay as outlines; the rest get a translucent fill
      if (i == 1 || i == 3) {
        noFill();
      } else {
        fill(0, 0, 255, 100);
      }
      beginShape();
      for (let j = 0; j < points[i].length; j++) {
        vertex(points[i][j].x + r * sin(angle + k * 20), points[i][j].y + k * 10);
      }
      endShape(CLOSE);
    }
  }
  angle += 3;
}

 

Week 4 – Reading Reflection

One thing that always confuses me is the variety of modes on some household items. On an iron, spinning the dial increases steam production, and for people who have no idea which level is needed for which clothes, the material names are written around the same dial. What drives me mad is that washing machines and dryers are NEVER intuitive. What’s the difference between Cupboard Dry and Cupboard Dry+ if they take the same time and operate at the same temperature? What is the difference between Gentle and Hygiene, and why is the time difference between them 3 hours? And to actually figure out the difference, you have to find the name of the machine (which will never match its actual model name), look it up in some 2008 PDF file on the very last Google page, and it still won’t answer the question. I always use Mixed washing and Cupboard Dry just because they work, and I have no idea what the other modes do. And as Norman says, it’s not me being stupid; it’s the design allowing for these mistakes.

“The same technology that simplifies life by providing more functions in each device also complicates life by making the device harder to learn, harder to use.”

I think my example perfectly supports this idea. The bad design of these items (no signifiers, no clear affordances, and no conceptual model formed either through life experience or through use) just creates more confusion and makes the user stick to one familiar mode instead of the huge variety of (probably) useful and functional ones.

I think one way to fix it is to provide some sort of manual, even a tiny table on the edge of the machine would help so much to at least understand which method does what and what the difference between them is. Another way is to display something on the small screen that almost every machine has, like all the characteristics and statistics that are unique to each method, or some short warnings/instructions. Another way to solve this problem is to at least make small illustrations near each method that actually depict what the method does. Genuinely, it would help unleash the potential of these machines and help people use them.

Talking about interactive media, I think the principles Norman talks about are really applicable and foundational.

Sometimes great art pieces with very interesting and complex interactions can be overlooked just because people can’t figure out how to interact with them. I believe that it is very important to design the piece in a very intuitive or guiding way, a way that encourages the user to make the interaction that the author created. As Norman says, humans are really predictable, and in this way, some silent guiding design (not notes, not manuals, but the design itself) should trigger the interaction that is meant to be done in order to experience the art.

Week 4 – Reading Response

Reading Norman’s chapter made me realize how often I get frustrated with specific designs, especially ones that lack efficiency in everyday objects. Norman emphasizes that good design should communicate clearly, prevent errors, and provide feedback. I see this principle in some interactive media, where the design makes it easy to use without much explanation—anyone can figure it out quickly. When something is designed well, you don’t even notice it because everything feels natural and intuitive. Unlike the examples the author mentioned, such as the sink that requires pushing down on it or the door that needs a sign to explain that it is a sliding door, good design should not require instructions. If a user has to stop and think about how to use something basic, then the design has already failed.

Something that drives me crazy is the access doors on campus. I walk around carrying two access cards—one specifically for my suite and room, and another for the rest of the campus. It feels unnecessary and inefficient. On top of that, the glass doors are extremely heavy, and the sensors do not work most of the time. Instead of making entry smooth and accessible, the design creates frustration. According to Norman’s ideas, better mapping, clearer feedback, and fewer constraints could significantly improve this experience.

Week 4 – Global Mood (Data Visualization)

Concept:
My concept is based on showing the current global mood and the world’s current situation. Whenever I would google “news,” most of the results would evoke a negative emotion in me. So I decided to visualize the news and categorize the headlines into a few different emotions or feelings.

How I created the code:

I used Guardian and NYT API keys in order to get access to live articles, although there are some restrictions, like limits on page requests. Therefore, I added a delay between requests in order to access a larger number of pages and news article headlines. I also used world.json for the country borders.

I then created different arrays: one for the emotional bubbles, one for the country borders, one for the CNN breaking news ticker, and one for tracking articles so they are not shown twice. I also added a timer that updates every 60 seconds and adjusted the speed and position of the news ticker.

Then I added geographical points for a list of countries. I created bubbles for different emotions, with each emotion represented by a color. There is also a map key showing which color represents which emotion. The bubbles have visual effects like glowing and shrinking over time to make the map feel dynamic. Emotions are detected using keywords in article titles to classify sadness, anger, hope, or joy.
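The keyword matching works roughly like the following sketch. The keyword lists here are illustrative examples I’m making up for this post, not the exact lists from my code:

```javascript
// Rough sketch of keyword-based emotion classification for headlines.
// The keyword lists are illustrative, not the full lists from the sketch.
const EMOTION_KEYWORDS = {
  sadness: ["dies", "death", "mourns", "tragedy"],
  anger:   ["protest", "outrage", "clash", "attack"],
  hope:    ["peace", "recovery", "breakthrough", "rescue"],
  joy:     ["wins", "celebrates", "festival", "record"],
};

function classifyEmotion(title) {
  const lower = title.toLowerCase();
  for (const [emotion, words] of Object.entries(EMOTION_KEYWORDS)) {
    if (words.some((w) => lower.includes(w))) {
      return emotion; // first matching category wins
    }
  }
  return null; // unclassified headlines get no bubble
}
```

Each classified headline then spawns a bubble at its country’s coordinates, colored by the returned emotion.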

It initially gets the last 48 hours of news, then it is updated with live breaking news. I also added fallbacks: if the world map fails to load, a simple grid is shown, and if the API fails, a CORS proxy is used to make sure the news still comes through.

The code:
// Fetch 48 hours of historical news from The Guardian
function fetchHistoricalNews() {
  let twoDaysAgo = new Date();
  twoDaysAgo.setDate(twoDaysAgo.getDate() - 2);
  let fromDate = twoDaysAgo.toISOString().split("T")[0]; // Format: YYYY-MM-DD
  console.log("📅 Fetching Guardian news from " + fromDate + " to today...");

  let totalArticles = [];
  let pagesToFetch = 10; // Get 10 pages of results
  let pagesLoaded = 0;
  let failedPages = 0;

  // Fetch pages sequentially with delay to avoid rate limiting
  for (let pageNumber = 1; pageNumber <= pagesToFetch; pageNumber++) {
    setTimeout(() => {
      let apiURL =
        "https://content.guardianapis.com/search?section=world&show-tags=keyword&from-date=" +
        fromDate +
        "&page-size=30&page=" +
        pageNumber +
        "&show-fields=webPublicationDate&api-key=" +
        GUARDIAN_API_KEY;
      console.log("🔄 Requesting Guardian page " + pageNumber + "...");
      fetch(apiURL)
        .then((response) => {
          console.log("📡 Guardian page " + pageNumber + " response status: " + response.status);
          if (!response.ok) throw new Error("HTTP " + response.status);
          return response.json();
        })
        .then((data) => {
          if (data && data.response && data.response.results) {
            totalArticles = totalArticles.concat(data.response.results);
            pagesLoaded++;
            console.log("✅ Page " + pageNumber + " loaded: " + data.response.results.length + " articles");
            if (pagesLoaded + failedPages === pagesToFetch) {
              if (totalArticles.length > 0) {
                console.log("📊 Total Guardian historical: " + totalArticles.length + " (" + pagesLoaded + "/" + pagesToFetch + " pages successful)");
                isShowingHistorical = true;
                sourceStatus.guardian.active = true;
                sourceStatus.guardian.articleCount = totalArticles.length;
                processArticles(totalArticles, true, "guardian"); // true = historical
              } else {
                console.error("❌ All Guardian pages failed");
                sourceStatus.guardian.active = false;
              }
            }
          } else {
            console.warn("⚠️ Guardian page " + pageNumber + " returned empty results");
            failedPages++;
          }
        })
        .catch((error) => {
          console.error("❌ Guardian page " + pageNumber + " failed:", error.message);
          failedPages++;
          if (pagesLoaded + failedPages === pagesToFetch) {
            if (totalArticles.length > 0) {
              console.log("📊 Total Guardian historical: " + totalArticles.length + " (" + pagesLoaded + "/" + pagesToFetch + " pages successful)");
              isShowingHistorical = true;
              sourceStatus.guardian.active = true;
              sourceStatus.guardian.articleCount = totalArticles.length;
              processArticles(totalArticles, true, "guardian");
            } else {
              console.error("❌ All Guardian pages failed");
              sourceStatus.guardian.active = false;
            }
          }
        });
    }, pageNumber * PAGE_REQUEST_DELAY); // Use delay variable
  }
}

// Fetch the latest breaking news from The Guardian
function fetchGuardianNews() {
  console.log("📰 [" + getCurrentTime() + "] Fetching Guardian news...");
  let apiURL =
    "https://content.guardianapis.com/search?section=world&show-tags=keyword&page-size=25&show-fields=webPublicationDate&api-key=" +
    GUARDIAN_API_KEY;
  fetch(apiURL)
    .then((response) => {
      if (!response.ok) throw new Error("HTTP " + response.status);
      return response.json();
    })
    .then((data) => {
      if (data && data.response && data.response.results) {
        console.log("✅ [" + getCurrentTime() + "] Guardian: " + data.response.results.length + " articles");
        sourceStatus.guardian.active = true;
        sourceStatus.guardian.lastUpdate = new Date();
        sourceStatus.guardian.articleCount = data.response.results.length;
        isShowingHistorical = false; // We're showing breaking news now
        processArticles(data.response.results, false, "guardian"); // false = breaking news
      }
    })
    .catch((error) => {
      console.log("⚠️ Guardian direct failed, trying CORS proxy...");
      tryGuardianWithProxy();
    });
}

// Backup method: Try Guardian API through CORS proxy
function tryGuardianWithProxy() {
  let apiURL =
    "https://content.guardianapis.com/search?section=world&show-tags=keyword&page-size=25&show-fields=webPublicationDate&api-key=" +
    GUARDIAN_API_KEY;
  let proxiedURL = "https://api.allorigins.win/raw?url=" + encodeURIComponent(apiURL);
  fetch(proxiedURL)
    .then((response) => {
      if (!response.ok) throw new Error("HTTP " + response.status);
      return response.json();
    })
    .then((data) => {
      if (data && data.response && data.response.results) {
        console.log("✅ [" + getCurrentTime() + "] Guardian via proxy: " + data.response.results.length + " articles");
        sourceStatus.guardian.active = true;
        sourceStatus.guardian.lastUpdate = new Date();
        sourceStatus.guardian.articleCount = data.response.results.length;
        isShowingHistorical = false;
        processArticles(data.response.results, false, "guardian");
      }
    })
    .catch((error) => {
      console.error("❌ [" + getCurrentTime() + "] Guardian completely failed:", error.message);
      sourceStatus.guardian.active = false;
    });
}

 

Reflection and ideas for future work or improvements:

Reflection:

Global Mood taught me a lot about combining live data, visualization, and emotion analysis. Seeing emotions vary across regions in real time was fascinating, and effects like glowing and shrinking bubbles made the map feel dynamic. It also taught me how to use APIs and JSON files in p5.js.

Future Work and Improvements:

I would love to present it as an installation to show people the current global situation. For future improvements, I would incorporate Natural Language Processing to classify emotions more accurately, rather than relying solely on specific keywords. I also wish I had greater access to open-source news APIs to expand the dataset.

 

Week 4 – Data Visualization

Data Visualization

For this week’s assignment, I made an animated visualization of global temperature data from 1880 to today. The sketch fetches real data from NASA’s GISS API and draws it as a bar chart. Blue bars mean cooler-than-average years; red bars mean warmer-than-average years. As the bars animate in, poetic phrases appear based on the temperature, giving the data a human, storytelling feel.

Controls: Click to pause/play · Hover bars for details · Press R to restart · Press S to save · Spacebar to toggle

Code I Am Proud Of

The part I am most proud of is how the bars get their color. Instead of just picking “blue” or “red,” I used lerpColor() to blend between shades based on the actual temperature value. Colder years get a deeper blue, and warmer years get a more intense red. It makes the warming trend really pop visually:

// Color: blue for cold, red for warm
let barColor;
if (d.anomaly < 0) {
  barColor = lerpColor(
    color(70, 130, 200, 200),
    color(40, 80, 160, 200),
    map(d.anomaly, 0, -0.5, 0, 1)
  );
} else {
  barColor = lerpColor(
    color(200, 100, 70, 200),
    color(240, 60, 40, 220),
    map(d.anomaly, 0, 1.5, 0, 1)
  );
}

I also like the generative text system. Each time a new data point appears, the sketch picks a phrase from a pool that matches the temperature: cold, warm, or neutral. It is a simple idea but it adds a lot of personality to the piece:

function triggerPhrase(anomaly) {
  if (millis() - lastPhraseTime < 1200) return;

  let phrases;
  if (anomaly < -0.1) {
    phrases = coldPhrases;
  } else if (anomaly > 0.3) {
    phrases = warmPhrases;
  } else {
    phrases = neutralPhrases;
  }

  activePhrase = random(phrases);
  phraseTargetAlpha = 255;
  lastPhraseTime = millis();
}

What I Learned

Working with a real API changed everything. The full 145-year dataset tells a story on its own. You can see the blue bars dominate the early decades, then the reds creep in and accelerate. I did not need to editorialize; the shape of the data is the narrative. The hardest part was dealing with CORS issues and making sure broken data rows do not crash the sketch.
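The guard against broken rows boils down to filtering before drawing. This is a simplified sketch, assuming each fetched row is parsed into an object with year and anomaly fields; rows with non-numeric or missing values parse to NaN and get dropped:

```javascript
// Sketch of the row guard, assuming parsed rows like { year, anomaly }.
// Missing or malformed values parse to NaN, so Number.isFinite rejects them.
function cleanRows(rows) {
  return rows.filter(
    (r) =>
      Number.isFinite(r.year) &&
      Number.isFinite(r.anomaly) &&
      r.year >= 1880
  );
}

const raw = [
  { year: 1880, anomaly: -0.17 },
  { year: 2016, anomaly: NaN }, // broken row: missing value
  { year: NaN, anomaly: 0.5 },  // broken row: unparseable year
  { year: 2023, anomaly: 1.17 },
];
console.log(cleanRows(raw).length); // 2
```

Because the draw loop only ever sees cleaned rows, a single bad row in the response can no longer crash the whole sketch.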


Data: NASA GISS Surface Temperature Analysis · Built with p5.js · February 2026

Week 4 – Creative Reading Response

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

Something that drives me crazy is some of the dryers in the laundry rooms on campus, specifically the ones that don’t have a screen! How am I supposed to know when my laundry will be ready? This system could make sense if the machine were used by just one person in their house, who could check on it whenever (or maybe a sound plays when a cycle is done). But for a machine shared by a whole building, it’s so inconvenient that you basically have to guess when your laundry will be ready. And if you’re too late, your laundry might just get tossed on the floor! We can blame university facilities for purchasing machines without screens (a few machines have them; it’s just some that don’t), or we can blame the machines themselves, because why can’t I know how long my clothes will take? So the improvement, from my perspective, is to add a screen!

How can you apply some of the author’s principles of design to interactive media?

Instructions! But simple ones, ones that build on what users are already used to. For example, users are now used to pressing on the right side of the screen to speed up videos, since that is how it works on TikTok, YouTube, and Instagram. Instead of trying to reinvent the wheel, sometimes it’s best to use what already exists.