Reading Reflection-Week #5

The reading made me think back on how invisible software work used to be and how easily important contributions can still be overlooked, especially when they do not fit dominant expectations of who a “technical innovator” should be. The article highlights that software wasn’t even considered important in early Apollo mission planning, which aligns with how many modern technological systems still undervalue behind-the-scenes digital labor. From my own experience studying technology and creative coding, I see a similar pattern: people often praise visible outputs (design, hardware, the final product) while ignoring the programming logic that makes everything function. This supports the author’s point that Hamilton’s work was revolutionary not only technically but conceptually, because she helped establish software as a legitimate engineering discipline. At the same time, the reading challenges my previous assumption that space exploration was mainly about hardware and astronauts; it made me reconsider how much critical decision-making and problem-solving actually happens in code and systems design.

However, the author might show some bias by framing Hamilton as a singular heroic figure, which risks simplifying the collaborative nature of large-scale scientific projects. While the article acknowledges teams and engineers, it still centers a narrative of individual genius, which is common in technical journalism and can overlook collective labor and institutional structures. This raises questions for me about how history chooses which contributors to highlight and which to marginalize. I also wonder whether the article’s emphasis on gender barriers, while important, might shape the story to fit a modern narrative about women in tech rather than fully exploring the technical debates and engineering processes of the time. The reading ultimately makes me question how innovation is thought of. Do we celebrate people based on their actual impact, or based on how well their story fits contemporary social values and narratives about progress and inclusion?

Reading Reflection – Week 5

I used to assume computer vision worked like human vision, just less advanced, but I realized the difference is much bigger. Human vision automatically understands meaning, like someone’s face, while digital video is “computationally opaque”: a camera image is just a buffer of pixels with no meaning unless an algorithm like frame differencing, background subtraction, or brightness thresholding interprets it. I was surprised that such simple techniques can detect motion just by comparing pixel values between frames. Vision systems do not have to be complex; even basic detection can be powerful if the physical environment and the code are designed to work together.
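
Just to make the idea concrete for myself, here is a rough frame-differencing sketch in p5.js (the webcam size and the motion threshold are arbitrary choices on my part, not something from the reading):

// a rough frame-differencing sketch: motion is "detected" purely by comparing
// pixel values between the current webcam frame and the previous one
let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // absolute difference of the red channel is enough for a crude motion signal
    movement += abs(video.pixels[i] - prevFrame.pixels[i]);
  }
  if (movement > 1000000) { // threshold picked arbitrarily
    fill(255, 0, 0);
    noStroke();
    text("motion", 10, 20);
  }

  // remember this frame for the next comparison
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);
}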

One example that stuck with me was Myron Krueger’s Videoplace. I found it really interesting that early interactive art already used vision tracking to let people draw with their bodies. It made me realize how computer vision can expand the ways we interact with technology. At the same time, Rafael Lozano-Hemmer’s work shows a more critical side. His belt-tracking piece turns surveillance into art, which made me wonder whether interactive work that tracks its viewers is also training us to accept being watched.

The reading also left me questioning: if computer vision works best in environments made specifically so that the computer can easily detect things, does that mean future spaces will be designed more for machines, made accessible to machines rather than to us humans? Will there be more controlled lighting, infrared illumination, and retroreflective materials? This text definitely shifted my perspective from seeing computer vision as just a technical tool to also seeing it as a cultural force that affects art and even social power.

Assignment 4: Reaction Times Experiment

For this assignment, I wanted to create something interactive but still simple enough to fully understand and explain. Instead of focusing on complex visuals, I decided to make a reaction time test, where the user presses the space bar as quickly as possible when the screen turns green. I liked this idea because the interaction itself generates the data, instead of relying on an external file.

The sketch starts with a short instruction screen. Once the user presses the space bar, the screen switches to a “wait” state for a random amount of time, and then turns green to signal the user to respond. The reaction time is recorded for each trial, and after a set number of trials, the sketch displays the results. I kept the visuals very minimal and focused more on clarity than decoration.

Embedded Sketch:


For the results, I used three images (a rabbit, a human, and a turtle) to represent fast, average, and slow reaction times. I also added a simple bar chart where each bar represents one trial. Faster reaction times result in shorter bars, while slower reaction times create taller bars. This helped make the differences between trials easier to see.

// Map the reaction time (ms) onto a bar height, then clamp it to the chart range
let barH = map(rt, 200, 2000, 20, 160);
barH = constrain(barH, 20, 160);

// Draw one bar per trial
fill(255, 120, 170);
rect(chartX + i * barW, chartY - barH, 45, barH);

I’m proud of this section of the code because it takes the reaction time value and turns it into something visual. By using map(), I was able to convert reaction times into bar heights, which made the results much easier to interpret.

Most of the sketch was built using variables, arrays, conditional statements, functions, and simple state management. Using different states (intro, wait, go, and results) made the interaction much easier to organize. I also used millis() to measure reaction time, which I learned from the p5.js reference and from looking at simple reaction-time examples online.
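
A simplified sketch of that state logic might look something like this (the variable names here are placeholders, not necessarily the ones in my actual code, and the results state is left out):

// intro -> wait -> go flow, measured with millis()
let state = "intro";
let goTime = 0;    // when the screen should turn green
let startTime = 0; // when it actually turned green

function setup() {
  createCanvas(400, 400);
  textAlign(CENTER, CENTER);
}

function draw() {
  if (state === "wait" && millis() >= goTime) {
    state = "go";
    startTime = millis(); // reaction time is measured from this moment
  }
  background(state === "go" ? "green" : 40);
  fill(255);
  text(state === "intro" ? "press space to start" : "", width / 2, height / 2);
}

function keyPressed() {
  if (key !== " ") return;
  if (state === "intro") {
    state = "wait";
    goTime = millis() + random(1000, 3000); // random delay before the green screen
  } else if (state === "go") {
    let rt = millis() - startTime; // reaction time in milliseconds
    console.log(rt);
    state = "intro";
  }
  // presses during "wait" are ignored, so early presses are not counted
}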

Overall, I’m happy with how this sketch turned out. It helped me understand how user input can be treated as data and then visualized in a clear way. The most challenging part was making sure the input behaved correctly in each state, especially preventing early presses from being counted. For future assignments, I would like to experiment with more detailed feedback per trial or different ways of visualizing reaction time beyond a bar chart.

Reading Reflection:

Norman’s reading made me realize that many everyday frustrations come from poor design rather than user error. He explains how objects often fail to clearly communicate how they should be used, which causes confusion even in simple situations like opening a door. This made me think about how important clarity and feedback are in interactive design.

Relating this to my own work, I realized that even small design choices in my sketches affect how intuitive they feel. If an interaction is not clear, users may not know what to do or may assume something is broken. Norman’s ideas encouraged me to think more carefully about how my sketches guide the user, and how simple visual cues or feedback can make interactions feel more natural and less frustrating.

reading reflection, week 3

First of all, I really enjoyed the way this was written. Ironically, I think it succeeds at emulating interactivity itself in a way – at least to the highest extent a static piece of text is capable of. There’s a clear pattern in the way he writes: he establishes a prospect, makes an attempt at predicting the reader’s reflections or reactions, and then responds accordingly.

He makes the argument that written words cannot be interactive; however, if we were to take his own definition of interactivity (having two parties who, in turn, listen, think, and respond to one another), and if we were to assume he is at least somewhat correct in his estimations of the reader’s thoughts, there’s clearly an interaction going on between the author and reader. The author is performing an interaction – perhaps not one with any reader in particular, but he is clearly making an attempt to reflect back on and respond to a hypothetical audience. At one point he says that “movies don’t listen to their audience, nor do they think about what the audience may be saying,” and yet he manages to do exactly that through words alone.

I’m unsure of how intentional this was on the author’s part – I was inclined to give him all the credit, yet he makes his opinion on the matter abundantly clear. I do agree that it isn’t ‘real’ interaction in the traditional sense, but it is a deeply psychological form of interaction that I think should be appreciated more. I think we could all benefit from achieving this to some level in our own writing.

 

reading response, week 4

I hate Apple and their designs so much. It’s very intentional too, the way they make it such an inconvenience to deal with any non-Apple electronic device to the point where you feel compelled to buy more of their products just to accommodate your phone. The way they impose unnecessary updates so that you’re almost forced to upgrade to a newer model because now your phone is no longer compatible with even their own products. The way they make it such a hassle to transfer your photos and data to a non-Apple product, so the longer you have an iPhone the harder it is to switch to another brand. The way they insist upon automatically sharpening your photos for no reason and making your selfies look fried to absolute hell with no ability to disable it. (Still don’t understand that one, it’s just stupid design.) The way their products are so awfully flimsy that they just break for no goddamn reason and, oh no! looks like you need to upgrade to another stupid ass iPhone because all your photos are automatically uploaded to iCloud and you had no warning or prep time to find a way to save everything to another cloud! Pisses me off. Someday I’ll free myself of this corporate curse…

Anyway, if there’s one lesson about design you can glean from this god-awful mess of an evil corporation, it’s to not make your consumers hate you, I guess. Put the people first. Also, MacBooks are so confusing to navigate that I still refuse to use one.

Week 4: Arts and Crafts Generative Text

My Concept:
For this week’s assignment, I was interested in creating a generative text for something specific and engaging. I explored different ideas, such as movie recommendations, a recipe cookbook, and motivation cards, but then I realized I wanted to create something playful and useful, something that’s fun for the user and easy to benefit from. That’s when I came up with the idea of an arts and crafts guide, since it’s simple, enjoyable, and something many people could try, and I personally like it as well. I created a note-like arts and crafts guide that gives the user multiple random ideas they could experiment with, whether using paper, paint, or other materials. It has a stable template of familiar instructions but allows the user to explore many ideas by simply clicking, changing the materials, actions, and the craft they will end up with. My vision was a board with a note pad on it, where other notes get stuck on top, so I chose a pastel brown background to resemble a pin board and a yellow note pad for the text. I also made the text color and the pins on the top corners change each time, creating a more realistic effect of a new note being stuck on top of an older one.

Embedded Sketch:

Code I’m proud of:
A piece of code I’m particularly proud of is using the split() function to allow one position in my sentence to generate multiple options. This way, one material can have more than one possible action and craft. I used the | character in the CSV file to separate multiple choices within a single row, and then used split() in the sketch when reading the values into the sentence, allowing the program to select one option for each category.

CSV file:

paper, fold|cut|glue, flower|card|plane
cardboard, fold|cut|glue, box|poster|house
clay, mold|shape|paint, bowl|vase|sculpture
cloth, sew|stitch|cut, shirt|bag|scarf
yarn, knit|weave|stitch, scarf|blanket|bracelet

Sketch file:

// These read and call each material, action, and object  from the chosen row
  // The actions and objects are split since they have multiple options assigned in each row
  let material = singleRow[MATERIAL];
  let actions = split(singleRow[ACTIONS], "|");
  let objects = split(singleRow[OBJECTS], "|");

  // This allows only one action and object to be chosen randomly
  let action = random(actions);
  let object = random(objects);

Another part of the code I’m satisfied with is the fonts. I enjoyed applying what we learned in class to change the font of my text, which made it look much better. I was also able to include more than one font in the same sketch.

// Define font variables to store the different fonts used
let myFont;
let myFont2;
let myFont3;

// preload() loads all the font files before the sketch starts, to ensure they appear
function preload() {
  myFont = loadFont("titlefont.ttf");
  myFont2 = loadFont("textfont1.ttf");
  myFont3 = loadFont("textfont2.ttf");
}

// Draw and adjust the title of the note pad
  fill("black");
  textSize(32);
  textFont(myFont);
  text("Arts and Crafts", 6, -50);

  // Draw and adjust the text of instructions
  fill(textColor);
  textSize(16);
  textFont(myFont2);
  textWrap(WORD);
  text(idea, -90, 20, 190);

  // Provide text instructions at the bottom of the frame
  fill("rgb(103,85,79)");
  textFont(myFont3);
  textSize(16);
  text("Click for another idea!", width / 2, height - 20);

A minor piece of code I initially struggled with was the rotation of the note. At first, the note moved too little and didn’t look like a realistic new note being placed. I then figured out that I needed to adjust the positioning and pick a random angle within a range of radians, which gave the note the movement I wanted.

// Tilt the note pad
  noteAngle = radians(random(-5, 5));
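
To see the tilt in isolation, here is a small standalone version of the idea (the colors, sizes, and position are placeholders, not my actual values):

// each click picks a new random angle and redraws the "note" rotated around its center
let noteAngle = 0;

function setup() {
  createCanvas(400, 400);
  rectMode(CENTER);
}

function draw() {
  background(222, 206, 190); // pastel board colour, roughly like my sketch
  push();
  translate(width / 2, height / 2); // move the origin to the note's position
  rotate(noteAngle);                // tilt everything drawn after this
  fill(255, 241, 166);              // yellow note pad
  rect(0, 0, 200, 240);
  pop();
}

function mouseClicked() {
  noteAngle = radians(random(-5, 5)); // same idea as the line above
}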

Reflection and ideas for future work or improvements:
Overall, I feel that I was able to apply what we have learned over the past four weeks while also exploring additional aspects to create my project, and I am satisfied with the result. I personally enjoyed designing a set framework that can generate many different outcomes, while also allowing me to add personal touches. I find this concept very fascinating, and I think many different ideas, whether fun or practical, could be implemented in a neater and more accessible way using this approach. However, while I initially found the project interesting, I began to feel that it might be a bit basic as I worked on it. For future work, I feel I could improve the visual quality to make it more appealing to the viewer or user, for example, by adding elements like scissors or paint around the note pad, or including an icon for each craft. I could also make it more interactive, either by allowing the notes to move or by creating a book that actually flips pages, with photos and longer instructions for more advanced crafts, to give a more realistic and engaging experience.

References:
I mainly used the class slides to remind myself of the key concepts needed for creating generative text. In particular, I referred to uploading fonts and lists using different files, such as .csv and .ttf, along with the preload() function.

I also used the p5.js website to review and better understand specific functions, including:
https://p5js.org/reference/p5/translate/
https://p5js.org/reference/p5/rotate/
https://p5js.org/reference/p5/split/
https://p5js.org/reference/p5/int/ 

I was introduced to the textWrap() function using:
https://p5js.org/reference/p5/textWrap/ 

I used ChatGPT when I faced a setback with the movement of the note pad. As mentioned, I defined the noteAngle variable and used it in the mouseClicked() function to tilt the note. Initially, the note moved only a very small amount with each click, which made it look static and took away from the effect of new notes being added on top. ChatGPT helped me realize that my mistake was writing noteAngle = radians(-1, 1) with fixed numbers, which made the movement predictable. Instead, I learned to use the random() function to select a random angle within the desired range on each click, producing the realistic and dynamic note movement I wanted.

homework week 4

i tried

so for this one i ended up making this floating eyeball angel thing that generates some cryptic dialogue in a manner reminiscent of 2d indie game boss fight interactions. (click the screen).

the background is very similar to the one i made for my week 2 assignment, nothing special there. I used frameCount to animate the ominous red rays emanating from the eyeball. The slightly more tricky part was getting them to actually follow the eye as it oscillates, though it was pretty easy to figure out.

as for the eyeball itself, i implemented some rotation to give it some depth, and had it oscillate via a sin() function, for which i referred back to the slides and studied the example projects provided. i think he’s my favorite part since it turned out exactly as i intended.

//eyeball & bg animation settings
push();
let pos = 200 + sin(angle) * amp; //sin oscillation
angle += 1;
translate(100, 0);
strokeWeight(0.5);
stroke(0);
noFill();

// concentric rings centered on the eyeball (they use pos, so they follow the eye);
// frameCount cycles the starting diameter below so the rays appear to pulse
let x = 300;
let y = pos;
let pattern = 10;

if (frameCount % 8 > 6) {
  pattern = 29;
} else if (frameCount % 8 > 4) {
  pattern = 24;
} else if (frameCount % 8 > 2) {
  pattern = 19;
} else {
  pattern = 17;
}
// keep drawing rings, 14px apart, until they grow past 1000px
while (pattern > 1 && pattern < 1000) {
  strokeWeight(1);
  circle(x, y, pattern);
  pattern += 14;
}

//eyeball
strokeWeight(1);
stroke(255);
fill(0);
circle(300, pos, 95);
rotate(-5);
ellipse(262, pos + 27, 40, 55);
ellipse(265, pos + 27, 25, 35);
stroke(255);
strokeWeight(2);
fill(255, 0);
rotate(15);
ellipse(333, pos - 100, 120, 20);
pop();

the generated text was incredibly frustrating to figure out on short notice, especially since i insisted upon having it animated in a typewriter style. I’ll be honest, I still don’t fully understand what I did, especially since I stayed up all night figuring it out after having accidentally deleted all my code (would not recommend). I had a friend teach me some tricks, and I also asked Google’s AI mode to help explain how text generation works; however, i did all the actual coding myself. My friend really saved my life, though.
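
For reference, one common way to do a typewriter effect looks roughly like this (a simplified sketch, not my actual dialogue code – the message and timing are placeholders):

// reveal one more character every few frames
let message = "YOU HAVE COME FAR, LITTLE ONE.";
let shown = 0;

function setup() {
  createCanvas(400, 200);
  textSize(20);
}

function draw() {
  background(0);
  if (frameCount % 4 === 0 && shown < message.length) {
    shown++; // advance how much of the string is visible
  }
  fill(255);
  text(message.substring(0, shown), 20, 100);
}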

i was thinking of adding some interactive dialogue options, but that’s for another time perhaps.

im so tired bro

Week 4 Assignment – London Underground

Concept

For this week’s assignment, I was fascinated by the data visualization we did in class, so I wanted to create something similar using that technique. While browsing the internet for inspiration, I came across this website. In the video on the page, one of the data visualization ideas showed Paris metro activity, which inspired me to visualize London Underground passenger traffic.

I had to create my own CSV file to obtain the data I needed. I used two different datasets from the Transport for London (TfL) website: one shows the total number of passengers using each station weekly, and the other shows the coordinates of the stations. I removed all the unnecessary columns and combined the coordinates with the passenger counts and station names.

I ran into an error where the console said that some of the values loaded from the CSV file were NaN, even though I made sure to delete any rows with missing data. In previous data science classes, we always added a check to ensure we were only using valid values, so I did the same in this sketch: I made new arrays that only keep values that are not NaN.
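
A rough sketch of that check (the filename and the column names "lat", "lon", and "passengers" are placeholders, which may differ from my actual CSV headers):

// only keep rows where every value parsed into a real number
let table;
let lats = [];
let lons = [];
let counts = [];

function preload() {
  table = loadTable("stations.csv", "csv", "header"); // placeholder filename
}

function setup() {
  createCanvas(800, 600);
  for (let r = 0; r < table.getRowCount(); r++) {
    let lat = table.getNum(r, "lat");
    let lon = table.getNum(r, "lon");
    let n = table.getNum(r, "passengers");
    if (!isNaN(lat) && !isNaN(lon) && !isNaN(n)) {
      lats.push(lat);
      lons.push(lon);
      counts.push(n);
    }
  }
}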

I also added a feature to allow you to check the station name and the number of weekly passengers when you hover the mouse over it.

Reflection

Despite my love for data visualization as a technique, this is not one of my proudest works, simply because I was extremely busy this week and could not dedicate as much time as I wanted to this sketch. In the future, I would definitely like to add a map of London in the background and to space the circles out so you can actually see each station; right now there are a lot of overlapping circles, and you cannot properly see each station when you hover your mouse.

Week 4 : Reading Response

While I was reading the text, I could see the point the author is trying to make. I have also been confused and frustrated whenever instructions for something were sparse or unclear, or when there were no instructions at all. I remember when I used the washers for the first time at uni: I got confused about how they worked because they were different from what I had used before, and I even tried to open one from the wrong side, since the handle was so sleek that you couldn’t tell it was a handle at all. This connects to the failure of discoverability and signifiers mentioned in the reading, because the design did not clearly show where the action should take place, which led to trial and error and frustration.

Additionally, when I got back to uni after the spring semester, the washers and dryers had been replaced with ones whose options were very limited (heavy, normal, delicates) compared to the multiple options before (eco, quick, mixed, delicates, heavy, wool, etc.). Without any proper instructions or signifiers explaining what counts as a heavy, normal, or delicate wash, washing clothes could easily go wrong. This confused my understanding of how the machine worked, since my experience with washers no longer matched the new system image, and I was left unsure about what each setting did. Situations like this could be improved by adding clearer labels, brief explanations, visible signifiers, and better feedback so users can understand how things work without feeling lost.

Applying the author’s principles of design to interactive media, especially Human-Centered Design (HCD), helps me think more about how users actually experience what I create. I realized that it’s not enough for an interface to just look nice; it needs to be easy to understand and navigate. The readings reminded me how important it is to include clear labels and signifiers so users don’t have to guess what a button, icon, or feature does. Actions should be easy to discover, and feedback should appear immediately when something works, so users don’t feel lost or frustrated.

Week 4: Assignment

CONCEPT

For this week’s assignment, I decided to make generative text based on a floating lantern event. I was listening to Tangled’s “I See the Light” when the idea came to me. As far as I know, people usually make wishes or set intentions before releasing their lanterns. To show this, I added different verbs and objects into my code and had p5.js generate them into random sentences, so each lantern shows a different kind of wish or intention.

I kept the visuals simple and warm to match the feeling I imagined. The lanterns float slowly upward, like in a real event, and I added a soft flicker inside so they feel alive. When you hover your mouse over a lantern, text appears below showing the wish. I hid the wishes at first so people can only see them when interacting with the lanterns.

HIGHLIGHT

The piece of code I’m most proud of is this:

// ------------- text hover ------------- //
function hover(lanterns) {
  for (let l of lanterns) {
    if (dist(mouseX, mouseY, l.x, l.y) < 25) {
      stroke("rgb(23,23,55)");
      strokeWeight(3);
      fill(255);
      textFont(myFont);
      textSize(28);
      textAlign(CENTER, BOTTOM);
      text(l.phrase, width / 2, height - 20);
      break;
    }
  }
}

This function checks if the mouse is close to any of the lanterns. If it is, it displays a phrase at the bottom of the screen. I like this part because it adds interaction instead of just having a static scene. It makes the viewer move their mouse around and discover different messages. I’m especially proud of it because it connects the visual part (the lanterns) with the text in a simple but meaningful way, making it more engaging and giving each lantern its own small story.

REFLECTION

Overall, I really enjoyed making this assignment and seeing how it developed throughout the process.  One thing I would like to explore further is adding more movement and animation to the lanterns. At first, I planned to make them gently sway, like real lanterns do when they float up into the sky. However, because of time constraints, I wasn’t able to add that effect. If I had more time, I would definitely include the sway to make it feel more natural and complete.

I would also like to experiment with making the movement slightly different for each lantern, so they don’t all move the exact same way. Small changes like that could make the scene feel more realistic and alive. Overall, I’m proud of what I created, and I can see a lot of potential to keep improving and building on this idea.
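
One possible version of that sway, sketched as an idea rather than code from the current project (the lantern fields and sizes here are my own assumption):

// give every lantern its own phase offset so the sine motion is out of sync
let lanterns = [];

function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < 5; i++) {
    lanterns.push({ x: random(width), y: random(height), phase: random(TWO_PI) });
  }
}

function draw() {
  background(23, 23, 55);
  noStroke();
  fill(255, 180, 90);
  for (let l of lanterns) {
    let sway = sin(frameCount * 0.02 + l.phase) * 5; // small side-to-side drift
    ellipse(l.x + sway, l.y, 30, 40);
    l.y -= 0.3;                       // same upward drift as the current sketch
    if (l.y < -50) l.y = height + 5;  // wrap back to below the bottom
  }
}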

USAGE OF AI

Most of my code came from previous in-class exercises and what I already understood about p5.js. However, I did use ChatGPT to help me with a few parts that I wasn’t fully confident about yet: the gradient background, the lantern movement, and the twinkling stars. I mainly used it to understand the logic behind those effects so I could apply them in my own way.

For the gradient background, I wanted something that would reflect what we see in the sky at night. I asked how to create a gradient effect, and I learned that I could draw horizontal lines and slightly change the color value based on the y position, adjusting the blue value little by little like this:

for (let y = 0; y < height; y++) {  // one horizontal line per row of the canvas
  stroke(23, 23, 55 + y * 0.1);     // the blue channel rises slowly with y
  line(0, y, width, y);
}

This code helped me create a soft transition that makes the sky feel more atmospheric.

For the lantern movement, I needed help in remembering how to animate them moving upward and then reappearing at the bottom after leaving the screen. I then remembered that subtracting from the y value makes them move up, and using a conditional statement resets their position:

l.y -= 0.3;             // drift upward a little each frame
if (l.y < -50) {        // once the lantern leaves the top of the canvas
  l.y = height + 5;     // wrap it back to just below the bottom
}

This made the animation feel continuous, like an endless lantern release.

For the twinkling stars, I asked how to make them look like they were softly flickering instead of staying the same brightness. I learned that adding a small random change to the brightness each frame and then constraining it within a range creates that effect:

s.brightness += random(-10, 10);                  // small random flicker each frame
s.brightness = constrain(s.brightness, 150, 255); // keep it within a soft glow range

This made the sky feel more alive and less static, which matched the mood I was trying to create.

Overall, I used AI mostly when I got stuck. It gave me a better understanding of how to create those effects, but I adjusted everything to match the look and feeling I wanted for my project. It supported my process rather than replacing it, and it helped me feel more confident experimenting with movement and atmosphere.

Source: https://p5js.org/reference/ and Class Slide decks