Reading Reflection – Week 5

Reading this honestly made me laugh a little at the Marvin Minsky anecdote: the idea that “the problem of computer vision” could be assigned as a summer project feels almost delusional now, and I think the article uses that story perfectly to show how much we underestimate what vision actually means and involves. What really stayed with me is the description of digital video as computationally “opaque,” because that word completely shifts how I think about it. Text carries structure and meaning, whereas video is, as the text puts it, just rectangular pixel buffers with no built-in meaning. Humans attach meaning almost instantly, whereas computers need instructions just to separate foreground from background.

I also found it interesting that many of the techniques mentioned in the reading, like frame differencing and brightness thresholding, sound simple but are actually incredibly dependent on the physical conditions of the place. The article keeps emphasizing that no algorithm is completely “general,” and that honesty stood out to me, because it means computer vision only works smoothly when the environment is carefully prepared for it, which is actually crazy if you think about it, because it feels like everything I assumed about how computers see was wrong. The workshop example with the white Foamcore made that very clear, since the students basically redesigned their physical space to make brightness thresholding easier. That detail made me realize that computer vision is not just about writing smarter, more complex code, but about staging reality so the system can read it, which feels less like artificial intelligence and more like controlled intelligence.

The surveillance-themed works from the reading added another layer that I couldn’t ignore. When Rokeby describes his system as “looking for moving things that might be people,” the phrasing feels purposefully detached, and that detachment made me a little unsettled. The same foundational techniques that let Videoplace create playful full-body interactions are also what made Suicide Box possible, quietly recording real tragedies, which is scary to think about. I think that tension is what makes computer vision in interactive art powerful and complicated at the same time, because it forces us to confront how easily bodies can be tracked and reorganized into data. For me personally, the most compelling idea from this reading is that computer vision does not just detect what is there; it reflects what we choose to prioritize and make visible to the computer. Overall, this was a fascinating reading that truly opened my eyes to the reality behind computer vision.

Week 4 Assignment-Data Visualization

The Concept:

For this assignment, I decided to recreate the five-star rating system used for film ratings. My plan was to display five stars so that clicking on any of them shows the films with an average rating of that particular star.

The Process:

I first loaded all my images, fonts, and CSV file.

I thought this would be easier than it actually was; it was much, much harder than I expected. I started by building my stars in a separate class to keep the code neater and easier to reuse. I followed this YouTube tutorial to make them, since p5 has no built-in star shape and manually connecting lines to draw one was too complex.

After creating the stars, I placed them on the canvas using a for loop to draw all five. The first challenge was getting a star to light up, or fill in, once my mouse was inside it. I tried an if statement inside the for loop saying that if we were at index 0, fill, but that did not work. I knew I had to do something with the distance between the mouse and each star’s outer radius. So I wrote an if statement using the distance function, but I passed it mouseX, mouseY, and the outer radius as parameters. Unfortunately, that did not work either, so I used ChatGPT to fix it, and it included the x and y positions of the stars instead. After that, I wrote an if statement so that if the distance is less than the outer radius, the star fills and “lights up.” Then I faced another issue: at some point, two stars would light up at the same time. I called my sister, who is a computer science major, to help me solve the problem. We tried manipulating the x and y positions of the stars, but the problem persisted. My sister then pointed out that the issue was the if statement comparing the distance to the outer radius, since the stars’ outer radii will always overlap. She suggested adding or subtracting a small amount within the if statement to shrink the hit zone. Once I did that, the stars lit up successfully.
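To show the idea outside of p5, here is a minimal sketch of the hover logic: treat each star as a circle of its outer radius, and shrink the hit zone slightly so that overlapping radii never light up two stars at once. The names (starXs, outerRadius, shrink) are illustrative, not the exact variables from my sketch.

```javascript
// Euclidean distance, the same idea as p5's dist()
function distance(x1, y1, x2, y2) {
  return Math.hypot(x2 - x1, y2 - y1);
}

// Returns the index of the star under the mouse, or -1 if none.
// 'shrink' is the small subtraction my sister suggested, so two
// stars with overlapping outer radii never light up together.
function hoveredStarIndex(mouseX, mouseY, starXs, starY, outerRadius, shrink = 10) {
  for (let i = 0; i < starXs.length; i++) {
    if (distance(mouseX, mouseY, starXs[i], starY) < outerRadius - shrink) {
      return i;
    }
  }
  return -1;
}
```

In the real sketch this check would run in draw(), filling whichever star index is returned.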

The real challenge was extracting the ratings of the data onto the canvas. I tried many different things such as a triple nested loop like this:

for (let i = 0; i < numRows; i++) {
  for (let j = 0; j <= Film_title; j++) {
    for (let k = 0; k < ratingStar.length; k++) {
      if (mouseIsPressed && d < ratingStar[i].outerRadius && int(Average_rating[i]) < 1.5 && ratingStar[0]) {
        background(255);
        text(Film_title[j], 200, 200);
      }
    }
  }
}

but it did not work. I knew I wouldn’t be able to work out the code on my own, so I decided to go to the peer tutors to figure something out. Although we couldn’t completely figure it out, the tutor suggested I use ranges.

A friend then explained to me that I could create a minimum and a maximum and assign them to an array of ratings we created. While this method worked, it was difficult to understand for someone with minimal coding experience.

I asked the professor for help at this stage, and she gave me starting code that helped me understand what I needed to implement. After I applied it, the code ran successfully.
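The range idea can be sketched as plain logic: walk through the films once, and put each title into a bucket based on which rating range its average falls into. This is my own illustration of the approach, not the professor’s exact code; the cutoffs (1.5, 2.5, …) and names are assumptions.

```javascript
// Sorts film titles into five buckets by average rating.
// buckets[0] = ratings below 1.5, buckets[1] = 1.5 up to 2.5, and so on.
function bucketFilms(titles, ratings) {
  const buckets = [[], [], [], [], []];
  for (let i = 0; i < titles.length; i++) {
    const r = ratings[i];
    let idx;
    if (r < 1.5) idx = 0;
    else if (r < 2.5) idx = 1;
    else if (r < 3.5) idx = 2;
    else if (r < 4.5) idx = 3;
    else idx = 4;
    buckets[idx].push(titles[i]);
  }
  return buckets;
}
```

Once the buckets exist, clicking a star only has to display the matching array, which is exactly what the status-based code below does.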

Code I’m Proud of:

Even though I wrote this code with the professor’s help, I still feel particularly proud of this chunk because it made me realize the logic behind what I wanted to do; it felt like a moment of realization and understanding. It was also the hardest part of what I wanted to achieve, so finally getting it to work was a relief.

    } else if (status == 1) {
      for (let i = 0; i < TwoRatingFilms.length; i++) {
        text(TwoRatingFilms[i], 260, 240 + i * 30);
      }
    } else if (status == 2) {
      for (let i = 0; i < ThreeRatingFilms.length; i++) {
        text(ThreeRatingFilms[i], 420, 240 + i * 30);
      }
    } else if (status == 3) {
      for (let i = 0; i < FourRatingFilms.length; i++) {
        text(FourRatingFilms[i], 590, 240 + i * 30);
      }
    } else {
      for (let i = 0; i < FiveRatingFilms.length; i++) {
        text(FiveRatingFilms[i], 540, 420 + i * 30);
      }
    }
  }

Future Reflection:

Honestly, for future projects I would aim for something only a little outside my comfort zone, and I would not underestimate the work involved like I did this time.

References:

https://youtu.be/rSp5iSTXwAY?si=RaaxtuAu8XivtpAF

 

Week 5 Midterm Project

Midterm Project Progress 1: Polyglot Galaxy (Week 5)

For my midterm project, I decided to develop an interactive generative artwork called Polyglot Galaxy. The concept is a multilingual visual space where users can click to “stamp” greetings from different languages onto a galaxy background. Each click generates a unique phrase using randomness, along with visual glow effects and sound feedback. My goal is to combine text, image, sound, and object-oriented programming into one interactive experience that reflects my interest in languages, combining aesthetics like sound and animation.

In terms of user interaction design, the program starts with a start screen and transitions into the play state after the first click. Once the user enters the play mode, clicking on the canvas generates a new greeting text at the mouse position. The phrases are randomly selected from a JSON file and styled with different sizes, colors, and blinking alpha effects using sine functions.
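The blinking effect can be sketched without p5 by mapping a sine wave onto an alpha range; the map() helper below mirrors p5’s built-in, and the speed and alpha range are illustrative values, not my exact sketch parameters.

```javascript
// Linear remap, same idea as p5's map()
function map(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Oscillates alpha between minA and maxA as the frame count advances,
// producing the blinking effect on each stamped greeting.
function blinkAlpha(frame, speed = 0.1, minA = 50, maxA = 255) {
  return map(Math.sin(frame * speed), -1, 1, minA, maxA);
}
```

In draw(), each GreetingText object would call something like this with frameCount before setting its fill.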

The code that I am proud of is:

// sound on click
if (mouseX <= 300) {
  clickSound.play();
} else {
  clickSound1.play();
}

I implemented two different sounds depending on the click position: the canvas is split in the middle, so clicking toward the left side plays the 0.mp3 file, more of a “peep” sound, while clicking toward the right side plays the 1.mp3 file, a deeper sound. I also added a restart function using the “R” key to reset the session without reloading the page. I think this structure gives the piece a good step-by-step approach.

From a coding perspective, I have begun designing the project using functions, classes, and interactivity as required. I created a GreetingText class to manage each stamped phrase as an object, including its position, color, size, glow shape, and blinking animation. The generatePhrase() function handles generative text creation using randomness from language data, punctuation, and decorative elements. Moreover, I added a state system (“start” and “play”) to control the interface flow.

The challenging part was integrating multiple media elements (sound playback, generative text from JSON, and object-oriented animation) in the same system. To reduce the risk, I tested each component separately: sound playback on mouse click, image backgrounds for different states, and a prototype class for animated text objects. I also added a limit to the number of stamped texts to ensure the sketch runs smoothly.
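The cap on stamped texts can be sketched as a simple bounded array: once it is full, the oldest stamp is dropped before the new one is pushed. MAX_STAMPS and the function name are illustrative, assuming a first-in-first-out policy.

```javascript
// Illustrative cap; the real limit in the sketch may differ.
const MAX_STAMPS = 50;

// Adds a stamp, discarding the oldest one once the cap is reached,
// so the draw loop never has to animate an unbounded number of objects.
function addStamp(stamps, stamp, max = MAX_STAMPS) {
  if (stamps.length >= max) {
    stamps.shift(); // remove the oldest stamp
  }
  stamps.push(stamp);
  return stamps;
}
```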

I think I can improve the start menu by adding some music to attract people, for example something like the opening music of Guardians of the Galaxy or the Universal fanfare.

https://p5js.org/reference/p5/textAlign/

Reading Reflection-Week #5

The reading made me think back on how invisible software work used to be in the past and how easily important contributions can be overlooked nowadays, especially when they do not fit dominant expectations of who a “technical innovator” should actually be. The article highlights that software wasn’t even considered important in the early Apollo mission planning, which aligns with how many modern technological systems still undervalue behind-the-scenes digital labor. From my own experience studying technology and creative coding, I see a similar pattern which is that people often praise visible outputs (design, hardware, final product) while ignoring the programming logic that makes everything function. This actually supports the author’s point that Hamilton’s work was revolutionary not only technically but conceptually, because she helped establish software as a legitimate engineering discipline. But at the same time, the reading also challenges my previous assumption that space exploration was mainly about hardware and astronauts; it made me reconsider how much critical decision-making and problem-solving actually happens in code and systems design.

However, the author might show some bias by framing Hamilton as a singular heroic figure, which risks simplifying the collaborative nature of large-scale scientific projects. While the article acknowledges teams and engineers, it still centers a narrative of individual genius, which is common in technical journalism and can overlook collective labor and institutional structures. This raises questions for me about how history chooses which contributors to highlight and which to marginalize. I also wonder whether the article’s emphasis on gender barriers, while important, might shape the story to fit a modern narrative about women in tech rather than fully exploring the technical debates and engineering processes of the time. The reading ultimately makes me question how innovation is actually thought of. Do we celebrate people based on their actual impact, or based on how well their story fits contemporary social values and narratives about progress and inclusion?

Reading Reflection – Week 5

I used to assume computer vision worked like human vision, just less advanced, but I realized the difference is much bigger. Human vision automatically understands meaning, like someone’s face, while digital video is “computationally opaque”: a camera image is just pixel buffers with no meaning unless an algorithm like frame differencing, background subtraction, or brightness thresholding interprets it. I was surprised that simple techniques can detect motion just by comparing pixels between frames. Vision systems do not have to be complex; even basic detection can be powerful if the physical environment the code runs in is designed well.
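Frame differencing really is just pixel comparison. As a hedged illustration (not code from the reading), here is the core of the technique on two grayscale pixel arrays: count how many pixels changed brightness by more than a threshold between frames.

```javascript
// Compares two grayscale frames (arrays of 0-255 brightness values)
// and counts pixels whose brightness changed more than 'threshold'.
// The count is a crude "amount of motion" score; the threshold of 30
// is an illustrative value, tuned per environment in practice.
function frameDifference(prevFrame, currFrame, threshold = 30) {
  let movedPixels = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) {
      movedPixels++;
    }
  }
  return movedPixels;
}
```

The threshold is exactly where the reading’s point about environments comes in: flickering lights or shadows raise the per-pixel noise, so the same code behaves very differently in different rooms.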

One example that stuck with me was Myron Krueger’s Videoplace. I found it really interesting that early interactive art already used vision tracking to let people draw with their bodies. It made me realize how computer vision can expand the way we interact with technology. At the same time, Rafael Lozano-Hemmer’s work shows a more critical side. His belt-tracking piece turns surveillance into art, which made me wonder whether interactive work that tracks viewers is also training us to accept being watched.

The reading left me questioning: if computer vision works best when environments are made specifically so the computer can detect things easily, does that mean future spaces will be designed more for machines than for us humans? Will there be more controlled lighting, infrared illumination, and retroreflective materials? This text definitely shifted my perspective from seeing computer vision as just a technical tool to seeing it as a cultural force that affects art and even social power.

Assignment 4: Reaction Times Experiment

For this assignment, I wanted to create something interactive but still simple enough to fully understand and explain. Instead of focusing on complex visuals, I decided to make a reaction time test, where the user presses the space bar as quickly as possible when the screen turns green. I liked this idea because the interaction itself generates the data, instead of relying on an external file.

The sketch starts with a short instruction screen. Once the user presses the space bar, the screen switches to a “wait” state for a random amount of time, and then turns green to signal the user to respond. The reaction time is recorded for each trial, and after a set number of trials, the sketch displays the results. I kept the visuals very minimal and focused more on clarity than decoration.

Embedded Sketch:


For the results, I used three images (a rabbit, a human, and a turtle) to represent fast, average, and slow reaction times. I also added a simple bar chart where each bar represents one trial. Faster reaction times result in shorter bars, while slower reaction times create taller bars. This helped make the differences between trials easier to see.

let barH = map(rt, 200, 2000, 20, 160);
barH = constrain(barH, 20, 160);

fill(255, 120, 170);
rect(chartX + i * barW, chartY - barH, 45, barH);

I’m proud of this section of the code because it takes the reaction time value and turns it into something visual. By using map(), I was able to convert reaction times into bar heights, which made the results much easier to interpret.

Most of the sketch was built using variables, arrays, conditional statements, functions, and simple state management. Using different states (intro, wait, go, and results) made the interaction much easier to organize. I also used millis() to measure reaction time, which I learned more from p5.js reference and looking at simple reaction-time examples online.
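The state logic can be sketched outside p5 by passing the current time in as a parameter (millis() in the real sketch), which also makes the early-press handling easy to see. The state and field names here are illustrative, not my exact sketch code.

```javascript
function createReactionTest() {
  return { state: "intro", goTime: 0, times: [] };
}

// Called every frame: switch from "wait" to "go" once the random delay passes.
function update(test, now) {
  if (test.state === "wait" && now >= test.goTime) {
    test.state = "go";
  }
}

// Called on a space bar press.
function onSpacePressed(test, now, randomDelay) {
  if (test.state === "intro") {
    test.state = "wait";
    test.goTime = now + randomDelay; // screen turns green at this time
  } else if (test.state === "wait") {
    // early press: ignore it instead of recording a bogus time
  } else if (test.state === "go") {
    test.times.push(now - test.goTime); // reaction time for this trial
    test.state = "intro";
  }
}
```

Ignoring presses in the "wait" state is the piece that prevents early presses from being counted.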

Overall, I’m happy with how this sketch turned out. It helped me understand how user input can be treated as data and then visualized in a clear way. The most challenging part was making sure the input behaved correctly in each state, especially preventing early presses from being counted. For future assignments, I would like to experiment with more detailed feedback per trial or different ways of visualizing reaction time beyond a bar chart.

Reading Reflection:

Norman’s reading made me realize that many everyday frustrations come from poor design rather than user error. He explains how objects often fail to clearly communicate how they should be used, which causes confusion even in simple situations like opening a door. This made me think about how important clarity and feedback are in interactive design.

Relating this to my own work, I realized that even small design choices in my sketches affect how intuitive they feel. If an interaction is not clear, users may not know what to do or may assume something is broken. Norman’s ideas encouraged me to think more carefully about how my sketches guide the user, and how simple visual cues or feedback can make interactions feel more natural and less frustrating.

reading reflection, week 3

First of all, I really enjoyed the way this was written. Ironically, I think it succeeds in emulating interactivity itself in a way, at least to the highest extent a static piece of text is capable of. There’s a clear pattern in the way he writes: he establishes a prospect, makes an attempt at predicting the reader’s reflections or reactions, and then responds accordingly.

He makes the argument that written words cannot be interactive. However, if we take his own definition of interactivity (two parties who, in turn, listen, think, and respond to one another), and if we assume he is at least somewhat correct in his estimations of the reader’s thoughts, there is clearly an interaction going on between author and reader. The author is performing an interaction, perhaps not with any reader in particular, but he is clearly making an attempt to reflect on and respond to a hypothetical audience. At one point he says that “movies don’t listen to their audience, nor do they think about what the audience may be saying,” and yet he manages to do exactly that through words alone.

I’m unsure how intentional this was on the author’s part. I was inclined to give him all the credit, yet he makes his opinion on the matter abundantly clear. I do agree that it isn’t ‘real’ interaction in the traditional sense, but it is a deeply psychological form of interaction that I think should be appreciated more. I think we could all benefit from achieving this to some level in our own writing.

 

reading response, week 4

I hate Apple and their designs so much. it’s very intentional too, the way they make it such an inconvenience to deal with any non-Apple electronic device to the point where you feel compelled to buy more of their products just to accommodate your phone. The way they impose unnecessary updates so that you’re almost forced to upgrade to a newer model because now your phone is no longer compatible with even their own products. The way they make it such a hassle to transfer your photos and data to a non-Apple product, so the longer you have an iPhone the harder it is to switch to another brand. The way they insist upon automatically sharpening your photos for no reason and making your selfies look fried to absolute hell with no ability to disable it. (Still don’t understand that one, it’s just stupid design.) The way their products are so awfully flimsy that they just break for no goddamn reason and Oh no! looks like you need to upgrade to another stupid ass iPhone because all your photos are automatically uploaded to the iCloud and you had no warning or prep time to find a way to save everything to another cloud! pisses me off. someday i’ll free myself of this corporate curse…

anyways if there’s one lesson about design you can glean from this god awful mess of an evil corporation, it’s to not make your consumers hate you, i guess. put the people first. also MacBooks are so confusing to navigate i still refuse to use one.

Week 4: Arts and Crafts Generative Text

My Concept:
For this week’s assignment, I was interested in creating a generative text for something specific and engaging. I explored different ideas, such as movie recommendations, a recipe cookbook, and motivation cards, but then I realized I wanted to create something playful and useful, something that’s fun for the user and easy to benefit from. That’s when I came up with the idea of an arts and crafts guide, since it’s simple, enjoyable, and something many people could try, and I personally like it as well. I created a note-like arts and crafts guide that gives the user multiple random ideas they could experiment with, whether using paper, paint, or other materials. It has a stable template of familiar instructions but allows the user to explore many ideas by simply clicking, changing the materials, actions, and the craft they will end up with. My vision was a board with a note pad on it, where other notes get stuck on top, so I chose a pastel brown background to resemble a pin board and a yellow note pad for the text. I also made the text color and the pins on the top corners change each time, creating a more realistic effect of a new note being stuck on top of an older one.

Embedded Sketch:

A Code I’m proud of:
A piece of code I’m particularly proud of is using the split function to allow one position of my sentence to generate multiple options. This way, one material could have more than one possible action and craft. I used the | character in the CSV file to separate multiple choices within a single row, and then used it in the sketch when calling the variables into the sentence, allowing the program to select one option for each category.

CSV file:

paper, fold|cut|glue, flower|card|plane
cardboard, fold|cut|glue, box|poster|house
clay, mold|shape|paint, bowl|vase|sculpture
cloth, sew|stitch|cut, shirt|bag|scarf
yarn, knit|weave|stitch, scarf|blanket|bracelet

Sketch file:

// These read and call each material, action, and object  from the chosen row
  // The actions and objects are split since they have multiple options assigned in each row
  let material = singleRow[MATERIAL];
  let actions = split(singleRow[ACTIONS], "|");
  let objects = split(singleRow[OBJECTS], "|");

  // This allows only one action and object to be chosen randomly
  let action = random(actions);
  let object = random(objects);

Another part of the code I’m satisfied with is the fonts. I enjoyed applying what we learned in class to change the font of my text, which made it look much better. I was also able to include more than one font in the same sketch.

// Define font variables to store the different ones used
let myFont;
let myFont2;
let myFont3;

// This function loads all the files in the sketch before it generates, to ensure they appear
  myFont = loadFont("titlefont.ttf");
  myFont2 = loadFont("textfont1.ttf");
  myFont3 = loadFont("textfont2.ttf");

// Draw and adjust the title of the note pad
  fill("black");
  textSize(32);
  textFont(myFont);
  text("Arts and Crafts", 6, -50);

  // Draw and adjust the text of instructions
  fill(textColor);
  textSize(16);
  textFont(myFont2);
  textWrap(WORD);
  text(idea, -90, 20, 190);

  // Provide text instructions at the bottom of the frame
  fill("rgb(103,85,79)");
  textFont(myFont3);
  textSize(16);
  text("Click for another idea!", width / 2, height - 20);

A minor piece of code I initially struggled with was the rotation of the note. At first, the note moved too slowly and didn’t look like a realistic new note being placed. I then figured out that I needed to adjust the positioning and randomize the range of radians, which gave the note the movement I wanted.

// Tilt the note pad
  noteAngle = radians(random(-5, 5));

Reflection and ideas for future work or improvements:
Overall, I feel that I was able to apply what we have learned over the past four weeks while also exploring additional aspects to create my project, and I am satisfied with the result. I personally enjoyed designing a set framework that can generate many different outcomes, while also allowing me to add personal touches. I find this concept very fascinating, and I think many different ideas, whether fun or practical, could be implemented in a neater and more accessible way using this approach. However, while I initially found the project interesting, I began to feel that it might be a bit basic as I worked on it. For future work, I feel I could improve the visual quality to make it more appealing to the viewer or user, for example, by adding elements like scissors or paint around the note pad, or including an icon for each craft. I could also make it more interactive, either by allowing the notes to move or by creating a book that actually flips pages, with photos and longer instructions for more advanced crafts, to give a more realistic and engaging experience.

References:
I mainly used the class slides to remind myself of the key concepts needed for creating generative text. In particular, I referred to uploading fonts and lists using different files, such as .csv and .ttf, along with the preload() function.

I also used the p5.js website to review and better understand specific functions, including:
https://p5js.org/reference/p5/translate/
https://p5js.org/reference/p5/rotate/
https://p5js.org/reference/p5/split/
https://p5js.org/reference/p5/int/ 

I was introduced to the textWrap() function using:
https://p5js.org/reference/p5/textWrap/ 

I used ChatGPT when I faced a setback with the movement of the note pad. As mentioned, I defined the noteAngle variable and used it in the mouseClicked() function to tilt the note. Initially, the note moved only a very small distance with each click, which made it look static and took away from the effect of new notes being added on top. ChatGPT helped me realize that my mistake was writing noteAngle = radians(-1, 1) with fixed numbers, which made the movement predictable. Instead, I learned to use the random() function to randomly select a position within the desired range on each click, producing the realistic and dynamic note movement I wanted.

homework week 4

i tried

so for this one i ended up making this floating eyeball angel thing that generates some cryptic dialogue in a manner reminiscent of 2d indie game boss fight interactions. (click the screen).

the background is very similar to the one i made for my week 2 assignment, nothing special there. I used frameCount to animate the ominous red rays emanating from the eyeball. The slightly trickier part was getting them to actually follow the eye as it oscillates, though it was pretty easy to figure out.

as for the eyeball itself, i implemented some rotation to give it some depth, and had it oscillate via a sin() function, for which i referred back to the slides and studied the example projects provided. i think he’s my favorite part since it turned out exactly as i intended.

//eyeball & bg animation settings
push();
let pos = 200 + sin(angle) * amp; //sin oscillation
angle += 1;
translate(100, 0);
strokeWeight(0.5);
stroke(0);
noFill();

let x = 300;
let y = pos;
let pattern = 10;

if (frameCount % 8 > 6) {
  pattern = 29;
} else if (frameCount % 8 > 4) {
  pattern = 24;
} else if (frameCount % 8 > 2) {
  pattern = 19;
} else {
  pattern = 17;
}
while (pattern > 1 && pattern < 1000) {
  strokeWeight(1);
  circle(x, y, pattern);
  pattern += 14;
}

//eyeball
strokeWeight(1);
stroke(255);
fill(0);
circle(300, pos, 95);
rotate(-5);
ellipse(262, pos + 27, 40, 55);
ellipse(265, pos + 27, 25, 35);
stroke(255);
strokeWeight(2);
fill(255, 0);
rotate(15);
ellipse(333, pos - 100, 120, 20);
pop();

the generated text was incredibly frustrating to figure out on short notice, especially since i insisted upon having it animated in a typewriter style. I’ll be honest, I still don’t fully understand what I did, especially since I stayed up all night figuring it out after accidentally deleting all my code (would not recommend). A friend taught me some tricks, and I also asked AI mode on Google to help explain how text generation works, but i did all the actual coding myself. My friend really saved my life, though.
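for anyone else fighting the typewriter effect, the core of it can be sketched in a few lines: reveal one more character of the dialogue every few frames, driven by frameCount. this is my own minimal illustration, not the exact code from the sketch, and 'speed' (frames per character) is an assumed parameter.

```javascript
// Returns the visible portion of the text after 'frameCount' frames,
// revealing one character every 'speed' frames.
function typewriter(fullText, frameCount, speed = 4) {
  const visibleChars = Math.min(
    fullText.length,
    Math.floor(frameCount / speed)
  );
  return fullText.substring(0, visibleChars);
}
```

in draw() you would render typewriter(currentLine, frameCount - lineStartFrame), resetting the start frame on each click so every new line types out from the beginning.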

i was thinking of adding some interactive dialogue options, but that’s for another time perhaps.

im so tired bro