Reading Reflection – Week 8

The Margaret Hamilton reading honestly made me think about how often people say “that would never happen” just because it’s easier than actually dealing with the possibility that it might. The people around her weren’t necessarily careless, but they were comfortable, and that comfort made them dismiss things that didn’t fit their expectations. What I liked about Hamilton is that she didn’t rely on that kind of thinking at all; she kept imagining worst-case scenarios even when no one else wanted to. When she talks about picturing “headlines in the newspapers… and it would point back to me,” it didn’t feel dramatic, it just felt like she understood responsibility in a very real way. It made me realize that being “good” at something isn’t just about skill; it’s also about how seriously you take the possibility of things going wrong. And I feel like we don’t really do that anymore: we trust things like apps and technology without thinking about what’s behind them or who had to sit there and think through all the ways things could fail.

The Norman reading felt a bit weird to me at first because his whole idea that “attractive things work better” sounded almost too simple, like something you’d see on a Pinterest quote, but the more I thought about it, the more I realized he’s not really talking about the object itself, but about us and how easily we’re affected by how something looks or feels. The part that actually stuck with me was when he says we “overlook design faults” when we’re in a positive emotional state, because that felt very true in an uncomfortable way. If something looks nice or feels smooth, we’re way less likely to question it, even if it’s not actually working properly. And that kind of made me disagree with him a little, because it’s not just that beauty helps; it also kind of distracts. It made me think about how often we trust things just because they look polished, and we don’t even realize we’re doing it.

When I put both readings together, they didn’t feel like they were saying the same thing at all, but they still connected in a weird way. Hamilton is so focused on not overlooking anything, even the smallest possible mistake, while Norman is basically saying that we, as people, do overlook things all the time, especially when something feels good or looks nice. And I think that difference between the two readings is what made this interesting for me, and at the same time created that connection in my head of the bigger picture of both readings.

Week 8 – Unusual Switch

Arduino file on GitHub

Arduino Set-up:

Arduino Illustration:

Project Demo:


Concept

For this project, I wanted to create a switch that isn’t like a traditional button. Instead of pressing something with my fingers, I used my body as the interaction. I ended up designing a switch using my elbow.

The idea is simple. I placed two pieces of aluminum foil on both sides of my inner elbow. One piece is connected to pin 5, and the other is connected to ground. When my arm is straight, the two pieces of foil don’t touch, so the circuit is open and the LED stays off. When I bend my arm, the skin folds slightly and the two pieces of foil press together, turning the LED on.

Code Snippet I’m Proud Of

const int switchPin = 5;

void setup() {
  pinMode(switchPin, INPUT_PULLUP);
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  int state = digitalRead(switchPin);

  if (state == LOW) {
    digitalWrite(LED_BUILTIN, HIGH);
  } else {
    digitalWrite(LED_BUILTIN, LOW);
  }
}

This part looks simple, but it took me a while to understand why it works. Using INPUT_PULLUP was the key because it keeps the input stable (HIGH) when nothing is touching, and the pin flips to LOW when the two pieces of foil touch and connect it to ground. Once I understood that LOW actually means pressed, everything started to make sense to me.

Problems Encountered

The biggest issue I had was that the LED kept blinking randomly, even when I wasn’t touching anything. At first, I thought my wiring was wrong, but the problem was actually the input pin floating. Without INPUT_PULLUP, the Arduino couldn’t decide whether the input was HIGH or LOW, so it kept switching between them.

Another problem was the foil itself. I initially wrapped the foil around the wires, which caused constant and unstable connections. This made the LED either stay on or flicker all the time. I had to learn to keep the foil flat and only tape the wire at one edge.

I also got confused by the lights on the board. I kept looking at the green power light instead of the LED connected to pin 13 (LED_BUILTIN), which made debugging way harder than it needed to be.

Reflection

This project was definitely frustrating at first, but it helped me understand how circuits actually work. Before this, I didn’t fully understand how important small details are, like which row you use on a breadboard or how stable a connection needs to be.

If I were to improve this project, I would make the foil setup more durable, neater, and less sensitive to unintentional contact. It works now, but it can definitely be cleaner and more organized.

Overall, I feel like I for sure moved from just following instructions to actually understanding what I’m building, and I’m really proud of myself.

Midterm Project – Madame Digital

Embedded Sketch

Concept

Have you ever wanted to take a picture with a celebrity, but without the impossible chances of actually meeting one? That is basically the fantasy world Madame Digital captures!

My midterm project is basically inspired by the whole wax museum experience, but I wanted to turn it into something more personal and interactive instead of just something you look at. Instead of walking around and seeing celebrity figures from a distance, the user actually gets to choose a celebrity and then step into a digital photobooth with them. Once you pick who you want, the webcam turns on and you see yourself on screen next to the celebrity, almost like you’re actually standing there together. You can adjust yourself in the frame, see a live mirrored preview, and then take a photo that combines both of you into one image. The final result is like a little souvenir picture you can save, which was important to me because I wanted it to feel like you’re taking something away from the experience, not just clicking through it. I really liked the idea of taking something that normally exists in a physical place and turning it into something accessible on your screen, but still keeping that same excitement.

At the same time, I did not want it to feel like just another random app with buttons. I wanted it to feel like entering a space. Something a little theatrical, and a little staged in the best way possible. That is why I leaned into the red curtain background, gold touches, framed celebrity cards, and a series of screens that slowly guide the user through the experience, really giving them the museum vibe. The point of my project was not to try to be realistic. If anything, its charm comes from how intentionally artificial it is, just like a wax museum. Even though the whole thing is obviously digital and staged, the user still gets to walk away with a “memory.” That is what made the concept feel fun to me from the start, and that is what I wanted to preserve throughout the project.

How the Project Works

My project works as an interactive experience that guides the user through a digital version of a wax museum. When the user opens the project, they begin on a title screen and are led through instructions, a celebrity selection gallery, a live photobooth, and finally a souvenir screen. At each stage, the user interacts with the screen by clicking buttons or selecting options, which moves them forward in the experience. Once a celebrity is chosen, the webcam is activated and places the user next to their selected celebrity in a shared frame. The user can then take a photo, which combines both images into a single souvenir-style picture that they can view and also save at the end.

To make my project flow between the different stages, I used the state-based system we learned in class, where a variable called screenMode (also known as gameState from class) controls which screen is being displayed to the user. This was honestly one of the most important parts of the project because it keeps everything organized and prevents it from feeling messy and chaotic. Depending on the value of screenMode, a different function runs that draws that specific screen.

if (screenMode === "start") {
  drawStart();
} else if (screenMode === "how") {
  drawHow();
} else if (screenMode === "gallery") {
  drawGallery();
} else if (screenMode === "booth") {
  drawBooth();
} else if (screenMode === "souvenir") {
  drawSouvenir();
}

I feel like this gameState system really shaped how my project came together. It really helped a lot with debugging, because if something wasn’t working, I could immediately tell which screen the problem was in.
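To show how moving between screens works, here is a simplified, hypothetical version of the transition logic written as a plain function (my real project does this inside p5’s mousePressed(), where each screen checks its own buttons):

```javascript
// Simplified sketch of the forward transitions between screens.
// The screen names match the screenMode values from my project;
// the helper function itself is hypothetical.
function nextScreen(screenMode) {
  if (screenMode === "start") return "how";
  if (screenMode === "how") return "gallery";
  if (screenMode === "gallery") return "booth";   // after picking a celebrity
  if (screenMode === "booth") return "souvenir";  // after the snap button
  return "start";                                 // restarting from the souvenir
}
```

Keeping the transitions this predictable is also what made debugging easier, since every click can only ever move the user to one known next screen.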

Another part I am really proud of is how I built the gallery using arrays and loops instead of placing everything manually. Each celebrity is stored as an object, which made the whole system more flexible and easier for me to expand.

let celebrityList = [
  { name: "Taylor Swift", portraitFileName: "taylor.jpg" },
  { name: "Beyoncé", portraitFileName: "beyonce.jpg" },
  { name: "Timothée Chalamet", portraitFileName: "timothee.jpg" }
];

// each portraitFileName gets loaded (with loadImage() in preload())
// and stored on the object as portraitImage, which is what gets drawn here
for (let i = 0; i < celebrityList.length; i++) {
  let x = 150 + i * 140;
  let y = 250;
  image(celebrityList[i].portraitImage, x, y, 120, 160);
}

I liked this part because it made the project feel more thought out technically. If I wanted to add more celebrities later, I wouldn’t have to redesign anything, I could just add more information, and it would work.
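The same layout math also makes click detection easy. Here is a hypothetical helper (not my exact code) that reuses the spacing from the gallery loop to figure out which card the mouse is over:

```javascript
// Hypothetical helper using the same layout math as the gallery loop:
// returns the index of the celebrity card under (mx, my), or -1 if none.
// Cards are 120 x 160, starting at x = 150 and spaced 140 apart, at y = 250.
function cardAt(mx, my, cardCount) {
  for (let i = 0; i < cardCount; i++) {
    let x = 150 + i * 140;
    let y = 250;
    if (mx >= x && mx <= x + 120 && my >= y && my <= y + 160) {
      return i;
    }
  }
  return -1;
}
```

Because the positions come from one formula instead of hardcoded numbers, adding a fourth celebrity wouldn’t require touching this function at all.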

The part I am most proud of though, is the photobooth and souvenir feature because that is where the whole idea really comes together. Once the user selects a celebrity, the webcam turns on and places them in the same frame. I mirrored the webcam so it works like a normal selfie camera, which makes it feel more natural and less confusing. When the user clicks the snap button, the code creates a new image that combines both the webcam photo and the celebrity photo into one final photo.

souvenirPicture = createGraphics(photoWidth * 2 + 40, photoHeight + 40);

souvenirPicture.push();
souvenirPicture.translate(photoWidth, 20);
souvenirPicture.scale(-1, 1);
souvenirPicture.image(webcam, 0, 0, photoWidth, photoHeight);
souvenirPicture.pop();

souvenirPicture.image(
  chosenCelebrity.portraitImage,
  photoWidth + 20,
  20,
  photoWidth,
  photoHeight
);

This is my favorite part because it turns the interaction into something personal that the user can actually keep. It’s not just clicking through screens, instead you end up with a photo that feels like a souvenir from the experience. I also think the overall flow is something I am proud of. Even though the project is playful, the user is always being guided from one step to the next. Even small choices, like adding a music toggle (on and off button) and using the same museum-ish background style across screens helped make everything feel connected.
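The scale(-1, 1) flip in the souvenir snippet above confused me at first, so it helped to write the transformation out as plain math: after translate(photoWidth, 20) and scale(-1, 1), anything drawn at some x ends up at photoWidth - x, which is exactly what mirrors the webcam.

```javascript
// The mirror transform from the souvenir code, as plain math
// (a small stand-alone demo, not part of the actual sketch):
// after translate(photoWidth, 20) and scale(-1, 1), a point
// drawn at local x lands at photoWidth - x in the souvenir.
function mirroredX(x, photoWidth) {
  return photoWidth - x;
}
```

So the left edge of the webcam frame ends up on the right and vice versa, which is why the final photo matches the mirrored live preview.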

Challenges and Areas for Improvement

One of the biggest challenges I ran into was with the webcam. At first, it showed up twice, once as a default video element and once on the canvas. That was confusing because it looked like something was wrong with my drawing code, but it was actually just how p5 handles video by default. Hiding the original video element and manually drawing it solved the problem, but it definitely took some time to figure out. 

Another challenge I ran into was with the proportions and measurements. I had to deal with a lot of them at once, so it definitely got confusing at times. As with coding in general, smaller challenges came up throughout, but with trial and error I was thankfully able to fix them.

If I were to improve the project further, one thing I would really want to add is celebrity specific music. Right now the experience already has sound, but I think it would be way more immersive if each celebrity had their own audio choice attached to them. For example, if the user chooses Beyoncé, the background sound could shift into one of her songs, or when choosing Timothée Chalamet or Jennifer Aniston, a different sound could play that matches their overall vibe. I think this would make the gallery feel much more alive overall and more like each choice leads to a slightly different experience.

I would also like to add more customization to the final souvenir, like filters, or decorative frames, or maybe even different photobooth themes. The current version showcases the concept really well, but I can definitely imagine advancing it to make the experience feel even more personalized in a way.

References

Throughout this project, I used a mix of p5 examples, class slides, YouTube videos, and a previous student project to help me understand and implement different features.

These helped me figure out how to display the webcam inside the canvas and structure the photobooth layout.

I used these to understand different variables in order to flip the webcam so it works like a mirror.

These were important for learning how to use createGraphics and combine multiple elements into one image, as well as other image-related code.

I also referenced a previous Intro to Interactive Media student project for this part. This helped me with the final image saving process.

These helped me understand how to create interactive buttons that respond to hover and clicks.

This example helped me structure the overall flow of my project and handle the different screens. I used a variable (screenMode) that works just like the in-class (gameState).

These helped me implement background music and button click sounds.

Links that helped with general concepts and object-oriented programming overall.

These helped me implement full-screen and understand the responsive canvas.

Font

Background Music

In addition to these references, I relied a lot on the class slides to understand the key concepts.

Usage of AI

I used ChatGPT mainly for the hardest parts of my project, especially when multiple things were happening at once and I couldn’t tell what exactly was going wrong. 

The most difficult part for me was the souvenir photo system. That part is not just simply “taking a picture,” it’s actually creating a completely new image, bringing the webcam photo, flipping it so it works like a mirror, placing it next to the celebrity image, and then saving everything together as one final frame. At first, I didn’t fully understand how that worked because it’s not the same as just drawing things on the screen. ChatGPT helped me understand that I was basically drawing on a separate hidden canvas and then displaying the result later. It also helped me understand how transformations work together, because if I flipped the webcam without positioning it correctly, it would either disappear or show up in the wrong place. That was one of the hardest parts because it required me to think about how everything is layered and combined, not just displayed.

Another difficult part was the celebrity gallery. I didn’t want to just place images randomly, I wanted it to feel like an actual selection screen with evenly spaced cards that respond to the mouse and adjust to different screen sizes. That meant I had to think about layout and proportions and spacing all at the same time. This part was actually more complicated than I thought because everything in my code is based on proportions like width * 0.25 or height * 0.16, so even small changes (like decimals) would completely change the layout in ways I didn’t expect. ChatGPT helped me understand how to think about these proportions more intentionally instead of just guessing numbers.

I also used ChatGPT to help me understand code that I was getting and adapting from references and other projects. Sometimes I would find an example that did something similar to what I wanted, but I didn’t fully understand how or why it worked. Instead of just copying it, I used ChatGPT to break it down and explain it to me, which helped me actually learn from it and adjust it to fit my own project.

Another thing I used AI for was generating some of my visuals. I used Gemini to create the celebrity images as well as the red curtain background, since I wanted a very specific “Madame Tussauds” style aesthetic and it was faster to generate and then refine them. 

Overall, I used AI more as a tool to understand and debug difficult parts of my project, and to generate some visuals, rather than to build the project itself obviously. 

Week 5 – Midterm Progress

Concept

Every time I walk into Madame Tussauds, I feel this strange mix of excitement and superficiality from the figures I’m encountering. You’re standing next to someone you’ve only ever seen on a screen, except they’re not really there, and yet it still feels like you “met” them. It’s staged and curated, but somehow still memorable in a way. That exact feeling is what I want to recreate for my midterm, but in a digital format.

I don’t want to build another game where you’re trying to score points or beat something. I want to build an experience you move through. My idea is to create a wax-museum inspired digital space where you browse through celebrities, pick one and take a photo with them in a photo booth setup.

The whole concept revolves around that illusion of artificial closeness. You’re not actually meeting anyone, but you still walk away with a cute memory. I want users to feel like they stepped into a staged exhibit for a few minutes and left with a souvenir!

Design

Visually, I don’t want this to look like a bright and cartoonish app. I want it to feel sort of dramatic in a way, with a dark background and clean and polished looking framed celebrity cards.

I actually like that wax museums feel a little staged and exaggerated. I want the digital version to embrace that instead of hiding it.

The experience will start with a dramatic opening screen with soft background music. Nothing moves until the user presses start. That intentional pause kind of mimics standing outside an exhibit before stepping in.

After that, there will be a short instruction screen, and then the gallery. The gallery will show multiple celebrity cards. When you click one, you’ll move into the photo booth scene.

In the photo booth, your webcam will appear next to the celebrity you chose. There will be a snap button with a camera shutter sound, and then a final screen showing your “souvenir.” From there, you’ll be able to restart without refreshing everything, because I don’t want the experience to feel like it just cuts off.

Sound is also very important for my interactive experience. The gallery will have background music, and I might let it shift slightly depending on the celebrity chosen. Small things like that will make it feel less flat and boring and more alive.

Frightening Part

The webcam honestly scares me the most. The whole idea depends on that photo moment. If the camera doesn’t work, the entire concept kind of collapses. Browsers can be weird about permission sometimes, and I don’t want to build this whole dramatic museum and then realize the main interaction fails.

Reducing Risk

Instead of leaving the webcam for later, I’m going to test it early. I want to make sure the camera actually works and shows up inside the canvas and that I can capture an image from it before I build everything else around it.

Testing the technical parts like the webcam early will make the rest feel less stressful, because once I know the main interaction works, I can focus on the atmosphere of my experience, which is honestly the part I care about the most.

Reading Reflection – Week 5

Reading this honestly made me laugh a little at the Marvin Minsky anecdote, the idea that “the problem of computer vision” could be assigned as a summer project feels almost delusional now, and I think the article uses that story perfectly to show how much we underestimate what vision actually means and what it really involves. What really stayed with me is the description of digital video as computationally “opaque,” because that word completely shifts how I think about it now. We all know text carries structure and meaning, whereas video is just, as stated in the text, rectangular pixel buffers with no built in meaning. Humans attach meaning almost instantly, whereas computers need instructions just to separate foreground from background.

I also found it interesting that many of the techniques that were mentioned in the reading, like frame differencing and brightness thresholding, sound simple but are actually incredibly dependent on the physical conditions of the place. The article kept emphasizing that no algorithm is completely “general,” and that honesty stood out to me because it means computer vision only really works smoothly and successfully when the environment is carefully prepared for it, which is actually crazy if you think about it, because it feels like everything you once knew about how computers see was a lie. The workshop example with the white Foamcore made that very clear, since the students basically redesigned their physical space to make brightness thresholding easier. That detail made me realize that computer vision is not just about writing smarter, more complex code, but also about kind of staging reality so the system can read it, which feels less like artificial intelligence and more like controlled intelligence.

The surveillance-themed works from the reading added another layer that I couldn’t ignore. When Rokeby describes his system as “looking for moving things that might be people,” the phrasing feels sort of purposefully detached, and that detachment made me feel a little unsettled. The same foundational techniques that allowed Videoplace to create playful full body interactions are also what made Suicide Box possible, quietly recording real tragedies, which is just so scary to think about. I think that tension is what makes computer vision in interactive art powerful and complicated at the same time, because it forces us to confront how easily bodies can be tracked and reorganized into data. For me personally, the most compelling idea I got from this reading is that computer vision does not really just detect what is there, but kind of reflects what we choose to prioritize and make visible to the computer. Overall, this was an extremely fascinating reading and truly opened my eyes to the “true” meaning and reality behind computer vision.

Week 4 – Generative Text Output Sketch

My Concept:

For this week’s assignment, I wanted to create something that felt relatable and personal instead of just random words being generated. Since we learned how to load data from a CSV file and generate text from it, I thought it would be fun to make a NYUAD Advice Generator. The idea was to take simple words from a file and turn them into small pieces of campus advice that feel random, questionable, but still kind of make sense.

I liked the idea of the computer choosing the advice instead of me hardcoding every sentence, because it makes the sketch feel unpredictable and sort of personalized in a way. Every time you click, you get something different, which makes it feel more interactive and less fixed and predetermined. I also wanted it to look clean and centered, so it feels like an actual little advice card instead of just text floating on a screen.

Highlight of Sketch:

One part of the sketch that I am most proud of is how I organized the text generation into a function called generateAdvice(). Organizing all those lines of code into one function made it so much more organized and easier for me to call the function rather than repeatedly rewriting the lines of code.

// I created a function for the lines of code that generate the advice, for more efficient and easy-to-use code. This way I can call the function anywhere in my code without having to repeatedly write these lines again.
function generateAdvice() {

  //these lines of code were heavily inspired by the random poem generator example from class.

  // this line picks a random line from advice.csv file.
  let randomLine = lines[int(random(lines.length))];

  // this line splits the words from the array by commas
  let row = split(randomLine, ",");

  // after splitting the csv line, row is now an array and each position in the array holds one word from the advice file, and these lines of code extract each word using the index constants I defined at the top of the code.
  
  //these lines of code are similar to the poetry example from class but are organized in a clearer and more concise way that fits my code.
  let scenario = row[SCENARIO];
  let action = row[ACTION];
  let item = row[ITEM];
  let location = row[LOCATION];

  // these lines of code build the advice clearly.
  adviceLine1 = "If you feel " + scenario + "...";
  adviceLine2 = action + " with your " + item + ".";
  adviceLine3 = "Go to " + location + ".";
}

I am also proud of how I centered the card and used width and height instead of random numbers. That made the layout feel more intentional and organized. It was definitely a small detail, but it made the whole sketch look more put together and versatile for all screens and canvases.

function drawCard() {

  // centered card dimensions using width and height to avoid magic, hardcoded numbers.
  let cardWidth = width * 0.7;
  let cardHeight = height * 0.7;

  // rect() starts drawing from the top-left corner, so if I want the card in the
  // middle of the screen, I have to move it left and up by half of its size,
  // and this makes the card sit exactly in the center. x being the horizontal
  // position and y being the vertical.
  let cardX = width / 2 - cardWidth / 2;
  let cardY = height / 2 - cardHeight / 2;

  rect(cardX, cardY, cardWidth, cardHeight);
}

Embedded Sketch:

How This Was Made:

I started by deciding what kind of text I wanted to generate, and once I chose the NYUAD advice idea, I created a CSV file with single words separated by commas, just like the poem words example from class. Each line followed the same structure so that the program could read it properly, and the sentences/advice would make sense to a certain extent.

Then in my code, I used loadStrings() inside preload() to load the file before the sketch starts. In draw(), I used random() to pick one line from the file and split() to separate the words by commas. After splitting, the row becomes an array, and I used the index positions to take out each word from the array.

Once I had each word stored in its own variable, I built the final advice sentence using string concatenation (which is a fancy term that means putting pieces of text together), just like the poem generator example from class. I then displayed the text using text() and centered everything on the screen. I used noLoop() so the advice only changes when the user clicks, and mouseClicked() to restart that loop. This makes the sketch interactive instead of constantly refreshing on its own.
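The split-and-concatenate idea can be shown in a tiny stand-alone demo. The sample line below is made up; it just follows the same scenario,action,item,location format I used in advice.csv:

```javascript
// A made-up sample line in the same format as my advice.csv file:
let randomLine = "overwhelmed,take a nap,laptop,the library";

// plain JavaScript .split() does the same job as p5's split(randomLine, ",")
let row = randomLine.split(",");

// build the three advice lines by concatenating the pieces
let adviceLine1 = "If you feel " + row[0] + "...";
let adviceLine2 = row[1] + " with your " + row[2] + ".";
let adviceLine3 = "Go to " + row[3] + ".";
```

Running this produces “If you feel overwhelmed...”, “take a nap with your laptop.”, and “Go to the library.” as the three lines of the advice card.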

Reflection and Future Ideas:

Overall, I think this project helped me better understand how loading data works in p5. At first, I didn’t fully understand what split was doing, but after working through it step by step, I realized it’s just turning a string into pieces that I can access individually. That made the whole idea of generative text feel less intimidating and confusing.

There were definitely small moments of confusion, especially when thinking about where the text generation should go and how it compares to the professor’s example. But figuring that out helped me understand the structure of draw(), noLoop(), and mouse interaction much more clearly.

In the future, I would like to experiment with more complex text structures, maybe adding more variables per line or creating multiple sentence formats. I also think it would be interesting to combine generative text with animation so the words themselves feel more dynamic and for the whole sketch to just be more entertaining visually. Overall, really proud of this cute little sketch.

References:

Reading Reflection – Week 4

One thing that really drives me crazy is when websites log me out suddenly while I’m still working on something. I could literally be in the middle of typing a really long text or even filling out a form that’s like 20 questions, and suddenly the website automatically takes me back to the login page, and everything I wrote is gone and erased. It makes me feel so frustrated and annoyed at myself and the website. However, after reading Norman, I realized this is a failure of feedback and signifiers. My mental model assumes that if I’m actively typing, the system will know that I’m there. But clearly it’s running on some sort of invisible timer that I can’t see, and there’s nothing to signal that I’m about to lose everything I worked so hard on. I think this could be improved so easily by adding a simple message like “You will be logged out in 2 minutes” or even by just automatically saving drafts. Just anything really that respects the fact that users are human and not always aware of the system’s hidden rules.

If I apply Norman’s design principles to interactive media, I would focus on making those hidden rules visible in a way. I would make sure there are clear signifiers and signals showing what’s happening and clear feedback so users don’t feel confused or surprised by sudden things that the system outputs. I would also design while keeping in mind that people will make mistakes and are prone to getting distracted, because that’s just normal human nature. The reading honestly made me rethink how often we blame ourselves for doing things wrong or not understanding when really the design just didn’t communicate those rules clearly. Overall, a really eye-opening read that made me realize that good systems should make the user feel very capable rather than lost and confused.

Reading Reflection – Week 3

The reading by Crawford pushed me to reflect more carefully on how I define interaction in my own work. I usually associate interaction with visible movement or manual input, but Crawford presents it as an exchange in which the user acts and the system responds in ways the user can notice and understand. The user stays engaged when their actions change behavior over time and when feedback feels connected to what they do. Crawford emphasizes listening as a core part of interaction, meaning the system changes only when the user’s input affects future responses. This idea made me realize how easily interaction becomes shallow and meaningless when the system continues regardless of the user’s presence.

When thinking of how I want to apply this idea to my p5 sketches, I see areas where interaction feels just a bit limited in a way. Many sketches rely on randomness and animation, which creates visuals but does not always depend on the user’s actions. When the user moves the mouse or clicks, the sketch often continues in a similar way, which kind of weakens the sense of exchange Crawford describes in the reading. To improve this, I want the user’s input to influence structure rather than just surface behavior. Mouse movement could change speed, and clicks could change patterns. Changes like these would force the system to listen and respond, which aligns more closely with Crawford’s definition of interaction and helps the work feel more intentional and responsive rather than automatic and fixed. Overall, Crawford’s text really opened my eyes to the concept of interaction, and definitely deepened my understanding of how I want to incorporate it in my own work in the future.

Week 3 – Object-Oriented Programming Generative Artwork

My Concept:

I started this assignment by thinking about the idea of night and day and how they slowly transition into each other, and I wanted to make something that shows both in one single sketch. I also wanted the user to have some control over what they’re seeing, and since we learned about objects and arrays in class this week, I knew I wanted to use those instead of just drawing everything individually. The final idea was to have the left side of the screen feel more like night and the right side feel more like day, all controlled by the mouse and the user. At night there are stars, and during the day there are clouds. The sketch changes depending on how the user interacts with it, which makes it feel more alive and interactive.

Highlight of Sketch:

class Star {
  constructor(x, y) {
    // x and y store the position of the star.
    this.x = x;
    this.y = y;
  }
}

// Looping through the clouds array to update and draw every cloud.
for (let i = 0; i < clouds.length; i++) {
  clouds[i].move();
  clouds[i].wrap();
  clouds[i].show(dayAmount);
}

// Blending between night and day colors. I used lerp to smoothly transition values.
let skyR = lerp(nightR, dayR, dayAmount);
let skyG = lerp(nightG, dayG, dayAmount);
let skyB = lerp(nightB, dayB, dayAmount);

background(skyR, skyG, skyB);

// Same idea for the ground at the bottom of the canvas.
let grassR = lerp(20, 40, dayAmount);
let grassG = lerp(40, 170, dayAmount);
let grassB = lerp(30, 70, dayAmount);
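The snippet above uses dayAmount without showing where it comes from; per the description of the sketch, it is driven by mouseX. A minimal reconstruction of that idea is below. lerp is a p5 built-in, redefined here so the snippet runs on its own, and the nightR/dayR constants are placeholder values, not the sketch's actual colors:

```javascript
// p5's lerp: linear interpolation between a and b by amount t in [0, 1].
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// mouseX on the far left gives full night (0), far right gives full day (1),
// clamped so values outside the canvas stay in range.
function dayAmountFromMouse(mouseX, width) {
  return Math.min(Math.max(mouseX / width, 0), 1);
}

const dayAmount = dayAmountFromMouse(300, 400); // 0.75: mostly day
const nightR = 10, dayR = 135;                  // placeholder color channels
const skyR = lerp(nightR, dayR, dayAmount);     // 103.75
```

Because dayAmount changes continuously with the mouse, the sky never jumps between two fixed scenes; it slides through every in-between color.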

One part of the sketch that I am most proud of is my use of object-oriented programming for the stars and clouds. This was something new for me, and at first it was confusing, but once it worked it made the sketch feel much more organized. I am also proud of learning and using lerp to blend colors smoothly.

Embedded Sketch:

How this was Made:

I started by deciding what the sketch would visually look like before writing any code. Once I had the idea of night and day, I planned how to break it into smaller parts. I first created two arrays, one for stars and one for clouds. Then I made separate class files for each, just like how we used ball.js in class. Each class has its own variables for position, speed, and functions. After that, I used for loops in setup() to create multiple stars and clouds and store them in the arrays. In draw(), I used mouseX to control whether the scene should be more night or more day. I then looped through each array and called the move and show functions for every object, which created the animation. I also added a cute and simple sun and moon so the theme is easier to understand for the user. Finally, I used mousePressed() to add interaction, where clicking adds either a star or a cloud depending on the time of day. Overall, my sketch came together quite nicely by combining loops, arrays, objects, and interaction, all based on what we learned in class.
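The mousePressed() behavior described above could look roughly like this. The class bodies are stripped down to just positions for illustration, and the 0.5 threshold is my own assumption about where "night" ends and "day" begins:

```javascript
// Minimal stand-ins for the sketch's classes: only position is kept here.
class Star {
  constructor(x, y) { this.x = x; this.y = y; }
}
class Cloud {
  constructor(x, y) { this.x = x; this.y = y; }
}

const stars = [];
const clouds = [];

// Clicking adds a star on the night side, a cloud on the day side.
// dayAmount runs from 0 (night) to 1 (day), as in the sketch.
function addObjectAt(x, y, dayAmount) {
  if (dayAmount < 0.5) {
    stars.push(new Star(x, y));
    return "star";
  }
  clouds.push(new Cloud(x, y));
  return "cloud";
}

// In p5, mousePressed() would call: addObjectAt(mouseX, mouseY, mouseX / width)
console.log(addObjectAt(50, 80, 0.1)); // "star"
```

Pushing new objects into the same arrays the draw loop already iterates over is what makes the interaction feel seamless: the next frame simply picks them up.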

Reflection and Future Ideas:

Overall, I am happy with how this project turned out, especially since object-oriented programming was something very new to me. There were definitely moments where things didn’t work the way I expected, and I faced a lot of trial and error, especially with juggling the different class files and choosing number values for all the variables and functions, but fixing those issues helped me understand the code a lot better. For this assignment in particular, I really went out of my way to experiment and learn new things to elevate my code. Using different articles and references helped me create a more advanced final piece, which I am extremely proud of. I think the night and day transition came out really well and feels smooth without being too complicated. In the future, I would like to experiment more with animation and movement in p5, and create pieces that feel even more dynamic and interactive.

References:

Reading Reflection – Week 2

In my work, I plan to incorporate randomness as a major element, because I believe a piece built solely on order and rigid structure yields a boring, fixed outcome. As Reas mentions in his talk, randomness breaks the artist’s habits, and I completely agree. Randomness allows for a more lively and lifelike outcome. Reas mentions that when an artist insists on controlling everything, they tend to repeat patterns and styles, so randomness lets the artist break free from the patterns they plan. In my own work, I tend to be more satisfied with my outcomes when randomness is a strong component, as it adds an element of surprise and spontaneity, which, in my opinion, deepens the work through the unexpectedness of the results. Reas’s talk allowed me to appreciate randomness more and helped me understand my tendency to be more satisfied with my code when elements of randomness are involved.

Although I find randomness a very important and desirable element in my work, I do believe there needs to be a balance between total randomness and complete control. When working on a piece, finding that sweet spot between the fixed and the spontaneous is important. If the code relies on too much randomness, it becomes chaotic or even messy; if it is completely controlled by the artist, it falls into rigid, repeated patterns. The right balance depends on the specific work, but for the most part, a roughly equal share of both allows for a well-balanced piece. Reas also agrees that pure randomness alone is not enough to make a good piece. It may loosen up a very rigidly structured work, but some control is necessary. Reas’s talk supports the idea that randomness allows for variation and a less predictable outcome; however, order and structure are what make art artistic rather than accidental.
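One concrete way to express that balance in code is an ordered grid (the control) with a small random jitter on each point (the surprise). This is my own illustrative example, not something from the talk; the grid size and jitter amount are arbitrary choices:

```javascript
// An ordered grid of points with random jitter: structure plus deviation.
// jitter = 0 gives a perfectly rigid grid; a large jitter approaches chaos.
function jitteredGrid(cols, rows, spacing, jitter, rand = Math.random) {
  const points = [];
  for (let i = 0; i < cols; i++) {
    for (let j = 0; j < rows; j++) {
      points.push({
        x: i * spacing + (rand() - 0.5) * jitter, // base position + small offset
        y: j * spacing + (rand() - 0.5) * jitter,
      });
    }
  }
  return points;
}

const rigid = jitteredGrid(3, 3, 50, 0); // no jitter: pure control
console.log(rigid.length); // 9
```

Turning the single jitter parameter up or down is exactly the dial between Reas's two extremes: the underlying grid keeps the piece from looking accidental, while the jitter keeps it from looking mechanical.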