Week 8 – Creative Switch

Concept:

For this week’s assignment, I created a system that reminds me to sit with a straight posture and avoid slouching. The system uses two sheets of aluminum foil connected to a buzzer. When the foil sheets touch (your back is against the chair, meaning you are sitting up straight), the buzzer stays silent. When the sheets separate, meaning you are slouching with your back off the chair, the buzzer sounds to remind you to sit up straight.

Demo + Setup:

 

Github Code: https://github.com/deema-hazim/into-to-IM-week-8/blob/main/Posture%20Buzzer%20Arduino%20Code

Implementation:

To create my foil switch, I taped two thin sheets of foil to a dishwashing sponge. This works as a compression switch: when you press the foils together, they close the circuit, and when the pressure is released, the sponge between them separates the foils and opens the circuit.

I connected one foil sheet to digital pin 2 and the other to GND. This functions as the switch: when the foils touch, the circuit is closed, and when they separate, it is open.

For the buzzer, I connected its negative leg to GND and its positive leg to digital pin 8.

const int foilPin = 2; //foil switch pin
const int buzzer = 8;  //buzzer pin

void setup() {
  pinMode(foilPin, INPUT_PULLUP);
}

For the code, I used a pin mode that was new to me: INPUT_PULLUP. I found it while researching Arduino coding; it enables the Arduino’s internal pull-up resistor so you do not have to add a physical resistor to the circuit. With the pull-up enabled, the pin reads HIGH when the foils are apart and LOW when they touch GND. I wanted to test this out to explore the code a bit.

int state = digitalRead(foilPin);

if (state == HIGH) { //foils not touching (slouching)
    delay(1500); //wait 1.5 seconds in case the person is just moving
    if (digitalRead(foilPin) == HIGH) { //read the input again; the person is still slouching
      tone(buzzer, 1000); //sound the buzzer
    }
  } else {
    noTone(buzzer); //foils touching, so stay silent
  }

I set up an if-else statement: when the state of the foil switch is HIGH, meaning the foils are not touching and the circuit is open, the buzzer waits 1.5 seconds before sounding, to account for the user just adjusting their sitting position. After the delay, a second check confirms the foils are in fact still not touching, and only then does the buzzer sound. Otherwise, as long as the circuit is closed and the foils are touching, no tone is produced.

Reflection:

Overall, I am quite proud of my project and the usefulness behind its idea. It was also really exciting and fun to work with the Arduino, and I enjoyed combining physical building with coding.

For the future, I would like to have longer wires so that I can set the Arduino and breadboard setup on my table when the foils are taped to my chair. I considered taping the entire setup to the chair, but that would then make it uncomfortable for the person sitting on the chair.

References:

Week 8 – Reading Reflection

A paragraph that stood out to me in the reading on Margaret Hamilton was the one discussing how programmers used to write, run, and test code. Nowadays, we simply use a text editor to write code, compile it, and run it in a matter of seconds. Back then, however, code had to be punched into paper cards, fed into a machine to be compiled and run, which could take hours, and then, to insert the code into the spaceship, wires had to be hand-woven through rings. This made me reflect on how far the software industry has advanced since then.

As a current computer science student, I cannot imagine how difficult my university experience would have been if I were a student in the 1960s. Today, I can easily make changes to my code and test them immediately, which allows me to focus on efficiency, functionality, and aesthetics. Reflecting on the second reading about how attractive things work better, I imagine that in the 1960s, since code was much more difficult to develop, software engineers likely had to prioritize functionality over aesthetics. If they had to prioritize both, it would have made their work even more challenging than it already was.

Additionally, a large part of making software aesthetically pleasing involves constantly editing and testing the placement and design of elements such as buttons and images. Adjusting even a single button can require hundreds of recompilations and test runs. This process would have been extremely difficult in the 1960s, when every small change required going through the entire process of punching holes into cards, feeding them into a machine, and waiting hours for results.

The software industry is constantly evolving and improving. More recently, we have seen groundbreaking advancements in generative AI, which can assist in developing and running code. Many software engineers now use AI to address the challenge Don Norman discusses in “Attractive Things Work Better.” By using AI to help build the front end, they can shift more of their focus toward the functionality and theoretical aspects of software. This also speeds up the development process by providing a visual starting point.

Midterm Project – F1 Pitstop Game

Concept:

Inspired by pit stops in Formula 1, the goal of this game is to change tires in the correct order as quickly as possible.

There are 3 rounds per game. In each round, the car enters your garage, and you are given a random order in which to change the tires. The order changes every round. In the red banner at the top of the screen, you see the tire you currently have to change. You must click that tire on the car to “change” it. The game requires not only speed but also accuracy: each wrong tire you click costs you a 2-second penalty, added to your total time at the end.

Once all 3 rounds are complete, you can see your final time and the time taken during each round. The game also tracks your high score, even when you restart the sketch, so you can come back to beat your high score at any time!

Final Game + Code:

*For the best experience, open the sketch in a new tab and play the game in full screen.

Implementation:

I began by creating the audio files I needed for the game. I went to YouTube to find the F1 theme song for the start and results pages, and I also found a video of a real pit stop to use for the wheelgun and engine sounds. I converted each video to an MP3 file and used an audio-cutting website to trim out just the wheelgun and engine sounds. I also had to shorten the F1 theme audio, since the file was too large to upload to the p5 editor.

Then, I gathered all my images. I asked ChatGPT to create a cartoon-ish pitlane image with arrows and lines to use in my game since I could not find the exact image I needed online. I also found a cartoon F1 car with a transparent background.

I also chose a font to use from Google Fonts and uploaded that as well.

Then, I researched the different functions I knew I needed. First, as the professor suggested, I wanted to implement high score tracking, where the user’s best score is saved even when they restart the sketch. I found the function storeItem(), which does exactly that. I also needed a way to track time for the timer, so I found the millis() function, which returns the number of milliseconds since the sketch started running. Finally, I needed a way to shuffle the order of the tires. My initial idea was to randomly generate a number between 0 and 3, mark each number as used once drawn, and use it to index into the tire array to pick the tire displayed to the player. However, I realized this would make it more complicated to track the current tire and whether the user clicked the correct one, and it is also inefficient. So, I searched Google for a function that shuffles the items in an array and found shuffle().
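Under the hood, p5’s shuffle() implements a Fisher–Yates shuffle. A plain-JavaScript sketch of the same idea (the function and variable names here are my own, not from the project code):

```javascript
//a minimal Fisher-Yates shuffle, the same idea p5's shuffle() implements
function shuffleArray(arr) {
  let result = arr.slice(); //copy so the original order is untouched
  for (let i = result.length - 1; i > 0; i--) {
    //pick a random index from 0..i and swap it into position i
    let j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

let tires = ["Front Left", "Front Right", "Rear Left", "Rear Right"];
let sequence = shuffleArray(tires); //a random permutation of the four labels
```

Every permutation is equally likely, and the original array is left intact, which is handy when the same tire labels are reshuffled each round.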

The game is built around 4 classes. First is the GameManager class, which controls everything. It tracks which of the 4 screens should currently be displayed: start, instructions, playing, or results. It also manages the progression between the three rounds, calculates the total pit stop time, compares it against the stored high score, and handles the mute state for the background music on the main and results screens. The high score is stored using storeItem(), which saves a value under a label you give it in the browser’s localStorage; for retrieval, you use getItem(). Initially, I did not have a mute button, but after having my friends test the game, one of them suggested it because of the song’s loud volume and how it can become annoying after a while.

The Car class handles how the car drives into and out of the pit box. A few properties control its movement: y, its current position; targetY, where it needs to stop; and two boolean flags, arrived and exiting, that track which stage of the animation it is in. Every frame, the update() method runs and uses

lerp(this.y, this.targetY, 0.04)

to move the car smoothly toward the center. lerp() works by taking 4% of the remaining distance each frame, which means the car moves fast at first and naturally slows down as it gets closer, creating a deceleration effect. I only used lerp() for the entrance of the car because logically when the car accelerates to the top of the screen it is leaving the garage to go back on track, so it must leave the screen quickly to give that effect. Once the difference between the current position and the target is less than 1 pixel, the car snaps exactly to the center and arrived is set to true. When the pit stop is complete, exiting is set to true and the car moves upward until it disappears off the top edge. To draw the car, I used imageMode(CENTER) with translate(), so the image is always perfectly centered on the car’s current coordinates.
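The deceleration and the snap threshold can be illustrated in plain JavaScript with a hand-written lerp (the 0.04 factor and 1-pixel snap come from the post; the starting position and helper names are mine):

```javascript
//plain-JavaScript version of p5's lerp(): move fraction t of the way from a to b
function lerp(a, b, t) {
  return a + (b - a) * t;
}

//simulate the car easing from y = 600 toward targetY = 300
let y = 600;
const targetY = 300;
let frames = 0;
while (Math.abs(y - targetY) >= 1) {
  y = lerp(y, targetY, 0.04); //each frame covers 4% of the remaining distance
  frames++;
}
y = targetY; //snap exactly to center once the gap is under 1 pixel (arrived = true)
```

Because each frame removes a fixed fraction of the remaining distance, the motion is fast at first and tapers off exponentially, which is why the snap is needed: the gap shrinks toward zero but never quite reaches it.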

The PitStop class controls the main gameplay logic. At the start of every round the start() method calls shuffle() on the array storing the tire labels to generate a random sequence, and the variable currentStep keeps track of which position in that sequence the player is currently on. The timer only starts counting when the car has fully arrived, using a boolean flag timerStarted so that millis() is only recorded as startTime the moment the banner appears. The elapsed time is calculated every frame as

(millis() - this.startTime) / 1000 + this.penalty

because dividing by 1000 converts milliseconds to seconds and penalty adds any two second penalties from wrong clicks. When a wrong tire is clicked, wrongFlash is set to true and wrongTimer records the exact millisecond it happened, then in update() the code checks if 500 milliseconds have passed since then and resets the flash.
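The same arithmetic can be pulled into a small standalone helper (the function name is mine, not from the sketch), which makes the unit conversion easy to verify:

```javascript
//compute elapsed round time in seconds from two millisecond timestamps,
//plus any accumulated penalty seconds (helper name is my own)
function elapsedSeconds(nowMs, startMs, penaltySeconds) {
  return (nowMs - startMs) / 1000 + penaltySeconds;
}

//e.g. 4500 ms after the banner appeared, with one 2-second wrong-click penalty:
let t = elapsedSeconds(4500, 0, 2); //6.5 seconds
```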

The Tire class handles each of the four individual tires, and each tire is an instance of the Tire class. Each tire stores its position as offsetX and offsetY, which are fractions of the canvas size, and the actual pixel coordinates are calculated every frame using getX() which returns gameManager.car.x + this.offsetX * width and getY() which returns gameManager.car.y + this.offsetY * height. I created functions to calculate the tire’s X and Y positions rather than giving them a static value in the constructor so they can be resized with the car when the window size is changed. Each tire also has a state property that is set to either ‘waiting’, ‘active’, ‘done’, or ‘wrong’, and the draw() method uses these states to determine the color it should display each tire with. Click detection in isClicked() works by checking whether the mouse coordinates fall within the rectangle representing the tire.
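The rectangle check in isClicked() boils down to four comparisons. A minimal standalone version (the function name and the example coordinates are mine):

```javascript
//returns true if point (mx, my) falls inside the rectangle whose top-left
//corner is (x, y) with the given width and height
function pointInRect(mx, my, x, y, w, h) {
  return mx >= x && mx <= x + w && my >= y && my <= y + h;
}

let hit = pointInRect(110, 120, 100, 100, 50, 50); //true: inside the tire
let miss = pointInRect(10, 10, 100, 100, 50, 50);  //false: outside
```

In the actual Tire class, x and y would come from getX() and getY() each frame, so the hit area follows the car and scales with the window.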

Parts I am proud of:

The feature I am most proud of is definitely the high score. When I initially heard the idea, I didn’t think I would be able to implement it, assuming it might be too complicated. However, I am glad I found the storeItem() and getItem() functions and that they were extremely simple to use.

This is where the high score is retrieved in the reset()/constructor function of the GameManager.

//load the saved high score from browser storage
//if nothing has been saved yet, default to 0
this.highScore = getItem("pitStopHighScore") || 0;

And here, it checks whether the player achieved a new high score and updates the variable accordingly. It also sets a boolean flag that is used to display a new high score message on the results screen.

//check if this is a new high score
if (this.highScore == 0 || this.totalTime < this.highScore) {
  //new best. save it to browser storage
  this.highScore = this.totalTime;
  storeItem("pitStopHighScore", this.highScore);
  this.isNewBest = true;
} else {
  this.isNewBest = false;
}

I am also generally proud of the graphics and sounds used in the game. It ended up looking way better than I imagined I would be able to make it.

Challenges and Areas for Improvement:

One bug I ran into was that the timer was starting at around 2 seconds instead of 0 at the beginning of each round. I realized this was because I was starting the timer the moment the round began in startRound(), so it was not accounting for the fact that the car still needed to move in from the bottom of the screen before the player could actually do anything. So, by the time the tires appeared, the timer had already been counting for however long the entry animation took. I fixed this by adding a boolean flag called timerStarted to the PitStop class and a startTimer() method that only records the start time if the timer hasn’t begun yet. Instead of starting the timer in start(), I call startTimer() inside drawPlaying() in the GameManager, but only once car.arrived is true, so that the timer starts at exactly the same moment the banner and tires appear on screen.

For the future, I would like to add more complexity to the game, perhaps more rules or different game modes. Another feature I thought of later, which would allow players to score lower times, is showing the full sequence of tires they need to click for that round. Players could then anticipate the next tire and click it faster.

References:

Pitlane Background:

Audio:

Function References:

Font:

Week 5 – Midterm Progress

Concept:

For my midterm project, I decided to create something inspired by one of my passions, Formula 1. I have been a huge Formula 1 fan for years, so I wanted to create something for my midterm project that I will enjoy working on.

I knew I wanted to create something related to Formula 1, but it took me some time to come up with my final idea. In the end, I decided to create a pit stop simulator game.

A pit stop in Formula 1 is the process of a driver coming into the garage to change their tyres mid-race. The entire process needs to take about 2-3 seconds to avoid the driver losing positions in the race. Below is a video demo of what a pit stop looks like.

The game will consist of 3 rounds. In each round, the car slides onto the screen from the side, and you are given the order in which you need to change the tyres. You must click the tyres in the correct order as quickly as possible. If you click the wrong tyre, a 2.0s penalty is added to your time. Your goal is to score the lowest time possible.

Design:

I started by asking generative AI to create sample screen pages because I was struggling to imagine what they could look like. Below are a sample introduction screen, instructions page, and gameplay screen.

My original idea for the actual gameplay was to have an image of the pit stop view from the top. Something like this:

Pit Stop Top View F1 Racing Stock Vector (Royalty Free) 2602757631 | Shutterstock

However, I am struggling to find the exact image I am looking for, which is just a cartoonish pit stop area, clear of the car and pit crew. Therefore, I have designed the game differently for now. If I do find the background image I am looking for, I will change the design.

For sounds, I plan to incorporate a sound for the car pulling into and out of the “garage”, and the sound of the wheel gun when the player selects the right tyre.

Most Challenging Part:

Something I am a bit stressed about is tracking everything. When I think of how to implement each feature individually, it seems simple in my head; however, connecting everything, such as displaying the next tyre to be changed after the user completes it, and detecting if the user has clicked the right tyre or not, sounds a little bit overwhelming.

Another challenging part is ensuring all the elements on the screen move accordingly when the game is in full-screen mode.

Risk Reduction:

To reduce the risk, I need to properly plan which classes I am going to create and what elements need to be linked together before I begin coding. I think this would help me a lot with the full implementation and organization of the code. It would also give me a better understanding of how to translate my vision for the game into reality.

 

Week 5 – Reading Reflection

The main difference between human vision and computer vision is the limitations of computer vision. The text mentions that “no computer vision algorithm is completely ‘general’.” This means that none of them can perform reliably given any possible video input. Each algorithm comes with specific assumptions about what the scene will look like, and if those assumptions aren’t met, the results can be poor, ambiguous, or completely broken. This is obviously very different from human vision which is significantly more adaptable. We are able to recognize almost anything in any environment.

However, one advantage of computer vision is its strength as a surveillance tool. Unlike human eyes, which can only see in normal light, computer vision systems can be paired with infrared or thermal cameras that work in complete darkness or detect body heat. This gives them a significant advantage as surveillance tools as they aren’t held back by the same biological limitations we are.

The techniques for helping the computer see better are mostly about manipulating the real world to suit the algorithm’s assumptions. Examples the text gives include using backlighting or retroreflective materials to create contrast, using infrared illumination in low-light conditions, choosing the right camera and lens for the situation, or even dressing subjects in specific colors. The idea is that good physical design and good code need to be developed together, not separately.

Computer vision’s limitations for surveillance mean that to incorporate them in interactive art, you need careful planning and knowledge on where this art will be, to appropriately plan for physical or environmental limitations. For instance, if you are creating a computer vision interactive art project for some exhibition, you will need to analyze the venue and its environmental conditions to ensure you use the right technique to properly analyze the subject(s) being surveilled.

Interestingly, the irony is that the very limitations of computer vision mean that in an art context, the surveilled person often has to cooperate with the conditions for the system to work at all. That’s quite different from CCTV, where you’re tracked without consent or awareness. So, interactive art using computer vision tends to occupy this strange middle ground where surveillance becomes participation, which raises its own questions about what it means to be watched by a system you’re also performing for. This becomes a crucial moral question for projects such as the suicide box mentioned in the text and David Rokeby’s Sorting Daemon, where people become participants in the art installations without their consent, especially during vulnerable moments, as in the case of the suicide box.

Week 4 Assignment – London Underground

Concept

For this week’s assignment, I was fascinated by the data visualization we did in class, so I wanted to create something similar using that technique. I was browsing the internet for inspiration and came across this website. In the video on the page, one of the data visualization ideas showed the Paris metro activity. This inspired me to visualize the London underground passenger traffic.

I had to create my own CSV file to obtain the data I needed. I used two different datasets from the Transport for London (TfL) website: one showing the total number of passengers using each station weekly, and another showing the coordinates of the stations. I removed all the unnecessary columns and combined the coordinates, passenger counts, and station names into a single file.

I ran into an error where the console said that some of the values loaded from the CSV file were NaN, even though I made sure to delete any rows with no data. In previous data science classes, we always added a check to ensure we only used valid values, so I did the same in this sketch: I made new arrays that only include values that are not NaN.
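A sketch of that validity check in plain JavaScript, assuming each row has already been parsed into numbers (the field names here are hypothetical, not the actual CSV headers):

```javascript
//keep only rows where every numeric field parsed to a real number
//(field names are hypothetical, not the actual CSV headers)
function filterValidRows(rows) {
  return rows.filter(
    (r) => !Number.isNaN(r.lat) && !Number.isNaN(r.lon) && !Number.isNaN(r.passengers)
  );
}

let rows = [
  { station: "Waterloo", lat: 51.5, lon: -0.11, passengers: 418000 },
  { station: "Unknown", lat: NaN, lon: -0.2, passengers: 12000 },
];
let valid = filterValidRows(rows); //only the Waterloo row survives
```

Number.isNaN() is the strict check here: unlike the global isNaN(), it only returns true for the actual NaN value, so valid numbers are never accidentally dropped.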

I also added a feature to allow you to check the station name and the number of weekly passengers when you hover the mouse over it.

Reflection

Despite my love for the technique of data visualization, this is not one of my proudest works, simply because I was extremely busy this week and could not dedicate as much time as I wanted to this sketch. In the future, I would definitely like to add a map of London in the background and to space the circles out so you can actually see each station, since there are a lot of overlapping circles and you cannot properly see each station when you hover your mouse.

Week 4 – Reading Reflection

This reading is probably my favorite reading so far. It triggered me to reflect on various objects we encounter daily that are not very efficient, and have not had a change in their mechanisms for years. When reading about mapping lights to their switches, it reminded me of my own house, where we have a set of switches near the main door.

Even after living in this house for around 10 years, I still flick multiple switches until I find the one I need. The placement of the switches is especially inefficient because some of these switches are for lights outside the house in the front yard, and some are for lights inside the house.

It took me a while to think of something that functions inefficiently beyond what was already mentioned in the reading, because I feel Norman covered the core examples. Then, I remembered my electric toothbrush. It is the most basic Oral-B model, and it has only one button (as do most electric toothbrushes). On the button, the word “timer” is written, indicating that the button is a timer. However, it is also the same button I use to turn on the toothbrush. I was always confused about how to trigger the timer, and I did not have the manual to check. It took me around a year to realize that the toothbrush vibrates in a specific pulsing pattern for a short 2 seconds after 2 minutes of being on. I always wondered why the toothbrush would randomly vibrate mid-brush, and then I connected the dots.

Using Norman’s definitions, this is an issue with the system’s feedback: how was I to know that the timer had started? While I personally think something like a light or a sound would have been a better indicator, this design is arguably the most accessible one: deaf and blind users can still sense the toothbrush’s vibrations to know the time is up. So, a design can seem inefficient to some people while in reality being made functional for everyone, no matter their abilities.

This is a clear example of an item that affords timing your brush but has poorly designed signifiers for that feature. Norman’s argument about designing products with the complexity of humans in mind, and abandoning the assumption that how to use a feature will be obvious to its users, can be applied to our interactive media projects. Interactive media is all about experience and interactivity. Without clear instructions from the artist on how to interact with their artwork or game, users cannot get the most out of the experience. Imagine how frustrating it would be to play a game with no instructions on how to control it or what its main goal is. The game loses its purpose to entertain, because the user is stuck figuring out how to work it.

Week 3 – Reading Reflection

The only thing I was impressed with was Crawford’s ability to write that much about the misuse of the term interactivity. In my opinion, Crawford was exaggerating. While I understand the frustration and agree that people sometimes slap the word “interactivity” onto merely reactive things, many of the examples Crawford gave are not very common. I have never heard a movie called interactive, unless it’s a kids’ movie like Dora, nor have I heard someone call opening the fridge interactive.

Following his argument to its logical conclusion, we should change “interactive media” to “reactive media.” After all, interactive media is programmed to trigger a reaction when the user interacts with it, but it does not listen, think, and respond the way Crawford claims interactivity should. However, when I Googled the official definition of interactivity, the Oxford dictionary defines it as “the process of two people or things working together and influencing each other.” According to this definition, everything Crawford claimed was not interactive is indeed interactive, because it involves things influencing each other, not necessarily listening, thinking, and speaking.

That said, I do agree with Crawford that interactivity exists on a spectrum. I believe the characteristics of a strongly interactive system are its ability to listen, think, and speak. Generative AI is the best example of a strongly interactive system we currently have, if not the most interactive. It simulates human conversation and allows for human-like interactions. Other examples of interactivity, such as humans triggering a reaction from computer-based artwork (like the sketches we do in class), fall on a slightly lower scale of interactivity because they are more of a “reaction,” to use Crawford’s terms, but I would still consider them highly interactive.

To improve the degree of user interactivity in my sketches, I could go beyond just mouse clicks. Using different parts of the body, such as in an interactive webcam project like the one a classmate made this week, or allowing multiple simultaneous interactions, something like a two-player game, could all improve the interactivity of my sketches.

Week 3 Assignment – Simplified Pac-Man

Concept

For this week’s assignment, I decided to draw a bit from the skills I learned in Intro to CS back in my freshman year. I chose to implement a very simplified version of Pac-Man, something I always wanted to recreate.

Here is the final sketch (use the right arrow key to begin moving Pac-Man):

The idea is simple: you use the left and right arrow keys to move Pac-Man, and it eats the ghosts once it approaches them. Once all the ghosts are eaten, and Pac-Man returns to his starting position on the left side of the screen, the ghosts regenerate. This creates a sort of infinite loop.

Pac-Man:

The Pac-Man figure is created using the arc() function. On the p5 reference page for the function, there is sample code for a biting Pac-Man, so I used that in my assignment. I used Claude.ai to understand what each line of the code was doing, specifically how the sin function works and how it makes the mouth open and close. Building on this, I was able to adjust the speed at which Pac-Man bites. I also added a boolean that checks whether he is moving, so that he only bites while moving.
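The oscillation works because sin() sweeps smoothly between -1 and 1; mapped onto an angle range, it opens and closes the mouth once per cycle. A plain-JavaScript sketch of that mapping (the specific numbers and helper names are illustrative, not taken from the p5 sample code):

```javascript
//map a value from one range to another, like p5's map()
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

//mouth half-angle in radians for a given frame; a larger speed factor
//makes Pac-Man bite faster (0.2 and the 0.05..0.7 range are illustrative)
function mouthAngle(frame, speed = 0.2) {
  return mapRange(Math.sin(frame * speed), -1, 1, 0.05, 0.7);
}
```

The returned angle would then be used as the arc() start/stop offsets around Pac-Man’s facing direction, so the wedge opening grows and shrinks each frame.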

Pac-Man’s movements are based on the pressing of the right and left arrow keys. From my Intro to CS class, where we built a sample Super Mario Bros game, I knew there must be a way to trigger movement when a specific key is pressed. With a Google search, I found that you can use

if (keyCode === LEFT_ARROW)

However, this approach made it difficult to make Pac-Man move while the key is pressed and stop when it is released: I had a bug where he would just keep moving on his own. I asked Claude.ai for a function specifically for detecting whether a key is held down, and it gave me keyIsDown(), which returns true while the key is pressed and false otherwise. This fixed my bug and made Pac-Man stop and go correctly.

Ghosts:

For the ghosts, I found PNG images on Google and uploaded them to the sketch. Then, I found this sketch online, which includes a tutorial on how to load and display images.

I faced some difficulty getting the ghosts to disappear when Pac-Man eats them. At first, I was just looping through the image names, displaying each image, then checking if the x positions of Pac-Man and the image aligned and deleting the image, but this was not doing anything. I asked Claude.ai why my code was not working, and it pointed out that images cannot hold position variables, so I could not check conditions with them. So, I created a simple Ghost class to store each ghost’s x and y positions, its size, and its image. I made each ghost an instance of the Ghost class and stored them in a ghosts[] array. This let me use the dist() function to check whether Pac-Man was close to a ghost and remove that ghost from the array, which makes it disappear from the sketch.

I was initially just going to leave the sketch with one iteration of displaying and then eating the ghosts, but then I decided to regenerate the ghosts every time they are all eaten. I did this by checking if the ghosts[] array is empty, because that indicates all the ghosts have been eaten. Adding only this condition gave me a small bug where the ghosts did not generate when the sketch first loaded. It also re-displayed the ghosts as soon as the last one was eaten, and since Pac-Man would still be within range of the last two ghosts, he ate them immediately, making the ghosts disappear as soon as they appeared. Therefore, I added a condition ensuring that not only must the array be empty, but Pac-Man must also be on the left side of the screen (back at his starting position).
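That two-part check can be expressed as a small predicate (the function name and the left-side threshold are mine, chosen for illustration):

```javascript
//regenerate only when every ghost has been eaten AND Pac-Man is back on
//the left side of the canvas (the 100-pixel threshold is illustrative)
function shouldRegenerate(ghosts, pacmanX, leftThreshold = 100) {
  return ghosts.length === 0 && pacmanX < leftThreshold;
}

shouldRegenerate([], 50);           //true: all eaten, back at the start
shouldRegenerate([], 400);          //false: still near where the last ghost was
shouldRegenerate([{ x: 300 }], 50); //false: a ghost remains
```

Requiring both conditions is what prevents the instant re-eat bug: by the time Pac-Man is back on the left, he is out of range of the freshly spawned ghosts.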

Code Snippet I am Proud of:

//loop backwards through the ghost objects so that deleting from the array does not skip items
for (let i = ghosts.length - 1; i >= 0; i--) {
  let ghost = ghosts[i];

  //display the ghost image
  image(ghost.imgName, ghost.x, ghost.y, ghost.size, ghost.size);

  //if the distance between the centers of Pac-Man and the ghost is smaller than 50
  if (dist(pacman.x, pacman.y, ghost.x + 40, ghost.y + 40) < 50) {
    //delete that ghost object from the array so it is no longer displayed
    ghosts.splice(i, 1);
  }
}

This is the code to display the ghost images and delete them when Pac-Man eats them. I am most proud of it because it took me the longest to figure out, and I made LOTS of edits until I reached the final version that worked.

Reflection

Overall, I am extremely happy with my work. Pac-Man has always been one of my favorite games, and I am thrilled I got the chance to recreate it. Despite running into a lot of bugs, they taught me a lot along the way, helped me discover new functions, and expanded my knowledge of how certain things are done in JavaScript, since I had never coded in it before. For the future, I would definitely love to implement a full Pac-Man game, maybe for my final project 🙂

References

keyIsDown(): https://p5js.org/reference/p5/keyIsDown/

Image upload tutorial: https://editor.p5js.org/FAAH/sketches/8s1g0vilF 

arc(): https://p5js.org/reference/p5/arc/

Claude AI: https://claude.ai/new

  • Claude was used to understand the biting Pac-Man code from the arc() reference page, for debugging when the ghosts were not disappearing, and for finding the keyIsDown() function. It gave me the solution of creating a ghost class, which I then implemented on my own.

Week 2 – Reading Reflection

Before watching the video, I assumed that randomness = messy or chaotic. I had never thought of controlled randomness, which initially sounds contradictory but begins to make sense as the video goes on. I was fascinated by the artwork produced by programming randomness, and it opened my eyes to different ways randomness can be used in my own work. In a way, some of the artworks Reas showed in the video, specifically the music pieces, reminded me of modern AI. The music sounded exactly like what it was: random, yet oddly structured. It reminded me of AI-generated images and songs, because AI tends to create some very messy-looking images when given a specific prompt, and the randomly generated music somewhat mimics that. More importantly, I was most impressed by the artwork that can be produced through computer graphics and code. Coming from a computer science background, most of my coding has involved creating algorithms, so seeing a whole new world of coding used to create abstract art was captivating.

In my own artwork, I definitely plan to incorporate randomness by generating random colors and sizes for objects when needed, and especially random positions and velocities. I believe the optimum balance between total randomness and complete control is to keep almost complete control while deliberately inserting randomness where it helps. The control comes from writing the code yourself and choosing when randomness is used. This helps create more complex pieces, because it is sometimes difficult to draw each element of a sketch individually, and the element of randomness allows for art one might not have fully imagined beforehand.