Week 8 – Unusual Switch

Here are pictures and videos of my unusual switch:

Video:

IMG_1353

Here is the GitHub link to my code:

https://github.com/farahshaer/Intro-to-IM/blob/3e682e9aebc9f598bbd09f61855eba49238cbf92/sketch_mar29.ino 

Overall concept
So for this project, I decided to create a foot-activated switch. When I press my foot on the pad I made from aluminium foil, tape, sponges, and cardboard, it completes the circuit and makes the LED blink. When I remove my foot, the LED turns off. I wanted something creative that uses the human body in a new way, and the foot switch felt unusual and fun to play with, even though the concept is simple.

Code Highlight

The code that I am most proud of is the blinking effect. I wanted to incorporate it because we learned how to do it in class. I thought it wouldn’t be possible to put it inside the if statement, because I needed the light to turn off once you remove your foot. But by toggling the LED off and on with a 100-millisecond delay between each change, I got a fast blinking effect while my foot presses the pad.

if (buttonState == HIGH) { // if the switch is pressed
  digitalWrite(13, HIGH);  // turn the LED on
  delay(100);              // short delay to produce a fast blinking effect
  digitalWrite(13, LOW);   // turn the LED off for the blink
  delay(100);              // matching delay to complete the full blink cycle
}

Reflection/Future work

To build the circuit, I implemented basically the same thing we discussed in class. For the LED circuit, I connected pin 13 of the Arduino to a 330 Ω resistor, which then connects to the long leg of the LED; the short leg of the LED connects to the ground row. For the switch circuit, I made a simple foot-pad switch, which I based on a website reference. I used two pieces of aluminium foil on cardboard, with two sponges in between to keep the foils separated and make the pad easy to press and release. One foil connects to 5V, and the other connects to pin 2. I also added a 10 kΩ pull-down resistor between pin 2 and ground. The resistor keeps the input stable when the pads aren’t pressed, so the Arduino doesn’t read floating, unpredictable values. I put the wires in specific rows on the breadboard: the green wire goes from pin 2 to the row shared with the resistor and one foil pad, and the yellow wire connects the other foil pad to 5V. This makes sure that when my foot presses the pads, pin 2 gets 5V, and when not pressed, the resistor pulls it down to 0V.

For future improvements, I would make the foot pad a bit more stable and try using multiple LEDs to make a pattern so the concept gets more creative. I would also experiment with different blink patterns to make it more interesting.

Here is the website I took inspiration from to make the pressure board:
https://www.instructables.com/Pressure-Switch-for-Makey-Makey/

Here is the sketch of my board and circuit connection diagram:

(I apologise for the quality of this sketch; I do not have any printer paper or markers yet.)

Reading Reflection – Week 8

Emotion & Design: Attractive things work better

I found Norman’s reading quite interesting, especially his argument that emotions affect how well we use objects. From the previous readings we have done in this class, I just assumed good design was mainly about functionality and usability. I also thought that even if the object is not as aesthetically pleasing, if it works well, then everyone would use it. I did not realize how much emotional appeal plays a role in the design. Norman claims that attractive designs can actually improve how people think and solve problems. When people feel positive emotions, they are more flexible, while negative emotions make people more focused but also more stressed. This made me realize that design is not only about making something work well with clear directions, but also about how it makes people feel while using it.

I liked the example of the three teapots. Each of the teapots emphasized something different. Even though one of them is harder to use, he still keeps it because he enjoys how it looks. This made me understand that objects can have an emotional value. I started thinking about how many objects I own for the same reasons. I sometimes choose things because they look nice or match my mood, even if they are not the most practical option.

This reading also made me realize how much of the visual design, like the color and layout, might help determine how people interact with it, beyond just making it usable. The reading definitely changed how I think about design by showing that usability and aesthetics should be combined, especially to have that emotional appeal. But I wonder if something can ever be too beautiful, can an object’s attractiveness ever get in the way of its function?

 

Her Code Got Humans On The Moon — And Invented Software Itself

The reading about Margaret Hamilton’s work really inspired me. The fact that the concept of software did not even exist when she started working on the Apollo missions is crazy. I never thought about how much space travel depended on programming. It made me realize how much courage and creativity it must have taken to build something new, especially in a male-dominated environment and era, all while being a mother. Not only did she lead her team successfully, but she also fought to have her ideas heard, even when people initially dismissed her concerns.

I was amazed by the story of the Apollo 11 landing, where the computer started giving error messages. But because Hamilton designed it to prioritize the most important tasks, the module still landed safely. She showed how important it is to anticipate problems before they even happen.

This reading made me reflect on how many of the innovations we have today have been taken for granted and built by people who fought the system in their time. It also makes me wonder how many people today contribute to so much work but stay unrecognized, or their credit is handed over to someone else. Hamilton’s story shows the importance of visibility and persistence in tech and in life, something that we should all take note of.

Midterm Project

Here is the final sketch:

Examples of the evidence pieces:

Overall Concept

For my midterm project, I created an interactive court trial simulation where the player takes on the role of a judge. The player moves through a courtroom scenario: they listen to the defendant’s and witness’s statements by clicking on the characters, examine 5 pieces of evidence, and decide whether the defendant is guilty or not guilty. At the end, they find out whether their verdict was correct or incorrect.

I wanted the project to feel immersive, as if you were inside a courtroom drama. Instead of just presenting information passively, I designed it so the player has to actively click through the dialogue, review the evidence using a slider, and make the final decisions. My goal was to combine the narrative, interaction, and sound design into one experience that feels like a small narrative game.

How the project works and what I’m proud of

The entire project is built with the state-based system that you showed us in class, which I found very useful for keeping things organized. I used a global variable called state to control which screen is currently being displayed; inside the main draw function, I check the value of state and call different functions, like drawCover, drawInstructions, drawTrial, drawEvidence, drawVerdict, and drawResult. I also used OOP for the clickable characters and for the different scenarios, which was useful because I can easily add or edit scenarios. For the evidence, I created variables and uploaded images, using a for loop and arrays: the loop goes through every evidence item in the current case, creates the image path using the case number and index (like 101_0.png), and stores it in an array. For the actual case randomizer, I used the random function: currentCase = random(cases), followed by loadEvidenceForCase(currentCase). I made sure to name the evidence files with the case number and an index, so the system can find them and match each piece of evidence with each case.

I am especially proud of how I structured the interaction system. On the trial screen, the player can click on any of the characters (defendant, lawyer, witness) to open the statement pop-up. I used a Boolean variable popup to control the visibility and a counter variable popupstage to track the dialogue progression. This created a small dialogue system that lets the statements unfold step by step instead of appearing all at once, which I thought made the game feel more controlled.
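The idea behind this can be sketched in plain JavaScript, outside p5 (a simplified, hypothetical version of the popup and popupstage logic, not the exact project code; the dialogue lines are made up):

```javascript
// Simplified, p5-free sketch of the popup dialogue logic.
// popup controls visibility; popupStage tracks which line is currently shown.
function makeDialogue(lines) {
  return { popup: false, popupStage: 0, lines };
}

// Clicking a character opens the popup at the first line.
function openPopup(d) {
  d.popup = true;
  d.popupStage = 0;
}

// Each further click advances one line; after the last line the popup closes.
function advance(d) {
  if (!d.popup) return;
  if (d.popupStage < d.lines.length - 1) {
    d.popupStage++;
  } else {
    d.popup = false;
  }
}

// The current statement to draw, or null when the popup is hidden.
function currentLine(d) {
  return d.popup ? d.lines[d.popupStage] : null;
}
```

Clicking a character calls openPopup(), and every later click calls advance(), so the statements unfold one line at a time instead of all at once.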

Another part I am proud of is the dynamic evidence loading system. Instead of manually loading each image one by one, I created a function that loops through the selected case evidence and builds the image file paths automatically. The images are stored in an array and displayed using a slider that lets the player scroll through them. This made the project more scalable because I could easily add more cases without rewriting a large portion of the code. Here is the code:

// load only the 5 evidence images for the current case
function loadEvidenceForCase(caseObj) {
  evidenceImages = []; // reset the evidenceImages array so old case images don't stay

  for (let i = 0; i < caseObj.evidence.length; i++) {
    //loop through every evidence item in the current case
    let imgPath = `evidence/${caseObj.caseNumber}_${i}.png`; //creates the image path using the case number and index, like 101_0.png
    loadImage(imgPath, (img) => {
      //load the image from that file path
      evidenceImages[i] = img; // when the image finishes loading, store it in the evidenceImages array
    });
  }
}

function preload() {
  coverImg = loadImage("cover.png");
  titleFont = loadFont("title font.otf");
  bodyFont = loadFont("body font.ttf");
  instructionsImg = loadImage("instructions background.png");
  trialImg = loadImage("trial.png");
  verdictImg = loadImage("verdict.png");
  correctverdictImg = loadImage("correct verdict.png");
  wrongverdictImg = loadImage("wrong verdict.png");
  clickSound = loadSound("clicking sound.wav");
  backgroundSound = loadSound("cover and instructions music.wav");
  gaspSound = loadSound("gasp.wav");
  gavelSound = loadSound("gavel sound.mp3");
  statementSound = loadSound("statement.wav");
  tickingSound = loadSound("tic.wav");
}

function setup() {
  createCanvas(windowWidth, windowHeight); //makes canvas fill entire screen

  backgroundSound.setVolume(0.4);

  // create characters
  defendant = new Character("Defendant", 417, 325, 1);
  lawyer = new Character("Lawyer", 500, 325, 1);
  witness = new Character("Witness", 840, 325, 1);

  //evidence button (which is hidden until trial screen)
  evidenceButton = createButton("View Evidence");
  evidenceButton.position(1050, 660); // fixed position on screen
  evidenceButton.size(200, 50); // button width/height
  evidenceButton.style("background-color", "white");
  evidenceButton.style("color", "rgb(11,11,11)");
  evidenceButton.style("font-family", "titleFont");
  evidenceButton.style("font-size", "18px");
  evidenceButton.style("border-radius", "15px");
  evidenceButton.style("border", "3px solid black");
  evidenceButton.mousePressed(() => {
    if (currentCase) {
      // only open if a case is selected
      state = "evidence";
      evidencePopup = true;
      currentEvidenceIndex = 0;
      evidenceSlider.value(0); // reset slider
      justOpenedEvidence = true;
      evidenceButton.hide(); // hide the button while the evidence screen is open
    }
  });
  //create slider for evidence (hidden until popup opens)
  evidenceSlider = createSlider(0, 4, 0, 1); // 5 pieces of evidence (0–4)
  evidenceSlider.position(550, 550);
  evidenceSlider.style("width", "200px");
  evidenceSlider.input(() => {
    currentEvidenceIndex = evidenceSlider.value();
  });
  evidenceSlider.hide();
}

function draw() {
  evidenceButton.hide();
  evidenceSlider.hide();

  //background music control depending on current game state
  if (
    state === "cover" ||
    state === "instructions" ||
    state === "trial" ||
    state === "evidence"
  ) {
    if (!backgroundSound.isPlaying()) {
      backgroundSound.loop();
    }
  } else {
    backgroundSound.stop();
  }

  //ticking sound that only plays during verdict decision
  if (state === "verdict") {
    if (!tickingSound.isPlaying()) {
      tickingSound.loop();
    }
  } else {
    tickingSound.stop();
  }

  //different screens depending on game state
  if (state === "cover") drawCover();
  else if (state === "instructions") drawInstructions();
  else if (state === "trial") drawTrial();
  else if (state === "evidence") drawEvidence();
  else if (state === "verdict") drawVerdict();
  else if (state === "result") drawResult();

  //debugging overlay: shows the current mouse coordinates
  fill(255);
  textSize(16);
  textAlign(LEFT, TOP);
  text("X: " + mouseX + "  Y: " + mouseY, 10, 10);
}

Areas for improvement and problems

One of the biggest problems I ran into was managing alignment and the coordinate systems; at one point, changing rectMode(CENTER) and textAlign(CENTER, CENTER) affected other parts of the code unexpectedly. So I had to learn how push() and pop() isolate styling changes and where exactly to put them so they don’t affect the entire sketch. Another challenge was getting all of the evidence images to load correctly. Some of the files had different extensions, like jpg or png, which caused loading errors because in the for loop I only used the png extension. I had to fix the file names and make sure the loading function matched the correct format, which taught me how sensitive file paths are and how important consistency is.

If I were to improve this project further, I would refine the visual design so it feels more polished and fix up the formatting of the buttons. I would also introduce a scoring system or a branching narrative so that the decisions feel even more impactful. The core system works well, but I think there is room to push it even further.

References and AI usage

I used this YouTube video and the reference page to better understand the scale function and implement it in my sketch, which I used for the characters: https://www.youtube.com/watch?v=pkHZTWOoTLM
https://p5js.org/reference/p5/scale/

I also used this p5 reference page to understand the drawingContext property, which I used to add shadows to my characters:
https://p5js.org/reference/p5/drawingContext/

For AI, I mainly used ChatGPT. I sometimes ran into bugs where changing the position or layout of something in my sketch would unexpectedly affect other parts of the program, so ChatGPT helped me debug these issues by explaining which parts of the code might be interfering with each other and suggesting ways to fix them. I also used ChatGPT to help me figure out a system for loading all of the evidence images into the game, since my project had around 100 pieces of evidence, and manually loading each image would have been inefficient. With ChatGPT’s help, I implemented the loop system that automatically loads evidence images from the evidence folder using the case number and image index. Another area where ChatGPT helped me was structuring my case scenarios. It suggested using the const keyword when defining the cases so that the data could not accidentally be reassigned later, which helped keep the case info organized and protected from being changed while the game runs. It also provided some information on how to structure each case object, like adding a case number.

For the visual evidence, I used Gemini to generate the individual evidence images. All the coding decisions and implementations were done by me; the AI tools were used as guidance for debugging, structuring the code, and generating the visuals.

Assignment 5 – Midterm Progress

Concept
For my midterm project, I decided to create an interactive courtroom experience where the player becomes the judge and has to decide whether a defendant is guilty or not guilty based on testimony and evidence. The original element of my project is that the player must interpret conflicting evidence and testimony rather than relying on obvious clues, making each decision feel uncertain and investigative. I chose this idea because I was interested in how people interpret information differently and how evidence can sometimes be misleading depending on how it’s presented. I wanted to design something that makes the user think critically rather than just react quickly. Also, because I love law and the process in general.

The experience begins with a cover, then an instruction screen that explains what the user has to do. From there, they move through the trial, evidence review, verdict decision, and final results. Each case is randomly selected from a set of scenarios, so the experience feels different every time someone plays.

Design
So far, I have focused on designing both the concept and the structure of the project. I planned out the different screens first (cover, instructions, trial, evidence, verdict, result) so I could understand the flow before building anything. That helped me feel less overwhelmed because I could work on one part at a time instead of the whole game at once. I went ahead and made the backgrounds using Canva and some generative AI pieces with the text (I will implement on-screen text for the testimonies), here are some:

Right now, I have the main structure of the game working, like the interaction controls, screen transitions, and the characters. I separated everything into classes and functions, and made some interactive buttons and keys to move through the different stages. I already have an idea in mind on how I want this game to work, so now I’m just trying to put it all together. I also started designing case scenarios, and I came up with 20. Now I just need to think about designing the visual evidence icons because I will have 5 pieces of evidence displayed for each case.

Visually, I plan to keep the characters stylized and minimal, created using p5 shapes instead of detailed illustrations. I want the courtroom environment to feel cohesive but not overly complex, so the focus stays on interaction and decision-making. The characters (defendant, lawyer, and witness) are drawn from mid-chest up using OOP, so I can easily place them anywhere on the screen.

Challenging Aspects
The most frustrating and uncertain part so far has been positioning the elements on the screen, especially when switching to full-screen mode. My characters kept moving to different places, which made it hard to design the layout. Another difficult aspect is managing multiple interactive elements at once, like the hover detection, clickable areas, and screen transitions, because they all require precise logic to work smoothly together. I’m also worried about making all of the pieces of evidence using shapes, but I am thinking about doing them on different sketches, finding inspiration, and then combining them into my final sketch.

Risk Prevention
To reduce the layout issues, I switched from fixed pixel positioning to relative positioning based on the canvas width and height. This allows the objects to scale and stay in the correct space even when the screen size changes. I also used a coordinate display tool that shows my mouse position on screen while designing, to help me put everything precisely instead of guessing.
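The relative-positioning approach can be sketched like this (a hypothetical helper, assuming positions are stored as fractions of the canvas size rather than fixed pixels):

```javascript
// Store layout positions as fractions of the canvas (0 to 1), then
// convert to pixels each frame, so elements keep their place on resize.
function toPixels(fracX, fracY, canvasWidth, canvasHeight) {
  return { x: fracX * canvasWidth, y: fracY * canvasHeight };
}
```

A character placed at (0.3, 0.45) then stays in the same relative spot whether the canvas is 800×600 or full screen.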

Also, to manage the interaction complexity, I tested individual features separately before combining them. For example, I built and tested hover detection for the characters before integrating it into the full scene. I also focused on building the basic system early, so I could confirm that the scene transitions worked before adding the detailed images. To me, breaking the project into smaller testable parts made the process feel more manageable.

Moving forward, I want to focus on refining the visuals, adding the pop-up for the testimony and evidence, which you get when you click on the character, and the sounds.

Reading Reflection – Week 5

I used to assume computer vision worked like human vision, just less advanced, but I realized the difference is much bigger. Human vision automatically understands meaning, like someone’s face, while digital video is “computationally opaque”: a camera image is just a buffer of pixels with no meaning unless an algorithm like frame differencing, background subtraction, or brightness thresholding interprets it. I was surprised that such simple techniques can detect motion just by comparing pixels between frames. Vision systems do not have to be complex; even basic detection can be powerful if the physical environment and the code are designed to work well together.
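Frame differencing really is that simple at its core: compare each pixel’s brightness between two frames and sum the change (a minimal sketch using 1-D grayscale arrays as stand-ins for real video frames):

```javascript
// Minimal frame differencing: grayscale frames as arrays of 0-255 values.
// The summed per-pixel change is the "amount of motion" between frames.
function motionAmount(prevFrame, currFrame) {
  let total = 0;
  for (let i = 0; i < currFrame.length; i++) {
    total += Math.abs(currFrame[i] - prevFrame[i]);
  }
  return total;
}

// Motion is "detected" when the change passes a chosen threshold.
function motionDetected(prevFrame, currFrame, threshold) {
  return motionAmount(prevFrame, currFrame) > threshold;
}
```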

One example that stuck with me was Myron Krueger’s Videoplace. I found it really interesting that early interactive art already used vision tracking to let people draw with their bodies. It made me realize how computer vision can expand the way we interact with technology. At the same time, Rafael Lozano-Hemmer’s work shows more of a critical side. His belt-tracking piece turns surveillance into art, which made me wonder whether interactive work that tracks viewers is also training us to accept being watched.

The reading left me questioning: if computer vision works best in environments designed specifically so the computer can easily detect things, does that mean future spaces will be designed more for machines than for us humans? Will there be more controlled lighting, infrared illumination, and retroreflective materials? This text definitely shifted my perspective from seeing computer vision as just a technical tool to seeing it as a cultural force that affects art and even social power.

Week 4 – Data Visualization

Here is my final sketch:


Concept:

I really liked the map idea that we did in class, so I wanted to do something similar but on a smaller scale. I chose Abu Dhabi and decided that I wanted the map to show something useful, like public resources. I chose 10 police stations and plotted them on a map image I found on Google. The goal was to make the map simple and interactive so that people could click on the locations and see the name and phone number of each station.

Code Highlight:

  //draw popup if active (this part sits inside the draw function)
  if (popup) {
    drawPopup(); // calls the function that draws the popup overlay and the text
  }
} // end of draw()

function mousePressed() {
  let clicked = false; // to track if the circle was clicked
  for (let i = 0; i < stations.getRowCount(); i++) {
    let x = stations.getNum(i, "X"); //to get the circle x
    let y = stations.getNum(i, "Y"); //to get the circle y
    let name = stations.getString(i, "Name"); //to get the station names
    let phone = stations.getString(i, "Phone"); //to get the station phone numbers

    //check if mouse is inside the circles
    if (dist(mouseX, mouseY, x, y) < 10) {
      popup = true; //show popup
      popupText =
        name +
        "\nPhone: " +
        phone +
        "\n" +
        "click anywhere on the outside to return to map"; //set popup text; "\n" puts the phone number and the exit instructions on new lines
      clicked = true; //mark that a circle was clicked
    }
  }
  // If clicked outside any circle to hide the popup
  if (!clicked) {
    popup = false;
  }
}

//to draw the popup with the info
function drawPopup() {
  // Semi-transparent overlay
  fill(0, 0, 0, 150); //transparent grey overlay for the background
  rect(0, 0, width, height); //to cover the entire canvas

  // White popup box
  fill("white"); //color for box
  rect(100, 150, 300, 150, 10); //rect in center with rounded corners

  // Text inside box
  fill(0); //black text
  textFont(myFont); //outside font
  textSize(18); //size
  textAlign(CENTER, CENTER); //to center the text horizontally and vertically
  text(popupText, width / 2, height / 2); //to display the popup text
}

The part of the code I’m particularly proud of is the interactive circles, where you click on the circle, and a pop-up box with information appears. I created a function called drawPopup() to organize my popup drawing code, where I call it manually when the popup boolean is true.

Reflection/future work:

I started this assignment using the same concept as the map visualization we did in class. In the class example, we used latitude and longitude values, which are good for geographic visualizations. But I changed the method to manual X/Y coordinates because I’m using a custom image map, so I needed precise visual placement instead of mathematical mapping. I manually assigned x and y coordinates for each police station and placed them into my Excel sheet, so that they would visually match their positions on my map. I also added interaction logic so users can click the circles to see information, which turns the project from a static visualization into something more interactive using the mousePressed() and dist() functions. I also added a grey transparent overlay behind the pop-up to make it clearer and easier to read.
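For comparison, the class method maps latitude/longitude linearly into pixel space, roughly like this (a sketch of the idea with a made-up bounding box, not the actual class code):

```javascript
// Linear interpolation from one range to another (like p5's map()).
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Project a lat/lon point onto a w x h canvas covering a bounding box.
// Latitude is flipped because pixel y grows downward while latitude grows upward.
function latLonToXY(lat, lon, box, w, h) {
  return {
    x: mapRange(lon, box.lonMin, box.lonMax, 0, w),
    y: mapRange(lat, box.latMax, box.latMin, 0, h),
  };
}
```

With a custom image map there is no clean mathematical relationship between the picture and real coordinates, which is why manual X/Y placement was the simpler choice here.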

I used ChatGPT mainly for debugging because I would make silly mistakes like writing CVS instead of CSV, and had a hard time finding the problem with why my sketch wouldn’t load. I also used ChatGPT because I added the title and instructions text and placed it in the middle, but every time I interacted with the circles, the text would be placed on the left of the canvas. So, ChatGPT said it was because of my textAlign(CENTER, CENTER); and it told me to set alignment only for the popup, and to reset it to LEFT, BASELINE before drawing any other text. That helped me better understand how the text-align feature worked as well. I also used a Google font for my text: https://fonts.google.com/selection 

And I searched up on Google how to make a pop-up using a boolean, and I took inspiration from this sketch I found: 

https://editor.p5js.org/YifanLiu/sketches/HkcS9wXo-#:~:text=let%20on%20=%20false;,(204%2C%20153%2C%20255) 

But overall, I mainly used the example we did in class and the lecture slides for reference. I also took some of the concepts from this week’s reading into consideration and provided clear instructions on how to manage the map. For future improvements, I would like to make my animations smoother and maybe add zooming or panning for better navigation on the map, but this assignment definitely helped me understand the different ways we can portray data. 

Reading Reflection – Week 4

One thing that drives me crazy is the car touch screens and buttons that are used to control things like the air conditioning and volume. I have to literally look away from the road just to figure out which button to press. I get so frustrated and end up pressing everything just to see what would change. From what I understood in Norman’s reading, it is the failure of signifiers and mapping. I used to think they were the same thing, but with all of his examples, he made it clear that affordances are what actions are possible, while signifiers are what show those actions to the user. The car buttons do not communicate their function, and their icons are quite confusing. I think this could be improved by adding clear labels, rearranging icons, or even having words instead of icons, so I could predict what will happen before I press it. 

I can apply Norman’s design principles by making sure the buttons actually look clickable and that the interactive elements and directions stand out visually so users do not have to guess. I will incorporate mapping, where the controls match their results spatially and conceptually, so that people can predict what will happen. I noticed that when mapping is unclear, I hesitate before acting because I do not trust what will happen. I would also probably prioritize feedback or make sure to show a loading animation or confirmation message so people know their action worked. I think I would also design my sketches with the assumption that users will make mistakes, by including undo options or very clear instructions. The reading made me realize that good design isn’t about the visual aesthetics of something, but about making sure people understand how to use it; otherwise, it would be really annoying to use.

Assignment 3: Ocean – Farah Shaer

Here is my final sketch:

Concept:

For this assignment, I wanted to create an ocean-inspired sketch. It has clouds in the sky, moving waves, a floating boat, and birds flying across the sky. My goal was to make all of the animations smooth and calm rather than random.

Code Highlight:

class Waves {
  constructor(y, noiseLevel, noiseScale) {
    this.y = y; // the vertical position of the wave
    this.noiseLevel = noiseLevel; //for the height of the waves
    this.noiseScale = noiseScale; // for the smoothness of the waves
    this.t = 0; //the time for the animation (the move part)
  }

  move() {
    this.t += 1; //to update the wave over time (slightly)
  }

  display() {
    stroke("rgba(63,63,198,0.45)"); //color of the wave (a bit transparent)
    strokeWeight(4); //thickness of the wave

    for (
      let x = 0;
      x < width;
      x += 1 // so it can be looped from left to right across the canvas
    ) {
      let nx = this.noiseScale * x; // scale the x position for the Perlin noise, so nearby x values get similar heights
      let nt = this.noiseScale * this.t; // scale the time variable for a smooth transition; as time increases the wave slowly changes shape

      let waveHeight = this.noiseLevel * noise(nx, nt); // noise() returns a value between 0 and 1, scaled by noiseLevel to get the wave height

      line(x, this.y, x, this.y - waveHeight); // draws a vertical line for the wave
    }
  }
}

 

I’m particularly proud of the wave animation. This was my first time working with Perlin noise. I followed the p5 noise() reference and watched The Coding Train video to understand how Perlin noise creates smooth, natural movement. Using noise instead of random values made the water feel more realistic and continuous.
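p5’s noise() implements Perlin noise properly, but the core idea, smoothly blending between random values at fixed steps, can be sketched as simple 1-D value noise (an illustrative stand-in, not how p5 actually implements it):

```javascript
// 1-D value noise: random values at integer points, smoothly blended between.
// seedValues: an array of numbers in [0, 1] acting as the random lattice.
function makeValueNoise(seedValues) {
  return function noise(t) {
    const i = Math.floor(t) % seedValues.length; // lattice point below t
    const j = (i + 1) % seedValues.length;       // lattice point above t
    const f = t - Math.floor(t);                 // fractional position between them
    const s = f * f * (3 - 2 * f);               // smoothstep easing, like Perlin's fade
    return seedValues[i] * (1 - s) + seedValues[j] * s;
  };
}
```

Sampling t in small increments gives the gradual, wave-like drift that random() alone can’t produce.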

Reflection/future work:

I built the sketch using object-oriented programming with separate classes for the waves, boat, clouds, and birds. I also used arrays to animate multiple objects at once. At the beginning, I started with the clouds, and I tried using the random function to place them, but it looked too messy and chaotic, so I decided to just space them out evenly with a little variation to make it look natural. That way, the clouds felt more organized and intentional. 

Originally, I planned to animate a person with a surfboard moving back and forth, but I decided a floating sailboat would fit the scene better. So I added a boat that floats gently up and down with the waves. I used sin for the floating motion and push and pop with translate to make it easy to move the boat without changing its shape (this is a game-changer). 
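The floating motion itself is just a sine offset added to the boat’s base y position each frame (a simplified sketch; the amplitude and speed numbers are made-up parameters):

```javascript
// Gentle bobbing: the offset oscillates between -amplitude and +amplitude.
// t is a frame counter; speed controls how fast the boat rises and falls.
function bobOffset(t, amplitude, speed) {
  return amplitude * Math.sin(t * speed);
}
```

Each frame the boat is drawn at something like baseY + bobOffset(frameCount, 5, 0.05), and wrapping the drawing in push()/translate()/pop() moves the whole boat without changing any of its shape coordinates.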

Then I felt as if the sketch was too empty, so I added birds that fly from left to right. Their y position changes a little, so they do not move in a straight line. The bird shapes were inspired by another P5 sketch I found online, but I changed the movement and made them fit my sketch.

I think the hardest part was positioning things so everything looked intentional and nice together. The clouds and boat were tricky at first. I also spent a lot of time figuring out how to use push and translate to make the boat move without messing up the coordinates; once I got it down, it was actually really simple using objects and classes. Object-oriented programming turned out to be really useful, since it made it easier to create multiple objects without constantly repeating code. For my future work, I want to lean more toward interactive elements, but I used this assignment to really focus on understanding OOP.

Here are the tutorials I watched and the references/examples I used. I found inspiration for the sailboat and birds in other p5.js sketches; I liked the shapes, so I used them as a starting point and modified the code to fit my scene by changing the motion and colors and integrating them into my own object-oriented program:

https://editor.p5js.org/cs105/sketches/iCmF693Ps 

https://editor.p5js.org/zielinsj/sketches/NAzOThoLl#:~:text=Sketch%20Files,37 

https://www.youtube.com/watch?v=Qf4dIN99e2w 

https://www.youtube.com/watch?v=YcdldZ1E9gU 

https://p5js.org/reference/p5/noise/#:~:text=noise()%20is%20used%20to,x%2C%20y%2C%20t)%20

Reading Reflection – Week 3

After reading this article, I realized that I had been using the word interactive without fully questioning what it actually means. I mostly associated it with responsiveness and just assumed that if something reacted to my input, it was interactive. The author’s examples and definition of interactivity helped me clearly distinguish between interaction, reaction, and even participation. For example, the branch example showed how it was reactive rather than interactive because there was only one active actor. I also really liked how the author argues that reaction, no matter how intense, never becomes interaction. The example with the dancers made it clear to me that, yes, the dancers were dancing with the music (giving a response), but they did not provide any feedback that changed the music or affected it in any way. This also made me question my own work and whether my sketches are more reactive than truly interactive. For example, in my work, I created sketches where the system responds immediately to the participant (like if they press a button), but I did not make it actually change or adapt based on their behavior. So I think my previous sketches were more animations than truly interactive pieces because, yes, they respond to input, but they do not meaningfully listen, think, and respond in the way he describes.

I also like how interactivity was discussed more as a continuous variable rather than binary, especially with the use of the fridge example. I think framing interactivity as high or low rather than interactive or not interactive helped me understand how people can experience the same system differently. I agree with the author’s argument in this case, because it gives designers a more practical way to evaluate the interaction on a scale from high to low, rather than just dismissing the concept altogether. But this has me questioning who gets to define that standard of high or low interactivity, especially when we all can experience systems so differently. 

Also, the difference between the user interface and interactivity became clear to me. At first, I had a hard time distinguishing between them. I usually think visually first, and then add interaction later. But now it is clear that a strong interactive system needs to combine form and function as one process, not as separate steps. While I do agree with the author, I think he is somewhat biased towards interaction models that mimic human conversation. This bias most likely comes from his own experience as an interactivity designer, but I think it also limits how other forms of interaction might be perceived and valued. In future work, I want to design my sketches more intentionally, making sure each element has a functional role in the interaction so it does not look like everything was just thrown together.

Assignment 2: Box Hit

Here is my final sketch:

Concept:

For this project, I was inspired by the game Smash Hit, where a ball hits glass and causes it to shatter. Also, while looking through the computer art magazines, I came across Boxes I by William Kolomyjec. I decided to combine these two ideas by creating a grid of squares with a circle in the center that acts as a button that triggers the distortion of the squares. 

The circle turns green when the viewer hovers over it and red when clicked. Once clicked, the circle causes the squares around it to distort, kind of like in the artwork I was inspired by. I wanted a controlled system that slowly breaks apart while still having a game-like effect.

Here is the artwork I was inspired by that was in the magazine:

Code Highlight:

The code that I am particularly proud of is the section that distorts the boxes based on their distance from the center and adds randomness to their positions and sizes. It gave the visual effect I wanted.

//distortion control using the radius
if (ruined && radius < 155) {
  //if the distortion is on and the radius hasn't reached its maximum, expand it. I chose 155 because I didn't want the distortion to go all the way out.
  radius += 4; //increase the radius gradually (like a firework effect)
}

if (!ruined) {
  //if the distortion is off, reset the radius back to 0
  radius = 0;
}

//the grid of boxes
for (let x = 0; x <= width - size; x += size) { //loop horizontally
  for (let y = 0; y <= height - size; y += size) { //loop vertically
    let d = dist(x, y, width / 2, height / 2); //distance from this square to the center of the canvas

    //to make the grid distorted
    if (ruined && d < radius) {
      //if the distortion is on and the square is within the radius
      rect(
        x + random(-5, 1),
        y + random(-5, 1),
        size + random(-5, 1),
        size + random(-5, 1)
      ); //randomize the x and y positions and the width and height for a messy effect (the distorted squares)
    } else {
      //squares outside the radius stay straight and untouched
      fill("white");
      rect(x, y, size, size);
    }
  }
}

Reflection/future work:

In class, we used loops to create a grid of circles, so I applied the same concept using nested for loops to build a grid of squares. At first, the size of each box was 50, but I felt that it was too large for the distortion effect to look effective. The larger size also caused some of the boxes to be cut off at the edges of the canvas. So I decided to reduce the size of the boxes to 20, which made the grid feel more refined and detailed. 

Also, the distortion effect was happening so fast that it felt overwhelming. I tried to slow it down by reducing the values inside the random function, but that did not work the way I expected. So I called the frameRate() function inside setup(), like how you showed us in class. I think slowing down the frame rate made the movement feel more controlled and intentional.

When the circle was clicked, all the boxes that were within the radius distorted at the same time. I was going to leave it this way, but I showed my friend, and she told me that I should try to make the distortion spread outward gradually, like a firework. I did this by using a radius that slowly increases over time, allowing it to expand from the center instead of appearing all at once. I think it made it more visually interesting, so I stuck with it.

While working on this project, I sometimes needed help with the order of the functions and how everything should be structured so the sketch would work properly. I already knew the functions from the slides and reference materials and wrote the code myself, but I was not always sure how to organize them in an order that would work. So I used ChatGPT and Google searches to help clarify those details, specifically how to control the distortion using a radius value, why that needs to be placed at the top of the draw() function so it updates consistently, and where the mousePressed() function should be placed. But most of my understanding came from the class examples, lecture slides, and trial and error with my code.

In the future, I would like to add more details and incorporate more concepts that were previously discussed. I am interested in exploring different types of distortion and animation that really enhance the experience.