Week 11 – Reading Response

I really enjoyed this week’s readings, especially the author’s response to readers’ responses, which was quite entertaining to read.

Firstly, I did find myself agreeing with the initial arguments. I personally think it’s sad to see technology taking over everything. For instance, in the video, the person’s glasses translate the announcements. While, of course, this is convenient, I think it takes away from the normal human experience of asking someone for directions and struggling to understand one another, but still finding that human bond. This might be a bit nit-picky, but I feel like if humans have lived until now without all this technology, then maybe not everything needs to be changed.

Another thing that I thought of, which can kind of be used for both sides of the argument, is disability and accessibility. Blind and deaf people heavily rely on their sense of touch for most everyday tasks; for example, when pouring water, they can feel when the cup is getting lighter. However, on a similar note, technological advancements have also been essential for other disabilities; for instance, someone with limited hand dexterity can find it easier to use their voice for some tasks, rather than using a keyboard or a screen. This isn’t an argument that the author brought up, but it’s something that immediately came to mind for me, and I was surprised it didn’t come up.

Overall, despite literally being a Computer Science major whose whole career path is probably going to be linked to technological advancements and AI taking over the world, I still really feel like it’s important to take a step back and ask whether something really needs to be digitalized and technologized and AI-fied. I might have strayed away from the topic of the reading, but I feel it is all strongly linked.

Week 11 – Production Assignment

Your concept

I was a bit lost on what to do for this week’s assignment. With loads of deadlines and presentations coming up for my other classes, it was difficult to allocate a lot of time to this production assignment, which is something I was very upset about. I have been seeing so many Arduino videos online and getting loads of ideas for possible productions, and I hope to bring these ideas to life in my next submission. However, for today’s assignment, I referred to the SparkFun tutorials.

The idea of my circuit is that the user can play different tunes by pressing the buttons. The sound is generated by the buzzer, and the user can also adjust the volume using the potentiometer.

Schematic & Circuit

 

(image: IMG_8039)

“How this was made” section explaining how the code was made and sources of media assets

As mentioned, I followed the tutorials from the SparkFun book provided with the kit. Beyond just building the circuit and running the code, I read through the information provided to understand how and why everything works. The code is pretty self-explanatory and mimics things we covered in class.

Reflection and ideas for future work or improvements

For the future, I definitely should start EARLIER. Honestly, it was just a struggle with time management for this submission. Otherwise, I think I could’ve come up with something more interesting. I’m hoping to work on something else for this assignment over the weekend and hopefully submit that in addition to this!

Week 10 – Production Assignment

Your concept

I created an Arduino circuit that responds to light using a photoresistor. When the room is dark, the LEDs automatically turn on. Alternatively, the user can also press the button to turn on the LEDs; however, they only remain lit as long as the user holds the button down.

Schematic

Video of the circuit
(video: IMG_7822)
How this was made

I referred to the class notes and schematics to create the photoresistor part of the circuit. Similarly, for the switch, I also referred to class notes. In order to get the LEDs to respond when the room gets dark, I put a specific threshold in the code, and when the sensor reading is below that threshold, the LEDs turn on. The switch’s responsiveness was similar to exercises we did in class.

Code
int photoresistor = 0;              // this variable will hold a value based on the brightness of the ambient light
int threshold = 500;                // if the photoresistor reading is below this value the light will turn on

void setup()
{
  Serial.begin(9600);              // starts a serial connection with the computer
  pinMode(A2, INPUT);             
  pinMode(8, OUTPUT);
  pinMode(9, OUTPUT);             // set pin 8 & 9 as an output that can be set to HIGH or LOW
  pinMode(13, OUTPUT);            // used pin 13 to troubleshoot since sometimes my LEDs wouldn't be connected correctly
}

void loop()
{
  //read the brightness of the ambient light
  photoresistor = analogRead(A0);   // sets photoresistor to a number between 0 and 1023 based on how bright the ambient light is
  int switchPosition = digitalRead(A2);

  Serial.println(photoresistor);    // print the value of photoresistor in the serial monitor on the computer

  if (switchPosition == HIGH || photoresistor < threshold) {
    digitalWrite(8, HIGH);   // turn the LED on (high voltage)
    digitalWrite(9, HIGH); 
    digitalWrite(13, HIGH);
  } else  { 
    digitalWrite(8, LOW); // turn the LED off by making the voltage LOW
    digitalWrite(9, LOW);
    digitalWrite(13, LOW);
  }
}

 

Reflection and ideas for future work or improvements

I’m proud that I was able to get the circuit to actually work and be responsive! As simple as it seems, I was really struggling to make a simple circuit with a switch and one LED; none of my connections were working, and it wasn’t responsive at all. Hence, I’m really happy I was able to incorporate multiple inputs and outputs in the end.

There are a few improvements I can think of for this circuit. For starters, I think I should’ve used different components and edited the code so that when the user presses the switch, the LEDs stay on until the user presses it again. Another possible improvement is allowing the user to turn off the LEDs with the switch when they’re on due to the photoresistor. I think this would require me to play around with the code more and possibly use other components from the kit. I also hope to get more creative with my circuits and have more unusual interactions in the future.

Week 10 – Creative Reading Response

Physical Computing’s Greatest Hits (and Misses)

I really enjoyed going through the different themes of physical computing in this article. I got a lot of inspiration and ideas for possible projects, and the explanation provided for each concept really clarified what implementing that idea in practice would look like. Looking through these examples felt like looking through past students’ projects on this WordPress. I’ve also felt that every time I see an idea that’s been done, it feels like I can’t do that idea anymore either. However, as the author pointed out, it’s always nice to re-imagine ideas in new contexts, think of new interactions, and, as a lot of us did for our midterm projects, link them back to our identities and cultures.

Some themes especially stuck out to me from this article, and I hope to be able to implement them in some way in my future work. First, Floor Pads! I love it when a coding project goes way beyond the screen or the usual hardware of wires and buttons. Especially when something is more prominent and unusual, it definitely captures more attention. And something like Floor Pads, where there’s movement and a lot of viewer interaction involved, can be especially fun. Likewise, Body-as-cursor and Hand-as-cursor are two other themes that stuck out to me for similar reasons. Finally, Things You Yell At was another fun theme, and something I’ve seen a lot at previous IM Showcases. I feel like another common aspect among these themes is that there is no learning curve to understand how they work; you kind of just experiment until you get it, and usually it’s pretty straightforward.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

This article made some great points. Throughout this class, we’ve spoken a lot about interactive art, what makes it interactive, and how to guide users through these interactions. I really appreciated the points it made about allowing your viewers to experiment with the art rather than forcing them into the interactions you’ve planned out; letting them discover it themselves can be more fun. As the author states, an important part of interactive art is to listen. I think it can be eye-opening to see how viewers look at your art. Especially since most artists spend hours and hours just looking at their work, editing every little detail, and creating everything from scratch, it can be hard to zoom out, see the bigger picture, and experience the art through fresh eyes. That’s why it’s important to see how others interact with your work and take it as feedback; just by watching how others interact, you can get new ideas and make edits to your work.

Week 9 – Reading Response

Her Code Got Humans on the Moon

I really enjoyed this reading. Learning about Margaret Hamilton and her impressive contributions to the field of software engineering, especially at such a young age, is truly inspirational. As someone who semi-aspires to be a software engineer, but sometimes feels that it’s not a welcoming environment for women, this reading really changed my perspective and gave me some hope and motivation. A few things stood out to me. First, Margaret felt a lot of pressure to avoid any mistakes. Mistakes are normal, but obviously, in such a high-stakes environment, any mistake can be catastrophic. More than that, if she made a mistake, it would only worsen the image of women in “male-dominated fields,” and it would set a bad impression for future women scientists. For instance, the reading mentions a time when she suddenly remembered a bug in the code late one night and immediately went to fix it; the fear that any mistakes in the code would be blamed on her is a sad but plausible scenario. Despite the Apollo software being a team effort, any mistakes would most likely have pointed back to her. Luckily, that didn’t happen!

Another thing I found interesting is her attention to detail, implementing back-up plans for every scenario. Even when others on her team claimed that no astronaut would make such a mistake, she still insisted on planning for every worst-case scenario. And it paid off. This reminds me of whenever I write if-else statements: sometimes, even though there are only two possibilities (true or false, for example), I still feel the need to write the second branch as an else-if, JUST IN CASE. Obviously, this isn’t the same thing, but these two situations are kind of related in my mind.

Norman, “Emotion & Design: Attractive Things Work Better”

This reading made me reflect on a lot of everyday things that we use. While I cannot think of any examples off the top of my head, I’m sure there are so many times when I’ve chosen the more aesthetic item, even when it was not functionally better. I think social media trends also contribute to this phenomenon nowadays. Overall, I think Norman brings up some great points about how there often seems to be a trade-off between functionality and aesthetics, even though that trade-off is not necessary. His example about tools used during hazardous and stressful situations makes sense; I guess that’s probably why emergency exits and fire extinguishers all look the same and work the same. Most emergency exit doors are “push” doors because that’s the fastest way to get everyone OUT, and it’s important that any tools needed to facilitate this escape are standardized and straightforward to use (fire extinguishers, ‘hammers’ in buses to break the window, etc.).

However, the balance between functionality and aesthetics still stands in calmer situations. Let’s say someone wants to pick up a new hobby and they buy a pottery kit. The packaging is aesthetic, the brand is well-known and trendy, and everything LOOKS good. Then, they go home and try to start working on their new pottery and nothing makes sense. The instructions are so short because they were trying to keep a minimalist aesthetic, and everything is organized really well but the user does not understand how to use anything. Now, this calm situation has become a frustrating one because what was supposed to be a relaxing, unwinding evening is now a confused, annoyed evening. This might not be what Norman meant by his explanations, but this is kind of what I understood. And this continues to apply in so many aspects of life, when an app developer makes a new app, the interactions should be built on user standards and shouldn’t require the user to think twice, or else the user will just delete the app and go back to what they know. The learning curve to adapt to something new shouldn’t be so high that the user abandons the experience.

Week 8 – Midterm Project

Sketch link!

Describe the overall concept of your project (1-2 paragraphs)

My project is a puzzle-solving game with an “Ancient Egypt” theme. I wanted a theme that related to me, and since I’m Egyptian, this theme only made sense. Initially, I planned to use cartoon-like images of landmarks in Egypt, but I couldn’t find the images I wanted online, or there wasn’t enough variety. I also can’t draw, so I ended up just using realistic images instead (which I think I found through a website called Freepik, but it was so long ago that I don’t even remember anymore). I had a rough plan in Canva of how I wanted the different screens within the game to look (included in my midterm progress documentation), and most things stayed the same in my final project. I found a font online called “khamenet,” which I decided to use throughout. I feel that this really added to the vibe of my project.

As for the game aspect, I settled on having each piece be a plain square, rather than an actual puzzle-piece shape, since that would’ve been too difficult to code. The user can move the pieces around using their mouse. I didn’t add any instructions for my game, as I felt it was self-explanatory; I tested it with my siblings, and they both were able to play the game without any instructions. The only thing that I needed to point out was the ‘peek’ button. The peek button allows the user to see a preview of the image they’re putting together, on the board itself. Initially, I was planning to just have a small reference image at the bottom of the screen; however, the peek overlay ended up being more helpful and looks more seamless than having something in the corner. While the user plays, there’s background music and a ‘ding’ whenever a piece is placed correctly; if the user turns off the sound, both of these audio elements are muted/stopped. In addition, while the user plays, the game keeps track of the time taken, the number of pieces placed, and the number of moves taken. Finally, at any point, if the user gives up, they can press the menu button at the top right to go back to the main menu. If they do not give up and complete the puzzle successfully, there’s a victory cheer (called a zaghrouta) that plays, as well as an ending screen with their time taken. They can press anywhere to restart.

Describe how your project works and what parts you’re proud of (e.g. good technical decisions, good game design) 2-3 paragraphs

For the start screen, the main interactions are choosing the image and the difficulty level. The user can select each of these elements by clicking on them with the mouse. If the user presses within the coordinates of an element, their choice is recorded and the game proceeds accordingly. By default, if the user does not click on anything, the first image is selected and the difficulty is easy. The user can see which option is selected, as the selected image has a border around it and the selected difficulty is darker in color. Finally, the user starts the game by pressing within the boundaries of the lines around the word “START”. As long as the user is on the start screen, there is some music in the background.
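As an illustration of the kind of bounds check this involves, here is a minimal sketch. The option names and coordinates are made up for the example; in the real sketch this logic would live inside p5.js’s mousePressed() using mouseX and mouseY.

```javascript
// Hypothetical layout values for illustration only; the real sketch's
// coordinates come from the Canva design, not from here.
const difficulties = [
  { name: "easy",   x: 100, y: 400, w: 120, h: 50 },
  { name: "medium", x: 260, y: 400, w: 120, h: 50 },
  { name: "hard",   x: 420, y: 400, w: 120, h: 50 },
];

// Returns the name of the option the click landed on, or null
// so the caller can keep the current (default) selection.
function hitTest(options, mx, my) {
  for (const opt of options) {
    if (mx >= opt.x && mx <= opt.x + opt.w &&
        my >= opt.y && my <= opt.y + opt.h) {
      return opt.name;
    }
  }
  return null;
}
```

The same pattern works for the image choices and the START text: each clickable region is just a rectangle, and the first rectangle that contains the mouse wins.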

I’m really proud of the aesthetic and look of my start screen, since I wasn’t sure how closely I would be able to match the idea I had put together on Canva. Luckily, however, everything came together really nicely. I was really happy with the visual feedback for the user’s image and difficulty selections.

On the game screen, there’s a puzzle board on the left where the user puts the pieces together, a piece tray on the right where all the pieces start out scattered, and a few buttons at the top to control different things. First, at the top left, there’s the peek button. This button (as described earlier) allows the user to see a light preview of the full image overlaid on the puzzle board. I’m really proud of this element, since I didn’t really have a plan for how I wanted to display the preview, so the fact that it ended up being something so simple is really nice.

// shows a preview of the image being solved, as a background 
function drawPeekOverlay() {
  let img = getSelectedImage();
  if (img) {
    push();
    tint(255, 70);
    imageMode(CORNER);
    image(img, 10, 120, 600, 600);
    pop();
  }
}

Next to the peek button is the volume button, which simply toggles all the sound on or off, including the correct-placement ding and the background music. A possible improvement is keeping the ding audible even when the user silences the music, since I feel like that audible feedback can be more helpful.
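A minimal sketch of what that toggle might look like, assuming p5.sound-style objects with loop(), stop(), and setVolume(); the real handler’s details may differ.

```javascript
// Global mute flag; true means sound is currently audible.
let soundOn = true;

// Flips the flag and applies it to both audio elements.
// bgMusic and dingSound are assumed to expose a p5.sound-like API.
function toggleSound(bgMusic, dingSound) {
  soundOn = !soundOn;
  if (!soundOn) {
    bgMusic.stop();          // silence the background track
    dingSound.setVolume(0);  // mute the placement ding
  } else {
    bgMusic.loop();          // restart the background track
    dingSound.setVolume(1);  // restore the ding
  }
  return soundOn;
}
```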

In the middle at the top, there’s an info bar that tracks the time elapsed, how many pieces have been placed out of the total, and the number of moves taken. The time elapsed is tracked by calculating the difference between millis() and the startTime recorded when the puzzle began, and the placed count comes from looping through the pieces array each frame. Finally, at the top right, there’s a menu button, which simply takes the user back to the start screen at any point.
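That bookkeeping can be sketched with plain helper functions; in the real sketch, millis() and the global pieces array are read directly, so these names are just for illustration.

```javascript
// Whole seconds elapsed since startTime, both in milliseconds
// (in p5.js, nowMillis would be the current millis() value).
function elapsedSeconds(startTime, nowMillis) {
  return Math.floor((nowMillis - startTime) / 1000);
}

// Counts how many pieces have isPlaced set, for the "X / total" display.
function countPlaced(pieces) {
  let placed = 0;
  for (const p of pieces) {
    if (p.isPlaced) placed++;
  }
  return placed;
}
```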

As for the actual puzzle, when buildPuzzle() is called, it creates a PuzzlePiece object for every cell in the grid and scatters each one at a random position inside the tray using random(). Each piece stores which row and column it belongs to, so it knows its exact target position on the board. The drag-and-drop system uses three separate p5.js mouse functions working together: mousePressed() picks up the topmost unplaced piece the user clicked on by looping backwards through the array, mouseDragged() updates the piece’s position to follow the mouse every frame, and mouseReleased() drops it and calls trySnap(), which checks if the piece landed within 30 pixels of its correct target; if it did, it locks into place exactly and the border around it turns green. The dragged piece is always moved to the end of the pieces array so it draws on top of everything else. When all pieces are placed, the background music stops, the zaghrouta audio plays, and the state switches to WIN, which triggers the win screen on the next frame.
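The three handlers can be condensed into a sketch like this, with mouse coordinates passed in as parameters so the logic stands alone; the names mirror the description above but are not the project’s exact code.

```javascript
// The piece currently being dragged, or null.
let dragging = null;

// mousePressed(): loop backwards so the topmost drawn piece wins.
function pressed(pieces, mx, my) {
  for (let i = pieces.length - 1; i >= 0; i--) {
    const p = pieces[i];
    if (!p.isPlaced && p.contains(mx, my)) {
      dragging = p;
      pieces.splice(i, 1); // move to the end of the array
      pieces.push(p);      // so it draws on top of everything else
      break;
    }
  }
}

// mouseDragged(): keep the piece centered under the cursor.
function dragged(mx, my) {
  if (dragging) {
    dragging.x = mx - dragging.w / 2;
    dragging.y = my - dragging.h / 2;
  }
}

// mouseReleased(): drop the piece and let it try to snap into place.
function released() {
  if (dragging) {
    dragging.trySnap();
    dragging = null;
  }
}
```

The backwards loop plus the move-to-end trick is what keeps the visual stacking order and the click order consistent: whatever is drawn last is also picked up first.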

class PuzzlePiece {
  constructor(id, col, row, cols, img) {
    this.id = id;
    this.col = col;
    this.row = row;
    this.cols = cols;
    this.img = img;

    // calculate piece size based on the no. of cols and board size
    this.w = 600 / cols;
    this.h = 600 / cols;

    // current position, updated in buildPuzzle
    this.x = 0;
    this.y = 0;

    // target position (where the piece actually belongs on the board)
    this.targetX = 10 + col * this.w;
    this.targetY = 120 + row * this.h;
    this.isPlaced = false;
  }

  draw() {
    push();
    // draw the actual image slice
    if (this.img) {
      let sliceW = this.img.width / this.cols;
      let sliceH = this.img.height / this.cols;
      let sliceX = this.col * sliceW;
      let sliceY = this.row * sliceH;

      imageMode(CORNER);
      // draw slice at current this.x and this.y
      image(this.img, this.x, this.y, this.w, this.h, sliceX, sliceY, sliceW, sliceH);
    }
    // draw border based on 'state'
    noFill();
    if (this.isPlaced) {
      stroke("#2a7a2a"); // green, if correct
      strokeWeight(3);
    } else if (this === dragging) {
      stroke("#e59828"); // orange, if being moved
      strokeWeight(3);
    } else {
      stroke("#6b2705");
      strokeWeight(1);
    }
    rect(this.x, this.y, this.w, this.h);
    pop();
  }

  // check if the mouse is touching this specific piece
  contains(mx, my) {
    return mx > this.x && mx < this.x + this.w && my > this.y && my < this.y + this.h;
  }

  // snap logic
  trySnap() {
    let d = dist(this.x, this.y, this.targetX, this.targetY);
    if (d < 30) {
      this.x = this.targetX;
      this.y = this.targetY;
      this.isPlaced = true;
      return true;
    }
    return false;
  }
}
Describe some areas for improvement and problems that you ran into (resolved or otherwise) (1-2 paragraphs)

I think one main area for improvement is having actual puzzle-shaped pieces rather than just squares. I think that would’ve definitely gotten my idea across better; however, considering my skill set and based on tutorials and the p5.js reference page, squares seemed more feasible. I think adding instructions to clarify what the peek button does would’ve also been helpful, but my hope was that the user would just play around with the buttons and eventually discover it themselves (or ask me how to view a preview, and I would guide them to the button). I also would’ve loved to add some visual animations tied to the background music, and to have different music/audio for each picture, but I completely forgot about the sound requirement until the last minute, so there was a bit of a time constraint there.

I ran into a few problems while writing the code for this project; however, at this point, most of them have slipped my mind. One thing that I do remember, since it was added last minute, is that when I added audio files, my code got stuck in an endless loading loop. I was stuck on that for around 10 minutes, thinking maybe the files were just taking longer than usual to load, before I checked back through our class notes and realized I needed to add the sound library to my index.html file. Luckily, that wasn’t too big of an issue. Whenever I did run into problems with my code (debugging) or was stuck on how to begin or proceed with specific features, I got some help from Gemini, as it would guide me on what topics to cover, give me links to videos to refer to, and point out which pages from the p5.js reference would be helpful, so I felt that I learnt a lot more that way.

Week 5 – Midterm Progress

Concept & User Interaction

I was very hesitant about an idea for my midterm. I really wanted to incorporate something personal to me and create a game that I could imagine my younger siblings playing. The idea I ended up with is: a puzzle game! I love solving jigsaw puzzles, so when I thought of digitalizing that, I was immediately excited about the idea. In addition, I decided to give it an Ancient Egypt theme! The last puzzle I solved consisted of landmarks in Egypt (image below), so, inspired by that, the images that the user can solve as a puzzle in my game will be Egyptian landmarks (but more cartoon-like).

The first thing I started working on was the design. I feel that even though we spend a lot of time on p5.js in class, the design and aesthetics of our projects really matter, as they are what users perceive first. So, I went to Canva and started working on what I wanted my game to look like. I also started looking for fonts that would match my theme. However, since I don’t have Canva Pro, I wasn’t able to incorporate the fonts I found online into my designs, but I will definitely be using them in my code. Something that is missing from my design, but that I definitely plan to add to my final implementation, is a reference image for the user to refer to on the game screens. It was a bit difficult to get everything to fit into the dimensions provided by Canva; however, I will try to work more on the layout in the coding stage.

The font I’m planning to use would look something like this if used for the title:
I’m still debating whether it’s a bit illegible or whether it fits the theme nicely. For now, I’ll just keep it as a possible option, because I feel the regular serif fonts can be a bit boring.

After design, the next question is obviously user interaction and the actual code. The user interaction in my program is pretty obvious: the user selects their puzzle image and their difficulty, and then they solve the puzzle by moving the pieces from the right side (consisting of the jumbled-up pieces) to the left side. Initially, when I was thinking about my idea, the picture I had in my head was jigsaw-shaped pieces; however, I felt that actually implementing that in p5.js might be a bit too troublesome, so I settled for simply splitting the image into smaller squares. Then, thinking about how the pieces would go onto the solved board, I thought it would make sense that if the user moves a piece close to where it actually belongs in the image, the piece would snap into place (paired with appropriate sound effects). I think that is the most satisfying way to gamify this interaction, and I can already imagine what the code for that could look like. In addition to sound effects, I’m planning to add some background music to each screen, probably relating back to the image selected for the puzzle. Finally, on the ending screen, I will probably match that with a celebratory sound effect and confetti. For the confetti, I’m imagining Egyptian hieroglyphs raining down, as that would link back to the theme I’m going for, and it would be a unique idea. However, I still have to think more about the implementation of that.

In terms of code, I haven’t been able to make much progress; however, I’ve started to watch Coding Train and Patt Vira videos to understand how to code the mouse interactions I’m thinking of.

Most frightening or uncertain part of my midterm project

The most frightening part of my midterm project so far is how to really implement my ideas in code in an efficient, simple way. There are two main interactions I’m intimidated by: (1) splitting the images into smaller squares, especially maintaining the aspect ratio and quality of the images, and (2) snapping the pieces into place when solving the puzzle. For the former, I believe The Coding Train has done a video on a similar concept, so I hope I can refer to his tutorials to understand the code; the latter, I believe, will become more familiar through trial and error. Another thing I’m concerned about, which is less technical and more about aesthetics, is finding appropriate assets (i.e. images, background music, sound effects), as that is what most non-technical users will notice, rather than the code. To overcome this, I’ve already started searching for ideas and looking at past projects to see what resources they used to find their assets. Luckily, we also live in the age of AI, so if it comes down to it, I may have to use AI-generated images to find exactly what I’m looking for; however, my preference is that the images are more realistic.

Week 5 – Reading Response

What are some of the ways that computer vision differs from human vision?

Previously, I kind of always linked computer vision with machine learning. I always assumed there was some use of machine learning to identify the different objects in a given video and to really understand the movements and different interactions within it. However, after reading this article, I feel like I’ve gained a much clearer understanding of how computer vision actually works, as well as of the limitations of the technology available. While both computers and humans can probably identify where a person is in a video and track their movements, humans are also usually able to predict their next movements. Humans are familiar with how humans interact with objects, while computers really depend on data, which can sometimes miss anomalous cases or outliers. An example that may seem a bit far-fetched is someone who only has four fingers: human vision is obviously able to comprehend that, while I assume computer vision may not be able to tell that something is missing in the image, since it’s only programmed to work with the norm.

In terms of computer vision’s capacity for tracking and surveillance and its effect on computer vision’s uses in interactive art, I think one of the examples from the article, Suicide Box, combines those two ideas nicely. The tracking and surveillance aspect of computer vision has been used to create an art piece (kind of) about suicide and to emphasize irregularities in data. An issue that immediately comes up for me with computer vision is privacy. A tool once so heavily used for tracking and surveillance now being used in interactive art may be suspicious to viewers. Viewers may be paranoid that these art pieces are collecting data about them; however, I’m not sure if this is a common concern, considering most art pieces we’ve looked at that use computer vision have been well-received.

Week 4 – Creative Reading Response

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

Something that drives me crazy is some of the dryers in the laundry rooms on campus, specifically the ones that don’t have a screen! How am I supposed to know when my laundry will be ready? This system could make sense if this machine was just used by one person in their house, and they could just check on the machine whenever (or maybe there’s some sound that plays when a cycle is done). However, for a machine that’s shared by a whole building, it’s so inconvenient that you kinda have to just guess when your laundry would be ready. And if you’re too late, your laundry might just be tossed on the floor! This can be blamed on either university facilities for purchasing machines that do not have screens (since a few do, it’s just some that don’t have screens) or we can just blame the machines, because why can’t I know how long my clothes will take? So the improvement from my perspective is to add a screen!

How can you apply some of the author’s principles of design to interactive media?

Instructions! But simple ones. Ones that build on what users are used to. For example, users are now used to pressing on the right side of their screen to speed up videos, since that is how you do it on TikTok, YouTube, and Instagram. Instead of trying to reinvent the wheel, sometimes it’s best to use what already exists.

Week 4 – Inside Out

Your concept

I had many ideas for this assignment. When we saw the flight path visualizations in class, I was inspired to do something similar, focusing maybe on the DXB and AUH airports. However, after looking through Kaggle, I didn’t find datasets that would really work for the ideas I had, and I didn’t have many ideas for a new kind of visualization. Instead, I decided to think about how I could include generative text, and that’s how I came up with the idea I ended up doing! I love the movie Inside Out, so I decided to create a visual of the orbs that store Riley’s memories; when the user presses on an orb, it displays that memory.

Embedded sketch

How this was made

The first thing I did was create the design for the orbs. At first, I was going to just import images for each color; however, I decided to explore the different features of p5.js and instead create them from scratch. I referred to a video by Patt Vira where she creates a gradient screensaver built from circles with a gradient fill. I watched up to almost the halfway point of the video to understand how to create the gradient fill, and I’ve included comments wherever I used code from her video. I customized the colors based on the colors of the emotions from the movie. At first, the colors were randomized and generated using a function. I was planning to repeat the same memories across multiple orbs, since it wasn’t guaranteed that I’d have enough memories for the number of orbs generated in each color. After actually creating the CSV file of memories, I realized that there weren’t that many, so I decided that the number of orbs would equal the number of memories, and the color of each orb would match the emotion of its memory. To actually create this in code, I made an object that stores all the emotion names and the color for each one. Then, a function called when creating the orbs checks what color each one should be based on the emotion it was assigned.
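A minimal sketch of that emotion-to-color lookup; the hex values here are stand-ins for illustration, not the actual colors from the sketch.

```javascript
// Hypothetical colors keyed by the five Inside Out emotions.
const emotionColors = {
  joy: "#f5d547",
  sadness: "#3b6fb5",
  anger: "#d0342c",
  fear: "#8a5fc2",
  disgust: "#5fa052",
};

// Returns the orb color for a memory's emotion, with a neutral
// fallback in case the CSV contains an emotion not in the table.
function colorForEmotion(emotion) {
  return emotionColors[emotion.toLowerCase()] || "#cccccc";
}
```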

The other main bit of code is the interactivity of the user pressing on an orb and the text being displayed. When the orbs are created, each orb is assigned a line in the CSV file (an emotion and its corresponding memory), so when the user selects a specific orb, it just shows whichever memory text was assigned to that orb. I also imported a font from Google Fonts to try to add my own touch to the text section.
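A sketch of how orbs might be paired with CSV rows and then hit-tested on click; the row fields, positions, and radius are assumptions for illustration (in p5.js, the rows would come from loadTable() and the click from mousePressed()).

```javascript
// Pairs each CSV row with a screen position to build the orb objects.
function makeOrbs(rows, positions) {
  return rows.map((row, i) => ({
    x: positions[i].x,
    y: positions[i].y,
    r: 30, // hypothetical orb radius
    emotion: row.emotion,
    memory: row.memory,
  }));
}

// Returns the memory text of the orb under (mx, my), or null
// if the click missed every orb.
function memoryAt(orbs, mx, my) {
  for (const orb of orbs) {
    const d = Math.hypot(mx - orb.x, my - orb.y);
    if (d < orb.r) return orb.memory;
  }
  return null;
}
```

Because each orb carries its own row's data, displaying the memory on click is just a matter of drawing whatever memoryAt() returns.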

A highlight of some code that you’re particularly proud of
// referred to yt vid: https://youtu.be/Mdt81-7-U18?si=Uzq8KlE-3FfifFLA
function drawOrb(orb) {
  let gradient = ctx.createRadialGradient(orb.x, orb.y, 0, orb.x, orb.y, orb.r);
  
  let rr = red(orb.c);
  let gg = green(orb.c);
  let bb = blue(orb.c);
  
  // make the center transparent and the outer area solid
  gradient.addColorStop(0, `rgba(${rr}, ${gg}, ${bb}, 0)`);
  gradient.addColorStop(0.8, `rgba(${rr}, ${gg}, ${bb}, 1)`);
  
  // use gradient as fill for the circles
  ctx.fillStyle = gradient;

  noStroke();
  ellipse(orb.x, orb.y, orb.r * 2, orb.r * 2);
}

Despite following a video for this section, I am still really proud that I explored something new to improve the aesthetics of my assignment!

Reflection and ideas for future work or improvements

Something I wanted to add, but didn’t have the time for, was a glowing effect to kind of mimic how the orbs look in the movie. I think that’s a possible improvement that could make the sketch look truer to the inspiration.