Week 11 – Reading Reflection

One of the most interesting points Pullin makes in his book is that many design standards around disability aim to cover it up rather than embrace or emphasize it. There is always a strong push toward “normalcy” in the design world. His example of how glasses went from medical necessity to fashion item captures this well: most people acknowledge that someone wears glasses out of necessity, but we also subtly understand that the style of glasses they wear is an expression of themselves. I imagine that as prosthetics become more advanced and fine-tuned, humans will find a way to embrace these tools as more than just their functions and introduce an expressive layer to them. An easy example of how that might look: if someone needs a prosthetic arm, they might want one that resembles their favorite prosthetic-using character in fiction (like Edward Elric, Luke Skywalker, or even Guts). This agency and idea of co-design can empower non-able-bodied folks.

Pullin actually mentions that prosthetic arms are much more intimate than prosthetic legs; arms inherently serve more than their function, and it’s not just about nails, veins, or perceived texture. When a prosthetic tries to resemble a fleshy human arm and falls short, it creates a serious sense of disappointment and is often off-putting. Pullin argues the best approach here is often NOT to hide the fact that the arm is artificial but to emphasize it while still providing the functions of a fleshy arm. This was all really interesting to me, especially his example of how certain clients of Monestier would hate the feeling when the hand would initially fool new acquaintances until they had their moments of realization.

As I read this, I thought about how prosthetics in fiction are often romanticized. A sick metal arm that can turn into a cannon or extend into a grappling hook is often framed as an “upgrade,” but I imagine losing full dexterity and fleshiness creates a sense of loss that most fiction rarely explores. I saw this as a connection to last week’s reading, “A Brief Rant on the Future of Interaction Design,” which I also thought was very insightful. There must be an inherently strong emotional gap between prosthetic function and feeling, and I think our portrayals of disabled characters in media could use more commentary on losing the sense of touch.

Week 10: Arduino Loopstation (Musical Instrument)

This week Yongje and I paired up to make our very own musical instrument.

I thought about the capabilities of the Arduino speaker and was unimpressed with its sound “texture,” so we discussed what we could do with the rather limited range of sounds we could generate. I’m not much of a musician, so I suggested: what if we made a simple beat recorder, kind of like a metronome? Yongje informed me that what I was describing is called a “loopstation,” and we got to designing.

Concept (With Visuals) – Hubert

After we planned what we wanted to do, I decided to visualize the user interaction side of the project first before designing the schematics and technical side.

The red button would be to start/stop the recording process. A red LED would indicate whether it was currently recording.

The blue button would be there for the user to tap in their beat.

When you are done with your beat, you can save it by clicking the red button once again. You can see whether it was properly stopped by the indicator turning off. Then you can press the green button to play your recorded beat.

Schematics & Planning – Hubert

Before we started connecting metal to metal, I made a schematic to quickly map out everything we needed to connect.

 

Code & Difficulties Encountered – Yongje

There are 3 main parts to the code.

The first is the debouncing logic, which filters out the state where the signal bounces between true and false when the switch is pressed. The second is playback: actually playing back the recorded sound. The third, and hardest, part is figuring out how to store the beat recording.

I’ll start by explaining the hardest part first, which is storing the beat recording.
The beat recording logic works by tracking the time of each button press and release while the device is in recording mode. Every time the beat button is pressed, the program calculates the gap since the previous press (gap = now – tRef) to capture the spacing between beats. When the button is released, it measures the duration the button was held (dur = now – lastPressTime) to record how long that beat lasted. Both values are stored in arrays (gaps[] and durs[]), building a timeline of when each beat starts and how long it plays.
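In simplified terms, the bookkeeping looks roughly like this. This is a sketch using the names from the explanation above (gaps[], durs[], tRef), not our exact code, with plain timestamps standing in for millis():

```cpp
// Simplified sketch of the recording bookkeeping (not the exact project code).
// gaps[i] = time between the start of beat i and the start of the beat before it
// durs[i] = how long beat i was held down
const int MAX_BEATS = 32;
unsigned long gaps[MAX_BEATS];
unsigned long durs[MAX_BEATS];
int beatCount = 0;
unsigned long tRef = 0;          // timestamp of the previous press
unsigned long lastPressTime = 0; // timestamp of the current press

// Call when the beat button goes down; `now` would come from millis().
void onBeatPress(unsigned long now) {
  if (beatCount >= MAX_BEATS) return;
  gaps[beatCount] = now - tRef; // spacing since the previous beat started
  lastPressTime = now;
  tRef = now;
}

// Call when the beat button is released.
void onBeatRelease(unsigned long now) {
  if (beatCount >= MAX_BEATS) return;
  durs[beatCount] = now - lastPressTime; // how long this beat was held
  beatCount++;
}
```

Storing the gap on press and the duration on release is what lets the playback step later reconstruct both the spacing and the length of every beat.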

Now onto the playback logic, which is responsible for reproducing the rhythm that was recorded. It does this by reading through the stored arrays of gaps and durations in order. For each beat, the program first waits for the gap time, which is the delay before the next beat begins, and then plays a tone on the speaker for the duration that was originally recorded. Because each recorded gap includes the previous beat’s duration, the playback code subtracts the previous duration from the current gap to get the true silent time between beats. This ensures that the playback matches the timing and spacing of the user’s original input, accurately reproducing both the rhythm and the length of each beat. I also had to add logic to handle negative silence times, because when the beats and the gaps between them were too short, the subtraction produced errors. This is explained in depth in the comments of the code.
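The subtraction can be sketched like this. It's a simplified version: the zero-clamp here is one way to guard against the negative-silence case, while our exact handling is described in the code comments:

```cpp
// Sketch of the playback timing math (simplified, not the exact project code).
// The recorded gap for beat i includes the previous beat's duration, so the
// true silent time before beat i is gaps[i] minus durs[i-1].
long silentTimeBefore(int i, const unsigned long gaps[], const unsigned long durs[]) {
  if (i == 0) return (long)gaps[0];             // first beat: wait the whole gap
  long silence = (long)gaps[i] - (long)durs[i - 1];
  if (silence < 0) silence = 0;                 // guard: very short beats can make this negative
  return silence;
}
```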

Finally, the debounce logic ensures that each button press or release is detected only once, even though mechanical switches naturally produce rapid, noisy fluctuations when pressed. When a button’s state changes, the program records the current time and waits a short period to confirm that the signal has stabilized. Only if the input remains steady for longer than this debounce delay does the program treat it as a valid press or release event. This filtering prevents false triggers caused by electrical noise or contact bounce, giving the system clean, reliable button inputs for recording and playback control. At first, I didn’t have this debounce logic implemented and had a hard time figuring out why the system sometimes failed to recognize button presses or seemed to trigger multiple times for a single press. Once the debounce logic was added, the button responses became stable and consistent.
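A minimal version of that filter might look like this (a simplified sketch; the delay value and variable names here are assumptions, not our exact code):

```cpp
// Simplified debounce sketch (not the exact project code). With INPUT_PULLUP
// wiring, 1 = released and 0 = pressed.
const unsigned long DEBOUNCE_MS = 25; // assumed settling delay
int stableState = 1;                  // the debounced (accepted) state
int lastRawState = 1;                 // the most recent raw reading
unsigned long lastChangeTime = 0;

// Feed every raw reading with its timestamp; returns the debounced state.
int debounce(int rawState, unsigned long now) {
  if (rawState != lastRawState) {
    lastChangeTime = now;   // the raw signal moved: restart the settling timer
    lastRawState = rawState;
  }
  if (now - lastChangeTime > DEBOUNCE_MS) {
    stableState = rawState; // held steady long enough: accept it
  }
  return stableState;
}
```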

Reflection

I believe this project turned out really well, and it was very interesting to work on our first group project of the semester.

Week 10 – Reading Reflection

My initial impression of the “Brief Rant on the Future of Interaction Design” reading was that it was formatted in a very engaging way. As someone with ADHD, I tend to read long paragraphs a lot slower than individual sentences structured this way; this made me trust the credibility of the author’s perspective on interaction design.

The way the author transitions from explaining the importance of touch to the complete lack of it in our touchscreen technology was really well done, and I couldn’t agree more. Using touchscreens is really un-stimulating, and anyone who has tried texting without haptics will know it feels incredibly unresponsive– but that also seems to be the future we’re headed towards. The images Bret Victor chooses to accompany his text are hilarious too; there really aren’t many things we naturally need our hands to swipe on other than manmade touchscreens. Victor’s explanation of how humans naturally transition between grip styles is pretty interesting to me too.

This reading gave me a lot to think about. One of the things that came to mind was the experience of using a controller versus mouse & keyboard when playing video games. For those unaware, let me explain the main differences between the two.

When you use a controller (or gamepad, as some call it), you’re primarily using your thumbs for everything from the analog sticks to the face buttons. Using just your thumb to control the camera can be quite difficult when precise and delicate movements are required.

When you use a keyboard and mouse, your arm and wrist are capable of microadjustments while holding the mouse to input much more precise and delicate movements; not to mention your keyboard hand is using way more than just your thumb to control what’s happening on screen.

So going by what I’ve said, many would probably wonder why anyone would ever use a controller, but that’s the thing– I haven’t explained the one thing that makes this remotely a difficult decision.
Controllers give an extra layer of immersion by letting the user relax their arms and lean back, while also providing haptic feedback and vibrations in response to what’s happening in-game. Imagine you’re playing a game where explosions are involved– the controller would vibrate violently in your hands as explosions affect your character. This is why you turn to keyboard and mouse for precision but to a controller for immersion.

Now onto Victor’s follow-up article– I thought his response to voice input was pretty amusing: “I have a hard time imagining Monet saying to his canvas, ‘Give me some water lilies. Make ’em impressionistic.’” It’s amusing because that’s literally how our modern generation approaches things they don’t know how to do.

One other thing that really caught my attention in the follow-up was this quote, “The density of nerve endings in our fingertips is enormous. Their discrimination is almost as good as that of our eyes. If we don’t use our fingers, if in childhood and youth we become “finger-blind”, this rich network of nerves is impoverished.” I wonder if late-gen Z and gen alpha have any indicators of finger blindness as so many of us grew up with touchscreen devices as our main source of entertainment.

 

Week 9: Lock Mechanism (Analog + Digital)

I initially really struggled to come up with a concept for this week’s project. While I understood how to use analog and digital switches, I wasn’t sure how I would use them in conjunction to create something engaging and interesting. I slept on it and woke up with the idea to build off the theme of security from week 8 and make a lock mechanism using analog inputs like the potentiometer.

One of my biggest concerns was that I wasn’t sure how to draw a proper schematic. I looked online and used Tinkercad Circuits for a while, but it wasn’t exactly the simple schematic-drawing tool I wanted, so I decided to save the site for future use instead.

This was the schematic I came up with. I knew I was going to mess up and have to move stuff around so I did it digitally using simple shapes and lines on canvas. I barely knew what I was doing and to be honest it made a LOT more sense to just start putting wires and pieces together on the breadboard but this schematic was how I thought it would look.

And this is how it looked physically with all the relevant pieces attached to the breadboard.

I knew the wiring would get pretty crazy as I started to connect everything so this is how I color-coded them:

    • Red: Power Source (+)
    • Black: Ground (-)
    • Green: Inputs Connections to Arduino Uno (Switch, Pot Meter, etc)
    • Yellow: Output Connections to Arduino Uno (LEDs)

On the code-side, I made sure to declare each pin I was going to use at the very top so I could quickly change it if I needed to swap stuff around.

int greenButton = 7; //button to open lock
int blackSwitch = 2; //part of the lock (the black on/off switch)
int potMeter = A4; //part of the lock (the rotating blue one)

int greenLED = 12; //lock is open
int redLED = 11; //indicates lock is still locked

//LOCK COMBINATIONS
int blackSwitchANSWER = LOW;
int potMeterANSWER = 1000; //above X value to be correct

This is what the final product looked like:

I didn’t run into many bugs when coding, but I did initially (and accidentally) set the red light to always be on unless I was holding down the green button. I had to search online for a fix, and I eventually found I could set the pinMode for the button to “INPUT_PULLUP”, which is useful for inputs that otherwise would have no stable value when not pressed.

pinMode(greenButton, INPUT_PULLUP); //INPUT_PULLUP is good for buttons that automatically spring back up
pinMode(blackSwitch, INPUT_PULLUP);
pinMode(potMeter, INPUT);

pinMode(greenLED, OUTPUT);
pinMode(redLED, OUTPUT);

I did have a physical difficulty where I was testing different kinds of inputs with the switch and the potentiometer and forgot what the correct combination actually was– and with no way to know if the green light was even going to turn on when it WAS correct. Thankfully a minute of trial and error and making sure the code side of things was correct got me to return to the original combination, but that could’ve been pretty annoying if the lock was any more complicated.
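For context, the check behind whether the lock opens conceptually boils down to a single AND of the two inputs. Here is a hedged sketch with stand-in values mirroring the declarations above, not the exact project code:

```cpp
// Hedged sketch of the combination check (stand-in values, not the exact code).
// The answers mirror the declarations shown earlier: the black switch must read
// LOW, and the pot must read above 1000 (analogRead returns 0-1023).
const int LOW_STATE = 0;           // stand-in for Arduino's LOW
const int blackSwitchANSWER = LOW_STATE;
const int potMeterANSWER = 1000;   // pot must be above this value to be correct

bool checkCombination(int blackSwitchState, int potMeterValue) {
  return blackSwitchState == blackSwitchANSWER && potMeterValue > potMeterANSWER;
}
```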

When it came to the LED light responses, I originally had the appropriate light turn on for 2 seconds and then turn off. This didn’t feel quite right, so I made a creative decision to flash the appropriate light as feedback instead; I did this by creating a for loop that flashes the light 5 times with a small delay. This felt a LOT more responsive than the brief moment the light would turn on previously. Below is the full if-statement that checks whether the confirm button, the green one, was pressed.

if (greenButtonState == LOW) {  
  if (LockIsUNLOCKED == true) { 
    digitalWrite(redLED, LOW); //turn OFF red light
    for(int i = 0; i < 5; i++) {    // flash 5 times
      digitalWrite(greenLED, HIGH);
      delay(150);
      digitalWrite(greenLED, LOW);
      delay(150);
    }

  } else {
    digitalWrite(greenLED, LOW); //turn OFF green light
    for(int i = 0; i < 5; i++) {    // flash 5 times
      digitalWrite(redLED, HIGH);
      delay(150);
      digitalWrite(redLED, LOW);
      delay(150);
    }
  }
}

I am very proud of how it turned out in the end with both the technical difficulties that I solved and how it feels to interact with it. Below is a demonstration of the lock working; you need to toggle the black switch to the right side and twist the potentiometer counterclockwise to the end.

Link to ino File (Google Drive)

Week 8~: DIY Security System (Unusual Switch)

I started this project with a few ideas. One of the ideas was to make a secret switch hidden on a bookshelf. One of the books would have a piece of copper tape underneath that would touch the copper tape beneath it on the shelf. However…I realized that I didn’t have enough books to make it look convincing so I started looking around my room for something that inherently had motion to make my life easier. So I thought– what if I used my door?

Sometimes I don’t fully close my door properly and don’t know if it’s fully closed. That inspired me to make the door a switch to act like a DIY security system. I have no experience with physical computing so I had some trouble visualizing how it would look in my head, and it didn’t help that I’ve been out sick for so long. This image below was my initial concept for it.

The idea is to set up a simple LED light on the breadboard that would light up when the door was fully closed. I think I understood that pretty comfortably, but I didn’t know how the rest would work.

One of the challenges was having a mobile power source, as I was definitely not going to use my laptop for power when the Arduino kit is all the way up on the wall. I remembered that I found a battery pack in the kit so I dug that out and went down to the IM lab to ask for help and got some spare AA batteries.

The breadboard and Arduino setup was pretty straightforward but the one thing I had to think about was which wires to leave as an open connection later. From my perspective, it made sense to have the ground be split up and later connected using the copper tape so that’s what I did, while following the color conventions of PWR and GND.

I walked over to my door, applied some copper tape, and set up my Arduino & battery pack on the wall using some mounting tape. However, as I worked on setting up the rest of the copper tape, I realized that this was going to be very clunky– the door’s frame is not seamless with the door itself, which meant I needed something solid to bridge the gap; the copper tape was a bit too flimsy to be that connection. This is what I had so far at the time:

 

I was pretty perplexed by this. After some critical thinking, I realized that I could set up the Arduino on the door itself. And not only that– I could even take advantage of the metal piece that helps the door slide into the lock near the door handle. This was my updated plan for my switch:

When it came to positioning the wires, I thought the copper tape would suffice to keep them in place, but it just wouldn’t stick very well; I also needed to be able to micro-adjust and tune the location of my wires to make sure the system worked smoothly. I decided to use mounting tape again to make fine-tuning the position of the wires much easier on me.

I think the final result turned out to be really nice, and I’m proud of myself for adapting to the difficulties encountered during the process of mounting the project on the wall. It’s a shame I have to take it down to make future projects but I had fun making it.

 

Here’s a video of the final result in action; the green LED lights up to indicate a full lock when closed properly. I would not recommend installing this as your sole security system.

 

 

Week 9 – Reading Reflection

For Tigoe’s “Physical Computing” piece, one of the ideas that stood out to me was that pretty much everything physical computing-related has already been done before, and if it hasn’t, many aspects of your idea are inevitably going to be unoriginal– and that is okay. The main message I got from this piece was that there are endless ways of remixing the same concepts you see everywhere; your specific combination of physical computing ideas is what makes the idea yours.

Out of the motifs he listed, I thought the tilty stands/tables were the most amusing to me. The first reason why is because it reminded me of the game Super Monkey Ball, a game I loved as a kid; the second reason is because I’ve been wanting to integrate my old skateboard into our final project since the first class, and a skateboard is basically a “tilty table” you stand on. I hope to learn more physical computing skills to eventually create an interactive experience using my skateboard.

For Tigoe’s second piece, “Set the Stage, Then Shut Up and Listen”, the title immediately reminded me of the mindset I had to approach my midterm presentation with– if my game is good, I have no need to explain anything myself. Sure enough, that was roughly the mentality Tigoe was preaching here.

I’ve heard staying quiet and watching the experience can be very frustrating for many people, but my personal belief is that the frustration is an indicator of poor user experience design. My experience with watching has always been so much fun as I get to confirm my hypothesis with how the user interacts with the game.

For example, I wanted to create a situation where players would communicate “let me handle the red ball and you get the blue ball” in my midterm game’s co-op mode. The game would only start with one ball but a second one would spawn after the ball’s second bounce, which would hopefully prompt the players to coordinate whose responsibility the new ball was. When it came to the first ever playtest in-class, I was overjoyed when Yongje and Khatira said “let me handle the red ball and you get the blue ball” pretty much word-for-word.

However, I also recognize that not every system is foolproof, as there were prerequisites for understanding the game that I had not considered. Many people struggled to figure out which fingers to use to press WASD or the arrow keys, so they would use one finger from each hand to press the WASD keys separately and get really confused about why the controls felt so clunky. Obviously, WASD is a very gamer-coded instinct, so anyone who doesn’t play games wouldn’t know to put their middle finger on “W” and their ring and index fingers on the “A” and “D” keys respectively. That raises the question of whether explaining finger placement was necessary, or whether there was an unforeseen restriction on who could enjoy my project.

 

Week 8 – Reading Reflection

For Norman’s piece, I was very surprised to find it so relatable. I initially didn’t quite understand where he was going with his vast knowledge of teapots, but I later realized he was describing a feeling I had trouble verbalizing a few months ago. I had a conversation with some friends where I wanted to describe how much the visual design of something I was trying to learn affected how passionate I felt toward learning it. While my friends wrote me off as just a little picky with my peripherals, Norman’s point about “aesthetic pleasure” improving the perceived usability of something really made me feel validated. For example, just having a pretty bass guitar makes me more likely to pick it up and practice it seriously rather than leave it lying around.

The general idea of emotional design was very interesting, and it reminded me of the sound design of my midterm project. The sound design alone was motivating me through countless playtests against myself. It was because I designed the SFX to hit a certain metaphorical chord in my brain that I felt more passionate about the final result. It made me even happier to hear positive feedback on the SFX design from my fellow peers as it was honestly just designed in a way that was personally satisfying to me; other people enjoying it was definitely an unseen externality.

I had previously read about Margaret Hamilton’s story back in 2019, around the time we got the first visualization of a black hole’s event horizon. One of the parts that resonated with me most was that Hamilton ran to her computer after a late-night party to fix some faulty code she had thought about at the party. I think this shows just how much pride Hamilton had in her role on the Apollo missions, and just how brilliant she was. Her quote, “I was always imagining headlines in the newspapers, and they would point back to how it happened, and it would point back to me,” shows a prideful but grounded nature that I found very inspiring.

 

Week 6: Pong² (Midterm Project)

Overall Concept:

This is Pong². It’s called Pong² because this isn’t just a spiritual successor to the first ever video game– it’s a dimension beyond. Regular pong only moves you along one axis, so it only has one dimension; this game lets you move left and right AND back and forth, creating a very amusing experience with much more depth than the original.

Pong² features two modes. Let’s start with Versus. Versus is a continuation of what I worked on all the way back in Week 3. In Versus, you and another player play a 1v1 duel of up to 9 rounds, where the first to win 5 takes the match. It has the same win conditions as classic pong but with a lot more strategic depth: moving your paddle toward the incoming ball will “smash” or “spike” it into your opponent’s side, while moving in the same direction as the ball as you receive it will help you “control” or “trap” it.

Pong²’s second mode is its best: co-op. In Co-Op mode, you and another player work together to guard the bottom goal against multiple balls that speed up with every return. You have 3 lives and each ball that slips past you takes a life from your shared life total. You have to coordinate with your teammate to make sure you keep a watch on every ball and not just respond as fast as you can to each one, because that’s only going to take you so far (my bet is on 50 seconds).

 

Feedback: Color Scheme Consistency

On our Monday checkup, Professor Ang told me to have consistent colors for highlighting the gameplay instructions; when I use different colors to highlight I risk player confusion. I also proceeded to keep the game mode colors consistent with the menu instead of having the menu remain exclusively white and purple.

CO-OP Mode Design

I originally planned to make a player vs AI mode but realized that I really didn’t know how to make it respond similarly to how a real player would move the paddle. I had received feedback from Professor Ang to make the concept more interesting than just the usual 1v1 pong, and that’s when it hit me: what if I made 2 Player survival pong?

I had a lot of fun designing the co-op mode on paper, but I made my code a huge mess by duplicating my versus game mode javascript file instead of working from scratch. I THOUGHT that would make the process faster since I would be working with my previous framework but I ended up having to modify so many things it ended up convoluting it.

I originally had the game start with just one ball, and you wouldn’t get the second ball until the 15th hit on the top side; however, I realized I wanted players to naturally have the idea to coordinate with each other. For instance, player one might tell his teammate “I’ll take the red ball, you take the blue one” to strategize. So I decided to make the 2nd top bounce spawn the second ball and the 15th hit spawn the third, which presents the challenge to players at a good pace.

I was very proud of this for loop I built to manage the many colorful balls that you had to defend against. Between this week and last week, I also had to add a canHitPaddle() method and timer to make sure you wouldn’t accidentally double hit the same ball as you tried to move in the same direction as it.

for (let b of coopBalls) { // runs through all the balls stored in the array
    b.display();
    b.move();
    b.scoreDetection();

    if (coopBallCollidePaddle(b, player1paddle1, "Top") && b.canHitPaddle()) {
      b.lastHitTime = millis(); // records hit time for canHitPaddle()
      b.ballYspeed *= -1;
      b.yPos = player1paddle1.yPos - player1paddle1.paddleHeight / 2 - b.radius - 5;

      if (keyIsDown(83)) { // 'S' key
        paddleBounceControlSFX.play();
        b.ballYspeed *= COOPbackControlFactor;
      } else {
        paddleBounceNeutralSFX.play();
      }
    }

    if (coopBallCollidePaddle(b, player2paddle1, "Top") && b.canHitPaddle()) {
      b.lastHitTime = millis(); // records hit time for canHitPaddle()
      b.ballYspeed *= -1;
      b.yPos = player2paddle1.yPos - player2paddle1.paddleHeight / 2 - b.radius - 5; // the 5 is a micro-adjustment to prevent funky collisions

      if (keyIsDown(DOWN_ARROW) || keyIsDown(75)) { // '↓' or 'K' key
        paddleBounceControlSFX.play();
        b.ballYspeed *= COOPbackControlFactor;
      } else {
        paddleBounceNeutralSFX.play();
      }
    }
  } // closes for b loop

I think the co-op mode overall turned out very well. It was surprisingly challenging but also a good amount of fun for the friends that I asked to test it.

 

VERSUS Mode Improvements

I had the idea to add directional input with forward and backward velocity. Basically, say you’re on the bottom side: if you receive the ball while holding the down arrow, you slow it down because you’re moving in the same direction the ball is headed; but if you receive it while holding the up arrow, you add momentum to it.

I had so much fun implementing this layer of complexity to the game. A main criticism I received at first was that the extra dimension of movement for the paddles didn’t offer a significant difference to the usual pong gameplay loop. 

This was my response to that– adding a much needed strategic depth to the game. You weren’t just competing on who was better at defense now, you were seeing who had the better attack strategy.

In fact I originally didn’t plan to add this forward and backward velocity effect to the co-op mode but I loved how it felt so much that I knew I had to add it in some way. This posed a balancing challenge as well; if we were to keep the backControlFactor from the versus mode then it would prove far too easy. 

 

Sound Design

I originally wanted to implement the sounds into my versus mode but then I realized building the infrastructure for all the menus first would make my life a lot easier. So sound design ended up as the last thing on the to-do list.

I usually got sound effects from silly button websites when I needed to edit videos, so I never knew where to get properly licensed SFX for indie games. I eventually found a site called Pixabay that had a good list of sound effects.

Most of the menu button SFX and game over SFX were pretty straightforward, but I really wanted the SFX for the gameplay loop itself to sound very satisfying. My favorite ones were definitely the ball control SFX and the countdown SFX. For the ball control SFX, I was inspired by the sound design in the Japanese anime Blue Lock– which has some incredible sound effects like the bass-y effect when one of the characters controls the ball midair. I found this super nice sounding bass sound and that’s how the idea of using acoustic instrument sounds stuck. The other two ended up being a snare and a kickdrum sound.

 

Difficulties Endured

The Great Screen Changing Mystery

There was this incredibly confusing bug that I never truly figured out but the gist was that it would randomly switch screens specifically to the versus game mode SOMETIMES when I clicked into Co-Op. 

I dug and I dug through the code endlessly until I saw that I left one of the screen-related if-statements to give an else-statement that defaulted the screen to versus. This was here for earlier tests but this seriously still doesn’t make any sense to me. There was never a call to the function that would switch to versus and there was nothing that would leave the screen state invalid for any of the available screens to take over.

This was a frustrating gap in my knowledge that I never fully understood, but I did fix it.

 

Hit “K” to Let the Ball Through (and look stupid)

For some reason, the bottom paddle kept letting the ball straight through whenever I blocked it while holding “K” or the down arrow. This took hours of looking for the problem, going “I’ll deal with that later”, and then running into it again and wondering what the heck was causing it.

It was very specifically the bottom paddle, and it HAD to be related to the backward-movement input detection. Then I realized it was ONLY the bottom paddle because I had set the ball speed to minSpeed, which is always positive– and a positive Y speed sends the ball downward, straight past the bottom paddle. So all I had to do was convert it to a negative value equivalent to minSpeed.

if (keyIsDown(UP_ARROW) || keyIsDown(73)) { // '↑' or 'I' key
  paddleBounceSmashSFX.play();
  ballforVS.ballYspeed *= fwdVelocityFactor;
} else if (keyIsDown(DOWN_ARROW) || keyIsDown(75)) { // '↓' or 'K' key
  paddleBounceControlSFX.play();
  ballforVS.ballXspeed = minSpeed;
  ballforVS.ballYspeed = -minSpeed; // negative so the ball travels back up off the bottom paddle
} else {
  paddleBounceNeutralSFX.play();
}
Cursor Changing

When I was developing the buttons, I realized that they didn’t really feel clickable. I wanted to add some code to change the cursor to the hand whenever it hovered over a button. This proved surprisingly complicated for what is seemingly an easy task. The most efficient way I found was to append all the buttons of each menu to an array and check it in a for loop every time– then close off with a simple if-statement.

function updateCursorFor(buttonArray) {
  let hovering = false;
  
  for (let btn of buttonArray) {
    if (btn.isHovered(mouseX, mouseY)){
      hovering = true;
    } 
  }
  if (hovering) {
    cursor(HAND);
  } else {
    cursor(ARROW);
  }
}

Conclusion & Areas for Improvement

As for areas of improvement, I would probably make the ball-paddle collision logic more stable and consistent. It usually behaves predictably and as intended, but the ball occasionally bounces off surfaces it shouldn’t, which exposes flaws in the way I coded the collisions.
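
One way I might stabilize it is to reflect along the axis of least penetration, so side hits and top/bottom hits are distinguished explicitly. A rough sketch of that idea (hypothetical names, not my current code):

```javascript
// Given the ball and a paddle as axis-aligned rectangles {x, y, w, h},
// decide which axis to reflect on by comparing overlap depths: the
// shallower overlap is the surface that was actually hit.
function collisionAxis(ball, paddle) {
  const overlapX = Math.min(ball.x + ball.w, paddle.x + paddle.w) - Math.max(ball.x, paddle.x);
  const overlapY = Math.min(ball.y + ball.h, paddle.y + paddle.h) - Math.max(ball.y, paddle.y);
  if (overlapX <= 0 || overlapY <= 0) return null; // no collision at all
  return overlapX < overlapY ? "x" : "y";          // reflect the shallower axis
}
```

The caller would then flip ballXspeed when this returns "x" and ballYspeed when it returns "y", instead of guessing from position alone.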

However, despite all this, I am really proud of my project– both from a UI design standpoint and for the replayability of the gameplay loop. Once my friends are done with their midterms I’m gonna have a much better opportunity to spot flaws in the design.

 

 

Week 5: Pong² (Midterm Progress Report)

Concept:

Finding the concept for this project was rather difficult– I initially thought about creating a rhythm game that would use the WASD keys and the arrow keys, similar to a 2D Beat Saber where the direction you cut matters as much as your timing. I was inspired by Mandy’s project from Fall 2024, where she made a two-player dance game in which you have to input a specific pattern of keys in combination with your partner. I thought her project had a lot of charm; however, I imagined that syncing the rhythm to the onscreen inputs would prove challenging, so I scrapped that idea early on.

Then I revisited the project I made for week 3, where we had to use loops. Back then I just wanted to follow Professor Aya’s advice (when she visited on Monday of week 3) and use object-oriented programming to add manually controlled collision physics to my week 2 project. That’s how I accidentally made Pong, but this week I seriously considered turning it into a fully-functioning project.

I realized that my code was going to be very messy and a long scroll from top to bottom if I kept it all in one .js file, so I looked up how to split it across files. After creating the new .js files, I realized that setup() and draw() could only run from the main sketch.js file, so I had to work around that for the gameLogic.js file, which is just my imported week 3 code. I made new functions named initiateGame() and initiateMenu() and called them from setup() in the main sketch.js; I also had to add two lines to the HTML file so the project could access these new files.

<script src="menu.js"></script>
<script src="gameLogic.js"></script>
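
In sketch.js itself, the workaround ends up looking roughly like this (the exact body of my setup() differs, and the canvas dimensions here are hypothetical):

```javascript
// sketch.js — the only file where p5 picks up setup() and draw().
// initiateMenu() and initiateGame() are defined in menu.js and gameLogic.js,
// which index.html loads alongside this file.
function setup() {
  createCanvas(600, 600); // hypothetical dimensions
  initiateMenu();  // builds the menu state
  initiateGame();  // builds the game state
}
```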

Updating the “PING” Game

Besides the obvious requirements to add a menu screen and a restart button, there were plenty of personal touches I wanted to add to what I made two weeks ago.

The first was to implement a more visually appealing score-tracking system. Last time the game repeated continuously, so I made a very minimalistic, number-based scoring system to fit the rest of the minimalist aesthetic. Since I was now adding a set number of rounds, I wanted a more interesting way of representing each point: a rectangle in the middle that flicks slightly upwards or downwards based on which side took the point (kind of like a light switch).
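
The flick itself boils down to a tiny mapping from the last scorer to a rotation angle for that centre rectangle. A sketch of the idea (the names and the tilt value here are hypothetical):

```javascript
// "Light switch" score indicator: map the last scorer to a small rotation
// angle (in radians) for the centre rectangle.
function switchAngle(lastScorer) {
  const TILT = 0.15; // slight flick, tuned by eye
  if (lastScorer === "top") return -TILT;   // flick upwards
  if (lastScorer === "bottom") return TILT; // flick downwards
  return 0; // no point scored yet
}
```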

The next idea was to add an options menu for changing the in-game volume mixer and maybe even the keybinds.

Since the game now also needed to reset without restarting the whole sketch, I added an ESC function that backs out of the game to the menu; it also resets the scores so a new game starts the next time the user clicks the button to enter the game.

//RESET SCORE
function resetGame(){ //resets the score
  roundNumber = 0;
  topScore = 0;
  topCounter = [];
  bottomScore = 0;
  bottomCounter = [];
}
...
function menuMousePressed(){
  twoPlayerButton.handleClick(mouseX, mouseY);
  resetGame();
}

I also made each button its own class instance, passing a function as a parameter to call when it’s clicked. I have only tested one button so far (the twoPlayerButton that leads you into the normal mode), but it works great and I’ve “scaffolded” an area for more buttons to be added the same way.

allMenuButtons.push(
  twoPlayerButton = new button(
    tempButton, //image 
    width/2 + 120, //xPos
    height/2, //yPos
    tempButton.width, //sets to uploaded file dimensions
    tempButton.height, 
    switchToGame //which function to call if clicked?
  )
);
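
For reference, the class behind this has roughly the following shape — a simplified sketch, not my exact code (mine is named lowercase button and draws an image in display()):

```javascript
// Simplified button class: stores a position, size, and a callback to
// invoke when a click lands inside its bounds (centre-anchored).
class Button {
  constructor(img, x, y, w, h, onClick) {
    this.img = img;
    this.x = x; // centre x
    this.y = y; // centre y
    this.w = w;
    this.h = h;
    this.onClick = onClick;
  }
  isHovered(mx, my) {
    return mx > this.x - this.w / 2 && mx < this.x + this.w / 2 &&
           my > this.y - this.h / 2 && my < this.y + this.h / 2;
  }
  handleClick(mx, my) {
    if (this.isHovered(mx, my)) this.onClick(); // fire the passed-in function
  }
}
```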

User Interface Planning

To plan out the UI, I quickly made a Canva file that matched the dimensions of my project and sketched what I wanted the menu to look like. I’m going for a rather minimalist look to match the rest of the game.

This is also how I came up with a better name for the game: Pong²

It’s not quite Pong 2, but since the paddles are no longer restricted to one dimension along each edge and now have true 2D movement, I wanted to call it something representative of that.

For the font, I chose Gotham for its geometric feel. Since p5js doesn’t have access to Gotham, I downloaded an .otf file online and placed it in my sketch folder.

For the final version of the sketch, I want to add a faint bouncing ball in the background to make the menu screen feel more dynamic and alive.

A detail I was quite proud of was setting up a hover checker that turns the cursor into a hand whenever it hovers over something clickable.

let hovering = false; // assume not hovering until a button says otherwise

for (let btn of allMenuButtons) { 
  btn.display(); //displays the button
  
  if (btn.isHovered(mouseX, mouseY)) {
    hovering = true; //check if hovering any buttons
  }
}

if (hovering) {
  cursor(HAND);
} else {
  cursor(ARROW);
}

Identify the Most Frightening Aspect:

The most frightening part of this project is most certainly the one-player mode I want to make, where you play against an AI. Theoretically it’s just some simple math to have it predict the ball’s trajectory, but I imagine it would take considerable effort to figure out how that would actually work.
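
The “simple math” I have in mind is projecting the ball forward to the AI paddle’s edge and folding the trajectory at the side walls, like a triangle wave. A sketch of that prediction under the assumption of perfectly elastic wall bounces (all names hypothetical):

```javascript
// Predict the ball's x when it reaches targetY, reflecting off the side
// walls (x in [0, fieldWidth]). The reflection is a triangle-wave "fold"
// of the unbounded straight-line travel.
function predictBallX(ball, targetY, fieldWidth) {
  const steps = (targetY - ball.y) / ball.ySpeed; // frames until arrival
  let rawX = ball.x + ball.xSpeed * steps;        // ignore the walls first
  const period = 2 * fieldWidth;
  rawX = ((rawX % period) + period) % period;     // wrap into [0, 2W)
  return rawX > fieldWidth ? period - rawX : rawX; // fold the reflection back
}
```

The AI paddle would then just ease its x toward this predicted value, with some error or speed cap so it stays beatable.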

I might drop this aspect altogether if it’s too challenging to make before the deadline but I really would like to make it work.

 

 

Week 5 – Reading Reflection

The moment I started reading the article, I immediately recognized Myron Krueger’s Videoplace from my first week in Understanding IM. I remember it because Professor Shiloh explained that Krueger was actually manually adjusting the piece in the background while making it appear to audiences like an automatic feedback loop. At the time, only computer specialists and engineers had access to complex computer vision technologies; this story is a reminder that the development tools we now take for granted have only become accessible to most people in the past decade.

How does computer vision differ from human vision?
In the simplest sense, I believe computer vision lacks perspective and context. What humans lack in raw processing speed, they make up for with a flexible, innate perception of what is in front of them. They ask questions and make comparisons, and not necessarily to the objectively closest match.

When it comes to perspective in AI– artificial intelligence didn’t grow up with an innate curiosity about the world, no matter how many “Hello, World!”s it says. A human can look at a boy and a girl who always hang out together and assume romantic context, but an AI wouldn’t know that innately; that’s probably why the trope of AI learning human emotions by watching our movies and media is such a common one in fiction.

Techniques to help the computer see / track what we’re interested in?
I believe the article mentions using bright lighting, or at least high-contrast backgrounds. However, I’m sure image training is also very important in today’s computer vision.
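
That high-contrast trick boils down to brightness thresholding: compare each pixel’s brightness to a cutoff and keep a binary mask of the subject. A minimal sketch over a grayscale array, not tied to any particular library:

```javascript
// Brightness thresholding: turn a grayscale frame (values 0–255) into a
// binary mask marking pixels brighter than the cutoff — the basic trick
// behind tracking a well-lit subject against a dark background.
function thresholdMask(grayPixels, cutoff) {
  return grayPixels.map(v => (v > cutoff ? 1 : 0));
}
```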

Effect of tracking & surveillance in interactive art
I remember when I got my Xbox 360 as a kid and got the Kinect system bundled alongside it. It was such a revolutionary technology back then and now we can recreate the same thing on the software side with just a webcam on p5js! That is incredibly impressive to me.

I never even considered computer vision in surveillance until I read the piece on Suicide Box, which recorded real tragedies of people taking their lives at the Golden Gate Bridge. What surprised me is that port authorities counted thirteen in the initial hundred days of deployment, whereas the Suicide Box with its computer vision recorded seventeen. That’s four human lives tragically lost and possibly forgotten.