Week 5: Midterm Progress

Concept

For my midterm project, I’ve decided to make a Star Wars-themed game, where the user plays as Luke Skywalker. The aim of the game is to defeat the Stormtroopers on the spaceship. The aesthetic of my game will look similar to the image below; I’m going for a simple, pixel-art style.

Gangs wars pixel

I got the inspiration for my game’s mechanics from Street Fighter, but instead of two players fighting against each other, I’ll make it a one-player game fighting against NPC enemies.

I’m thinking of making my game look something like this:

Street Fighter II The World Warrior! / KEN Longplay / 4K HD 60 FPS - YouTube

Design and User Interaction

  • The player (Luke) will be able to move and attack.
  • The Stormtroopers will spawn on the opposite side of the screen and run towards Luke.
  • The game will have a time limit: the player either survives until their lives/HP run out, or tries to maximize points before time expires.

Challenges and Risks

The most complex part is probably figuring out how the Stormtroopers attack. I’ll likely program them in a single class that spawns all of the enemies and drives their attack patterns. I’m also uncertain about the scoring: I’m not sure whether to go for an HP/lives system, or a points system where the player gains points for killing each enemy. Either way, I’ll include a time limit.

To minimize the risk, I’m planning to divide the player and NPCs into two separate classes. I’ll also experiment with a simple collision detection test to confirm that Luke’s attacks properly register against enemies.
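To make the collision test concrete, here is a minimal bounding-box overlap check in plain JavaScript (the object shapes and names here are my own assumptions for illustration, not the project’s actual code):

```javascript
// Axis-aligned bounding-box overlap test (assumed rectangle fields: x, y, w, h).
function hitsEnemy(attack, enemy) {
  return (
    attack.x < enemy.x + enemy.w &&
    attack.x + attack.w > enemy.x &&
    attack.y < enemy.y + enemy.h &&
    attack.y + attack.h > enemy.y
  );
}

// Example: Luke's attack box overlapping a Stormtrooper's box.
const attackBox = { x: 100, y: 50, w: 30, h: 20 };
const trooperBox = { x: 120, y: 55, w: 25, h: 40 };
console.log(hitsEnemy(attackBox, trooperBox)); // true: the boxes overlap
```

The same check works unchanged in a p5.js sketch, since it only compares numbers.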

 

This is what my sketch looks like so far:

Midterm Project Progress



 

Concept

For my midterm, I wanted to make a project that isn’t just fun, but also means something to me, feels useful to create, and reflects my interests: combining women’s empowerment, sports, and music. Because this project allows so much freedom, my mind kept jumping from idea to idea. I finally decided on a concept that felt meaningful and achievable: an interactive story experience where users guide a female athlete through multiple sports — football, basketball, and martial arts — while overcoming challenges in the form of negative comments from people, culminating in a celebratory dance that reflects the user’s gameplay.

I also wanted to incorporate music, one of my biggest passions, into the gameplay. Each action the user takes (scoring a goal, making a basket, or landing a punch) will add a layer of sound to a final anthem, letting players create a rhythm-based celebration that reflects their journey and success.
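As a rough illustration of that mechanic (the action names and sound labels below are hypothetical, not the actual project code), each successful action could push a sound layer onto a list that later plays back together as the anthem:

```javascript
// Sketch of the "each action adds a sound layer" idea: successful actions
// append beat layers that later play together as the final anthem.
const anthemLayers = [];

function onAction(action) {
  const layerFor = {
    goal: "percussion",   // scoring a goal
    basket: "clap",       // making a basket
    punch: "kick-drum",   // landing a punch
  };
  if (action in layerFor) anthemLayers.push(layerFor[action]);
}

onAction("goal");
onAction("basket");
onAction("punch");
console.log(anthemLayers); // ["percussion", "clap", "kick-drum"]
```

In the real sketch each string would map to a loaded sound that loops during the celebration stage.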

To make the experience authentic and inspiring, I decided to use real female athletes as the characters for each stage: Sam Kerr for football, Breanna Stewart for basketball, and Ronda Rousey, the first female UFC champion, for martial arts. Each athlete has a quick introduction before their stage to give context and highlight their achievements and the obstacles they’ve overcome.

My project aims to:

  • Educate users about the basics of each sport.
  • Empower women by breaking stereotypes and highlighting achievements.
  • Engage users through rhythm-based gameplay that integrates music and beats.
  • Inspire users to create a unique anthem reflecting the character’s journey.

Design

I want the design to be interactive and bold. The journey starts with a fullscreen instruction screen, and each stage is preceded by a quick introduction of the athlete, with a short text and a character.

The gameplay is divided into stages:

  1. Football: Users dribble and shoot as Sam Kerr. Negative comments appear as barriers, which shatter when the user scores, adding beats to the anthem.
  2. Basketball: Users control Breanna Stewart, shooting and dribbling while shattering barriers and adding beats to the music.
  3. Martial Arts: Users perform punches and kicks with Ronda Rousey, breaking barriers that add more beats to the anthem.
  4. Celebration / Dance: The beats from previous stages combine into a unique anthem, and the character performs dance moves that sync with the rhythm, celebrating the journey.

Visually, I’m keeping the design stylized and clean, using shapes for interactive objects. Sounds, like cheering, percussion, and piano, respond dynamically to user actions, making the experience engaging.

Frightening / Challenging Aspects

There are a few parts of this project that I’m worried might be tricky:

  • Making the rhythm-based gameplay work: I’m unsure about getting the sounds from scoring, punching, or breaking barriers to line up and feel smooth.
  • Creating each sport stage: Each sport has different moves and controls, and it might be hard to make them all work well.
  • Drawing the characters and barriers: Making all the athletes and objects using shapes in p5.js could take a lot of time and get complicated.
  • Combining all the beats into a final anthem: I’m unsure if the sounds from football, basketball, and martial arts will come together nicely to make a fun and satisfying rhythm at the end.

Risk Prevention

To manage the different sports stages, I will use separate layers so only the relevant athlete, objects, and barriers show up at the right time. For example, when the user moves from football to basketball, the football elements disappear and the basketball stage appears. Designing all the characters and barriers from scratch in p5.js could take a long time, so I started making them on separate canvases to work faster and keep things organized before combining them into the full project. I have also started on the intro page. Making interactions feel natural is tricky as well, because I need to detect clicks, key presses, or movements on objects made of multiple shapes, so I’ve started experimenting with this early to make sure the gameplay will feel smooth.
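The stage-layer idea could be sketched roughly like this in plain JavaScript (stage names, elements, and the stage order are assumptions for illustration):

```javascript
// Sketch of stage switching: each stage owns its elements, and only the
// active stage's elements are drawn/updated at any time.
const stages = {
  football:    { elements: ["SamKerr", "ball", "barriers"] },
  basketball:  { elements: ["BreannaStewart", "hoop", "barriers"] },
  martialArts: { elements: ["RondaRousey", "barriers"] },
};

let currentStage = "football";

function advanceStage() {
  const order = ["football", "basketball", "martialArts", "celebration"];
  currentStage = order[order.indexOf(currentStage) + 1];
}

function activeElements() {
  // Only the current stage's elements are visible; everything else is skipped.
  return currentStage in stages ? stages[currentStage].elements : [];
}

advanceStage(); // football -> basketball
console.log(currentStage);     // "basketball"
console.log(activeElements()); // ["BreannaStewart", "hoop", "barriers"]
```

In p5.js, draw() would only render activeElements(), which is what makes the football pieces disappear when the basketball stage begins.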

Week 5 – Midterm Progress and Reading Response

Initial Concept:

The core concept of my project is to create a game where a panda chef stacks pancakes on its tray while pancakes fall from random positions in the sky. The gameplay is intentionally simple—pancakes fall from the sky, and the player moves the panda left and right to catch them before they hit the ground. I wanted the concept to be both whimsical and approachable, something that could appeal to all ages while still having the potential for engaging mechanics like timers, scoring, and fun visuals.

Design

The panda sprite sheet is animated using a frame-based system that cycles through images depending on movement, while background elements like clouds and grass are generated with simple loops and p5 shapes for efficiency. Pancakes are handled as objects using a dedicated class, which keeps the code modular and easy to expand. I also separated core functions—like drawing the welcome screen, updating the game state, and spawning pancakes—so the program remains organized and readable. This approach makes the design not only playful on-screen but also manageable under the hood.
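As a hedged sketch of the pancake-as-object idea (the class and field names here are my own, not the project’s actual code), a falling pancake might look like:

```javascript
// Minimal sketch of a falling pancake handled as an object.
class Pancake {
  constructor(x, speed) {
    this.x = x;       // horizontal spawn position (random in the real game)
    this.y = 0;       // starts at the top of the sky
    this.speed = speed;
  }
  update() {
    this.y += this.speed; // fall a little each frame
  }
  caughtBy(trayX, trayY, trayWidth) {
    // Caught when the pancake reaches tray height within the tray's span.
    return this.y >= trayY && Math.abs(this.x - trayX) <= trayWidth / 2;
  }
}

const p = new Pancake(100, 5);
for (let i = 0; i < 40; i++) p.update(); // simulate 40 frames of falling
console.log(p.y);                        // 200
console.log(p.caughtBy(110, 200, 40));   // true: within 20px of tray center
```

Keeping update() and caughtBy() on the class is what keeps the main sketch’s draw loop short: it just updates every pancake and checks each against the panda’s tray.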

Potential Struggles:

The most frightening part of the project was integrating sounds into the game. I know that audio is essential to making the experience immersive, but I was unsure about the technical steps required to implement it smoothly, especially since I’ve had issues in the past with overlapping sounds and starting and stopping them accurately. Questions like when to trigger sounds, how to loop background music, or how to balance audio levels without them being distracting added to the challenge.

How I Plan to Tackle This:

To reduce this risk, I turned to existing resources and examples, especially by watching tutorials and breakdowns on YouTube. Seeing other creators demonstrate how to load and trigger sounds in p5.js gave me both practical code snippets and creative inspiration. By learning step by step through videos, I am hoping to be able to gradually integrate audio without it feeling overwhelming.

Embedded Sketch:

Reading Response:

Computer vision, I would say, is different from how humans see. We naturally understand depth, context, and meaning, but computers just see a grid of pixels with no built-in knowledge or ability to infer and make connections like humans do. They need strict rules, so even small changes in lighting or background can throw a computer off.

To help computers “see” better, we often set up the environment in their favor, meaning that we cater to their capabilities. Simple techniques like frame differencing (spotting motion), background subtraction (comparing to an empty scene), and brightness thresholding (using contrast) go a long way. Artists and designers also use tricks like infrared light, reflective markers, or special camera lenses to make tracking more reliable.
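As a toy illustration of frame differencing (the grayscale pixel arrays here stand in for real camera frames), motion can be found by thresholding the per-pixel brightness change between two frames:

```javascript
// Toy frame differencing on grayscale pixel arrays: motion shows up as
// pixels whose brightness changed more than a threshold between frames.
function frameDifference(prev, curr, threshold) {
  return curr.map((v, i) => (Math.abs(v - prev[i]) > threshold ? 1 : 0));
}

const prevFrame = [10, 10, 200, 200];
const currFrame = [12, 10, 40, 200]; // only the third pixel changed a lot
console.log(frameDifference(prevFrame, currFrame, 30)); // [0, 0, 1, 0]
```

Small lighting flickers fall under the threshold, which is exactly why these systems still need a controlled environment to be reliable.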

What’s interesting in art is how this power of tracking plays out. Some projects use computer vision to make playful, interactive experiences where people’s bodies become the controller. Others use it to critique surveillance, showing how uncomfortable or invasive constant tracking can be. So, in interactive art, computer vision can both entertain and provoke — it depends on how it’s used.

Week 5 Reading Reflection

When I think about how computer vision differs from human vision, what stands out most is how little meaning computers can extract on their own. To a human, even a blurry image is packed with context, emotion, and symbolism. To a computer, it’s just a grid of pixel values that need to be processed before anything useful can be recognized. This is why the article highlights techniques like frame differencing, background subtraction, and brightness thresholding, which are methods that help a computer separate what is important from what is in the background.

To make these systems work, we often need to modify the image so the computer can interpret it. That might mean fixing the lighting, using high-contrast markers, or limiting the scope of what the computer is supposed to track. In a way, we design the world to fit the algorithm rather than expecting the algorithm to fully match the complexity of the world.

In interactive media, I think this capacity to interpret and track movement opens up exciting new directions for creativity. Computer vision gives artists a way to design works that respond directly to a person’s gestures, which is actually something I used in a project for another IM course. It transforms the audience from passive viewers into active participants, making the artwork something dynamic and alive. By combining human imagination with the computer’s ability to detect patterns, interactive art can become more immersive and responsive than ever before.

Barbie’s Dream House – Midterm Progress

Concept + Interaction: 
I’m making a small interactive Barbie Dream House experience with mini-games in each room. You start outside the house, and it tells you to “click the door”. When you do, a doorbell sound plays and you go inside. The interior is a divided view with hotspots for rooms: closet, kitchen, bedroom, and living room.
Each room has a tiny activity:
  • Closet: wardrobe selector (browse outfits with ← / → and hit ✓ to confirm).
  • Kitchen: cupcake maker (choose base, pick frosting color, “bake” in an oven).
  • Bedroom: open a book to see a short quote or page.
  • Living room: click the TV to change pictures (channel changer).

Code design:
Each interactive room is its own class/file.
Main sketch holds a simple state machine: state = “outside” | “inside” | “closet” | “kitchen” | …. When state === “closet” the sketch delegates to wardrobe.draw() and forwards clicks to wardrobe.handleClick().
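A minimal sketch of that delegation pattern (the room objects and return values here are placeholders, not the real sketch code):

```javascript
// Sketch of the state machine: the main sketch delegates drawing and
// clicks to whichever room object matches the current state.
const rooms = {
  closet:  { draw: () => "drawing closet",  handleClick: () => "closet clicked" },
  kitchen: { draw: () => "drawing kitchen", handleClick: () => "kitchen clicked" },
};

let state = "outside";

function drawCurrent() {
  if (state in rooms) return rooms[state].draw(); // delegate to the room
  return "drawing " + state;                      // outside/inside views
}

state = "closet";
console.log(drawCurrent()); // "drawing closet"
```

The real mousePressed() would forward to rooms[state].handleClick() the same way, so each room only ever worries about its own inputs.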

I made a Wardrobe class that:

  •  loads outfit sprite-pages and slices them into frames,
  • centers and scales the current outfit into a fixed box,
  • draws left/right arrow buttons and a confirm (✓) button,
  • exposes handleClick() for arrows, page switches, and confirm.

Other rooms (Kitchen, Bedroom, LivingRoom) follow the same class pattern so swapping and testing is easy.
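For the sprite-page slicing, the frame rectangles can be computed from the sheet dimensions. This is a sketch under the assumption that frames sit in a uniform grid (in p5.js, the resulting rect could be passed to page.get(x, y, w, h)):

```javascript
// Compute the source rectangle for frame N of a sprite-page laid out
// as a uniform grid (cols x rows are assumptions about the sheet).
function frameRect(sheetW, sheetH, cols, rows, frameIndex) {
  const w = sheetW / cols;
  const h = sheetH / rows;
  return {
    x: (frameIndex % cols) * w,          // column position
    y: Math.floor(frameIndex / cols) * h, // row position
    w,
    h,
  };
}

// A 300x200 page with 3 columns and 2 rows of outfit frames:
console.log(frameRect(300, 200, 3, 2, 4)); // { x: 100, y: 100, w: 100, h: 100 }
```

Deriving every rect from the same sheet dimensions is the same idea as the centralized this.box coordinates: one source of truth, so crops never drift out of alignment.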

Here is my current progress:

https://editor.p5js.org/rma9603/full/5s96PtOnM

 

Biggest risk & what I did about it:
The scariest part was making outfit switching reliable: slicing sprite-sheets, handling transparency, and aligning click hitboxes. If that failed the closet would be unusable.

How I reduced the risk:
I wrote a tiny test that loads one outfit sprite-page and uses page.get() to draw each frame side-by-side so I could verify cropping and transparency immediately. I exported/uploaded transparent PNGs (no white bg) and re-tested them.

Centralized coordinates: arrows, confirm button, and hitboxes are all computed from the same this.box values, so visuals and clicks always line up.

Result: the sprite-sheet method works. I can slice pages reliably and the outfit switching is stable, so I’m confident I can implement the cupcake and other rooms in the same modular way.

Next steps:

  • Finish the kitchen cupcake game (choose base, frosting, bake timer).
  • Create the bedroom interaction.
  • Create the living room interaction.
  • Add a start/instructions screen and a “new session” reset button so people can restart without refreshing.
  • Add hover feedback and polish (shadows, small tweaks).
  • Make it full screen.

Week 5: Midterm Project’s Progress

Main Concept and User Interaction:

Super Mario has been the greatest game I’ve ever played, and I have so many good memories associated with it. I used to play it with my whole family after my dad came back from work. As a child, I didn’t realize how tired he was after finishing his job, and I kept begging him to play with me. Even though he was clearly exhausted, he always said yes, and I was absolutely happy and excited every night. Super Mario is thus a fundamental part of my childhood. For this midterm project, I wanted to bring back that childhood memory by making a simple version of Super Mario Bros 2, a Wii game I used to play with my family.

 

Most Uncertain Part of My Midterm Project:

The most uncertain part of my midterm project is how to actually make Super Mario run and jump using the sprite concepts that we learned in class. Since I had no experience making animations where a character responds to key presses, I started the project by completing this feature. I first went to a website called “The Spriters Resource” to get sprite images of Super Mario and then imported them into my project. But since the sprite images had a green background, I had to remove it using another website, “Remove Background,” to make the background transparent so that Mario matched the canvas background.

 

Code Snippet:

As you can see, this is the code I wrote to animate Mario running and jumping. I used the same logic that we discussed in class. If a user presses the right arrow, Mario runs to the right. If a user presses the up arrow, Mario jumps vertically. I used modulo to make sure the running animation loops back to the first frame and doesn’t go over the limit.

let isRunning = false;
let isJumping = !onGround;

  //Make mario run towards right 
  if (keyIsDown(RIGHT_ARROW)) {
    x += speed;
    isRight = true;
    isRunning = true;
  }
  //Make mario run towards left
  if (keyIsDown(LEFT_ARROW)) {
    x -= speed;
    isRight = false;
    isRunning = true;
  }
  
  //make mario always stay inside the canvas 
  x = constrain(x, 0, width); 

  //animation for running
  if (isRunning && onGround) {
    //every 6 frame, move to the next sprite 
    if (frameCount % 6 === 0) {
      index = (index + 1) % sprites[1].length; //use modulo to loop through the same animation
    }
    drawMario(sprites[1][index]); //draw running Mario
  }
  
  //Animation for jumping 
  else if (isJumping) {
    drawMario(sprites[3][0]); //sprite for jumping
  }
  
  //sprite for idle 
  else {
    drawMario(sprites[0][0]);
  }

 

Sketch:

 

Reflections and Future Improvements:

It was easier than I expected to implement the animation of Mario running and jumping, because I only had to apply the concepts we learned in class, such as using nested arrays to store each sprite image and if-else statements to trigger certain actions when a certain key is pressed. However, I still need to fix one critical issue: when Mario runs too fast, the background image glitches. I am still not sure how to solve this, so I would like to ask my professor during the next class. To make it more like Super Mario Bros 2, I need to add obstacles such as blocks, pitfalls, Koopa Paratroopas, Bullet Bills, and Goombas. I would also like to add a score-tracking system where the longer you play, the higher your score gets, to make the game more interesting and fun.

Week 5 – Midterm Draft 1 Progress

Concept + Design

For this project, I wanted to go for something mystical yet down to earth. This is why I wanted to integrate “ordinary” objects such as trees and flowers, and combine them with magical designs, such as the fantasy-like structures in the background and blue hues.

Although I am still in the process of deciding what the interaction will be for the viewers to engage with this project, I have narrowed my main ideas to two.

  1. The roses will display different colors and musical tracks whenever the viewer clicks on them. For example, one rose will show a different color after the viewer clicks on it. Another will change the music every time it is clicked. And the third might give an inspirational message after every click. This will allow an entertaining and colorful interaction to match the tone of the sketch.
  2. Another option, while less entertaining, will focus on the visual aesthetics, hence, the interaction will rely on a function that resembles a magnifying glass so viewers can take a closer look at all the objects displayed, and depending on where in the canvas they move the mouse, they will be able to listen to a different musical track.
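For the first option, here is a hedged sketch of the click-detection idea (rose positions and fields are made up for illustration): a click counts as hitting a rose when it lands within the rose’s radius.

```javascript
// Sketch of click-a-rose detection: a click hits a rose when the click
// point is within the rose's radius (positions here are placeholders).
const roses = [
  { x: 100, y: 300, r: 25, colorIndex: 0 },
  { x: 200, y: 320, r: 25, colorIndex: 0 },
];

function roseAt(mx, my) {
  return roses.find((rose) => Math.hypot(mx - rose.x, my - rose.y) <= rose.r);
}

const clicked = roseAt(110, 310); // about 14px from the first rose's center
if (clicked) clicked.colorIndex += 1; // cycle to the next color/track
console.log(clicked === roses[0], roses[0].colorIndex); // true 1
```

In p5.js the same check would run inside mousePressed() with mouseX and mouseY, and the colorIndex would pick which color or musical track plays next.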
Most Frightening/Complex Part

Although I haven’t yet written code that detects or prevents specific problems or glitches, one thing I did to keep the image in my canvas from having issues is use the preload() function. It ensures the image loads fully before it is displayed.

// Edited Image PNG 
let palace;
function preload(){
   palace = loadImage('palace.PNG');
}

Another function I struggled with but eventually got working was the custom shape, which I initially made in a separate sketch to define the shape of the tree. Once I realized I would need to move this shape in my final sketch, I wrapped it in a drawCustomShape() function and used translate() inside it, so I can draw the shape at any desired position on the canvas.

function draw() {
  background(51);

  drawCustomShape(450, 230);
  drawCustomShape(-50, 230);
}


// function of custom shape (tree) + translation of position at x and y

function drawCustomShape(x, y) {
  push();
  translate(x, y);

  // trunk
  noStroke();
  fill("rgb(235,233,233)");
  beginShape();
  vertex(140, 95);
  vertex(140, 250);
  vertex(100, 280);
  vertex(225, 280);
  vertex(190, 250);
  vertex(190, 95);
  endShape(CLOSE);

  // foliage
  fill("rgb(32,32,228)");
  ellipse(120, 90, 170, 120);
  ellipse(180, 98, 130, 110);
  ellipse(150, 45, 140, 160);
  ellipse(200, 55, 150, 120);

  pop();
  // end of code for custom shape
}

 

Embedded sketch

 

Reflection and ideas for future work or improvements

My next steps are to find a way to display text to introduce the context and instructions for the interaction. I also plan to decide what said interaction will be based on the feedback and begin to write the codes necessary in a separate sketch before applying it to the final one. In the meantime, I will also search for the music pieces I will be using and edit them accordingly.

Furthermore, I need to plan how to include the option to start a new session without restarting the sketch. Any kind of feedback will be highly appreciated.

Week 5: Reading Response

I think both computer vision and human vision have benefits and downsides in terms of how they comprehend the reality and meaning of the world. Computer vision relies entirely on the quality of its algorithms and on environmental conditions, which often makes it fail to grasp the meaning of what it records, while human vision lets us instinctively comprehend the world through our eyes. When it comes to emotions, computer vision falls even shorter. As mentioned in the reading, emotion recognition systems turn very subjective, complex, personal features into objective data, which I don’t think is ethically right, because we are essentially labeling people’s emotions in a way that does not accurately represent them.

However, computer vision can track everything in real time for as long as it has power, whereas we humans cannot keep our eyes open and watch everything. Given the right conditions and correctly implemented algorithms, computer vision can also depict realities of the world we might otherwise miss. For example, Suicide Box revealed a reality of suicide that society was uncomfortable confronting. In this sense, computer vision is very effective at maintaining transparency.

 

To enhance the quality of computer vision, we can control the environment of the physical world. For example, we can change the brightness and lighting of the background or change the color of objects so that the target is spotlighted, making it easier for computers to track.

 

In terms of the future of computer vision, I think more and more artists are going to incorporate computer vision into their interactive art as people are getting more interested in human and computer interaction, such as VR, AR, XR, and robotics. teamLab would be a great example. They exhibit artwork that allows people to interact with it. Specifically, in Sketch Aquarium, kids draw fish and then the fish appear on the screen so people can feed them or make them swim together. But I believe there are also ethical implications of using computer vision, such as tracking people’s personal data without consent and digital sexual harassment. Therefore, we should establish standards to make sure that computer vision tracking systems are used in appropriate ways.

Week 5: Reading Reflection

Our vision as people comes with a few prerequisites for forming a clear image in our mind; for example, an object needs to be close enough, with sufficient lighting, for us to identify what it is. Computer vision differs from ours in having a much larger set of these prerequisites: the conditions need to be tuned to the abilities of the algorithm in place for accurate identification. Each system, built for a different purpose, needs its own unique conditions and pre-existing information to do its job, while our vision covers all ends, such as motion detection and object detection, without needing different conditions. This is why it can be a difficult task to integrate computer vision into interactive media: it is not only about picking the right system and mechanism; you also need to set up the right circumstances for the system to successfully bring your vision to life.

However, even with the complications that come with it, the use of computer vision opens up a new realm of possibilities for interactive art. It takes interaction with a user to another level, since tracking the user’s movement adds another layer to the ways the art can ‘listen, think and speak’ to the participant. This is especially true given the increased accessibility of open-source code and of the technologies needed to navigate the world of computer vision; it is now easier than ever to use them to explore themes in more depth. This does, though, raise the question that many of the works mentioned in the article tackle: surveillance. These technologies give interactive art pieces eyes; the works now partake in observing and tracking. This is most concerning for works installed in public spaces, where the technology detects the movement of people who did not choose to be part of the work. That is why it is important to approach such technologies with care, to make the most of them in stepping up the level of interactivity in art works.

Week 5 Reading Reflection

When I compare computer vision to human vision, I realize how differently they work. For me, vision is immediate and layered with meaning, but for a computer, an image is nothing more than an array of numbers, vectors, and pixels. This reading reminded me of the summer I worked on an image understanding project with our faculty in the cybersecurity domain. I saw firsthand how computers don’t “see” an image the way I do; they only interpret numerical values and computations. Because of this, it’s even possible to generate two different sets of numbers that look the same to a computer but, when drawn out, look nothing alike. That gap between what humans and computers perceive really fascinated me.

To help computers track what we want, techniques like frame differencing, background subtraction, and brightness thresholding come into play. Through this reading, I learned how important the setup is: lighting, camera placement, and reflective markers can make or break the system’s accuracy. It’s almost like computers need us to simplify reality so they can process it.

Where I feel conflicted is in the area of surveillance. In my cybersecurity project, I came to appreciate how powerful these systems can be, but also how much risk comes with that power. Since computers only “understand” numbers, they can be tricked, but they can also be used to monitor, record, and categorise people in ways that feel invasive. In interactive art, I see these tools as playful, engaging, and thought-provoking. But outside of art, they can become a threat. Surveillance powered by computer vision has the capacity to invade privacy, consolidate control, and even manipulate what we think of as reality. That tension makes me think harder about not just what computers can see, but what we allow them to see.