MIDTERM PROGRESS

Superman Saves (Enhanced Game)

Concept of the Project

“Superman Saves” is an interactive game inspired by my previous project, which focused on simple character movement and rescue mechanics. In this version, I aimed to elevate the project by adding dynamic challenges, such as time limits, obstacles, and a progressive difficulty system. The objective is to control Superman as he navigates the sky, rescuing individuals while avoiding clouds and birds. The game becomes increasingly difficult with each successful rescue, introducing faster obstacles and reducing the time available to complete the rescue.

The concept is rooted in creating an engaging, responsive game environment that tests the player’s reflexes and strategic thinking. By introducing new features like lives, levels, and a timer, I’ve created a more immersive experience compared to the original version, which was relatively straightforward in terms of gameplay.

 

How the Project Works

The game begins with Superman stationed at the bottom of the screen, and a person randomly placed near the bottom as well, awaiting rescue. Using arrow keys, the player can move Superman to navigate the sky, avoid clouds and birds, and reach the person. Upon reaching the person, Superman flies upwards, carrying them to the top of the screen to complete the rescue.

A notable feature of the game is its dynamic difficulty adjustment. Each successful rescue increases the game’s difficulty by speeding up the clouds and bird movements, which adds a sense of progression. Additionally, the inclusion of a timer introduces a layer of urgency, forcing players to make quick decisions. I’m particularly proud of how the game manages the timer, lives system, and level progression seamlessly, as these were complex components to implement but significantly enhanced the overall experience.

The code uses object-oriented programming principles to manage the background stars, obstacles, and gameplay mechanics. I took advantage of arrays to efficiently handle the stars’ animations and the positioning of various game elements.
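As an illustration only (function names and numbers are hypothetical, not the project's actual code), the speed-up and countdown could be structured like this, with millis() supplying nowMs in p5.js:

```javascript
// Hypothetical sketch of the difficulty/timer logic described above --
// not the actual project code. In p5.js, millis() would supply nowMs.

// Obstacle speed grows with each rescue, capped so high levels stay playable.
function obstacleSpeed(level, base = 2, perLevel = 0.5, max = 8) {
  return Math.min(base + (level - 1) * perLevel, max);
}

// Time allowed for a rescue shrinks with level, but never below a floor.
function rescueTimeLimit(level, base = 20000, perLevel = 1500, min = 6000) {
  return Math.max(base - (level - 1) * perLevel, min);
}

// Remaining time on the countdown, given when the level started.
function timeLeft(level, levelStartMs, nowMs) {
  return Math.max(rescueTimeLimit(level) - (nowMs - levelStartMs), 0);
}
```

Clamping both curves is what keeps the later levels challenging without becoming impossible.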

 

Areas for Improvement and Challenges

One area that could be improved is the game’s overall visual design. Although the current visual elements (e.g., clouds, birds, and Superman) are functional, they could benefit from more detailed and polished artwork. Additionally, I would like to enhance the sound effects in the future, adding background music and sound cues for when Superman successfully completes a rescue or collides with an obstacle.

I encountered a few challenges during development, particularly in managing the game’s timer and ensuring that collisions between Superman and obstacles felt fair and consistent. I resolved this by tweaking the collision detection algorithm and adjusting the movement speeds of the obstacles as the difficulty increases. Another issue was ensuring that the game feels balanced at higher levels, where the speed increase can quickly overwhelm players. However, after adjusting the difficulty curve, the gameplay experience became smoother.

 

EMBEDDED SKETCH

 

LINK TO FULL SCREEN

https://editor.p5js.org/b_Buernortey_b/full/p0Rs9Tzbk

Week 5: Reading Response

Computer vision differs from human vision in several ways. Humans can focus on basic features of objects and identify them even under different conditions, such as low light or slight changes in color and shape. In contrast, computer vision focuses on details rather than basic features, relying on a set of rules to detect objects. This can lead to mistakes when slight environmental or object modifications occur, such as changes in lighting. Another key difference is the ability to recognize objects in three dimensions. Humans can perceive depth, while computer vision typically operates in two dimensions, meaning that slight tilting of objects can cause confusion.

Various techniques can help computers see and track objects more effectively, similar to how humans do. One such technique is frame differencing, which is useful for detecting motion: consecutive frames are compared, and differences in pixel color indicate movement. Another technique is background subtraction, where the computer is given a reference image of the background; when an object enters the scene, the computer detects the pixels that differ from the reference and identifies the object. A third method is comparing pixels against a threshold value, which is especially useful when there is a strong brightness difference between the background and the object. Object tracking can also be achieved by following the brightest pixel in a video frame: each pixel’s brightness is compared to the brightest encountered so far, and its location is stored. This technique can be adapted to track the darkest pixel or multiple objects of different colors.
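These techniques can be sketched in plain JavaScript over grayscale pixel arrays (the frame format and thresholds are my own assumptions; in p5.js the same loops would run over capture.pixels):

```javascript
// Frame differencing: count pixels that changed between consecutive frames.
// Frames are flat arrays of grayscale values 0-255.
function diffFrames(prev, curr, threshold = 30) {
  let changed = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) changed++;
  }
  return changed; // number of "moving" pixels
}

// Background subtraction: same comparison, but against a stored
// reference image of the empty background instead of the previous frame.
function subtractBackground(background, curr, threshold = 30) {
  return curr.map((v, i) => (Math.abs(v - background[i]) > threshold ? 1 : 0));
}

// Brightest-pixel tracking: remember the location of the maximum value.
function brightestPixel(frame, width) {
  let best = 0;
  for (let i = 1; i < frame.length; i++) if (frame[i] > frame[best]) best = i;
  return { x: best % width, y: Math.floor(best / width) };
}
```

Swapping the comparison in brightestPixel would track the darkest pixel instead.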

In interactive art, the complexity of implementing certain ideas limits artistic expression, as only a few people have the expertise to implement such designs. However, with ongoing advancements making computer vision techniques more accessible and easier to use, art will increasingly benefit from these technologies.

Midterm Progress: Save Bran Stark!

Concept:

HBO’s Game of Thrones has countless iconic scenes that have been forever ingrained in the memory of 21st-century pop culture, but arguably one of the most iconic and significant is the fall that crippled Bran Stark and set off the ripple of scandals that made the series’ plot as complicated as it is. While I obviously do not condone acts of violence against minors, I found the scene quite funny because of how absurd everything was: a 10-year-old child who somehow managed to climb up the exterior of a tower accidentally witnesses an incestuous affair and then gets thrown out of a window. I figured that I would eternalize this scene in a more lighthearted way by creating a simple game in which multiple Brans will fall and the player, with their mouse, has to catch the boy by moving a pile of hay as a cushion for him to fall on. Each game will last 60 seconds, and players must save as many Brans as they can — and prevent the chaos that is about to spread throughout Westeros.

A rough sketch of what the game will look like.

Project Components:

  • OOP: Bran Stark will be a class. Bran’s sprite and the parameters for his falling movement will be set in this class.
  • Image: Using Procreate, I will illustrate a simple background image that depicts the tower from which Bran fell + grassy ground + pebbles. The sprite for Bran will also be illustrated by yours truly (in a desperate attempt to make up for amateur coding with good artistic direction).
  • Sound: I plan to have the show’s opening theme, composed by Ramin Djawadi, play as soon as the game loads; it will stop when the timer runs out and restart along with the game.
  • Text: The starting screen will show the game’s title in the same font as the show’s title card. The countdown and score will also be displayed.
  • Shape: I will draw simple shapes to serve as buttons for starting/restarting the game.

Code Progress:

  • So far, I have a rough sketch of red ellipses (placeholder for Bran) that fall at varying speeds and a yellow rectangle (placeholder for hay cushion) with a fixed y position that moves horizontally with the player’s mouse. Both of these are classes of their own.
  • The Game of Thrones main title theme starts playing when the game loads.
  • The checkCollision() function checks whether a Bran has come into contact with the hay cushion; if so, the score increases by 1 and that specific Bran is removed while a new Bran spawns at the top.
  • let brans = [];
    let hay;
    let numberOfBran = 3; //number of Brans to be on screen at the same time
    let score = 0;
    let music;
    
    function preload() {
      music = loadSound("GOT Main Title.mp3");
    }
    function setup() {
      createCanvas(600, 400);
      for (let i = 0; i < numberOfBran; i++) {
        brans[i] = new Bran();
      }
      hay = new Hay();
      music.play(); //note: browsers may block autoplay until the user interacts with the page
    }
    
    function draw() {
      background("rgb(180,226,218)");
      //display Bran and hay cushion
      hay.display();
      for (let i = 0; i < brans.length; i++) {
        brans[i].display();
        brans[i].move();
        
        //increase score and reset Bran after collision
        if (checkCollision(brans[i], hay)) {
          score += 1;
          brans[i].reset();
        }
      }
      fill(0);
      text("Score:" + score, 500, 80);
    }
    
    //check for collision between Bran and hay cushion
    function checkCollision(bran, hay) {
      let d = dist(bran.x, bran.y, mouseX + 35, hay.y + 15); //distance to the center of the 70x30 hay rect
      return d < bran.diameter / 2;
    }
    
    class Bran {
      constructor() {
        this.x = random(50, 550);
        this.y = 0;
        this.speed = 3 + random(0, 7);
        this.diameter = 50;
      }
    
      display() {
        fill("rgb(216,78,78)");
        ellipse(this.x, this.y, this.diameter);
      }
    
      move() {
        this.y += this.speed;
        //re-generate Brans at top of screen when Brans fall off screen at the bottom
        //use p5's global `height` (there is no `canvas.height` variable in scope here)
        if (this.y - this.diameter / 2 > height) {
          this.reset();
        }
      }
    
      reset() {
        this.x = random(50, 550);
        this.y = 0;
      }
    }
    
    class Hay {
      constructor() {
        this.x = mouseX;
        this.y = 330;
      }
    
      display() {
        this.x = mouseX; //follow the mouse each frame
        fill("#FCD179");
        rect(this.x, this.y, 70, 30);
      }
    }
  • Challenges:
  • Setting a countdown for the game would be a challenge for me, as I have never tried incorporating one into my projects prior to this one.
  • I will also need to figure out an algorithm that prevents the randomly generated positions of the falling Brans from overlapping.
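One candidate algorithm for the overlap problem is rejection sampling: pick a random x position, and accept it only if it is far enough from every Bran already on screen (names mirror my placeholder code above, but this is only a sketch):

```javascript
// Rejection sampling for non-overlapping spawn positions.
// `existing` holds the x positions of Brans already on screen.
function spawnX(existing, diameter = 50, minX = 50, maxX = 550, tries = 100) {
  for (let t = 0; t < tries; t++) {
    const x = minX + Math.random() * (maxX - minX);
    // accept only if at least one diameter away from every existing Bran
    if (existing.every((e) => Math.abs(e - x) >= diameter)) return x;
  }
  return null; // screen too crowded; caller can wait a frame and retry
}
```

Returning null instead of looping forever keeps draw() from stalling when the screen is genuinely full.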

 

Week 5 – Midterm Progress

Concept

Whenever I went to the arcade, the dance machine was always the most fascinating and fun for me. This inspired me to create a dancing game, but with a twist. The story takes place on a planet about to be invaded by aliens, where a cat is granted a superpower to defeat them. The catch is that his superpower is activated through dance moves.


Game design

The game mechanic will be moving your mouse to avoid falling asteroids/aliens and using the UP, DOWN, LEFT, and RIGHT arrow keys to destroy them. After destroying a number of enemies, extra moves will be unlocked that can destroy multiple aliens at the same time.

Implementation design

Structure of code:

  • class for cat: display, move, unlock moves, variables: x, y, dance (current key pressed).
  • class for enemies: display, move (fall), variables: x, y, arrow, velocity.
  • starting screens: including story screens that tell the story of the cat.


Starting screens and tutorial


Game screens

Progress

Currently, I have implemented the basics of the cat class (move, display) and the arrow class (move, display). I have also implemented randomly generated arrows and the mechanism that removes an arrow from the screen when the user presses its key.

Struggle

For this project, my main concern is how to synchronize the music with the gameplay. Typically, dancing games generate arrows in time with the beat. In the current version, the arrows in my game are randomly generated with random timing and velocity. Therefore, in the next step, I plan to improve this by generating arrows based on the music’s beats.
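A possible approach, assuming the song’s BPM and first-beat offset are known in advance (the values below are made up): convert the BPM to a beat interval and spawn an arrow every time playback crosses the next beat. With p5.sound, currentTimeMs would come from music.currentTime() * 1000.

```javascript
// Beat-synced spawning sketch: a closure that tracks the next beat time
// and reports how many arrows are due each frame. BPM and offset are
// placeholders, not values from my game.
function makeBeatScheduler(bpm, firstBeatMs = 0) {
  const interval = 60000 / bpm; // ms between beats
  let nextBeat = firstBeatMs;
  return function arrowsDue(currentTimeMs) {
    let due = 0;
    while (currentTimeMs >= nextBeat) {
      due++; // playback has crossed another beat
      nextBeat += interval;
    }
    return due; // how many arrows to spawn this frame
  };
}
```

Counting all crossed beats (rather than at most one) keeps the game in sync even if a frame is dropped.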

Another struggle is finding sprites for the game. There are not many resources online for dancing-game sprites. I tried converting a GIF into a sprite sheet, but the file size was too large to import into p5.js. In the next step, I will try to create my own sprites and design the visuals for the game.

Next steps

  • Implement extra moves
  • Add collision when arrows fall on the cat
  • Apply visuals
  • Create starting scenes
  • Fix initialization of falling arrows

 

MIDTERM PROGRESS

CONCEPT:

I couldn’t decide between a game or an artwork for my midterm project. However, while playing on my phone, I remembered the one game that I still play to this day: “Geometry Dash”. My siblings and I grew up around this game and still love it, and since the game’s design is basically shapes, I thought it would be the perfect game to try to re-create.

The main idea, or goal, is to control a character that moves through a series of repeating obstacles. The difficulty increases over time as the player’s score rises, with obstacles speeding up and appearing more frequently. There’s also a feature where the game randomly switches to an upside-down mode, adding unpredictability and complexity to keep the player engaged.

Design:

So far, I haven’t worked on the visual design so much as the layout of my game. I want a look similar to the actual game, where the background is a gradient that changes colour in time with the beat of the background music and the obstacles. For now, I left the obstacles as simple shapes (rectangles and spikes) just to test everything out and see how it flows in the game. For sound, I found the original music used in Geometry Dash online and implemented it in my game, along with a sound effect for when the player dies. However, I still need to fix the background music so that it stops when the player dies and only restarts with a new run; because I used the loop function, it currently just plays over and over non-stop.
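A possible fix for the looping-music problem, sketched with a stub sound object so the logic stands alone (with p5.sound, music would be a p5.SoundFile, which does provide loop(), stop(), and isPlaying()):

```javascript
// Guarded start/stop logic for the background loop. The sound objects are
// assumed to expose the p5.SoundFile-style methods used below.
function onPlayerDeath(music, deathSfx) {
  if (music.isPlaying()) music.stop(); // silence the loop on death
  deathSfx.play();
}

function onGameRestart(music) {
  if (!music.isPlaying()) music.loop(); // resume looping only if stopped
}
```

The isPlaying() guards prevent stacking multiple overlapping copies of the track when restart is pressed repeatedly.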

This is the inspiration for my design and how I would like it to turn out in the end.

User Interaction:

My user interactions are the player’s inputs in the game. The player presses the spacebar to jump; if the spacebar is held down, the player keeps jumping until the key is released. Instructions and text appear at the beginning, so the game starts when the player presses the spacebar on the start screen. After a game is over, pressing “1” restarts the game. I still need to work on the visual design of the start page, which for now is just text. I’ve also added a score counter displayed at the top of the screen, which increases by one each time the player successfully passes an obstacle. In the game, obstacles appear from the right side of the screen, and the player must jump to avoid them. The game randomly switches to an upside-down mode at higher scores, adding an extra challenge, but I still think I need to make it more complex and play around with the obstacles, as I fear the game might be too simple and boring as it stands.
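One simple way to drive the random upside-down switch could be a score-gated probability check each time an obstacle is passed (the threshold and probabilities below are placeholders, not values from my game):

```javascript
// Flip gravity with a probability that only kicks in past a score
// threshold and grows slowly, capped so the mode stays rare enough.
// `rand` is injectable for testing; defaults to Math.random.
function shouldFlip(score, rand = Math.random) {
  if (score < 10) return false; // no flips early on
  const p = Math.min(0.05 + (score - 10) * 0.005, 0.25); // capped chance
  return rand() < p;
}
```

Gating the flip behind a score threshold keeps the opening of each run predictable while later stretches stay surprising.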

The Most Difficult Part of the Project:

The hardest part of making this game has been figuring out how to make the difficulty increase smoothly as I want the game to stay engaging throughout. I want the game to get harder as you play, but I also need to make sure it doesn’t become too hard too soon, to the point it just gets frustrating.

Collision Detection (When the Player Hits an Obstacle):

The other tricky part is making sure the game knows when the player hits an obstacle, especially the spikes. The spikes are drawn as triangles, but I treat each one as if it were inside an invisible rectangle (a bounding box) that surrounds the spike, which makes collision detection much simpler. I used the collideRectRect() function from the p5.collide2D library, which checks whether two rectangles overlap: if the player’s rectangle overlaps the spike’s bounding box, the game registers a hit. The same approach works for the rectangle obstacles.
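For reference, the axis-aligned overlap test that collideRectRect() performs is essentially the following (edge-touching behavior may differ slightly in the library):

```javascript
// Axis-aligned bounding-box (AABB) overlap test. (x, y) is each
// rectangle's top-left corner, matching p5's default rectMode(CORNER).
function rectsOverlap(ax, ay, aw, ah, bx, by, bw, bh) {
  return ax < bx + bw && ax + aw > bx && ay < by + bh && ay + ah > by;
}
```

Four comparisons per pair makes this cheap enough to run against every obstacle every frame.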

How I Made It Less Risky:

To make sure the game doesn’t get too hard too fast, I tested how quickly the obstacles speed up and how often they appear. By setting limits on both, I made sure that the game gradually gets harder, but not too difficult right away.

 

Code so far:

 

 

Reading Reflection #4: Computer Vision for Artists

As much as we may try, computer vision has yet to be able to emulate the human vision; what a computer “sees” are “critically dependent on certain unique assumptions about the real-world video scene it is expected to analyze” (Levin). And as we work towards optimizing computer vision — especially in terms of enhancing facial recognition and such — I do believe that regulations do need to be put in place so that these technologies are not appropriated by the wrong hands, with issues such as surveillance and the potential invasion of privacy being so widely debated. The Suicide Box referenced in the text feels especially macabre and ethically questionable, since the project relies on tracking the movement of unsuspecting, suicidal people who are at their most vulnerable moment. Conversations about the ethics of computer vision being implemented in such manners are difficult but necessary in this rapidly evolving technological landscape.

In terms of interactive media arts, however, I do believe that computer vision and tracking can greatly enhance immersive experiences. Once again speaking in terms of video games (because I really do love games), the eye-tracking mechanism of Skybound Games’s Before Your Eyes is an example of great computer vision implementation that came to my mind. The game is centered around the story of a boy who revisits memories of his life while preparing for the afterlife (this is a very rough synopsis, as to not spoil the game). The gameplay tracks the player’s eyes, and players blink to interact with in-game elements and advance the plot. The game’s lengthy but crucial cutscenes prove to be challenging for players to watch without blinking and accidentally skipping to the next scene. While this may seem like an unwise choice of gameplay on the developer’s part, I eventually realized how creative it was for a game dealing with themes of grief to utilize eye-tracking to capture the bittersweet feelings of wanting to hold on to fleeting moments in life but feeling yourself unintentionally letting them slip away. Before Your Eyes is proof that computer vision, when implemented creatively, also has the potential to enhance the emotional value of a project.

Week 5: Mid-term Progress “Animal sounds trivia.”

Concept

My idea for the midterm is to create a simple game that I will call “Animal Sounds Trivia.” I will create a simple sprite sheet of a human character walking and running. The character’s movement will be controlled by the user pressing the arrow keys.
As the user presses the keys, the background will scroll so that the human character appears to move in the direction of the arrows. As the character moves, I will position animal animations at different intervals depending on how far the character has travelled (determined by how long the arrow key was pressed). When an animal appears, I will play a sound related to the animal shown in the background. After the sound is played, a pop-up window will appear, temporarily freezing the background, and ask the user to name the animal. If the user gives a correct name, they get a point; otherwise they can choose to re-listen once and guess again, or reveal the animal and get zero points for that animal.

The game will run through around 10 animals and end with a window that displays the user’s total score and gives them the option to replay the game or quit.

My front screen should look something like this one.

Implementation

I hope to implement my game using four classes. The first is the Player class, which will manage the sprite sheet for the player’s walking and standing animations, as well as the player’s position relative to the starting point. The second is the Animal class, which will manage the animal sounds and images/animations. The third is the Message class, which will manage the pop-up window, compare the user’s answers with the correct ones, and update the player’s score. The last is the Game class, which will contain instances of the other three classes and manage the game flow, allowing restarts and quitting.
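A rough sketch of how the Message class’s guess checking and scoring might work (hypothetical names and structure, not the project’s actual code):

```javascript
// Normalize the typed name before comparing, so "  Lion " matches "lion".
function checkGuess(guess, answer) {
  return guess.trim().toLowerCase() === answer.toLowerCase();
}

// One round for one animal: up to two attempts (initial guess plus one
// re-listen); revealing the answer or failing both attempts scores 0.
function playRound(guesses, answer) {
  for (let i = 0; i < Math.min(guesses.length, 2); i++) {
    if (checkGuess(guesses[i], answer)) return 1;
  }
  return 0;
}
```

Normalizing case and whitespace avoids punishing users for typing style rather than knowledge.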

The starting window will briefly display instructions on movement and which keys to use. With every interaction, users will know what to do, as the pop-up window will guide them.

Challenges

My hope is that I can create some good animations, matching the photo I attached, or at least coming close to it. I have seen that I can achieve this using Unity, which I have started to explore and hope will work out.

To minimise the risk, I have considered using a simpler design given the time frame, with simple animations built from sprite sheets available on the internet.

Midterm Project Progress: Eye of the Sound

Concept

“Eye of the Sound” is a project which transforms sound into mesmerizing visual art using the principles of physics. Many others before me have used music as a way of generating artwork.

However, there was one artwork on Vimeo which was quite interesting: a circular spectrogram. This project would be based on that visualization at its core and would expand on it.

That spectrogram appears to be based mostly on the loudness of the sound, and though it is an intriguing and cool artwork, it has no user interaction, no generative features, and so on. That’s where my project, “Eye of the Sound”, comes in. It is based on the sensory experiences of sound and sight; the circular flow is inspired by the flow of life, and the final image resembles the iris of an eye, a symbol of life.

Implementation

I have a basic idea of what must be done in the project. The central part would be FFT (Fast Fourier Transform) analysis to separate the different frequencies in the song, which a linear visualizer then displays. The visualizer’s displacement from its mean position is stored in an array and used to draw the results onto a rotating graphics-buffer layer. The layer rotates in one direction and the visualizer in the opposite direction, cancelling out the rotation so the visualizer “appears” stationary.

The color of the imprints on the layer is based on the loudness, and the intensity of the imprints is determined by the FFT analysis. The subtitles are displayed below.
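The geometry behind the rotating imprint can be sketched without any p5 calls; here each frame’s FFT bins become points along a radial line at the layer’s current angle (all names and constants are placeholders, not the project’s actual code):

```javascript
// Map one frame's FFT spectrum onto a radial line at `angle`.
// Each bin sits a bit further out, displaced by its amplitude.
function imprintPoints(spectrum, angle, baseRadius = 100, scale = 0.5) {
  return spectrum.map((amp, i) => {
    const r = baseRadius + i * 2 + amp * scale;
    return { x: r * Math.cos(angle), y: r * Math.sin(angle) };
  });
}

// Advancing the layer's rotation a little each frame sweeps the
// radial line around, building up the circular "iris" over time.
function nextAngle(angle, degPerFrame = 0.5) {
  return (angle + (degPerFrame * Math.PI) / 180) % (2 * Math.PI);
}
```

In the actual sketch, these points would be drawn into the rotating graphics buffer rather than the main canvas.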

The user also has another option in which they can sing along with the music being played; their voice’s imprints are calculated and displayed in the same manner, but in a different color scheme.

This means providing a menu screen with instructions and separate buttons leading the user to the mode they want.

Improvements which can be made

  • Adding a song-selection screen, where the user can choose from several options in both modes
  • Adding a running-sprite loading bar showing how much of the song is left
  • Adding some generative effects on the subtitles, or some sparkle effects on the “highs” of a song
  • Adding a performance score in the second mode to see how well the user has done with the song
  • A “Save Canvas” option for the user to store their experiences.

Week 5 reading

This reading was instrumental in my understanding of how computer vision techniques can be harnessed in the realm of interactive art and design.

One of the most enlightening aspects of the article was its clear explanation of the fundamental differences between computer and human vision. Understanding these distinctions helped me grasp why certain approaches are necessary when implementing computer vision in artistic contexts. The emphasis on the limitations of computer vision systems, such as their struggle with environmental variability, underscored the importance of thoughtful design in both the physical and digital realms.

The article’s discussion of various techniques to optimize computer vision for artistic applications was particularly valuable. Levin’s explanations of methods like controlled lighting, and algorithms provided me with a toolkit of practical approaches. This knowledge feels empowering, as it opens up new possibilities for creating interactive artworks that can reliably detect and respond to elements in a scene.

The ethical considerations raised in the article regarding tracking and surveillance capabilities of computer vision were thought-provoking. Levin’s examples of artists like David Rokeby and the Bureau of Inverse Technology, who have used these technologies to comment on surveillance culture and social issues, inspired me to think about how I might incorporate similar critical perspectives in my own work.

Furthermore, the range of artistic applications presented in the article, from full-body interactions to facial expression analysis, expanded my understanding of what’s possible with computer vision in art. These examples serve as a springboard for imagining new interactive experiences and installations.

In conclusion, this reading has significantly enhanced my understanding of computer vision in the context of interactive art. It has equipped me with technical knowledge, practical approaches, and critical perspectives that I’m eager to apply in my own creative practice.

Midterm Project Progress – Week 5

Concept and Introduction

Like a movie teaser, this would serve as the very first visual to be seen for this project. I designed it using Adobe Photoshop. Additional images are from pixabay.com

For my midterm project, I wanted to explore something I deeply love—a concept that excites me to bring to life. This project combines elements from my Week 1 and Week 3 assignments into an interactive piece that I’m thrilled to work on. My love for sci-fi began with watching Dragon Ball Z and was later fueled by shows like Naruto, Marvel and DC animated and live-action films, Star Wars, and many more. From this inspiration, I created a universe that’s too vast to fully explain here, but I can say that this project represents a small piece of something much larger. ‘The Flame Boy’ is a character I’ll be exploring through interactive storytelling, which excites me because it allows me to experiment with a different medium than the filmmaking I’m most accustomed to.

In short, ‘The Flame Boy’ is about a young boy who lives with his grandfather. He was abandoned by the royal family (the reasons are explained in the project) and left on the side of the planet where the sun never rises. He meets someone special who inspires him to trace his roots, as he never felt he truly belonged to this side of the world. The interactive story allows the user to explore this world and learn more about this character. Eventually, he discovers the truth about his family, specifically the Robinsons.

The concept of the interactive artwork offers a choice: you can either explore The Flame Boy’s world or uncover the truth (think ‘red pill, blue pill,’ if you will). Choosing to explore his home lets the user interact with his room and discover his personality. On the other hand, choosing to know the truth allows the user to experience the story through interactive storytelling, which will unfold as you continue the journey.

 

User Interaction

  1. The interactive artwork begins with an opening splash screen. On this screen, there are visual cues guiding the user to enter and continue. This is the starting point every time the user explores this world. A soundtrack plays in the background whenever this screen appears. The following images are rough sketches of how I envisioned the splash screen before moving into Photoshop:
Cover screen idea 1
Cover screen idea 2

2. Once the user presses any button, they are transported to a menu screen. This screen presents them with the option to either explore The Flame Boy’s world/home or learn about his story.

If they choose to explore his home, the screen transitions to a scene resembling his house. Users will be able to interact with various objects within his space, allowing them to learn more about him through this interaction. This will be created using a combination of shapes in p5.js, along with a few images, music, and sounds. The experience will be simple and intuitive.

 

If the user chooses to learn about his story, they are transported into a movie/book-like environment. Here, a narrator introduces the protagonist, explaining how he was born, how he received his name and powers, and why he is where he is now. The user can advance to the next page by clicking the screen. As the story progresses, they meet a magician who guides the protagonist in discovering his identity.

The user is then presented with another choice: either ‘shoot the stars’ using The Flame Boy’s fire powers to earn 100 star coins, or navigate their way through a dark maze using The Flame Boy’s fire as a light source. The maze changes each time the user selects this option, creating an unpredictable and ‘random’ experience. Once the user completes these mini-games, they witness The Flame Boy meeting his parents for the first time. The experience then concludes, prompting the user to start over.

The following image is a simple node sketch I made in photoshop which depicts the flow of the program in its entirety:

This was the 3rd version of this visual. A more sophisticated version exists. The nodes represent the structure explained above.

The following is a brief progress report on the program as of the time of writing. The music was made in Suno A.I:

 

Most Frightening Part and Its Solution

Problem: The main challenge of my interactive artwork lies in implementing the two mini-games within the story segment. This project feels like three projects combined into one, which makes me concerned about whether users will find it as rewarding as I hope. Specifically, I felt apprehensive about how to implement the maze game and the shooting stars feature.

Solution: At the time of writing this report, I am researching ways to integrate these features using predefined algorithms available on GitHub. I will discuss these algorithms in my final update, whether they work or if there’s a need to change the creative and technical approach. For now, this is my progress update.
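One algorithm that keeps coming up for random mazes is the recursive backtracker (a depth-first search over a grid). The following is a compact, generic sketch of it, not code from any particular GitHub repository:

```javascript
// Recursive-backtracker maze generation. Each cell records which of its
// four walls still stand; the DFS knocks down walls as it carves paths.
function generateMaze(cols, rows, rand = Math.random) {
  const cells = [];
  for (let y = 0; y < rows; y++)
    for (let x = 0; x < cols; x++)
      cells.push({ x, y, visited: false,
        walls: { top: true, right: true, bottom: true, left: true } });
  const at = (x, y) => cells[y * cols + x];

  const stack = [at(0, 0)];
  at(0, 0).visited = true;
  while (stack.length) {
    const c = stack[stack.length - 1];
    // unvisited neighbors: [wall to remove on c, nx, ny, wall on neighbor]
    const neighbors = [
      ["top", c.x, c.y - 1, "bottom"],
      ["right", c.x + 1, c.y, "left"],
      ["bottom", c.x, c.y + 1, "top"],
      ["left", c.x - 1, c.y, "right"],
    ].filter(([, x, y]) => x >= 0 && y >= 0 && x < cols && y < rows && !at(x, y).visited);
    if (!neighbors.length) { stack.pop(); continue; } // dead end: backtrack
    const [wall, nx, ny, opposite] = neighbors[Math.floor(rand() * neighbors.length)];
    c.walls[wall] = false;              // knock down the wall between the cells
    at(nx, ny).walls[opposite] = false;
    at(nx, ny).visited = true;
    stack.push(at(nx, ny));
  }
  return cells;
}
```

Because every cell ends up connected by exactly one path (a spanning tree), the maze is always solvable, and calling the function each round gives a fresh layout every time.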

The following is an image of the splash screen as a thank you for reading this report in its entirety (and because I’m excited to share this image I made in Photoshop!).

Image made in photoshop for the interactive piece titled: The Flame Boy: Becoming a Robinson. Further developments will be made. Images were downloaded from Pixabay and Meta A.I