Assignment 5 Midterm Progress Report

Project Concept:

I’ve always had a fear of snakes. Overcoming it seemed impossible until I started working on my game. Inspired by how snakes consume food whole, I came up with the idea for my midterm game, “Going Through It”. It is an obstacle course game where the course itself is designed in the shape of a snake. The player controls a small stick character trying to escape the snake as fast as possible. As my own unique twist, the player cannot directly control the stick figure’s jumps; instead, the stick behaves more like a pogo stick, bouncing off every obstacle it collides with, while the player controls only the rotation of the stick figure using the keyboard.

User Interaction Design:

Player Input: The player uses the “A” and “D” keys to control the rotation of the stick figure. The player needs to angle each jump correctly to get through each part of the obstacle course.
Obstacles: The whole level design acts as the obstacle in this game: the stick figure bounces around by hitting walls or the insides of the ‘snake’ and can even get stuck in traps laid out to make the game trickier to play.
Game Progression: Since death is not an element of this game, players race against themselves and other players to complete the level in the shortest possible time as they come to understand the game’s mechanics better.
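The steering-only control scheme above can be sketched in plain JavaScript (hypothetical names and constants, not the actual project code):

```javascript
// Sketch of the pogo-bounce mechanic: the player only steers the
// angle; the bounce itself happens automatically on collision.

const ROTATION_SPEED = 0.05; // radians per frame while A/D is held

// Update the stick's angle from keyboard input.
function updateAngle(angle, aPressed, dPressed) {
  if (aPressed) angle -= ROTATION_SPEED; // rotate counter-clockwise
  if (dPressed) angle += ROTATION_SPEED; // rotate clockwise
  return angle;
}

// On collision, launch the stick along its current angle with a
// fixed bounce speed, so the jump direction is set purely by rotation.
function bounce(angle, bounceSpeed) {
  return {
    vx: Math.cos(angle) * bounceSpeed,
    vy: -Math.sin(angle) * bounceSpeed, // negative y is "up" on a canvas
  };
}
```

The key design point is that the player never touches the velocity directly: rotation is the only input, and the bounce converts the current angle into a launch vector.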

Code Design: Functions, Classes, and Interactivity:

‘Going Through It’ is built using Object-Oriented Programming (OOP), where every entity within the game belongs to a separate class, and these classes interact closely to make the game function.

The primary classes and their major functionalities are as follows:

Obstacle Class: The Obstacle class defines the parameters for the rectangles that make up the ‘snake’. It also contains functions such as display(), which draws the obstacles as one cohesive, interactable object.

Stick Class: The Stick class defines the stick object itself and contains functions such as rotate() and update(), which establish the rules for collision and input-driven rotation. This is the largest and most important class in the game’s code.

Snake Level Class: The SnakeLevel class instantiates Obstacle objects with coordinates to assemble one cohesive snake level and adds them to a list so collision checks can be performed.

Game Class: The Game class handles all functionality not covered by the other classes, including pausing the game, showing the start screen, displaying the lowest time, and starting the level itself. It also detects when a level has ended and displays the time taken to clear the obstacle course.

Identifying and Addressing Risks:
The most challenging aspect of the project is implementing accurate collision detection between the stick character and the obstacles. Since the stick can collide with the obstacles at a multitude of angles, accurate and responsive collision detection has involved a lot of trigonometry, and I expect there is more to come in order to account for all possible scenarios.

To mitigate the risk of the stick phasing through the level or getting stuck at certain points, I have tried to account for collisions at each corner of the stick, along each vertical and horizontal side, and against vertical, horizontal, and diagonal obstacle walls. This is a tedious process with a lot of room for error, which I am minimizing through meticulous play-testing of the level.
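A simplified version of the corner-based check might look like this (hypothetical names; the real game handles many more cases, such as side and diagonal contacts):

```javascript
// Compute the four corners of a stick of given length/width,
// centred at (cx, cy) and rotated by `angle` radians.
function stickCorners(cx, cy, len, wid, angle) {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  const corners = [];
  for (const [dx, dy] of [[-len / 2, -wid / 2], [len / 2, -wid / 2],
                          [len / 2, wid / 2], [-len / 2, wid / 2]]) {
    // standard 2D rotation of each corner offset around the centre
    corners.push({
      x: cx + dx * cos - dy * sin,
      y: cy + dx * sin + dy * cos,
    });
  }
  return corners;
}

// True if any corner lies inside an axis-aligned obstacle rectangle.
function hitsObstacle(corners, rect) {
  return corners.some(c =>
    c.x >= rect.x && c.x <= rect.x + rect.w &&
    c.y >= rect.y && c.y <= rect.y + rect.h);
}
```

Checking only corners can still miss thin obstacles passing between them, which is why the extra side and diagonal checks mentioned above are needed.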

Another risk I am trying to mitigate is the complexity of the level. Since this is a single-player game, a level that is too easy will stop being enjoyable after a few playthroughs; on the other hand, a level that is too complex and difficult to complete will cause players to quit early rather than keep trying. I plan to continue play-testing, including with friends who have never played the game before, so I can use their input to adjust the level and its individual paths until the game strikes the right balance: complex enough to be interesting, while still pushing people to beat the level in shorter and shorter times.

Snapshots from the game and further steps:

 

This is the design of the Snake Level. The time is shown in the top-left corner and stops running once the level is completed.

Next Steps:

Adding a Lowest Time: I want to keep track of the lowest time achieved, displayed on the game’s start screen as an initial challenge for new players to beat.

Start Screen: I want to add a start screen that shows the level itself, a tutorial for the game, and the lowest time someone has taken to complete the level.

Reading Response 1

 

After watching Casey Reas’ talk, I’ve found a new appreciation for the randomness and controlled chaos held within art. The way he intertwined randomness with controlled elements gave me a unique perspective on creating art, showing how seemingly chaotic processes can yield meaningful and structured outcomes.

One of the standout examples from his talk was his work with cancer cell data, where he uses controlled randomness to produce distinctive visual patterns. This application of data not only served an artistic purpose but also pushed the boundaries of the interpretation of scientific information. It sparked a realization in me: the artistic process can be deeply intertwined with scientific exploration, leading to new interpretations and insights.

It was remarkable to see how even simple elements like dashes could serve as building blocks for more complex patterns. This idea compelled me to reconsider my creative practices. Instead of getting overwhelmed by the complexity of my projects, I can now start small and hopefully let the work snowball into a project I can be proud of.

Moreover, his talk provoked a deeper philosophical inquiry into the nature of creativity itself. If art can emerge from a system of randomness and algorithms, it challenges our conventional understanding of ownership and intentionality. In a time when machines contribute more and more to the artistic and creative process, is there a line we can draw regarding owning a piece of art or even whether something can be called art itself?

After Reas’ talk, I’m inspired to experiment more freely with randomness and to walk the line between control and chaos.

Week 5 Reading Response

Golan Levin’s article on Computer Vision opened my eyes to the fascinating world of computer vision in art. As a computer science major also delving into the visual arts, I found this piece incredibly relevant to my studies and personal interests.
The historical overview of computer vision in interactive art was eye-opening. I was unaware that artists have been experimenting with this technology since Myron Krueger’s work in the 1970s. It’s amazing to see how far society has progressed in the past half a century and how artists continue to push the boundaries of what’s possible with these tools.
One thing that really stuck with me was the explanation of basic computer vision techniques. I’ve always been curious about how interactive installations work, and learning about motion detection and object tracking was akin to getting a peek behind the curtain. It was fascinating to think that these fundamental concepts are behind so many of the interactive artworks in galleries and online.
The article made me think a lot about the difference between human and machine vision. I never realized how much work goes into making a computer “see” things that we take for granted. The fact that algorithms have to assign meaning to visual data is mind-blowing. It makes me appreciate the complexity of human perception even more.
I was particularly intrigued by the ethical implications of using computer vision in art. The example of the Suicide Box by the Bureau of Inverse Technology made me think about the fine line between art and surveillance. As someone who wants to create interactive pieces, I’m now more aware of the responsibility that comes with using these technologies. How do we make sure we’re not invading people’s privacy while still creating engaging experiences? It’s definitely something I’ll be considering in my future projects.
I appreciated that Levin included code examples, so I could see firsthand the effort and technique that went into creating such pieces. In conclusion, this article has changed the way I think about interactive art. It’s not just about creating engaging and fun experiences anymore; it’s about understanding the technology behind it and using it responsibly. I’m looking forward to experimenting with computer vision in my own work, but I hope to be more mindful of the ethical considerations involved.

Assignment 4

Inspiration:

In this project, I aimed to create a dynamic visualization of baby names by ethnicity, year, and gender using data from a CSV file. The inspiration for this project came from the realization that names are not just identifiers; they often carry cultural significance and reflect societal trends. By exploring how different ethnicities and genders influence naming patterns over time, I wanted to highlight the diversity and richness of society. This interactive visualization allows users to engage with the data, making it easier to understand how names evolve and resonate within various communities within the US. The data for the USA was readily available and organized, and thus I decided to pick that as my dataset.

Code that I am particularly proud of:

One of the sections of my code that I am particularly proud of is the get_top_names function. This function processes the CSV data to extract the top ten baby names based on user-selected criteria, including ethnicity, year, and gender. It tallies the occurrences of each name and sorts them to ensure that the most popular names are displayed in the visualization.

// This function processes the provided data to get the top 10 names
function get_top_names(ethnicity, year, gender) {
  let names = {};
  for (let row of table.rows) {
    // filters the data based on our current selection
    if (row.get('Year of Birth') === year && row.get('Ethnicity') === ethnicity && row.get('Gender') === gender) {
      let name = row.get("Child's First Name");
      let count = parseInt(row.get('Count'));
      // sums up counts for each name to account for duplicates
      if (name in names) {
        names[name] += count;
      } else {
        names[name] = count;
      }
    }
  }
  // sorts names by count in descending order and keeps the top 10
  let sorted = Object.entries(names).sort((a, b) => b[1] - a[1]);
  return sorted.slice(0, 10);
}

Final Product:

Conclusions and Reflections:

Reflecting on this project, I recognize the power data visualization holds in uncovering trends and fostering human understanding. I learned how to manipulate data and create interactive elements that let users explore and compare information. However, there are several areas for improvement. For instance, I hope to incorporate additional features such as user-input filters or a broader range of years. I would also like to add an overlay feature where two separate bar charts can be compared on the same canvas; this could be useful, for example, if someone wants to see the trend of a certain name through time more clearly. Additionally, I plan to improve the aesthetics of the visualization by experimenting with different color palettes and designs to make it more appealing. Overall, this project has been a valuable learning experience, and I look forward to applying these insights in later projects.

 

WEEK 5 READING RESPONSE

Computer vision and human vision are very different in how they process information. While people can quickly understand what they see, computers need specific algorithms to detect motion, brightness changes, and object differences. We help computers “see” by adjusting lighting and using techniques like background subtraction and motion detection to improve tracking accuracy.
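As a toy illustration of the motion-detection idea mentioned above, frame differencing reduces to comparing pixel values between two frames (simplified here to grayscale frames stored as flat arrays; real systems work on full camera images):

```javascript
// Count how many pixels changed more than `threshold` between two
// grayscale frames of the same size; a high count indicates motion.
function motionAmount(prevFrame, currFrame, threshold) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) changed++;
  }
  return changed; // number of pixels that "moved"
}
```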

In interactive art, computer vision allows viewers to engage with the artwork in real-time. By tracking movements and gestures, it creates an immersive experience where the audience becomes an active participant, enhancing their interaction with the art.

However, this ability to track people also raises concerns about privacy, especially in public spaces. While it makes art more interactive and responsive, the same technology can be used for surveillance, which can feel invasive. Artists and technologists must strike a balance between creating innovative interactive art and respecting individual privacy, ensuring the technology is used responsibly and ethically.

 

MIDTERM PROGRESS

Superman Saves (Enhanced Game)

Concept of the Project

“Superman Saves” is an interactive game inspired by my previous project, which focused on simple character movement and rescue mechanics. In this version, I aimed to elevate the project by adding dynamic challenges, such as time limits, obstacles, and a progressive difficulty system. The objective is to control Superman as he navigates the sky, rescuing individuals while avoiding clouds and birds. The game becomes increasingly difficult with each successful rescue, introducing faster obstacles and reducing the time available to complete the rescue.

The concept is rooted in creating an engaging, responsive game environment that tests the player’s reflexes and strategic thinking. By introducing new features like lives, levels, and a timer, I’ve created a more immersive experience compared to the original version, which was relatively straightforward in terms of gameplay.

 

How the Project Works

The game begins with Superman stationed at the bottom of the screen, and a person randomly placed near the bottom as well, awaiting rescue. Using arrow keys, the player can move Superman to navigate the sky, avoid clouds and birds, and reach the person. Upon reaching the person, Superman flies upwards, carrying them to the top of the screen to complete the rescue.

A notable feature of the game is its dynamic difficulty adjustment. Each successful rescue increases the game’s difficulty by speeding up the clouds and bird movements, which adds a sense of progression. Additionally, the inclusion of a timer introduces a layer of urgency, forcing players to make quick decisions. I’m particularly proud of how the game manages the timer, lives system, and level progression seamlessly, as these were complex components to implement but significantly enhanced the overall experience.

The code uses object-oriented programming principles to manage the background stars, obstacles, and gameplay mechanics. I took advantage of arrays to efficiently handle the stars’ animations and the positioning of various game elements.
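The difficulty scaling described above could be sketched like this (illustrative numbers and a hypothetical function name, not the game’s actual values):

```javascript
// Each successful rescue raises the level; obstacle speed grows and
// the timer shrinks, with a cap and a floor so high levels stay fair.
function difficultyForLevel(level) {
  const baseSpeed = 2, baseTime = 30;
  return {
    obstacleSpeed: Math.min(baseSpeed + level * 0.5, 8), // capped speed-up
    timeLimit: Math.max(baseTime - level * 2, 10),       // floor on time
  };
}
```

Clamping both values is one simple way to flatten the difficulty curve so the speed increase does not overwhelm players at higher levels.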

 

Areas for Improvement and Challenges

One area that could be improved is the game’s overall visual design. Although the current visual elements (e.g., clouds, birds, and Superman) are functional, they could benefit from more detailed and polished artwork. Additionally, I would like to enhance the sound effects in the future, adding background music and sound cues for when Superman successfully completes a rescue or collides with an obstacle.

I encountered a few challenges during development, particularly in managing the game’s timer and ensuring that collisions between Superman and obstacles felt fair and consistent. I resolved this by tweaking the collision detection algorithm and adjusting the movement speeds of the obstacles as the difficulty increases. Another issue was ensuring that the game feels balanced at higher levels, where the speed increase can quickly overwhelm players. However, after adjusting the difficulty curve, the gameplay experience became smoother.

 

EMBEDDED SKETCH

 

LINK TO FULL SCREEN

https://editor.p5js.org/b_Buernortey_b/full/p0Rs9Tzbk

Week 5: Reading Response

Computer vision differs from human vision in several ways. Humans can focus on basic features of objects and identify them even under different conditions, such as low light or slight changes in color and shape. In contrast, computer vision focuses on details rather than basic features, relying on a set of rules to detect objects. This can lead to mistakes when slight environmental or object modifications occur, such as changes in lighting. Another key difference is the ability to recognize objects in three dimensions. Humans can perceive depth, while computer vision typically operates in two dimensions, meaning that slight tilting of objects can cause confusion.

Various techniques can help computers see and track objects more effectively, similar to how humans do. One such technique is frame differencing, which is useful for detecting motion. This is done by comparing consecutive frames, where differences in pixel color indicate movement. Another technique is background subtraction, where the computer is provided with a reference image of the background. When an object is introduced into the scene, the computer detects pixels that differ from the background and identifies the object. A third method is comparing pixels with a threshold value, which is especially useful when there are significant light differences between the background and the object. Object tracking can also be achieved by tracking the brightest pixel in a video frame. Each pixel’s brightness is compared to the brightest encountered so far, and its location is stored. This technique can be adapted to track the darkest pixel or multiple objects of different colors.
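The brightest-pixel technique, for instance, reduces to a simple scan over a grayscale frame (a simplified sketch, not code from the article):

```javascript
// Scan a grayscale frame (flat array, row-major) and return the
// location and value of the brightest pixel.
function brightestPixel(frame, width) {
  let best = -1, bestIndex = 0;
  for (let i = 0; i < frame.length; i++) {
    if (frame[i] > best) {
      best = frame[i];
      bestIndex = i;
    }
  }
  return { x: bestIndex % width, y: Math.floor(bestIndex / width), value: best };
}
```

Tracking the darkest pixel is the same scan with the comparison flipped.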

In interactive art, the complexity of implementing certain ideas limits artistic expression, as only a few people have the expertise to implement such designs. However, with ongoing advancements making computer vision techniques more accessible and easier to use, art will increasingly benefit from these technologies.

Week 5 – Midterm Progress

Concept

Whenever I went to the arcade, the dance machine was always the most fascinating and fun for me. This inspired me to create a dancing game, but with a twist. The story takes place on a planet about to be invaded by aliens, where a cat is granted a superpower to defeat them. The catch is that his superpower is activated through dance moves.


Game design

The game mechanic will be moving your mouse to avoid falling asteroids/aliens and using the UP, DOWN, LEFT, and RIGHT arrow keys to destroy them. After destroying a number of enemies, extra moves will be unlocked that can destroy multiple aliens at the same time.

Implementation design

Structure of code:

  • class for cat: display, move, unlock moves, variables: x, y, dance (current key pressed).
  • class for enemies: display, move (fall), variables: x, y, arrow, velocity.
  • starting screens: including story screens that tell the story of the cat.


Starting screens and tutorial


Game screens

Progress

Currently, I have implemented the basics of the cat class (move, display) and the arrow class (move, display). I have also implemented randomly generated arrows and the mechanism to remove them from the screen when the user presses a key.

Struggle

For this project, my main concern is how to synchronize the music with the gameplay. Typically, dancing games generate arrows in time with the beat. In the current version, the arrows in my game are generated with random timing and velocity. Therefore, in the next step, I plan to improve this by generating arrows based on the music’s beats.
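One way to generate beat-aligned spawn times, assuming a known tempo, could look like this (a sketch of the planned approach, not yet implemented in the game):

```javascript
// Precompute the times (in seconds) at which arrows should spawn so
// they line up with the music's beats instead of random timing.
function beatTimes(bpm, durationSeconds) {
  const interval = 60 / bpm; // seconds between beats
  const times = [];
  for (let t = 0; t < durationSeconds; t += interval) {
    times.push(Number(t.toFixed(3))); // round away float drift
  }
  return times;
}
```

During play, the game would spawn an arrow whenever the song’s elapsed time passes the next entry in this list.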

Another struggle is finding sprites for the game. There were not many resources online for dancing-game sprites. I tried converting a GIF into a sprite sheet, but the file size was too large and could not be imported into p5.js. In the next step, I will try to create my own sprites and design the visuals for the game.

Next steps

  • Implement extra moves
  • Add collision when arrows fall on the cat
  • Apply visuals
  • Create starting scenes
  • Fix initialization of falling arrows

 

Week 5: Mid-term Progress “Animal sounds trivia.”

Concept

My idea for the midterm is to create a simple game that I will call “Animal Sounds Trivia.” I will create a simple sprite sheet of a human character walking and running. The movement of the human character will be determined by the arrow keys pressed by the user.
As the user presses the arrows, the background will move so that the human character appears to move in the direction of the arrows. As the character moves, I will position animal animations at different intervals depending on how far the character has moved (determined by how long the arrow key was pressed). As each animal appears, I will play a sound related to the animal shown in the background. After the sound plays, a pop-up window will appear, temporarily freezing the background. The pop-up window will ask the user to name the animal. If the user gives a correct name, they get a score; otherwise they can choose to re-listen once and guess again, or reveal the animal and get zero points for that animal.

The game will run through around 10 animals and end with a window that displays the user’s total score and gives them the option to replay the game or quit.
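The guess-checking step described above could be as simple as a normalized string comparison (a hypothetical helper, not final code):

```javascript
// Compare the player's answer to the animal's name, ignoring case
// and surrounding spaces, and update the score on a correct guess.
function checkGuess(guess, animalName, score) {
  const correct = guess.trim().toLowerCase() === animalName.toLowerCase();
  return { correct, score: correct ? score + 1 : score };
}
```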

My front screen should look something like this one.

Implementation

I hope to implement my game using four classes. The first is the Player class, which will manage the sprite sheet for the player’s walking and standing animations, as well as the player’s position relative to the starting point. The second is the Animal class, which will manage sounds as well as animal images/animations. The third is the Message class, which will manage the pop-up window that collects the user’s answers, compares them with the correct ones, and updates the player’s score. The last is the Game class, which will contain instances of the other three classes and manage the game flow, allowing restarts and quitting.

The starting window will briefly detail the instructions on movement and which keys to use. With every interaction, my users will know what to do, as the pop-up window will ensure that!

Challenges

My hope is that I can create some good animations, matching the photo I attached or at least coming close to it. I have seen that I can achieve this using Unity, which I have started to explore and hope will work out.

To minimise the risk, I have decided on a simpler design given the time frame: I will use simple animations built from sprite sheets available on the internet.

Midterm Project Progress: Eye of the Sound

Concept

“Eye of the Sound” is a project that transforms sound into mesmerizing visual art using the principles of physics. Many others before me have used music as a way of generating artwork, like:

However, there was one artwork that was quite interesting to see on Vimeo: a circular spectrogram. This project is based on that visualization at its core and expands on it.

That spectrogram appears to be based mostly on the loudness of the sound, and though it is an intriguing and cool artwork, it has no user interaction, no generative features, and so on. That’s where my project, “Eye of the Sound”, comes in. It is based on the sensory experiences of sound and sight; the circular flow is inspired by the flow of life, and the final result resembles the iris of an eye, a symbol of life.

Implementation

I have a basic idea of what must be done in the project. The central part of the project is FFT (Fast Fourier Transform) analysis to distinguish the different frequencies in the song, displayed with a linear visualizer. The visualizer’s displacement from the mean position is stored in an array and used to draw the results onto a rotating graphics buffer layer. The layer rotates in one direction while the visualizer rotates in the opposite direction, cancelling the rotation so the visualizer “appears” stationary.

The color of the imprints on the layer is based on the loudness, and the intensity of the imprints is determined by the FFT analysis. The subtitles are displayed below.
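A possible loudness-to-color mapping could look like this (an illustrative assumption; the actual sketch would feed in p5.js amplitude values):

```javascript
// Map a loudness value in 0..255 to a hue, from cool blue (quiet)
// down to warm red (loud). Out-of-range inputs are clamped.
function loudnessToHue(loudness) {
  const clamped = Math.min(Math.max(loudness, 0), 255);
  return 240 - (clamped / 255) * 240; // 240 (blue) down to 0 (red)
}
```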

The user also has another mode in which they can sing along with the music being played; their voice imprints are calculated and displayed in the same manner but in a different color scheme.

This means providing a menu screen with instructions and separate buttons that lead the user to the mode they want.

Improvements which can be made

  • Adding a song-selection screen, where the user can choose from several songs in both modes
  • Adding an animated loading bar showing how much of the song is left
  • Adding some generative effects on the subtitles, or some sparkle effects on the “highs” of a song
  • Adding a performance score in the second mode to see how well the user has done with the song
  • A “Save Canvas” option for the user to store their experiences.