Midterm Progress

Thinking about my midterm idea last weekend, I realized that I wanted to create something that was related to Christmas. Being far away from home, I often think about my childhood and those cozy winter holidays that I used to spend with my family and friends, so my midterm is a representation of one of my favorite times of the year.

When deciding on the style of my project, I first wanted to create some sort of game. However, I decided to try something new, different from the typical collision-detection or platform games I built all the way through my Introduction to Computer Science class. I settled on an interactive game that would incorporate different elements of my memories of Christmas: music, visual elements, and the overall atmosphere of the celebration. It would be both a pleasant experience for me and something more unusual. As I found out later in the week, when the professor showed us previous midterms, my idea for an interactive game was not so unique: the Cafe game is a great example, and in some way I got additional inspiration from looking at it, but, of course, I will do my best to make my game even better.

Designing the Idea

As I did not have much time this week, my goal was simply to outline the idea and start working on the basic fundamentals of my game. First of all, the opening screen will be full of Christmas celebration elements. One of the first concerns I faced was finding good high-quality textures, because the free game images I usually see on the internet are not that good-looking, and the good ones cost money. However, after spending some time looking for websites with free PNG images without backgrounds, I came across a couple where I found exactly what I needed. One of the good websites I used is CleanPNG. I also decided to add the possibility of entering fullscreen by pressing 'f' inside the p5.js editor. I remember the technique the professor showed us for making the canvas and all its elements adjust automatically to the size of the window, and I even tried to implement it using the windowResized() function. However, I decided it would add unnecessary complications later in the game in case I needed to hardcode something, so I abandoned the idea and stuck with a canvas size of 1800×900.
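For the record, the adaptive-canvas idea I abandoned can be reduced to one small helper: compute a single uniform scale factor that fits the fixed 1800×900 design into the current window, plus offsets to center it. This is just a sketch of the technique, not code from my project:

```javascript
// Design resolution the sketch is hardcoded against.
const DESIGN_W = 1800;
const DESIGN_H = 900;

// Return the largest uniform scale that fits the design canvas
// inside the window, plus offsets to center it (letterboxing).
function fitCanvas(winW, winH) {
  const scale = Math.min(winW / DESIGN_W, winH / DESIGN_H);
  return {
    scale,
    offsetX: (winW - DESIGN_W * scale) / 2,
    offsetY: (winH - DESIGN_H * scale) / 2,
  };
}
```

In p5.js this could be called from windowResized() and applied with translate() and scale() at the top of draw(), so hardcoded 1800×900 coordinates would keep working at any window size.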

The second stage of my game will take place inside the house. Once the user clicks the door, the interior of the house will appear on the screen. I took some inspiration from the game 'Fortnite', which I used to play in my middle school years. It had a very nice Christmas event called Winterfest, and this is what it looked like:

(Images: Fortnite Winterfest gift-opening screens, from Newsweek's Winterfest presents guide.)

The idea was that you log in to the game every day and you can open one gift per day. It had some in-game outfits, weapon skins, and other customizable items, but the main reason I still remember this game is the memories of how I used to play it with my friends during the school winter break.

For my midterm project, I want to implement the idea of opening gifts. I think this will be the biggest challenge: figuring out how the gift-opening animation will work and what I want to put inside. I really want to make it a good-quality interaction, so I will do my best to find the resources I need to learn the relevant code techniques. As of now, I have not chosen any specific solution, but I am sure I will use a lot of OOP in my code, as well as a lot of functions. In fact, I have already started creating functions for each of the elements shown on the screen, because I can potentially add animations or sounds for them next week.
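As a starting point for the gift-opening interaction, a hypothetical Gift class could drive the animation with a small state machine. Every name and timing here is a placeholder, not final game code:

```javascript
// Hypothetical Gift class: 'closed' -> 'opening' -> 'open'.
// The 0.5 s duration is an arbitrary placeholder.
class Gift {
  constructor(x, y, prize) {
    this.x = x;
    this.y = y;
    this.prize = prize;        // what's inside the box
    this.state = 'closed';
    this.progress = 0;         // 0..1 animation progress
  }

  open() {
    if (this.state === 'closed') this.state = 'opening';
  }

  // Advance the opening animation; dt is seconds since last frame.
  update(dt) {
    if (this.state === 'opening') {
      this.progress = Math.min(1, this.progress + dt / 0.5);
      if (this.progress === 1) this.state = 'open';
    }
  }
}
```

A draw() function could then use progress to animate the lid (e.g., rotate it by progress * some angle) and reveal this.prize only once the state reaches 'open'.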

Conclusion

So far I really like how I am working on developing my idea. I really want to use my imagination and try to create something that I would be proud of. As I have mentioned before, this game is a good representation of my memories, and I will try to make the most out of my time to make it great.


Reading Reflection: Week 05

I read about Myron Krueger's "Videoplace" in another article, and I think Krueger's philosophical approach to bridging art and computer science was a pioneering idea in the 1960s, one that made him a first-generation virtual reality and augmented reality researcher. His examples of creating art on his friend's hands and his visual aesthetics reminded me of the times when I used to draw just lines and patterns in the "Paint" software on a Windows computer; sometimes it would come out quite aesthetically pleasing, but he took this simple idea and generated something out of the box. As I read more about VIDEOPLACE in this paper, I felt it was more advanced than the technologies of its time and surely created a path for today's AR and VR tech. I feel this paper works as a stepping stone for students interested in computer vision and artificial intelligence.

I also felt very curious to learn more about Messa di Voce, created by Golan Levin in collaboration with Zachary Lieberman. This project expanded my conception of what can be done in the field of human-computer interaction. Previously, image or pattern recognition would come to mind when talking about HCI, but now I see how our voice can be an excellent element for interacting with audiovisual installations. I agree that algorithms play a key role in this field, and I wonder whether we all need to keep building new algorithms to explore more opportunities in HCI. The idea of building machine vision techniques "from first principles" resonates with my desire to understand how these systems work under the hood. While many environments provide off-the-shelf solutions, the challenge of implementing vision algorithms from scratch seems like a rewarding process that deepens one's understanding of the mechanics of vision systems.

After reading this paper, I want to explore how vision-based systems can interact with audio, visuals, and physical devices to create immersive, interactive experiences. Tools like Myron, Rozin’s TrackThemColors, and especially Max/MSP/Jitter with CV.Jit seem like powerful platforms for experimenting with art and performance which I haven’t used till now.

Reading response #4: computer vision

Computer vision has strong analytical capabilities, but only with proper programming. For example, in LimboTime (the example given in the article), the computer interprets the human body as black pixels, with the head identified as the vertex of the pixel group. This expands the range of input compared to simpler systems like p5, where input is limited to the keyboard and mouse. It also highlights the key difference between computer vision and human vision: computer vision relies on data and algorithms, while human vision relies on context and experience, following a similar logic but in a much more subjective way.

Optimizing physical conditions, like using reflective materials or controlled environments, can improve computer vision's accuracy. But there are broader implications, especially regarding surveillance and privacy. I'm always surprised by how quickly we've adapted to technologies like Instagram filters; it's now simply expected that the screen will show something when you interact with it. This also makes me wonder how computer vision might shape human vision, not just physically but in how we perceive ourselves. In terms of surveillance, the privacy intrusion may not be as extreme as sci-fi films suggest, but it does subtly alter how we see and present ourselves.

Week 5: Midterm Report

Concept
I decided to implement a clicker/incremental game with a twist. Instead of just clicking buttons, you'll be hunting down flying shapes, upgrading your abilities, and much more. The cursor will have a box (a scope/aim) attached to it, and when an enemy shape is inside this box, clicking will damage it. As you defeat enemies, you'll earn different types of resources to spend on upgrades in a skill tree. The game will also have prestige rounds (levels), letting you progress through the game with some bonuses. This idea didn't come out of nowhere – it's inspired by a bunch of incremental and clicker games I've played, like To The Core, Cookie Clicker, AdVenture Capitalist, and others. I'm trying to take the parts I liked best from these games and mix them into something new and fun.

Design
I’ve been sketching out ideas in Figma. So far, I’ve got designs for the skill tree and the main page. I’m still working on the game screen, but I’ve got the enemy shapes and cursor box figured out.

Main page

Skill tree and upgrades

Enemies and shooting box


I decided to keep the design simple and stick to a pixel art style. This choice not only fits the retro vibe of many clicker games but also helps keep the visuals clean and easy to understand. Plus, pixel art is pretty forgiving when you’re not an art pro, which is perfect for me. The simple style should also help with performance, which is a bonus given my worries about how smoothly everything will run.

Challenges
One big worry is how well the game will run. P5.js is great, but I’m not sure if it can handle lots of moving shapes and calculations without slowing down. I might need to cut back on some features or find ways to make the code run faster. Especially when there are many enemies on screen, or when the skill tree gets complex, the game might start to lag.

Another tricky part is balancing the difficulty. I want the game to get harder as you play, but not so hard that it’s frustrating. Finding that sweet spot will take some trial and error. It’s not just about making enemies tougher – I need to balance how fast you earn resources, how much upgrades cost, and how much stronger they make you. If I mess this up, the game could end up boring or impossible.

Designing the skill tree is also proving to be a challenge. I want it to be interesting and give players meaningful choices, but not so complicated that it’s overwhelming. Each skill needs to feel useful, and there should be different strategies players can try. But I also don’t want to end up with a situation where there’s one “best” path that makes all other choices pointless.

Risk Prevention

  • To tackle the performance issue, I’m going to start by making a simple version with just a few moving shapes. I’ll test how many I can add before the game starts to lag. This will help me figure out if I need to change my plans or look for ways to optimize the code. I’ve also started looking into ways to optimize p5.js, like using object pooling for enemies instead of creating new ones all the time.
  • For the difficulty balance, I’m planning to create a basic difficulty curve and then play-test it a bunch. I’ll probably need to tweak the numbers a lot, but starting with something I can actually play will help me see what works and what doesn’t.
  • To handle the skill tree challenge, I am planning to implement a main Skill class. This basic structure will let me experiment with different skills and costs without committing to a full implementation. I'll gradually expand it as I figure out what works and what doesn't.
  • I also want to implement progress-save functionality. I'm going to start by saving just the bare minimum – maybe only the resources and which skills are unlocked. I will probably use the browser's local storage to store this data, which should be simple to implement.
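The object-pooling idea from the first bullet can be sketched very compactly. This is a minimal illustration of the pattern, not the game's actual enemy code, and the enemy fields are placeholders:

```javascript
// Minimal object pool: reuse enemy objects instead of allocating
// a new one on every spawn, which reduces garbage-collection work.
class EnemyPool {
  constructor() {
    this.free = [];    // recycled, inactive enemies
    this.active = [];  // enemies currently in play
  }

  spawn(x, y) {
    // Reuse a recycled enemy if available, else allocate a new one.
    const e = this.free.pop() || { x: 0, y: 0, alive: false };
    e.x = x; e.y = y; e.alive = true;
    this.active.push(e);
    return e;
  }

  despawn(e) {
    e.alive = false;
    const i = this.active.indexOf(e);
    if (i !== -1) this.active.splice(i, 1);
    this.free.push(e);
  }
}
```

The payoff is that after the first wave, spawning enemies allocates nothing new, which is exactly the steady-state behavior that keeps frame times consistent.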

By tackling these challenges one by one, I’m hoping to reduce the risks and make steady progress on the game. It might not be perfect right away, but having something playable will make it much easier to improve over time.
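The bare-minimum save idea mentioned above could look like the following sketch. The storage backend is passed in as a parameter so the same functions work with window.localStorage in the browser and with a plain mock object elsewhere; the key name and saved fields are placeholders:

```javascript
// Save/load helpers with an injected storage backend (anything
// exposing setItem/getItem, like window.localStorage).
function saveProgress(storage, resources, unlockedSkills) {
  storage.setItem('clickerSave', JSON.stringify({ resources, unlockedSkills }));
}

function loadProgress(storage) {
  const raw = storage.getItem('clickerSave');
  if (!raw) return { resources: 0, unlockedSkills: [] }; // fresh start
  return JSON.parse(raw);
}
```

In the sketch itself this would be called as saveProgress(window.localStorage, resources, unlocked) whenever the player buys an upgrade.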

Reading Reflection – Week 5

This article made me think about how computer vision is different from human vision in some important ways. Humans can easily understand the meaning and context of what we see, but computers need very specific instructions to make sense of images. The article talks about how even simple things like detecting motion or finding objects are challenging for computers. We take for granted how our brains can instantly recognize people, objects, and what’s happening in a scene. But for a computer, each of those tasks requires complex algorithms and programming.

I found it interesting how the article emphasized the importance of setting up the physical environment to help computer vision systems work better. Things like using special lighting, high-contrast backgrounds, or reflective markers can make a big difference. This shows that computer vision isn’t just about fancy software – the hardware and physical setup matter a lot too. It made me realize that artists and designers using computer vision need to think carefully about the whole system, not just the code.

The use of computer vision for tracking and surveillance in art projects raises some tricky issues. On one hand, it allows for really cool interactive experiences where the artwork can respond to people’s movements and actions. But it also means constantly watching and recording what people are doing, which could feel invasive. I wonder how artists think about the ethics of using these technologies. The Suicide Box project mentioned in the article is a good example of how computer vision can be used to gather data on sensitive topics in ways that make some people uncomfortable. Overall, I think computer vision creates exciting new possibilities for interactive art, but artists need to be thoughtful about privacy concerns and potential negative impacts of surveillance.

Week 5 – Midterm Progress

Concept:

The game is a simplified and modified version of the Snowy Spin minigame from the Steam game Pummel Party. It is an arena game in which each player controls a snowball. To win, players try to knock the other players' snowballs out of the arena or collect coins. Each game contains 3 rounds, and the player who wins two rounds out of three wins the game. A scoreboard will display each player's current score and coin count.

User Interaction:
  1. The game starts with an instruction screen, waiting for user input to begin.
  2. Each player controls their snowball using keyboard inputs:
    • Player 1: WASD keys
    • Player 2: Arrow keys
  3. Players navigate the arena, collecting items and attempting to knock opponents out.
  4. The game consists of 3 rounds, with the winner being the first to win 2 rounds.
  5. After game completion, players can choose to start a new session.
Features:
  • A snowball class:

Pressing a key applies a force to the ball in the direction of that control. The ball accelerates until it reaches maximum speed. When a collision between balls is detected, each ball moves according to the physics of an elastic collision. A ball can collect items along its path of movement, and it will "fall" if it leaves the arena.
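For two snowballs of equal mass, the elastic collision described above reduces to swapping the velocity components along the line connecting the centers, while the tangential components are unchanged. A sketch of that resolution step (ball fields are assumptions):

```javascript
// Resolve an elastic collision between two equal-mass balls by
// swapping their velocity components along the line of centers.
// Balls are {x, y, vx, vy}; both are mutated in place.
function elasticCollide(a, b) {
  const nx = b.x - a.x, ny = b.y - a.y;
  const len = Math.hypot(nx, ny);
  if (len === 0) return;                 // coincident centers: skip
  const ux = nx / len, uy = ny / len;    // unit normal between centers
  const va = a.vx * ux + a.vy * uy;      // a's speed along the normal
  const vb = b.vx * ux + b.vy * uy;      // b's speed along the normal
  const dv = vb - va;                    // swapping normal components
  a.vx += dv * ux; a.vy += dv * uy;
  b.vx -= dv * ux; b.vy -= dv * uy;
}
```

For a head-on collision this simply exchanges the two velocities, which matches the intuition of two equal snowballs bouncing off each other.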

  • A game class:

That will display the arena, the balls, and the items. The arena will be a perfect circle.

  • An item class: 

Items are randomly generated around the arena every (possibly) 10 seconds. The players can collect items by moving their balls to the item. 

  • Players win a round by knocking the other player out of the arena or collecting 3 gold coins. Players win a game by winning 2 rounds out of 3.
  • Background music and/or sound effects of ball colliding and players collecting items 
  • Power-ups: 
    • Increase / decrease size of snowball 
    • Increase / decrease speed of snowball
    • Coins
Code Design:
  1. Snowball Class:
    • Properties: position, velocity, size, player ID
    • Methods: move(), accelerate(), checkCollision(), collectItem()
  2. Game Class:
    • Properties: arena, players, items, current round, scores
    • Methods: startGame(), updateGame(), checkWinCondition(), nextRound()
  3. Item Class:
    • Properties: type, position, effect
    • Methods: spawn(), applyEffect()
  4. Arena Class:
    • Properties: size, shape
    • Methods: draw(), checkBoundaries()
  5. Main game loop:
    • handleInput()
    • updateGameState()
    • checkCollisions()
    • drawScreen()
    • playAudio()
Potential Obstacles:
  1. Collision detection and physics
  2. Implementing smooth player controls

 

 

Midterm Progress | week 5

For my midterm I want to create a simple game inspired by the movie "Cloudy with a Chance of Meatballs," where the avatar has to avoid the meatballs; each meatball avoided earns a point. Using this catching game as a template, I would have to change the code from catching the circles to gain a point to avoiding them.

I would also like to have the player pick their avatar (the girl or the boy) before starting. But I am worried that I might not be able to do it, because the collision between the ball and the avatar might look off: different avatars mean different sizes and shapes.
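One way to keep different avatars from breaking the collision code is to give each avatar its own hit radius, so swapping avatars only changes a number. This is just a sketch of that idea, with all names hypothetical:

```javascript
// Circle-based hit test: each avatar carries its own radius, so
// different avatar sizes just change one number.
// avatar and ball are {x, y, radius}, positions at their centers.
function hitsAvatar(avatar, ball) {
  const dx = ball.x - avatar.x;
  const dy = ball.y - avatar.y;
  const minDist = avatar.radius + ball.radius;
  // Compare squared distances to avoid a square root per check.
  return dx * dx + dy * dy < minDist * minDist;
}
```

The avatar-select screen would then just set something like player.radius = 40 for the girl and 50 for the boy, and the rest of the game stays untouched.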

If everything goes well and I end up finishing faster than I think I could add levels to the game.

This is a rough sketch of what it will look like:

Week 5 – Reading Response

Computer Vision for Artists and Designers provides a comprehensive overview of the technology, its history, and its potential applications in the creative field. While the essay highlights the advancements in computer vision, it also acknowledges the limitations and challenges that still exist.

One of the key points emphasized in the essay is the difference between human and computer vision. Humans possess a unique ability to interpret visual information based on context, prior knowledge, and emotional cues. Computers, on the other hand, rely on algorithms and data to process and understand visual data. While computers can excel at specific tasks like object detection and tracking, they often struggle with more nuanced aspects of human perception.

The essay also discusses various techniques used in computer vision, such as object detection, object tracking, feature matching, and deep learning. These techniques have enabled significant progress in fields like autonomous vehicles, medical imaging, and surveillance. However, it is important to note that these techniques are still evolving and may have limitations in certain scenarios.

The intersection of computer vision and art raises interesting questions about the role of technology in creative expression. While computer vision can provide new tools and possibilities for artists, it is also important to consider the ethical implications of surveillance and data collection. Artists should be mindful of privacy concerns and strive to use technology in ways that enhance rather than exploit human creativity.

Week 5: Midterm progress update

Concept

For my midterm project, I decided to design an interactive photo lab experience. The user starts the program with the choice of being a customer for a day or an employee, and that decision guides their experience. As an employee, they have more of a 'mini game' experience (comparable to Papa's Pizzeria on CoolMath), while if they choose to be a customer, it's more of a learning experience for a beginner film photographer (inspired a bit by Duolingo). My main motivation for this design came from personal experience: I started working at a photo lab roughly 2 months after I shot my first roll of film, so it felt like I was learning a ton as a photographer and an employee very quickly. I also think it would be cool if my job back home could use this as a fun resource for first-time customers.

Code progress

So far, in terms of code progress, I am pretty satisfied with how things are developing. Because the program is split into two sections (the customer and employee experiences), I devoted this weekend to working on just the employee experience. Working out small kinks has been quite time-consuming because of the complexity with which I want the program to function, but I believe it is still manageable (for now). For the most part, I've relied on Canva for basic 'set design,' just to lighten the load a bit and for aesthetic reasons, so I've been working in layers quite a bit.

For example, this is the base layer of the processing room in the employee section. I’ve added different ‘layers’ on top so that depending on the mouse location, the program produces different signifiers that prompt you to open one of the four mini games or return back to the home page.

Most frightening part

Overall, I don’t think the implementation of my ideas will be all that difficult to complete. However, I am frightened by how tediously time-consuming it will be, because it has already taken me a significant amount of time to begin making headway on 2 of the mini games. Furthermore, I still haven’t solidified what I want the customer’s side to look like, so I’m afraid that it won’t balance out appropriately with what I’ve already created.

Lastly, there are smaller things I’m holding off on debugging until the framework is fully complete, and I’m really hoping that doesn’t prove to be a huge mistake. I’ve implemented the windowWidth components to try to keep my program (mostly) adaptable to any window size, but I quickly found that it ends up distorting some of the images in a weird way and making the space a bit awkward at times. So, I’m hoping that when I circle back at the end it is a fairly easy fix.

As you can see I am relying heavily on windowWidth and windowHeight for a lot of stuff and don’t really know what to expect/how to solve the issue if it starts to not perform the way I’d like.
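One likely cause of the distortion is scaling an image's width and height by different factors. A small helper that picks a single scale factor keeps the aspect ratio intact; this is a sketch of the idea, with hypothetical names:

```javascript
// Compute draw dimensions that fit an image into a target box
// without distortion, by scaling both axes by the same factor.
function fitImage(imgW, imgH, boxW, boxH) {
  const s = Math.min(boxW / imgW, boxH / imgH);
  return { w: imgW * s, h: imgH * s };
}
```

In p5.js this could be used as `const d = fitImage(img.width, img.height, windowWidth, windowHeight); image(img, 0, 0, d.w, d.h);` so resizing the window never stretches the artwork.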

Nevertheless, I will try to address these concerns by working in segments, as I have been doing. With a project like this, it’d be very easy to become overwhelmed by the ideas, the implementation, and everything else going on, so I’m trying to keep everything broken down into more digestible mini projects to complete. More specifically, in regard to the formatting concerns, I think I’m going to start recording the ratios of the important objects just to have an easily accessible reference for what I’m most concerned about (i.e., making sure the instruction manual is centered). If something small is a little off, or has some slight variation from time to time, I’ll try to address it but don’t want to become too fixated on little things.

Week 5 – Midterm Progress Report

intro

For my midterm project, although the Object Life Sim project seems promising to me, I do not have many ideas for furthering it, and I’d rather incorporate a higher degree of interactivity into this project. Therefore, I would like to integrate the elements that appeared in my previous attempts – both conceptual and technological – into a new game. For this progress report, I will start by composing a standard game design document and then touch on the technical and design difficulties.

game design documentation

1. Game Overview

Title: Motion Ships
Concept:
Motion Ships will be a space-themed action game where players control a spaceship using head movements detected via a webcam. Navigate through space environments, combat obstacles, and achieve high scores through maneuvering and sound-activated attacks.

2. Game Structure

2.1. Game Flow

  1.  Start Screen
  2. Configuration Menu
  3. Gameplay
  4. End Screen
  5. LoopBack or Exit
3. User Interface

3.1. Start Screen

Title Display: “Motion Ships”
Introduction: “Control the spaceship with your head.”
Prompt: “Press any key to start.”
Functionality: Waits for any key press to transition to the Configuration Menu.

3.2. Configuration Menu

Customization Options:
    Spaceship Selection: Choice between two spaceship models.
    Background Selection: Choice among four distinct battle backgrounds.

Controls:
    Selection Interface: Allows players to select their preferred spaceship and background before starting the game.
    Functionality: After configuration, initiates gameplay.

3.3. End Screen

Score Display: Shows the player’s final score.
Functionality: Offers players the choice to restart or modify settings post-game.

4. Gameplay Mechanics

4.1. Player Controls

Input Methods:
    Webcam-Based Head Movement Detection: Controls spaceship movement along the X and Y axes.
    Sound Input Detection: Monitors ambient sound levels to trigger beam launches.

Control Scheme:
    Movement: The spaceship moves along the X and Y axes based on the player’s head position detected by a face-detection machine learning model.
    Rotation: The spaceship rotates in real-time to mirror the direction of movement, simulating realistic aircraft maneuvers.
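The control scheme above can be prototyped with two pure helpers: one mapping the tracker's normalized head position to ship coordinates, and one computing a banking angle from the direction of motion. The ranges and the X mirroring are assumptions to tune, not final values:

```javascript
// Map a normalized head position (0..1 from the face tracker) to
// ship coordinates centered on the screen. X is mirrored so moving
// your head left moves the ship left on screen.
function headToShip(headX, headY, bounds) {
  const x = (1 - headX) * bounds.w - bounds.w / 2;
  const y = headY * bounds.h - bounds.h / 2;
  return { x, y };
}

// Bank the ship toward its direction of movement, like a real
// aircraft turning, given the previous and next positions.
function tiltAngle(prev, next) {
  return Math.atan2(next.y - prev.y, next.x - prev.x);
}
```

Per frame, the sketch would feed the latest ml5.js face position into headToShip, ease the ship toward the result, and rotate the .obj model by tiltAngle before drawing.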

4.2. Player Character

Spaceship Model:
Formats Supported: 3D .obj files.
Positioning:
    Fixed Z-Axis: The spaceship remains on a fixed plane along the Z-axis.
    Dynamic X and Y Axes: Movement is controlled via head tracking.

Energy System:
Energy Meter: Displayed on the bottom-left of the screen, indicating available energy.
Energy Consumption: Firing laser beams depletes the energy meter.
Energy Recovery: Collecting energy ores replenishes the energy meter based on the ore’s size.

4.3. Movement Mechanics

Forward Motion:
    Illusion of Movement: The spaceship appears to move forward by keeping its Z-position fixed while other objects move towards the spaceship.
    Relative Speeds: All in-game objects move at a base speed relative to the spaceship’s “forward” movement. Objects capable of independent movement have their own speed vectors in addition to the base speed.

4.4. Combat Mechanics

Attacking:
    Weapon: Laser beams automatically launched based on ambient sound levels.

Trigger Mechanism:
    Sound Input Detection: Continuously monitors the player’s microphone input.
    Threshold-Based Activation: When the ambient sound level exceeds a predefined threshold, the spaceship automatically fires laser beams.

Laser Behavior:
    Trajectory: Beams travel along the negative Z-axis from the spaceship’s X and Y positions when launching.
    Energy Cost: Each launched beam consumes a small amount of the spaceship’s energy.

Feedback Indicators:
    Visual Cue: Brief animation or flashing effect indicating that a beam has been fired due to sound input.
    Energy Indicator: Real-time update of the energy meter showing depletion upon beam firing.

4.5. Obstacles and Enemies

4.5.1. Meteoroids

Appearance: Rock-textured spherical objects.
Behavior: Typically remain fixed relative to the background.
Size Variation: Random sizes within predefined limits.

Interactions:
    Destruction: Requires a number of laser hits proportional to size to be destroyed. Upon destruction, the meteoroid shakes briefly before disappearing.
    Collision: If a meteoroid collides with the player’s spaceship, both are destroyed, resulting in game over.

4.5.2. Energy Ores

Appearance: Crystal-textured spherical objects.
Behavior: Typically remain fixed relative to the background.
Size Variation: Random sizes within predefined limits.

Interactions:
    Destruction: Requires a number of laser hits proportional to size to be destroyed. Upon destruction, the energy ore shakes briefly before disappearing, and the player’s energy is replenished based on the ore’s size.
    Collision: If an energy ore collides with the player’s spaceship, both are destroyed, resulting in game over.

4.5.3. Enemy Ships

Appearance: Alternate spaceship models not selected by the player during configuration.
Behavior: Dynamic movement across all X, Y, and Z axes relative to the base speed. Change direction randomly every few seconds.
Collision Avoidance: Actively avoid colliding with other objects except the player’s spaceship.

Interactions:
    Destruction: Requires a number of laser hits to be destroyed. Upon destruction, the enemy ship shakes briefly before disappearing.
    Collision: If an enemy ship collides with the player’s spaceship, both are destroyed, resulting in game over.

4.6. Object Spawning and Lifecycle

Spawn Mechanics:
Spawn Rate: Objects spawn at regular intervals.
Spawn Probability:
    Meteoroids and Enemy Ships: Equal likelihood.
    Energy Ores: Lower likelihood compared to meteoroids and enemy ships.

Despawn Conditions:
Objects are removed from the game once their actual Z-position surpasses the camera’s Z-position by a specified distance, ensuring optimal performance and resource management.

5. Environment and Visual Design

5.1. Backgrounds

Selection: Players can choose from four distinct battle backgrounds during the configuration phase.
Parallax Effect: Backgrounds subtly move in response to the spaceship’s movement to create a dynamic and immersive living space effect.
Camera Perspective: Fixed camera position to maintain consistent player orientation.

5.2. Lighting

Position: Fixed behind the camera.
Effect: Illuminates all objects within the 3D space, ensuring visibility and depth perception.

5.3. Graphics

3D Models:
    Spaceships: .obj models for both player and enemy ships.
    Obstacles: Spherical models for meteoroids and energy ores, with varying textures based on type (grey for meteoroids, green for energy ores).

Animations:
Destruction Effects:
Shaking animations upon object destruction before removal.
    Movement Animations: Smooth transitions and rotations for moving objects.

6. Technical Specifications

6.1. Technology Stack

Machine Learning Integration: Utilize a face-detection library (ml5.js) for webcam-based head movement tracking.
Sound Processing Library: Integrate a sound analysis library (p5.sound) to process and analyze ambient sound levels for beam activation.

6.2. Performance Considerations

Object Management: Efficient spawning and despawning of objects to minimize memory usage and ensure smooth gameplay.
Collision Detection: Algorithms for real-time detection between laser beams and obstacles/enemies, as well as between the player’s spaceship and other objects.

6.3. User Input Handling

Face Detection: Implement real-time face and head movement tracking for responsive spaceship control through webcam access.
Sound Threshold Detection: Continuously monitor ambient sound levels to trigger beam launches when thresholds are met through microphone access.

7. Additional Features and Considerations

7.1. Scoring System

Score Incrementation: Points awarded based on actions such as destroying obstacles, collecting energy ores, and surviving over time.
High Score Tracking: Persistent tracking and display of the highest score achieved across game sessions.

7.2. Audio

Background Music: Ambient space-themed music to enhance immersion.
Sound Effects:

  1. Laser Firing: Audio cues when beams are launched.
  2. Destruction: Sounds for destroying objects.
  3. Collisions: Impact sounds for collisions leading to game over.
  4. Game Over: Sound indicating the end of the game.

7.3. Accessibility

Instructions: Clear in-game instructions or tutorials to guide new players on using head movements and sound inputs for control.
Visual Indicators: HUD elements like energy meters and score displays designed for clarity and ease of reading.
Adjustable Sensitivities: Allow players to adjust head movement and sound input sensitivities to accommodate different playstyles and environments.

8. Implementation Notes

8.1. Sound-Based Beam Launching

Calibration: Provide an initial calibration phase where the game adjusts the sound input threshold based on the player’s environment to prevent unintended beam firing.
Feedback Mechanism: Visual and audio feedback when beams are launched due to sound input, helping players associate their actions with in-game events.

potential challenges

1. Real-Time Face Detection and Head Movement Tracking

Real-time video processing and face detection can be computationally intensive, leading to lag or reduced frame rates. So, implementing throttling mechanisms to control the rate of face-detection processing could be conducive to overall efficiency.

2. Sound Input Detection and Processing

Differentiating intentional sound inputs (e.g., a biu-biu-biu voice command) from ambient noise to trigger beam launches without false positives may need amplitude-based or frequency-based thresholding. It might be helpful to record a continuous baseline (RMS, LUFS, etc.) before gameplay to calculate a base noise level.
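The baseline-then-threshold idea can be sketched with two small functions: compute the RMS of a short calibration recording, then fire only when the live level clearly exceeds that floor. The margin factor is a guess to be tuned, not a measured value:

```javascript
// Root-mean-square amplitude of a calibration recording,
// used as the ambient noise floor.
function calibrateBaseline(samples) {
  const sumSq = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSq / samples.length);
}

// Fire only when the current level clearly exceeds the noise
// floor; margin is an arbitrary multiplier to tune per room.
function shouldFire(level, baseline, margin = 2.0) {
  return level > baseline * margin;
}
```

With p5.sound, `level` could come from a per-frame amplitude reading, and calibrateBaseline would run over the first few seconds before gameplay starts.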

3. Collision Detection in 3D Space

3D collision detection would be more complex and resource-intensive than the 2D detection I used in previous projects, especially with multiple moving objects and interactions. Potential solutions include:

  • Early Exit Strategies:
    Implement early exit conditions where collision checks are skipped if objects are too far apart to interact, conserving resources.
  • Simplify Collision Shapes – Axis-Aligned Bounding Boxes (AABB):
    For certain objects, AABB can offer a balance between simplicity and collision detection accuracy.
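Both bullet points can be combined into one function: a cheap distance check that exits early, followed by the full AABB overlap test. The box fields here are assumptions (boxes centered at x, y, z with extents w, h, d):

```javascript
// AABB overlap test with an early-exit distance check.
// Boxes are {x, y, z, w, h, d}, centered at (x, y, z).
function aabbOverlap(a, b, maxInteractDist = Infinity) {
  // Early exit: skip pairs too far apart on X to possibly interact.
  if (Math.abs(a.x - b.x) > maxInteractDist) return false;
  // Two centered boxes overlap when the center distance on each
  // axis is less than the sum of their half-extents.
  return Math.abs(a.x - b.x) * 2 < a.w + b.w &&
         Math.abs(a.y - b.y) * 2 < a.h + b.h &&
         Math.abs(a.z - b.z) * 2 < a.d + b.d;
}
```

For spherical meteoroids and ores, a radius-based distance check would be even simpler, but AABB generalizes to the .obj spaceship models as well.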
4. Object Spawning and Lifecycle Management

Without proper management, the continuous spawning of objects can lead to memory leaks and degraded performance over time, while the spawn rate and probability are important to maintain a balanced difficulty without overwhelming players or underutilizing game mechanics.

Therefore, adjusting spawn rates and object probabilities dynamically based on the player’s score or progression can be used to maintain engaging and balanced gameplay.

demo version

With all that said, I developed an early demo version without fine-tuning the parameters and visual effects.

Fundamentally, the first two technical difficulties are tackled at this stage, while the rest remain untouched. Personally speaking, as this project expanded in scale, I learned about game design structure and built a highly object-oriented program, keeping each piece of code as specialized as possible.