MIDTERM PROGRESS

Superman Saves (Enhanced Game)

Concept of the Project

“Superman Saves” is an interactive game inspired by my previous project, which focused on simple character movement and rescue mechanics. In this version, I aimed to elevate the project by adding dynamic challenges, such as time limits, obstacles, and a progressive difficulty system. The objective is to control Superman as he navigates the sky, rescuing individuals while avoiding clouds and birds. The game becomes increasingly difficult with each successful rescue, introducing faster obstacles and reducing the time available to complete the rescue.

The concept is rooted in creating an engaging, responsive game environment that tests the player’s reflexes and strategic thinking. By introducing new features like lives, levels, and a timer, I’ve created a more immersive experience compared to the original version, which was relatively straightforward in terms of gameplay.

 

How the Project Works

The game begins with Superman stationed at the bottom of the screen, and a person randomly placed near the bottom as well, awaiting rescue. Using arrow keys, the player can move Superman to navigate the sky, avoid clouds and birds, and reach the person. Upon reaching the person, Superman flies upwards, carrying them to the top of the screen to complete the rescue.
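For readers curious about the mechanics, here is a minimal sketch of what the arrow-key movement might look like; supermanX and supermanY match the variables used in the collision code later in this post, while moveSpeed is an assumed step size:

// Called every frame from draw(); moveSpeed is an assumed constant
function moveSuperman() {
  const moveSpeed = 5;
  if (keyIsDown(LEFT_ARROW)) supermanX -= moveSpeed;
  if (keyIsDown(RIGHT_ARROW)) supermanX += moveSpeed;
  if (keyIsDown(UP_ARROW)) supermanY -= moveSpeed;
  if (keyIsDown(DOWN_ARROW)) supermanY += moveSpeed;

  // Keep Superman inside the canvas
  supermanX = constrain(supermanX, 0, width);
  supermanY = constrain(supermanY, 0, height);
}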

A notable feature of the game is its dynamic difficulty adjustment. Each successful rescue increases the game’s difficulty by speeding up the clouds and bird movements, which adds a sense of progression. Additionally, the inclusion of a timer introduces a layer of urgency, forcing players to make quick decisions. I’m particularly proud of how the game manages the timer, lives system, and level progression seamlessly, as these were complex components to implement but significantly enhanced the overall experience.
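To give a sense of how these pieces fit together, here is a simplified sketch of the per-rescue update; the names (level, obstacleSpeed, rescueTime, lives) and the specific numbers are illustrative assumptions, not the exact values in my code:

// Sketch of the difficulty progression; all names and numbers are illustrative
function onRescueComplete() {
  score += 1;
  level += 1;
  obstacleSpeed += 0.5;                 // clouds and birds speed up
  rescueTime = max(5, rescueTime - 1);  // less time per rescue, with a floor
  respawnPerson();                      // hypothetical helper: place a new person
}

function onTimerExpired() {
  lives -= 1;                           // running out of time costs a life
  if (lives <= 0) gameOver = true;
}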

The code uses object-oriented programming principles to manage the background stars, obstacles, and gameplay mechanics. I took advantage of arrays to efficiently handle the stars’ animations and the positioning of various game elements.
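As an illustration of that approach (a sketch under assumed names, not the literal midterm code), the stars can live in a single array of plain objects and be animated in one loop:

// Illustrative sketch: stars stored as an array of simple objects
let stars = [];

function initStars(count) {
  for (let i = 0; i < count; i++) {
    stars.push({
      x: random(width),
      y: random(height),
      size: random(1, 4),
      speed: random(0.5, 2),
    });
  }
}

// Called from draw(): render and animate every star in one pass
function drawStars() {
  noStroke();
  fill(255);
  for (let star of stars) {
    ellipse(star.x, star.y, star.size);
    star.x += star.speed;
    if (star.x > width) star.x = 0;  // wrap back to the left edge
  }
}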

 

Areas for Improvement and Challenges

One area that could be improved is the game’s overall visual design. Although the current visual elements (e.g., clouds, birds, and Superman) are functional, they could benefit from more detailed and polished artwork. Additionally, I would like to enhance the sound effects in the future, adding background music and sound cues for when Superman successfully completes a rescue or collides with an obstacle.

I encountered a few challenges during development, particularly in managing the game’s timer and ensuring that collisions between Superman and obstacles felt fair and consistent. I resolved this by tweaking the collision detection algorithm and adjusting the movement speeds of the obstacles as the difficulty increases. Another issue was ensuring that the game feels balanced at higher levels, where the speed increase can quickly overwhelm players. However, after adjusting the difficulty curve, the gameplay experience became smoother.
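One balancing trick of the kind described above can be sketched as a capped speed curve (baseSpeed, speedStep, and maxSpeed are illustrative names and values, not the ones in my code):

// Illustrative difficulty curve: speed grows with level but is capped,
// so later levels stay challenging without becoming unwinnable
function obstacleSpeedForLevel(level) {
  const baseSpeed = 2;
  const speedStep = 0.4;
  const maxSpeed = 7;
  return min(maxSpeed, baseSpeed + speedStep * (level - 1));
}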

 

EMBEDDED SKETCH

 

LINK TO FULL SCREEN

https://editor.p5js.org/b_Buernortey_b/full/p0Rs9Tzbk

Week 4 – Reading Reflection

Don Norman’s “The Design of Everyday Things” highlights how poor design impacts our daily interactions with common objects. In the first chapter, he introduces the concept of discoverability, which emphasizes that people should be able to figure out how something works simply by looking at it. Norman explains that objects like doors or light switches can become confusing when they lack clear visual cues, often leading users to make mistakes. The idea of feedback is also essential, ensuring users receive confirmation that their actions—such as opening a door or turning off a light—are completed successfully. Norman’s focus is on making designs intuitive and straightforward, so users can easily understand them without needing instructions.

One frustration I experience, which Norman's work sheds light on, is how stressful it can be to use modern smart devices like home assistants or thermostats. These devices often overwhelm users with too many options and hidden functions, making them difficult to operate. Applying Norman's principles of signifiers and mapping could make these systems more user-friendly. For example, clearer labels, icons, or simple gestures could improve usability. Additionally, feedback through visual or sound cues would help users feel confident that they've completed a task correctly. Norman's focus on human-centered design highlights the importance of keeping the user in mind, ensuring that products are approachable and simple to navigate. This approach could significantly improve our interactions with modern technology, reducing frustration and making these tools more accessible.

Week 4 – Generative Text

Concept

In this project, I created an interactive tool that lets users see how emotions can be expressed through emojis. The idea was inspired by my brother's habit of misusing emojis: each emoji represents a specific feeling, and clicking on one reveals a contextual message that describes when to use that emoji.

Implementation

The implementation of this interactive tool uses p5.js to create an engaging experience for users. It starts with an array of emoji objects that include their symbols, emotions, and positions on the canvas. The `setup()` function initializes the canvas size and text settings. In the `draw()` function, emojis are displayed along with a gradient background that transitions from light blue to dark slate blue for visual appeal. When users click on an emoji, the `mousePressed()` function checks if the click is near an emoji, then shows a message above it explaining its meaning. Overall, this simple structure effectively helps users understand emoji meanings in a fun and interactive way.
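A minimal sketch of the data structure and gradient described above (the field names and color values are close to, but not necessarily identical to, the ones in my sketch):

// Each emoji object stores its symbol, the emotion it conveys,
// and its position on the canvas
let emojis = [
  { symbol: "😀", emotion: "happy", x: 100, y: 200 },
  { symbol: "😢", emotion: "sad", x: 200, y: 200 },
  { symbol: "😡", emotion: "angry", x: 300, y: 200 },
];
let selectedEmoji = null;

// Vertical gradient from light blue to dark slate blue,
// drawn line by line at the start of draw()
function drawGradient() {
  for (let y = 0; y < height; y++) {
    let c = lerpColor(color(173, 216, 230), color(72, 61, 139), y / height);
    stroke(c);
    line(0, y, width, y);
  }
}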

Highlight

One aspect of the code that I’m particularly proud of is the text generation feature that dynamically displays contextual messages when users click on an emoji. By using simple logic in the `mousePressed()` function, the code checks the position of the mouse relative to each emoji and identifies the selected one. This triggers a specific message that appears above the emoji, explaining its meaning and when to use it. 

This dynamic text generation not only enhances interactivity but also provides an educational element, helping users understand the emotional context behind each emoji. I appreciate how this feature brings the project to life, making it more engaging and informative. The clarity of the messages ensures that users leave with a better understanding of emojis, which is the core goal of this tool. This is how I implemented it:

function mousePressed() {
  // Check if an emoji is clicked
  for (let emoji of emojis) {
    let d = dist(mouseX, mouseY, emoji.x, emoji.y);
    if (d < 30) {  // Check if the mouse is close enough to the emoji
      selectedEmoji = emoji;  // Set the selected emoji
      break;
    }
  }
}

When a user clicks near an emoji, the code checks the distance between the mouse pointer and the emoji’s position. If the distance is small enough, it sets that emoji as the selected emoji, allowing the following code in the draw() function to display the corresponding message:

if (selectedEmoji != null) {
  textSize(24);  // Smaller text for the message
  fill(0);
  text(`Use this emoji when you are ${selectedEmoji.emotion}.`, selectedEmoji.x, selectedEmoji.y - 50);  // Display above the emoji
}

Embedded Sketch

 

Reflections and Ideas for Future Work

Looking ahead, I plan to enhance this project by adding more emojis, including diverse options, and allowing users to submit their own. Additionally, I would like to incorporate a quiz feature to make learning about emojis more fun and engaging. These improvements will help create a more comprehensive tool for understanding emojis and their meanings.

Floating Balloons

Concept

The inspiration for using balloons in this project came from a Chelsea football match I watched on Saturday, September 14. During the game, my team struggled to score until a substitute player finally found the net. In his celebration, he blew up a balloon, which sparked the idea to create a project centered around balloons. I wanted to capture that moment of excitement and celebration by translating it into an interactive experience where each balloon expands and changes color when clicked. My goal was to apply the programming concepts I’ve learned so far to build something creative and playful using balloons.

Christopher Nkunku, the Chelsea player who scored the only goal of the game and performed his iconic balloon celebration.

 

Implementation

The project is built using p5.js and applies Object-Oriented Programming (OOP) to create interactive floating balloons. A `Balloon` class is defined, with properties like position, size, color, and growth rate. Each balloon floats upwards and resets when it reaches the top of the canvas. On each mouse click, the clicked balloon changes color and expands by a set amount.

The `setup()` function initializes an array of balloons with random attributes, while the `draw()` function continuously updates and displays the balloons. The `mousePressed()` function checks for clicks and triggers balloon growth and color changes. This structure efficiently demonstrates object interaction, animation, and user engagement with OOP principles.
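For context, here is a hedged sketch of the class skeleton (simplified, with assumed values; the grow(), isClicked(), and clicked() methods shown in the next section complete it):

class Balloon {
  constructor() {
    this.x = random(width);
    this.y = height + random(50);      // start just below the canvas
    this.balloonSize = random(30, 60);
    this.color = color(random(255), random(255), random(255));
    this.speed = random(1, 3);         // upward float speed
    this.growthAmount = 5;             // pixels added per click
  }

  // Float upwards and reset to the bottom after leaving the top edge
  update() {
    this.y -= this.speed;
    if (this.y < -this.balloonSize) {
      this.y = height + this.balloonSize;
      this.x = random(width);
    }
  }

  display() {
    fill(this.color);
    ellipse(this.x, this.y, this.balloonSize);
  }

  // ...plus the grow(), isClicked(), and clicked() methods shown below
}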

Highlight

One aspect of the code that I enjoyed implementing was the incremental growth of balloons on each mouse click. Instead of making the balloon instantly expand to a large size, I wanted each click to produce a smooth, controlled increase. To achieve this, I implemented the grow() and clicked() methods, along with an isClicked() helper, within the Balloon class. Here is the code for this highlight:

  // Incrementally expand the balloon when clicked
  grow() {
    this.balloonSize += this.growthAmount;  // Increase size by a fixed amount per click
  }

  // Check if the balloon is clicked
  isClicked(px, py) {
    let d = dist(px, py, this.x, this.y);  // Distance from mouse to balloon center
    return d < this.balloonSize / 2;  // Check if mouse is inside the balloon
  }

  // Change color and grow when clicked
  clicked() {
    this.color = color(random(255), random(255), random(255));  // New random color
    this.grow();  // Incremental growth with each click
  }
} // end of the Balloon class

 

Embedded Sketch 

 

Reflections and Ideas for Future Work

Looking ahead, there are several ways I could improve and expand upon this project. One idea is to implement a balloon popping mechanic, where balloons could “pop” after reaching a certain size, adding another layer of interaction and completion. This could involve sound effects to make the interaction even more satisfying and immersive for the user.
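As a rough sketch, that pop check could extend the existing grow() method (maxSize and pop() are hypothetical additions, not part of the current code):

  // Hypothetical extension of grow(): pop once a size threshold is passed
  grow() {
    this.balloonSize += this.growthAmount;
    if (this.balloonSize > this.maxSize) {  // maxSize would be set in the constructor
      this.pop();  // hypothetical method: play a sound and respawn the balloon
    }
  }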

Additionally, I could introduce different behaviors for each balloon, such as varying floating speeds or randomized directions, to make the scene feel more dynamic. Another potential improvement would be to add more complex user interactions, like dragging balloons or having them respond to other inputs such as keyboard presses.

Finally, I’d like to explore using advanced animations to create a more gamified experience. These ideas could help make the project more engaging and add more variety for the user.

Week 3 Reading Reflection

Chris Crawford’s “The Art of Interactive Design” emphasizes the importance of interactivity as more than just a simple response to input. Crawford defines true interactivity as a system’s ability to listen, think, and respond, much like a conversation. This approach highlights the need for thoughtful engagement between the user and the system, ensuring that interactions feel dynamic and meaningful. Systems that effectively apply this model create ongoing exchanges, where both user and system influence each other, ultimately making the user feel actively involved and engaged in the process.

In improving the level of interactivity in my p5.js sketches, I aim to apply these concepts by going beyond simple triggers and creating a more thoughtful, adaptive experience. Instead of a one-time action from the user generating a single response, I want the system to react dynamically, changing multiple aspects of the sketch based on continuous input. For example, rather than a simple click to trigger an animation, the user’s actions could influence multiple variables like movement, color, and object behavior, creating a more engaging and evolving experience. Crawford’s “listening, thinking, and responding” approach can help me develop sketches that offer a richer interaction and make the user feel more connected to the experience.

Week 2: Superman Saves

Concept

For this project, I created an interactive animation where the player controls Superman in his mission to save people. The gameplay is simple: Superman must avoid clouds (which act as obstacles) while rescuing a person from the bottom of the screen and flying them to the top. Players use the arrow keys to control Superman’s movement, and the background color changes dynamically when the mouse is clicked. The score updates every time Superman successfully saves a person, adding a sense of progress.

Code

The program begins by creating a 400×600 canvas and setting Superman’s initial position near the bottom of the screen. The person to be saved is randomly placed at the bottom of the screen. The background features a dynamic gradient that shifts colors when the player clicks the mouse, with animated stars moving across the sky to enhance the visual appeal.

For managing the positions of clouds, I used individual variables for each cloud’s position, speed, and size. However, managing stars in the same way proved to be challenging because I wanted to include a large number of them. Using individual variables for each star would have been cumbersome. Therefore, I used arrays to store the stars’ positions, sizes, and speeds, making it easier to manage and update their movement throughout the animation. This approach works well for handling many stars while still keeping the code organized.
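Concretely, the star setup looks roughly like this sketch (the counts and ranges are illustrative assumptions):

// Parallel arrays: index i describes star i in all four arrays
let starX = [], starY = [], starSize = [], starSpeed = [];

function setupStars(count) {
  for (let i = 0; i < count; i++) {
    starX.push(random(width));
    starY.push(random(height / 2));  // keep the stars in the upper sky
    starSize.push(random(1, 4));
    starSpeed.push(random(0.5, 2));
  }
}

// Called from draw(): move and redraw every star
function updateStars() {
  fill(255);
  for (let i = 0; i < starX.length; i++) {
    ellipse(starX[i], starY[i], starSize[i]);
    starX[i] += starSpeed[i];
    if (starX[i] > width) starX[i] = 0;  // wrap around the screen
  }
}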

Highlight

One part of the code that I’m particularly proud of is the interaction between Superman and the clouds. The clouds move across the screen, and Superman has to avoid them while rescuing the person. If he hits a cloud, his position resets, adding a challenge to the game. Here’s the code that handles cloud movement and collision detection:

// Function to draw a cloud and check for a collision with Superman
function drawCloud(x, y) {
  fill(255); // Cloud color (white)
  ellipse(x, y, 60, 40); // Draw cloud as an ellipse

  // Check if Superman collides with a cloud
  if (dist(supermanX, supermanY, x, y) < 50) {
    supermanX = width / 2;
    supermanY = height - 100; // Reset Superman's position
  }
}

 

Embedded Sketch


USE THE ARROW KEYS ON THE KEYBOARD TO CONTROL THE MOVEMENT OF SUPERMAN

 

Reflection and Ideas for Future Work

Overall, I’m satisfied with how the interactive elements of the animation came together, particularly the moving clouds and the dynamic background. These features make the scene feel more alive and add a layer of interest. However, I think I can improve the visual design of the characters. Adding more details to Superman and the person being saved would enhance the overall look.

One challenge I faced was managing multiple objects, particularly the stars. However, using arrays made it much easier to handle their positions and behaviors. This approach allowed for smoother animations and better organization of the code, simplifying the process of managing multiple stars at once.

In the future, I’d like to focus on refining the visuals, especially Superman’s appearance, and possibly add more animation to make the scene more engaging. For example, animating Superman’s cape or adding small interactions when he saves the person would be nice improvements.

Weekly Reflection (Creative Reading): Week 2

Reflection:

Casey Reas’ video shows how structure and randomness blend together in digital art. He explains that with the use of software, artists can combine strict rules with random elements to create pieces that are both unique and unpredictable. This approach challenges the traditional view of art, where the artist had full control over every detail. Now, algorithms can introduce unexpected outcomes, making the process less about direct control and more about exploring the balance between order and chance. This new way of creating art mirrors life itself, where both structure and unpredictability constantly interact.

Watching this video made me reflect on how creativity is evolving. Reas shows that creativity is no longer just about human control or intention; it can also come from the interaction between the artist and the machine. This got me thinking about whether using randomness and algorithms might make the art less personal or if it actually enhances the creative process by introducing new possibilities that an artist might not think of alone. I’m curious to discuss these ideas with others and hear different perspectives on whether this shift in the role of the artist adds value to art or takes away from it. The questions Reas raises make me wonder how much of creativity comes from us and how much can come from machines, and I believe exploring this could change the way we understand the art of the future.

Week 1 – Self Portrait: MANG-F2024

Concept

For this self-portrait, I wanted to create something that shows who I am but also adds some fun, interactive elements. I focused on making my hair, face, and upper body look as close to real as I could. I also added cool features like changing the background color when you click the mouse and making the eyes follow the cursor. Plus, there is a custom wand that moves with your mouse, adding a playful touch to the whole thing.

Code

The code begins by setting up a canvas that is 400×400 pixels in size and gives it a light blue background. The default mouse cursor is hidden to make the custom wand cursor more prominent.

The draw() function runs continuously, which allows the portrait to update in real-time. The background color changes to a random color when the mouse is pressed and resets to the original blue when released.
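A minimal sketch of that behavior (the exact color values are assumptions):

// Assumed sketch: swap the background color on press, restore on release
let bgColor = [173, 216, 230];  // the original light blue (assumed value)

function mousePressed() {
  bgColor = [random(255), random(255), random(255)];
}

function mouseReleased() {
  bgColor = [173, 216, 230];
}

// Then, at the top of draw():
// background(bgColor[0], bgColor[1], bgColor[2]);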

The main features of the face—head, eyes, nose, mouth, ears, and hair—are drawn using basic shapes like ellipses, rectangles, and triangles. The pupils are interactive and follow the mouse cursor, giving the eyes a dynamic effect. The shoulders and chest area are represented with curves and rectangles to add more detail to the portrait.

Finally, a custom wand cursor is drawn at the mouse position, with a black handle and a yellow tip, adding a playful touch to the sketch.
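Sketched roughly, the wand is just two shapes drawn at the end of draw() so they sit on top of everything (the sizes and offsets here are assumptions):

function setup() {
  createCanvas(400, 400);
  noCursor();  // hide the default cursor so the wand replaces it
}

// Called at the end of draw() so the wand renders above the portrait
function drawWand() {
  stroke(0);
  strokeWeight(4);
  line(mouseX, mouseY, mouseX + 30, mouseY + 30);  // black handle
  noStroke();
  fill(255, 255, 0);
  ellipse(mouseX, mouseY, 12);  // yellow tip at the cursor position
}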

Highlight

One part of my code that I’m really happy with is the way the pupils of the eyes move. I made it so that the pupils follow the mouse as you move it around. This makes the eyes look like they’re actually looking at the cursor, which brings the portrait to life. Here’s the code that does this:

 

// Interactive Pupils Movement
fill(0);  // Sets the fill color to black for the pupils
let leftPupilX = map(mouseX, 0, width, 165, 175);  // Maps mouseX to move the left pupil within the eye
let rightPupilX = map(mouseX, 0, width, 225, 235);  // Maps mouseX to move the right pupil within the eye
ellipse(leftPupilX, 180, 10, 10);  // Left pupil
ellipse(rightPupilX, 180, 10, 10);  // Right pupil

This part of the code lets the pupils move depending on where the mouse is. It’s a small detail, but it makes the portrait feel more interactive and real.

 

Embedded Sketch


 

Reflection and Ideas for Future Work

Looking back, I’m happy with how the interactive features turned out, especially the moving pupils and the changing background color. These parts make the portrait more fun to play with. But, I think I could improve the details of the hair and the face to make them look better. In the future, I would like to work on shading and maybe add more interactive features, like changing facial expressions. I also want to get the proportions of the head and face just right to make the portrait even more realistic.