Midterm Project Zayed Alsuwaidi

To bring the initial concept of Russian Roulette to life, I decided to use generative AI for the images. I faced several issues sketching the artwork myself on my iPad, and I wanted the art to have surreal realism and impact. The images are therefore generated by DALL-E.

Here are the Game Mechanics:
Players

  • Player: Starts with 100 health; the health bar is at the top-left (50, 50).
  • Opponent: Also 100 health, bar is at top-right (650, 50).
  • Health: Drops by 50 when hit. Game’s over if either of us hits 0.

Gun

  • It’s a 6-shot revolver (shotsFired tracks how many times it’s fired).
  • The chamber’s random—loaded or empty (isLoaded)—and resets on reload.
  • Every shot counts toward the 6, whether it’s a bang or a click.
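
The write-up doesn't include the Gun class itself, so here is a minimal sketch of how it could work based on the mechanics above (a sketch, not the actual code: the names shotsFired and isLoaded come from the description, and re-randomizing the chamber on every pull is my assumption; Math.random stands in for p5's random so it runs on its own):

```javascript
// Hypothetical Gun class matching the mechanics described above.
class Gun {
  constructor() {
    this.reload();
  }

  // Randomly load or empty the chamber and reset the shot counter.
  reload() {
    this.shotsFired = 0;
    this.loaded = Math.random() < 0.5;
  }

  isLoaded() {
    return this.loaded;
  }

  // Every pull counts toward the 6, whether it's a bang or a click.
  shoot() {
    this.shotsFired++;
    const fired = this.loaded;
    this.loaded = Math.random() < 0.5; // re-randomize for the next pull (my assumption)
    return fired;
  }

  needsReload() {
    return this.shotsFired >= 6;
  }
}
```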

States

  • “between”: Waiting for the choice to shoot (S) or pass (N).
  • “playerTurn”: Player shoots, showing playergun.gif.
  • “opponentTurn”: Opponent shoots, showing persongun.gif.
  • “reload”: After 6 shots, we reload, and they shoot at me!
  • “gameOver”: One is down; hit R to restart.
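
The key-to-state mapping above can be sketched as a small pure helper (hypothetical: the real Game class dispatches to methods like playerTurn() instead of returning state strings, but the transitions are the same):

```javascript
// Returns the state the game should move to when a key is pressed.
// Keys that don't apply in the current state are ignored.
function nextStateForKey(state, key) {
  if (state === "between") {
    if (key === "S" || key === "s") return "playerTurn";   // I shoot
    if (key === "N" || key === "n") return "opponentTurn"; // pass to the opponent
  }
  if (state === "gameOver" && (key === "R" || key === "r")) {
    return "between"; // restart
  }
  return state;
}
```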

My Journey Building It
I started with the basics—getting the images and health bars up. That was smooth, but then I hit a wall with sounds. I added gunshot.m4a for hits, and it worked sometimes, but other times—nothing. That was frustrating. Turns out, browsers block audio until you interact with the page, so I had to trigger it after a key press. Even then, emptyclick.m4a wouldn’t play right when the opponent fired an empty shot. I kept seeing “sound not loaded” in the console and realized the timing was off with setTimeout. I fixed it by storing the shot result in a variable and making sure the sound played every time—loaded or not. Adding the reload mechanic was tricky too; I wanted the opponent to shoot during it, but keeping the flow consistent took some trial and error.

Image:

The image (From DALL-E) I used to depict the person wearing a silk mask covering his facial features and creating a feeling of suspense:

 

 

Gameplay Flow

  1. Start: Game kicks off in “between” with nogun.gif.
  2. My Turn:
    • S: I shoot.
      • Loaded: gunshot.m4a, light flashes, opponent loses 50 health.
      • Empty: emptyclick.m4a, no damage.
    • N: Pass to the opponent.
  3. Opponent’s Turn: They always shoot now (I made it consistent!).
    • Loaded: gunshot.m4a, light flashes, I lose 50 health.
    • Empty: emptyclick.m4a, no damage.
  4. Reload: After 6 shots:
    • Switches to “reload”, shows nogun.gif.
    • Gun resets with a random chamber.
    • Opponent shoots at me:
      • Loaded: gunshot.m4a, light flash, I take damage.
      • Empty: emptyclick.m4a, I’m safe.
    • Back to “between”.
  5. Game Over: When my health or the opponent’s hits 0.
    • “Game Over! Press ‘R’ to restart” shows up, and R starts it over.

Controls

  • S: I shoot.
  • N: Pass to the opponent.
  • R: Restart when it’s over.

Visuals

  • Canvas: 800×600—big enough to see everything.
  • Images: Centered at (400, 300), 300×300 pixels.
  • Health Bars: Red base (100 wide), green shrinks as health drops.
  • Light Effect: A cool yellow-white flash when a shot lands—fades out fast.
  • Instructions: Text at the bottom tells me what’s up.
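
The health bars described above can be sketched like this (a sketch under the numbers given: a 100-wide red base with green scaled by health; drawHealthBar assumes a p5.js context, while the width math is factored out into a plain function):

```javascript
// Width of the green portion of the bar, clamped so it never goes negative.
function healthBarWidth(health, maxHealth = 100, barWidth = 100) {
  return Math.max(0, barWidth * (health / maxHealth));
}

// Draws the bar at (x, y): red base first, green on top shrinking with health.
// fill() and rect() come from p5.js.
function drawHealthBar(x, y, health) {
  fill(255, 0, 0);
  rect(x, y, 100, 15);
  fill(0, 255, 0);
  rect(x, y, healthBarWidth(health), 15);
}
```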

Audio

  • Gunshot: gunshot.m4a for a hit—loud and punchy.
  • Empty Click: emptyclick.m4a for a miss—subtle but tense.
  • Volume: Set both to 0.5 so my ears don’t hate me.

Overcoming Challenges
The sounds were my biggest headache. At first, gunshot.m4a only played after I clicked something—browser rules, ugh. I fixed that by tying it to key presses. Then emptyclick.m4a kept skipping when the opponent shot an empty chamber. I dug into the code and saw the random shoot chance was messing with the timing. I simplified it—stored the shot result, made the opponent shoot every time, and checked isLoaded() right before playing the sound. Now it’s rock-solid.

Here is the player-turn logic after the fix:

  playerTurn() {
    this.state = "playerTurn";
    this.currentImg = playerGunImg;
    let shotFired = this.gun.shoot();
    if (shotFired) {
      if (gunshotSound.isLoaded()) {
        gunshotSound.play();
      }
      this.opponent.takeDamage();
      this.flashAlpha = 255;
    } else {
      if (emptyClickSound.isLoaded()) {
        emptyClickSound.play();
      }
    }
    setTimeout(() => this.checkReloadOrNext(), 1000);
  }

Key Gameplay Highlight

  • Reload Mechanic: The reload’s risky and cool—opponent gets a free shot!

Here is how I handled the opponent’s turn:

  opponentTurn() {
    this.state = "opponentTurn";
    setTimeout(() => {
     let shotFired = this.gun.shoot();
     if (shotFired) {
       if (gunshotSound.isLoaded()) {
         gunshotSound.play();
       }
       this.player.takeDamage();
       this.flashAlpha = 255;
     } else {
       if (emptyClickSound.isLoaded()) {
         emptyClickSound.play();
       }
     }
     setTimeout(() => this.checkReloadOrNext(), 1000);
   }, 1000);
 }

 checkReloadOrNext() {
   if (this.gun.needsReload()) {
     this.reloadGun();
   } else {
     this.nextRound();
   }
 }

 

 

The reloadGun() method has the logic for resetting the gun and surviving that tense moment.
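
As a sketch of what reloadGun() might look like, consistent with the flow described earlier (assumptions: the setTimeout delays from the real code are omitted so the sequence stays visible, and the sound/flash effects are passed in as callbacks rather than hard-coded):

```javascript
// Hypothetical reloadGun(): reset the gun, let the opponent take a free shot
// at the player, then return to the "between" state.
function reloadGun(game, playShotEffects, playEmptyClick) {
  game.state = "reload";           // shows nogun.gif in the real game
  game.gun.reload();               // fresh random chamber
  const fired = game.gun.shoot();  // opponent's free shot
  if (fired) {
    game.player.health -= 50;
    playShotEffects();             // gunshot sound + light flash
  } else {
    playEmptyClick();              // empty click, no damage
  }
  game.state = "between";
  return fired;
}
```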

Technical Bits

  • Classes:
    • Player: Tracks my health and draws the bar.
    • Gun: Manages the 6-shot limit and random chamber.
    • Game: Runs the show—states, visuals, all of it.
  • p5.js Stuff:
    • preload(): Loads my assets.
    • setup(): Sets up the 800×600 canvas and sound volumes.
    • draw(): Keeps everything on screen.
    • keyPressed(): Listens for S, N, R.

Endgame
It’s over when my health or the opponent’s hits 0. I see “Game Over! Press ‘R’ to restart”, hit R, and it’s back to square one—health full, gun reset.

What’s Next?
Maybe I’ll add a manual reload key or a score counter. Also, I would re-design the game with different artwork to make it more immersive.

Here is a snippet of the state handling, which was key to ensuring less redundancy by preventing procedural programming of the game. Also, the states handle the logic, and this was key in establishing how the game runs i.e if there is an issue with the Game Class, the rest of the gameplay mechanics are directly impacted.

class Game {
  constructor() {
    this.player = new Player("Player", 50, 50); // Player health bar at top-left
    this.opponent = new Player("Opponent", 650, 50); // Opponent health bar at top-right
    this.gun = new Gun();
    this.state = "between"; // States: "between", "playerTurn", "opponentTurn", "reload", "gameOver"
    this.currentImg = noGunImg; // Start with no gun image
    this.flashAlpha = 0; // For light effect transparency
  }
}

 

 

Assignment 5: Midterm Project Update

I developed “Dragon Ball Z: Power Level Training,” an engaging and nostalgic game that captures the essence of the iconic anime series. This interactive experience allows players to step into the shoes of a Dragon Ball Z warrior, focusing on the thrilling power-up sequences that made the show so memorable. Players start with a low power level and, through rapid clicking, increase their strength while watching their character’s energy aura grow. The game features familiar visual and audio elements from the series, including character sprites, power level displays, and the unmistakable sound of powering up. As players progress, they encounter milestones that pay homage to famous moments from the show, culminating in a final power-level goal that, when reached, declares the player a true warrior.

📋Assignment Brief

  • Make an interactive artwork or game using everything you have learned so far
  • Can have one or more users
  • At least one shape
  • At least one image
  • At least one sound
  • At least one on-screen text
  • Object Oriented Programming
  • The experience must start with a screen giving instructions and wait for user input (button / key / mouse / etc.) before starting
  • After the experience is completed, there must be a way to start a new session (without restarting the sketch)

💭Conceptualisation

The idea for “Dragon Ball Z: Power Level Training” was born from a deep appreciation for the iconic anime series and a desire to recreate its most thrilling moments in an interactive format. As a long-time fan of Dragon Ball Z, I’ve always been captivated by the intense power-up sequences that often served as turning points in epic battles. The image of characters like Goku, surrounded by a growing aura of energy as they pushed their limits, has become a defining element of the series.

This project concept emerged while rewatching classic Dragon Ball Z episodes, particularly those featuring transformations and power level increases. I was struck by how these moments, despite their simplicity, generated immense excitement and anticipation among viewers. I wanted to capture this essence and allow players to experience the rush of powering up firsthand. The idea evolved to focus on the visual and auditory aspects of powering up, combining the growing energy aura, rising power level numbers, and the distinctive sounds associated with these transformations.

By digitalizing this experience, I aimed to create an interactive homage to Dragon Ball Z that would resonate with fans and newcomers alike. The game’s design intentionally incorporates key visual elements from the series, such as the character sprites and power level displays, to evoke nostalgia while offering a fresh, interactive twist on the power-up concept. This project not only serves as a tribute to the series but also as an exploration of how iconic pop culture moments can be transformed into engaging interactive experiences.

💻Process

I practiced making classes for certain elements, as that is what I struggle with most. I created classes for the Characters and the Auras around them. Through this, I solidified my ability with classes and can now use them for even more features.

class Character {
  constructor(name, x, y) {
    this.name = name;
    this.x = x;
    this.y = y;
    this.powerLevel = 100; // Starting power level
    this.sprite = null; // Will hold the character's image
    this.aura = new Aura(this); // Create an aura for this character
    this.powerUpSound = null; // Will hold the power-up sound
  }

  // Load character sprite and power-up sound
  loadAssets(spritePath, soundPath) {
    // Load the sprite image
    loadImage(spritePath, img => {
      this.sprite = img;
    });
    // Load the power-up sound
    this.powerUpSound = loadSound(soundPath);
  }

  // Increase power level and grow aura
  powerUp() {
    this.powerLevel += 50;
    this.aura.grow();
    // Play power-up sound if loaded
    if (this.powerUpSound && this.powerUpSound.isLoaded()) {
      this.powerUpSound.play();
    }
  }

  // Display the character, aura, and power level
  display() {
    push(); // Save current drawing style
    this.aura.display(); // Display aura first (behind character)
    if (this.sprite) {
      imageMode(CENTER);
      image(this.sprite, this.x, this.y);
    }
    // Display character name and power level
    textAlign(CENTER);
    textSize(16);
    fill(255);
    text(`${this.name}: ${this.powerLevel}`, this.x, this.y + 60);
    pop(); // Restore previous drawing style
  }

  update() {
    // Add any character-specific update logic here
    // This could include animation updates, state changes, etc.
  }
}

class Aura {
  constructor(character) {
    this.character = character; // Reference to the character this aura belongs to
    this.baseSize = 100; // Initial size of the aura
    this.currentSize = this.baseSize;
    this.maxSize = 300; // Maximum size the aura can grow to
    this.color = color(255, 255, 0, 100); // Yellow, semi-transparent
    this.particles = []; // Array to hold aura particles
  }

  // Increase aura size and add particles
  grow() {
    this.currentSize = min(this.currentSize + 10, this.maxSize);
    this.addParticles();
  }

  // Add new particles to the aura
  addParticles() {
    for (let i = 0; i < 5; i++) {
      this.particles.push(new AuraParticle(this.character.x, this.character.y));
    }
  }

  // Display the aura and its particles
  display() {
    push(); // Save current drawing style
    noStroke();
    fill(this.color);
    // Draw main aura
    ellipse(this.character.x, this.character.y, this.currentSize, this.currentSize);
    
    // Update and display particles
    for (let i = this.particles.length - 1; i >= 0; i--) {
      this.particles[i].update();
      this.particles[i].display();
      // Remove dead particles
      if (this.particles[i].isDead()) {
        this.particles.splice(i, 1);
      }
    }
    pop(); // Restore previous drawing style
  }
}
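
The Aura class above calls into an AuraParticle class that isn't shown. Here is a minimal sketch of what it might look like (my assumption: position, upward drift, and a lifespan that doubles as the alpha value; display() assumes a p5.js context):

```javascript
// Hypothetical AuraParticle: one speck of energy that drifts up and fades out.
class AuraParticle {
  constructor(x, y) {
    this.x = x + (Math.random() - 0.5) * 40; // scatter around the character
    this.y = y + (Math.random() - 0.5) * 40;
    this.vy = -(1 + Math.random() * 2);      // drift upward like rising energy
    this.lifespan = 255;                     // also used as alpha when drawing
  }

  update() {
    this.y += this.vy;
    this.lifespan -= 8; // fade a little each frame
  }

  isDead() {
    return this.lifespan <= 0;
  }

  // Drawing uses p5.js: noStroke(), fill(), ellipse().
  display() {
    noStroke();
    fill(255, 255, 0, this.lifespan); // yellow, fading out
    ellipse(this.x, this.y, 8, 8);
  }
}
```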

I would like to clarify that I did use ChatGPT to help me understand classes further, and it guided me as I edited this code. However, the bulk of the work is mine.

🚩Predicted Challenges

One of the most intricate tasks will be implementing a particle system to create a dynamic, flowing energy aura around the character. This will require crafting a Particle class with properties like position, velocity, and lifespan, as well as methods for updating and displaying particles. Managing the creation and removal of particles based on the character’s power level will add another layer of complexity to this feature.

Customizing sounds for each character, particularly matching their iconic screams and power-up vocalizations, presents a unique challenge in this project. Dragon Ball Z is known for its distinctive character voices, and replicating this authenticity in the game will require careful sound editing and implementation. Finding high-quality audio clips that capture the essence of each character’s voice, while also ensuring they fit seamlessly into the game’s audio landscape, will be a time-consuming process.

The use of character sprites will be another difficult process, especially given that extracting character models from sprite sheets is a relatively new technique for me. Sprite sheets are efficient for storing multiple animation frames in a single image, but working with them requires a solid understanding of image slicing and animation timing. Learning how to properly extract individual frames, create smooth animations, and manage different character states (idle, powering up, transformed) will likely involve a steep learning curve. This process may involve trial and error, as well as research into best practices for sprite animation in p5.js.
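
As a concrete starting point for that learning curve: in p5.js, slicing one frame out of a sprite sheet can be done with the nine-argument image() call, which takes a destination rectangle followed by a source rectangle within the sheet. The rectangle math is factored into a plain helper here (the grid layout and the function names are my assumptions, not code from the project):

```javascript
// Source rectangle for frame `frameIndex` in a sheet laid out as a grid of
// equally sized frames, `columns` frames per row.
function frameRect(frameIndex, frameW, frameH, columns) {
  return {
    sx: (frameIndex % columns) * frameW,
    sy: Math.floor(frameIndex / columns) * frameH,
    sw: frameW,
    sh: frameH,
  };
}

// Draws one frame of the sheet at (x, y) using p5's 9-argument image().
function drawFrame(sheet, frameIndex, x, y, frameW, frameH, columns) {
  const f = frameRect(frameIndex, frameW, frameH, columns);
  image(sheet, x, y, frameW, frameH, f.sx, f.sy, f.sw, f.sh);
}
```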

📶Minimum Deliverables and Extras

Minimum:

  • Start screen with instructions and a start button
  • Main game screen with: Character sprite, Power level display, Energy aura (shape) around the character, Power-up button
  • Basic power-up mechanics (increase power level on button click)
  • Growing energy aura as power level increases
  • At least one sound effect (e.g., power-up sound)
  • Victory screen when final goal is reached
  • Option to restart the game after completion
  • Object-Oriented Programming implementation (Character, PowerUpButton, and EnergyAura classes)

Extras:

  • Multiple playable characters (e.g., Goku, Vegeta, Piccolo)
  • Animated character sprites that change with power level increases
  • Dynamic background that changes based on power level
  • More varied and engaging sound effects (e.g., different sounds for different power levels)
  • Power-up animations (e.g., lightning effects, screen shake)
  • Unlockable content (e.g., new characters, backgrounds) based on achievements
  • Adaptive music that intensifies as power level increases
  • Voice clips from the show playing at certain milestones
  • Mini-games or challenges to break up the clicking (e.g., timed button mashing, rhythm game)

Reading Reflection – Week 5

Computer vision differs from human vision in several key ways, primarily in its struggle with environmental variability, lack of semantic understanding, and limited field of view. While humans can easily adapt to changes in lighting, perspective, and context, computer vision systems process images as raw pixel data without inherent meaning. This fundamental difference presents both challenges and opportunities for artists and designers working with computer vision technologies.

To help computers see and track objects of interest, several techniques have been developed. These include controlled lighting to create consistent illumination, background subtraction to identify moving objects, brightness thresholding to detect significant differences, frame differencing to identify motion, and object tracking to maintain focus on specific elements. These methods, as highlighted in Golan Levin’s article, provide a toolkit for novice programmers and artists to incorporate computer vision into their work, enabling the creation of interactive experiences that respond to movement, gestures, and objects in real time.

I find it interesting how artists navigate ethical considerations regarding privacy and surveillance while also leveraging these technologies to create immersive and responsive installations. Some artists use computer vision as a medium for critical commentary on surveillance culture and social issues, turning the technology’s capabilities into a subject for artistic exploration. This dual nature of computer vision in art- as both a tool and a topic- encourages artists to deeply consider the societal impact of their work.

As computer vision tools become more accessible, there’s a growing tension between the democratisation of technology and the depth of understanding required to use it effectively. While user-friendly interfaces and AI-powered tools (like DALL-E and SORA) make it easier for artists to incorporate computer vision into their work, there’s a risk of oversimplification and a potential loss of the underlying principles that drive these technologies. This evolution in the artistic landscape offers exciting new avenues for creativity but also raises questions about the role of human ingenuity and technical literacy in art creation. As the field continues to advance rapidly, artists are challenged to balance the use of cutting-edge tools with a thoughtful approach to their application, ensuring that technology enhances rather than replaces human creativity.

Midterm Progress Report

Concept:

For my midterm project, I’m creating a grid-based version of a Color by Number game. I’ve always loved these types of games because they help me relax, focus, and feel super satisfied as the artwork slowly comes to life. My goal is to bring that same experience to users by making a game that’s easy to use, fun, and visually appealing.

The game gives users a color palette, where each color is linked to a letter or number. The image itself is made up of a grid, with each cell labeled to show which color should go there. All users have to do is click on a grid cell, and the color fills in automatically, keeping the interaction simple.

Challenging Aspects and Risk Prevention

1. One of the biggest challenges was ensuring precise click detection on the image grid. Since the image itself doesn’t inherently support click interactions, I had to create an invisible grid and lay it on top of the image. This allowed the game to register mouse clicks accurately while keeping the visual presentation clean. However, this was difficult because I had to align the overlay perfectly so that the user wouldn’t see it, but it would still work effectively for detecting clicks and filling in colors.

2. Another challenge was allowing users to select colors from the palette and ensuring the correct color was applied to the chosen grid cell. I had to implement logic that detected which color was clicked on and then stored that color as the “active” selection. The game then needed to apply this color to any grid cell the user clicked until a new color was chosen. The difficulty came in precisely mapping click positions to the correct palette color while keeping the interface responsive and user-friendly.
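
Mapping a click to a grid cell can be sketched as below, matching the layout used in the class that follows (the image is drawn at (100, 90) and is 600 px wide, so cellSize = 600 / cols; the helper name is mine):

```javascript
// Converts a mouse position into a {row, col} cell, or null if the click
// landed outside the grid overlay.
function cellFromClick(mx, my, cellSize, rows, cols) {
  const col = Math.floor((mx - 100) / cellSize); // grid starts at x = 100
  const row = Math.floor((my - 90) / cellSize);  // grid starts at y = 90
  if (row < 0 || row >= rows || col < 0 || col >= cols) return null;
  return { row, col };
}
```

Inside mousePressed(), the result can be used to fill this.grid[row][col] with this.selectedColor.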

class ColoringPage {
  constructor(name, imagePath, rows, cols, palette) {
    this.name = name;
    this.img = loadImage(imagePath);
    this.rows = rows;
    this.cols = cols;
    this.cellSize = 600 / this.cols;
    this.grid = Array.from({ length: this.rows }, () => Array(this.cols).fill(null));
    this.palette = palette;
    this.selectedColor = Object.values(palette)[0].color;
  }

  display() {
    this.drawPalette();
    image(this.img, 100, 90, 600, 600);
    this.drawGrid();
    this.drawColoredGrid();
  }

  drawGrid() {
    stroke(0, 50);
    noFill();
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
      }
    }
  }

  drawColoredGrid() {
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        if (this.grid[row][col]) {
          fill(this.grid[row][col]);
          rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
        }
      }
    }
  }

  drawPalette() {
    let keys = Object.keys(this.palette);
    let x = (width - keys.length * 60) / 2; // center the palette: 60 px per swatch
    let y = 20;

    noStroke();

    for (let i = 0; i < keys.length; i++) {
      let colorValue = this.palette[keys[i]].color;

      fill(colorValue);
      ellipse(x + i * 60 + 30, y + 25, 50, 50);
      let c = color(colorValue);
      let brightnessValue = (red(c) * 0.299 + green(c) * 0.587 + blue(c) * 0.114); // standard luminance formula

      fill(brightnessValue < 128 ? 255 : 0); // white text on dark colors, black text on light colors

      textSize(14);
      textAlign(CENTER, CENTER);
      text(this.palette[keys[i]].label, x + i * 60 + 30, y + 25); // label centered on its circle
    }
  }
}

 

Week 5 : Reading Response

Humans recognize faces and objects effortlessly because we rely on memory and perception, without needing to crunch huge amounts of data. Computers, on the other hand, have to go through an entire process—analyzing pixel data, running algorithms, and comparing what they see to stored information just to identify something. Plus, we’re naturally good at adapting to different environments, while computers can struggle when conditions change from what they were programmed for, often leading to glitches or poor performance.

To help computers track objects better, there are a few go-to techniques. Brightness thresholding boosts contrast by filtering colors and intensities, making objects stand out more clearly. Background subtraction helps by removing the static background so the system can focus only on what’s moving. These methods make it easier for computer vision to detect and track what matters, whether in art, security, or interactive installations.

In interactive art, computer vision takes audience engagement to the next level by allowing people to actively shape the artwork with their movements and gestures. Unlike traditional static art, these installations respond in real-time, making the experience more dynamic and immersive. By tracking participants as they move, computer vision can trigger visual, auditory, or even tactile reactions, turning viewers into active contributors rather than just passive observers. This creates a deeper connection with the artwork, giving people a sense of agency in the creative process. Whether it’s motion-triggered visuals, gesture-controlled projections, or body-driven games, computer vision is expanding the possibilities of creativity in interactive media.

Week 5 – Midterm Progress

For my midterm project, I decided to make a game for baking cookies! As a passionate baker, cookies are one of my specialties (ask my friends, trust me).

The concept: A simple game to bake chocolate chip cookies. If time permits and if my skill set can handle it, I also want to include a timer. This is a very basic sketch of what I’m planning on: 

  1. Start page
  2. Ingredients with a bowl; the users will drag the ingredients into the bowl, and then click on the egg to crack it. This was my “scary” part, as I wanted to try using sprite sheets, a concept I never heard of or tried before this.
  3. A mix button will bring a whisk, which is animated to turn, and then an animation will switch to a tray filled with cookie dough.
  4. The tray will be pushed into an “oven” (which I’ll draw using shapes)
  5. A screen shows up saying “Cookies are ready!!”. There will be a restart button, and a recipe button, for my actual original recipe.

So, the user interaction would essentially be their role as the baker, dragging the ingredients into the bowl, putting the tray into the oven and so on.

The scary part: Using a sprite sheet for the egg cracking. I drew the stages of an egg being cracked and then created a sprite sheet of it using Canva. I then coded the egg breaking, which was a bit confusing. At first, the egg would “crack” wherever we click on the screen, but then I figured out how to make it confined to just the egg picture.
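
Confining the crack to the egg image boils down to a bounding-box test. Here is a sketch of that idea (the names eggX, eggY, eggW, eggH, and crackFrame are my placeholders, not the sketch's actual variables):

```javascript
// True only if the mouse position falls inside the egg image's rectangle
// (assumes the egg is drawn from its top-left corner at (eggX, eggY)).
function clickHitsEgg(mx, my, eggX, eggY, eggW, eggH) {
  return mx >= eggX && mx <= eggX + eggW && my >= eggY && my <= eggY + eggH;
}

// Inside p5's mousePressed(), something like:
//   if (clickHitsEgg(mouseX, mouseY, eggX, eggY, eggW, eggH)) {
//     crackFrame = Math.min(crackFrame + 1, lastFrame); // advance one crack stage
//   }
```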

I was very happy with the result, and it made me less afraid in approaching the midterm project, since the scariest part is solved now. Here’s an embedded sketch of my “cracking the egg” test. I might need to update the sprite sheet and code to get the egg to ACTUALLY fall into the bowl, but so far, I’m pretty confident with my project concept, and excited to code it to reality!

 

Week 4 Project

For this week’s project I decided to make something somewhat ironic in an attempt to create something funny (I think it is).

I made a quote generator that generates a quote and refreshes it for the user’s motivational needs. Needless to say, I was the one that inspired my own design, as I found myself at certain points in the day, struggling to gather the motivation to do anything remotely productive.

One part of this project that was challenging for me was figuring out how to load my .csv file into the project, then retrieving the correct string from the table to display.

function keyPressed() {
  if (keyCode === ENTER) { // 13; p5 provides the ENTER constant
    let currentQuote = int(random(1, 20)); // pick a random row in the table
    print(currentQuote);
    displayText = quotes.getString(currentQuote, 0); // column 0 holds the quote
  }
}

 

Assignment 4: Fortune Cookies

This is my text-based art piece, inspired by a recent trip to a Chinese restaurant. My friends and I enjoyed opening our cookies and reading each other’s fortunes. I wanted to give others this experience too, so I created this piece to do exactly that.

 

📋Assignment Brief

  • Create an interactive piece of artwork by loading data or displaying text

💭Conceptualisation

The idea for this project emerged from a delightful dining experience at a Chinese restaurant. As my friends and I cracked open our fortune cookies and shared our fortunes with laughter and anticipation, I was struck by the simple joy this tradition brought to our meal. I wanted to capture and recreate this experience, allowing others to enjoy the whimsical pleasure of fortune cookies from anywhere. The idea evolved to combine the visual aesthetics of a Chinese restaurant with the interactive element of opening a fortune cookie. I envisioned a scene where users could click to “open” a virtual fortune cookie, revealing one of many possible fortunes. This concept aimed to blend cultural elements, visual art, and interactivity into a single, engaging piece. By digitizing this experience, I hoped to make it accessible to a wider audience, allowing people to enjoy the surprise and wisdom of fortune cookies without the need for a physical restaurant visit.

💻Process

The coding process for this project began with creating the foundational elements of the Chinese restaurant scene using p5.js. I started by establishing the basic structure, including the background, tables, wall decorations, and lanterns, to set the ambiance. The next crucial step was designing the fortune cookie itself. I used bezier curves to craft a realistic cookie shape, complete with a subtle shadow and a visible fold line. To add depth to the experience, I implemented two states for the cookie: closed and open. This required creating separate functions for drawing each state.

The interactive element was introduced by implementing a mousePressed() function, allowing users to toggle between the closed and open states of the cookie. To bring the fortune-telling aspect to life, I created an array of ten different fortune messages. Each time the cookie is opened, a random fortune is selected from this array and displayed on the ‘paper’ inside the cookie.

function mousePressed() {
  isOpened = !isOpened;
  if (isOpened) {
    currentFortune = random(fortunes);
    drawOpenedFortuneCookie(openedFortuneCookie, currentFortune);
  }
}

One of the more challenging aspects was ensuring the fortune text was properly centered and legible within the opened cookie. This required careful adjustment of text positioning and size. Throughout the process, I continually refined the visual elements, adjusting colors, sizes, and positions to create a harmonious and visually appealing scene.

The final touch involved fine-tuning the user experience, ensuring smooth transitions between states and readable fortune messages. This iterative process of coding, testing, and refining resulted in an interactive digital representation of the classic fortune cookie experience, nestled within a charming Chinese restaurant setting.

🚩Challenges

One of the main challenges I faced while writing this code was achieving a realistic representation of a fortune cookie while keeping the code simple and efficient. Creating the curved shape of the cookie using bezier curves required careful adjustment of control points to achieve a natural-looking form.

Another significant challenge was implementing the random fortune selection mechanism. While the concept seems straightforward, ensuring that the fortunes were truly random and didn’t repeat too frequently required careful consideration. I had to strike a balance between maintaining a diverse selection of fortunes and avoiding predictability in the user experience.
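
One way to handle that balance (not necessarily what my sketch does) is to re-roll whenever the pick equals the previous fortune, so the same message never appears twice in a row. The random source is injectable here so the logic is easy to test:

```javascript
// Picks a fortune, avoiding an immediate repeat of `previous`.
// `rand` defaults to Math.random but can be swapped out for testing.
function pickFortune(fortunes, previous, rand = Math.random) {
  if (fortunes.length < 2) return fortunes[0]; // nothing else to avoid
  let choice;
  do {
    choice = fortunes[Math.floor(rand() * fortunes.length)];
  } while (choice === previous);
  return choice;
}
```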

📶Potential Improvements

While the current version successfully creates an interactive fortune cookie experience within a Chinese restaurant setting, there are several areas for potential improvement and expansion:

  • Animation Enhancements: Adding subtle animations, such as a smooth opening animation for the fortune cookie or a gentle floating effect for the lanterns, could make the scene feel more engaging

  • Sound Integration: Incorporating audio elements, like a soft cracking sound when opening the cookie or ambient Chinese restaurant background noise, could enhance the immersive experience

  • Multiple Cookie Types: Introducing different types of fortune cookies with varying shapes, colors, or even flavors could add variety and replay value to the interaction

 

Startup Funding Visualization

Concept

This project visualizes startup funding data by representing different startups as interactive elements on the screen. Users can hover over a startup to see its funding amount and valuation, and click on a startup to view more detailed information. The goal is to create an intuitive and engaging way to explore startup funding rounds.

Code Highlight

One of the key sections of the code is how the information is displayed when a startup is hovered over. The following snippet effectively creates a tooltip-like interaction:

text(`${this.emoji} ${this.name}`, this.x, this.y - 15);

if (hovered) {
  fill(255, 200);
  rect(mouseX, mouseY, 160, 50, 10);
  fill(0);
  textSize(12);
  text(`💰 $${this.amountRaised}M`, mouseX + 80, mouseY + 20);
  text(`📈 $${this.valuation}M`, mouseX + 80, mouseY + 40);
}

 


This block dynamically positions the tooltip near the cursor and provides a quick summary of key financial metrics.
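
One refinement worth noting: near the right or bottom edge, a tooltip anchored at the cursor can run off-canvas. A small clamping helper (my addition, assuming an 800 × 600 canvas and the 160 × 50 tooltip above) keeps it visible:

```javascript
// Clamps the tooltip's top-left corner so the whole box stays on-canvas.
function tooltipPos(mx, my, w = 160, h = 50, canvasW = 800, canvasH = 600) {
  return {
    x: Math.min(mx, canvasW - w),
    y: Math.min(my, canvasH - h),
  };
}
```

The hover block would then draw the rect and text at tooltipPos(mouseX, mouseY) instead of (mouseX, mouseY) directly.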

Embedded Sketch

 

Reflection and Future Work

While the current implementation effectively visualizes funding data, there are several areas for improvement:

  • Scalability: The current approach might become inefficient with a large dataset. Optimizing how data is rendered could enhance performance.
  • More Interactivity: Adding filtering options for different funding rounds (Seed, Series A, B, etc.) could improve user experience.
  • Enhanced Visualization: Implementing different shapes or colors to represent different funding rounds would make distinctions clearer.
  • Data Integration: Connecting to a live API to fetch real-time funding data would make this visualization more dynamic and useful.

Overall, this project provides an engaging way to explore startup funding data, and with future iterations, it can be expanded into a more powerful analytical tool.

Week 4 : Reading response

While reading the text, one thing I felt that Norman did not explicitly mention is the lack of instructions for devices and electronics, especially when their functionality is not immediately obvious. Many products assume that users will either figure things out through trial and error or consult a manual, but in reality, unclear controls often lead to confusion and frustration. For example, I own a digital camera, but I still have no idea how to zoom properly or what the different shooting modes actually do. There are no clear signifiers on the buttons, and the camera does not provide immediate feedback or guidance. Similarly, the projectors we have in the Baraha rooms can get difficult to use because the wall-docked interface offers multiple input options with no clear instructions on which mode is appropriate for what. Users are left guessing, which can lead to delays and unnecessary trial and error.

This lack of guidance can also be an issue in interactive media. Many systems expect users to learn how to navigate and use features on their own, but this often results in important functions being left undiscovered, reducing the overall usability of the system. This is where Norman’s principles of discoverability, feedback, and conceptual models become essential. If interfaces and products clearly communicated how to use them, users would not have to struggle with unnecessary complexity. By applying Norman’s ideas, interactive media could be improved by incorporating better signifiers, immediate feedback, and clearer mappings to ensure users understand what actions are possible. Whether in physical devices like cameras and projectors or in digital interfaces, design should guide users intuitively rather than forcing them to figure everything out on their own.