Midterm Progress/ Reading response – Shaikha AlKaabi

Midterm Project Progress:

Whimsical Garden: Seasons of Growth is a point-and-click garden simulation where players nurture a magical garden through the changing seasons. The core gameplay involves planting, watering, and harvesting a variety of plants, each with unique needs. You start by planting a flower in a location of your choice within a designated area. As the game progresses, you need to water the plants to ensure they grow; if a plant doesn’t receive water for too long, it withers and dies. Players must balance their attention between different plants, making strategic decisions about which plants to water with their limited resources. The challenge lies in keeping all plants alive and thriving, which becomes more difficult as the garden grows in size and complexity. The game ends when the player successfully maintains the garden for a certain period, or when all the plants die. There’s an option to restart the game after completion to try different strategies or improve on a previous score.
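The watering and withering rule can be sketched as plain game logic, separate from any drawing code. The class name, water levels, and drain rate here are illustrative assumptions, not the project's actual implementation:

```javascript
// Illustrative sketch of the watering/withering mechanic (values are assumptions).
class Plant {
  constructor() {
    this.water = 100;  // full water level at planting
    this.alive = true;
  }

  // Called once per frame/tick; water drains over time.
  tick(drainPerTick = 1) {
    if (!this.alive) return;
    this.water -= drainPerTick;
    if (this.water <= 0) {
      this.water = 0;
      this.alive = false; // withered: went too long without water
    }
  }

  // Watering tops the plant back up, capped at the full level.
  waterPlant(amount = 50) {
    if (this.alive) this.water = Math.min(100, this.water + amount);
  }
}
```

A per-frame loop over the `garden` array could then call `tick()` on each plant and remove (or grey out) any plant whose `alive` flag is false.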

Game Structure:

1. Start Screen: The game opens with a start screen that displays instructions for playing. This screen explains the basics of gardening mechanics, seasonal changes, and the objectives of the game. The start screen awaits user input (such as pressing a ‘Start’ button or hitting a specific key) to begin the game.

2. Gameplay Session: Once started, the player enters the main gameplay area, where they manage their garden through the various seasons, facing challenges like planting, watering, harvesting, and solving environmental puzzles.

3. End Session and Restart: After a gameplay session is completed (which could be defined by reaching a certain goal or surviving through a set number of seasons), the game transitions to an end screen. This screen summarizes the player’s achievements and offers the option to start a new session. Choosing to start a new session resets the game environment, allowing players to begin with a fresh garden.

Challenges:

1. Managing Game States: Implementing a system to manage different game states (e.g., start screen, active game, end screen) is crucial. This requires careful design to ensure smooth transitions between states based on user inputs.

2. Session Reset Functionality: Developing a way to reset the game environment for a new session without restarting the sketch poses a challenge. This involves resetting game variables, clearing the garden area, and preparing the game for a new set of seasonal cycles.

3. User Input Handling: Creating an intuitive and responsive input system for navigating between the start screen, gameplay, and restart option is essential. This includes implementing event listeners for keyboard, mouse, or button inputs that trigger the correct actions within the game’s flow.

By addressing these challenges, Whimsical Garden: Seasons of Growth aims to offer a rich and engaging gameplay experience that is accessible, educational, and enjoyable for a wide range of players.

let gameState = 'start'; // Possible states: 'start', 'game', 'end'
let garden = [];
let restartButton;

function setup() {
  createCanvas(600, 600);
  textAlign(CENTER, CENTER);
  
  // Restart button (hidden initially)
  restartButton = createButton('Restart');
  restartButton.position(width / 2 - 50, height / 2 + 20);
  restartButton.mousePressed(restartGame);
  restartButton.hide();
}

function draw() {
  background(161, 196, 170);
  
  if (gameState === 'start') {
    drawStartScreen();
  } else if (gameState === 'game') {
    drawGarden();
  } else if (gameState === 'end') {
    drawEndScreen();
  }
}

function drawStartScreen() {
  textSize(32);
  text('Whimsical Garden: Seasons of Growth', width / 2, height / 3);
  textSize(25);
  text('Click to start', width / 2, height / 2);
}

function drawGarden() {
  for (let plant of garden) {
    // Simple representation of plants
    fill(20, 180, 60);
    ellipse(plant.x, plant.y, 20, 20);
  }
  
  // Example end condition: 5 plants
  if (garden.length >= 5) {
    gameState = 'end';
    restartButton.show();
  }
}

function drawEndScreen() {
  background(47, 54, 50);
  textSize(50);
  fill(184, 46, 64);
  text('Garden is Full!', width / 2, height / 3);
  textSize(25);
  text('Restart to play again', width / 2, height / 2);
}

function mousePressed() {
  if (gameState === 'start') {
    gameState = 'game';
  } else if (gameState === 'game' && mouseY < height - 100) {
    // Allow planting only within the game area (excluding UI elements, e.g., buttons)
    plantSeed(mouseX, mouseY);
  }
}

function plantSeed(x, y) {
  // Add a new plant to the clicked position in the garden
  garden.push({x: x, y: y});
}

function restartGame() {
  garden = []; // Clear the garden
  gameState = 'start'; // Set game state back to start
  restartButton.hide();
}


Reading Response: 

We’ve come a long way in how we interact with computers, starting with some really early virtual reality that let people play in digital spaces just by moving around. Myron Krueger was one of the first to make this happen with his Videoplace, and it was a big step forward because it made the connection between people and computers feel natural and easy.

Then there’s the Bureau of Inverse Technology’s Suicide Box, which is a lot more serious. It’s a piece of art that makes us think about tough topics like sickness and the choice to end one’s life. It’s a strong reminder that technology can do more than just entertain us; it can also make us think deeply about life’s big questions.

Understanding how these systems work is pretty cool too. They can tell the difference between what’s important for the interaction and what’s just background noise. They look for the brightest spot they can find to keep track of what the user is doing. This is really important for making games and art installations where you can move around and have the game or art react to what you’re doing.

Jonah Warren’s Master’s thesis takes this idea further. He found new ways for games to understand our body movements. He talks about direct ways to interact with games, like touching, covering, or mirroring, which are now common in games that use body movement for control.

From the early days of virtual reality to the deep questions raised by interactive art, and the nitty-gritty of tracking movement, it’s clear that the way we communicate with computers has many sides to it. Technology keeps getting better, and it changes how we play, create, and think about the world. The field of human-computer interaction is really about connecting with each other, sharing ideas, and getting to grips with both the real and digital worlds we live in.

Midterm Progress / Reading Response – Hamdah AlSuwaidi

 Midterm Progress:

For my midterm project, I’m taking on the challenge of designing a captivating costume exhibit for The MET’s Costume Institute. The exhibit will showcase the evolution of haute couture through iconic designs from renowned fashion houses, spanning different eras and design philosophies.

The overarching theme of the exhibit is “Timeless Elegance: A Journey Through Fashion History.” My aim is to highlight the enduring allure and innovation of haute couture, exploring how designers have redefined beauty and style over the decades.

1. Homepage:
– The homepage welcomes visitors to the online exhibit with a visually appealing layout featuring the exhibit’s title, a brief description, and navigation options.

2. Navigation Menu:
– A navigation menu at the top or side of the webpage allows visitors to easily access different sections of the exhibit, including:
– Explore: Provides access to the main exhibit layout where visitors can view featured designers and garments.
– Audio Guides: Offers a collection of audio commentary on select pieces.
– Search: Enables visitors to search for specific designers, garments, or fashion movements.
– About: Provides information about the exhibit, its curators, and contributors.

3. Explore Section:
– Clicking on the “Explore” option takes visitors to the main exhibit layout, which may resemble a gallery or virtual space.
– Visitors can navigate through different areas of the exhibit by clicking or tapping on designated hotspots or arrows.
– Clicking on a designer’s name or garment reveals detailed information, including descriptions, historical context, and images.

4. Audio Guides:
– The “Audio Guides” section presents visitors with a curated collection of audio commentary on select pieces within the exhibit.
– Visitors can listen to the audio commentary by clicking on specific garments or audio icons associated with them.

5. Search Functionality:
– The “Search” option allows visitors to search for specific designers, garments, or fashion movements.
– Visitors can enter keywords or phrases into the search bar to find relevant content within the exhibit.
– Search results display relevant garments, designers, or topics, with clickable links to access more information.
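As a sketch of how the keyword search might filter exhibit content, the entry fields and sample data below are assumptions for illustration, not the exhibit's actual data model:

```javascript
// Hypothetical exhibit entries; the field names are illustrative assumptions.
const exhibitEntries = [
  { garment: "Bar Suit", designer: "Christian Dior", movement: "New Look" },
  { garment: "Le Smoking", designer: "Yves Saint Laurent", movement: "Androgyny" },
];

// Case-insensitive keyword search across all fields of each entry.
function searchExhibit(entries, query) {
  const q = query.trim().toLowerCase();
  if (q === "") return []; // empty query matches nothing
  return entries.filter((entry) =>
    Object.values(entry).some((value) => value.toLowerCase().includes(q))
  );
}
```

The search results page could then render each matched entry as a clickable link into the Explore section.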

Interactive Elements:
– Throughout the exhibit, interactive elements such as clickable images, videos, and multimedia presentations provide additional context and engagement.
– Visitors can interact with these elements by clicking or tapping on them to access related content or animations.


7. Responsive Design:
– The online exhibit is designed to be responsive, ensuring a seamless experience across different devices and screen sizes.
– Whether visitors access the exhibit on a desktop computer, tablet, or smartphone, the layout and functionality adapt to provide an optimal viewing experience.

  1. Maintaining Engagement: Keeping visitors engaged throughout their online exhibit experience is crucial. Unlike in-person exhibits where visitors may spend hours exploring, online visitors may have shorter attention spans. Designing captivating visuals, interactive elements, and compelling content is essential to maintain visitor interest and encourage exploration.
  2. Audio and Video Integration: Integrating audio and video content into the exhibit may present challenges such as ensuring cross-browser compatibility, synchronizing multimedia elements with other interactive elements or animations, and optimizing playback performance for smooth audio/video streaming.
let titleSize = 40; // Decreased the title size
let subtitleSize = 20; // Decreased the subtitle size

let hangerImage;

function preload() {
  hangerImage = loadImage('Hanger.png'); // load the hanger cursor image
}
function setup() {
  createCanvas(windowWidth, windowHeight);
  textSize(titleSize);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(255); // set background color to white
if (mouseIsPressed || mouseX != pmouseX || mouseY != pmouseY) {
  // Display hanger image on mouse hover
  image(hangerImage, mouseX, mouseY, 70, 50);
}
                
 // Title
                 
fill(166, 0, 0); // Set text color to red
  textSize(titleSize);
  text("Welcome to Timeless Elegance", width / 2, height / 2 - 50);
  
  // Subtitle
  fill(0); // Set text color to black
  textSize(subtitleSize);
  text("Touch the screen to begin your journey", width / 2, height / 2 + 20);
}

function mouseClicked() {
  // Trigger next screen or action when the screen is touched/clicked
  // For example, navigate to the main exhibit layout
}
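The empty mouseClicked handler could feed a small screen state machine; a minimal sketch, assuming hypothetical screen names ('welcome', 'explore', 'about') rather than the exhibit's actual structure:

```javascript
// Hypothetical screen names for the exhibit's navigation.
const SCREENS = ["welcome", "explore", "about"];
let currentScreen = "welcome";

// A click on the welcome screen enters the exhibit; other screens
// would handle their own click targets (hotspots, menu items, etc.).
function advanceScreen(screen) {
  if (screen === "welcome") return "explore";
  return screen;
}
```

Inside `mouseClicked()`, the sketch would then call `currentScreen = advanceScreen(currentScreen);` and let `draw()` branch on `currentScreen`.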

The hanger icon serves as a visual representation of fashion and garment design, reinforcing the theme of the exhibit. It symbolizes the process of selecting, showcasing, and appreciating designer garments, further immersing visitors in the world of fashion. Changing the mouse cursor to a hanger creates a more immersive experience for visitors, immediately signaling that they are entering a fashion-themed environment. This subtle visual cue helps transport visitors into the world of haute couture and sets the tone for the exhibit.

https://docs.google.com/document/d/1wQb92P4HcfuLNuzHEYyvs3ZUFQNo6hspwGtPbJLv53Q/edit?usp=sharing (More details about the costume institute and what dress will be included in the exhibit)

Reading response:

The reading on “Computer Vision for Artists and Designers” delves into the increasing accessibility of computer vision technology to artists and designers, facilitated by user-friendly software and open-source communities. It showcases various projects, including Rafael Lozano-Hemmer’s intriguing installation “Standards and Double Standards” (2004), where belts controlled by a computer vision-based tracking system rotate to follow individuals, offering an unintentional yet captivating form of interaction.

However, the ethical implications become pronounced with projects like Suicide Box by the Bureau of Inverse Technology (1996), which utilized motion-detection to record real data of suicides, raising concerns about privacy and surveillance. While such data might aid in locating missing individuals, it blurs ethical boundaries, sparking controversy.

Moreover, the reading outlines different problems addressed by vision algorithms, such as motion detection and object tracking, underscoring their relevance in design considerations. The introduction of “Telecentric lenses” offers insights into improving object recognition, albeit with drawbacks like cost and distortion issues, prompting reflection on their judicious usage.

The discussion expands to the societal acceptance of constant surveillance through technologies like facial recognition, prompting introspection on privacy norms. This leads to questioning the boundaries between innovation and intrusion, highlighting the need for ethical frameworks in technology adoption.

In reflecting on the reading, two artworks stand out for their distinct approaches to computer vision. Rafael Lozano-Hemmer’s “Standards and Double Standards” impresses with its inventive use of space and objects, blurring the lines between digital and tangible realms. In contrast, Christian Möller’s “Cheese” raises questions about the advancements in facial recognition technology and its potential implications for art and society.

Week 5: Midterm Progress

Concept and Design

The core concept of this project revolves around creating an immersive interactive experience that blends art and technology, leveraging the versatility of the p5.js library. The aim is to develop an artwork or game that captivates the user by engaging their senses through visual elements, sound, and interactivity. The design is centered on the principle of simplicity to ensure accessibility while fostering a deep sense of exploration and interaction. By incorporating a variety of media types—including shapes, images, sounds, and on-screen text—the project seeks to create a rich, multi-sensory environment that responds to the user’s actions in intuitive and surprising ways.

Implementation

So far, the project has laid down a foundational structure that supports Object-Oriented Programming (OOP) to manage its various interactive elements effectively. A particle system has been developed as the primary visual feature, showcasing a dynamic and aesthetically pleasing behavior that can be manipulated through user input. The system begins with an instructional screen, ensuring that users understand how to interact with the artwork or game before commencing. This design choice not only improves user experience but also aligns with the project’s requirement to start the experience with an instruction screen and wait for user input before starting.

The current implementation allows for the experience to be restarted without the need to reload the sketch entirely, promoting a seamless user interaction that encourages exploration and repeated engagement. The integration of state management facilitates this process, enabling the transition between different phases of the application, such as viewing instructions, interacting with the experience, and restarting the session.
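A minimal sketch of the two pieces described here, a particle update and a restart that re-initializes state instead of reloading the sketch. All names and values are illustrative, not the project's actual code:

```javascript
// Illustrative particle: position advances by velocity each frame.
class Particle {
  constructor(x, y, vx, vy) {
    this.x = x; this.y = y;
    this.vx = vx; this.vy = vy;
  }
  update() {
    this.x += this.vx;
    this.y += this.vy;
  }
}

let state = "instructions"; // assumed phases: 'instructions', 'running'
let particles = [];

// Begin the experience after user input on the instruction screen.
function startExperience(count) {
  state = "running";
  particles = Array.from({ length: count }, () => new Particle(0, 0, 1, 1));
}

// Restart without reloading: re-initialize state in place.
function restartExperience() {
  particles = [];
  state = "instructions";
}
```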

Challenges

One of the main challenges faced during development has been incorporating all the required elements (shape, image, sound, and on-screen text) into a cohesive and interactive experience. While the foundational structure for shapes and on-screen text has been established through the particle system and instructional screen, the integration of images and sounds remains incomplete. Balancing the aesthetic appeal with functional interactivity, especially when introducing multimedia elements, requires careful consideration to maintain performance and user engagement.

Another significant challenge is ensuring that the experience is equally compelling for one or more users. Designing interactions that are universally intuitive yet offer depth and discovery poses a unique challenge in user experience design. Additionally, creating a mechanism for seamlessly restarting the experience without restarting the sketch has required thoughtful state management and control flow within the code.

Future Directions

Moving forward, the project will focus on integrating the remaining required elements—specifically, the inclusion of at least one image and one sound—to enrich the sensory experience. Exploring creative ways to incorporate these elements will not only fulfill the project’s requirements but also enhance the overall aesthetic and interactive quality of the work. Addressing the challenges of multi-user interaction and refining the user interface to accommodate various interaction modes will also be a priority. Through iterative design and testing, the project aims to evolve into a fully-realized interactive experience that leverages everything learned so far, presenting users with an engaging and memorable exploration of digital art and interactivity.

Progress

Week 5 Reading Response – Khalifa Alshamsi

The historical context provided, referencing Marvin Minsky’s underestimation of the complexity of computer vision, sets the stage for the discussion on how the field has evolved to include a wide array of applications, especially in the arts. The mention of Myron Krueger’s Videoplace as an early example of interactive artwork utilizing computer vision illustrates the potential for creative interaction between humans and computers beyond the conventional mouse and keyboard interfaces.

The exploration of projects such as Messa di Voce, Standards and Double Standards, Sorting Daemon, and others within the article showcased the versatility of computer vision in crafting interactive and immersive art experiences. These projects, each with its unique thematic focus, from surveillance and privacy to the nuanced exploration of human emotions via facial recognition, illustrate the breadth of computer vision’s applicability. They catalyzed my curiosity, pushing me to consider how technology can be leveraged to challenge societal norms, provoke thought, and evoke emotional responses.

Week 5 – Khalifa Alshamsi

For the midterm, I plan on creating a car racing game in which the objective is to get as many points as possible depending on how far you reach before colliding with a vehicle.

The Sketch:

Description:

The game uses a simple rectangular design so far until I can finish the game in more detail in which the white rectangle is your car and the blue rectangles are the other cars on the road and you aim to not collide with them.

Script:

let player;
let obstacles = [];
let gameSpeed = 2;

function setup() {
  createCanvas(400, 600);
  player = new Player();
}

function draw() {
  background(51);
  
  // Player
  player.show();
  player.move();
  
  // Obstacles
  if (frameCount % 120 == 0) { // Adds a new obstacle every 120 frames
    obstacles.push(new Obstacle());
  }
  
  for (let obs of obstacles) {
    obs.show();
    obs.update();
    
    // Checking for collision
    if (player.collidesWith(obs)) {
      noLoop(); // Stop the game
      textSize(32);
      textAlign(CENTER, CENTER);
      fill("White");
      text("Game Over", width / 2, height / 2);
    }
  }
}

// Player class
class Player {
  constructor() {
    this.size = 30;
    this.x = width / 2;
    this.y = height - this.size * 2;
  }

  show() {
    fill("white");
    rectMode(CENTER);
    rect(this.x, this.y, this.size, this.size);
  }

  move() {
    if (keyIsDown(LEFT_ARROW)) {
      this.x -= 5;
    } else if (keyIsDown(RIGHT_ARROW)) {
      this.x += 5;
    }
  }

  collidesWith(obstacle) {
    let hit = collideRectRect(this.x, this.y, this.size, this.size, obstacle.x, obstacle.y, obstacle.size, obstacle.size);
    return hit;
  }
}

// Obstacle class
class Obstacle {
  constructor() {
    this.size = random(20, 50);
    this.x = random(this.size, width - this.size);
    this.y = 0;
  }

  show() {
    fill("blue");
    rect(this.x, this.y, this.size, this.size);
  }

  update() {
    this.y += gameSpeed;
  }
}

// Fallback rectangular (AABB) collision check, assuming x/y are top-left corners
// (note to self: the p5.collide2D library provides collideRectRect)
function collideRectRect(x, y, w, h, x2, y2, w2, h2) {
  return x < x2 + w2 && x + w > x2 && y < y2 + h2 && y + h > y2;
}

Problems Encountered:

The problem I faced the most was setting up the collision handling, but I should later be able to fix it by using the p5.collide2D library because, from my understanding, it will make the collision detection simpler and closer to the exact collision behavior I want the game to have.
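Until the p5.collide2D library is wired in, one interim approach is an axis-aligned overlap test that treats x and y as rectangle centers, matching the rectMode(CENTER) used when drawing. This is a sketch, not the library's implementation:

```javascript
// Overlap test for rectangles specified by their CENTERS (matches rectMode(CENTER)).
// Two axis-aligned rectangles overlap when, on each axis, the distance between
// their centers is less than half the sum of their extents.
function collideCenteredRects(x1, y1, w1, h1, x2, y2, w2, h2) {
  return (
    Math.abs(x1 - x2) < (w1 + w2) / 2 &&
    Math.abs(y1 - y2) < (h1 + h2) / 2
  );
}
```

The Player's `collidesWith` could call this with `this.x, this.y, this.size, this.size` and the obstacle's equivalents, since both squares are drawn centered.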

Future goals:

  1. Improved graphics: By using either a better code format for the cars instead of the rectangles or implementing pictures to represent the cars.
  2. Scoring system: The scoring system is yet to be implemented into the game but the idea is for it to score a higher number the further you reach in the game while avoiding collisions.
  3. Sound Effects:  Adding music to the background to add more to the game’s value while playing.
  4. Increased difficulty: Making gradual increases to the gameSpeed or the number of cars in the game to make it harder to continue.
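Points 2 and 4 could be prototyped as pure functions of progress; the formulas and thresholds below are assumptions for illustration, not a committed design:

```javascript
// Hypothetical scoring: score grows with frames survived, scaled by speed.
function scoreFor(framesSurvived, gameSpeed) {
  return Math.floor((framesSurvived * gameSpeed) / 10);
}

// Hypothetical difficulty ramp: +1 to gameSpeed for every 500 points.
function speedFor(score, baseSpeed = 2) {
  return baseSpeed + Math.floor(score / 500);
}
```

In `draw()`, the sketch could recompute `gameSpeed = speedFor(score)` each frame so obstacles fall faster as the score climbs.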


Week 5: Midterm Progress

For my midterm project, I wanted to create something that had a defined storyline as the main component pulling the user in. After struggling to decide on whether to go for an interactive experience or a game, I went for the latter. My hope is to create a gamified storyline that takes the user through a beginning, middle, and end. Seeing as I have remained faithful to the butterfly motif for quite some time now, I decided it would be a shame to break the streak with this assignment.

Design and Concept

For the main design of the game, I was inspired by games like Super Mario, the offline Chrome Dinosaur Game, and also the feel of a retro game design in this work by Arshiya Khattak. The main storyline is to help a butterfly find her way home in the open fields after getting lost in a city full of high-rise skyscrapers, with no flowers in sight. The game starts with the scene of a butterfly fluttering through the main character’s window. After the scene ends, a prompt is displayed with directions on how to play the forthcoming game. For now, there will be two main levels, the first being easier than the second. The character, carrying the butterfly on her back, would have to jump over obstacles or enemy objects to maintain her health status. She would also be presented with the opportunity to regain some of her health back by collecting rewards on the way. If the character manages to reach the end without losing all of her health, the ending game scene is played in which the butterfly is reunited with her family in a field of flowers and a prompt is displayed congratulating the player with instructions on how to repeat the game. In the process of visualizing this, I sketched out the following storyboard:

Code Design

To make my code easier to design and scale up, I plan to encapsulate everything, including the game experience itself in a class. Therefore, it was helpful to sketch out class diagrams in a UML-ish fashion as follows:

Anticipated Complexities/Challenges and Risk Mitigation

My strategy for mitigating risk was to implement the basic skeleton of the game levels to allow me to seamlessly scale up complexity later on. I also wanted to get the most time-consuming aspects of the game design, like collecting assets, out of the way so that I could focus on the game functionality this upcoming week.

Asset Collection

I knew one of the things I would struggle with was finding unified assets that were consistent with the feel of the game I was looking for. Thus, I dedicated some time to collecting spritesheets and level/scene backgrounds as I wanted to minimize having to design most components myself. After some digging, I collated the following:

Sprites:

(https://opengameart.org/content/oracles) 
(https://opengameart.org/content/butterfly)

Backgrounds:

(https://opengameart.org/content/forest-background)

(https://opengameart.org/content/cyberpunk-street-environment)

(https://www.deviantart.com/watashi-sensei/art/Bedroom-Pixel-628248413)

(https://www.freepik.com/free-vector/pixel-art-rural-landscape-background_49685498.htm#from_view=detail_serie)

Parallax Background Implementation

For the background of the levels, I wanted to incorporate a parallax effect to add dynamism to the backdrop of the experience. It has been some time since I have implemented this and I had forgotten the basic principles, so I wanted to get this part out of the way first. Each background is composed of multiple layers and the basic idea is to have each layer loop back into the screen at varying speeds to create a continuous animation with different depth components.

class ParallaxBackground {
  constructor(layersImages, layersSpeeds, layersDir) {
    this.layersImages = layersImages;
    this.layersDir = layersDir;
    this.layersSpeeds = layersSpeeds;
    this.layersX1 = [];
    this.layersX2 = [];
    for (let i = 0; i < this.layersImages.length; i++) {
      this.layersX1.push(0);
      this.layersX2.push(width);
    }
    print(this.layersSpeeds);
  }

  showBackground() {
    for (let i = 0; i < this.layersImages.length; i++) {
      image(this.layersImages[i], this.layersX1[i], 0, width, height);
      image(this.layersImages[i], this.layersX2[i], 0, width, height);
      if (this.layersDir[i] == 1) {
        this.layersX1[i] -= this.layersSpeeds[i];
        this.layersX2[i] -= this.layersSpeeds[i];
        if (this.layersX1[i] < -width) {
          this.layersX1[i] = width;
        }
        if (this.layersX2[i] < -width) {
          this.layersX2[i] = width;
        }
      }
    }
  }
}


Sprite Movements and Collisions

I also wanted to simulate the movement of objects on the screen to get a headstart. I used placeholder ellipses for this process, which I will later replace with actual images. I utilized principles of velocity and acceleration to simulate the player’s vertical jumps upon the space bar key press and the movement of enemies in the horizontal direction across the screen:

class Player {
  constructor(x, y, ground) {
    this.x = x;
    this.y = y;
    this.vy = 0;
    this.ground = ground;
    this.gravity = 1;
    this.jumpPower = 20;
    this.collisions = 0;
    this.collidedEnemy = [];
  }
  move() {
    this.y += this.vy;
    // player is not on ground
    if (this.y < this.ground) {
      this.vy += this.gravity;
    } else {
      this.vy = 0;
      this.y = this.ground;
    }
  }

  jump() {
    if (this.y >= this.ground) {
      this.vy = -this.jumpPower;
    }
  }
  show() {
    ellipse(this.x, this.y, 50, 100);
    this.move();
  }
}
class Enemy {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vx = -5;
    this.collided = false;
  }
  move() {
    this.x += this.vx;
  }

  show() {
    ellipse(this.x, this.y, 50, 50);
    this.move();
  }
}

I also wanted to implement the collision mechanism by essentially detecting when the boundary of the player touches the boundary of an object and classifying it as a collision:

detectCollision(enemy) {
  let radius = 10;
  if (!enemy.collided) {
    // get distance between the center of the character and that of the enemy object
    let d = dist(this.x, this.y, enemy.x, enemy.y);
    // collision detected when the distance is less than the sum of the
    // objects' radii plus the collision radius
    if (d < radius + 25 + 25) {
      this.collisions += 1;
      enemy.collided = true;
      return true;
    }
  }
  return false;
}
Current Progress

Week 5: Reading Response – Computer Vision for Artists and Designers

It is really hard to outdo interactions that map the objects existing in our physical reality onto our screens for us to warp and play around with, so it was not surprising to me that computer vision techniques would be popular in the production of interactive art. One of the things that surprised me, however, was how early the adoption of computer vision principles occurred – as early as the late 1960s. In addition to introducing a brief history of computer vision in interactive art, the reading also delineates simple computer vision algorithms that can be exploited by digital creators. Moreover, while the article acknowledges the complexity of the field of computer vision, it introduces the audience to a wide range of multimedia tools that make computer vision more accessible to beginners who are not familiar with machine vision.

Given the power that computer vision grants creators in being able to recreate, manipulate, and reshape physical reality, artworks created with computer vision techniques have the potential to represent the multiple layers of complexity that exist in the world around us. It was interesting – and shocking at times – to see how different sociopolitical themes can be expressed using the power of computer vision. This is demonstrated best by the pieces created on surveillance that the article mentions. The idea of using belts controlled by a tracking system that rotates according to the motion of the audience to highlight surveillance by an “empty crowd” in Rafael Lozano-Hemmer’s installation, Standards and Double Standards, exemplifies how well computer vision can “metaphorize” our physical reality. However, additional considerations, and potentially greater risks, come with more capability. An example of this is the Suicide Box, which captured video footage of suicide jumpers from the Golden Gate Bridge. It was jarring for me to see a tragic phenomenon turned into a piece of “art” and was a testimony to how computer vision technology, like any tool with great computational power, must be accompanied by a recognition of the ethical implications of its use.

I also enjoyed how the reading emphasized the importance of using computer vision techniques that are compatible with the surrounding physical environment, as no particular algorithm will be truly generalizable to all video/image formats. This may take the form of making the physical environment conducive to optimizing the workings of a particular algorithm, or choosing an algorithm that is best suited for a given environment. I loved learning about the different techniques that designers employ to achieve this, as it effectively extends the idea of Human-Centered Interaction to Environment-Centered Interaction and truly captures the unleashing of interactivity potential that computer vision affords digital creators.

Midterm Project Progress (The Dungeon Maze)

MAZE GAME

Concept

The idea behind the game is that player must make their way through a sequence of difficult mazes using mathematical algorithms, each one generated randomly to provide a different challenge with each maze. The main objective of the game is for the player to navigate from the beginning point to the maze’s exit.

Gold coins are scattered throughout the maze and act as collectibles for players to find. Collecting them helps the player finish the game: 90% of the gold coins spawned in the maze must be collected before the player can use the exit. The teleporter, however, can be used without this condition.

There are teleporters placed in the maze that serve as an interesting twist to the game. Stepping onto a teleporter transports the player to a whole new maze, which changes the difficulty and forces them to adjust their approach on the fly.
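One way to sketch the tile mechanics described so far (coins, teleporters, exit) is a single handler that reacts to whatever cell the player steps on. This is an assumption about the structure, not the project’s actual code; `makeMaze` stands in for any function that returns a fresh maze grid:

```javascript
// Hypothetical sketch of tile handling. `state` holds the running game data;
// `makeMaze` is any function that returns a freshly generated maze grid.
function handleTile(tile, state, makeMaze) {
  if (tile === 'C') {
    state.coins += 1;        // collect a gold coin
  } else if (tile === 'T') {
    state.maze = makeMaze(); // teleporter: swap in a brand-new maze
  } else if (tile === 'E') {
    state.won = true;        // reached the exit
  }
  return state;
}
```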

Every maze in the game has a built-in timer, which creates a sense of urgency and tests the player’s ability to think quickly and make decisions under pressure. The player also has limited vision, which makes it difficult to plan the next move and forces them to memorize their path to complete the maze.
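The limited-vision mechanic could be implemented as a simple visibility test over grid coordinates. This is a minimal sketch under my own assumptions (a square window measured in Chebyshev distance), not the project’s actual implementation:

```javascript
// Hypothetical visibility rule: a cell is drawn only if it lies within
// `radius` grid cells of the player in both axes (a square window).
function isVisible(playerX, playerY, cellX, cellY, radius) {
  return Math.max(Math.abs(cellX - playerX), Math.abs(cellY - playerY)) <= radius;
}
```

The draw loop would then skip (or darken) any cell for which this returns false, leaving only a small lit window around the player.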

Design 

The visual and thematic components of this maze game are designed to put the player in the shoes of a dungeon master.

The image is used for the background of the maze.

This is a snippet of the maze. The design is a work in progress as I am focusing on getting the basic functionality to work first. Currently, the big yellow circle is the player, the small bronze-colored circles are the gold coins, the blue circles are the teleporters, and the red square is the exit. I am planning on replacing these with avatars or icons.

Frightening Part

The most frightening part when I started to code was the main functionality of the game: the maze itself. I wanted a random maze to be generated every time the user plays the game or is teleported, so that each run poses a fresh challenge.

The use of randomness and the recursive structure of the path-carving process is what gives the procedure its complexity. Recursive functions call back on themselves, which can make them challenging to study and comprehend.

The code below shows how I tackled this issue:

function generateRandomMaze(rows, cols) {
  let maze = new Array(rows);
  for (let y = 0; y < rows; y++) {
    maze[y] = new Array(cols).fill('#');
  }

  function carvePath(x, y) {
    const directions = [[1, 0], [-1, 0], [0, 1], [0, -1]];
    // Shuffle directions to ensure randomness
    shuffle(directions); 

    directions.forEach(([dx, dy]) => {
      const nx = x + 2 * dx, ny = y + 2 * dy;

      if (nx > 0 && nx < cols - 1 && ny > 0 && ny < rows - 1 && maze[ny][nx] === '#') {
        maze[ny][nx] = ' ';
        // Carve path to the new cell
        maze[y + dy][x + dx] = ' '; 
        carvePath(nx, ny);
      }
    });
  }

  // Randomize the starting point a bit more conservatively
  const startX = 2 * Math.floor(Math.random() * Math.floor((cols - 2) / 4)) + 1;
  const startY = 2 * Math.floor(Math.random() * Math.floor((rows - 2) / 4)) + 1;

  // Ensure starting point is open
  maze[startY][startX] = ' '; 
  carvePath(startX, startY);

  // Set exit
  maze[rows - 2][cols - 2] = 'E';

  // Place teleporters and coins after maze generation to avoid overwriting
  addRandomItems(maze, 'T', 3);
  addRandomItems(maze, 'C', 5);

  return maze;
}

function shuffle(array) {
  for (let i = array.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [array[i], array[j]] = [array[j], array[i]];
  }
}

The generateRandomMaze() function creates a random, intricate maze by first filling the grid with walls and then using a depth-first search to recursively carve pathways. To ensure variation in the paths formed, it shuffles the possible directions (up, down, left, and right) starting from a randomly selected point. It then places an exit and scatters interactive features like coins and teleporters at random. The carvePath function’s use of recursion and randomness is essential to creating a maze that embodies the spirit of maze exploration and strategy and is both challenging and unique each time the game is played.
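The snippet above calls an addRandomItems helper that isn’t shown. A minimal sketch of what it might look like, assuming it scatters a given marker character onto random open floor cells without overwriting walls, the exit, or previously placed items:

```javascript
// Hypothetical sketch of addRandomItems (not the post's actual code):
// place `count` copies of `item` on random empty floor cells (' '),
// retrying until enough spots are found or an attempt cap is hit.
function addRandomItems(maze, item, count) {
  const rows = maze.length;
  const cols = maze[0].length;
  let placed = 0;
  let attempts = 0;
  while (placed < count && attempts < 1000) {
    const y = Math.floor(Math.random() * rows);
    const x = Math.floor(Math.random() * cols);
    if (maze[y][x] === ' ') { // only empty floor cells are eligible
      maze[y][x] = item;
      placed++;
    }
    attempts++;
  }
}
```

The attempt cap guards against an infinite loop when the maze has fewer open cells than items to place.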


NOTE: I have used a lot of functions rather than classes. The functionality of the game is 80% achieved, and from here on I will convert the functions to classes and complete the remaining 20% of the game/gameplay.

Reading Response Week 5: OpenAI Sora, Apple Vision Pro, Virtual Reality, and the Rise of Computer Vision

For this reading response I decided to take a different kind of approach: instead of acknowledging and kind of rephrasing what is said in the original piece, I decided to look at the topic through a different pair of lenses.

Starting off, when we talk about computer vision and this interaction between computer systems, machines, and humans, it always comes across as some kind of new concept. One example is virtual reality and the new Apple Vision Pro headset, which is actually an augmented reality (AR) headset, but I don’t want to get deeper into that. What if I told you that these concepts have actually been around since the 1970s? YES, THE 1970S, that is like 50 years ago!

To explore more about the concept you can read “What Should You Wear to an Artificial Reality?”, but to summarize, the author traces the development of this artificial reality world starting in the 1970s with his exhibition called METAPLAY, which involved two people playing with a ball that was not even a real one (it was just a projection on a screen). That quickly escalated to his projects VIDEOPLACE, DIGITAL DRIVING, and CRITTER, which all built on this idea of connecting people through computer vision and managing interaction in spaces that don’t really exist in real life.

On the other side, what I found interesting is the rise of AI systems in the past few years, specifically one that was announced in the past week: OpenAI Sora, an AI system that can supposedly make videos out of simple prompts, all without filming a single second. I am really interested in how this affects the computer vision, film, and interactive media worlds.


Reading Reflection- Week 5

In this passage the author talks about computer vision, a technology that allows computers to interpret and understand visual information from the surrounding environment.

What is so cool about this is how computers can understand what they see through the use of computer vision, like movements and objects. Let’s take the game “Limbo Time” as an example, where players use their hands and a computer tracks their movements to play the game. It’s fascinating how simple techniques like movement tracking can create such intriguing interactive experiences.

Another example that fascinates me is “Messa di Voce,” a performance where voices were transformed into images. It’s crazy how voices and sounds themselves transform into images. As a musician, this art piece really caught my attention and showed me the range of possibilities that are achievable using computers.

Lastly, I found it interesting how computer vision is becoming more accessible in multimedia tools. They talked about plug-ins for programs like Processing and Max/MSP/Jitter that let artists and designers easily add computer vision features to their projects. It’s like having a toolbox full of cool gadgets that make it easier to create interactive art or games, which for us could be useful for our future projects.

These examples show how artists use technology to make interactive projects in lots of different ways. In the age of Artificial Intelligence, it’s cool to see how these early ideas helped make the tech world we have now. They are like the building blocks for the interactive designs we see all the time.