Week 5: Midterm Progress

For my midterm project, I wanted to create something with a defined storyline as the main component pulling the user in. After struggling to decide between an interactive experience and a game, I went for the latter. My hope is to create a gamified storyline that takes the user through a beginning, middle, and end. Seeing as I have remained faithful to the butterfly motif for quite some time now, I decided it would be a shame to break the streak with this assignment.

Design and Concept

For the main design of the game, I was inspired by games like Super Mario, the offline Chrome Dinosaur Game, and the feel of the retro game design in this work by Arshiya Khattak. The main storyline is to help a butterfly find her way home to the open fields after getting lost in a city full of high-rise skyscrapers with no flowers in sight. The game starts with a scene of the butterfly fluttering through the main character’s window. After the scene ends, a prompt is displayed with directions on how to play the forthcoming game. For now, there will be two main levels, the first easier than the second. The character, carrying the butterfly on her back, will have to jump over obstacles and enemy objects to maintain her health. She will also have opportunities to regain some of her health by collecting rewards along the way. If the character manages to reach the end without losing all of her health, the ending scene plays, in which the butterfly is reunited with her family in a field of flowers, and a prompt congratulates the player with instructions on how to replay the game. In the process of visualizing this, I sketched out the following storyboard:

Code Design

To make my code easier to design and scale up, I plan to encapsulate everything, including the game experience itself, in classes. Therefore, it was helpful to sketch out class diagrams in a UML-ish fashion as follows:
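As a first pass at this structure, here is a minimal sketch of what the top-level Game class might look like; the state names and stub methods are placeholders from my planning, not final code:

class Game {
  constructor() {
    // "INTRO" -> "INSTRUCTIONS" -> "LEVEL1" -> "LEVEL2" -> "END"
    this.state = "INTRO";
  }

  update() {
    // delegate each frame to whichever scene is active
    if (this.state === "INTRO") this.showIntro();
    else if (this.state === "LEVEL1") this.runLevel(1);
    // ...the remaining states follow the same pattern
  }

  showIntro() {
    background(20);
    // play the opening scene here, then advance:
    this.state = "INSTRUCTIONS";
  }

  runLevel(n) {
    // draw the parallax background, player, enemies, and rewards for level n
  }
}

let game;

function setup() {
  createCanvas(600, 400);
  game = new Game();
}

function draw() {
  game.update();
}

This keeps the main sketch file down to a couple of lines and lets each scene or level grow independently.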

Anticipated Complexities/Challenges and Risk Mitigation

My strategy for mitigating risk was to implement the basic skeleton of the game levels to allow me to seamlessly scale up complexity later on. I also wanted to get the most time-consuming aspects of the game design, like collecting assets, out of the way so that I could focus on the game functionality this upcoming week.

Asset Collection

I knew one of the things I would struggle with was finding unified assets that were consistent with the feel of the game I was looking for. Thus, I dedicated some time to collecting spritesheets and level/scene backgrounds as I wanted to minimize having to design most components myself. After some digging, I collated the following:

Sprites:

(https://opengameart.org/content/oracles) 
(https://opengameart.org/content/butterfly)

Backgrounds:

(https://opengameart.org/content/forest-background)

(https://opengameart.org/content/cyberpunk-street-environment)

(https://www.deviantart.com/watashi-sensei/art/Bedroom-Pixel-628248413)

(https://www.freepik.com/free-vector/pixel-art-rural-landscape-background_49685498.htm#from_view=detail_serie)

Parallax Background Implementation

For the background of the levels, I wanted to incorporate a parallax effect to add dynamism to the backdrop of the experience. It has been some time since I last implemented this and I had forgotten the basic principles, so I wanted to get this part out of the way first. Each background is composed of multiple layers, and the basic idea is to have each layer loop back onto the screen at a different speed, creating a continuous animation with varying depth components.

class ParallaxBackground {
  constructor(layersImages, layersSpeeds, layersDir) {
    this.layersImages = layersImages;
    this.layersDir = layersDir;
    this.layersSpeeds = layersSpeeds;
    // Each layer is drawn twice (at x1 and x2) so one copy can wrap
    // around while the other is still on screen.
    this.layersX1 = [];
    this.layersX2 = [];
    for (let i = 0; i < this.layersImages.length; i++) {
      this.layersX1.push(0);
      this.layersX2.push(width);
    }
  }

  showBackground() {
    for (let i = 0; i < this.layersImages.length; i++) {
      image(this.layersImages[i], this.layersX1[i], 0, width, height);
      image(this.layersImages[i], this.layersX2[i], 0, width, height);
      if (this.layersDir[i] === 1) {
        // layer scrolls left; wrap a copy back to the right edge
        // once it has fully left the screen
        this.layersX1[i] -= this.layersSpeeds[i];
        this.layersX2[i] -= this.layersSpeeds[i];
        if (this.layersX1[i] < -width) {
          this.layersX1[i] = width;
        }
        if (this.layersX2[i] < -width) {
          this.layersX2[i] = width;
        }
      } else {
        // layer scrolls right; wrap copies back to the left edge
        this.layersX1[i] += this.layersSpeeds[i];
        this.layersX2[i] += this.layersSpeeds[i];
        if (this.layersX1[i] > width) {
          this.layersX1[i] = -width;
        }
        if (this.layersX2[i] > width) {
          this.layersX2[i] = -width;
        }
      }
    }
  }
}


Sprite Movements and Collisions

I also wanted to simulate the movement of objects on the screen to get a head start. I used placeholder ellipses for this process, which I will later replace with actual images. I used principles of velocity and acceleration to simulate the player’s vertical jumps when the space bar is pressed, and the horizontal movement of enemies across the screen:

class Player {
  constructor(x, y, ground) {
    this.x = x;
    this.y = y;
    this.vy = 0; // vertical velocity
    this.ground = ground; // y position of the ground
    this.gravity = 1; // downward acceleration applied while airborne
    this.jumpPower = 20; // initial upward velocity of a jump
    this.collisions = 0;
    this.collidedEnemy = [];
  }

  move() {
    this.y += this.vy;
    if (this.y < this.ground) {
      // player is airborne: apply gravity
      this.vy += this.gravity;
    } else {
      // player has landed: snap to the ground and stop falling
      this.vy = 0;
      this.y = this.ground;
    }
  }

  jump() {
    // only allow a jump when standing on the ground (no double jumps)
    if (this.y >= this.ground) {
      this.vy = -this.jumpPower;
    }
  }

  show() {
    // placeholder ellipse; will be replaced with a sprite later
    ellipse(this.x, this.y, 50, 100);
    this.move();
  }
}

class Enemy {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vx = -5; // enemies travel right-to-left across the screen
    this.collided = false;
  }

  move() {
    this.x += this.vx;
  }

  show() {
    // placeholder ellipse; will be replaced with a sprite later
    ellipse(this.x, this.y, 50, 50);
    this.move();
  }
}

I also wanted to implement the collision mechanism, essentially detecting when the boundary of the player touches the boundary of an object and classifying that as a collision:

detectCollision(enemy) {
  // extra buffer around the sprites so collisions feel fair
  let buffer = 10;
  if (!enemy.collided) {
    // distance between the center of the character and that of the enemy object
    let d = dist(this.x, this.y, enemy.x, enemy.y);
    // collision detected: distance is less than the sum of the objects'
    // radii (25 each for the placeholder ellipses) plus the buffer
    if (d < buffer + 25 + 25) {
      this.collisions += 1;
      enemy.collided = true;
      return true;
    }
  }
  return false; // no new collision this frame
}
Current Progress

Week 5: Reading Response – Computer Vision for Artists and Designers

It is really hard to outdo interactions that map the objects existing in our physical reality onto our screens for us to warp and play around with, so it was not surprising to me that computer vision techniques are popular in the production of interactive art. One of the things that surprised me, however, was how early the adoption of computer vision principles occurred – as early as the late 1960s. In addition to introducing a brief history of computer vision in interactive art, the reading also delineates simple computer vision algorithms that digital creators can exploit. Moreover, while the article acknowledges the complexity of the field of computer vision, it introduces the audience to a wide range of multimedia tools that make computer vision more accessible to beginners who are not familiar with machine vision.

Given the power that computer vision grants creators to recreate, manipulate, and reshape physical reality, artworks created with computer vision techniques have the potential to represent the multiple layers of complexity that exist in the world around us. It was interesting – and shocking at times – to see how different sociopolitical themes can be expressed using the power of computer vision. This is demonstrated best by the pieces on surveillance that the article mentions. The idea of using belts controlled by a tracking system that rotates according to the motion of the audience to highlight surveillance by an “empty crowd” in Rafael Lozano-Hemmer’s installation, Standards and Double Standards, exemplifies how well computer vision can “metaphorize” our physical reality. However, additional considerations, and potentially greater risks, come with more capability. An example of this is the Suicide Box, which captured video footage of suicide jumpers from the Golden Gate Bridge. It was jarring for me to see a tragic phenomenon turned into a piece of “art”, and it is a testament to how computer vision technology, like any tool with great computational power, must be accompanied by a recognition of the ethical implications of its use.

I also enjoyed how the reading emphasized the importance of using computer vision techniques that are compatible with the surrounding physical environment, as no particular algorithm is truly generalizable to all video/image formats. This may take the form of making the physical environment conducive to optimizing the workings of a particular algorithm, or choosing the algorithm best suited to a given environment. I loved learning about the different techniques that designers employ to achieve this, as it effectively extends the idea of Human-Centered Interaction to Environment-Centered Interaction and truly captures the unleashing of interactive potential that computer vision affords digital creators.

Midterm Project Progress (The Dungeon Maze)

MAZE GAME

Concept

The idea behind the game is that the player must make their way through a sequence of difficult mazes, each one generated randomly by an algorithm so that every maze poses a different challenge. The main objective is for the player to navigate from the starting point to the maze’s exit.

Gold coins are scattered throughout the maze and act as collectibles for players to find. Collecting them is also required to finish the game: 90% of the gold coins spawned in the maze must be collected before the player can use the exit. Teleporters, however, can be used without this condition.
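As a sketch of how that exit rule might be checked (the function and parameter names here are placeholders, not the final code):

// The exit only opens once 90% of the spawned coins have been collected;
// teleporters work unconditionally.
function canExit(collectedCoins, totalCoins, standingOnExit) {
  return standingOnExit && collectedCoins >= Math.ceil(0.9 * totalCoins);
}

function canUseTeleporter(standingOnTeleporter) {
  return standingOnTeleporter; // no coin requirement
}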

There are teleporters placed in the maze that serve as an interesting twist. Stepping onto a teleporter transports the player to a whole new maze, which changes the difficulty and forces them to adjust their approach on the fly.

Every maze in the game has a built-in timer, which creates a sense of urgency and tests the player’s ability to think quickly and make decisions under pressure. The player also has limited vision, which makes it difficult to plan the next move and forces them to memorize their path to complete the maze.
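Here is one possible way, as a sketch rather than the actual implementation, to render that limited vision: only reveal tiles within a fixed radius of the player (the radius value and function names are assumptions).

const VISION_RADIUS = 3; // in tiles; an assumed value

function drawMazeWithVision(maze, playerRow, playerCol, tileSize) {
  for (let y = 0; y < maze.length; y++) {
    for (let x = 0; x < maze[y].length; x++) {
      // tiles outside the vision radius are drawn black (unseen)
      const visible = dist(x, y, playerCol, playerRow) <= VISION_RADIUS;
      fill(visible ? (maze[y][x] === '#' ? 60 : 220) : 0);
      rect(x * tileSize, y * tileSize, tileSize, tileSize);
    }
  }
}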

Design 

The visual and thematic components of this maze game are designed to put the player in the shoes of a dungeon master.

This image is used for the background of the maze.

This is a snippet of the maze. The design is a work in progress, as I am focusing on getting the basic functionality to work first. Currently, the big yellow circle is the player, the small bronze-colored circles are the gold coins, the blue circles are the teleporters, and the red square is the exit. I am planning to replace these with avatars or icons.

Frightening Part

The most frightening part when I started to code was the main functionality of the game: the maze itself. I wanted random mazes to be generated every time the user plays the game or is teleported, so that each run poses a fresh challenge.

The use of randomness and the recursive structure of the path-carving process is what gives the procedure its complexity. Recursive functions call themselves, which can make them challenging to study and comprehend.

The code below shows how I tackled this issue:

function generateRandomMaze(rows, cols) {
  let maze = new Array(rows);
  for (let y = 0; y < rows; y++) {
    maze[y] = new Array(cols).fill('#');
  }

  function carvePath(x, y) {
    const directions = [[1, 0], [-1, 0], [0, 1], [0, -1]];
    // Shuffle directions to ensure randomness
    shuffle(directions); 

    directions.forEach(([dx, dy]) => {
      const nx = x + 2 * dx, ny = y + 2 * dy;

      if (nx > 0 && nx < cols - 1 && ny > 0 && ny < rows - 1 && maze[ny][nx] === '#') {
        maze[ny][nx] = ' ';
        // Carve path to the new cell
        maze[y + dy][x + dx] = ' '; 
        carvePath(nx, ny);
      }
    });
  }

  // Randomize the starting point a bit more conservatively
  const startX = 2 * Math.floor(Math.random() * Math.floor((cols - 2) / 4)) + 1;
  const startY = 2 * Math.floor(Math.random() * Math.floor((rows - 2) / 4)) + 1;

  // Ensure starting point is open
  maze[startY][startX] = ' '; 
  carvePath(startX, startY);

  // Set exit
  maze[rows - 2][cols - 2] = 'E';

  // Place teleporters and coins after maze generation to avoid overwriting
  // (addRandomItems is a helper defined elsewhere in the sketch)
  addRandomItems(maze, 'T', 3);
  addRandomItems(maze, 'C', 5);

  return maze;
}

// Fisher–Yates shuffle, performed in place (shadows p5's built-in shuffle)
function shuffle(array) {
  for (let i = array.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [array[i], array[j]] = [array[j], array[i]];
  }
}

The generateRandomMaze() function creates a random, intricate maze by first populating the grid with walls and then using a depth-first search to recursively carve pathways. To ensure variation in the pathways formed, it shuffles the possible directions (up, down, left, and right) starting from a randomly selected point. It then improves the gameplay experience by placing an exit and dispersing interactive features like coins and teleporters at random. The carvePath function’s use of recursion and randomness is essential to creating a maze that embodies the spirit of maze exploration and strategy, and that is both challenging and unique each time the game is played.


NOTE: I have used a lot of functions rather than classes. The functionality of the game is about 80% complete; from here on, I will convert the functions to classes and finish the remaining gameplay.

Reading Response Week 5: OpenAI Sora, Apple Vision Pro, Virtual Reality, and the Rise of Computer Vision

For this reading response, I decided to take a different kind of approach: instead of acknowledging and rephrasing what is said in the original piece, I decided to look at the topic through a different pair of lenses.

Starting off, when we talk about computer vision and this interaction between computer systems, machines, and humans, it always comes across as some kind of new concept. One example is virtual reality and the new Apple Vision Pro headset (which is actually an augmented reality (AR) headset, but I don’t want to get deeper into that). What if I told you that these concepts have actually been around since the 1970s, YES, THE 1970S, that is like 50 years ago!

To explore the concept further, you can read Myron Krueger’s “What Should You Wear to an Artificial Reality?”, but to summarize: the author traces the development of this artificial reality world starting from the 1970s with his exhibition METAPLAY, which involved two people playing with a ball that was not even real (it was just a projection on a screen). That quickly escalated to his projects VIDEOPLACE, DIGITAL DRIVING, and CRITTER, which all built on this idea of connecting people through computer vision and managing interaction in spaces that don’t really exist in real life.

On the other hand, what I found interesting is the rise of AI systems in the past few years, specifically one that was announced in the past week: OpenAI’s Sora, an AI system that can supposedly make videos out of simple prompts, all without filming a single second. I am really interested in how this affects the computer vision, film, and interactive media worlds.


Reading Reflection- Week 5

In this passage, the author talks about computer vision, a technology that allows computers to interpret and understand visual information from the surrounding environment.

What is so cool about this is how computers can understand what they see through the use of computer vision, like movements and objects. Let’s take the game “Limbo Time” as an example, where players use their hands and a computer tracks their movements to play the game. It’s fascinating how such simple techniques can create intriguing interactive experiences.

Another example that fascinates me is “Messa di Voce,” a performance where voices were transformed into images. It’s striking how voices and sounds themselves transform into images. As a musician, this art piece really caught my attention and showed me the range of possibilities that are achievable using computers.

Lastly, I found it interesting how computer vision is becoming more accessible in multimedia tools. They talked about plug-ins for programs like Processing and Max/MSP/Jitter that let artists and designers easily add computer vision features to their projects. It’s like having a toolbox full of cool gadgets that make it easier to create interactive art or games, which for us could be useful for our future projects.

These examples show how artists use technology to make interactive projects in lots of different ways. In the age of Artificial Intelligence, it’s cool to see how these early ideas helped make the tech world we have now. They are like the building blocks for the interactive designs we see all the time.


Midterm Progress Update

My game is inspired by the Club Penguin Puffle Park games, specifically Puffle Roundup. The aim of the game is to herd as many Puffles as possible into a designated area (the cage) in under 120 seconds. The more you gather, the more points you earn. The twist is that if you’re not careful with the movement of the mouse, you can push a Puffle away and make it escape, which loses you points.
The most complex parts of the project will be finding images/assets for the game, making the mouse interactivity smooth, and probably the timer and point count system. I did some research and found a tool that lets you pack your images together for use in the code (http://free-tex-packer.com/), which should be very useful for the clock timer in my game.
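As a starting point for the timer and score display, here is a minimal p5.js sketch using millis(); the values and layout are assumptions, not the final code:

let startTime;
let score = 0;
const GAME_LENGTH = 120; // seconds, matching the 120-second round

function setup() {
  createCanvas(600, 400);
  startTime = millis();
}

function draw() {
  background(200);
  // seconds elapsed since the round started
  const elapsed = (millis() - startTime) / 1000;
  const remaining = max(0, GAME_LENGTH - elapsed);
  text(`Time: ${ceil(remaining)}s  Score: ${score}`, 10, 20);
  if (remaining <= 0) {
    noLoop(); // round over
  }
}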

Assignment #5 – Progress on the midterm

For my Midterm Assignment, I decided to go a little bit back in time to the Windows XP era, basically the Windows of my childhood. Just hearing the sounds of a Windows XP computer turning on brings back so many memories.

My midterm is going to be exactly that: a Windows XP emulator with a twist. It has errors all over it and gives us the famous Blue Screen of Death. Fun, isn’t it? Let me show you my progress so far.

For the opening screen I decided to have a button which would be used to “turn on the computer”. It looks something like this:

Furthermore, when we click the button, the original Windows XP sound plays, and I have also added the original background and the My Computer icon:

Snippet of code where I load these elements upon a click:

if (mouseIsPressed === true) {
  noLoop();
  clickSound.play();
  noTint();
  imageMode(CENTER); // set the mode before drawing so the image is centered
  image(windowsImage, width / 2, height / 2, 400, 400);
  windowsSound.play();
  image(computerIcon, width / 10, height / 10, 60, 60);
}
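Next up is the twist itself. Here is a rough sketch, not my actual code, of how the cascade of errors could eventually crash into the Blue Screen of Death; the stage variable, click threshold, and drawing are all placeholders:

let stage = "desktop"; // hypothetical state: "desktop" -> "bsod"
let errorCount = 0;

function setup() {
  createCanvas(800, 600);
}

function mousePressed() {
  if (stage === "desktop") {
    errorCount++; // every click spawns another fake error window
    if (errorCount > 5) {
      stage = "bsod"; // enough errors: crash to the Blue Screen of Death
    }
  }
}

function draw() {
  if (stage === "desktop") {
    // desktop drawing (background, icons, error popups) omitted here
  } else {
    background(0, 0, 170); // the classic XP blue
    fill(255);
    text("A problem has been detected and Windows has been shut down...", 20, 40);
  }
}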


I know there is a long way to go, but for now you can enjoy the sketch using the window below.

Reading Reflection – Week 5

The reading on “Computer Vision for Artists and Designers” discusses how computer vision is becoming more accessible to students/artists due to easier-to-use software and open-source communities. Out of the many projects showcased, I was really impressed (and slightly creeped out) by Rafael Lozano-Hemmer’s installation Standards and Double Standards (2004), where belts were controlled by a computer vision-based tracking system, causing the buckles to rotate automatically to follow the public. It was really interesting to see this type of interaction that, in a way, isn’t intentional, direct, or digital. However, when it came to the project Suicide Box by the Bureau of Inverse Technology (1996), where a motion-detection video system was utilized to record real data of suicides, I found that ethically concerning. You don’t need to record such events to store data; yet, on the other hand, it might serve as a security measure for people who have presumably gone missing. It is a pretty controversial project, to say the least.

Field of view comparison of a conventional and Telecentric Lens. Note the conventional lens’s angular field of view and the Telecentric Lens’s zero angle field of view.

Furthermore, the reading discussed the different kinds of problems that vision algorithms have been developed to address, and their basic mechanisms of operation, such as detecting motion, detecting presence, object tracking, and basic interactions. All of which the designers of C2’s doors should have taken into account. Moreover, something new I have come across is the term “Telecentric lenses,” which are lenses used to improve object recognition by maintaining constant magnification regardless of distance. Yet, I came to find out that it is high in cost, large in size, and heavy in weight, in addition to causing some distortion issues. So, I wonder when it is appropriate to use it or if it is smart to do so to begin with. All in all, this was a very interesting read that showed me that interaction can be more than just key or computer-based; rather, it’s more about innovative ways to bridge the two different worlds we live in! Last but not least, I wonder where the line is drawn when it comes to privacy and motion/facial detection. Have we as a society come to accept that we are being watched & listened to all the time, whether it’s your phone’s facial recognition or the immediate response after “Hey Siri!” ?

Computer Vision Reading Response – Redha

The main point that stood out to me from this week’s reading was the wide range of prospective use cases for computer vision.

To begin with, two artworks stood out to me, each for a different reason. Both, however, expanded the scope of possibilities for me concerning the applications of computer vision within the context of art.

The first of these artworks is Rafael Lozano-Hemmer’s Standards and Double Standards (2004). This work piqued my interest due to its incorporation of space and inanimate objects, which are activated with the help of computer vision. Personally, I find the overlap between the digital and the tangible to be an interesting area of focus, so this work immediately caught my attention for its symbolic repurposing of an everyday object, which is then given a sense of agency through programming supported by computer vision. Moreover, this work allowed me to consider the potential of using computer vision without requiring a visual output based on the data that the program is using. For example, in Krueger’s Videoplace, the user can see a visualisation of the input that the computer vision system is receiving (their silhouette), and it becomes central to the work. Conversely, Standards and Double Standards makes use of the input internally in order to trigger another action. Finally, I definitely appreciated that this work does not feature a screen (!), as I feel that screens have become an overly predictable method of presenting interactive art.

Rafael Lozano-Hemmer, "Standards and Double Standards," 2004 on Vimeo

That being said, the next work I have identified is Christian Moeller’s Cheese (2003), an installation that solely presents screen-based work. While I do feel that this installation is an exception to the statement above (due to its bold imagery and simple presentation, and the fact that the work itself is not interactive), what stood out to me was not the effectiveness of the work itself but the technical implications of the computer vision system that made the work possible. The reading mentions how sophisticated that system needed to be in order to recognise slight changes in emotion and provide a response (albeit a simple one). Considering the exponential development of technology, and the fact that the work was produced over two decades ago, one can’t help but wonder what can be done with facial recognition technology today.

Cheese - Christian Moeller

This has led me to ponder what is possible with facial recognition technology (and computer vision as a whole) within the artistic space today. I was reminded of an installation produced in 2019, which I had looked at for another class, entitled Presence and Erasure by Random International. As part of my presentation on this work, I discussed the concept of consent within interactive art and, as an Arab and a Muslim, I immediately recognised that such a work may not be able to exist in certain parts of the world (such as this one) as a result of social and cultural beliefs. Ultimately, going down this rabbit hole has led me to consider the endless possibilities we have with today’s technology, but it has also helped me understand that just because you can pursue an idea does not always mean that you should.

RANDOM INTERNATIONAL

Raya Tabassum: Midterm Project Progress


Concept:
I’m trying to make a version of the “Super Mario” game with one player who walks along the path collecting gold coins; the game ends when the player collides with an enemy. The player has to jump (using the UP key) to collect coins, and there will be sounds incorporated with each jump and coin collection. When the game is over, a screen saying “Game Over” appears, and if the player wants to play again they can press the SHIFT key to restart the game. A scoring system is displayed on the screen too.
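As a rough sketch of that key handling (player matches the Player class below, while jumpSound and resetGame are hypothetical helpers I haven’t written yet):

function keyPressed() {
  // UP triggers a jump, but only if the player isn't already airborne
  if (keyCode === UP_ARROW && !player.jumping) {
    player.jumping = true;
    player.jumpStrength = player.jumpStrengthMax;
    jumpSound.play(); // hypothetical p5.sound object loaded in preload()
  }
  // SHIFT restarts the game after a game over
  if (keyCode === SHIFT && !win) {
    resetGame(); // hypothetical helper that re-initializes player, enemies, coins, and score
  }
}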

Difficulty/Challenges:
The main challenge will be moving the sprite smoothly along the background, as shown in the sketch below. I want to design the background myself and make the player and enemy designs too, to make the game unique. I expect that coordinating all the elements of the game to respond to user interaction, so that the game runs properly, will also be a challenge.
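For the smooth scrolling specifically, one common trick (sketched below under assumptions; bgImg would be loaded in preload()) is to offset the background and draw it twice so it loops while the sprite stays near a fixed x position:

let bgImg; // assumed to be loaded in preload()
let bgX = 0;
const SCROLL_SPEED = 3;

function draw() {
  // shift the background left and wrap the offset so it loops seamlessly
  bgX = (bgX - SCROLL_SPEED) % width;
  image(bgImg, bgX, 0, width, height);
  image(bgImg, bgX + width, 0, width, height);
  // the player sprite is drawn at a fixed x, so it appears to run along the background
}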

Visualization:
I want my game screen to look like this (a preliminary rather than definite design, just drawn in Procreate, as I’m currently using dummy sprites to run the game code):


Coding:
There’ll be a Player class, an Enemy class, and a Coin class. I’ve designed the basic code for collision etc. Here are some highlighted code snippets:

The Player class:

class Player {
  constructor() {
    this.playerYOnGround = 550;
    this.playerSize = 60;
    this.bgGroundHeight = 45;
    this.animationSlowDown = 8; // advance the walk cycle every 8 frames
    this.width = 1000;
    this.jumpHeight = 0;
    this.jumpStrength = 0;
    this.jumpStrengthMax = 5;
    this.gravity = 0.1;
    this.jumping = false;
    this.playerImg = [];
    this.numberPlayerImg = 3; // number of walk-cycle frames (guy-1..guy-3)
    this.playerImgIndex = 0;
    // note: ideally these would be loaded in preload()
    for (let i = 1; i <= this.numberPlayerImg; i++) {
      this.playerImg.push(loadImage(`guy-${i}.png`));
    }
  }

  initPlayer() {
    xpos = (this.width * 0.5) - (this.playerSize * 0.5);
    ypos = this.playerYOnGround;
  }

  animatePlayer() {
    if (this.jumping) {
      // decay the jump strength and apply gravity
      this.jumpStrength = (this.jumpStrength * 0.99) - this.gravity;
      this.jumpHeight += this.jumpStrength;
      if (this.jumpHeight <= 0) {
        // landed: reset the jump state
        this.jumping = false;
        this.jumpHeight = 0;
        this.jumpStrength = 0;
      }
    }

    ypos = this.playerYOnGround - this.jumpHeight;

    if (this.jumping) {
      // single airborne frame while jumping
      image(this.playerImg[0], xpos, ypos);
    } else {
      // cycle through the walk frames at a slowed-down rate
      image(this.playerImg[this.playerImgIndex], xpos, ypos);
      if (frameCount % this.animationSlowDown === 0) {
        this.playerImgIndex = (this.playerImgIndex + 1) % this.numberPlayerImg;
      }
    }
  }
}

When the player collides with an enemy:

if (dist(this.enemyX, this.enemyY, xpos, ypos) <= (this.playerSize / 2 + this.enemySize / 2)) {
  win = false;
}

When the player collects a coin:

if (dist(this.coinX, this.coinY, xpos, ypos) <= (this.playerSize / 2 + this.coinSize / 2)) {
  this.initCoin();
  score += 10;
}