Afra Binjerais – Reading response 5

From what I understand, “computer vision” refers to a broad range of algorithms that let computers make sophisticated analyses of digital images and videos. The reading begins by defining computer vision and then explores its use in interactive arts, emphasizing Marvin Minsky’s early miscalculation of the field’s complexity. The reading was educational, especially the portion highlighting how many different kinds of artworks use computer vision. These works range widely in style, from abstract and formal pieces to ones with comedic or political overtones, which is important for understanding the different ways in which art can communicate.

The other artists mentioned explore a wide spectrum of human movement, from close studies of facial expressions to broader gestures and even whole-body movements. Overall, the reading underscores the rich potential of computer vision in the realm of interactive art, showcasing its versatility and impact on artistic expression.

Week 5 – Reading response Shereena AlNuaimi

“Computer Vision for Artists and Designers” by Golan Levin introduces computer vision as a set of algorithms enabling computers to intelligently interpret digital images and videos, emphasizing its recent accessibility to novice programmers and interactive-media artists. It demystifies computer vision for beginners, focusing on its applications in interactive art, elementary computer vision techniques, physical optimization for computer vision, and multimedia authoring tools.

The author also explores the evolution of computer vision as a medium for artistic expression, outlining its recent democratization and tracing its historical origins. He highlights how computer vision methods are being applied to a wide range of artistic media, video games, and home automation systems. In addition, he deconstructs basic computer vision methods, including frame differencing, background subtraction, brightness thresholding, and basic object tracking, and explains how to apply them in interactive media. He also stresses how crucial physical optimization is for computer vision and offers suggestions for setting up conditions that support reliable algorithmic performance.
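The techniques listed here can be sketched independently of any particular toolkit; frame differencing, for instance, reduces to comparing successive frames pixel by pixel and treating a large total difference as motion. A minimal plain-JavaScript sketch (the function names and the threshold value are illustrative, not from the article):

```javascript
// Frame differencing: sum of absolute pixel differences between two
// grayscale frames, given as flat arrays of 0-255 values. A total
// above `threshold` suggests something moved between the frames.
function frameDifference(prev, curr) {
  let total = 0;
  for (let i = 0; i < curr.length; i++) {
    total += Math.abs(curr[i] - prev[i]);
  }
  return total;
}

function motionDetected(prev, curr, threshold) {
  return frameDifference(prev, curr) > threshold;
}

// Example: a static scene vs. the same scene with one changed pixel
const before = [10, 10, 10, 10];
const after = [10, 10, 200, 10];
console.log(frameDifference(before, after)); // 190
console.log(motionDetected(before, after, 50)); // true
```

In a real sketch the frames would come from a webcam capture rather than hand-written arrays, and the threshold would need tuning to the lighting conditions, which is exactly the kind of physical optimization the article discusses.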

Furthermore, he also examines multimedia authoring tools for computer vision, including major software development environments and their corresponding computer vision toolkits or plug-ins, such as Processing, Macromedia Director, and Max/MSP/Jitter. Additionally, it provides an example of a workshop project named LimboTime that shows how computer vision techniques can be used to create an interactive game. LimboTime highlights the wider application of computer vision in art and design by demonstrating the accessibility and possibilities for non-programmers to create vision-based interactive systems.

Overall, this reading offers us a comprehensive overview of computer vision, catering to novice programmers and artists, and highlights its growing significance in interactive art and design, offering practical insights and resources for implementing computer vision techniques in various artistic and design contexts.

Midterm Progress/ Reading response – Shaikha AlKaabi

Midterm Project Progress:

Whimsical Garden: Seasons of Growth is a point-and-click game where players nurture a magical garden through the changing seasons. The core gameplay involves planting, watering, and harvesting a variety of plants, each with unique needs. The game is a garden simulation where the player’s objective is to grow and manage a garden. You start by planting a flower in a location of your choice within a designated area. As the game progresses, you need to water the plants to ensure they grow. If a plant doesn’t receive water for too long, it will wither and die. Players must balance their attention between different plants, making strategic decisions about which plants to water with their limited resources. The challenge lies in keeping all plants alive and thriving, which becomes more difficult as the garden grows in size and complexity. The game ends when the player successfully maintains the garden for a certain period, or if all the plants die. There’s an option to restart the game after completion to try different strategies or improve your previous score.

Game Structure:

1. Start Screen: The game opens with a start screen that displays instructions for playing. This screen explains the basics of gardening mechanics, seasonal changes, and the objectives of the game. The start screen awaits user input (such as pressing a ‘Start’ button or hitting a specific key) to begin the game.

2. Gameplay Session: Once started, the player enters the main gameplay area, where they manage their garden through the various seasons, facing challenges like planting, watering, harvesting, and solving environmental puzzles.

3. End Session and Restart: After a gameplay session is completed (which could be defined by reaching a certain goal or surviving through a set number of seasons), the game transitions to an end screen. This screen summarizes the player’s achievements and offers the option to start a new session. Choosing to start a new session resets the game environment, allowing players to begin with a fresh garden.

Challenges:

1. Managing Game States: Implementing a system to manage different game states (e.g., start screen, active game, end screen) is crucial. This requires careful design to ensure smooth transitions between states based on user inputs.

2. Session Reset Functionality: Developing a way to reset the game environment for a new session without restarting the sketch poses a challenge. This involves resetting game variables, clearing the garden area, and preparing the game for a new set of seasonal cycles.

3. User Input Handling: Creating an intuitive and responsive input system for navigating between the start screen, gameplay, and restart option is essential. This includes implementing event listeners for keyboard, mouse, or button inputs that trigger the correct actions within the game’s flow.

By addressing these challenges, Whimsical Garden: Seasons of Growth aims to offer a rich and engaging gameplay experience that is accessible, educational, and enjoyable for a wide range of players.

let gameState = 'start'; // Possible states: 'start', 'game', 'end'
let garden = [];
let restartButton;

function setup() {
  createCanvas(600, 600);
  textAlign(CENTER, CENTER);
  
  // Restart button (hidden initially)
  restartButton = createButton('Restart');
  restartButton.position(width / 2 - 50, height / 2 + 20);
  restartButton.mousePressed(restartGame);
  restartButton.hide();
}

function draw() {
  background(161, 196, 170);
  
  if (gameState === 'start') {
    drawStartScreen();
  } else if (gameState === 'game') {
    drawGarden();
  } else if (gameState === 'end') {
    drawEndScreen();
  }
}

function drawStartScreen() {
  textSize(32);
  text('Whimsical Garden: Seasons of Growth', width / 2, height / 3);
  textSize(25);
  text('Click to start', width / 2, height / 2);
}

function drawGarden() {
  for (let plant of garden) {
    // Simple representation of plants
    fill(20, 180, 60);
    ellipse(plant.x, plant.y, 20, 20);
  }
  
  // Example end condition: 5 plants
  if (garden.length >= 5) {
    gameState = 'end';
    restartButton.show();
  }
}

function drawEndScreen() {
  background(47, 54, 50);
  textSize(50);
  fill(184, 46, 64);
  text('Garden is Full!', width / 2, height / 3);
  textSize(25);
  text('Restart to play again', width / 2, height / 2);
}

function mousePressed() {
  if (gameState === 'start') {
    gameState = 'game';
  } else if (gameState === 'game' && mouseY < height - 100) {
    // Allow planting only within the game area (excluding UI elements, e.g., buttons)
    plantSeed(mouseX, mouseY);
  }
}

function plantSeed(x, y) {
  // Add a new plant to the clicked position in the garden
  garden.push({x: x, y: y});
}

function restartGame() {
  garden = []; // Clear the garden
  gameState = 'start'; // Set game state back to start
  restartButton.hide();
}
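The watering and withering mechanic described above isn’t in this skeleton yet; one way it might work (the class shape and the 300-frame wither limit are made-up placeholders, not part of the sketch) is to track how many frames have passed since each plant was last watered:

```javascript
// Hypothetical Plant helper for the watering mechanic: tick() would be
// called once per draw() frame, and water() when the player clicks a plant.
class Plant {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.framesSinceWatered = 0;
    this.alive = true;
  }

  water() {
    if (this.alive) this.framesSinceWatered = 0; // reset the wither timer
  }

  tick(witherLimit = 300) {
    if (!this.alive) return;
    this.framesSinceWatered++;
    if (this.framesSinceWatered > witherLimit) this.alive = false; // withered
  }
}

const p = new Plant(100, 100);
for (let i = 0; i < 301; i++) p.tick();
console.log(p.alive); // false: unwatered for too long
```

The garden array would then hold Plant instances instead of bare `{x, y}` objects, and drawGarden could color each plant by its `framesSinceWatered` value to telegraph thirst to the player.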


Reading Response: 

We’ve come a long way in how we interact with computers, starting with some really early virtual reality that let people play in digital spaces just by moving around. Myron Krueger was one of the first to make this happen with his Videoplace, and it was a big step forward because it made the connection between people and computers feel natural and easy.

Then there’s the Bureau of Inverse Technology’s Suicide Box, which is a lot more serious. It’s a piece of art that makes us think about tough topics like sickness and the choice to end one’s life. It’s a strong reminder that technology can do more than just entertain us; it can also make us think deeply about life’s big questions.

Understanding how these systems work is pretty cool too. They can tell the difference between what’s important for the interaction and what’s just background noise. They look for the brightest spot they can find to keep track of what the user is doing. This is really important for making games and art installations where you can move around and have the game or art react to what you’re doing.
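The brightest-spot tracking described here boils down to scanning a frame for its maximum pixel. A plain-JavaScript sketch of the idea (the helper name and the tiny hand-written frame are illustrative):

```javascript
// Scan a grayscale frame (flat array, row-major) and return the
// coordinates of the brightest pixel, e.g. a flashlight or LED marker.
function brightestPixel(pixels, width) {
  let best = 0;
  for (let i = 1; i < pixels.length; i++) {
    if (pixels[i] > pixels[best]) best = i;
  }
  return { x: best % width, y: Math.floor(best / width) };
}

// 3x2 frame; the brightest value (250) sits at column 2, row 1
const frame = [10, 20, 30,
               40, 50, 250];
console.log(brightestPixel(frame, 3)); // { x: 2, y: 1 }
```

An installation would run this every frame on the camera image and move a cursor or sprite to the returned position, which is why these pieces work best when the tracked light is clearly brighter than everything else in the scene.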

Jonah Warren’s Master’s thesis takes this idea further. He found new ways for games to understand our body movements. He talks about direct ways to interact with games, like touching, covering, or mirroring, which are now common in games that use body movement for control.

From the early days of virtual reality to the deep questions raised by interactive art, and the nitty-gritty of tracking movement, it’s clear that the way we communicate with computers has many sides to it. Technology keeps getting better, and it changes how we play, create, and think about the world. The field of human-computer interaction is really about connecting with each other, sharing ideas, and getting to grips with both the real and digital worlds we live in.

Midterm Progress / Reading Response – Hamdah AlSuwaidi

 Midterm Progress:

For my midterm project, I’m taking on the challenge of designing a captivating costume exhibit for The MET’s Costume Institute. The exhibit will showcase the evolution of haute couture through iconic designs from renowned fashion houses, spanning different eras and design philosophies.

The overarching theme of the exhibit is “Timeless Elegance: A Journey Through Fashion History.” My aim is to highlight the enduring allure and innovation of haute couture, exploring how designers have redefined beauty and style over the decades.

1. Homepage:
– The homepage welcomes visitors to the online exhibit with a visually appealing layout featuring the exhibit’s title, a brief description, and navigation options.

2. Navigation Menu:
– A navigation menu at the top or side of the webpage allows visitors to easily access different sections of the exhibit, including:
– Explore: Provides access to the main exhibit layout where visitors can view featured designers and garments.
– Audio Guides: Offers a collection of audio commentary on select pieces.
– Search: Enables visitors to search for specific designers, garments, or fashion movements.
– About: Provides information about the exhibit, its curators, and contributors.

3. Explore Section:
– Clicking on the “Explore” option takes visitors to the main exhibit layout, which may resemble a gallery or virtual space.
– Visitors can navigate through different areas of the exhibit by clicking or tapping on designated hotspots or arrows.
– Clicking on a designer’s name or garment reveals detailed information, including descriptions, historical context, and images.

4. Audio Guides:
– The “Audio Guides” section presents visitors with a curated collection of audio commentary on select pieces within the exhibit.
– Visitors can listen to the audio commentary by clicking on specific garments or audio icons associated with them.

5. Search Functionality:
– The “Search” option allows visitors to search for specific designers, garments, or fashion movements.
– Visitors can enter keywords or phrases into the search bar to find relevant content within the exhibit.
– Search results display relevant garments, designers, or topics, with clickable links to access more information.

6. Interactive Elements:
– Throughout the exhibit, interactive elements such as clickable images, videos, and multimedia presentations provide additional context and engagement.
– Visitors can interact with these elements by clicking or tapping on them to access related content or animations.


7. Responsive Design:
– The online exhibit is designed to be responsive, ensuring a seamless experience across different devices and screen sizes.
– Whether visitors access the exhibit on a desktop computer, tablet, or smartphone, the layout and functionality adapt to provide an optimal viewing experience.

  1. Maintaining Engagement: Keeping visitors engaged throughout their online exhibit experience is crucial. Unlike in-person exhibits where visitors may spend hours exploring, online visitors may have shorter attention spans. Designing captivating visuals, interactive elements, and compelling content is essential to maintain visitor interest and encourage exploration.
  2. Audio and Video Integration: Integrating audio and video content into the exhibit may present challenges such as ensuring cross-browser compatibility, synchronizing multimedia elements with other interactive elements or animations, and optimizing playback performance for smooth audio/video streaming.
let hangerImage;
let titleSize = 40; // Decreased the title size
let subtitleSize = 20; // Decreased the subtitle size

function preload() {
  hangerImage = loadImage('Hanger.png'); // Load hanger image
}
function setup() {
  createCanvas(windowWidth, windowHeight);
  textSize(titleSize);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(255); // Set background color to white

  if (mouseIsPressed || mouseX != pmouseX || mouseY != pmouseY) {
    // Display hanger image on mouse hover
    image(hangerImage, mouseX, mouseY, 70, 50);
  }

  // Title
  fill(166, 0, 0); // Set text color to red
  textSize(titleSize);
  text("Welcome to Timeless Elegance", width / 2, height / 2 - 50);

  // Subtitle
  fill(0); // Set text color to black
  textSize(subtitleSize);
  text("Touch the screen to begin your journey", width / 2, height / 2 + 20);
}

function mouseClicked() {
  // Trigger next screen or action when the screen is touched/clicked
  // For example, navigate to the main exhibit layout
}

The hanger icon serves as a visual representation of fashion and garment design, reinforcing the theme of the exhibit. It symbolizes the process of selecting, showcasing, and appreciating designer garments, further immersing visitors in the world of fashion. Changing the mouse cursor to a hanger creates a more immersive experience for visitors, immediately signaling that they are entering a fashion-themed environment. This subtle visual cue helps transport visitors into the world of haute couture and sets the tone for the exhibit.

https://docs.google.com/document/d/1wQb92P4HcfuLNuzHEYyvs3ZUFQNo6hspwGtPbJLv53Q/edit?usp=sharing (More details about the costume institute and what dress will be included in the exhibit)

Reading response:

The reading on “Computer Vision for Artists and Designers” delves into the increasing accessibility of computer vision technology to artists and designers, facilitated by user-friendly software and open-source communities. It showcases various projects, including Rafael Lozano-Hemmer’s intriguing installation “Standards and Double Standards” (2004), where belts controlled by a computer vision-based tracking system rotate to follow individuals, offering an unintentional yet captivating form of interaction.

However, the ethical implications become pronounced with projects like Suicide Box by the Bureau of Inverse Technology (1996), which utilized motion-detection to record real data of suicides, raising concerns about privacy and surveillance. While such data might aid in locating missing individuals, it blurs ethical boundaries, sparking controversy.

Moreover, the reading outlines different problems addressed by vision algorithms, such as motion detection and object tracking, underscoring their relevance in design considerations. The introduction of “Telecentric lenses” offers insights into improving object recognition, albeit with drawbacks like cost and distortion issues, prompting reflection on their judicious usage.

The discussion expands to the societal acceptance of constant surveillance through technologies like facial recognition, prompting introspection on privacy norms. This leads to questioning the boundaries between innovation and intrusion, highlighting the need for ethical frameworks in technology adoption.

In reflecting on the reading, two artworks stand out for their distinct approaches to computer vision. Rafael Lozano-Hemmer’s “Standards and Double Standards” impresses with its inventive use of space and objects, blurring the lines between digital and tangible realms. In contrast, Christian Möller’s “Cheese” raises questions about the advancements in facial recognition technology and its potential implications for art and society.

Week 5: Midterm Progress

Concept and Design

The core concept of this project revolves around creating an immersive interactive experience that blends art and technology, leveraging the versatility of the p5.js library. The aim is to develop an artwork or game that captivates the user by engaging their senses through visual elements, sound, and interactivity. The design is centered on the principle of simplicity to ensure accessibility while fostering a deep sense of exploration and interaction. By incorporating a variety of media types—including shapes, images, sounds, and on-screen text—the project seeks to create a rich, multi-sensory environment that responds to the user’s actions in intuitive and surprising ways.

Implementation

So far, the project has laid down a foundational structure that supports Object-Oriented Programming (OOP) to manage its various interactive elements effectively. A particle system has been developed as the primary visual feature, showcasing a dynamic and aesthetically pleasing behavior that can be manipulated through user input. The system begins with an instructional screen, ensuring that users understand how to interact with the artwork or game before commencing. This design choice not only improves user experience but also aligns with the project’s requirement to start the experience with an instruction screen and wait for user input before starting.

The current implementation allows for the experience to be restarted without the need to reload the sketch entirely, promoting a seamless user interaction that encourages exploration and repeated engagement. The integration of state management facilitates this process, enabling the transition between different phases of the application, such as viewing instructions, interacting with the experience, and restarting the session.

Challenges

One of the main challenges faced during development has been incorporating all the required elements (shape, image, sound, and on-screen text) into a cohesive and interactive experience. While the foundational structure for shapes and on-screen text has been established through the particle system and instructional screen, the integration of images and sounds remains incomplete. Balancing the aesthetic appeal with functional interactivity, especially when introducing multimedia elements, requires careful consideration to maintain performance and user engagement.

Another significant challenge is ensuring that the experience is equally compelling for one or more users. Designing interactions that are universally intuitive yet offer depth and discovery poses a unique challenge in user experience design. Additionally, creating a mechanism for seamlessly restarting the experience without restarting the sketch has required thoughtful state management and control flow within the code.

Future Directions

Moving forward, the project will focus on integrating the remaining required elements—specifically, the inclusion of at least one image and one sound—to enrich the sensory experience. Exploring creative ways to incorporate these elements will not only fulfill the project’s requirements but also enhance the overall aesthetic and interactive quality of the work. Addressing the challenges of multi-user interaction and refining the user interface to accommodate various interaction modes will also be a priority. Through iterative design and testing, the project aims to evolve into a fully-realized interactive experience that leverages everything learned so far, presenting users with an engaging and memorable exploration of digital art and interactivity.

Progress

Week 5 Reading Response – Khalifa Alshamsi

The historical context provided, referencing Marvin Minsky’s underestimation of the complexity of computer vision, sets the stage for the discussion on how the field has evolved to include a wide array of applications, especially in the arts. The mention of Myron Krueger’s Videoplace as an early example of interactive artwork utilizing computer vision illustrates the potential for creative interaction between humans and computers beyond the conventional mouse and keyboard interfaces.

The exploration of projects such as Messa di Voce, Standards and Double Standards, Sorting Daemon, and others within the article showcased the versatility of computer vision in crafting interactive and immersive art experiences. These projects, each with its unique thematic focus, from surveillance and privacy to the nuanced exploration of human emotions via facial recognition, illustrate the breadth of computer vision’s applicability. They catalyzed my curiosity, pushing me to consider how technology can be leveraged to challenge societal norms, provoke thought, and evoke emotional responses.

Week 5 – Khalifa Alshamsi

For the midterm, I plan on creating a car racing game in which the objective is to get as many points as possible depending on how far you reach before colliding with a vehicle.

The Sketch:

Description:

The game uses a simple rectangular design so far until I can finish the game in more detail in which the white rectangle is your car and the blue rectangles are the other cars on the road and you aim to not collide with them.

Script:

let player;
let obstacles = [];
let gameSpeed = 2;

function setup() {
  createCanvas(400, 600);
  player = new Player();
}

function draw() {
  background(51);
  
  // Player
  player.show();
  player.move();
  
  // Obstacles
  if (frameCount % 120 == 0) { // Adds a new obstacle every 120 frames
    obstacles.push(new Obstacle());
  }
  
  for (let obs of obstacles) {
    obs.show();
    obs.update();
    
    // Checking for collision
    if (player.collidesWith(obs)) {
      noLoop(); // Stop the game
      textSize(32);
      textAlign(CENTER, CENTER);
      fill("White");
      text("Game Over", width / 2, height / 2);
    }
  }
}

// Player class
class Player {
  constructor() {
    this.size = 30;
    this.x = width / 2;
    this.y = height - this.size * 2;
  }

  show() {
    fill("white");
    rectMode(CENTER);
    rect(this.x, this.y, this.size, this.size);
  }

  move() {
    if (keyIsDown(LEFT_ARROW)) {
      this.x -= 5;
    } else if (keyIsDown(RIGHT_ARROW)) {
      this.x += 5;
    }
  }

  collidesWith(obstacle) {
    let hit = collideRectRect(this.x, this.y, this.size, this.size, obstacle.x, obstacle.y, obstacle.size, obstacle.size);
    return hit;
  }
}

// Obstacle class
class Obstacle {
  constructor() {
    this.size = random(20, 50);
    this.x = random(this.size, width - this.size);
    this.y = 0;
  }

  show() {
    fill("blue");
    rect(this.x, this.y, this.size, this.size);
  }

  update() {
    this.y += gameSpeed;
  }
}

// Function to check rectangular collision (Note to self p5.collide2D library needed)
function collideRectRect(x, y, w, h, x2, y2, w2, h2) {
  return x < x2 + w2 && x + w > x2 && y < y2 + h2 && y + h > y2;
}
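One note on the check above: the player is drawn with rectMode(CENTER), so its x/y is a center point, while this corner-based test treats every x/y as a top-left corner, making collisions land slightly off. Until p5.collide2D is wired in, a center-based variant might look like this (a hypothetical helper, not part of the sketch or the library):

```javascript
// AABB collision for rectangles whose (x, y) is the CENTER, matching
// rectMode(CENTER). Two axis-aligned boxes overlap exactly when the
// center distance on each axis is less than the sum of the half-sizes.
function collideCenterRects(x1, y1, w1, h1, x2, y2, w2, h2) {
  return Math.abs(x1 - x2) < (w1 + w2) / 2 &&
         Math.abs(y1 - y2) < (h1 + h2) / 2;
}

console.log(collideCenterRects(0, 0, 10, 10, 8, 0, 10, 10));  // true (overlap)
console.log(collideCenterRects(0, 0, 10, 10, 12, 0, 10, 10)); // false (apart)
```

Since the obstacles are drawn in the default corner mode, either the drawing or the math would need to agree on one convention before this check is accurate for both shapes.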

Problems Encountered:

The problem I faced the most was setting up collision detection, but I plan to fix it later by using the p5.collide2D library because, from my understanding, it will make collisions simpler and closer to the exact behavior I want the game to have.

Future goals:

  1. Improved graphics: By using either a better code format for the cars instead of the rectangles or implementing pictures to represent the cars.
  2. Scoring system: The scoring system is yet to be implemented into the game but the idea is for it to score a higher number the further you reach in the game while avoiding collisions.
  3. Sound Effects: Adding background music to add more to the game’s value while playing.
  4. Increased difficulty: Making gradual increases to the gameSpeed or the number of cars in the game to make it harder to continue.
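Goal 4 (increased difficulty) could be as simple as deriving gameSpeed from how many frames the player has survived. A sketch of that idea (the base speed, growth rate, and cap are made-up tuning values):

```javascript
// Gradually ramp the game speed with frames survived, capped at a maximum
// so the game stays playable. All three constants are placeholder tuning.
function speedForFrame(frameCount, base = 2, perFrame = 0.005, max = 10) {
  return Math.min(base + frameCount * perFrame, max);
}

console.log(speedForFrame(0));      // 2  (starting speed)
console.log(speedForFrame(1000));   // 7  (mid-game)
console.log(speedForFrame(100000)); // 10 (capped)
```

In the sketch above, draw() would set `gameSpeed = speedForFrame(frameCount)` each frame, and the same curve could shrink the obstacle-spawn interval for a steeper ramp.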


Assignment #5 – Code – ☆Midterm Progress☆

Concept and Inspiration

For my midterm, I want to create some sort of audio-reactive (?) artwork. I am unsure about the audio-reactive part because I don’t think I want it to actually live react to audio; rather, I want the artwork to depend on the preloaded sounds.

Essentially, I created a playlist of 12 songs. I want to upload these songs and their matching cover images, as well as the title and the artist. The idea is that the user can switch between songs with the left and right arrow keys, and the visuals in the background will change depending on which song is playing.

Here are the songs I picked:

  1. Heart-Shaped Box – Nirvana
  2. Cool Colorado – La Femme
  3. Weak For Your Love – Thee Sacred Souls
  4. Spooky – Dusty Springfield
  5. Karma Police – Radiohead
  6. Buddy’s Rendezvous – Lana Del Rey
  7. Althea – Grateful Dead
  8. Naive – The Kooks
  9. Drink Before The War – Sinead O’Connor
  10. Right Down The Line – Sam Evian
  11. She – The Blaze
  12. Belong In The Sun – ¿Téo?

My inspiration comes from the feelings I get when I listen to music. Sometimes, I just lay in bed and close my eyes and I start visualizing the energy of the music. So I simply wanted to create an artwork that matches visuals to music.

Back to my first uncertainty, I think I want the visuals to depend on the characteristics of the song itself. In other words, I would like to extract the amplitude and frequency of each song and feed them into the functions that draw the visuals, so that the music controls what the visuals look like.

For the visuals, I want to have some sort of melting effect. That is what will be controlled by the characteristics of the music. I am also thinking, for the color, whether to also have it controlled by one of the music’s characteristics or whether it should be based on the colors of the cover image (by extracting the pixels).
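For the amplitude side of this, p5.sound’s p5.Amplitude object with its getLevel() method would likely supply a per-frame loudness value. The underlying computation is a root-mean-square over a window of samples, which can be sketched in plain JavaScript (the “melt” mapping and its range are invented placeholders for whatever the visuals end up needing):

```javascript
// Root-mean-square loudness of a window of audio samples in [-1, 1].
// Silence gives 0; a full-scale square wave gives 1.
function rms(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Map loudness to a hypothetical "melt" strength for the visuals:
// louder window => stronger melting of the Perlin-noise lines.
function meltAmount(samples, maxMelt = 50) {
  return rms(samples) * maxMelt;
}

console.log(rms([0, 0, 0, 0]));                  // 0
console.log(rms([1, -1, 1, -1]));                // 1
console.log(meltAmount([0.5, -0.5, 0.5, -0.5])); // 25
```

A similar mapping could drive color if the music (rather than the cover image’s pixels) ends up controlling it, e.g. feeding the level into a hue or brightness range.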

Risks and Problems
  1. I tried loading the songs as mp3 files and most of them didn’t work. The only two that worked I had compressed through a website, as they were over 5 MB. The others wouldn’t load for some reason, so I need to figure that out.
  2. I tried playing with the visuals a bit, but I am not satisfied. As I am still exploring how to make them, I think it will take some time to achieve something close to what I want. Basically, I tried to create lines with a melting effect using Perlin noise, in order to make the piece dynamic.
  3. I think the hardest part will be having the amplitude/frequency of the music control the visuals. I will be working on that next!
Sketch Progress

Anyway, here is what my sketch looks like so far (I hate!!!! the way the background currently looks):

You can use the left arrow and right arrow keys to switch between songs, though no music will play :/

What’s Left to Add and Change
    • At least one sound (12, technically)
    • A screen giving instructions and waiting for user input (button / key / mouse / etc.) before starting
    • A way to start a new session (without restarting the sketch), though I’m not sure what exactly this would entail
    • The shape, perhaps, and to have it controlled by the music
    • The colors, and to have them controlled either by the music or by the cover image – then the text will be more readable hehe

Week 5: Midterm Progress

For my midterm project, I wanted to create something that had a defined storyline as the main component pulling the user in. After struggling to decide on whether to go for an interactive experience or a game, I went for the latter. My hope is to create a gamified storyline that takes the user through a beginning, middle, and end. Seeing as I have remained faithful to the butterfly motif for quite some time now, I decided it would be a shame to break the streak with this assignment.

Design and Concept

For the main design of the game, I was inspired by games like Super Mario, the offline Chrome Dinosaur Game, and also the feel of a retro game design in this work by Arshiya Khattak. The main storyline is to help a butterfly find her way home in the open fields after getting lost in a city full of high-rise skyscrapers, with no flowers in sight. The game starts with the scene of a butterfly fluttering through the main character’s window. After the scene ends, a prompt is displayed with directions on how to play the forthcoming game. For now, there will be two main levels, the first being easier than the second. The character, carrying the butterfly on her back, would have to jump over obstacles or enemy objects to maintain her health status. She would also be presented with the opportunity to regain some of her health back by collecting rewards on the way. If the character manages to reach the end without losing all of her health, the ending game scene is played in which the butterfly is reunited with her family in a field of flowers and a prompt is displayed congratulating the player with instructions on how to repeat the game. In the process of visualizing this, I sketched out the following storyboard:

Code Design

To make my code easier to design and scale up, I plan to encapsulate everything, including the game experience itself in a class. Therefore, it was helpful to sketch out class diagrams in a UML-ish fashion as follows:

Anticipated Complexities/Challenges and Risk Mitigation

My strategy for mitigating risk was to implement the basic skeleton of the game levels to allow me to seamlessly scale up complexity later on. I also wanted to get the most time-consuming aspects of the game design, like collecting assets, out of the way so that I could focus on the game functionality this upcoming week.

Asset Collection

I knew one of the things I would struggle with was finding unified assets that were consistent with the feel of the game I was looking for. Thus, I dedicated some time to collecting spritesheets and level/scene backgrounds as I wanted to minimize having to design most components myself. After some digging, I collated the following:

Sprites:

(https://opengameart.org/content/oracles) 
(https://opengameart.org/content/butterfly)

Backgrounds:

(https://opengameart.org/content/forest-background)

(https://opengameart.org/content/cyberpunk-street-environment)

(https://www.deviantart.com/watashi-sensei/art/Bedroom-Pixel-628248413)

(https://www.freepik.com/free-vector/pixel-art-rural-landscape-background_49685498.htm#from_view=detail_serie)

Parallax Background Implementation

For the background of the levels, I wanted to incorporate a parallax effect to add dynamism to the backdrop of the experience. It has been some time since I have implemented this and I had forgotten the basic principles, so I wanted to get this part out of the way first. Each background is composed of multiple layers and the basic idea is to have each layer loop back into the screen at varying speeds to create a continuous animation with different depth components.

class ParallaxBackground {
  constructor(layersImages, layersSpeeds, layersDir) {
    this.layersImages = layersImages;
    this.layersDir = layersDir;
    this.layersSpeeds = layersSpeeds;
    // two x-positions per layer so each image can loop back seamlessly
    this.layersX1 = [];
    this.layersX2 = [];
    for (let i = 0; i < this.layersImages.length; i++) {
      this.layersX1.push(0);
      this.layersX2.push(width);
    }
  }

  showBackground() {
    for (let i = 0; i < this.layersImages.length; i++) {
      image(this.layersImages[i], this.layersX1[i], 0, width, height);
      image(this.layersImages[i], this.layersX2[i], 0, width, height);
      if (this.layersDir[i] == 1) {
        // scroll this layer leftwards at its own speed
        this.layersX1[i] -= this.layersSpeeds[i];
        this.layersX2[i] -= this.layersSpeeds[i];
        // once a copy fully exits the left edge, reposition it flush
        // against the other copy so fractional speeds never leave a seam
        if (this.layersX1[i] < -width) {
          this.layersX1[i] = this.layersX2[i] + width;
        }
        if (this.layersX2[i] < -width) {
          this.layersX2[i] = this.layersX1[i] + width;
        }
      } else {
        // mirror of the above for layers scrolling rightwards
        this.layersX1[i] += this.layersSpeeds[i];
        this.layersX2[i] += this.layersSpeeds[i];
        if (this.layersX1[i] > width) {
          this.layersX1[i] = this.layersX2[i] - width;
        }
        if (this.layersX2[i] > width) {
          this.layersX2[i] = this.layersX1[i] - width;
        }
      }
    }
  }
}
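Once the layer images are collected, the class can be wired into a sketch roughly as follows. The file names, canvas size, and speeds here are placeholder assumptions, not my final values:

```javascript
// Hypothetical wiring for ParallaxBackground (defined above).
let bg;
let layers = [];

function preload() {
  // back-to-front layers of one level background (placeholder file names)
  layers = [
    loadImage("sky.png"),
    loadImage("buildings.png"),
    loadImage("road.png"),
  ];
}

function setup() {
  createCanvas(800, 400);
  // distant layers scroll slower than near ones; all move leftwards (dir 1)
  bg = new ParallaxBackground(layers, [0.5, 1.5, 3], [1, 1, 1]);
}

function draw() {
  bg.showBackground();
}
```

Giving each layer a speed proportional to its "closeness" is what sells the depth illusion: the road rushes past while the sky barely moves.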


Sprite Movements and Collisions

I also wanted to simulate the movement of objects on the screen to get a head start. I used placeholder ellipses for this process, which I will later replace with actual sprite images. I applied principles of velocity and acceleration to simulate the player’s vertical jump when the space bar is pressed and the enemies’ horizontal movement across the screen:

class Player {
  constructor(x, y, ground) {
    this.x = x;
    this.y = y;
    this.vy = 0; // vertical velocity
    this.ground = ground; // y-coordinate of the ground line
    this.gravity = 1; // downward acceleration applied each frame
    this.jumpPower = 20; // initial upward velocity of a jump
    this.collisions = 0;
    this.collidedEnemy = [];
  }

  move() {
    this.y += this.vy;
    if (this.y < this.ground) {
      // player is airborne: gravity first slows the ascent, then pulls down
      this.vy += this.gravity;
    } else {
      // player has landed: stop falling and snap back onto the ground
      this.vy = 0;
      this.y = this.ground;
    }
  }

  jump() {
    // only allow jumping while standing on the ground
    if (this.y >= this.ground) {
      this.vy = -this.jumpPower;
    }
  }

  show() {
    ellipse(this.x, this.y, 50, 100);
    this.move();
  }
}

class Enemy {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.vx = -5; // enemies march leftwards across the screen
    this.collided = false;
  }

  move() {
    this.x += this.vx;
  }

  show() {
    ellipse(this.x, this.y, 50, 50);
    this.move();
  }
}
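As a quick sanity check on these constants (gravity of 1, jump power of 20), the jump arc can be simulated headlessly with the same update rule, no p5.js rendering needed:

```javascript
// Headless simulation of one jump, using the same physics as Player.move():
// start with vy = -jumpPower, add gravity each airborne frame.
function simulateJump(ground, gravity, jumpPower) {
  let y = ground;
  let vy = -jumpPower; // jump() fires while standing on the ground
  let peak = ground;
  let frames = 0;
  do {
    y += vy; // same update rule as Player.move()
    if (y < ground) {
      vy += gravity; // airborne: gravity pulls the player back down
    } else {
      vy = 0;
      y = ground; // landed: snap back onto the ground line
    }
    peak = Math.min(peak, y);
    frames++;
  } while (y < ground);
  return { peakHeight: ground - peak, airborneFrames: frames };
}

// prints the jump's peak height in pixels and the frames spent in the air
console.log(simulateJump(300, 1, 20));
```

With these numbers the player rises a bit over 200 pixels and stays airborne for well under a second at 60 fps, which feels about right for hopping over obstacles; the constants are easy to tune from here.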

I also wanted to implement the collision mechanism by detecting when the boundary of the player touches the boundary of an enemy object and classifying that as a collision:

detectCollision(enemy) {
  // buffer radius that makes collisions slightly more forgiving
  let buffer = 10;
  if (!enemy.collided) {
    // get the distance between the center of the character and that of the enemy
    let d = dist(this.x, this.y, enemy.x, enemy.y);
    // a collision is detected when the distance is less than the
    // sum of the objects' radii plus the buffer
    if (d < buffer + 25 + 25) {
      this.collisions += 1;
      enemy.collided = true;
      return true;
    }
  }
  // no new collision this frame
  return false;
}
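The check above is a standard circle-circle test: a hit is registered when the distance between centres drops below the sum of the two radii (25 + 25) plus the small buffer (10). A self-contained illustration with those same numbers:

```javascript
// Circle-circle collision rule: collide when the centre distance is
// below buffer + playerRadius + enemyRadius (10 + 25 + 25 = 60 here).
function circlesCollide(ax, ay, bx, by, threshold = 60) {
  // Euclidean distance between the two centres
  const d = Math.hypot(bx - ax, by - ay);
  return d < threshold;
}

console.log(circlesCollide(100, 300, 150, 300)); // centres 50 px apart -> true
console.log(circlesCollide(100, 300, 180, 300)); // centres 80 px apart -> false
```

Treating both sprites as circles is a simplification, since the player placeholder is a 50 x 100 ellipse, but it is cheap and good enough for this stage; I may swap in bounding-box checks once the real sprites are in.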
Current Progress

Week 5: Reading Response – Computer Vision for Artists and Designers

It is really hard to outdo interactions that map the objects existing in our physical reality onto our screens for us to warp and play around with, so it was not surprising to me that computer vision techniques would be popular in the production of interactive art. One of the things that surprised me, however, was how early computer vision principles were adopted – as early as the late 1960s. In addition to a brief history of computer vision in interactive art, the reading delineates simple computer vision algorithms that digital creators can exploit. Moreover, while the article acknowledges the complexity of the field, it introduces the audience to a wide range of multimedia tools that make computer vision more accessible to beginners unfamiliar with machine vision.

Given the power that computer vision grants creators in being able to recreate, manipulate, and reshape physical reality, artworks created with computer vision techniques have the potential to represent the multiple layers of complexity that exist in the world around us. It was interesting – and shocking at times – to see how different sociopolitical themes can be expressed using the power of computer vision. This is demonstrated best by the pieces on surveillance that the article mentions. The idea of using belts, controlled by a tracking system that rotates them according to the motion of the audience, to evoke surveillance by an “empty crowd” in Rafael Lozano-Hemmer’s installation Standards and Double Standards exemplifies how well computer vision can “metaphorize” our physical reality. However, additional considerations, and potentially greater risks, come with greater capability. An example of this is the Suicide Box, which captured video footage of suicide jumpers from the Golden Gate Bridge. It was jarring for me to see a tragic phenomenon turned into a piece of “art,” and it stands as a testimony to how computer vision technology, like any tool with great computational power, must be accompanied by a recognition of the ethical implications of its use.

I also enjoyed how the reading emphasized choosing computer vision techniques that are compatible with the surrounding physical environment, as no single algorithm generalizes to all video and imaging conditions. This may take the form of adapting the physical environment to optimize the workings of a particular algorithm, or of choosing the algorithm best suited to a given environment. I loved learning about the different techniques designers employ to achieve this, as it effectively extends the idea of Human-Centered Interaction to Environment-Centered Interaction and truly captures the interactive potential that computer vision unleashes for digital creators.