Midterm Progress Check – Jihad Jammal

Concept:

The core idea of my game reimagines the classic mechanics of the retro game Snake, introducing a playful twist where the protagonist is a dog on a mission to eat as much homework as possible. In this game, players navigate a dog around the screen, aiming to collect and consume homework pieces scattered across the play area. Each piece of homework the dog eats not only adds points to the player’s score but also visually enlarges the dog’s face, making the game increasingly challenging and humorous as the dog’s appearance grows comically large.

To add a layer of strategy and urgency, the game is set against a ticking clock. Players must race against time, strategizing to eat as much homework as they can before the timer expires. The score is a combination of two elements: the physical size of the dog’s face, which grows with each piece of homework eaten, and a numerical value that increases with the homework consumed. This dual scoring system provides immediate visual feedback and a quantifiable measure of success, engaging players in a quest to beat their own high scores or compete with others.

A highlight of some code that I’m particularly proud of:

function drawDog(x, y) {
  push(); // Start a new drawing state
  translate(x - width / 2, y - height / 2); // Center the drawing on the dog's position
  scale(0.25);
  
  var colorW = color(255,255,255);
  var colorBL = color(0,0,0);
  var colorBR = color(160,82,45);
  var colorP = color(255,182,193);
  
  // Ears
  push();
  noStroke();
  fill(colorBR); 
    // Right ear
    rotate(-PI/2.2);
    translate(-400,30);
    ellipse(width/2, height/4, 150, 50);  

    // Left ear
    rotate(PI/-12);
    translate(-56,245);
    ellipse(width/2, height/4, 150, 50);
  pop();
  
  // Base
  push();
  noStroke();
  fill(colorW);
  ellipse(width/2, height/2, 200, 200);
  pop();
  
  // Mouth
  push();
  noStroke();
  fill(colorBL);
  translate(150,210);
  arc(50, 50, 80, 80, 0, PI);
  pop();
  
  // Tongue
  push();
  noStroke();
  fill(colorP);
  translate(-25,65);
  rect(width/2, height/2, 50, 35, 20);
  pop();
  
  // Tongue detail
  push();
  stroke(125); // set the stroke before drawing, not after
  fill(219, 112, 147);
  ellipse(width/2, 277.5, 5, 25);
  pop();
  
  // Nose
  push();
  noStroke();
  fill(colorBL);
  translate(142,150);
  triangle(30, 75, 58, 100, 86, 75);
  pop();
  
  // Nose shine
  push();
  stroke(colorW);
  strokeWeight(2);
  noFill();
  strokeJoin(MITER);
  scale(0.5, 0.5);
  translate(380, 437);
  beginShape();
  vertex(10, 20);
  vertex(35, 20);
  endShape();
  pop();
  
  // Eyes
  push();
  noStroke();
  fill(colorBR);
  rect(220, 160, 60, 60, 20, 30, 20, 40);
  fill(colorBL);
  ellipse(250, 190, 35, 35);
  ellipse(150, 190, 35, 35);
  fill(colorW);
  ellipse(150,180,10,10);
  ellipse(250,180,10,10);
  pop();
}

 

Embedded sketch:

Reflection and ideas for future work or improvements:

Progress on the game development is quite encouraging at this point. I’ve successfully tackled the challenge of scaling the dog’s head in a way that ensures all facial features remain proportionate, regardless of its size. This was a crucial step to maintain the visual consistency and appeal of the game, ensuring that as the dog “eats” more homework and its face grows, it still looks natural and retains the humorous charm intended for the game’s design.

The next significant hurdle I’m facing involves developing a robust logic system for the game. Specifically, I need to implement a mechanism where the growth of the dog’s head is directly tied to the action of consuming homework pieces (HW) within the game. This means creating a set of rules and conditions in the game’s code that accurately tracks each piece of homework the dog eats and translates that into proportional increases in the size of the dog’s head.
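One hedged way to tie the head growth to homework consumed is to derive both the numeric score and the drawing scale from a single counter, so the two can never drift apart. The constants and function names below are my own placeholders, not the game’s actual code:

```javascript
// Sketch of the homework-growth logic (GROWTH_PER_HW, eatHomework, and
// faceScale are assumed names, not taken from the real game).
const GROWTH_PER_HW = 0.05; // each homework enlarges the face by 5%
const BASE_SCALE = 0.25;    // matches the scale(0.25) used in drawDog

let homeworkEaten = 0;

function eatHomework() {
  homeworkEaten += 1;
}

function faceScale() {
  // visual size grows linearly with homework eaten
  return BASE_SCALE * (1 + GROWTH_PER_HW * homeworkEaten);
}

function homeworkScore() {
  // numeric score derives from the same counter as the visual size
  return homeworkEaten * 10;
}
```

Inside drawDog, replacing the hard-coded `scale(0.25)` with `scale(faceScale())` would keep every facial feature proportionate as the head grows, since all features are drawn in the same transformed coordinate system.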

Week 5: Midterm Progress

Going into the midterm project I originally had the idea to make a two-player game of Russian roulette, but eventually I changed my mind. I have now decided to make a two-player Mexican standoff game, where there is a countdown and whoever presses their shoot button first wins. Player 1 will have a button on the left-hand side of the keyboard, like the letter ‘S’, and player 2 will have a button on the right-hand side of the keyboard, like the letter ‘L’.

I spent most of my time working on scene management, by which I mean the functionality for switching to and from scenes: for example, going from the main menu to the tutorial screen, the credits, or the game screen, and back. I also decided to use Visual Studio Code for this project. It worked perfectly up to the point where I had to upload the code to the website editor so that I can share it here. For some reason, as of writing this, whenever I run the code in the website editor the page turns white. I thought this was a problem with Chrome and maybe one of my extensions, so I tried switching to Safari, but I had the same problem. The sketch is below; I managed to fix the issue, and I explain the cause in the next paragraph.

I was having the issue of the website editor going blank because one of my functions returned the result of a console.error() call. When I wrote it I didn’t think it would be a problem, because that branch would never actually run, and it never did; but simply because a function could return such a value, it crashed the editor. Note to self: don’t try to be fancy with the values you return.
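A minimal sketch of the scene-management pattern described above, with the console.error pitfall avoided (the scene names and function names here are illustrative, not the project’s actual ones):

```javascript
// Register each scene's draw function under a name, then switch by name.
const scenes = {};
let currentScene = null;

function addScene(name, drawFn) {
  scenes[name] = drawFn;
}

function switchScene(name) {
  if (!(name in scenes)) {
    console.error("Unknown scene: " + name); // log the problem...
    return; // ...but return nothing, rather than `return console.error(...)`
  }
  currentScene = name;
}

addScene("menu", () => "menu frame");
addScene("game", () => "game frame");
switchScene("menu");
// In p5, draw() would simply call scenes[currentScene]() every frame.
```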

At the moment the program is very bare bones. In the future I plan to make the main title an image, and maybe the main menu background will be animated. For the game itself, each player will be a pixelated cowboy or something along those lines, and there will be some kind of background. I may also make the Mexican standoff a best of 5.

Lastly, thank you Pi for telling me about scene management, and thank you professor for helping with uploading the code from Visual Studio Code to the website editor.

Week 5 Reading Response – Saeed Lootah

I found the article to be very interesting. Before reading it I had a very primitive idea of how a computer can recognize objects in a video, or do anything else involving processing video. As I was reading, I was reminded of examples of “computer vision,” as the author put it. On the way to the Intro to IM class there is a screen and camera, and displayed on the screen is whatever is in front of the camera, but with the brightness of (nearly) every pixel represented as a character. The end result is a black-and-white image made of characters. I was also reminded of the recent thunderstorm, during which I met someone on my way to D2 who was having trouble photographing a lightning strike. In my head I thought that he could have benefited from a program that did all the work for him. The idea I came up with was this: if the camera records all the time but deletes footage older than 30 seconds (to save space), then when it detects a spike in brightness it can save the last 30 seconds, plus some footage after the lightning strike (this could be done by delaying the time between detecting the spike and saving the footage). Of course I don’t really know how to implement it, but in theory it could work… I think.
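The rolling-buffer idea above can be sketched in a few lines. Everything here is an assumption for illustration: the buffer length, the spike threshold, and the notion of one “frame” per unit of time.

```javascript
// Rolling buffer + brightness-spike trigger (all numbers are guesses).
const BUFFER_FRAMES = 30;    // pretend 1 frame per second => 30 s of footage
const SPIKE_THRESHOLD = 100; // brightness jump that counts as lightning

const buffer = [];
let lastBrightness = null;
let savedClip = null;

function onFrame(frame, brightness) {
  buffer.push(frame);
  if (buffer.length > BUFFER_FRAMES) buffer.shift(); // discard old footage

  if (lastBrightness !== null && brightness - lastBrightness > SPIKE_THRESHOLD) {
    savedClip = buffer.slice(); // keep the last ~30 s around the strike
  }
  lastBrightness = brightness;
}
```

A real version would measure brightness by averaging pixel values of each camera frame, and would keep recording briefly after the spike before saving, as the paragraph suggests.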

I also learned a lot from the article. There were techniques mentioned, most of which I would never be able to come up with myself: “frame differencing,” “background subtraction,” and “brightness thresholding.” While I do not have a great idea of how to implement these techniques, I think the most valuable thing I took away from the article was their names, since I can always search for them and learn more afterwards. Fortunately the authors also linked some Processing programs at the bottom that I could use to learn more.
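For a first look at one of those names, here is a toy version of frame differencing: compare each pixel of the current frame against the previous one and count how many changed by more than a threshold. Real p5.js code would read these values from a video capture’s pixels array rather than plain arrays.

```javascript
// Toy frame differencing on flat grayscale arrays.
function frameDifference(prev, curr, threshold) {
  let movement = 0;
  for (let i = 0; i < curr.length; i++) {
    // a pixel counts as "moved" if its brightness changed enough
    if (Math.abs(curr[i] - prev[i]) > threshold) movement++;
  }
  return movement; // number of pixels that changed beyond the threshold
}
```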

Lastly, I noticed that the article is somewhat outdated: it was published in 2006. I felt this was worth mentioning because machine learning at the time was nowhere near as advanced as it is today, so I would have liked to learn more about how machine learning could be used to improve “computer vision.”

 

Wanderer – Midterm Progress

For my midterm project I will be creating an interactive artwork entitled Wanderer which will take the form of a visual loop featuring a figure that walks through various landscapes. The interaction itself will be fairly simple as the user will only be able to initiate the figure’s movement and change the landscape in which the figure is walking.  I have chosen to simplify the interaction as a means to draw attention towards the work’s visual qualities which will (hopefully) be satisfying to look at. To fulfil the brief, I plan to incorporate an “energy” meter which will cause the session to restart if it reaches zero. As of right now, I have started working on a sprite sheet and will make progress on the backgrounds in the coming days.

let spritesheet;
let sprites = [];
let direction = 1;
let step = 0;
let x;
let y;
let speed = 7;
let stepSpeed = 60;
let animationTimer;
let automaticMovement = false; 

function preload() {
  spritesheet = loadImage("walk-sprite.png");
}

function setup() {
  createCanvas(600, 600);

  let w = spritesheet.width / 17;
  let h = spritesheet.height / 2;

  for (let y = 0; y < 2; y++) {
    for (let x = 0; x < 17; x++) {
      sprites.push(spritesheet.get(x * w, y * h, w, h));
    }
  }

  x = width / 2;
  y = height / 2;

  imageMode(CENTER);
}

function draw() {
  background(255);

  if (automaticMovement) {
    direction = 1;
    x += speed;
    step = (step + 1) % 12;
  }

  // Draw the walker every frame, not only while it is moving
  push();
  translate(x, y);
  scale(direction, 1);
  image(sprites[step], 0, 0);
  pop();
}

function keyPressed() {
  // clear any running animation timer before starting a new one
  clearInterval(animationTimer);
  automaticMovement = true;

  animationTimer = setInterval(() => {
    step = (step + 1) % 12;
  }, stepSpeed);
}

function keyReleased() {
  clearInterval(animationTimer);
  automaticMovement = false;
}

function mouseClicked() {

  automaticMovement = true;
}

 

Afra Binjerais – Reading response 5

From what I understand, “computer vision” refers to a broad range of algorithms that let computers make sophisticated analyses of digital images and videos. The reading begins by defining computer vision and then explores its use in interactive arts, emphasizing Marvin Minsky’s early miscalculation of the field’s complexity. The reading was educational, especially the portion highlighting how many different kinds of artworks use computer vision. These works range widely in style, from abstract and formal pieces to ones with comedic or political overtones, which is important for understanding the different ways in which we can communicate through art.

Moreover, other artists mentioned explore a wide spectrum of human movement, from close studies of facial expressions to broader gestures and even entire body movements. Overall, the reading underscores the rich potential of computer vision in the realm of interactive art, showcasing its versatility and impact on artistic expression.

Week 5 – Reading response Shereena AlNuaimi

“Computer Vision for Artists and Designers” by Golan Levin introduces computer vision as a set of algorithms enabling computers to intelligently interpret digital images and videos, emphasizing its recent accessibility to novice programmers and interactive-media artists. It demystifies computer vision for beginners, focusing on its applications in interactive art, elementary computer vision techniques, physical optimization for computer vision, and multimedia authoring tools.

The author also explores the evolution of computer vision as a medium for artistic expression, outlining its recent democratization and tracing its historical origins. He highlights how computer vision methods are being applied to a wide range of artistic mediums, video games, and home automation systems. In addition, he deconstructs basic computer vision methods, including frame differencing, background subtraction, brightness thresholding, and basic object tracking, and provides an understanding of how to apply them in interactive media. He also stresses how crucial physical optimization is for computer vision and offers suggestions for setting up conditions that support reliable algorithmic performance.

Furthermore, he also examines multimedia authoring tools for computer vision, including major software development environments and their corresponding computer vision toolkits or plug-ins, such as Processing, Macromedia Director, and Max/MSP/Jitter. Additionally, it provides an example of a workshop project named LimboTime that shows how computer vision techniques can be used to create an interactive game. LimboTime highlights the wider application of computer vision in art and design by demonstrating the accessibility and possibilities for non-programmers to create vision-based interactive systems.

Overall, this reading offers us a comprehensive overview of computer vision, catering to novice programmers and artists, and highlights its growing significance in interactive art and design, offering practical insights and resources for implementing computer vision techniques in various artistic and design contexts.

Midterm Progress/ Reading response – Shaikha AlKaabi

Midterm Project Progress:

Whimsical Garden: Seasons of Growth is a point-and-click game where players nurture a magical garden through the changing seasons. The core gameplay involves planting, watering, and harvesting a variety of plants, each with unique needs. The game is a garden simulation where the player’s objective is to grow and manage a garden. You start by planting a flower in a location of your choice within a designated area. As the game progresses, you need to water the plants to ensure they grow. If a plant doesn’t receive water for too long, it will wither and die. Players must balance their attention between different plants, making strategic decisions about which plants to water with their limited resources. The challenge lies in keeping all plants alive and thriving, which becomes more difficult as the garden grows in size and complexity. The game ends when the player successfully maintains the garden for a certain period, or if all the plants die. There’s an option to restart the game after completion to try different strategies or improve your previous score.

Game Structure:

1. Start Screen: The game opens with a start screen that displays instructions for playing. This screen explains the basics of gardening mechanics, seasonal changes, and the objectives of the game. The start screen awaits user input (such as pressing a ‘Start’ button or hitting a specific key) to begin the game.

2. Gameplay Session: Once started, the player enters the main gameplay area, where they manage their garden through the various seasons, facing challenges like planting, watering, harvesting, and solving environmental puzzles.

3. End Session and Restart: After a gameplay session is completed (which could be defined by reaching a certain goal or surviving through a set number of seasons), the game transitions to an end screen. This screen summarizes the player’s achievements and offers the option to start a new session. Choosing to start a new session resets the game environment, allowing players to begin with a fresh garden.

Challenges:

1. Managing Game States: Implementing a system to manage different game states (e.g., start screen, active game, end screen) is crucial. This requires careful design to ensure smooth transitions between states based on user inputs.

2. Session Reset Functionality: Developing a way to reset the game environment for a new session without restarting the sketch poses a challenge. This involves resetting game variables, clearing the garden area, and preparing the game for a new set of seasonal cycles.

3. User Input Handling: Creating an intuitive and responsive input system for navigating between the start screen, gameplay, and restart option is essential. This includes implementing event listeners for keyboard, mouse, or button inputs that trigger the correct actions within the game’s flow.

By addressing these challenges, Whimsical Garden: Seasons of Growth aims to offer a rich and engaging gameplay experience that is accessible, educational, and enjoyable for a wide range of players.

let gameState = 'start'; // Possible states: 'start', 'game', 'end'
let garden = [];
let restartButton;

function setup() {
  createCanvas(600, 600);
  textAlign(CENTER, CENTER);
  
  // Restart button (hidden initially)
  restartButton = createButton('Restart');
  restartButton.position(width / 2 - 50, height / 2 + 20);
  restartButton.mousePressed(restartGame);
  restartButton.hide();
}

function draw() {
  background(161, 196, 170);
  
  if (gameState === 'start') {
    drawStartScreen();
  } else if (gameState === 'game') {
    drawGarden();
  } else if (gameState === 'end') {
    drawEndScreen();
  }
}

function drawStartScreen() {
  textSize(32);
  text('Whimsical Garden: Seasons of Growth', width / 2, height / 3);
  textSize(25);
  text('Click to start', width / 2, height / 2);
}

function drawGarden() {
  for (let plant of garden) {
    // Simple representation of plants
    fill(20, 180, 60);
    ellipse(plant.x, plant.y, 20, 20);
  }
  
  // Example end condition: 5 plants
  if (garden.length >= 5) {
    gameState = 'end';
    restartButton.show();
  }
}

function drawEndScreen() {
  background(47, 54, 50);
  textSize(50);
  fill(184, 46, 64);
  text('Garden is Full!', width / 2, height / 3);
  textSize(25);
  text('Restart to play again', width / 2, height / 2);
}

function mousePressed() {
  if (gameState === 'start') {
    gameState = 'game';
  } else if (gameState === 'game' && mouseY < height - 100) {
    // Allow planting only within the game area (excluding UI elements, e.g., buttons)
    plantSeed(mouseX, mouseY);
  }
}

function plantSeed(x, y) {
  // Add a new plant to the clicked position in the garden
  garden.push({x: x, y: y});
}

function restartGame() {
  garden = []; // Clear the garden
  gameState = 'start'; // Set game state back to start
  restartButton.hide();
}

 

Reading Response: 

We’ve come a long way in how we interact with computers, starting with some really early virtual reality that let people play in digital spaces just by moving around. Myron Krueger was one of the first to make this happen with his Videoplace, and it was a big step forward because it made the connection between people and computers feel natural and easy.

Then there’s the Bureau of Inverse Technology’s Suicide Box, which is a lot more serious. It’s a piece of art that makes us think about tough topics like sickness and the choice to end one’s life. It’s a strong reminder that technology can do more than just entertain us; it can also make us think deeply about life’s big questions.

Understanding how these systems work is pretty cool too. They can tell the difference between what’s important for the interaction and what’s just background noise. They look for the brightest spot they can find to keep track of what the user is doing. This is really important for making games and art installations where you can move around and have the game or art react to what you’re doing.
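The “brightest spot” tracking described above boils down to scanning every pixel and remembering where the maximum is. A toy version over a flat grayscale array (a real sketch would read a webcam frame’s pixels instead):

```javascript
// Find the (x, y) of the brightest pixel in a flat grayscale array.
function brightestPixel(gray, width) {
  let best = 0;
  for (let i = 1; i < gray.length; i++) {
    if (gray[i] > gray[best]) best = i; // remember the brightest index seen
  }
  // convert the flat index back to 2D coordinates
  return { x: best % width, y: Math.floor(best / width) };
}
```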

Jonah Warren’s Master’s thesis takes this idea further. He found new ways for games to understand our body movements. He talks about direct ways to interact with games, like touching, covering, or mirroring, which are now common in games that use body movement for control.

From the early days of virtual reality to the deep questions raised by interactive art, and the nitty-gritty of tracking movement, it’s clear that the way we communicate with computers has many sides to it. Technology keeps getting better, and it changes how we play, create, and think about the world. The field of human-computer interaction is really about connecting with each other, sharing ideas, and getting to grips with both the real and digital worlds we live in.

Midterm Progress / Reading Response – Hamdah AlSuwaidi

Midterm Progress:

For my midterm project, I’m taking on the challenge of designing a captivating costume exhibit for The MET’s Costume Institute. The exhibit will showcase the evolution of haute couture through iconic designs from renowned fashion houses, spanning different eras and design philosophies.

The overarching theme of the exhibit is “Timeless Elegance: A Journey Through Fashion History.” My aim is to highlight the enduring allure and innovation of haute couture, exploring how designers have redefined beauty and style over the decades.

1. Homepage:
– The homepage welcomes visitors to the online exhibit with a visually appealing layout featuring the exhibit’s title, a brief description, and navigation options.

2. Navigation Menu:
– A navigation menu at the top or side of the webpage allows visitors to easily access different sections of the exhibit, including:
– Explore: Provides access to the main exhibit layout where visitors can view featured designers and garments.
– Audio Guides: Offers a collection of audio commentary on select pieces.
– Search: Enables visitors to search for specific designers, garments, or fashion movements.
– About: Provides information about the exhibit, its curators, and contributors.

3. Explore Section:
– Clicking on the “Explore” option takes visitors to the main exhibit layout, which may resemble a gallery or virtual space.
– Visitors can navigate through different areas of the exhibit by clicking or tapping on designated hotspots or arrows.
– Clicking on a designer’s name or garment reveals detailed information, including descriptions, historical context, and images.

4. Audio Guides:
– The “Audio Guides” section presents visitors with a curated collection of audio commentary on select pieces within the exhibit.
– Visitors can listen to the audio commentary by clicking on specific garments or audio icons associated with them.

5. Search Functionality:
– The “Search” option allows visitors to search for specific designers, garments, or fashion movements.
– Visitors can enter keywords or phrases into the search bar to find relevant content within the exhibit.
– Search results display relevant garments, designers, or topics, with clickable links to access more information.

6. Interactive Elements:
– Throughout the exhibit, interactive elements such as clickable images, videos, and multimedia presentations provide additional context and engagement.
– Visitors can interact with these elements by clicking or tapping on them to access related content or animations.

Some examples:

7. Responsive Design:
– The online exhibit is designed to be responsive, ensuring a seamless experience across different devices and screen sizes.
– Whether visitors access the exhibit on a desktop computer, tablet, or smartphone, the layout and functionality adapt to provide an optimal viewing experience.

  1. Maintaining Engagement: Keeping visitors engaged throughout their online exhibit experience is crucial. Unlike in-person exhibits where visitors may spend hours exploring, online visitors may have shorter attention spans. Designing captivating visuals, interactive elements, and compelling content is essential to maintain visitor interest and encourage exploration.
  2. Audio and Video Integration: Integrating audio and video content into the exhibit may present challenges such as ensuring cross-browser compatibility, synchronizing multimedia elements with other interactive elements or animations, and optimizing playback performance for smooth audio/video streaming.
let titleSize = 40; // Decreased the title size
let subtitleSize = 20; // Decreased the subtitle size

let hangerImage;

function preload() {
  hangerImage = loadImage('Hanger.png'); // Load hanger image
}
function setup() {
  createCanvas(windowWidth, windowHeight);
  textSize(titleSize);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(255); // Set background color to white

  // Draw the hanger at the mouse position so it acts as a custom cursor
  image(hangerImage, mouseX, mouseY, 70, 50);
                
 // Title
                 
fill(166, 0, 0); // Set text color to red
  textSize(titleSize);
  text("Welcome to Timeless Elegance", width / 2, height / 2 - 50);
  
  // Subtitle
  fill(0); // Set text color to black
  textSize(subtitleSize);
  text("Touch the screen to begin your journey", width / 2, height / 2 + 20);
}

function mouseClicked() {
  // Trigger next screen or action when the screen is touched/clicked
  // For example, navigate to the main exhibit layout
}
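The empty mouseClicked stub could eventually flip a screen flag that draw() checks. A minimal sketch, with ‘welcome’ and ‘exhibit’ as illustrative names rather than the project’s actual ones:

```javascript
// Track which screen is showing; mouseClicked advances from the welcome
// screen into the main exhibit layout.
let currentScreen = 'welcome';

function mouseClicked() {
  if (currentScreen === 'welcome') {
    currentScreen = 'exhibit'; // draw() would then render the exhibit layout
  }
}
```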

The hanger icon serves as a visual representation of fashion and garment design, reinforcing the theme of the exhibit. It symbolizes the process of selecting, showcasing, and appreciating designer garments, further immersing visitors in the world of fashion. Changing the mouse cursor to a hanger creates a more immersive experience for visitors, immediately signaling that they are entering a fashion-themed environment. This subtle visual cue helps transport visitors into the world of haute couture and sets the tone for the exhibit.

https://docs.google.com/document/d/1wQb92P4HcfuLNuzHEYyvs3ZUFQNo6hspwGtPbJLv53Q/edit?usp=sharing (More details about the costume institute and what dress will be included in the exhibit)

Reading response:

The reading on “Computer Vision for Artists and Designers” delves into the increasing accessibility of computer vision technology to artists and designers, facilitated by user-friendly software and open-source communities. It showcases various projects, including Rafael Lozano-Hemmer’s intriguing installation “Standards and Double Standards” (2004), where belts controlled by a computer vision-based tracking system rotate to follow individuals, offering an unintentional yet captivating form of interaction.

However, the ethical implications become pronounced with projects like Suicide Box by the Bureau of Inverse Technology (1996), which utilized motion-detection to record real data of suicides, raising concerns about privacy and surveillance. While such data might aid in locating missing individuals, it blurs ethical boundaries, sparking controversy.

Moreover, the reading outlines different problems addressed by vision algorithms, such as motion detection and object tracking, underscoring their relevance in design considerations. The introduction of “Telecentric lenses” offers insights into improving object recognition, albeit with drawbacks like cost and distortion issues, prompting reflection on their judicious usage.

The discussion expands to the societal acceptance of constant surveillance through technologies like facial recognition, prompting introspection on privacy norms. This leads to questioning the boundaries between innovation and intrusion, highlighting the need for ethical frameworks in technology adoption.

In reflecting on the reading, two artworks stand out for their distinct approaches to computer vision. Rafael Lozano-Hemmer’s “Standards and Double Standards” impresses with its inventive use of space and objects, blurring the lines between digital and tangible realms. In contrast, Christian Möller’s “Cheese” raises questions about the advancements in facial recognition technology and its potential implications for art and society.

Week 5: Midterm Progress

Concept and Design

The core concept of this project revolves around creating an immersive interactive experience that blends art and technology, leveraging the versatility of the p5.js library. The aim is to develop an artwork or game that captivates the user by engaging their senses through visual elements, sound, and interactivity. The design is centered on the principle of simplicity to ensure accessibility while fostering a deep sense of exploration and interaction. By incorporating a variety of media types—including shapes, images, sounds, and on-screen text—the project seeks to create a rich, multi-sensory environment that responds to the user’s actions in intuitive and surprising ways.

Implementation

So far, the project has laid down a foundational structure that supports Object-Oriented Programming (OOP) to manage its various interactive elements effectively. A particle system has been developed as the primary visual feature, showcasing a dynamic and aesthetically pleasing behavior that can be manipulated through user input. The system begins with an instructional screen, ensuring that users understand how to interact with the artwork or game before commencing. This design choice not only improves user experience but also aligns with the project’s requirement to start the experience with an instruction screen and wait for user input before starting.

The current implementation allows for the experience to be restarted without the need to reload the sketch entirely, promoting a seamless user interaction that encourages exploration and repeated engagement. The integration of state management facilitates this process, enabling the transition between different phases of the application, such as viewing instructions, interacting with the experience, and restarting the session.
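The restart-without-reload behavior described above comes down to keeping all mutable state in variables that can be re-initialized. A minimal sketch of that pattern — the class, state names, and functions are placeholders, not the project’s actual code:

```javascript
// Restart-without-reload via explicit state management.
let state = 'instructions'; // 'instructions' | 'running'
let particles = [];

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}

function startExperience() {
  particles = [new Particle(50, 50)]; // fresh particle system
  state = 'running';
}

function restartExperience() {
  particles = [];         // discard old state instead of reloading the page
  state = 'instructions'; // back to the instruction screen
}
```

In p5, draw() would branch on `state` to render either the instruction screen or the particle system, and a key or mouse handler would call startExperience / restartExperience.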

Challenges

One of the main challenges faced during development has been incorporating all the required elements (shape, image, sound, and on-screen text) into a cohesive and interactive experience. While the foundational structure for shapes and on-screen text has been established through the particle system and instructional screen, the integration of images and sounds remains incomplete. Balancing the aesthetic appeal with functional interactivity, especially when introducing multimedia elements, requires careful consideration to maintain performance and user engagement.

Another significant challenge is ensuring that the experience is equally compelling for one or more users. Designing interactions that are universally intuitive yet offer depth and discovery poses a unique challenge in user experience design. Additionally, creating a mechanism for seamlessly restarting the experience without restarting the sketch has required thoughtful state management and control flow within the code.

Future Directions

Moving forward, the project will focus on integrating the remaining required elements—specifically, the inclusion of at least one image and one sound—to enrich the sensory experience. Exploring creative ways to incorporate these elements will not only fulfill the project’s requirements but also enhance the overall aesthetic and interactive quality of the work. Addressing the challenges of multi-user interaction and refining the user interface to accommodate various interaction modes will also be a priority. Through iterative design and testing, the project aims to evolve into a fully-realized interactive experience that leverages everything learned so far, presenting users with an engaging and memorable exploration of digital art and interactivity.

Progress

Week 5 Reading Response – Khalifa Alshamsi

The historical context provided, referencing Marvin Minsky’s underestimation of the complexity of computer vision, sets the stage for the discussion on how the field has evolved to include a wide array of applications, especially in the arts. The mention of Myron Krueger’s Videoplace as an early example of interactive artwork utilizing computer vision illustrates the potential for creative interaction between humans and computers beyond the conventional mouse and keyboard interfaces.

The exploration of projects such as Messa di Voce, Standards and Double Standards, Sorting Daemon, and others within the article showcased the versatility of computer vision in crafting interactive and immersive art experiences. These projects, each with its unique thematic focus, from surveillance and privacy to the nuanced exploration of human emotions via facial recognition, illustrate the breadth of computer vision’s applicability. They catalyzed my curiosity, pushing me to consider how technology can be leveraged to challenge societal norms, provoke thought, and evoke emotional responses.