Week 5 – Reading Reflection

“Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers” discusses the increasingly widespread field of computer vision, which refers to a computer’s ability to interpret visual data such as images and videos. The article focuses on applications of computer vision in interactive art, describing particular examples that have stood out in this field. I found Myron Krueger’s Videoplace to be a remarkable artwork that involves various aspects of vision and interaction; it reminded me of optical illusions, which made it all the more intriguing. I also found the concept of LimboTime very impressive: built on a relatively simple idea, the project achieves exciting interactivity through computer vision.

I felt the article provided a concise yet thorough understanding of computer vision. It was informative and interesting to read about the different techniques that enable a computer to infer visual data, namely ‘frame differencing’ to detect movement, ‘background subtraction’ to detect the presence of people and objects, and ‘brightness thresholding’, which compares and classifies pixels by their brightness.
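
To get a feel for how one of these techniques might work, I sketched out a minimal frame-differencing example in p5.js. This is my own illustrative sketch rather than code from the article, and names like prevFrame and motionScore are placeholders I made up:

// Minimal frame-differencing sketch (illustrative only; prevFrame and motionScore are placeholder names)
let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();

  if (prevFrame) {
    let motionScore = 0;
    // Compare each pixel's brightness with the same pixel in the previous frame
    for (let i = 0; i < video.pixels.length; i += 4) {
      const currB = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
      const prevB = (prevFrame[i] + prevFrame[i + 1] + prevFrame[i + 2]) / 3;
      motionScore += abs(currB - prevB);
    }
    // A larger score means more movement between frames
    fill(255, 0, 0);
    text('motion: ' + round(motionScore / 1000), 10, 20);
  }
  prevFrame = video.pixels.slice(); // keep a copy of this frame for the next comparison
}

Summing the per-pixel brightness differences between consecutive frames gives a rough measure of how much movement has occurred, which is essentially the idea the article describes as frame differencing.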

The discussion of computer vision in the physical world reminded me of a very common application: the reverse camera in cars, which processes visual data with enough precision to produce an accurate estimate of the guide lines that assist the driver in parking. However, as the article mentions, such applications are only possible if the physical conditions are designed to be compatible with the program.

Overall, the article was an engaging read. It has sparked new ideas that I hope to explore in future projects, as well as a desire to better understand how the various computer vision techniques actually work.

Midterm Progress 1 – Street Crash

Concept and User Interaction

Concept: The inspiration for my current project stems from a previous experience in an “Introduction to Computer Science” course, where I was tasked with creating a game using the Processing library in Python. This immersive experience not only kindled my passion for game development but also became the driving force behind my current endeavor. Inspired by this journey, I set out to create an entertaining 2D car racing game called “Street Crash”. The core idea is quite straightforward: players take the wheel of a virtual car and must deftly navigate it along a bustling road while skillfully avoiding collisions with other cars. The final goal is to achieve the greatest possible score while retaining control and avoiding accidents.

User Interaction: I’ve given considerable thought to the user experience in my game. I created an intuitive interaction approach to ensure that players find the game accessible and entertaining. This is how it works:

  • Player Control: Players can easily steer their car using the arrow keys. The left arrow key initiates a left turn, while the right arrow key guides the car to the right.
  • Game Start: A simple, universally recognized action, pressing the spacebar, kickstarts the game. This design choice makes it easy for players to begin their adventure.
  • Game Restart: After a game over, players can swiftly get back into the action by pressing the spacebar again.

Classes and Functions

  • Player Class (player): This class takes care of everything related to the player’s car, including its position and movement. Here’s a snippet of the code:
function player() {
  // Define properties and methods for the player's car
  this.pos = createVector(width / 2, height - 100);
  this.r = createVector(40, 60);

  // Render the player's car
  this.render = function () {
    // Drawing code for the player's car
    // ...
  };

  // Update the player's car position
  this.update = function () {
    // Logic for updating player's car position
    // ...
  };
}
  • Street Class (street): This class represents the road that the player’s car travels on, including its movement.
  • Cars Class (cars): This class manages the spawning, movement, and collision detection of other cars on the road.
  • Keys Function: The keys function responds to arrow-key input for controlling the player’s car; a rough sketch of this input handling follows below.
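
A rough sketch of how this input handling might look is below. The names gameState and myPlayer are placeholders, not my final implementation:

// Rough sketch of the input handling (gameState and myPlayer are placeholder names)
let myPlayer;            // instance of the player class above
let gameState = 'start'; // 'start', 'play', or 'over'

function setup() {
  createCanvas(400, 600);
  myPlayer = new player();
}

function draw() {
  background(50);
  if (gameState === 'play') {
    keys();
    myPlayer.update();
    myPlayer.render();
  }
}

// Called every frame while the game is running
function keys() {
  if (keyIsDown(LEFT_ARROW)) {
    myPlayer.pos.x -= 5; // steer left
  }
  if (keyIsDown(RIGHT_ARROW)) {
    myPlayer.pos.x += 5; // steer right
  }
}

// Spacebar starts the game, or restarts it after a game over
function keyPressed() {
  if (key === ' ' && (gameState === 'start' || gameState === 'over')) {
    gameState = 'play';
  }
}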

Embedded Sketch

Tackling Complexity: Collision Detection

The collision detection system is without a doubt one of the most complex aspects of this project. It is critical to the game’s fairness and enjoyment that collisions are detected and handled reliably. To reduce the risk associated with this complexity, I’ve carefully implemented the collision detection algorithm: it determines whether the player’s car and another car overlap on both the X and Y axes, taking the cars’ sizes into account. In the event of a collision, the game ends, and the player is given the option to restart.
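
A minimal sketch of the kind of overlap test I have in mind is below, assuming each car stores its top-left corner in pos and its width and height in r, as in the player snippet above. The names hitsPlayer and other are placeholders:

// Axis-aligned overlap test between the player's car and another car
// (assumes pos is the top-left corner and r holds the width and height;
//  hitsPlayer and other are placeholder names)
function hitsPlayer(other) {
  const overlapX = myPlayer.pos.x < other.pos.x + other.r.x &&
                   myPlayer.pos.x + myPlayer.r.x > other.pos.x;
  const overlapY = myPlayer.pos.y < other.pos.y + other.r.y &&
                   myPlayer.pos.y + myPlayer.r.y > other.pos.y;
  return overlapX && overlapY; // a crash only if the cars overlap on both axes
}

If this test returns true for any oncoming car, the game switches to its game-over state and waits for the spacebar.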


 

Week 5 – Reading Reflection

While reading this week’s article, I thought about where I had seen live images being fed into a dynamic program and was reminded of Michael Reeves. For those of you who don’t know, Michael, a college dropout, makes YouTube videos about his absurd programming and design projects which, while being informative, always get a good laugh out of you. I remember watching one of his videos where he pointed a camera at his goldfish in a tank and used the goldfish’s live position from the camera’s feed to influence his trades in the stock market. I don’t recall how much of the fifty thousand dollars that he initially invested to test his project remained after the whole experiment, but it was certainly an interesting concept. In another one of his videos, he uses his hand’s movement in the air to replicate the movement of a surgical tool – or, well, a knife – on an operating table.

According to the article, such dynamic image processing was not explored publicly until a few years ago, having been limited to military and research institutions. With recent advancements in this technology and the rise in availability of resources for common programmers, however, innovations in the arts and sciences have exploded in number. Computer vision has provided yet another way for artists to express their interactive ideas, making actual physical interactivity possible. Whether the innovations are scientific advancements in the field of medicine, like tracking body cells with computer vision, or Michael Reeves’ silly experiments, such as making a taser electrocute you if you look less than six feet tall in a photograph, the use of computer vision is likely to become even more widespread and for good. I, for one, cannot wait to implement such ideas in silly projects of my own.

Week 5 – Reflection

The widespread accessibility of computer vision technologies, coupled with their integration into multimedia authoring systems, has empowered artists and designers to explore new dimensions of interactive and immersive experiences. By understanding the versatility of computer vision algorithms and optimizing the physical environment, creative professionals can harness this technology to innovate and enrich the world of art, design, education, and entertainment.

Computer vision algorithms can analyze a wide array of human activities, such as identities, locations, gestures, facial expressions, walking styles, and gaze directions. This versatility opens up endless possibilities for artists and designers, allowing them to create interactive experiences that respond to various human behaviors and emotions.

While some advanced vision algorithms require expertise in image processing and statistics, many effective techniques can be implemented by novice programmers in a short period. Artists and designers familiar with multimedia authoring systems like Macromedia Director and Max/MSP/Jitter can leverage a range of free and commercial toolkits. These toolkits provide ready access to advanced vision functionalities, simplifying the implementation process and encouraging experimentation.

As technology becomes more accessible, computer vision techniques are expected to be increasingly incorporated into media-art education, gaming, artworks, and various applications. This integration underscores the importance of mastering both the software and the physical setup, facilitating a creative synergy between the physical and digital realms.

Week 5: Mid-term Idea

Concept:

The basic concept is a simple catching game: players control a basket at the bottom of the screen, moving it left and right with the arrow keys to catch objects falling from the top. The objective is to catch these falling objects to accumulate a score, with the goal of reaching a target score of 55. Once the player achieves or surpasses this score, the game concludes with a “You won!” message, testing the player’s hand-eye coordination and timing skills in the process.
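
A bare-bones sketch of this mechanic in p5.js might look like the following. These are placeholder shapes and values, and the final game will look quite different:

// Bare-bones catching game sketch (placeholder values; not the final version)
let basketX;
let drop;           // a single falling object, for simplicity
let score = 0;
const TARGET = 55;  // winning score

function setup() {
  createCanvas(400, 400);
  basketX = width / 2;
  drop = { x: random(width), y: 0 };
}

function draw() {
  background(220);

  // Move the basket with the arrow keys, keeping it on screen
  if (keyIsDown(LEFT_ARROW)) basketX -= 5;
  if (keyIsDown(RIGHT_ARROW)) basketX += 5;
  basketX = constrain(basketX, 0, width - 60);

  // Falling object
  drop.y += 4;
  ellipse(drop.x, drop.y, 20, 20);

  // Basket (to be replaced by a recycling-bin image later)
  rect(basketX, height - 20, 60, 15);

  // Catch check: the object reaches the basket row and overlaps it horizontally
  if (drop.y > height - 25 && drop.x > basketX && drop.x < basketX + 60) {
    score++;
    drop = { x: random(width), y: 0 };
  } else if (drop.y > height) {
    drop = { x: random(width), y: 0 }; // missed; respawn at the top
  }

  text('Score: ' + score, 10, 20);
  if (score >= TARGET) {
    text('You won!', width / 2 - 30, height / 2);
    noLoop(); // stop the game once the target is reached
  }
}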

 

Purpose of the Game:

Through this basic basket game, I would like to promote sustainability and environmental awareness. I will change the stick to a recycling bin and the falling objects to different kinds of garbage. The purpose of this recycling game is to offer an interactive platform that educates players about responsible waste management and recycling, emphasizing the positive impact of individual actions on preserving the environment. By having players collect garbage with a recycling bin, the game can encourage them to adopt eco-friendly practices and underscore the significance of recycling as a means to reduce waste and conserve resources. This concept aligns with the goal of inspiring players to take real-world steps towards a cleaner and more sustainable planet.

 

Improvement:

To enhance my recycling game, I should consider implementing the following improvements:

Firstly, I should enrich the game’s visual experience with polished graphics and animations, along with a thematic background that reinforces sustainability. Secondly, I might incorporate sound effects and music for immersion. Thirdly, I could gradually increase the difficulty and introduce diverse recyclable items and challenges; for example, I can add power-ups, bonuses, and obstacles to make gameplay more engaging. I would also like to ensure responsiveness across devices and optimize the code for smooth performance, while accounting for accessibility features. Finally, I will consider localization to reach a wider audience. These enhancements can elevate the game’s impact and educational value while making it more enjoyable for my players.

Midterm Progress

Concept

For my midterm project, I want to create an engaging game by incorporating various elements, including sprites, background parallax scrolling, object-oriented programming (OOP), and TensorFlow.js integration for character control through either speech recognition or body motion detection (not sure if I can pull it off).

Most of the work I’ve done so far focuses on researching 2D side-scrolling game development, sprite creation, and background parallax scrolling techniques. I have also begun to explore TensorFlow.js, gaining a foundational understanding of its capabilities. The design phase is ongoing, with a focus on brainstorming methods to seamlessly integrate TensorFlow.js into the core gameplay experience. Although I have not yet begun coding, I plan to create preliminary prototypes to evaluate and refine my TensorFlow.js-based character control concept.
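
Although I have not started on the actual game code, a minimal illustration of the parallax technique I have been reading about could look something like this in p5.js. The files far.png and near.png are hypothetical placeholders for assets I have not created yet:

// Minimal parallax-scrolling sketch: the far layer scrolls slower than the near one.
// 'far.png' and 'near.png' are hypothetical image files, not assets I have yet.
let farLayer, nearLayer;
let farX = 0, nearX = 0;

function preload() {
  farLayer = loadImage('far.png');
  nearLayer = loadImage('near.png');
}

function setup() {
  createCanvas(600, 300);
}

function draw() {
  // Scroll each layer left at its own speed, wrapping when it leaves the screen
  farX -= 1;   // background moves slowly
  nearX -= 4;  // foreground moves faster, creating the illusion of depth
  if (farX <= -width) farX = 0;
  if (nearX <= -width) nearX = 0;

  // Draw each layer twice so the wrap-around is seamless
  image(farLayer, farX, 0, width, height);
  image(farLayer, farX + width, 0, width, height);
  image(nearLayer, nearX, 0, width, height);
  image(nearLayer, nearX + width, 0, width, height);
}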

Potential challenges

The integration of TensorFlow.js introduces a learning curve and the potential for technical challenges. I am committed to investing the necessary time and effort to overcome these hurdles.

Reading reflection – Week 5

The accessibility and applications of computer vision in the arts are explored in the document “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers.” It discusses how advancements in software development tools, open-source communities, processing speeds, and affordable hardware have made computer vision techniques available to novice programmers and interactive-media artists. The document covers various aspects of computer vision, including its integration into interactive art installations, elementary vision algorithms, the importance of physical environments, artist-friendly toolkits, and a student project example. However, alongside the opportunities presented by computer vision, it is crucial to address the ethical concerns associated with its implementation.

Ethical concerns arise from the use of computer vision technology. One major concern is bias and discrimination, as algorithms trained on biased or unrepresentative datasets can perpetuate societal biases and lead to discriminatory outcomes. Invasion of privacy is another pressing issue, with facial recognition capabilities enabling surveillance systems to track and monitor individuals without their consent, raising questions about privacy rights and consent. Ensuring informed consent is important, as individuals may not always be aware of the extent to which their personal data is being collected and used in computer vision applications. Security and surveillance risks exist, as misuse of computer vision technology can result in unauthorized access, breaches of cybersecurity, and potential mass surveillance. Lastly, the widespread adoption of computer vision can have social impact and inequality implications, exacerbating existing inequalities and raising concerns about job displacement and socioeconomic disparities.

Addressing these ethical concerns is crucial to ensure the responsible and equitable use of computer vision technology. It requires measures such as curating diverse and representative datasets, implementing bias mitigation techniques, safeguarding privacy rights, providing transparent communication and consent mechanisms, prioritizing security and cybersecurity best practices, and considering the broader social implications to promote equitable outcomes. By integrating ethical considerations into the development and deployment of computer vision systems, we can harness the potential of this technology while mitigating its potential harms.

Week 5 – Reading Reflection

As I read through the article on computer vision in the context of art and interactive media, I couldn’t help but draw parallels to real-life situations and experiences where computer vision is becoming increasingly relevant.

The article’s emphasis on the need to adapt computer vision algorithms to specific physical conditions struck a chord with me. In my daily life, I’ve noticed how computer vision applications in security systems have evolved. For instance, surveillance cameras that rely on computer vision algorithms are often used in shopping malls and airports to monitor crowds and detect unusual activities. These systems work best when lighting conditions are optimal and when individuals are easily distinguishable from their surroundings. However, in poorly lit areas or crowded environments, the effectiveness of these algorithms can be compromised. This reminds me of the article’s suggestion to optimize physical conditions to enhance the performance of computer vision systems.

The article’s discussion of multimedia authoring tools for computer vision applications made me reflect on the broader impact of these tools in various industries. For instance, the use of computer vision in augmented reality (AR) and virtual reality (VR) applications has become increasingly popular. AR applications like Pokémon GO and VR experiences in gaming rely on computer vision to track users’ movements and provide immersive experiences. These technologies have not only revolutionized gaming but also found applications in fields such as education, healthcare, and training simulations.

Can machines replace art? No, because art is all about human creativity and feelings. Technology can help artists create and share their work, but it can’t replace the uniqueness and emotions that artists put into their creations. Art also brings people together, makes us think, and raises important questions in ways that machines cannot. So, instead of replacing art, technology works alongside it to make new kinds of creative stuff.

Week 5:

I find myself intrigued by the evolving intersection of technology and creativity. The notion that art can connect people across distances through digital media is fascinating. I appreciate the idea of using computer vision to create interactive experiences that convey important messages, such as David Rokeby’s ‘Sorting Daemon’ illustrating the impact of surveillance. However, I can’t help but feel that there’s a slight artificiality to these experiences, despite their cleverness. There’s something special about physically interacting with art, like the example of touching lights in a room, which digital art can’t fully replicate. I agree that computers are improving in creating engaging and realistic interactive art, but they can’t replace the unique connection that occurs when all our senses and physical presence are engaged.

I also found the article’s insights about the adaptability of computer vision algorithms to be enlightening. It reinforces the idea that technology in art should be tailored to the specific context. I appreciate the emphasis on optimizing these systems. The mention of user-friendly tools like Processing and Max/MSP/Jitter resonates with my belief that technology should empower creativity for a wider audience. However, the article’s caution about biases in these algorithms, especially in identity recognition and gesture analysis, raises essential ethical considerations. As technology continues to shape our world, it’s crucial to address these concerns to avoid unintended consequences.

Lastly, the historical perspective on the evolution of interactive technology, particularly the emergence of virtual reality decades ago, adds depth to our understanding of the modern digital landscape. The surge in interactive projects in the early 2000s signifies the rapid evolution of technology and its expanding role in shaping our interactions with art and machines. The impact of computer vision on interactive art is awe-inspiring, blurring the boundaries between humans and technology, and opening up exciting avenues for creativity and innovation.