While reading this week’s article, I thought about where I had seen live images being fed into a dynamic program and was reminded of Michael Reeves. For those of you who don’t know, Michael, a college dropout, makes YouTube videos about his absurd programming and design projects which, while being informative, always get a good laugh out of you. I remember watching one of his videos where he pointed a camera at his goldfish tank and used the goldfish’s live position in the camera feed to drive his trades in the stock market. I don’t recall how much of the fifty thousand dollars he initially invested to test the project remained after the whole experiment, but it was certainly an interesting concept. In another one of his videos, he uses his hand’s movement in the air to replicate the movement of a surgical tool – or, well, a knife – on an operating table. According to the article, such dynamic image processing was, until a few years ago, largely confined to military and research institutions rather than being explored publicly. With recent advancements in this technology and the rise in availability of resources for everyday programmers, however, innovations in the arts and sciences have exploded in number. Computer vision has provided yet another way for artists to express their interactive ideas, making genuine physical interactivity possible. Whether the innovations are real scientific advancements in the field of medicine, like tracking body cells with computer vision, or Michael Reeves’ silly experiments, such as a taser that shocks you if you appear shorter than six feet in a photograph, the use of computer vision is likely to become even more widespread, and for good reason. I, for one, cannot wait to implement such ideas in silly projects of my own.
Week 5 – Reflection
The widespread accessibility of computer vision technologies, coupled with their integration into multimedia authoring systems, has empowered artists and designers to explore new dimensions of interactive and immersive experiences. By understanding the versatility of computer vision algorithms and optimizing the physical environment, creative professionals can harness this technology to innovate and enrich the world of art, design, education, and entertainment.
Computer vision algorithms can analyze a wide array of human activities and attributes, such as identity, location, gestures, facial expressions, gait, and gaze direction. This versatility opens up endless possibilities for artists and designers, allowing them to create interactive experiences that respond to various human behaviors and emotions. While some advanced vision algorithms require expertise in image processing and statistics, many effective techniques can be implemented by novice programmers in a short period. Artists and designers familiar with multimedia authoring systems like Macromedia Director and Max/MSP/Jitter can leverage a range of free and commercial toolkits. These toolkits provide ready access to advanced vision functionalities, simplifying the implementation process and encouraging experimentation.
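For example, frame differencing, one of the elementary techniques the article walks through, can be approximated in a few lines of p5.js (a rough illustration of the idea, not the article’s own code):

// Rough p5.js illustration of frame differencing: sum the per-pixel change
// between the current and previous webcam frames to estimate overall motion.

let video, prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare the red channel between frames as a crude brightness proxy
    movement += abs(video.pixels[i] - prevFrame.pixels[i]);
  }
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);

  // A large total difference means a lot of motion in front of the camera
  text('motion: ' + round(movement / 1000), 10, 20);
}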
As technology becomes more accessible, computer vision techniques are expected to be increasingly incorporated into media-art education, gaming, artworks, and various applications. This integration underscores the importance of mastering both the software and the physical setup, facilitating a creative synergy between the physical and digital realms.
Week 5: Mid-term Idea
Concept:
The game is a simple catching game in which players control a basket at the bottom of the screen, moving it left and right with the arrow keys to catch objects falling from the top. The objective is to catch these falling objects to accumulate a score, with the goal of reaching a target score of 55. Once the player reaches or surpasses this score, the game concludes with a “You won!” message, testing the player’s hand-eye coordination and timing skills in the process.
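To make the mechanic concrete, here is a minimal p5.js sketch of the catching loop; the sizes, speeds, and variable names are placeholder assumptions rather than the final game:

// Minimal catching game: arrow keys move a basket, one object falls at a time,
// catching it adds to the score, and reaching 55 shows "You won!".

let basketX;
let objX, objY;
let score = 0;
const TARGET_SCORE = 55;

function setup() {
  createCanvas(400, 400);
  basketX = width / 2;
  resetObject();
}

function resetObject() {
  objX = random(20, width - 20);
  objY = 0;
}

function draw() {
  background(220);

  // Move the basket with the arrow keys
  if (keyIsDown(LEFT_ARROW))  basketX -= 5;
  if (keyIsDown(RIGHT_ARROW)) basketX += 5;
  basketX = constrain(basketX, 40, width - 40);

  // Falling object
  objY += 3;
  circle(objX, objY, 20);

  // Basket
  rect(basketX - 40, height - 20, 80, 15);

  // Catch check: the object reaches basket height and overlaps it horizontally
  if (objY > height - 25 && abs(objX - basketX) < 40) {
    score++;
    resetObject();
  } else if (objY > height) {
    resetObject(); // missed
  }

  text('Score: ' + score, 10, 20);
  if (score >= TARGET_SCORE) {
    textSize(32);
    text('You won!', width / 2 - 60, height / 2);
    noLoop();
  }
}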
Purpose of the Game:
Through this basic basket game, I would like to promote sustainability and environmental awareness. I will turn the basket into a recycling bin and the falling objects into different kinds of garbage. The purpose of this recycling game is to offer an interactive platform that educates players about responsible waste management and recycling, emphasizing the positive impact of individual actions on preserving the environment. By collecting garbage with a recycling bin, the game can encourage players to adopt eco-friendly practices and underscore the significance of recycling as a means to reduce waste and conserve resources. This concept aligns with the goal of inspiring players to take real-world steps toward a cleaner and more sustainable planet.
Improvement:
To enhance my recycling game, I should consider implementing the following improvements:
Firstly, I should enrich the game’s visual experience with polished graphics and animations, along with a thematic background that reinforces sustainability. Secondly, I might incorporate sound effects and music for immersion, gradually increase the difficulty level, and introduce diverse recyclable items and challenges. For example, I can add power-ups, bonuses, and obstacles to make gameplay more engaging. I would also like to ensure responsiveness across devices and optimize the code for smooth performance, accounting for accessibility features. Finally, I will consider localization to reach a wider audience. These enhancements can elevate the game’s impact and educational value while making it more enjoyable for my players.
Midterm Progress
Concept
For my midterm project, I want to create an engaging game by incorporating various elements, including sprites, background parallax scrolling, object-oriented programming (OOP), and TensorFlow.js integration for character control through either speech recognition or body motion detection (not sure if I can pull it off).
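As a reference for the parallax piece, a minimal two-layer scrolling sketch in p5.js could look like the following; the layer images and speeds are hypothetical placeholders:

// Minimal two-layer parallax scroll: the far layer moves more slowly than the
// near layer. The image files are placeholder assets for illustration.

let farLayer, nearLayer;
let scrollX = 0;

function preload() {
  farLayer = loadImage('mountains.png'); // hypothetical asset
  nearLayer = loadImage('trees.png');    // hypothetical asset
}

function setup() {
  createCanvas(800, 400);
}

function drawLayer(img, speed) {
  // Draw the layer twice and wrap its offset so it scrolls forever
  const offset = (scrollX * speed) % width;
  image(img, -offset, 0, width, height);
  image(img, -offset + width, 0, width, height);
}

function draw() {
  background(200, 220, 255);
  drawLayer(farLayer, 0.3);  // background moves slowly
  drawLayer(nearLayer, 1.0); // foreground moves at full speed
  scrollX += 2;              // camera constantly moving right
}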
Most of the work I’ve done so far focuses on researching 2D side-scrolling game development, sprite creation, and background parallax scrolling techniques. I have also begun to explore TensorFlow.js, gaining a foundational understanding of its capabilities. The design phase is ongoing, with a focus on brainstorming methods to seamlessly integrate TensorFlow.js into the core gameplay experience. Although I have not yet begun coding, I plan to create preliminary prototypes to evaluate and refine my TensorFlow.js-based character control concept.
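For the speech-recognition idea, one possible prototype uses the TensorFlow.js speech-commands model; the word-to-movement mapping and thresholds below are my own assumptions, and the library is assumed to be loaded via script tags:

// Hypothetical prototype: control a character with spoken "left" / "right"
// using the @tensorflow-models/speech-commands package (BROWSER_FFT model).
// Assumes tf.js and the speech-commands bundle are included via <script> tags.

let characterX = 200;
let command = '';

async function setupRecognizer() {
  const recognizer = speechCommands.create('BROWSER_FFT');
  await recognizer.ensureModelLoaded();
  const labels = recognizer.wordLabels(); // includes "left", "right", "up", "down", ...

  recognizer.listen(result => {
    // Pick the label with the highest score for this audio frame
    let best = 0;
    for (let i = 1; i < result.scores.length; i++) {
      if (result.scores[i] > result.scores[best]) best = i;
    }
    command = labels[best];
  }, { probabilityThreshold: 0.85, overlapFactor: 0.5 });
}

function setup() {
  createCanvas(400, 200);
  setupRecognizer();
}

function draw() {
  background(240);
  if (command === 'left')  characterX -= 2;
  if (command === 'right') characterX += 2;
  circle(characterX, height / 2, 30);
}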
Potential challenges
The integration of TensorFlow.js introduces a learning curve and the potential for technical challenges. I am committed to investing the necessary time and effort to overcome these hurdles.
Reading reflection – Week 5
The accessibility and applications of computer vision in the arts are explored in the document “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers.” It discusses how advancements in software development tools, open-source communities, processing speeds, and affordable hardware have made computer vision techniques available to novice programmers and interactive-media artists. The document covers various aspects of computer vision, including its integration into interactive art installations, elementary vision algorithms, the importance of physical environments, artist-friendly toolkits, and a student project example. However, alongside the opportunities presented by computer vision, it is crucial to address the ethical concerns associated with its implementation.
Ethical concerns arise from the use of computer vision technology. One major concern is bias and discrimination, as algorithms trained on biased or unrepresentative datasets can perpetuate societal biases and lead to discriminatory outcomes. Invasion of privacy is another pressing issue, with facial recognition capabilities enabling surveillance systems to track and monitor individuals without their consent, raising questions about privacy rights and consent. Ensuring informed consent is important, as individuals may not always be aware of the extent to which their personal data is being collected and used in computer vision applications. Security and surveillance risks exist, as misuse of computer vision technology can result in unauthorized access, breaches of cybersecurity, and potential mass surveillance. Lastly, the widespread adoption of computer vision can have social impact and inequality implications, exacerbating existing inequalities and raising concerns about job displacement and socioeconomic disparities.
Addressing these ethical concerns is crucial to ensure the responsible and equitable use of computer vision technology. It requires measures such as curating diverse and representative datasets, implementing bias mitigation techniques, safeguarding privacy rights, providing transparent communication and consent mechanisms, prioritizing security and cybersecurity best practices, and considering the broader social implications to promote equitable outcomes. By integrating ethical considerations into the development and deployment of computer vision systems, we can harness the potential of this technology while mitigating its potential harms.
Week 5- Reading Reflection
As I read through the article on computer vision in the context of art and interactive media, I couldn’t help but draw parallels to real-life situations and experiences where computer vision is becoming increasingly relevant.
The article’s emphasis on the need to adapt computer vision algorithms to specific physical conditions struck a chord with me. In my daily life, I’ve noticed how computer vision applications in security systems have evolved. For instance, surveillance cameras that rely on computer vision algorithms are often used in shopping malls and airports to monitor crowds and detect unusual activities. These systems work best when lighting conditions are optimal and when individuals are easily distinguishable from their surroundings. However, in poorly lit areas or crowded environments, the effectiveness of these algorithms can be compromised. This reminds me of the article’s suggestion to optimize physical conditions to enhance the performance of computer vision systems.
The article’s discussion of multimedia authoring tools for computer vision applications made me reflect on the broader impact of these tools in various industries. For instance, the use of computer vision in augmented reality (AR) and virtual reality (VR) applications has become increasingly popular. AR applications like Pokémon GO and VR experiences in gaming rely on computer vision to track users’ movements and provide immersive experiences. These technologies have not only revolutionized gaming but also found applications in fields such as education, healthcare, and training simulations.
Can machines replace art? No, because art is all about human creativity and feelings. Technology can help artists create and share their work, but it can’t replace the uniqueness and emotions that artists put into their creations. Art also brings people together, makes us think, and raises important questions that machines can’t do. So, instead of replacing art, technology works alongside it to make new kinds of creative stuff.
Week 5:
I find myself intrigued by the evolving intersection of technology and creativity. The notion that art can connect people across distances through digital media is fascinating. I appreciate the idea of using computer vision to create interactive experiences that convey important messages, such as David Rokeby’s ‘Sorting Daemon’ illustrating the impact of surveillance. However, I can’t help but feel that there’s a slight artificiality to these experiences, despite their cleverness. There’s something special about physically interacting with art, like the example of touching lights in a room, which digital art can’t fully replicate. I agree that computers are improving in creating engaging and realistic interactive art, but they can’t replace the unique connection that occurs when all our senses and physical presence are engaged.
I also found the article’s insights about the adaptability of computer vision algorithms to be enlightening. It reinforces the idea that technology in art should be tailored to the specific context. I appreciate the emphasis on optimizing these systems. The mention of user-friendly tools like Processing and Max/MSP/Jitter resonates with my belief that technology should empower creativity for a wider audience. However, the article’s caution about biases in these algorithms, especially in identity recognition and gesture analysis, raises essential ethical considerations. As technology continues to shape our world, it’s crucial to address these concerns to avoid unintended consequences.
Last, the historical perspective of the evolution of interactive technology, particularly the emergence of virtual reality decades ago, adds depth to our understanding of our modern digital landscape. The surge in interactive projects in the early 2000s signifies the rapid evolution of technology and its expanding role in shaping our interactions with art and machines. The impact of computer vision on interactive art is awe-inspiring, blurring the boundaries between humans and technology, and opening up exciting avenues for creativity and innovation.
Week 5- Midterm Progress 1
Generative Gallery: A Digital Art Experience
Concept:
The concept of the art gallery is to create an immersive and interactive digital art experience that showcases the power of generative art and Object-Oriented Programming (OOP). The gallery is designed to engage visitors in a multi-sensory journey through four distinct exhibits, each of which highlights different aspects of generative art.
Project Components:
- The background will be an image of the art gallery.
- There will be a soothing sound effect in the background.
- On screen text will provide information about the art.
- The project will be implemented using object-oriented programming.
- When you start the project, a screen with instructions will be displayed.
- You will be able to interact with the project by clicking on the screen to look at the art.
- I will include a “Start Again” button, allowing you to start a new session without having to refresh the page or restart the sketch, ensuring a seamless experience.
This code just displays the exterior and interior of the art gallery.
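A rough p5.js approximation of that stage, assuming two placeholder images for the exterior and interior (hypothetical filenames), might look like this:

// Rough sketch of the current stage: show the gallery exterior, and switch
// to the interior when the visitor clicks. Filenames are placeholders.

let exteriorImg, interiorImg;
let inside = false;

function preload() {
  exteriorImg = loadImage('gallery_exterior.jpg'); // hypothetical asset
  interiorImg = loadImage('gallery_interior.jpg'); // hypothetical asset
}

function setup() {
  createCanvas(800, 600);
}

function draw() {
  image(inside ? interiorImg : exteriorImg, 0, 0, width, height);
}

function mousePressed() {
  inside = !inside; // click to step inside / back out
}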
Challenges:
Creating a Generative Gallery: A Digital Art Experience in p5.js will be an exciting project. It will come with some challenges I’ll have to work through. I’ll need to create complex art that changes in real-time, making sure it runs smoothly. I’ll need to design buttons and stuff so people can use the art easily. I’ll also have to think about how it will work on different screens and use a way of coding called object-oriented programming, which will be kind of like making different building blocks for the art. Finding and fixing mistakes in the art will be tough, but I’ll learn a lot. Making sure the art is well-documented will be important for others who might want to understand it. Despite these challenges, I’ll be proud of the result – a cool digital art experience that I hope everyone will enjoy.
Midterm Progress draft 1 – Week 5
Concept: PuzzlePlay
This project idea sprang from something personal and relatable – my sister’s love for solving puzzles. It always fascinates me yet triggers me how she has the patience and dedication to tackle intricate puzzles. It got me thinking, why not bring that captivating experience into the digital realm? I want to create a digital puzzle game that captures the essence of those satisfying moments when you piece together the final part of a puzzle.
But here’s where it gets exciting – users can take the reins to customize their puzzle-solving journey. Inspired by my sister’s passion for puzzles, I want to offer users the freedom to tweak the puzzle’s difficulty. They can do this by adjusting the number of columns and rows in the puzzle. It’s like having the power to set your own challenge level. This project aims to provide an interactive and enjoyable experience that’s both personal and engaging.
User Interaction Design:
- The user is greeted with on-screen instructions
- The puzzle pieces will be cut from an image sourced online.
- Puzzle pieces are displayed on the canvas, and users can click and drag them.
- A “Shuffle” button resets the puzzle.
- Sound effects play when pieces are clicked or moved.
- Upon completing the puzzle, a “Congratulations!” message with a celebration sound effect is displayed.
The most frightening or complex part of the project would be the implementation of puzzle piece movement and the logic behind shuffling the puzzle once the user has completed it.
Piece movement:
To minimize that risk, I’ve created a quick draft of what the final project may look like. I’ve numbered each piece of the puzzle so it will be easier for me to identify its grid location, and I can later replace the numbers with the image that I’ll use.
Swapping pieces and finding the empty position (swap / find the 0 position):
I’ve created a function that checks, whenever a box is clicked, whether it can legally move into the empty position, and another function that swaps two pieces. An “if” condition checks whether the clicked piece is adjacent to the empty slot; if it is, the two pieces are swapped, as shown in the sketch below.
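A condensed sketch of that logic, assuming the pieces are stored row-major in a 1D array with 0 marking the empty slot (my own representation for illustration, not necessarily the final code):

// Sliding-puzzle helpers: pieces stored row-major in a 1D array,
// with 0 representing the empty position. cols/rows set the grid size.

const cols = 3, rows = 3;
let pieces = [1, 2, 3, 4, 5, 6, 7, 8, 0]; // solved layout as a starting example

function findEmpty() {
  return pieces.indexOf(0);
}

// A move is valid when the clicked piece sits directly above, below,
// left of, or right of the empty slot.
function isValidMove(index) {
  const empty = findEmpty();
  const r1 = Math.floor(index / cols), c1 = index % cols;
  const r2 = Math.floor(empty / cols), c2 = empty % cols;
  return Math.abs(r1 - r2) + Math.abs(c1 - c2) === 1;
}

function swapWithEmpty(index) {
  const empty = findEmpty();
  if (isValidMove(index)) {
    [pieces[index], pieces[empty]] = [pieces[empty], pieces[index]];
  }
}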
Midterm Progress
For my midterm project, I researched many concepts, but the one that intrigued me the most is using animated sine-wave structures to show data. As for the difficulties, the coding behind this concept ranges in complexity, so I will have to pay extra attention to it. I’ve seen many examples of this interactive concept; I will figure out a creative way to connect the things I learned in class, integrate sound, and hopefully arrive at a final look for my project. By watching more p5 tutorials, I should be able to accomplish these ideas. To be updated.
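As a starting point, a bare-bones animated sine wave in p5.js might look like this, with amplitude and frequency as arbitrary placeholders that data could later drive:

// Bare-bones animated sine wave; amplitude and frequency are placeholders
// that a data source could later modulate.

let phase = 0;

function setup() {
  createCanvas(600, 300);
  noFill();
}

function draw() {
  background(255);
  beginShape();
  for (let x = 0; x < width; x++) {
    const y = height / 2 + 60 * sin(0.03 * x + phase);
    vertex(x, y);
  }
  endShape();
  phase += 0.05; // advance the wave each frame
}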