Reading reflection: Making art is like setting a trap

Philip Larkin, a poet, offers a straightforward perspective on the artistic process, particularly when it comes to creating poems. He simplifies it into three stages:

Step one: Intensely Feel Something
At the start of your creative journey, you need to be deeply connected to a specific emotion. It’s like having a passionate fire burning inside you, driving you to take action.

Step two: Convey that Emotion
The next phase involves using words and other forms of expression to enable others to feel the same emotion that’s so alive within you. However, this part can be quite challenging, as not everyone may easily grasp the feeling you intend to convey.

Step three: Let Others Experience It
Once you’ve crafted your artistic work, it’s time to share it with the world. When people engage with your poetry or observe your artwork, it should ignite the same emotions in them that you originally felt. This is when your art truly succeeds.

Although Larkin presents this process as simple, the creation of art often involves complexities, particularly in the stages between these steps. Crafting the actual artwork can be a time-consuming and mysterious journey.

Larkin suggests that to evoke the emotions in others, you must first find satisfaction within yourself. This means that while creating art, it should resonate with your inner self. If it does, it increases the likelihood that others will experience similar feelings when they interact with your work.

One very interesting concept in the text is that some artists dedicate their entire lives to expressing the same emotion in various ways. It’s as though they possess one profound sentiment they wish to share with the world.

While Larkin’s stages offer a simple framework for art, the true artistry often unfolds in the nuances between these phases. Having that one special feeling you want to convey can propel your creative journey throughout your lifetime.

Week#5 Midterm Project

I wanted to make my midterm project really personal so that it would motivate me to code and push myself, and the only topic that instantly came to mind was music. I am an avid music lover and have used music as a coping mechanism during times of (lowkey) depression, so I naturally have very strong feelings about it. I decided to create my own version of the “coffee shop experience”, which really struck a chord with me. My rendition of the “coffee shop”, as mentioned, centers around music. A vintage radio with on/off, skip, and pause buttons, which requires user interaction, will play a sequence of 5(?) songs – more details are listed below. Each song has its own specific background, each relating to a particular memory, and each background has a dynamic element. It’s important to note that the song title and artist name will be displayed on the radio for each song that plays.

Checklist (a rough code sketch of this structure follows after the list):
1) create a radio using p5.js.
(on/off button – when “on”, music plays; skip button; pause button)
(off button – randomizes the sequence of songs + turns off the music)
(make sure songs autoplay on loop)
2) (image and sound) – different songs (x5) have different backgrounds.
3) (lists/arrays and on-screen text) – song title shows on the display while its song is playing.
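To de-risk this before I start, here is a minimal sketch of how I imagine the radio’s state could be organised in p5.js – the song files, titles, and button handling are all placeholders/assumptions rather than my actual project code:

```javascript
// Rough p5.js sketch of the radio's structure (file names and titles are placeholders).
let songs = [];
let titles = ["Song A – Artist A", "Song B – Artist B"]; // placeholder titles
let current = 0;
let isOn = false;

function preload() {
  songs.push(loadSound("songA.mp3")); // hypothetical file names
  songs.push(loadSound("songB.mp3"));
}

function setup() {
  createCanvas(400, 300);
  createButton("on/off").mousePressed(togglePower);
  createButton("skip").mousePressed(skipSong);
}

function draw() {
  background(40);
  fill(255);
  if (isOn) {
    text(titles[current], 20, height - 20); // on-screen title + artist display
  }
}

function togglePower() {
  isOn = !isOn;
  if (isOn) {
    songs[current].loop();                 // autoplay on loop while the radio is on
  } else {
    songs[current].stop();
    current = floor(random(songs.length)); // randomize where the sequence restarts
  }
}

function skipSong() {
  songs[current].stop();
  current = (current + 1) % songs.length;
  if (isOn) songs[current].loop();
}
```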

Specific worries/ possible challenges:
1) keeping track of specific parts of the code:
it’s absolutely imperative that my code is well organised and commented, even more so for me since I’m still establishing a foundation in coding. Keeping track of variable names, classes, and functions is essential. One solution could be to work on separate sketch.js files for specific parts (for example, backgrounds), especially for code sections that will be wrapped in classes. Once I’ve made sure a part works, I can copy and paste it into the main sketch.js file. I’ll also regularly copy my main file code into a Google Doc/Word doc so that it’s easier for me to keep track of it and stay organised – I find that the p5.js editor window size limits readability.

2) localising the “click” function to specific shapes:
throughout my assignments where interactivity is present, I’ve used “click” functions that result in a change, but the click has never been localised to a specific area. It was general and random, which allowed flexibility. Because of this, I am naturally worried about the complexity of the code, and with this comes a building anxiety about the numerous conditions (if statements and for loops) that depend heavily on how the user decides to interact. To me, this particular segment looks like a big knotted, balled-up thread that I have to unpick strand by strand, so at the moment it seems incredibly daunting… Of course, nothing is impossible with an internet search (in-depth if necessary). If I remember correctly, localising the “click” function to specific shapes has been touched upon in one of the Coding Train videos, and the same applies to the if and for loops. Furthermore, there is always the “coffee shop experience” example I can use for guidance or as a source of reference.
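For reference while I figure this out, the basic idea (as I understand it, not code taken from the class example) is to compare mouseX/mouseY against a shape’s bounds inside mousePressed(). A minimal sketch with a made-up button rectangle:

```javascript
// Hypothetical on/off button area – the coordinates are placeholders.
let btn = { x: 150, y: 200, w: 100, h: 40 };
let radioOn = false;

function setup() {
  createCanvas(400, 300);
}

function draw() {
  background(220);
  fill(radioOn ? "green" : "red");
  rect(btn.x, btn.y, btn.w, btn.h);
}

function mousePressed() {
  // Only react if the click landed inside the button's rectangle.
  if (mouseX > btn.x && mouseX < btn.x + btn.w &&
      mouseY > btn.y && mouseY < btn.y + btn.h) {
    radioOn = !radioOn;
  }
}
```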

Week 5 – Midterm Progress

Concept)
Looking around, I see that most people on campus are worn out both mentally and physically. Many students have hit a point where they are so stressed that they don’t even have time to manage or relieve that stress. From this, I started thinking that I want to do something with ‘screaming,’ since it’s a good way of letting out stress and frustration.

I believe there are a few games that use the volume or pitch of sound as a way to move characters around. Below is one of the games I have used as a reference.

Most games, I found out, use either the volume or the pitch of the input sound, not both. Based on this, I decided to use both factors. I plan to use volume to control the character’s speed and pitch to control the character’s vertical position. With these controls, the player will have to navigate through a map, collecting coins. (Below is a reference for what I’m thinking of.)

Programming Ideas)
There should be functions that create the map with collectables. There should also be a function that takes the input sound and translates it into the player (character)’s position and moving speed, something that keeps track of contact between the player and the walls, and something that keeps track of the score.

Getting sound input and translating it to the player’s position and its moving speed would involve interactivity.
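As a starting point for that translation step, this is a rough sketch of how I imagine it could work with p5.sound – the mapping ranges, and using the spectral centroid as a stand-in for pitch, are my own assumptions and will definitely need tuning:

```javascript
// Sketch: volume -> speed, rough pitch estimate -> vertical position.
let mic, fft, player;

function setup() {
  createCanvas(600, 400);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
  player = { x: 50, y: 200 };
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio input starts
}

function draw() {
  background(220);
  let vol = mic.getLevel();          // roughly 0.0 – 1.0
  fft.analyze();
  let centroid = fft.getCentroid();  // spectral centroid in Hz, a crude pitch proxy

  // Map loudness to speed and "pitch" to vertical position (ranges need tuning).
  let speed = map(vol, 0, 0.3, 0, 5, true);
  let targetY = map(centroid, 100, 2000, height, 0, true);

  player.x += speed;
  if (player.x > width) player.x = 0;       // wrap around for testing
  player.y = lerp(player.y, targetY, 0.1);  // smooth out jitter

  ellipse(player.x, player.y, 30, 30);
}
```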

Complex Parts)
I think the hardest part will be figuring out how to translate the volume and pitch of the user’s voice into movement. The controls can’t be too sensitive, or the game will become frustrating, but they also can’t be so insensitive that it’s hard to move the character around.

I also think figuring out when the player has contacted a wall (which should end the game) would be a challenge.
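One simple way I might check this (a sketch under the assumption that walls are stored as rectangles and the player is a circle – none of this is decided yet):

```javascript
// Hypothetical shapes: the player is a circle {x, y, r}, each wall is a rectangle {x, y, w, h}.
function hitsWall(player, wall) {
  // Find the point on the wall rectangle closest to the player's centre,
  // then check whether that point lies within the player's radius.
  let closestX = constrain(player.x, wall.x, wall.x + wall.w);
  let closestY = constrain(player.y, wall.y, wall.y + wall.h);
  return dist(player.x, player.y, closestX, closestY) < player.r;
}

// Example use inside draw():
// if (walls.some(w => hitsWall(player, w))) { gameOver = true; }
```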

Reducing Risks)
I think I’ll work on two parts separately: the character’s movement using sound input, and the moving map. I will first work on the movement, testing out different volumes and pitches. If the sensitivity feels just right, I will then move on to work separately on the map.

I will have to figure out the shape of the map and how I want to generate it. I will also think about how to randomly place collectables. When both parts are done, I can combine the two to create the full game.


Week 5 – Reading Reflection

The study that was given as our reading covers how the technology of computer vision started and how it evolved through its use by different artists and fields. I think most readers would already expect computer vision’s utility to be nearly limitless – we’ve seen so many works using computer vision that suggest the potential for further development and extension.

The reading is a good reminder that, as much as computer vision can lead to fascinating works, it has many limitations and needs careful consideration in order to maintain good accuracy. The fact that how precisely the technology works depends on our own decisions is, in my opinion, another of its charms.

I also like how the work Suicide Box sparks questions. I understand that there can be different views (especially ethical ones) on the approach, and I wouldn’t say one side is more ‘correct’ than the other. However, I do want to say that the sole fact that it sparked questions and discussions about an issue people tended to walk away from and ignore is significant on its own.

Week#5 – Reading Reflection

Golan Levin’s reading sheds light on computer vision and its role in interactive art and creative design. It’s intriguing how the seemingly contradictory worlds of computers and art come together to create interactive art that deeply engages people. This irony lies in the fact that digital technology often absorbs individuals, disconnecting them from the physical world. However, when art and computers combine, they captivate and transport individuals into alternate realms that encourage technological interaction. During this interaction, artists embed powerful meanings into their pieces, which become all the more memorable because people physically engage with them.

On a different note, one aspect that stands out is how computer vision has evolved beyond art and found applications in diverse fields, including healthcare, transportation, security, and entertainment. This evolution reflects the pivotal role of computer vision in our modern lives. However, it’s important to note that the reading also raises ethical and societal questions, particularly in the context of surveillance-themed artworks like “Sorting Daemon” and “Suicide Box.” These pieces challenge us to consider the implications of using computer vision for both artistic and surveillance purposes, which ultimately blurs the line between observation and intrusion.

In conclusion, computer vision has undergone a remarkable transformation, becoming a powerful tool for artists, designers, and creators across various fields. Its integration with cleverly designed physical environments showcases the interdisciplinary nature of interactive art and design, bridging the gap between the virtual and physical worlds. While offering incredible creative possibilities that often proffer a deep and meaningful message, it also prompts us to reflect on the ethical and societal implications of this technology.

Reading reflection – Week #5

The study by Golan Levin delves into the integration of computer vision algorithms within the realm of interactive artworks and creative design. While the importance of cleverly designed algorithms that allow computers to make smart interpretations of digital images and video is unquestionable, I kept thinking about the role of other factors influencing computer vision. One such factor is the optimization of the physical environment where the video data is collected. Especially in the case of artworks and the creative field, where the physical environment can be controlled, it is important to consider how the visual data collection is staged, as it can either increase the reliability of computer vision or make the data more difficult to analyse. Cleverly designed physical environments can potentially decrease the need for more complex software solutions. This concept highlights the creativity necessary in the world of interactive art and design. It implies that artists and designers can use their spatial and environmental design skills to develop setups that allow for accurate data collection. They can extend the capabilities of computer vision without resorting to advanced and intricate software development by deliberately shaping the physical context. This approach reflects the field’s interdisciplinary nature, in which creativity goes beyond the digital domain and demands a complete understanding of both the virtual and physical components of interactive artworks.

A compelling example of the principles discussed in the paper can be found in the artwork “LimboTime.” “LimboTime” is a simple game that was conceived and implemented in the remarkably short span of a single afternoon. Its creators used a wall-sized sheet of white Foamcore as an ideal environment for brightness-based thresholding. This material enabled them to reliably distinguish the game’s three players from the background, demonstrating how optimizing the physical environment can contribute to computer vision’s success. Rather than depending on complex software solutions, the detection of the players’ hands and heads was accomplished with creative heuristics, such as recognizing the limboer’s head as the uppermost point of the middle blob of black pixels. This elegant solution shows that a well-designed physical environment can improve the performance of computer vision techniques, simplify the implementation process, and foster innovation in the realm of interactive art and design.

Week 5: Mid-term progress

For my mid-term project, my initial inspiration was to create a minimalist cooking game, a concept deeply rooted in my childhood experiences. During those early years, I would eagerly scour websites in search of every cooking and Barbie game available, often spending entire days immersed in these virtual culinary adventures. Drawing from my love for baking, I decided to develop a game focused on making cupcakes.

The core interaction in this game revolves around the user’s ability to add ingredients and craft a delightful cupcake. The user can click the ingredients to add them, and a cupcake will appear. To enhance the overall experience, I incorporated lively and playful background music to accompany the gameplay.

However, when I began coding the game, I encountered my first major hurdle: I couldn’t figure out how to call the function responsible for the cupcake’s creation. While I successfully implemented the ingredient-addition aspect, the expected output of a fully formed cupcake remained difficult to track down. This is the basic code I have written so far. I would like to change the background to a self-created portrait and add additional elements.
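One way I might structure that call (a rough sketch with made-up function and variable names, not my actual project code) is to track which ingredients have been clicked and let draw() decide when to show the cupcake:

```javascript
// Hypothetical ingredient tracking – names and counts are placeholders.
let ingredientsAdded = 0;
const INGREDIENTS_NEEDED = 3;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(250, 230, 240);
  if (ingredientsAdded >= INGREDIENTS_NEEDED) {
    drawCupcake();          // called every frame once all ingredients are in
  }
}

function mousePressed() {
  // In the real sketch this would check whether an ingredient image was clicked.
  ingredientsAdded++;
}

function drawCupcake() {
  fill(200, 150, 100);
  rect(170, 220, 60, 50);   // wrapper
  fill(255, 200, 220);
  ellipse(200, 210, 80, 60); // frosting
}
```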

Additionally, I encountered difficulties when attempting to integrate music seamlessly into the gameplay as I had initially envisioned. This obstacle prompted me to consider a more straightforward approach, where I would include a continuous background music track instead.
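For that continuous background track, my understanding is that p5.sound’s loop() should be enough, as long as playback starts after a user interaction (browser autoplay rules); a minimal sketch with a placeholder file name:

```javascript
let bgMusic;

function preload() {
  bgMusic = loadSound("background.mp3"); // placeholder file name
}

function setup() {
  createCanvas(400, 400);
}

function mousePressed() {
  // Start the loop on the first click so the browser allows audio playback.
  if (!bgMusic.isPlaying()) {
    bgMusic.loop();
  }
}
```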

In summary, my mid-term project revolves around a nostalgic cooking game centered on baking cupcakes. Despite encountering challenges with function calls for cupcake creation and integrating music as part of the interactive experience, I am determined to overcome these obstacles and provide an enjoyable and immersive gaming experience for players.

Week 5- Reading Reflection

Computer Vision for Artists and Designers

We always see the stage, but the backstage remains a secret for most of us until we get interested and look at it ourselves. Today’s material opened the curtains and showed the backstage of computer vision. We have all been exposed to face and movement recognition to some extent, from the face masks in Instagram filters and Face ID on the iPhone to the movement recognition in various artworks. Although I always wondered how it works, I didn’t have a chance to look into it myself until today.

Although I could predict beforehand that motion is detected by comparing the pixels of each subsequent frame, the information about detection through a brightness threshold was new to me, yet so simple. It was interesting to read about all these operations performed by the computer that are hidden from our eyes. After reading the article, I am left wondering whether today’s more advanced technologies still operate the same way, or whether their methods of detection have changed over time.
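To make sure I understood the frame-differencing idea the article describes, I tried to imagine it as a tiny p5.js sketch – this is my own hedged interpretation, not code from the reading, and the threshold value is arbitrary:

```javascript
let cam, prevFrame;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1);                 // keep the canvas pixel array the same size as the video
  cam = createCapture(VIDEO);
  cam.size(320, 240);
  cam.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  cam.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < cam.pixels.length; i += 4) {
    // Compare this pixel's brightness with the same pixel one frame ago.
    const curr = (cam.pixels[i] + cam.pixels[i + 1] + cam.pixels[i + 2]) / 3;
    const prev = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    const moved = abs(curr - prev) > 30;   // arbitrary motion threshold
    pixels[i] = pixels[i + 1] = pixels[i + 2] = moved ? 255 : 0;
    pixels[i + 3] = 255;
  }
  updatePixels();
  prevFrame.copy(cam, 0, 0, 320, 240, 0, 0, 320, 240); // remember this frame for next time
}
```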

I am surprised by the number of constraints and limitations in the detection of presence and motion, object tracking, etc. Although the technologies have developed rapidly since the days of the first interactive media artworks such as Videoplace, there are still many things that need to be improved. For instance, Face ID on my iPhone recognizes me and my 11-year-old sister as the same person.

Last but not least, I would like to mention examples of artwork using computer vision. While some create an entertaining and aesthetically pleasing experience with voice recognition, as in Messa di Voce, or movement recognition, as in Videoplace, others use computer vision to raise social issues, such as Standards and Double Standards and Suicide Box. I liked the idea that Suicide Box could be developed further into a safety tracker: every time vertical motion is recognized, a net could be released under the bridge to catch the falling person, and a signal could be sent to the police about a possible suicide. Considering the statistic of 17 suicides in 100 days of deployment, the issue remains significant and needs a solution.

Week 5 – Midterm idea

Design and Concept


Over the first few weeks of this class and our class projects, I’ve found myself leaning towards pieces that would work well in an art installation setting (or at least I think so), ones that don’t require much user interaction but are just pleasant to look at.

For my midterm, I wanted to do something similar, but at the same time incorporating some more interaction in it while retaining the essence of being an ‘installation piece’ . I was inspired by Chris Milk’s “The Treachery of Sanctuary”, Romy Achituv & Camille Utterback’s “Text Rain”, and more than the others, Scott Snibbe’s “Deep Walls”. I loved the concept of leaving behind little snippets of yourself, seeing the snippets left behind by other people, and the idea that for every memory that is added, a memory is lost.


Picture of Scott Snibbe’s “Deep Walls”

For my midterm project, I’d like to do something similar with a camera and a keyboard. On the screen, there will be a few black-and-white photos, thresholded to be fully black or white so that the person is not identifiable, along with some text left behind by that person. The next viewer will be given one of a few prompts, such as “What is your favorite memory?”, “Who do you miss the most?”, or “What are you looking forward to?”, and the viewer can type their answer with the keyboard while the camera takes a photo of them unprompted. Their memory and photo will be left on the screen, replacing one of the existing ‘memories’ and causing it to be lost forever.

I’ve been thinking about the concept of memory for a while, and I thought that this will be a good project to incorporate my thoughts into a physical piece. The biggest challenge for this project I think will be getting the camera and photo-taking to work with Processing, and doing so in a meaningful way. I would also have to think of some good prompts for the piece, as some prompts would be too simple, while having them be too complicated might discourage viewers from typing.
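As a first experiment toward that, here is my own rough sketch in p5.js (rather than Processing, since that’s what I can test quickly in the browser); the threshold value, the space-bar trigger, and the grid layout are all assumptions:

```javascript
let cam;
let snapshots = [];            // thresholded "memories" left behind so far
const MAX_MEMORIES = 4;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(160, 120);
  cam.hide();
}

function draw() {
  background(0);
  // Show the stored snapshots in a simple 2x2 grid.
  for (let i = 0; i < snapshots.length; i++) {
    image(snapshots[i], (i % 2) * 320, floor(i / 2) * 240, 320, 240);
  }
}

function keyPressed() {
  if (key === ' ') {                 // placeholder trigger: space bar "submits" an answer
    let snap = createImage(160, 120);
    snap.copy(cam, 0, 0, 160, 120, 0, 0, 160, 120); // grab the current camera frame
    snap.filter(THRESHOLD, 0.5);     // force every pixel to pure black or white
    if (snapshots.length >= MAX_MEMORIES) {
      snapshots.shift();             // the oldest memory is lost forever
    }
    snapshots.push(snap);
  }
}
```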

Week 5 – Computer Vision Reading

It’s fascinating how many technologies were developed first for military use before finding their way into the commercial space! This was the first time I’d heard of computer vision’s military origins, but I knew that the Internet was first invented for wartime communications, and that super glue was an accidental invention from other military-based research. Inventions first made for something as violent as war have eventually settled down and turned into technologies we use in our everyday lives, and even in art.

I still think it’s funny how some tasks that are so simple for humans are extremely difficult problems in computation, while some tasks that are nigh impossible for humans are easy for computers. Computers can calculate 3 to the power of 100 almost instantly, a task that no sane human can do. In the case of computer vision, any human can identify a pigeon in a picture, yet it took computers decades of research before they could identify objects well.

I also found it interesting how many of the early applications of computer vision in art projects have a theme of surveillance around them, and I think that reflects how the scene viewed cameras at the time: as tools of surveillance. Standards and Double Standards and Suicide Box both use computer vision to highlight societal observations through the lens of an array of pixels.