In this reading, Golan Levin discusses computer vision (the broad class of algorithms that allow computers to make intelligent assertions about digital images and video) for artists and designers in particular, through a survey of new applications in the arts, system design considerations, and contemporary tools. In 1966, Marvin Minsky decided to try to solve "the problem of computer vision". Many years later, the discipline still confronts ideas such as pattern recognition and feature recognition. Between 1969 and 1975, Myron Krueger developed one of the first interactive artworks, 'Videoplace', which displayed the user's silhouette and movements on screen and allowed users to paint lines with both their fingers and bodies. Many interactive media designs followed from Videoplace, such as the 'Suicide Box', which was placed in view of the Golden Gate Bridge (a known suicide spot) and recorded suicides through motion detection. The project was highly controversial, raising obvious moral and ethical objections.
Vision algorithms have been developed to address numerous kinds of problems. Digital video is computationally "opaque": it contains no intrinsic semantic or symbolic information, so a computer with no additional programming cannot answer basic questions such as whether a video stream contains a person or an object, or whether it shows daytime or nighttime. Three elementary techniques address such problems: 1) frame differencing, which attempts to locate features by detecting their movements between frames; 2) background subtraction, which locates visitor pixels according to their differences from a known background scene; and 3) brightness thresholding, which relies on hoped-for differences in luminosity between foreground people and their background environment. Building on these, there are algorithms for detecting motion, detecting presence, detection through brightness thresholding, simple object tracking, and basic interactions. However, we should not rely solely on these algorithms, as they can fail. We should also consider computer vision in the physical world, through techniques such as illumination suited to low-light conditions, the use of retroreflective marking materials, or perhaps not intervening in the environment at all. To implement machine vision techniques directly from first principles, all that is required is a development environment that provides direct read access to the array of video pixels obtained by the computer's frame-grabber. To conclude, the quality of a vision system is determined by the specific algorithms used to analyze the video. Furthermore, people new to computer vision should focus on the physical scenario as well as the code.
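The three elementary techniques above all reduce to simple per-pixel comparisons on the video array. The sketch below is a minimal illustration in Python with NumPy, using small synthetic grayscale frames rather than a real camera feed; the function names and the threshold values are illustrative assumptions, not code from Levin's text.

```python
import numpy as np

def frame_difference(curr, prev, threshold=30):
    # 1) Frame differencing: flag pixels whose brightness changed
    # between consecutive frames (i.e. something moved there).
    return np.abs(curr.astype(int) - prev.astype(int)) > threshold

def background_subtraction(frame, background, threshold=30):
    # 2) Background subtraction: flag pixels that differ from a
    # stored image of the known, empty background scene.
    return np.abs(frame.astype(int) - background.astype(int)) > threshold

def brightness_threshold(frame, threshold=128):
    # 3) Brightness thresholding: flag pixels brighter than a cutoff,
    # assuming foreground people are lit against a darker backdrop.
    return frame > threshold

# Synthetic 8-bit grayscale frames: a dark empty scene, then the same
# scene with a bright 3x3 "visitor" square in it.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200  # the visitor

moved = frame_difference(frame, background)          # pixels that changed
visitor = background_subtraction(frame, background)  # pixels unlike the background
bright = brightness_threshold(frame)                 # pixels above the cutoff

print(moved.sum(), visitor.sum(), bright.sum())  # each mask flags the 9 visitor pixels
```

Each function returns a boolean mask; in a real installation the mask would then feed a tracking or interaction step, and the thresholds would be tuned to the lighting of the physical space, which is exactly why the reading stresses the physical scenario as much as the code.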