I find the Suicide Box project, which recorded suicide jumps from the Golden Gate Bridge, very intriguing. Jeremijenko stated that the project was able to capture real data and compile a database from it. What I, along with other people who found the project controversial, would like to know is how it was programmed to capture such data to the point where it could be called "real." Machines and software are generally prone to the black-box problem, which makes them susceptible to inexplicable mistakes while they run. The project clearly drew controversy for the right reason regarding the ethics of technology, namely using it to measure something so tragic. The authors had good intentions; nevertheless, the way the data was recorded needed to be examined carefully.
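From what the reading suggests, detection of this kind is typically built on motion detection by frame differencing: each incoming video frame is compared against the previous one, and a large pixel-wise change is counted as an event. Below is a minimal sketch of that idea in plain Java; the class name, the intensity proxy, and the thresholds are my own hypothetical choices, not the project's actual code.

    import java.awt.image.BufferedImage;

    public class MotionDetector {
        // Frame differencing: count how many pixels changed noticeably
        // between two consecutive frames (assumed to have equal dimensions).
        static int changedPixels(BufferedImage prev, BufferedImage curr, int threshold) {
            int count = 0;
            for (int y = 0; y < curr.getHeight(); y++) {
                for (int x = 0; x < curr.getWidth(); x++) {
                    int a = prev.getRGB(x, y) & 0xFF;  // blue channel as a cheap intensity proxy
                    int b = curr.getRGB(x, y) & 0xFF;
                    if (Math.abs(a - b) > threshold) count++;
                }
            }
            return count;
        }
        // An "event" would then be flagged when the count exceeds some area cutoff,
        // which is exactly where false positives (birds, debris, waves) can creep in.
    }

The last comment is where my worry lives: a bird falling past the lens can change as many pixels as a person, and nothing inside such a box explains which one it saw.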
In regard to computer vision techniques, detection through brightness thresholding mainly deals with illumination and contrast: the computer performs a very simple comparison to decide whether objects are darker or lighter than their surroundings. But I would like to learn more about this in terms of color vibrance and saturation. For example, can the computer compare the saturation of an 8-bit RGB color between a given object and its surroundings, or between different objects?
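As far as I can tell, the answer is yes: once the three 8-bit channels are unpacked from a pixel, they can be converted to hue, saturation, and brightness, and any of those channels can be thresholded or compared. A minimal sketch in plain Java, using the standard Color.RGBtoHSB conversion (the class name and cutoff are my own hypothetical choices):

    import java.awt.Color;
    import java.awt.image.BufferedImage;

    public class PixelCompare {
        // Unpack a packed 0xRRGGBB pixel into its 8-bit channels,
        // then derive hue/saturation/brightness for comparison.
        static float[] hsb(int rgb) {
            int r = (rgb >> 16) & 0xFF;
            int g = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            return Color.RGBtoHSB(r, g, b, null); // {hue, saturation, brightness}, each 0..1
        }

        // Brightness thresholding: mark pixels brighter than `cutoff` as foreground.
        static boolean[][] thresholdByBrightness(BufferedImage img, float cutoff) {
            boolean[][] mask = new boolean[img.getHeight()][img.getWidth()];
            for (int y = 0; y < img.getHeight(); y++)
                for (int x = 0; x < img.getWidth(); x++)
                    mask[y][x] = hsb(img.getRGB(x, y))[2] > cutoff;
            return mask;
        }

        // The same idea applied to saturation: which of two pixels is more "vibrant"?
        static boolean moreSaturated(int rgbA, int rgbB) {
            return hsb(rgbA)[1] > hsb(rgbB)[1];
        }
    }

Strictly speaking, "vibrance" in photo editors is a nonlinear tweak of saturation rather than a separate channel, so comparing the saturation channel is the closest simple analogue.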
As for computer vision in multimedia authoring tools, live video input and fast pixel manipulation can these days be done in languages beyond Java-based scripting. Art-oriented software is having a prime time given how actively these languages are being developed. However, the more advanced the language a tool is built in, the more complicated the interaction between users and computers becomes, which demands detailed and nearly error-free interaction design.
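To make "fast pixel manipulation" concrete, here is a minimal plain-Java sketch (the class and method names are my own) that inverts a frame by writing into the image's backing pixel array directly instead of calling getRGB/setRGB once per pixel; skipping the per-pixel method calls is the usual trick for keeping live video processing real-time:

    import java.awt.image.BufferedImage;
    import java.awt.image.DataBufferInt;

    public class FastInvert {
        // Fast pixel manipulation: operate on the image's backing int[] directly.
        // The image must be TYPE_INT_RGB so the data buffer is an int array.
        static void invert(BufferedImage img) {
            int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
            for (int i = 0; i < pixels.length; i++) {
                pixels[i] = ~pixels[i] & 0xFFFFFF; // flip R, G, and B in one pass
            }
        }
    }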