Reading this article, what I picked up was this: “it took computer scientists a few decades to figure this out, but now you get to play around with it too.” At the beginning, the author describes how computer vision used to be a domain exclusive to specialists, such as the military or law enforcement. Then at the end, there is the appendix with sample Processing code that even “novice programmers” could use to kick-start a computer vision project “in as little as an afternoon”. This is grounds for celebration: by lowering the barrier to entry, we have democratized the technologist community by a notch, and that opens up room for so much creativity that would otherwise lack such a limitless medium of expression. The article seems a little old, and newer technologies have entered the space since its time of writing, such as Google’s Teachable Machine. It is fascinating that I can now train an ML model and make something like *Cheese* by Christian Möller (2003) in a few hours using Teachable Machine, whereas Möller’s piece was the product of a joint effort between the artist and an entire research laboratory.
Another aspect of computer vision that made me think was the hand of humans, and all of our human subjectivities, in the development of these algorithms. The author mentions that a computer processes a video stream differently from text: there are no built-in principles of semiotic interpretation, so computers need to be programmed to answer even the most rudimentary questions about the images they “see”. As a result, these algorithms are susceptible to absorbing the same implicit biases as their creators. For example, whether a computer thinks it is looking at a “woman in traditional clothes” or “an exotic foreigner” will depend largely on how the programmers perceive the woman. Bias in AI and computer vision is now a popular topic in contemporary tech discourse, but I wonder how these concerns have evolved from the inception of computer vision in the 1970s until now. Have programmers always been wary of negatively influencing their software, or are these 21st-century fears? What can technologists do to ameliorate them?
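To make that point about “rudimentary questions” concrete for myself, here is a minimal sketch, not the article’s own code, of one such question a program has to be taught to answer explicitly: did anything move between two frames? Frame differencing like this is the kind of elementary building block that beginner-friendly computer vision introductions start from; the function name and thresholds below are my own illustrative choices.

```python
# A toy illustration (my own, not the article's code) of one "rudimentary
# question" a computer must be explicitly programmed to answer:
# "did anything move between these two frames?"
import numpy as np

def something_moved(prev_frame: np.ndarray, curr_frame: np.ndarray,
                    pixel_threshold: int = 30,
                    motion_fraction: float = 0.01) -> bool:
    """Frame differencing on two grayscale frames (2-D uint8 arrays)."""
    # Per-pixel brightness change between consecutive frames.
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    # A pixel counts as "changed" if its brightness shifted past the threshold.
    changed = diff > pixel_threshold
    # Report motion only if enough of the image changed (ignores sensor noise).
    return changed.mean() > motion_fraction

# Synthetic example: a bright "object" appears in an otherwise static scene.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:140, 150:200] = 200
print(something_moved(prev, curr))  # True
```

Even a toy like this makes visible how every judgment, such as what counts as a changed pixel or how much change counts as motion, is a human decision baked into the code, which is exactly where the subjectivities I worry about above get their foothold.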