While reading this week’s article, I thought about where I had seen live images fed into an interactive program, and I was reminded of Michael Reeves. For those of you who don’t know him, Michael, a college dropout, makes YouTube videos about his absurd programming and design projects, which manage to be informative while always getting a good laugh out of you. I remember one video in which he pointed a camera at his goldfish’s tank and used the fish’s live position in the camera feed to drive his trades on the stock market. I don’t recall how much of the fifty thousand dollars he initially invested remained after the whole experiment, but it was certainly an interesting concept. (At the end of this post, I sketch what that kind of position tracking might look like in code.) In another video, he used his hand’s movement in the air to steer a surgical tool, or, well, a knife, on an operating table.

According to the article, this kind of dynamic image processing was long confined to military and research institutions and was not explored publicly until a few years ago. With recent advances in the technology and the growing availability of resources for everyday programmers, however, innovations in the arts and sciences have exploded in number. Computer vision has given artists yet another way to express their interactive ideas, making genuine physical interactivity possible.

Whether the innovations are real scientific advances, like tracking body cells with computer vision in medicine, or Michael Reeves’ silly experiments, such as a taser that electrocutes you if you look less than six feet tall in a photograph, the use of computer vision is likely to become even more widespread, and for good. I, for one, cannot wait to implement such ideas in silly projects of my own.
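Since the goldfish project stuck with me, here is a minimal sketch of how that kind of live position tracking could work, using OpenCV’s Python bindings. To be clear, this is my own guess at the pipeline, not Michael Reeves’ actual code: the HSV color range, the camera index, and the toy buy/sell mapping are all hypothetical assumptions on my part.

```python
import cv2
import numpy as np

# Hypothetical HSV range for an orange goldfish; a real setup would
# need tuning for the actual fish and tank lighting.
LOWER_ORANGE = np.array([5, 120, 120])
UPPER_ORANGE = np.array([25, 255, 255])

def fish_position(frame):
    """Return the (x, y) centroid of the largest orange blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_ORANGE, UPPER_ORANGE)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)  # assumes the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos = fish_position(frame)
    if pos is not None:
        # Toy mapping in the spirit of the video, not a trading strategy:
        # left half of the frame means "buy", right half means "sell".
        signal = "BUY" if pos[0] < frame.shape[1] / 2 else "SELL"
        print(f"fish at ({pos[0]:.0f}, {pos[1]:.0f}): {signal}")
    if cv2.waitKey(30) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
```

Color thresholding is about the simplest tracker there is; a project that had to survive changing lighting or multiple fish would probably reach for background subtraction or a learned object detector instead.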