Reading response #4: computer vision

Computer vision has strong analytical capabilities, but only with proper programming. In LimboTime, the example from the reading, the computer interprets the human body as a group of black pixels and identifies the head as the topmost point (vertex) of that group. This expands the range of input compared to simpler setups like a basic p5 sketch, where input is limited to the keyboard. It also highlights the key difference between computer vision and human vision: computer vision relies on data and algorithms, while human vision relies on context and experience, which follows a similar logic but is much more subjective.
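
To make the LimboTime idea more concrete, here is a minimal sketch of that kind of head tracking: threshold a grayscale frame into "body" (dark) pixels, then report the topmost dark pixel as the head position. The frame layout, the `findHead` function, and the threshold value are my own illustrative assumptions, not the actual code behind LimboTime.

```typescript
// Sketch of LimboTime-style head tracking: the body shows up as dark pixels,
// and the head is taken to be the highest (topmost) dark pixel in the frame.

type Point = { x: number; y: number };

function findHead(
  gray: Uint8Array, // grayscale pixels, row-major, values 0-255
  width: number,
  height: number,
  threshold = 60    // pixels darker than this count as the body (assumed value)
): Point | null {
  // Scan top-to-bottom so the first dark pixel found is the highest one.
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (gray[y * width + x] < threshold) {
        return { x, y }; // topmost dark pixel = assumed head position
      }
    }
  }
  return null; // no dark pixels: nobody is in the frame
}

// Tiny usage example with a fake 4x4 frame (darkest pixel group starts at (2, 1)).
const frame = new Uint8Array([
  255, 255, 255, 255,
  255, 255,  40, 255,
  255,  30,  35, 255,
  255, 255, 255, 255,
]);
console.log(findHead(frame, 4, 4)); // { x: 2, y: 1 }
```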

Optimizing physical conditions, such as using reflective materials or a controlled environment, can improve computer vision's accuracy. But there are broader implications, especially around surveillance and privacy. I'm always surprised by how quickly we've adapted to technologies like Instagram filters; it's now simply expected that the screen will respond when we interact with it. This also makes me wonder how computer vision might shape human vision, not just physically but in how we perceive ourselves. As for surveillance, the privacy intrusion may not be as extreme as sci-fi films suggest, but it does subtly alter how we see and present ourselves.