Neil Leach’s talk on AI went pretty much as I expected. Having attended another AI talk the day before, I’ve noticed that audiences, especially during Q&A sessions, are most interested in the ethical implications of AI. And why wouldn’t they be? My capstone project focuses on surveillance systems and facial recognition technologies used to target marginalized groups in oppressive contexts. When I see a Midjourney or DALL·E image, I’m not amazed by how advanced text-to-image generation has become. Instead, I struggle with the fact that the same deep learning models also power facial recognition, deepfake technology, and the spread of fake news, and they are likely to displace countless blue-collar and white-collar jobs.

For me, the negatives far outweigh the positives of generating images from datasets of copyrighted work scraped without permission. The “black box” excuse has been invoked too often to argue against regulating AI, but I believe there needs to be a pause, if not full regulation. The legal process of regulating AI cannot keep pace with how rapidly AI is transforming, and it is a frightening time. I don’t care much about architecture being built through AI when these deep learning models have repeatedly been used in surveillance systems by regimes like Israel’s in its occupation, contributing to the destruction in Gaza: countless lives lost, buildings reduced to rubble. What’s the point of creation when it comes at the cost of life?
Related reading: Amnesty International, “Israel/OPT: Israeli authorities are using facial recognition technology to entrench apartheid”