Ep. 102: Was HAL 9000 A Racist? (Probably)
In some cases, the point of automating a system is to remove the human element, which can carry the baggage of a white supremacist, patriarchal society. But too often, we see automated systems and artificial intelligence reproducing the same biases as the humans they replace. In this episode, we're looking at why that is.
Subscribe to Future Left Podcast on SoundCloud, iTunes, Stitcher, Google Play, and YouTube.
READING LIST:
https://www.engadget.com/2018/06/07/psychopathic-ai-norman-reddit-data-bias/
https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
https://newrepublic.com/article/144644/turns-algorithms-racist
https://theconversation.com/artificial-intelligence-could-reinforce-societys-gender-equality-problems-92631
https://www.forbes.com/sites/parmyolson/2018/02/26/artificial-intelligence-ai-bias-google/#70665c931a01
https://www.newsweek.com/artificial-intelligence-scientists-racist-sexist-robots-ai-693440