Future left is an effort sustained by the voluntary efforts of its contributors and the support of its visitors. Please share content you find useful, and please consider donating.

Ep. 102: Was HAL 9000 A Racist? (Probably)



In some cases, the point of automating a system is to remove the human element, along with the baggage it can carry from a white supremacist, patriarchal society. But too often, automated systems and artificial intelligence end up carrying the same biases as the humans who built them. In this episode, we're looking at why that is.

Subscribe to Future Left Podcast on SoundCloud, iTunes, Stitcher, Google Play, and YouTube.

READING LIST:
https://www.engadget.com/2018/06/07/psychopathic-ai-norman-reddit-data-bias/
https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
https://newrepublic.com/article/144644/turns-algorithms-racist
https://theconversation.com/artificial-intelligence-could-reinforce-societys-gender-equality-problems-92631
https://www.forbes.com/sites/parmyolson/2018/02/26/artificial-intelligence-ai-bias-google/#70665c931a01
https://www.newsweek.com/artificial-intelligence-scientists-racist-sexist-robots-ai-693440

Ep. 103: Goodnight Alt-Right -- Fascism, Anti-Fascism, & You


Ep. 101: The Case Against Thanos (And Thomas Malthus)
