The End of Social Media Symposium, University of Toronto, 2019
Abstract: In January 2019, IBM responded to contemporary research highlighting biases in facial recognition software (FRS) by releasing Diversity in Faces (DiF), which the company promoted as “a large and diverse dataset that seeks to advance the study of fairness and accuracy in facial recognition technology.” Less than a month later, however, it was revealed that DiF had been compiled by scraping over one million images from the photo-sharing social media platform Flickr without the consent of those who posted the photos or of those appearing in them. Although IBM later released a “corrected” version of the database, the incident reveals how corporations, and social media companies in particular, often collect and process user data with little concern for user consent, data sovereignty, or privacy. As both Cathy O’Neil in Weapons of Math Destruction (2017) and Safiya Umoja Noble in Algorithms of Oppression (2018) make clear, beyond simple concerns about user privacy, the data practices exemplified by DiF are deeply troubling because they invisibly instrumentalize facial data as training material for machine learning and other algorithms. This paper, in conversation with Joy Buolamwini’s contemporary work on FRS, explores how collecting, organizing, and redistributing faces as data, via social media that works synergistically with other corporate and State actors, automates Gilles Deleuze and Félix Guattari’s faciality machine. DiF, and its application as a component of FRS, is just one example of digital data rendered vulnerable to deployment as a tactic of governmentality, which then becomes a means to implement biopolitical mechanics of identification and surveillance.