My current research in Cinema and Media Studies at York University involves studying the cinema of facial recognition software (FRS) alongside research-creation that builds and trains FRS using TensorFlow and Keras, employing OpenCV and Python as well as a variety of publicly available datasets such as FERET, the ORL face database, and IBM's recent Diversity in Faces. From the generation and testing of this software, my goal is to establish best practices for identifying and creating what I am calling “ethically sourced algorithms” (ESAs). There is an urgent need for public-facing, easily digestible explanations of FRS, and by extension guidelines for ESAs, as it is widely accepted that facial recognition software can, and very often does, contain biases in its code libraries and/or training databases. Using case studies of online beginner tutorials for FRS, in particular the most commonly used Haar cascade classifiers, I hope to establish the parameters for ESAs, along with an action plan for enforcement that would grant users of an algorithm greater knowledge of and transparency into the datasets, data sources, and underlying logic of the computational system in which they are participating. I am also generating artwork that exposes the problematic overlaps in vision between facial recognition software and Hollywood cinema.
Meta-Watching: The Cinema of a Facial Recognition-Enabled Camera
FSAC, 2019
The proliferation of governmental and corporate use of facial recognition software, veiled under its secretive surveilling nature, has created an increasingly urgent need to consider the particular kinds of “perception” and “spectatorship” the facial recognition-enabled (FRE) camera possesses and the properties of the cinema such a camera might generate. Superficially, the FRE camera appears to be marked by what Manovich has called “the uniformity of machine vision,” where the “mechanical eye [becomes] coupled with a mechanical heart: photography [meets] the motor.” Yet an FRE camera does possess memory, even if “memory” is a word activated very differently when housed in a digital assemblage: the camera reflexively watches its own footage, a meta-watching wherein, first, the camera receives footage; second, that footage is taken in as input and processed through its algorithm (which includes matching against its databases and a set of other potentially interlocking algorithms and processes); and last, that information is made visible as an image, or moving image, in which footage and data are interchangeably combined. Linking this to Wolfgang Ernst’s notion of “working memory,” the circuits that the FRE camera loops through are not simple mechanical reactions: the output image or moving image is a collage of the raw footage with layers of data, drawn from the database-memory as processed by the algorithm, overtop.
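The three-step circuit described above can be made concrete in a schematic sketch. This is an illustrative model, not the author's implementation or any vendor's pipeline; `match_database` and the `Frame` structure are hypothetical stand-ins for the recognition algorithm and the data-layered output image:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """The FRE camera's output: raw footage plus data layered overtop."""
    pixels: object                                 # step 1: received footage
    overlays: list = field(default_factory=list)   # step 3: the data collage

def match_database(region, database):
    """Hypothetical matcher: look a detected face region up in a database."""
    return database.get(region, "unknown")

def meta_watch(frame, detected_regions, database):
    """One loop of the camera's 'working memory':
    footage in -> algorithmic matching -> data-layered image out."""
    for region in detected_regions:
        label = match_database(region, database)   # step 2: processing/matching
        frame.overlays.append((region, label))     # step 3: layer data on footage
    return frame
```

Even in this toy form, the output is never the raw footage alone but footage fused with database-derived annotation, which is the “collage” quality the passage above identifies.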
Looking at this informational flow and working memory, we can see multiple versions of what Alexander Galloway would identify as an interface, which he defines as “the place where information moves from one entity to another, from one node to another within the system,” or, more simply, “the name given to the way in which one glob of code can interact with another.” A good number of these interfaces enact their liminal nature “internally,” with the computational activity occurring so quickly that it appears instant; from these actions, the FRE camera’s output, the data-layered image or moving image, is exactly what Manovich is discussing when he proposes a new “info-aesthetics . . . the aesthetics of information access as well as the creation of new media objects that ‘aestheticize’ information processing.” However, activating Jacques Rancière’s notion of “aesthetic practices,” the FRE camera is just one example of potentially very problematic “forms of visibility that disclose artistic practices, the place they occupy, what they ‘do’ or ‘make’ from the standpoint of what is common to the community.” Importantly, then, “aesthetics” extends artistic practices to the types of “visibility” that are recognized and valued (or unrecognized and undervalued), which Rancière ties directly to a society’s consensus, the hegemonic practices of a society that are ultimately politically, not artistically, activated. The FRE camera’s cinema adopts the “common” digitalized strata of contemporary information processing and image-making and repurposes them into a form of unidirectional panoptic surveillance layered overtop an omni-directional space, enacting its power dynamics unevenly and invisibly.
The Coded Gaze Watches Hollywood
International Conference on Social Media & Society, 2019
This presentation explores the overlaps between what Joy Buolamwini calls the “coded gaze” inherent to facial recognition software and a similarly discriminatory vision enacted through Hollywood cinema.
Governmentality, Facial Recognition Software and Hollywood Cinema
FSAC Graduate Student Conference, 2019
Facial recognition software (FRS) is a symptom of Foucault’s notion of “governmentality,” wherein the State deploys tactics of power, like FRS, as a means of ensuring its own self-preservation (102). Giorgio Agamben argues that “by applying these [tactics] to the citizen, or rather, to the human being as such, the State is applying a technological apparatus that was invented for a dangerous class of persons. The State…has made the citizen into the suspect par excellence, to the point that humanity itself has become a dangerous class” (202). This is supported by the report from the Centre on Privacy and Technology at Georgetown Law titled “The Perpetual Line-Up,” which states that nearly one in every two U.S. citizens is already in a facial recognition database (Garvie et al., para. 3) and that the most consistent victims of its over- and misapplication are intersectionally disadvantaged populations. It is clear that the production and processing of portraiture aimed at datafying a population, often under the guise of “objective science” and “public health,” is a practice that most often assumes that the population must be controlled and monitored in order to enact governmentality. I will support these claims with a close reading of photos from the NIST Special Database 18 Mugshot Identification Database (1997) through the lens of portraiture; this examination will be further grounded in the nineteenth-century practices of phrenology, physiognomy, eugenics, and signaletics, and put in combination with Joy Buolamwini’s concept of the “coded gaze” as part of her Gender Shades project.