Only recently has the psychologically hazardous side of data work (the underpaid labor necessary to sustain and scale contemporary AI systems) come under broad public scrutiny. With the astronomical expansion of generative AI applications, a feat made possible by a parallel expansion of the data work labor force, these jobs increasingly involve repeated exposure to extreme graphic content, which data workers must examine, parse, and label to ensure a safe and sanitary experience for end users. Investigative reporting and data worker testimonies have underscored that these activities leave lasting psychological scars, often invoked as evidence that data workers' employers are engaging in exploitative labor practices. Notably, in a lawsuit against Meta, former content moderators successfully secured restitution by connecting the after-effects of their employment with Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria for post-traumatic stress disorder (PTSD). This marked one of the most high-profile instances in which technical, biomedical definitions of mental distress played a role in legitimizing the damaging nature of data work.
But are medicalized frameworks like PTSD the only, or the best, way to recognize and rectify data worker harm? This talk considers the problematics that might arise from a narrow operationalization of explanatory models like PTSD to codify the occupational hazards of making AI. It does so through an ethnographic account of the dataset annotation process essential to “vocal biomarker AI,” mental healthcare technologies designed to detect expressions of mental distress in the voice in order to streamline patient assessment and triage. Integrating feminist science and technology studies (STS) analyses of data work with disability studies scholarship on the limitations of rights-based disability claims, the talk uses this ethnographic case study to suggest that a singular reliance on DSM-sanctioned diagnoses risks treating data worker distress as an exceptional, individual bug rather than, as data worker labor organizers emphasize, something endemic to the design features and labor relations that many tech companies depend upon.
Bio: Beth Semel studies the sensory politics and technopolitics of American mental health care in an era in which artificial intelligence (AI) is called upon to manage increasingly broad arenas of human life. Her ethnographic research traces the sensory-communicative practices and labor that underpin machine listening technologies, especially those designed to evaluate and track people experiencing mental distress through computational voice analysis.
Semel’s current book project investigates efforts to utilize voice analysis AI to radically transform the way that people who interface with the American mental health care system are listened to. She exposes how the decontextualizing, actuarial, and universalizing ideologies of listening at play in these projects are shaped by the American mental health care system’s ever-tightening entanglements with profit and security regimes. Semel argues that the making of machine listening mental health care technologies is a site where dominant frameworks of language, disability, race, gender, and care are reproduced and, at times, contested and reconfigured by the very individuals involved in assembling them, from clinical social workers and data annotators to human research subjects.
Semel completed her Ph.D. in History, Anthropology, Science, Technology and Society at MIT and her M.A. in Anthropology at Brandeis, and was a 2018-2019 Weatherhead Fellow at the School for Advanced Research.
In-person attendance is open to Princeton University faculty, staff and students. This talk will not be recorded. The livestream will be available on Zoom, at this link, for those with a Princeton University email address.
If you need an accommodation for a disability, please contact Jean Butcher at butcher@princeton.edu at least one week before the event.