While the failures of industrial-scale algorithms are often attributed to flaws in machine learning engineering, many of these failures actually stem from something else entirely: the human beings whose behavior generates the data used to build these algorithms. So the solutions to these algorithmic problems are as likely to require tools from behavioral economics as from computer science. For example, research shows that prejudice can arise not just from preferences and beliefs, but also from the way people choose. When people behave automatically, biases creep in: quick, snap decisions are typically more prejudiced than slow, deliberate ones, and can lead to behaviors that users themselves do not want or intend. As a result, algorithms trained on automatic behaviors can misunderstand the prejudice of users: the more automatic the behavior, the greater the error.
We empirically test these ideas in a fully controlled randomized lab experiment, and find that more automatic behavior does indeed lead to more biased algorithms. We also explore the potential economic consequences of this idea by carrying out algorithmic audits of Facebook in its two biggest markets, the US and India, focusing on two algorithms that differ in how users engage with them: News Feed (people interact with friends’ posts fairly automatically) and People You May Know, or PYMK (people choose friends fairly deliberately). We find significant outgroup bias in the News Feed algorithm (e.g., whites are less likely to be shown Black friends’ posts, and Muslims less likely to be shown Hindu friends’ posts), but no detectable bias in the PYMK algorithm. Together, these results suggest a need to rethink how large-scale algorithms use data on human behavior, especially in online contexts where so many of the measured behaviors may be quite automatic.
Bio: Diag Davenport is a Presidential Postdoctoral Research Fellow at the Princeton School of Public and International Affairs, where he studies various topics at the intersection of big data and behavioral economics. Much of his research has been informed by his industry experience as an economic consultant for corporate litigation and as a data scientist at a variety of organizations, ranging from a small DC startup to the Board of Governors of the Federal Reserve. His research blends a variety of methods to understand the societal impacts of imperfect humans interacting with imperfect algorithms and imperfect institutions.
Before Princeton, Diag earned a Ph.D. in behavioral science from the University of Chicago, an MS in mathematics & statistics from Georgetown University, and bachelor’s degrees in economics and management from Penn State.
To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
This seminar will not be recorded.