Artificial Intelligence, Due Process, and Human Responsibility
Monday, April 8th, 2019 4:30 PM - 5:30 PM
Sponsor: The Center for Cyberspace Law and Policy Distinguished Lecture
Webcast Archive Content
When should a person take direct responsibility for an action by a corporation or government? As automation and artificial intelligence advance, important decisions are increasingly made by software. While this “legaltech” and “govtech” promises some efficiency gains, it has also caused hardship and hassle. Virginia Eubanks’s book Automating Inequality identifies profound problems in governmental use of algorithmic sorting systems. Eubanks tells the stories of individuals who lose benefits, opportunities, and even custody of their children because of algorithmic assessments that are inaccurate or deeply biased. The Australian robo-debt scandal offers yet another example of ordinary individuals harmed by computerized mistakes. Turning to the private sector: in Algorithms of Oppression, Safiya Umoja Noble tracked search engines’ representation of Black women and found disturbing evidence of sexist and racist overtones in search results. Algorithmic representations of people proved all too vulnerable to manipulation by racists, and to bias from a disproportionate number of salacious searches. Cognate problems of reputational injustice afflict many individuals.
The more bias and arbitrariness emerge in such systems, the more their would-be perfecters seek the pristine clarity of rules so clear and detailed that they specify the circumstances of their own application. The endpoint of that quest would be a robotic judge, pre-programmed (and updated via machine learning) to apply law or ethics to any situation that may emerge. This lecture will pursue an alternate approach: the solution may lie not in perfecting the machine ex ante, but in making particular officials or managers responsible for fixing its malfunctions within a given period once they have been notified of a problem. Inspired both by the Digital Millennium Copyright Act’s regime of responsibility for intellectual property infringement and by Judge Henry Friendly’s factors for assuring due process, this lecture will argue that AI should, in most cases, be only an adjunct to, rather than a replacement for, human decision making.
Frank Pasquale researches the law and policy of artificial intelligence, algorithms, and machine learning. His book, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance, and offers pragmatic reforms to improve the information economy. He has published on the regulation of algorithmic ranking, scoring, and sorting systems, including credit scoring and threat scoring. He is a member of the American Law Institute and the National Committee on Vital and Health Statistics.
Speaker: Frank Pasquale, Professor of Law, University of Maryland Carey School of Law
Location: Moot Courtroom (A59), 11075 East Blvd., Cleveland, Ohio 44106
Free and open to the public
Online registration available or register at the door
Continuing Legal Education Readings