
Behavioral Law & Policy of AI Trust

Author(s)

Orly Lobel
Warren Distinguished Professor of Law, the Director of the Center for Employment and Labor Law, and a founding member of the Center for Intellectual Property Law and Markets at the University of San Diego


With the dazzling advances in artificial intelligence capabilities, regulatory policy should aim at spurring the right amount — and the correct kind — of AI trust. In my recent research on AI policy, including my new book The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future and my article The Law of AI for Good, I aim to pivot policy debates about automation and artificial intelligence (AI) toward more rational and grounded analysis. Just as behavioral research first developed in relation to marketing and consumer behavior and only later came to be recognized as significant in policymaking, so should policymakers turn their attention to understanding the human biases that lead to irrational algorithmic aversion and algorithmic adoration. The emerging experimental literature on trust, and distrust, of AI can serve as a blueprint for policy research and interventions.

The adoption of AI is bound to accelerate, affecting every aspect of our lives. At the same time, contemporary tech policy scholarship, public debates, and reform proposals pervasively question automation as a desirable development. People thus have a heightened awareness that there are harms and risks associated with automation. However, we do not yet have a common language, or even a shared taxonomy, with which to compare and evaluate the tradeoffs inherent in automation. I call this the human-AI trust gap, and I argue that it is a significant barrier to benefiting from the opportunities automation presents. That is, whether we place too little or too much trust in algorithms, we lack a shared literature and shared methods for understanding when trust is given and when trust is due.

Government entities should commit to improving AI and to building rational social trust in these systems. Policymakers must study how to integrate AI tools effectively within human processes and systems. Digital literacy — and improving digital rationality — should be a national strategy. The aim should be the right mix of trust and skepticism: a Goldilocks appreciation of technology, based on accurate assessments and acceptable trade-offs.

Behavioral human-AI research, examining algorithmic trust and human-algorithm interactions, is a rather nascent field of study. As in the overarching field of behavioral studies, many of the insights come from business schools, particularly from the marketing literature, which tends to focus on how consumers make decisions. Similar research needs to be done at the policy level. For decades, the policy implications of behavioral studies have lagged behind market applications, and I predict that, similarly, we will soon see a more concerted effort to understand the policy implications of behavioral human-AI studies. Algorithmic trust — and distrust — is multi-dimensional. A 2022 Pew survey, consistent with other recent studies, finds that most Americans fear AI and have little confidence in its use by government entities. Ironically, we fear both AI’s flaws and its flawlessness. Studies also find demographic differences in AI trust. For example, women view AI more negatively than men. Education and income levels are also predictors of AI aversion, with lower education and income predicting higher distrust.

Under certain circumstances, we trust bots too much. We might call this algorithmic adoration — some behavioral scientists have termed it algorithm appreciation, but that term does not quite capture an attitude of over-trust. As I explore in The Equality Machine, we have long held ambivalent, and even irrational, attitudes towards technology, and in some studies humans are found to be too trusting, perceiving algorithms as inherently superior to human decision-makers. The technical nature of AI tools may convey a false sense of precision and objectivity, lending a sense of inevitability to outcomes that in fact rest on human choices. Part of the responsibility of government regulators is to understand why and when people are averse to algorithms or inherently prefer a human decision-maker. Moreover, educational efforts can help moderate the irrationalities of both algorithm aversion and algorithm adoration.

The existing research insights on human-machine trust should raise doubts about recent policy reforms, such as laws requiring real-time consumer notification about the use of automated processes. I argue that there may be inadvertent irrationality in some aspects of contemporary AI policy. The right to know that you are interacting with a bot, or that you are subject to automated decision-making, is a centerpiece of legislative proposals in both the EU and the US. Under the draft EU AI Act, for example, consumers would have a right to disclosures that they are chatting with, or seeing images produced by, AI. In 2021, Quebec similarly passed a law that requires individuals to be informed when automated decision-making tools are being used. Yet this right to know about automation may inadvertently create aversion and promote a human-in-the-loop impulse. In a recent experiment published in Nature, for instance, physicians received chest X-rays along with diagnostic advice, some of which was inaccurate. Although all of the advice was generated by humans, for the purposes of the experiment some of it was labeled as generated by AI and some as generated by human experts. The radiologists rated the same advice as lower quality when it appeared to come from an AI system. Other studies find that when recommendations pertain to more subjective types of decisions, humans are even less likely to rely on the algorithm. This holds true even when the subjects see the algorithm outperform the human, and even when they witness the human make the same error as the algorithm. These behaviors must be studied from a policy perspective, and regulators have a duty to help educate citizens and to support rational interaction with ever-expanding AI applications.

 

Orly Lobel is the Warren Distinguished Professor of Law, the Director of the Center for Employment and Labor Law, and a founding member of the Center for Intellectual Property Law and Markets at the University of San Diego.

 

This post is published as part of the series ‘Smart Compliance Systems in the AI Era: Combining Criminal and Administrative Measures’ and is a contribution from the symposium of the same name, co-organised by the Bar-Ilan Lab for Law, Data-Science and Digital Ethics and Ono Academic College in December 2022.

 

 
