
AI Judgment Rule(s)

Author(s)

Katja Langenbucher
Law Professor at Goethe University's House of Finance, Frankfurt; Affiliated Professor at SciencesPo, Paris; Long-term Guest Professor at Fordham Law School, NYC


In an upcoming paper, I explore whether the use of AI to enhance decision-making brings about radical change for legal doctrine or is, by contrast, just another new tool. The essay submits that we must rethink the law’s implicit assumption that (and how) humans make the decisions that corporate law regulates. If those implicit assumptions about how people make decisions shift, the legal rules that rest on them need review.

Decision-making is the cornerstone of corporate life and of keen interest to a variety of scholarly disciplines. These range from rational-actor theories through behavioral approaches to neuro-economics and psychology. The law has its own theories of decision-making. Many are normative and specify decision procedures and outcomes. In addition, the law rests on implicit theories of decision-making: a legal rule will look different if, for instance, it assumes either that decision-making follows optimal choice patterns or that heuristics and biases guide human decisions.

A unifying assumption of the law’s implicit theories has been that they regulate human behavior. With the rise of artificial intelligence (AI) to support and augment human decision-making, this assumption does not necessarily hold.

The paper focuses on decision-making by board members. This provides an especially interesting example because corporate law has laid out explicit expectations for how board members must go about decision-making. The law requires board members to own their decisions. At the same time, the law trusts corporate boards to take disinterested and well-informed decisions. Directors are encouraged to delegate decisions; they can, and sometimes must, seek information and expert support. Technical support tools have been part of this process, from pocket calculators and Excel spreadsheets to more sophisticated machines. The same goes for human support. Boards regularly hear employees, officers, or outside experts to inform their decisions, and corporate law has been confident that board members can affirm ownership of a decision that technical support tools or humans have contributed to.

So far, corporate law rules on business judgments, on other decisions, and on owning information furnished by non-board members all have human decision-makers in mind. An implicit assumption is that board members can cognitively follow when experts present their findings or, alternatively, ask for an explanation. Does this implicit assumption translate seamlessly to integrating an AI?

One way of looking at it is to conceptualize AI as a technical support tool. It is still the board, one might claim, that takes decisions even if it follows what the AI suggests.

Another way of looking at it is to analogize an AI to a human expert informing the board. Again, one might stress, the board, not the AI, takes the final decision, even if the board as a rule follows the AI’s recommendation.

Yet another way of looking at it is to stress the dissimilarities between AI, traditional technical support tools, and human experts. This essay goes down that route. It submits that reflection and review are core elements of how corporate law has conceptualized board decision-making. The essay moves on to suggest that, with the increasing complexity of an AI, especially of the black-box variety, processing its input by humans looks fundamentally different from dealing with traditional support tools or with experts. The difference, I suggest, lies in the way an AI ‘malfunctions’ and ‘errs’. There is no conversation, even if large language models might make you think so. There is very little understanding of how the AI produces its results. With a black-box AI, board members get information neither on the relevant variables nor on their weights. Depending on the data and the model, it will be hard or impossible to estimate the probability that the AI’s prediction is biased, not well enough suited to the corporation’s situation, or altogether wrong.

Critical dialogue among humans has little equivalent because an AI ‘reasons’ differently from a human expert. Humans, when faced with a prediction task, tend to formulate a hypothesis against the background of their real-world understanding. By contrast, an AI approaches the task as a challenge of inductive inference from data. Even if researchers can employ an AI to generate a variety of causal hypotheses, it still performs a theory-blind, data-driven search. A deeper reason for this conundrum is the difference between how a human and an AI ‘explain’. When confronted with a human expert, a board member would ask for a causal explanation. For an AI to provide that, we would need to model and infer causality from data. Mostly, however, an AI gives counterfactual clues; it does not provide the theoretical, conceptual explanation a board member would expect to hear from a human expert.

Encoding knowledge, building hypothesis-based explanations, and causality are just three examples that show how an AI ‘reasons’ and ‘explains’ differently from a human. They suggest that building a board decision on an AI’s prediction resembles neither the use of a traditional technical support tool nor a dialogue with a human expert.

Against that background, the essay concludes that ownership of a board decision must be reviewed because cognitive reflection, critical dialogue, and review look different from what human decision-makers are used to. For corporate law, this implies the need to rethink implicit assumptions about how board members make decisions. Enhanced duties for boards that consult outside experts do not adequately capture what is different about an AI that augments board decisions, especially one of the black-box variety. At the same time, tightly regulating the use of AI would deprive boards of a powerful tool. Arguably, the law first needs a new implicit theory of decision-making before it can review and adapt its normative framework.

Katja Langenbucher is Professor at Goethe University's House of Finance, Frankfurt.

This post is part of the series ‘How AI Will Change the Law’. The other posts in the series are available here.

