
Ownership and Trust—A Corporate Law Framework for Board Decision-making in the Age of AI

Katja Langenbucher
Law Professor at Goethe University's House of Finance, Frankfurt; Affiliated Professor at SciencesPo, Paris; Long-term Guest Professor at Fordham Law School, NYC

In a new paper, I explore the legal ramifications of boards using AI as a ‘prediction machine’, ie, to enhance their understanding of which future events are likely to happen. Most management decisions imply predictions of that type, usually based on statistics. Complementing or replacing the statistician, AI furnishes patterns in data, reinforces known (or assumed) interrelationships, or points out novel, unanticipated correlations.

I focus on board decision-making because corporate law has laid out explicit expectations for how board members may, and must, go about decision-making. By and large, the law trusts corporate boards to make disinterested, careful, and well-informed decisions. Directors are encouraged to delegate many decisions. Most legal orders accept that managerial intuition and gut feeling may play a role. At the same time, the law requires board members to own the decisions they make. If they delegate, a duty to structure and supervise remains with the board. Running the corporation is not a task board members can outsource by abdicating authority.

My paper explores two core expectations corporate law has for board member decision-making. I suggest that corporate law expects board members to fully own their decisions. As the flip side of ownership, corporate law places trust in board members to form business judgments that are immune from judicial second-guessing. These finely tuned rules have been developed against the background of human cooperation. They assume incentives for human behavior, the potential for communication, the chance to build interpersonal trust or, alternatively, the need for skepticism and critical inquiry. An AI, by contrast, does not offer opinions or engage with board members in a critical discussion among peers. Instead, it produces a data-driven statistical prediction. How does this fit in with the expectation of ownership? Is an AI like a technical support tool, a pocket calculator on steroids, as it were? Alternatively, should we treat an AI like a corporate officer, or even like an outside expert?

The expectation that boards own their decisions leaves no room for the board to have an AI decide in its place. At the same time, the law has nothing against the board asking for support in its decision-making. With AI developing into a standard tool, boards will use it to support their decisions, shareholders will expect this, and failing to do so will open the board up to liability. At the same time, board judgments will look and feel different than they do today. A clear distinction between the AI preparing the decision and the board making the decision will often look artificial. The more closely a decision follows the AI’s recommendation, the more the board’s role might seem reduced to implementing what the AI has proposed. The paper engages with an emerging discussion around how to draw the line between an AI merely supporting and entirely taking over decision-making, especially when dealing with a black-box AI. I submit that we are unlikely to see a board integrate an AI into its decisions so comprehensively that we would be looking at an abdication of board authority. Instead, I suggest that fresh efforts must go into understanding what corporate law expects as a minimum from board members who rely on support to augment their decision-making.

Most board decisions rest on a large variety of assumptions and predictions. Many of these are known unknowns: Will the self-driving car produce terrible accidents? What percentage of my debtors will perform on their loans? In these scenarios, the board owning its decision translates into understanding the risk of working with a known unknown, evaluating it, and forming an informed and reasonable judgment. The prediction that an AI makes, whether explainable or black box, can be just that: a known unknown. While the law expects the board to own its decisions, it also trusts the board to handle a known-unknown situation and come to a reasoned business judgment.

However, boards do not get carte blanche. For both ordinary and business judgments, boards must evaluate and double-check the information they receive. On closer inspection, corporate law proceeds along two dimensions, namely the type of decision (business judgments and others) and the type of support tool (technical help, humans integrated into the corporation, outside experts). To illustrate, I use Delaware’s DGCL § 141(e) and German law’s ‘plausibility check’ for board members relying on expert input.

Trust and Ownership

Against that background, the paper provides a visualization in the form of a four-square control matrix. The y-axis represents the level of allowance for board discretion according to the decision’s subject matter (trust). Boards enjoy broad discretion for those elements of a resolution that qualify as a business judgment. Little discretion is accorded to parts of a decision that concern compliance, risk management, and similar non-business-judgment issues. The x-axis looks at the intensity of information support (ownership). Boards have been free to use technical support tools, ranging from pocket calculators to high-powered computer networks. Human helpers have attracted more scrutiny. This is true for input by officers, committees, or employees of the corporation. Outside experts attract even more scrutiny.
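In rough outline, the four squares can be sketched as follows; the quadrant entries are simplified shorthand for the scrutiny levels just described, not the paper’s own labels:

                                  Technical support tool       Human or expert input

  Business judgment               broad discretion;            broad discretion on the
  (high trust)                    minimal judicial review      merits, but reliance is
                                                               scrutinized (eg DGCL § 141(e),
                                                               plausibility check)

  Compliance, risk management     little discretion;           little discretion; closest
  (low trust)                     closer review of the         review of both the decision
                                  decision itself              and the reliance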

The legal logic underlying the matrix reflects the tension between boards owning their decisions and the law trusting boards without holding them accountable for ‘simply bad judgment’. Visualizing it is helpful, given that board decisions rarely fall into one neat category but, rather, combine different elements. Some are about deciding on a novel business strategy, involving market knowledge, experience, and intuition. All of these are characteristics of a low-judicial-scrutiny decision. However, other parts of the decision might depend on the professional evaluation of a particular market or a new product that only experts can deliver. Legal issues might be decisive for the success of the new strategy, for instance because a new product requires regulatory approval. The matrix allows us to understand the degree of judicial review that a board resolution, with its various sub-parts, will attract. It shows that it is neither necessary to comprehensively define every AI as a purely technical support tool, nor to unfailingly analogize an AI to a human expert, be it inside or outside the corporation. Instead, it allows us to move the needle, as it were, along the x-axis, ranging from low to high ownership, and along the y-axis, exhibiting low or high allowance for discretion.

The author’s full paper can be found here.

Katja Langenbucher is a law professor at Goethe University's House of Finance in Frankfurt, affiliated professor at SciencesPo, Paris, and long-term guest professor at Fordham Law School, NYC.

 
