Algorithmic Oversight: What Directors and Officers Must Do to Comply with Caremark Duties in the Age of Artificial Intelligence

Author(s):

Pierluigi Matera
Professor of Comparative Law at Link Campus University, Rome, and Visiting Professor of Corporations at Boston University School of Law

Delaware oversight doctrine has always been structured around information. Under In re Caremark International Inc Derivative Litigation and Stone v Ritter, liability may arise where fiduciaries (1) fail to implement a reasonable reporting or information system, or (2) having implemented such a system, consciously fail to monitor it or respond to red flags. In either case, liability turns on bad faith—a sustained failure to exercise oversight or a conscious disregard of known risk.

Recent cases have sharpened how this framework applies. In Marchand, Clovis, and Boeing, the Delaware courts emphasized that boards must structure oversight around risks that are mission critical to the corporation’s business, particularly in highly regulated industries. More recently, In re McDonald’s Corp Stockholder Derivative Litigation clarified that officers also owe oversight duties within their delegated domains—an extension that remains debated but significantly expands the practical reach of Caremark.

This doctrinal framework now confronts a structural transformation: artificial intelligence (AI) increasingly mediates the production of oversight-relevant information.

Public companies now deploy AI systems to perform functions at the core of board-level supervision—compliance monitoring, anomaly detection, cybersecurity surveillance, financial screening, and the filtering of internal complaints. These systems do not merely accelerate reporting. They shape what is reported. Oversight is therefore shifting from human-detectable red flags to algorithmically mediated black boxes.

In my recent paper, From Red Flags to Black Boxes: Corporate Oversight in the Age of Artificial Intelligence, I argue that AI does not alter the legal standard governing Caremark liability but changes the evidentiary terrain through which directors’ and officers’ good-faith oversight is demonstrated.

The central question is therefore not whether AI destabilizes Caremark. It does not. The question is what Caremark requires when oversight itself is partially automated.

Directors: Designing and Supervising Algorithmic Oversight

Caremark’s first prong concerns the implementation of a reasonable reporting and information system. When AI systems form part of that architecture, the legal standard remains unchanged. Directors are not expected to master machine-learning models or guarantee technological performance. Caremark remains a doctrine of loyalty, not care.

Yet good faith must now be demonstrated in a different informational environment.

At a minimum, boards must understand where AI systems operate within the firm’s risk architecture and why they matter. If algorithmic tools mediate compliance, safety, or regulatory exposure, they cannot be treated as ordinary operational details.

In practical terms, boards should ensure that AI-driven monitoring is embedded within a governance structure capable of surfacing material risk. This typically requires clear managerial responsibility for the system, regular reporting to the board or relevant committees, periodic review of system performance and limitations, and credible escalation channels for anomalies, model degradation, or regulatory concerns.

Reliance on officers, experts, or vendors under DGCL §141(e) remains permissible but must be informed and periodically reassessed. Delaware law protects good-faith reliance on competent experts—not blind reliance. A board that adopts an AI system and delegates oversight entirely without revisiting the reasonableness of that reliance may find that formal reliance provides little protection.

The inquiry, ultimately, is procedural rather than technological. Courts are not asked to evaluate algorithms; they are asked whether directors made a genuine good-faith effort to supervise the system on which they rely.

Responding to Algorithmic Red Flags

Caremark’s second prong concerns the conscious disregard of red flags. AI complicates this inquiry because it intervenes at the earliest stage—the generation of the red flag itself.

If a model fails to detect misconduct, the absence of a warning cannot alone establish bad faith. A board cannot consciously disregard a signal that never appears.

Yet Delaware law has never confined red flags to explicit alerts of wrongdoing. Repeated regulatory inquiries, operational anomalies, or evidence that controls are ineffective may themselves function as warning signs.

In an AI-mediated environment, a new category of red flags emerges: signals that the monitoring infrastructure itself may be malfunctioning. Model drift, performance degradation, unexplained stability in alert rates, or divergence between algorithmic outputs and external indicators—such as enforcement actions or whistleblower complaints—may function as second-order red flags. As in Stone and Boeing, escalation—not technological complexity—anchors the inference of bad faith.

Officers After McDonald’s

The officer dimension is particularly significant in an AI-mediated governance environment.

McDonald’s makes clear that officers owe oversight duties within their operational spheres. Senior officers responsible for compliance, technology, risk management, or data governance often operate closer to algorithmic systems than the board itself. Their obligation is not to guarantee technical performance but to ensure that oversight-relevant information flows upward and that credible warning signals are escalated rather than suppressed.

Delegation does not dilute fiduciary duty. It defines its perimeter.

Conclusion: A Stress Test for Caremark

AI does not transform Caremark into a negligence regime or make fiduciaries insurers of technological performance. What it does is stress-test the doctrine.

By embedding oversight functions in opaque and probabilistic systems, AI increases the difficulty of demonstrating good faith while simultaneously generating richer records through which courts may evaluate fiduciary engagement. This evolution may ultimately work in favor of either plaintiffs or defendants, depending on how directors and officers structured, monitored, and documented the algorithmic oversight systems adopted by the corporation.

Caremark survives this stress test intact—but only if boards and officers adapt their governance processes so that their good-faith engagement with mission-critical risk remains visible in an algorithmic age.

The full paper can be accessed here.

Pierluigi Matera is a Visiting Professor of Corporations at Boston University School of Law, a Professor of Comparative Law at Link Campus University, Rome, and co-founder and Managing Partner at Libra Legal Partners.