
Fiduciary Duties and the Business Judgment Rule 2.0 in the AI Act Age

Authors:

Maria Lucia Passador
Assistant Professor in the Department of Law, Bocconi University
Maria Lillà Montagnani
Full Professor in the Department of Law, Bocconi University; Transatlantic Technology Law Forum Fellow, Stanford Law School

We are witnessing a quiet revolution in corporate governance. Artificial intelligence has moved from the periphery of operations into the core of board deliberation—shaping how risks are analysed, markets are read, and strategic choices are made. As we deploy machine-learning systems in recruitment, lending, compliance, and investment, the legal architecture surrounding directors’ accountability is being redrawn. 

In our paper, Fiduciary Duties and Business Judgment Rule 2.0 in the AI Act Age, we argue that the EU Artificial Intelligence Act (the ‘AI Act’) decisively reshapes directors’ fiduciary duties. Though the AI Act is not formally addressed to corporate boards, its risk-based approach creates a de facto governance standard. The result is the emergence of two novel fiduciary duties: AI due care and AI loyalty oversight. These duties compel directors to exercise informed, technologically literate, and ethically grounded oversight of algorithmic systems. They require us to view fiduciary law not as a static safeguard of human judgment but as a dynamic framework for supervising hybrid human–machine decision-making. The Business Judgment Rule (BJR), long relied upon as a shield for managerial discretion, must now evolve to preserve its legitimacy in this new environment.

From Human Judgment to Algorithmic Stewardship

Our starting point is straightforward: fiduciary law presumes that directors deliberate, decide, and bear responsibility. Yet as decisions become mediated by data and models, the space for human judgment narrows. The traditional duty of care—acting with the diligence of a reasonably prudent director—no longer suffices when we confront systems whose internal logic may be opaque even to experts. We therefore propose a reconceptualised duty of AI due care, which demands cognitive adequacy: the capacity to question, understand, and monitor the technological tools shaping corporate choices. Directors need not become coders, but they must know which questions to ask and how to interpret the answers. Technological literacy, in this sense, becomes a baseline fiduciary competence. Failing to interrogate an algorithm’s design assumptions, data provenance, or bias parameters is not simply a technical oversight—it may be a breach of duty.

Alongside this, the duty of AI loyalty oversight reframes loyalty for the algorithmic age. Conflicts of interest no longer arise only from personal gain; they may be embedded in the systems through which directors act. Vendor-developed algorithms may optimise for commercial interests that diverge from the firm’s objectives, or internal systems may encode preferences that marginalise key stakeholders. Loyalty, therefore, must extend beyond human intention to institutional design: we must ensure that the technologies we deploy serve the company’s purpose rather than silently displacing it. Delegating discretion to AI does not diminish loyalty—it heightens the obligation to verify that delegated systems remain impartial and aligned.

These duties have tangible implications for how we structure governance. Procedural compliance—what Michael Power famously called the ‘rituals of verification’—is no longer enough. Under the AI Act, documentation and audit trails are necessary but not sufficient. Boards must establish substantive oversight architectures: dedicated AI governance committees, clear escalation channels for algorithmic anomalies, and integration of AI risk into audit and ESG frameworks. Across jurisdictions, courts are already moving in this direction. Delaware’s Marchand v Barnhill and Germany’s Aktiengesetz §93(1) converge on a single principle: omission is liability. The failure to monitor, record, or escalate emerging risks—even absent bad faith—may amount to breach. In this light, AI governance becomes not a matter of form, but a matter of fiduciary substance.

The Business Judgment Rule 2.0

The Business Judgment Rule must also adapt. Historically, the BJR insulated directors who acted in good faith, on an informed basis, and in the company’s best interests. But the doctrine presupposes that directors are the true authors of their decisions. In the algorithmic age, when outputs are generated by systems that few fully understand, this presumption falters. If directors cannot explain how a decision was reached, or what assumptions drove it, judicial deference loses its foundation. 

The BJR 2.0 we propose preserves protection only for those who can demonstrate informed stewardship—directors who engage critically with algorithmic tools, demand traceability, and document the rationale for relying on machine-generated insights. Deference without understanding is deference to no one.

This recalibration of the BJR also aligns with the AI Act’s regulatory logic, and it removes any need to create a bespoke AI judgment rule. The AI Act sets out expectations of risk classification, human oversight, and impact assessment that naturally extend to board oversight. Compliance with these standards is not merely a legal safeguard—it is the procedural manifestation of care and loyalty in the algorithmic age. We foresee that these duties will soon influence directors’ and officers’ insurance, material risk disclosures, and corporate reporting. Insurers and regulators alike will assess whether boards can evidence meaningful oversight of AI systems, not just formal compliance. For us, this signals a deeper cultural transformation: the end of fiduciary formalism and the rise of fiduciary demonstrability. Directors will be measured not by how many policies they sign off, but by whether they understand, challenge, and govern the technologies they rely upon.

We believe that the legitimacy of corporate governance in the AI era depends on this shift. Fiduciary law cannot retreat into procedural comfort while decision-making becomes opaque. The duties of AI due care and AI loyalty oversight are not abstract innovations; they are necessary safeguards to preserve judgment in a world increasingly mediated by code. The Business Judgment Rule will endure only if it rewards directors who exercise not ritual compliance, but authentic discernment—directors who practise what we call true care and lived algorithmic loyalty. In the end, governance remains a human enterprise: to govern AI is, above all, to remember that judgment cannot be automated.

The authors’ article can be accessed here.

Maria Lucia Passador is an Assistant Professor in the Department of Law, Bocconi University.

Maria Lillà Montagnani is a Full Professor in the Department of Law, Bocconi University.