Faculty of law blogs / UNIVERSITY OF OXFORD

The Independence Illusion: AI Directors and the End of the Governance Fiction

Corporate governance has spent decades insisting that boards can achieve genuine independence through the right disclosure rules and structural safeguards. This piece argues that the doctrine was never designed to work, and that a recent proposal to seat AI systems as independent directors makes that impossible to ignore. The resistance AI directors will face reveals what independence doctrine has always obscured: that boards are built for calibrated partiality, not neutrality.

Corporate governance does not have an independence problem. It has an honesty problem. For decades, the law has insisted that boards can be both socially legitimate and genuinely independent, that directors can owe their seats to management and still monitor them without bias. This insistence has produced a litany of independence standards, from NYSE listing requirements to Sarbanes-Oxley’s audit committee mandates (see Lisa M Fairfax for a discussion). Though the search for workable standards has been relentless, the results remain remarkably unchanged. The reason is structural: the modern boardroom is incapable of the kind of independence these doctrines purport to demand, not because the standards are insufficiently calibrated, but because genuine independence is incompatible with the social architecture of the boardroom itself.

Even directors who satisfy every technical criterion of independence, by having no material financial relationships, no family ties, and no recent employment, remain embedded in elite social networks that compromise their judgment. As Victor Brudney observed, independent directors are rarely appointed ‘without at least the prior approval of management’. This creates a selection bias that no disclosure requirement can cure; selection itself creates dependence. Directors who make it onto boards are, almost by definition, the ones who have cultivated relationships with the people they are supposed to monitor. The board class is not merely a collection of individuals with certain credentials; it is a social institution that reproduces itself through the same mechanisms of sponsorship, mentorship, and mutual recognition that independence doctrine purports to police. The result is a system where ‘independence’ has become a fiction: legally certified but sociologically impossible. Even though the law treats independence as a factual inquiry into a board member’s status, the real constraint operates through social embeddedness. This embeddedness, however, is a feature, not a bug, of how managerial capitalism stabilizes itself.

Consider the Block-Tidal acquisition, where Jack Dorsey’s board approved a $300 million purchase of a financially distressed music streaming company owned by his friend, Jay-Z. The court dismissed the shareholder suit while admitting the deal was ‘by all accounts, a terrible business decision’. The court’s reasoning was doctrinally sound: mere friendship does not constitute a disabling conflict under Delaware law. But that is precisely the problem. The doctrine has been constructed in a way that renders it toothless against the most common form of boardroom bias: the natural tendency to defer to people we like, respect, and consider friends. The insulation of relational loyalty from legal scrutiny is the system working as designed.

The doctrinal record tells a less coherent story than the formal standard suggests. In Sandys v Pincus, the Delaware Supreme Court reversed a Chancery dismissal on the basis of a single salient fact: two directors co-owned a private airplane with the interested party. The majority inferred from this arrangement an ‘extremely close, personal bond’ of the kind that one would expect to ‘have a material effect on the parties’ ability to act adversely toward each other’—a conclusion the dissent found unsupported by the sparse allegations in the complaint. In Delaware County Employees Retirement Fund v Sanchez, a fifty-year friendship woven through with financial dependence was held sufficient to compromise independence, with the Supreme Court faulting Chancery for analyzing the relationship’s components in isolation rather than in their totality. And in Tornetta v Musk, the Court of Chancery found Tesla directors non-independent on the basis of family vacations and billion-dollar co-investments in Musk-controlled entities. Yet Beam v Stewart had already established that a ‘thin social-circle friendship’ does not, standing alone, compromise independence—a baseline the court in the Block-Tidal litigation applied to wave through a $300 million acquisition in favor of a founder’s close friend. The common thread is not doctrinal consistency but judicial discretion: courts drawing impressionistic lines between relationships that ‘heavily influence’ judgment and those that merely exist. That discretion is not incidental to the independence framework. It is load-bearing.

A recent proposal makes this contradiction impossible to ignore. Zhaoyi Li’s ‘Artificial Fiduciaries’ argues that AI systems should serve as independent directors with voting rights. The logic is almost tautological: genuine independence requires the absence of relational capacity, and only non-human actors can satisfy that condition. Unlike human directors, AI systems have no social networks to compromise their judgment, no career incentives to please management, and no emotional attachments that might cloud their analysis. They could evaluate transactions without wondering whether their criticism might cost them their board seat.

The obvious objection is that AI reflects the biases of its designers, that we are merely trading one form of dependence for another. But this criticism misses the comparative institutional point. The question is not whether AI directors would be perfect, but whether they would be better than the status quo at solving a clear doctrinal gap. Human directors carry biases they cannot articulate, operating through intuitions shaped by decades of social conditioning. AI systems, by contrast, make decisions through processes that can be audited, tested, and improved. Their partiality is technical and capable of refinement; human partiality is strictly relational and immune to disclosure. We can patch an algorithm. We cannot patch a friendship.

The structural parallel to the post-Enron accounting reforms is instructive. Before Sarbanes-Oxley, Arthur Andersen made the substantive judgment about financial accuracy but owed no direct accountability to shareholders; management could point to the auditor’s sign-off as evidence of due diligence. Post-Enron, the accountability framework was redesigned to bring auditors inside the liability structure. The AI director question is structurally similar: the entity most capable of making a genuinely independent governance judgment currently sits outside the legal framework entirely. The difference is that after Enron, Congress acted. The governance establishment’s response to the AI director proposal has been, so far, silence.

That silence is itself revealing. The deeper resistance to the artificial fiduciary is unlikely to be technical, but rather institutional. The threat they pose is not that they might fail to satisfy independence standards, but that they satisfy them too well, exposing the fragility of a construct that was never meant to be taken seriously. If a machine can fulfill the independence ideal more faithfully than any human, then the legal insistence on ‘natural persons’ starts to look less like a safeguard and more like a defensive shield around a socially embedded elite. For decades, doctrine has refined its tests for independence while carefully avoiding the possibility that independence may be incompatible with the social architecture of the boardroom. This forces a choice: either corporate governance really values independence, implying artificial fiduciaries may be preferable to humans, or it prefers socially embedded directors who reproduce corporate dynamics built on trust and cohesion. If the former, then Delaware’s ‘natural person’ requirement is subject to revision. If the latter, then independence should be abandoned as a regulatory North Star, and the law should admit that boards are designed for calibrated partiality, not neutrality.

For fifty years, the law has insisted that boards can be both socially legitimate and genuinely independent. It has insisted that directors can owe their seats to management and still monitor them without bias. AI directors do not solve this contradiction; they reveal it. And that is precisely why they will be resisted.

Jack Resnick is a J.D. Candidate at Stanford Law School.