Turning Rivals into Watchdogs: Shared Residual Liability for Frontier AI Firms
Introduction
As AI systems become increasingly capable, they stand to improve our lives dramatically, driving scientific discoveries, medical breakthroughs, and gains in economic productivity. But capability cuts both ways: notwithstanding their promise, advanced AI systems also threaten to do great harm, whether by accident or through malicious human use.
Many of those closest to the technology warn that the risk of an AI-caused catastrophe is unnervingly non-trivial. In a 2023 survey of over 2,500 AI experts, the median respondent placed the probability that AI causes an extinction-level event at 5%, with 10% of respondents placing the risk at 25% or higher. And leading scientists and industry players have publicly urged that ‘mitigating the risk of extinction from AI should be a global priority’ on par with preventing ‘pandemics and nuclear war’.
Frontier AI firms continue, however, to underinvest in safety. This underinvestment is driven, in large part, by three major challenges: AI development’s judgment proof problem; its perverse race dynamic; and AI regulation’s pacing problem. To address these challenges, in a recent paper I propose a new legal intervention: shared residual liability for frontier AI firms. Modeled after state insurance guaranty associations, shared residual liability would hold frontier AI companies jointly liable for catastrophic damages in excess of individual firms’ ability to pay. This would prompt the industry to internalize more risk as a whole and incentivize firms to monitor each other (to reduce their now shared financial exposure).
Three Challenges Driving Firms’ Underinvestment in Safety
- AI’s Judgment Proof Problem
No one firm is capable of covering the full damages of a sufficiently catastrophic event. (For reference, the cost of the COVID-19 pandemic to the U.S. has been estimated at $16 trillion; an AI system might be used to deploy an even more destructive virus.) This ‘judgment proofness’ means that AI firms do not fully internalize the risks they generate: because each firm’s liability is, as a practical matter, capped at its ability to pay, firms lack the financial incentive to keep scaling up the precautions they take. The shortfall between total damages and what firms can actually pay is externalized onto the victims of the harm.
- AI’s Perverse Race Dynamic
Frontier AI firms are locked in an intense race. There are plausibly enormous first-mover advantages to bringing a highly sophisticated, general-purpose AI model to market, including disproportionate market share, preferential access to capital, and potentially even dominant geopolitical leverage. These stakes make frontier AI development an extremely competitive affair. A firm that unilaterally redirects some of its compute, capital, or other vital resources away from capabilities development and toward safety management risks ceding ground to faster-moving rivals who don’t do the same. It is a prisoner’s dilemma: unable to trust that their precautions will not be exploited by rivals, each firm is incentivized to cut corners and press forward aggressively, even if all would prefer to prioritize safety.
- AI Regulation’s Pacing Problem
Traditional command-and-control regulation struggles to address these issues because the speed of AI development vastly outpaces that of conventional regulatory response (constrained as the latter is by formal legal and bureaucratic process). Informational and resource asymmetries fuel and compound this mismatch, with leading AI firms generally possessing superior technical expertise and greater resources than regulators. By the time regulators develop sufficient understanding of a given system or capability, and then navigate the relevant institutional process to implement an official regulatory response, the technology under review may already have advanced well past what the regulation was originally designed to address. A gap persists between frontier AI and the state’s capacity to efficiently oversee it.
Potential Virtues of a Shared Residual Liability Regime
Under a shared residual liability regime, if a frontier AI firm causes a catastrophe that results in damages exceeding its ability to pay (or some other pre-determined threshold), all other frontier firms would be required to collectively cover the excess damages.
Each firm’s share of the excess would be allocated in proportion to its riskiness: the less risky a firm is, the less it would have to pay in the event one of its peers triggers residual liability. Riskiness could be approximated with a formula that takes into account inputs like compute and revenue from AI products, mirroring the approach of the Federal Deposit Insurance Corporation (FDIC), which calculates assessments using formulas that synthesize various financial and risk metrics. (Further design questions, such as defining regime membership and qualifying trigger events, are discussed in the full paper.)
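To fix ideas, here is a stylized sketch of how such a pro-rata allocation might work. Everything in it (the weights, the riskiness scores, and the firm names) is hypothetical and illustrative only; the paper does not prescribe a particular formula.

```python
# Stylized sketch only: weights, riskiness scores, and firm names are hypothetical.
# It illustrates apportioning excess damages pro rata to a simple riskiness score
# built from (normalized) compute and AI-revenue inputs, in the spirit of
# FDIC-style formula-based assessments.

ALPHA, BETA = 0.5, 0.5  # hypothetical weights on the two risk inputs


def riskiness(compute_share: float, revenue_share: float) -> float:
    """Toy riskiness score combining normalized compute and AI-revenue shares."""
    return ALPHA * compute_share + BETA * revenue_share


def allocate_excess(excess_damages: float,
                    firms: dict[str, tuple[float, float]],
                    defaulting_firm: str) -> dict[str, float]:
    """Split excess damages among non-defaulting members pro rata to riskiness."""
    scores = {name: riskiness(*inputs) for name, inputs in firms.items()
              if name != defaulting_firm}
    total = sum(scores.values())
    return {name: excess_damages * score / total for name, score in scores.items()}


# Example: Firm A causes a catastrophe leaving $1bn in excess damages;
# B, C, and D cover the shortfall in proportion to their riskiness scores.
members = {"A": (0.40, 0.35), "B": (0.30, 0.30), "C": (0.20, 0.25), "D": (0.10, 0.10)}
print(allocate_excess(1_000_000_000, members, defaulting_firm="A"))
# {'B': 480000000.0, 'C': 360000000.0, 'D': 160000000.0}
```

The point the sketch highlights is that a firm’s expected contribution falls as its measured riskiness falls, which is precisely what generates the incentive to reduce one’s own risk and to monitor peers.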
The regime has a number of potential virtues. First, it would help mitigate AI’s judgment proof problem by increasing the funds available for victims and, therefore, the amount of risk the industry would collectively internalize.
Second, it could help counteract AI’s perverse race dynamic, as tying each firm’s financial fate to that of its peers would incentivize companies to monitor and cooperate with one another in order to reduce shared financial exposure.
Thus incentivized, firms might set up mutual self-insurance arrangements to protect themselves. (An instructive analogue is the mutual insurers that many of the largest U.S. law firms have formed, despite the competitive nature of Big Law and the commercial availability of professional liability insurance.) Or firms might broker joint agreements committing to, for instance, increased development and sharing of alignment technology. Because firms would bear partial financial responsibility for the catastrophic failures of their peers, each firm would have a direct stake in reducing not only its own safety risk but also that of the industry more generally. Developing and widely sharing alignment tools (which lower every adopter’s risk) would, accordingly, be in every firm’s interest.
To be sure, firms might well devise other, more efficient means of reducing shared risk. Shared residual liability embraces this likelihood: it leaves firms free to identify and implement safety interventions themselves and, with the stick of financial exposure, incentivizes them to do so. This points to a third virtue of shared residual liability: by shifting some responsibility for safety governance from slow-moving, resource-constrained regulators to better-positioned firms (leveraging the latter’s comparative advantages), it offers a partial solution to the pacing problem.
A fourth virtue of shared residual liability is its modularity. In principle, it is compatible with many other regulatory instruments. This makes it particularly attractive in a regulatory landscape that is still evolving. Shared residual liability might, for instance, be layered atop commercial AI catastrophic risk insurance, should such become available (coverage would simply raise the threshold at which residual liability activates); or it might be layered atop reforms to underlying liability law (shared residual liability is agnostic about the doctrine that governs first-order liability determinations).
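To make the layering point concrete, the hypothetical sketch above could be extended so that commercial coverage simply shifts the point at which the shared pool is tapped; the function and figures below are illustrative assumptions, not terms from the paper.

```python
# Hypothetical illustration of layering: commercial coverage raises the threshold
# at which shared residual liability activates. All figures are made up.

def residual_excess(damages: float, ability_to_pay: float,
                    insurance_coverage: float = 0.0) -> float:
    """Excess the industry pool must cover once the responsible firm's own
    capacity and any commercial coverage are exhausted."""
    return max(0.0, damages - ability_to_pay - insurance_coverage)


print(residual_excess(2e12, ability_to_pay=3e11))                           # pool covers $1.7tn
print(residual_excess(2e12, ability_to_pay=3e11, insurance_coverage=5e11))  # pool covers $1.2tn
```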
Conclusion
Shared residual liability is not a panacea. It cannot by itself fully eliminate catastrophic AI risk or resolve all coordination failures. But it does offer a potentially robust framework for internalizing more catastrophic risk (mitigating AI development’s judgment proof problem), and it would plausibly incentivize firms to coordinate and self-regulate in safety-enhancing ways (counteracting AI development’s perverse race dynamic and offering a partial solution to AI regulation’s pacing problem). Shared residual liability could be a valuable component of a broader AI governance architecture.
The full paper is available here.
Ben Gil Friedman is a JD Candidate and Levy Scholar at the University of Pennsylvania Carey Law School and was a 2025 Summer Research Fellow at the Institute for Law & AI (LawAI).