
Addressing AI-Related Harms Through Existing Tort Doctrines

Author(s)

Anat Lior
Assistant Professor at Drexel University’s Thomas R. Kline School of Law



Our current tort doctrines can serve us well even when addressing AI liability. Despite the AI revolution and notwithstanding the ‘black box’ challenge, traditional tort doctrines are still relevant, apt, and applicable. These doctrines are inherently flexible, which is why the tort system has always been able to tackle new challenges resulting from innovation.

AI-based algorithms’ decision-making process is opaque and unknown to the algorithm’s user or programmer, commonly called the ‘black box’ issue. This hampers foreseeability and predictability. Companies developing AI often claim they cannot know how the algorithm reached a harmful decision or recommendation, as it doesn’t reveal the proxies it uses. Some claim that this might prevent AI from fitting into our existing liability doctrines: because these legal doctrines were designed with human conduct in mind, they might not function when applied to AI.

Embedded in tort law is the decision-maker’s ability to balance the competing interests involved in the assimilation of any new technology into our lives – consumer safety, judicial efficiency, and the support of a new industry. There is no reason to believe, nor evidence to suggest, that the tort system won’t be able to do the same with AI.

Proximate Cause examines whether a specific harm was a foreseeable consequence of a defendant’s conduct. AI risks can be roughly divided into three categories: misalignment failures, where the AI pursues goals different from those provided by humans; capabilities failures, where the AI malfunctions; and misuse, where the AI functions as planned but in service of its human creators’ malicious intentions. Misuse falls under intentional torts or negligence; capabilities failures pose no difficulty for the foreseeability question because, when the AI malfunctions, the resulting damage should have been expected.

Misalignment failures present a challenge. Proximate causation depends on the level of generality at which foreseeability is examined. Alignment failures as such are foreseeable, and a growing number of experts warn about them. However, it is tough to foresee the specific alignment failure that could emerge. It is fair to apply a high level of generality because companies know this problem is inherent to AI and still choose to disseminate the technology. The only way to achieve optimal deterrence is to examine the foreseeability of a given harm at a high level of generality. Otherwise, AI companies and users will claim that the alignment problem prevents the assignment of liability.

The factfinder doesn’t have to decide that the specific way the damage happened was foreseeable if the harm itself resulted from the general type of risk that a reasonable person should have taken steps to mitigate. It seems sufficient to find that a general sort of risk exists, even if we cannot precisely predict how this risk will materialize. The proximate cause doctrine shouldn’t be changed, as it was meant to act as a ‘safety valve’ regardless of the domain in which it is applied.

Market Share Liability can help when dealing with AI harms when it is unclear which entity within the AI industry is liable. This is especially true given the highly concentrated structure of the industry, which a few giant tech companies dominate. Focusing on the ‘substantial share’ of the market these companies hold, there could be scenarios where this doctrine will be required to establish causation in fact. Courts have been reluctant to apply this doctrine outside of the DES context, focusing on whether the product is fungible and whether the manifestation of the injury is far removed in time. Given the similarity of the ML- and LLM-based algorithms deployed by different AI companies, the pace at which the industry is developing, and AI’s ability to cause damages that will only manifest in the future, the application of this doctrine seems appropriate. The latter could happen, for example, in AI-assisted CRISPR gene-editing cases and facial recognition harms that can go undetected for years. Utilizing this doctrine can incentivize AI companies to document and track the AI-based products they disseminate.

Respondeat Superior: if a principal entrusts a subordinate, eg, an AI entity, to carry out an inherently risky activity, then fairness necessitates that the principal bear responsibility for that conduct if it results in harm. The principal is better positioned than the agent to bear the damages or insure against potential liability claims. This doctrine is designed, in part, to ensure victims will not be under-compensated when the agent is insolvent. This rationale is fundamental here because AI entities are inherently insolvent: they are neither humans nor corporations and thus have no pockets to pay from.

To decide whether Respondeat Superior applies, courts examine whether the agent acted ‘in the course of the employment’ when the damage occurred. This is meant to distinguish between acts carried out by the agent for which the principal won’t be held liable (eg, frolics) and those for which they will. In the AI context, this distinction doesn’t exist, as there are no acts AI agents can carry out that would fall outside the scope of the principal’s liability.

Determining the AI agent’s principal(s) is challenging. The principal(s) should be identified as those with the highest capability to affect the actions of an AI entity through monitoring, supervision, and guidance. The identity of the principal(s) will heavily depend on the circumstances of an accident. Courts will review the level of involvement, supervision, and monitoring, and the ability to direct the actions of the AI agent. In the early stages, this level of control will frequently be attributed to the designer, programmer, trainer, or manufacturer of the AI. As the use of AI agents becomes more pervasive, the operator’s or owner’s level of control and monitoring will more likely identify them as the appropriate principal(s).

Liability Insurance has had an important, though underappreciated, impact on the development of tort law. The same trajectory, of liability insurance shaping the tort system, is likely in the context of AI liability, as more insurance companies offer AI policies. Insurance can help avoid legal blame-placing issues and compensate those harmed by AI. Insurers can also use their products to incentivize policyholders to act cautiously once AI is involved. Liability insurance is not a stand-alone solution and carries negative implications once involved in a specific industry (eg, moral hazard, adverse selection, regulatory capture, and negative externalities). Nonetheless, it is an integral part of the overall tort approach policymakers should consider.

AI as An(y) Emergent Technology

In The Wizard of Oz, when Dorothy discovers that the wizard is just a man and not a series of flashes, loud noises, and flames, the Wizard’s first response is, ‘Pay no attention to that man behind the curtain.’ This reaction conveys how big-tech companies view their AI outputs when acknowledging the possibility of harm: pay no attention to us.

Innovation has always put the tort system under strain. Every new technology has triggered alarm bells echoing the same claim: the tort system is on the verge of collapsing and should be changed to address the latest technological challenge. However, the tort system has been able to handle new technological developments without significantly reinventing itself. The tort system usually reacts with suspicion to new technologies. It tends first to impose a rigorous liability regime in the form of strict liability. As the social and economic benefits of the latest technology become apparent, as well as its associated risks, the tort system tends to shift to a more flexible regime, eg, negligence or safe harbors. Given the black-box issue and the ‘known unknown’ risks associated with AI, this cycle seems apt.

AI aims to do what humans already do in a more efficient and, hopefully, safer manner. We already have legal doctrines to govern these behaviors and the damages that can result from them. Reinventing the tort system while AI becomes widespread could lead to confusion and uncertainty. Courts may develop new tort doctrines as AI capabilities become more advanced, but this doesn’t render current legal doctrines irrelevant.

Anat Lior is Assistant Professor at Drexel University’s Thomas R. Kline School of Law.

This post is part of the series ‘How AI Will Change the Law’. The other posts in the series are available here.

