The Law of AI Is the Law of Risky Agents Without Intentions
Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain mens rea or intention. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability.
Of course, the AI programs themselves are not the responsible actors; they are technologies that human beings design, deploy, and use, and whose effects fall on other human beings.
The people who design, deploy, and use AI are the real parties in interest. Hence to regulate AI we should adopt legal standards that hold human actors and companies accountable for the harms produced by their design, deployment, and use of AI technology.
We can think of AI programs as acting on behalf of human beings. In this sense AI programs are like agents that lack intentions but that create risks of harm to people. Hence the law of AI is the law of risky agents without intentions.
The law should hold these risky agents to objective standards of behavior, which are familiar in many different parts of the law. These legal standards ascribe intentions to actors—for example, that given the state of their knowledge, actors are presumed to intend the reasonable and foreseeable consequences of their actions. Or legal doctrines may hold actors to objective standards of conduct, for example, a duty of reasonable care or strict liability.
Holding AI agents to objective standards of behavior, in turn, means holding the people and organizations that implement these technologies to objective standards of care and requirements of reasonable reduction of risk.
Take, for example, defamation law. Large language models (LLMs) often hallucinate when prompted—they generate content that is false or misleading. One cannot show that LLMs act with actual malice, the standard required for libel suits brought by public figures. Mens rea requirements like the actual malice rule protect human liberty and prevent chilling people’s discussion of public issues. But these concerns do not apply to AI programs, which do not exercise human liberty and cannot be chilled.
Instead, the law should aim at creating incentives for those who design, implement, and use LLMs to internalize the costs they impose on society. The proper analogy is not to a journalist who deliberately or recklessly writes a false and damaging story. The correct analogy is to a defectively designed product—one that is produced by many people in a chain of production and that causes injury to a consumer. The law can give the different players in the chain of production incentives to mitigate the risks created by AI-generated content—for example, by installing filtering algorithms.
Similarly, in cases of copyright infringement involving AI-generated content, the focus should be on whether the human actors in the chain of production acted reasonably in designing and using the technology. Current copyright doctrine is poorly adapted to the rise of AI. Infringement analysis compares a challenged work with an asserted original and asks whether there was deliberate copying or, failing that, substantial similarity. This model of individualized comparisons makes little sense for programs that lack intentions and can combine and recombine any number of elements in their training data to generate an endless supply of new works.
Here again we should think of AI systems as risky agents that create pervasive risks of copyright infringement at scale. To respond to this problem, the law should require that AI companies take a series of reasonable steps that reduce the risk of copyright infringement even if they cannot completely eliminate it. A fair use defense tied to these requirements is akin to a safe harbor rule. Instead of litigating in each case whether a particular output of a particular AI prompt infringed copyright, as we do today, this approach asks whether the AI company has put sufficient effort into risk reduction. If it has, its practices constitute fair use.
The examples of defamation and copyright suggest why the spread of AI systems may require changes in many different areas of the law. As we make these adjustments, we should view AI technology not in terms of its independent agency but in terms of the people and companies that design, deploy, offer, and use the technology. To properly regulate AI, we need to keep our focus on the human beings behind it.
Ian Ayres is the Oscar M. Ruebhausen Professor of Law at Yale Law School.
Jack M. Balkin is the Knight Professor of Constitutional Law and the First Amendment at Yale Law School.
This post is part of the series ‘How AI Will Change the Law’. The other posts in the series are available here.