
What Should We Do About ChatGPT?


Roee Sarel
Professor at the Institute of Law and Economics, University of Hamburg


Everyone is talking about ChatGPT. This new AI-enabled chatbot, which can swiftly produce answers that feel as if a human could have written them, promises to revolutionize the way in which we generate text. Although ChatGPT emerged just a few months ago, it is already causing commotion across various fields. Microsoft embedded it into its Bing search engine. Google declared a ‘code red’ and tried to introduce a competing bot. Universities are rethinking how to evaluate student performance, as ChatGPT can apparently already pass exams in law, business, and medicine. Publishers announced that ChatGPT could not be credited as a co-author. New York schools banned its use entirely. Some scholars speculated that ChatGPT will also reshape corporate governance, eg, by giving shareholders clearer information on the obligations of managers.

Despite all of the hype, ChatGPT is not perfect. Among other things, it is prone to inaccuracies and may suffer from a phenomenon colloquially known as ‘hallucinations’, where the output deviates from what one would reasonably expect. For instance, shortly after ChatGPT was released, Hadar Jabotinsky and I tried asking ChatGPT to provide academic references in support of its answers. Surprisingly, some of the citations were not only inaccurate but non-existent, specifying fake titles and arbitrarily crediting them to authors. While such problems may be mitigated by users’ expertise, they do raise an important question: can we trust ChatGPT without regulatory oversight?

The United States has been lagging behind in its response to the AI revolution, and ChatGPT is no exception. The existing proposals, such as an ‘AI Bill of Rights’ or a voluntary AI risk-management framework, neither directly address ChatGPT nor entail meaningful interventions. By contrast, the European Union has been diligently working on proposals that might fill the gap: a regulatory framework (‘AI Act’), a revision of its product liability directive, and a new ‘AI Liability Directive’. As these proposals may eventually lead to a ‘Brussels effect’ — where the EU’s policy alone moves the global market — it is essential to evaluate whether they provide satisfactory answers to the regulatory problems.

In a recent paper, I conducted such an evaluation through the lens of law and economics and identified three main problems. The first problem concerns the EU’s general approach: instead of looking at whether the AI market needs fixing (ie, whether it suffers from a market failure), the AI Act takes a risk-based approach. This approach divides the uses of AI into categories such as unacceptable risk, high risk, and limited risk, but does not distinguish between risks that constitute a market failure and those that do not. In particular, not all risks are externalities; some are governed by contractual terms and conditions, or otherwise subject to negotiations. Moreover, market forces may already be sufficient to induce AI creators to fix inaccuracies, as failing to do so will lead to a loss of business to competitors. Hence, the need for legal intervention may be independent of whether a risk falls into the ‘high risk’ or ‘low risk’ category.

The second problem lies in the EU’s choice to intertwine regulation with liability. Specifically, because breaches of regulatory obligations under the AI Act trigger presumptions that make it easier for victims to sue, there are concerns of either under-compliance or over-compliance if the regulatory standard is imperfectly set (eg, because it is homogeneous but AI creators are heterogeneous). For instance, if an AI creator can automatically escape liability by complying with the regulatory standard, they will have no incentive to take additional precautions, even when it would be efficient to do so (under-compliance). Similarly, if failure to comply with the regulatory standard automatically gives rise to liability, some AI creators will comply even when it is inefficient for them to do so (over-compliance).

The third problem lies in the standard of liability. The EU’s proposals apply three different standards — strict liability, negligence, and no liability — depending on factors such as whether the risk is high or low. However, the type of risk, per se, is usually not a relevant factor for the choice between liability regimes. Instead, what matters for efficiency are factors such as whether the victim can take precautions to avoid harm and whose activity level needs controlling. For instance, if AI creators are strictly liable for all harm caused, even when they take precautions to prevent it, victims receive implicit insurance. The problem with this implicit insurance is that it eliminates victims’ incentive to take precautions, as they are compensated through the damages awarded in a lawsuit. At the same time, if AI creators are not strictly liable, they may choose an inefficiently high level of activity (eg, by releasing more and more algorithms), as long as they do not behave negligently. These factors are not a function of the type of risk, so a risk-based approach does not capture the relevant distinctions.

In light of these problems, my paper (forthcoming in the UC Law SF Journal (formerly Hastings Law Journal)) calls upon AI policymakers to pay closer attention to principles of law and economics to ensure that the most relevant distinctions are taken into account.

Roee Sarel is a Junior Professor for Private Law and Law & Economics at the Institute of Law and Economics, University of Hamburg.
