
Regulating ChatGPT and Other Large Generative AI Models


Author(s)

Philipp Hacker
Professor for Law and Ethics of the Digital Society, European New School of Digital Studies
Andreas Engel
Senior Research Fellow, Heidelberg University
Marco Mauer
Student Assistant, European New School of Digital Studies

Large Generative AI Models (LGAIMs) are revolutionizing how we create and visualize new content, communicate, and work. They will likely impact all sectors of society, from business development to medicine, from education to research, and from coding to the arts. As we describe in a recent paper, LGAIMs offer enormous potential, but also carry significant risks. Today, they are already deployed by millions of private and professional users to generate human-level text (eg, ChatGPT), images (eg, Stable Diffusion, DALL·E 2), videos (eg, Synthesia), or audio (eg, MusicLM). In the near future, they may be integrated into tools used to assess and communicate with job candidates, or into hospital administration systems drafting letters to patients based on case records. This could free up time for professionals to focus on substantive matters, for example actual patient treatment. Hence, such multi-modal decision engines may contribute to a more effective and potentially fairer allocation of resources. However, errors are costly, and risks need to be adequately addressed. Already, the unbridled capacities of LGAIMs can be harnessed to take manipulation, fake news, and harmful speech to a whole new level. As a result, the debate on how (not) to regulate LGAIMs is intensifying.

In our paper, we argue that regulation, and EU regulation in particular, is ill-prepared for the emergence of this new generation of AI models. The EU is at the forefront of efforts to effectively regulate AI systems, with specific instruments (AI Act, AI Liability Directive), software regulation (Product Liability Directive), and laws targeting platforms that cover AI (Digital Services Act, Digital Markets Act). Nevertheless, LGAIMs deserve special attention from the legislator. So far, AI regulation, in the EU and beyond, has mainly focused on conventional AI models, not on the new generation whose rise we are witnessing today.

In this light, we criticize the proposed EU AI Act, which seeks to directly address the risks posed by AI systems. The proposal, currently debated in the European Parliament, arguably fails to adequately accommodate the risks posed by LGAIMs, given their versatility and wide range of applications. Requiring a comprehensive risk management system that mitigates every conceivable high-risk use, as the proposed AI Act does (Article 9), may be overly burdensome and unnecessary. Instead, the regulation of LGAIM risks should generally focus on concrete applications rather than on the pre-trained model. However, non-discrimination provisions may still apply more broadly to the pre-trained model itself, to mitigate bias at its data source. In addition, data protection risks need to be addressed to ensure GDPR compliance, particularly with respect to model inversion attacks, which may allow personal data contained in the training set to be reconstructed.

The issue of content moderation may be even more pressing for LGAIMs. It is particularly worrisome that recent experiments have shown that, despite its built-in safeguards, ChatGPT can still be used to generate large-scale hate speech campaigns, as well as the code needed to distribute such campaigns as widely as possible. The high speed and syntactic accuracy of LGAIMs make them ideal for the mass production of seemingly well-researched but deeply misleading fake news. This, combined with the recent decrease in content moderation on platforms such as Twitter, is a cause for concern ahead of the upcoming global election cycle. Our research highlights that the EU's primary tool for combating harmful speech, the Digital Services Act (DSA), does not cover LGAIMs, creating a dangerous regulatory gap.

Scholars and regulators have long suggested that, given the rapid advances in machine learning, technology-neutral laws may be better equipped to address emerging risks. While this claim cannot be definitively confirmed or refuted here, the case of LGAIMs highlights the limitations of regulation focused on specific technologies. Our research shows that technology-neutral laws may indeed be more effective, as technology-specific regulation (of platforms or of AI systems) may become outdated before (AI Act, AI liability regime) or immediately after (DSA) its enactment. As a way forward, we suggest several regulatory strategies to ensure that LGAIMs are trustworthy and used for the benefit of society at large.

First, we propose a differentiated terminology. To capture the AI value chain in LGAIM settings, we distinguish between LGAIM developers who pre-train models; deployers who fine-tune them for specific use cases; professional and non-professional users, who actually generate the output in these use cases; and recipients of LGAIM output, such as consumers exposed to AI-generated advertisements or products. Accordingly, more nuanced regulatory obligations can be tailored to these different actors along the value chain.

Second, rules in the AI Act and other direct regulation must match the specificities of LGAIMs. Hence, regulation should target specific high-risk applications rather than the pre-trained model as a whole. For instance, it would be unrealistic to expect the developers of ChatGPT to anticipate and mitigate every possible risk to health, safety, and fundamental rights that ChatGPT might pose in every conceivable high-risk scenario of its use. Instead, those who deploy and use the model for a specific high-risk purpose (eg, summarizing or scoring résumés in employment decisions) should be subject to the AI Act’s high-risk obligations, including transparency and risk management. The devil, however, is in the detail: even with such narrower regulatory requirements, collaboration between developers, deployers, and users will be crucial for compliance. To strike an adequate balance, we suggest drawing on the experience of the US pre-trial discovery system, which weighs access to information against the protection of trade secrets.

Third, as an exception to this application-focused approach, LGAIM developers themselves should be subject to non-discrimination rules, including a version of Article 10 of the proposed AI Act. Biased output is most effectively prevented at the source, particularly in the collection and curation of training data from the internet.

Fourth, detailed transparency obligations should be enacted. This applies to both LGAIM developers/deployers (performance metrics; harmful speech issues raised during pre-training) and users (disclosure of the use of LGAIM-generated content).

Finally, the content moderation rules of the DSA should be selectively extended to cover LGAIMs, including notice and action mechanisms, trusted flaggers, and comprehensive audits. Arguably, content moderation should take place at the AI generation stage rather than ex post, when the effects of AI-generated hate speech and fake news may be difficult to stop.

In all areas, regulators and lawmakers need to act quickly to keep up with the rapidly evolving dynamics of ChatGPT et al. Updating regulations will hopefully help maintain online civility and create a level playing field for the development and deployment of future AI models in the EU and beyond.

Philipp Hacker is Professor for Law and Ethics of the Digital Society at the European New School of Digital Studies.

Andreas Engel is Senior Research Fellow at Heidelberg University.

Marco Mauer is a Student Assistant at the European New School of Digital Studies.
