
Generative AI at the Crossroads

Author(s)

Philipp Hacker
Professor for Law and Ethics of the Digital Society, European New School of Digital Studies


When Sam Altman, the co-founder and CEO of OpenAI, recently appeared at a US Senate hearing, he affirmed the need to regulate the generative AI models he helped bring about, such as ChatGPT and GPT-4. And indeed, a regulatory framework is being built for generative and other AI systems, but, for the time being, not in the US.

Rather, on May 11, the European Parliament (EP) took a decisive vote on its version of the controversial European AI Act, setting the stage for a plenary EP vote around June 14. The AI Act will soon regulate AI systems operating in the EU, irrespective of where they were developed. In a last-minute effort, European legislators raced to include rules on generative AI systems, such as ChatGPT and GPT-4. These rules are set to define the future of generative AI, in the EU and beyond. Quite literally, billions of dollars are at stake, as well as access to and regulation of a technology that some observers fear could spin out of control in the future.

In a nutshell, while the EP proposal does contain steps in the right direction, the current draft still has the potential to derail generative AI development in crucial areas like medicine or education, while failing to effectively address what is perhaps the greatest challenge generative AI presents: the mass generation of fake news and hate speech.

The EP version of the AI Act is the first one to spell out a specific regime for what it calls ‘foundation models’, an umbrella term for very powerful AI models including many generative AI systems, such as ChatGPT, GPT-4, Bard, or Stable Diffusion. The term ‘foundation model’ has gained considerable traction in the computer science community and rightly focuses on the generality of tasks and output. For example, a simple classifier that can distinguish wolves from huskies in images would not qualify; a text generator à la GPT-4 or Luminous, able to summarize, complete, and freely generate text, would fall under the definition.

Concerning the overall legal architecture, the EP rightly establishes three levels in regulating foundation models, including generative AI:

(1) minimum standards for all foundation models; (2) specific rules for concrete applications in high-risk scenarios; and (3) rules for collaboration and information exchange along the AI value chain, ie, between developers, deployers, and (professional) users.

Under the proposal, developers of generative AI systems, such as OpenAI, will be forced to adhere to certain minimum standards if they wish to offer their models to EU customers (Level 1). To a great extent, these rules make sense to ensure a standard level of protection for persons affected by foundation models. For example, data governance rules ensure that training data are sufficiently diverse and do not, for example, represent only white men. This may prevent some, though not all, forms of discrimination in AI output. Cybersecurity obligations incentivize guardrails against hacking, which is particularly important in times of perpetual geopolitical crisis. Copyrighted material used to train generative AI models must be disclosed, so that authors may avail themselves of their rights. As the $1.8 trillion suit brought by Getty Images against Stability AI shows, copyright provisions will play a crucial role in allocating the benefits of generative AI.

The EP proposal, however, contains one provision that harbors the potential to render the development of new foundation models all but impossible except for Big Tech, even though a better regulatory alternative exists. As part of the Level 1 obligations applying to all foundation models, the EP seeks to compel developers to establish a comprehensive risk management system. In doing so, they have to assess the foreseeable risks their model poses to health, safety, fundamental rights, the environment, the rule of law, and democracy; devise mitigating measures; and establish a risk management system that monitors these risks throughout the lifecycle of the AI model. At first blush, this sounds reasonable: after all, developers should not be allowed to put models on the market whose risks nobody has studied.

The devil, however, is in the details. Foundation models, by definition, have a myriad of different potential applications. GPT-4, for example, could be used for recruitment; in medical contexts; by the public administration and the judiciary; in general elections; for purposes of insurance and credit scoring; the list could be extended ad libitum. Mapping, describing, and reining in the risks across the six broad categories, from health to democracy, for all of these hypothetical scenarios borders on the impossible. Importantly, risk management systems for general-purpose technologies like ChatGPT or GPT-4 will come with significant fixed costs, irrespective of the size of the company developing these models.

This threatens to distort competition: high compliance costs will be much more easily absorbed by big players, such as Google and Microsoft, than by small and medium-sized enterprises (SMEs). This effect has already been described for the GDPR, which also entails steep compliance costs. Hence, the risk management rules of the AI Act may lead to further concentration in the market for foundation model developers, potentially paving the way for a Google/Microsoft duopoly. Ultimately, the AI Act may thus inadvertently undermine the efforts of the EU Digital Markets Act to strengthen workable competition in the digital economy.

In my view, two specific rules are needed to avoid these anticompetitive effects and to address more specifically the genuine risks generative AI harbors for our democracies.

First, the risk management system currently foreseen at the level of the foundation model itself (Level 1) should apply if, but only if, the model is indeed used in a high-risk application (Level 2). This is a well-known strategy in product safety regulation: not all screws need to meet the same risk standards, either. Most of them only need to be fit for assembling IKEA furniture. If, however, they are supposed to hold together parts of a spaceship, but only then, they need to fulfill the more stringent requirements for spaceship screws. Similarly, if ChatGPT is used for medical purposes, those deploying it in this scenario must make sure that all specific medical regulations are complied with. Foundation model developers, be they OpenAI or SMEs, may need to provide information and support to this effect (Level 3), but they should not need to centrally conduct the risk assessment for all the thousands of potential use cases in high-risk scenarios, only a fraction of which may eventually be realized.

Second, while the risk of overregulation persists concerning general AI Act duties, perhaps the most important current threat of generative AI remains woefully unaddressed: fake news and harmful speech. Experiments have shown that internal content moderation strategies can be circumvented to generate hate-filled speech on a massive scale, and AI generators may then produce the code necessary for its maximum proliferation. In our deeply divided societies, driven apart inter alia by questions of climate change, support for Ukraine, the pandemic, and rampant inequality, the automated mass generation of fake news and hate speech spells trouble for the next election cycles. And Elon Musk, after scaling back content moderation on Twitter, has already announced plans to create a GPT without any moderation guardrails.

The EP version of the AI Act, however, only compels generative AI developers to generically prevent the generation of illegal content. While a step in the right direction, much more concrete rules are needed, particularly if actors enter the scene who actively seek to avoid content moderation. Here, lessons may be learned from the EU Digital Services Act (DSA), which seeks to rein in harmful speech and illegal content on social networks. The DSA does not apply to generative AI developers directly; this is a loophole that must urgently be fixed.

More specifically, companies like OpenAI should be compelled to establish notice and action mechanisms: if they receive a complaint about harmful content, they must review and act upon it. The system should be coupled with ‘trusted flaggers’, eg, civil society organizations (NGOs, consumer protection agencies) that may register as generative AI watchdogs. If they file a complaint, because they have found a chain of prompts that generates instructions for building a biological weapon, or a fictional dialogue between prominent Nazis with illegal content, developers should have to prioritize these complaints and offer redress within two to three days. In this way, content moderation can be decentralized beyond (potentially unwilling) developers, bringing in civil society actors to effectively monitor AI output and to represent the views of otherwise underrepresented or vulnerable groups.
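To make the proposed mechanism more tangible, the following minimal sketch in Python illustrates how a notice-and-action intake might prioritize complaints from registered trusted flaggers and attach a shorter redress deadline to them. All names (Complaint, NoticeAndActionQueue), the 72-hour and 14-day deadlines, and the queue design are illustrative assumptions, not drawn from the AI Act, the DSA, or any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
import heapq

# Purely illustrative sketch: names, deadlines, and the queue design are
# assumptions, not taken from the AI Act, the DSA, or any existing system.

TRUSTED_FLAGGER_DEADLINE = timedelta(hours=72)  # 'redress within two to three days'
DEFAULT_DEADLINE = timedelta(days=14)           # assumed deadline for ordinary notices


@dataclass(order=True)
class Complaint:
    priority: int                              # 0 = trusted flagger, 1 = ordinary notice
    received_at: datetime = field(compare=False)
    reporter: str = field(compare=False)
    content_reference: str = field(compare=False)
    deadline: datetime = field(compare=False)


class NoticeAndActionQueue:
    """Minimal notice-and-action intake that prioritizes registered trusted flaggers."""

    def __init__(self, trusted_flaggers: set[str]):
        self.trusted_flaggers = trusted_flaggers
        self._queue: list[Complaint] = []

    def file(self, reporter: str, content_reference: str) -> Complaint:
        now = datetime.utcnow()
        trusted = reporter in self.trusted_flaggers
        complaint = Complaint(
            priority=0 if trusted else 1,
            received_at=now,
            reporter=reporter,
            content_reference=content_reference,
            deadline=now + (TRUSTED_FLAGGER_DEADLINE if trusted else DEFAULT_DEADLINE),
        )
        heapq.heappush(self._queue, complaint)
        return complaint

    def next_complaint(self) -> Complaint | None:
        # Complaints from trusted flaggers are reviewed before ordinary notices.
        return heapq.heappop(self._queue) if self._queue else None


# Example: a registered watchdog's complaint jumps ahead of an earlier ordinary notice.
queue = NoticeAndActionQueue(trusted_flaggers={"consumer_watchdog_ngo"})
queue.file("anonymous_user", "prompt chain reported as harmful")
queue.file("consumer_watchdog_ngo", "prompt chain producing illegal content")
print(queue.next_complaint().reporter)  # -> consumer_watchdog_ngo
```

The design choice mirrors the argument in the text: prioritization and deadlines are handled at intake, so that complaints from registered civil society watchdogs are reviewed first, while ordinary notices still receive a (longer) binding response window.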

In sum, AI regulation is at a crossroads. With AI development accelerating ever further, and offering significant benefits to society, the time is now to implement crucial guardrails for the months and years to come. To reap the substantial benefits AI offers, risk management and regulation must be tailored to the complexities of the AI value chain. At the same time, we need to protect the civility of our discourse, and the future of our democracies, by compelling developers to establish robust content moderation schemes that integrate the wisdom of the crowds, that is, of civil society.

Philipp Hacker holds the Chair for Law and Ethics of the Digital Society at the European New School of Digital Studies, European University Viadrina.

 
