Faculty of law blogs / UNIVERSITY OF OXFORD

Global AI Governance—Part 1: Decoding Developers and Deployers

Posted

Time to read

9 Minutes

Author(s)

Philipp Hacker
Professor for Law and Ethics of the Digital Society, European New School of Digital Studies
Ramayya Krishnan
Ruth F. Cooper Professor Of Management Science And Information Systems, Carnegie Mellon University
Marco Mauer
Researcher, European University Institute

The rapid advancement and widespread adoption of AI creates complex challenges related to security and governance. International AI governance seeks to navigate these challenges and establish frameworks ensuring that AI development and deployment are ethical, secure, inclusive, and globally beneficial. Several actors have already proposed strategies to ensure the secure development and use of AI (see eg the US, the EU, the UK, Singapore, the Council of Europe, and the UN). All these initiatives share common high-level goals. Yet, they often lack a pathway to move from policy to practice. The recently adopted United Nation's Global Digital Compact, for example, foresees a reporting mechanism and global dialogues, but says little about how the principles agreed upon can be operationalized.

We intend to address this gap in a series of three blog posts. It explores the global landscape of AI governance by analyzing the interplay between developers and deployers of foundation models, extending this distinction to entire nations. The first post introduces the core theoretical framework, emphasizing the critical distinction between developers, who build general-purpose AI models, and deployers, who adapt them for specific use cases. It highlights the nuances of these categories, and how this distinction shapes regulatory dynamics and global AI governance hierarchies. The second post examines how deployer states can effectively regulate local AI deployment while collaborating to set global standards that influence developers in more powerful jurisdictions. The final post outlines current global AI governance efforts, addressing pre-deployment guidance, open-source challenges, and the importance of adaptive global standards. We also discuss the possible impact of a second Trump administration on AI governance in the US and globally, and conclude with strategies for fostering safe, equitable, and transformative AI ecosystems.

This first post will show that a crucial distinction in developing and regulating foundation models is the one between developers and deployers. While developers invest significant resources into developing a general-purpose product, deployers use their models to apply them to a specific use case. We will show that drawing the line between developing and deploying a model has significant implication for the regulatory framework in which these economic actors operate. We will then show that the distinction between developing and deploying models does not only apply to enterprises but can be used as a theoretical framework to classify entire countries on the Global AI Governance landscape.

Developing and Deploying Foundation Models

The advent of so-called ‘transformer architectures in 2017 led to a leap in AI capabilities. Using vast amounts of training data, these models are developed with no specific downstream application in mind and thus often referred to as ‘foundation models’. OpenAI's GPT-4, Google's Gemini, Meta's Llama 3 or Stable Diffusion are all examples. Their capabilities, at least in general, improve with model size which has led to a significant increase of model sizes in the past years. Developing these large models requires skilled engineers, significant amounts of data and computational resources which only few companies can afford: Conservatively estimated, developing a model as capable as GPT-4 costs around one hundred million dollars. The costs of operating the model are considerable as well and beyond the reach of all but the most well-resourced startup and large tech companies.

These general-purpose models are often fine-tuned and aligned to meet the needs of a specific application. In some instances, enterprises who have the required data relevant to meet their application needs, employ either internal technical capabilities or systems integration partners to fine tune the models for their proprietary use. In other cases, enterprise software vendors such as Salesforce or SAP have integrated these models to provide AI-enabled features. Applications such as Retrieval-Augmented Generation which use foundation models with external databases to ground context and prompt engineering techniques such as  chain-of-thought prompting have gained popularity as organizations have deployed foundation models to generate insight and decision support from their unstructured data (internal and external text and image corpora).

These developments lead to increasingly complex value chains and blur the line between the developers and deployers of foundation models. Furthermore, it has consequences for the regulatory regimes. Unlike in the area of platform regulation, for example, where few big platforms provide services and are the main targets of regulation, this division of labor in foundation model creation and use enables focused intervention with developers and deployers. This in turn has implications for state's capabilities to enforce their local rules. Instead of trying to enforce their rules against developers abroad—which is something else than merely prescribing them without any real prospects of compliance—, they can ensure a certain degree of compliance by enforcing their laws against local deployers, as we will explain later on.

Distinguishing Developers and Deployers: the billion-dollar question in AI governance

As was already stated above the complex value chains when employing foundation models can blur the line between developers and deployers. This raises a crucial question that could have billion-dollar implications: Where does the role of AI developer end and that of deployer begin? This seemingly simple distinction is proving to be a Gordian knot for policymakers and industry players alike, particularly in the context of foundation models and generative AI. It is of prime importance because different obligations, and liability exposures, are attached to the respective roles.

The EU AI Act: A Case Study in Complexity

The European Union's AI Act attempts to draw a line between these roles, defining ‘providers’ (developers) and deployers. This distinction is far from academic—it carries significant legal and financial consequences. Providers face a gamut of obligations, from ensuring transparency to conducting rigorous risk assessments. For the crème de la crème of AI models (think GPT-4 or Gemini Ultra, called general-purpose AI models with systemic risks in the AI Act), the stakes are even higher, with requirements for red teaming, incident reporting, and cybersecurity measures.

But even more importantly: under certain circumstances, the Act allows for a metamorphosis from deployer to provider. This transformation can occur when (Art 25(1)):

  1. A deployer puts their name or trademark on an existing high-risk AI system.
  2. A general-purpose AI is used for high-risk activities, such as recruitment or medical diagnoses.
  3. Significant modifications are made to a high-risk AI system.

The Fine-Tuning Conundrum

Perhaps the most contentious issue in this developer-deployer debate is the treatment of fine-tuning. When does tweaking a model cross the line from deployment to development? The AI Act offers two competing interpretations, in our view:

First, Recital 109 offers some clues: ‘In the case of a modification or fine-tuning of a model, the obligations of providers of general-purpose AI models should be limited to that modification or fine-tuning, eg by complementing the already existing technical documentation with information on the modifications, including new training data sources.’ This seems to imply that any, even very limited, fine-tuning always leads to provider status. While the obligations should be confined to the changes made during the course of fine-tuning, Articles 53 and 55 would still have to be fulfilled if a highly powerful foundation model, such as GPT-4, is fine-tuned. As mentioned, this would pose very palpable practical problems for many companies not familiar with frontier AI safety protocols, and expose them to significant liability.

The second interpretation would apply Article 25(1)(b) AI Act by analogy. Strictly speaking, it only covers changes to high-risk models (ie, any model deployed in a high-risk sector defined by the AI Act, eg, recruitment, education, medical AI). But the same reasoning can be applied to general-purpose AI models: only ‘significant modifications’ that alter the model’s risk profile should trigger provider obligations.

The latter approach aligns more closely with the Act's risk-based philosophy. After all, if fine-tuning doesn't materially change the risks of discrimination, safety breaches, or privacy violations, why saddle the deploying entity with the full weight of provider responsibilities? If this reasoning applies to high-risk AI models, this does not seem to be a justification for treating general-purpose AI models any differently in this respect. This interpretation would ensure that, in many scenarios, the onus remains on the better-resourced and more experienced foundation model developers.

Overall, if the fine-tuning significantly changes relevant risks, then, and only then, a deployer becomes a provider.  For example, removing safety layers or employing a biased dataset for fine-tuning, thus amplifying safety or discrimination risks, mandates the consideration of such actors as developers. Standard fine-tuning techniques, on the other hand, should not warrant the qualification of the entity as a developer. However, and here comes the catch: all entities engaging in fine-tuning would still have to test that such risks are not exacerbated. This, on the other hand, seems justified as a matter of responsible AI development and deployment, but it still does not amount to the full weight of fulfilling the requirements of Articles 53 and 55 AI Act. Importantly, the costs for undertaking the testing will have to be borne by the organizations seeking to deploy the models. Depending on the extent of these costs, this might adversely affect deployment by small and medium-sized enterprises.

Another approach to drawing the line between developers and deployers features in the Californian attempt to enact AI legislation. The legislature relied on a purely quantitative distinction and considered everybody who invests more than 10 million dollars in the fine-tuning of a model to be a developer. This approach can guarantee a high degree of legal certainty. However, it fails to address situations in which economic operators change the risk profile of a model without significant investments.

 

Overview of the AI Act distinction between provider and deployer in the GPAI (general-purpose AI = foundation model) context
Figure 1. Overview of the AI Act distinction between provider and deployer in the GPAI (general-purpose AI = foundation model) context

Practical Implications and Strategies

For companies navigating these murky waters, three main strategies emerge:

  1. Opt for models below the ‘systemic risk’ threshold to avoid the most onerous obligations (Art 55).
  2. Embrace prompt engineering (the science of selecting the right prompts) and Retrieval Augmented Generation (RAG: use of an external knowledge source for the creation of prompts) as alternatives to fine-tuning. Arguably, in these cases, the model is not modified at all, and provider status is skewed both under Recital 109 and the analogy to Article 25(1)(b) AI Act. Companies would only have to fulfil much more light-weight risk management obligations as part of the deployer rules (Art 26 and general tort law).
  3. Leverage AI itself for risk management, using large language models to conduct semi-automated red teaming exercises.

As the AI governance landscape continues to evolve, one thing is clear: the billion-dollar question of ‘developer or deployer?’ will remain at the forefront of legal, ethical, and business discussions. The answer may well shape the future of AI innovation and deployment for years to come.

Developer and Deployer States

Importantly, this analytical distinction between developers training foundation models and deployers fine-tuning them and integrating them into applications does not only hold true at the individual company level. We argue that it applies to entire countries. Most jurisdictions fall into the category of ‘deployer states,’ given the oligopoly of model developers in the US (and China) and the limited number of powerful (and currently often open-weight) foundation models in other states (eg, UK, France, UAE).

Between these potentially just two to four genuine ‘developer states’ and the numerous deployer states exists a gray zone. It comprises countries that host significant intermediaries or actors who might be considered developers if they modify the model significantly.

Figure 2: Developer - deployer distinction mapped onto countries
Figure 2. Developer - deployer distinction mapped onto countries

The status as a deployer country does not mean that these countries have to refrain from regulating developers in other countries. To safeguard consumers within their jurisdictions, and foster innovation, deployer countries can adopt specific policies with extra-territorial reach. Examples for such provisions are Art 2 of the AI Act and Art 3 GDPR; in fact, many market regulations around the world function in this way, targeting external entities who offer products in a certain jurisdiction. For AI, these rules can include mandating developers who roll out their models in deployer states to address vulnerabilities that could emerge after deployment. They might require continuously monitoring AI models for emerging risks, issuing timely updates, and transparent reporting on risks and mitigation measures (see Art 9 ff. AI Act). The EU AI Act, in a controversial move, even extends EU copyright rules for training AI models to third countries, breaking with the territoriality principle in international copyright law (Art 53(1)(c) AI Act).

Furthermore, to retain control over AI models, deployer countries might mandate their localization. Such an approach requires parts or the entire supply chain of AI applications to be confined within a nation's jurisdiction. These measures mirror general trends in data localization and are illustrated by the European Union Cybersecurity Certification Scheme for Cloud Services (EUCS). The EUCS draft introduced a certification with three assurance levels for cloud services, linked to their risk level. The highest level requires data localization and an EU-based global headquarters. Although currently voluntary, this might become mandatory for specific EU cloud users, and also for the AI value chain, for example under a new EU cybersecurity law (see Art 24 NIS 2 Directive), even though the data sovereignty requirement has recently been watered down.

Yet, these rules with extra-territorial effect might have a hard time being enforced against powerful tech companies, especially when small states attempt enforcement. Unlike with local deployers small states are usually unable to get hold of the persons responsible for breaches of local regulations and to access the assets of the offender. So even if international law allows them to prescribe rules with extraterritorial effect, states have to means to resort to coercive measures to enforce them. However, as we will show in our next blog post, deployer states can influence AI governance at a global level by making use of their ability to regulate deployers of AI models. They are in a position to enforce deployment regimes tailored at their local needs and cooperate to set standards and strive for global cooperation to enforce rules that ensure ethical, secure, inclusive and globally beneficial AI development.

In Conclusion: The AI Chessboard Redrawn

As we have seen, the developer-deployer distinction in AI is far from a simple binary triggering the applicability of certain regulatory duties. It constitutes a spectrum that spans from individual companies to entire states, reshaping the global tech landscape. In this new AI world order, which has significant geostrategic implications, even ‘deployer states’ hold significant cards—if they play them right.

So, in the context of AI governance, it is important to remember: it's not just about the handful of tech giants creating these models. It's also about a complex web of developers, deployers, and states, all vying to shape the future of AI.

In the next post we will show how these complex value chains can provide avenues for states across the developer/deployer spectrum to shape the future of Global AI Governance.

Philipp Hacker is Professor for Law and Ethics of the Digital Society, European New School of Digital Studies.

Ramayya Krishnan is Dean, Heinz College of Information Systems and Public Policy and Ruth F. Cooper Professor Of Management Science And Information Systems, Carnegie Mellon University.

Marco Mauer is Researcher, European University Institute.

Share

With the support of