Global AI Governance—Part 2: The Power of the Deployer States
In a previous blog post, we explained the theoretical distinction between developer and deployer states in global AI governance: foundation models are developed in developer states, which at the moment are primarily the United States and China. Deployer states are home to businesses that use existing foundation models and adapt them to fit specific applications. Of course, both developer states are also deployer states. However, fine-tuning and other specification techniques increasingly blur the lines between deployers and developers, and between the respective states where such work is conducted. Nevertheless, the companies that build, and in a technical sense use, the most advanced AI systems are primarily headquartered in the US, China, France, and very few other states.
The process of regulating access to prized frontier foundation models has deep implications for industrial policy, economic development, and geostrategy. We have likened it to a high-stakes, multidimensional game of chess, played by many actors simultaneously. While developer states seem, at first glance, to have the upper hand, this post will first show that deployer states (essentially, the rest of the world) retain the capacity to regulate AI effectively where the deployment of foundation models is concerned. Since model deployment is decentralized, there is a real chance that even small jurisdictions can enforce their own regulatory regimes, unlike in the case of platform regulation. We will then argue that deployer states can cooperate to strategically set global standards for AI development, allowing them to indirectly assert regulatory power over model developers. Finally, the post will explain which measures might factor into the effectiveness of local regulation.
Deployer States Can Enforce Their Rules
The best regulatory frameworks are nothing without enforcement. In the digital sector in particular, laws are often considered to suffer from enforcement deficits. The GDPR is a prominent example of a piece of legislation that has, for a long time, failed to live up to its promise due to under-enforcement against big tech companies.
Filippo Lancieri has argued that data protection regimes are undermined by information asymmetries and market power. Arguably, these problems do not occur with the same severity at the deployer level of the AI supply chain. While developing foundation models requires an extraordinary amount of resources, fine-tuning and deployment do not differ significantly from regular application development. Just as in the software market, we do not expect market power to accumulate in the hands of only a few big companies that develop AI-based software solutions for their business partners. Certainly, there will be leaders in this AI applications market. However, we also expect small and medium-sized enterprises to have a chance to develop applications that are powered by AI and tailored to the needs of their customers in specific industries.
This lack of accumulated market power has several consequences for deployer states' capability to enforce their laws. First, because deployers are located within their jurisdiction, these states do not depend on international cooperation to enforce sanctions against them. Second, the deployer market will not be composed of a few big players that share an interest in lenient enforcement. Instead, deployers and developers adhering to local standards will have an incentive to ensure that their competitors do not gain an unfair advantage by ignoring relevant legislation. Furthermore, deployers clearly have an interest in the regulation of upstream developers, both to benefit from safer and more rigorously vetted products and to share liability. Recent tendencies indicate a growing gap between general-purpose AI providers and other stakeholders, including deployers. This allows deployer states to rely on private enforcement of their rules to increase enforcement efficiency.
A lack of market power in deployer markets also mitigates information asymmetries between deployers and their regulators. Deployer countries may have to compete less with a few big, well-paying companies for scarce tech talent. Instead, they might be able to offer attractive public-sector positions to individuals who are interested in AI governance and have the skill set required to understand the deployment stage of AI application development. This also minimizes the risk of revolving doors between regulators and big players, so another obstacle to effective regulation can be circumvented. The EU has, quite wisely, centralized the enforcement of the general-purpose AI provisions of the AI Act at the European level, with the AI Office. While this does not eliminate the recruitment problems, it does mitigate them compared to enforcing the GPAI rules in each and every Member State.
This shows that, unlike in other areas of tech regulation, deployer countries can regulate effectively when they can guarantee sufficiently diverse and competitive deployer markets. Any regulatory strategy should therefore focus first and foremost on avoiding market concentration. If states succeed in this, they will face smaller obstacles than in other areas of tech regulation, which suffer from an enforcement deficit against big players. Conversely, it will remain challenging for deployer states to effectively regulate GPAI model providers, as competition for talent, revolving doors, and information asymmetries remain issues of prime concern in this area. Nonetheless, it is clearly worth a try.
Measures for Successful Local Regulation
A number of measures might contribute to the success of a local regulatory framework.
First, states should review and update their ‘horizontal’, ie sector-transcending, and ‘vertical’, ie sectoral, regulations. These regulations target specific businesses and do not aim at regulating AI as such. An example of a horizontal regulation in need of an update is the GDPR, which is representative of data protection rules across the globe. It is a technology-neutral regulatory instrument that already covers AI insofar as AI relies on the processing of personal data. However, there is a need for specific rules and safe harbors to facilitate the use of personal data in AI training for socially beneficial purposes. Data protection might, in some cases, hinder the implementation of anti-discrimination metrics at the model level: to implement such metrics, developers usually need to know which protected groups individuals belong to. Acquiring this information may in many cases be prohibited under rules for the protection of sensitive data (eg under Art. 9 GDPR). Under EU law, the AI Act will solve this issue only partially, by allowing the use of sensitive data to mitigate discrimination in high-risk AI systems. For ‘normal’ AI systems not classified as high-risk, the problem will persist. Similar challenges arise in many vertical frameworks, such as in health, automotive, or financial law.
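To illustrate the tension, consider a minimal sketch in Python, using entirely hypothetical data: even a simple fairness check such as the demographic-parity gap cannot be computed without access to each individual's protected-group label, ie precisely the kind of sensitive data whose processing Art. 9 GDPR restricts.

# Minimal sketch with made-up data: measuring a demographic-parity gap
# requires knowing each individual's protected-group membership.
predictions = [1, 0, 1, 1, 0, 1, 0, 1]                       # 1 = favourable decision
protected_group = ["A", "A", "A", "B", "B", "B", "B", "A"]   # sensitive attribute

def approval_rate(group):
    # Share of favourable decisions within one protected group.
    outcomes = [p for p, g in zip(predictions, protected_group) if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic-parity gap: difference in favourable-decision rates between groups.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Rate for group A: {approval_rate('A'):.2f}")
print(f"Rate for group B: {approval_rate('B'):.2f}")
print(f"Demographic-parity gap: {gap:.2f}")

Without the protected_group labels, the gap simply cannot be calculated, which is why developers seeking to audit their models for discrimination may need a legal basis to process such data in the first place.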
Second, states will want to increase their enforcement capacities and the competence of national courts and agencies. The success of local AI regulation relies on a country's dedication to improving the expertise of its regulatory bodies and the ability of its judicial system to handle complex cases effectively. At the time of writing, budgets for enforcement agencies differ strikingly between states: while only 46.5 million euros are foreseen for the new EU AI Office, the UK alone has earmarked 116 million euros for its AI Safety Institute. Courts, too, will need the resources and knowledge to speedily resolve AI-related disputes. In particular, when states decide to rely on private enforcement, they need to make it attractive to file private lawsuits.
Third, as AI technologies and regulations simultaneously advance, countries must create public goods in the AI space, like verification platforms and safety evaluation tools, to address regulatory challenges and support responsible deployment. Open-source tools like AI Verify and GitHub’s AI sandbox enable developers to test and validate models, but they should be complemented by public resources. Government initiatives, such as the UK AI Safety Institute’s evaluations, further enhance safety protocols for advanced AI systems. Investing in such public resources helps businesses navigate regulations at limited cost and promotes safe AI use, but it also potentially fosters international cooperation, an issue to which we now turn again.
Jointly Regulating the Developers through Standard Setting
Even if regulating deployers is a real possibility for many states, not all risks that emanate from general-purpose AI can be mitigated at the deployer level. Deployer countries thus have an interest in also effectively regulating developers of foundation models. Enforcing their rules against the few existing big players operating from abroad is, however, a significantly bigger challenge than regulating local deployers.
Of course, states can try to ‘climb the pyramid’ and attempt to become developer countries themselves. They can invest in research and development and try to cultivate AI innovation ecosystems that eventually lead to the development of foundation models catering to their local markets and addressing local and regional needs (see, eg, Singapore’s model serving the needs of the Southeast Asian region). Besides attempts at developing their own models, states can try to amass significant demand capacities, as buyers of AI systems, to gain negotiating power. Yet whether these investments will pay off remains uncertain.
In any event, the past has shown that, despite almost unbounded jurisdiction in the digital sphere, only a handful of states, which can be called ‘regulatory oligarchs’, have enough power to effectively enforce their rules abroad. This calls for strategies to assert influence over developers other than the mere enactment of laws with extraterritorial reach. In our view, deployer states can only successfully influence developers if they cooperate amongst one another. More specifically, we propose forming coalitions that use international standard-setting to shape global development and governance standards that suit their needs.
The importance of standards must not be underestimated. The AI Act already incorporates them into the very core of its governance framework. Art. 40 of the AI Act allows compliance with the Act to be achieved by aligning with established norms from reputable standard-setting bodies such as ISO, IEEE, or CEN/CENELEC. Several of these organizations are currently formulating such standards, and compliance with them leads to a presumption of compliance with the AI Act. This strategic alignment offers the EU significant leverage in AI governance. By influencing these standards, the EU and other deployer states can position themselves as critical players in setting global norms for AI. By requiring American and Chinese firms that seek access to European markets to comply with these standards, the EU can exert a relatively high level of regulatory control. The standards’ impact is reinforced by the absence of equivalent laws in other major jurisdictions, such as the US. From a pragmatic standpoint, this approach also reduces compliance costs for small and medium-sized enterprises.
The AI Act is therefore an example of deployer states cooperating to exert influence over developers and developer states. Admittedly, the EU ranks relatively high in the hierarchy of regulators, and other countries might find it difficult to form similarly strong coalitions to exert influence through standards. Still, because standard-setting organizations usually allow each national standardization body to cast one vote, and because the discussions within them are technical in nature, countries can achieve greater influence there than by merely enacting their own laws with extraterritorial reach.
Conclusion
As we have shown, even states that merely host deployers will likely be able to enact local regulatory frameworks to achieve their regulatory aims. To do so, they should comprehensively review their local regulations and increase their enforcement capacities. International standards are a good way to influence the rules that apply to AI developers. However, in that area a hierarchy between jurisdictions will likely persist; hence, not all states have an equal seat at the table at which the chess game over the regulation of foundation model providers is being played.
Philipp Hacker is Professor for Law and Ethics of the Digital Society, European New School of Digital Studies.
Ramayya Krishnan is Dean, Heinz College of Information Systems and Public Policy, and Ruth F. Cooper Professor of Management Science and Information Systems, Carnegie Mellon University.
Marco Mauer is Researcher, European University Institute.