The policy discussion around whether and how to regulate Artificial Intelligence is gaining traction, especially in light of new technological developments such as the emergence of generative and general-purpose AI like ChatGPT. Two camps are irreconcilably opposed to each other: the pro-regulation camp, which advocates strict rules to address the risk of human rights violations and potential shortcomings in regulatory oversight; and the free-market camp, which argues that premature regulation and overly restrictive rules may stifle innovation.

It is time to consider a more innovative third way that combines the benefits of both approaches. A regulatory sandbox could be the best solution for handling AI in the current situation.

A regulatory sandbox is a controlled environment in which innovators and businesses can test and develop new AI technologies under reduced regulatory constraints. The idea behind the sandbox is to provide a safe space in which businesses and regulators work together to understand how new technologies can be developed and regulated in a responsible and ethical way.

A regulatory sandbox promises a number of advantages. First, it promotes innovation: AI is a rapidly evolving technology, and the regulatory environment has struggled to keep up. A sandbox allows new AI technologies to be developed in a controlled environment, reducing the risk of violating laws or regulations. This has proven to reduce the so-called ‘time to market’ for innovations, giving new businesses increased legal certainty and thereby leading to more innovation.

A related advantage is the speed of response to new technological developments. Classic legislative efforts such as the EU AI Act are very slow to be adopted: the Act was proposed in April 2021, is still making its way through the legislative process, and is not expected to become binding before 2025/26. Worse still, once a piece of legislation like this is adopted, it is extremely difficult to overhaul it to keep pace with new developments. In some ways, the AI Act is already outdated, as it was first conceived in a world without generative AI and chatbots such as ChatGPT. A sandbox, in contrast, is a flexible and responsive tool that can be adjusted quickly to take account of new challenges.

At the same time, the sandbox regime provides safeguards for consumer protection. AI systems have the potential to harm consumers, and a regulatory sandbox can help ensure that they are safe to use. Testing AI systems in a controlled environment allows potential risks to be identified and mitigated. This can help protect consumers and give them confidence in the technology being developed.

A sandbox further enables collaboration: it brings together regulators, businesses, and other stakeholders to work jointly on the development of AI technologies. This collaboration can lead to more effective and efficient regulation that balances the needs of innovation with public safety. The mutual learning process for regulators and regulatees is a win-win and can help build trust in the technology and increase adoption.

To be sure, there are a number of potential downsides to a regulatory sandbox that need to be addressed. Inadequate safeguards, limited scope, unforeseen consequences, and lack of consistency are all potential issues that need to be carefully considered in the design of the sandbox before its implementation.

The proposed AI Act does mention the idea of a sandbox, but only in passing. The proposed text reads more like a symbolic marketing exercise than a serious attempt to harness the potential of a sandbox. Article 53 of the proposed AI Act floats the idea without specifying exactly what such a sandbox would allow. If adopted in its current form, the text of the Regulation would merely permit EU Member States to introduce a local sandbox, not require one. This would not accomplish much: national authorities would have no opportunity to deviate from the requirements of the AI Act, or of EU legislation more generally, to create a truly innovative regulatory environment. What is needed instead is a real ‘experimentation clause’ that would allow the supervisory authority to apply the existing legal framework flexibly. Only then will the sandbox be truly attractive for companies, and only then will sincere dialogue and responsible guidance of artificial intelligence be possible.

Overall, a genuine AI sandbox can provide a valuable platform for developing new AI technologies while ensuring that they are safe and beneficial for society. There is certainly no ‘free lunch’: a truly functional sandbox is a costly affair, requiring adequate personnel and corresponding expertise to ensure that it operates as intended. But such costs seem small compared to the benefits that this new technology can bring us.

Wolf-Georg Ringe is Professor of Law and Finance and Director of the Institute of Law & Economics at the University of Hamburg, and Visiting Professor at the University of Oxford and at Stanford Law School.
