Faculty of law blogs / UNIVERSITY OF OXFORD

Solving the AI Regulation’s Rhetorics

Author(s)

Renata Thiébaut
Professor at Gisma University of Applied Sciences, Germany and COO of Green Proposition


As the United Nations and other international organizations, such as the Organization for Economic Co-operation and Development (OECD), began placing the global governance of artificial intelligence (AI) on their agendas, countries have rushed to release policies, initiate legal amendments, and promulgate new laws to demonstrate their commitment. Here are two approaches lawmakers can adopt to address this issue.

Lately, numerous jurisdictions have amended existing legislation and introduced new national policies to address artificial intelligence, acknowledging its potential advantages and risks. Brazil, Nigeria, Singapore, and Saudi Arabia illustrate the diversity of approaches taken in different regions. While both law-making and law-amendment processes typically involve multiple stakeholders or branches of state power in a rather lengthy process, there has been a broader dissemination of policies and national directives, which often do not require formal legislative procedure. Overall, these national ‘documents’ assess needs, identify threats, and determine how individual jurisdictions should prepare for forthcoming regulations. National directives are often non-binding, with limited or no legal enforceability, serving as guidelines for setting industry standards. Binding regulations, on the other hand, are precise in their content and often include liabilities and remedies for breach, including those caused by emerging technologies.

Governments have faced increasing pressure to demonstrate their commitment to AI governance and its ethical nuances, mostly because of uncertainties surrounding the actual threats that AI may pose to society. As a result, many of them opted to release national directives: the OECD’s public repository has tracked over a thousand AI policy initiatives from 69 countries. China, by contrast, has issued several pieces of legislation addressing AI. The National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration jointly issued the ‘Interim Measures to Regulate Generative AI’ in 2023, which came into force later the same year and aims to establish a framework for responsible AI innovation while mitigating the risks associated with generative AI technologies. In addition, the draft regulations titled Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence (AI) Service, which establish security measures for training data, and the forthcoming AI Law, which is not yet in force, would together form some of the most advanced AI-related legislation in the world. Similarly, Canada has initiated the development of regulations under its Artificial Intelligence and Data Act, which seeks to establish a framework for responsible AI deployment.

The European Union published the first comprehensive legislation on AI, the EU AI Act, which adopts a risk-based approach with specific compliance processes for higher-risk systems. The Act’s key provisions include the prohibition of certain AI technologies, extraterritorial application, and non-compliance penalties. The EU AI Act identifies several technologies as prohibited due to their significant risks to fundamental rights and safety, including social scoring systems that evaluate individuals based on their behaviour and reputation, real-time remote biometric identification technologies such as facial recognition used in public spaces, manipulative systems that exploit vulnerabilities in specific groups, and other types of intrusive surveillance systems. Regarding extraterritoriality, the Act applies to any foreign company operating within the European Union as well as to EU-based companies, as stated in Article 2(1)(c). Finally, an important element of the Act's enforceability is the imposition of a maximum financial penalty of up to €35 million or 7 percent of the company’s annual turnover, whichever is higher, for non-compliance, as detailed in Article 99, Chapter XII (penalties). The Act has not yet been fully implemented, but it is safe to assume that it will have outcomes similar to those of the EU General Data Protection Regulation (GDPR).

Evaluating the main characteristics of the EU AI Act is essential, as it may significantly influence the formulation of future legislation in other jurisdictions, setting a precedent within the AI regulatory environment. Australia and the United Kingdom, for instance, have discussed adopting a risk-based approach in their future regulations, potentially inspired by the EU AI Act. Governments must, however, consider the specific context of the European Union in the field of artificial intelligence, where many companies operate in service-oriented economies, which contrasts strongly with that of emerging economies.

The need for binding legislation rests on the fact that companies engaged in AI-related activities must assume risks and understand the legal implications of breach. The rule of law is one of the main principles that validates these binding documents: because AI technologies can lead to significant ethical and other challenges, it ensures that all parties are held accountable.

To establish effective legal mechanisms for regulating AI, two approaches can be adopted. The first involves amending existing legislation, including laws related to data protection, cybersecurity, consumer protection, intellectual property, and other areas that may be affected or influenced by AI. In many jurisdictions, these laws are outdated and do not adequately address the complexities and nuances introduced by digitalisation or new technologies. Singapore, Canada, and Japan are among the jurisdictions that have taken steps to include AI-related provisions in their data protection regulations.

Amending existing regulations does not mean simply inserting new terminology and definitions into the preamble and scope of the law. On the contrary, the amendment process must begin with a thorough evaluation of which laws need to be revised, while also identifying their gaps in relation to any existing policies on AI. First, a clear definition of AI must be included to eliminate ambiguity, together with the ethical, societal, and democratic values required for its responsible use, as addressed in international agreements. Second, provisions should contain specific content regarding AI with both negative and positive covenants, including, for instance, principles addressing algorithmic bias, transparency, and accountability. Finally, compliance mechanisms and penalties should be established based on the severity of harm, specifically for breaches related to AI, particularly where existing laws lack relevant provisions or require amendment.

Subsequently, a public consultation mechanism should be implemented to solicit input from experts, civil society, and the private sector. This phase is essential for bridging the gap between regulations and business needs, underscoring that compliance not only mitigates potential harms but also enhances legal security and trust, thereby promoting a conducive environment for business growth. Lawmakers should also conduct a regulatory impact assessment evaluating the potential economic, social, and environmental effects of the proposed amendments.

The second approach involves the adoption of ‘shell legislation’. The concept refers to laws that are typically broad and contain general principles rather than prescriptive rules. They are intended for temporary application while more detailed provisions are developed. In the context of artificial intelligence, shell legislation can be modified progressively to address new concerns and risks, allowing for frequent revisions without the need to rewrite the law in its entirety. Initially principle-based, with a broad framework, it can evolve into a more detailed and enforceable law as needed, including through the introduction of penalties for violations. The process of passing a shell law does not differ from that of other laws, so no special procedures need apply.

Both approaches are equally important and should be addressed with urgency, but special attention should be given to amending laws and promulgating AI-related legislation accordingly. National directives set out short- and long-term goals to be achieved, but they do not suffice to curb the potential harms of these technologies, which may include privacy breaches, algorithmic bias, and surveillance, among others. Companies should be held accountable for the misuse of AI. Implementing effective regulatory measures will establish a robust foundation for ethics and safety in the application of artificial intelligence, while also being tailored to the specific needs of individual jurisdictions, thereby solving the rhetoric of AI regulation.


Dr. Renata Thiébaut is a Professor at Gisma University of Applied Sciences and the COO of Green Proposition, a consulting firm.
