
Challenges for AI-Enhanced Banking Supervision


Author(s)

Alessio Azzutti
Research Associate, Centre for Banking & Finance Law, National University of Singapore; PhD candidate in Law, University of Hamburg
Pedro Magalhães Batista
Lecturer in Commercial, Corporate, and Banking Law, University of Leeds; PhD candidate in Law, Goethe University Frankfurt

In today’s rapidly evolving technological landscape, banking supervisors worldwide are recognising the immense potential of artificial intelligence (AI) to enhance their efficiency and analytical capabilities. The European Central Bank (ECB) is no exception: it has acknowledged the opportunities presented by supervisory technology (SupTech) and has even established a dedicated SupTech Hub. However, the adoption of automated technologies in banking supervision raises critical questions of legality, transparency, and accountability, especially for the ECB as a public institution within the EU’s democratic order built on the rule of law.

In our working paper, titled ‘Navigating the Legal Landscape of AI-Enhanced Banking Supervision: Protecting EU Fundamental Rights and Ensuring Good Administration’, we delve into the legal implications of integrating AI within supervisory decision-making. Our focus is on how the adoption of AI, particularly machine learning (ML), may impact EU fundamental rights, specifically the right to good administration. As a normative framework, the concept of good administration informs our analysis by enabling us, for instance, to define the ECB’s reason-giving duty as a basis for the intelligibility, contestability, and reviewability of supervisory decisions.

In particular, we explore how this concept can guide the integration of AI and ML into supervisory processes and procedures. Examining the three main technical challenges associated with ML (ie, ‘fairness’, ‘automation’, and ‘transparency’), we identify the risks they pose to good administration, risks that could ultimately jeopardise the legality of AI-assisted supervisory decisions.

Drawing inspiration from the regulatory approach of the EU AI Act, we offer some preliminary ideas for regulating AI systems based on the risks they pose to good administration. We propose a risk-based taxonomy of AI systems built on four main risk factors: ‘scope of application’, ‘type of data’, ‘level of autonomy’, and ‘degree of opacity’ (see Figure 1 below). The resulting framework prioritises, on a proportionate basis, the technical-legal requirements of ‘transparency’, ‘auditability’, and ‘accountability’.

To illustrate, AI systems that pose minimal or no risk would face less stringent requirements, with particular attention to cybersecurity when personal data are processed. These systems are typically used for operational tasks that traditionally require manual human work, such as data collection and organisation, or as computational and visualisation tools based on deterministic methods.

In contrast, AI systems that carry inherently higher risks, whether because of some degree of opacity and autonomy or because they aid human experts in decision-making through recommendations or decisions involving the processing of personal data, would fall under the high-risk category. Such systems would be subject to more detailed and stringent requirements to ensure compliance with good administration, implemented through technical-legal safeguards such as algorithmic impact assessments, algorithmic transparency, and human-in-the-loop oversight.

Lastly, AI applications that present unacceptable risks to the legality of banking supervision should be banned. Such systems cannot be meaningfully controlled or understood by their users and would turn EU banking supervision into a black box.

Figure 1: A risk-based taxonomy of AI applications in banking supervision as derived from the EU AI Act proposal
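To make the taxonomy more concrete, the following is a minimal, hypothetical sketch of how the four risk factors might be combined into a classification rule. It is not code from the working paper: the class names, fields, thresholds, and the classify function are all illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal or no risk"        # light-touch requirements
    HIGH = "high risk"                    # detailed technical-legal safeguards
    UNACCEPTABLE = "unacceptable risk"    # banned outright


@dataclass
class AISystemProfile:
    """Illustrative encoding of the four risk factors (all fields hypothetical)."""
    scope: str           # 'scope of application': eg "operational" or "decision-support"
    personal_data: bool  # 'type of data': does the system process personal data?
    autonomy: int        # 'level of autonomy': 0 (deterministic tool) .. 3 (fully autonomous)
    opacity: int         # 'degree of opacity': 0 (fully interpretable) .. 3 (black box)


def classify(system: AISystemProfile) -> RiskTier:
    """Map a system profile to a risk tier, loosely following the taxonomy above."""
    # Systems that users cannot meaningfully control or understand are banned.
    if system.opacity >= 3 or system.autonomy >= 3:
        return RiskTier.UNACCEPTABLE
    # Some opacity and autonomy, or decision support involving personal data,
    # pushes a system into the high-risk tier.
    if (system.opacity > 0 and system.autonomy > 0) or (
        system.scope == "decision-support" and system.personal_data
    ):
        return RiskTier.HIGH
    # Deterministic tools for operational tasks remain minimal risk
    # (with cybersecurity safeguards where personal data are processed).
    return RiskTier.MINIMAL


# Example: a deterministic data-collection tool that handles personal data.
print(classify(AISystemProfile("operational", True, 0, 0)).value)  # -> "minimal or no risk"
```

In practice, such a mapping would of course be a matter of supervisory judgment and proportionality rather than a mechanical rule; the sketch merely shows how the four factors could jointly determine a system’s risk tier.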

The proposed risk-based regulatory approach aims to ensure that future AI-driven banking supervision aligns with the principles of good administration. We also discuss the role of various forms of ‘explainability’ in safeguarding high standards of good administration, particularly with regard to reason-giving requirements. We advocate for a holistic approach to AI explainability that considers the specific knowledge and psychological needs of various stakeholders, integrating traceability requirements throughout the AI lifecycle to ensure trustworthy AI adoption in banking supervision.

Our study contributes to the growing scientific literature at the intersection of AI law, administrative law, and central banking law. By providing insights into the legal implications of AI and ML adoption by financial supervisors, we aim to support policymakers, regulators, and industry professionals in navigating the complex relationship between AI, banking supervision, and fundamental rights. Our proposed framework underscores the significance of a balanced approach that upholds fundamental rights while harnessing the benefits of technological progress.


Alessio Azzutti is a Research Associate at the Centre for Banking & Finance Law, National University of Singapore, and a PhD candidate in Law at the University of Hamburg.

Pedro Magalhães Batista is a Lecturer in Commercial, Corporate, and Banking Law at the University of Leeds, and a PhD candidate in Law at Goethe University Frankfurt. 

Wolf-Georg Ringe is a Professor of Law & Finance and Director of the Institute of Law & Economics at the University of Hamburg, and is a Visiting Professor at Stanford Law School. 

