Faculty of Law Blogs / University of Oxford

The ECB's Approach to AI Governance under the AI Act: Challenges and Opportunities of AI Integration in Banking Supervision

Author(s)

Maria Lucia Passador
Assistant Professor at Bocconi University – Department of Law.


Artificial Intelligence (‘AI’) has emerged as a transformative force across various sectors, notably within financial markets and corporate law. The European Union’s recent legislative efforts, particularly the EU Artificial Intelligence Act (‘AI Act’), signal a profound shift in the regulatory landscape governing AI in finance. My new paper aims to elucidate the implications of the AI Act for the European Central Bank (‘ECB’) and its oversight responsibilities within the EU banking sector. It delves into the intersections between the AI Act and prudential supervision, exploring the roles and responsibilities that the regulation delineates, alongside the collaborative dynamics it fosters. By scrutinising these elements, my paper seeks to provide a comprehensive understanding of the new regulatory framework and its impact on financial markets.

The paper advocates for a unified supervisory approach that integrates the oversight of digital innovation with traditional prudential supervision, thus ensuring both ethical standards and financial stability. This approach requires collaboration between the ECB and the newly established AI Office and AI Board, the latter composed of representatives from EU Member States and the European Data Protection Board. These AI bodies aim to promote best practices and facilitate knowledge-sharing among regulators, banks, and AI providers. In doing so, they support the creation of a cohesive market for AI that fosters innovation while maintaining high standards of safety and ethical conduct. The proposed collaboration ensures that banks can adopt digital technologies safely, in line with the EU’s Digital Single Market Strategy (‘DSMS’). Moreover, the AI Office will assist the ECB in identifying high-risk AI systems and ensuring their compliance with the AI Act, addressing issues such as fairness, transparency, accountability, privacy, and non-discrimination. This dual focus on prudential and digital supervision exemplifies a balanced approach to regulatory oversight, promoting a stable yet innovative financial environment.

The paper addresses several pivotal research questions, which are crucial for understanding the AI Act’s impact on the ECB’s oversight responsibilities. Firstly, what specific obligations does the AI Act impose on providers and users of high-risk AI systems within the financial sector, and is the ECB required to apply the AI Act in its supervision? Secondly, what role does the ECB play within the AI governance framework established by the AI Act? This question investigates the collaborative dynamics between the ECB, the AI Office, and the AI Board, and how these interactions shape AI governance within the financial sector. Thirdly, what challenges and opportunities does the integration of AI into banking supervision present? Lastly, how can a unified supervisory approach be developed to ensure ethical AI deployment while maintaining financial stability?

Upholding Human Rights in EU Policy: The Intersection of AI Regulation and Financial Oversight

The AI Act aims to ensure AI technologies are trustworthy, human-centric, and protective of fundamental rights, echoing themes already addressed in the literature concerning the principle of good administration, but from a human rights perspective. The AI Office is fundamentally designed to uphold and protect human rights, ensuring that AI technologies developed and deployed within the EU adhere to these principles, emphasizing transparency, accountability, and the prevention of potential harms to fundamental rights. Conversely, and this is the core focus of the first part of my research, the ECB has a primary mandate that revolves around monetary policy and financial stability. While these goals are prima facie distinct from those of the AI Office, the ECB’s actions also indirectly impact fundamental rights. The European Court of Justice’s ruling in the Steinhoff case exemplifies this intersection. In this case, bondholders argued that the ECB failed to address the illegal nature of the Greek debt restructuring, claiming it breached various legal principles, including fundamental rights. The Court held that while the ECB has broad discretion, it must still act within the bounds of fundamental rights. Hence the ECB’s actions are not entirely separate from the AI Office’s focus on human rights.

The Impact of the AI Act on ECB’s Supervisory Framework

The AI Act delineates AI systems into categories such as high-risk and prohibited applications. High-risk AI systems, particularly those deployed in critical sectors like finance, must adhere to stringent requirements for risk management, data governance, human oversight, and cybersecurity. This classification ensures that AI applications in sensitive areas undergo rigorous scrutiny to prevent misuse and uphold ethical standards.

The AI Act does not explicitly include the ECB’s activities or mandate, nor does the ECB’s legal framework specifically mention the AI Office, as these two entities deal with different areas of regulation and supervision. However, certain functions of the AI Office might indirectly affect or be affected by the ECB’s responsibilities in areas such as financial stability, innovation, and policies on international cooperation.

Under Article 4(3) of the Single Supervisory Mechanism (‘SSM’) Regulation, which requires the ECB to apply all relevant Union law in its supervisory tasks, the AI Act could be considered ‘relevant Union law’ insofar as it relates to the prudential supervision of AI applications within the banking sector. This would entail applying stringent requirements for transparency, accountability, and risk management to providers and users of high-risk AI systems, which are crucial for the financial sector. Given its relevance to the ECB’s supervisory responsibilities under the SSM Regulation, the ECB will need to incorporate the AI Act’s requirements into its supervisory activities, thereby ensuring a cohesive and robust supervisory approach that aligns with the broader objectives of financial stability and the protection of fundamental rights. For example, it can achieve this by aligning the AI Act’s data governance requirements with the governance provisions of the Capital Requirements Directive (‘CRD’).

The AI Act's relevance within the ECB’s supervisory framework is further underscored by its specific attention to high-risk AI systems, particularly prominent in the financial sector for tasks such as credit risk assessment, fraud detection, and algorithmic trading. The establishment of the AI Office and the AI Board under the Act introduces governance mechanisms that promote a unified approach to regulating and overseeing AI. This framework could enhance the ECB’s capacity to incorporate AI-specific considerations into its prudential supervision, addressing both the opportunities and risks associated with AI in banking. Moreover, the AI Act’s focus on procedural safeguards to mitigate biases and ensure transparency aligns with the ECB’s mandate to maintain financial stability and safeguard the integrity of the banking system.

It is noteworthy that, as a banking supervisor, the ECB presently evaluates banks’ systems and IT governance arrangements, recognizing that deficiencies in these areas can impact the safety and stability of individual banks. Given the AI Act’s shared focus on risk, there exists potential for alignment in how risks are assessed and managed, thereby indirectly linking the functions of the AI Office with the ECB’s supervisory responsibilities.

Bridging the Gap: AI Offices and ECB Synergy

The synergy between the AI Office and the ECB is crucial for reconciling AI innovations with the protection of fundamental rights. My paper makes several policy recommendations to enhance the regulatory framework for AI systems:

  • Coordinated efforts are recommended to ensure that AI systems comply with both AI-specific regulations and broader financial laws. One key instrument in this regard is ‘structured dialogues’, ie systematic exchanges of information, data, and insights that ensure transparency, accountability, and adherence to regulatory standards.
  • Clear delineation of roles between the AI Office and the ECB prevents overlap and confusion, enhancing regulatory efficiency. Establishing distinct competencies would help maintain a balanced approach to AI governance and financial supervision.
  • Developing effective collaboration strategies between the AI Office, the ECB, and other stakeholders is essential for harmonizing AI regulations and financial supervision. These strategies include regular communication, joint initiatives, and shared resources to address common challenges.

Practical Impacts and Future Outlook

The AI Act’s implementation will significantly impact financial institutions, requiring them to adapt to new regulatory requirements and enhance their governance frameworks. Compliance with the AI Act will also drive improvements in risk management and operational efficiency.

The paper underscores the necessity for coordinated efforts between the ECB, the AI Board, and the AI Office to ensure effective AI governance within the EU banking sector. This involves establishing a clear communication framework that allows for the seamless exchange of information and best practices. The ECB could play a pivotal role in credit scoring by leveraging its expertise in financial stability and risk management to enhance the transparency and accountability of AI systems used in this domain. By collaborating closely with the AI Office, the ECB can ensure that credit scoring algorithms adhere to stringent standards for fairness, accuracy, and non-discrimination. My paper proposes that the ECB take an active role in overseeing the implementation of these standards, conducting regular audits and evaluations of AI systems to mitigate potential biases and systemic risks. Moreover, the ECB can contribute to developing regulatory sandboxes and real-world testing environments where AI applications in credit scoring can be safely and effectively assessed. This proactive approach would not only bolster the integrity of credit scoring practices but also enhance consumer trust and protect fundamental rights, thereby achieving a more balanced and comprehensive regulatory framework for AI in finance.

Policymakers could further refine AI regulations and support financial institutions in their compliance efforts. Strategic policy recommendations include fostering innovation, ensuring regulatory clarity, and promoting international cooperation.

The evolving landscape of AI and financial regulation presents numerous research opportunities. Future studies should explore the long-term impacts of the AI Act, the effectiveness of AI governance frameworks, and the role of emerging technologies in financial supervision.

Maria Lucia Passador is Assistant Professor at Bocconi University – Department of Law.

The paper is available here.

