
Regulating AI in Finance: How the EU AI Act Shapes Financial Technology’s Future

Author(s)

Maria Lucia Passador
Assistant Professor at Bocconi University – Department of Law.


Artificial Intelligence (AI) is reshaping industries at an unprecedented pace, yet its swift integration into critical sectors like finance raises profound regulatory issues. In response, the European Union's (EU) AI Act introduces a pioneering regulatory framework, setting the stage for transparent, accountable, and properly supervised AI use across the financial sector.

My paper critically examines the implications of the EU AI Act on the regulatory framework governing AI deployment within the banking and financial sectors. The analysis delves into the regulatory structure set forth by the AI Act, which categorizes AI applications by risk and mandates differentiated compliance standards, particularly for high-risk applications in critical areas like creditworthiness assessments. Furthermore, the article provides insights into AI governance and identifies complexities surrounding potential fragmentation among regulatory bodies, which may require closer coordination.

The central thesis is that the AI Act represents a balanced approach between promoting innovation and enforcing robust regulatory oversight, essential for maintaining financial stability without stifling technological advancements.

The paper also evaluates how this framework could inform similar regulatory developments in other jurisdictions, especially the United States, exploring both the merits and potential pitfalls of adopting analogous structures in different legal and technological landscapes.

Ultimately, the research underscores the AI Act's potential to drive regulatory discourse, fostering a rigorous model that balances innovation with public safety. It also highlights the critical role of adaptive regulatory measures in governing the AI future within the financial industry, positioning the EU as a potential global standard-setter for ethical AI governance.

Why the AI Act Matters in (and to) the Financial Sector

The integration of AI into the finance sector presents immense opportunities alongside unprecedented challenges. Applications range from customer profiling to fraud detection, but each raises critical ethical and regulatory issues, as well as vulnerabilities arising from the potential misuse of AI tools. Regulatory bodies are thus tasked with balancing innovation with safeguards to ensure AI serves financial stability and consumer trust.

A New Regulatory Blueprint

Adopted by the European Parliament in March 2024 and approved by the Council in May 2024, the AI Act marks the EU's commitment to creating a controlled yet advanced AI ecosystem.

At its essence, the Act aims to safeguard fundamental rights, prioritize human-centric approaches, and promote ethical practices in AI. It categorizes AI systems by risk level and establishes varied obligations for both providers (developers) and deployers (users). High-risk systems affecting vital sectors such as finance, healthcare, and public administration face stringent requirements: providers and deployers must adhere to rigorous standards in risk management, data governance, technical documentation, record-keeping, human oversight, accuracy, robustness, cybersecurity, and quality management to ensure responsible and accountable use of AI technologies. Inter alia, providers of high-risk AI models must disclose training data sources and maintain documentation for audit purposes, enhancing accountability. Notably, the AI Act prohibits ethically unacceptable AI applications, such as social scoring and manipulative AI.

AI in Financial Regulation: Current Landscape and Implications of the AI Act for Financial Institutions

The transformation of finance through AI traces back to pioneers such as Alan Turing and John McCarthy in the 1950s and has led to a domain now defined by machine learning and deep learning. This evolution enables algorithms to process data for predictions and classifications while exercising growing decision-making autonomy. AI's applications in finance, from customer service to fraud detection, carry implications and risks that demand close scrutiny from policymakers and regulators. Weighing AI's cybersecurity benefits against the new vulnerabilities it introduces is essential, as the technology's sophistication poses unique challenges for market participants. The literature identifies key areas of investigation, including robo-advisors, collective investment management, and algorithmic trading, each facing distinct regulatory frameworks, challenges, and responses. This underscores the need for ongoing assessment and adaptation of regulatory approaches to ensure responsible AI deployment in finance.

Under the AI Act, regulatory compliance necessitates significant operational changes. Compliance and transparency require financial entities to document AI system functionalities, manage risks, and monitor outcomes to prevent misuse. Additionally, data governance and consumer protection are paramount, as AI systems used in creditworthiness assessments carry risks of discrimination. Financial institutions must therefore ensure the equitable and impartial application of AI technologies. Lastly, compliance demands may further pressure smaller financial players, potentially leading to market consolidation as they struggle to meet regulatory requirements.

International Influence and the US Perspective

The AI Act’s regulatory approach has sparked global interest, positioning the EU as a potential standard setter in AI regulation and creating a blueprint for safe and responsible AI usage worldwide. For the US, where AI regulation has traditionally been less comprehensive, the EU framework offers a valuable point of reference. As shifts in US policy priorities emerge, particularly following the 2024 election, a more robust conversation about AI governance is expected. This could stem from a desire to align with global standards, address monopolistic concerns raised by industry leaders like Elon Musk, or adopt a deregulatory approach that fosters innovation. Yet, the EU approach presents a complex balance:

  1. Advantages: Implementing regulations similar to the EU’s would promote fairness, transparency, and accountability, particularly for high-risk applications.
  2. Potential for Harmonization: By aligning with global standards, the US could facilitate cross-border cooperation in AI deployment, contributing to unified regulation.
  3. Risks of Over-Regulation: An overly rigid regulatory environment might stifle innovation, slow market entry, and increase compliance costs. Given the fast-evolving nature of AI, excessively strict measures could hinder flexibility in adapting to technological advancements.

A balanced approach could draw on the EU's principles while maintaining flexibility, ensuring that the US remains competitive without compromising public trust and safety.

Conclusion

The paper argues that the AI Act is a groundbreaking piece of legislation that promotes responsible AI innovation while safeguarding financial stability. To stay competitive and compliant, financial institutions and AI developers must actively engage in understanding and implementing its requirements: proactive alignment with these standards will not only safeguard consumer interests but also drive innovation and foster trust within the global market. Staying abreast of this regulation is essential for non-EU countries, as it can serve as a model for their own future regulatory frameworks. Embracing this legislation will not only enhance competitiveness but also pave the way for a more responsible and innovative future in AI.


The author’s paper can be found here.

Maria Lucia Passador is an Assistant Professor of Corporate Law and Financial Markets Regulation at Bocconi University.

