Why Europe Needs a ‘MiFID III’ for the Age of Artificial Intelligence
Artificial intelligence is no longer a peripheral tool in financial markets but is becoming embedded in their core infrastructure: from robo-advisors that automate suitability assessments to algorithms that shape market liquidity and price formation.
The growing reliance of financial markets on AI raises questions about whether current European Union regulations can address the operational risks it creates. In the EU, the ambitious cross-sectoral Artificial Intelligence Act (AI Act) regulates technological aspects, while MiFID II governs investment services. An in-depth analysis of the interaction between these two regimes reveals a worrying regulatory gap, pointing to a radical reform of MiFID (namely, a MiFID III) as the most appropriate way forward.
AI’s Growing Role in Investment Services
Unlike banking or insurance, investment services do not transfer financial risk to the intermediary: the risk remains with the client. This makes AI particularly consequential: when AI systems collect and process client data, match investors to financial products, or automate trading strategies, errors and biases may directly harm retail investors.
We identify three key domains in which AI now operates, along with the associated risks: (1) data management and profiling, where AI processes client information and market data; (2) high added-value services, such as investment advice and portfolio management, where suitability assessments are central; and (3) algorithmic trading, where AI shapes market structure, liquidity, and systemic risk.
Across these domains, AI brings not only efficiency gains, but also new vulnerabilities, especially when systems are opaque, difficult to audit, or trained on flawed data.
Three Core AI Risks
From this analysis, three composite risk categories emerge. First, data quality risk. AI systems depend on large volumes of data, but errors, biases, or outdated inputs can cascade through automated decision-making processes. When the same AI tools are widely deployed, these errors may create correlated, system-wide risks.
Second, suitability risk. In investment advice and portfolio management (which we term ‘high added-value services’, or HAVS), AI may autonomously—or quasi-autonomously—recommend products that do not align with a client’s risk tolerance or financial objectives. This problem is intensified by generative AI models, whose outputs tend to be non-deterministic and difficult to explain, raising concerns about accountability and client protection.
Third, market structure risk. In algorithmic trading, AI systems influence price discovery and market stability. These effects feed back into the quality of execution and the advice investors receive, blurring the line between market infrastructure and client-facing services.
Why MiFID II Falls Short for AI
MiFID II is built on a technology-neutral structure. Its rules focus on outcomes—acting in the client’s best interest, managing conflicts, and ensuring suitability—rather than on the tools used to achieve them. In principle, this neutrality allows MiFID II to accommodate technological change. In practice, however, it leaves crucial AI-specific issues unaddressed.
MiFID II contains no binding rules on AI model design, data governance, explainability, or traceability. Instead, regulators rely on non-binding guidance issued by the European Securities and Markets Authority (ESMA). While these guidelines encourage firms to ‘know their algorithms,’ document systems, and monitor outcomes, they lack enforceability and precision. Software developers, meanwhile, fall entirely outside the MiFID framework, even though their tools increasingly shape investor outcomes.
The Limits of the AI Act in Regulating AI in Investment Services
One might expect the EU’s AI Act to address this gap. Yet most AI systems used in investment services are not classified as ‘high-risk’ under the Act. Credit scoring and insurance pricing are covered; investment advice and portfolio management are not.
This exclusion has significant consequences. High-risk AI systems are subject to stringent requirements for data governance, testing, human oversight, documentation, and post-market monitoring. Because investment services fall outside the high-risk category, firms deploying AI for suitability assessments escape these obligations, remaining subject only to the AI Act’s general, horizontal rules.
The result is a regulatory mismatch. The AI Act is process-oriented and lifecycle-based, focusing on how AI systems are built and deployed. MiFID II is outcome-oriented and market-focused, assuming human agency at the centre of decision-making. Neither regime, on its own, adequately addresses AI-driven investment services.
The Case for MiFID III
Our paper evaluates two possible responses. One option would be to expand the AI Act’s high-risk classification to cover investment services explicitly. While tempting, this approach would be blunt and insufficiently tailored to the specific logic of financial regulation. Significant tensions already exist between the AI Act’s horizontal framework and sectoral rules in banking and finance, and simply adding investment services to the high-risk catalogue would exacerbate them.
We argue, instead, for a more focused solution: updating MiFID II itself through a comprehensive reform, effectively a ‘MiFID III.’ This would embed binding AI-specific safeguards directly into investment services law, aligned with the AI Act’s principles but adapted to financial markets.
Such a reform could harden existing ESMA guidance into enforceable rules, introducing clear requirements for AI governance, testing, traceability, and oversight of suitability assessments. Crucially, it would preserve MiFID’s client-centric focus while addressing, in an integrated fashion, the technical realities of AI-driven decision-making.
Europe is already revising its financial regulatory framework in response to the challenges of sustainability and retail investor protection. AI poses a similar structural challenge, one that cannot be resolved through soft law or piecemeal amendments.
A MiFID III designed for the AI era would not stifle innovation. On the contrary, by providing legal certainty and coherent standards, it could foster responsible adoption of AI, strengthen investor trust, and safeguard market integrity.
The authors’ paper can be found here.
Riccardo Ghetti is an Assistant Professor of Business Law at the University of Bologna.
Claudio Novelli is a Postdoctoral Researcher at the Yale Digital Ethics Center.
Philipp Hacker is a Professor for Law and Ethics of the Digital Society at the European New School of Digital Studies (ENS).
Luciano Floridi is a Professor in the Practice of Cognitive Science and the Founding Director of the Digital Ethics Center, Yale University.