The Hare-Tortoise Race in Law & Technology: Rethinking Algorithmic Trading Regulation for Effective AI Governance
Imagine a race between a flamboyant, fleet-footed hare and a clumsy yet determined tortoise. In this modern twist on Aesop's classic fable, the tortoise—plodding behind with tenacity despite its evident struggle—symbolises the weary, measured pace at which law adapts and evolves, especially financial regulation, which too often reacts only when compelled by crises. By contrast, the hare—boldly dashing ahead and leaving onlookers in awe—embodies the energetic, breakneck, and disruptive force of technological innovation (cf. Bennett Moses, 2011, 2015). In this symbolic ‘race’, the true challenge for the law is not merely to catch up. Its real test lies in its ability to be questioned by society and, if necessary, to be reinvented in response to an evolving socio-technical context (cf. Bennett Moses, 2016). This is exactly what we are witnessing in today's capital markets as a result of AI adoption.
Working within this conceptual framework, I reflect on the ongoing struggle between EU financial regulation and AI trading technology in a recent article published in the Banking & Finance Law Review (preprint available on SSRN). Motivated by mounting attention—and, in some cases, apprehension—among regulatory authorities worldwide, the piece examines the risks to markets—and, by extension, to society—stemming from financial AI applications. Powered by machine learning, and particularly its subfield of deep learning, the most advanced AI trading systems are achieving greater autonomy, capability, and performance. At the same time, widespread industry adoption intensifies algorithmic interactions and reinforces the interconnected nature of capital markets. This trend, in turn, is partly responsible for altering the very functioning of capital markets as a ‘complex system’ (cf. Azzutti, Ringe, & Stiehl, 2023; Castellano, 2024; Dell’Erba, 2024). In light of the widening asymmetry between technological ‘progress’ and regulatory adaptation, there is an urgent need to diagnose the state of AI governance in algorithmic trading. To that end, the article makes five closely related scholarly contributions.
1. Reconceptualising Complexity in Finance Through Three AI Trading Generations
Building on an extensive review of the Computational Finance literature conducted during my PhD project at Hamburg University, the article delineates three generations of AI in financial trading: (i) ‘Good Old-Fashioned AI’ or ‘GOFAI’, (ii) the ‘First ML Era’, and (iii) ‘Deep Computational Finance’. This refined taxonomy maps the evolution of AI-based trading technology, reflecting both the growing sophistication of AI methods and the expanding availability of high-performance computing infrastructure. The resulting picture is one of escalating technological and market complexity with each successive AI generation (see Table 1).
Table 1: The Three AI Generations in Trading
2. A Novel Taxonomy of AI-Related Market Manipulation
By highlighting the positive correlation between market complexity and the risks linked to technological advances, the article proposes an improved taxonomy of the forms of market manipulation associated with AI use, abuse, and misuse (cf. Blauth, Gstrein, & Zwitter, 2022). This taxonomy distinguishes three scenarios: (i) ‘AI-assisted market manipulation’, (ii) ‘AI-enabled market manipulation’, and (iii) ‘AI-dependent market manipulation’. In the first two cases, AI functions as a tool supporting human agents in illicit activities, whereas in the third scenario human intent may be less clear because of the technical and methodological characteristics of the advanced systems employed, particularly their ‘black-box’ nature.
3. Timeline Analysis of Regulatory Lag
Observing that regulatory interventions typically follow market and technological developments, the article conducts a straightforward timeline analysis (see Figure 1).
Figure 1: Comparison between regulatory and technological developments
As this visual representation illustrates, financial regulation has consistently trailed innovations in trading technology. Although somewhat simplistic, this historical perspective may help shed light on the pronounced asymmetry between technological progress and regulatory adaptation—a gap that appears to have peaked today, roughly a decade after the last major reform in this area (ie, MiFID II/MiFIR and MAR/MAD). Indeed, the impact of AI on the current regulatory regimes governing algorithmic trading and market manipulation has increasingly captured the interest of EU scholars (eg, Martins Pereira, 2020; Azzutti, Ringe, & Stiehl, 2021; Raschner, 2021; Azzutti, 2022; Azzutti, 2023; Azzutti, Ringe, & Stiehl, 2023; Annunziata, 2023).
4. A Comparative Analysis of Regulatory Frameworks: MiFID II vs AI Act
Complementing the literature on the intersections of the AI Act with other pieces of EU legislation (eg, Hacker, 2024), the paper compares the regulatory requirements imposed on financial institutions as ‘deployers’ of algorithmic trading systems under MiFID II with those governing ‘providers’ of high-risk AI systems under the EU AI Act. Starting from the premise that capital markets trading should be considered a ‘high-risk’ domain for society, this comparative analysis exposes critical gaps between the two regimes. The paper concludes that the AI Act’s requirements—particularly those pertaining to (i) ‘risk management’ (Art. 9), (ii) ‘data governance’ (Art. 10), (iii) ‘technical documentation’ (Art. 11), (iv) ‘transparency’ (Art. 13), and (v) ‘human oversight’ (Art. 14)—are more detailed and comprehensive than the corresponding requirements found in Art. 17(2) MiFID II.
5. A Proposal for Improved AI Governance in Algorithmic Trading
Drawing on a proposal advanced in an earlier study, the article outlines a blueprint for enhanced AI regulation in algorithmic trading. Extending the regulatory approach of the AI Act to the algorithmic trading domain, it advocates risk-based regulation of financial AI applications. Such an approach, however, would require regulators to be able to define and rank, without legal ambiguity, different AI applications according to the specific risks associated with them. As an initial proposal, the article suggests evaluating AI systems along three key dimensions: (i) ‘AI Methods’, (ii) ‘AI Capability’, and (iii) ‘AI Materiality’ (see Schmid et al., 2021).
The ultimate goal of this paper is to remind readers that, in the end, it is the tortoise (the ‘law’) that leads the race despite the many challenges along the way. Achieving this objective starts by raising awareness and sparking debate about the challenges of AI governance in capital markets trading.
Alessio Azzutti is a Lecturer in Law & Technology (FinTech) at the University of Glasgow.
The author’s article can be found here.