Faculty of law blogs / UNIVERSITY OF OXFORD

Machine Learning, Market Manipulation, and Collusion on Capital Markets: Why the ‘Black Box’ Matters

Author(s)

Alessio Azzutti
Research Associate, Centre for Banking & Finance Law, National University of Singapore; PhD candidate in Law, University of Hamburg
Wolf-Georg Ringe
Professor of Law and Finance, and Director of the Institute of Law & Economics, University of Hamburg
H. Siegfried Stiehl
Senior Professor at the Department of Computer Science, University of Hamburg

A growing body of legal research investigates the implications of financial technology and innovation for the application of existing financial law and regulation to basic public goals (eg consumer protection, competition, financial stability). One recurrent concern is that, wherever regulation and supervision fall short, financial technology and innovation could expose the global financial system to unprecedented and unexpected sources of risk, even to the point of jeopardising financial stability. Within this debate, the ethical and legal issues raised by humans delegating decision-making tasks to artificial intelligence (AI) are doubtless in the spotlight. Continuous and spectacular advances in AI methods have paved the way for increasingly autonomous algorithmic agents with superior-to-human capabilities in many real-life applications. Yet delegating decision-making to autonomous and opaque algorithms also raises fundamental questions, for instance about liability for AI misconduct.

Our recent paper contributes to this emerging scholarship by exploring the relationship between machine learning (ML) methods, algorithmic trading, and market abuse. Using the proprietary trading industry as a case study, we adopt an interdisciplinary approach that merges financial regulation, law & economics, and computational finance.

Over the last few decades, global capital markets have undergone a profound transformation led by the ‘algorithmic revolution’, fostered by both technological and regulatory innovation. The development and institutionalisation of algorithmic trading and markets have radically reshaped financial market infrastructures and services and changed how market participants interact and compete. In this fast-evolving techno-economic environment, AI applications for financial trading are today touted as a game-changer, holding the promise of sizeable efficiency gains for investment firms that develop and deploy these tools to enhance their business operations. Indeed, ML, AI’s most recent subfield, and Big Data are the two fundamental ingredients of the most sophisticated and cutting-edge algorithmic trading techniques and strategies.

Our study is dedicated to this new computational finance paradigm, which underpins increasingly autonomous AI trading agents. We review state-of-the-art developments and ongoing challenges in ML methods for financial trading. While today’s algorithmic trading systems still have a predominantly hybrid ‘human-machine’ nature, the computational finance literature provides early evidence that autonomous AI trading agents could emerge on capital markets in the near future. Thanks to the most innovative and sophisticated ML methods, AI trading systems have reached a tremendous level of analytical capability with increased autonomy, but this also has drawbacks. Specific ML methods (eg ‘deep learning’) can give rise to the so-called ‘black box’ problem, that is, the inability of AI developers and users to fully understand or predict an algorithm’s outcomes and behaviour. The ‘black box’ nature of certain AI trading applications is problematic from both a trustworthy-implementation and a compliance perspective. At least de jure, trading algorithms are required to produce predictable, controllable, and, not least, explainable trading behaviour in order to ensure the orderly functioning of markets.

Further, the fact that humans can use algorithms for unlawful purposes is nothing new in finance. Our focus, however, is on a novel scenario: autonomous AI trading agents that, thanks to self-learning capabilities, can discover both old and new forms of market abuse, including emerging risks of ‘tacit’ collusion, in a fully autonomous way, ie without being expressly programmed or instructed to do so by human experts. Building on findings from the computational finance and law & economics literature, we show that these risks are not only conceptually possible but more pressing than generally perceived. There may be segments of the vast network of global finance in which AI trading systems can find a techno-economic environment conducive to their successful but unlawful deployment. We also show that different scientific disciplines have researched the same ML paradigm, ‘deep reinforcement learning’, in parallel. On the one hand, computational finance shows that these methods are among the most successful implementations of ML-powered trading systems. On the other, competition law & economics scholars have recently used this class of algorithms as a benchmark to assess algorithms’ ability to sustain ‘tacit collusion’ on digital marketplaces. From a methodological standpoint, this triangulation of disciplines is a vital contribution to the literature: it expands our understanding of the relationship between increasingly autonomous algorithms and market abuse on global capital markets, and of how financial regulation should start approaching the related issues.
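To make the ‘tacit collusion’ benchmark concrete, the toy sketch below shows the kind of experiment run in the competition law & economics literature: two independent learning agents repeatedly set prices in a stylised duopoly and update their strategies only from observed profits, with no communication or instruction to collude. It uses simple tabular Q-learning rather than deep reinforcement learning, and the price grid, demand function, and learning parameters are all illustrative assumptions of ours, not the model studied in the paper.

```python
import random

# Illustrative toy, not the authors' model: two independent Q-learning
# "pricing agents" in a repeated duopoly. Each agent observes only its
# rival's last price and its own profit; nothing instructs them to collude.

PRICES = [1, 2, 3, 4, 5]           # discrete price grid (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def profit(own, rival):
    """Stylised demand: the cheaper firm captures the market; ties split it."""
    if own < rival:
        return own * 10
    if own == rival:
        return own * 5
    return 0

def train(episodes=20000, seed=0):
    rng = random.Random(seed)
    # One Q-table per agent; the state is the rival's last observed price.
    q = [{s: {p: 0.0 for p in PRICES} for s in PRICES} for _ in range(2)]
    state = [rng.choice(PRICES), rng.choice(PRICES)]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            s = state[1 - i]  # each agent conditions on the rival's last price
            if rng.random() < EPS:
                acts.append(rng.choice(PRICES))       # explore
            else:
                acts.append(max(q[i][s], key=q[i][s].get))  # exploit
        for i in range(2):
            s, a, s2 = state[1 - i], acts[i], acts[1 - i]
            r = profit(acts[i], acts[1 - i])
            # Standard Q-learning update toward reward plus discounted value.
            q[i][s][a] += ALPHA * (r + GAMMA * max(q[i][s2].values()) - q[i][s][a])
        state = acts
    return q, state

q_tables, final_prices = train()
print(final_prices)
```

The point of such benchmarks is that any supra-competitive pricing that emerges does so purely from reward feedback, which is precisely what makes the conduct hard to map onto liability concepts such as ‘intent’.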

With these risks in mind, we question the adequacy of existing market abuse regulations, enforcement mechanisms, and current governance frameworks for algorithmic trading in dealing with both the technical specificities and the additional risks of specific ML methods applied to financial trading. We show how the ‘black box’ nature of the most innovative ML-powered trading strategies can seriously undermine the application of existing market abuse regulations. Because these regulations rest on traditional liability concepts (such as ‘intent’ and ‘causation’), they can arguably cease to function in the presence of autonomous and ‘black box’ algorithms. In concluding, we discuss a number of policy initiatives adopted by regulators worldwide to curb some of the emerging risks of delegating cognitive agency to AI, in order to distil guiding principles for legal reform. Overall, we aim to foster a scientific debate on the interplay between AI and crime on capital markets and to promote a new research agenda at the intersection of financial regulation, economics, and computer science, so as to face the challenges posed by increasingly complex and sophisticated financial technology.

Alessio Azzutti is a Research Associate in the ‘Law, Finance, and Technology’ programme at the Institute of Law & Economics, University of Hamburg.

Wolf-Georg Ringe is a Professor of Law and Finance, and Director of the Institute of Law & Economics, University of Hamburg. He is also a Visiting Professor at the University of Oxford.

H. Siegfried Stiehl is a Senior Professor at the Department of Computer Science, University of Hamburg.

