The rapid rise of artificial intelligence (AI) and big data is fundamentally reshaping today’s capital markets. As we examine in detail in our new ECGI Working Paper, the adoption of increasingly autonomous, self-learning systems challenges the core principles and legal frameworks underpinning the EU’s Market Abuse Regulation (MAR), which was originally designed with humans and rule-based trading algorithms in mind.
This post summarizes our key insights, exploring how autonomous AI reshapes insider trading law, enforcement, and disclosure duties in the EU.
From Automated to Autonomous: The Evolution of AI in Capital Markets
Traditionally, algorithmic trading relied on deterministic ‘if-then’ logic, that is, on predefined rules encoded by humans. Today, advanced AI, especially deep learning and reinforcement learning models, can independently analyze vast, diverse datasets, including social media, satellite images, and alternative data sources. These AI systems learn from complex patterns and execute trades without direct human intervention, granting institutional players significant advantages in speed and information-processing power. While this evolution promises efficiency gains and enhanced market liquidity, it also raises new regulatory risks, including what some have termed a ‘structural insider advantage’. In practice, this means that AI users could process and exploit hidden information more quickly and comprehensively than other market participants, thereby gaining a trading advantage.
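To make this contrast concrete, consider the following minimal Python sketch (all names are hypothetical, and the ‘learned’ policy is reduced to a single linear scoring step): a human-encoded trading rule sits next to a policy whose decision logic is the product of training rather than programming.

```python
from dataclasses import dataclass

# Traditional algorithmic trading: a deterministic 'if-then' rule
# explicitly encoded by a human programmer.
def rule_based_signal(price: float, moving_average: float) -> str:
    if price < 0.95 * moving_average:
        return "BUY"
    if price > 1.05 * moving_average:
        return "SELL"
    return "HOLD"

# Autonomous AI trading: the decision rule is *learned* from data. The
# operator encodes no explicit trading logic, and the fitted weights may
# defy human explanation (the 'black box' problem discussed below).
@dataclass
class LearnedPolicy:
    weights: list[float]  # produced by training, not written by a programmer

    def signal(self, features: list[float]) -> str:
        # features could mix prices, social media sentiment scores,
        # satellite-derived activity indicators, and other alternative data
        score = sum(w * f for w, f in zip(self.weights, features))
        return "BUY" if score > 0 else "SELL"
```

In the first function the trading logic is fully inspectable; in the second it resides in numerical parameters that no human wrote, which is what gives rise to the attribution and enforcement questions discussed below.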
Insider Trading and the Privilege for Data-Driven Research
The core objective of EU insider trading law is to safeguard market integrity by ensuring that all participants operate on an equal informational footing (parity of information). Under Article 8 MAR, anyone in possession of inside information is prohibited from trading on that basis. When AI tools aggregate so-called ‘alternative data’ (e.g. social media content, satellite imagery, or market feeds) that are themselves publicly accessible, Recital 28 MAR offers important clarification: research and estimates derived from publicly available data should not, as such, be classified as inside information, as long as every market participant is at least theoretically able to obtain that data. Thus, the use of AI does not in itself conflict with the core principle of information parity, provided the system processes only information that is publicly available and legally accessible to all market participants, even if acquiring and analyzing such data requires costly or sophisticated methods.
The Attribution Problem and Compliance by Design
However, risk arises wherever AI systems are fed, intentionally or not, non-public, price-sensitive information as input, or where the system independently acquires such information through unlawful means. The problem is exacerbated by the fact that autonomous AI systems may act on such input without human oversight, and even contrary to the explicit intentions of their designers. Because AI lacks legal personhood, liability ultimately falls on the natural or legal person who owns or operates the system. The core challenge, therefore, lies in attributing knowledge (‘possession’) and conduct (‘use’) when trading decisions are made not by humans but by autonomous AI.
We argue that knowledge should be attributed to the AI user if the system processes inside information as input. Under the MAR, however, there is a strong presumption: whenever a person who possesses inside information trades in the relevant shares, it is assumed that the information was used for trading. This presumption raises difficult questions for AI-driven trading, since the ‘black box’ nature of AI makes it virtually impossible to reconstruct the decision-making path and disprove use of inside information.
The MAR does, however, contain a compliance defense for legal entities in Article 9(1). A legal entity can rebut the presumption of use, and thus avoid liability, where it implements effective organizational measures, such as information barriers, and can demonstrate that it took all reasonable precautions to prevent the misuse of inside information by its employees. The underlying rationale is that liability should not extend to acts that an entity cannot reasonably control, provided that adequate compliance mechanisms were in place.
This rationale can be teleologically extended to the use of autonomous AI systems. Just as Article 9(1) shields a company from liability for the unforeseeable acts of its employees, the same logic applies where trading decisions are made by an AI operating beyond direct human control. Moreover, this reasoning should not be confined to legal entities; it may also be extended, by analogy, to natural persons who deploy AI systems under comparable compliance-oriented conditions. Accordingly, if a market participant employs AI for trading but has designed and implemented robust safeguards to prevent AI-driven insider trading (compliance by design), the presumption of use should be rebutted. Such safeguards may include strict data governance, access controls, and continuous monitoring to ensure that the system cannot ingest or exploit inside information. Where these measures are in place, liability for insider trading should not attach merely because an autonomous AI, acting independently, engages in what we call automated insider trading.
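To illustrate what compliance by design could mean at the level of system architecture, consider the following deliberately simplified Python sketch (the source names and the allowlist mechanism are our hypothetical illustration, not a regulatory template): every data feed is gated through an approved-source check, and each decision is logged before the model may ingest anything.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("data-governance")

# Hypothetical allowlist of data sources vetted as publicly accessible;
# anything outside it never reaches the trading model.
APPROVED_PUBLIC_SOURCES = {"exchange_feed", "news_wire", "satellite_vendor"}

def ingest(source: str, payload: dict) -> dict | None:
    """Admit only vetted public data into the model pipeline, and log every
    decision as evidence that the safeguard actually operated."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if source not in APPROVED_PUBLIC_SOURCES:
        logger.warning("%s blocked input from unapproved source %r", timestamp, source)
        return None  # potentially non-public information is discarded
    logger.info("%s accepted input from %r", timestamp, source)
    return payload
```

The design point is that the restriction operates ex ante, at the input stage, rather than relying on ex-post review of opaque model decisions; the resulting log is precisely the kind of documentation that could support rebutting the presumption of use.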
AI and the Public Disclosure Obligation
AI should not be seen solely as a source of regulatory risk; it also holds significant potential to enhance capital market transparency. Issuers can, and arguably should, deploy AI systems to identify, track, and manage inside information within the company, thereby strengthening compliance with the ad hoc disclosure obligations set out in Article 17 MAR. At the same time, critical decisions regarding disclosure—particularly whether to postpone publication or to correct market misinformation, including risks posed by AI-generated deepfakes—must remain subject to human judgment and responsibility. While current law does not require the use of AI for these functions, the exponential growth of data and the limits of human oversight suggest that such technological support may, over time, become a de facto necessity.
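As a purely illustrative sketch of such AI-assisted tracking (a real system would use a trained classifier; simple keyword matching stands in for it here, and all names are hypothetical), an issuer-side screening step might look as follows, with the crucial feature that the system only flags items while the disclosure or delay decision under Article 17 MAR remains human:

```python
# Hypothetical trigger phrases that may indicate inside information.
POTENTIAL_TRIGGERS = ("profit warning", "merger", "takeover", "insolvency")

def flag_for_human_review(internal_documents: list[str]) -> list[str]:
    """Return documents that may contain inside information.
    The system flags; a human decides whether and when to disclose."""
    return [
        doc for doc in internal_documents
        if any(trigger in doc.lower() for trigger in POTENTIAL_TRIGGERS)
    ]
```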
Enforcement and Compliance in the Age of Autonomous Trading
Nonetheless, as algorithms become increasingly autonomous and opaque, creating the classic ‘black box’ problem, ex-post enforcement of insider trading laws will become progressively more challenging. Regulators will likely need to employ AI tools to detect suspicious patterns and potential violations. In this context, sustained focus on ex-ante compliance measures, including careful system design, rigorous control over data inputs, and transparent documentation, will be essential both to mitigate risk and to provide evidence of proper conduct in the event of subsequent investigations.
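What ‘transparent documentation’ of data inputs might mean in practice is again easiest to show in a sketch. The following Python fragment (hypothetical names; one of many possible designs) timestamps and content-hashes every input a model receives, producing a tamper-evident record for subsequent investigations:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_model_input(audit_log: list[dict], source: str, payload: dict) -> None:
    """Append a timestamped, content-hashed entry so that the exact inputs
    behind each trading decision can be evidenced after the fact."""
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "sha256": digest,  # tamper-evident fingerprint of the input
    })

# Usage: log = []; record_model_input(log, "news_wire", {"headline": "..."})
```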
Conclusion
In sum, while existing EU insider trading regulation is, by design, technologically neutral and generally robust enough to address the core risks posed by autonomous trading, targeted reforms could further enhance both legal certainty and regulatory efficacy. Codifying the privilege for data-driven research in the operative text of MAR, refining the application of compliance defenses, and introducing more granular guidance on organizational and technical safeguards against AI-driven abuses would help uphold the foundational principles of information parity and market integrity without stifling innovation.
The authors’ paper, an amended English version of Professor Poelzig’s article ‘Künstliche Intelligenz und Kapitalmarktrecht’, published in the Zeitschrift für das gesamte Handels- und Wirtschaftsrecht (ZHR) 2025, pp. 185–217, is available here.
Dörte Poelzig is a Professor of Private Law, Commercial Law and Company Law at the University of Hamburg.
Paul Dittrich is a Research Assistant and Doctoral Candidate at the University of Hamburg.