AI Liability After the AILD Withdrawal: Why EU Law Still Matters
The already intense debate around artificial intelligence (AI) took another turn when the Commission withdrew the proposed AI Liability Directive (AILD) on 11 February 2025. This move has raised questions about how AI-related liability will be handled, particularly in cases that fall outside the scope of the harmonized Product Liability Directive (PLD). The prevailing assumption is that, in the absence of the AILD, such disputes would be governed solely by national law. This post challenges that assumption by arguing that EU law, particularly the principle of effectiveness, will influence how national courts resolve AI liability disputes. It further suggests a less obvious conclusion: even without the AILD, national courts will still need to produce outcomes closely aligned with what the withdrawn directive intended.
Disputes Outside the Scope of the PLD Will Still Be Partly Governed by EU Law
The withdrawal of the AI Liability Directive effectively means that cases falling outside the scope of the harmonized Product Liability Directive will be subject to national liability laws. The PLD primarily covers consumer AI products, excluding products used in professional settings as well as claims involving, inter alia, non-pecuniary damages or pure economic losses. Consequently, these cases will depend on national fault-based liability or, in certain Member States, strict liability for ultrahazardous activities—potentially including high-risk AI such as autonomous vehicles.
Given this setup, one might assume that liability regimes outside the PLD fall entirely beyond EU law’s influence, particularly since the EU opted not to harmonize fault-based liability for AI-related damages. However, this assumption overlooks that national courts will frequently reference EU law—particularly the AI Act—in their rulings. The AI Act de facto defines what is ‘due care’, an element of liability, by setting obligations for AI developers and deployers regarding specific uses of AI. Importantly, EU influence extends beyond mere reference to the AI Act: national courts effectively enforce this act by determining consequences for its breaches. Therefore, ineffective handling of liability claims could weaken the overall enforcement and effectiveness of EU law.
Therefore, when courts rely on the AI Act to establish liability, these claims are no longer purely matters of national law but instead fall under EU tort law, governed by EU legal principles. To understand why, it is necessary to briefly revisit the concept of EU tort law itself. Paula Giliker describes EU tort law as a legal framework ‘whose source is EU law, not national law’. Ken Oliphant further explains that EU tort law encompasses three distinct types of claims, one of which is the liability of private persons for breaches of EU law.
This understanding did not emerge in isolation; it was shaped by two landmark judgments from the Court of Justice of the European Union (CJEU): Crehan (C-453/99) and Manfredi (C-295/04). In both cases, the EU lacked a harmonized liability regime, yet the CJEU clarified its stance on compensation when damage resulted from a violation of EU law. The CJEU ruled that claimants must have the right to seek compensation for the breach of EU law. Importantly, the CJEU emphasized that national liability rules should not undermine the enforcement of EU rights, reinforcing the principle that claimants must have an effective remedy in such cases (paragraphs 60-64, Manfredi (C-295/04)).
Following this reasoning, when national courts address AI-related liability claims ‘arising from violations of EU law’, like the AI Act, such disputes become EU tort law cases. Consequently, these cases are governed by the principles of EU law, notably the principle of effectiveness, which prohibits national courts from applying procedural or substantive rules that make it overly difficult for claimants to seek compensation.
Moreover, the principles governing EU tort law could also extend to strict liability for ultrahazardous objects or activities, particularly when AI technologies qualify as ‘ultrahazardous’. EU law remains relevant here, as the criteria defining ‘ultrahazardous’ activities closely mirror the AI Act’s classification of high-risk AI systems. Thus, EU law could influence national courts in determining liability elements, including what constitutes an ultrahazardous object or activity.
In short, despite the withdrawal of the AI Liability Directive, national courts will not operate entirely independently of EU law. The principle of effectiveness ensures EU law’s continued influence, raising an important question: what specific obligations does this principle impose on national courts when assessing damages linked to breaches of the AI Act?
EU Requirements for National Liability Claims for Breach of the AI Act
Finding an answer to what the EU’s principle of effectiveness requires from national courts in AI-related liability cases is relatively straightforward. The CJEU’s judgment in Sanofi Pasteur (C-621/15) indicates that, in some instances, the principle of effectiveness might compel national courts to apply factual presumptions. A ‘factual presumption’ refers to a presumption not established by statutory law but instead developed through judicial precedent. It is typically used when direct evidence is unavailable, requiring judges to rely on circumstantial evidence and patterns of likelihood to infer facts.
The Sanofi Pasteur case illustrates this clearly. It involved the alleged defectiveness of a Hepatitis B vaccine. The case was complicated by a lack of direct evidence, as medical research ‘neither confirmed nor ruled out’ a link between vaccination and the patient’s death. Facing this uncertainty, the CJEU explicitly connected factual presumptions to the principle of effectiveness. It determined that evidentiary rules that either (1) prohibit the use of circumstantial evidence or (2) require specific medical research to establish causation would undermine the directive’s effectiveness (paragraphs 30-31).
Extending this logic, national courts handling complex AI-related liability cases—where establishing fault or causation is particularly challenging—may also need to rely on circumstantial evidence. Incidentally, such reliance might lead courts to outcomes closely resembling those proposed by the withdrawn AI Liability Directive, which had introduced rebuttable presumptions for causation and non-compliance.
Judicial precedents provide several factual presumptions that national courts may draw on to uphold the principle of effectiveness. One such presumption is res ipsa loquitur (‘the thing speaks for itself’), which allows courts to infer fault or causation without requiring direct proof when the circumstances strongly suggest it. It applies in situations where, based on ordinary experience, the nature of the event logically leads to the conclusion that the elements of liability are met. Examples include a single bottle, boiler, or other object exploding, wheels detaching from a moving vehicle, or the escape of gas or electricity. In this respect, the doctrine resembles the AILD’s presumption of causality, which was based on the reasonable likelihood of causation (Article 4(1)(b)).
Beyond res ipsa loquitur, legal systems also employ similar reasoning derived from circumstantial evidence, such as Germany’s burden-reversal rule and France’s proof by exclusion. In France, if no other plausible cause exists, causation is presumed, shifting the burden to the defendant. For example, if a patient contracts an infection after a blood transfusion and no other possible cause exists, the clinic must prove the blood was not contaminated. Meanwhile, in Germany, liability may be presumed if a product increases the likelihood of harm and that risk materializes.
Additionally, another relevant presumption mirrors one introduced in the AILD (Article 3(5)): the presumption of non-compliance with the duty of care where evidence is withheld. This aligns with the contra spoliatorem (‘against the spoliator’) doctrine, which allows courts to infer that missing or withheld evidence would have been unfavourable to the withholding party. The principle of effectiveness may likewise require courts to presume causation in cases of AI Act breaches, ensuring that the Act’s rights and obligations are upheld. Such a presumption, drawn from circumstantial evidence, may yield results similar to the presumption established in AILD Article 4(1)(a).
Conclusions
To sum up, despite the withdrawal of the AI Liability Directive—which aimed to harmonize procedural rules for fault-based liability in AI-related harm—EU law will continue to influence national liability practices. The principle of effectiveness ensures national courts will rely on circumstantial evidence to infer facts when damages result from breaches of the AI Act. Notably, this reliance on circumstantial evidence means national courts will reach outcomes similar to those intended by the withdrawn directive, indicating that the AILD did not introduce entirely new concepts but rather codified existing case-law principles. Thus, controversy over the AILD appears overstated. The directive was not designed to fundamentally change liability law but to enhance coherence and legal certainty—particularly crucial for national judges managing complex tort cases involving AI without direct evidence or specialized EU-law expertise.
Deimante Rimkute is a PhD student at Vilnius University Law Faculty, and a visiting researcher at Max Planck Institute for Comparative and International Private Law.