
The European AI Liability Directives—Critique of a Half-Hearted Approach and Lessons for the Future

Author(s)

Philipp Hacker
Professor for Law and Ethics of the Digital Society, European New School of Digital Studies


The optimal liability framework for AI systems remains an unsolved problem across the globe. With ChatGPT and other large AI models taking the technology to the next level, solutions are urgently needed. Legislators in the EU are scrambling to include generative AI in the AI Act. But this instrument does not conclusively determine who will be liable when harm occurs in the context of AI use. As I detail in a recent paper, the European Commission, in a much-anticipated move, advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive (AILD) and a revision of the Product Liability Directive (PLD). If enacted, this regime will constitute the capstone of AI regulation in the EU. Crucially, the liability proposals and the AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a ‘Brussels effect’ in AI regulation, with significant consequences for the US and other countries.

The AILD aims only at (minimum) procedural harmonization of the fault-based tort law of the Member States. It contains rules governing the disclosure of evidence and the burden of proof in legal disputes. Its applicability heavily relies on references to the AI Act (AIA). Its provisions on the disclosure of evidence only apply to ‘high-risk AI systems’ as per Art 6 of the AI Act, while the provisions on the burden of proof apply to ‘AI systems’ as defined in Art 3(1) of the AI Act. The AILD provides for a presumption of non-compliance with the relevant duty of care if defendants do not comply with disclosure orders issued by a court. Under certain conditions, it also provides for a presumption of causality between the defendant’s fault and the output of an AI system.

While the AILD is currently stalled and its legislative fate is unclear, the PLD is advancing through the Council. In contrast to the AILD, the PLD seeks to harmonize both substantive and procedural EU product liability law. It applies to all ‘products’, but its scope is now finally extended to include software—both ‘traditional’ and AI. Liability is triggered by the defectiveness of a product. The PLD’s evidence disclosure provisions also apply to non-high-risk products. These rules will be crucial: injured persons usually lack the means to prove the defectiveness of an AI model unless they have access to the data and algorithms—which the PLD will grant if plaintiffs have a plausible claim against AI providers. The PLD even mandates presumptions of defectiveness if defendants fail to comply with their disclosure obligations, if they violate safety requirements prescribed by law, or if the damage is caused by an obvious malfunction of the product. All of these provisions raise hard questions of interpretation and of striking a balance between the protection of trade secrets, competition, and the effective compensation of injured persons.

In my view, however, the Commission’s proposals constitute only a half-hearted approach to AI liability. There is significant room for improvement. First, instead of entrenching a fictional dichotomy between fault-based Member State tort law and the supposedly strict-liability PLD framework, the Commission should opt for one fully harmonizing regulation. Second, the current proposals unjustifiably collapse fundamental distinctions between social and individual risk by equating high-risk AI systems under the AI Act with those under the liability framework. The AIA’s definition of high-risk systems runs the risk of being over-inclusive because it may also apply to general-purpose AI systems that are able to complete a broad range of tasks, only a few of which are potentially high-risk.

Third, based on the key risks AI poses—unforeseeability, opacity, discrimination, privacy, cybersecurity and ecological impact—we can deduce further necessary steps for a workable regime of AI liability and regulation. Effective compensation should be ensured by combining truly strict liability for certain high-risk AI systems with general presumptions of defectiveness, fault and causality in cases involving SMEs or non-high-risk AI systems. This calls for a novel distinction between illegitimate-harm and legitimate-harm models to delineate the scope of strict liability. Truly strict liability should be reserved for high-risk AI systems that, from a social perspective, should not cause harm and usually do not do so if working properly (illegitimate-harm models: eg autonomous vehicles or medical AI). Models meant to cause some unavoidable harm, for example by ranking and rejecting individuals (legitimate-harm models, eg credit scoring or insurance scoring), should only face rebuttable presumptions of defectiveness and causality. General-purpose AI systems should be subjected to high-risk regulation, including liability for high-risk AI systems, only in the specific high-risk use cases for which they are deployed. Consumers, in general, should be liable only on the basis of regular fault.

Fourth, innovation and legal certainty should be fostered through a comprehensive regime of safe harbours, defined quantitatively wherever possible. Fifth, trustworthy AI remains an important goal for AI regulation. Hence, the liability framework must specifically extend to non-discrimination cases and provide clear rules concerning explainability (XAI).

Finally, awareness of the climate effects of AI, and of digital technology more broadly, is rapidly growing in computer science. In diametrical opposition to this shift in discourse and understanding, however, sustainable AI currently constitutes a blind spot in AI regulation. EU legislators have so far neglected environmental sustainability in both the AI Act and the proposed liability regime (although this may be changing with the latest EP proposals). To counter this, I suggest jump-starting sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair ML and XAI, but potentially also Sustainable AI (SAI).

Philipp Hacker is Professor for Law and Ethics at the European New School of Digital Studies.

