Should Generative AI Have a Significant Effect on Questions of Liability in the Law of Tort?
The 2024 inaugural OUULJ Annual Essay Competition invited undergraduate students to consider the legal implications of the explosive growth of generative Artificial Intelligence systems and the liability a user might incur under criminal law and/or the law of tort. The Editorial Board was pleased to receive a number of quality submissions, which demonstrated a willingness to engage critically with current legal issues and a commitment to high standards of legal argument.
We are delighted to publish the best submission received, written by Ming Song Oh of Somerville College.
Abstract
The rapid commercialisation of Generative Artificial Intelligence (GenAI) has presented new challenges in the legal sphere. GenAI, known for its ability to create new content by extrapolating from the data it was trained on, has raised concerns due to its tendency to produce inaccurate content and its susceptibility to exploitation for malicious purposes. Liability issues are further complicated by the need for regulators to strike a balance between innovation and risk management. This essay focuses on the implications of GenAI for tortious liability under English law through the lens of the tort of negligence and product liability. It argues that the current law is, for the most part, sufficiently robust to deal with the novel harms which may arise from GenAI, though there is a need to reconsider established principles under the tort of negligence.
The flexibility inherent in the judicially developed tort of negligence makes this tort well-suited to address harms caused by new technologies such as GenAI. In cases where GenAI is used maliciously by a third party, the essay argues that GenAI companies should still be held liable under the general rules on omission. The following analysis includes discussions on duty of care, legal causation, and the need to reconsider the exclusion of pure economic loss as a form of actionable harm.
The essay also evaluates product liability under the Consumer Protection Act 1987, questioning whether GenAI can be classified as a ‘product’ and whether strict liability is appropriate for GenAI companies. It references recent developments in the EU’s Product Liability Directive, which has answered both questions in the affirmative, an approach that the UK may consider if it chooses to adopt a precautionary approach to GenAI development.
In conclusion, the essay posits that while GenAI poses new challenges, the existing frameworks of negligence and product liability law are sufficiently robust to adapt and provide remedies. The main problem lies in defining the extent of regulation and the allocation of liability to effectively balance innovation with risk mitigation.
I. Introduction
The commercialisation of Generative Artificial Intelligence (‘GenAI’) in recent years has caught the world by surprise. At the time of writing, OpenAI has just launched GPT-4o, which they tout as being able to process audio and visual imagery in addition to text, and to respond to users in real time.¹ While some have welcomed the advent of GenAI,² others are more wary about recent developments and have sought to regulate this field.³
This essay will focus on the implications of GenAI for tortious liability. It will be argued that the English law of tort – specifically the tort of negligence and the product liability regime – is sufficiently robust to provide remedies for the harms arising from the use of GenAI.
A. An Introduction to GenAI
At the risk of oversimplifying, GenAI refers to algorithms trained on massive amounts of data that can output information based on the user’s inputs.⁴ A key feature of GenAI is that it can create new content by extrapolating from the data it has been trained on. However, this feature is also one of its pitfalls – GenAI is known to ‘hallucinate’ and produce inaccurate content.⁵ Another area of concern is that GenAI can be deliberately used for harmful purposes. Malicious actors can easily use engineered prompts to exploit vulnerabilities in the system and bypass the safety guidelines imposed by the creators of the GenAI (jailbreak prompts),⁶ or to generate ‘digitally manipulated synthetic media content’ in which subjects are portrayed as doing or saying something that never happened in the real world (deepfakes).⁷
B. What Kinds of Liabilities can Arise from GenAI?
There are three primary stakeholders involved: the owners of the data used for training the algorithm, the company which owns the GenAI, and the end-users. Less discussed is the potential fourth stakeholder – a party who suffers damage or harm caused either by the output of the GenAI itself (direct harm) or by an end-user deliberately using such technology in a harmful manner (indirect harm).
There are many ongoing disputes concerning allegations of copyright infringement between the owners of the data used for training GenAI products and the companies that train their GenAI on said data.⁸ There have also been cases of direct harm where the output of GenAI has been false and defamatory in nature.⁹ Cases involving alleged GenAI copyright infringement and direct harm have already elicited much academic commentary.¹⁰ This essay will instead focus on the less-explored area of indirect harm resulting from GenAI usage and the question of whether tort law is sufficiently robust to tackle this issue.
C. The Regulation Conundrum
One of the core problems facing regulators is that two key principles are in tension when deciding the appropriate extent of regulation: innovation and risk management.¹¹ Should the regulator risk stifling innovation by introducing regulations to protect stakeholders, or adopt a more laissez-faire approach that may expedite innovation at the expense of potentially overlooking novel risks?¹² This problem is compounded by the Collingridge dilemma, which states that in the nascent stages of a new technology, regulators do not know enough about it to introduce effective regulation.¹³ By the time they understand the full extent of the risk, however, they are no longer able to effectively control or change the industry, as the technology has already become too entrenched. Regulators therefore face the challenging task of striking an optimal balance between regulation and innovation in the nascent stages of a new technology.
II. Tort Law
Tort law governs civil liability claims – one is legally obliged to compensate the party he injures. What makes tort law particularly suitable to remedying harms caused by the advent of new technologies is that tortious doctrines are mostly created by the judiciary. The incremental judicial development of tort law strengthens its ability to provide a remedy in novel situations.¹⁴ This unique flexibility of tort law has also been noted by academics who have argued that there is no unifying theory to explain tort law, since tort law is heterogeneous¹⁵ and ‘internally intelligible’.¹⁶
A discussion on tortious liability necessarily includes questions about who should be liable. Some have advocated treating GenAI as a separate legal personality, not unlike a company.¹⁷ However, unlike companies that can hold assets for the purposes of compensation, it is difficult to envision how assets could be imputed to GenAI. Further, there is concern that giving GenAI a separate legal personality unfairly shields the developers from liability.¹⁸ Therefore, it is submitted that this is not an apt solution for the common law.
A. Negligence
The tort of negligence was born of a seminal case holding a drinks manufacturer liable for the harm caused to a consumer by its negligence in the production process, even though there was no direct contract of sale between the two parties.¹⁹ The current rule prescribes an incremental approach to developing the tort of negligence: precedent should be followed, and novel cases should be analogised to the closest case based on ‘legally significant features’ so as to achieve a ‘fair, just and reasonable’ outcome.²⁰ Notably, there is no clear conceptual limit to this tort, which makes it adaptable to harms caused by new technologies.²¹
It is widely accepted that the following elements must be present for an action in negligence to succeed: (1) the tortfeasor must owe the claimant a duty of care; (2) the tortfeasor must have breached that duty of care by falling below an objective standard; (3) the claimant must have suffered actionable damage; (4) the damage to the claimant must have been caused by the tortfeasor’s breach of duty; and (5) the damage suffered by the claimant must not be too remote.
One situation in which the tort of negligence may apply is where a user deliberately uses GenAI in a harmful manner. This essay develops a hypothetical scenario of indirect harm arising from the use of GenAI to test the robustness of tort law principles as they currently stand. Take the scenario where X is a malicious actor who wants to create his own ransomware.²² By inputting jailbreaking prompts into a commercial GenAI, X successfully procures information to assist him with coding the program.²³ X may also use the same tool to create strategies for deploying the malware, or to seek assistance with coding a program that automates the deployment process. Y is a target of X’s ransomware and suffers monetary loss as a result. Given the practical difficulties with locating X, and assuming that there is evidence that X used the GenAI product, the question is whether Y can bring a claim in negligence against the company operating the GenAI.
The starting point of this analysis, which also happens to be the most contentious question, is whether the company running the GenAI should be held liable in the first place. Adopting a precautionary approach, it is submitted that liability should attach to the company. However, there could arguably be a reduction in damages due to contributory negligence by Y in their failure to verify the authenticity of the link containing the ransomware. The question is whether the current law on negligence can support this result.
Duty of Care
The first difficulty lies in establishing a duty of care between the company and the victim. Following Robinson, an analogical, incremental approach should be adopted to the question of whether a duty of care should be imposed in this novel scenario.²⁴ The closest analogy to GenAI would be search engines, since they also involve producing outputs (links to relevant webpages) based on user inputs (search queries). In Metropolitan International Schools Ltd v Designtechnica Corp, the High Court held that Google could not be considered a publisher of defamatory material on account of the results that its search engine returns to users.²⁵ Eady J reasoned that Google did not formulate the search terms and hence could not stop the appearance of the allegedly defamatory content – it was a mere ‘facilitator’ in providing the search service,²⁶ and the search results had not been interfered with by a human agent.²⁷
While this case was not based upon an action in negligence, the reasoning of the court strongly suggests that companies operating search engines should not bear a duty of care for the results that their products return if the user decides to use the returned results for malicious purposes. Why, then, should GenAI companies be treated differently?
The reason lies in the fundamental difference between the technologies involved. Search engines operate by trawling through the world wide web and indexing the different webpages, before returning relevant results based on the user’s queries.²⁸ By contrast, GenAI can produce new content based on the data sets it has been trained on. In this scenario, if X uses a search engine to assist him in coding his ransomware, X is limited to finding malicious content published by other web users (the assumption being that the companies running search engines will not publish malicious content of their own volition). However, with the right jailbreak, X could use GenAI to produce an output that builds on the existing information available. This output is therefore better categorised as a product of the GenAI, and by extension of the company running the GenAI, rather than as the user-published content that search engines return. The problem is only exacerbated by the rapid rate at which technology in this field is developing.
Given the key differences between search engines and GenAI, we must go back to the drawing board and ask why a duty of care is appropriate. It is submitted that the law on omissions provides an answer. The general rule under English law is that there is no duty of care to prevent harm caused by the deliberate wrongdoing of third parties. However, one can be liable if one negligently creates a source of danger and it is reasonably foreseeable that a third party may interfere with it, spark off the danger, and cause damage to others.²⁹ This principle is applicable to our current case, since the exploitability of GenAI products is well-publicised, as established in the preceding sections.

It is also interesting to note that there is increasing global recognition that technology companies should take responsibility for the products they release. In 2022, TikTok was sued in negligence for promoting the ‘blackout’ challenge to users, which resulted in the death of a 10-year-old child. The claim was initially dismissed by the US District Court for the Eastern District of Pennsylvania on the basis of Section 230 of the Communications Decency Act, which protects platforms from being held liable for user-published content. However, the decision was recently overturned by the US Court of Appeals for the Third Circuit, which held that TikTok was ‘engaged in its own first-party speech’ as it ‘makes choices about the content recommended and promoted to specific users’.³⁰ Though it remains to be seen whether the action in negligence will succeed, this decision is groundbreaking because it creates the potential for technology companies to be liable to third parties for how their intermediary users utilise their products.
Having established a duty of care on the part of GenAI companies, the next question is whether that duty was breached. It is submitted that by wilfully releasing a product that is known to contain safety flaws, these companies have breached their duty of care, judged against an objective standard. In this situation, a reasonable bystander would likely expect the company either to release its product to limited classes of users only, or to make further improvements to its product such that the safety guardrails cannot be easily circumvented.
Causation
The second difficulty lies with causation. Both factual and legal causation must be established before the action can succeed. Factual causation requires establishing that the victim would not have suffered actionable damage but for the tortfeasor’s negligence. Legal causation requires the absence of any intervening events so significant that they can be taken to break the causal link between the tortfeasor’s conduct and the victim’s damage.
The contentious issue here is legal causation – whether X’s malicious acts constitute a novus actus interveniens (‘NAI’) that breaks the causal link between the company’s negligence and the damage suffered by Y. The general principle is that deliberate conduct by a third party can only amount to a NAI if it was unlikely to happen,³¹ though it must be acknowledged that the required threshold of likelihood differs across cases.³² Since safety concerns surrounding GenAI are well-publicised, conduct by malicious actors that exploits the negligent safety measures of the GenAI company should not constitute a NAI.³³ A caveat here is that whether causation can be established depends on the particular facts of the case. The more the company operating the GenAI turns a blind eye to safety issues,³⁴ the less likely it is that a malicious actor’s conduct will constitute a NAI. Nonetheless, even if the company is prudent in patching vulnerabilities in the system, it may still be liable under the product liability rules discussed in the later sections.
Actionable Harm Suffered by Y
The last complication in this scenario is that pure economic loss (‘PEL’) is generally not recoverable under the tort of negligence.³⁵ The main justification that can be distilled from the case law is the prevention of an ‘opening of the floodgates’ arising from liability (1) to an indeterminate class and (2) for an indeterminate amount.³⁶ The exception to this rule is where the tortfeasor holds themselves out as possessing a special skill on which the victim places reliance – this applies to both negligent misstatements and the performance of services.³⁷ That exception is clearly inapplicable here, as there was neither a negligent misstatement nor the performance of a service.
One possible workaround in this specific scenario is to formulate the claim as damage to property instead, which is a recognised category of ‘actionable damage’ under English law.³⁸ Since a ransomware attack involves interference with files, the victim could argue that their files should be categorised as property. Even if intangibles are not held to amount to ‘property’, the court could instead choose to recognise a new ‘analogous’ tort such as a tort of ‘cyber-trespass’.³⁹ However, it is acknowledged that outside of the ransomware example, it may be difficult to reformulate one’s claim in terms of property damage (such as in the case of a deepfake fraud which results in the victim losing money without any conceivable form of property damage).
Therefore, it is submitted that the better approach is for English law finally to recognise that there should not be a general exclusionary rule preventing recovery of PEL. To avoid opening the floodgates of litigation, new liability-limiting elements should be introduced to the current framework. The approach of the courts in Singapore, whose law was founded on English law, is a good starting point. Under Singaporean law, there are three stages in the leading Spandeck framework which governs claims in negligence.⁴⁰ First, there is a threshold of factual foreseeability, such that the defendant must have foreseen the harm caused to the claimant. Second, there must be sufficient legal proximity between the two parties; the incremental approach adopted in Robinson is applicable here, such that proximity factors should be analogised to the closest cases as far as possible. Third, policy considerations must be applied to the factual matrix to determine whether or not to negate this duty, effectively serving as a liability-limiting limb. The issue of liability to an indeterminate class is tackled by the proximity requirement: ‘a defendant only owes a duty of care to parties to whom it stands in a sufficiently close relationship’, which limits the class of claimants to whom a defendant may be liable. The issue of liability for an indeterminate amount is addressed by the doctrine of remoteness, which is an element of the tort.⁴¹ Further, it should be noted that the indeterminate-class argument is not fundamentally an objection to a large number of claims; it is instead concerned with ripple effects, such as loss of profits down a supply chain, which may induce more claims than expected.⁴² Applying this to our scenario, only victims of the ransomware can bring a claim against the GenAI company which created the danger, and consequential damages such as loss of profits will likely be barred as not reasonably foreseeable.
B. Product Liability
In the sphere of product liability, there has been a shift away from caveat emptor and fault-based liability towards protecting consumers by placing strict liability on manufacturers.⁴³ In the United Kingdom, product liability is governed by statute, specifically the Consumer Protection Act 1987 (‘CPA 1987’). The core difference between product liability and the tort of negligence is that the former is concerned with defects in products, while the latter is concerned with duties of care. Under the CPA 1987, ‘product’ is defined as ‘any goods or electricity’.⁴⁴ A ‘defect’ exists where a product fails to provide the level of safety that persons are generally entitled to expect, where safety refers to risks of damage to property or injury to persons.⁴⁵ There are four potential parties that could be liable: the ‘producer of the product’;⁴⁶ a person who has ‘held himself out to be the producer of the product’;⁴⁷ any person who imported the product in the course of business for the purpose of supplying it to another;⁴⁸ or a supplier of the product who fails to identify the person from whom they obtained the product.⁴⁹
The first question is whether GenAI can be classified as a ‘product’ within this statutory scheme. It is still unclear whether software can be classified as such under English law.⁵⁰ However, the European Parliament has recently approved the new Product Liability Directive, which expands the definition of ‘product’ to include software such as commercial AI products.⁵¹ The new Directive also recognises a right to compensation in cases of destruction or corruption of data, thereby signalling a precautionary approach to the development of AI. Post-Brexit, it remains to be seen whether the UK Parliament will choose to adopt policies along the lines of the new Directive and impose liability on AI companies.
The second question is whether a strict liability standard is appropriate for regulating GenAI companies. Chesterman compares strict liability for AI systems to the old rules on damage caused by dangerous animals.⁵² The keeper of an animal belonging to a ‘dangerous species’ is presumed to know of its tendency to cause harm and is therefore liable in damages without the claimant needing to establish fault on the keeper’s part. In the context of this essay, GenAI companies can similarly be presumed to know of the tendency for their software to be used for malicious purposes. Therefore, a company’s failure to ensure that its GenAI meets reasonable safety expectations would justify the imposition of strict liability in the context of product liability.
III. Conclusion
While the advancement of GenAI should not have significant effects on questions of liability in the law of tort, it prompts us to reconsider established principles under the tort of negligence. The law of negligence, being a creation of the courts, has the flexibility required to adapt to novel situations of harm caused by GenAI and to offer a remedy. However, I argue that in order to do so, the law should recognise PEL as a head of damage, removing the exclusionary principle while introducing new liability-limiting mechanisms akin to the approach adopted by the Singapore courts. Further, product liability can also provide a remedy, should the UK choose to follow the approach taken by the EU. The salient issue is therefore not so much about updating the legal toolkit available, but rather where to draw the regulatory line to ensure the optimal balance between risk and innovation.
Endnotes:
1 OpenAI, ‘Hello GPT-4o’ (13 May 2024) <https://openai.com/index/hello-gpt-4o/> accessed 31 May 2024.
2 Anna Gross, ‘Rishi Sunak says he will ‘not rush to regulate’ AI’ Financial Times (London, 15 April 2024) <https://www.ft.com/content/509012f9-4e08-414c-a97f-dd733b9de6ef> accessed 31 May 2024.
3 European Parliament Directorate General for Communication, ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament, 19 December 2023) <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 31 May 2024.
4 McKinsey & Company, ‘What is generative AI?’ (2 April 2024) <https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai> accessed 31 May 2024.
5 MIT Sloan Teaching & Learning Technologies, ‘When AI Gets It Wrong: Addressing AI Hallucinations and Bias’ <https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/> accessed 31 May 2024.
6 Yi Liu and others, ‘Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study’ (arXiv, 10 March 2024) <https://arxiv.org/pdf/2305.13860> accessed 31 May 2024.
7 Mekhail Mustak and others, ‘Deepfakes: Deceptions, mitigations, and opportunities’ (2023) 154 J Business Research 113368.
8 Michael Grynbaum and Ryan Mac, ‘The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work’ The New York Times (New York, 27 December 2023) <https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html> accessed 31 May 2024.
9 Walters v OpenAI LLC, No 1:23-cv-03122 (ND Ga, filed 14 July 2023).
10 Cheryl Seah, ‘Liability for AI-generated Content’ Law Gazette (Singapore, March 2024) <https://lawgazette.com.sg/feature/liability-for-ai-generated-content/> accessed 31 May 2024.
11 Roger Brownsword, Eloise Scotford, Karen Yeung, ‘Law, Regulation, and Technology: The Field, Frame, and Focal Questions’ in Roger Brownsword, Eloise Scotford, Karen Yeung (eds), The Oxford Handbook of Law, Regulation, and Technology (OUP 2017) 20-24.
12 Daniel Nasaw, ‘Obama ends Bush ban on embryo stem cell research’ The Guardian (Washington, 6 March 2009) <https://www.theguardian.com/world/2009/mar/06/embryonic-stem-cell-research-obama> accessed 26 November 2024. For example, President Bush imposed a ban on federal funding for stem cell research in the United States in 2001 due to ethical backlash. This led to competitors in other countries like Britain and Canada eventually overtaking the US in terms of scientific progress in the field.
13 David Collingridge, The Social Control of Technology (St Martin’s Press 1980).
14 Jonathan Morgan, ‘Torts and Technology’ in Roger Brownsword, Eloise Scotford, Karen Yeung (eds), The Oxford Handbook of Law, Regulation, and Technology (OUP 2017) 523.
15 John Murphy, ‘The Heterogeneity of Tort Law’ (2019) 39 OJLS 455.
16 Ernest Weinrib, The Idea of Private Law (OUP 1995) 2.
17 Vagelis Papakonstantinou and Paul De Hert, ‘Refusing to award legal personality to AI: Why the European Parliament got it wrong’ (European Law Blog, 25 November 2020) <https://www.europeanlawblog.eu/pub/refusing-to-award-legal-personality-to-ai-why-the-european-parliament-got-it-wrong> accessed 26 November 2024.
18 Simon Chesterman, We, the Robots?: Regulating Artificial Intelligence and the Limits of the Law (CUP 2021) 121.
19 Donoghue v Stevenson [1932] AC 562 (HL).
20 Robinson v Chief Constable of West Yorkshire Police [2018] UKSC 4, [2018] AC 736 [27] (Lord Reed JSC).
21 Morgan (n 14) 522–28.
22 Ransomware refers to programs that encrypt the files on the target’s computer, rendering them inaccessible. The ransomware owner then demands a payment from the target in exchange for a key to decrypt the files and restore access.
23 This section will not deal with malicious LLMs because they are often run by shadow entities that cannot easily be identified or held legally accountable. Rather, this section deals with commercial GenAIs that are run by companies that are properly incorporated.
24 Robinson (n 20) [27] (Lord Reed JSC).
25 [2009] EWHC 1765 (QB), [2011] 1 WLR 1743.
26 ibid [51]–[52].
27 ibid [53].
28 BBC, ‘How do search engines work?’ <https://www.bbc.co.uk/bitesize/articles/ztbjq6f> accessed 31 May 2024.
29 Smith v Littlewoods Organisation Ltd [1987] AC 241 (HL) 271–73 (Lord Goff).
30 Tawainna Anderson v TikTok Inc, No 22-3061 (3d Cir 2024).
31 Knightley v Johns [1982] 1 WLR 349 (CA) 364–67 (Stephenson LJ).
32 See Home Office v Dorset Yacht Co Ltd [1970] AC 1004 (HL), 1030 (Lord Reid), Attorney-General of the British Virgin Islands v Hartwell [2004] UKPC 12, [2004] 1 WLR 1273, [25] (Lord Nicholls), and Lamb v Camden London Borough Council [1981] QB 625 (CA), 642 (Oliver LJ).
33 Hannah Murphy, ‘Networks linked to Russia and China use OpenAI tools to spread disinformation’ Financial Times (London, 30 May 2024) <https://www.ft.com/content/40e39936-651b-442a-8df8-46cf6b7aed77> accessed 31 May 2024.
34 Jacob Hilton and others, ‘A Right to Warn about Advanced Artificial Intelligence’ (California, 2024) <https://righttowarn.ai/> accessed 27 November 2024.
35 Spartan Steel & Alloys Ltd v Martin & Co (Contractors) Ltd [1973] QB 27 (CA) 39 (Lord Denning MR). PEL refers to economic loss that is not consequent on physical damage to one’s property or person. See also Robert Stevens, Torts and Rights (OUP 2007) 21, where Stevens argues that there is no liability for pure economic loss because there is no infringement of rights involved, suggesting that there is no general right to profits.
36 Ultramares Corporation v Touche 174 NE 441 (NY 1931) 444 (Cardozo CJ). See also Jane Stapleton, ‘Duty of care and economic loss: a wider agenda’ (1991) 107 LQR 249, 254.
37 Henderson v Merrett Syndicates Ltd [1995] 2 AC 145 (HL) 178–81 (Lord Goff).
38 Morgan (n 14) 528.
39 ibid 528.
40 Spandeck Engineering Pte Ltd v Defence Science and Technology Agency [2007] SGCA 37, [2007] 4 SLR(R) 100, [75]–[86] (Chan Sek Keong CJ).
41 NTUC Foodfare Co-operative Ltd v SIA Engineering Ltd [2018] SGCA 41, [43] (Chong JA).
42 Stapleton (n 36) 254–55.
43 Chesterman (n 18) 93.
44 Consumer Protection Act 1987, s 1(2).
45 ibid, s 3(1).
46 ibid, s 2(2)(a).
47 ibid, s 2(2)(b).
48 ibid, s 2(2)(c).
49 ibid, s 2(3).
50 Nicholas J McBride and Roderick Bagshaw, Tort Law (6th edn, Pearson 2018) 364.
51 Sven Förster and Dardan Gashi, ‘The EU’s new Product Liability Directive (from a German perspective)’ (Clyde & Co, 4 April 2024) <https://www.clydeco.com/en/insights/2024/04/the-eu-s-new-product-liability-directive> accessed 31 May 2024.
52 Chesterman (n 18) 91–3.