Faculty of Law Blogs / University of Oxford

Robot Liability - How Does It Feel to Be Hit by an ePerson?

Author(s)

Gerhard Wagner
Chair for Private Law, Commercial Law and Law and Economics at the Law Faculty of Humboldt University of Berlin

The arrival of robots, autonomous software agents and so-called 'Internet of Things' devices challenges existing liability systems. While the operation of legacy products was mostly in the hands of users, autonomous cars and other algorithmic devices will be operated by algorithms that are identical across the whole fleet of products. The 'behaviour' of autonomously operated products will thus be determined by their respective manufacturers. Because control over product behaviour shifts from users to manufacturers, responsibility for the damage these devices cause must shift as well. In economic terms, the manufacturer is clearly the 'cheapest cost avoider'; in fact, the manufacturer may be the only actor practically capable of avoiding accidents. We may therefore see a massive shift away from the current systems of 'operator liability' and towards 'manufacturer liability'. As a consequence, product liability may gain significantly in importance.

The question is whether product liability law is up to the task. In Europe, Directive 85/374/EEC governs the area. Even though it is commonly said that the Directive imposes strict liability, this is not really the case. Holding the manufacturer liable under the Directive requires a finding of product defect, and the concept of product defect entails much the same considerations that are familiar from the negligence test, ie the costs of precautions, the amount of harm, and the probability of harm occurring. Applying these criteria to an algorithm that determines the 'behaviour' of a whole fleet or series of devices is no mean feat. Comparing the performance of the algorithm in a given situation to that of a human operator (the 'human driver test') is inadequate: algorithms outperform humans in many respects, yet the accidents they still cause will not be the same ones that a human driver would have been unable to avert. A comparison between algorithms (the 'system-based concept of product defect') avoids this mistake but raises another problem, namely that only the products running the best algorithm will conform to the standard. Manufacturers of devices operated by less-than-optimal algorithms would be saddled with the full costs of the accidents caused by their products. It is difficult to see how fair competition in product markets can be maintained under such a liability system.
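The defect calculus just described can be stated compactly. What follows is a stylised law-and-economics rendering, essentially the Learned Hand formula, in notation of my own rather than language drawn from the Directive: a precaution is required, and its omission renders the product defective, whenever its burden B is less than the expected harm it would avert, that is, the probability P of the accident multiplied by the resulting loss L:

\[
B < P \cdot L
\]

For autonomous systems, the twist is that B, P and L must be assessed for the algorithm as deployed across an entire fleet, not for a single operator in a single situation.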

The European Parliament, in its resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL), para 49), seems to count on the liability of the robot's user, considering regimes of strict or semi-strict liability. In fact, users of autonomous systems already face liability under national systems of tort law for any fault committed in the course of operating the device. These rules provide the necessary incentives for users not to interfere with the operation of a product's autonomous system and not to misuse or abuse it. As long as autonomous systems are marketed as bundles of hardware and software that are protected against user interference, fault-based liability seems fully adequate. In contrast, strict user liability would shift accident costs away from manufacturers even though it is these actors who control the safety features of the algorithmic device. Correcting for this cost externalisation by granting users rights of recourse against manufacturers would involve wasteful administrative costs.

The European Parliament further considers the idea of according the autonomous system or robot itself the status of a legal entity or 'ePerson' (ibid para 59). Much has been said from sociological and philosophical perspectives as to whether personhood should or must be accorded to autonomous software systems, particularly if those systems possess so-called 'artificial intelligence'. In my paper, I argue that this discussion misses the mark because it fails to acknowledge that entity status in legal contexts is a functional concept that cannot be defined by reference to certain biological, intellectual or ethical properties and capabilities. The essential point, rather, is that recognition of robots as separate legal entities would shield manufacturers and users against liability, much in the same way as corporate entities protect shareholders and managers. The ensuing externalisation of accident risk onto victims is unacceptable, as it would destroy any incentives to take care.

In an obvious attempt to avoid the problems of externalisation, proponents of ePersons envisage a system in which manufacturers, users, or both contribute to a fund from which victims would be compensated. This would work much as minimum capital requirements do for corporate entities. For robots, it would be more efficient to rely instead on mandatory liability insurance. Those actors who put robots into the stream of commerce or otherwise 'set them free' would be required to take out insurance cover up to a certain minimum amount.

While such a scheme is rather easy to implement, its benefits are questionable. The parties standing 'behind' the robot, ie manufacturers and users, would still be shielded from liability for any damage exceeding the cap of the mandatory insurance policy. While there may be arguments in favour of limiting the exposure of manufacturers and users vis-à-vis victims, such caps should be discussed and, if appropriate, imposed directly rather than being buried in the concept of an ePerson.
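To see the externalisation in stylised terms (the notation is illustrative, not drawn from the Parliament's resolution): if an accident causes harm H and the ePerson's mandatory policy covers at most C, then, absent any recourse against manufacturer or user, the victim is left bearing the shortfall

\[
\max(H - C,\ 0)
\]

so it is the cap C, not the ePerson construct itself, that does the real work in determining who bears residual risk.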

This sceptical assessment of ePersons rests on the assumption that robots will be marketed as bundles of hardware and software components that remain closed to the user. Once this assumption is removed, things change dramatically. If the user is able to compile the robot from unbundled hardware and software products, or to manipulate the safety features of an algorithmic device, it would be unacceptable to burden the victim with the task of disentangling the components and identifying which one caused the accident. In this scenario, the proper function of ePersons would be to provide a one-stop shop for the victim and to shift the burden of proof to the liability insurers of the robot. Insurers would bear the risk of enforcing recourse actions against the responsible party, be it the hardware manufacturer, the software programmer or the user. Until unbundling occurs on a significant scale, however, policy makers would be well advised to focus their attention on product liability instead of granting robots the legal status of ePersons, which would externalise the costs of accidents onto users rather than the manufacturers who design the algorithms responsible for product operation.

Professor Wagner’s paper is accessible here.

Gerhard Wagner holds the Chair for Private Law, Commercial Law and Law and Economics at the Law Faculty of Humboldt University of Berlin.
