
Law and Autonomous Systems Series: Machine Performance and Human Failure

Autonomous machines are on the rise. In many use cases, their performance today already exceeds human capabilities. However, machines do not operate flawlessly—defects and accidents occur. This raises several important regulatory issues: When are autonomous machines safe enough to justify their admission to practice? What are the feedback effects of machine performance on the level of care required from humans? And what should be the civil liability regime governing autonomous machines if defects and/or accidents occur?

This contribution attempts to offer exploratory thoughts on these questions. I suggest that the “deep normative structure” of a particular society determines crucial policy choices with respect to autonomous machines. More specifically, regarding the first two questions, a welfare economic approach might admit autonomous machines to practice if they are marginally safer than humans, and it would hold humans to the same liability standard as autonomous machines. The consequences could be dramatic: a rapid crowding out of human activity in our daily lives. By contrast, a non-utilitarian approach might yield different policy recommendations and avoid these undesirable consequences. The impact of the “deep normative structure” is less pronounced with respect to question three. A strict civil liability regime governing autonomous machines appears to be apposite regardless of the philosophical conception underlying fundamental policy choices in a particular society. 

In what follows, I will illustrate my analysis with examples taken from two domains in which autonomous machines are making rapid progress: cars on the road and medical diagnosis/treatment. I should mention at the outset that I am fully conscious that the distinction I draw between humans on the one hand and fully autonomous machines on the other is stark. It might even be considered unrealistically binary: the foreseeable future might be dominated by a middle ground, i.e. semi-autonomous machines. However, focusing on extreme cases has the advantage of more clearly revealing the fundamental policy choices societies face with respect to regulating autonomous machines and the impact these choices have on humans and human behaviour.

Machines’ “admission to practice”

When are autonomous machines safe enough to justify their admission to practice? This question is currently discussed, for example, with respect to self-driving cars. It has been suggested that “… [autonomous vehicles] will always be held to higher safety standards than human drivers.” Scholars and practitioners ponder whether they should be 10%, 90% or even 99.9% safer before being allowed to cruise the roads.

One cannot answer this question sensibly without defining the normative measuring rod for making such difficult policy choices. Based on a utilitarian analysis, for example, there seems to be no reason to require autonomous cars to be (much) safer than humans at all. To the contrary: if they are marginally safer, such an analysis appears to lead to the conclusion that only autonomous cars should be allowed to operate and that humans should be prevented from driving.

However, this conclusion would be based on an overly simplified and misleading utilitarian analysis. More specifically, important but hard-to-quantify elements of the utilitarian calculus, such as the “joy from driving”, need to be taken into account. Only humans experience such joy. (Delivering “Sheer Driving Pleasure” happens to be the key marketing slogan of one of the world’s premium automobile manufacturers.) That joy might be negatively affected by the number of autonomous cars on the road—how much pleasure do we get from driving if surrounded by driverless cars? (See BMW Welcomes—Artificial Intelligence, YouTube, at 1:58:40 [Horst Eidenmüller].) Hence, it is unclear how the rising presence of autonomous cars will affect the overall net utility in a society—the utilitarian calculus appears to be indeterminate with respect to the regulatory question of the safety requirements such cars have to fulfil before we allow them to operate.

Moreover, as a philosophical approach, utilitarianism is problematic not only because the utilitarian calculus will be indeterminate in many settings. More importantly, it is difficult to reconcile utilitarianism with the notion that humans have (fundamental) rights that are not dependent on a utilitarian analysis (see Horst Eidenmüller, Effizienz als Rechtsprinzip (Mohr Siebeck, 4th ed. 2015) Part III). Such rights, we believe, should not be contingent on their “utilities”. Rather, they are “trumps” in the hands of individuals to protect themselves against majority rule (see Ronald Dworkin, Taking Rights Seriously (Duckworth, 2nd ed. 1978) 231-238, 272-278). Because of these problems, utilitarianism has lost most of its practical appeal as a social philosophy.

The exception to this may be the economic analysis of law, which is based on welfare economic concepts derived from utilitarianism. Reducing “utilities” to monetizable benefits and costs has the advantage of rendering the utilitarian calculus more workable. Autonomous cars, for example, should be allowed to operate if the expected costs from accidents are (marginally) lower than those from cars driven by humans. At the same time, applying such a cost-benefit analysis to matters that do not involve fundamental human rights issues may seem innocuous.
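
To make this admission rule concrete, one could write it as a simple comparison of expected accident costs per unit of activity (the notation below is mine and purely illustrative, not drawn from the text or the welfare economic literature it cites): autonomous cars would be admitted whenever

\[ \mathbb{E}[C_{\text{autonomous}}] \;<\; \mathbb{E}[C_{\text{human}}], \]

where \(C\) denotes the monetized accident costs caused by one mile of autonomous or human driving, respectively. On this calculus, even a marginal cost advantage would suffice to admit autonomous cars and, taken to its logical conclusion, to bar humans from driving.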

Is admitting autonomous cars to our roads such an issue? This surely is debatable. One might be concerned about the privacy implications of all the personal data needed to train the machine learning models powering autonomous cars, plus the data that these cars will increasingly collect as part of the Internet of Things. One might also be concerned about the liberty of human drivers in the sense of autonomously choosing how to drive and interact with other drivers. Being constrained by the liberty of other humans is one thing—being constrained by machines’ actions is quite another. Machines (things) don’t enjoy rights in the same way humans do—at least not yet.

By contrast, stipulating that autonomous machines must be (much) safer than comparable human activity before they can be admitted to practice is clearly plausible once one adopts some non-utilitarian philosophical conception which categorically distinguishes between machines and humans. If humans are categorically different from machines, it is completely rational and consistent to require “more” from machines before we allow them to populate our environment and endanger our lives. We are necessarily surrounded by humans but not by machines—their admission to practice is based on a human decision.  In this sense, the “deep normative structure” of a particular society—utilitarianism, welfare economics, Rawlsian or Kantian approach, etc.—determines crucial policy choices regarding autonomous machines.

Machine performance and human liability

This becomes even clearer if we consider the second question posed in the introduction: should the standard of care applicable to autonomous machines under a fault-based liability regime be the same as the one applied to humans? More specifically, should an assumed superior machine performance translate into a higher required level of care for humans as well?

Based on a welfare economic policy conception, for example, the answer to these questions should be yes—if the goal is to maximize net economic welfare in a society, then surely the law should set incentives to substitute safer conduct (and “technologies”) for less safe conduct (and “technologies”). To put it differently: if machines operate more efficiently than humans, the liability system, too, should contribute to the process of replacing humans with machines.

It is only once one adopts some non-utilitarian conception of “the good” that this reasoning loses traction. Once humans and machines are viewed as categorically different and the policy objective is not to “maximize” any goal function, applying the same liability standard to humans and to machines appears problematic from the outset: it would treat humans and machines as if they were literally the same “thing”.

True, technological advances already influence the standard of care required from humans today, for example with respect to medical diagnosis and treatment. This is based on an implicit welfare economic policy conception that appears acceptable as long as the effects on human activity are relatively limited. Making certain medical procedures safer for humans with the help of costly technology will raise the price of these procedures and reduce demand for them—fewer doctors will treat fewer patients, but they will treat them better.

However, if technology advances to the point of super-human capabilities, and if the liability system required these capabilities from humans as well, there would come a point at which humans were effectively shut out of many activities that are central to our daily lives, such as driving a car or simply going out for a walk. At this point, at the latest, I think many would start feeling uncomfortable with the notion that super-human machine performance should lower the bar for what counts as an actionable wrong committed by a human.

Liability regime applicable to machines

Finally, one policy issue regarding autonomous machines where a welfare economic analysis seems both feasible and appropriate, and where such an analysis does not differ significantly from a non-utilitarian approach, is the liability regime applicable to defects and accidents. Who should be liable, for example, if an autonomous car causes an accident, and should the liability regime be fault-based or strict? It has been argued elsewhere that the appropriate liability standard with respect to autonomous cars should be strict, for two reasons: it is exceedingly difficult to define precisely the efficient level of care in this context (see: “It will take years rather than months for the industry to cohere around a standard.”), and only a strict liability regime regulates the “activity level” of the car, which influences the likelihood of accidents.

Note that this analysis is based on a welfare economic policy conception. Such an analysis has more traction in this context than with respect to the issue of admitting autonomous cars to practice because the liability question does not raise fundamental human rights issues. The privacy and liberty concerns mentioned above relate to the question of whether autonomous cars should be allowed to travel the roads at all. The applicable liability regime in case defects or accidents occur does not raise such fundamental concerns. Another way of putting this is to say that, for example, a Kantian probably does not hold strong views as to whether this liability regime should be strict or fault-based. If anything, he or she might be more inclined to argue for a strict liability regime as it appears to be more protective of human safety. By comparison, the marginal reduction of choice opportunities for humans—strict liability might lead to fewer autonomous cars on the road compared to a negligence regime—appears to carry less weight.

The argument for strict liability with respect to autonomous cars might also have some relevance in other contexts where liability today is still fault-based. Medical malpractice claims are a good example. Increasingly, diagnosis and/or treatment are performed by machines. If autonomous machines take over the crucial elements of the medical process, their owners should be strictly liable in malpractice cases.

Conclusion

Increasingly, autonomous machines are making inroads into our daily lives. Regulating them will often raise important policy issues, and addressing these issues will frequently require policy-makers to go back to first principles. Utilitarian or welfare economic analyses on the one hand and non-utilitarian views on the other will often yield very different recommendations and conclusions. Whenever fundamental human rights or values such as individual autonomy and liberty are involved, a utilitarian/welfare economic approach is problematic. This implies, for example, that there is nothing wrong with requiring autonomous machines to be much safer than comparable human activities before admitting them to practice. It also implies that there is nothing wrong with holding autonomous machines and humans to different liability standards.

Horst Eidenmüller is the Freshfields Professor of Commercial Law at the University of Oxford. 

 
