
Machines powered by artificial intelligence (AI) are on the rise. In many use cases, their performance today already exceeds human capabilities. Building on a prior OBLB post, in a recent essay I explore fundamental regulatory issues raised by such ‘autonomous machines’. I adopt an analytical perspective that highlights the importance of what I call the ‘deep normative structure’ of a particular society for crucial policy choices with respect to autonomous machines.

I make two principal claims. First, the jargon of welfare economics appears well suited to analysing the opportunities and risks of innovative new technologies, and it is also reflected in legal doctrine on risk, responsibility and regulation. A purely welfarist conception of ‘the good’ will tend to move a society in a direction in which autonomous systems eventually take a super-prominent role. However, such a conception assumes more than the welfarist calculus can yield, and it also ignores the categorical difference between machines and humans, a difference that is characteristic of Western legal systems.

Second, taking the ‘deep normative structure’ of Western legal systems seriously leads to policy conclusions on the regulation of autonomous machines that emphasize this categorical difference. Such a humanistic approach acknowledges human weaknesses and failures and protects humans; it is characterized by fundamental human rights and by the desire to achieve some level of distributive justice. Welfarist pursuits are constrained by these humanistic features, and the severity of these constraints differs from jurisdiction to jurisdiction. I illustrate my argument with applications drawn from various issues in contract and tort law.

Against this background, there is nothing wrong or problematic about, for example, requiring autonomous cars to be (much) safer than human drivers before we allow them to participate in regular traffic, and there is nothing wrong about allowing humans to drive cars even though their driving skills may fall far short of the level achievable by smart cars. There is also nothing wrong about applying different standards of care to humans and to smart machines. In fact, societies probably will (and should) consider relaxing the standards applicable to humans: applying the same standards to humans and to autonomous machines translates into a cost and price advantage for the latter and might contribute to humans being shut out of more and more domains of our daily lives, such as driving a car or simply going out for a walk.

Herein lies the ‘slippery slope’ for all societies built on foundations that reflect not only deep humanistic values but also a commitment to free markets as the main form of organizing economic activity: the jargon of welfare economics appears well suited to analysing the opportunities and risks of innovative new technologies, and it is also reflected in legal doctrine on risk, responsibility and regulation. However, the welfarist narrative has an inbuilt tendency to go to extremes and shake off the humanistic constraints discussed above. What seems clear is that a purely welfarist conception of ‘the good’ will tend to move a society in a direction in which autonomous systems eventually take a prominent role, by virtue of the law.

Hence, regulating autonomous systems is a challenge that requires us to take the ‘deep normative structure’ of our societies seriously. Our laws are an expression of the ‘human condition’. They reflect what we believe lies at the heart of humanity, at the heart of what it means to be human. It would, quite literally, dehumanize the world if we were to treat machines like humans, even though machines may be smart, possibly even much smarter than humans.

Horst Eidenmüller is the Freshfields Professor of Commercial Law at the University of Oxford.
