
The Safety of Autonomous Vehicles – Fake News?

Author(s)

Gerhard Wagner
Chair for Private Law, Commercial Law and Law and Economics at the Law Faculty of Humboldt University, Berlin


In June 2022, the US National Highway Traffic Safety Administration (‘NHTSA’) ordered an investigation of Tesla affecting no fewer than 830,000 vehicles across all product lines. The agency is responsible, among other things, for the safety of motor vehicles licensed for use on public roads. The investigation of Tesla concerns its driving software called ‘Autopilot’. It was prompted by an accumulation of accidents in which a Tesla collided with an emergency vehicle, or other first responder vehicle, standing on or beside the roadway at the scene of an earlier accident. Whereas a human driver would have detected the obstacle eight seconds before impact, Autopilot ‘waited’ until the last second to shut down and shift control of the vehicle ‘back’ to the driver. At that point, the driver is obviously in no position to avoid the impact. NHTSA is investigating in two directions. The first question is: why does the system react so late to visible obstacles that are blocking the way of the vehicle? The second question concerns the disengagement of Autopilot ‘at the last second’, ie at a time when the driver can no longer avert the impact anyway.

How Tesla manipulates its accident statistics

The second issue—ie the transfer of control an instant before impact—is obviously less serious than the first. If this shift of control happens deliberately at all, the motive must be to keep the track record of the company’s own driving assistance software as impeccable as possible. It seems that the company wants to avoid headlines of the following type: ‘Tesla Autopilot causes fatal accident’. If Autopilot is switched off a fraction of a second before the collision, it appears that the human driver caused the accident, when in fact Autopilot failed.

This kind of communication about the opportunities and risks of autonomous driving fits well within Tesla's overall public relations strategy. Even calling its own software ‘Autopilot’ is, strictly speaking, a gross misrepresentation. In truth, Tesla's software is nothing more than driving assistance software of so-called Level 2, in which the system takes over the longitudinal and lateral guidance of the vehicle within a specified lane. This is miles away from true autonomous driving as defined in Level 5 of the classification system set up by the Society of Automotive Engineers (SAE). Driving assistance systems do not enable autonomous driving but require continuous oversight by the driver. Consequently, the driver is not authorized to turn away from the traffic situation while the vehicle is operating at Level 2. Rather, he or she must be constantly ready to intervene in critical situations. In Germany, this is expressly stipulated in § 1b of the Road Traffic Act (Straßenverkehrsgesetz or ‘StVG’). In the USA, the legal situation is the same.

It is even more misleading when the accident statistics of one's own driving assistance software are manipulated by handing control back to the driver a split second before the collision. Such manipulations work to the detriment of competitors, ie other car manufacturers who also offer driving assistance systems but without putting the title ‘Autopilot’ on them and without manipulating accident statistics in their own favour. Furthermore, they lull the drivers of Tesla cars into a false sense of security: they believe that they can entrust themselves to an ‘autopilot’ that, in truth, is not capable of mastering real traffic situations and, in a sense, does not even hold a valid driver's license. The fact that Tesla deliberately conceals this through its accident statistics reinforces the—mistaken—belief that drivers can safely ignore their oversight obligations and read the newspaper or watch a movie while driving. And then an emergency vehicle gets in their way.

How dangerous is autonomous driving?

But isn't the situation much worse? Doesn't the Tesla example reveal the fundamental problem of autonomous driving? Autonomous driving, it seems, is in fact not as safe as advertised, but rather extremely dangerous. Again and again, terrible accidents occur on the roads of those US states that allow the test operation of fully automated vehicles operated by Waymo, a subsidiary of Google’s holding company Alphabet, by Uber, but also by Tesla. In March 2018, Elaine Herzberg died in Tempe, Arizona, as a result of a traffic accident. She was hit by an autonomously controlled test car from Uber as she pushed her bicycle across a four-lane road. A human driver would have recognized her and avoided the crash. Back in May 2016, ‘Autopilot’ steered a Tesla, virtually at full speed, into, or rather under, a truck crossing the road because the software failed to recognize the white-painted truck in the glaring sunlight. And now it turns out that Tesla's driving assistance system is not capable of preventing the car from colliding with first responder vehicles that are easily visible from a distance.

The reflex to denounce autonomous driving because accidents happen and people are injured or even killed while operating computer-controlled vehicles is premature. All forecasts predict that replacing human drivers with computer programs will lead to a drastic reduction in the number and severity of traffic accidents. And the stakes are high: about 2,600 people still lose their lives on German roads each year, and the US—with a population four times larger—suffers a staggering 43,000 traffic fatalities annually. Halving the number of fatal accidents would therefore ‘save’ 1,300 lives in Germany and 21,500 lives in the USA per year. Who would want to be responsible for forgoing these gains?

The optimistic forecasts about the beneficial effects of autonomous driving seem to be contradicted by the recurring accidents involving autonomous cars. However, this conclusion confuses spectacular individual cases with statistically measured performance per kilometre or mile. No one has promised that the use of autonomous vehicles will reduce accident rates to zero. The general insight that absolute safety, in the sense of perfect accident avoidance, cannot be achieved also applies to autonomous driving. The mere fact that an autonomous vehicle causes an accident in no way suggests that it is unreasonably dangerous. And it certainly does not prove that humans would perform better than digital technology overall, ie across motor traffic as a whole.

Manufacturers should be liable for their driving algorithms

Nevertheless, accidents involving autonomous cars particularly excite the public—as rare as they may be on the whole. This is probably due to a characteristic that the Tesla case illustrates very well: driving algorithms make mistakes of a different kind than humans and therefore cause accidents that any human driver would have avoided without any effort. A human driver who spots an ambulance parked on the roadway far ahead will swerve or bring his or her own vehicle to a stop. An autonomous vehicle that fails to recognize the ambulance, or misses a white-painted truck in the glaring sunlight, crashes into the obstacle without even reducing its speed. The point is that while autonomous vehicles sometimes produce such seemingly incomprehensible accidents, ‘in exchange’ they avoid a host of other accidents that human drivers would cause. Indeed, the vast majority of traffic accidents are due to human error, particularly driving under the influence of alcohol and violations of the rules of the road, namely speeding, failure to maintain the minimum distance, disregarding the right of way, and errors in turning and overtaking. A properly programmed digital traffic system does not drink, never speeds, and otherwise complies with all traffic rules. In short, driving algorithms perform better than humans not because they are better ‘drivers’, but because they guarantee full compliance with the rules of the road and the general dictates of reasonableness. Because of these properties—and not through superior driving skills—they avoid most of the accidents caused by human drivers.

Accidents like those in the Tesla cases, however, show that digital systems have weak spots of their own that seem more or less incomprehensible to human beings. The obvious task is to investigate these shortcomings and then to remedy them, to the extent possible, through improvements to the respective driving algorithm. Getting to the bottom of the causes of a conspicuous accumulation of accidents following a certain pattern is the necessary first step in this direction. For this step to occur, the respective vehicle manufacturer must be confronted with the traffic accidents caused by its own vehicles. In both the US and Europe, the manufacturer of a product is responsible for its safety under the rules of so-called product liability law, and is liable in damages if a defective product causes harm. This also applies to computer-controlled automobiles. If an accident occurs as a result of ‘negligence’ of the ‘Autopilot’ or another driving assistance program, the manufacturer is liable to make good any harm caused. In reality, however, liability for traffic accidents falls first and foremost on the user of the vehicle, ie the owner and driver. If the accident in question was caused by a defect of the car, damages may be shifted back to the manufacturer through a recourse claim.

The EU plans to make the liability of the operator of artificially intelligent systems even stricter. As the Tesla case shows, however, these efforts are going in the wrong direction, because liability for damages should rest on the manufacturer. It is the manufacturer—and not the operator—who controls not only the safety features of the vehicle, but also the ‘behaviour’ of autonomous cars, which follow the commands of a driving algorithm. Furthermore, the manufacturer must be obliged to communicate the accident statistics of its own products honestly and transparently, so that customers—unlike, it seems, the customers of Tesla—know where they stand and can adjust their own precautions accordingly: anyone driving a Tesla with ‘Autopilot’ had better keep their hands on the wheel and their attention on the road. The obligation to inform potential customers honestly and accurately about the accident risk associated with a particular product could either be integrated into the licensing requirements for motor vehicles under EU law or standardized within a new framework of AI regulation. Even the Product Liability Directive 85/374/EEC may offer a solution: in the case of misinformation about expected accident costs, the manufacturer would have to reimburse the customer's increased insurance costs.

Supplementary to the liability system, there is a need for the involvement of administrative agencies such as NHTSA. Since they are charged with monitoring accidents and collecting accident data, they are in the best position to detect the accumulation of particular accident patterns in the statistics. In response, the competent governmental agency may order investigations and require manufacturers to come up with better technical solutions. Software solutions are never perfect and should be improved continuously wherever this is possible and feasible at reasonable cost. Mankind will then have to live with the remaining risks. The risks associated with well-tuned driving algorithms promise to be much smaller than those posed by human drivers. And this, after all, is the relevant measure.

Gerhard Wagner holds the Chair for Private Law, Commercial Law and Law and Economics at the Law Faculty of Humboldt University, Berlin.
