
Road death puts the brakes on self-driving cars as flaws are exposed


The first known death caused by a self-driving car, which occurred in the US this year, has highlighted some serious legal and safety issues in the booming autonomous vehicle industry. New LSE research is looking to find solutions.

American motorist Joshua Brown switched his Tesla into autopilot mode on a Florida highway on 7 May this year; seconds later he was dead. The car’s sensors allegedly failed to distinguish a white tractor-trailer crossing the highway against a bright sky, leading to a fatal collision and an official investigation which is still ongoing.

Who was at fault here? Tesla, the driver, or motor vehicle regulators? To some extent, all three, but the tragedy has exposed the very real risks of assigning complex and challenging tasks to robotic applications.

LSE PhD student Antti Lyyra has spent the past 18 months investigating the design and production of robotic technology and its impact on our lives.

The former technology and management consultant has recently co-authored a conference paper on Tesla’s Model S car and the shifting power balance between manufacturers and customers.

The paper outlines the stark differences between traditional and digitally enabled product strategies.

“Typically, cars are bought as fully designed and complete products,” Antti says. “While regular service is performed and defective products are fixed by manufacturers, in general, cars are not expected to improve or change radically once they have been delivered to customers.”

“When you purchase a car like a Tesla, which adopts a more open-ended design and product strategy, the car receives frequent software updates throughout its lifetime. While it could be argued that changes are generally aimed at making the cars better, sometimes bugs are introduced and not all changes are to everyone’s liking. A car owner may have very limited control over the extent to which the car’s self-steering capabilities and various other functionalities change over the years of use.”

Neither motor vehicle legislators nor consumer groups have got to grips with the ambiguity of this relationship between manufacturer and buyer where frequent changes and self-driving capabilities are concerned.

LSE Law Professor Andrew Murray says buyers essentially enter into a service contract: “You can think of a Tesla car as being like a smartphone. You buy the hardware as a product but the software that operates on it is constantly updated by the manufacturer. In this sense there are two agreements: a sale of the hardware and a series of software licensing agreements which cover updates and upgrades to the product.”

In the wake of the Tesla fatality, the vehicle manufacturer has leapt to the defence of the car, noting it is the first known autopilot death in some 130 million miles driven by Tesla’s customers. Where normal cars are involved, a fatality occurs every 94 million miles, the company claims.

Tesla also states that the car’s autonomous software is designed to nudge customers to keep their hands on the wheel, ensuring they are paying attention and can override the autopilot if needed.

The Florida highway fatality shows that technologies are not infallible, regardless of the “grand narratives” that applaud technological prowess, Antti adds.

“Self-driving cars are expected to navigate roads to reach their destination safely. This involves complex sets of computerised sensing and decision-making to match various situations on the road with graceful behaviour. However, whereas the real world is open-ended and unpredictable, robotic products are not, making it challenging to respond to unforeseeable circumstances.”

To further complicate matters, there is no clear legal framework where self-driving cars are concerned.

Existing motor vehicle safety rules do not address autonomous vehicles, so the question of who is liable in the event of an accident or fatality remains a grey area.

Professor Andrew Murray says UK law remains focused on the driver, not the car. Current regulations state that the driver must always be in full control of the vehicle and is advised “not to rely on driver assistant systems such as cruise control”.

However, the government has also recognised the increasing uptake of self-driving cars and is looking at reviewing existing motor vehicle rules to allow for more driver freedom, Professor Murray adds.

The challenge for legislators will be to strike a fair balance, Antti Lyyra says. “If the product liability rules are too strict or unclear, manufacturers may shy away from radical innovation, which will slow down the uptake of robotic technologies. If the rules and compensation schemes are too lenient, the incentives to develop safer self-driving cars rest on reputational rather than financial grounds.”

The insurance framework surrounding autonomous vehicles is another minefield which needs to be navigated.

“This is an interesting and developing area which the government is currently looking at,” Professor Murray explains.

“In a world where all vehicles are fully automated, and require no human input at all, it would be easy to place liability on the manufacturer and let them deal with claims arising from a collision. Collisions should be rarer than they are today because the vehicles will be programmed to drive more safely than humans tend to.”

Until these legal, safety and insurance matters are clarified, some consumers will be wary of embracing self-driving technology, both LSE academics say. However, early adopters find joy in trying out and discussing the tricks their robotic cars are able to master, Antti adds.

“Robots are often presented as independent, tireless and unerring. Unfortunately, as appealing as this image of autonomy may be, it is pure science fiction.

“Robots are products that are built, programmed and used by people, and both people and robots are fallible, just in different ways.”

Additional notes

Antti Lyyra is undertaking a PhD in Information Systems and Digital Innovation at the Department of Management. He received a Master’s degree in Information System Science from Aalto University School of Business in Helsinki and studied a year at ITAM, Instituto Tecnológico Autónomo de México, in Mexico City as a part of the programme.

Antti worked several years in technology and management consulting in technical and leading positions before entering the PhD programme. During his professional career he specialised in large scale enterprise systems, architectures and supply chains in the telecommunications and automotive sectors.

The Ambivalent Characteristics of Connected, Digitised Products: Case Tesla Model S is co-authored by Antti Lyyra and Kari Koskinen, both PhD students in the Department of Management at the London School of Economics and Political Science.

Andrew Murray is Professor of Law at the London School of Economics and Political Science. His research interests focus on the emerging disciplines of cyber-regulation and cyber law.

Posted 19 August 2016
