As car makers steam ahead in the driverless car race, lawmakers are being left in the dust – and one of the biggest grey areas is liability in the event of an autonomous car accident.
While the ultimate goal is for autonomous vehicles to be much safer than human drivers (US regulator NHTSA attributes 94% of serious crashes to human error), that doesn’t mean the pursuit of driverless cars has been without incident.
In March, in the first case of its kind, a self-driving Uber car struck and killed a pedestrian. There have been a number of other, non-fatal incidents before and since, and as the technology matures there will likely be more.
With collisions between human-driven cars, the main objective is to establish liability.
In some cases, such as a multi-car collision, this can be complex. But with autonomous car accidents, things can get even murkier.
For example, a court will have to take into account the car’s level of autonomy, which ranges from level 1, where the system controls just one aspect of the car’s performance, to level 5, where no steering wheel is required.
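For readers who want those tiers spelled out, here is a minimal sketch of the taxonomy the level numbers come from (SAE J3016). The descriptions are paraphrased, and the `driver_must_monitor` helper is an illustrative simplification, not a legal test:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrased for illustration."""
    NO_AUTOMATION = 0          # the human does all of the driving
    DRIVER_ASSISTANCE = 1      # system controls one aspect, e.g. adaptive cruise
    PARTIAL_AUTOMATION = 2     # steering plus speed, but the driver must monitor
    CONDITIONAL_AUTOMATION = 3 # system drives; human must take over on request
    HIGH_AUTOMATION = 4        # no human fallback needed within a defined domain
    FULL_AUTOMATION = 5        # no driver needed; no steering wheel required

def driver_must_monitor(level: SAELevel) -> bool:
    """A rough proxy for where responsibility tends to sit: at level 2
    and under, the human is still expected to supervise the vehicle."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_monitor(SAELevel.FULL_AUTOMATION))     # False
```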
An MIT paper recently questioned the morality of programming a driverless car to swerve into one person in order to save many, in a modern variation of the trolley dilemma thought experiment.
For lawmakers and practitioners, it is a question that is already having a real-world impact, and its effect will only grow as autonomous cars proliferate on the roads. And while a number of companies, including Volvo, Mercedes-Benz and Google, have publicly stated that they will accept liability for incidents when their car is in autonomous mode, there are many cases in between that lack clarity.
To illustrate the minefield that self-driving cars are creating for lawmakers when it comes to an autonomous car accident, Verdict gave Alan Kennington, a personal injury lawyer of 22 years at US firm George Sink P.A. Injury Lawyers, three hypothetical crashes to find out who would be liable in a US court of law.
Scenario 1: Level 5 legbreaker
A level 5, fully autonomous car crashes into a pedestrian crossing the road, fracturing the pedestrian’s leg. The driver isn’t paying attention, but the car’s camera and CCTV show that the pedestrian wasn’t looking when crossing the road. Later investigations show that the car’s software wasn’t working. Who is liable?
“What would most likely happen would be that the pedestrian would bring the lawsuit against the car manufacturer as well as the creator of the technology,” explains Kennington.
They would also look at the driver, he says, examining the actions they took in the build-up to the crash.
“I think that all of these people would come into play and that’s why I think it would be important for a jury to be able to hear the facts of the case so that at that point they could determine if the operator did everything they should’ve done. Then that shifts over to the technology.”
If the investigation proved that the software experienced an error, then it would be “more than likely” that the manufacturer could be held responsible.
“Because the ultimate goal of the automated vehicle is to create a safer driving experience, for more efficient commuting and better productivity for everyone, to create fewer accidents,” he says.
“But we can’t allow that type of technology to forget safety. Safety has to be the number one priority.”
Kennington adds that the owners of the autonomous vehicles will need to carry out maintenance on the software, a form of digital MOT that’s “similar to how we update our computers right now”.
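To make that idea concrete, here is a minimal sketch of what such a check might look like. The 90-day interval and the `autonomy_permitted` function are invented for illustration; no such legal requirement exists today:

```python
from datetime import date, timedelta

# Illustrative threshold only: no such maintenance interval exists in law.
MAX_UPDATE_AGE = timedelta(days=90)

def autonomy_permitted(last_software_update: date, today: date) -> bool:
    """A hypothetical 'digital MOT' check: refuse to engage autonomous mode
    if the driving software has not been updated recently enough."""
    return today - last_software_update <= MAX_UPDATE_AGE

print(autonomy_permitted(date(2018, 6, 1), date(2018, 10, 1)))  # False: overdue
```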
Scenario 2: Lax driver in a level 2 car
A level 2 autonomous car crashes into a car in front of it, causing severe whiplash to all those involved. The driver was not paying full attention, but later investigations also show that the car’s sensors were faulty. Who is liable?
“In that situation, it is ultimately the driver, because they were not paying attention,” says Kennington.
“I don’t think under our current state law that they can shift that liability over to the manufacturer. I think ultimately that will rest with the driver itself.”
A key part of attributing liability to the car or driver lies with the level of involvement that the driver has.
“If the driver needs to be in the car just so that the vehicle recognises that it’s there when the vehicle is on autopilot, then I think it could shift the liability.
“So at any point where the driver’s judgment is still needed, like cruise control or automatic braking, lane changing – even though there are warning devices to help the driver – they are just there to assist, they are not there to take control.
“So I think that once the control is in the fully autonomous cars, there may not even be a steering wheel. Once that control is gone – I think that’s the dividing line.”
Scenario 3: The autonomous trolley dilemma
A level 5, fully autonomous vehicle swerves to avoid a group of children who have run into the road, killing an adult as a result of this evasive action. The software and car were all in working order. Who is liable?
Kennington says that in this type of autonomous car accident it becomes “more difficult” to determine liability, raising moral questions such as what the software should do, as illustrated by the MIT paper.
In a human driver case, the liability could be shifted away from the driver if all of the actions leading up to the incident indicated that the driver did what “a reasonable driver would have done under those circumstances”.
But with AI at the wheel, it is more complex. Experts will be required to explain the software to a jury, much as an expert currently tells a jury about a problem with a faulty tyre. With software, however, it will be far more complex – and an unknown.
“The claimants are going to have to [have] experts and they are now going to get into the software, this technology, that is new, that has not been tested, and [it] is not very clear to everyone what this technology is.
“And the creators of this technology are going to be very uncomfortable with people getting under the hood and looking at what they’ve created. And I think that’s where the expert for the victims is going to be crucial in these types of cases.”
Kennington compares the data held in a car’s computer to an aircraft’s black box. Interpreting it will require a new area of specialisation for experts, one that will become “very, very expensive”.
Once the expert has explained these factors, it becomes “an issue of fact for the jury to decide”.
Blame in an autonomous car accident: The overall verdict?
While the laws currently vary state by state and contain vast grey areas, there are some basics that tend to hold true across the US.
“I think right now anything with level 2 or below it’s going to rest on the driver, the human,” says Kennington.
“Level 3, it gets very murky and it gets very difficult. However, as we get into 4 and 5 that may shift the liability to the car manufacturer or the manufacturer of the technology itself.”
Liability on the dealership
There is also much confusion among consumers around the extent of autonomy in a vehicle, so much so that Tesla recently stopped promoting its cars as having a ‘full self-driving’ option.
In some circumstances, the dealership could become liable for a customer’s crash if it did not properly explain the extent of the car’s autonomous mode.
“That gets into a training issue. When these vehicles are sold, the dealerships, the manufacturers [need to consider] how are they training? What warning did they tell people? Did they tell people that yes you can do limited activities with your hand off the wheel, but you still have to pay attention to the road?”
These factors would also have to take other variables into account, such as speed and state boundaries.
“This could lead to some form of failure to warn and unfair trade practices could come into play. Especially during the transition between level 3 and 4.”
Is a unified federal law the answer?
Another problem that lawyers face is the US’s fragmented legal system, which means autonomous car laws currently vary state by state.
Some states, such as California and Arizona – hotbeds for autonomous car tests – are proactive in creating self-driving car laws.
Others, such as Kennington’s home state of South Carolina, are less clear: “It is one of the states that has recognised some form of self-driving cars but is a very vague legislation right now.”
This poses problems for driverless cars travelling across state borders, both for the car’s operators and for manufacturers, who must program the car to meet varying state laws and ensure the software adapts as it crosses state boundaries.
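As a rough sketch of how a vehicle might swap rule sets at a geofenced state line, consider the following. The states, rules and values are invented examples, not real legislation:

```python
# The rules and values below are invented examples, not real legislation.
STATE_RULES = {
    "CA": {"autonomous_mode_allowed": True,  "max_autonomous_speed_mph": 65},
    "AZ": {"autonomous_mode_allowed": True,  "max_autonomous_speed_mph": 75},
    "SC": {"autonomous_mode_allowed": False, "max_autonomous_speed_mph": 0},
}

# Most-restrictive defaults for any state the table does not cover.
FALLBACK = {"autonomous_mode_allowed": False, "max_autonomous_speed_mph": 0}

def rules_for(state_code: str) -> dict:
    return STATE_RULES.get(state_code, FALLBACK)

# On crossing a geofenced boundary, the car re-reads its constraints:
rules = rules_for("SC")
if not rules["autonomous_mode_allowed"]:
    print("Autonomous mode unavailable: hand control back to the human driver")
```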
Having uniform legislation for autonomous car accident liability is one solution to these problems. Kennington says this could sit at a federal level, or be guided by the federal government and administered state by state – for example, minimum standards enforced federally, with each state adopting them in its own variation.
Last month, the US Department of Transportation released a plan to rewrite safety rules for a driverless car future.
However, a federal law comes with its own problems.
“One of the things that I’m concerned about with federal regulation is that if they create some kind of restrictions on what victims can do, whether that is mandatory arbitration, whether that is a cap on damages – I think that would be very bad.
“I think that would be very beneficial to the manufacturers of vehicles and the companies that create the technology, but I think that could be detrimental to the people.”
The importance of case law for autonomous car accident liability
Case law will also be vital for helping lawyers determine liability in an autonomous car accident, says Kennington.
“When we write the laws, the laws are then going to have to be interpreted, so I think that’s where case law becomes very important so that we have a body of work so that we know it’s reasonable that when ‘x’ happens then the machine should do ‘y’.”
“And I think that would be the standard that everybody would accept and that would give some clarity for the manufacturers and it would also give clarity for the lawyers to understand.”
In some ways, it is comparable to the early 1900s when cars first started popping up on roads and there was little case law available.
As is often the case, technology develops faster than the law can keep up. But how long will it be before lawyers have enough autonomous car accident precedent to make their jobs easier?
“Hopefully it would take a long time because there’d be fewer accidents because the vehicles would be much safer,” says Kennington.
“So I hope it would be a lot longer than when cars originally came out.”
Read more: Driverless cars in London move a step closer to reality