Some Washington residents may be familiar with the fatal March 2018 accident in which a self-driving Uber vehicle struck a pedestrian. The crash happened in Arizona, and the National Transportation Safety Board issued a ruling that split responsibility for the accident among the vehicle, the vehicle's safety driver, Uber, the state and the victim.
Determining liability in accidents involving autonomous vehicles is one of several issues that must be resolved before their use can become widespread. Researchers have examined how game theory can be applied in these cases, but most studies have focused on different driving scenarios and on the algorithms meant to make the vehicles both safe and efficient, largely ignoring how human drivers will adapt to self-driving vehicles. The researchers in this study looked at the issue of "moral hazard" to see whether human drivers tend to behave more riskily while behind the wheel of a self-driving vehicle.
According to the researchers, human drivers tended to assume that the vehicles themselves would drive more conservatively, so the drivers were more likely to make risky decisions. The model the researchers developed accounts for the differing aims of lawmakers, drivers, manufacturers and the cars themselves. The researchers said they hope their tools will help policymakers craft regulations.
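The moral-hazard effect described above can be illustrated with a toy payoff model. This is a minimal sketch, not the researchers' actual model: the action names, crash probabilities and payoff numbers below are all assumed for illustration. A driver chooses between cautious and risky behavior while expecting the automated driving system to be either conservative or assertive; when the driver believes the automation will behave conservatively, the risky choice becomes the better-paying one.

```python
# Toy moral-hazard illustration (illustrative numbers only, not the
# researchers' model): a driver's payoff is time saved minus expected
# crash cost, where crash probability depends on both the driver's
# choice and the automation's driving style.

def driver_payoff(driver_action, automation_style):
    # Time saved by acting risky (assumed value).
    time_gain = 2.0 if driver_action == "risky" else 0.0
    # Assumed crash probabilities for each pairing; a conservative
    # system yields/brakes, so risky behavior rarely leads to a crash.
    p_crash = {
        ("risky", "conservative"): 0.01,
        ("risky", "assertive"): 0.20,
        ("cautious", "conservative"): 0.00,
        ("cautious", "assertive"): 0.01,
    }[(driver_action, automation_style)]
    crash_cost = 50.0  # assumed cost of a crash
    return time_gain - p_crash * crash_cost

def best_response(automation_style):
    # The driver picks whichever action maximizes expected payoff.
    return max(["cautious", "risky"],
               key=lambda a: driver_payoff(a, automation_style))

# Expecting conservative automation, risky driving pays off -- the
# moral hazard. Against assertive automation, caution wins.
print(best_response("conservative"))  # risky
print(best_response("assertive"))     # cautious
```

With these assumed numbers, risky driving against a conservative system earns 2.0 − 0.01 × 50 = 1.5, versus 0 for caution, so the safer the automation is assumed to be, the more risk the human takes.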
Many more regulations will need to be in place before self-driving vehicles become commonplace. Usually, determining who is liable in a motor vehicle accident involves looking at whether a driver was negligent. A manufacturer may be held responsible if the vehicle was defective in some way. And if the accident was caused by a driver who was on the job at the time, the driver's employer could be considered liable.