In early June, the IIHS (Insurance Institute for Highway Safety) announced that its recent study shows self-driving vehicles could struggle to eliminate most crashes. While automation has been promoted as a potential game changer for safety, eventually eliminating most crashes due to driver error, the institute says that AVs (autonomous vehicles) might prevent only about a third of all crashes if automated systems drive too much like people.
IIHS researchers examined more than 5,000 police-reported crashes from the National Motor Vehicle Crash Causation Survey and broke them down by type of driver error: sensing and perceiving (24%), predicting (17%), planning and deciding (39%), execution and performance (23%), and incapacitation (10%).
They found that only the crashes due to sensing and perceiving errors and to incapacitation, totaling 34%, might be avoided if all vehicles on the road were self-driving, and even that would require sensors that worked perfectly and systems that never malfunctioned.
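The arithmetic behind the study's headline figure can be sketched directly from the category shares reported above. The dictionary and helper below are illustrative, not part of the IIHS analysis; only the percentages come from the study.

```python
# Category shares as reported in the IIHS study (fractions of crashes).
CATEGORY_SHARES = {
    "sensing_and_perceiving": 0.24,
    "predicting": 0.17,
    "planning_and_deciding": 0.39,
    "execution_and_performance": 0.23,
    "incapacitation": 0.10,
}

def avoidable_share(categories):
    """Sum the crash shares for the given error categories."""
    return sum(CATEGORY_SHARES[c] for c in categories)

# Crashes the study treats as avoidable by perfect sensing and
# the absence of driver incapacitation:
baseline = avoidable_share(["sensing_and_perceiving", "incapacitation"])
print(round(baseline * 100))  # → 34
```

Note that the shares sum to more than 100%, which is consistent with a single crash involving more than one type of error.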
To avoid the other two-thirds, AVs would need to be specifically programmed to prioritize safety over speed and convenience and to avoid other types of prediction, decision-making, and performance errors. Self-driving vehicles will need not only to obey traffic laws but also to adapt to road conditions and implement driving strategies that account for uncertainty about what other road users will do, such as driving more slowly than a human driver would in areas with high pedestrian traffic or in low-visibility conditions.
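The kind of uncertainty-aware strategy described above can be pictured as a simple rule that trades speed for margin of error. This is a toy sketch, not any developer's actual planner; the function name, thresholds, and inputs are all invented for illustration.

```python
def target_speed(speed_limit_kmh: float,
                 pedestrian_density: float,  # pedestrians per 100 m of curb (assumed metric)
                 visibility_m: float) -> float:
    """Illustrative only: scale target speed down under uncertainty.

    A human driver might hold the posted limit; a safety-first planner
    could slow well below it when pedestrians are present or visibility drops.
    """
    speed = speed_limit_kmh
    if pedestrian_density > 5:       # busy sidewalk (assumed threshold)
        speed = min(speed, 30.0)
    if visibility_m < 100:           # fog or heavy rain (assumed threshold)
        speed = min(speed, 0.5 * speed_limit_kmh)
    return speed

# A 50 km/h street, crowded sidewalk, poor visibility:
print(target_speed(50, pedestrian_density=8, visibility_m=60))  # → 25.0
```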
“Our analysis shows that it will be crucial for designers to prioritize safety over rider preferences if autonomous vehicles are to live up to their promise to be safer than human drivers,” said IIHS Research Scientist Alexandra Mueller, lead author of the study.
In response to the IIHS findings and resulting stories in the media, PAVE (Partners for Automated Vehicle Education) posted a counterpoint on Medium. The organization argues that AVs will go beyond addressing just the factors in the sensing and perceiving and incapacitation categories—the 34%.
It says that AV developers emphasize that safety is the primary aim of their work. PAVE points to Chris Urmson, CEO of PAVE member Aurora, who has said that his company is “proactively and intentionally taking the time to program our vehicles to operate like model drivers,” always following the existing rules no matter the jurisdiction. Another PAVE member, Argo AI, has explained in depth how its safety-focused culture permeates its testing policies, not just its system design.
The organization also cites the AV safety framework “Safety First for Automated Driving” (https://intel.ly/3dpqaG7), developed by PAVE members, among others, which says that an unsafe situation that cannot otherwise be safely handled calls for the AV to “prioritize traffic rules while making a safety-assured maneuver.”
Given the lack of evidence that AV developers will allow their vehicles to prioritize speed or illegal maneuvers over safety, PAVE concludes that another 37% of potential crashes could be prevented by AVs. In addition, it emphasizes that AVs have a huge advantage over humans when it comes to environmental conditions and the category of “execution” errors: these generally reduce to physics calculations, such as coefficients of traction for different road surfaces, that are complex for a human but well proven in traction and stability control systems.
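The traction-coefficient point can be illustrated with the standard ideal braking-distance formula, d = v² / (2µg). The surface coefficients below are rough textbook-style values chosen for illustration; they do not come from the study.

```python
G = 9.81  # gravitational acceleration, m/s^2

# Rough, illustrative coefficients of traction by surface.
SURFACE_MU = {"dry_asphalt": 0.8, "wet_asphalt": 0.5, "ice": 0.1}

def braking_distance_m(speed_kmh: float, surface: str) -> float:
    """Ideal braking distance d = v^2 / (2 * mu * g), ignoring reaction time."""
    v = speed_kmh / 3.6  # km/h -> m/s
    mu = SURFACE_MU[surface]
    return v * v / (2 * mu * G)

for surface in SURFACE_MU:
    print(f"{surface}: {braking_distance_m(100, surface):.0f} m")
```

The same computation a stability-control system performs continuously would tax a human driver, which is the advantage PAVE highlights.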
While AVs may not prevent all road deaths, a significant reduction even in the early stages of rollout would be a good start. Regardless of the current state of the debate, as automated vehicles and systems are further developed and brought to market, their life-saving potential will only improve.