Can we regulate AVs wisely when they still have so much to learn?
As autonomous vehicles edge closer to becoming part of our everyday lives, governments around the globe are crafting regulations to protect those cars, their passengers, and everyone around them. In the U.S., California recently announced its latest guidelines for self-driving cars, striving for all-encompassing safety. But the new regulations don’t reflect the whole truth about autonomous vehicles: AVs still haven’t learned how to perceive and process all the data they need to safely share the road with pedestrians, bicyclists, and human drivers. In fact, as of now, self-driving cars lack the powers of perception that even babies rely on to read the world around them.
The 94 percent solution
An oft-repeated statistic claims that autonomous cars will one day be 94 percent safer than human-driven vehicles. The inference is questionable (it extrapolates from the finding that 94 percent of U.S. car crashes involve human error), but it spotlights the appeal of smart cars. With sophisticated sensors, cameras, radar, LiDAR, and GPS, self-driving vehicles are essentially “superhuman,” as one Ford spokesman puts it. The technology can detect potential danger and react to it much faster than people ever could, and is therefore better equipped to avoid accidents than most drivers.
However, some experts argue that comparing self-driving vehicles to human drivers is impossible, because we have no substantial data on how human drivers actually perform. Moreover, humans will remain superior drivers in some respects for years to come, even as the artificial intelligence behind autonomous vehicles evolves. Any “alert, attentive, sober driver” is “really good at avoiding crashes,” according to Tom Dingus, director of the Virginia Tech Transportation Institute. “It will be difficult making automated vehicles that are as good at avoiding crashes as you are.”
Uber’s tragic error
From “a human perspective,” autonomous car accidents “seem strange” because AVs rarely brake before a crash, explains Bart Selman, director of Cornell University’s Intelligent Information Systems Institute. “Vehicles make decisions based on what their sensors detect. If the sensors don’t detect anything, the vehicle won’t react at all.”
This appeared to be the case when an Uber self-driving car struck and killed Elaine Herzberg in March 2018 as she walked her bicycle across a dark street in Tempe, Arizona. It was initially believed that the sensors had mysteriously failed to register her presence. Two months later, an internal investigation revealed that the sensors had in fact detected the pedestrian, but the software system dismissed the data as a harmless “false positive.” The Information reports that the system had been “tuned” to suppress the car’s reaction to false positives in order to provide a smoother ride.
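To see how that kind of tuning can backfire, consider a minimal, hypothetical sketch of a perception filter. This is not Uber’s actual software; the Detection class, the should_brake function, and the threshold value are illustrative assumptions only.

```python
# Hypothetical sketch of a detection filter tuned to suppress false positives.
# Not Uber's actual software: Detection, should_brake, and the 0.9 threshold
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "pedestrian", "bicycle", "unknown"
    confidence: float  # classifier confidence in [0, 1]

def should_brake(detections: list[Detection], threshold: float = 0.9) -> bool:
    """Trigger braking only for detections above the confidence threshold.

    Raising the threshold smooths the ride by ignoring likely false
    positives, but it also raises the odds of dismissing a real hazard
    that the sensors did, in fact, detect.
    """
    return any(d.confidence >= threshold for d in detections)

print(should_brake([Detection("pedestrian", 0.70)]))  # False -> no reaction
print(should_brake([Detection("pedestrian", 0.95)]))  # True  -> brake
```

The trade-off lives entirely in that one threshold: tune it high enough and the car stops flinching at shadows, but a genuine pedestrian detected with middling confidence is silently discarded.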
Understanding human cues
People clearly excel in one area that self-driving technology struggles with: the subtleties of non-verbal communication. From infancy, babies learn to communicate with their eyes and facial expressions, and by nodding and shaking their heads. So it’s only natural for drivers, pedestrians, and bicyclists to exchange information this way on the road, especially in congested urban areas. With just a glance, the slightest nod, or a small wave of the hand, drivers can tell a pedestrian to go ahead and cross the street, assure a cyclist that it’s okay to pass, or invite another driver to merge into traffic.
These things we do automatically, with little thought, are beyond the capabilities of autonomous vehicles. It’s hard to write an algorithm detailed enough to learn such a complex combination of movements and facial expressions, and just as hard for an AV to infer the human intentions behind them. For an autonomous car, that simple nod to the bicyclist represents a very sophisticated negotiation.
Consider this classic out-of-the-ordinary situation: a police officer stands in the middle of an intersection, directing traffic and overriding the lights. As human drivers, we instantly understand what’s happening and know what to do. But a self-driving car faces a flood of factors to sort out (the officer’s location, the uniform, the intersection, the traffic lights) and a decision to make: what to do now?
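One crude way to picture the end state the AV must reach is a precedence rule: once an officer is recognized and the gesture decoded, the gesture outranks the light. The sketch below is a hypothetical simplification; intersection_command and its inputs are illustrative assumptions, not any real AV planner’s API.

```python
# Hypothetical precedence rule: a recognized traffic officer's hand signal
# overrides the traffic light. All names here are illustrative assumptions,
# not a real AV planning API.
from typing import Optional

def intersection_command(light_state: str,
                         officer_present: bool,
                         officer_signal: Optional[str] = None) -> str:
    """Resolve conflicting cues at an intersection.

    A human driver applies this precedence instantly; an AV must first
    recognize the officer, decode the gesture, and only then rank it
    above the light.
    """
    if officer_present and officer_signal is not None:
        return officer_signal  # e.g., "stop" or "proceed"
    return "proceed" if light_state == "green" else "stop"

print(intersection_command("green", officer_present=True, officer_signal="stop"))
# -> "stop": the officer's gesture outranks the green light
```

The rule itself is trivial; the hard part is everything upstream of it, namely detecting the officer and reading the gesture reliably in the first place.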
The case for simulation
Until now, carmakers have chosen between two unsatisfying paths: program the AV to proceed with extreme caution, which can worsen congestion and traffic jams, or ignore the issue altogether. Simulation offers a third, more effective option.
A simulation platform combines artificial intelligence with deep learning and computer vision to road-test autonomous vehicles in a virtual environment. Simulation can recreate virtually any situation at full scale, offering a highly realistic, completely safe way to test and train self-driving cars.
With our example of the officer directing traffic, simulation trains the vehicle in daylight and in darkness, in all weather conditions, until the machine recognizes the officer’s gaze, facial expressions, and hand gestures, and understands what they mean. The vehicle is then prepared to handle the situation in real life, even to the point of knowing that a teenager in a hoodie gesturing from the middle of the street should be ignored.
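In practice, that kind of coverage comes from sweeping a single scenario across many conditions. Here is a minimal sketch of what such a sweep might look like; the scenario fields and the officer_scenarios function are illustrative assumptions, not any simulation vendor’s actual API.

```python
# Hypothetical sketch of generating training variants of one scenario
# across lighting and weather. The field names are illustrative
# assumptions, not any simulation vendor's actual API.
from itertools import product

LIGHTING = ["dawn", "noon", "dusk", "night"]
WEATHER = ["clear", "rain", "fog", "snow"]

def officer_scenarios():
    """Yield every lighting/weather combination of the traffic-officer scene."""
    for lighting, weather in product(LIGHTING, WEATHER):
        yield {
            "scene": "officer_directing_traffic",
            "lighting": lighting,
            "weather": weather,
            "actors": ["traffic_officer", "crossing_pedestrian"],
        }

# 4 lighting x 4 weather = 16 variants of the same edge case, each one
# safe to fail in and none requiring a real intersection.
for scenario in officer_scenarios():
    print(scenario["lighting"], scenario["weather"])
```

Every variant exercises the same rare event, which is precisely the kind of repetition that real-world road testing struggles to provide.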
New rules for a new era
California’s regulations, under review since 2012, are now in full effect, with one particularly important revision: human backup drivers are no longer required to ride along in autonomous vehicles. The change signals trust that carmakers have tested their vehicles “under controlled conditions that simulate, as closely as practicable,” the local drivers and pedestrians, weather, landscape, and road conditions they’ll encounter every day. And further down the road, AVs might be trusted to perceive the world in a more human way.