Intel focuses on research to build trust in AVs
The public’s apprehension about riding in autonomous vehicles (AVs) has been declining, but it is still significant. The latest survey by AAA (American Automobile Association), taken just weeks after the late-March Uber pedestrian fatality in Tempe, AZ, showed only slight sensitivity to news reports about it (perhaps within the statistical margin of error). The first 2018 survey, in January, had put reluctance to ride at 63%, with a blip upward to 65% in late March following the heavily publicized fatality.
The long-term trend is a slow but steady shift toward a favorable attitude, although the industry still faces numerous challenges in building passenger trust. The issues involved are being identified and addressed, noted Matt Yurdana, a creative director in Intel’s Internet of Things Experiences Group, speaking at the SAE WCX technical forum in April. Intel has focused considerable resources on building human trust in autonomous transportation.
He discussed a simple AV shuttle-type operation with a short route set up by Intel at its campus in Chandler, AZ. The system used a phone app to request the vehicle and schedule the trip, much like ride-hailing services such as Uber and Lyft. When the shuttle arrived for pickup, a screen near the rear side window displayed a welcome message addressing the passenger by name. That was intended to assure passengers that the vehicle was there for them. It was a first step in giving passengers a feeling of control of the overall situation, even though they weren’t driving.
The short drive was at legal speed limits; the shuttle announced and then stopped to allow a pedestrian to walk past. It slowed to nearly a complete stop to pass relatively smoothly over speed bumps, and came to complete stops at stop signs and at traffic lights that had turned red. As designed, the route included a construction-mandated detour, and the vehicle responded smoothly, announcing that it had identified the situation and was recalculating the route. When it arrived at its destination, it pulled to the side of the road and notified the passenger that it was safe to exit through the left side door.
Intel did extensive debriefings of 10 passengers to find out not only what they liked and didn’t like, but also what changes they wanted to see, with a special focus on seven areas of contradiction that passengers expressed. They were:
Human vs. machine judgment: passengers worried about traffic nuances and reaction time in the absence of human judgment, but believed that AVs, by avoiding human error, could be safer.
Personalized space vs. lack of assistance: a sole passenger gets a taxi-like “just for me” experience, but there were concerns about how the disabled, the elderly, and children would get in and out of the car.
Finding the right amount of communication between passenger and vehicle: making people aware of issues during the drive vs. unburdening the passenger from having to be aware.
Giving up control of the vehicle vs. providing new forms of control.
Showing how the technology works vs. providing proof that it works: seeing proof that the vehicle was correctly assessing the road situation and properly responding gave passengers confidence.
Telling the passenger vs. listening to the passenger: being able to “converse” with the vehicle, as with a human driver, was seen as an advantage, particularly for requesting route changes or an immediate stop.
Following the rules of safe driving vs. human interpretation of the rules: passengers recognized that their own behavior as drivers wasn’t always safe or necessarily by the book (texting, etc.), an acknowledgment that varied by age, experience, and driving style. Having given up control of the vehicle, they expected “by the book” safe driving. They also expected good physical protection in the vehicle.
The automatic turning of the steering wheel, Intel found, had the adverse effect of telling many passengers they were not in control. However, Yurdana said, “engineers loved it.”
The human voice making the announcements was a trust enhancer, Intel found, but the passengers indicated they would like some interaction, such as being able to talk to the vehicle as if there were a human driver, to modify a destination, or possibly to ask it to slow down or even stop if the weather changed suddenly, such as during a storm. Weather changes and autonomous operation in darkness are long-known passenger concerns, with pedestrian detection a particular worry following the Tempe, AZ, fatality.
Interestingly, as people became more familiar with the operation of the vehicle, they grew so confident that its basic operation was safe that they wanted less information, with no need to keep announcing that it was stopping for a traffic light, a pedestrian, etc. And at some points on the route, they would have liked the vehicle to go a bit faster, such as on obviously empty roads in good weather.
Yurdana noted that riders have some familiarity with shared transportation: trains, buses, planes, and even autonomous systems such as monorails. But those cabins are large, so even when they’re crowded, the number of unrelated passengers produces a sense of privacy. Furthermore, where there is a shared taxi-type service such as Via, the ride is short enough that the unrelated passengers typically do not face a relationship issue.
SAE also set up an indoor route during WCX for a Navya autonomous shuttle. The French-developed vehicle operates in more than 60 cities in 11 countries around the world, including a noteworthy shuttle in downtown Las Vegas. Although it has a maximum speed of 45 km/h (28 mph), it was derated to 15 km/h (10 mph) for the SAE operation. It has a passenger capacity of 15 (11 seated), including one passenger/emergency driver who merely sat for the ride, as if just a passenger. If necessary, he could quickly reach into a compartment for a joystick-type controller and operate the vehicle manually. However, it ran uneventfully in autonomous mode for the duration of the event.