The dangers of Procrustean design in automated driving
Procrustes is a figure of Greek legend. He was an innkeeper on an ancient road who would host weary travelers with food, entertainment, and a special iron bed. But there was a catch: guests of Procrustes had to fit his bed exactly, regardless of how tall or short they were. Once a guest was trapped in the bed, Procrustes would stretch or, ahem, “shorten” his guest's legs to fit his exacting standards.
Now fast-forward several thousand years, and picture this all-too-familiar scenario: You slide into a rental car, take a quick glance at the owner's manual, turn on the ignition, and five minutes later merge onto the motorway. After a few minutes, you notice a small, strange icon on the instrument cluster. A few minutes later, it disappears. Then it reappears, but now seems to be a different color. Some random lines have appeared as well, and you wonder whether they were there before and you simply hadn't noticed.
What does all of this mean?
What is the car trying to tell you? Is it meant to be reassuring or meant to encourage an action? Are you in danger?
In semi-automated systems currently on the road (from park assist, to traffic-jam assist, to limited highway pilots), this is the exact scenario automakers are setting up for drivers. Controls are hidden. System status displays are extraordinarily unclear, reliant on tiny icons or nonsensical text warnings. Handoffs and takeovers are unreliable and system-centric. Warnings are intermittent and sometimes nonexistent.
The very usefulness of these on-market features remains questionable as well. For example, when traffic-jam assist is active, if the process to keep the driver in the loop involves re-grasping the steering wheel every 10 seconds, what value does this add to the driving experience? How is cognitive load truly being alleviated?
This is Procrustean design
It invites the driver into an iron bed of “approved” iconography and standards, which do nothing but confuse when placed into the context of a real-world drive. It forces users into a system designed for the automaker, rather than the user.
The Procrustean design problem, as we find from our wide range of consumer-focused research, is systemic. Automakers breathlessly rush this (undoubtedly impressive) technology to market with equipment and HMI (human-machine interface) assets that pre-date semi-autonomous software and ignore many aspects of the human factor. Dealers, especially those from high-volume makes, are then woefully unprepared to educate potential buyers about the technology. Consumers then purchase the vehicle with the automated feature largely as a "box-ticking" exercise, and (at best) return for a second delivery because they do not understand how it works. At worst, they use the automated feature in inappropriate ways on the road.
We are now finding that this pattern of Procrustean design has direct consequences. Consumer interest in park assist and other automated driving features has largely levelled off. In benchmarks evaluating new iterations of automated-driving features, we have found that designers are not making these systems more user-centric. And despite the spread of semi-autonomous features throughout model lines, wide swaths of consumers (including those in key car-buying demographics) remain hesitant about the features. Among those who show no interest, with whom the industry must work hardest to gain trust, a large percentage now question whether they will ever trust automated-driving technology.
The industry is now poised to take an even more troubling step, as many players are setting up the "next level" of automation for failure.
Among early Level 4 pilot tests, HMI is a slapped-together afterthought, using improvised tablet-based interfaces and largely the same cluster strategies we see now in on-market vehicles. It is an ecosystem that bootstraps the feature onto existing equipment, rather than developing the system organically from the human outward. It is an ecosystem that relies on intense operator training, rather than an operator-centric design approach. Ultimately, it is an ecosystem that rewards providers who are first to the roadway or who elicit the most media buzz, rather than rewarding those who design safe, usable, and useful systems.
This trend has been especially frustrating to witness, because it can be so easily averted. Automakers and suppliers have the tools to enable human-outward design. Customizable digital clusters and distraction monitoring could easily be leveraged to build custom HMI specifically for safer Level 4 test vehicles. But instead, many hoping to cash in on the AV craze are myopically focused on other “easier” tasks like roadway mapping, obstacle detection, and far-future concepts for passenger comfort.
This approach is arguably even more dangerous than the rollout of park-assist and automated-driving features in on-market vehicles.
The very nature of Level 4 automation gives permission for the operator to engage in non-driving-related behavior.
We already know from existing academic and industry research that when offered Level 4 automation, operators no longer naturally attend to the road around them. Rather, they “check out.” They will take a nap. They might use their mobile devices. And even with extended warning periods (such as those we will supposedly realize with Level 4 systems), reaction times skyrocket.
This should alarm any stakeholder who tests automated vehicles on public roadways. In testbeds where Level 4 cars share the road with human-driven cars, pedestrians, and other live obstacles, Level 4 automation that simply attempts to “train out” human nature is not sufficient.
The entire compendium of human-factors research (including efforts that were used as the basis for several military standards) has found that we humans are quite resilient and can be trained to do all sorts of things. But unless more sophisticated driver-monitoring systems become standard, or we see an abrupt change in licensing procedures, no amount of training will ever prevent a typical human operator from using a Level 4 system to engage in non-driving behavior at inopportune times.
As human-factors specialists, we are taught that there is no such thing as human error.
There is only design error. Other industries have already learned this lesson; the aviation, military, and health-care fields have reaped the benefits of evolutions in advanced technology and automation. But these benefits came only after learning hard lessons about design practices and incorporating those lessons into their processes. The common thread running through all of these lessons was an intent focus on the end user: designing the technology (whether an autopilot system, body armor, or a radiation-therapy machine) to work around the human, rather than forcing the human to work around the technology.
Autonomous driving is another evolution. The automotive industry and those attempting to grab a future piece of this industry are only experiencing the beginning of this evolution, but thus far appear intent on ignoring the lessons these previous evolutions taught other fields. The future is exciting, but by ignoring 60 years of lessons previous evolutions have taught us, the near-future of transport is destined to be Procrustean.