Nvidia announced Nvidia Drive AutoPilot, a commercially available Level 2+ automated driving system that integrates multiple AI technologies and is designed to enable supervised self-driving vehicles to go into production next year.
At CES 2019, Continental and ZF announced Level 2+ self-driving solutions based on Nvidia Drive, with production starting in 2020.
As a Level 2+ self-driving solution, Nvidia Drive AutoPilot provides autonomous driving perception and a cockpit with artificial intelligence (AI) capabilities. Vehicle manufacturers can use it to bring automated driving features to market, along with intelligent cockpit assistance and visualization capabilities.
“A full-featured, Level 2+ system requires significantly more computational horsepower and sophisticated software than what is on the road today,” said Rob Csongor, Vice President of Autonomous Machines at Nvidia. “Nvidia Drive AutoPilot provides these, making it possible for carmakers to quickly deploy advanced autonomous solutions by 2020 and to scale this solution to higher levels of autonomy faster.”
Drive AutoPilot integrates high-performance Nvidia Xavier system-on-a-chip (SoC) processors and the latest Nvidia Drive software, running many deep neural networks (DNNs) for perception and processing complete surround camera sensor data from outside the vehicle and inside the cabin. This combination reportedly enables full self-driving autopilot capabilities, including highway merge, lane change, lane splits, and personal mapping. Inside the cabin, features include driver monitoring, AI copilot capabilities, and advanced in-cabin visualization of the vehicle’s computer vision system.
Drive AutoPilot is part of the Nvidia Drive platform. The new Level 2+ system complements the Nvidia Drive AGX Pegasus system that provides Level 5 capabilities for robotaxis.
Drive AutoPilot addresses the limitations of existing Level 2 ADAS systems. A recent Insurance Institute for Highway Safety study found that such systems offer inconsistent vehicle detection and struggle to stay within lanes on curvy or hilly roads, resulting in frequent disengagements in which the driver abruptly had to take control.
“Lane keeping and adaptive cruise control systems on the market today are simply not living up to the expectations of consumers,” said Dominique Bonte, Vice President of Automotive Research at ABI Research. “The high-performance AI solutions from Nvidia will deliver more effective active safety and more reliable automated driving systems in the near future.”
Central to Nvidia Drive AutoPilot is the Xavier SoC, which is designed to deliver 30 trillion operations per second of processing capability. Architected for safety, Xavier incorporates redundancy and diversity: six types of processors and 9 billion transistors enable it to process vast amounts of data in real time.
The Drive AutoPilot software stack integrates Drive AV software for handling challenges outside the vehicle, as well as Drive IX software for tasks inside the car.
Drive AV uses surround sensors for full, 360-degree perception and features accurate localization and path-planning capabilities. These enable supervised self-driving on the highway, from on-ramp to off-ramp. Going beyond basic adaptive cruise control, lane keeping, and automatic emergency braking, its surround perception capabilities handle situations where lanes split or merge, and allow the vehicle to perform lane changes safely.
Drive AV also includes DNN technologies that enable the vehicle to perceive a wide range of objects and driving situations, including DriveNet, SignNet, LaneNet, OpenRoadNet, and WaitNet. Nvidia says this AI software understands where other vehicles are, reads lane markings, detects pedestrians and cyclists, distinguishes different types of lights and their colors, recognizes traffic signs, and understands complex scenes.
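The article does not describe how the outputs of these specialized networks are combined, but the general pattern of running several task-specific detectors over a camera frame and merging the results into one scene description can be sketched roughly as follows. The network names match those in the article, but the functions, classes, and outputs here are purely illustrative stand-ins, not Nvidia's actual DriveWorks API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "car", "lane_marking", "stop_sign"
    source: str       # which specialized network produced it
    confidence: float

# Stand-ins for the specialized DNNs named in the article. Real networks
# would consume camera frames; these return canned results for illustration.
def drivenet(frame):   # obstacle and vehicle detection
    return [Detection("car", "DriveNet", 0.97)]

def lanenet(frame):    # lane-marking detection
    return [Detection("lane_marking", "LaneNet", 0.92)]

def signnet(frame):    # traffic-sign recognition
    return [Detection("stop_sign", "SignNet", 0.88)]

def perceive(frame, min_conf=0.5):
    """Run each specialized network and merge results into one scene list."""
    scene = []
    for net in (drivenet, lanenet, signnet):
        scene.extend(d for d in net(frame) if d.confidence >= min_conf)
    return scene

scene = perceive(frame=None)
print([d.label for d in scene])  # ['car', 'lane_marking', 'stop_sign']
```

In a production stack, each detector would run as an optimized inference engine on the Xavier SoC and the merge step would also fuse detections across multiple cameras; the sketch only shows the single-frame, single-camera shape of the idea.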
In addition to providing precise localization against the world’s HD maps for positioning the vehicle on the road, Drive AutoPilot offers a new personal mapping feature called “My Route,” which remembers where people have driven and can create a self-driving route even when no HD map is available.
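Nvidia has not published how “My Route” works internally, but conceptually a personal-mapping feature of this kind amounts to recording a breadcrumb trail of positions on a drive and retrieving it later as a reference path where no HD map exists. A minimal sketch of that idea, with all names and structures hypothetical:

```python
# Minimal sketch of a personal-mapping ("breadcrumb") store: record the
# positions observed while the person drove a route, then retrieve them
# later as a reference path when no HD map covers the area.
# Names and structure are illustrative, not Nvidia's implementation.

class PersonalMap:
    def __init__(self):
        self.routes = {}  # route name -> list of (lat, lon) waypoints

    def record(self, name, waypoints):
        """Store the trail of positions from a completed drive."""
        self.routes[name] = list(waypoints)

    def route_for(self, name):
        """Return the remembered path, or None if this route was never driven."""
        return self.routes.get(name)

pm = PersonalMap()
pm.record("home_to_office", [(37.3861, -122.0839), (37.3875, -122.0820)])
path = pm.route_for("home_to_office")
print(len(path))  # 2 waypoints remembered
```

A real system would additionally smooth and deduplicate the recorded trail across repeated drives; the sketch only captures the record-and-recall core of the feature.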
Within the vehicle, Drive IX intelligent experience software enables occupant monitoring to detect distracted or drowsy drivers, provide alerts, and take corrective action if needed. It is also used to create intelligent user experiences, including new augmented reality capabilities. Displaying a visualization of the surrounding environment sensed by the vehicle, as well as the planned route, instills trust in the system.
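The article does not say how the drowsiness detection is implemented. One common approach in driver-monitoring systems generally is to track how long the driver's eyes stay closed and raise an alert once that duration passes a threshold. A rough, generic sketch of that logic, not Nvidia's actual algorithm, with illustrative thresholds:

```python
# Generic driver-monitoring sketch: flag drowsiness when the eyes have
# been closed for longer than a threshold. In a real system the
# eyes_closed signal would come from a camera-based DNN; here it is a
# plain boolean input. The 1.5 s limit is illustrative.

class DrowsinessMonitor:
    def __init__(self, closed_limit_s=1.5):
        self.closed_limit_s = closed_limit_s
        self.closed_for_s = 0.0   # accumulated continuous eye-closure time

    def update(self, eyes_closed, dt_s):
        """Feed one frame's eye state; return True if an alert should fire."""
        if eyes_closed:
            self.closed_for_s += dt_s
        else:
            self.closed_for_s = 0.0  # eyes open resets the timer
        return self.closed_for_s > self.closed_limit_s

mon = DrowsinessMonitor()
alerts = [mon.update(eyes_closed=True, dt_s=0.5) for _ in range(4)]
print(alerts)  # [False, False, False, True]
```

The same accumulate-and-threshold pattern extends naturally to distraction cues such as sustained off-road gaze, which is why production driver-monitoring stacks typically combine several such signals.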
For next-generation user experiences in the vehicle, the AI capabilities of Drive IX can also be used to accelerate natural language processing, gaze tracking, or gesture recognition.
Continental is developing a scalable and affordable automated driving architecture that bridges from Premium Assist to future automated functionalities. It uses Continental’s portfolio of radar, LiDAR, camera, and automated driving control unit technologies, powered by Nvidia Drive.
ZF ProAI offers a unique modular hardware concept and open software architecture using Nvidia Drive Xavier processors and Drive software.