Despite decades of active research and development, the mass deployment of autonomous cars is still a future proposition. Ensuring the safety of autonomous systems remains a major hurdle.

Experts have estimated that at least 8 billion miles need to be driven to ensure autonomous cars are safe. Reaching this exhaustive level of testing through real-world driving alone is impractical given the time and cost such a trial-and-error approach entails. As a result, most companies are supplementing real-world testing with simulation. In 2018, Waymo announced that since 2009 its self-driving fleet had driven 8 million physical miles, alongside a far more substantial 5 billion simulated miles.

The simulated miles, of course, have to provide data at least as good as real-world testing to deliver benefits to the design and validation of autonomous systems. So how can the industry ensure and improve the quality of autonomous driving simulations, and thus of the systems themselves?

Assisted and autonomous driving software systems contain three overarching functional modules: perception, planning, and action. Among these, perception presents unique challenges. Autonomous and lower-level driver assistance systems are constantly gathering information about the vehicle’s surroundings through various sensing devices. 
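To make the split between these modules concrete, the sketch below outlines one possible set of interfaces in Python. It is purely illustrative: the type names, fields, and function signatures are assumptions made for this article, not any vendor’s actual API.

```python
# Illustrative only: one possible shape for the perception/planning/action
# split described above. Names and fields are assumptions, not a real API.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DetectedObject:
    label: str          # e.g. "car", "pedestrian"
    distance_m: float   # estimated range from the ego vehicle
    bearing_deg: float  # direction relative to the vehicle's heading

@dataclass
class DrivingCommand:
    brake: float        # 0.0 (none) to 1.0 (full)
    steer_deg: float
    throttle: float

def perception(sensor_frames: Dict[str, bytes]) -> List[DetectedObject]:
    """Fuse raw sensor data into a list of detected objects."""
    raise NotImplementedError

def planning(objects: List[DetectedObject]) -> DrivingCommand:
    """Decide how the vehicle should respond to what it perceives."""
    raise NotImplementedError

def action(command: DrivingCommand) -> None:
    """Actuate the brakes, steering, and throttle."""
    raise NotImplementedError
```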

While a single type of sensor, such as radar or a camera, can be enough to enable relatively simple features like forward collision monitoring, higher levels of vehicle autonomy require more contextual information to construct a complete understanding of the vehicle’s surroundings. Moreover, a system that depends entirely on object recognition naturally presents many problems. Unknown objects, suboptimal lighting, and adverse weather can confuse vision sensors, compromising the perception system and putting the safety of the passengers on board at substantial risk.

Therefore, a suite of sensors with complementary characteristics is used to improve the robustness of the perception system. Multi-modal sensor systems, however, produce more data, require higher computational power, and use more complex algorithms. In particular, simulations of such robust perception systems are computationally intensive and data rich, requiring many hours to simulate a driving scenario. To validate the more complex algorithms and hardware systems, a holistic and rapid testing methodology is needed to ensure advanced driving functions are reliable, safe, and consistently provide positive passenger experiences.

 

More efficient object detection

A more efficient means of object detection can help reduce the computational heft of complex AV sensor simulations. RoboK, a Cambridge University spinout, believes it has a solution.

RoboK focuses on AI-based perception algorithms for low-power computing platforms. The company has developed a new method for fusing raw sensor data and performing depth estimation on such hardware, making it particularly well suited to bringing high-end capabilities to low-cost vehicles. This method preserves rich information for more advanced functionalities, paving the way for truly scalable ADAS/AV systems. The company has implemented the method in 3D sensing software solutions based on a variety of sensors, including combinations of cameras and radars.

At the core of the RoboK solution is the efficient construction of 3D point clouds without needing GPUs or hardware acceleration. The 3D point clouds can be derived from laser scans, radar, or 3D reconstructions of camera images. Critically, the sparsity of the point clouds can be adjusted to lower the demand on memory bandwidth and to consume less power.
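As a rough illustration of what adjustable sparsity means in practice, the sketch below (a generic example, not RoboK’s implementation) back-projects a depth map into a 3D point cloud using pinhole camera intrinsics, then thins it with a voxel-grid filter whose cell size controls how sparse, and therefore how cheap to process, the cloud becomes.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) into an N x 3 point cloud
    using pinhole camera intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                          # drop pixels with no depth estimate
    z = z[valid]
    x = (u.ravel()[valid] - cx) * z / fx   # lateral offset
    y = (v.ravel()[valid] - cy) * z / fy   # vertical offset
    return np.column_stack((x, y, z))      # z is distance ahead of the camera

def voxel_downsample(points, voxel_size):
    """Keep one point per voxel; a larger voxel_size yields a sparser cloud,
    trading detail for lower memory bandwidth and compute."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

# Example: thin a synthetic 480 x 640 depth map at 0.2 m resolution.
depth = np.random.uniform(2.0, 50.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
sparse = voxel_downsample(cloud, voxel_size=0.2)
print(cloud.shape, sparse.shape)
```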

The company has also developed efficient detection models specifically for sparse 3D point clouds that ensure accurate object detection despite the limited 3D information. Many solutions to date have applied deep-learning models directly to raw 2D image data to determine, first, the object class, then its distance, and finally the likely time to collision with the object. RoboK instead applies machine-learning models to 3D point clouds to determine object class and depth simultaneously, resulting in more efficient object detection.
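The difference in pipeline order can be illustrated with a toy example: rather than classifying a 2D image first and estimating distance afterwards, a point-cloud detector can emit an object’s class and its depth from the same pass. The sketch below uses crude grid clustering and a placeholder `classify` function; it is only an illustration of the output structure, not RoboK’s model.

```python
import numpy as np

def detect_objects(sparse_cloud, classify):
    """Toy single-pass detector for a sparse point cloud: group points into
    clusters, then emit (class, distance) for each cluster in the same pass.
    `classify` stands in for a learned model and is purely a placeholder."""
    # Crude clustering: bucket points into 2 m x 2 m cells on the ground plane
    # (x = lateral, z = forward, matching the point cloud sketch above).
    cells = np.floor(sparse_cloud[:, [0, 2]] / 2.0).astype(np.int64)
    detections = []
    for cell in np.unique(cells, axis=0):
        cluster = sparse_cloud[np.all(cells == cell, axis=1)]
        if len(cluster) < 5:               # ignore sparse noise
            continue
        centroid = cluster.mean(axis=0)
        detections.append({
            "label": classify(cluster),                      # object class
            "distance_m": float(np.linalg.norm(centroid)),   # depth, same pass
        })
    return detections

# Example usage with a dummy classifier on the sparse cloud from the sketch above.
# detections = detect_objects(sparse, classify=lambda pts: "unknown")
```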

The efficient image processing enabled by point clouds and pixel-level sensor fusion algorithms can be performed on low-power CPUs, such as Arm Cortex-A53 cores, without GPUs or hardware accelerators. The perception modules can therefore be integrated at the edge with sensor modules to provide intelligent insights without the need to transmit immense amounts of raw data for central processing.
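The bandwidth argument can be made concrete with a back-of-the-envelope comparison. The numbers below are illustrative assumptions, not vendor figures: one raw camera frame versus a serialised list of edge-side detections for the same scene.

```python
import json

# One raw 1920 x 1080 camera frame at 3 bytes per pixel (illustrative numbers).
raw_frame_bytes = 1920 * 1080 * 3

# The same scene summarised as edge-side detections (illustrative content).
detections = [
    {"label": "car", "distance_m": 23.4, "bearing_deg": -4.1},
    {"label": "pedestrian", "distance_m": 11.8, "bearing_deg": 12.7},
]
insight_bytes = len(json.dumps(detections).encode("utf-8"))

print(f"raw frame:  {raw_frame_bytes:,} bytes")   # about 6.2 MB
print(f"detections: {insight_bytes:,} bytes")     # on the order of 100 bytes
print(f"reduction:  ~{raw_frame_bytes // insight_bytes:,}x")
```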

 

Validating advanced driving functions through collaboration

Siemens Digital Industries Software has collaborated with RoboK to build and validate advanced driving systems in virtual environments. These environments are built on PAVE360, a pre-silicon validation environment that offers closed-loop simulation for designing and validating autonomous and ADAS systems.

The combined validation platform can generate high-fidelity virtual environments and simulated vehicle dynamics that are fed into virtual integrated circuit (IC) devices and self-driving algorithms to validate vehicle behavior.

RoboK’s 3D perception algorithms perform sensor fusion, enabling the virtual vehicle to “see” its environment, and can process data in real time on the virtual platform. With this validation platform, AV and ADAS designers can ensure that every software or hardware design iteration is tested and validated virtually, quickly, and, most importantly, before any hardware is produced.

To demonstrate the capabilities of the validation platform, RoboK and Siemens used the 3D perception module as part of an autonomous emergency braking (AEB) system.

Part of the braking decision logic is handled by the virtual ADAS ECU running on an Arm virtual platform. This ECU runs a full ADAS software stack that uses simulated camera sensors to determine the depth and classification of nearby objects.
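For a sense of what such decision logic might look like, the fragment below shows a generic time-to-collision (TTC) check. It is a minimal sketch: the thresholds and the return convention are assumptions for illustration, not the actual software stack running on the virtual ECU.

```python
def aeb_decision(distance_m, closing_speed_mps, ttc_warn_s=2.5, ttc_brake_s=1.2):
    """Generic AEB logic: compare time-to-collision against thresholds.
    Returns a brake command between 0.0 (none) and 1.0 (full)."""
    if closing_speed_mps <= 0:          # object is not getting closer
        return 0.0
    ttc = distance_m / closing_speed_mps
    if ttc < ttc_brake_s:
        return 1.0                      # emergency braking
    if ttc < ttc_warn_s:
        return 0.3                      # partial braking / brake pre-fill
    return 0.0

# A pedestrian 15 m ahead with a closing speed of 10 m/s gives TTC = 1.5 s.
print(aeb_decision(15.0, 10.0))         # 0.3 (partial braking)
```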

RoboK’s perception system processes the incoming synthesized visual data, generated in high fidelity by Siemens Simcenter Prescan, reconstructing 3D scenes and creating point clouds from it. The perception system then feeds the resulting point clouds to the ADAS ECU. With the core sensing algorithms running in real time on the virtual ECU, the software stack can be tested just as it would be in a production environment.

As the sensor signals are processed, obstacles and distances are fed into the ECU, and driving decisions involving braking, steering, or acceleration are sent back to the simulated world managed by Simcenter Prescan to update the sensor models. This forms a closed-loop simulation, enabling the entire vehicle to be tested against an unlimited number of complex driving scenarios.
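Conceptually, that loop can be sketched as a single step function. The outline below is schematic only; the `simulator`, `perception`, and `ecu` objects are placeholders and do not correspond to the real PAVE360 or Simcenter Prescan APIs.

```python
def run_closed_loop(simulator, perception, ecu, n_steps=1000, dt=0.05):
    """Schematic closed-loop step: synthesize sensor data, perceive, decide,
    then feed the decision back into the simulated world."""
    for _ in range(n_steps):
        sensor_frames = simulator.render_sensors()   # synthetic camera/radar data
        objects = perception(sensor_frames)          # point clouds -> detections
        command = ecu.decide(objects)                # braking / steering / throttle
        simulator.apply(command, dt)                 # update vehicle and world state
```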

 

Enabling scalable AV systems with efficient detection algorithms

A lightweight, fast 3D point-cloud processing algorithm that can run without a dedicated graphics processing unit forms the foundation of a scalable AV or ADAS system. On top of such an efficient perception module, planning and action modules can then be built to support advanced driving functions for higher degrees of autonomy.

Furthermore, 3D point-cloud algorithms can deliver greater efficiency and speed to virtual AV validation platforms. In virtual worlds, developers enjoy the luxury of testing and updating software and hardware designs with full flexibility. Integrating a 3D perception system enables a holistic methodology that can validate perception, planning, and action modules in a more streamlined and automated manner.

With comprehensive and realistic testing that covers both formal methods and corner cases, rapid iteration and the optimisation of algorithms for performance and efficiency become possible. Overall, the goal of shrinking development cycles while meeting validation and compliance objectives can be achieved not only for premium vehicles but for low-cost vehicles as well.