Autonomous Vehicles (AVs) are quickly moving from hype to reality. According to Emerj Artificial Intelligence Research’s recent report documenting plans from 11 of the biggest carmakers, several should be available as early as next year, including models from Honda, Toyota, and Renault-Nissan.

However, it is clear that deploying AVs at mass-production volumes demands more than what “traditional” cars require. AVs must actively interact with the driver, with other vehicles, and with the infrastructure, and they require significantly more validation. This cannot be accomplished by any one player; it requires cooperation among the many heterogeneous players in the AV ecosystem. Recent technology collaborations, such as 3M’s next-generation digitally enabled Smart Code sign technology and Nvidia’s Drive Constellation simulation platform, have shown the importance of ecosystems in bringing self-driving cars to reality.

Illustrating the progress being made, and despite one high-profile accident, the safety record of Level 3+ systems is exceptional. California’s DMV compiles and publishes statistics on human interventions for all companies testing AVs on its roads. Last year, Waymo’s cars traveled 1.2 million mi with an intervention rate of one per 11,018 mi. Not only is that intervention rate almost half the 2017 figure, the interval between interventions is fast approaching the average U.S. driver’s annual mileage (13,476 mi) and is already more than 1.5 times the UK average (7,134 mi).
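A quick back-of-envelope check of these comparisons, using only the figures quoted above:

```python
# Sanity-checking the California DMV disengagement figures cited above.
# All input numbers come from the article itself.

test_miles = 1_200_000           # Waymo miles driven in California last year
miles_per_intervention = 11_018  # one human intervention per this many miles

us_annual_avg_miles = 13_476     # average U.S. annual mileage
uk_annual_avg_miles = 7_134      # average UK annual mileage

interventions = test_miles / miles_per_intervention
print(f"Interventions over the year: ~{interventions:.0f}")  # ~109

# Interval between interventions versus a typical driver's year behind the wheel:
print(f"vs U.S. average: {miles_per_intervention / us_annual_avg_miles:.2f}x")  # 0.82x
print(f"vs UK average:  {miles_per_intervention / uk_annual_avg_miles:.2f}x")   # 1.54x
```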

Many elements have played a part in these rapid improvements. Undoubtedly, processor, software, and sensor advances are key, but the technology needs to work seamlessly—and strong collaborative ecosystems, in areas such as development and vehicle-to-road-system integration, are essential in accelerating and proving self-driving vehicles are safe, effective, and viable.


Sensor advances

At the heart of AVs are sensing technologies. These are coupled with low-latency vehicle-to-vehicle and vehicle-to-infrastructure communication systems, and the combined data are interpreted by powerful artificial intelligence (AI)-based processors.

The three core sensor technologies are:

  • LiDAR: used for depth mapping. Current systems can range to more than 100 m with a wide field-of-view.
  • Radar: used for motion measurement (up to 300 km/h), object detection, and tracking with ranges up to 300 m.
  • Cameras: used for object recognition and classification.
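LiDAR’s depth mapping rests on time-of-flight: the round-trip time of a laser pulse gives the distance directly. A minimal sketch (the 667 ns echo is just an illustrative value, chosen because it corresponds to roughly the 100 m range quoted above):

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.

C = 299_792_458  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to a target from the round-trip time of a light pulse."""
    return C * round_trip_s / 2

# An echo arriving ~667 ns after the pulse corresponds to ~100 m of range:
print(f"{tof_distance(667e-9):.1f} m")  # 100.0 m
```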

Though not all vehicles will use the same mix of sensors (some currently use only radar and imaging, others LiDAR and imaging), each added sensor provides more data, and the sensors complement one another when combined through sensor fusion, substantially improving the accuracy, safety, and reliability of the overall system and vehicle.
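The accuracy benefit of fusion can be sketched with the simplest possible case: inverse-variance weighting of two independent range estimates of the same object. The readings and variances below are hypothetical, and production systems use far more sophisticated filters (e.g. Kalman-based), but the principle is the same: the fused estimate has lower variance than either sensor alone.

```python
# Toy illustration of sensor fusion: inverse-variance weighting of two
# independent measurements. Not any vendor's algorithm; values are invented.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Optimally combine two independent Gaussian measurements."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always below min(var_a, var_b)
    return fused, fused_var

# Hypothetical readings for the same object ahead:
lidar_range, lidar_var = 98.7, 0.04   # LiDAR: precise range
radar_range, radar_var = 99.4, 0.25   # radar: noisier range, robust in rain

r, v = fuse(lidar_range, lidar_var, radar_range, radar_var)
print(f"fused range = {r:.2f} m, variance = {v:.3f}")  # fused range = 98.80 m, variance = 0.034
```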

Advances in each of the core sensor technologies are happening constantly. For example, LiDAR systems can now range to greater distances, even with low-reflectivity targets, while shrinking in size and cost. Radar technologies are being developed that operate simultaneously in short- and long-range modes with improved accuracy and lower power dissipation—critical, especially, in PHEVs (plug-in hybrid electric vehicles) and EVs (electric vehicles). In imaging, sensors are becoming available in a wider choice of resolutions to address the multiple needs of AVs. The high dynamic range of the latest image sensors, which delivers high-quality images in challenging scenes containing both very dark and very bright areas, is a further enabling technology, as are features such as LFM (LED flicker mitigation), which overcomes the displayed-image flicker caused by increasingly popular LED vehicle lights, road signs, and street lighting.
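Why LED flicker mitigation matters can be seen with a toy model: a PWM-driven LED sign is dark for part of every cycle, so a very short exposure (as needed in bright daylight) can fall entirely within the off phase and capture a dark sign, while an exposure at least as long as the off phase always catches some light. The PWM frequency, duty cycle, and exposure times below are illustrative assumptions, not any sensor’s specification.

```python
# Toy model of the LED flicker problem that LFM addresses.
# Parameters are illustrative assumptions only.

PWM_HZ = 90.0           # assumed LED drive frequency
DUTY = 0.5              # fraction of each cycle the LED is lit
period = 1.0 / PWM_HZ   # ~11.1 ms per PWM cycle

def led_captured(exposure_s: float, start_s: float) -> bool:
    """True if the exposure window overlaps any LED 'on' interval.

    The LED is on during [0, DUTY * period) of each cycle.
    """
    if exposure_s >= period * (1 - DUTY):
        return True  # window longer than the off phase always catches light
    phase = start_s % period
    return phase < DUTY * period or phase + exposure_s > period

short_exp = 1 / 10_000  # 0.1 ms exposure for a bright scene
print(led_captured(short_exp, 0.008))  # starts in the off phase -> False
print(led_captured(1 / 60, 0.008))     # exposure >= off phase   -> True
```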

Advances in both sensor technology and the AV ecosystem can also be seen in the way vehicles will be able to communicate with the roadway infrastructure. This can be critical—for example, enabling a vehicle to be warned of dangerous road conditions or an accident ahead.

An autonomous driving ecosystem can improve the effectiveness and safety of self-driving vehicles by defining and facilitating this vehicle-to-infrastructure communication. Short-range wireless communications are a key enabler, but they are costly to roll out across the complete road network and could be vulnerable to hacking—so safety mechanisms and cybersecurity solutions must be in place.
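One such safety mechanism is message authentication: a vehicle should act on a roadside warning only after verifying it was not forged or tampered with. Deployed V2X systems use certificate-based schemes (e.g. IEEE 1609.2 security services); the shared-key HMAC sketch below, with hypothetical message fields, shows only the verify-before-acting pattern.

```python
# Illustrative authenticate-then-act pattern for V2I warnings.
# Real deployments use PKI-based signatures (IEEE 1609.2), not a shared key.

import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"

def sign_warning(payload: dict) -> dict:
    """Attach an HMAC tag to a roadside warning message."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_warning(msg: dict) -> bool:
    """Recompute the tag; reject the message if it does not match."""
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

msg = sign_warning({"type": "ice_ahead", "road": "A1", "km": 42})
print(verify_warning(msg))        # True: genuine warning, act on it

msg["payload"]["km"] = 99         # tampered in transit
print(verify_warning(msg))        # False: discard the message
```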

As such, collaborations between system manufacturers and semiconductor companies are helping realize vision-based approaches that improve navigation for vehicles equipped with automated driving features. These can be implemented alongside wireless communication systems on main roads, as well as on smaller roads and temporary routes where rolling out wireless infrastructure is less feasible.

Image sensors are now capable of “seeing” much more than the human driver, making it possible to use signage to deliver additional information that assists drivers beyond traditional advanced driver-assistance systems (ADASs) and paves the way toward autonomous driving.

The results of one collaboration in this area were shown in January at CES, where ON Semiconductor’s AR0234AT CMOS image sensor was integrated with 3M’s Smart Code sign technology.

The addition of vision technologies not only improves accuracy, provides redundancy, and allows vehicle-to-infrastructure communication to be rolled out where wireless systems would be impractical; the visibility of such systems also helps demystify the technology for the public, which should improve consumer trust in self-driving vehicles.

Processors in AVs face considerable computing challenges, not only fusing the outputs of disparate sensors, but also coping with the swathes of data these sensors (and in particular, vision systems) create. Ecosystems are therefore vital in guiding technology development across companies and reducing the strain put on the car’s processor.
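The scale of that data problem is easy to see from raw camera bandwidth alone. The resolution, bit depth, frame rate, and camera count below are assumed, typical-order figures for illustration, not the specification of any particular system:

```python
# Back-of-envelope raw data rate of automotive cameras.
# All parameters are assumed, typical-order values.

width, height = 1920, 1200   # assumed sensor resolution
bits_per_pixel = 12          # assumed raw bit depth
fps = 30                     # assumed frame rate

bits_per_second = width * height * bits_per_pixel * fps
gbps = bits_per_second / 1e9
print(f"One camera: ~{gbps:.2f} Gbit/s raw")  # ~0.83 Gbit/s

cameras = 8                  # assumed surround-vision setup
print(f"{cameras} cameras: ~{cameras * gbps:.1f} Gbit/s before radar/LiDAR")  # ~6.6 Gbit/s
```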

Full hardware and software ecosystems that enable system developers to collaborate and use advanced development systems to accelerate the design and production of AVs are emerging. These may combine deep learning, sensor fusion, and surround vision to change the driving experience. An example of this type of ecosystem was on display in March at the GPU Technology Conference where Nvidia and ON Semiconductor demonstrated an open, cloud-based platform that provides real-time data from the image sensors to the Nvidia Drive Constellation.



The transportation industry is undergoing disruptive changes. Autonomous vehicles from virtually all manufacturers are set to enter production over the next few years, bringing a host of driving and societal benefits, not least a significant reduction in road-traffic accidents. Enabling technology is improving rapidly, but the complexity and the challenges are growing just as fast. Automotive companies, technology companies, universities, and governments must collaborate to enable safe, secure, and timely deployment of AVs. The development of ecosystems is essential to guide and accelerate development as well as to prove self-driving vehicles are safe, effective, and viable.