LiDAR vs camera: Can autonomous vehicles really forgo one?
Conversations around LiDAR were contentious last year after some dominant voices in the autonomous vehicle (AV) space criticized the technology and claimed that cameras held the key to the AV future. However, investments made into the LiDAR sector and automaker AV development plans continue to convey a different message.
Venture capitalists are bullish on LiDAR. According to CB Insights, they’ve put over $1.1 billion into LiDAR startups since 2015. Investments continue to pour into the space, and automakers believe in the technology too. All of the major automotive original equipment manufacturers (OEMs) except one have either issued requests for proposals for LiDAR or have actual development projects in place. With AV production deadlines coming up as soon as next year, these considerations, as well as time and resource investments, indicate a serious commitment to the technology.
LiDAR is clearly considered critical by investors and automakers, but camera technology also has its unique benefits. So, it’s not a question of which technology should be in AVs but whether we should be choosing between the two of them at all. Ultimately, AVs will need multiple sensor types for redundancy and functional safety’s sake.
Perceiving beyond the human eye
LiDAR is a crucial technology needed to ensure the safe deployment of AVs, and it’s the perfect complement to camera technology. To understand this, first consider the five basic human senses: sight, hearing, smell, taste, and touch. For humans to fully comprehend their environment, they need to use some, if not all, of these capabilities. Similarly, AVs need multiple “senses” to reliably perceive their environment. They need LiDAR, camera, and even radar technology, all of which have their merits. And in combination (i.e., sensor fusion) they perceive beyond the human eye and provide the assurance needed for functional safety of AVs.
Cameras are similar to the human eye in that they operate at wavelengths within the visible light spectrum and are passive sensors, meaning they rely on ambient light. While this quality is beneficial in terms of power and heat dissipation, it’s problematic in low-light environments and poor weather, where ambient light is limited, if present at all. Humans compensate in such conditions by relying on their vehicles’ headlights at night and windshield wipers in the rain. But the goal is for AVs to be safer and more reliable than their human-driven counterparts.
LiDAR technology is an active sensing technology, so it can “see” independently of ambient light, which is exactly when cameras, and humans, cannot. The technology generates its own light that it uses to gather data about its surroundings, and because of this it performs better than cameras at night and when it’s raining (see images). It also operates at different wavelengths than cameras (i.e., near-infrared radiation), making it less susceptible to ambient interference such as direct sunlight, blinding lights from oncoming cars, sudden light changes (i.e., shady areas, entering or exiting tunnels), and raindrops that may otherwise confuse camera technology by blocking the field of view (FOV). While a camera’s ability to see emitted light and color makes the identification of traffic lights and street signs easier, the technology’s unreliability in these common driving scenarios makes cameras alone a huge risk to AV safety.
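The “active sensing” at the heart of LiDAR is time-of-flight ranging: the sensor times its own laser pulse out to a target and back, then converts that delay into a distance. A minimal sketch of the arithmetic, with an illustrative pulse timing that is not taken from any particular sensor:

```python
# Sketch of time-of-flight ranging: an active sensor emits its own light,
# so range comes from pulse timing rather than from ambient illumination.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to a target from a laser pulse's round-trip time.

    The pulse travels to the target and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A pulse returning after ~667 nanoseconds corresponds to a target
# roughly 100 metres away.
print(round(tof_distance_m(667e-9), 1))
```

Because the measurement depends only on the sensor’s own emitted pulse, it works identically at noon and at midnight.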
Identifying what’s real and what’s not
The ability to perceive an object is one thing; understanding what it is, classifying it and judging whether it’s real, is another entirely. LiDAR’s 3D point cloud, unlike a camera’s 2D images, allows for better spatial separation between objects and for direct distance measurement. This native depth perception makes LiDAR far less susceptible to vision irregularities such as reflective surfaces, repeated patterns, and textureless regions.
For AVs to achieve 3D sensing with cameras, they must either fuse the images of two cameras or fuse the images of a single camera over time. Both options require extensive computing power to match and analyze images before the distance to an object can be inferred, and that inference takes time. LiDAR measures distance directly. The lack of this intrinsic capability leaves cameras vulnerable.
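To make the contrast concrete, the textbook relation a rectified stereo camera pair relies on is Z = f·B/d, where the disparity d must first be recovered by matching the same point across both images. That matching step is the compute-heavy, failure-prone part, and it is exactly what breaks down on textureless or repetitive surfaces. A sketch with hypothetical focal length, baseline, and disparity values:

```python
def stereo_depth_m(focal_length_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    The hard work, finding disparity_px by matching the same point
    in both images, happens before this formula and is what consumes
    compute; LiDAR skips it by measuring range directly.
    """
    if disparity_px <= 0:
        raise ValueError("no disparity: unmatched point or point at infinity")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 8.4 px disparity
print(stereo_depth_m(700.0, 0.12, 8.4))  # -> 10.0 metres
```

Note how small the disparity gets for distant objects: depth error grows rapidly with range, which is one reason stereo cameras struggle where LiDAR does not.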
Cameras can be easily thwarted by changes in viewpoint or angle: the apparent position of an object can shift when it’s viewed along two different lines of sight (i.e., foreshortening). Cameras can also be fooled by projected or “phantom” 2D images that are not real, making them vulnerable to attackers with the intent to harm drivers or passengers. This depth perception issue is exacerbated when driving conditions and environments change drastically from original testing scenarios. LiDAR has also been fooled in the past by attackers shooting laser pulses toward it, but recent advances have made it highly resilient to such interference.
Ultimately, pairing LiDAR, camera, and radar technologies can provide the redundancy and level of certainty needed to ensure that every object in a scene is perceived and accurately identified by AVs, the ultimate goal for functional safety and for the safety of drivers and non-drivers alike. And if data from the different sensors conflict, the vehicle can make the cautious decision, using a majority vote across sensors to ensure safety.
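The majority-vote idea can be pictured with a deliberately simplified sketch. Real fusion stacks weigh sensor confidences and operate on tracked objects rather than single labels; the label names here are made up for illustration:

```python
from collections import Counter

def fused_verdict(lidar: str, camera: str, radar: str) -> str:
    """Majority vote over three sensor classifications of one region.

    If at least two sensors agree, take their label; otherwise fall
    back to the cautious default and treat the region as an obstacle.
    """
    label, count = Counter([lidar, camera, radar]).most_common(1)[0]
    return label if count >= 2 else "obstacle"

print(fused_verdict("pedestrian", "pedestrian", "clear"))  # -> pedestrian
print(fused_verdict("clear", "sign", "vehicle"))           # -> obstacle
```

The fallback branch embodies the cautious-decision principle: when the sensors disagree outright, the system assumes the least-safe interpretation rather than averaging it away.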
All major automakers are looking for sensor solutions that combine a few different sensor types and provide superior perception to the human eye. Cameras and radar sensors may be insufficient when it comes to detecting small objects at long range and seeing in low-light scenarios (i.e., at night and in adverse weather), but they work well with LiDAR, which offers the long-range detection and fine resolution that are essential for a safe AV future. While LiDAR technology is currently more expensive than camera technology, its performance will improve and its price will fall as manufacturing and production scale up. LiDAR can simply see things that cameras and radars cannot, and it functions in environments that would obscure a camera’s or radar’s vision. This is why LiDAR technology is a must for the future of AVs.