VayaVision announced the release of VAYADrive 2.0, an autonomous vehicle (AV) perception software engine that fuses raw sensor data with AI tools to create an accurate 3D environmental model of the area around a self-driving vehicle.

VAYADrive 2.0 is intended to provide crucial information about dynamic driving environments, enabling safer and more reliable autonomous driving while making the most of cost-effective sensor technologies.

“This launch marks the beginning of a new era in autonomous vehicles, bringing to market an AV perception software based on raw data fusion,” said Ronny Cohen, CEO and Co-founder of VayaVision. “VAYADrive 2.0 increases the safety and affordability of self-driving vehicles and provides OEMs and T1s with the required level of autonomy for the mass-distribution of autonomous vehicles.”

The VAYADrive 2.0 software solution combines AI, analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware. The software is reportedly compatible with a wide range of cameras, LiDARs, and radars.

The company says that VAYADrive 2.0 solves a key challenge facing the industry: the detection of “unexpected” objects. VAYADrive 2.0 upsamples sparse samples from distance sensors and assigns distance information to every pixel in the high-resolution camera image. This allows an autonomous vehicle to receive crucial information about an object’s size and shape, to distinguish even small obstacles on the road, and to accurately define the shapes of vehicles, people, and other objects.
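To illustrate the general idea of upsampling sparse range samples to per-pixel depth, here is a minimal sketch. This is not VayaVision's algorithm (the article does not disclose it); it assumes a simple nearest-neighbor scheme, a hypothetical `upsample_depth` function, and toy data of four LiDAR returns projected into a tiny 4×4 "image".

```python
# Illustrative sketch only: nearest-neighbor upsampling of sparse
# range-sensor samples into a dense per-pixel depth map.

def upsample_depth(sparse_samples, height, width):
    """Assign each camera pixel the depth of its nearest sparse sample.

    sparse_samples: dict mapping (row, col) pixel coordinates -> depth (meters).
    Returns a height x width list of lists with a depth value for every pixel.
    """
    dense = [[0.0] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            # Brute-force nearest sample by squared pixel distance;
            # a real system would use a far more sophisticated method.
            nearest = min(
                sparse_samples,
                key=lambda rc: (rc[0] - r) ** 2 + (rc[1] - c) ** 2,
            )
            dense[r][c] = sparse_samples[nearest]
    return dense

# Four hypothetical LiDAR returns projected into a 4x4 camera image.
samples = {(0, 0): 10.0, (0, 3): 12.0, (3, 0): 8.0, (3, 3): 20.0}
depth_map = upsample_depth(samples, 4, 4)
print(depth_map[0][0])  # 10.0 — a pixel at a sample location keeps its depth
```

Production systems typically replace the nearest-neighbor fill with interpolation guided by the camera image (edges, segmentation), which is what lets the fused result capture precise object shapes rather than blocky depth regions.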

“VAYADrive 2.0’s raw data fusion architecture offers automotive players a viable alternative to inadequate ‘object fusion’ models that are common in the market,” said Youval Nehmadi, CTO and Co-founder of VayaVision. “This is critical to increasing detection accuracy and decreasing the high rate of false alarms that prevent self-driving vehicles from reaching the next level of autonomy.”