Renesas Electronics Corporation and StradVision announced the joint development of a deep learning-based object recognition solution for smart cameras used in next-generation advanced driver-assistance system (ADAS) applications and in cameras supporting ADAS Level 2 and above.

Said Naoki Yoshida, Vice President of Renesas’ Automotive Technical Customer Engagement Business Division, “This new joint deep learning-based solution optimized for R-Car SoCs will contribute to the widespread adoption of next-generation ADAS implementations and support the escalating vision sensor requirements expected to arrive in the next few years.”

“StradVision is excited to combine forces with Renesas to help developers efficiently advance their efforts to make the next big leap in ADAS,” said Junhwan Kim, CEO of StradVision. “This joint effort will not only translate into quick and effective evaluations, but also deliver greatly improved ADAS performance. With the massive growth expected in the front camera market in the coming years, this collaboration puts both StradVision and Renesas in an excellent position to provide the best possible technology.”

StradVision’s deep learning-based object recognition software aims to deliver high performance in recognizing vehicles, pedestrians, and lane markings. The software has been optimized for Renesas’ R-Car V3H and R-Car V3M automotive system-on-chip (SoC) products. These R-Car devices incorporate a dedicated engine for deep-learning processing, the CNN-IP (Convolutional Neural Network Intellectual Property), enabling them to run StradVision’s SVNet automotive deep-learning network at high speed with minimal power consumption. The object recognition solution resulting from this collaboration delivers deep learning-based object recognition at low power consumption, making it suitable for use in mass-produced vehicles and encouraging wider ADAS adoption.

StradVision’s SVNet deep learning software is highly regarded for its recognition precision in low-light environments and its ability to handle occlusion, where objects are partially hidden by other objects. The basic software package for the R-Car V3H performs simultaneous vehicle, pedestrian, and lane recognition, processing image data at 25 frames per second, which enables evaluation and proof-of-concept (POC) development. Building on these capabilities, developers who wish to customize the software by adding signs, road markings, and other objects as recognition targets can draw on support from StradVision, which the company says covers every step from training through embedding the software in mass-produced vehicles.
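
As a rough illustration of the kind of per-frame pipeline described above, the sketch below shows a fixed-rate 25 fps camera loop that runs a detector and collects vehicle, pedestrian, and lane results. The `capture_frame` and `run_detector` functions are placeholders standing in for camera input and the inference call; they are hypothetical and do not represent StradVision’s actual SVNet API.

```python
# Conceptual sketch of a 25 fps recognition loop (hypothetical API, not StradVision's SVNet interface).
import time
from dataclasses import dataclass, field

FRAME_INTERVAL = 1.0 / 25.0  # the basic package processes image data at 25 frames per second


@dataclass
class FrameResult:
    vehicles: list = field(default_factory=list)      # bounding boxes of detected vehicles
    pedestrians: list = field(default_factory=list)   # bounding boxes of detected pedestrians
    lanes: list = field(default_factory=list)         # polylines for detected lane markings


def capture_frame():
    """Placeholder for grabbing a frame from the front camera."""
    return b""  # raw image bytes would come from the camera/ISP here


def run_detector(frame) -> FrameResult:
    """Placeholder for the deep-learning inference step (runs on the CNN accelerator)."""
    return FrameResult()


def recognition_loop(num_frames: int = 100):
    for _ in range(num_frames):
        start = time.monotonic()
        frame = capture_frame()
        result = run_detector(frame)
        # Downstream ADAS logic (warnings, planning) would consume `result` here.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, FRAME_INTERVAL - elapsed))  # hold the 25 fps cadence


if __name__ == "__main__":
    recognition_loop(num_frames=5)
```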

In addition to the CNN-IP dedicated deep learning module, the Renesas R-Car V3H and R-Car V3M feature the IMP-X5 image recognition engine. Combining deep learning-based recognition of complex objects with rule-based image processing that is easier to verify allows designers to build a robust system. The on-chip image signal processor (ISP) converts sensor signals for both image rendering and recognition processing, making it possible to configure a system using inexpensive cameras without built-in ISPs and thereby reduce the overall bill-of-materials (BOM) cost.
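
The pairing of deep learning-based recognition with verifiable rule-based processing mentioned above is often realized by filtering network detections through simple hand-written plausibility checks before they reach downstream ADAS logic. The sketch below is a hypothetical illustration of that pattern, not Renesas or StradVision code; the rules and thresholds are made up for demonstration.

```python
# Hypothetical example of combining CNN detections with hand-written plausibility rules.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "vehicle", "pedestrian"
    confidence: float  # network score in [0, 1]
    x: int             # bounding-box top-left corner (pixels)
    y: int
    width: int
    height: int


def passes_rules(det: Detection, img_width: int = 1280, img_height: int = 800) -> bool:
    """Simple, verifiable rules applied on top of the deep-learning output."""
    if det.confidence < 0.5:
        return False  # reject low-confidence detections
    if det.x < 0 or det.y < 0 or det.x + det.width > img_width or det.y + det.height > img_height:
        return False  # bounding box must lie inside the image
    if det.label == "pedestrian" and det.height < det.width:
        return False  # pedestrians should be taller than they are wide
    return True


def fuse(detections: list[Detection]) -> list[Detection]:
    """Keep only detections that satisfy the rule-based checks."""
    return [d for d in detections if passes_rules(d)]


if __name__ == "__main__":
    raw = [
        Detection("vehicle", 0.92, 100, 300, 200, 150),
        Detection("pedestrian", 0.40, 500, 200, 60, 160),   # dropped: low confidence
        Detection("pedestrian", 0.85, 700, 250, 120, 80),   # dropped: implausible aspect ratio
    ]
    print(fuse(raw))
```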

Renesas R-Car SoCs featuring the new joint deep learning solution, including software and development support from StradVision, are scheduled to be available to developers by early 2020.