The nonprofit industry consortium EEMBC (originally EDN Embedded Microprocessor Benchmark Consortium) has developed an autonomous driving benchmark suite for licensing. The ADASMark benchmarks provide a performance measurement and optimization tool for automotive OEMs and suppliers building next-generation advanced driver-assistance systems (ADAS).

The suite is intended to analyze the performance of the SoCs (systems on chips) used in ADAS implementations above SAE Level 2, which require compute-intensive object-detection and visual-classification capabilities. It uses real-world workloads that represent highly parallel applications, such as surround-view stitching, contour detection, and convolutional neural-net (CNN) traffic-sign classification. The benchmarks stress various forms of compute resources such as CPUs (central processing units), GPUs (graphics processing units), and hardware accelerators, allowing the user to determine the optimal utilization of available compute resources.

Key features of the suite include an OpenCL 1.2 Embedded Profile API (application programming interface) to ensure consistency between compute implementations; application flows created by a series of micro-benchmarks that measure and report performance for SoCs handling computer vision, autonomous driving, and mobile imaging tasks; and a traffic-sign-recognition CNN inference engine created by Au-Zone Technologies.
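For a sense of what that OpenCL layer provides, the sketch below is a hypothetical host-side snippet in C++ (not ADASMark source code) that uses standard OpenCL 1.2 calls to enumerate the available platforms and devices, i.e., the CPUs, GPUs, and accelerators to which a benchmark run could assign work.

// Hypothetical host-side sketch: list the OpenCL devices (CPU, GPU,
// accelerator) that pipeline stages could be assigned to. These are
// standard OpenCL 1.2 API calls; nothing here is ADASMark-specific.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        cl_uint num_devices = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char name[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("candidate compute device: %s\n", name);
        }
    }
    return 0;
}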

“While more and more automotive embedded systems are deploying multiple cores, there are still very few frameworks that can utilize their asymmetric compute resources,” said Peter Torelli, EEMBC president and CTO. “Representing the next step in the evolution of EEMBC's automotive benchmarks and multicore frameworks, ADASMark addresses this issue for the visual component of ADAS. It uses a new framework with a more relevant workload while still taking advantage of the GPU hardware in vehicles.”

A common solution for ADAS implementations above Level 2 uses a collection of visible-spectrum, wide-angle cameras placed around the vehicle, and an image-processing system that prepares these images for classification by a trained CNN. The output of the classifier feeds additional decision-making logic for systems such as steering and braking. This arrangement requires a significant amount of compute power, and assessing the limits of the available resources and how efficiently they are used is not a simple task. The ADASMark benchmark addresses this challenge by combining application use cases with synthetic test collections into a series of microbenchmarks that measure and report performance and latency for SoCs handling computer vision, autonomous driving, and mobile imaging tasks.
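As a rough structural sketch of that flow, the heavily simplified C++ below names the stages just described; the types and function names are illustrative assumptions, not ADASMark's actual interfaces.

// Hypothetical, heavily simplified sketch of the ADAS processing chain
// described above; all names are illustrative.
#include <cstdint>
#include <vector>

struct Frame { int width = 0, height = 0; std::vector<std::uint8_t> pixels; };  // one camera image
struct RegionOfInterest { int x = 0, y = 0, w = 0, h = 0; };                    // candidate sign location
enum class SignClass { SpeedLimit, Stop, Yield, Unknown };

// Front end: dewarp, convert, and stitch the camera images, then find
// candidate regions (stubbed here).
std::vector<RegionOfInterest> preprocess(const std::vector<Frame>& cameras) {
    (void)cameras;
    return {};
}

// Back end: a trained CNN labels each candidate region (stubbed here).
SignClass classify(const Frame& stitched, const RegionOfInterest& roi) {
    (void)stitched; (void)roi;
    return SignClass::Unknown;
}

int main() {
    std::vector<Frame> cameras(4);  // four surround cameras feed the pipeline
    Frame stitched;
    for (const auto& roi : preprocess(cameras)) {
        // Downstream decision-making logic (outside the benchmark) would act
        // on these labels, e.g. to adjust speed, steering, or braking.
        SignClass sign = classify(stitched, roi);
        (void)sign;
    }
    return 0;
}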

Specifically, the front-end of the benchmark contains the image-processing functions for dewarping, colorspace conversion (Bayer), stitching, Gaussian blur, and Sobel threshold filtering—which identifies regions of interest (ROI) for the classifier. The back-end image-classification portion of the benchmark executes a CNN trained to recognize traffic signs.
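To make the last front-end step concrete, here is a plain CPU reference of a Sobel-plus-threshold pass over a grayscale image; it is a sketch for illustration only, since the benchmark runs such stages as OpenCL kernels, and the row-major layout and threshold parameter are assumptions.

// Simplified CPU reference for a Sobel-threshold stage on a row-major
// grayscale image; edge pixels above the threshold mark candidate
// regions of interest for the classifier.
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> sobel_threshold(const std::vector<std::uint8_t>& gray,
                                          int width, int height, float threshold) {
    std::vector<std::uint8_t> edges(gray.size(), 0);
    for (int y = 1; y + 1 < height; ++y) {
        for (int x = 1; x + 1 < width; ++x) {
            auto p = [&](int dx, int dy) {
                return static_cast<float>(gray[(y + dy) * width + (x + dx)]);
            };
            // Horizontal and vertical Sobel gradients.
            float gx = -p(-1, -1) - 2 * p(-1, 0) - p(-1, 1)
                       + p(1, -1) + 2 * p(1, 0) + p(1, 1);
            float gy = -p(-1, -1) - 2 * p(0, -1) - p(1, -1)
                       + p(-1, 1) + 2 * p(0, 1) + p(1, 1);
            float magnitude = std::sqrt(gx * gx + gy * gy);
            edges[y * width + x] = (magnitude > threshold) ? 255 : 0;
        }
    }
    return edges;
}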

An input video stream from four HD surround cameras is provided as part of the benchmark. Each frame of the video (one for each camera) passes through the DAG (directed acyclic graph). At four nodes (blur, threshold, ROI, and classification), the framework validates the work of the pipeline for accuracy. If the accuracy is within the permitted threshold, the test passes.
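The check at those four nodes can be pictured as a tolerance test against reference output. The sketch below is a hypothetical version of such a check; the root-mean-square error metric and tolerance parameter are assumptions, not the framework's documented criteria.

// Hypothetical per-node validation: compare a node's output against a
// reference and pass if the error stays within a tolerance.
#include <cmath>
#include <cstddef>
#include <vector>

bool node_output_valid(const std::vector<float>& output,
                       const std::vector<float>& reference,
                       float tolerance) {
    if (output.empty() || output.size() != reference.size()) return false;
    double sum_sq = 0.0;
    for (std::size_t i = 0; i < output.size(); ++i) {
        double diff = static_cast<double>(output[i]) - reference[i];
        sum_sq += diff * diff;
    }
    // Root-mean-square error against the reference must not exceed the tolerance.
    double rmse = std::sqrt(sum_sq / static_cast<double>(output.size()));
    return rmse <= tolerance;
}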

The performance of the platform is recorded as the execution time and overhead of only the portions of the DAG associated with vision; the benchmark time does not include the main-thread processing of the video file, or the overhead associated with splitting the data streams to different edges of the DAG. The overall performance is the inverse of the execution time of the longest path through the DAG, which represents frames per second.
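In other words, the reported frame rate is the reciprocal of the slowest (critical) path through the vision portion of the DAG. Below is a minimal sketch of that arithmetic, assuming per-path timings have already been measured with the excluded overhead removed.

// Frame rate as the inverse of the longest root-to-leaf path time through
// the vision DAG (times assumed measured in seconds, overhead excluded).
#include <algorithm>
#include <vector>

double frames_per_second(const std::vector<double>& path_times_seconds) {
    // Assumes at least one measured path.
    double longest_path = *std::max_element(path_times_seconds.begin(),
                                            path_times_seconds.end());
    return 1.0 / longest_path;
}

// Example: if the slowest path takes 0.025 s (25 ms) per frame, the
// benchmark would report 40 frames per second.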

ADASMark came out of EEMBC’s Heterogeneous Compute Working Group, which includes members such as Intel, ARM, Nvidia, Texas Instruments, and NXP Semiconductors, and typically meets about every other week. Active participation in the group gives member companies the opportunity to influence (and contribute to) the processing algorithms in the benchmark.

The new suite follows several earlier automotive benchmarks from EEMBC, among them the 15-year-old AutoBench for 8- and 16-bit microcontrollers and the five-years-newer AutoBench 2.0 for multicore processors.

In addition to paying members, “we have academic and company advisors,” said Torelli. “One of them, Au-Zone, has a traffic-sign CNN classifier. They were willing to donate that trained CNN in exchange for a seat at the table, and to be first adopters of the benchmark.”

The working group is currently developing the next revision of ADASMark. While version 1.0 focuses on image conversion and stitching algorithm performance, the long-term goal, according to EEMBC materials, is to add more processing steps for pedestrian detection to the automotive case, and to implement facial feature recognition in the mobile case.

Companies adopting the benchmark suite would be publishing results for their customers as a single score, “and then under the hood there is a description of how they configured the pipeline, where they assigned each of the compute elements,…what the platform was, what hardware was used, and then whether they were custom OpenCL kernels or off-the-shelf,” said Torelli.

The score is a unitless number on a scale from about 2 to 100, said Torelli: “A high-performance GPU car is probably going to get in the 60-100 range, where a smaller, lower-power, embedded processor is probably going to get between 2 and 15.”

The distinction between those two extremes comes down to cost and power, he added; the lower-power processor will draw on the order of a few watts, the GPU hundreds of watts.

To request an ADASMark license, visit the EEMBC website.