
EEMBC develops an autonomous driving benchmark suite for licensing

Peter Torelli is President & CTO of EEMBC, the nonprofit that developed the ADASMark Benchmark Suite.

The ADASMark pipeline, from input to output, is created by a series of micro-benchmarks that measure and report performance for SoCs handling computer vision, autonomous driving, and mobile imaging tasks.

July 25, 2018
Kevin Jost
KEYWORDS Advanced driver assistance systems (ADAS) / Autonomy / benchmarks

The nonprofit industry consortium EEMBC (originally EDN Embedded Microprocessor Benchmark Consortium) has developed an autonomous driving benchmark suite for licensing. The ADASMark benchmarks provide a performance measurement and optimization tool for automotive OEMs and suppliers building next-generation advanced driver-assistance systems (ADAS).

The suite is intended to analyze the performance of the SoCs (systems on chips) used in ADAS implementations above SAE Level 2, which require compute-intensive object-detection and visual-classification capabilities. It uses real-world workloads that represent highly parallel applications, such as surround-view stitching, contour detection, and convolutional neural-net (CNN) traffic-sign classification. The benchmarks stress various forms of compute resources such as CPUs (central processing units), GPUs (graphics processing units), and hardware accelerators, allowing the user to determine the optimal utilization of available compute resources.

Key features of the suite include an OpenCL 1.2 Embedded Profile API (application programming interface) to ensure consistency between compute implementations; application flows created by a series of micro-benchmarks that measure and report performance for SoCs handling computer vision, autonomous driving, and mobile imaging tasks; and a traffic-sign-recognition CNN inference engine created by Au-Zone Technologies.
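As a rough illustration of what "micro-benchmarks that measure and report performance" can look like, the pure-Python sketch below times each stage of a toy two-stage pipeline and reports its best-case latency. The stage names, data, and helper are hypothetical, not part of the actual ADASMark API:

```python
import time

def run_stage(stage_fn, frame, repeats=10):
    """Time one pipeline stage over several repeats; keep the best-case latency."""
    best = float("inf")
    out = None
    for _ in range(repeats):
        t0 = time.perf_counter()
        out = stage_fn(frame)
        best = min(best, time.perf_counter() - t0)
    return out, best

# Toy stages standing in for real steps such as colorspace conversion
# and thresholding; names and signatures are illustrative only.
stages = {
    "grayscale": lambda f: [sum(px) / 3.0 for px in f],
    "threshold": lambda f: [1 if v > 0.5 else 0 for v in f],
}

frame = [(0.1, 0.2, 0.3), (0.9, 0.9, 0.9)]
for name, fn in stages.items():
    frame, latency = run_stage(fn, frame)
    print(f"{name}: best of 10 runs = {latency * 1e6:.1f} us")
```

Reporting the best of several runs, rather than a single measurement, is a common way for micro-benchmarks to reduce scheduling noise.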

“While more and more automotive embedded systems are deploying multiple cores, there are still very few frameworks that can utilize their asymmetric compute resources,” said Peter Torelli, EEMBC president and CTO. “Representing the next step in the evolution of EEMBC's automotive benchmarks and multicore frameworks, ADASMark addresses this issue for the visual component of ADAS. It uses a new framework with a more relevant workload while still taking advantage of the GPU hardware in vehicles.”

A common solution for ADAS implementations above Level 2 uses a collection of visible-spectrum, wide-angle cameras placed around the vehicle, and an image-processing system that prepares these images for classification by a trained CNN. The output of the classifier feeds additional decision-making logic such as the steering and braking systems. This arrangement requires a significant amount of compute power, and assessing the limits of the available resources and how efficiently they are used is not a simple task. The ADASMark benchmark addresses this challenge by combining application use cases with synthetic test collections into a series of microbenchmarks that measure and report performance and latency for SoCs handling computer vision, autonomous driving, and mobile imaging tasks.

Specifically, the front-end of the benchmark contains the image-processing functions for dewarping, colorspace conversion (Bayer), stitching, Gaussian blur, and Sobel threshold filtering—which identifies regions of interest (ROI) for the classifier. The back-end image-classification portion of the benchmark executes a CNN trained to recognize traffic signs.
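The Sobel-threshold step can be illustrated with a minimal pure-Python sketch: convolve the two 3x3 Sobel kernels over a grayscale image and mark pixels whose gradient magnitude exceeds a threshold. This is the textbook form of the operator, not ADASMark's implementation:

```python
# Horizontal and vertical Sobel kernels.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_threshold(img, thresh):
    """Return a binary edge map: 1 where the gradient magnitude exceeds thresh."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                out[y][x] = 1
    return out

# A vertical edge between dark (0) and bright (255) columns is detected.
img = [[0, 0, 255, 255]] * 4
edges = sobel_threshold(img, 100)
```

Connected clusters of 1s in such an edge map are what a later stage would group into regions of interest for the classifier.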

An input video stream composed of four HD surround-camera feeds is provided as part of the benchmark. Each frame of the video (one per camera) passes through the DAG (directed acyclic graph). At four nodes (blur, threshold, ROI, and classification), the framework validates the work of the pipeline for accuracy. If the accuracy is within the permitted threshold, the test passes.
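A minimal sketch of that per-node validation idea, assuming each node's output can be compared element-wise against a golden reference (the node names, values, and tolerances below are illustrative, not ADASMark's actual criteria):

```python
def validate_node(name, output, reference, tolerance):
    """Compare a pipeline node's output with its reference within a tolerance."""
    err = max(abs(a - b) for a, b in zip(output, reference))
    passed = err <= tolerance
    print(f"{name}: max error {err:.4f} -> {'PASS' if passed else 'FAIL'}")
    return passed

# Hypothetical per-node outputs versus golden references for one frame.
checks = [
    ("blur", [0.50, 0.25], [0.50, 0.26], 0.02),
    ("classification", [0.91], [0.90], 0.05),
]
frame_passes = all(validate_node(*c) for c in checks)
```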

The performance of the platform is recorded as the amount of execution time and overhead for only the portions of the DAG associated with vision, meaning the benchmark time does not include the main-thread processing of the video file, or the overhead associated with splitting the data streams to different edges of the DAG. The overall performance is the inverse of the execution time of the longest path in the DAG, which yields a frames-per-second figure.
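The longest-path scoring can be sketched as follows: given per-stage execution times and a DAG of stage dependencies, find the slowest path through the graph and invert its duration to get frames per second. The stage names and timings below are invented for illustration:

```python
def critical_path(edges, times):
    """Longest path through a DAG of pipeline stages, weighted by stage time.

    edges maps each stage to its successors; times gives per-stage seconds.
    """
    memo = {}
    def longest(node):
        if node not in memo:
            memo[node] = times[node] + max(
                (longest(s) for s in edges.get(node, [])), default=0.0)
        return memo[node]
    return max(longest(n) for n in times)

# Toy pipeline: dewarp feeds blur and threshold in parallel; both feed classify.
edges = {"dewarp": ["blur", "threshold"], "blur": ["classify"],
         "threshold": ["classify"]}
times = {"dewarp": 0.004, "blur": 0.010, "threshold": 0.006, "classify": 0.005}

path_s = critical_path(edges, times)
print(f"critical path {path_s * 1000:.1f} ms -> {1.0 / path_s:.1f} fps")
```

Note that the branch times do not simply add: with blur and threshold running in parallel, only the slower branch contributes to the critical path, which is why assigning stages well across CPUs, GPUs, and accelerators matters for the score.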

ADASMark came out of EEMBC’s Heterogeneous Compute Working Group, which includes members such as Intel, ARM, Nvidia, Texas Instruments, and NXP Semiconductors and typically meets about every other week. Active participation in the group gives member companies the opportunity to influence (and contribute to) the processing algorithms in the benchmark.

The new suite follows several earlier automotive benchmarks from EEMBC, including the 15-year-old AutoBench for 8- and 16-bit microcontrollers and AutoBench 2.0 for multicore processors, released five years later.

In addition to paying members, “we have academic and company advisors,” said Torelli. “One of them, Au-Zone, has a traffic-sign CNN classifier. They were willing to donate that trained CNN in exchange for a seat at the table, and to be first adopters of the benchmark.”

The working group is currently developing the next revision of ADASMark. While version 1.0 focuses on image conversion and stitching algorithm performance, the long-term goal, according to EEMBC materials, is to add more processing steps for pedestrian detection to the automotive case, and to implement facial feature recognition in the mobile case.

Companies adopting the benchmark suite would be publishing results for their customers as a single score, “and then under the hood there is a description of how they configured the pipeline, where they assigned each of the compute elements,…what the platform was, what hardware was used, and then whether they were custom OpenCL kernels or off-the-shelf,” said Torelli.

The score is a unitless number on a scale from about 2 to 100, said Torelli: “A high-performance GPU car is probably going to get in the 60-100 range, where a smaller, lower-power, embedded processor is probably going to get between 2 and 15.”

The distinction between those two extremes comes down to cost and power, he added. The lower-power processors draw on the order of a few watts; the high-performance GPUs, hundreds of watts.

To request an ADASMark license, visit the EEMBC website.


Kevin Jost is the Editorial Director for BNP Media’s Autonomous Vehicle Technology, planning and directing the creation and commissioning of all content for the startup brand’s websites, social media, newsletters, and magazines as well as market research, native content, webcasts, directories, and events. He brings over 25 years of journalism and engineering experience to the brand and leads a global team of editors and contributors to create and deliver the latest content for automated, connected, electrified, & shared vehicle technologies. Contact him at jostk@bnpmedia.com.
