AImotive announced that it has begun shipment of the latest release of its aiWare3 NN (neural network) hardware inference engine IP. The company states that the aiWare3P IP core incorporates new features that result in improved performance, lower power consumption, greater host CPU (central processing unit) offload, and simpler layout for larger chip designs.

"Our production-ready aiWare3P release brings together everything we know about accelerating neural networks for vision-based automotive AI (artificial intelligence) inference applications," said Marton Feher, Senior Vice President of Hardware Engineering for AImotive. "We now have one of the automotive industry's most efficient and compelling NN acceleration solutions for volume production L2/L2+/L3 AI."

Each aiWare3P hardware IP core offers up to 16 TMAC/s (>32 TOPS) at 2 GHz, with multi-core and multi-chip implementations capable of delivering 50+ TMAC/s (>100 INT8 TOPS). The core is designed for AEC-Q100 extended temperature operation and includes a range of features to enable users to achieve ASIL-B and above certification. According to the company, key upgrades include:
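The relationship between the quoted TMAC/s and TOPS figures follows from counting each multiply-accumulate as two operations. A minimal sketch of that arithmetic (the per-cycle MAC count is an inference from the stated clock, not a figure from the announcement):

```python
# Arithmetic behind the headline figures, assuming 1 MAC = 2 ops (multiply + add).
clock_hz = 2e9          # 2 GHz clock
tmacs_per_s = 16e12     # 16 TMAC/s per core

macs_per_cycle = tmacs_per_s / clock_hz  # MAC units kept busy each cycle
tops = tmacs_per_s * 2 / 1e12            # each MAC counts as 2 INT8 ops

print(macs_per_cycle)  # 8000.0
print(tops)            # 32.0
```

This is why 16 TMAC/s is quoted as ">32 TOPS", and why 50+ TMAC/s in a multi-core configuration corresponds to >100 INT8 TOPS.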

  • Enhanced on-chip data reuse and movement, scheduling algorithms, and external memory bandwidth management
  • Improvements designed to ensure that most NNs execute 100% within the aiWare3P core, without host CPU intervention
  • Range of upgrades reducing external memory bandwidth requirements
  • Advanced cross-coupling between C-LAM convolution engines and F-LAM function engines
  • Physical tile-based microarchitecture, enabling easier physical implementation of large aiWare cores
  • Logical tile-based data management, enabling efficient workload scalability up to the maximum 16 TMAC/s per core
  • Upgraded SDK (software development kit), including improved compiler and new performance analysis tools

The aiWare3P RTL will begin shipping in January 2020.

For more information, visit