CEVA announces industry’s first high-performance sensor hub DSP architecture
CEVA, Inc. announced SensPro, which it says is the industry’s first high-performance sensor hub DSP architecture, designed to handle the broad range of sensor processing and sensor fusion workloads for contextually aware devices.
SensPro reportedly addresses the need for specialized processors to efficiently handle the proliferation of sensor types required in smartphones, robotics, automotive, AR/VR headsets, voice assistants, smart home devices, and emerging industrial and medical applications being revolutionized by initiatives like Industry 4.0. These sensors, which include cameras, radar, LiDAR, time-of-flight (ToF) sensors, microphones, and inertial measurement units (IMUs), generate a multitude of data types and bit rates derived from imaging, sound, RF, and motion, which can be used to create a fully 3D contextually aware device.
Built from the ground up to maximize performance-per-watt for complex multi-sensor processing use cases, the architecture, the company says, combines the high-performance single- and half-precision floating-point math required for high-dynamic-range signal processing, point cloud creation, and deep neural network (DNN) training with the large amount of 8- and 16-bit parallel processing capacity required for voice, imaging, DNN inference, and Simultaneous Localization and Mapping (SLAM). The architecture incorporates the company’s widely used CEVA-BX scalar DSP, which it says offers a seamless migration path from single-sensor system designs to multi-sensor, contextually aware designs.
“The proliferation of sensors in intelligent systems continues to increase, providing more precise modeling of the environment and context,” said Dimitrios Damianos, Technology & Market Analyst of the Sensing Division at Yole Développement (Yole). “Sensors are becoming smarter, and the goal is not to get more and more data from them, but higher quality of data especially in cases of environment/surround perception such as: environmental sensor hubs that use a combo of microphones, pressure, humidity, inertial, temperature, and gas sensors (smart homes/offices) as well as situational awareness in ADAS/AV where many sensors (radar, LiDAR, cameras, IMU, ultrasonic, etc.) must work together to make sense of their surroundings.”
“The challenge is to process and fuse different types of data from different types of sensors,” said Yohann Tschudi, Technology & Market Analyst, Computing and Software. “Using a mix of scalar and vector processing, floating and fixed-point math, coupled with an advanced micro-architecture, SensPro offers system and SoC designers a unified processor architecture to address the needs of any contextually-aware multi-sensor device.”
According to the company, SensPro uses a highly configurable 8-way VLIW architecture that can be easily tuned to a wide range of applications. It employs a state-of-the-art micro-architecture that combines scalar and vector processing units and incorporates an advanced, deep pipeline enabling operating speeds of 1.6 GHz at a 7 nm process node. It incorporates a CEVA-BX2 scalar processor for control code execution, with a score of 4.3 CoreMark/MHz. Its wide-SIMD, scalable processor architecture for parallel processing is configurable for up to 1024 8x8 MACs, 256 16x16 MACs, dedicated 8x2 binary neural network (BNN) support, and 64 single-precision and 128 half-precision floating-point MACs. This allows it to deliver 3 TOPS for 8x8 network inferencing, 20 TOPS for binary neural network inferencing, and 400 GFLOPS of floating-point arithmetic. Other key features include a memory architecture providing 400 GB/s of bandwidth, a 4-way instruction cache, a 2-way vector data cache, DMA, and queue and buffer managers that offload data transactions from the DSP.
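The headline throughput figures follow directly from the MAC counts and the quoted 1.6 GHz clock. A minimal back-of-envelope sketch, assuming the common convention that one MAC counts as two operations (a multiply plus an add), a convention the announcement does not state explicitly:

```python
# Back-of-envelope peak throughput from the published figures.
# Assumption (not stated in the announcement): 1 MAC = 2 ops (multiply + add).

CLOCK_GHZ = 1.6  # quoted operating speed at a 7 nm process node

def peak_tops(num_macs, clock_ghz=CLOCK_GHZ, ops_per_mac=2):
    """Peak throughput in TOPS (tera-operations per second)."""
    return num_macs * clock_ghz * ops_per_mac / 1000.0

# 1024 8x8 MACs -> roughly 3.3 TOPS, in line with the quoted ~3 TOPS
print(f"8x8 INT MACs (1024): {peak_tops(1024):.2f} TOPS")

# 128 half-precision MACs -> roughly 410 GFLOPS, in line with the quoted ~400 GFLOPS
print(f"FP16 MACs (128): {peak_tops(128) * 1000:.0f} GFLOPS")
```

Under these assumptions the arithmetic is consistent with the company’s quoted 3 TOPS and 400 GFLOPS figures; the 20 TOPS binary-network number implies additional parallelism in the dedicated 8x2 BNN path beyond the integer MAC array.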
SensPro is accompanied by an advanced set of software and development tools to expedite system designs, including an LLVM C/C++ compiler, an Eclipse-based integrated development environment (IDE), an OpenVX API, software libraries for OpenCL, the CEVA deep neural network (CDNN) graph compiler including the CDNN-Invite API for inclusion of custom AI engines, CEVA-CV imaging functions, the CEVA-SLAM software development kit and vision libraries, ClearVox noise reduction, WhisPro speech recognition, MotionEngine sensor fusion, and the SenslinQ software framework.
Initially, the DSPs will be available in three configurations, each including a CEVA-BX2 scalar processor and various vector units configured for optimal use-case handling:
- SP250 – single vector unit with 256 8x8 MACs targeting imaging, vision, and sound centric applications
- SP500F – single vector unit with 512 8x8 MACs and 64 single precision floating point MACs targeting SLAM centric applications
- SP1000 – dual vector units with 1024 8x8 MACs and binary networks support targeting AI centric applications
The architecture and cores will be made available for general licensing starting in Q3 2020.
For more information, visit www.ceva-dsp.com.