AI accelerator (NPU) IP - 16 to 32 TOPS
Expedera's scalable tile-based design comprises a single controller (SSP) plus multiple matrix-math units (MMP), accumulators (PSM), vector engines (VSP), and memory for storing the network. The specific configuration depends on the application's unique requirements. The unified compute pipeline architecture enables highly efficient hardware scheduling and advanced memory management, delivering end-to-end low-latency performance. The patented architecture is mathematically proven to use the minimum amount of memory required for neural network (NN) execution, which minimizes die area, reduces DRAM accesses, improves bandwidth, saves power, and maximizes performance.
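As a rough illustration of how a tile-based configuration scales peak throughput across the 16 to 32 TOPS range, the sketch below estimates peak TOPS from the number of matrix-math units, the MACs per unit, and the clock frequency. This is a minimal back-of-the-envelope model, not Expedera's sizing method; the function name and all numeric values (unit counts, 1024 MACs per unit, the 1 GHz clock) are hypothetical assumptions for illustration only.

```python
# Rough peak-throughput estimate for a tile-based NPU configuration.
# All unit counts and the clock frequency below are illustrative
# assumptions, not Expedera-published figures.

def peak_tops(mmp_units: int, macs_per_mmp: int, clock_hz: float) -> float:
    """Peak TOPS = MACs x 2 ops (multiply + accumulate) x clock / 1e12."""
    return mmp_units * macs_per_mmp * 2 * clock_hz / 1e12

# Two hypothetical configurations spanning the 16-32 TOPS range:
small = peak_tops(mmp_units=8,  macs_per_mmp=1024, clock_hz=1e9)   # ~16.4 TOPS
large = peak_tops(mmp_units=16, macs_per_mmp=1024, clock_hz=1e9)   # ~32.8 TOPS
print(f"small config: {small:.1f} TOPS, large config: {large:.1f} TOPS")
```

Note that this counts raw multiply-accumulate capability only; sustained throughput additionally depends on the scheduling and memory-management behavior described above.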
Block Diagram of the AI accelerator (NPU) IP - 16 to 32 TOPS

Related AI accelerator IP
- Ultra-low-power AI inference accelerator
- Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
- Performance-Efficiency-Leading AI Accelerator for Mobile and Edge Devices
- High-Performance Edge AI Accelerator
- High-power-efficiency AI accelerator
- AI Accelerator: Neural-Network-Optimized, 1 TOPS