Edge AI/ML accelerator (NPU)
TinyRaptor reduces the inference time and power consumption needed to run Machine Learning (ML) Neural Networks (NN), while remaining a scalable, seamless solution for deploying AI/ML in any SoC.
TinyRaptor is particularly well suited for edge computing applications on embedded platforms with both high-performance and low-power requirements.
Best product award: https://www.dolphin-design.fr/news/dolphin-design-wins-an-embedded-award-for-tiny-raptor-its-energy-efficient-neural-network-ai-accelerator/
Block Diagram of the Edge AI/ML accelerator (NPU) IP Core

NPU IP
- 4-/8-bit mixed-precision NPU IP
- Neural network processor designed for edge devices
- Edge AI Accelerator NNE 1.0
- Neural Network Processor for Intelligent Vision, Voice, Natural Language Processing
- AI accelerator - 4.5K, 9K, or 18K INT8 MACs, 16 to 32 TOPS
- AI accelerator - 36K or 54K INT8 MACs, 32 to 128 TOPS
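The MAC-count and TOPS figures in the list above follow the usual peak-throughput relation for accelerators: each MAC unit performs 2 operations (one multiply, one accumulate) per cycle. A minimal sketch of that arithmetic, assuming an illustrative clock frequency (actual clock rates are not given here):

```python
# Rough peak-throughput estimate from a MAC count and clock frequency.
# Assumption (not from the datasheet): each MAC contributes 2 ops per
# cycle (multiply + accumulate); the clock values below are illustrative.

def peak_tops(mac_count: int, clock_hz: float) -> float:
    """Peak INT8 throughput in TOPS: 2 ops per MAC per cycle."""
    return 2 * mac_count * clock_hz / 1e12

# Example: a 9K-MAC configuration at an assumed 1 GHz clock
print(peak_tops(9_000, 1.0e9))  # 18.0
```

For instance, at an assumed clock of roughly 1.78 GHz, a 4.5K-MAC configuration lands near the 16 TOPS end of the quoted range; the vendor's exact figures depend on the target process and clock.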