AI Accelerator: Neural Network-specific Optimized 1 TOPS
While there are many general-purpose Neural Processing Units (NPUs), a one-size-fits-all solution is rarely the most efficient. By optimizing the Origin E1 for specific neural networks, Expedera can significantly reduce NPU area and power, which is essential in cost- and power-constrained devices.
The Origin E1 family supports combinations of many common neural networks, including ResNet-50 V1, EfficientNet, NanoDet, Tiny YOLO v3, MobileNet V1, MobileNet SSD, BERT, CenterNet, U-Net, and many others.