4-/8-bit mixed-precision NPU IP
ENLIGHT is a high-performance neural network processor IP whose highly optimized network-model compiler reduces DRAM traffic from intermediate activation data through grouped layer partitioning and scheduling, and also supports load-balancing partitioning for multi-core NPU configurations. As the industry's first NPU IP to adopt 4-/8-bit mixed-precision quantization, ENLIGHT is easy to customize to different core sizes and performance points for target market applications, achieving significant efficiencies in area, power, performance, and DRAM bandwidth.
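ENLIGHT's compiler internals are not public, but the general idea behind 4-/8-bit mixed-precision quantization can be sketched as a per-layer bit-width decision: quantize a layer to 4 bits when the reconstruction error is acceptable, and fall back to 8 bits otherwise. The sketch below (NumPy, with a hypothetical `tolerance` threshold) illustrates one such policy; it is not ENLIGHT's actual algorithm.

```python
import numpy as np

def quantize(weights, bits):
    """Symmetric linear quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                  # 7 for 4-bit, 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def mixed_precision_quantize(layers, tolerance=0.05):
    """Assign 4 or 8 bits per layer: use 4 bits if the relative
    reconstruction error stays within `tolerance`, else 8 bits.
    (Illustrative policy only; the threshold value is an assumption.)"""
    out = {}
    for name, w in layers.items():
        q4, s4 = quantize(w, 4)
        err = np.linalg.norm(w - q4 * s4) / (np.linalg.norm(w) + 1e-12)
        if err <= tolerance:
            out[name] = (q4, s4, 4)
        else:
            q8, s8 = quantize(w, 8)
            out[name] = (q8, s8, 8)
    return out
```

Layers that tolerate 4-bit weights halve their weight storage and bandwidth relative to 8-bit, which is where the area, power, and DRAM-bandwidth savings claimed above come from.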
A production-proven IP, ENLIGHT has been licensed for a wide range of applications, including IP cameras, IoT devices, ADAS, and more.
Block Diagram of the 4-/8-bit mixed-precision NPU IP Core
