Complete Neural Processor for Edge AI
BrainChip’s AI IP is an event-based technology that is inherently lower power than conventional neural network accelerators. BrainChip IP supports incremental learning and high-speed inference in a wide variety of use cases, delivering high throughput and unsurpassed performance-per-watt.
BrainChip’s IP can be configured to perform convolutional (CNP) and fully connected (FNP) layers. Weight bit-precision is programmable to optimize throughput or accuracy, and each weight is stored locally in embedded SRAM inside each NPU. Entire neural networks can be placed into the fabric, removing the need to swap weights in and out of DRAM; this reduces power consumption while increasing throughput.
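To illustrate why programmable weight bit-precision matters for an architecture that keeps all weights in embedded SRAM, the sketch below estimates the on-chip weight footprint of a small network at several precisions. The layer shapes and names are illustrative assumptions, not a real BrainChip configuration.

```python
# Hypothetical sketch: effect of programmable weight bit-precision on the
# embedded SRAM needed to hold all weights locally (avoiding DRAM swaps).
# Layer shapes below are illustrative, not BrainChip specifications.

def weight_sram_bytes(num_weights: int, bits_per_weight: int) -> int:
    """Return the SRAM footprint in bytes for a layer's weights."""
    return (num_weights * bits_per_weight + 7) // 8  # round up to whole bytes

# Illustrative network: two convolutional (CNP) layers and one
# fully connected (FNP) layer.
layers = {
    "conv1 (CNP)": 3 * 3 * 3 * 32,    # k_h * k_w * in_ch * out_ch
    "conv2 (CNP)": 3 * 3 * 32 * 64,
    "fc1   (FNP)": 1024 * 10,         # in_features * out_features
}

for bits in (8, 4, 2, 1):  # lower precision -> smaller footprint
    total = sum(weight_sram_bytes(n, bits) for n in layers.values())
    print(f"{bits}-bit weights: {total} bytes of embedded SRAM")
```

Halving the bit-precision roughly halves the SRAM footprint, which is the lever the text describes for trading accuracy against throughput and area.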
BrainChip’s IP fabric can be instantiated either in a parallelized configuration for maximum performance, or in a space-optimized configuration that reduces silicon area and further lowers power consumption. Additionally, users can adjust the clock frequency to tune performance and power consumption.
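The clock-frequency lever follows the familiar dynamic-power relationship P ≈ C·V²·f. The sketch below uses placeholder capacitance, voltage, and frequency values (assumptions, not BrainChip figures) to show how scaling the clock scales dynamic power.

```python
# Hedged sketch of dynamic CMOS power, P = C_eff * V^2 * f, which is why
# lowering the clock frequency reduces power consumption. All numeric
# values are illustrative placeholders, not BrainChip specifications.

def dynamic_power_mw(c_eff_nf: float, v_volts: float, f_mhz: float) -> float:
    """Approximate dynamic power in mW from effective switched capacitance
    (nF), supply voltage (V), and clock frequency (MHz)."""
    # nF * V^2 * MHz -> mW  (1e-9 F * 1e6 Hz * V^2 = 1e-3 W)
    return c_eff_nf * v_volts ** 2 * f_mhz

baseline = dynamic_power_mw(c_eff_nf=2.0, v_volts=0.9, f_mhz=300)
halved = dynamic_power_mw(c_eff_nf=2.0, v_volts=0.9, f_mhz=150)
print(f"halving the clock halves dynamic power: "
      f"{baseline:.1f} mW -> {halved:.1f} mW")
```

Because power scales linearly with frequency (and quadratically with voltage), a space-optimized instance running at a lower clock can trade throughput for a substantially lower power envelope.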
Block Diagram of the Complete Neural Processor for Edge AI
