High power efficiency AI accelerator
Realizing a mobility society that connects people and cars, smart cities that connect people and cities, and industrial Cyber-Physical Systems that reproduce the real world on computers all depend on applying AI across a wide range of fields. A key challenge is the efficient execution of AI processing on edge devices with severe power and cost constraints.
In conventional neural network processing, the intermediate data of each layer is stored in power-hungry external memory, which lowers power efficiency. The AI accelerator “ML041”, developed by NSITEXE, divides the input data into tiles (“tiling”) and fuses the processing of multiple layers for each tile (“layer fusion”), reducing load/store transactions of intermediate data to external memory and improving power efficiency. This enables neural networks such as VGG16, MobileNet, and ResNet to run at 12 TOPS/W power efficiency (when implemented in a 7 nm-generation SoC).
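The benefit of tiling with layer fusion can be illustrated with a back-of-the-envelope traffic model. The sketch below is hypothetical and not NSITEXE's implementation: it counts element transfers to/from external memory for a plain layer-by-layer schedule (every intermediate tensor is stored and reloaded) versus a fused schedule (intermediates stay in on-chip buffers, so only the input load and final output store touch external memory). Tile halo/overlap overheads and tensor size changes between layers are ignored for simplicity.

```python
# Hypothetical traffic model: external-memory element transfers for a
# chain of `n_layers` layers over a tensor of `n_elems` elements.

def layer_by_layer_traffic(n_elems: int, n_layers: int) -> int:
    # Load the input once, store the final output once, and for each of
    # the (n_layers - 1) intermediate tensors: one store + one reload.
    return n_elems + n_elems + 2 * n_elems * (n_layers - 1)

def fused_tiled_traffic(n_elems: int) -> int:
    # With layer fusion, each tile is loaded once, run through all fused
    # layers using on-chip buffers, and its final result is stored once.
    # Intermediates never leave the chip (halo overhead ignored here).
    return n_elems + n_elems

baseline = layer_by_layer_traffic(1_000_000, 4)  # 8,000,000 transfers
fused = fused_tiled_traffic(1_000_000)           # 2,000,000 transfers
print(f"fusion cuts external traffic by {baseline // fused}x")
```

Under these simplifying assumptions, a four-layer fused chain cuts external-memory traffic by 4x; since external DRAM accesses dominate energy in many edge workloads, this is the mechanism behind the power-efficiency claim above.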
Offers configurability to meet your requirements
“ML041” also offers a configuration with built-in diagnostic circuitry that detects random hardware failures, enabling AI to be applied to safety-critical systems without additional external diagnostic circuitry.
Block Diagram of the High power efficiency AI accelerator IP Core