General Purpose Neural Processing Unit (NPU)
The Chimera GPNPU family provides a unified processor architecture that handles matrix and vector operations as well as scalar (control) code in one execution pipeline. These workloads are traditionally split across separate NPU, DSP, and real-time CPU cores. The entire architecture is abstracted to the user as a single software-controlled core, allowing for the simple expression of complex parallel workloads.
The Chimera GPNPU is entirely driven by code, empowering developers to continuously optimize the performance of their models and algorithms throughout the device’s lifecycle.
Modern System-on-Chip (SoC) architectures deploy complex algorithms that mix traditional C++ based code with newly emerging and fast-changing machine learning (ML) inference code. This combination of graph code commingled with C++ code is found in numerous chip subsystems, most prominently in vision and imaging subsystems, radar and lidar processing, communications baseband subsystems, and a variety of other data-rich processing pipelines. Only Quadric’s Chimera GPNPU architecture can deliver high ML inference performance and run complex, data-parallel C++ code on the same fully programmable processor.
Compared to other ML inference architectures that force the software developer to artificially partition an algorithm between two or three different kinds of processors, Quadric’s Chimera processors deliver a significant uplift in software developer productivity, while matching the graph-processing efficiency of today's dedicated accelerators and retaining long-term, future-proof flexibility.
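The kind of commingled workload described above can be illustrated with a small, generic C++ sketch (this is an illustration of the programming model, not Quadric's actual SDK or API): a matrix-style inference stage followed by scalar control code, written as one program rather than partitioned across an NPU and a CPU or DSP.

```cpp
#include <array>
#include <algorithm>
#include <cstddef>
#include <iterator>

// Hypothetical fused pipeline. On a partitioned SoC the dense-layer loop
// below would be offloaded to an NPU while the argmax/control code ran on
// a CPU or DSP; on a unified, code-driven core both stages compile into
// the same program.
constexpr std::size_t kIn = 4;   // input feature count (example size)
constexpr std::size_t kOut = 3;  // output class count (example size)

std::size_t classify(const std::array<float, kIn>& x,
                     const std::array<std::array<float, kIn>, kOut>& w) {
    std::array<float, kOut> logits{};
    // Matrix/vector stage: tiny fully connected layer, logits = w * x.
    for (std::size_t o = 0; o < kOut; ++o)
        for (std::size_t i = 0; i < kIn; ++i)
            logits[o] += w[o][i] * x[i];
    // Scalar/control stage: pick the index of the largest logit.
    return static_cast<std::size_t>(std::distance(
        logits.begin(), std::max_element(logits.begin(), logits.end())));
}
```

Because both stages live in one translation unit, the developer can retune either the inference math or the control logic over the device's lifetime without repartitioning the design.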
Quadric’s Chimera GPNPUs are licensable processor IP cores delivered in synthesizable source RTL form. Blending the best attributes of both neural processing units (NPUs) and digital signal processors (DSPs), Chimera GPNPUs are aimed at inference applications in a variety of high-volume end markets, including mobile devices, digital home products, automotive systems, and network edge compute.
Block Diagram of the General Purpose Neural Processing Unit (NPU)
