Semidynamics Announces Cervell™ All-in-One RISC-V NPU Delivering Scalable AI Compute for Edge and Datacenter Applications

New fully programmable Neural Processing Unit (NPU) combines CPU, Vector, and Tensor processing to deliver up to 256 TOPS for LLMs, Deep Learning, and Recommendation Systems.

Barcelona, Spain – 6 May 2025 – Semidynamics, the only provider of fully customizable RISC-V processor IP, announces Cervell™, a scalable and fully programmable Neural Processing Unit (NPU) built on RISC-V. Cervell combines CPU, vector, and tensor capabilities in a single, unified all-in-one architecture, unlocking zero-latency AI compute across applications from edge AI to datacenter-scale LLMs.

Delivering up to 256 TOPS (Tera Operations Per Second) at 2GHz, Cervell scales from C8 to C64 configurations, allowing designers to tune performance to application needs, from 8 TOPS INT8 at 1GHz in compact edge deployments to 256 TOPS INT4 in high-end AI inference.
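As a quick sanity check on the figures above: if throughput is assumed to scale linearly with configuration size (C8 to C64 is a factor of eight) and with clock frequency, and to double when precision halves from INT8 to INT4 (assumptions made for this sketch, not figures stated in the release), the two quoted endpoints line up.

```c
/* Sanity check on the quoted Cervell throughput range, assuming (this
 * sketch's assumption, not Semidynamics' statement) that throughput
 * scales linearly with configuration size and clock frequency, and
 * doubles when precision halves from INT8 to INT4. */
#include <stdio.h>

int main(void) {
    const double base_tops   = 8.0;        /* quoted: C8, INT8, 1 GHz */
    const double size_scale  = 64.0 / 8.0; /* C8 -> C64               */
    const double clock_scale = 2.0 / 1.0;  /* 1 GHz -> 2 GHz          */
    const double prec_scale  = 8.0 / 4.0;  /* INT8 -> INT4            */

    double top_end = base_tops * size_scale * clock_scale * prec_scale;
    printf("Projected top-end throughput: %.0f TOPS\n", top_end); /* prints 256 */
    return 0;
}
```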
Says Roger Espasa, CEO of Semidynamics:

Why NPUs Matter

AI is rapidly becoming a core differentiator across industries, but traditional compute architectures weren’t built for its demands. NPUs are purpose-designed to accelerate the types of operations AI relies on most, enabling faster insights, lower latency, and greater energy efficiency. For companies deploying large models or scaling edge intelligence, NPUs are the key to unlocking performance without compromise.

Cervell NPUs are purpose-built to accelerate matrix-heavy operations, enabling higher throughput, lower power consumption, and real-time response. By integrating NPU capabilities with standard CPU and vector processing in a unified architecture, designers can eliminate latency and maximize performance across diverse AI tasks, from recommendation systems to deep learning pipelines.

Unlocking High-Bandwidth AI Performance

Cervell is tightly integrated with Gazillion Misses™, Semidynamics’ breakthrough memory management subsystem. The result is an NPU architecture that maintains full pipeline saturation, even in bandwidth-heavy applications like recommendation systems and deep learning.
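To see why a memory subsystem that keeps many requests in flight matters for pipeline saturation, consider a back-of-the-envelope application of Little's Law; the bandwidth and latency numbers below are illustrative assumptions, not Cervell or Gazillion Misses specifications.

```c
/* Little's Law estimate of how many cache-line requests must stay in
 * flight to sustain a given DRAM bandwidth. All numbers are assumed
 * for illustration only; they are not Cervell or Gazillion Misses
 * specifications. */
#include <stdio.h>

int main(void) {
    const double bandwidth_bytes_per_s = 100e9;  /* assumed 100 GB/s        */
    const double latency_s             = 100e-9; /* assumed 100 ns latency  */
    const double line_bytes            = 64.0;   /* typical cache-line size */

    /* Bytes that must be in flight = bandwidth * latency (Little's Law). */
    double bytes_in_flight = bandwidth_bytes_per_s * latency_s; /* 10,000 B */
    double outstanding     = bytes_in_flight / line_bytes;      /* ~156     */

    printf("Outstanding cache-line requests needed: %.0f\n", outstanding);
    return 0;
}
```

Under these assumed numbers, roughly 156 cache-line requests must be outstanding at once to keep the memory pipe full; a core limited to a handful of outstanding misses stalls long before that point, which is the gap a high-miss-count memory subsystem is designed to close.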
Built to Customer Specifications

Like all Semidynamics cores, Cervell is fully customizable to each customer’s specifications.

As demand grows for differentiated AI hardware, chip designers are increasingly looking for ways to embed proprietary features directly into their processor cores. While many IP providers offer limited configurability from fixed option sets, Semidynamics takes a different approach, enabling deep customization at the RTL level, including the insertion of customer-defined instructions (see the illustrative sketch after the at-a-glance summary below). This allows companies to integrate their unique “secret sauce” directly into the solution, protecting their ASIC investment from imitation and ensuring the design is fully optimized for power, performance, and area. With a flexible development model that includes early FPGA drops and parallel verification, Semidynamics helps customers accelerate time-to-market while reducing project risk. This flexibility, combined with RISC-V openness, ensures customers are never locked in and always in control.

Cervell At-a-Glance

- Fully programmable, all-in-one RISC-V NPU combining CPU, vector, and tensor processing
- Scales from C8 to C64 configurations
- From 8 TOPS INT8 at 1GHz up to 256 TOPS INT4 at 2GHz
- Tightly integrated with the Gazillion Misses™ memory management subsystem
- Fully customizable IP, including customer-defined instructions at the RTL level
- Target workloads: LLMs, deep learning, and recommendation systems, from edge to datacenter
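As an illustration of what a customer-defined instruction can look like from the software side, the sketch below wraps a hypothetical instruction placed in the RISC-V custom-0 opcode space behind a small C helper. The opcode fields and the operation itself are invented for this example; the release does not describe Semidynamics’ actual mechanism for exposing such instructions.

```c
/* Hypothetical customer-defined RISC-V instruction, shown purely for
 * illustration. The encoding (custom-0 opcode, funct3/funct7 values)
 * and the operation are invented; they are not Semidynamics' actual
 * interface. Builds with a RISC-V GCC/Clang toolchain. */
#include <stdint.h>

static inline uint64_t secret_sauce_op(uint64_t a, uint64_t b) {
    uint64_t result;
    /* .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2
     * 0x0B is the custom-0 major opcode reserved for vendor extensions. */
    __asm__ volatile(".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
                     : "=r"(result)
                     : "r"(a), "r"(b));
    return result;
}
```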