SiFive has developed a second generation architecture for its RISC-V core aimed at accelerating AI applications in the data centre and in edge devices.
www.eenewseurope.com, Sept. 09, 2025 –
The Intelligence family adds two entirely new cores, the X160 Gen 2 and X180 Gen 2, alongside upgraded versions of the X280, X390 and XM cores. All the cores include enhanced vector processing capabilities for AI workloads as well as improved memory access for lower latency.
The vector-enabled cores speed up AI workloads by serving as dedicated Accelerator Control Units that directly manage AI hardware, streamline data flow and reduce the software overhead that slows down AI applications.
The cores have a wide range of target applications. The 64bit multicore XM is aimed at data centre chips, while the smaller 32bit X180 can be used in chips for autonomous robotics, industrial automation and smart IoT devices, adding audio enhancement, health and fitness monitoring, home security and object detection, and predictive maintenance in industrial systems.
AI workloads will increase by at least 20% across all forms of computing, including 78% in edge computing and 39% in on-premises data centres, says market researcher Deloitte.
The second generation design adds the vector extensions alongside a co-processor interface for better AI accelerator integration and a new design to reduce memory latency.
The 64bit X280 and XM cores implement the RVA23 profile, approved back in April, which enables rich operating system (OS) stacks to run from standard binary OS distributions. The profile is essential for software portability across many hardware implementations and helps avoid lock-in to a particular software stack. RVA23 is the second major release of the application-processor profile and builds on the foundation established by RVA20.
The Vector extension accelerates math-intensive workloads, including AI/ML, cryptography and compression/decompression, with flexible vector instructions that support multiple data formats.
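A distinctive feature of the RISC-V Vector extension is that code is vector-length agnostic: a strip-mining loop asks the hardware how many elements it can handle per iteration and loops until the data is consumed. The sketch below illustrates the pattern in portable C rather than RVV intrinsics; `SIMULATED_VLMAX` and `setvl()` are stand-ins for the hardware vector length and the `vsetvli` instruction, not real API.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the hardware maximum vector length (VLMAX). */
#define SIMULATED_VLMAX 8

/* Models vsetvli: the hardware grants min(remaining, VLMAX) elements. */
static size_t setvl(size_t remaining) {
    return remaining < SIMULATED_VLMAX ? remaining : SIMULATED_VLMAX;
}

/* Strip-mining loop: each outer iteration processes one "strip" of up
 * to VLMAX elements; the inner loop models a single vector add (vadd.vv). */
void vec_add_i32(int32_t *dst, const int32_t *a, const int32_t *b, size_t n) {
    for (size_t i = 0; i < n; ) {
        size_t vl = setvl(n - i);          /* elements in this strip */
        for (size_t j = 0; j < vl; j++)
            dst[i + j] = a[i + j] + b[i + j];
        i += vl;
    }
}
```

Because the loop never hard-codes a vector width, the same binary runs unchanged on cores with different vector register sizes, which is what makes the 512bit datapaths mentioned below transparent to software.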
RVA23 also includes a hypervisor extension that will enable virtualization for enterprise workloads in both on-premises server and cloud-computing applications. This will accelerate the development of RISC-V-based enterprise hardware, operating systems and software workloads. The Hypervisor extension will also provide better security for mobile applications by separating secure and non-secure components.
The memory interface uses an early dispatch scheme to hide memory latency for vector instructions, which for 512bit vectors can require multiple load operations.
The scalar unit dispatches committed vector instructions to a Vector Command Queue (VCQ); when a vector load is encountered, its address is sent out to the L2 memory system at the same time as the instruction enters the VCQ.
The responses from memory are reordered in a configurable Vector Load Data Queue (VLDQ). "The intent is that the load will pick the data up from the vector load data queue, where it is already sitting waiting," says John Simpson, senior principal architect at SiFive. This means the data is available by the time the load instruction issues from the VCQ, effectively providing single cycle operation.
The specialized co-processor interface allows the cores to be used as an Accelerator Control Unit (ACU), providing accelerator control and assist functions for a customer's accelerator engine. The SiFive Scalar Co-processor Interface (SSCI) is added alongside the Vector Coprocessor Interface eXtension (VCIX) and allows the accelerator to be driven by custom RISC-V instructions. The interface provides direct access to CPU registers with a flexible range of instruction opcode formats to speed up the interface.
This means an accelerator connected over SSCI and/or VCIX may also have its own access to system memory so that data can be preloaded into accelerator registers over VCIX, before a long operation is started by writing to a control register over SSCI. The Core Local Port can then give the CPU low latency access to accelerator local memory.
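The preload-then-start flow can be sketched as follows, with a memory-mapped register block standing in for the accelerator. The register layout, names and the busy/start bits are hypothetical, and a real ACU would issue custom SSCI/VCIX instructions rather than plain loads and stores; the sketch only shows the ordering of the steps.

```c
#include <stdint.h>

/* Hypothetical accelerator register block (layout invented for illustration). */
typedef struct {
    volatile uint64_t operand[4]; /* stands in for VCIX-preloaded registers */
    volatile uint32_t ctrl;       /* stands in for an SSCI control register */
    volatile uint32_t status;     /* busy flag, cleared by the accelerator */
} accel_regs_t;

#define CTRL_START 1u
#define STAT_BUSY  1u

static void accel_run(accel_regs_t *r, const uint64_t *ops, int n) {
    for (int i = 0; i < n; i++)   /* 1. preload operands (over "VCIX") */
        r->operand[i] = ops[i];
    r->ctrl = CTRL_START;         /* 2. start the long operation (over "SSCI") */
    while (r->status & STAT_BUSY) /* 3. wait; the CPU is free to do other work */
        ;
}
```

Because the accelerator fetches its bulk data through its own path to system memory, the CPU only moves small operands and control values here, which is why the accelerator's throughput can exceed the CPU's, as Simpson notes.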
This architecture enables the data throughput of the accelerator to be independent of the processor and means the accelerator could be far larger and have higher bandwidth paths to memory than the CPU, says Simpson.
All five Gen 2 RISC-V cores are available for licensing immediately, with first silicon expected in Q2 2026. Two leading US chip designers have already licensed the X100 as a custom interface core with SSCI and VCIX to connect and control their own AI accelerators, adding to SiFive’s count of 40 design wins for the first generation cores.