By setting a clear, stable standard, the RVA23 profile's ratification is spurring top vendors to align on a common RISC-V hardware goal. All we need now is that hardware.
Aug. 07, 2025 –
CUDA is coming to RISC-V, in yet another vote of confidence for an ecosystem entering a new phase of maturity. NVIDIA becomes the latest in a growing list of vendors, including Red Hat and Canonical, to have taken the ratification of the RISC-V RVA23 profile as a go signal.
The announcement, made at RISC-V Summit China 2025, was a ‘strategic technology disclosure’ rather than a product launch. Afterwards, I spoke with the bearer of this good news – NVIDIA’s VP of Multimedia Architecture, Frans Sijstermans – about what needs to happen next.
Before we get to that, here’s some background for the uninitiated: CUDA is NVIDIA’s proprietary software framework for AI and machine learning using its GPUs – which offer speedups measured not in percentages, but in orders of magnitude. Since its launch in 2007, it has become a foundational tool in areas such as AI training, HPC, advanced simulations, genomics, autonomous mobility and massive inference pipelines.
CUDA has a developer base of over 4 million, and NVIDIA’s 90% share of the market for GPU-based neural network training is largely driven by the ease with which developers can build and train AI models using CUDA alongside deep learning frameworks like PyTorch and TensorFlow.
The team had considered porting CUDA to RISC-V back in 2022. But, Sijstermans tells me, bringing CUDA to RISC-V isn’t like cross-compiling a simple toolchain. It requires the ISA and surrounding platform to have reached a level of maturity, predictability, and performance that, until very recently, didn’t exist outside commercial ISAs.
“When you compare where things were then to where they are now, though, it’s another story entirely”, he recalls. “There are still plenty of things to be finished, but it’s getting very, very close. We know we can bring CUDA to RISC-V and it’ll work with the new wave of RVA23 hardware.”