Back in 2021, RISC-V researchers demonstrated that Nvidia’s CUDA code can run on non-proprietary hardware such as the RISC-V-based Vortex GPGPU via an OpenCL translation layer. It was probably not the most efficient way to bring CUDA support to RISC-V, but it clearly showed growing interest in RISC-V as a viable alternative to the x86 and Arm processing architectures. Nvidia is now officially acknowledging RISC-V’s potential in the compute space by announcing native CUDA support for RISC-V.
The announcement was made at a RISC-V summit in China by Frans Sijstermans, Nvidia’s Vice President of Hardware Engineering, who also happens to serve on the RISC-V board of directors. The diagram presented at the event shows a RISC-V processor handling the CUDA drivers at the OS level, while the CUDA kernels run on Nvidia’s GPUs. A DPU, apparently also from Nvidia, is involved as well, which suggests that the diagram depicts a compute system aimed at HPC and data centers.
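The host/device split shown in the diagram mirrors the standard CUDA programming model: host code runs on the CPU and launches kernels that execute on the GPU. As an illustrative sketch (a generic CUDA runtime-API program, not anything RISC-V-specific from the announcement), porting the host side to a new CPU architecture essentially means recompiling code like this against a driver and runtime built for that ISA:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: runs on the Nvidia GPU regardless of the host CPU's ISA.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host code: compiled for the CPU (x86, Arm, or, with the announced
    // port, RISC-V) and linked against the CUDA driver/runtime.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Kernel launch: the GPU does the work; the CPU only orchestrates.
    vectorAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Since the device code is identical on every host platform, what Nvidia's announcement adds is the RISC-V side of the toolchain and driver stack, not changes to the kernels themselves.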
Tom’s Hardware notes that offering CUDA support for open architectures like RISC-V could help Team Green diversify its ecosystem in China, where RISC-V is seeing explosive adoption, despite the restrictions on selling AI accelerators such as the GB200 and GB300 in that region.
Meanwhile, AMD is promoting its own CUDA alternative in the form of ROCm, which is now in its seventh iteration and already supports the RISC-V architecture. Many companies have been vocal critics of the CUDA hegemony, and Team Red is trying its best to break the monopoly, but ROCm adoption remains slow, and it will probably take a few more years before we see real competition in the compute software stack arena.