NVIDIA Ports CUDA to RISC-V, Betting Big on Open-Source ISA
NVIDIA envisions systems where its GPUs sit at the center of acceleration, while supplementary RISC‑V CPUs run the CUDA drivers, application logic, and operating system, orchestrating parallel workloads entirely within the CUDA ecosystem. The diagram shown at the summit also highlights a DPU handling networking tasks, creating a cohesive trio of compute, control, and data‑movement elements. Recall that NVIDIA already uses its NV-RISC-V cores inside its GPUs to handle on-chip control logic, which demonstrates the company's strategy of building heterogeneous platforms that combine RISC-V controllers with its GPUs, DPUs, and networking silicon. Now that CUDA is fully supported on RISC-V, NVIDIA could explore a RISC-V-based alternative to its Arm-based Grace CPU. As the open-source ISA slowly breaks into the server space with the RVA23 profile, which NVIDIA mandates for CUDA support, we could see some interesting heterogeneous designs.

RISC-V International CEO Andrea Gallo, in an interview with TechPowerUp, confirmed that "There's a team that is working on a server SoC and a server platform. This includes things like having the same interfaces for timers, clocks, the IOMMU, RAS, and the related error-reporting mechanisms. We all agree that we should use the same interfaces for specific peripherals that are part, for example, of a server platform." This lends confidence that companies are preparing a major debut of RISC-V CPUs for servers, and for HPC as well. We can't wait to see what the market brings now that the world's largest company is backing the world's largest open-source ISA movement.