# AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source

🔥 1,038 · 💬 408 · 5 months ago · www.phoronix.com · mfiguiere
While there have been efforts by AMD over the years to make it easier to port codebases targeting NVIDIA's CUDA API to run atop HIP/ROCm, it still requires work on the part of developers. Over the past two years, though, AMD has quietly been funding an effort to bring binary compatibility, so that many NVIDIA CUDA applications can run atop the AMD ROCm stack at the library level: a drop-in replacement without the need to adapt source code. Here is more information on this "skunkworks" project, which is now available as open-source, along with some of my own testing and performance benchmarks of this CUDA implementation built for Radeon GPUs.

Some readers may recall ZLUDA, an open-source project that aimed to provide a drop-in CUDA implementation on Intel graphics, built atop Intel oneAPI Level Zero. ZLUDA was discontinued for private reasons, but it turns out that the developer behind it, Andrzej Janik, was contracted by AMD in 2022 to effectively adapt ZLUDA for use on AMD GPUs with HIP/ROCm. Janik spent the past two years bringing ZLUDA to Radeon GPUs, and it works: much CUDA software can run on HIP/ROCm without any modifications or other special processes. Just run the binaries as you normally would while ensuring that the ZLUDA library replacements for CUDA are loaded.

Janik reached out and provided access to the new ZLUDA implementation for AMD ROCm so that I could test it out and benchmark it in advance of today's planned public announcement.
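To make the "no modifications" claim concrete, below is a minimal sketch of an ordinary CUDA program of the kind ZLUDA targets. Nothing in it references ZLUDA or ROCm; the article's point is that an existing CUDA binary like this keeps working once ZLUDA's replacement CUDA library is the one the loader resolves. The invocation hinted at in the final comment (pointing `LD_LIBRARY_PATH` at a ZLUDA directory on Linux) is an assumption for illustration, not something detailed in this article.

```cuda
// Minimal CUDA runtime-API vector add. Built with the normal NVIDIA toolchain
// (e.g. `nvcc vector_add.cu -o vector_add`), the binary contains no reference
// to ZLUDA; a drop-in CUDA library replacement is substituted at load time.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and copies through the ordinary CUDA runtime API.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element.
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // Expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
// Hypothetical drop-in usage on Linux (assumption, path is illustrative):
//   LD_LIBRARY_PATH=/path/to/zluda ./vector_add
```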