Software allows CUDA code to run on AMD GPUs without changes — ZLUDA is back, but both AMD and Intel ditched it, nixing future updates

AMD Radeon RX 6700 XT
(Image credit: Tom's Hardware)

ZLUDA, the software that enabled Nvidia's CUDA workloads to run on Intel GPUs, is back, but with a major change: it now works on AMD GPUs instead of Intel models (via Phoronix). And it seems further work on the project won't be happening, at least not in the form of major updates, with ZLUDA developer Andrzej Janik (who goes by the handle vosen) saying "realistically, it's now abandoned."

ZLUDA first popped up back in 2020 and showed great promise for making Intel GPUs compatible with CUDA, which forms the backbone of Nvidia's dominant, proprietary hardware-software ecosystem. Although Intel's only GPUs at the time were integrated graphics, the computing world was anticipating the launch of Intel's Xe-based GPUs, such as Ponte Vecchio and Arc Alchemist. Now that those GPUs are on the market, ZLUDA could have found plenty of use, which was presumably the plan back in 2020.

However, ZLUDA was taken off GitHub in February 2021, with Janik citing "private reasons." With ZLUDA's return, the developer has decided to clarify what those reasons were, and they have to do with Intel and AMD. When Janik first started developing ZLUDA, he was an Intel employee and lobbied internally for the company to adopt it. Intel asked Janik to take the project down while it evaluated it, but as the developer puts it, "Intel decided there is no business case for running CUDA applications on Intel GPUs."

Subsequently, Janik left Intel and got in touch with AMD, which signed a contract concerning ZLUDA development. Just like Intel, AMD took its time evaluating ZLUDA and asked for the project to remain private before it came to a decision. Eventually, AMD reached the same conclusion as Intel: that "there is no business case for running CUDA applications on AMD GPUs." Janik was then released from the contract and could finally bring ZLUDA back publicly.

Today's ZLUDA is very different from the 2020 version, however. Instead of being built on Intel's oneAPI and supporting the company's GPUs, it is based on AMD's competing ROCm stack and only supports Radeon GPUs. It's not entirely clear why Intel support was dropped, but it may stem from the fact that ZLUDA's 2020 release only supported pre-Xe integrated graphics. By the time Arc Alchemist GPUs launched in 2022, Janik was already working with AMD.

Additionally, the developer stated ZLUDA "will only possibly receive updates to run workloads I am personally interested in (DLSS)," meaning the project is more or less done. It seems Janik's ultimate goal was to get support from Intel or AMD, but with the two out of the picture, he says "we've run out of GPU companies."

That Intel and AMD aren't interested in making their GPUs compatible with the existing CUDA ecosystem is telling. It seems they would rather go head-to-head against CUDA with oneAPI and ROCm, which are newer and less developed but boast the benefit of being open-source. CUDA is still by far the more popular solution for professional and datacenter graphics software, and it's not clear if that's going to change any time soon, especially if Nvidia's GPUs continue to lead Intel's and AMD's in features and performance.

Matthew Connatser

Matthew Connatser is a freelancing writer for Tom's Hardware US. He writes articles about CPUs, GPUs, SSDs, and computers in general.

  • hotaru251
    we've run out of gpu companies

    i mean... China's got someone if he really wanted to try to keep it going.
    Reply
  • bit_user
    That Intel and AMD aren't interested in making their GPUs compatible with the existing CUDA ecosystem is telling. It seems they would rather go head-to-head with CUDA with oneAPI and ROCm
    ROCm is not equivalent either to oneAPI or CUDA. AMD has a CUDA-like API, called HIP.

    Both AMD and Intel also have porting tools, which facilitate developers doing ports of codebases from CUDA to either oneAPI or HIP. Both companies would rather you switch from CUDA to these competing APIs, rather than leaving your code reliant on CUDA and merely using a compatibility-shim to run it on their GPUs.

    That is why they both turned him down, I'm sure. It's not that they don't see the importance of CUDA, but that their strategy involves trying to peel people away from CUDA, rather than building essentially second-tier implementations of Nvidia's CUDA-native GPUs.
    Reply
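To illustrate bit_user's point about porting tools: AMD's HIPIFY and Intel's SYCLomatic work largely by mechanical renaming, because HIP mirrors the CUDA API almost symbol for symbol. Here is a minimal Python sketch of that idea — it is not AMD's actual tool, and the rename table below is a tiny hypothetical subset of the real one:

```python
import re

# Hypothetical subset of the CUDA-to-HIP rename table; the real porting
# tools cover thousands of API symbols and also rewrite kernel launches.
RENAMES = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    """Translate a CUDA snippet to HIP by renaming API symbols.

    Longest names are matched first so that e.g. cudaMemcpyHostToDevice
    is not partially rewritten by the shorter cudaMemcpy rule.
    """
    pattern = re.compile(
        "|".join(re.escape(name) for name in sorted(RENAMES, key=len, reverse=True))
    )
    return pattern.sub(lambda m: RENAMES[m.group(0)], source)

cuda_src = "cudaMalloc(&buf, n); cudaMemcpy(buf, h, n, cudaMemcpyHostToDevice); cudaFree(buf);"
print(hipify(cuda_src))
# → hipMalloc(&buf, n); hipMemcpy(buf, h, n, hipMemcpyHostToDevice); hipFree(buf);
```

The near one-to-one mapping is exactly why a port is usually cheap — and, as bit_user argues, why AMD and Intel would rather you port once than rely on a compatibility shim like ZLUDA.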
  • CmdrShepard
    Nobody sane (rich enough?) is switching away from CUDA.

    It works well, support for it is ubiquitous, and pretty much all AI and ML pipelines are designed around it, not to mention OptiX SDK and Iray.
    Reply
  • Shirou
    The overall vibe of this article is horrible.
    OSS runs on its contributors; by no means has the project been signaled as discontinued.
    Calling it dead in the water is a really distasteful take, especially given the developer's efforts.
    Reply
  • bit_user
    BTW, I think a better revenue strategy for the developer would've been to retain both oneAPI and ROCm/HIP support and see if he could find any CUDA users interested in sponsoring further development (i.e. features or optimizations).

    Also, it's pretty clear that if AMD had wanted a CUDA clone, they could've made HIP be exactly that. It's already a very near-clone, which I think they differentiated just to avoid claims of copyright infringement.
    Reply
  • DiegoSynth
    bit_user said:
    ROCm is not equivalent either to oneAPI or CUDA. AMD has a CUDA-like API, called HIP.

    Both AMD and Intel also have porting tools, which facilitate developers doing ports of codebases from CUDA to either oneAPI or HIP. Both companies would rather you switch from CUDA to these competing APIs, rather than leaving your code reliant on CUDA and merely using a compatibility-shim to run it on their GPUs.

    That is why they both turned him down, I'm sure. It's not that they don't see the importance of CUDA, but that their strategy involves trying to peel people away from CUDA, rather than building essentially second-tier implementations of Nvidia's CUDA-native GPUs.
    BUT it doesn't matter to end users. At the end of the day, what we need is an answer to this question: if I mount an AMD GPU on my motherboard, will I be able to run software (3D rendering or whatever) that uses CUDA code?
    If the answer is "no", any further explanation may only be of the companies' interest.
    Reply
  • bit_user
    DiegoSynth said:
    BUT it doesn't matter to end users. At the end of the day, what we need is an answer to this question: if I mount an AMD GPU on my motherboard, will I be able to run software (3D rendering or whatever) that uses CUDA code?
    If the answer is "no", any further explanation may only be of the companies' interest.
    Yes, I get it. And that's why I think he could probably find people & companies willing to fund further development on it.

    As I said, I think the main reason AMD and Intel haven't (quite) cloned CUDA is out of fear of copyright infringement. A secondary reason is probably that they're hoping to peel away some software and get it to use HIP or oneAPI, instead. I doubt the latter strategy will be very successful.
    Reply