AMD ROCm Comes To Windows On Consumer GPUs

Radeon RX 6900 XT (Image credit: AMD)

AMD has shared two pieces of big news with the ROCm community. Not only is the ROCm SDK coming to Windows, but AMD has also extended support to the company's consumer Radeon products, which are among the best graphics cards. Of course, there are some small compromises, but mainstream Radeon graphics card owners can now experiment with AMD ROCm (5.6.0 Alpha), a software stack previously available only with professional graphics cards.

AMD introduced the Radeon Open Compute Ecosystem (ROCm) in 2016 as an open-source alternative to Nvidia's CUDA platform. ROCm supports AMD's CDNA and RDNA GPU architectures, but official support is limited to a select number of SKUs from AMD's Instinct and Radeon Pro lineups. Owners of other AMD graphics cards have gotten ROCm running, but often only to a limited extent.

From the Instinct portfolio, the Instinct MI250X, MI250, MI210, MI100, and MI50 feature full support. Meanwhile, only the Radeon Pro W6800 and Radeon Pro V620 make the cut from the Radeon Pro ranks. AMD has now broadened the list to include the Radeon RX 6900 XT, the Radeon RX 6600, and, surprisingly, the eight-year-old Radeon R9 Fury. However, there is a small catch: only the Radeon R9 Fury gets full software-level support from the ROCm platform, whereas the two RDNA 2 offerings get partial support. The Radeon RX 6900 XT is limited to the Heterogeneous-Compute Interface for Portability (HIP) SDK, while the Radeon RX 6600 only gets the HIP runtime.

GPU               | Architecture | SW Level    | LLVM Target | Linux     | Windows
Radeon RX 6900 XT | RDNA 2       | HIP SDK     | gfx1030     | Supported | Supported
Radeon RX 6600    | RDNA 2       | HIP Runtime | gfx1031     | Supported | Supported
Radeon R9 Fury    | Fiji         | Full        | gfx803      | Community | Unsupported
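
For context on what that support level means in practice, HIP code is deliberately CUDA-like. Below is a minimal, illustrative vector-add sketch (the kernel and variable names are our own, not from AMD's documentation) of the kind of program the HIP SDK builds with hipcc:

#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Illustrative kernel: adds two float arrays element by element.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
    // Same triple-chevron launch syntax as CUDA.
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0
    hipFree(da);
    hipFree(db);
    hipFree(dc);
    return 0;
}

Cards with HIP SDK support are meant to cover this kind of development workflow, while runtime-only cards are primarily intended to run existing HIP applications.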

AMD initially designed ROCm for Linux. There were workarounds to get ROCm running on Windows-based systems, such as Docker containers or the Windows Subsystem for Linux (WSL), but those approaches naturally carry a slight performance hit compared to running ROCm on a native Linux system. AMD has now brought ROCm to Windows, something users have been requesting for a long time. Sadly, only a few AMD SKUs are on the Windows support list.

None of AMD's Instinct accelerators supports ROCm on Windows. Only the Radeon Pro W6800, Radeon RX 6900 XT, and Radeon RX 6600 made the Windows support list. The Radeon R9 Fury is a special case: while it has full ROCm software support, the Fiji-based graphics card only works in Linux at a community support level. That means AMD doesn't enable the Radeon R9 Fury by default in its software distributions; users will have to enable the graphics card manually themselves.
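
For anyone who wants to verify that the HIP runtime actually sees their card on either OS, a short device-query sketch along these lines (error handling omitted for brevity) prints each detected GPU and its architecture string:

#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    hipGetDeviceCount(&count);
    printf("HIP devices found: %d\n", count);
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, i);
        // gcnArchName holds the LLVM target string, e.g. "gfx1030" on a Radeon RX 6900 XT.
        printf("Device %d: %s (%s)\n", i, prop.name, prop.gcnArchName);
    }
    return 0;
}

If a card doesn't show up here, the installed ROCm build likely doesn't enable it.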

It's great to see AMD widening the ROCm ecosystem to include consumer graphics cards. The chipmaker seems to be marching in the right direction, even if it's taking its sweet time getting there.

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • AndrewJacksonZA
    Too little, too late? I've been waiting for ROCm on Windows since launch - it's been a mess. I've been wanting to play around with it first on my RX470 and now on my RX6800XT.

    Intel's OneAPI seems to be the best way forward right now as far as open solutions go.
  • emike09
    AMD's got a looong way to go to catch up to CUDA. Good luck AMD.
  • bit_user
    Positive news! I hope the eventual goal is to support compute on all recent models, including RDNA1 and Vega iGPUs.

    They need to take a page from Nvidia's playbook and support compute up-and-down the entire product line. It has to work out-of-the-box, with the same ease of installation as graphics drivers. Only then can app developers realistically support compute on AMD GPUs.
  • bit_user
    AMD introduced Radeon Open Compute Ecosystem (ROCm) in 2016 as an open-source alternative to Nvidia's CUDA platform.
    CUDA is primarily an API. AMD has a clone called HIP (Heterogeneous Interface for Portability), which runs atop multiple different hardware/software platforms, including Nvidia GPUs and there's allegedly even a port to Intel's oneAPI. HIP only supports AMD GPUs atop ROCm, which is why ROCm support for consumer GPUs is important.

    AMD's wish is that people would use HIP, instead of CUDA. Then, apps would seamlessly run on both Nvidia and AMD's GPUs. There are other GPU Compute APIs, such as OpenCL and WebGPU, although they lack some of CUDA's advanced features and ecosystem.
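
    To make the "clone" point concrete, here's a toy sketch (not AMD's actual headers, just an illustration of the idea): on Nvidia hardware, HIP's runtime entry points are essentially thin wrappers over their CUDA twins, which is why the same source can target either vendor.

    // Toy illustration of HIP's "clone" relationship to CUDA; the real HIP
    // headers do this kind of mapping for you on the Nvidia platform.
    #include <cuda_runtime.h>
    #include <cstdio>

    #define hipMalloc             cudaMalloc
    #define hipFree               cudaFree
    #define hipMemcpy             cudaMemcpy
    #define hipMemcpyHostToDevice cudaMemcpyHostToDevice
    #define hipDeviceSynchronize  cudaDeviceSynchronize

    int main() {
        float* buf = nullptr;
        // These HIP-style calls compile and run unchanged against the CUDA runtime.
        hipMalloc(&buf, 1024 * sizeof(float));
        hipDeviceSynchronize();
        hipFree(buf);
        printf("HIP-style calls executed on the CUDA runtime.\n");
        return 0;
    }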
  • Kamen Rider Blade
    bit_user said:
    CUDA is primarily an API. AMD has a clone called HIP (Heterogeneous Interface for Portability), which runs atop multiple different hardware/software platforms, including Nvidia GPUs and there's allegedly even a port to Intel's oneAPI. HIP only supports AMD GPUs atop ROCm, which is why ROCm support for consumer GPUs is important.

    AMD's wish is that people would use HIP, instead of CUDA. Then, apps would seamlessly run on both Nvidia and AMD's GPUs. There are other GPU Compute APIs, such as OpenCL and WebGPU, although they lack some of CUDA's advanced features and ecosystem.
    I'd rather nobody use any API that is "Vendor Specific".

    That's why OpenCL & DirectCompute exists.

    If you're targeting Windows, you have DirectCompute.

    If you're targeting Open Source and portability, OpenCL.

    No matter how nice each vendor specific / proprietary API is, not having to be locked into a hardware vendor would be better IMO.
  • bit_user
    Kamen Rider Blade said:
    I'd rather nobody use any API that is "Vendor Specific".

    That's why OpenCL & DirectCompute exists.
    With DirectCompute, you're just trading GPU-specific for platform-specific. That's not much progress, IMO.

    Also, from what I can tell, DirectCompute is merely using compute shaders within Direct3D. It doesn't appear to be its own API. I'd speculate they're not much better or more capable than OpenGL compute shaders.

    You didn't mention Vulkan Compute, which is another whole can of worms. At least it's portable and probably more capable than compute shaders in either Direct3D or OpenGL. What it's not is suitable for scientific-grade or probably even financial-grade accuracy, like OpenCL.
  • Kamen Rider Blade
    bit_user said:
    With DirectCompute, you're just trading GPU-specific for platform-specific. That's not much progress, IMO.

    Also, from what I can tell, DirectCompute is merely using compute shaders within Direct3D. It doesn't appear to be its own API. I'd speculate they're not much better or more capable than OpenGL compute shaders.

    You didn't mention Vulkan Compute, which is another whole can of worms. At least it's portable and probably more capable than compute shaders in either Direct3D or OpenGL. What it's not is suitable for scientific-grade or probably even financial-grade accuracy, like OpenCL.
    There really isn't one Open-sourced/Platform Agnostic GP GPU Compute API that meets all those requirements, is there?
  • bit_user
    Kamen Rider Blade said:
    There really isn't one Open-sourced/Platform Agnostic GP GPU Compute API that meets all those requirements, is there?
    OpenCL has the precision and the potential, but big players like Nvidia and AMD no longer see it as central to their success in GPU Compute, the way they see D3D and Vulkan as essential to success in the gaming market. Intel is probably the biggest holdout in the OpenCL market. It forms the foundation of their oneAPI.

    One of the upsides I see from the Chinese GPU market is probably coalescing around OpenCL. We could suddenly see it re-invigorated. Or, maybe they'll turn their focus towards beefing up Vulkan Compute.

    Oh, and WebGPU is another standard to keep an eye on. It's the web community's latest attempt at GPU API for both graphics and compute workloads. Web no longer means slow - Web Assembly avoids the performance penalties associated with high-level languages like Javascript. And you can even run Web Assembly apps outside of a browser.
  • Kamen Rider Blade
    bit_user said:
    OpenCL has the precision and the potential, but big players like Nvidia and AMD no longer see it as central to their success in GPU Compute, the way they see D3D and Vulkan as essential to success in the gaming market. Intel is probably the biggest holdout in the OpenCL market. It forms the foundation of their oneAPI.

    One of the upsides I see from the Chinese GPU market is probably coalescing around OpenCL. We could suddenly see it re-invigorated. Or, maybe they'll turn their focus towards beefing up Vulkan Compute.

    Oh, and WebGPU is another standard to keep an eye on. It's the web community's latest attempt at GPU API for both graphics and compute workloads. Web no longer means slow - Web Assembly avoids the performance penalties associated with high-level languages like Javascript. And you can even run Web Assembly apps outside of a browser.
    Why does this feel like another XKCD moment?
  • bit_user
    Kamen Rider Blade said:
    Why does this feel like another XKCD moment?
    #927 !

    It's not, though. It just feels like it. Neither CUDA nor HIP are standards. Nor is Direct3D.

    WebGPU is a standard, but then it's meant to succeed WebGL and WebCL, probably not unlike how Vulkan succeeded OpenGL (and WebCL never really caught on). I've never used WebGL, but it sounds very closely-tied to OpenGL ES, and that's basically dead.

    I suppose Web Assembly is a bit like Java bytecode. I don't honestly know enough about either one to meaningfully compare them. Java really seems to have fallen out of favor, and hopefully Web Assembly has avoided the underlying reasons for that.