ROCm 6.3 adds several new features, including a Fortran compiler and SGLang
ROCm gains new capabilities aimed at enterprise customers
AMD has announced ROCm version 6.3, which brings many updates to the ROCm ecosystem. The latest iteration of the open-source driver stack features several additions, including SGLang, FlashAttention-2, and a Fortran compiler.
SGLang is a new runtime in ROCm 6.3 that purportedly improves latency, throughput, and resource utilization by optimizing cutting-edge generative AI models on AMD's homegrown Instinct GPUs. AMD claims up to 6x higher performance on large language model inference, and the runtime ships with pre-configured Docker containers that use Python to accelerate AI and multimodal workflows as well as scalable cloud backends.
FlashAttention-2 is the next iteration of FlashAttention, an algorithm that reduces memory usage and compute demands for Transformer AI models. FlashAttention-2 purportedly delivers up to a 3x speedup over the original for forward and backward passes, accelerating AI model training.
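The memory saving in FlashAttention comes from computing attention in tiles with an "online" softmax, so the full score matrix is never materialized. The pure-Python sketch below illustrates that idea for a single query vector; the function name, tile size, and plain-list math are our own illustrative choices, not ROCm's or FlashAttention's actual kernel, which also tiles over queries and runs fused on the GPU.

```python
import math

def attention_tiled(q, keys, values, tile=2):
    """Single-query attention computed tile-by-tile with a streaming
    (online) softmax: only running statistics are kept, never the
    full score vector. Illustrative sketch of the FlashAttention idea."""
    m = -math.inf                  # running maximum of scores (stability)
    denom = 0.0                    # running softmax denominator
    acc = [0.0] * len(values[0])   # running weighted sum of value rows
    for start in range(0, len(keys), tile):
        k_tile = keys[start:start + tile]
        v_tile = values[start:start + tile]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in k_tile]
        m_new = max(m, max(scores))
        # Rescale previous partial results to the new max.
        scale = math.exp(m - m_new) if m != -math.inf else 0.0
        denom *= scale
        acc = [a * scale for a in acc]
        for s, v in zip(scores, v_tile):
            w = math.exp(s - m_new)
            denom += w
            acc = [a + w * vi for a, vi in zip(acc, v)]
        m = m_new
    return [a / denom for a in acc]
```

Because each tile only updates the running max, denominator, and accumulator, peak memory scales with the tile size rather than the sequence length, which is the property that lets FlashAttention handle long contexts.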
AMD has added a Fortran compiler to ROCm 6.3, enabling users to run legacy Fortran-based applications on AMD's modern Instinct GPUs. The compiler features direct GPU offloading through OpenMP for scientific workloads, backward compatibility that allows developers to continue writing Fortran code for existing legacy applications, and simplified integration with HIP kernels and ROCm libraries.
Multi-node FFT support enables high-performance distributed FFT computations in ROCm 6.3. This feature purportedly simplifies multi-node scaling, reducing complexity for developers and enabling seamless scalability across massive datasets.
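The principle behind a distributed FFT is that a large transform decomposes into smaller independent transforms plus a combine step, so each node (or GPU) can compute its piece locally. The sketch below shows one decimation-in-time split in pure Python; the function names and two-"node" framing are our own illustration, and rocFFT's actual multi-node decomposition and communication scheme differ.

```python
import cmath

def dft(x):
    """Naive DFT, standing in for each node's local (GPU) FFT."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def fft_two_nodes(x):
    """One decimation-in-time step: even-indexed samples go to 'node 0'
    and odd-indexed samples to 'node 1', each computes a half-size
    transform independently, then a butterfly combine (the data
    exchange in a real distributed FFT) merges the halves."""
    n = len(x)
    even = dft(x[0::2])   # computed on node 0
    odd = dft(x[1::2])    # computed on node 1
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

Applied recursively, this split is the ordinary Cooley-Tukey FFT; applied once across machines, it is the basic shape of a multi-node FFT, where the combine step becomes inter-node communication.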
ROCm 6.3 introduces enhancements to the computer vision libraries rocDecode, rocJPEG, and rocAL, adding support for the AV1 codec, GPU-accelerated JPEG decoding, and improved audio augmentation.
ROCm is an open-source stack of software and drivers designed to run on AMD Instinct GPUs. The platform aims to provide features that enable or improve enterprise GPU-accelerated applications such as high-performance computing (HPC), AI/Machine Learning, communication, and more.
Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.
GenericUsername109: What about full support of consumer GPUs? Like what CUDA has offered since forever?
Kurt Lust: ROCm has had a Fortran compiler for a long time. So far it was based on "classic flang", an LLVM backend with a front end based on code donated by PGI (now part of NVIDIA). What has changed is that AMD is now also making its version of the new flang compiler available to those who want to try it out, as an alternative to classic flang, which it will ultimately replace. The new compiler offers OpenMP offload and hence more GPU support than the classic compiler.
Kurt Lust:
GenericUsername109 said: What about full support of consumer GPUs? Like what CUDA has offered since forever?
The officially supported list is rather short: basically some high-end RDNA3 cards and then some RDNA2 cards in the Pro series. One GCN5.1/gfx906 card is in deprecated mode; that architecture served as the basis for CDNA later on.
However, there is nothing that makes those RDNA3 or RDNA2 cards special, except that they have more compute units and more memory than other cards. So those other cards may also work, but they are not officially supported, partly because AMD considers their memory capacity a bit low. In fact, elsewhere in the ROCm documentation I found a reference based only on architectures, with 24 GB of GPU RAM recommended and 8 GB as a minimum.
The trouble for AMD is that its rendering cards (RDNA architecture) differ more from its compute cards (CDNA architecture) than is the case between NVIDIA's rendering and compute GPUs. So let's see what happens when UDNA comes out, likely later in 2026.