Nvidia to drop CUDA support for Maxwell, Pascal, and Volta GPUs with the next major Toolkit release

GTX 1080 Ti (Image credit: Nvidia)

The official release notes for Nvidia's CUDA 12.9 Toolkit explicitly indicate that the next major release will no longer support Maxwell, Pascal, and Volta-based GPUs. Note that this deprecation is limited to the compute side, as these GPUs will likely continue receiving regular GeForce drivers for the time being. That said, this is likely the last SDK version that can be used to develop CUDA applications targeting the aforementioned architectures.

While the previous release hinted at this change, Nvidia's stronger wording now serves as a definitive signal for developers to shift to more modern architectures. The CUDA 12.x series (and earlier) will still allow application development for these GPUs; the deprecation targets offline compilation and library support. Essentially, future CUDA compilers (nvcc) will no longer be able to generate machine code for these GPUs. In the same vein, upcoming versions of CUDA-accelerated libraries like cuBLAS, cuDNN, etc., will drop support for GPUs built on these architectures.
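To make the "offline compilation" part concrete, here is a minimal sketch (not taken from Nvidia's notes) of the kind of build that would stop working: a trivial CUDA kernel compiled for Pascal's compute capability 6.1 (the GTX 10-series). With a CUDA 12.x toolkit, a command like "nvcc -gencode arch=compute_61,code=sm_61 saxpy.cu -o saxpy" still produces a binary for those cards; once nvcc drops the sm_5x/sm_6x/sm_7x targets, that same invocation is expected to fail at compile time. The file name and launch parameters below are illustrative only.

    // saxpy.cu - minimal kernel used only to illustrate compilation targets
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory keeps the example short; cudaMallocManaged is a
        // standard CUDA runtime call.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch one thread per element in blocks of 256.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Binaries already built this way will keep running on the old cards; the change only affects building new ones with future toolkits.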

Nvidia has not specified an exact date for the upcoming major release (likely CUDA 13.x). Similarly, we aren't sure how many interim releases will follow in the 12.9.x branch. Either way, this is quite a significant change, as Nvidia is dropping three major architectures in one fell swoop. Volta's consumer counterpart Turing (RTX 20) is next in line, but it likely has a lot more to offer before it too hits the chopping block.

"Maxwell, Pascal, and Volta architectures are now feature-complete with no further enhancements planned. While CUDA Toolkit 12.x series will continue to support building applications for these architectures, offline compilation and library support will be removed in the next major CUDA Toolkit version release. Users should plan migration to newer architectures, as future toolkits will be unable to target Maxwell, Pascal, and Volta GPUs."

CUDA 12.9 Toolkit release notes

Nvidia's Maxwell architecture was introduced in early 2014 with the GTX 745, GTX 750, and GTX 750 Ti series, along with the GTX 800M series on mobile. Maxwell even found its way into the original Nintendo Switch's Tegra SoC. A refresh later in 2014, featuring the GM20X series dies, brought several enhancements with the GTX 900 series. Pascal was soon to follow in 2016, serving as the basis for the legendary GTX 1080 Ti, and powering the Quadro P-series for workstations (mobile and desktop) along with Nvidia's Tesla P4 accelerators.

Most consumers might not be familiar with the name Volta, but this architecture marked the debut of Nvidia's Tensor Cores in 2017. Fun fact: the Volta-based GV100 is Nvidia's second-largest chip at 815 mm², second only to the monstrous GA100 (Ampere) at 826 mm². Volta would serve as the stepping stone for Nvidia's strides into the AI acceleration market, followed by Turing, Ampere, Hopper, and now Blackwell, which have since grown the company's valuation to nearly $2.8 trillion.


Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • A Stoner
    It is hard to comprehend what this means in the grand scheme of things.

    Are these CUDA programs things that us nominal game users need to be concerned with or is this purely about programs that use the GPU for processing other things?

    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    Reply
  • ThatMouse
    A Stoner said:
    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
Ya I'm wondering too, such as MAME, HTPC, and server applications where you might need to do some light transcoding, and buying new hardware is not necessary for the light loads. I've had to ditch old hardware due to lack of driver support; it just wasn't worth the time.
    Reply
  • Mattzun
This is NOT a big deal in the grand scheme of things.

    CUDA is not used in games.

    Current versions of CUDA apps will continue to function on the older cards.

Most CUDA apps will eventually transition to the new toolkit.
When that happens, they will drop support for older cards in new releases of the program.
    Reply
  • bit_user
    A Stoner said:
    It is hard to comprehend what this means in the grand scheme of things.
    Nvidia typically maintains a couple releases of CUDA. Someone could still download an older release branch of CUDA and build apps that will run on older GPUs (so long as they don't require features only found on newer ones, like Tensor cores or Ray Tracing). Also, old releases of apps will still work, because they're built on an older CUDA release.

    A Stoner said:
    Are these CUDA programs things that us nominal game users need to be concerned with or is this purely about programs that use the GPU for processing other things?

    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    This is mainly about AI and other GPU compute apps. I think CUDA isn't used by most games.

    In general, Linux is really good at supporting legacy hardware. You can play OpenGL and even Direct3D games on some really old GPUs. However, those are standard APIs, while CUDA is something proprietary that Nvidia controls.
    Reply
  • bit_user
    ThatMouse said:
    Ya I'm wondering too, such as MAME, HTPC, and server applications where you might need to do some light transcoding,
Yeah, there are a few different ways to do GPU-based transcoding on Linux. Nvidia prefers you use their proprietary APIs, which I think do have CUDA dependencies. However, VDPAU and VAAPI are standard APIs that I expect wouldn't be affected by this change. So, whether or not it'll break your workflow (i.e. once the apps you mention transition to a newer CUDA version) probably depends on the details.
    Reply