Doom Eternal Runs Faster on Old AMD GPUs Than Comparable Nvidia GPUs

(Image credit: Tom's Hardware)

You may feel like the best way to future-proof your PC is to throw in the most powerful Nvidia graphics card on the market, but nothing could be further from the truth. It's been known for a while that Nvidia typically leaves its older cards out of driver optimizations, whereas AMD continues to support its older GPUs for quite some time. This is proven once again by Doom Eternal's performance on older GPUs, as tested by the folks at Hardware Unboxed.

In its testing, Hardware Unboxed found that in pairs of AMD and Nvidia GPUs that were neck and neck at release, AMD's cards now deliver significantly higher performance than their Nvidia counterparts in Doom Eternal.

Take, for example, the GTX 780 and its main competitor from seven years ago, the almighty AMD Radeon R9 290. Back then, the two battled closely, with one or the other winning depending on the game, but in Doom Eternal we see something completely different. Whereas the GTX 780 manages an average of 45 FPS, the R9 290 pushes out 116 FPS on average -- more than twice as fast as Nvidia's card!

Kepler cards really appear to have suffered as they've aged, and though Nvidia closed the gap a little with its newer GPUs, there are still huge performance gaps between what used to be very competitive cards.

We would say there's no excuse for Nvidia to leave its older cards behind, given that it's a much bigger company than AMD. However, let's be honest: can we really expect eight-year-old GPUs to receive driver optimizations for the latest games?

Not really, and we shouldn't expect this from AMD, either. Nevertheless, if anyone from AMD is reading along: hats off to your driver dev team. We wholeheartedly appreciate your work.

Niels Broekhuijsen

Niels Broekhuijsen is a Contributing Writer for Tom's Hardware US. He reviews cases, water cooling and PC builds.

  • Rdslw
    nvidia really depends on per-game driver optimizations. It sets all the ratios between components to squeeze every last drop out of the GPU.
    amd has general profiles that are X:Y ratios, DONE.
    nvidia has per-game profiles, the ones you remember from driver updates saying "tomb raider supported," that set those values to, say, 0.89:1.2 because that game needs more Y and less X.
    The AMD GPU was a similar performer, but with NV optimizations, Nvidia's card was above by that ~20% margin.
    but they cannot support all the games for all the GPUs, so now older cards lack the optimization table.
    Reply
  • gg83
    Rdslw said:
    nvidia really depends on per-game driver optimizations. It sets all the ratios between components to squeeze every last drop out of the GPU.
    amd has general profiles that are X:Y ratios, DONE.
    nvidia has per-game profiles, the ones you remember from driver updates saying "tomb raider supported," that set those values to, say, 0.89:1.2 because that game needs more Y and less X.
    The AMD GPU was a similar performer, but with NV optimizations, Nvidia's card was above by that ~20% margin.
    but they cannot support all the games for all the GPUs, so now older cards lack the optimization table.
    So does AMD save time and money this way? Or is it hard to tell?
    Reply
  • salgado18
    Rdslw said:
    nvidia really depends on per-game driver optimizations. It sets all the ratios between components to squeeze every last drop out of the GPU.
    amd has general profiles that are X:Y ratios, DONE.
    nvidia has per-game profiles, the ones you remember from driver updates saying "tomb raider supported," that set those values to, say, 0.89:1.2 because that game needs more Y and less X.
    The AMD GPU was a similar performer, but with NV optimizations, Nvidia's card was above by that ~20% margin.
    but they cannot support all the games for all the GPUs, so now older cards lack the optimization table.
    What do you mean by X:Y ratio?
    Reply
  • jgraham11
    Fine Wine technology! AMD has been a great forward-looking company. Notice that more of its technology is released as open source, meaning others can build on it and make it better: Vulkan, the new adaptive sharpening filters, adaptive sync, etc.

    I've always appreciated AMD's tendency to supply better memory on its cards compared to Nvidia, and it's done so for years!
    Reply
  • jeremyj_83
    "We would say there's no excuse for Nvidia to leave its older cards out given that they're a much bigger company than AMD." There is a term for this: planned obsolescence. Nvidia forces you to replace your GPU far more often because it stops optimizing games for older tech. This is a common practice among the biggest companies in a sector; see Apple.
    Reply
  • larkspur
    This is a quote from Tom's performance review of Doom: Eternal:
    Doom Eternal needs 2942 MiB of VRAM to do 1080p with its low preset. It will still run on a 2GB card like the GTX 1050, even though it doesn't have the requisite 3GB of VRAM, but you can't even try bumping most settings higher. If you want to run at 1080p medium, you'll need a card with 4GB or more VRAM (3502 MiB to be precise), while 1080p high also comes in just under the 4GB barrier at 4078 MiB. Ultra needs 5230 MiB at 1080p, 5437 MiB at 1440p and 6025 MiB at 4K—so at least 6GB of VRAM. Nightmare pushes just beyond 6GB, to 6254 MiB, and ultra nightmare needs 6766 MiB at 1080p—an 8GB GPU will suffice in either case, at resolutions up to 4K.

    Since Doom: Eternal has proven to need plenty of VRAM, I would think the old GTX 780 with its paltry 3GB is handicapped. The R9 290 has 4GB and is clearly performing better in this game. Driver optimizations play a factor, but less than 4GB of VRAM in today's demanding games can really be a handicap.
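    The VRAM tiers in that quoted review can be boiled down to a quick arithmetic check (a minimal sketch: the MiB figures come straight from the quote, but the helper itself is hypothetical, and real-world behavior also depends on texture streaming and driver memory management):

    ```python
    # Doom Eternal VRAM requirements at 1080p, per the quoted review (MiB).
    VRAM_NEEDED_MIB = {
        "low": 2942,
        "medium": 3502,
        "high": 4078,
        "ultra": 5230,
        "nightmare": 6254,
        "ultra nightmare": 6766,
    }

    def fits(card_vram_gib: float, preset: str) -> bool:
        """Return True if a card's VRAM (GiB) covers the preset's requirement."""
        return card_vram_gib * 1024 >= VRAM_NEEDED_MIB[preset]

    # A 3GB GTX 780 clears the low preset (3072 >= 2942) but not medium (3072 < 3502),
    # while the 4GB R9 290 just squeezes under the high preset's 4078 MiB.
    print(fits(3, "low"), fits(3, "medium"), fits(4, "high"))
    ```

    This lines up with the quote's conclusion that an 8GB card suffices even for ultra nightmare, while 2GB and 3GB cards are locked out of everything above low.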
    Reply
  • jeremyj_83
    larkspur said:
    This is a quote from Tom's performance review of Doom: Eternal:

    Since Doom: Eternal has proven to need plenty of VRAM, I would think the old GTX 780 with its paltry 3GB is handicapped. The R9 290 has 4GB and is clearly performing better in this game. Driver optimizations play a factor, but less than 4GB of VRAM in today's demanding games can really be a handicap.
    Using Tom's own benchmarks, at 1080p low the lowly 2GB 1050 pulls 54.6 FPS average / 40.8 FPS 99th percentile. That is already higher than the 780, which according to this article manages 45 FPS: "Whereas the GTX 780 manages an average of 45 FPS, the R9 290 pushes out 116 FPS on average." The optimization factor is still seen in Tom's own review between the R9 390 and the GTX 1060 6GB. In most games at 1080p the 1060 6GB outperforms the 390 (https://www.anandtech.com/bench/product/2303?vs=2301); however, there are some where the 390 is tied or a little ahead. We see in the GPU shootout that the 390 and 1060 are tied at 1080p, except at ultra for some reason, but at 1440p and 2160p the 390 is 10+% faster.
    Reply
  • King_V
    larkspur said:
    This is a quote from Tom's performance review of Doom: Eternal:

    Since Doom: Eternal has proven to need plenty of VRAM, I would think the old GTX 780 with its paltry 3GB is handicapped. The R9 290 has 4GB and is clearly performing better in this game. Driver optimizations play a factor, but less than 4GB of VRAM in today's demanding games can really be a handicap.

    True - but is the 3GB handicap so bad that it would be the primary cause of bringing the framerates to LESS than half of what the R9 290 manages?
    Reply
  • larkspur
    King_V said:
    True - but is the 3GB handicap so bad that it would be the primary cause of bringing the framerates to LESS than half of what the R9 290 manages?
    Without a doubt, driver optimizations are a major factor. This is how they actually get the game to work decently on the 2GB 1050. I just wanted to point out that Nvidia has traditionally released cards with less VRAM than their AMD competition, and this also causes them to reach obsolescence at an earlier date. The fact that various iterations of GCN have been used since the HD 7xxx days all the way through Vega also helps AMD provide longer driver support... we're talking 8 years of GCN iterations!
    Reply
  • jeremyj_83
    larkspur said:
    Without a doubt, driver optimizations are a major factor. This is how they actually get the game to work decently on the 2GB 1050. I just wanted to point out that Nvidia has traditionally released cards with less VRAM than their AMD competition, and this also causes them to reach obsolescence at an earlier date. The fact that various iterations of GCN have been used since the HD 7xxx days all the way through Vega also helps AMD provide longer driver support... we're talking 8 years of GCN iterations!
    8 years of GCN is nothing. CUDA was first used by Nvidia in 2007 with the G80 GPUs and is still used by all GeForce, Tesla, and Quadro GPUs.
    Reply