RTX 5090D tested with nine-year-old Xeon CPU that cost $7 — it does surprisingly well in some games if you enable MFG

RTX 5090 Gallery Shot (Image credit: Nvidia)

A Chinese tech reviewer benchmarked Nvidia's all-new "for China" RTX 5090D against a variety of CPUs, from AMD's flagship Ryzen 7 9800X3D, one of the best CPUs for gaming, to Intel's Core Ultra 9 285K, and all the way down to a nine-year-old 14-core Xeon E5-2680 v4 based on the Broadwell architecture. Despite its age, the $7 chip was at least moderately competitive with modern CPUs when using DLSS frame generation, especially with DLSS 4 Multi Frame Generation (MFG).

The reviewer tested Counter-Strike 2, Marvel Rivals, and Cyberpunk 2077 on an assortment of CPUs paired with the RTX 5090D. The list of CPUs consists of the Core Ultra 9 285K, Core Ultra 7 265K, Core Ultra 5 245K, Core i7-14700KF, Core i5-14600KF, Core i5-12400F, Core i3-12100F, and Xeon E5-2680 v4. On the AMD side, the Ryzen 9 9900X, Ryzen 7 9800X3D, Ryzen 5 9600X, Ryzen 9 7900X, Ryzen 7 7800X3D, Ryzen 5 7500F, and Ryzen 5 5600 were tested. That's a wide range of processors extending back nearly a decade.

Counter-Strike 2 produced the worst results for the Xeon E5-2680 v4. At 4K maximum settings, the Xeon's average frame rate was 33% lower than that of the next-closest Core i3-12100F, never mind higher-performance chips like the Ryzen 7 9800X3D that were almost twice as fast. The Broadwell-based chip delivered an average frame rate of 168 FPS, with the Core i3-12100F at 248 FPS. The fastest CPUs topped out at up to 322 FPS — all without any frame generation. The 1% lows were even worse: the Xeon E5-2680 v4 netted just 72 FPS, while the fastest CPU landed at 163 FPS, 2.26 times as fast.

The Xeon chip's awful frame rate in CS2 shows the problem with pairing such a powerful GPU with an "archaic" processor. But this testing of the RTX 5090D also serves as the foundation for the other tests.

Marvel Rivals at 4K highest quality settings almost completely reverses the CS2 results. All the CPUs, from the Core i3-12100F to the Core Ultra 9 285K, delivered virtually identical frame rates of 111-115 FPS, revealing a bottleneck that isn't CPU-related. The Xeon E5-2680 v4 wasn't far behind, with an average frame rate of 101 FPS. That makes the fastest Ryzen 7 9800X3D only 14% quicker than the nine-year-old Xeon chip. But again, minimum FPS matters, and the Xeon only managed 65 FPS compared to 80–99 FPS on the other processors, making it up to 34% slower on minimum frame rates.
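For readers who want to sanity-check those percentages, here is a minimal sketch of the arithmetic in Python, using the rounded FPS figures quoted above; the helper names are purely illustrative, and small differences from the reviewer's exact percentages come down to rounding.

```python
# A quick sanity check of the percentage gaps quoted above, using the
# article's rounded FPS figures; minor drift from the reviewer's exact
# percentages is just rounding.

def pct_slower(slow_fps: float, fast_fps: float) -> float:
    """How much slower the first result is, as a share of the faster one."""
    return (1 - slow_fps / fast_fps) * 100

def pct_faster(fast_fps: float, slow_fps: float) -> float:
    """How much faster the first result is, relative to the slower one."""
    return (fast_fps / slow_fps - 1) * 100

def times_as_fast(fast_fps: float, slow_fps: float) -> float:
    """How many times as fast the first result is."""
    return fast_fps / slow_fps

# Counter-Strike 2, 4K max settings, average FPS
print(f"Xeon vs. i3-12100F: {pct_slower(168, 248):.0f}% slower")         # ~32-33%
print(f"Fastest CPU vs. Xeon: {times_as_fast(322, 168):.2f}x as fast")   # ~1.92x ("almost twice")

# Counter-Strike 2, 1% lows
print(f"Fastest CPU vs. Xeon: {times_as_fast(163, 72):.2f}x as fast")    # ~2.26x

# Marvel Rivals, 4K highest settings
print(f"9800X3D vs. Xeon, average: {pct_faster(115, 101):.0f}% quicker") # ~14%
print(f"Xeon vs. best, minimums: {pct_slower(65, 99):.0f}% slower")      # ~34%
```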

Aaron Klotz
Contributing Writer

Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.

  • -Fran-
    Well, this is their play.

    When they need to emulate x86 on their ARM CPUs, they will just say "hey, look; even emulated, it runs pretty darn well in our SoCs!".

    Not bad or good, but I'm pretty sure that's how it's going to play out.

    Regards.
  • P.Amini
    TL;DR What's the point?
  • OneMoreUser
    Pushing Fake Frames isn't doing well, it is faking.
  • Gururu
    I found this finding highly disturbing. It just kind of highlights the trickery.
  • Alvar "Miles" Udell
    4K isn't a CPU-limited resolution, so this shouldn't come as any surprise.
  • JTWrenn
    Interesting way to show some of the areas where CPU bottlenecks get stuck, but... without input lag testing, I feel like it doesn't tell the whole story. I wonder if the FPS was high but input lag was way off the charts?

    Also kind of makes me wonder how CPU-bottlenecked everything is right now.
  • Amdlova
    Intel without AVX-512? Better to buy a used 2697A v4 :) $35, good clocks and multithreading.
  • Lamarr the Strelok
    The i3-12100 specs aren't listed. Cores/threads? 168 FPS is "awful" for the Xeon in CS2, when that game barely uses any of its 14 cores? It's great for people to try getting as many FPS as possible, but this is way beyond what's needed for a fun time. Maybe pro gaming would make a difference, but anyone getting one of these can still game on it for some games.
  • Lamarr the Strelok
    OneMoreUser said:
    Pushing Fake Frames isn't doing well, it is faking.
    FSR 3.1 has been a lifesaver for me. I thought it was nonsense for already high-end cards, but it works great for me. Upscaling and FG are great for my low-end rig. It sometimes adds graphics artifacts in Stalker, but that's normal for a Stalker game.
    I used to like and laugh along with the "fake frames" comments, but no more. I call it frame gen now. I still hate ray tracing in general, though.
  • abufrejoval
    P.Amini said:
    TL;DR What's the point?
    First reaction: what's the point of running Raspberry Pis in a cluster?

    Or Doom inside a PDF?

    As a parent and occasional CS lecturer, I'd say a lot of it is about encouraging critical thinking and doing your own testing instead of just adding to a giant pile of opinions based on other opinions influenced by who knows who.

    It's also about acknowledging that the bottlenecks and barriers in gaming performance are not constant, but have shifted and moved significantly. And a lot of the current discussion is about seeking new escapes from non-square returns on linear improvements.

    For the longest time, Intel pushed top clocks as the main driver of the best gaming performance. And that wasn't totally wrong, especially since single-core CPUs were the norm for decades and single-threaded logic remains much simpler to implement.

    I bought big Xeons years ago, because they turned incredibly cheap when hyperscalers started dropping them. My first Haswell 18-core E5-2696 v3 was still €700, but at that time I needed a server for my home-lab and there was nothing else in that price range, especially with 44 PCIe lanes and 128GB RAM capacity.

    Those were the very same cores you could also find in their desktop cousins, but typically at lower turbo clocks, because they had to share their power budget with up to 21 others, and it made very little sense to run server loads at a few high clocks: you want to be very near the top of the CMOS knee for the optimum ratio of compute to electrical power.

    Of course, when you use them in a workstation with a mix of use cases, a different tradeoff makes a lot more sense, which is why I was happy to use the Chinese variants of these E5 Xeons, which gave you a bit more turbo headroom.

    I've upgraded it to a Broadwell E5-2696 v4 somewhat more recently, because that was €160 for the CPU, a bit of leftover thermal paste, and one hour for the swap job. I guess it's my equivalent of putting a chrome exhaust on a V8 truck or giving your older horse a new saddle, not strictly an economic decision, but not bad husbandry, either.

    Yet it also runs Windows 11 (IoT), and VMs run better, because Broadwell actually had some pretty nifty ISA extensions made for cloud use.

    Synthetic benchmarks put both chips at very similar performance and energy use to a Ryzen 7 5800X3D that used to sit next to it (and which later became a Ryzen 9 5950X), both with 128GB of ECC RAM.

    Progress during the eight years between that first Xeon and the Ryzen was measured mostly in per-core performance improving to the point where eight Zen cores could do the work of 18 or 22 Xeon cores, but the top scalar turbo performance of a Xeon core was also only around 50% of what a Zen 3 core can do. All-core turbo on the Xeons is much lower than on Zen; that's where the extra cores come in: essentially they give you 18 or 22 E-cores, in modern parlance, at 110 watts of actual max power use (150-watt TDP)... that's where modern E-cores will show their progress, probably 1/4 the power consumption at iso-performance.

    With the Xeon at around €5000 retail when new vs. €500 for Zen, these numbers are another nice metric for progress.

    But since IT experimentation is both my job and my hobby, I recently used it to test just how much of a difference there was between those two systems when running modern games on a modern GPU: equal multi-core power, but vastly different peak per-core performance. In other words: just how well had games adapted to an environment that, on consoles, had been giving them more but weaker cores for some years now (due to manufacturing price pressure)?
    And while I didn't do this exhaustively, my findings were very similar: modern game titles spread the load so much better that these older CPUs with extra cores are quite capable of running games at acceptable performance. And no, it doesn't have to be a 22-core part, even if that part just happens to have 55MB of L3 cache, which makes it almost a "3D" variant. It also has four memory channels...

    And that also explains why, two to four years ago, "gamer X99" mainboards were astonishingly popular on AliExpress: X99 and C612 are essentially the same hardware, and like those Xeons, it's quite likely these parts were given a second life serving gamers who couldn't afford leading-edge hardware and were rather well served by what they got.

    So the point: you may do better by not just jumping for the leading edge.
    But because that takes the means to explore, naive buyers may not be able to exploit it and will pay the price elsewhere.

    Did I mention (enough) that Windows 11 IoT runs quite well on these chips? Still nine years of support and very low risk of being Co-Piloted.