Ryzen 7 5800X3D Beats Ryzen 7 5800X By 9% In Geekbench 5

Ryzen 7 5800X3D (Image credit: AMD)

The Ryzen 7 5800X3D will hit the market on April 20 at $449. Equipped with AMD's 3D V-Cache technology, the Ryzen 7 5800X3D will contend with the best CPUs for gaming.

AMD has promised that the Ryzen 7 5800X3D offers a 15% gaming uplift over its current Ryzen 9 5900X. It's a pretty big claim, considering that the Ryzen 9 5900X has four extra Zen 3 cores and higher clock speeds. It remains to be seen whether 3D V-Cache can provide such a substantial performance increase. However, hardware detective Benchleaks has uncovered two Ryzen 7 5800X3D benchmarks that offer us a small preview of what the L3 cache-heavy chip can do.

The Ryzen 7 5800X3D has the same eight-core, 16-thread configuration as the regular Ryzen 7 5800X, but it packs 64MB more L3 cache thanks to the 3D V-Cache design. Although the Ryzen 7 5800X3D keeps the same 105W TDP, the processor has a 400 MHz lower base clock and a 200 MHz lower boost clock than the Ryzen 7 5800X. Robert Hallock, director of technical marketing at AMD, has already confirmed that the Ryzen 7 5800X3D ships with a lower voltage limit of 1.3V to 1.35V, whereas AMD's other Ryzen 5000 (Vermeer) parts operate between 1.45V and 1.5V. The design limits the Ryzen 7 5800X3D's clock speeds and contributes to the lack of overclocking support.

AMD Ryzen 7 5800X3D Benchmarks

Processor          Single-Core Score    Multi-Core Score
Core i7-12700K     1,898                13,888
Ryzen 9 5900X      1,671                14,006
Ryzen 7 5800X      1,671                10,333
Ryzen 7 5800X3D    1,633                11,250

Scores for the Core i7-12700K, Ryzen 9 5900X, and Ryzen 7 5800X are from Geekbench 5's processor database.

The Ryzen 7 5800X3D's single-core performance didn't come as a shocker. The chip runs at lower clock speeds, which explains why the Ryzen 7 5800X delivered up to 2.3% higher single-core performance than the Ryzen 7 5800X3D. It also follows that the Ryzen 7 5800X3D wouldn't beat the Ryzen 9 5900X or the Core i7-12700K, the latter often considered the direct rival to the Ryzen 7 5800X.

Switching over to the multi-core results, we saw that the Ryzen 7 5800X3D outpaced the Ryzen 7 5800X by 8.9%. However, the soon-to-be-released Ryzen part was still no match for the Ryzen 9 5900X or the Core i7-12700K, which led it by roughly 23% to 25%.
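For readers who want to check the math, here is a quick Python sketch that recomputes those margins from the Geekbench 5 scores in the table above (the scores come from the table; the percentage math is only our own arithmetic):

    # Geekbench 5 scores from the table above: (single-core, multi-core)
    scores = {
        "Core i7-12700K":  (1898, 13888),
        "Ryzen 9 5900X":   (1671, 14006),
        "Ryzen 7 5800X":   (1671, 10333),
        "Ryzen 7 5800X3D": (1633, 11250),
    }

    def lead(a, b, idx):
        # Percentage by which chip a outscores chip b (idx 0 = single-core, 1 = multi-core)
        return (scores[a][idx] - scores[b][idx]) / scores[b][idx] * 100

    print(f"5800X over 5800X3D, single-core: {lead('Ryzen 7 5800X', 'Ryzen 7 5800X3D', 0):.1f}%")   # ~2.3%
    print(f"5800X3D over 5800X, multi-core:  {lead('Ryzen 7 5800X3D', 'Ryzen 7 5800X', 1):.1f}%")   # ~8.9%
    print(f"5900X over 5800X3D, multi-core:  {lead('Ryzen 9 5900X', 'Ryzen 7 5800X3D', 1):.1f}%")   # ~24.5%
    print(f"12700K over 5800X3D, multi-core: {lead('Core i7-12700K', 'Ryzen 7 5800X3D', 1):.1f}%")  # ~23.4%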

The Ryzen 7 5800X3D's forte is gaming, according to AMD. The chipmaker estimates that the Ryzen 7 5800X3D is, on average, 15% faster than the Ryzen 9 5900X and 7% faster than the Core i7-12700K. The question is whether it'll be worth it.

With the recent price cuts on Ryzen 5000, the Ryzen 9 5900X sells for $448.98 (cheaper if you live near a Micro Center). Meanwhile, the Core i7-12700K retails for $384.98. So if the Ryzen 7 5800X3D ($449) delivers, consumers can get a superior gaming chip at the same price as a Ryzen 9 5900X. The significant tradeoff is that they'll lose out on productivity performance since not even AMD's 3D V-Cache can compensate for the lower core count on the Ryzen 7 5800X3D. 

Next to the Core i7-12700K, the Ryzen 7 5800X3D looks like an even worse deal: consumers would have to pay 17% more for a modest 7% increase in gaming performance. Nonetheless, we're less than a month away from the Ryzen 7 5800X3D's launch, so we should keep an open mind until the reviews arrive.
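As a rough sanity check on that price-to-performance argument, here is a small sketch using the street prices quoted above and AMD's own 7% gaming estimate (prices move constantly, so treat the numbers as illustrative):

    # Street prices quoted above; the 7% uplift is AMD's estimate, not an independent benchmark
    price_12700k = 384.98
    price_5800x3d = 449.00
    claimed_gaming_uplift = 0.07

    price_premium = (price_5800x3d - price_12700k) / price_12700k
    print(f"Price premium over the Core i7-12700K: {price_premium:.0%}")      # ~17%
    print(f"AMD's claimed gaming uplift over the 12700K: {claimed_gaming_uplift:.0%}")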

Zhiye Liu
News Editor and Memory Reviewer

Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • hotaru.hino
    For high-performance gaming, though, single-core performance tends to be where it's at, and a weaker showing here doesn't lead me to believe it will be as good as AMD claims, except maybe in well-implemented DX12/Vulkan games running on a Radeon GPU.
    Reply
  • spongiemaster
    So why does the cache help in the multicore test but not the single core one?
    Reply
  • Historical Fidelity
    spongiemaster said:
    So why does the cache help in the multicore test but not the single core one?
    My guess would be the ability of the larger L3 cache to hold more of the data that all the cores need. With a single core, only the current job's data is stored in the cache; when that job is done, the cache is refreshed with the next single-core job's data, and so on. Since the multi-core test runs a different job on each core simultaneously, having more L3 allows more of the data each core needs for its respective job to stay in cache, which means fewer cache misses going out to RAM, where access latency is roughly 6-7x higher than cache. TL;DR: a big enough L3 allows all cores to be fed data at low latency on the 5800X3D, versus resorting to high-latency RAM when the 5800X's L3 cache is full. (See the sketch after this post.)
    Reply
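To put some hypothetical numbers on the effect described above, here is a minimal average-memory-access-time (AMAT) sketch in Python. The latencies and miss rates are illustrative guesses, not measurements of either chip:

    # AMAT model: hit_time + miss_rate * miss_penalty (all numbers are illustrative guesses)
    L3_HIT_NS = 11.0        # rough L3 hit latency
    RAM_PENALTY_NS = 70.0   # rough extra cost of going out to DRAM

    def amat(l3_miss_rate):
        return L3_HIT_NS + l3_miss_rate * RAM_PENALTY_NS

    # A 16-thread run overflows a 32MB L3 more often than a 96MB one,
    # so assume a higher miss rate for the smaller cache.
    print(f"32MB L3, 30% miss rate: {amat(0.30):.1f} ns per access")
    print(f"96MB L3, 10% miss rate: {amat(0.10):.1f} ns per access")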
  • hotaru.hino
    spongiemaster said:
    So why does the cache help in the multicore test but not the single core one?
    We can look back at AnandTech's deep dive on Zen 3 to get answers for this.
    It's important to note that the L3 cache is a victim cache of L2. Basically, when L2 gets full, the oldest thing gets kicked out into L3. Also, as a result, L3 cannot prefetch data.
    Zen 3's increased cache came at the cost of higher latency and lower bandwidth per core. It was meant more to service multi-core workloads, since it allows every core in the CCX to have the same latency when accessing each other's data.
    They made the observation that the L2 TLB is only 2K entries long, with each entry covering 4KiB. This means that, per core, the TLB can only cover 8MiB of L3 cache. So a single core has issues even without the extra cache from V-Cache. There's also another issue: you can only have so much cache before you spend more time looking through the cache than it would've taken to fetch the data from RAM.

    For example, if we look at this chart from AnandTech's article:
    https://images.anandtech.com/doci/16214/lat-5950.png (note the y-axis is logarithmic)

    If we assume the latency is linear after 16MiB, which appears to be 15ns in the worst case, and the L3 cache gets full, you're now looking at 90ns just to look through the whole thing. It would've been faster to go look in RAM at that point. (See the sketch after this post for the arithmetic.)
    Reply
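Here is a short sketch of the arithmetic behind the two figures in the post above (the 8MiB TLB coverage and the 90ns worst case). The linear extrapolation is the poster's assumption, not a measured value:

    # L2 TLB coverage: 2K entries, each mapping a 4KiB page
    tlb_entries = 2048
    page_size_kib = 4
    coverage_mib = tlb_entries * page_size_kib / 1024
    print(f"L2 TLB coverage per core: {coverage_mib:.0f} MiB")              # 8 MiB

    # Naive linear extrapolation of L3 latency (the poster's assumption):
    # ~15 ns at 16 MiB, scaled up to the 96 MiB of a V-Cache-equipped L3
    latency_at_16mib_ns = 15
    v_cache_l3_mib = 96
    extrapolated_ns = latency_at_16mib_ns * v_cache_l3_mib / 16
    print(f"Extrapolated worst-case L3 latency: {extrapolated_ns:.0f} ns")  # 90 ns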
  • Historical Fidelity
    hotaru.hino said:
    We can look back at AnandTech's deep dive on Zen 3 to get answers for this.
    It's important to note that the L3 cache is a victim cache of L2. Basically, when L2 gets full, the oldest thing gets kicked out into L3. Also, as a result, L3 cannot prefetch data.
    Zen 3's increased cache came at the cost of higher latency and lower bandwidth per core. It was meant more to service multi-core workloads, since it allows every core in the CCX to have the same latency when accessing each other's data.
    They made the observation that the L2 TLB is only 2K entries long, with each entry covering 4KiB. This means that, per core, the TLB can only cover 8MiB of L3 cache. So a single core has issues even without the extra cache from V-Cache. There's also another issue: you can only have so much cache before you spend more time looking through the cache than it would've taken to fetch the data from RAM.

    For example, if we look at this chart from AnandTech's article:
    https://images.anandtech.com/doci/16214/lat-5950.png (note the y-axis is logarithmic)

    If we assume the latency is linear after 16MiB, which appears to be 15ns in the worst case, and the L3 cache gets full, you're now looking at 90ns just to look through the whole thing. It would've been faster to go look in RAM at that point.
    Yes, you make valid points and you are clearly well versed in microarchitecture, but I would like to correct a few things you bring up that might help explain why a bigger L3 helps in certain CPU workloads. First, the RAM latencies you are referencing (less than 90ns) are the best-case scenario, i.e. the data is requested from an open, active row and column that is prepped to be read. This is almost never the case. AIDA64 and other latency and bandwidth benchmarks always report best-case latency for cache and RAM, so you have to treat the two as equivalent in how their respective latencies react to different data locations; i.e. my Ryzen 9 5950X's L3 has a 10.4ns latency according to AIDA64, but as the graph you provided shows, the worst case is closer to 90ns. The same goes for RAM: my CAS 14 3800MHz RAM has an AIDA64 latency of 60.2ns, but the worst case is closer to 1µs or more due to the row precharge, access, read, etc. needed to find and transfer the data, plus a tRFC refresh cycle that can halt the fetch in its tracks and force it to restart.

    Second, a victim cache (unlike an inclusive cache) is simply a term for an overflow cache. You are correct that the oldest data in L1/L2 gets booted to L3, but just as important is that if a job's data is too big for L1/L2 (which is often the case), the L3 acts as a slower L2 extension, keeping more of the data needed for the job at hand from having to be continually fetched from the much slower RAM. Without that extra capacity, data gets evicted from cache through disuse only to be re-fetched from RAM a microsecond or so later, because many workloads re-use the same data multiple times. (Obviously, as you said, if a data value is popular it will stay in L2, but if it's disused for more than a couple of microseconds it is booted into L3 and eventually deleted as L2 keeps pushing disused data into L3; having more L3 prolongs that data's life in cache in case it's called upon again, which happens more often than you'd think.) Thus, as I surmised in my answer, having more L3 cache helps multi-core workloads by making RAM accesses rarer while feeding the cores a more consistent flow of data. That is why the single-core scores are very similar between the two Ryzen 5800-series CPUs: the data for one core's job is small enough to fit in the 5800X's L1/L2/L3, but the caches are not big enough to hold all the data needed to drive 16 independent jobs at the same time. The 5800X3D has enough L3 to fit more of Geekbench's 16-thread workload and thus limits the number of RAM accesses. That is also why AMD claims the V-Cache improves gaming performance despite the lower clock frequency: if a game thread can be kept fed with low-latency cache accesses instead of high-latency RAM accesses, the CPU core wastes fewer clock cycles waiting for data and gets more done per cycle on average. (See the toy cache sketch after this post.)

    Anyway, I hope you don't take my rebuttal the wrong way. You do make valid points, but microarchitectures are very complex, and so their behaviors are complex as well; unless you study architectures for a living, it's hard to gain detailed knowledge.
    Reply
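To make the overflow-cache argument above concrete, here is a toy Python sketch. It models a fully associative LRU cache, which is a simplification (real L3 caches are set-associative and the workload here is made up), but it shows how a larger last-level cache turns repeat accesses from many cores into hits instead of trips to RAM:

    from collections import OrderedDict

    class LRUCache:
        # Toy fully associative LRU cache; capacity counted in cache lines.
        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = OrderedDict()
            self.hits = self.misses = 0

        def access(self, addr):
            if addr in self.lines:
                self.lines.move_to_end(addr)
                self.hits += 1
            else:
                self.misses += 1
                self.lines[addr] = True
                if len(self.lines) > self.capacity:
                    self.lines.popitem(last=False)  # evict the least recently used line

    def run(capacity):
        # Eight "cores" each cycling over their own six-line working set (48 lines total)
        cache = LRUCache(capacity)
        for _ in range(100):
            for core in range(8):
                for line in range(6):
                    cache.access((core, line))
        return cache.hits / (cache.hits + cache.misses)

    print(f"Small L3 (32 lines): {run(32):.0%} hit rate")   # working set doesn't fit -> constant misses
    print(f"Large L3 (96 lines): {run(96):.0%} hit rate")   # working set fits -> mostly hits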
  • watzupken
    Actually, the results are not surprising. In fact, I feel AMD should not even have bothered releasing a stopgap solution here because, in my opinion, it offers too little performance uplift and too much of a price increase for most users. While it is true that Intel has regained the performance crown, the most important thing is price. Zen 3 may be older, but if the price is right, it is still a viable option. But for USD 449, which is basically the launch price of the R7 5800X, the performance uplift is uninteresting/unexciting, so I doubt they will see any improvement in sales, nor will they be able to reclaim the performance crown.
    Reply
  • salgado18
    watzupken said:
    Actually, the results are not surprising. In fact, I feel AMD should not even have bothered releasing a stopgap solution here because, in my opinion, it offers too little performance uplift and too much of a price increase for most users. While it is true that Intel has regained the performance crown, the most important thing is price. Zen 3 may be older, but if the price is right, it is still a viable option. But for USD 449, which is basically the launch price of the R7 5800X, the performance uplift is uninteresting/unexciting, so I doubt they will see any improvement in sales, nor will they be able to reclaim the performance crown.
    I don't think it is a stop gap, but more of a prototype test. If this works well in real world use and with consumers, the tech can be used in future processors, with lowered prices and better performance. If it doesn't, this will be the first and last 3D V-Cache CPU.
    Reply
  • JamesJones44
    Probably should have just cancelled this thing after the initial engineering samples didn't pan out. From these early benchmarks, it doesn't look like this chip fits in anywhere, especially since it can't be overclocked at all.
    Reply
  • hotaru.hino
    salgado18 said:
    I don't think it is a stop gap, but more of a prototype test. If this works well in real world use and with consumers, the tech can be used in future processors, with lowered prices and better performance. If it doesn't, this will be the first and last 3D V-Cache CPU.
    The problem I'm seeing is that AMD hyped this up to be some sort of game changer that would bring appreciable performance upgrades over the non-V-Cache CPUs. While I don't disagree that they wanted to throw this tech out on something to see how it works, I think they should've released it with less fanfare.
    Reply
  • gggplaya
    hotaru.hino said:
    The problem I'm seeing is that AMD hyped this up to be some sort of game changer that would bring appreciable performance upgrades over the non-V-Cache CPUs. While I don't disagree that they wanted to throw this tech out on something to see how it works, I think they should've released it with less fanfare.

    If you can get a 9-15% IPC gain without an architectural change, then it's a win for consumers. If they put 3D cache on future chips and solve the lower clock speed issues, then it's a winning feature. It's at the point where it's showing great promise, but it's not a game changer unless the CPU clocks can match current offerings.
    Reply