AMD's Zen 5 AVX-512 performance tested — Zen 5 performs significantly better than Zen 4 on Linux without consuming any more power
A far cry from Intel's original AVX-512 implementation
A full 512-bit AVX-512 pipeline was one of the most significant upgrades AMD implemented in its Zen 5 CPU architecture and, by extension, its Ryzen 9000 series CPUs. Phoronix published benchmark results of the new Ryzen 9 9950X in AVX-512 workloads to see how much more performant and efficient Zen 5's AVX-512 capabilities are compared to the prior-generation Ryzen 9 7950X.
Phoronix tested 90 applications and benchmarks in Linux, featuring the Ryzen 9 9950X and Ryzen 9 7950X. Both chips were benchmarked with AVX-512 on and off to see the performance and power efficiency differences each chip gains or loses with AVX-512 acceleration.
In the 90 benchmarks tested, the Ryzen 9 9950X saw an overall performance gain of 27% compared to the Ryzen 9 7950X with AVX turned on. Disabled, the margins were much narrower, with the 9950X outperforming the 7950X by 15%.
Comparing AVX-512 on vs. off on the same chip, the Ryzen 9 9950X gained an impressive 56% more performance on average across all benchmarks. The 7950X similarly saw a still-impressive 41% performance improvement with AVX-512 acceleration turned on vs. off.
Phoronix also saw excellent power efficiency with the new Ryzen 9 9950X. Despite having a full-blown AVX-512 pipeline, the Zen 5 chip only consumed a couple more watts at full load than with AVX-512 disabled. On average, the 9950X consumed 205.19 watts at its peak with AVX-512 acceleration turned on. Turned off, the chip consumed 203.94 watts.
The Ryzen 9 7950X actually fares better than the 9950X in this specific measurement: turning AVX-512 on or off on Zen 4 incurs virtually no peak wattage improvement or regression. However, the chip consumed noticeably more power overall than the 9950X.
The chip's excellent power efficiency in AVX-512 workloads showed up in frequency and CPU temperatures as well: utilizing AVX-512 made the 9950X run at slightly higher clock speeds and slightly lower temperatures. The 7950X saw virtually no change in CPU thermals or clock speeds.
Phoronix's testing confirms that AMD has a very performant and highly power-efficient AVX-512 implementation in Zen 5. Zen 5 is the first AMD architecture with a full AVX-512 pipeline featuring a 512-bit data path. Zen 4 was technically the first AMD architecture to support AVX-512, but AMD cleverly reused its existing 256-bit pipelines to run AVX-512 instructions, "double pumping" each 512-bit instruction through the narrower data path as a pair of 256-bit operations to achieve AVX-512 functionality.
Zen 5's AVX-512 is the best implementation of AVX-512 acceleration we have ever seen, a far cry from Intel's adaptation in its older architectures. When AVX-512 first came out on Intel CPUs, supported chips had to sacrifice a significant amount of clock speed when running AVX-512 instructions while also consuming a ton of power. Zen 5 is the complete opposite and can still sustain its peak boost clocks in AVX-512 workloads.
Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.
-
Sleepy_Hollowed
Wowza, this is nuts.
I might compile a Linux distribution and its apps with AVX-512 enabled now just for the hell of it if I get my hands on one of these. -
bit_user
The article said: "In the 90 benchmarks tested, the Ryzen 9 9950X saw an overall performance gain of 27% compared to the Ryzen 9 7950X with AVX turned on."

I think "AVX" might've been used as a shorthand, in this case. Just to be clear: what he did was test with AVX-512 vs. no AVX-512. In most or all of these benchmarks, there will be a fallback path involving AVX2. So, there's probably some form of AVX-family instructions being used in both cases. If not, the performance discrepancy would be even more stark!
Also, given no core count increase and virtually the same memory speeds and cache sizes, a 27% generational uplift is pretty massive! Intel has a CPU with 16 P-cores and AVX-512, called the Xeon W5-2465X. Its list price is about $1400 or about 2x what the R9 9950X costs and it features a TDP of 200W (PL2= 240W). Motherboard costs for those CPUs are probably also about 2x. I'd love to see the two face off on these AVX-512 benchmarks, because I'll bet the Ryzen stomps it at half the price and 3/4ths the power!
The article said: "the Zen 5 chip only consumed a couple more watts at full load than with AVX-512 disabled. On average, the 9950X consumed 205.19 watts at its peak with AVX-512 acceleration turned on. Turned off, the chip consumed 203.94 watts."

I think you read the chart in the way that made more sense. Paradoxically, the chart actually says each CPU used a little less power with AVX-512 on!
The temperature data is consistent with this: 2-3 degrees higher temps for AVX-512 off.
The article said: "utilizing AVX-512 made the 9950X run at slightly higher clock speeds"

I know that's what his frequency charts say, but I think that's not what really happened. If you look closely, he charts only the highest-frequency core, which is pretty silly for a test like this. Not all threads will be running AVX-512-heavy code paths, and not all cores will have 2 SMT threads running on them. The more lightly loaded cores will be the ones clocking higher.
I think the reason he just looked at peak frequency instead of the average was to exclude idle cores dragging down the mean. That might be fine for some benchmarks. However, in cases like this, such an approach really fails to provide the kind of insight we'd like to have.
In light of that, my conjecture for why power and temperatures decreased with AVX-512 on is that frequencies did indeed drop slightly for the cores running AVX-512-heavy code. That's the only sensible explanation I see for it. -
AkroZ
It's not like the average consumer needs AVX-512; only specialized applications (mostly professional) use it.
Video games can slightly use it for 3D (matrix transformations => 3D animations), and some neural networks (AI), which is most likely why AMD put effort into it. -
bit_user
AkroZ said: "It's not like the average consumer needs AVX-512; only specialized applications (mostly professional) use it."

I don't entirely disagree, but there have been some interesting applications of it to accelerate string processing.
https://github.com/simdjson/simdjson?tab=readme-ov-file#performance-results
However, that data appears to be just for AVX2 (uploaded March 2021; its filename suggests it was measured on Zen 2 EPYC). When optimizing with AVX-512, they managed to find another 60% performance improvement!
https://lemire.me/blog/2022/05/25/parsing-json-faster-with-intel-avx-512/
Something else about it that a lot of people might not know is that it's not restricted to processing 512-bit vectors. The same instructions will also operate on 128-bit and 256-bit operands. Furthermore, there are aspects of it which facilitate vectorization, such as a dedicated set of mask registers that perform per-lane predication. It also doubles the number of software-visible vector registers. Along with a few other details, these improvements make it a superior alternative to all of the prior vector ISA extensions, such as the SSE family and AVX/AVX2.
When you look at it that way, its benefits really needn't be limited to "professional" and scientific applications. However, broad adoption is unlikely now that Intel has withdrawn support for it on their mainstream CPUs. Instead, we'll have to wait a couple more years until AVX10 support rolls out and gains enough market share for developers to target. AVX10.1 is basically just window dressing on AVX-512, except it provides the option of implementations limited to just 128-bit and 256-bit operands, which Intel has said it intends to use in its client CPUs.
AkroZ said: "Video games can slightly use it for 3D (matrix transformations => 3D animations),"

For just matrix operations, homogeneous coordinates only need 128 bits (assuming fp32 coefficients). There are ways to use wider vectors than that, but mainly if you switch to a SIMD-oriented programming model.
AkroZ said: "and some neural networks (AI), which is most likely why AMD put effort into it."

CPU-based rendering and video compression also benefit from it, but perhaps you lump that in with "professional" applications. -
jeremyj_83

bit_user said: "I don't entirely disagree, but there have been some interesting applications of it to accelerate string processing. […]"

Do you know if a single AVX-512 pipe can take 2x 256-bit or 4x 128-bit instructions in parallel instead of a single 512-bit instruction? If so, it makes even more sense to put it into a CPU, as it would help with the other instructions as well. -
bit_user
jeremyj_83 said: "Do you know if a single AVX-512 pipe can take 2x 256-bit or 4x 128-bit instructions in parallel instead of a single 512-bit instruction?"

Depends on the implementation. In Zen 4, the data would suggest not. Otherwise, we should expect to see a smaller relative benefit from enabling AVX-512.
This writeup certainly arrives at that conclusion:
"the only way to utilize all this new hardware is to use 512-bit instructions. None of the 512-bit hardware can be split to service 256-bit instructions at twice the throughput. The upper-half of all the 512-bit hardware is "use it or lose it". The only way to use them is to use 512-bit instructions."
http://www.numberworld.org/blogs/2024_8_7_zen5_avx512_teardown/#512_bit_required
IMO, the most natural way to divide up 512-bit pipelines would be if they were actually 256-bit and most AVX-512 ops were implemented as a pair of 256-bit ops. That's similar to (if not exactly) what Zen 4 did, and it's perhaps why Zen 4's AVX2 performance was closer to that of Zen 5.
In Lion's Cove, Intel is apparently adding 33% more capacity to their set of 256-bit pipes:
Source: https://www.tomshardware.com/pc-components/cpus/intel-unwraps-lunar-lake-architecture-up-to-68-ipc-gain-for-e-cores-16-ipc-gain-for-p-cores/2