Intel's latest flagship 128-core Xeon CPU costs $17,800 — Granite Rapids sets a new high pricing watermark
$139 per core.
When Intel formally introduced its Xeon 6 6900-series 'Granite Rapids' on September 24, it didn't announce pricing, which was a bit surprising. However, after some prodding, the company has now added pricing to its Ark database. As it turns out, Intel's flagship Xeon 6980P processor with 128 high-performance cores costs $17,800, the highest pricing we've seen for a modern x86 CPU — significantly more than AMD's EPYC 'Genoa' 9654 offering with 96 cores, which costs $11,805.
Model | Price | Price Per Core | Cores/Threads | Base/Boost (GHz) | TDP | L3 Cache (MB) | cTDP (W) |
---|---|---|---|---|---|---|---|
Xeon 6980P (GNR) | $17,800 | $139 | 128 / 256 | 2.0 / 3.9 | 500W | 504 | - |
Xeon 6979P (GNR) | $15,750 | $131 | 120 / 240 | 2.1 / 3.9 | 500W | 504 | - |
EPYC Genoa 9654 | $11,805 | $123 | 96 / 192 | 2.4 / 3.7 | 360W | 384 | 320-400 |
Xeon 6972P (GNR) | $14,600 | $152 | 96 / 192 | 2.4 / 3.9 | 500W | 480 | - |
Xeon 6952P (GNR) | $11,400 | $119 | 96 / 192 | 2.1 / 3.9 | 400W | 480 | ? |
EPYC Genoa 9634 | $10,304 | $123 | 84 / 168 | 2.25 / 3.7 | 290W | 384 | 240-300 |
Xeon 6960P (GNR) | $13,750 | $191 | 72 / 144 | 2.7 / 3.9 | 500W | 432 | - |
Intel Xeon 8592+ (EMR) | $11,600 | $181 | 64 / 128 | 1.9 / 3.9 | 350W | 320 | - |
EPYC Genoa 9554 | $9,087 | $142 | 64 / 128 | 3.1 / 3.75 | 360W | 256 | 320-400 |
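The "Price Per Core" column above is simply the recommended list price divided by the physical core count, rounded to the nearest dollar. A minimal sketch of that derivation, using the prices and core counts from the table:

```python
# Price per core = recommended list price / physical core count, rounded to the nearest dollar.
# Prices and core counts are taken from the table above.
chips = {
    "Xeon 6980P": (17_800, 128),
    "Xeon 6979P": (15_750, 120),
    "Xeon 6972P": (14_600, 96),
    "Xeon 6952P": (11_400, 96),
    "Xeon 6960P": (13_750, 72),
    "Xeon 8592+": (11_600, 64),
    "EPYC 9654": (11_805, 96),
    "EPYC 9634": (10_304, 84),
    "EPYC 9554": (9_087, 64),
}

for model, (price, cores) in chips.items():
    print(f"{model}: ${price:,} / {cores} cores = ${price / cores:,.0f} per core")
```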
In fact, at $17,800, Intel's Xeon 6980P is the most expensive standard CPU launched in recent years. For years, Intel could not match AMD in core count and multi-threaded performance, so it refrained from giving its chips extreme price tags. Meanwhile, AMD needed to grab market share away from Intel, so while its EPYC processors were expensive, they were not that expensive.
Technically, Intel's 28-core Xeon Platinum 8280L (which supports up to 4.5TB of DDR4-2933 memory, an amount that even the Xeon 6980P cannot handle) launched at $17,906 in Q2 2019, but it received a price cut to $14,898 shortly thereafter.
Around the same time, Intel also released its 56-core Xeon Platinum 9282 'Cascade Lake-AP' CPU that got close to AMD's EPYC in terms of core count, but it was only available to select OEMs and came in a BGA package. So, it was not exactly a widely available model, and public pricing information was never provided. Yet, given that the regular 28-core Xeon Scalable 8280 was priced at $11,460 and the model 9282 was essentially two 8280 pieces of silicon in a single package, the Xeon Platinum 9282 was probably priced accordingly — it was likely more expensive than the Xeon 6980P for those server makers that used it.
In addition to the range-topping Xeon 6980P processor, there are four more Granite Rapids CPU models in the 6900-series range, and they are quite costly as well.
The 120-core Xeon 6979P has a recommended customer price of $15,750, which works out to $131 per core. More interesting is that Intel's 96-core Xeon 6972P carries a price tag of $14,600 ($152 per core), which is $2,795 more than AMD's 96-core EPYC 9654 ($123 per core). Moving down the ladder, there is the 72-core Xeon 6960P for $13,750 ($191 per core), which is again more expensive than AMD's EPYC 9654 ($123 per core) despite featuring significantly fewer cores.
The only Granite Rapids processor that is cheaper than the AMD EPYC 9654 is Intel's 96-core Xeon 6952P with relatively low base clocks. Indeed, given its specifications, this CPU looks very competitive.
However, we'll need to wait for AMD's upcoming Zen 5-based EPYC 'Turin' server chips, which are due next week, to assess Intel's competitive positioning.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
-
DS426 This is Intel's plan to slow down AMD's rapid market share gains in the x86 datacenter market?? -
JRStern I can never figure out from the press coverage just what is what. Exactly what applications work best on these high core count servers? Intel *could* always match core count, but there was no good reason to, too many cores just cause contention and blocking and cache overload and IO queues, not to mention core licensing issues.
But AMD went there more for marketing hype than any real benefit, and now Intel has been dragged into it too. Or at least that's how it looks to me.
Now, in SQL Server you might benefit from a bunch of cores (if you can afford the license, or Azure level), but the way it works is lots of small queries only need one core each, but some big queries can run a lot faster if they are free to grab 4 or 8 or 32 cores for a few seconds or minutes. So the optimal situation is to have a bunch of cores that sit idle 50-80% of the time and are only used for some big (and mostly sloppy) queries. But Microsoft's licenses used to require paying for them linearly - as if they were going to be used 100% of the time. So Microsoft suppressed demand for high core counts on servers from about Y2K until I'm not sure when - are they still doing that, or have they reintroduced a per-processor license that maxes out at 10 or 16 or something?
SMH -
bit_user
JRStern said: "Exactly what applications work best on these high core count servers?"
I think they're mostly used in the form of smaller virtual machines.
JRStern said: "But AMD went there more for marketing hype than any real benefit, and now Intel has been dragged into it too. Or at least that's how it looks to me."
This is absurd. AMD wouldn't get very far ahead of customer demand. Customers like more cores per rack unit, since it's more space-efficient and also more energy-efficient.
JRStern said: "in SQL Server you might benefit from a bunch of cores (if you can afford the license, or Azure level), but the way it works is lots of small queries only need one core each, but some big queries can run a lot faster if they are free to grab 4 or 8 or 32 cores for a few seconds or minutes. So the optimal situation is to have a bunch of cores that sit idle 50-80% of the time and are only used for some big (and mostly sloppy) queries. But Microsoft's licenses used to require paying for them linearly - as if they were going to be used 100% of the time."
I think you've just answered your own question. People will provision smaller VMs. If they're idle most of the time, then they can be oversubscribed, which enables the datacenter operator to reap even more benefit and/or customers to save more money vs. running instances on bare hardware. -
Kamen Rider Blade
JRStern said: "I can never figure out from the press coverage just what is what. Exactly what applications work best on these high core count servers? Intel *could* always match core count, but there was no good reason to, too many cores just cause contention and blocking and cache overload and IO queues, not to mention core licensing issues.
But AMD went there more for marketing hype than any real benefit, and now Intel has been dragged into it too. Or at least that's how it looks to me.
Now, in SQL Server you might benefit from a bunch of cores (if you can afford the license, or Azure level), but the way it works is lots of small queries only need one core each, but some big queries can run a lot faster if they are free to grab 4 or 8 or 32 cores for a few seconds or minutes. So the optimal situation is to have a bunch of cores that sit idle 50-80% of the time and are only used for some big (and mostly sloppy) queries. But Microsoft's licenses used to require paying for them linearly - as if they were going to be used 100% of the time. So Microsoft suppressed demand for high core counts on servers from about Y2K until I'm not sure when - are they still doing that, or have they reintroduced a per-processor license that maxes out at 10 or 16 or something?
SMH"
We do need new regulations on CPU licensing of software. It's asinine to license on a core count basis in the era of ever-increasing core counts.
There should be new Laws/Rules/Regulations with a Federal Ban on Software Licensing based on "Core Count" or Instance limits within a Single Physical CPU.
Licensing should be based on a "Per Single Physical CPU" basis only, with no limits on the number of instances per Individual Physical CPU.
That would simplify things dramatically.
You want to run the software on more machines/sockets, then you pay for those as needed. -
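To put rough numbers on the licensing argument in the comments above, here is a minimal sketch comparing per-core licensing against a flat per-socket license as core counts grow. The dollar figures are purely hypothetical placeholders, not any vendor's actual pricing:

```python
# Hypothetical license prices -- illustrative only, not any vendor's real pricing.
PER_CORE_PRICE = 350        # assumed cost per licensed core
PER_SOCKET_PRICE = 20_000   # assumed flat cost per physical CPU socket

def per_core_license(cores: int) -> int:
    """Per-core licensing scales linearly with the core count."""
    return cores * PER_CORE_PRICE

def per_socket_license(sockets: int) -> int:
    """Per-socket licensing is flat, regardless of how many cores each CPU has."""
    return sockets * PER_SOCKET_PRICE

for cores in (16, 64, 128):
    print(f"{cores:3d} cores, 1 socket: per-core = ${per_core_license(cores):,}, "
          f"per-socket = ${per_socket_license(1):,}")
```

Under per-core pricing the software bill grows with every core, including the ones that sit idle most of the time; under a flat per-socket license it does not, which is the trade-off the comments above are debating.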
Stomx
Admin said: "At nearly $18,000, Intel's Xeon 6980P 'Granite Rapids' could be the industry's most expensive CPU in modern history. Intel's latest flagship 128-core Xeon CPU costs $17,800 — Granite Rapids sets a new high watermark"
I remember a few years back, when the first 5nm TSMC Apple Bionic processors with 12B transistors appeared, their manufacturing price was shocking: around $1.5 per core, with a selling price of something like twice that. The whole 6-core processor was sold for the unthinkable price of around $20.
At exactly the same time, AMD's 8-core server chiplets, also 5nm and also TSMC, with around 6B transistors (roughly half as many), were selling at $100 ***per core***. Intel's chiplets are much larger, clearly more than 40 cores each, and hence should cost more per core than AMD's minuscule 8-core ones due to lower yield. But should both AMD and Intel, with silicon that is almost the same apart from minor differences, show a factor of 50-100 price difference versus Apple?
Of course, Apple's cores are different sizes, and it would be better to compare prices per unit of transistors, say per billion, but the conclusion about the huge, fishy discrepancy would be approximately the same. All of them sell like hot cakes in huge quantities, and there is no argument that any of them serves a substantially smaller market. -
Stomx Of course multicore processors are needed, and so far they scale up and perform great. They could do great even with AI; just add native hardware for the missing 8- and 16-bit arithmetic.
My understanding is that Intel, around one or two decades ago, stopped supporting single-precision FP32 natively in favor of 64-bit. Single-precision calculations started to be done in double-precision 64-bit mode, with the final result truncated to 32 bits. As a result, single- and double-precision tests started to give the same timing results.
Now, if they bring all that back, and maybe even add the 4-bit format some AI needs, AI would also start singing and dancing on classical processors. Every time precision drops, from FP64 to FP32 to FP16 and FP8, the performance doubles; do the math. -
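Taking the commenter's rule of thumb at face value (throughput roughly doubles each time the precision is halved), the implied multipliers from an FP64 baseline look like this; these are back-of-the-envelope figures under that assumption, not measurements:

```python
# Rule-of-thumb multipliers only: assumes throughput doubles each time precision halves.
BASELINE_BITS = 64
for bits in (64, 32, 16, 8, 4):
    speedup = BASELINE_BITS // bits  # e.g. FP8 packs 8x as many lanes per vector as FP64
    print(f"FP{bits}: ~{speedup}x the FP64 throughput")
```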
JamesJones44
JRStern said: "Exactly what applications work best on these high core count servers?"
There are a few applications in the enterprise space I can think of, but the biggest one is probably databases. Those hosted as shared services in particular (think Amazon RDS) will benefit greatly from higher core counts. Utilities also typically host massive databases for their real-time measurement processes, often stored in operational databases. Utility measurement points, and the devices producing those points, have been growing rapidly since the ESG revolution started (more renewables online with backup sources, people adding wind and solar to their homes, some even using electric vehicles to feed energy back to the grid at night), and more cores give them headroom to scale without adding new hardware. -
JRStern
JamesJones44 said: "There are a few applications in the enterprise space I can think of, but the biggest one is probably databases."
But there are a dozen other parameters that all need to be in balance; you can't just drop 128 motors onto your Mazda Miata, or 128 wheels, or 128 seats, and think it's all good.
To a first approximation, a very simple database that assigns one core per query can be crudely scaled up by putting 128 cores on a chip. But there are going to be limits: putting 65,000 of them on the chip will not make it super-duper fast or give it immense capacity. Cache, DRAM, and I/O will all bottleneck, on top of plain multi-tasking contention and management overhead. -
jed351 I don't know why this is even an article. Nobody is buying at the listed price.
Our new cluster uses the 8490H (Sapphire Rapids top SKU). Each blade's total cost (2 CPUs, RAM, InfiniBand) is about the price of a single 8490H on Intel's website. -
JamesJones44
JRStern said: "But there are a dozen other parameters that all need to be in balance; you can't just drop 128 motors onto your Mazda Miata, or 128 wheels, or 128 seats, and think it's all good. To a first approximation, a very simple database that assigns one core per query can be crudely scaled up by putting 128 cores on a chip. But there are going to be limits: putting 65,000 of them on the chip will not make it super-duper fast or give it immense capacity. Cache, DRAM, and I/O will all bottleneck, on top of plain multi-tasking contention and management overhead."
Of course. In the history of computers, the limitation has always been whether the application(s) can keep the processor fed, given the other limits of the system. However, for large operational databases, keeping the processor fed is almost never an issue given the large volume of requests. The CPU is almost always the limiting factor for real-time operational databases, which is why so many of them are adding the ability to do compute on GPUs on top of CPUs (among other reasons, to leverage GPU vectorization). It's also why distributed databases, which allow horizontal compute scale, have really accelerated in the market, though horizontal compute comes with latency, which can be undesirable in certain use cases.
In general, an application or database that is written to scale with more compute cores will benefit from more compute cores. Efficiency might be reduced, but more load can be handled.
https://researchcomputing.princeton.edu/support/knowledge-base/scaling-analysis
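The scaling analysis the link above describes is essentially Amdahl's law territory: once some fraction of the work is serial (locks, I/O, memory contention), adding cores yields diminishing returns. A minimal sketch assuming an illustrative 5% serial fraction:

```python
# Amdahl's law: speedup(N) = 1 / (serial + (1 - serial) / N)
# The 5% serial fraction is an illustrative assumption, not a measurement.
SERIAL_FRACTION = 0.05

def amdahl_speedup(cores: int, serial: float = SERIAL_FRACTION) -> float:
    """Ideal strong-scaling speedup on `cores` cores with a fixed serial fraction."""
    return 1.0 / (serial + (1.0 - serial) / cores)

for cores in (1, 8, 32, 128, 65_000):
    print(f"{cores:6d} cores -> {amdahl_speedup(cores):5.1f}x speedup "
          f"(ceiling: {1 / SERIAL_FRACTION:.0f}x)")
```

With even a small serial fraction, 128 cores already capture most of the available speedup, and 65,000 cores buy almost nothing more, which is the point the thread above is making.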