Intel Xeon Platinum 8176 Scalable Processor Review

Test Platforms & How We Test

The Processors

We know, we know. This is another all-Intel lineup in an enterprise-oriented processor review. That's a natural side effect of the company's ~99.6% share of the server CPU market. Until AMD's EPYC processors become widely available, there really aren't any suitable x86 alternatives. However, we have a nice selection of Ivy Bridge-, Haswell-, and Broadwell-EP processors to document Intel's steady march of improvements up to the Xeon Platinum 8176.

|                   | Xeon Platinum 8176 | E5-2697 v4  | E5-2699 v3  | E5-2643 v3  | E5-2690 v2 | E5-2680 v2 | E5-2670 v2 |
|-------------------|--------------------|-------------|-------------|-------------|------------|------------|------------|
| Price             | $8,719.00          | $2,799.99   | $4,799.95   | $1,995.95   | $2,259.95  | $1,348.95  | $1,273.95  |
| Cores             | 28                 | 18          | 18          | 6           | 10         | 10         | 10         |
| Threads           | 56                 | 36          | 36          | 12          | 20         | 20         | 20         |
| Base Frequency    | 2.1 GHz            | 2.3 GHz     | 2.3 GHz     | 3.4 GHz     | 3.0 GHz    | 2.8 GHz    | 2.5 GHz    |
| Max Turbo Boost   | 3.8 GHz            | 3.6 GHz     | 3.6 GHz     | 3.7 GHz     | 3.6 GHz    | 3.6 GHz    | 3.3 GHz    |
| Cache             | 38.5MB             | 45MB        | 45MB        | 20MB        | 25MB       | 25MB       | 25MB       |
| TDP               | 165W               | 145W        | 145W        | 135W        | 130W       | 115W       | 115W       |
| Max. Memory Speed | DDR4-2666          | DDR4-2400   | DDR4-2133   | DDR4-2133   | DDR3-1866  | DDR3-1866  | DDR3-1866  |
| Socket            | FCLGA3647          | FCLGA2011-3 | FCLGA2011-3 | FCLGA2011-3 | FCLGA2011  | FCLGA2011  | FCLGA2011  |
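For a rough sense of what those list prices mean on a per-core basis, here's a short illustrative Python sketch using only the prices and core counts from the table above (street and volume pricing will differ):

```python
# Illustrative only: price per core/thread from the list prices and core
# counts in the table above (street and volume pricing will differ).
specs = {
    "Xeon Platinum 8176": (8719.00, 28, 56),
    "E5-2697 v4": (2799.99, 18, 36),
    "E5-2699 v3": (4799.95, 18, 36),
    "E5-2643 v3": (1995.95, 6, 12),
    "E5-2690 v2": (2259.95, 10, 20),
    "E5-2680 v2": (1348.95, 10, 20),
    "E5-2670 v2": (1273.95, 10, 20),
}

for name, (price, cores, threads) in specs.items():
    print(f"{name:20s} ${price / cores:7.2f}/core  ${price / threads:7.2f}/thread")
```

At list price, the Platinum 8176 works out to roughly $311 per core; of the older parts in the table, only the frequency-optimized E5-2643 v3 costs more per core.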

The Test Platforms

  • Intel Purley S2P2SY3Q Server

Intel sent a Server System S2P2SY3Q test platform built around the dual-socket Intel Server Board S2600WF. Two Xeon Platinum 8176 processors, which feature 28 cores and 56 threads apiece, are complemented by twelve 32GB SK hynix DDR4-2666 DIMMs. That provides a total of 56C/112T and 384GB of memory. The Software Development Platform includes two redundant 1,100W 80 PLUS power supplies. The PSUs, like the fans, are hot-pluggable to avoid downtime due to component failure.
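As a quick sanity check on those totals, the socket and DIMM arithmetic looks like this (a tiny sketch using the values from the configuration described above):

```python
# Socket and DIMM arithmetic for the dual-socket Purley test platform
# described above: 2 x Xeon Platinum 8176 and 12 x 32GB DDR4-2666 DIMMs.
sockets = 2
cores_per_socket = 28
threads_per_core = 2          # Hyper-Threading enabled
dimms = 12
dimm_capacity_gb = 32

total_cores = sockets * cores_per_socket           # 56
total_threads = total_cores * threads_per_core     # 112
total_memory_gb = dimms * dimm_capacity_gb         # 384

print(f"{total_cores}C/{total_threads}T with {total_memory_gb}GB of DDR4-2666")
```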

  • Intel Wildcat Pass S2G3SY1Q Server

We tested the Broadwell-EP-based Xeon E5-2697 v4 and the Haswell-EP-based Xeon E5-2699 v3 and E5-2643 v3 on an Intel Software Development Platform server. The pre-production Grantley-R EP S2G3SY1Q (Wildcat Pass) Broadwell Qualification 2U test bed originally came with two Xeon E5-2697 v4 CPUs, each with 18 Hyper-Threaded cores and 45MB of shared cache.

The test platform features Intel's C610 chipset family and includes eight 32GB SK hynix DDR4-2400 DIMMs (HMA84GL7AMR4N-UH). Intel provides this server for use as a software development platform; it's not designed for use in a production environment. As such, it lacks some of the features that facilitate redundancy, such as dual PSUs. One of the PSU bays is covered, but the other houses a single 900W power supply.

  • Intel Server System R2208GZ4GC

We tested the Ivy Bridge-EP-based (v2) CPUs in Intel's Server System R2208GZ4GC, which pairs the S2600GZ motherboard (C602 chipset) with a production-class chassis featuring the requisite redundant, hot-swappable fans and dual hot-swappable 750W power supplies. We installed 64GB of Kingston DDR3-1600 memory in eight 8GB modules.

How We Test

We benchmarked the servers with the open source Linux-Bench script, which is available on Linux-Bench.com and GitHub and is maintained by ServeTheHome and others in the open source community. The suite runs from an Ubuntu 14.04 LiveCD, booted either from local storage or through a KVM-over-IP connection. The script installs its dependencies and then runs several well-known independent open source benchmarks that characterize CPU performance.
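We don't reproduce the Linux-Bench invocation itself here, but the general workflow is simple: boot the LiveCD, record the hardware a result came from, then let the script pull in its dependencies and run the benchmarks. Below is a minimal sketch of that record-keeping step; the benchmark call at the end is a hypothetical placeholder, not the suite's real entry point.

```python
# Minimal sketch: record the platform a benchmark result came from before a
# run. lscpu, free, and uname are standard Linux utilities; the benchmark
# command at the end is a hypothetical placeholder, not Linux-Bench's actual
# entry point.
import json
import subprocess

def capture(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

platform = {
    "cpu": capture(["lscpu"]),           # sockets, cores, threads, caches
    "memory": capture(["free", "-h"]),   # installed and available memory
    "kernel": capture(["uname", "-r"]).strip(),
}

with open("platform.json", "w") as f:
    json.dump(platform, f, indent=2)

# Hypothetical placeholder; substitute the suite's real entry point here.
# subprocess.run(["./run-benchmarks.sh"], check=True)
```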

Most enterprise deployments are built for specific needs and workloads, and as tempting as application testing is, there are far too many variables to make the results applicable to all but a small subset of users. The benchmarks in this article encompass several industry-standard tools that quantify performance trends, but it's noteworthy that optimized deployments could unlock even more performance.


Comments
  • the nerd 389
    Do these CPUs have the same thermal issues as the i9 series?

    I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account.

    Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs.

    See the comments here for the numbers:
    http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html
  • Snipergod87
    983009 said:
    Do these CPUs have the same thermal issues as the i9 series? I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account. Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs. See the comments here for the numbers: http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html


    Wouldn't be surprised if they did, but also wouldn't be surprised if Intel used solder on these. It's also important to note that servers have much more airflow than your standard desktop, enabling better cooling all around, from the CPU to the VRMs. Server boards are designed for cooling as well, not for aesthetics and stylish heatsink designs.
  • InvalidError
    983009 said:
    the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces.

    That heat has to go from the die, through solder balls, the multi-layer CPU carrier substrate, those tiny contact fingers and, finally, the solder joints on the PCB. The thermal resistance from die to motherboard will still be over an order of magnitude worse than from die to heatsink, and the heat that does take that path is less than what the VRM phases are already sinking into the motherboard's power and ground planes. I wouldn't worry about it.
  • jowen3400
    Can this run Crysis?
  • bit_user
    Quote:
    The 28C/56T Platinum 8176 sells for no less than $8719

    Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum!

    That's $311.39 per core!

    The otherwise identical CPU jumps to a whopping $11722, if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB.

    Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz
  • Kennyy Evony
    jowen3400 said:
    Can this run Crysis?

    Jowen, did you just come up to a Ferrari and ask if it has a hitch for your grandma's trailer?
  • qefyr_
    W8 on ebay\aliexpress for $100
  • bit_user
    2508511 said:
    W8 on ebay\aliexpress for $100

    I wouldn't trust an $8k server CPU I got for $100. I guess if they're legit pulls from upgrades, you could afford to go through a few @ that price to find one that works. Maybe they'd be so cheap because somebody already did cherry-pick the good ones.

    Still, has anyone had any luck with such heavily-discounted server CPUs? Let's limit it to Sandy Bridge or newer.
  • JamesSneed
    328798 said:
    Quote:
    The 28C/56T Platinum 8176 sells for no less than $8719
    Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum! That's $311.39 per core! The otherwise identical CPU jumps to a whopping $11722, if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB. Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz


    That is still dirt cheap for a high end server. An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.
  • bit_user
    87433 said:
    An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.

    A lot of people don't have such high software costs. In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).
  • bit_user
    983009 said:
    I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account.

    Actually, the main reason to solder these is that datacenter operators like to save energy on cooling by running their CPUs rather hot.

    I think you guys should de-lid and find out!
  • bit_user
    2497595 said:
    it is illegal and you could get in trouble for buying engineering samples when they arrive in your country if you live in USA or some countries in EU .

    Wow. Source?

    Unless they're stolen (because it's illegal to receive stolen property, regardless of whether you know it is), how on earth can it be illegal to buy any CPU?

    I can see how it might be a civil offense to sell them, if they're covered by NDA or some other sort of contract, but that would only pertain to the party breaking contract (i.e. the seller). Regardless, I wouldn't want engineering samples because they usually have significant bugs or limitations.
  • bit_user
    2497595 said:
    engineering samples are owned by Intel/AMD and if some one sells them then they are stolen .

    So, then why doesn't the owner get in trouble when Intel/AMD/etc. wants it back? Or is the ownership just a legal fiction created to establish grounds for pursuing buyers?

    2497595 said:
    as for engineering samples full of bugs and limitations ? not really they work fine .

    I have limited experience with them, but I have to disagree. Surely, some work alright. But that's not categorically true. And whenever benchmarks start to leak out about some new CPU or GPU, you always read caveats that they might be from engineering samples that aren't running at full speed.
  • none12345
    "as for bugs ? it is VERY RARE to happen in ES these days..."

    You meant to say very common. All processors have errata in them. I think you mean serious bugs, but all of them have bugs.
  • adamboy64
    This was a great read. It was good to get up to speed on the new Xeon lineup, even though I'm far from understanding all the technical details.
    Thank you.
  • GR1M_ZA
    Would like to see a comparison between the new EPYC server CPUs and these.
  • cats_Paw
    MSI Afterburner can't run on this. Too many threads to fit on the screen.
  • aldaia
    328798 said:
    Quote:
    The 28C/56T Platinum 8176 sells for no less than $8719
    Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum! That's $311.39 per core! The otherwise identical CPU jumps to a whopping $11722, if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB. Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz


    Adding to that, we recently renovated our supercomputer. We have almost 3500 dual-socket compute nodes. That's nearly 7000 24-core Xeon 8160s. Other than having 4 fewer cores per unit, it's identical to the Xeon 8176. I don't really know how much we paid for each Xeon; not even high management knows that, since we ordered the supercomputer as a whole from the best bidder.

    The whole supercomputer is €34 million. €4 million is devoted to the disk system, and €30 million to the compute subsystem plus some work on the electrical and cooling systems. The compute system includes the racks, the interconnection network, cabling (more than 50 km of it), and several months of installing and testing components. I assume most of the cost is due to the compute nodes.

    As a guessing exercise, let's say that €25 million is devoted to the compute nodes. That works out to roughly €7,150 per node, which includes two sockets, motherboard, memory, SSD, redundant power supply, and the router connection to other nodes. Guessing again, I would say each Xeon 8160 should be somewhere around €2,000-2,500. The Xeon 8160 is listed at $4,702.
  • captaincharisma
    328798 said:
    87433 said:
    An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.
    A lot of people don't have such high software costs. In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).


    Which is why the majority of businesses are still stuck on Windows XP and 7 PCs, only able to use Internet Explorer 6 for a web browser.
  • Trevor_45
    These tests are all fine and good for IT professionals. But I want to see some gaming results! Just for the entertainment value. PLEASE!

    Yes, it's a server chip not meant for gaming blah blah blah. Just run the games. k thx.
  • Rob1C
    Now we wait for 7nm Wars.
  • jimmysmitty
    983009 said:
    Do these CPUs have the same thermal issues as the i9 series? I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account. Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs. See the comments here for the numbers: http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html


    You mean thermal issues that will never be seen because server CPUs are never OCed? Most server CPUs will not be maxed out 24x7. A single server with this CPU will probably be cut up into at least 6 different server roles using VMs.

    Either way the i9 seems to be fine at stock speeds. The biggest issues arise when overclocking, which is the same with every CPU.

    Temps are also irrelevant, as they don't have a proper setup for measuring that. Most servers in datacenters, where these will normally reside, have a hot side and a cold side. The cold side is normally kept in the 60s °F, so the air coming in is very cold, and the hot side is all the expelled air that has been pushed over the RAM and CPUs (HDDs too, if you have them in your server instead of a SAN) and gets damn hot. Our server room up in North Dakota lost power about 4 months ago, when it was still very cool outside, and the backup batteries kept the servers running long enough without AC that it hit 165°F in the room.

    Anything that affects the consumer side won't normally affect the server market, as they are very different beasts altogether.
  • bit_user
    34444 said:
    328798 said:
    In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).
    Which is why the majority of businesses are still stuck on Windows XP and 7 PCs, only able to use Internet Explorer 6 for a web browser.

    I think the majority of businesses still on Win7 are just too cheap to upgrade or don't want the hassle. At this point, you might be right about the businesses still on XP.

    Anyway, that's not what I had in mind. I was talking about homegrown datacenter & cloud apps, as this is a server chip.