Intel Xeon Platinum 8176 Scalable Processor Review

Power Consumption

Skylake Power Optimizations

Power consumption is of critical importance in the data center. It's an ongoing expense that must be factored into total cost of ownership and managed accordingly. It also correlates with waste heat, which necessitates cooling that consumes even more power.

One of the best ways to reduce costs is getting more useful work done per watt of power. Intel's Scalable Processor family improves efficiency over prior generations by serving up higher performance while using less power per core.

Much like mainstream Skylake-based chips, the new Xeons employ Speed Shift technology, which cedes control of power states to the processor rather than relying on constant (and latency-prone) hints from the operating system. The OS simply defines preferences, such as minimum and maximum performance levels, and the processor handles the fine-grained adjustments. An expanded set of P-states allows the processor to control frequency and voltage at a more granular level, thereby saving power and accelerating response time. Speed Shift also eliminates the latency associated with P-state commands issued by the operating system.
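On a Linux host, that split is visible through the intel_pstate driver's cpufreq files: the OS writes coarse preferences, and the silicon picks the actual P-state. The snippet below is a minimal sketch of reading those knobs, assuming a stock kernel with intel_pstate in hardware P-state (HWP) mode; it isn't specific to this platform or to our test setup.

    import glob

    # With Speed Shift (HWP), the OS only supplies a preference hint plus
    # frequency limits; the processor picks the actual P-state on its own.
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        with open(f"{path}/energy_performance_preference") as f:
            epp = f.read().strip()
        with open(f"{path}/scaling_min_freq") as f:
            lo = int(f.read())
        with open(f"{path}/scaling_max_freq") as f:
            hi = int(f.read())
        print(f"{path.split('/')[-2]}: EPP={epp}, window={lo // 1000}-{hi // 1000} MHz")

    # Changing the hint requires root; for example, bias cpu0 toward efficiency:
    # with open("/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference", "w") as f:
    #     f.write("power")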

This is a step forward from the Hardware Controlled Power Management (HWPM) feature in Broadwell-EP. Among other optimizations, Intel implemented independent per-core voltage and frequency domains that allow the processor to dynamically manage key uncore components like the mesh topology and shared L3 cache. The larger L2 also reduces the number of requests that reach the LLC. Those requests require a trip across the mesh, and because all data movement consumes power, fewer requests mean less power consumption.
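Those independent per-core domains are easy to observe from software, too: under an uneven load, different cores report different operating frequencies at the same instant. Here's a minimal sketch, again assuming the standard Linux cpufreq sysfs interface, that simply snapshots every core's current frequency twice and counts how many moved on their own.

    import glob
    import time

    def snapshot():
        # Map each logical CPU (e.g. "cpu17") to its current frequency in MHz.
        freqs = {}
        for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"):
            core = path.split("/")[-3]
            with open(path) as f:
                freqs[core] = int(f.read()) // 1000
        return freqs

    before = snapshot()
    time.sleep(1)          # give the hardware a moment to move cores independently
    after = snapshot()
    moved = [c for c in after if after[c] != before[c]]
    print(f"{len(moved)} of {len(after)} cores changed frequency on their own")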

Linux-Bench Power Consumption

We logged platform power consumption during our Linux-Bench run, so these measurements also include the effects of DRAM and power supply efficiency.

Factoring in the amount of work performed per watt provides the best view of overall power efficiency, we think. Generalizing remains a challenging endeavor, though. Because the Scalable Processor family covers such a wide range of target markets and relevant workloads, it's hard to identify a handful of tests applicable to everyone. As such, consider these results a basic indicator of overall power consumption.
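To make the work-per-watt idea concrete, the arithmetic behind any single efficiency figure is simply the benchmark's score divided by the average platform power drawn while it ran. The function below is purely illustrative; the numbers are hypothetical, not values from our logs.

    def perf_per_watt(score, power_samples_w):
        # Benchmark score per average watt over the run (higher is better).
        # power_samples_w: platform power readings, one per second, in watts.
        avg_watts = sum(power_samples_w) / len(power_samples_w)
        return score / avg_watts

    # Hypothetical run: a score of 4200 at an average of 560 W is 7.5 points per watt.
    print(perf_per_watt(4200, [560] * 300))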

The 8176's lofty power numbers are the result of 10 more cores than the most similar previous-gen CPU. Those cores do get more done per watt, though, particularly in threaded benchmarks. For applications that aren't as aggressively parallelized, it's better to buy a processor with fewer cores able to operate at higher clock rates. Intel's "M" processors offer the best mix of core counts and per-core performance, but they're premium products with matching price tags.

Watt Hours

The Platinum 8176 wields 55% more cores than Intel's 18-core Xeon E5-2699 v3, but it only consumes 14 more watt-hours during the Linux-Bench script. That's pretty impressive, especially once you factor in the higher performance, too.

Power Maximum Load And Idle

Power draw is recorded every second in our enterprise lab. During the Linux-Bench tests, we captured multiple high-draw bursts that only appeared for one second, cresting at 711W. The same granularity is used during our Linpack tests, but because of the 8176's lower AVX frequencies, we only recorded a 670W peak.
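Because the logger emits one reading per second, reducing a raw trace to the peaks and watt-hour totals discussed here is straightforward: the peak is the largest sample, watt-hours are the sum of the samples divided by 3,600, and per-core figures divide by the core count. The sketch below illustrates that reduction with a made-up three-sample trace, not our actual log.

    def summarize(samples_w, core_count=28):
        # Reduce a 1 Hz power trace (in watts) to peak, watt-hours, and per-core average.
        peak_w = max(samples_w)
        watt_hours = sum(samples_w) / 3600.0          # one-second samples -> Wh
        per_core_avg_w = sum(samples_w) / len(samples_w) / core_count
        return peak_w, watt_hours, per_core_avg_w

    # Made-up three-second trace, just to show the shape of the data.
    print(summarize([640, 711, 655]))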

The 8176's extra 10 cores lead to higher overall power draw at idle and under full load. As you can see in the second chart, though, which calculates per-core consumption by dividing the total by the core count, Intel's 8176 uses far less power per core than the company's previous-gen CPUs. This paints a nice picture of improved efficiency.


Comments
  • the nerd 389
    Do these CPUs have the same thermal issues as the i9 series?

    I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account.

    Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs.

    See the comments here for the numbers:
    http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html
  • Snipergod87
    983009 said:
    Do these CPUs have the same thermal issues as the i9 series? I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account. Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs. See the comments here for the numbers: http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html


    Wouldn't be surprised if they did, but I also wouldn't be surprised if Intel used solder on these. It's also important to note that servers have much more airflow than your standard desktop, enabling better cooling all around, from the CPU to the VRMs. Server boards are designed for cooling as well, not aesthetics and stylish heatsink designs.
  • InvalidError
    983009 said:
    the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces.

    That heat has to go from the die, through solder balls, the multi-layer CPU carrier substrate, those tiny contact fingers and, finally, solder joints on the PCB. The thermal resistance from die to motherboard will still be over an order of magnitude worse than from die to heatsink, so the heat dumped into the board this way is less than what the VRM phases are already sinking into the motherboard's power and ground planes. I wouldn't worry about it.
  • jowen3400
    Can this run Crysis?
  • bit_user
    Quote:
    The 28C/56T Platinum 8176 sells for no less than $8719

    Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum!

    That's $311.39 per core!

    The otherwise identical CPU jumps to a whopping $11722, if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB.

    Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz
  • Kennyy Evony
    jowen3400 said:
    Can this run Crysis?

    Jowen, did you just come up to a Ferrari and ask if it has a hitch for your grandma's trailer?
  • qefyr_
    W8 on ebay\aliexpress for $100
  • bit_user
    2508511 said:
    W8 on ebay\aliexpress for $100

    I wouldn't trust an $8k server CPU I got for $100. I guess if they're legit pulls from upgrades, you could afford to go through a few at that price to find one that works. Maybe they'd be so cheap because somebody already cherry-picked the good ones.

    Still, has anyone had any luck with such heavily-discounted server CPUs? Let's limit it to Sandy Bridge or newer.
  • JamesSneed
    328798 said:
    Quote:
    The 28C/56T Platinum 8176 sells for no less than $8719
    Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum! That's $311.39 per core! The otherwise identical CPU jumps to a whopping $11722, if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB. Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz


    That is still dirt cheap for a high-end server. An Oracle EE database license is going to be $200K+ on a server like this one. This is nothing in the grand scheme of things.
  • bit_user
    87433 said:
    An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.

    A lot of people don't have such high software costs. In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).
  • bit_user
    983009 said:
    I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account.

    Actually, the main reason to solder these is that datacenter operators like to save energy on cooling by running their CPUs rather hot.

    I think you guys should de-lid and find out!
  • bit_user
    2497595 said:
    it is illegal and you could get in trouble for buying engineering samples when they arrive in your country if you live in USA or some countries in EU.

    Wow. Source?

    Unless they're stolen (because it's illegal to receive stolen property, regardless of whether you know it is), how on earth can it be illegal to buy any CPU?

    I can see how it might be a civil offense to sell them, if they're covered by NDA or some other sort of contract, but that would only pertain to the party breaking contract (i.e. the seller). Regardless, I wouldn't want engineering samples because they usually have significant bugs or limitations.
  • bit_user
    2497595 said:
    engineering samples are owned by Intel/AMD and if someone sells them then they are stolen.

    So, then why doesn't the owner get in trouble when Intel/AMD/etc. wants it back? Or is the ownership just a legal fiction created to establish grounds for pursuing buyers?

    2497595 said:
    as for engineering samples being full of bugs and limitations? Not really, they work fine.

    I have limited experience with them, but I have to disagree. Surely, some work alright. But that's not categorically true. And whenever benchmarks start to leak out about some new CPU or GPU, you always read caveats that they might be from engineering samples that aren't running at full speed.
  • none12345
    "as for bugs ? it is VERY RARE to happen in ES these days..."

    You meant to say very common. All processors have errata in them. I think you mean serious bugs, but all of them have bugs.
  • adamboy64
    This was a great read. It was good to get up to speed on the new Xeon lineup, even though I'm far from understanding all the technical details.
    Thank you.
  • GR1M_ZA
    Would like to see a comparison between the new EPYC server CPUs and these.
  • cats_Paw
    MSI Afterburner can't run on this. Too many threads to fit on the screen.
  • aldaia
    328798 said:
    Quote:
    The 28C/56T Platinum 8176 sells for no less than $8719
    Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum! That's $311.39 per core! The otherwise identical CPU jumps to a whopping $11722, if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB. Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz


    Adding to that, we recently renovated our supercomputer. We have almost 3500 dual-socket compute nodes. That's nearly 7000 24-core Xeon 8160s. Other than 4 fewer cores per unit, it's identical to the Xeon 8176. I don't really know how much we paid for each Xeon; not even high management knows that, since we ordered the supercomputer as a whole from the best bidder.

    The whole supercomputer is €34 million. €4 million is devoted to the disk system, and €30 million to the compute subsystem plus some work on the electrical and cooling systems. The compute system includes the racks, the interconnection network, cabling (more than 50 km of it) and several months of installing and testing components. I assume most of the cost is due to the compute nodes.

    As a guessing exercise, let's say that €25 million is devoted to the compute nodes; that is €7150 per node, which includes 2 sockets, motherboard, memory, SSD, redundant power supply and the router to connect to other nodes. Guessing again, I would say that each Xeon 8160 should be somewhere around €2000-2500. The Xeon 8160 is listed at $4702.
  • captaincharisma
    328798 said:
    87433 said:
    An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.
    A lot of people don't have such high software costs. In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).


    which is why the majority of businesses are still stuck on Windows XP and 7 PCs, only able to use Internet Explorer 6 for a web browser
  • Trevor_45
    These tests are all fine and good for IT professionals. But I want to see some gaming results! Just for the entertainment value. PLEASE!

    Yes, it's a server chip not meant for gaming blah blah blah. Just run the games. k thx.
  • Rob1C
    Now we wait for 7nm Wars.
  • jimmysmitty
    983009 said:
    Do these CPUs have the same thermal issues as the i9 series? I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account. Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs. See the comments here for the numbers: http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html


    You mean thermal issues that will never be seen because server CPUs are never OCed? Most server CPUs will not be maxed out 24x7. A single server with this CPU will probably be cut up into at least 6 different server roles using VMs.

    Either way, the i9 seems to be fine at stock speeds. The biggest issues arise when overclocking, which is the same with every CPU.

    Temps are also irrelevant, as they do not have a proper setup for it. Most servers in datacenters, where these will normally reside, have a hot and a cold side. The cold side is normally kept in the 60s, so the air coming in is very cold, and the hot side is all the expelled air being pushed over the RAM and CPUs (HDDs too, if you have them in your server instead of a SAN), and it gets damn hot. Our server room up in North Dakota lost power about 4 months ago, when it was still very cool outside, and the backup batteries kept the servers running long enough without AC that it hit 165°F in the room.

    Anything that the consumer side is affected by won't normally affect the server market, as they are very different beasts altogether.
  • bit_user
    34444 said:
    328798 said:
    In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).
    which is why the majority of businesses are still stuck on windows XP and 7 PC's only able to use internet explorer 6 for a web browser

    I think the majority of businesses still on Win7 are just too cheap to upgrade or don't want the hassle. At this point, you might be right about the businesses still on XP.

    Anyway, that's not what I had in mind. I was talking about homegrown datacenter & cloud apps, as this is a server chip.