Intel Xeon Platinum 8176 Scalable Processor Review
Final Analysis
The Xeon Scalable Processor launch is indicative of Intel’s larger agenda. The company is doubling down on its storage, networking, and FPGA efforts. Not surprisingly, then, it builds several (complementary) technologies into the new Scalable Processors and Lewisburg chipset.
The consolidated platform fuels an almost entirely converged solution for enterprise-oriented customers. Chipset-integrated 10 GbE networking and on-package Omni-Path connectivity could eliminate the need for add-in NICs altogether. QuickAssist technology, which provides a boost to some storage and networking workloads, may replace dedicated accelerators. And we might see Intel's vROC feature render dedicated HBA and/or RAID controllers obsolete for large NVMe-based SSD deployments. We’re told to expect integrated FPGAs in the future, but Intel already has Altera-derived add-in options available.
Intel also has Optane DIMMs in development, which would widen its reach even more. We expected those modules to make their debut with the Purley platform, but they've clearly been pushed back. Regardless, the company is serious about storage. Optane DC P4800X SSDs are on offer for the best possible performance in storage-bound workloads, along with Intel's Memory Drive technology that merges Optane into the memory hierarchy.
Our performance metrics revealed sizeable gains in multi-threaded workloads compared to the Broadwell-EP generation. Higher Turbo Boost clock rates also improved single-threaded results in applications poorly optimized for parallelism. The Platinum 8176’s enhanced memory controller boosts throughput, which particularly benefits memory-bound tasks; as you saw, our benchmarks showed phenomenal bandwidth figures. The addition of AVX-512 raises per-core compute power in properly optimized software, while changes to the cache hierarchy and mesh topology improve the CPU’s performance profile more broadly. It is worth noting that Intel’s architectural tweaks don’t benefit all applications, though further software optimization could change that in the future.
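To give a concrete sense of what “properly optimized” means here, below is a minimal sketch of a loop vectorized with AVX-512F compiler intrinsics. It is our own illustration rather than anything from the review or Intel’s documentation, and the function and variable names are hypothetical.

```c
#include <immintrin.h>
#include <stddef.h>

/* Hypothetical example: add two float arrays 16 elements at a time using
 * AVX-512F intrinsics. Each 512-bit register holds 16 single-precision
 * values, double the width of an AVX2 register.
 * Build with something like: gcc -O2 -mavx512f example.c */
void add_arrays_avx512(const float *a, const float *b, float *out, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);               /* load 16 floats  */
        __m512 vb = _mm512_loadu_ps(b + i);               /* load 16 floats  */
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb)); /* add and store   */
    }
    for (; i < n; i++)                                    /* scalar remainder */
        out[i] = a[i] + b[i];
}
```

Compilers can often auto-vectorize simple loops like this when built for a target such as -march=skylake-avx512, which is part of why some workloads pick up gains from a recompile alone while others need hand-tuning.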
Although the Platinum 8176-based machine drew more power than our comparison systems with fewer cores, a bit of math shows Intel’s latest achieving much better per-core efficiency than any prior generation.
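For readers who want to reproduce that bit of math, the sketch below simply divides package power by core count. It uses the parts’ rated TDPs (165 W for the 28-core Platinum 8176, 145 W for a 22-core E5-2699 v4) as a rough stand-in for measured draw; these are not the review’s measured figures.

```c
#include <stdio.h>

/* Back-of-the-envelope per-core power: package power divided by core count.
 * Rated TDPs stand in for measured draw here; they are not the review's
 * measured results. */
int main(void)
{
    const double xeon_8176_tdp_w = 165.0;  /* 28-core Skylake-SP   */
    const double e5_2699v4_tdp_w = 145.0;  /* 22-core Broadwell-EP */

    printf("Platinum 8176: %.2f W per core\n", xeon_8176_tdp_w / 28.0); /* ~5.9 W */
    printf("E5-2699 v4:    %.2f W per core\n", e5_2699v4_tdp_w / 22.0); /* ~6.6 W */
    return 0;
}
```

Fold in the throughput gains described above and the per-core performance-per-watt picture tilts even further in Skylake-SP’s favor.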
Most data center and enterprise customers are locked into three- and five-year refresh cycles due to maintenance contracts. These folks typically upgrade to new servers as those contracts expire. Anyone replacing hardware that old with Xeon Scalable Processors is going to realize a massive performance and efficiency improvement.
Intel’s data center dominance is about to be challenged by AMD’s EPYC processors. Fortunately for the incumbent, it's going to take some time for the EPYC ecosystem to mature and prove itself trustworthy. Still, considering AMD’s solid line-up of launch partners, the company is off to a good start. Until we’ve seen concrete data from AMD’s latest, though, it’s hard to say how warmly businesses will embrace the underdog. Much of AMD’s success is likely to hinge on cost more than performance. The Scalable Processor family features Intel’s famous segmentation and high cost structure, and there’s no doubt enterprise buyers look forward to a competitive AMD keeping Intel on its toes.
Intel’s advances, taken as a whole, are impressive. The company is also transitioning to a Data Center-First strategy that will find the newest architectures debuting on server platforms. IT professionals, rejoice!
Paul Alcorn is the Managing Editor: News and Emerging Tech for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.
-
the nerd 389
Do these CPUs have the same thermal issues as the i9 series?
I know these aren't going to be overclocked, but the additional CPU temps introduce a number of non-trivial engineering challenges that would result in significant reliability issues if not taken into account.
Specifically, as thermal resistance to the heatsink increases, the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces. That could raise the temperatures of surrounding components to a point that reliability is compromised. This is the case with the Core i9 CPUs.
See the comments here for the numbers:
http://www.tomshardware.com/forum/id-3464475/skylake-mess-explored-thermal-paste-runaway-power.html
-
Snipergod87
19926080 said: Do these CPUs have the same thermal issues as the i9 series? [...]
Wouldn't be surprised if they did, but I also wouldn't be surprised if Intel used solder on these. It's also important to note that servers have much more airflow than your standard desktop, enabling better cooling all around, from the CPU to the VRMs. Server boards are designed for cooling, too, not aesthetics and stylish heatsink designs.
-
InvalidError
19926080 said: ...the thermal resistance to the motherboard drops with the larger socket and more pins. This means more heat will be dumped into the motherboard's traces.
That heat has to go from the die, through solder balls, the multi-layer CPU carrier substrate, those tiny contact fingers and, finally, the solder joints on the PCB. The thermal resistance from die to motherboard will still be over an order of magnitude worse than from die to heatsink, so the heat that actually reaches the board is less than what the VRM phases are already sinking into the motherboard's power and ground planes. I wouldn't worry about it.
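A crude two-path version of that argument, with hypothetical resistance and power values rather than measurements: heat leaving the die splits between the heatsink path and the socket/board path roughly in inverse proportion to their thermal resistances.

```c
#include <stdio.h>

/* Crude two-path thermal split: heat from the die divides between the
 * heatsink path and the socket/motherboard path in inverse proportion
 * to their thermal resistances. All values are hypothetical. */
int main(void)
{
    const double r_heatsink_k_per_w = 0.3;   /* die-to-heatsink (hypothetical)     */
    const double r_board_k_per_w    = 5.0;   /* die-to-motherboard (hypothetical)  */
    const double total_heat_w       = 165.0; /* package dissipation (hypothetical) */

    /* Treat both paths as running to roughly the same ambient temperature. */
    double q_board = total_heat_w * (1.0 / r_board_k_per_w) /
                     (1.0 / r_heatsink_k_per_w + 1.0 / r_board_k_per_w);

    printf("Heat into the board: %.1f W of %.1f W (%.1f%%)\n",
           q_board, total_heat_w, 100.0 * q_board / total_heat_w);
    return 0;
}
```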
-
bit_user
"The 28C/56T Platinum 8176 sells for no less than $8719"
Actually, the big customers don't pay that much, but still... For that, it had better be made of platinum!
That's $311.39 per core!
The otherwise identical CPU jumps to a whopping $11,722 if you want to equip it with up to 1.5 TB of RAM instead of only 768 GB.
Source: http://ark.intel.com/products/120508/Intel-Xeon-Platinum-8176-Processor-38_5M-Cache-2_10-GHz
-
Kennyy Evony
jowen3400 said: Can this run Crysis?
Jowen, did you just come up to a Ferrari and ask if it has a hitch for your grandma's trailer?
-
bit_user
19927274 said: W8 on ebay\aliexpress for $100
I wouldn't trust an $8k server CPU I got for $100. I guess if they're legit pulls from upgrades, you could afford to go through a few at that price to find one that works. Maybe they'd be so cheap because somebody already cherry-picked the good ones.
Still, has anyone had any luck with such heavily discounted server CPUs? Let's limit it to Sandy Bridge or newer.
-
JamesSneed
19927188 said: The 28C/56T Platinum 8176 sells for no less than $8719 [...]
That is still dirt cheap for a high-end server. An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.
-
bit_user
19927866 said: An Oracle EE database license is going to be 200K+ on a server like this one. This is nothing in the grand scheme of things.
A lot of people don't have such high software costs. In many cases, the software is mostly home-grown and open source (or like 100%, if you're Google).