The Cascade Lake-based Xeons, officially referred to as Second Generation Xeon Scalable processors, arrive at a critical time for the company. They promise more cores and more performance at similar price points across much of the mainstream lineup.
Intel's Xeon powers an estimated 96% of the world's servers, but AMD's first-gen EPYC processors have begun nibbling away at that share. Big businesses tend to wait for architectures to mature before adopting them, which is why the second-gen EPYC Rome models pose a real threat to Intel's dominance. They'll use a 7nm process that is denser than Intel's 14nm node while purportedly offering better power efficiency. That smaller manufacturing process will enable up to 64 cores and 128 threads in a single package, besting Intel's finest.
Those CPUs are expected to surface later this year, leaving Intel with a big gap to plug as it awaits the arrival of its 14nm Cooper Lake processors, and then the repeatedly delayed 10nm Ice Lake Xeon chips in 2020.
Faced with EPYC Rome's market-topping 64 cores and 128 threads, Intel also introduced its new Xeon Platinum 9000-series, armed with as many as 56 cores, 112 threads, and 12 memory channels crammed into a package that dissipates up to 400W. These new behemoths, which are essentially two Skylake-SP CPUs in a single socket, only come in OEM servers. They aren't available on their own.
Despite the impressive arsenal of Cascade Lake chips we're being introduced to, this launch is best understood as another evolutionary step in Intel's ambition to become a platform company rather than just a peddler of data center chips. The addition of Optane DC Persistent Memory DIMMs opens up new avenues for Intel in the memory market. Moreover, the company is expanding upon complementary businesses with new SSDs, in both NAND and Optane flavors, along with 100G Columbiaville networking solutions.
Even with Intel's obvious goal of becoming a full solutions provider, at the end of the day, its success depends on the ability to deliver compelling processors. Let's take a look at the latest CPUs in our lab.
Intel Cascade Lake Xeon Platinum 8280, Platinum 8268, and Gold 6230
Cascade Lake Xeons employ the same Skylake-SP microarchitecture as their predecessors, meaning we won't see performance improvements attributable to underlying design changes. Intel does offer a few enhancements to woo new customers: support for faster DRAM, support for up to 4.5TB of Optane DC Persistent Memory DIMMs on the Platinum and Gold models, higher maximum memory capacity, more L3 cache on many mid-range models, a refined 14nm++ process that Intel says improves frequencies and power consumption, and new instructions tailored for AI workloads.
Intel also uses the same die configurations (XCC, HCC, LCC) with the mesh interconnect for its mainstream Platinum, Gold, Bronze, and Silver models. As a result, core counts still top out at 28, trailing AMD's EPYC Naples line-up that comes with up to 32 cores and 64 threads.
The key message this time around is that you get more performance at every price point. We can see that delivered across the stack in the form of an extra 200 MHz of base/Turbo Boost frequency over the Skylake-SP models, along with the step up to six-channel DDR4-2933 (instead of DDR4-2666). Memory capacity is now up to 1TB per chip, with more expensive models supporting either 2TB or 4.5TB. It's noteworthy, though, that the base models' 1TB of memory support per socket still trails EPYC's 2TB.
Overall, Intel claims that its new chips offer a 30% gen-on-gen performance increase. The Gold 6230 is representative of many of the company's improvements. It comes with higher Turbo Boost frequencies, four additional Hyper-Threaded cores, and more L3 cache, all at the same $1,894 price point as its predecessor.
| | Cascade Lake Platinum 8280 | Skylake-SP Platinum 8180 | AMD EPYC 7601 | Cascade Lake Platinum 8268 | Skylake-SP Platinum 8168 | Cascade Lake Gold 6230 | Skylake-SP Gold 6130 |
|---|---|---|---|---|---|---|---|
| Socket | LGA 3647 | LGA 3647 | SP3 | LGA 3647 | LGA 3647 | LGA 3647 | LGA 3647 |
| Cores / Threads | 28 / 56 | 28 / 56 | 32 / 64 | 24 / 48 | 24 / 48 | 20 / 40 | 16 / 32 |
| Base Freq. | 2.7 GHz | 2.5 GHz | 2.2 GHz | 2.9 GHz | 2.7 GHz | 2.1 GHz | 2.1 GHz |
| Turbo Freq. | 4.0 GHz | 3.8 GHz | 3.2 GHz | 3.9 GHz | 3.7 GHz | 3.9 GHz | 3.7 GHz |
| Memory Support | 6-Channel DDR4-2933 | 6-Channel DDR4-2666 | 8-Channel DDR4-2666 | 6-Channel DDR4-2933 | 6-Channel DDR4-2666 | 6-Channel DDR4-2933 | 6-Channel DDR4-2666 |
| Scalability (up to) | 8-Socket | 8-Socket | 2-Socket | 8-Socket | 8-Socket | 4-Socket | 4-Socket |
Like the previous-gen Xeon Scalable processors, Intel's Cascade Lake models drop into an LGA 3647 interface (Socket P) on platforms with C620-series (Lewisburg) platform controller hubs, and the processors are compatible with existing server boards. Intel's OEM partners have also released a wave of new platforms that support the latest technologies, including Optane DC Persistent Memory.
By virtue of the same underlying microarchitecture, Intel's Cascade Lake processors still offer 48 lanes of PCIe 3.0. AMD's EPYC processors come with 128 lanes, which turns into a big advantage for dense NVMe storage servers and the multi-GPU setups used for deep learning. Intel is countering that shortcoming on a different front: beefing up the CPU's own AI capabilities. The new DL Boost suite adds multiple features that the company says deliver a 14x speed-up in AI inference workloads.
Intel also adds support for VNNI (Vector Neural Network Instructions), which optimizes operations on the smaller data types commonly used in machine learning tasks. VNNI fuses what previously took three instructions into one for int8 work (VPDPBUSD), and a pair of instructions into one for int16 throughput (VPDPWSSD). These operations still conform to the familiar AVX-512 voltage/frequency curve.
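To make the fusion concrete: VPDPBUSD multiplies four unsigned 8-bit values by four signed 8-bit values per 32-bit lane and adds the sum of the products to a signed 32-bit accumulator, all in one instruction. A minimal Python model of one lane (the function name is ours, not an Intel API):

```python
def vpdpbusd_lane(acc, a_u8, b_s8):
    """Model one 32-bit lane of VPDPBUSD: four unsigned-8-bit x
    signed-8-bit products are summed and added to the signed
    32-bit accumulator in a single fused operation."""
    assert len(a_u8) == len(b_s8) == 4
    return acc + sum(u * s for u, s in zip(a_u8, b_s8))

# Pre-VNNI AVX-512 needed a three-instruction sequence
# (VPMADDUBSW, VPMADDWD, VPADDD) to get the same result.
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -20, 30, -40]))  # -100
```

Chaining this accumulate step across lanes and iterations is what makes int8 dot products, the core of quantized inference, so much cheaper on Cascade Lake.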
Intel's Optane DC Persistent Memory also makes its debut with the Cascade Lake platform. These new DIMMs slot into the DRAM interface, just like a normal memory module. They are available in 128, 256, and 512GB capacities, and can be used as either memory or storage. Unlike DRAM, 3D XPoint retains data after power is removed, thus enabling radical new use cases. The goal is to bridge the gap between storage and memory, potentially boosting total capacity to as much as 6.5TB in a dual-socket server at a much friendlier price point than DRAM.
The DIMMs are addressable in either App Direct or Memory Mode. The former exposes the DIMMs as a storage device, while the latter presents them as a slower tier of memory. In Memory Mode, the Optane DIMMs hold "cold" data while frequently-accessed data stays in DRAM, which effectively serves as a cache in front of the Optane tier. In App Direct mode, applications control data placement across the DRAM and Optane tiers directly, though they have to be tuned for maximum benefit.
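In App Direct mode, software typically memory-maps files that live on a DAX-enabled filesystem backed by the Optane region, then reads and writes the mapping with plain loads and stores. A rough Python sketch of that access pattern, with an ordinary temp file standing in for a real pmem-backed mount (the path is illustrative, and production code would use something like PMDK with cache-line flush instructions rather than a page-cache flush):

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted filesystem backed by
# Optane DIMMs; here it is just an ordinary temp file.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
with open(path, "wb") as f:
    f.truncate(4096)  # reserve a 4 KiB "persistent" region

# Map the region and write to it with plain byte stores, the way
# App Direct applications update persistent structures in place.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        region[0:5] = b"hello"
        region.flush()  # real pmem code persists via CLWB + fences

# The data outlives the mapping, mimicking survival across power loss.
with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```

The appeal of this model is that "persistence" costs a store plus a flush instead of a full storage-stack round trip, which is why tuned applications see the biggest wins.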
Intel also designed a new memory controller to support the DIMMs and assist in identifying data for caching. However, it isn't sharing the finer architectural details. We know that the DIMMs are physically and electrically compatible with the JEDEC standard DIMM slot, but use an Intel-proprietary DDR-T protocol to deal with the uneven latency that stems from writing data to persistent memory. Optane DIMMs share a memory channel with normal DIMM slots, and they are only compatible with Intel's Cascade Lake (and newer) Xeons. Of the caveats to be aware of, the most important is that Optane DIMMs only run at DDR4-2666, which limits all memory in the system to the same speed.