Sapphire Rapids Uncovered: 56 Cores, 64GB HBM2E, Multi-Chip Design

(Image credit: Intel)

A leaked slide, purportedly from Intel's roadmap, summarizes what we know about Intel's upcoming 4th Generation Xeon Scalable 'Sapphire Rapids' processor and adds some new details. (Thanks, VideoCardz.)

Intel has always envisioned its Sapphire Rapids processor and the accompanying Eagle Stream platform as revolutionary products. Paired with Intel's Xe-HPC 'Ponte Vecchio' compute GPU, Sapphire Rapids will power Intel's first exascale supercomputer, which will rely on an AI+HPC paradigm. In the datacenter, the new CPU will have to support a host of technologies that are not yet available, as datacenter workloads are changing. Sapphire Rapids radically differs from its predecessors on multiple levels, including microarchitecture, memory hierarchy, platform, and even design ideology.

The Sapphire Rapids CPU

The Sapphire Rapids CPU will adopt a multi-chip module design (or rather a multi-chiplet module) featuring four identical chips located next to each other and packaged using Intel's EMIB technology. Each chip contains 14 Golden Cove cores, the same cores used in the Alder Lake CPUs. However, these cores will feature numerous datacenter/supercomputer enhancements compared to their desktop counterparts.

In particular, Sapphire Rapids will support Advanced Matrix Extensions (AMX); the AVX512_BF16 extension for deep learning; Intel's Data Streaming Accelerator (DSA), a data copy and transformation engine that offloads such work from the CPU cores (NVMe calls are rather expensive); architectural LBRs (last branch records); and HLAT (hypervisor-managed linear address translation). The maximum number of cores supported by Sapphire Rapids will be 56, but there will naturally be models with 44, 28, or even 24 cores.
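For developers wondering which of these extensions a given part actually exposes, support is reported through CPUID in the usual way. The sketch below, assuming GCC or Clang on x86-64, checks the AMX and AVX512_BF16 feature bits; the leaf and bit assignments follow Intel's published programming references rather than the leaked slide.

```c
/* Minimal sketch: probe CPUID for the new Sapphire Rapids ISA extensions.
 * Assumes GCC/Clang on x86-64; bit positions follow Intel's programming
 * references, not the leaked slide. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 7, sub-leaf 0: the AMX feature bits are reported in EDX. */
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("AMX-BF16: %s\n", (edx & (1u << 22)) ? "yes" : "no");
        printf("AMX-TILE: %s\n", (edx & (1u << 24)) ? "yes" : "no");
        printf("AMX-INT8: %s\n", (edx & (1u << 25)) ? "yes" : "no");
    }

    /* Leaf 7, sub-leaf 1: AVX512_BF16 is reported in EAX bit 5. */
    if (__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx)) {
        printf("AVX512_BF16: %s\n", (eax & (1u << 5)) ? "yes" : "no");
    }
    return 0;
}
```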

As far as memory is concerned, Sapphire Rapids will support HBM2E, DDR5, and Intel's Optane Persistent Memory 300-series (codenamed Crow Pass) non-volatile DIMMs. At least some Sapphire Rapids CPU SKUs will carry up to 64GB of HBM2E DRAM, offering 1TB/s of bandwidth per socket. We don't know whether these will be separate HBM2E packages placed next to the CPU chiplets, or whether they'll be stacked below them using Intel's Foveros packaging technology.

The processor will also feature eight DDR5-4800 memory channels supporting one module per channel (thus offering 307.2 GB/s of bandwidth per socket). Today, 1DPC sounds like a limitation, but even with Samsung's recently announced 512GB RDIMMs, eight channels add up to 4TB of memory per socket, and higher-capacity DDR5 modules will be available later.
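For those who want to check the math, the 307.2 GB/s and 4TB figures fall straight out of the channel count, transfer rate, and module capacity. The sketch below is our own back-of-the-envelope arithmetic using standard DDR5 parameters, not numbers pulled from the slide.

```c
/* Back-of-the-envelope check of the per-socket DDR5 numbers quoted above.
 * Our own arithmetic based on standard DDR5 parameters, not from the slide. */
#include <stdio.h>

int main(void) {
    const double transfers_per_sec = 4800e6;  /* DDR5-4800 */
    const double bytes_per_transfer = 8.0;    /* 64-bit data bus per channel */
    const int channels = 8;
    const int dimms_per_channel = 1;          /* 1DPC */
    const double dimm_capacity_gb = 512.0;    /* Samsung's 512GB RDIMM */

    double bw_gbs = transfers_per_sec * bytes_per_transfer * channels / 1e9;
    double cap_tb = channels * dimms_per_channel * dimm_capacity_gb / 1024.0;

    printf("Peak DDR5 bandwidth per socket: %.1f GB/s\n", bw_gbs);  /* 307.2 */
    printf("Max DDR5 capacity per socket:   %.0f TB\n", cap_tb);    /* 4 */
    return 0;
}
```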

Finally, Sapphire Rapids processors can be paired with Intel's Optane Persistent Memory 300-series 3D XPoint-based modules, which are said to increase bandwidth substantially compared to existing offerings. Optane modules are meant to bring a large amount of relatively cheap memory closer to the CPU to accelerate applications like in-memory databases, so many of Intel's partners would like to have them. However, because it's unclear which company will produce 3D XPoint for Intel starting in 2022 (Micron is pulling away from 3D XPoint production and abandoning the project), we have no idea whether such modules will launch at all. Theoretically, Intel could validate upcoming JEDEC-standard NVDIMMs with its next-generation CPUs, but that is speculation.
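For context, the way software typically talks to persistent-memory DIMMs in App Direct mode is through PMDK's libpmem: map a file on a DAX-enabled filesystem and explicitly flush CPU caches after writes. The sketch below illustrates that general pattern only; the mount point and file name are hypothetical, and nothing in it is specific to the 300-series modules.

```c
/* Minimal sketch of App Direct persistent-memory access via PMDK's libpmem.
 * Nothing here is specific to the Optane 300-series; the /mnt/pmem0 mount
 * point and file name are hypothetical. Build with: cc demo.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* Map (and create, if needed) a file on a DAX-mounted pmem filesystem. */
    char *addr = pmem_map_file("/mnt/pmem0/demo", 4096, PMEM_FILE_CREATE,
                               0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Write through the mapping, then flush CPU caches to the media. */
    strcpy(addr, "hello, persistent memory");
    if (is_pmem)
        pmem_persist(addr, mapped_len);   /* cache-line flush to persistence */
    else
        pmem_msync(addr, mapped_len);     /* fall back to msync */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```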

Intel's Sapphire Rapids processors will be made using the company's 10nm Enhanced SuperFin technology, which is optimized for performance. For all the advantages the new CPUs will bring, they will be rather power hungry. The leaked information indicates that their maximum TDP will hit 350W (up from 270W in the case of Ice Lake-SP), so there are questions about what sort of cooling they'll require. Meanwhile, Intel's upcoming LGA4677 socket will probably be able to deliver a huge amount of power to the CPU.

The Eagle Stream Platform

Being aimed at a wide variety of workloads, Intel's Eagle Stream platform will support one, two, four, and eight LGA4677 sockets. Cooling will be an interesting topic for high-performance Sapphire Rapids SKUs aimed at HPC applications, which sometimes use eight CPUs per machine. Meanwhile, these CPUs will use Intel's UPI 2.0 interface, which will deliver data transfer rates of up to 16 GT/s, up from 11.2 GT/s today. Each CPU will have up to four UPI 2.0 links (probably external links).

As far as other enhancements are concerned, Intel's Sapphire Rapids processor will support up to 80 PCIe 5.0 lanes (with x16, x8, and x4 bifurcation) at 32 GT/s, as well as a PCIe 4.0 x2 link. On top of PCIe Gen5, the CPUs will support the CXL 1.1 protocol to optimize CPU-to-device (for accelerators) and CPU-to-memory (for memory expansion and storage devices) communications.
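To put those lanes in perspective, a single PCIe 5.0 x16 slot works out to roughly 63 GB/s in each direction. The sketch below is our own arithmetic assuming the standard 128b/130b line encoding; it is not a figure from the slide.

```c
/* Rough per-direction bandwidth of a PCIe 5.0 x16 link. Our own arithmetic
 * using the standard 128b/130b line encoding; not a figure from the slide. */
#include <stdio.h>

int main(void) {
    const double gt_per_s = 32.0;            /* PCIe 5.0 raw signaling rate */
    const int lanes = 16;
    const double encoding = 128.0 / 130.0;   /* 128b/130b line coding */

    double gb_per_s = gt_per_s * lanes * encoding / 8.0;  /* bits -> bytes */
    printf("PCIe 5.0 x16: ~%.1f GB/s per direction\n", gb_per_s);  /* ~63.0 */
    return 0;
}
```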

Some Grains of Salt

Intel started sampling its 4th Generation Xeon Scalable 'Sapphire Rapids' processors several months ago, so it's not surprising that a number of previously unknown features and capabilities (e.g., HBM2E support and the MCM design) have been revealed by various unofficial sources in recent months. In fact, we expect more interesting leaks as more server makers gain access to the new CPUs.

Unfortunately, these leaks have never been confirmed by Intel or backed by excerpts from its official documents, so it's possible that some of the information is incorrect. The slide from the alleged Intel roadmap confirms many of the capabilities that are (or were) at least planned for Sapphire Rapids, but keep in mind that these are not the final specifications of the products Intel will ship in 2022.

At this point we cannot confirm the legitimacy of the slide, though we can say that a substantial portion of the information it reveals is indeed correct and has been confirmed either by Intel or by our sources with knowledge of the matter. Meanwhile, we have no idea how old the slide is, so take it with a grain of salt.

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • JayNor
    "However, considering the fact that it's unclear which company will produce 3D XPoint for Intel starting in 2022"

    Intel is evidently developing gen3, Crow Pass, on their own, associated with Sapphire Rapids. They also have shown a gen4 Optane, Donahue Pass, on some leaked roadmaps, associated with the Granite Rapids server chips.
  • JayNor
The DSA can apparently accommodate Optane memory, according to the online documents.

    Its cache flush operation is curious.

    https://software.intel.com/content/www/us/en/develop/articles/intel-data-streaming-accelerator-architecture-specification.html