Micron Unveils 128GB and 256GB CXL 2.0 Expansion Modules

(Image credit: Micron)

Micron on Monday introduced its CZ120 memory expansion modules that comply with the CXL 2.0 Type 3 specification and feature a PCIe 5.0 x8 interface. The modules are designed to expand DRAM capacity and bandwidth for servers that need more high-performance memory to run workloads that require loads of RAM, including in-memory databases and software-as-a-service. Samples of the modules are now available to interested parties.

The Micron CZ120 memory modules are equipped with 128GB or 256GB of memory, which is in line with what typical RDIMMs offer. The expansion modules come in an E3.S 2T form factor with a PCIe 5.0 x8 interface and are built around Microchip's SMC 2000 controller, which is compliant with the CXL 2.0 Type 3 standard, paired with Micron DRAM made on the company's 1α (1-alpha) production node.

From a performance point of view, Micron's CZ120 CXL 2.0 memory expansion modules provide bandwidth of up to 36 GB/s (measured by running an MLC workload with a 2:1 read/write ratio on a single CZ120 module), which is not far behind the 38.4 GB/s peak bandwidth of 128GB and 256GB DDR5-4800 RDIMMs.

Modern servers based on AMD's 4th Generation EPYC 'Genoa' CPUs with a 12-channel DDR5-4800 memory subsystem offer up to 460.8 GB/s of memory bandwidth per socket, whereas machines powered by Intel's 4th Generation Xeon Scalable processors with an 8-channel DDR5-4800 subsystem top out at 307.2 GB/s. While both platforms give their CPUs ample bandwidth, some workloads need more DRAM capacity and higher bandwidth still, and CXL 2.0 memory expansion modules are designed for exactly that purpose.
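
The peak figures above follow directly from the DDR5-4800 data rate and the channel count per socket. A quick sanity check in Python, using only numbers already quoted in this article (theoretical peaks; real workloads land lower):

```python
# Sanity check of the peak-bandwidth figures quoted above (theoretical peaks only).
data_rate_mts = 4800        # DDR5-4800: mega-transfers per second
bus_width_bytes = 8         # 64-bit data bus per RDIMM/channel (excluding ECC)

per_dimm_gbs = data_rate_mts * bus_width_bytes / 1000   # MB/s -> GB/s
print(f"Per DDR5-4800 RDIMM: {per_dimm_gbs:.1f} GB/s")  # 38.4 GB/s

for cpu, channels in (("EPYC 'Genoa' (12-channel)", 12),
                      ("4th Gen Xeon Scalable (8-channel)", 8)):
    # 460.8 GB/s and 307.2 GB/s per socket, matching the figures above
    print(f"{cpu}: {per_dimm_gbs * channels:.1f} GB/s per socket")
```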

Micron claims that adding four 256GB CZ120 modules to a machine equipped with twelve 64GB DDR5 RDIMMs (768GB) delivers 24% greater memory read/write bandwidth per CPU than a server using RDIMMs alone, while the additional capacity lets the server process up to 96% more database queries per day.
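
The 24% and 96% figures are Micron's measured claims; the back-of-envelope sketch below only reproduces the capacity math and a naive bandwidth ceiling from the per-module figures quoted earlier:

```python
# Back-of-envelope sketch of the configuration Micron describes.
# The 24%/96% figures in the article are Micron's measured claims; this only
# shows the raw capacity math and a naive bandwidth ceiling, not measurements.

rdimm_capacity_gb = 12 * 64          # 768 GB of DDR5 RDIMMs
cxl_capacity_gb = 4 * 256            # 1,024 GB across four CZ120 modules
total_gb = rdimm_capacity_gb + cxl_capacity_gb
print(f"Capacity: {rdimm_capacity_gb} GB -> {total_gb} GB "
      f"({total_gb / rdimm_capacity_gb:.1f}x)")          # 768 GB -> 1792 GB (2.3x)

rdimm_bw = 460.8                     # GB/s, 12-channel DDR5-4800 socket
cxl_bw = 4 * 36                      # GB/s, four modules at up to 36 GB/s each
print(f"Bandwidth ceiling: +{cxl_bw / rdimm_bw:.0%} over RDIMMs alone")
# ~+31% theoretical ceiling; Micron's measured figure is 24%
```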

"Micron is advancing the adoption of CXL memory with this CZ120 sampling milestone to key customers," said Siva Makineni, vice president of the Micron Advanced Memory Systems Group. "We have been developing and testing our CZ120 memory expansion modules utilizing both Intel and AMD platforms capable of supporting the CXL standard. Our product innovation coupled with our collaborative efforts with the CXL ecosystem will enable faster acceptance of this new standard, as we work collectively to meet the ever-growing demands of data centers and their memory-intensive workloads."

Micron has not disclosed when it plans to ship its CZ120 memory expansion modules commercially or how much they will cost. The products will likely be deployed sometime in 2024 after interested parties validate and qualify them, though some companies may deploy them sooner, whereas others will test them for longer, depending on their workloads.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bit_user
    I get the sense they're holding back, while they test the waters. I think you should probably be able to pack more capacity in that form factor, even before resorting to chip-stacking.

    BTW, I think both Genoa and Sapphire Rapids only go as high as CXL 1.1. I wonder if Emerald Rapids is going to move up to CXL 2.0. The difference isn't raw speed, but rather a matter of features and functionality. You have to go all the way to CXL 3.0, before there's a bump in speed (i.e. PCIe 6.0 PHY).
  • thestryker
    bit_user said:
    I get the sense they're holding back, while they test the waters. I think you should probably be able to pack more capacity in that form factor, even before resorting to chip-stacking.
    I'm betting that this is likely the best they can do with 16Gb chips even using both sides of the PCB (rough die-count math at the end of this post). I would assume that a move to 24Gb would happen before stacking, and maybe even 32Gb, depending on whether they hit manufacturing timelines.
    bit_user said:
    BTW, I think both Genoa and Sapphire Rapids only go as high as CXL 1.1. I wonder if Emerald Rapids is going to move up to CXL 2.0. The difference isn't raw speed, but rather a matter of features and functionality. You have to go all the way to CXL 3.0, before there's a bump in speed (i.e. PCIe 6.0 PHY).
    Intel has been pretty cagey about EMR specs so I'd assume it's 1.1 again while Granite/Sierra should be 2.0. With any luck there will be details at Hotchips, but if not I'd imagine there should be some during the Innovation event.
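
    Rough die-count math behind the 16Gb point above (a quick sketch; the dual-die-package split is an assumption, not something Micron has confirmed):

    ```python
    # Rough die count for a 256GB module at different DRAM die densities.
    module_gbit = 256 * 8                        # 256 GB = 2048 Gbit of raw DRAM

    for die_gbit in (16, 24, 32):                # current and upcoming die densities
        dies = module_gbit / die_gbit
        print(f"{die_gbit}Gb dies: {dies:.0f}")  # 128 / ~85 / 64
    # 128 x 16Gb dies -> 64 dual-die packages, 32 per PCB side (assumed layout),
    # which is already a tight fit for an E3.S 2T board.
    ```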
  • bit_user
    thestryker said:
    Intel has been pretty cagey about EMR specs so I'd assume it's 1.1 again while Granite/Sierra should be 2.0.
    I think I might have seen a leaked roadmap, and it's as you say.

    IIRC, 2.0 adds switching (or maybe it's multi-level switching?), and that will enable much larger memory capacities by allowing more of these devices than there are CXL lanes to support them. 3.0 adds support for fabrics, and that should be an enabler for having shared RAM pools at rack-scale.
  • Li Ken-un
    CXL would have been the perfect use for 3D XPoint (Optane) media:
    Denser than RAM, and thus higher-capacity modules;
    But also byte-addressable like RAM; and
    Both faster and lower latency than NAND
    Too bad we're stuck with either the proprietary DIMMs which work only in special Intel proprietary systems or slower PCIe-limited SSDs.
  • bit_user
    Li Ken-un said:
    CXL would have been the perfect use for 3D XPoint (Optane) media:
    Intel was supposedly working on such a product when Optane got canceled.

    Li Ken-un said:

    Denser than RAM, and thus higher-capacity modules;
    But also byte-addressable like RAM; and
    Both faster and lower latency than NAND
    You can do almost as well (in some respects, better) with NAND-backed DRAM + power-loss capacitors. Optane was denser and cheaper than DRAM, but not much. It wouldn't offer a capacity advantage over die-stacked DDR5.

    The biggest problem with Optane DIMMs was extremely limited software support for using it as persistent memory. If you use it like a fast SSD, then you lose the benefits of byte-addressability and kernel overhead kills most of the performance advantage vs. just having an Optane NVMe SSD.
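
    To illustrate the byte-addressability point: with DAX, persistent memory can be mapped straight into a process and updated with plain loads and stores, skipping the block layer entirely. A minimal sketch in Python, assuming a file on a DAX-mounted filesystem at a hypothetical path; real code would use PMDK's libpmem to guarantee flush ordering:

    ```python
    # Minimal sketch of byte-addressable access to persistent memory via mmap.
    # Assumes a file on a DAX-mounted filesystem (hypothetical path below);
    # production code would use PMDK/libpmem for proper persistence ordering.
    import mmap
    import os
    import struct

    PATH = "/mnt/pmem/counter.bin"   # hypothetical DAX-backed file

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, 4096)                       # one page is plenty for a counter

    with mmap.mmap(fd, 4096) as pm:
        # Plain load/store on the mapping -- no read()/write() syscalls and no
        # block-layer round trip, which is the point of byte-addressability.
        (count,) = struct.unpack_from("<Q", pm, 0)
        struct.pack_into("<Q", pm, 0, count + 1)
        pm.flush()                               # msync; libpmem would use CLWB + fence
    os.close(fd)
    ```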