Samsung Developing 24Gb DDR5 ICs: 768GB DDR5 Modules Possible

(Image credit: Samsung)

Samsung said this week that it is developing 24Gb DDR5 memory devices at the request of customers that operate cloud datacenters. Such ICs will enable the company to build memory modules with capacities of up to 768GB for servers, as well as high-capacity memory solutions for client PCs. In addition, Samsung disclosed some details about its DRAM fabrication technology that uses extreme ultraviolet (EUV) lithography.

24Gb DDR5 Chips in Development 

"In order to meet the demand and request by the cloud companies, we are also developing a maximum 24Gb DDR5 product," a Samsung executive said at an earnings conference this week, according to a transcript by SeekingAlpha

Samsung has already demonstrated a 512GB registered DIMM (RDIMM) memory module that uses 32 16GB stacks, each built from eight 16Gb DRAM devices. The 8-Hi stacks use through-silicon via (TSV) interconnects to keep power consumption low and signal quality high.

By using 24Gb memory ICs in 8-Hi stacks, Samsung could increase the capacity of a single stack to 24GB and the capacity of a 32-stack module to 768GB. Using such RDIMMs, a server CPU featuring eight memory channels and supporting two modules per channel could be equipped with over 12TB of DDR5 memory. To put that number in context, today's Intel Xeon Scalable 'Ice Lake-SP' CPUs designed for memory-hungry workloads support up to 6TB of DRAM.
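
The arithmetic behind those figures is simple. Here is a back-of-the-envelope sketch of how the capacities scale; the breakdown and function names are ours for illustration, not a Samsung or JEDEC specification:

```python
# Back-of-the-envelope DDR5 capacity arithmetic using the figures cited above.
# Illustrative only: actual module organizations are defined by JEDEC and the vendor.

GBITS_PER_GBYTE = 8

def stack_capacity_gb(device_gbit: int, dies_per_stack: int) -> float:
    """Capacity of one TSV die stack, in GB."""
    return device_gbit * dies_per_stack / GBITS_PER_GBYTE

def module_capacity_gb(device_gbit: int, dies_per_stack: int, stacks: int) -> float:
    """Capacity of an RDIMM built from the given number of die stacks."""
    return stack_capacity_gb(device_gbit, dies_per_stack) * stacks

def socket_capacity_gb(module_gb: float, channels: int, dimms_per_channel: int) -> float:
    """Maximum DRAM per CPU socket."""
    return module_gb * channels * dimms_per_channel

# 16Gb devices, 8-Hi stacks, 32 stacks per module -> Samsung's 512GB RDIMM
print(module_capacity_gb(16, 8, 32))   # 512.0
# 24Gb devices in the same organization -> 768GB
print(module_capacity_gb(24, 8, 32))   # 768.0
# Eight channels, two RDIMMs per channel -> 12,288GB (12TB) per socket
print(socket_capacity_gb(768, 8, 2))   # 12288.0
```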

In addition, Samsung could build 96GB, 192GB, or 384GB modules for mainstream and ultra-dense servers that do not use more than one RDIMM per channel but can still take advantage of extra DRAM capacity.

For client applications, using 24Gb memory chips instead of 16Gb ICs could increase memory module capacity by 50%, so expect 24GB and 48GB DDR5 modules at some point. Meanwhile, Samsung notes that 16Gb DDR5 devices will remain mainstream for the time being, so even if 24Gb devices make their way into client applications, do not expect them too soon.
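
For reference, the same arithmetic shows where the 24GB and 48GB figures come from. This rough sketch assumes a conventional non-ECC DDR5 UDIMM built from eight x8 chips per rank, which is our assumption about module organization rather than anything Samsung has specified:

```python
# Rough client (non-ECC UDIMM) capacity math, assuming eight x8 DRAM chips per rank.
def udimm_capacity_gb(device_gbit: int, ranks: int, chips_per_rank: int = 8) -> float:
    return device_gbit * chips_per_rank * ranks / 8  # 8 Gbit = 1 GB

for density in (16, 24):
    single = udimm_capacity_gb(density, ranks=1)
    dual = udimm_capacity_gb(density, ranks=2)
    print(f"{density}Gb devices: {single:.0f}GB single-rank, {dual:.0f}GB dual-rank")

# 16Gb devices: 16GB single-rank, 32GB dual-rank
# 24Gb devices: 24GB single-rank, 48GB dual-rank -> a 50% increase
```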

DRAM Shrinking Gets Harder 

In addition to increased data transfer rates and performance-enhancing features, one of DDR5's key capabilities is support for per-device capacities of up to 64Gb (up from 16Gb in the case of DDR4), as well as stacking of 8 to 16 DRAM dies (depending on capacity) within one package (up from four in the case of DDR4).

Doubling per-device capacity from 16Gb to 32Gb is challenging because it gets increasingly hard to shrink DRAM transistors and capacitor structures, and newer process technologies no longer provide tangible node-to-node density improvements. For example, Samsung refers to its most advanced DUV-only fabrication process as a 15nm node, whereas its latest D1a technology, which relies on EUV for five layers, is called 14nm.

"Our 14 nm DRAM is the smallest design rule in the industry's 14 nm class," a Samsung executive said. "We will mass produce this product in the second half by applying EUV to five layers." 

Such a small step from 15nm to 14nm reflects Samsung's conservative approach and its unwillingness to increase the risks associated with new equipment and aggressive density gains. Yet it also underscores that increasing per-device density will not be easy even with DDR5's yield-enhancing features and EUV lithography. To that end, Samsung's development of a 24Gb DDR5 device is fully justified.

Previously, the company had not disclosed the number of EUV layers that D1a uses. By using EUV instead of DUV multi-patterning, Samsung reduces the number of process steps and, with them, DRAM costs. Samsung is currently sampling its 14nm D1a-based 16Gb DRAMs with customers and plans to start mass production in the second half of 2021.

When?

Samsung is the first memory manufacturer to pre-announce its 24Gb DDR5 ICs and will be among the first to capitalize on such high-capacity DRAM devices. The only question is when. 

Perhaps for competitive reasons, Samsung did not reveal when it intends to start production of 24Gb DDR5 DRAMs and high-capacity memory modules based on them. To address the immediate need for ultra-high-capacity DDR5 memory modules when Intel's Xeon Scalable 'Sapphire Rapids' processors hit the market in mid-2022, Samsung has 16Gb-based 512GB RDIMMs that are sampling with various server customers right now.

Therefore, 24Gb-based high-capacity modules will likely arrive sometime later, unless there are customers willing to deploy them as soon as possible without a long validation process.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • enewmen
    I'm still waiting for the 512GB Intel DIMM using optane and fast cache. The optane-based sticks should also have ultra low watt sleep states.
  • Mandark
    enewmen said:
    I'm still waiting for the 512GB Intel DIMM using optane and fast cache. The optane-based sticks should also have ultra low watt sleep states.
    For what? You probably couldn’t even afford it. It will never be a consumer products
  • jeremyj_83
    enewmen said:
    I'm still waiting for the 512GB Intel DIMM using optane and fast cache. The optane-based sticks should also have ultra low watt sleep states.
    512GB Optane DIMMs are incredibly expensive at about $8500 per DIMM. When using Optane DIMMs, you need enough DRAM to act as a high speed buffer to hide the latency and vastly slower performance of Optane vs DRAM. Outside of the server space there is little reason to go with Optane DIMMs.
  • enewmen
    Mandark said:
    For what? You probably couldn’t even afford it. It will never be a consumer products
    I can't afford it if it does exist. However, I want to see the progress that will trickle down to the consumer level eventually. This should open up applications that are not currently possible.
    One application that comes to mind is using all the 64-bit address space. Basically, all applications and data will be available in memory at all times without needing to load and unload programs from storage. Just thinking.
  • I think it’ll be sometime before we see that level of memory modules for consumers. Some day
  • USAFRet
    enewmen said:
    I can't afford it if it does exist. However, I want to see the progress that will trickle down to the consumer level eventually. This should open up applications that are not currently possible.
    One application that comes to mind is using all the 64-bit address space. Basically, all applications and data will be available in memory at all times without needing to load and unload programs from storage. Just thinking.
    That is like being in 2002, and waiting for solid state drive technology to filter down to the consumer.

    The amount of brainpower I will expend on this new thing is exactly zero, until it actually appears.

    Actually, though...I've already expended more than that in this post.