AMD Confirms Twelve DDR5 Memory Channels For Zen 4 EPYC CPUs

(Image credit: Samsung)

AMD has published a set of patches for the company's EDAC (Error Detection and Correction) driver code for the next-generation EPYC processors based on the Zen 4 microarchitecture. The new patches indicate that the upcoming CPUs will support unprecedented memory bandwidth and capacity per socket.

The patches (found by Phoronix) bring in support for DDR5 registered DIMMs (RDIMMs) and DDR5 load-reduced DIMMs (LRDIMMs) for the fourth-generation EPYC processors codenamed Genoa (Family 19h Models 10h-1Fh and A0h-AFh CPUs).

The patches also confirm that the upcoming EPYC 7004-series will support up to 12 memory controllers per socket, up from eight for AMD's existing server parts. Unfortunately, we do not know how many DIMMs per channel (DPC) the chips will support.

Twelve 64-bit DDR5 memory channels running at DDR5-4800 would theoretically give Genoa processors a whopping 460.8 GB/s of memory bandwidth per socket, a significant increase over the 204.8 GB/s available to current-generation EPYC CPUs with eight channels of DDR4-3200.
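The arithmetic behind those figures is straightforward: peak bandwidth is channels × transfer rate × bus width. A minimal sketch (the DDR5-4800 rate for Genoa is an assumption implied by the 460.8 GB/s figure):

```python
def peak_bandwidth_gbs(channels: int, mts: int, bus_bytes: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s for 64-bit (8-byte) channels.

    channels  -- number of memory channels per socket
    mts       -- data rate in megatransfers per second (e.g. 3200 for DDR4-3200)
    bus_bytes -- channel width in bytes (64-bit channel = 8 bytes)
    """
    # MT/s * bytes/transfer -> MB/s; divide by 1000 -> GB/s
    return channels * mts * bus_bytes / 1000

# Current EPYC 'Milan': 8 channels of DDR4-3200
milan = peak_bandwidth_gbs(8, 3200)    # 204.8 GB/s
# Next-gen 'Genoa': 12 channels, assuming DDR5-4800
genoa = peak_bandwidth_gbs(12, 4800)   # 460.8 GB/s
print(milan, genoa)
```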

Memory bandwidth will not be the only improvement in next-generation EPYC 'Genoa' CPUs. Twelve memory channels also enable higher memory capacities. Samsung has already demonstrated 512GB DDR5 RDIMMs and confirmed that 768GB DDR5 RDIMMs are possible. With twelve 512GB modules, AMD's next-generation server processors could support up to 6TB of memory per socket (up from 4TB today).

However, if Genoa supports two RDIMMs per channel, that capacity will stretch to 12TB of DDR5. AMD could increase the capacity per memory channel, and per socket, further with LRDIMMs (thanks to their octal-ranked module architecture), albeit at the cost of performance.
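The capacity ceiling follows the same kind of arithmetic: channels × DIMMs per channel × module size. A quick sketch of the scenarios above (the 2-DPC case is, as the article notes, unconfirmed):

```python
def max_capacity_tb(channels: int, dimms_per_channel: int, module_gb: int) -> float:
    """Maximum memory per socket in TB (1 TB = 1024 GB)."""
    return channels * dimms_per_channel * module_gb / 1024

# 12 channels, one 512GB RDIMM per channel
one_dpc = max_capacity_tb(12, 1, 512)  # 6.0 TB
# 12 channels, two 512GB RDIMMs per channel (if Genoa supports 2 DPC)
two_dpc = max_capacity_tb(12, 2, 512)  # 12.0 TB
print(one_dpc, two_dpc)
```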

AMD's EPYC 7004-series 'Genoa' processors will bring tangible memory improvements compared to existing server processors, which will naturally improve their real-world performance. 


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Alvar "Miles" Udell
    Not exactly a surprise. Milan already has 64 cores and if Genoa is to top out at 96 cores, it's going to need 12 channels to keep an 8:1 ratio.

    Also since this was leaked in August, it's not really new news.

    Zen 4 Madness: AMD EPYC Genoa With 96 Cores, 12-Channel DDR5 Memory, and AVX-512 | Tom's Hardware (tomshardware.com)
    Reply
  • -Fran-
    Now AMD needs to tell everyone AM5 is going to be QuadChannel. Come on AMD, we all want this. Give QuadChannel on mainstream.

    Regards :P
    Reply
  • jeremyj_83
    Alvar Miles Udell said:
    Not exactly a surprise. Milan already has 64 cores and if Genoa is to top out at 96 cores, it's going to need 12 channels to keep an 8:1 ratio.

    Also since this was leaked in August, it's not really new news.

    Zen 4 Madness: AMD EPYC Genoa With 96 Cores, 12-Channel DDR5 Memory, and AVX-512 | Tom's Hardware (tomshardware.com)
    With DDR5, AMD could have gotten away with fewer channels due to the increased bandwidth; staying with 8 channels would have given a 62.5% increase in RAM bandwidth per socket. I think the bigger thing is AMD wanted more RAM density.

    I have said on forums that I feel Sapphire Rapids will be behind the curve since it is only going to have 8-channel DDR5. For virtualization you need RAM. While having a lot of CPU helps, the shared nature of virtualization makes it pretty easy to over-provision CPU. However, when you over-provision RAM you will very quickly run into performance issues. For example, I have some server hosts that have 128 vCPUs but have allocated 158 vCPUs, or 23% over-provisioning. I could easily go to 50% CPU over-provisioning and not have any performance issues (right now it is showing 14% CPU usage).

    The key word is it was leaked. You cannot take leaks as proof. This report means that the leak was accurate which is always nice. That said there were rumors back in March that said it would be 12 channels.
    Reply
  • Soaptrail
    However, if Genoa supports two RDIMMs per channel, that capacity will stretch up to 12TB of DDR5. AMD could increase the capacity per memory channel and per socket further With LRDIMMs (due to octal-ranked module architecture), albeit at the cost of performance.

    Whoa! Is Rambus making a comeback? Here is the history for the neophytes: https://en.wikipedia.org/wiki/Rambus#Lawsuits
    Reply
  • JayNor
    Intel added 4 stacks of HBM to the version of SPR used in HPC. It is being used in Aurora. Those are 16GB per stack. I believe those can be configured as 8 x 128-wide memory channels per stack ... so, effectively adding 32 memory channels to the 8 existing DDR5 channels.

    One interesting option with the pcie5/cxl is that large external memory pools will become available via memory pool controllers that will sit on that bus. It will also enable memory package power and physical formats different from DIMMs to exist in those pools. Intel is already working on support for Optane via that path, which would reduce the complexity and timing constraints of their DDR5 memory accesses.

    Another interesting configuration would be to use only the HBM stacks as local DDR, and to replace all the external memory pins with more PCIE5.
    Reply
  • jeremyj_83
    JayNor said:
    Intel added 4 stacks of HBM to the version of SPR used in HPC. It is being used in Aurora. Those are 16GB per stack. I believe those can be configured as 8x128wide memory channels per stack ... so, effectively adding 32 memory channels to the 8 existing ddr5 channels.

    One interesting option with the pcie5/cxl is that large external memory pools will become available via memory pool controllers that will sit on that bus. It will also enable memory package power and physical formats different from DIMMs to exist in those pools. Intel is already working on support for Optane via that path, which would reduce the complexity and timing constraints of their DDR5 memory accesses.

    Another interesting configuration would be to use only the HBM stacks as local DDR, and to replace all the external memory pins with more PCIE5.
    I believe the HBM Intel has added acts like an L4 cache and is not viewed as RAM by the OS/hypervisor. While that will help alleviate main RAM bandwidth constraints (much like AMD's V-Cache), Intel will still be at a disadvantage when it comes to RAM capacity, like it was with Xeon Scalable until Ice Lake.

    Right now the most popular RDIMMs are 64GB DIMMs: you get good capacity and they are relatively cheap. My guess is the 128GB DDR5 DIMMs, maybe the 256GB, will be the go-to for next-gen servers. At 1 DIMM per channel with 128GB DIMMs you get 1TB RAM/socket in 8 channel vs 1.5TB RAM/socket in 12 channel. Being at an absolute RAM capacity disadvantage really hurts when it comes to servers.

    Intel can try pushing next-gen Optane DIMMs to "fix" the issue compared to AMD, but Optane DIMMs, while cheaper than RAM, aren't that much cheaper for the added complexity. Not to mention that if this is a production environment, your software needs to support Optane. For example, SAP HANA 2 SPS3 (which came out in 2018) supports Optane in App Direct mode and NOT Memory mode. However, there are a lot of restrictions on that, especially if you are running your DB on VMware, as most companies will be doing. In VMware you are relegated to Cascade Lake only with P100 DIMMs on 6.7 U3 EP14 or 7.0 P01 or later. On top of that you need DIMMs + Optane DIMMs, so the savings aren't huge, like 10% just a few months ago, for a lot of complexity. Does that really make any sense?
    Reply
  • wifiburger
    and... still no DDR5 on the market

    moving the power delivery on the mem sticks was the biggest fail ever for DDR5

    I don't think any of these memory producers will recover in 2022 & AM5 being DDR5 only is dead on arrival
    Reply
  • Eximo
    Certainly will be interesting. If DDR5 becomes scarce enough, they might keep making DDR4 motherboards for 13th gen.

    AMD will have a hard decision to make with AM5. Or perhaps their timing has already paid off, since they can wait for the DDR5 market to settle before releasing a CPU dependent on it.
    Reply