DDR5 Specification Released: Fast RAM With Built-In Voltage Regulators

DDR5 RAM (Image credit: Micron)

The JEDEC Solid State Technology Association announced Tuesday that it has finalized the specification for DDR5 SDRAM. Like DDR4 and its predecessors, DDR5 aims to drive memory density and frequency to new heights.

DDR5 RAM sticks will have the same number of pins, 288, as DDR4 DRAM modules. The pin layout, however, is different, which means you won't be able to use DDR5 modules in a DDR4 slot. As expected, the new design commands a fresh home, which in this case will be a DDR5 slot.

DDR4 employs a 16-bank structure with four bank groups. DDR5's improved design comprises 32 banks distributed over eight bank groups. The burst length on DDR5 is doubled from eight to 16. DDR5 also comes equipped with the Same-Bank Refresh function (SBRF), which lets the system access other banks while one bank in each bank group is being refreshed.
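One detail worth noting alongside the doubled burst length (not stated in the article, but part of the DDR5 design): each DDR5 DIMM is split into two independent 32-bit subchannels, so a single BL16 burst still delivers a standard 64-byte CPU cache line. A quick sketch of that arithmetic:

```python
# DDR5 doubles the burst length but halves the channel width by
# splitting each DIMM into two independent 32-bit subchannels,
# so one burst still fills one 64-byte cache line.

def bytes_per_burst(bus_width_bits: int, burst_length: int) -> int:
    """Bytes transferred by one burst on one (sub)channel."""
    return (bus_width_bits // 8) * burst_length

ddr4 = bytes_per_burst(64, 8)   # one 64-bit DDR4 channel, BL8
ddr5 = bytes_per_burst(32, 16)  # one 32-bit DDR5 subchannel, BL16
print(ddr4, ddr5)               # both come out to 64 bytes
```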

DDR5 enables up to four times higher capacity per module in comparison to DDR4. DDR4 modules handle 16 Gbit chips and max out at 32GB. DDR5, on the other hand, can leverage 64 Gbit chips, which pushes the maximum capacity on a single module from 32GB up to a whopping 128GB. 

Things look even brighter on the enterprise side. DDR5 supports die stacking, so memory vendors can potentially stack up to 16 dies onto one chip. As a result, a single Load-Reduced DIMM (LRDIMM) can come with a capacity of 4TB. 
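The 4TB figure follows from the numbers above: a 16-high stack of 64 Gbit dies works out to 128GB per package. A back-of-envelope check, where the 32-package module layout is my own plausible assumption rather than anything from the spec:

```python
# Back-of-envelope check of the 4TB LRDIMM figure. The die density
# and stack height come from the spec; the package count per module
# is an assumed, plausible layout.

DIE_GBIT = 64        # max DDR5 die density, in gigabits
DIES_PER_STACK = 16  # max stack height
PACKAGES = 32        # assumed package count on one LRDIMM

stack_gb = DIE_GBIT * DIES_PER_STACK / 8   # GB per stacked package
module_tb = stack_gb * PACKAGES / 1024     # TB per module
print(stack_gb, module_tb)                 # 128.0 GB -> 4.0 TB
```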

DDR5 vs DDR4

| | DDR5 | DDR4 |
| --- | --- | --- |
| Data Rates | 3,200 - 6,400 MTps | 1,600 - 3,200 MTps |
| Device Densities | 8Gb - 64Gb | 2Gb - 16Gb |
| Max UDIMM Size | 128GB | 32GB |
| Bank Groups (BG) / Banks | 8 BG x 2 banks (8Gb x4/x8); 4 BG x 2 banks (8Gb x16); 8 BG x 4 banks (16-64Gb x4/x8); 4 BG x 4 banks (16-64Gb x16) | 4 BG x 4 banks (x4/x8); 2 BG x 4 banks (x16) |
| Burst Length | BL16, BL32 (and BC8 OTF, BL32 OTF) | BL8 (and BL4) |
| REFRESH Commands | All bank and same bank | All bank |
| VDD / VDDQ / VPP | 1.1V / 1.1V / 1.8V | 1.2V / 1.2V / 2.5V |

In addition to quadrupling module capacity, DDR5 serves up a healthy boost in memory bandwidth over DDR4. For comparison, the official data rates for DDR4 span from 1,600 MTps to 3,200 MTps. DDR5 starts where DDR4 left off, ranging from 3,200 MTps to 6,400 MTps -- double the maximum data rate of DDR4.
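Since each transfer on a 64-bit channel moves 8 bytes, doubling the data rate doubles the peak theoretical bandwidth. A minimal sketch of that conversion:

```python
# Peak theoretical bandwidth for one 64-bit memory channel:
# transfers per second times 8 bytes per transfer.

def peak_gbps(data_rate_mtps: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a data rate given in MT/s."""
    return data_rate_mtps * bus_bytes / 1000

print(peak_gbps(3200))  # DDR4-3200: 25.6 GB/s
print(peak_gbps(6400))  # DDR5-6400: 51.2 GB/s
```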

JEDEC's specifications are a good reference point, but the best RAM vendors have been known to push the limits on their own. Nowadays, you can find DDR4 memory kits with data rates up to 5,000 MTps, so you can expect DDR5 memory kits to eventually exceed the 6,400 MTps mark. With that in mind, SK Hynix's ambitious goal to release a DDR5-8400 module seems feasible. 

Keeping with tradition, DDR5 also brings operating voltage improvements. A lower operating voltage means better power efficiency. DDR5 modules will run at 1.1V, as opposed to 1.2V on DDR4.

JEDEC has taken voltage regulation a step further. One of the motherboard's duties is regulating voltage for each individual RAM stick. This will be a thing of the past with DDR5: each module will carry its own power management IC (PMIC) integrated on the stick. This will take the load off the motherboard but will likely drive up module prices.

DDR5 Release Date 

On Tuesday, Micron also launched its Technology Enablement Program to facilitate the transition to DDR5. The program gives approved partners early access to technical information and support, as well as DDR5 components and samples.

DDR5 adoption should commence in 2021. Server platforms will likely be the first to welcome DDR5 before the standard arrives on consumer platforms. 

On the Intel front, Sapphire Rapids CPUs will come with DDR5 support. As for AMD, the chipmaker's upcoming fourth-generation EPYC (codename Genoa) processors should also support DDR5. 

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • InvalidError
    Single DIMM now going up to 4TB with DDR5 LRDIMM die-stacking? Wonder if Intel will still have an itch to charge extra for memory support beyond 1.5TB :)
    Reply
  • drtweak
    InvalidError said:
    Single DIMM now going up to 4TB with DDR5 LRDIMM die-stacking? Wonder if Intel will still have an itch to charge extra for memory support beyond 1.5TB :)


    Right, I was thinking the same exact thing. I've seen some PCs, although from a few years back in the DDR3 days, on servers with about 1.5TB of RAM. The whole chassis had RAM daughterboards ALL OVER THE PLACE, dozens and dozens of sticks. Now all of that, times two or three, can fit on one single stick.
    Reply
  • PeterDru
    A few minutes ago it said "Drve platforms"; now it reads "Server platforms".
    It makes me wonder what "Drve platforms" are.
    And what other articles have been modified in the meantime... just like in "1984".
    Reply
  • Kamen Rider Blade
    Currently, with DDR4, using 2 Rows and Double-Sided + 2 GiB/RAM Package, they can only fit 64 GiB per DIMM.

    You can buy 64 GiB DIMM Modules right now:
    https://www.newegg.com/p/pl?N=100007611%20601349177%20601275379&Order=1
    If they go 1x Height DIMM Specs with 2 Rows and Double-Sided + 8 GiB/RAM Module, they should be able to fit in 256 GiB per DIMM.

    Remember these specialty "Double-Height" DIMMs?
    https://www.gamersnexus.net/guides/3462-zadak-32gb-3200mhz-double-capacity-dimm
    If memory manufacturers go 2x Height DIMM Specs with 4 Rows and Double-Sided + 8 GiB/RAM Module, they should be able to fit 512 GiB per DIMM.

    But I would bet that would be limited to Enterprise setups at best. Only they can truly benefit from that much RAM.
    Reply
  • King_V
    I know this is more a corner(ish) case, but I'm wondering how AMD's APU graphics performance will be affected by this doubling of bandwidth.
    Reply
  • Kamen Rider Blade
    King_V said:
    I know this is more a corner(ish) case, but I'm wondering how AMD's APU graphics performance will be affected by this doubling of bandwidth.

    The Integrated Radeon Graphics should scale nicely with more bandwidth =D
    Reply
  • InvalidError
    King_V said:
    I know this is more a corner(ish) case, but I'm wondering how AMD's APU graphics performance will be affected by this doubling of bandwidth.
    DDR5-6400 will likely carry an eye-watering price tag for a while and not make much sense for a budget build. Performance-wise, it'll depend on whether AMD decides to scale up the IGP size to match. DDR5-4800 will likely be the mainstream speed for a while, so I could imagine AMD increasing the shader unit count by 50% on top of Navi IPC gains, which would translate to 60-70% net IGP performance gain.
    Reply
  • mitch074
    InvalidError said:
    DDR5-6400 will likely carry an eye-watering price tag for a while and not make much sense for a budget build. Performance-wise, it'll depend on whether AMD decides to scale up the IGP size to match. DDR5-4800 will likely be the mainstream speed for a while, so I could imagine AMD increasing the shader unit count by 50% on top of Navi IPC gains, which would translate to 60-70% net IGP performance gain.
    Could be, but previous benchmarks showed that past a certain threshold, iGPU performance didn't scale as well with RAM speed, number of units notwithstanding (that was with 2x00G-3x00G IGPs), making me think that AMD would have to reduce latency between the APU and the RAM for these new throughputs to really show an improvement. They may already be there with Renoir; we'll have to wait a few months still to be sure.
    Reply
  • nofanneeded
    DDR5 comes equipped with the Same Bank Refresh function (SBRF), allowing the PC to tap into other banks while one is operating.

    Here we go again, more security holes from a new hardware design.
    Reply
  • InvalidError
    mitch074 said:
    previous benchmarks proved that past a certain threshold, iGPU performance didn't scale as well with RAM speed
    You can't scale RAM performance beyond the point where you have ~100% core utilization, and IGPs are under-powered by design because they have to leave a fair chunk of memory bandwidth for the CPU. Latency shouldn't be a major concern for GPUs since they have multiple mechanisms to hide it. However, with IGPs, you cannot test the IGP independently from the CPU, and the CPU is certainly far more latency-sensitive, enough so in most cases to explain away any IGP benchmark differences. You do get nearly perfect IGP performance scaling from single-channel to dual-channel and across memory clocks in the budget-friendly range, which clearly indicates that IGPs are typically heavily constrained by memory bandwidth.

    (Well, it does vary quite a bit depending on the particular benchmark.)
    Reply