Micron to Introduce GDDR7 Memory in 1H 2024

(Image credit: Micron)

Micron said on Wednesday that it would introduce its first GDDR7 memory chips in the first half of next year. The new type of memory promises higher performance than GDDR6 and GDDR6X, but it will require brand-new memory controllers and therefore new GPUs.

"We plan to introduce our next-generation G7 product on our industry-leading 1ß node in the first half of calendar year 2024," said Sanjay Mehrotra, chief executive of Micron.

GDDR7 SGRAM will be the next-generation memory for GPUs, set to be used in some of the best graphics cards as well as other devices that need high bandwidth but do not necessarily require expensive HBM3 memory. Samsung envisions GDDR7 offering data transfer speeds around 36 GT/s, though it remains to be seen when the new type of SGRAM will reach that level of performance.

Earlier this year, Cadence revealed that GDDR7 memory will use PAM3 signaling, which promises higher bandwidth than GDDR6 (which uses NRZ, also called PAM2, encoding) without the complications and higher power consumption imposed by GDDR6X's PAM4 signaling.
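
For context on those signaling schemes: each extra voltage level packs more bits into every symbol, so a PAM3 link can move 1.5x the data of an NRZ link at the same symbol rate, while PAM4 moves 2x but with tighter voltage margins. A back-of-the-envelope sketch (the 3-bits-per-2-symbols PAM3 mapping is as reported from Cadence's description, not a final spec):

```python
from math import log2

# Raw information per symbol for each line code:
#   NRZ/PAM2: 2 levels -> 1 bit/symbol
#   PAM3:     3 levels -> log2(3) ~ 1.58 bits/symbol theoretical;
#             GDDR7 reportedly maps 3 bits onto 2 symbols (1.5 bits/symbol)
#   PAM4:     4 levels -> 2 bits/symbol
schemes = {
    "NRZ/PAM2 (GDDR6)": 1.0,
    "PAM3 (GDDR7, 3 bits / 2 symbols)": 1.5,
    "PAM4 (GDDR6X)": 2.0,
}

print(f"theoretical PAM3 limit: {log2(3):.2f} bits/symbol")
for name, bits_per_symbol in schemes.items():
    # At a fixed symbol rate, data rate scales with bits per symbol.
    print(f"{name}: {bits_per_symbol} bits/symbol, "
          f"{bits_per_symbol:.1f}x NRZ at the same symbol rate")
```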

It should be noted that the formal introduction of a new type of memory does not necessarily mean immediate commercial adoption. Since GDDR7 uses a completely different encoding than GDDR6 and GDDR6X, it will require all-new memory controllers and therefore new GPUs. While it is logical to expect AMD, Intel, and Nvidia to introduce their next-generation GPUs in 2024 or early 2025, only those three companies know exactly when these graphics processors will ship.

Cadence already offers a GDDR7 verification solution, so adopters can ensure that their controllers and physical interfaces will comply with the final GDDR7 specification.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • -Fran-
    I'm pretty sure Intel, AMD and nVidia wanted this in late 2022 instead.

    Still, maybe we could see a mid-refresh with GDDR7? Maybe?

    Regards.
    Reply
  • InvalidError
    Normal memory is 2-3W per DIMM, GDDR6X is ~4W per chip, wonder how much power GDDR7 is going to guzzle at 32+Gbps.

    At some point, HBM is going to become necessary just to keep the IO power in check.
    Reply
  • Metal Messiah.
    Usage of this next-gen mem type in 'consumer' products seems like a pipe dream for now.

    Currently, NVIDIA provides the fastest memory solution with its RTX 40 series GPUs in the form of GDDR6X, which provides up to 22 Gbps pin speeds, while AMD's Radeon RX 7000 series cards utilize the standard 20 Gbps GDDR6 tech.

    So let's assume GDDR7 would deliver the following bandwidth figures, based on 36 Gbps pin speeds (a quick sketch reproducing these numbers follows this post):
    128-bit @ 36 Gbps: 576 GB/s
    192-bit @ 36 Gbps: 864 GB/s
    256-bit @ 36 Gbps: 1152 GB/s
    320-bit @ 36 Gbps: 1440 GB/s
    384-bit @ 36 Gbps: 1728 GB/s
    Reply
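
Those figures follow directly from peak bandwidth = bus width × per-pin rate ÷ 8. A minimal sketch reproducing the numbers above, assuming the commenter's speculative 36 Gbps pin speed:

```python
# Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8.
# The 36 Gbps pin speed is the commenter's assumption, not a confirmed
# GDDR7 figure.
PIN_SPEED_GBPS = 36

for bus_width_bits in (128, 192, 256, 320, 384):
    bandwidth_gbs = bus_width_bits * PIN_SPEED_GBPS / 8
    print(f"{bus_width_bits}-bit @ {PIN_SPEED_GBPS} Gbps: {bandwidth_gbs:.0f} GB/s")
```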
  • gg83
    Metal Messiah. said:
    Usage of this next-gen mem type in 'consumer' products seems like a pipe dream for now.

    Currently, NVIDIA provides the fastest memory solution with its RTX 40 series GPUs in the form of GDDR6X, which provides up to 22 Gbps pin speeds, while AMD's Radeon RX 7000 series cards utilize the standard 20 Gbps GDDR6 tech.

    So let's assume GDDR7 would deliver the following bandwidth figures, based on 36 Gbps pin speeds:
    128-bit @ 36 Gbps: 576 GB/s
    192-bit @ 36 Gbps: 864 GB/s
    256-bit @ 36 Gbps: 1152 GB/s
    320-bit @ 36 Gbps: 1440 GB/s
    384-bit @ 36 Gbps: 1728 GB/s
    Can you also explain the PAM3 vs PAM4? Is it for power management? Why would ddr6x use pam4 and ddr7 use pam3?
    Reply
  • InvalidError
    gg83 said:
    Can you also explain the PAM3 vs PAM4? Is it for power management? Why would ddr6x use pam4 and ddr7 use pam3?
    The rollback from PAM4 to PAM3 was almost certainly because they analyzed results from GDDR6X and concluded that timing jitter and phase noise from pushing higher clocks with PAM3 would be easier to deal with than keeping signals clean enough for PAM4 (a rough eye-margin illustration follows this post).
    Reply
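
The trade-off described here can be put in rough numbers: with a fixed voltage swing, N signal levels leave each eye roughly 1/(N-1) of the swing, so PAM4's extra bit per symbol costs a third of the noise margin versus PAM3's half. A simplified illustration, not measured data:

```python
# Vertical eye margin: with N signal levels sharing a fixed voltage swing,
# each eye gets roughly 1/(N-1) of the swing, so more levels mean less
# noise margin. Idealized figures for illustration only.
for name, levels in [("NRZ/PAM2", 2), ("PAM3", 3), ("PAM4", 4)]:
    eye_fraction = 1 / (levels - 1)
    print(f"{name}: {levels} levels, eye height ~{eye_fraction:.0%} of full swing")
```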
  • YouFilthyHippo
    If Jeedeedeeareseven comes out in the first half of 2024, when does Jeedeedeeareseveneques come out? Any chance we will see that on the 5090Ti?
    Reply
  • FunSurfer
    YouFilthyHippo said:
    If Jeedeedeeareseven comes out in the first half of 2024, when does Jeedeedeeareseveneques come out? Any chance we will see that on the 5090Ti?
    RTX 5000 is set for a spring 2025 release, so the timing is good for GDDR7 on the cards. As for individual cards:
    RTX 5090 Ti chance of sporting GDDR7: 100%
    RTX 5090 chance of sporting GDDR7: 100%
    RTX 5080 Ti chance of sporting GDDR7: 100%
    RTX 5080 chance of sporting GDDR7: 50%
    RTX 5070 Ti chance of sporting GDDR7: 50%
    RTX 5070 chance of sporting GDDR7: 10%
    RTX 5060 Ti chance of sporting GDDR7: 0%
    RTX 5060 chance of sporting GDDR7: 0%
    Reply
  • bit_user
    FunSurfer said:
    As for individual cards:
    ...
    RTX 5080 chance of sporting GDDR7: 50%
    RTX 5070 Ti chance of sporting GDDR7: 50%
    RTX 5070 chance of sporting GDDR7: 10%
    RTX 5060 Ti chance of sporting GDDR7: 0%
    RTX 5060 chance of sporting GDDR7: 0%
    Why? We've seen how Nvidia is keen on cutting the number of chips and memory channels on lower-end models. Faster RAM would let them do even more of that.
    Reply
  • thestryker
    InvalidError said:
    Normal memory is 2-3W per DIMM, GDDR6X is ~4W per chip, wonder how much power GDDR7 is going to guzzle at 32+Gbps.

    At some point, HBM is going to become necessary just to keep the IO power in check.
    GDDR6 @ 20 Gbps is ~4.8W and GDDR6X @ 20 Gbps is ~4.65W (21 Gbps is ~4.87W)

    I'm really curious how much they've been able to drop the power consumption on GDDR7, because the improvement from GDDR5X to GDDR6X was pretty small. If the difference between GDDR6X and GDDR7 matches that, you'd be looking at ~6.65W per chip. The real question client-side becomes whether or not higher capacities are available, which is something I haven't seen in any documents or press releases yet. If they're able to launch 32Gb capacity, that would mean client-side they could drop bus width fairly significantly without losing memory bandwidth over what is available today, so long as they use higher-end chips (see the sketch after this post).
    Reply
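
The capacity math behind that point: each GDDR chip occupies a 32-bit slice of the bus, so bus width fixes the chip count and per-chip density fixes capacity. A sketch under the thread's speculative numbers (the 32Gb density and 36 Gbps rate are assumptions, not announced parts):

```python
# Each GDDR chip sits on a 32-bit slice of the bus, so:
#   chips     = bus width / 32
#   capacity  = chips * per-chip density (Gbit -> GB is /8)
#   bandwidth = bus width * per-pin rate / 8
# The 32Gb density and 36 Gbps rate below are the thread's speculation.
def card(bus_width_bits, density_gbit, pin_speed_gbps):
    chips = bus_width_bits // 32
    capacity_gb = chips * density_gbit / 8
    bandwidth_gbs = bus_width_bits * pin_speed_gbps / 8
    return chips, capacity_gb, bandwidth_gbs

for label, bus, density, speed in [
    ("256-bit GDDR6X, 16Gb @ 21 Gbps (today)", 256, 16, 21),
    ("192-bit GDDR7, 32Gb @ 36 Gbps (hypothetical)", 192, 32, 36),
]:
    chips, cap, bw = card(bus, density, speed)
    print(f"{label}: {chips} chips, {cap:.0f} GB, {bw:.0f} GB/s")
```

Under those assumptions, a narrower 192-bit GDDR7 card would carry fewer chips yet offer both more capacity (24 GB vs 16 GB) and more bandwidth (864 GB/s vs 672 GB/s) than today's 256-bit GDDR6X setup.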
  • InvalidError
    bit_user said:
    Why? We've seen how Nvidia is keen on cutting the number of chips and memory channels, on lower-end models. Faster RAM would let them do even more of that.
    Reviews bashing Nvidia (and AMD to a lesser degree) for insufficient VRAM and VRAM bandwidth might motivate them to rethink their bus-shrinkage strategy. Current-gen stuff often runs into scaling issues that previous-gen stuff doesn't have when dipping below 256 bits/16GB. If next-gen bumps performance up ~50% for a given marketing tier like the 5090 rumors say it might, trimming the bus some more with GDDR7 won't be an option. At best, it might make it possible to keep things the same as they are now.

    thestryker said:
    I haven't seen in any documents or press releases yet. If they're able to launch 32Gb capacity, that would mean client-side they could drop bus width fairly significantly without losing memory bandwidth over what is available today, so long as they use higher-end chips.
    We already have 24Gbit DDR5, so 24Gbit GDDR7 should be quite feasible. As mentioned in my reply to bit_user above, though, I don't think further memory interface trimming will be possible, as they'll need the extra bandwidth to feed the higher-performance cores required to avoid having another "lowest sales in 20+ years" generation on their hands.
    Reply