Nvidia rumored to ditch its first-gen SOCAMM custom memory form factor for the newer, faster 'SOCAMM2' standard

(Image credit: Micron)

Nvidia has reportedly cancelled its first-gen SOCAMM (System-on-Chip Attached Memory Module) rollout and is shifting development focus to a new version known as SOCAMM2, according to Korean outlet ETNews, which cites unnamed industry sources. A machine translation of the report indicates that SOCAMM1 was halted after technical setbacks and that SOCAMM2 sample testing is now underway with all three major memory vendors.

If accurate, the abandonment of SOCAMM1 resets what was expected to be a fast-tracked rollout of modular LPDDR-based memory in Nvidia’s data center stack. SOCAMM has been positioned as a new class of high-bandwidth, low-power memory for AI servers, delivering similar benefits to HBM but at a lower cost.

Nvidia has not commented on the report (and never comments on rumors in any case), and none of the memory vendors have confirmed a change in direction. But with demand for AI memory exploding and HBM supply increasingly constrained, SOCAMM is shaping up to become a major component of Nvidia’s data center roadmap, which makes a jump from SOCAMM1 to SOCAMM2 a plausible move.

Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.

  • bit_user
    The article said:
    SOCAMM (System-on-Chip Attached Memory Module)
    Huh? Is that really what it stands for??

    I figured SOCAMM was mixing the terms SODIMM (Small-Outline DIMM) and CAMM (Compression-Attached Memory Module), to give us a Small-Outline Compression-Attached Memory Module. That at least would make more sense. Nothing about SOCAMM directly involves a system-on-a-chip, so far as I'm aware.

    I can definitely find examples consistent with my reading of it, but I haven't yet come across an official source.
    https://embeddedcomputing.com/technology/storage/socamm-the-new-memory-kid-on-the-ai-block
    https://www.fierceelectronics.com/embedded/microns-new-socamm-memory-device-part-nvidia-blackwell-ultra
    https://www.ainvest.com/news/micron-leads-nvidia-preferred-supplier-generation-socamm-memory-solution-2506/
    Reply
  • Stomx
    Yeah, the writer got a little bit confused, because his translation also made some sense.

    I was also confused once by the acronym TBW (TeraBytes Written), thinking it meant TeraBytes per Week. What confused me was the acronym DWPD (Drive Writes Per Day).

    As a result, I thought NVMe SSDs were almost eternal devices that would never wear out. But when my drives started catching errors and problems within a year or two, I learned the hard way what TBW actually is and how eternal they really are.
    Reply
  • bit_user
    Stomx said:
    I was also confused once by the acronym TBW (TeraBytes Written), thinking it meant TeraBytes per Week. What confused me was the acronym DWPD (Drive Writes Per Day).
    Those are easily looked up, though. If you were using either of those terms in a published article, I'd fully expect you to do so.
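
    For what it's worth, the two ratings express the same endurance budget in different units, and converting between them is simple arithmetic. A rough sketch (the 1 TB capacity, 5-year warranty window, and the ratings below are made-up examples; actual warranty periods vary by vendor):
    ```python
    # Rough conversion between DWPD and TBW endurance ratings, assuming the
    # rating is quoted over the drive's warranty period. All figures below
    # are illustrative, not taken from any real product datasheet.

    def dwpd_to_tbw(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
        """Convert a Drive Writes Per Day rating into total TeraBytes Written."""
        return dwpd * capacity_tb * 365 * warranty_years

    def tbw_to_dwpd(tbw: float, capacity_tb: float, warranty_years: float) -> float:
        """Convert a TeraBytes Written rating back into Drive Writes Per Day."""
        return tbw / (capacity_tb * 365 * warranty_years)

    # Hypothetical 1 TB drive with a 5-year warranty:
    print(dwpd_to_tbw(0.3, 1.0, 5))    # 547.5 TBW
    print(tbw_to_dwpd(600.0, 1.0, 5))  # ~0.33 DWPD
    ```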

    Furthermore, if you know anything about CAMMs (as I'd hope the author does) or SODIMMs, then you should really do a double-take before accepting the explanation that its meaning is essentially SoC-AMM. The only reason I cut the author some slack, in this case, is that I couldn't find any official doc or press release from Nvidia on SOCAMM, which is why I cited those other three sources. However, I did just find this press release from Micron:
    https://investors.micron.com/news-releases/news-release-details/micron-innovates-data-center-edge-nvidia
    In that press release, they do spell it out:
    "Micron Technology, Inc. (Nasdaq: MU), today announced it is the world’s first and only memory company shipping both HBM3E and SOCAMM (small outline compression attached memory module) products for AI servers in the data center."
    As the (then) only producer of SOCAMMs, Micron gets the final word on what it means.

    So, that's that.
    Reply
  • thestryker
    There's been a lot of talk about SOCAMM cancellation, so it'll be interesting to see what the reality ends up being. It's an interesting concept, and I'd love to know what issues they've been running into (I don't expect these will be made public any time soon, if ever).
    Reply
  • bit_user
    thestryker said:
    There's been a lot of talk about SOCAMM cancellation, so it'll be interesting to see what the reality ends up being. It's an interesting concept, and I'd love to know what issues they've been running into (I don't expect these will be made public any time soon, if ever).
    Well, those rumors totally make sense, in light of the revision news. So, what would've been cancelled was v1 of the standard.

    It's nice to hear that v2 of the standard will have a baseline speed of 9600 MT/s. Keep in mind that the latency penalty of LPDDR is inversely proportional to the interface speed. So, I'm not too bothered by this being LPDDR-based, especially if it can continue scaling to yet higher speeds.
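
    To put rough numbers on that: if you model the penalty as a fixed number of extra clock cycles, its cost in nanoseconds shrinks as the transfer rate climbs. A quick sketch (the 20-cycle penalty is just an illustration, not a published timing):
    ```python
    # Back-of-the-envelope look at why a fixed cycle-count latency penalty
    # matters less at higher transfer rates. DDR-style signaling transfers
    # data on both clock edges, so the clock runs at half the MT/s figure.

    def cycle_time_ns(transfer_rate_mts: float) -> float:
        """Clock period in nanoseconds for a given transfer rate in MT/s."""
        clock_mhz = transfer_rate_mts / 2
        return 1000.0 / clock_mhz

    def penalty_ns(extra_cycles: int, transfer_rate_mts: float) -> float:
        """Extra latency in nanoseconds for a fixed cycle-count penalty."""
        return extra_cycles * cycle_time_ns(transfer_rate_mts)

    # The same hypothetical 20-cycle penalty at a few LPDDR speed grades:
    for mts in (6400, 8533, 9600):
        print(f"{mts} MT/s -> ~{penalty_ns(20, mts):.2f} ns")
    # 6400 MT/s -> ~6.25 ns, 8533 MT/s -> ~4.69 ns, 9600 MT/s -> ~4.17 ns
    ```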
    Reply