Nvidia rumored to ditch first-gen SOCAMM1 custom memory form factor in favor of faster ‘SOCAMM2’ standard
Revised spec now being sampled by Micron, Samsung, and SK hynix.

Nvidia has reportedly cancelled its first-gen SOCAMM (System-on-Chip Attached Memory Module) rollout and is shifting development focus to a new version known as SOCAMM2. This is according to Korean outlet ETNews citing unnamed industry sources. Based on a machine translation, the outlet claims that SOCAMM1 was halted after technical setbacks and that SOCAMM2 sample testing is now underway with all three major memory vendors.
The abandonment of SOCAMM1, if accurate, resets what was expected to be a fast-tracked rollout of modular LPDDR-based memory in Nvidia’s data center stack. SOCAMM has been positioned as a new class of high-bandwidth, low-power memory for AI servers, delivering similar benefits to HBM but at a lower cost.
Nvidia itself has already listed SOCAMM in product documentation. Its GB300 NVL72 spec sheet confirms support for up to 18TB of LPDDR5X-based SOCAMM and bandwidths of 14.3 TB/s.
Micron announced back in March that it was the “first and only memory company” shipping SOCAMM products for AI servers in the data center. In contrast, Samsung and SK hynix had stated in conference calls that they were preparing for mass production in Q3 2025. If SOCAMM1 has truly been shelved, as the unnamed sources claim, the timing could give Samsung and SK hynix a second shot at closing the gap with Micron.
ETNews suggests that SOCAMM2 will boost data rates from 8,533 MT/s to 9,600 MT/s and may add support for LPDDR6, though no vendor has confirmed that yet. Industry reports from earlier this summer forecast 600,000 to 800,000 SOCAMM units shipping this year, suggesting that Nvidia was serious about deploying SOCAMM1 at scale. Whether those plans have been paused or are simply evolving remains unclear.
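For context, the rumored step from 8,533 MT/s to 9,600 MT/s works out to roughly a 12.5% uplift in per-pin data rate. A quick back-of-the-envelope check (note this only compares transfer rates; actual module bandwidth also depends on bus width and channel count, which the report does not specify):

```python
# Back-of-the-envelope comparison of the reported SOCAMM data rates.
socamm1_rate = 8533   # MT/s, current LPDDR5X-based SOCAMM per the report
socamm2_rate = 9600   # MT/s, rumored SOCAMM2 target

uplift = (socamm2_rate - socamm1_rate) / socamm1_rate
print(f"Per-pin data-rate uplift: {uplift:.1%}")  # -> 12.5%
```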
Nvidia has not commented on the report — and never comments on rumors in any case — and none of the memory vendors have confirmed a change in direction. But with demand for AI memory exploding, and HBM supply becoming more and more constrained, SOCAMM is shaping up to become a major component in Nvidia’s silicon roadmap, making a jump from SOCAMM1 to SOCAMM2 a feasible move.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
-
bit_user
Huh? Is that really what it stands for?
The article said: "SOCAMM (System-on-Chip Attached Memory Module)"
I figured SOCAMM was mixing the terms SODIMM (Small-Outline DIMM) and CAMM (Compression-Attached Memory Module), to give us a Small-Outline Compression-Attached Memory Module. That at least would make more sense. Nothing about SOCAMM directly involves a system-on-a-chip, so far as I'm aware.
I can definitely find examples consistent with my reading of it, but I haven't yet come across an official source.
https://embeddedcomputing.com/technology/storage/socamm-the-new-memory-kid-on-the-ai-block
https://www.fierceelectronics.com/embedded/microns-new-socamm-memory-device-part-nvidia-blackwell-ultra
https://www.ainvest.com/news/micron-leads-nvidia-preferred-supplier-generation-socamm-memory-solution-2506/