Samsung introduces SOCAMM2 LPDDR5X memory module for AI data centers — new standard set to offer reduced power consumption and double the bandwidth versus DDR5 RDIMMs

Samsung SOCAMM2 LPDDR5X
(Image credit: Samsung)

Samsung has announced its own SOCAMM2 LPDDR5X-based memory module designed specifically for AI data center platforms. The module aims to bring the power efficiency and bandwidth advantages of LPDDR5X to servers without the long-standing trade-off of permanent soldering, while aligning the form factor with an emerging JEDEC standard for accelerated and AI-focused systems.

Samsung says it is already working with Nvidia on accelerated infrastructure built around the module, and positions SOCAMM2 as a natural response to rising memory power costs, density constraints, and serviceability concerns in large-scale deployments.

At a high level, SOCAMM2 is aimed at a specific and growing class of systems where CPUs or CPU-GPU superchips are paired with large pools of system memory that must deliver high bandwidth at lower power than conventional server DIMMs can provide, and all within a smaller footprint. As inference workloads expand and AI servers transition to sustained, always-on operation, memory power efficiency can’t continue to be viewed as a secondary optimization; it is a material contributor to rack-level operating cost. SOCAMM2 is a reflection of this.

Why LPDDR is moving into the data center

LPDDR has long been associated with smartphones, an ideal application for its low-voltage operation and aggressive power management. In servers, however, its adoption has been limited by one practical issue more than any other: LPDDR is typically soldered directly to the board, which complicates upgrades, repairs, and hardware reuse at scale. That makes it a difficult sell for hyperscalers and other prospective adopters who expect to refresh memory independently of the rest of the platform.

SOCAMM2 is Samsung’s attempt to address this mismatch. The module uses LPDDR5X devices, but packages them into a detachable, compression-attached form factor designed for server deployments. Samsung says SOCAMM2 offers up to twice the bandwidth of DDR5 RDIMMs, along with reduced power consumption and a more compact footprint that can ease board routing and cooling in dense systems. The company also emphasizes serviceability, arguing that modular LPDDR allows memory to be replaced or upgraded without scrapping entire boards, reducing downtime and total cost of ownership over a system’s lifetime.
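A back-of-envelope calculation shows where a claim like "twice the bandwidth" can come from. The figures below are illustrative assumptions rather than Samsung's published specifications: a SOCAMM2 module with a 128-bit interface running LPDDR5X at 8,533 MT/s, compared against a single 64-bit DDR5-6400 RDIMM channel (data pins only).

```python
# Back-of-envelope module bandwidth comparison.
# All figures are illustrative assumptions, not vendor specifications.

def module_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers/s * bits per transfer / 8 bits per byte."""
    return data_rate_mts * 1e6 * bus_width_bits / 8 / 1e9

# Assumed: 128-bit SOCAMM2 interface populated with LPDDR5X-8533 devices.
socamm2 = module_bandwidth_gbs(8533, 128)   # ~136.5 GB/s
# Assumed: one 64-bit DDR5-6400 RDIMM channel, excluding ECC bits.
rdimm = module_bandwidth_gbs(6400, 64)      # ~51.2 GB/s

print(f"SOCAMM2 (assumed):    {socamm2:.1f} GB/s")
print(f"DDR5 RDIMM (assumed): {rdimm:.1f} GB/s")
print(f"Ratio: {socamm2 / rdimm:.2f}x")     # ~2.7x under these assumptions
```

Under these assumed inputs the ratio actually exceeds 2x; the real multiple depends on the interface width, data rates, and channel counts a given platform supports.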

Samsung’s SOCAMM2 is expected to comply with the JEDEC JESD328 standard for compression-attached memory modules under the CAMM2 umbrella. The standard aims to make LPDDR-based memory modules interchangeable and vendor-agnostic, much as standard RDIMMs are today, while preserving the signal integrity needed to run LPDDR5X at very high data rates. As AI racks consume increasingly large memory pools, DDR5 will continue to incur power and thermal penalties that scale poorly with capacity. SOCAMM2 offers a way to raise effective bandwidth while cutting energy consumption, provided it can be integrated into platforms designed for modular memory.

SOCAMM2 versus RDIMM

SK hynix

(Image credit: SK hynix)

Understanding where SOCAMM2 fits requires looking at the full memory hierarchy in AI systems. At the top sits HBM, tightly coupled into the same package as GPUs or accelerators to deliver extreme bandwidth at the cost of high price and constrained capacity. HBM is indispensable for training and high-throughput inference, but it is not a general-purpose memory solution. Below that, traditional DDR5 DIMMs provide large, relatively inexpensive capacity for CPUs, but with higher power draw and lower bandwidth per pin.

SOCAMM2 is aimed at this lower tier. By using LPDDR5X, it can operate at lower voltages and achieve higher per-pin data rates than DDR5, translating into better bandwidth per watt for CPU-attached memory. Samsung positions it as complementary to HBM rather than competitive, filling the gap between accelerator local memory and slower, more power-hungry system memory.

Samsung’s messaging suggests that SOCAMM2 is particularly well-suited to inference-heavy deployments, where sustained throughput and energy efficiency matter more than peak training performance. In those environments, shaving watts from memory power can have outsized effects at the rack and data hall level, especially as inference workloads tend to run continuously rather than in bursts.
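The rack-level argument is easy to sketch with placeholder numbers. Every input below is a hypothetical value chosen purely for illustration, not a measured or vendor-quoted figure: a per-module power saving, an assumed memory population per server, and a rack of such servers.

```python
# Rough rack-level impact of per-module memory power savings.
# Every input here is a hypothetical placeholder, not measured data.

watts_saved_per_module = 5.0   # assumed saving vs. an RDIMM, in watts
modules_per_server = 16        # assumed memory population per server
servers_per_rack = 16          # assumed rack density
pue = 1.3                      # assumed facility overhead (cooling, power delivery)

rack_watts = watts_saved_per_module * modules_per_server * servers_per_rack * pue
kwh_per_year = rack_watts * 24 * 365 / 1000

print(f"Rack-level saving: {rack_watts:.0f} W")           # ~1,664 W
print(f"Energy saved per year: {kwh_per_year:,.0f} kWh")  # ~14,600 kWh
```

Even a modest single-digit-watt saving per module compounds to kilowatts per rack once facility overhead is included, which is the scale at which operators start to care.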

There is, however, a fundamental latency trade-off baked into SOCAMM2's design. LPDDR5X achieves higher bandwidth and lower power through design choices that increase access latency compared with standard DDR5 DRAM. That is one of the reasons LPDDR has historically been confined to tightly controlled system designs rather than socketed server or desktop memory.

AI workloads, on the other hand, operate under a different set of constraints. Training and inference pipelines are bandwidth-bound and highly parallel, with performance dominated by sustained data movement. In that context, LPDDR5X's higher latency is largely amortized, while its higher transfer rates and lower power consumption deliver measurable gains.
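One way to see why the latency is "amortized" is Little's law: sustaining a target bandwidth at a given access latency requires a proportional number of requests in flight, and highly parallel AI pipelines supply exactly that concurrency. The numbers below are illustrative, not measurements of any specific device.

```python
# Little's law applied to memory: concurrency = bandwidth * latency.
# Illustrative inputs only; not measurements of any specific device.

def requests_in_flight(bandwidth_gbs: float, latency_ns: float,
                       line_bytes: int = 64) -> float:
    """Outstanding cache-line requests needed to sustain the target bandwidth."""
    bytes_per_ns = bandwidth_gbs  # 1 GB/s is 1 byte/ns
    return bytes_per_ns * latency_ns / line_bytes

# Assumed 100 GB/s sustained throughput at 100 ns access latency:
print(requests_in_flight(100, 100))   # ~156 outstanding requests
# Adding 20 ns of extra latency raises the required queue depth, not the ceiling:
print(requests_in_flight(100, 120))   # ~188 outstanding requests
```

A streaming, massively parallel workload can keep that many requests outstanding, so extra latency costs queue depth rather than throughput; an interactive desktop workload often cannot, which is where latency-sensitive applications pay the price.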

So, while modular LPDDR form factors have struggled to gain traction in deployments such as consumer desktops, where interactive applications (such as games) are acutely sensitive to memory latency, they have found a more natural fit in AI applications where throughput and efficiency matter more.

Standardization, ecosystem support, and open questions

One of the most consequential aspects of SOCAMM2 is not the module itself, but the fact that it is being aligned with a JEDEC standard. Memory buyers are wary of proprietary form factors that lock them into a single vendor, and server platforms live or die by ecosystem support. Tying SOCAMM2 to an open specification lowers that barrier and gives other memory suppliers and platform vendors a clear path to participate.

Micron has already publicly stated that it is sampling SOCAMM2 modules with capacities reaching 192 GB, indicating that the form factor is not limited to niche configurations. High-capacity modules are essential if SOCAMM2 is to be taken seriously as a replacement for, or supplement to, RDIMMs in AI servers, where per-socket memory footprints can be enormous.

Even with standardization underway, several technical questions remain open. Thermal behavior under sustained load is one of them. LPDDR devices are efficient, but packing many of them into a compact module introduces heat density challenges, particularly in horizontally mounted configurations. Signal integrity at the upper end of LPDDR5X data rates is another concern, especially as platforms approach the limits of what board layouts and connectors can reliably support.

Micron SOCAMM module

(Image credit: Micron)

Reliability and error handling could also present challenges. Enterprise buyers expect robust ECC support, telemetry, and predictable failure modes. JEDEC’s inclusion of SPD and management features in the SOCAMM2 specification is meant to address this, but real-world validation will depend on platform implementations and firmware maturity.

Finally, there is the question of cost. LPDDR5X is not inherently cheaper than DDR5, and SOCAMM2 adds new packaging and mechanical complexity. Its value proposition rests on total system economics rather than module price in isolation. Lower power draw can reduce cooling requirements and operating costs over years of deployment, and modularity can improve asset utilization by allowing memory to be reused or upgraded independently. Whether those savings outweigh any upfront premium will vary by deployment and is likely to be a deciding factor in adoption.
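That trade reduces to a simple break-even calculation. As a minimal sketch, assuming a hypothetical per-module price premium recovered purely through electricity and cooling savings, with every input a placeholder rather than real pricing:

```python
# Break-even time for a hypothetical per-module price premium,
# recovered through power and cooling savings. Placeholder inputs only.

premium_usd = 40.0     # assumed extra cost per SOCAMM2 module vs. an RDIMM
watts_saved = 5.0      # assumed per-module power saving
pue = 1.3              # assumed facility overhead multiplier
usd_per_kwh = 0.10     # assumed electricity price

kwh_per_year = watts_saved * pue * 24 * 365 / 1000
savings_per_year = kwh_per_year * usd_per_kwh

print(f"Saving per module per year: ${savings_per_year:.2f}")    # ~$5.69
print(f"Break-even: {premium_usd / savings_per_year:.1f} years")  # ~7.0 years
```

Under these placeholder numbers, power savings alone take years to pay back a premium, which suggests serviceability and module reuse, neither of which this sketch captures, may carry more of the economic case.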

Ultimately, Samsung’s SOCAMM2 announcement fits into a broader pattern of the data center industry revisiting assumptions that were baked in when servers were built primarily for general-purpose computing. AI workloads have changed the balance between compute, memory, power, and serviceability, and memory vendors are responding with form factors that would have seemed unnecessary a decade ago. SOCAMM2 does not redefine server memory on its own, but it reflects a recognition that the traditional memory DIMM might not be a viable solution for AI systems at scale.

Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.