Intel's Lunar Lake CPUs to use on-package Samsung LPDDR5X memory
Samsung is rumored to supply LPDDR5X for Intel's Lunar Lake.
Intel has reportedly contracted Samsung to supply LPDDR5X devices for use as on-package memory in its upcoming Lunar Lake processors, due later this year, according to a DigiTimes report citing South Korean media. If the information is correct, this is a big design win for Samsung, as Intel is expected to ship tens of millions of Lunar Lake CPUs over the next few years. Keep in mind that this is a leak and could be inaccurate.
Intel's Lunar Lake MX platform is reportedly designed primarily for thin-and-light laptops. It is set to come with either 16GB or 32GB of LPDDR5X-8533 memory on package, reducing the platform's footprint and improving performance compared to traditional platforms with memory modules or soldered-down memory chips. Given that Lunar Lake is set to support on-package memory exclusively, Samsung could sell a boatload of its LPDDR5X-8533 products to Intel, as the company's laptop platforms sell in the tens of millions of units.
Meanwhile, we do not know whether Samsung will be the exclusive LPDDR5X supplier for Lunar Lake. Since Intel will sell Lunar Lake processors with on-package memory, it will clearly ship these products with pre-tested and validated memory devices. Yet nothing would stop Intel from also validating LPDDR5X from Micron and SK hynix.
Intel has touted Lunar Lake processors as featuring a brand-new microarchitecture designed from the ground up to offer breakthrough performance-per-watt efficiency. Based on recent slides, Intel's Lunar Lake MX platform will rely on a multi-chiplet Foveros-based design consisting of a combined CPU and GPU chiplet, a system-on-chip tile, and two memory packages. The CPU chiplet is expected to pack up to eight general-purpose cores (four high-performance Lion Cove cores and four energy-efficient Skymont cores), 12MB of cache, up to eight Xe2 GPU clusters, and up to a six-tile NPU 4.0 AI accelerator. The platform is projected to have an 8W power envelope for fanless systems and a 17W to 30W envelope for designs with decent active cooling.
For over three years, Apple has used on-package memory for all of its Apple Silicon M-series chips for Macs. With Intel's Lunar Lake MX, this may become an industry-wide trend for thin-and-light laptops. Meanwhile, systems that require configurability, repairability, and upgradeability will continue to use SODIMMs based on commodity DDR5 memory, as well as the recently introduced LPCAMM2 modules featuring LPDDR5X, which combine high performance with low power consumption.
Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
usertests:
Article: "and up to a six-tile NPU 4.0 AI accelerator"
Help. I don't know squat about Intel's NPUs. How many TOPS do they claim for Meteor Lake, and how many tiles is it? Why is this NPU version 4.0?
Lunar Lake is supposed to have triple the AI performance, so I'd guess 2-tile in Meteor Lake.
bit_user:
usertests said: "Help. I don't know squat about Intel's NPUs. How many TOPS do they claim for Meteor Lake?"
Dunno, but it's not faster than their 128 EU iGPU (which lacks XMX cores, IIRC).
Source: https://www.tomshardware.com/news/intel-details-core-ultra-meteor-lake-architecture-launches-december-14
usertests said: "and how many tiles is it?"
Based on this slide, I'm guessing 2? (same source as above)
usertests said: "Why is this NPU version 4.0?"
Intel's original AI accelerator was called GNA (Gaussian/Neural Accelerator), which underwent two revisions. I guess they must be including those in order to count the upcoming one as generation 4.
usertests said: "Lunar Lake is supposed to have triple the AI performance, so I'd guess 2-tile in Meteor Lake."
Yes. Given Gelsinger's recent comments about Lunar Lake having 3x AI performance, it lines up nicely with scaling from 2 engines to 6.
bit_user:
Article: "It is set to come with either 16GB or 32GB of LPDDR5X-8533 memory-on-package, reducing the platform's footprint and improving performance compared to traditional platforms featuring either memory modules or soldered-down memory chips."
The bandwidth will be nice, but the real benefits are cost & power savings. I think you could probably achieve LPDDR5X-8533 off-package soldered (or LPCAMM modules?), but possibly at greater cost & power.
Note that LPDDR5(X) has significantly worse latency than regular DDR5. So, it's not a pure win, especially if they limit the memory datapath to just 128-bit. The main benefit for Apple of using on-package memory is their ability to scale up to 512-bit, but that certainly won't happen in a "thin & light" x86 laptop.
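For rough context on the datapath point above, here is a minimal back-of-the-envelope sketch in Python of how theoretical peak bandwidth scales with bus width at LPDDR5X-8533 speeds. The 128-bit figure for Lunar Lake and the same-data-rate 512-bit comparison are assumptions for illustration only, not confirmed specifications.

# Back-of-the-envelope peak-bandwidth sketch (illustrative assumptions, not official specs).
# Peak bandwidth in bytes/s = transfers per second * bus width in bytes.
def peak_bandwidth_gbps(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given data rate (MT/s) and bus width."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbps(8533, 128))  # assumed 128-bit thin-and-light bus: ~136.5 GB/s
print(peak_bandwidth_gbps(8533, 512))  # Apple-style 512-bit bus at the same data rate: ~546.1 GB/s

As the numbers suggest, a 128-bit LPDDR5X-8533 configuration delivers only a fraction of what a 512-bit setup would, which is why on-package memory alone doesn't close the gap with Apple's wider memory subsystems.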