Intel and Samsung Display cooperate to advance next-gen AI PCs into 'unchartered territory'
The companies have signed an agreement to develop displays tailored for AI devices like AI PCs.

Intel and Samsung Display have signed a memorandum of understanding (MOU) to develop displays tailored for AI devices, such as AI PCs, reports ZDNet Korea. With this collaboration, Intel might enhance its mobile platforms with displays tailored to the capabilities of its GPUs. Samsung Display, on the other hand, might strengthen its presence in the premium laptop market.
Through this collaboration, Samsung Display aims to refine screens that work seamlessly with Intel's AI-capable processors, such as the Core Ultra 200V-series 'Lunar Lake' chips and the upcoming Core Ultra 300-series 'Panther Lake' and Core 400-series 'Nova Lake' CPUs, which are expected to hit the market in 2025 and 2026, respectively. The stated goal is to 'enhance the computing experience,' though this is a rather vague description of the joint work.
In fact, Intel and Samsung Display have already been working together for years. For example, Samsung's latest Galaxy Book 5 laptops pair Intel's Core Ultra 200V-series processors with the company's own OLED displays.
"With the partnership with Intel, which keeps advancing the future of personal computing, we will be able to accelerate innovating next-generation display technologies," said Lee Ho-jung, Samsung Display's vice president for small and medium-sized product planning, in a statement published by Korea Times. "The partnership will usher in an unchartered territory of laptop computer user experiences and allow the two companies to lead the global AI PC market.”
Intel is by no means new to collaborating with display makers. Back in the day, it worked with Innolux and Sharp to develop low-power display technology to extend the battery life of Intel-based notebooks. Before that, the company worked with LG to enable its WiDi wireless display technology on LG's TVs and monitors. Intel and Samsung Display now believe that their collaborative work will push innovation forward and improve the user experience of AI-integrated devices, though again, this is a very vague description of what is to come.
Samsung Display competes fiercely against companies like BOE and LG Display for the premium laptop market. The collaboration with Intel should secure Samsung Display a place inside next-generation Intel Core-based laptops, and joint marketing campaigns may further draw attention to technologies developed by both companies and boost brand recognition.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
Pierce2623
Why do we use the term "AI PCs" when any processor with modern extension support can run AI workloads on the CPU? Using such a ridiculous term for a PC with a TINY piece of fixed-function silicon doing matrix math is silly. You can even get USB matrix accelerators (generally based on the Google Coral) and effectively accelerate smaller LLMs, etc., as long as you understand how to get it working (which, funnily enough, also applies to the current integrated TPU/NPU solutions).
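As an aside, running a small LLM entirely on the CPU really is straightforward. Here is a minimal sketch using llama-cpp-python; the GGUF model file and thread count are illustrative assumptions, not anything from the comment above:
```python
# CPU-only LLM inference with llama-cpp-python; no NPU or GPU involved.
# Assumes a small GGUF model has been downloaded locally (path is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.2-1b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=2048,    # context window
    n_threads=8,   # runs entirely on CPU threads
)

out = llm(
    "Q: What does an NPU accelerate? A:",
    max_tokens=64,
    stop=["Q:"],   # stop before the model invents the next question
)
print(out["choices"][0]["text"])
```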
usertests
Sounds like a load of BS.
Pierce2623 said: Using such a ridiculous term for a PC with a TINY piece of fixed-function silicon doing matrix math is silly.
I don't know if anyone has measured it yet, but supposedly a decent chunk of APU die area is going to XDNA2 (50 TOPS). Lisa Su joked about it on stage. And that could be set to increase if they are looking to, for example, double the performance with the next iteration. Newer process nodes will keep the size in check, but a lot of transistors are being devoted to this.
Better find something to do with NPUs, because they are here to stay for at least the next few years.
bit_user
I see that the source publication also used the phrase "unchartered territory", but that makes no sense. It should be "uncharted territory".
bit_user
usertests said: Sounds like a load of BS.
Yeah, I can't think of anything AI-related you could do with a display, other than upscaling. Maybe they have some clever ideas, but I think it's just marketing spin put on their continued partnership.
usertests said: I don't know if anyone has measured it yet, but supposedly a decent chunk of APU die area is going to XDNA2 (50 TOPS). ...
It's not small; it looks bigger than 4 full-size Zen 5 cores and their L2 cache. Source: https://www.techpowerup.com/325035/amd-strix-point-silicon-pictured-and-annotated
usertests said: Better find something to do with NPUs, because they are here to stay for at least the next few years.
I've tried, but pretty much everything I can think of is either something you could do with a neural network or something you could at least express in the type of processing graphs they use. It's further complicated by the fact that most of their compute power is decidedly low-precision, low-range. Also, I think they basically require data to be DMA'd in/out of local memory, so you can't really do anything with them that would require much random access.
GPUs are far more flexible for mapping onto general compute workloads. They make heavy use of SMT as a means of hiding random-access latencies, which works well as long as you're doing something sufficiently parallelizable. They also have excellent support for fp32 and even int32, which are much better suited to general computation.
What's interesting about the PS5 Pro is that Sony took the approach of specializing RDNA2's compute units to better handle AI workloads (up to 300 TOPS worth!), instead of bolting on an adjunct NPU. AMD and Sony now have a joint project to better explore such architectures, which seems like it might influence UDNA.
TerryLaze
Pierce2623 said: Why do we use the term "AI PCs" when any processor with modern extension support can run AI workloads on the CPU?
So that customers can easily tell what they are looking at and can buy or avoid buying it; the same reason anything is labeled.
Pierce2623
TerryLaze said: So that customers can easily tell what they are looking at and can buy or avoid buying it; the same reason anything is labeled.
But these "AI" PCs don't offer any services you can't access without an NPU. So what makes them AI? So everything with a modern GPU is also an AI PC since they have enough shaders to easily compete with an NPU? What about just running AI models on your CPU? You can still get real-time responses from stuff like Llama with no GPU acceleration. Most modern phone chips had an NPU before the AI craze even caught on. I'm afraid you're confusing marketing with "letting you know what you're buying".
bit_user
Pierce2623 said: But these "AI" PCs don't offer any services you can't access without an NPU.
NPUs offer more inferencing horsepower than either CPU or GPU, and better power efficiency as well. The amount of horsepower is important for realtime processing tasks, like image upscaling or other video processing techniques (e.g., background removal for video conferencing). Power efficiency is important for laptops, so they don't need a bulky, heavy cooling solution and so customers can use these features while on battery.
Pierce2623 said: So everything with a modern GPU is also an AI PC since they have enough shaders to easily compete with an NPU?
The main race for AI PCs is in laptops, which can only accommodate a dGPU at considerable expense, bulk, and power consumption.
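As a back-of-the-envelope illustration of that realtime-horsepower point, here is a small Python sketch; the per-frame compute cost is hypothetical, not a figure from either post:
```python
# Feasibility check: sustained throughput needed for a realtime video effect.
# The per-frame cost below is hypothetical; real models vary widely.
gops_per_frame = 250.0   # hypothetical cost of one upscaling pass, in GOPs
fps = 60                 # realtime target

required_tops = gops_per_frame * fps / 1000   # 1 TOPS = 1000 GOPS
print(f"sustained throughput needed: {required_tops:.0f} TOPS")   # -> 15 TOPS

# A 48-TOPS NPU could sustain this at roughly a third of its peak, leaving
# headroom for other effects; a CPU delivering a few TOPS could not keep up.
print(f"utilization on a 48-TOPS NPU: {required_tops / 48:.0%}")  # -> 31%
```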
Pierce2623
bit_user said: NPUs offer more inferencing horsepower than either CPU or GPU, and better power efficiency as well. ...
My point was that a modern iGPU offers more inferencing power than an NPU. Look at Lunar Lake. The GPU offers more "AI TOPS" (which is literally just Tflops at a lower-precision format) than the NPU, and AMD's would too if they rated it for "AI TOPS".
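To make the "TOPS is just low-precision throughput" point concrete, here is a sketch of the arithmetic behind a peak-"AI TOPS" figure; the MAC count and clock are hypothetical, not specs of Lunar Lake or any other chip:
```python
# Peak "AI TOPS" is essentially: MAC units x 2 ops per MAC (multiply + add)
# x clock frequency, quoted at a low precision such as INT8.
# The unit count and clock below are hypothetical.
int8_macs = 16384   # hypothetical INT8 MAC array size
clock_ghz = 1.5     # hypothetical sustained clock, in GHz

peak_tops = int8_macs * 2 * clock_ghz / 1000   # giga-ops -> tera-ops
print(f"peak INT8 throughput: {peak_tops:.1f} TOPS")  # -> 49.2 TOPS

# The same array quoted at FP16 (half the MAC rate, typical of such designs)
# would be marketed at half the "AI TOPS" -- the silicon didn't change.
print(f"as FP16 TFLOPS: {peak_tops / 2:.1f}")
```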
bit_user
Pierce2623 said: My point was that a modern iGPU offers more inferencing power than an NPU. Look at Lunar Lake. ...
Okay, if we take the example of Lunar Lake: because Intel went so big on its iGPU, it does actually overtake the NPU on raw TOPS (60 vs. 48). However, the iGPU apparently consumes about 50% more area, and I'm sure it burns much more power at 60 TOPS than the NPU does. In Meteor Lake, Intel had some slides showing their relative power efficiency, but I'm not seeing equivalent slides for Lunar Lake.
It's power efficiency that's the main value-add of these NPUs, and it's why AMD said they're not in a hurry to include them in desktop CPUs.
Sources: https://www.tomshardware.com/pc-components/cpus/intel-unwraps-lunar-lake-architecture-up-to-68-ipc-gain-for-e-cores-16-ipc-gain-for-p-cores/5
As for the AMD remark, I read it in an interview with an AMD exec. Don't remember where, but maybe I can find it if you don't believe me.
One thing you could do with a dedicated NPU is almost completely offload AI upscaling from the GPU.
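On that note, a minimal sketch of steering inference onto Intel's NPU instead of the iGPU via OpenVINO's device selection; the model file is hypothetical, while "CPU", "GPU", and "NPU" are OpenVINO's standard device names:
```python
# Compile the same model for the NPU vs. the iGPU and let the runtime place it.
# The IR model file is hypothetical; device names are OpenVINO's own.
import openvino as ov

core = ov.Core()
print(core.available_devices)   # e.g. ['CPU', 'GPU', 'NPU'] on a Lunar Lake laptop

model = core.read_model("upscaler.xml")        # hypothetical OpenVINO IR model
compiled = core.compile_model(model, "NPU")    # offload inference to the NPU
# compiled = core.compile_model(model, "GPU")  # ...or keep it on the iGPU instead

infer = compiled.create_infer_request()        # ready to feed frames for upscaling
```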