Intel's AI PC chips aren't selling well — instead, old Raptor Lake chips boom

(Image credit: Intel)

Times are already tough for Intel, and now it turns out its new, heavily promoted AI PC chips aren't selling as well as expected, creating a shortage of production capacity for its older chips. The news comes as the CEO announced looming layoffs and a poor financial report sent the company's stock tumbling.

Intel says its customers are buying less expensive previous-generation Raptor Lake chips instead of the new, and significantly more expensive, AI PC models like the Lunar Lake and Meteor Lake chips for laptops.

(Chart: Intel's Q1 Client Computing Group financial results. Image credit: Intel)

As you can see above, Intel's Q1 financial results for its Client Computing Group (CCG), which addresses the consumer chip market, are less than stellar, as revenue slumped 8% compared to the same period last year.

It is also noteworthy that Intel's last two generations of chips have been plagued by persistent reliability issues that necessitate full chip replacements for impacted products. Those chips are fabbed on the 'Intel 7' process node, so some of the unexpected production capacity constraints could stem from larger-than-expected volumes of returns.

Naturally, Intel's take on the state of the AI PC revolves around its own products, which have obviously seen lackluster uptake, so we're especially interested to hear AMD's take on the matter when it reveals its results in ten days. We'll also have in-depth reporting on CPU market statistics shortly thereafter to track the impact of potential share losses for Intel. Stay tuned.


Paul Alcorn
Editor-in-Chief

Paul Alcorn is the Editor-in-Chief for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.

  • Gururu
    I don't even know what AI means anymore. So does it just mean that the hardware runs ChatGPT faster?
    Reply
  • JRStern
    Gururu said:
    I don't even know what AI means anymore. So does it just mean that the hardware runs ChatGPT faster?
    Doesn't even do that.
    They'd like it to, but they either know it doesn't or ... ok they know, but it's a secret!
    AI means whatever marketing wants it to mean, it mostly means "you need to send Intel money now!"

    Someday this will all work out, but Intel's going to need to develop actual expertise in software (and marketing) first or it will happen but they'll be gone.
    Reply
  • JRStern
    So somebody tell me, what kind of price difference is this?
    More than $100?
    Don't you get more raw performance with the newer chips, besides this (unused) NPU?
    Reply
  • Mr Majestyk
    NPU = Near Pointless Unit.

Nice smack up the side of the head to Microsoft too. We don't want your Copilot trash
    Reply
  • sygreenblum
    Mr Majestyk said:
    NPU = Near Pointless Unit.

Nice smack up the side of the head to Microsoft too. We don't want your Copilot trash
You're just saying that because most people won't use it or don't need it, and those that do will certainly be using a dedicated GPU with 50 times more performance instead.

But you see, most people and professionals aren't Intel's market for this chip. Honestly, I'm not actually sure what market they're targeting, but it seems like there should be one, considering they designed a chip for something, right?
    Reply
  • usertests
    Mr Majestyk said:
    NPU = Near Pointless Unit.

Nice smack up the side of the head to Microsoft too. We don't want your Copilot trash
    I'd be fine with an NPU, if it was supported. If you can't easily run LLMs, Stable Diffusion, etc. on it on Windows or Linux, then there's not much point. If it's slower than the iGPU for these tasks, there's even less of a point. "Copilot" is another matter entirely but it might not even use the NPU for most of its stuff. Like that MS Paint image generation that uses a central server and wants you to pay, rather than using an NPU at all.
    Reply
  • endocine
    wonder what the sales numbers are like for arrow lake vs raptor lake/refresh.
    Reply
  • JRStern
    usertests said:
    I'd be fine with an NPU, if it was supported. If you can't easily run LLMs, Stable Diffusion, etc. on it on Windows or Linux, then there's not much point. If it's slower than the iGPU for these tasks, there's even less of a point. "Copilot" is another matter entirely but it might not even use the NPU for most of its stuff. Like that MS Paint image generation that uses a central server and wants you to pay, rather than using an NPU at all.
    As I understand it NPU was copied from a similar unit on many cell phones that's used for picture editing.
    Somewhere in the future is "neuromorphic" computing which with any luck will be about 6 orders of magnitude more efficient than current LLMs and something like an NPU will be useful at the edge.
    But this ain't that.
    Reply
  • hotaru251
    JRStern said:
    So somebody tell me, what kind of price difference is this?
the top-of-the-line Core Ultra CPU they have atm is the Intel® Core™ Ultra 9 Processor 285K, which is around $600 for the desktop version.
The 14900K is $450-500
the 13900K is $400(ish)

laptops ofc have premium prices so likely looking at a $300 or so difference
    Reply
  • usertests
    JRStern said:
    As I understand it NPU was copied from a similar unit on many cell phones that's used for picture editing.
    Somewhere in the future is "neuromorphic" computing which with any luck will be about 6 orders of magnitude more efficient than current LLMs and something like an NPU will be useful at the edge.
    But this ain't that.
    Sure, they've been around for at least 8 years in ARM SoCs (such as Apple A11 in 2017). And Intel included a tiny Gaussian and Neural Accelerator (GNA) in Cannon Lake in 2018. Today's NPUs accelerate ML-focused low-precision operations and aren't "neuromorphic" in the sense of mimicking a brain.

    The main point of contention right now is that a decent amount of die area is going to the NPU. Lisa Su even joked about the NPU size with a Microsoft exec. iGPUs could be made larger instead, while performing some of the same operations, which is what Sony chose for PS5 Pro. The NPUs ought to have superior TOPS/Watt (has anyone on the planet benchmarked this?) which is theoretically useful in laptops. If it gets used.

    It's plausible that they will get bigger or hold a steady proportion of the die. For example, the area used by XDNA stays the same on a new node but performance increases from 50 to 100 TOPS.
    Reply