Intel Arc May Miss Q1 2022 Launch Window

(Image credit: Intel)

Just days after reaffirming its plan to launch Arc Alchemist discrete graphics processors in the first quarter of this year, Intel quietly removed any mention of Q1 from its website, leaving only 2022. Could this mean a delay for Intel's standalone GPU? We asked Intel for clarification, and it told us, "We are targeting the first Alchemist products to be in market in Q1 2022" (emphasis ours). That leads us to some speculation about the pending release.

Intel recently removed all mentions of Q1 2022 for the Arc Alchemist launch from its website, as noticed by VideoCardz. Right now, Intel's Arc graphics solutions — hardware, software, and services — are said to be "coming 2022." Just days ago, they were set to arrive in Q1 2022. It sounds like some of them will indeed be available in the first quarter, but it looks like Intel wants to focus on delivering mobile GPUs, so we expect GPUs with 128 execution units (EUs, now also called Vector Engines) to arrive first, with higher performance parts coming later.

Before (left) and after (right) the changes were made to the text on the Intel Arc website. (Image credit: Tom's Hardware/Intel)

Right now, it seems possible that Intel's highest-end discrete desktop GPUs with 512 EUs, and maybe even the 256-EU parts, will not be released in the first quarter. Meanwhile, launches of notebooks based on the same silicon depend on Intel's partners and are not tightly aligned with Intel's own schedule, so promising to release all Arc Alchemist solutions in Q1 does not make much sense, as Intel cannot speak for its partners.

Let's dive a little deeper into the history of Intel's Arc Alchemist (aka Intel DG2) family and see how the company's rhetoric has changed over the last year and a half, to explain why we think Intel intends to start rolling out its new lineup with mobile GPUs first and why promising Arc for Q1 is no longer something it wants to do.

From Gamers First...

Intel confirmed in August 2020 that it was developing its Xe-HPG architecture for GPUs aimed at gamers, and a month later rumors surfaced that the company was looking at Q4 2021 as a possible launch timeframe for graphics cards based on the architecture. Back in 2020, Intel's official rhetoric about Xe-HPG-based products was that they were going to compete against the best graphics cards based on GPUs from AMD and Nvidia.

"We know at Intel that gamers are the hardest bunch to impress," said Raja Koduri (via EE Times), Intel's graphics chief. "They want products that have the best performance, best performance per watt, best performance per dollar, and the latest and greatest features. All at the same time. We had to leverage the best aspects of the three designs we had in progress to build a gaming optimized GPU."

The rumor mill changed its tune early in 2021 and started to point at very early 2022 as a potential release timeframe for Intel's DG2 desktop family. Eventually, Intel confirmed that its Arc Alchemist GPUs would be available in Q1 2022, though rumors indicated that the company had pushed the actual desktop product launch from CES 2022 to March 2022.

...To Notebooks and Creators

Earlier this week the CPU giant reaffirmed this timeframe, but in a different context. Instead of showing how well its Arc Alchemist discrete graphics cards for desktops would perform in games, the company demonstrated the advantages that a standalone GPU can bring to an Alder Lake-based laptop in video encoding. The company further said that there were "more than 50 new mobile and desktop customer designs announced with Intel Arc graphics" and that it was "an exciting time for gamers and creators around the world."

Winning 50 designs with DG2 after perhaps half a dozen designs with DG1 is certainly quite an achievement, but it is important to note that mass-market PC OEMs rarely use high-performance standalone GPUs, so we have no idea how successful the more expensive discrete GPUs are with PC makers (especially suppliers of desktops). Meanwhile, Intel did not mention anything about discrete desktop graphics cards. Furthermore, content creators these days hardly need a high-end discrete GPU so much as proper video encoding/decoding performance. Gamers are of course still mentioned, but hardware makers quite often pitch entry-level standalone graphics solutions at gamers as well.

Different Goals

Now that Intel is emphasizing laptops with standalone Arc GPUs and not talking about graphics boards for gamers, it is worth pointing out that while discrete graphics processors for notebooks and desktops may share the same silicon, they are different products with different usage models and development goals. What works well for desktops does not necessarily apply to laptops and vice versa, and this concerns both hardware tuning and software optimizations.

When GPU IHVs design a standalone graphics board for enthusiast-grade desktops, they focus on stability, performance, and features. This essentially translates into developing polished drivers and creating a GPU configuration with high clocks to win reviews. Features like ray tracing and upscaling/antialiasing methods are good ways to attract attention, but gamers never forgive glitchy drivers or low performance. Power consumption and bill-of-materials (BOM) costs are not as important as performance in games and glitch-free drivers.

With notebook GPUs, things are different. Power consumption and heat dissipation take on the utmost importance, which is why GPU developers sometimes have to send engineers to help their partners integrate the chips into laptops and ensure maximum reliability. To make these mobile GPUs easier for PC makers to integrate, vendors typically offer them in rather modest configurations and at lower clocks.

Since laptops are used by far more people than desktops these days, it is reasonable to assume that they also run a wider range of software. IHVs need to ensure compatibility with more apps, sometimes even at the cost of performance. Performance and a lack of issues with games are still important, but not as important as with desktop GPUs. In fact, the desktop discrete graphics cards used by OEMs are developed with goals similar to those of mobile GPUs, which is why we sometimes encounter odd configurations.

Arc Not Coming in Q1?

While Intel is a big company, it doesn't have infinite resources, so if it wants to address laptops for creators and OEM desktops first, it needs to prioritize the launch of its smaller, more energy-efficient parts and prepare accordingly. Meanwhile, having just introduced Alder Lake-based laptops (many with AMD or Nvidia discrete GPUs), PC makers may not be interested in refreshing their lineups with Intel Arc-powered offerings just a couple of months later.

Assuming that Intel's bigger discrete GPU hardware for desktops works fine, reallocating resources from enthusiast desktop graphics cards to other products will affect the former's launch schedule, but do not expect the impact to be dramatic, as software optimizations benefit all graphics processors based on the same architecture. Still, even a one-month delay for high-end standalone desktop graphics cards could upset potential buyers of Intel's Arc Alchemist boards.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.

  • Alvar "Miles" Udell
    Raja Koduri missing a launch target? That never happened before...
  • watzupken
Given the current situation, delays are always possible. However, the unfortunate fact is that the delay is going to hurt Intel quite badly, since the later they release their dedicated GPUs, the more these GPUs will need to compete against the next gen GPUs from AMD and Nvidia. The consolation for Intel is that they will likely still sell out their GPUs (assuming they work well for gaming and mining), and still make money since all GPU prices are super inflated.
  • shady28
    Intel missed a big opportunity not having Arc ready last year. They are blowing an opportunity to come in gangbusters and take over market share that Nvidia and AMD can't fill due to low production and high demand.

    There's no guarantee that crypto and mining will continue to be a thing this year, lots of reasons to think it won't, it's very possible they could release into a market saturated with new and used GPUs later this year. That would be an unmitigated disaster for them.
  • VforV
    We are living in the age of delays, so nothing new here, but yes, intel is coming too little too late.

    I've said it multiple times that they should have launched at the latest Q4 2021 and now they are slipping to Q2 2022... that's less than 6 months, probably even 4 months until RDNA3 and Lovelace come with at least 2x perf increase over the best GPUs we have today. And intel is not even gonna deliver a competitor for the best GPUs we have today, which means they will be like -3X behind the top end of next gen. They will look pretty pathetic if proven true...
  • shady28
    VforV said:
    We are living in the age of delays, so nothing new here, but yes, intel is coming too little too late.

    I've said it multiple times that they should have launched at the latest Q4 2021 and now they are slipping to Q2 2022... that's less than 6 months, probably even 4 months until RDNA3 and Lovelace come with at least 2x perf increase over the best GPUs we have today. And intel is not even gonna deliver a competitor for the best GPUs we have today, which means they will be like -3X behind the top end of next gen. They will look pretty pathetic if proven true...

    The only thing that is going to matter is supply, price/performance will work itself out. Intel's parts are aimed at (normally) low and mid range, we're talking 1650 super - 3070 levels (maybe) of performance.

    It won't matter what Nvidia and AMD do if they can't get volume. Intel probably has volume, last year they used their TSMC allocation for HPC on the supercomputer Aurora, but not so much this year. This means they'll be able to knock out serious volume on that TSMC node. Volume is great for taking over market share in a supply constrained market like we have now.

My real point is, if the constrained supply issue dissipates - which it will if crypto crashes (and it seems to be trying to do so) - that's when the performance/price will matter, when people have choices at relatively low prices.

    If crypto shoots back up, then Intel will certainly have no issue selling all it can make at MSRP or higher.

    Having said that, we're just talking about desktop dGPU here. Intel is releasing laptop based ARC dGPU, right now. Laptop is about 85% of the client market, no idea what % of the dGPU market it is though.

    Nothing is decided here yet because we aren't there yet, but certainly I get the feeling Intel is blowing an opportunity, crypto seems very cyclical with a year or two of heavy gains followed by a couple of years of famine. Intel may wind up selling into the famine ;)
  • VforV
    shady28 said:
    The only thing that is going to matter is supply, price/performance will work itself out. Intel's parts are aimed at (normally) low and mid range, we're talking 1650 super - 3070 levels (maybe) of performance.

    It won't matter what Nvidia and AMD do if they can't get volume. Intel probably has volume, last year they used their TSMC allocation for HPC on the supercomputer Aurora, but not so much this year. This means they'll be able to knock out serious volume on that TSMC node. Volume is great for taking over market share in a supply constrained market like we have now.

My real point is, if the constrained supply issue dissipates - which it will if crypto crashes (and it seems to be trying to do so) - that's when the performance/price will matter, when people have choices at relatively low prices.

    If crypto shoots back up, then Intel will certainly have no issue selling all it can make at MSRP or higher.

    Having said that, we're just talking about desktop dGPU here. Intel is releasing laptop based ARC dGPU, right now. Laptop is about 85% of the client market, no idea what % of the dGPU market it is though.

    Nothing is decided here yet because we aren't there yet, but certainly I get the feeling Intel is blowing an opportunity, crypto seems very cyclical with a year or two of heavy gains followed by a couple of years of famine. Intel may wind up selling into the famine ;)
I'm certainly not one to advocate for nvidia, but in this regard I'm pretty sure nvidia will have the highest volume in discrete GPUs, but maybe not so much in laptops, so I can see intel having more there.

Either way regardless of availability the issue I underlined above is that if they launch in Q2 and their best GPU is let's say the best scenario, at 3070ti level... well in Q3 Lovelace and RDNA3 equivalent class of a 3070ti will have more than x2 performance at close to the same price as Arc. So in that case Arc would need to be half the price to matter, if that happens.

    Would you buy a (theoretically) $500 Arc GPU equivalent of 3070ti when you can buy a 4070(ti) for $600? Or an RX 7700 XT/7800 for $600?
    Assuming they keep their price difference even if they have scalper prices, no one in their right mind would buy a whole generation older GPU at almost the same price as the new one.... well, except miners probably. So if that happens that 3070ti Arc GPU would need to be $300.
  • shady28
    VforV said:
I'm certainly not one to advocate for nvidia, but in this regard I'm pretty sure nvidia will have the highest volume in discrete GPUs, but maybe not so much in laptops, so I can see intel having more there.

Either way regardless of availability the issue I underlined above is that if they launch in Q2 and their best GPU is let's say the best scenario, at 3070ti level... well in Q3 Lovelace and RDNA3 equivalent class of a 3070ti will have more than x2 performance at close to the same price as Arc. So in that case Arc would need to be half the price to matter, if that happens.

    Would you buy a (theoretically) $500 Arc GPU equivalent of 3070ti when you can buy a 4070(ti) for $600? Or an RX 7700 XT/7800 for $600?
    Assuming they keep their price difference even if they have scalper prices, no one in their right mind would buy a whole generation older GPU at almost the same price as the new one.... well, except miners probably. So if that happens that 3070ti Arc GPU would need to be $300.

    Double the performance is highly optimistic speculation for one. A normal boost would be 20-30% within a tier. i.e. this would make a 4060 = 3070, and a 4070 = 3080. We might get more we might get less, but this is what I would expect.

    The reality right now is that you can go to stockx.com and buy a 3070 for $1000, while a 3060 runs around 700. These are the lower prices at this moment, normal retail when you can find one is usually higher than this by 10-20%.

If we combine a reasonable performance assumption for next gen GPUs with the reality of market prices, then what you're looking at will be something like:

    4060 at 3060 prices is $700 = 3070 performance for $700 which is > high end ARC price of $650

So unless the supply issues abate, Intel will have no issues getting $650 for 3070 level performance even after a theoretical 4060 release.

On a related note, I'm not sure where the thought that these new releases will ease supply constraints comes from. NVidia went to Samsung specifically for more volume / lower cost, and now they are going to TSMC. If anything, this is going to put more pressure on TSMC. Basically every GPU in existence is coming off one fab company. This is a formula for disaster.
  • VforV
    shady28 said:
    Double the performance is highly optimistic speculation for one. A normal boost would be 20-30% within a tier. i.e. this would make a 4060 = 3070, and a 4070 = 3080. We might get more we might get less, but this is what I would expect.
    This is where we part ways, because based on past leaks I actually believe MLiD when he says it will be about 2x perf of this gen, especially the top tiers vs top tiers. He's been saying this for a while now and he was right on many occasions in the past on other leaks, so I have no reasons to doubt it now. Also a few of the trusted twitter leakers are saying the same things...

So based on that we already know what's coming, at least the bigger picture. It will be 2x perf or even more, from both nvidia and AMD. There is a reason RDNA3 is going MCM before everyone else. The level of perf increase will make Turing and RDNA2 seem like expensive GPU bricks, especially in the controversial RT feature (above 2x perf), which will work much better on next gen GPUs.

    Because of that I stand for what I said above. Arc in Q2 2022 is too little too late at 3070ti level, best case scenario.
  • JarredWaltonGPU
    VforV said:
    This is where we part ways, because based on past leaks I actually believe MLiD when he says it will be about 2x perf of this gen, especially the top tiers vs top tiers. He's been saying this for a while now and he was right on many occasions in the past on other leaks, so I have no reasons to doubt it now. Also a few of the trusted twitter leakers are saying the same things...

So based on that we already know what's coming, at least the bigger picture. It will be 2x perf or even more, from both nvidia and AMD. There is a reason RDNA3 is going MCM before everyone else. The level of perf increase will make Turing and RDNA2 seem like expensive GPU bricks, especially in the controversial RT feature (above 2x perf), which will work much better on next gen GPUs.

    Because of that I stand for what I said above. Arc in Q2 2022 is too little too late at 3070ti level, best case scenario.
MLID has been BADLY wrong so many times it's hardly even worth discussing. Remember the $150-$200 "RDNA Leak" back in December 2019? Laughable, but he went with it. The cards launched at $400 (after being dropped $50 at the last minute). When you fire multiple shotgun blasts at a target, you'll inevitably get a few hits, but YouTube is notorious for amplifying fake news and speculation.

    Yes, Nvidia is going from what is effectively 10nm class Samsung to 5nm class TSMC. That will help a lot. But most likely Nvidia will use the shrink to create smaller chips that are 50% faster than the previous generation at best. And it still needs to increase memory bandwidth a similar amount to scale that much. More than a 256-bit interface is costly, and 384-bit is basically the limit, which means there's a good chance memory bandwidth only increases a bit while GPU compute increases more.

But again, the real problem is that the cost of TSMC N5 is going to be more than double the cost per square millimeter of Samsung 8N. So even if Nvidia wants to do a big 600+ mm^2 chip like GA102, and even if it could feed it with enough memory bandwidth, it will end up being way more expensive than a 3090. So Nvidia will balance performance increases with die size and cost, and probably go after something like an "RTX 4080" that performs 20-30% better than a 3080 with a theoretical price of maybe $999 and a die size closer to 400mm^2. Hopper will still get a massive chip, but that's because Hopper will only go into supercomputers and maybe workstations and those can handle the $15,000 price tag. (See the back-of-envelope sketch of this bandwidth and die-cost math at the end of the thread.)
  • VforV
    JarredWaltonGPU said:
MLID has been BADLY wrong so many times it's hardly even worth discussing. Remember the $150-$200 "RDNA Leak" back in December 2019? Laughable, but he went with it. The cards launched at $400 (after being dropped $50 at the last minute). When you fire multiple shotgun blasts at a target, you'll inevitably get a few hits, but YouTube is notorious for amplifying fake news and speculation.

    Yes, Nvidia is going from what is effectively 10nm class Samsung to 5nm class TSMC. That will help a lot. But most likely Nvidia will use the shrink to create smaller chips that are 50% faster than the previous generation at best. And it still needs to increase memory bandwidth a similar amount to scale that much. More than a 256-bit interface is costly, and 384-bit is basically the limit, which means there's a good chance memory bandwidth only increases a bit while GPU compute increases more.

But again, the real problem is that the cost of TSMC N5 is going to be more than double the cost per square millimeter of Samsung 8N. So even if Nvidia wants to do a big 600+ mm^2 chip like GA102, and even if it could feed it with enough memory bandwidth, it will end up being way more expensive than a 3090. So Nvidia will balance performance increases with die size and cost, and probably go after something that performs 20-30% better than a 3080 with a theoretical price of maybe $999 and a die size closer to 400mm^2. Hopper will still get a massive chip, but that's because Hopper will only go into supercomputers and maybe workstations and those can handle the $15,000 price tag.
    Sure, he got some things wrong, but he also gets a lot of them right. Price is the easiest one to get wrong and more so these days, so I don't diss the guy because of that.

It's not like Coreteks and all the press media (including this site) that believed RDNA2 would be 2080Ti level and were wrong, while in the meantime MLiD always said to expect at least 3080 performance and we actually got 3090 performance.

    I don't really want to continue a debate now on how many times he was wrong and how many times he was right. The fact is he's getting better and better and has more reliable sources now, than in 2019.

    About Lovelace and RDNA3: if nvidia does not push to the absolute limits both the size of the chips, the core speeds, IPC and everything else in between, they will lose badly to RDNA3 which can scale higher much easier since it's MCM. That's why we got a 450W 3090 Ti now, because nvidia not only does not like to lose, they don't like even parity - they (as in, Jensen more exactly) "need" to beat the opponent at every metric possible.

For the same reason, the 450W 3090 Ti is also a psychological move, to get us used to even higher power usage, so we should not complain when Lovelace comes with 550W or more power usage to fight, with its monolithic design, an MCM RDNA3 with lower power draw. The top RDNA3 chips will also have a higher power draw than RDNA2, maybe 450W, but still lower than what Lovelace will require - and nvidia will still lose vs RDNA3, at least in raster.

How hard nvidia pushes everything on that monolithic chip will determine whether they lose badly or not so badly vs RDNA3.

As for prices, expect even higher ones across the board; whoever wins the next gen GPU war will ask at least $2000 MSRP, if not more, for the top halo GPU, and we can expect real prices in shops to be even higher.
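JarredWaltonGPU's reply in the thread above rests on two simple calculations: peak memory bandwidth as a function of bus width and per-pin data rate, and relative die cost as a function of die area and wafer cost per square millimeter. Below is a minimal back-of-envelope sketch of both in Python. The specific data rates, the roughly 2x cost-per-mm^2 ratio, and the ~400 mm^2 hypothetical next-gen die are illustrative assumptions drawn from the comment, not confirmed specifications; only GA102's ~628 mm^2 die size and the RTX 3090's 19.5 Gbps GDDR6X are known figures.

```python
# Back-of-envelope sketch of the bandwidth and die-cost arithmetic discussed
# in the comments. All inputs are illustrative assumptions, not confirmed specs.

def gddr_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

def relative_die_cost(area_mm2: float, relative_cost_per_mm2: float) -> float:
    """Relative die cost (ignoring yield): area times relative wafer cost per mm^2."""
    return area_mm2 * relative_cost_per_mm2

# Memory bandwidth: a 384-bit bus is close to the practical limit, so most of
# any further bandwidth gain has to come from faster memory, not a wider bus.
print(gddr_bandwidth_gbps(384, 19.5))   # 936.0 GB/s  -- RTX 3090-class GDDR6X
print(gddr_bandwidth_gbps(384, 24.0))   # 1152.0 GB/s -- assumed faster GDDR6X
print(gddr_bandwidth_gbps(256, 24.0))   # 768.0 GB/s  -- cheaper 256-bit option

# Die cost: assume the newer node costs about 2x as much per mm^2 (the comment's
# figure) and take GA102's ~628 mm^2 as the baseline on Samsung 8N.
ga102        = relative_die_cost(628, 1.0)  # baseline
big_next_gen = relative_die_cost(628, 2.0)  # hypothetical GA102-sized die on the new node
mid_next_gen = relative_die_cost(400, 2.0)  # hypothetical ~400 mm^2 die on the new node
print(big_next_gen / ga102)  # 2.0   -- a same-size die would cost roughly twice as much
print(mid_next_gen / ga102)  # ~1.27 -- even a much smaller die still costs more than GA102 did
```

Run as-is, the sketch shows why a 384-bit board tops out around 1 TB/s without faster memory, and why, under the assumed 2x cost-per-mm^2 ratio, even a ~400 mm^2 die on the newer node would cost roughly a quarter more than a 628 mm^2 GA102.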