
Intel Demos Discrete Arc GPUs, Still Coming in Q1

(Image credit: Intel)

We've been hearing about Intel's Arc Alchemist in various forms since Raja Koduri left AMD's Radeon group for Intel all the way back in November 2017. It's no secret that he's been working to build a new graphics architecture and team in the ensuing years. And it's also proof of how difficult it can be to break into the graphics industry. But now, after four years of work, Intel Arc is taxiing down the runway and preparing for takeoff. Will AMD and Nvidia finally have some serious competition in the market for the best graphics cards? We're about to find out. Maybe.

The difficult thing about building a GPU isn't just the hardware. Drivers matter. On paper, GPUs just do a whole lot of math calculations and texturing, but we've seen numerous theoretically capable designs never get far. Intel's DG1 was clearly more of a proof of concept than a product that would actually compete with AMD or Nvidia, but it did pave the way with some needed driver updates — several of the issues we noted with our DG1 testing have since been fixed.

Still, the hardware definitely plays a role, and we have little idea of what to expect from Intel Arc. Architecture matters, and we'll need to actually taste the proverbial pudding before we can render judgment. Look at AMD and Nvidia as an example: the RX 6900 XT 'only' delivers 23 TFLOPS of compute with 512 GB/s of bandwidth, while the RTX 3090 pushes 35.6 TFLOPS and 936 GB/s of bandwidth. You'd think Nvidia would run away with the performance crown, but across our GPU benchmarks test suite, the RTX 3090 only leads the RX 6900 XT by 3%. That's largely thanks to AMD's Infinity Cache, but other architectural design decisions also come into play.
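To put some numbers on that mismatch between paper specs and measured results, here's a quick back-of-the-envelope comparison using the figures above (Python used purely for the arithmetic; the 3% benchmark lead is the measured result, not something these specs predict):

```python
# Paper specs for the two flagships, as quoted above
rtx3090 = {"tflops": 35.6, "bandwidth_gb_s": 936}
rx6900xt = {"tflops": 23.0, "bandwidth_gb_s": 512}

# How big Nvidia's on-paper advantage looks
tflops_lead = rtx3090["tflops"] / rx6900xt["tflops"] - 1
bandwidth_lead = rtx3090["bandwidth_gb_s"] / rx6900xt["bandwidth_gb_s"] - 1

print(f"RTX 3090 paper compute lead:   {tflops_lead:.0%}")     # ~55%
print(f"RTX 3090 paper bandwidth lead: {bandwidth_lead:.0%}")  # ~83%
# Measured lead across our benchmark suite: just 3%
```

A 55% compute and 83% bandwidth edge on paper translating into a 3% real-world lead is exactly why raw specs alone can't tell us how Arc will perform.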

All we really know about Intel Arc Alchemist consists of potential theoretical specs. We've seen leaks suggesting Arc could run at up to 2.45GHz, but even with 512 Vector Engines (the new nomenclature for Intel's Execution Units of previous GPUs), that's only a theoretical maximum of 20.1 TFLOPS. A leaked lower-spec A380 with 128 Vector Engines would check in at around 5 TFLOPS. That could be competitive with AMD's RX 6500 XT and Nvidia's RTX 3050, or it might fall well short.
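Those theoretical figures come from the standard FP32 throughput formula. A minimal sketch, assuming each Vector Engine has 8 FP32 lanes (the same width as the previous Execution Units) and counts a fused multiply-add as 2 operations per clock:

```python
def fp32_tflops(vector_engines, clock_ghz, lanes_per_ve=8, ops_per_clock=2):
    """Theoretical FP32 throughput: units x lanes x (FMA = 2 ops) x clock.

    lanes_per_ve=8 is an assumption based on prior Intel GPU designs.
    """
    return vector_engines * lanes_per_ve * ops_per_clock * clock_ghz / 1000

# Top Arc Alchemist part: 512 Vector Engines at the leaked 2.45 GHz
print(round(fp32_tflops(512, 2.45), 1))  # 20.1 TFLOPS
# Leaked A380: 128 Vector Engines at the same clock
print(round(fp32_tflops(128, 2.45), 1))  # 5.0 TFLOPS
```

Both results line up with the leaked numbers, which suggests the leaks are simple peak-throughput math rather than measured performance.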

(Image gallery: six Intel slides; image credit: Intel)

To help assuage our fears about lackluster performance, Intel today demonstrated… video encoding, spread across both the integrated and discrete Intel GPUs in an upcoming laptop. Sigh. Arc gaming performance discussions consisted of talk about Death Stranding Director's Cut getting support for Intel's XeSS, an alternative to Nvidia's DLSS, and not much else. I sure hope Intel is sandbagging, because right now the lack of real-world gaming demonstrations of Arc has me worried.

It makes plenty of sense for Intel to go after the mobile market first. Intel sells more laptop chips than desktop chips, and there's more of a chance to differentiate through power and platform optimizations. However, the Intel Deep Link and Hyper Encode demonstration doesn't address what most people actually care about. Intel showed a DaVinci Resolve video encode running 40% faster using Hyper Encode compared to just using the discrete GPU for encoding, which is great, but that 40% boost won't extend to traditional gaming.

AMD and Nvidia have basically abandoned CrossFire and SLI multi-GPU rendering. It demanded too much developer effort for inconsistent rewards. With Intel's integrated Xe Graphics solutions using a different architecture than the Arc dedicated GPUs, we can't see general-purpose multi-GPU rendering being likely. We might get XeSS running on the integrated graphics, or other tech like multi-GPU encoding, but we want something more.

Ultimately, the Arc information so far revealed at CES 2022 has been underwhelming. Intel still has up to three months before the stated retail launch, and perhaps we'll still end up impressed, but don't hold your breath. We need to see performance, power use, and compatibility with a large suite of games. Until then, as Intel's presenter put it, "Stay tuned, more excitement is ahead." Alternatively, as a friend put it, this is the ideal time for a newcomer to the GPU space. Intel could drop a turd in a box and it would still sell, since the current GPU prices are enough to make gamers weep.

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.