
Intel Arc Alchemist: Release Date, Specs, Everything We Know

Intel Xe Graphics mock up
(Image credit: Intel)

Intel has been hyping up Xe Graphics for about two years, but the Intel Arc Alchemist GPU will finally bring Team Blue some much-needed performance and real competition in the discrete GPU space. This is the first 'real' dedicated Intel GPU since the i740 back in 1998 — or technically, the first proper discrete GPU after the Intel Xe DG1 paved the way earlier this year. The competition among the best graphics cards is fierce, and Intel's current integrated graphics solutions barely register on our GPU hierarchy (UHD Graphics 630 sits at 1.8% of the RTX 3090, based on 1080p medium performance alone).

Could Intel, purveyor of low performance integrated GPUs—"the most popular GPUs in the world"—possibly hope to compete? Yes, actually, it can. Plenty of questions remain, but after Intel revealed additional details of its Arc Alchemist GPU architecture at Intel Architecture Day 2021, we're cautiously hopeful that the final result will be better than previous attempts. Intel has also been gearing up its driver team for the launch, fixing compatibility and performance issues on existing graphics solutions. Frankly, there's nowhere to go from here but up.

The difficulty Intel faces in cracking the dedicated GPU market shouldn't be underestimated. AMD's Big Navi / RDNA 2 architecture has competed with Nvidia's Ampere architecture since late 2020. The first Xe GPUs arrived in 2020 in the form of Tiger Lake mobile processors, and Xe DG1 showed up by the middle of 2021, but neither one can hope to compete with even GPUs from several generations back. Overall, Xe DG1 performed about the same as Nvidia's GT 1030 GDDR5, a weak-sauce GPU hailing from May 2017. It also delivered only a bit more than half the performance of 2016's GTX 1050 2GB, despite having twice as much memory.

So yes, Intel has a steep mountain to ascend if it wants to be taken seriously in the dedicated GPU space. Here's the breakdown of the Arc Alchemist architecture, which gives us a glimpse into how Intel hopes to reach the summit. Actually, we're just hoping Intel can make it to the base camp, leaving the actual summiting for the future Battlemage, Celestial, and Druid architectures. But we'll leave those for a future discussion. 

(Image credit: Intel)
Intel Arc Alchemist At A Glance

Specs: Up to 512 Vector Units / 4096 Shader Cores
Memory: Likely up to 16GB GDDR6
Process: TSMC N6 (refined N7)
Performance: RTX 3070 / RX 6800 level, maybe
Release Date: Q1 2022
Price: Intel needs to be competitive

Intel's Xe Graphics aspirations hit center stage in early 2018, starting with the hiring of Raja Koduri from AMD, followed by chip architect Jim Keller and graphics marketer Chris Hook, to name just a few. Raja was the driving force behind AMD's Radeon Technologies Group, created in November 2015, along with the Vega and Navi architectures. Clearly, the hope is that he can help lead Intel's GPU division into new frontiers. Arc Alchemist represents the results of several years' worth of that labor.

Not that Intel hasn't tried this before. Besides the i740 in 1998, Larrabee and the Xeon Phi had similar goals back in 2009, though the GPU aspect never really panned out. Plus, Intel has steadily improved the performance and features of its integrated graphics solutions over the past couple of decades (albeit at a snail's pace). So, third time's the charm, right?

Of course, there's much more to building a good GPU than just saying you want to make one, and Intel has a lot to prove. Here's everything we know about the upcoming Intel Arc Alchemist, including specifications, performance expectations, release date, and more.

Potential Intel Arc Alchemist Specifications and Price

This concept rendering of Intel's Xe Graphics is a reasonable guess at what a larger card could look like, but definitely not the final product. (Image credit: Intel)

We'll get into the details of the Arc Alchemist architecture below, but let's start with a high-level overview. We know that Intel has at least two different GPU dies planned for Arc Alchemist, and we expect a third product in the middle of the stack that uses a harvested version of the larger die. There might be more configurations than three, but that's the minimum we expect to see. Here's our best guess at the specifications.

Intel Arc Alchemist Expected Specifications

                         Arc High-End        Arc Mid-Range       Arc Entry
GPU                      Arc 00071           Arc 00071?          Arc 00329
Process                  TSMC N6             TSMC N6             TSMC N6
Transistors (billion)    ~20                 ~20 (partial)       ~8
Die size (mm^2)          ~396 (24 x 16.5)    ~396 (24 x 16.5)    ~153 (12.4 x 12.4)
Vector Engines           512                 384                 128
GPU cores (ALUs)         4096                3072                1024
Clock (GHz)              2.0–2.3             2.0–2.3             2.0–2.3
VRAM Speed (Gbps)        16?                 14–16               14–16
VRAM (GB)                16 GDDR6            12 GDDR6            6 GDDR6
Bus width (bits)         256                 192                 96
ROPs                     128?                96?                 32?
TMUs                     256?                192?                64?
TFLOPS                   16.4–18.8?          12.3–14.1?          4.1–4.7?
Bandwidth (GB/s)         512?                336–384?            168–192?
TBP (watts)              300?                225?                75?
Launch Date              Q1 2022             Q1 2022             Q1 2022
Launch Price             $599?               $399?               $199?

As we dig deeper throughout this article, we'll discuss where some of the above information comes from, but we're fairly confident about many of the core specs for the fully enabled large and small Arc Alchemist chips. Based on the wafer and die shots, along with other information, we expect Intel to enter the dedicated GPU market (not counting the DG1) with products spanning the entire range from budget to high-end.

For now, we anticipate three products built from two chips, but that could change. Intel has just a few variations on its CPU cores, for example, but ends up selling dozens of different products. But Intel has ruled the CPU world for decades, while its GPU efforts are far behind the competition. So eliminating the cruft and focusing on just three core products would make more sense in our view.

The actual product names haven't been announced, though the Intel Arc branding will be used. We could see something like Arc 1800, Arc 1600, and Arc 1200, roughly corresponding to the i7, i5, and i3 CPU branding, or we could see something entirely different. There's no sense getting lost in the weeds right now, though, as Intel will eventually decide on and reveal the actual names.

Prices and some of the finer details are estimates based on expected performance and market conditions. Of course, actual real-world performance will play a big role in determining how much Intel can charge for the various graphics card models, but if — and that's a pretty big if! — the high-end card can compete with AMD's RX 6800 and Nvidia's RTX 3070 Ti, we would expect Intel to price it accordingly.

There's an alternative view as well. Intel could lowball the pricing and look to make a splash in the dedicated graphics card market. Considering the shortages of current AMD and Nvidia GPUs and the extreme pricing we see at online stores — it's often not much better than the eBay scalper prices we track in our GPU price index — many gamers might be more interested in giving Intel GPUs a shot if they're priced at half the cost of the competition, even if they're slower. That's probably wishful thinking, though, as Intel will want to make a profit, and the extreme demand for GPUs means Intel likely won't have to undercut its competitors by that much.

That takes care of the high-level overview. Now let's dig into the finer points and discuss where these estimates come from.

Arc Alchemist: Beyond the Integrated Graphics Barrier 

(Image credit: Intel)

Over the past decade, we've seen several instances where Intel's integrated GPUs have basically doubled in theoretical performance. Despite the improvements, Intel frankly admits that integrated graphics solutions are constrained by many factors: Memory bandwidth and capacity, chip size, and total power requirements all play a role.

While CPUs that consume up to 250W of power exist — Intel's Core i9-10900K and Core i9-11900K both fall into this category — competing CPUs that top out at around 145W are far more common (e.g., AMD's Ryzen 5000 series). Plus, integrated graphics have to share all of those resources with the CPU, which means the GPU is typically limited to about half of the total power budget. In contrast, dedicated graphics solutions have far fewer constraints.

Consider the first-generation Xe-LP Graphics found in Tiger Lake (TGL). Most of the chips have a 15W TDP, and even the later 8-core TGL-H chips only use up to 45W (65W configurable TDP). Except TGL-H also cut the GPU budget down to 32 EUs (Execution Units), whereas the lower-power TGL chips had 96 EUs.

In contrast, the top AMD and Nvidia dedicated graphics cards like the Radeon RX 6900 XT and GeForce RTX 3080 Ti have a power budget of 300W to 350W for the reference design, with custom cards pulling as much as 400W.

We don't know precisely how high Intel plans to go on power use with Arc Alchemist, aka Xe HPG, but we expect it to land in the same ballpark as AMD and Nvidia GPUs — around 300W. What could an Intel GPU do with 20X more power available? We'll find out when Intel's Arc Alchemist GPU launches.

Intel Arc Alchemist Architecture 

(Image credit: Intel)

Intel may be a newcomer to the dedicated graphics card market, but it's by no means new to making GPUs. Current Rocket Lake and Tiger Lake CPUs use the Xe Graphics architecture, the 12th generation of graphics updates. The first generation of Intel graphics was found in the i740 and 810/815 chipsets for socket 370, back in 1998-2000. Arc Alchemist, in a sense, is second-gen Xe Graphics (i.e., Gen13 overall), and it's common for each generation of GPUs to build on the previous architecture, adding various improvements and enhancements. The Arc Alchemist architecture changes are apparently large enough that Intel has ditched the Execution Unit naming of previous architectures and the main building block is now called the Xe-core.

To start, Arc Alchemist will support the full DirectX 12 Ultimate feature set. That means the addition of several key technologies. The headline item is ray tracing support, though that might not be the most important in practice. Variable rate shading, mesh shaders, and sampler feedback are also required — all of which are also supported by Nvidia's RTX 20-series Turing architecture from 2018, if you're wondering. Sampler feedback helps to optimize the way shaders work on data and can improve performance without reducing image quality.

The Xe-core contains 16 Vector Engines (formerly called Execution Units), each of which operates on a 256-bit SIMD chunk (single instruction, multiple data). Each Vector Engine can process eight FP32 operations simultaneously; each of those lanes is what AMD and Nvidia architectures traditionally call a "GPU core," though that's a misnomer. It's unclear what other data types the Vector Engine supports (possibly FP16 and DP4a), but it's joined by a second new pipeline, the XMX Engine (Xe Matrix eXtensions).

Each XMX pipeline operates on a 1024-bit chunk of data, which can contain 64 individual pieces of FP16 data. The Matrix Engines are effectively Intel's equivalent of Nvidia's Tensor cores, and they're being put to similar use. They offer a huge amount of potential FP16 computational performance, and should prove very capable in AI and machine learning workloads. More on this below.
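To put those widths in concrete terms, here's a quick back-of-the-envelope tally in Python. The per-engine figures come from Intel's stated SIMD widths; the assumption that each Xe-core pairs one XMX Engine with each Vector Engine is our reading of the block diagrams, not a confirmed spec.

# Back-of-the-envelope lane counts for a single Xe-core, based on Intel's stated widths.
VECTOR_ENGINE_WIDTH_BITS = 256   # one Vector Engine (formerly an Execution Unit)
XMX_ENGINE_WIDTH_BITS = 1024     # one XMX (Matrix) Engine
VECTOR_ENGINES_PER_XE_CORE = 16
XMX_ENGINES_PER_XE_CORE = 16     # assumption: one XMX Engine paired with each Vector Engine

fp32_lanes_per_ve = VECTOR_ENGINE_WIDTH_BITS // 32    # 8 FP32 operations per clock
fp16_values_per_xmx = XMX_ENGINE_WIDTH_BITS // 16     # 64 FP16 values per operation

print(fp32_lanes_per_ve)                                  # 8
print(fp32_lanes_per_ve * VECTOR_ENGINES_PER_XE_CORE)     # 128 FP32 lanes ("GPU cores") per Xe-core
print(fp16_values_per_xmx * XMX_ENGINES_PER_XE_CORE)      # 1024 FP16 values in flight across the XMX Engines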

(Image credit: Intel)

Xe-core represents just one of the building blocks used for Intel's Arc GPUs. Like previous designs, the next level up from the Xe-core is called a render slice (analogous to an Nvidia GPC, sort of) that contains four Xe-core blocks. In total, a render slice contains 64 Vector and Matrix Engines, plus additional hardware. That additional hardware includes four ray tracing units (one per Xe-core), geometry and rasterization pipelines, samplers (TMUs, aka Texture Mapping Units), and the pixel backend (ROPs).

The above block diagrams may or may not be fully accurate down to the individual block level. For example, looking at the diagrams, it would appear each render slice contains 32 TMUs and 16 ROPs. That would make sense, but Intel has not yet confirmed those numbers (even though that's what we used in the above specs table).
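Scaling those per-slice resources up to the full eight-slice chip is where the question-marked unit counts in our specs table come from. Here's a minimal sketch, with the per-slice TMU and ROP counts treated as our own estimates rather than confirmed figures:

# Per-render-slice resources as read from Intel's block diagrams (TMU/ROP counts unconfirmed).
XE_CORES_PER_SLICE = 4
VECTOR_ENGINES_PER_XE_CORE = 16
RT_UNITS_PER_SLICE = 4       # one ray tracing unit per Xe-core
TMUS_PER_SLICE = 32          # our estimate from the diagram
ROPS_PER_SLICE = 16          # our estimate from the diagram

def gpu_totals(render_slices):
    """Scale per-slice resources up to a full Arc Alchemist configuration."""
    return {
        "xe_cores": render_slices * XE_CORES_PER_SLICE,
        "vector_engines": render_slices * XE_CORES_PER_SLICE * VECTOR_ENGINES_PER_XE_CORE,
        "rt_units": render_slices * RT_UNITS_PER_SLICE,
        "tmus": render_slices * TMUS_PER_SLICE,
        "rops": render_slices * ROPS_PER_SLICE,
    }

print(gpu_totals(8))  # {'xe_cores': 32, 'vector_engines': 512, 'rt_units': 32, 'tmus': 256, 'rops': 128}
print(gpu_totals(2))  # the entry die: {'xe_cores': 8, 'vector_engines': 128, 'rt_units': 8, 'tmus': 64, 'rops': 32}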

The ray tracing units are perhaps the most interesting addition, but other than their presence and their capabilities — they can do ray traversal, bounding box intersection, and triangle intersection — we don't have any details on how the RT units compare to AMD's ray accelerators or Nvidia's RT cores. Are they faster, slower, or similar in overall performance? We'll have to wait to get hardware in hand to find out for sure.

Intel did provide a demo of Alchemist running an Unreal Engine demo that uses ray tracing, but it's for an unknown game, running at unknown settings ... and running rather poorly, to be frank. Hopefully that's because this is early hardware and drivers, but skip to the 4:57 mark in this Arc Alchemist video from Intel to see it in action. Based on what was shown there, we suspect Intel's Ray Tracing Units will be similar to AMD's Ray Accelerators, which means even the top Arc Alchemist GPU will only be roughly comparable to AMD's Radeon RX 6600 XT — not a great place to start, but then RT performance and adoption still aren't major factors for most gamers.

(Image credit: Intel)

Finally, Intel uses multiple render slices to create the entire GPU, with the L2 cache and the memory fabric tying everything together. Also not shown are the video processing blocks and output hardware, and those take up additional space on the GPU. The maximum Xe HPG configuration for the initial Arc Alchemist launch will have up to eight render slices. Ignoring the change in naming from EU to Vector Engine, that still gives the same maximum configuration of 512 EU/Vector Engines that's been rumored for more than a year.

Intel didn't quote a specific amount of L2 cache per render slice or for the entire GPU. We do know there will be multiple Arc configurations, though. So far, Intel has shown one with two render slices and a larger chip used in the above block diagram that comes with eight render slices. Intel also revealed that its Xe HPC GPUs (aka Ponte Vecchio) would have 512KB of L1 cache per Xe-core, and up to 144MB of L2 "Rambo Cache" per stack, but that's a completely different part, and the Xe HPG GPUs will likely have less L1 and L2 cache. Still, given how much benefit AMD saw from its Infinity Cache, we wouldn't be shocked to see 32MB or more of total cache on the largest Arc GPUs.

While it doesn't sound like Intel has specifically improved throughput on the Vector Engines compared to the EUs in Gen11/Gen12 solutions, that doesn't mean performance hasn't improved. DX12 Ultimate includes some new features that can also help performance, but the biggest change comes via boosted clock speeds. Intel didn't provide any specific numbers, but it did state that Arc Alchemist can run at 1.5X frequencies compared to Xe LP, and it also said that Alchemist (Xe HPG) delivers 1.5X improved performance per watt. Taken together, we're looking at potential clock speeds of 2.0–2.3GHz for the Arc GPUs, which would yield a significant amount of raw compute.

The maximum configuration of Arc Alchemist will have up to eight render slices, each with four Xe-cores, 16 Vector Engines per Xe-core, and each Vector Engine can do eight FP32 operations per clock. Double that for FMA operations (Fused Multiply Add, a common matrix operation used in graphics workloads), then multiply by a potential 2.0–2.3GHz clock speed, and we get the theoretical performance in GFLOPS:

8 (RS) * 4 (Xe-core) * 16 (VE) * 8 (FP32) * 2 (FMA) * 2.0–2.3 (GHz) = 16,384–18,841.6 GFLOPS
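Here's the same math as a short Python snippet, for anyone who wants to plug in their own clock-speed guesses; the layout numbers are as described above, and the clocks remain estimates:

# Theoretical FP32 throughput for an Arc Alchemist configuration, in GFLOPS.
def theoretical_gflops(render_slices, clock_ghz,
                       xe_cores_per_slice=4, vector_engines_per_xe_core=16,
                       fp32_lanes_per_ve=8):
    ops_per_clock = (render_slices * xe_cores_per_slice *
                     vector_engines_per_xe_core * fp32_lanes_per_ve * 2)  # x2 for FMA
    return ops_per_clock * clock_ghz  # ops/clock * GHz = GFLOPS

for clock in (2.0, 2.3):
    print(f"8 render slices @ {clock} GHz: {theoretical_gflops(8, clock):,.1f} GFLOPS")
# 8 render slices @ 2.0 GHz: 16,384.0 GFLOPS
# 8 render slices @ 2.3 GHz: 18,841.6 GFLOPS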

Obviously, GFLOPS (or TFLOPS) on its own doesn't tell us everything, but 16–19 TFLOPS for the top configuration is certainly nothing to scoff at. Nvidia's Ampere GPUs theoretically have a lot more compute — the RTX 3080, as an example, has a maximum of 29.8 TFLOPS — but some of that gets shared with INT32 calculations. AMD's RX 6800 XT, by comparison, 'only' has 20.7 TFLOPS, but in many games it delivers similar performance to the RTX 3080. In other words, raw theoretical compute absolutely doesn't tell the whole story; Arc Alchemist could punch above — or below! — its theoretical weight class.

Still, let's give Intel the benefit of the doubt for a moment. Depending on final clock speeds, Arc Alchemist comes in below the theoretical level of the current top AMD and Nvidia GPUs, but not by much. On paper, at least, it looks like Intel could land in the vicinity of the RTX 3070/3070 Ti and RX 6800 — assuming drivers and everything else doesn't hold it back.

XMX: Matrix Engines and Deep Learning for XeSS 

Intel Arc Alchemist and Xe HPG Architecture (Image credit: Intel)

We briefly mentioned the XMX blocks above. They're potentially just as useful as Nvidia's Tensor cores, which are used not just for DLSS, but also for other AI applications, including Nvidia Broadcast. Intel also announced a new upscaling and image enhancement algorithm that it's calling XeSS: Xe Superscaling.

Intel didn't go deep into the details, but it's worth mentioning that Intel recently hired Anton Kaplanyan. He worked at Nvidia and played an important role in creating DLSS before heading over to Facebook to work on VR. It doesn't take much reading between the lines to conclude that he's likely doing a lot of the groundwork for XeSS now, and there are many similarities between DLSS and XeSS.

XeSS uses the current rendered frame, motion vectors, and data from previous frames and feeds all of that into a trained neural network that handles the upscaling and enhancement to produce a final image. That sounds basically the same as DLSS 2.0, though the details matter here, and we assume the neural network will end up with different results.

Intel did provide a demo using Unreal Engine showing XeSS in action (see below), and it looked good when comparing 1080p upscaled via XeSS to 4K against the native 4K rendering. Still, that was in one demo, and we'll have to see XeSS in action in actual shipping games before rendering any verdict.

More important than how it works will be how many game developers choose to use XeSS. They already have access to both DLSS and AMD FSR, which target the same problem of boosting performance and image quality. Adding a third option, from the newcomer to the dedicated GPU market no less, seems like a stretch for developers. However, Intel does offer a potential advantage over DLSS.

XeSS is designed to work in two modes. The highest performance mode utilizes the XMX hardware to do the upscaling and enhancement, but of course, that would only work on Intel's Arc GPUs. That's the same problem as DLSS, except with zero existing installation base, which would be a showstopper in terms of developer support. But Intel has a solution: XeSS will also work — in a lower performance mode — using DP4a instructions.

DP4a is widely supported by other GPUs, including Intel's previous generation Xe LP and multiple generations of AMD and Nvidia GPUs (Nvidia Pascal and later, or AMD Vega 20 and later), which means XeSS in DP4a mode will run on virtually any modern GPU. Support might not be as universal as AMD's FSR, which runs in shaders and basically works on any DirectX 11 or later capable GPU as far as we're aware, but quality could be better than FSR as well. It would also be very interesting if Intel supported Nvidia's Tensor cores, through DirectML or a similar library, but that wasn't discussed.
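For reference, DP4a itself is a simple operation: multiply four packed 8-bit integers from each of two values and accumulate the sum into a 32-bit result, which is what makes int8 inference workable on ordinary shader hardware. Here's a quick software emulation of the idea, just to illustrate the math rather than any particular vendor's intrinsic:

# Software emulation of a DP4a-style operation: a four-element int8 dot product
# accumulated into 32 bits. GPUs that support DP4a do this in a single instruction.
def dp4a(a, b, acc=0):
    assert len(a) == 4 and len(b) == 4, "DP4a operates on four packed 8-bit values"
    return acc + sum(x * y for x, y in zip(a, b))

print(dp4a([1, 2, 3, 4], [10, 20, 30, 40]))  # 300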

The big question will still be developer uptake. We'd love to see similar quality to DLSS 2.x, with support covering a broad range of graphics cards from all competitors. That's definitely something Nvidia is still missing with DLSS, as it requires an RTX card. But RTX cards already make up a huge chunk of the high-end gaming PC market, probably around 80% or more (depending on how you quantify high-end). So Intel basically has to start from scratch with XeSS, and that makes for a long uphill climb. On the bright side, it will provide the XeSS Developer Kit this month, giving it plenty of time to get things going. So it's possible (though unlikely) we could even see games implementing XeSS before the first Arc GPUs hit retail.

Arc Alchemist and GDDR6

(Image credit: Intel)

Intel hasn't commented on what sort of memory it will use with the various Arc Alchemist GPUs. Rumors say it will be GDDR6, probably running at 16Gbps… but that's just guesswork. At the same time, it isn't easy to imagine any other solution that would make sense. GDDR5 memory still gets used on some budget solutions, but the fastest chips top out at around 8Gbps — half of what GDDR6 offers.

There's also HBM2e as a potential solution, but while that can provide substantial increases to memory bandwidth, it would also significantly increase costs. The data center Xe HPC will use HBM2e, but none of the chip shots for Xe HPG show HBM stacks of memory, which again leads us back to GDDR6.

There will be multiple Xe HPG / Arc Alchemist solutions, with varying capabilities. The larger chip, which we've focused on so far, appears to have eight 32-bit GDDR6 channels, giving it a 256-bit interface. That means it might have 8GB or 16GB of memory on the top model, and we'll likely see trimmed down 192-bit and maybe 128-bit interfaces on lower-tier cards. The second Arc GPU Intel has shown only appears to have a 96-bit interface, probably for 6GB of GDDR6.
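The bandwidth estimates in our specs table fall out of the usual formula: bus width in bytes multiplied by the per-pin data rate. Here's a quick sketch, with the bus widths and memory speeds still being educated guesses:

# Memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gbs(256, 16))  # 512.0 GB/s for the rumored top configuration
print(memory_bandwidth_gbs(192, 14))  # 336.0 GB/s for a possible 192-bit mid-range card
print(memory_bandwidth_gbs(96, 14))   # 168.0 GB/s for the smaller die's 96-bit interface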

The Intel Xe DG1 card goes an entirely different route and uses a 128-bit LPDDR4X interface and 4GB VRAM, but that was a special case. It only works in select Intel motherboards, and performance was, frankly, underwhelming. We don't expect Intel to make the same mistake with Arc Alchemist.

Arc Alchemist Die Shots and Analysis 

(Image credit: Intel)

Much of what we've said so far isn't radically new information, but Intel did provide a few images and some video evidence that give a good indication of where it will land. So let's start with what we know for certain.

Intel will partner with TSMC and use the N6 process (an optimized variant of N7) for Arc Alchemist and Xe HPG. That means it's not technically competing for the same wafers AMD uses for its Zen 2, Zen 3, RDNA, and RDNA 2 chips. At the same time, AMD and Nvidia could also use N6 — its design rules are compatible with N7 — so Intel's use of TSMC certainly doesn't help AMD or Nvidia production capacity.

TSMC likely has a lot of tooling that overlaps between N6 and N7 as well, meaning it could run batches of N6, then batches of N7, switching back and forth. That means there's potential for this to cut into TSMC's ability to provide wafers to other partners. And speaking of wafers...

(Image credit: Intel)

Raja showed a wafer of Arc Alchemist chips at Intel Architecture Day. By snagging a snapshot from the video and zooming in, we can make out the individual chips on the wafer reasonably clearly. We've drawn lines to show how large the chips are, and based on our calculations, it looks like the larger Arc die will be around 24x16.5mm (~396mm^2), give or take 5–10% in each dimension. We counted the dies on the wafer as well, and there appear to be 144 whole dies, which would also correlate to a die size of around 396mm^2.
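That 144-die count lines up nicely with the standard dies-per-wafer approximation for a 300mm wafer. Here's the sanity check we ran, using our estimated dimensions (defects and scribe lines ignored):

import math

# Standard dies-per-wafer approximation for a round wafer; ignores yield and scribe-line losses.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(24 * 16.5))    # 145 candidate dies at ~396mm^2, close to the 144 we counted
print(dies_per_wafer(12.4 * 12.4))  # 405 candidate dies for the smaller ~154mm^2 chip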

That's not a massive GPU — Nvidia's GA102, for example, measures 628mm^2 and AMD's Navi 21 measures 520mm^2 — but it's not small, either. AMD's Navi 22 measures 335mm^2, and Nvidia's GA104 is 393mm^2, so Xe HPG would be larger than AMD's chip and similar in size to GA104 — but made on a smaller manufacturing process. To put it bluntly: size matters.

This may be Intel's first real dedicated GPU since the i740 back in the late '90s, but it has made many integrated solutions over the years, and it has spent the past several years building a bigger dedicated GPU team. Die size alone doesn't determine performance, but it gives a good indication of how much stuff can be crammed into a design. A chip that's around 400mm^2 suggests Intel intends to compete with at least the RTX 3070 and RX 6800, which is likely higher than some were expecting.

(Image credit: Intel)

Besides the wafer shot, Intel also provided these two die shots for Xe HPG. Yes, these are clearly two different GPU dies, numbered 00071 on the larger chip and 00329 on the smaller one. They're artistic renderings rather than actual die shots, but they do have some basis in reality.

The larger die has eight clusters in the center area that would correlate to the eight render slices. The memory interfaces are along the bottom edge and the bottom half of the left and right edges, and there are four 64-bit interfaces, for 256-bit total. Then there's a bunch of other stuff that's a bit more nebulous, for video encoding and decoding, display outputs, etc.

A 256-bit interface puts Intel's Arc GPUs in an interesting position. That's the same interface width as Nvidia's GA104 (RTX 3060 Ti/3070/3070 Ti) and AMD's Navi 21. Will Intel follow AMD's lead and use 16Gbps memory, or will it opt for more conservative 14Gbps memory like Nvidia? And could Intel take a cue from AMD's Infinity Cache? We don't know yet.

The smaller die looks to have two render slices, giving it just 128 Vector Engines. It also looks like it only has a 96-bit memory interface (the blocks in the lower-right edges of the chip), which could put it at a disadvantage relative to other cards. Then there's the other 'miscellaneous' bits and pieces. Obviously, performance will be substantially lower than the bigger chip, and this would be more of an entry-level part.

While the smaller chip should be slower than all of the current RX 6000 and RTX 30-series GPUs, it does put Intel in an interesting position. Depending on the clock speed, its two render slices should equate to around 4.1–4.7 TFLOPS of compute. That could still match or exceed the GTX 1650 Super, with additional features that the GTX 16-series GPUs lack, and hopefully Intel will give the GPU at least 6GB of memory. Nvidia and AMD haven't announced any new GPUs for the entry-level market, so this would be a welcome addition.

 Will Intel Arc Be Good at Mining Cryptocurrency? 

(Image credit: Intel)

With the current GPU shortages on the AMD and Nvidia side, fueled in part by cryptocurrency miners, people will inevitably want to know if Intel's Arc GPUs will face similar difficulties. Publicly, Intel has said precisely nothing about the mining potential of Xe Graphics. However, given the data center roots of Xe HP/HPC (machine learning, high-performance compute, etc.), Intel has probably at least looked into the possibilities mining presents. Still, it's certainly not making any marketing statements about the suitability of the architecture or GPUs for mining. But then you see the above image (from the full Intel Architecture Day presentation), with a physical Bitcoin and the text "Crypto Currencies," and you start to wonder.

Generally speaking, Xe might work fine for mining, but the most popular algorithms for GPU mining (Ethash mostly, but also Octopus and Kawpow) have performance that's predicated almost entirely on how much memory bandwidth a GPU has. For example, Intel's fastest Arc GPUs will likely use 16GB (maybe 8GB) of GDDR6 with a 256-bit interface. That would yield similar bandwidth to AMD's RX 6800/6800 XT/6900 XT as well as Nvidia's RTX 3060 Ti/3070, which would, in turn, lead to performance of around 60-ish MH/s for Ethereum mining.

Intel likely isn't going to use GDDR6X, but it might have some other features that would boost mining performance as well — if so, it hasn't spilled the beans yet. Nvidia has memory clocked at 14Gbps on the RTX 3060 Ti and RTX 3070, and (before the LHR models came out) it could do about 61–62 MH/s. AMD has faster 16Gbps memory, and after tuning, ends up at closer to 65 MH/s. That's realistically about where we'd expect the fastest Arc GPU to land, and that's only if the software works properly on the card.
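Because Ethash is almost entirely memory-bound, the crudest projection is to scale a known card's hash rate by the bandwidth ratio. Treat the sketch below as a rough ceiling rather than a benchmark; real cards come in below the straight-line estimate, as AMD's 512GB/s models landing closer to 65 MH/s show:

# Crude Ethash projection: hash rate scales roughly with memory bandwidth.
def projected_mhs(known_mhs, known_bandwidth_gbs, target_bandwidth_gbs):
    return known_mhs * target_bandwidth_gbs / known_bandwidth_gbs

# RTX 3070 (pre-LHR): ~61 MH/s on 448 GB/s. A 256-bit, 16Gbps Arc card would have 512 GB/s.
print(round(projected_mhs(61, 448, 512), 1))  # 69.7 MH/s, an optimistic upper bound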

Considering Arc GPUs won't even show up until early 2022, and given the volatility of cryptocurrencies, it's unlikely that mining performance has been an overarching concern for Intel during the design phase. Ethereum is by far the best coin for GPU mining right now, and most estimates say it represents over 90% of the GPU power used on mining. Ethereum 2.0 is slated to move from proof-of-work mining to proof-of-stake in December, which means no more GPU mining on that coin. This means building a GPU around Ethereum mining would be a bad idea right now. That still doesn't mean Arc Alchemist will be bad — or good — at mining, though.

Best-case (or worst-case, depending on your perspective), we anticipate mining performance will roughly match AMD's Navi 21 and Nvidia's GA104 GPUs. The mining software will likely need major updates and driver fixes to even work properly on future GPUs, though. I did give mining a shot using the Xe DG1, and it failed all of the NiceHashMiner benchmarks, but that's not saying much as most of the software didn't even detect a "compatible" GPU. At launch, I'd expect the Arc GPUs to be in a similar situation, but we'll have to see how things shape up over time.

Arc Alchemist Launch Date and Future GPU Plans

(Image credit: Intel)

The core specs for Arc Alchemist are shaping up nicely, and the use of TSMC N6 and potentially a 400mm^2 die with a 256-bit memory interface all point to a card that should be competitive with the current high-end GPUs from AMD and Nvidia — but behind the top performance models. As the newcomer, Intel needs the first Arc Alchemist GPUs to come out swinging. However, as discussed in our look at the Intel Xe DG1, there's much more to building a good graphics card than hardware — which is probably why the DG1 exists, to get the drivers and software ready for Arc.

Alchemist represents the first stage of Intel's dedicated GPU plans, and there's more to come. Along with the Alchemist codename, Intel revealed codenames for the next three generations of dedicated GPUs: Battlemage, Celestial, and Druid. Now we know our ABCs, next time won't you build a GPU with me? Those might not be the most awe-inspiring codenames, but we appreciate the logic of going in alphabetical order.

Tentatively, with Alchemist using TSMC N6, we might see a relatively fast turnaround for Battlemage. It could use TSMC's N5 process and ship by the end of 2022 — which would perhaps be wise, considering we expect to see Nvidia's Lovelace RTX 40-series GPUs next year, and probably AMD's RDNA 3 architecture as well. Shrink the process, add more cores, tweak a few things to improve throughput, and Battlemage could keep Intel on even footing with AMD and Nvidia. Or it could arrive woefully late and deliver less performance.

Intel needs to iterate on future architectures and get them out sooner rather than later if it hopes to put some pressure on AMD and Nvidia. For now, we finally have a relatively firm launch window of Q1 2022 — we expect we'll see and learn more at CES 2022, and it would be great to see Arc Alchemist launch in January rather than March.

Intel's steampunk Oblivion concept graphics card, coming in 2035. Or 1865. (Shown in early 2020 at CES.) (Image credit: Intel)

Final Thoughts on Intel Arc Alchemist 

The bottom line is that Intel has its work cut out for it. It may be the 800-pound gorilla of the CPU world, but it has stumbled even there and faltered over the past several years. AMD's Ryzen has gained ground, closed the gap, and is now ahead of Intel in most metrics, and Intel's manufacturing woes are apparently bad enough that it needs to turn to TSMC to make its dedicated GPU dreams come true.

As the graphics underdog, Intel needs to come out with aggressive performance and pricing, and then iterate and improve at a rapid pace. And please don't talk about how Intel sells more GPUs than AMD and Nvidia. Technically, that's true, but only if you count incredibly slow integrated graphics solutions that are at best sufficient for light gaming and office work. Then again, a huge chunk of PCs and laptops are only used for office work, which is why Intel has repeatedly stuck with weak GPU performance.

There are many aspects to Arc Alchemist that we don't know, like the final product names and card designs. For example, will the Arc cards have a blower fan, dual fans, or triple fans? It doesn't really matter, as any of those with the right card design can suffice. We also expect Intel to partner with other companies like Asus, Gigabyte, and MSI to help make the graphics cards as well, though how far those companies are willing to pursue Arc ultimately comes down to the most important factors: price, availability, and performance.

We're curious about the real-world ray tracing performance, compared to both AMD and Nvidia, but that's not the critical factor. The current design has a maximum of 32 ray tracing units (RTUs), but we know next to nothing about what those units can do. Each one might be similar in capabilities to AMD's ray accelerators, in which case Intel would come in pretty low on the ray tracing pecking order. Alternatively, each RTU might be the equivalent of several AMD ray accelerators, perhaps even faster than Nvidia's Ampere RT cores. While it could be any of those, we suspect it will probably land lower on RT performance rather than higher, leaving room for growth with future iterations.

Again, the critical elements are going to be performance, price, and availability. The Intel Xe DG1 is only about as fast as a good integrated graphics solution; for a dedicated graphics card, it's a complete bust. As a vehicle to pave the way for Arc, though? Maybe it did its job — Intel has certainly gotten better about driver updates during the past year or two, and the latest beta drivers are supposed to fix the issues I saw in my DG1 testing. But DG1 was also a 30W card, built off a GPU that primarily gets used in 15W laptops. Arc Alchemist sets its sights far higher.

We'll find out how Intel's discrete graphics card stacks up to the competition in the next six months or so. Ideally, we'll see the Intel Arc Alchemist at least match (or come close to) RTX 3070 and RX 6800 levels of performance, with prices that are lower than the AMD and Nvidia cards, and then get a bunch of cards onto retail shelves. We may also get entry level Arc cards that cost less than $200 and don't suck (meaning, not DG1). If Intel can do that, the GPU duopoly might morph into a triopoly in 2022.

Jarred Walton

Jarred Walton's (Senior Editor) love of computers dates back to the dark ages, when his dad brought home a DOS 2.3 PC and he left his C-64 behind. He eventually built his first custom PC in 1990 with a 286 12MHz, only to discover it was already woefully outdated when Wing Commander released a few months later. He holds a BS in Computer Science from Brigham Young University and has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • InvalidError
    It would be nice if the entry-level part really did launch at $200 retail and perform like a $200 GPU should perform scaled based on process density improvement since the GTX1650S.
  • VforV
    The amount of speculation and (?) in the Specification table is funny... :D

    If Arc's top GPU is gonna be 3070 Ti level (best case scenario) and it will cost $600 like the 3070 Ti, it's gonna be a big, BIG fail.

    I don't care their name is intel, they need to prove themselves in GPUs and to do that they either need to 1) beat the best GPUs of today 3090/6900XT (which they won't) or to 2) have better, a much better price/perf ratio compared to nvidia and AMD, and also great software.

    So option 2 is all that they have. 3070 TI perf level should not be more than $450, better yet $400. And anything lower in performance should also be lower in price, accordingly.

    Let's also not forget Ampere and RDNA2 refresh suposedly coming almost at the same time with intel ARC, Q1 2022 (or sooner, even Q4 2021, maybe). Yikes for intel.
  • JayNor
    Intel graphics is advertising a discussion today at 5:30ET on youtube, "Intel Arc Graphics Q&A with Martin Stroeve and Scott Wasson!"
  • nitts999
    VforV said:
    If Arc's top GPU is gonna be 3070 Ti level (best case scenario) and it will cost $600 like the 3070 Ti, it's gonna be a big, BIG fail.

    Why do people quote MSRPs? There is nowhere you can buy a 3070-TI for $600.

    If I could buy an Intel 3070ti equivalent for $600 I would do it in a heartbeat. Availability and "street price" are the only two things that matter. MSRPs for non-existent products are a waste of breath.
  • Yuka
    As I said before, what will make or break this card is not how good the silicon is in theory, but the driver support.

    I do not believe the Intel team in charge of polishing the drivers has had enough time; hell, not even 2 years after release is enough time to get them ready!

    I do wish for Intel to do well, since it'll be more competition in the segment, but I have to say I am also scared because it's Intel. Their strong-arming game is worse than nVidia. I'll love to see how nVidia feels on the receiving end of it in their own turf.

    Regards.
  • waltc3
    ZZZZz-z-z-z-z-zzzzz....wake me when you have an actual product to review. Until then, we really won't "know" anything, will we?....;) Right now it's just vaporware. It's not only Intel doing that either--there's is quite a bit of vaporware-ish writing about currently non-existent nVidia and AMD products as well. Sure is a slow product year....If this sounds uncharitable, sorry--I just don't get off on probables and maybes and could-be's...;)
  • InvalidError
    Yuka said:
    I do not believe the Intel team in charge of polishing the drivers has had enough time; hell, not even 2 years after release is enough time to get them ready!
    I enabled my i5's IGP to offload trivial stuff and stretch my GTX1050 until something decent appears for $200. Intel's Control Center appears to get fixated on the first GPU it finds so I can't actually configure UHD graphics with it. Not sure how such a bug/shortcoming in drivers can still exist after two years of Xe IGPs in laptos that often also have discrete graphics.

    Intel's drivers and related tools definitely need more work.
  • Yuka
    InvalidError said:
    I enabled my i5's IGP to offload trivial stuff and stretch my GTX1050 until something decent appears for $200. Intel's Control Center appears to get fixated on the first GPU it finds so I can't actually configure UHD graphics with it. Not sure how such a bug/shortcoming in drivers can still exist after two years of Xe IGPs in laptos that often also have discrete graphics.

    Intel's drivers and related tools definitely need more work.
    It baffles me how people that has an Intel iGPU has never actually had the experience of suffering trying to use it for daily stuff and slightly more advanced things than just power a single monitor (which, at times, it can't even do properly).

    I can't even call their iGPU software "barebones", because even basic functionality is sketchy at times. And for everything they've been promising, I wonder how their priorities will turn out to be. I hope Intel realized they won't be able to have the full cake and will have to make a call on either consumer side (games support and basic functionality) or their "pro"/advanced side of things they've been promising (encoding, AI, XeSS, etc).

    Yes, I'm being a negative Nancy, but that's fully justified. I'd love to be proven wrong though, but I don't see that happening :P

    Regards.
  • btmedic04
    VforV said:
    The amount of speculation and (?) in the Specification table is funny... :D

    If Arc's top GPU is gonna be 3070 Ti level (best case scenario) and it will cost $600 like the 3070 Ti, it's gonna be a big, BIG fail.

    I don't care their name is intel, they need to prove themselves in GPUs and to do that they either need to 1) beat the best GPUs of today 3090/6900XT (which they won't) or to 2) have better, a much better price/perf ratio compared to nvidia and AMD, and also great software.

    So option 2 is all that they have. 3070 TI perf level should not be more than $450, better yet $400. And anything lower in performance should also be lower in price, accordingly.

    Let's also not forget Ampere and RDNA2 refresh suposedly coming almost at the same time with intel ARC, Q1 2022 (or sooner, even Q4 2021, maybe). Yikes for intel.

    I disagree with this. In this market, if anyone can supply a GPU with 3070 Ti performance at $600 and keep production ahead of demand, they are going to sell like hotcakes regardless of who has the fastest GPU this generation. Seeing how far off Nvidia and AMD are from meeting demand currently gives Intel a massive opportunity provided that they can meet or exceed demand.

    Yuka said:
    As I said before, what will make or break this card is not how good the silicon is in theory, but the driver support.

    I do not believe the Intel team in charge of polishing the drivers has had enough time; hell, not even 2 years after release is enough time to get them ready!

    I do wish for Intel to do well, since it'll be more competition in the segment, but I have to say I am also scared because it's Intel. Their strong-arming game is worse than nVidia. I'll love to see how nVidia feels on the receiving end of it in their own turf.

    Regards.

    This right here is my biggest concern. How good will the drivers be and how quickly will updates come. Unlike AMD, Intel has the funding to throw at its driver team and developers, but money cant buy experience creating high performance drivers. only time can do that

    I remember the days of ATi, Nvidia, 3dFX, S3, and Matrox to name a few. Those were exciting times and I hope for all of our sake that intel succeeds with Arc. We as consumers need a third competitor at the high end. This duopoly has gone on long enough
  • ezst036
    btmedic04 said:
    In this market, if anyone can supply a GPU with 3070 Ti performance at $600 and keep production ahead of demand, they are going to sell like hotcakes......

    In the beginning this may not be possible. AFAIK Intel will source out of TSMC for this, which simply means more in-fighting for the same floor space in the same video card producing fabs.

    When Intel gets its own fabs ready to go and can add additional fab capacity for this specific use case that isn't now available today, that's when production can (aim toward?)stay ahead of demand, and the rest of what you said, I think is probably right.

    If TSMC is just reducing Nvidias and AMDs to make room for Intels on the fab floor, the videocard production equation isn't changing. - unless TSMC shoves someone else to the side in the CPU or any other production area. I suppose that's a thing.