Tom's Hardware Verdict
The Intel Arc A380 provides too little too late for most people. It has some impressive video encoding hardware, but the gaming performance ranges from acceptable to abysmal while Intel continues to hammer on the drivers. It's not the worst graphics card we've seen, but there are plenty of better options.
Pros
- Decently priced
- Excellent video encoding hardware
- 6GB VRAM
- A third GPU company

Cons
- Spotty performance in games
- Only matches the GTX 1060 6GB from 2016
- What's with the 8-pin power connector?
- Driver concerns and broken features
The Intel Arc A380 has to be one of the worst graphics card launches in history — not the hardware itself, necessarily, but the retail launch of the hardware. By all indications, Intel knew the drivers were broken when the hardware was ready for release earlier this year. Rather than taking sufficient time to fix the drivers before the retail launch, and with the clock ticking as new AMD and Nvidia GPUs are on the horizon, Intel decided to ship its Arc GPUs first in China — likely not the sort of approach a company would take if the product were worthy of making our list of the best graphics cards.
Several months later, after plenty of negative publicity courtesy of GPUs that made their way to other shores, and with numerous driver updates come and gone, Arc A380 has officially launched in the US with a starting price of $139. The single offering on Newegg sold out and is currently back ordered, but that's likely more to do with limited supplies than high demand. Still, the A380's not all bad, and we're happy to see Team Blue rejoin the dedicated GPU market for the first time in over 24 years. (And no, I don't really count the Intel DG1 from last year, since it only worked on specific motherboards.)
How does the Arc A380 stack up to competing AMD and Nvidia GPUs, and what's all the hype about AV1 hardware encoding acceleration? You can see where it lands in our GPU benchmarks hierarchy, which if you want a spoiler is… not good. But let's get to the details.
Arc Alchemist Architecture Recap
We've provided extensive coverage on Intel's Arc Alchemist architecture, dating back to about one year ago. At the time we first wrote that piece, we were anticipating a late 2021 or early 2022 launch. That morphed into a planned March 2022 launch, then eventually a mid-2022 release — and it's not even a full release, at least not yet. Arc A380 is merely the first salvo, at the very bottom of the price and performance ladder. We've seen plenty of hints of the faster Arc A750, which appears to be close to RTX 3060 performance based on Intel's own benchmarks, and that should launch within the next month or so. What about the faster still Arc A770 or mid-tier Arc A580 and other products? Only time will tell.
Arc Alchemist represents a divergence from Intel's previous graphics designs. There's probably plenty of overlap in certain elements, but Intel has changed names for some of the core building blocks. Gone are the "Execution Units (EUs)," which are now called Vector Engines (VEs). Each VE can compute eight FP32 operations per cycle, which gets loosely translated into "GPU cores" or GPU shaders and is roughly equivalent to the AMD and Nvidia shaders.
Intel groups 16 VEs into a single Xe-Core, which also includes other functionality. Each Xe-Core thus has 128 shader cores and roughly translates as equivalent to an AMD Compute Unit (CU) or Nvidia Streaming Multiprocessor (SM). They're basically all SIMD (single instruction multiple data) designs, and like the competition, Arc Alchemist has enhanced the shaders to meet the full DirectX 12 Ultimate feature set.
That naturally means having ray tracing hardware incorporated into the design, and Intel has one Ray Tracing Unit (RTU) per Xe-Core. The exact details of the ray tracing hardware aren't entirely clear yet, though based on testing each Intel RTU might match up decently against an Nvidia Ampere RT core.
Intel didn't stop there. Alongside the VEs and RTUs and other typical graphics hardware, Intel also added Matrix Engines, which it calls XMX Engines (Xe Matrix eXtensions). These are similar in principle to Nvidia's Tensor cores and are designed to crunch through lots of less precise data for machine learning and other uses. An XMX Engine is 1024 bits wide and can process either 64 FP16 operations or 128 INT8 operations per cycle, giving Arc GPUs a relatively large amount of compute power.
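To make those building blocks concrete, here's a minimal sketch of how the counts multiply out for the A380. The eight Xe-Cores, 128 XMX Engines, and 2450 MHz boost clock come from the spec table below, and each multiply-accumulate is counted as two operations, which is how the table's FP16 figure works out:

```python
# Arc Alchemist building blocks, using the A380's configuration
XE_CORES = 8                # Xe-Cores in the Arc A380 (per the spec table below)
VES_PER_XE_CORE = 16        # Vector Engines per Xe-Core
FP32_LANES_PER_VE = 8       # FP32 operations per Vector Engine per cycle
XMX_PER_XE_CORE = 16        # one XMX Engine per Vector Engine (128 total / 8 Xe-Cores)
FP16_MACS_PER_XMX = 64      # 1024-bit XMX Engine: 64 FP16 or 128 INT8 ops per cycle
BOOST_CLOCK_HZ = 2.45e9     # 2450 MHz boost

shaders = XE_CORES * VES_PER_XE_CORE * FP32_LANES_PER_VE     # 1024 "GPU cores"
xmx_engines = XE_CORES * XMX_PER_XE_CORE                     # 128 matrix engines
# Counting each multiply-accumulate as two FLOPS reproduces the table's FP16 figure
fp16_tflops = xmx_engines * FP16_MACS_PER_XMX * 2 * BOOST_CLOCK_HZ / 1e12
print(shaders, xmx_engines, round(fp16_tflops, 1))           # 1024 128 40.1
```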
Intel Arc A380 Specifications
With that brief overview of the architecture out of the way, here are the specifications for the Arc A380, compared to a couple of competing AMD and Nvidia GPUs. While we provide theoretical performance here, remember that not all teraflops and teraops are created equal. We need real-world testing to see what sort of actual performance the architecture can deliver.
Graphics Card | Arc A380 | RX 6500 XT | RX 6400 | GTX 1650 Super | GTX 1650 |
---|---|---|---|---|---|
Architecture | ACM-G11 | Navi 24 | Navi 24 | TU116 | TU117 |
Process Technology | TSMC N6 | TSMC N6 | TSMC N6 | TSMC 12FFN | TSMC 12FFN |
Transistors (Billion) | 7.2 | 5.4 | 5.4 | 6.6 | 4.7 |
Die size (mm^2) | 157 | 107 | 107 | 284 | 200 |
SMs / CUs / Xe-Cores | 8 | 16 | 12 | 20 | 14 |
GPU Cores (Shaders) | 1024 | 1024 | 768 | 1280 | 896 |
Tensor Cores | 128 | — | — | — | — |
Ray Tracing 'Cores' | 8 | 16 | 12 | — | — |
Base Clock (MHz) | 2000 | 2310 | 1923 | 1530 | 1485 |
Boost Clock (MHz) | 2450 | 2815 | 2321 | 1725 | 1665 |
VRAM Speed (Gbps) | 15.5 | 18 | 16 | 12 | 8 |
VRAM (GB) | 6 | 4 | 4 | 4 | 4 |
VRAM Bus Width | 96 | 64 | 64 | 128 | 128 |
ROPs | 32 | 32 | 32 | 48 | 32 |
TMUs | 64 | 64 | 48 | 80 | 56 |
TFLOPS FP32 (Boost) | 5 | 5.8 | 3.6 | 4.4 | 3 |
TFLOPS FP16 (XMX/Tensor if Available) | 40 | 11.6 | 7.2 | 8.8 | 6 |
Bandwidth (GBps) | 186 | 144 | 128 | 192 | 128 |
Video Encoding | H.264, H.265, AV1, VP9 | — | — | H.264, H.265 (Turing) | H.264, H.265 (Volta) |
TDP (watts) | 75 | 107 | 53 | 100 | 75 |
Launch Date | Jun 2022 | Jan 2022 | Jan 2022 | Nov 2019 | Apr 2019 |
Launch Price | $139 | $199 | $159 | $159 | $149 |
On paper, Intel's Arc A380 basically competes against AMD's RX 6500 XT and RX 6400, or Nvidia's GTX 1650 Super and GTX 1650. It's priced slightly lower than the competition, especially looking at current online prices for new cards, with roughly similar features. There are some important qualifications to note, however.
Nvidia doesn't have ray tracing hardware below the RTX 3050 (or RTX 2060). Similarly, none of the AMD or Nvidia GPUs in this segment support tensor hardware either, giving Intel a potential advantage in deep learning and AI applications — we've included FP16 throughput for the GPU cores on the AMD and Nvidia cards by way of reference, though that's not entirely apples-to-apples.
Intel is the only GPU company that currently has AV1 and VP9 hardware accelerated video encoding. We're expecting AMD and Nvidia to add AV1 support to their upcoming RDNA 3 and Ada architectures, and possibly VP9 as well, but we don't have official confirmation on how that will play out. We'll look at encoding performance and quality later in this review as well, though note that the GTX 1650 uses Nvidia's older NVENC hardware that delivers a lower quality output than the newer Turing (and Ampere) version.
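If you want to try the AV1 hardware encoder yourself, a minimal sketch along these lines should work, assuming an FFmpeg build new enough to include the av1_qsv encoder (roughly 5.1 or later) plus Intel's media driver and oneVPL runtime; the file names and bitrate are placeholders, not part of our test setup:

```python
# Hypothetical one-pass AV1 hardware encode through FFmpeg's QSV path on Arc.
# Assumes an FFmpeg build that includes the av1_qsv encoder plus Intel's media runtime.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",      # placeholder source clip
    "-c:v", "av1_qsv",      # AV1 encode on the Arc hardware via QuickSync
    "-b:v", "6M",           # single-pass target bitrate, similar to a streaming setup
    "-c:a", "copy",         # pass the audio through untouched
    "output_av1.mkv",       # placeholder output name
]
subprocess.run(cmd, check=True)
```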
The Arc A380 has theoretical compute performance of 5.0 teraflops, which puts it slightly behind the RX 6500 XT but ahead of everything else. It's also the only GPU in this price class to ship with 6GB of GDDR6 memory, on a 96-bit memory interface. That gives the A380 more raw memory bandwidth than the AMD cards, which lean on Infinity Cache to compensate for their narrow bus, but less than Nvidia's GPUs. Power use targets 75W, though overclocked cards can exceed that, just like with AMD and Nvidia GPUs.
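For anyone who wants to check the headline figures, here's a minimal sketch of how the table's FP32 and bandwidth numbers fall straight out of the specs, counting a fused multiply-add as two FLOPS:

```python
# Theoretical FP32 throughput and memory bandwidth for the Arc A380
shaders = 1024
boost_clock_hz = 2.45e9
fp32_tflops = shaders * 2 * boost_clock_hz / 1e12       # FMA counts as 2 FLOPS -> ~5.0 TFLOPS

bus_width_bits = 96
gddr6_rate_gtps = 15.5                                  # per-pin transfer rate in GT/s
bandwidth_gb_s = bus_width_bits / 8 * gddr6_rate_gtps   # 12 bytes per transfer x 15.5 -> 186 GB/s
print(round(fp32_tflops, 2), round(bandwidth_gb_s))     # 5.02 186
```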
The ray tracing capabilities are harder to pin down. To quickly recap, Nvidia's Turing architecture on the RTX 20-series GPUs had full hardware ray tracing capabilities, and each RT core can do one ray/triangle intersection calculation per cycle, plus there's hardware support for BVH (bounding volume hierarchy) traversal. It's not clear how many ray/box BVH intersections per cycle the RT cores manage, as Nvidia to my knowledge hasn't provided any specific number.
Nvidia's Ampere architecture added a second ray/triangle intersection unit to the RT cores, potentially doubling the throughput. (It seems the Turing BVH hardware was "faster" than the ray/triangle hardware in most cases, so Ampere focused on improving the triangle rate.) In practice, Nvidia says Ampere's RT cores are typically 75% faster than Turing's RT cores, as Ampere can't always fill all the ray/triangle execution slots.
AMD's RDNA 2 architecture handles things a bit differently. It can do one ray/triangle intersection calculation per cycle on each Ray Accelerator, basically like Turing. However, it uses GPU shaders (technically texture units) for BVH traversal, at a rate of four ray/box intersections per cycle. That rate isn't too bad in theory, considering the number of texture units in RDNA 2, but the BVH work ends up conflicting with other shader and texture work. Ultimately, it makes AMD's current Ray Accelerators slower and less efficient than Nvidia's RT cores, and perhaps more memory intensive as well (judging by real-world performance).
Intel's RTUs are similar to Nvidia's RT cores in that they can do both ray/box intersections and ray/triangle intersections in hardware. The above video explains things in more detail, but the raw throughput is up to 12 ray/box BVH intersections per cycle and one ray/triangle intersection per cycle, per RTU. Intel also has a dedicated BVH cache to improve hit rates and performance, and a Thread Sorting Unit that optimizes the output from the RTUs to better match shading workloads for the Xe-Cores.
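As a back-of-the-envelope illustration, here's a minimal sketch comparing theoretical peak intersection rates, using the per-cycle figures above and each card's boost clock. These are paper peaks that ignore caches, BVH quality, and shader contention, so treat them as illustration rather than prediction:

```python
# Theoretical peak ray intersection rates, in billions of tests per second
def peak_gtests(units, tests_per_cycle, boost_clock_ghz):
    return units * tests_per_cycle * boost_clock_ghz

# Arc A380: 8 RTUs, 1 ray/triangle and up to 12 ray/box tests per RTU per cycle
a380_tri = peak_gtests(8, 1, 2.45)        # ~19.6 G ray/triangle tests per second
a380_box = peak_gtests(8, 12, 2.45)       # ~235 G ray/box tests per second

# RX 6500 XT: 16 Ray Accelerators, 1 ray/triangle each, 4 ray/box via the texture units
rx6500xt_tri = peak_gtests(16, 1, 2.815)  # ~45 G ray/triangle tests per second
rx6500xt_box = peak_gtests(16, 4, 2.815)  # ~180 G ray/box tests per second

print(a380_tri, a380_box, rx6500xt_tri, rx6500xt_box)
```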
Intel makes the claim, more or less, that its RTUs are actually more capable than Nvidia's Ampere RT cores, which would also mean the RTUs are better than Turing and RDNA 2 as well. To prove this point, sort of, Intel showed ray tracing performance in 17 games, pitting the Arc A770 against the RTX 3060.
Overall, Intel shows ray tracing performance on the A770 that's about 12% faster than the RTX 3060. Of course that's not the A380, and there are plenty of other factors that go into gaming performance as we're not doing pure ray tracing yet. The A770 also has 32 RTUs compared to the RTX 3060's 30 RT cores. Still, Intel's RTUs sound pretty decent on paper.
The thing is, with only eight RTUs, the A380 definitely won't be a ray tracing powerhouse. Nvidia, for example, has 20 or more RT cores in its RTX lineup (16 if you include the mobile RTX 3050, Nvidia's slowest RTX chip), while AMD goes as low as 12 Ray Accelerators in its RX 6000-series parts. Integrated RDNA 2 implementations like the Steam Deck can have as few as eight RAs, though RT performance understandably suffers quite a lot. Not that you need ray tracing, even four years after hardware first supported the functionality.
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.
cyrusfox Thanks for putting up a review on this. I really am looking for Adobe Suite performance, Photoshop and Lightroom. My experience is even with a top-of-the-line CPU (12900K) it chugs through some GPU-heavy tasks, and I was hoping Arc might already be optimized for that.
brandonjclark While it's pretty much what I expected, remember that Intel has DEEP, DEEP pockets. If they stick with this division, they'll work it out and pretty soon we'll have three serious competitors.
Giroro What settings were used for the CPU comparison encodes? I would think that the CPU encode should always be able to provide the highest quality, but possibly with unacceptable performance.
I'm also having a hard time reading the charts. Is the GTX 1650 the dashed hollow blue line, or the solid hollow blue line?
A good encoder at the lowest price is not a bad option for me to have. Although, I don't have much faith that Intel will get their drivers in a good enough state before the next generation of GPUs.
JarredWaltonGPU
Giroro said: What settings were used for the CPU comparison encodes? I would think that the CPU encode should always be able to provide the highest quality, but possibly with unacceptable performance. I'm also having a hard time reading the charts. Is the GTX 1650 the dashed hollow blue line, or the solid hollow blue line? A good encoder at the lowest price is not a bad option for me to have. Although, I don't have much faith that Intel will get their drivers in a good enough state before the next generation of GPUs.
Are you viewing on a phone or a PC? Because I know our mobile experience can be... lacking, especially for data-dense charts. On PC, you can click the arrow in the bottom-right to get the full-size charts, or at least get a larger view, from which you can then click the "view original" option in the bottom-right. Here are the four line charts, in full resolution, if that helps:
https://cdn.mos.cms.futurecdn.net/dVSjCCgGHPoBrgScHU36vM.png
https://cdn.mos.cms.futurecdn.net/hGy9QffWHov4rY6XwKQTmM.png
https://cdn.mos.cms.futurecdn.net/d2zv239egLP9dwfKPSDh5N.png
https://cdn.mos.cms.futurecdn.net/PGkuG8uq25fNU7o7M8GbEN.png
The GTX 1650 is a hollow dark blue dashed line. The AMD GPU is the hollow solid line, CPU is dots, A380 is solid filled line, and Nvidia RTX 3090 Ti (or really, Turing encoder) is solid dashes. I had to switch to dashes and dots and such because the colors (for 12 lines in one chart) were also difficult to distinguish from each other, and I included the tables of the raw data just to help clarify what the various scores were if the lines still weren't entirely sensible. LOL
As for the CPU encoding, it was done with the same constraints as the GPU: single pass and the specified bitrate, which is generally how you would set things up for streaming (AFAIK, because I'm not really a streamer). 2-pass encoding can greatly improve quality, but of course it takes about twice as long and can't be done with livestreaming. I did not look into other options that might improve the quality at the cost of CPU encoding time, and I also didn't look if there were other options that could improve the GPU encoding quality.
cyrusfox said: Thanks for putting up a review on this. I really am looking for Adobe Suite performance, Photoshop and Lightroom. My experience is even with a top-of-the-line CPU (12900K) it chugs through some GPU-heavy tasks, and I was hoping Arc might already be optimized for that.
I suspect Arc won't help much at all with Photoshop or Lightroom compared to whatever GPU you're currently using (unless you're using integrated graphics, I suppose). Adobe's CC apps have GPU-accelerated functions for certain tasks, but complex stuff still chugs pretty badly in my experience. If you want to export to AV1, though, I think there's a way to get that into Premiere Pro, and the Arc could greatly increase the encoding speed.
magbarn Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT. Stick a fork in Arc, it's done.
Giroro
JarredWaltonGPU said: Are you viewing on a phone or a PC? Because I know our mobile experience can be... lacking, especially for data-dense charts.
I'm viewing on PC, just the graph legend shows a very similar blue oval for both cards.
JarredWaltonGPU
magbarn said: Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT. Stick a fork in Arc, it's done.
Much of the die size probably gets taken up by XMX cores, QuickSync, DisplayPort 2.0, etc. But yeah, it doesn't seem particularly small considering the performance. I can't help but think with fully optimized drivers, performance could improve another 25%, but who knows if we'll ever get such drivers?
waltc3 Considering what you had to work with, I thought this was a decent GPU review. Just a few points that occurred to me while reading...
I wouldn't be surprised to see Intel once again take its marbles and go home and pull the ARCs altogether, as Intel did decades back with its ill-fated acquisition of Real3D. They are probably hoping to push it at a loss at retail to get some of their money back, but I think they will be disappointed when that doesn't happen. As far as another competitor in the GPU markets goes, yes, having a solid competitor come in would be a good thing, indeed, but only if the product meant to compete actually competes. This one does not. ATi/AMD have decades of experience in the designing and manufacturing of GPUs, as does nVidia, and in the software they require, and the thought that Intel could immediately equal either company's products enough to compete--even after five years of R&D on ARC--doesn't seem particularly sound, to me. So I'm not surprised, as it's exactly what I thought it would amount to.
I wondered why you didn't test with an AMD CPU...was that a condition set by Intel for the review? Not that it matters, but It seems silly, and I wonder if it would have made a difference of some kind. I thought the review was fine as far it goes, but one thing that I felt was unnecessarily confusing was the comparison of the A380 in "ray tracing" with much more expensive nVidia solutions. You started off restricting the A380 to the 1650/Super, which doesn't ray trace at all, and the entry level AMD GPUs which do (but not to any desirable degree, imo)--which was fine as they are very closely priced. But then you went off on a tangent with 3060's 3050's, 2080's, etc. because of "ray tracing"--which I cannot believe the A380 is any good at doing at all.
The only thing I can say that might be a little illuminating is that Intel can call its cores and rt hardware whatever it wants to call them, but what matters is the image quality and the performance at the end of the day. I think Intel used the term "tensor core" to make it appear to be using "tensor cores" like those in the RTX 2000/3000 series, when they are not the identical tensor cores at all...;) I was glad to see the notation because it demonstrates that anyone can make his own "tensor core" as "tensor" is just math. I do appreciate Intel doing this because it draws attention to the fact that "tensor cores" are not unique to nVidia, and that anyone can make them, actually--and call them anything they want--like for instance "raytrace cores"...;) -
JarredWaltonGPU
waltc3 said: I wouldn't be surprised to see Intel once again take its marbles and go home and pull the ARCs altogether, as Intel did decades back with its ill-fated acquisition of Real3D. They are probably hoping to push it at a loss at retail to get some of their money back, but I think they will be disappointed when that doesn't happen. As far as another competitor in the GPU markets goes, yes, having a solid competitor come in would be a good thing, indeed, but only if the product meant to compete actually competes. This one does not. ATi/AMD have decades of experience in the designing and manufacturing of GPUs, as does nVidia, and in the software they require, and the thought that Intel could immediately equal either company's products enough to compete--even after five years of R&D on ARC--doesn't seem particularly sound, to me. So I'm not surprised, as it's exactly what I thought it would amount to.
I wondered why you didn't test with an AMD CPU...was that a condition set by Intel for the review? Not that it matters, but it seems silly, and I wonder if it would have made a difference of some kind. I thought the review was fine as far as it goes, but one thing that I felt was unnecessarily confusing was the comparison of the A380 in "ray tracing" with much more expensive nVidia solutions. You started off restricting the A380 to the 1650/Super, which doesn't ray trace at all, and the entry level AMD GPUs which do (but not to any desirable degree, imo)--which was fine as they are very closely priced. But then you went off on a tangent with 3060's, 3050's, 2080's, etc. because of "ray tracing"--which I cannot believe the A380 is any good at doing at all.
Intel seems committed to doing dedicated GPUs, and it makes sense. The data center and supercomputer markets all basically use GPU-like hardware. Battlemage is supposedly well underway in development, and if Intel can iterate and get the cards out next year, with better drivers, things could get a lot more interesting. It might lose billions on Arc Alchemist, but if it can pave the way for future GPUs that end up in supercomputers in five years, that will ultimately be a big win for Intel. It could have tried to make something less GPU-like and just gone for straight compute, but then porting existing GPU programs to the design would have been more difficult, and Intel might actually (maybe) think graphics is becoming important.
Intel set no conditions on the review. We purchased this card, via a go-between, from China — for WAY more than the card is worth, and then it took nearly two months to get things sorted out and have the card arrive. That sucked. If you read the ray tracing section, you'll see why I did the comparison. It's not great, but it matches an RX 6500 XT and perhaps indicates Intel's RTUs are better than AMD's Ray Accelerators, and maybe even better than Nvidia's Ampere RT cores — except Nvidia has a lot more RT cores than Arc has RTUs. I restricted testing to cards priced similarly, plus the next step up, which is why the RTX 2060/3050 and RX 6600 are included.
waltc3 said: The only thing I can say that might be a little illuminating is that Intel can call its cores and rt hardware whatever it wants to call them, but what matters is the image quality and the performance at the end of the day. I think Intel used the term "tensor core" to make it appear to be using "tensor cores" like those in the RTX 2000/3000 series, when they are not the identical tensor cores at all...;) I was glad to see the notation because it demonstrates that anyone can make his own "tensor core" as "tensor" is just math. I do appreciate Intel doing this because it draws attention to the fact that "tensor cores" are not unique to nVidia, and that anyone can make them, actually--and call them anything they want--like for instance "raytrace cores"...;)
Tensor cores refer to a specific type of hardware matrix unit. Google has TPUs, and various other companies are also making tensor core-like hardware. Tensorflow is a popular tool for AI workloads, which is why the "tensor cores" name came into being AFAIK. Intel calls them Xe Matrix Engines, but the same principles apply: lots of matrix math, focusing especially on multiply and accumulate as that's what AI training tends to use. But tensor cores have literally nothing to do with "raytrace cores," which need to take DirectX Raytracing structures (or VulkanRT) to be at all useful.
escksu The ray tracing shows good promise. The video encoder is the best. 3D performance is meh but still good enough for light gaming.
If its retail price is indeed what it shows, then I believe it will sell. Of course, Intel won't make much (if any) from these cards.