The Arc A380 just about makes it to 60 fps across our 1080p medium test suite, but of course that means some games are falling below that mark — sometimes by quite a lot. Overall performance comes in just ahead of the RX 6400, with a slightly larger improvement over the GTX 1650. The RX 6500 XT and GTX 1650 Super remain significantly faster, however, and the meager 3% improvement from overclocking isn't going to narrow the gulf.
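As an aside on how those "overall" percentages are typically derived: assuming the suite-wide figure is a geometric mean of the per-game frame rates (a common convention for this kind of summary; the exact averaging method isn't stated here), the calculation looks like the sketch below, using made-up fps numbers rather than the review's data.

```python
# Illustration only: the per-game fps values below are invented, not measured results.
from math import prod

def geomean(values):
    """Geometric mean -- a common way to condense a multi-game test suite."""
    return prod(values) ** (1 / len(values))

a380_fps = [62.0, 55.0, 71.0, 48.0]    # hypothetical per-game averages
rx6400_fps = [58.0, 54.0, 66.0, 47.0]

lead = geomean(a380_fps) / geomean(rx6400_fps) - 1
print(f"A380 leads by {lead:.1%} overall")
```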
Checking out the individual game results, Borderlands 3 and Total War: Warhammer 3 are the biggest wins for the A380 in comparison to the RX 6400 and GTX 1650. The A380 also makes it to 30 fps or more in every game in our standard test suite. Microsoft Flight Simulator and Warhammer 3 are the only games that didn't average 60 fps or more, and both remained playable.
Looking at the AMD and Nvidia competition, the A380 does reasonably well in our selected games. The RX 6400 only manages to beat the A380 in Forza Horizon 5 (by 5%) and Red Dead Redemption 2 (9%), with everything else either tied or favoring the A380. Nvidia's GTX 1650, meanwhile, takes a 6% lead in Flight Simulator, but the A380 counters with more significant leads in Borderlands 3, Horizon Zero Dawn, Red Dead Redemption 2, and Warhammer 3.
One final point of interest is how the A380 fares with the Core i9-9900K. With the latest drivers, most of the differences between the i9-12900K and the i9-9900K are negligible. Overall, the 12900K is 5% faster, but there are a couple of larger gaps: Forza shows an 8% win for the newer platform and Horizon Zero Dawn shows a 16% lead, while the remaining six games differ by 5% or less.
Moving to 1080p ultra actually improves the A380's positioning relative to the GTX 1650 and RX 6400, mostly thanks to the extra VRAM. Overall, the A380 is 13% faster than the GTX 1650 and 27% faster than the RX 6400. It also just barely manages more than a 30 fps average across our test suite, where the 1650 and 6400 come up short.
In the individual games, if you're merely aiming for adequate performance (meaning 30 fps or more), half of the games we tested make that mark and half fall short. The games that don't get to 30 fps are Flight Simulator, Forza Horizon 5, Total War: Warhammer 3, and Watch Dogs Legion.
Where there was little difference between the A380 on the 9900K and 12900K at 1080p medium, the same can't be said for our 1080p ultra performance. Higher-resolution textures and other data mean the GPU has to access information over the PCIe interface more often, and with a PCIe 3.0 x8 interface that appears to be a problem in quite a few games. The 12900K is 20% faster at 1080p ultra overall, with Forza Horizon 5 showing a 40% improvement and Red Dead Redemption 2 running 71% faster. Only Flight Simulator and Warhammer 3 don't seem to care that much about the drop in interface speed.
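To put rough numbers on the interface difference (theoretical per-lane rates, not measured game traffic), the sketch below compares the A380's x8 link on a PCIe 3.0 platform like the 9900K testbed against PCIe 4.0 on the 12900K.

```python
# Theoretical PCIe throughput for an x8 link; per-lane figures are the standard
# post-encoding (128b/130b) rates, not measured values.
GB_PER_SEC_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}
LANES = 8  # the A380 only wires up eight PCIe lanes

for gen, rate in GB_PER_SEC_PER_LANE.items():
    print(f"{gen} x{LANES}: ~{rate * LANES:.1f} GB/s")
# PCIe 3.0 x8: ~7.9 GB/s  (Core i9-9900K testbed)
# PCIe 4.0 x8: ~15.8 GB/s (Core i9-12900K testbed)
```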
What about 1440p Ultra? Yeah, that's not going to work well unless you're playing less taxing games. Far Cry 6 and Horizon Zero Dawn both manage to break 30 fps on the A380, barely, but everything else falls well short — as in, 20 fps or less, with Total War: Warhammer 3 almost falling into the single digits.
It's perhaps interesting to note that the A380 does manage to pull ahead of the RX 6500 XT in our 1440p ultra test suite, by 5% overall. That's a Pyrrhic victory, and it's not just the extra VRAM helping out: the GTX 1650 Super, for example, also has just 4GB yet maintains its overall lead over the A380. Memory bandwidth, in other words, is what limits the RX 6500 XT at 1440p ultra.
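The reference memory specs illustrate the point. The bandwidth figures below use the standard formula (bus width times per-pin data rate) with each card's published memory configuration, so double-check against any specific board.

```python
# Reference memory configurations (vendor spec sheets; individual boards can differ).
# Bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8.
cards = {
    "Arc A380":       (96, 15.5),   # 6GB GDDR6
    "GTX 1650 Super": (128, 12.0),  # 4GB GDDR6
    "RX 6500 XT":     (64, 18.0),   # 4GB GDDR6 (plus a small 16MB Infinity Cache)
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits * gbps / 8:.0f} GB/s")
# Arc A380: 186 GB/s, GTX 1650 Super: 192 GB/s, RX 6500 XT: 144 GB/s
```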
Intel Arc A380 Ray Tracing Performance
With only eight RTUs (ray tracing units), we didn't really expect much from the Arc A380's ray tracing capabilities. However, it does end up being a bit better than AMD's RX 6500 XT and RX 6400… sort of.
The problem is that AMD's 4GB cards can't enable DXR in Control, while the A380 currently doesn't work with Minecraft. Minecraft is quite a bit more demanding than Control, but even if we omit those two games from our test suite, the A380 still pulls ahead of the RX 6500 XT, never mind the even slower RX 6400. Looking just at the four remaining games, for reference, the A380 averages 18.8 fps, the RX 6500 XT averages 12.7 fps, and the RX 6400 gets just 10.4 fps.
Does that matter much? For the A380, no, not really, but it does perhaps bode well for Intel's higher-spec Arc GPUs like the A580, A750, and A770. Those will of course cost quite a lot more than the A380, but we're already seeing potentially better ray tracing performance from just eight Intel RTUs compared to 12 and 16 AMD Ray Accelerators.
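A quick back-of-the-envelope normalization of those four-game averages by each GPU's ray tracing unit count shows why. It's crude (clocks, memory, and the rest of the GPU all matter too), but it lines up with the per-unit advantage suggested above.

```python
# Four-game DXR averages from above, divided by each card's RT unit count.
# Crude per-unit comparison only -- it ignores clock speeds and the rest of the GPU.
dxr = {
    "Arc A380":   (18.8, 8),   # avg fps, ray tracing units (RTUs)
    "RX 6500 XT": (12.7, 16),  # avg fps, Ray Accelerators
    "RX 6400":    (10.4, 12),
}
for name, (fps, units) in dxr.items():
    print(f"{name}: {fps / units:.2f} fps per RT unit")
# Roughly 2.3 for the A380 versus 0.8-0.9 for the two AMD cards.
```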
At the same time, looking at Nvidia's slowest RTX solution, the RTX 3050, shows just how much further AMD and Intel need to go before they can reach competitive levels of performance in demanding DXR games. The 3050 has 20 RT cores compared to the RX 6600's 28 Ray Accelerators, and the RTX 3050 is still 7% faster in our DXR suite. By way of reference, in non-DXR games, the RX 6600 outperforms the RTX 3050 by over 30%.
The RTX 3050 demolishes the A380 by 80% at 1080p medium, though it's also about twice as expensive. Even if the A380 does poorly with DXR compared to Nvidia, we can't help but be curious about how an Arc A750 with 24 RTUs might fare, not to mention the 32 RTUs on the A770. For that matter, even the rumored Arc A580 with 16 RTUs might be able to compete with the RTX 3050, but we'll have to wait and see.
- MORE: Best Graphics Cards
- MORE: GPU Benchmarks and Hierarchy
- MORE: All Graphics Content
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.
cyrusfox Thanks for putting up a review on this. I really am looking for Adobe Suite performance, Photoshop and Lightroom. My experience is that even with a top-of-the-line CPU (12900K), it chugs through some GPU-heavy tasks, and I was hoping ARC might already be optimized for that.
brandonjclark While it's pretty much what I expected, remember that Intel has DEEP DEEP pockets. If they stick with this division they'll work it out and pretty soon we'll have 3 serious competitors.
Giroro What settings were used for the CPU comparison encodes? I would think that the CPU encode should always be able to provide the highest quality, but possibly with unacceptable performance.
I'm also having a hard time reading the charts. Is the GTX 1650 the dashed hollow blue line, or the solid hollow blue line?
A good encoder at the lowest price is not a bad option for me to have. Although, I don't have much faith that Intel will get their drivers in a good enough state before the next generation of GPUs.
JarredWaltonGPU
Are you viewing on a phone or a PC? Because I know our mobile experience can be... lacking, especially for data-dense charts. On PC, you can click the arrow in the bottom-right to get the full-size charts, or at least get a larger view, where you can then click the "view original" option in the bottom-right. Here are the four line charts, in full resolution, if that helps:
https://cdn.mos.cms.futurecdn.net/dVSjCCgGHPoBrgScHU36vM.png
https://cdn.mos.cms.futurecdn.net/hGy9QffWHov4rY6XwKQTmM.png
https://cdn.mos.cms.futurecdn.net/d2zv239egLP9dwfKPSDh5N.png
https://cdn.mos.cms.futurecdn.net/PGkuG8uq25fNU7o7M8GbEN.png
The GTX 1650 is a hollow dark blue dashed line. The AMD GPU is the hollow solid line, CPU is dots, A380 is solid filled line, and Nvidia RTX 3090 Ti (or really, Turing encoder) is solid dashes. I had to switch to dashes and dots and such because the colors (for 12 lines in one chart) were also difficult to distinguish from each other, and I included the tables of the raw data just to help clarify what the various scores were if the lines still weren't entirely sensible. LOL
As for the CPU encoding, it was done with the same constraints as the GPU: single pass and the specified bitrate, which is generally how you would set things up for streaming (AFAIK, because I'm not really a streamer). 2-pass encoding can greatly improve quality, but of course it takes about twice as long and can't be done with livestreaming. I did not look into other options that might improve the quality at the cost of CPU encoding time, nor did I check whether there were other options that could improve the GPU encoding quality.
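For anyone unfamiliar with the single-pass versus two-pass distinction mentioned above, here's a minimal sketch of what the two approaches look like with ffmpeg's libx264 CPU encoder at a fixed bitrate. The file names, preset, and 8 Mbps target are placeholders, not the review's actual command lines or settings.

```python
# Hypothetical example only: paths, bitrate, and preset are placeholders.
import subprocess

SRC, BITRATE = "gameplay.mp4", "8M"

# Single pass at a specified bitrate -- the livestream-style constraint described above.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-preset", "medium",
                "-b:v", BITRATE, "-maxrate", BITRATE, "-bufsize", "16M",
                "-c:a", "copy", "out_1pass.mp4"], check=True)

# Two-pass: analyze the whole clip first, then encode. Better quality at the same
# bitrate, but roughly twice the time and impossible for live streaming.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-preset", "medium",
                "-b:v", BITRATE, "-pass", "1", "-an", "-f", "null", "-"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-preset", "medium",
                "-b:v", BITRATE, "-pass", "2", "-c:a", "copy", "out_2pass.mp4"], check=True)
```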
cyrusfox said: I really am looking for Adobe Suite performance, Photoshop and Lightroom.
I suspect Arc won't help much at all with Photoshop or Lightroom compared to whatever GPU you're currently using (unless you're using integrated graphics, I suppose). Adobe's CC apps have GPU-accelerated functions for certain tasks, but complex stuff still chugs pretty badly in my experience. If you want to export to AV1, though, I think there's a way to get that into Premiere Pro, and the Arc could greatly increase the encoding speed.
magbarn Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT. Stick a fork in Arc, it's done.
Giroro
JarredWaltonGPU said: Are you viewing on a phone or a PC? Because I know our mobile experience can be... lacking, especially for data-dense charts.
I'm viewing on PC; it's just that the graph legend shows a very similar blue oval for both cards.
JarredWaltonGPU
magbarn said: Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT.
Much of the die size probably gets taken up by XMX cores, QuickSync, DisplayPort 2.0, etc. But yeah, it doesn't seem particularly small considering the performance. I can't help but think that with fully optimized drivers, performance could improve another 25%, but who knows if we'll ever get such drivers?
waltc3 Considering what you had to work with, I thought this was a decent GPU review. Just a few points that occurred to me while reading...
I wouldn't be surprised to see Intel once again take its marbles and go home and pull the ARCs altogether, as Intel did decades back with its ill-fated acquisition of Real3D. They are probably hoping to push it at a loss at retail to get some of their money back, but I think they will be disappointed when that doesn't happen. As far as another competitor in the GPU markets goes, yes, having a solid competitor come in would be a good thing, indeed, but only if the product meant to compete actually competes. This one does not. ATi/AMD have decades of experience in the designing and manufacturing of GPUs, as does nVidia, and in the software they require, and the thought that Intel could immediately equal either company's products enough to compete--even after five years of R&D on ARC--doesn't seem particularly sound, to me. So I'm not surprised, as it's exactly what I thought it would amount to.
I wondered why you didn't test with an AMD CPU...was that a condition set by Intel for the review? Not that it matters, but it seems silly, and I wonder if it would have made a difference of some kind. I thought the review was fine as far as it goes, but one thing that I felt was unnecessarily confusing was the comparison of the A380 in "ray tracing" with much more expensive nVidia solutions. You started off restricting the A380 to the 1650/Super, which doesn't ray trace at all, and the entry level AMD GPUs which do (but not to any desirable degree, imo)--which was fine as they are very closely priced. But then you went off on a tangent with 3060s, 3050s, 2080s, etc. because of "ray tracing"--which I cannot believe the A380 is any good at doing at all.
The only thing I can say that might be a little illuminating is that Intel can call its cores and rt hardware whatever it wants to call them, but what matters is the image quality and the performance at the end of the day. I think Intel used the term "tensor core" to make it appear to be using "tensor cores" like those in the RTX 2000/3000 series, when they are not the identical tensor cores at all...;) I was glad to see the notation because it demonstrates that anyone can make his own "tensor core" as "tensor" is just math. I do appreciate Intel doing this because it draws attention to the fact that "tensor cores" are not unique to nVidia, and that anyone can make them, actually--and call them anything they want--like for instance "raytrace cores"...;)
JarredWaltonGPU
waltc3 said: I wouldn't be surprised to see Intel once again take its marbles and go home and pull the ARCs altogether, as Intel did decades back with its ill-fated acquisition of Real3D.
Intel seems committed to doing dedicated GPUs, and it makes sense. The data center and supercomputer markets all basically use GPU-like hardware. Battlemage is supposedly well underway in development, and if Intel can iterate and get the cards out next year, with better drivers, things could get a lot more interesting. It might lose billions on Arc Alchemist, but if it can pave the way for future GPUs that end up in supercomputers in five years, that will ultimately be a big win for Intel. It could have tried to make something less GPU-like and just gone for straight compute, but then porting existing GPU programs to the design would have been more difficult, and Intel might actually (maybe) think graphics is becoming important.
waltc3 said: I wondered why you didn't test with an AMD CPU...was that a condition set by Intel for the review? One thing that I felt was unnecessarily confusing was the comparison of the A380 in "ray tracing" with much more expensive nVidia solutions.
Intel set no conditions on the review. We purchased this card, via a go-between, from China — for WAY more than the card is worth, and then it took nearly two months to get things sorted out and have the card arrive. That sucked. If you read the ray tracing section, you'll see why I did the comparison. It's not great, but it matches an RX 6500 XT and perhaps indicates Intel's RTUs are better than AMD's Ray Accelerators, and maybe even better than Nvidia's Ampere RT cores — except Nvidia has a lot more RT cores than Arc has RTUs. I restricted testing to cards priced similarly, plus the next step up, which is why the RTX 2060/3050 and RX 6600 are included.
waltc3 said: I think Intel used the term "tensor core" to make it appear to be using "tensor cores" like those in the RTX 2000/3000 series, when they are not the identical tensor cores at all...;)
Tensor cores refer to a specific type of hardware matrix unit. Google has TPUs, and various other companies are also making tensor core-like hardware. TensorFlow is a popular tool for AI workloads, which is why the "tensor cores" name came into being, AFAIK. Intel calls them Xe Matrix Engines, but the same principles apply: lots of matrix math, focusing especially on multiply and accumulate, as that's what AI training tends to use. But tensor cores have literally nothing to do with "raytrace cores," which need to take DirectX Raytracing structures (or VulkanRT) to be at all useful.
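To make the "multiply and accumulate" point concrete, here's a generic sketch of the operation such matrix units are built around. It uses plain numpy rather than Intel's XMX or Nvidia's tensor core instructions, so treat it as illustrative only.

```python
import numpy as np

# The core matrix-engine operation: D = A @ B + C, typically with low-precision
# inputs (e.g., FP16) accumulated into a higher-precision result (e.g., FP32).
A = np.random.rand(16, 16).astype(np.float16)
B = np.random.rand(16, 16).astype(np.float16)
C = np.zeros((16, 16), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C  # multiply-accumulate
print(D.shape)  # (16, 16) -- dedicated matrix units process small tiles like this in hardware
```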
escksu The ray tracing shows good promise. The video encoder is the best. 3D performance is meh but still good enough for light gaming.
If its retail price is indeed what it shows, then I believe it will sell. Of course, Intel won't make much (if anything) from these cards.