Intel recently shared performance metrics for its new Arc A380 desktop GPU in 17 gaming titles, with direct comparisons to the GTX 1650 and RX 6400 — all tested on the same PC. On average, the A380 lost to both the GTX 1650 and the RX 6400, which will make it one of the slowest entry-level GPUs when it arrives on the US market. Even as a budget offering, the A380 will have a tough time making our best graphics cards list.
The A300 series is Intel's entry-level desktop GPU, using the smaller "ACM-G11" Arc Alchemist chip. Unlike the mobile A350M, however, the A380 has all eight of Intel's Xe GPU cores enabled, alongside the full 96-bit GDDR6 memory interface. That's the same core configuration as the entry-level Arc A370M mobile GPU, but with 50% more memory, 66% more memory bandwidth, and significantly higher GPU clocks that can reach up to 2.45 GHz.
TBP (typical board power) is also higher at 75W, and perhaps more, as Intel's Arc A380 will come in several variants. Cards that run at less than 75W can get by without a power connector and have a 2 GHz clock speed; cards with up to an 80W TBP will require at least a 6-pin power connector and can run at up to 2.25 GHz; and cards with an 87W or higher TBP can run at 2.35 GHz or more.
We don't know what card Intel used for the tests, and the Gunnir card images shown here with the 8-pin power connector are for reference purposes only. The test PC was equipped with a Core i5-12600K, 2x16GB DDR4-3200 memory, an MSI Z690-A Pro WiFi DDR4 motherboard (actually the same motherboard we use in our GPU testbed), and a 4TB M600 Pro XT SSD, running Windows 11.
For now, the Arc A380 is the only desktop GPU available to look at on Intel's Arc website. But according to previous driver leaks, we should expect Intel's A500 series and A700 series of desktop GPUs to arrive at some point. Here are the numbers, and again, these come straight from Intel's Arc A380 reviewer's guide; we're sharing them with permission while we attempt to get a card for our own in-depth testing. Take these figures with a healthy dose of skepticism, in other words, as most manufacturer-provided benchmarks attempt to show products in a better light.
| Game | Intel Arc A380 | GeForce GTX 1650 | Radeon RX 6400 |
| --- | --- | --- | --- |
| 17-game geometric mean (fps) | 96.4 | 114.5 | 105.0 |
| Age of Empires 4 | 80 | 102 | 94 |
| The Witcher 3 | 85 | 101 | 81 |
| Total War: Troy | 78 | 98 | 75 |
On average, the Arc A380 lost to the GTX 1650 by 19% and to the RX 6400 by 9%. On a game-by-game basis, the Arc A380 only beats the RX 6400 in four of the 17 titles, and beats the GTX 1650 in just one of them (Naraka: Bladepoint). There's also a three-way tie in NiZhan, where all three GPUs managed 200 fps, though we're not sure why Intel would even bother to include that particular benchmark, since it looks like there's a frame rate cap.
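As a sanity check on those percentages, here's a minimal sketch of how the geometric mean and the relative deltas work out, using only the figures from Intel's table above:

```python
# Minimal sketch: reproducing the relative-performance deltas quoted above
# from Intel's published 17-game geometric means (96.4, 114.5, 105.0 fps).
from math import prod

def geometric_mean(fps_values):
    # Geometric mean: the n-th root of the product of n values. It weights
    # each game's *relative* performance equally, unlike a plain average,
    # so one 200 fps outlier can't dominate the result.
    return prod(fps_values) ** (1 / len(fps_values))

a380, gtx1650, rx6400 = 96.4, 114.5, 105.0

# "Lost by X%" here means the competitor's mean is X% higher than the A380's.
gtx_lead = (gtx1650 / a380 - 1) * 100  # ~18.8%, rounded to 19%
rx_lead = (rx6400 / a380 - 1) * 100    # ~8.9%, rounded to 9%
print(f"GTX 1650 leads by {gtx_lead:.1f}%, RX 6400 by {rx_lead:.1f}%")
```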
Regardless, it isn't exactly encouraging to see the new Intel GPU getting beaten by an entry-level Nvidia GPU released over three years ago, and by an ultra-budget Radeon GPU that is literally a cut-down Navi 24 mobile chip slapped onto a graphics card PCB. Over the past few months, we've heard reports that Intel's graphics drivers play a significant role in gaming performance with these new A-series GPUs, with poor optimization being a big issue.
Perhaps Intel can turn things around and provide well-optimized gaming drivers in the near future, once its A-series lineup makes it to the rest of the world. Intel also recently showed expected performance figures for the higher-tier A700M mobile parts, which looked at least fairly capable. But if Intel has the same driver problems on its midrange A500 and flagship A700 series graphics cards, where gaming performance matters even more, its GPU division will face serious challenges in a market that's already quite competitive.
For the entry-level and mobile parts, it's not just gaming performance that Intel is hyping up. Arc includes the Xe media engine, which supports up to 8K encode and decode of AVC (H.264), HEVC (H.265), VP9, and AV1 — and Arc is currently the only GPU with hardware-accelerated AV1 encoding. Comparing the A380 against a Core i5-12600K CPU encode of an AV1 video, the A380 took less than a quarter of the time (53 seconds versus 234 seconds).
The Arc A380 was also faster in other video encoding scenarios. In an HEVC encode using DaVinci Resolve, Intel's Deep Link feature, which leverages the CPU's integrated graphics and the dedicated GPU together, allowed it to finish the task in 16 seconds, compared to 25 seconds on a GTX 1650 card. Interestingly, either the UHD Graphics 770 or the Arc A380 alone required 30 seconds, so encoding performance very nearly doubled thanks to Deep Link.
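Those encode-time claims work out as straightforward ratios; a quick sketch using only the seconds quoted above:

```python
# Quick sketch: the encode-time comparisons above, expressed as speedups.
cpu_av1_s, a380_av1_s = 234, 53        # AV1 encode: Core i5-12600K vs Arc A380
av1_speedup = cpu_av1_s / a380_av1_s   # ~4.4x, i.e. less than 1/4 the time

solo_hevc_s, deeplink_hevc_s = 30, 16  # HEVC encode: single GPU vs Deep Link
deeplink_speedup = solo_hevc_s / deeplink_hevc_s  # ~1.9x, "very nearly doubled"

print(f"AV1: {av1_speedup:.1f}x faster than CPU; "
      f"Deep Link: {deeplink_speedup:.2f}x faster than a single GPU")
```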
If you're more interested in the media capabilities, Arc might be a great option when it reaches the US market. For gamers, let's hope additional driver improvements can help narrow the gap Intel is currently showing.
Just compared the now 8-year-old 970 with the 1650, and they are around the same (the 970 is about 5% faster), so yes, my 970 is faster than Intel's Arc A380.
@artk2219 The best I could work out is around £125 (or $154, converted from 1,030 Chinese Yuan). Someone had listed one for ¥3,999 including VAT, which is roughly $595 USD.
Agreed. It's not like the card hit 45 fps while the others were at 70+. It was never below 60 fps, and it was often close to the others. If priced right, it's not a horrible card, more so if you factor in driver improvements. This is their entry-level card, and their first commercial attempt. Good first try, Intel. Keep working on it.
If Intel prices this card around $150, then sure, it won't exactly be a particularly attractive offering. But closer to $100 it would be a much more compelling option than what the competition is currently offering in the "budget" category. If Intel wants to make inroads into the dedicated GPU market, I would hope they would bring competitive pricing to the table, especially considering the amount of uncertainty surrounding the hardware and its long-term driver support. And I get the impression that might be what they're planning. I can't see them releasing official performance numbers that show their card performing almost universally slower than the competition unless pricing is significantly better to compensate.
Likewise, The Witcher 3, now a 7-year-old game, generally can't manage 60 fps on a 1650 with the settings turned up at 1080p, so the 101 fps Intel shows for that card is similarly suspect. The same goes for some other games, where they are showing around double the framerates one might expect. So they've obviously adjusted the settings to keep the numbers high. You can do that on low-end integrated graphics too, if you're willing to run games at low settings at SD resolutions, so the exact frame rate numbers don't tell us much aside from the relative performance compared to the other cards.