
'Overclocked' Intel Arc A380 Shows Impressive Gains

Overclocking enthusiast Pro Hi-Tech released a YouTube video showcasing Intel's new Arc A380 and how the GPU behaves under "overclocking". We put that in quotes because this isn't traditional GPU overclocking in the same way as we'd approach the best graphics cards. Surprisingly, the A380 showed massive performance gains from an increased power limit and voltage offset, almost putting it into another GPU class altogether.

Pro Hi-Tech (you'll need to turn on auto-captioning and set the appropriate language if, like us, you don't understand Russian) took a somewhat unorthodox approach to the A380 in that he didn't touch the core clocks at all. What he did change were the power targets and voltage offsets in Intel's own graphics utility, with a setting of 55% on the "GPU Performance Boost" slider and a +255 mV voltage offset.

In testing, this gave the Arc A380 a 100–150 MHz GPU core overclock, a meager 4–6% improvement. However, power use increased from around 35W to roughly 50–55W, a far larger 43–57% jump. Could the power limit be yet another factor holding back the Arc A380's performance? It certainly appears so.
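Those two jumps are easy to double-check. Here's a quick sketch (the wattages are the figures reported in the video; the helper function is just ours for illustration):

```python
def pct_increase(before, after):
    """Percentage increase going from `before` to `after`."""
    return (after / before - 1) * 100

# Reported power draw: ~35 W stock vs. roughly 50-55 W after the tweaks
low = pct_increase(35, 50)
high = pct_increase(35, 55)
print(f"Power jump: {low:.0f}-{high:.0f}%")  # Power jump: 43-57%
```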

Pro Hi-Tech benchmarked the overclocked A380 in six games: Watch Dogs Legion, Cyberpunk 2077, Doom Eternal, God of War, Rainbow Six Siege, and World of Tanks. We've collected the results from the video and summarized them in the following table. Note that ReBAR was enabled for all Arc testing, and we added the overall performance line (geometric mean). Also note that the GTX 1650 used was a model that doesn't require supplemental power, meaning it's likely at least a few percent slower than a more typical GTX 1650.

PRO Hi-Tech Arc A380 "Overclocking" Results (all results in fps)

Game               | Arc A380 Stock | Arc A380 OC | GTX 1650
-------------------|----------------|-------------|---------
Average (Geomean)  | 55.1           | 75.6        | 75.9
Cyberpunk 2077     | 37             | 51          | 42
Doom Eternal       | 64             | 102         | 101
God of War         | 32             | 45          | 51
Rainbow Six Siege  | 126            | 172         | 181
Watch Dogs Legion  | 49             | 61          | 62
World of Tanks     | 60             | 76          | 79

The gains in the various games range from around 25% in Watch Dogs Legion and World of Tanks, to as much as 60% in Doom Eternal. Overall, using the geometric mean, the increased power limit and voltage offset boosted performance by 37%. That's a much larger improvement than we typically see with graphics card overclocking.
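For readers who want to check the overall figure, here's a minimal sketch that recomputes the geometric means from the per-game FPS numbers in the table (the data is Pro Hi-Tech's; the code is ours):

```python
from math import prod

# FPS results from Pro Hi-Tech's video: (A380 stock, A380 OC, GTX 1650)
fps = {
    "Cyberpunk 2077":    (37, 51, 42),
    "Doom Eternal":      (64, 102, 101),
    "God of War":        (32, 45, 51),
    "Rainbow Six Siege": (126, 172, 181),
    "Watch Dogs Legion": (49, 61, 62),
    "World of Tanks":    (60, 76, 79),
}

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    return prod(values) ** (1 / len(values))

# Transpose the per-game tuples into per-card columns
stock, oc, gtx1650 = (geomean(col) for col in zip(*fps.values()))

print(f"Stock: {stock:.1f}, OC: {oc:.1f}, GTX 1650: {gtx1650:.1f}")
print(f"OC uplift over stock: {oc / stock - 1:.0%}")  # 37%
```

The geometric mean is used instead of a plain average so that one high-FPS outlier (like Rainbow Six Siege) doesn't dominate the overall score.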

With the "overclock," the Arc A380 now trades blows with the GTX 1650. It was 20% faster in Cyberpunk 2077, still 12% slower in God of War, and close enough to a tie (within 5%) in the other four games that were tested. Of course, as we've noticed in other testing of the Arc A380, drivers and optimizations appear to be a major factor in performance.

Power Consumption on the A380 Looks Very Strange

At first glance, it appears Intel's A380 has a lot of overclocking headroom. However, there are some strange behaviors coming from the A380 that could be the reason for the "inflated" overclocking results.

The biggest and most obvious issue is the power consumption results. The Arc A380 has an official TDP of 75W on the Intel Arc website, yet Pro Hi-Tech's power consumption results, both before and after his "overclocking," aren't even close to 75W. The stock configuration routinely showed a measly 35W, while the overclocked config didn't fare much better, maxing out at just under 55W.

We don't know what's going on here, but this could be another one of the many issues plaguing Intel's Arc GPUs at this current time. We've heard plenty about driver optimization issues killing performance in most games, and the gains from enabling resizable BAR are much higher than we've seen with AMD or Nvidia GPUs. Now we're seeing power consumption that's potentially less than half of the rated level, which will obviously put a damper on performance.

We can't help but wonder why Intel even bothered to launch the Arc A380 in the current state, though perhaps it was just using China as a beta testing region. Of course the cards ended up "leaking" outside of China, which was bound to happen, and now the problems Intel is working to fix are there for everyone to see. Hopefully Team Blue can address these issues quickly, and then we can move on to the worldwide launch and hopefully get a much better experience.

Aaron Klotz

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • cyrusfox
    Interesting, will be good to see what headroom there is on the top SKU (A770?) with a waterblock. I am looking to retire my aging GTX1080.

    As Intel ramps up the PR, hopefully this means we are weeks away from a full launch? Drivers ready or not, get these out; at the right price, adoption is not going to be a problem.
  • InvalidError
    I personally find nothing odd about re-BAR performing better practically all of the time: if drivers are optimized around re-BAR, that would eliminate the legacy 256MB window management overhead from the primary code paths and optimizing for re-BAR likely has much higher priority within Intel's driver team.

    Also, since Intel has only been doing IGPs for the last 20 years with the IGP having direct access to the memory controller, its driver developers may have been using re-BAR-like flat memory space for a while already.
  • garygech
    When the Intel GPU card hits the store this Christmas, it will be above average. The key to success will be commitment to technology improvements in the die shrink to the 4 nm node, combined with much cheaper GDDR6 memory moving forward on the card, and the ability to use unified DDR5 system memory when Meteor Lake becomes a reality in mid-2023. Likely the card will be a low cost to mid level cost card in 2024 that completely suppresses the profitability of competitors. The goal will be in 2024 to bring RTX 3080 performance to the PC at a fraction of the cost. Intel is a story about a football team that lost their way, woke up and realized you have to run the ball, you cannot just throw one Hail Mary Pass after another. Also, the card is going to have high level tile AI over time, more and more tiles working overtime to augment the real world performance of the GPU. Instead of overclocking, promote the idea of the CHIPS Act, make our silicon in Central Ohio; these people want to be paid to work and innovate.
  • rtoaht
    This is great, especially for a $130 card.
  • sycoreaper
    rtoaht said:
    This is great, especially for a $130 card.

    It's not that great.
    For $70 more you can just get a well established and reliable 1650. Why settle on a marginally cheaper card that is unproven?
  • KyaraM
    Ah, finally an English article to the one I read this morning in German xD

    sycoreaper said:
    It's not that great.
    For $70 more you can just get a well established and reliable 1650. Why settle on a marginally cheaper card that is unproven?
    If you are looking in this price range, $70 is a lot of money. That's, like, 1/3 of the price of a 1650, which is a lot in literally every case.
  • InvalidError
    sycoreaper said:
    For $70 more you can just get a well established and reliable 1650. Why settle on a marginally cheaper card that is unproven?
    2GB more VRAM, re-BAR, hardware support for the newest codecs, tensor cores for future AI stuff like XeSS, RT capabilities and I'm sure there are a few more reasons. Sure, the RT-power won't be great, but it is at least there to try out just for kicks and I'm sure that if RT becomes standard even at the low-end, more casual games will bother implementing it. I'm sure many game developers wish all hardware would support RT as a baseline so they could develop along a single pipeline.

    As long as driver support gets up to par, the A380 should become a far more future-proof option.

    Also, $70 above a $130 A380 is $200, which is more than the RX6500 typically goes for and is faster than the GTX1650 as long as you make sure not to bust through the 4GB buffer. At that point, may as well throw yet another $70 (35%) on top and get an RX6600 which is roughly twice as fast as the RX6500.
  • sycoreaper
    InvalidError said:
    2GB more VRAM, re-BAR, hardware support for the newest codecs, tensor cores for future AI stuff like XeSS, RT capabilities and I'm sure there are a few more reasons. Sure, the RT-power won't be great, but it is at least there to try out just for kicks and I'm sure that if RT becomes standard even at the low-end, more casual games will bother implementing it. I'm sure many game developers wish all hardware would support RT as a baseline so they could develop along a single pipeline.

    As long as driver support gets up to par, the A380 should become a far more future-proof option.

    Also, $70 above a $130 A380 is $200, which is more than the RX6500 typically goes for and is faster than the GTX1650 as long as you make sure not to bust through the 4GB buffer. At that point, may as well throw yet another $70 (35%) on top and get an RX6600 which is roughly twice as fast as the RX6500.


    All depends on your goals I guess. If you want as cheap as possible, it makes sense. If you want at least some amount of future proofing, the 6500 might be worth leaving budget territory for.
  • cyrusfox
    sycoreaper said:
    All depends on your goals I guess. If you want as cheap as possible, it makes sense. If you want at least some amount of future proofing, the 6500 might be worth leaving budget territory for.
    If you're willing to step up a price bracket, the A580 should be there with double the performance at a likely price of $200-250.

    I'm personally looking forward to the A310 (media PC) and the A770 as a potential GTX1080 sidegrade. Really interested in how it performs; the only game I play much, if at all, is SC2, and Intel graphics has done alright there, even integrated.
  • rtoaht
    sycoreaper said:
    It's not that great.
    For $70 more you can just get a well established and reliable 1650. Why settle on a marginally cheaper card that is unproven?

    $70 more is actually 54% more. That's a substantial increase in price. Also, a few years later that 54% more expensive card will be a paperweight without AV1 encode/decode support, but this cheap $130 card can still be used as a Plex server since AV1 will be more prevalent by then. Also you will likely be able to download "free FPS" when newer drivers are available. It already beats more expensive cards in newer DX12 games, and will likely beat those more expensive cards in all games in a year or two. So overall it is a much better value for experienced buyers.