Intel Arc A580 Review: New Budget Contender

One year after the A750 and A770, the Arc A580 finally arrives.

Intel Arc A580 photos and unboxing
(Image: © Tom's Hardware)


Intel Arc A580 AI Performance

GPUs are also used with professional applications, AI training and inferencing, and more. Along with our usual professional tests, we've added Stable Diffusion benchmarks on the various GPUs. AI is a fast-moving sector, and it seems like 95% or more of the publicly available projects are designed for Nvidia GPUs. Those Tensor cores aren't just for DLSS, in other words. Intel has XMX (Xe Matrix eXtensions) for a similar performance boost, while AMD has "AI Accelerators" in RDNA 3 to help boost FP16 performance. Let's start with our AI testing and then hit the professional apps.

We're using Automatic1111's Stable Diffusion webui for the Nvidia cards. Intel also has an OpenVINO fork of Automatic1111's webui, which shows about a 50% improvement over our previous testing. Finally, AMD has a DirectML fork of Automatic1111, though there are some limitations: You have to use SD 1.5, not SD 2.1, which means 768x768 output isn't backed by 768x768 training data; you can only generate one image at a time (we normally queue up a batch count of 24); and using a batch size other than 1 also fails.
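As an aside, the throughput figures in our charts reduce to simple arithmetic: images generated divided by wall-clock time. Here's a minimal sketch of that calculation as a reusable timing harness. Note that `generate_batch` is a hypothetical stand-in for whatever backend call you're benchmarking (Automatic1111's API, a diffusers pipeline, etc.), not part of our actual test setup:

```python
import time

def images_per_minute(generate_batch, batch_size=1, batches=24, warmup=1):
    """Time repeated batch generation and return images per minute.

    generate_batch(batch_size) is a placeholder for any backend call
    that produces batch_size images per invocation.
    """
    for _ in range(warmup):
        generate_batch(batch_size)  # untimed warmup pass
    start = time.perf_counter()
    for _ in range(batches):
        generate_batch(batch_size)
    elapsed = time.perf_counter() - start
    return (batches * batch_size) * 60.0 / elapsed
```

A warmup pass matters on GPUs because the first generation typically includes shader/model compilation and memory allocation overhead that would otherwise skew the steady-state number.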

The DirectML fork seems to provide a good boost in performance to RDNA 3 GPUs, but RDNA 2 GPUs are slower, so we're using a recent Nod.ai Shark variant for RDNA 2 and the DirectML fork for RDNA 3.

With the latest OpenVINO fork of Stable Diffusion, Intel's GPUs look quite impressive. The A580 nearly ties the RTX 3060 at 512x512 generation, though 768x768 results aren't quite as high. AMD's GPUs meanwhile still look relatively tame. The Olive-ONNX DirectML fork does boost the RX 7600 up to RTX 3050 levels, though, which is much better than what we saw previously.

Nvidia pulls slightly ahead with 768x768 image generation, at least with the RTX 4060. AMD's GPUs don't look nearly as competitive at higher resolutions either, and there are plenty of issues that still need to be worked out (see above). If OpenVINO and oneAPI gain broader adoption, though, Intel's Arc GPUs could provide a real alternative to Nvidia for AI workloads.

Intel Arc A580 Professional Workloads

SPECviewperf is a different story, and overall the Arc A580 ends up as the slowest GPU in our selection of cards. It's just a hair behind the RTX 3050, while the 3060 and all of the AMD GPUs easily pull ahead. Even Intel's top Arc A770 16GB can't match the RTX 3060 or RX 6600.

As usual, the overall score doesn't really get into all of the details, so check the individual charts for any professional applications you might actually use. Arc does decently in Catia, Maya, and Medical. SNX, meanwhile, is a horrible showing. SolidWorks, Energy, Creo, and 3DSMax all put Arc at or near the bottom, but the margins aren't quite as severe as in SNX.

Still, unless Intel really changes things around with future drivers, it's difficult to imagine any professionals taking the Arc A-series seriously. Intel does have Arc Pro offerings, but we're not sure if they're truly tuned for professional applications or if they just have "Pro" tacked on to their product name — we've looked around and can't find any SPECviewperf tests of the Arc Pro A60, sadly.

For GPU-accelerated 3D rendering, the only app that currently supports all three GPU vendors is Blender. It now leverages the ray tracing hardware to boost performance, and that generally puts Nvidia in the pole position. Still, Intel's Arc offerings aren't that far behind.

We're now using Blender Benchmark, version 3.6.0. The Nvidia results haven't changed much with recent updates, but 3.6.0 gave a significant boost to the Intel Arc results. Oddly, the A770 and A750 are effectively tied now, while the A580 falls a bit behind. The A580 still beats AMD's RX 6700 10GB by 50%, and nearly doubles the performance of the RX 6600. At the same time, if you're serious about 3D rendering with Blender, you'd probably want something faster from the RTX 40-series lineup.

Intel Arc A580 Content Creation Summary

Intel's Arc GPUs can be very hit or miss for content creation, depending on what you want to do. Stable Diffusion worked well, Blender was okay, but professional applications are a weak spot.

Something else to consider is the video encoding/decoding performance and quality. We didn't retest anything for this review, but we looked at AMD, Intel, and Nvidia video encoding back in March. Nvidia came out on top, but Intel wasn't far behind. The A380 and A770 had effectively identical results, and we expect the A750 and A580 to perform the same.

Other aspects of video encoding with Arc again show driver immaturity. For example, the "Highlights" feature (Alt+H by default) captures video, but only at up to 1080p and 60 Hz, and there was a clear drop in performance when using it while playing a game. Other features are missing as well: you only get to specify one file name, whereas AMD and Nvidia smartly create process- and time/date-stamped videos with their equivalent tools.
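For what it's worth, the kind of collision-proof naming AMD and Nvidia use is nearly a one-liner. Here's a hedged sketch of such a scheme; the function name and format string are illustrative assumptions, not either vendor's actual implementation:

```python
from datetime import datetime

def capture_filename(process_name: str, ext: str = "mp4") -> str:
    """Build a process- and timestamp-based capture name so
    successive recordings never overwrite each other."""
    stamp = datetime.now().strftime("%Y-%m-%d %H-%M-%S")
    return f"{process_name} {stamp}.{ext}"
```

Each capture gets a unique name keyed to the game's process and the moment of recording, which is exactly the convenience Arc's tool currently lacks.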

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • AgentBirdnest
    Awesome review, as always!
    Dang, was really hoping this would be more like $150-160. I bet the price will drop before long, though; I can't imagine many people choosing this over the A750 that is so closely priced. Still, it just feels good to see a card that can actually play modern AAA games for under $200.
    Reply
  • JarredWaltonGPU
    AgentBirdnest said:
    Awesome review, as always!
    Dang, was really hoping this would be more like $150-160. I bet the price will drop before long, though; I can't imagine many people choosing this over the A750 that is so closely priced. Still, it just feels good to see a card that can actually play modern AAA games for under $200.
    Yeah, the $180 MSRP just feels like wishful thinking right now rather than reality. I don't know what supply of Arc GPUs looks like from the manufacturing side, and I feel like Intel may already be losing money per chip. But losing a few dollars rather than losing $50 or whatever is probably a win. This would feel a ton better at $150 or even $160, and maybe add half a star to the review.
    Reply
  • hotaru.hino
    Intel does have some cash to burn, and if they are selling these cards at a loss, it'd at least lend weight to the idea that they're serious about staying in the discrete GPU business.
    Reply
  • JarredWaltonGPU
    hotaru.hino said:
    Intel does have some cash to burn, and if they are selling these cards at a loss, it'd at least lend weight to the idea that they're serious about staying in the discrete GPU business.
    That's the assumption I'm going off: Intel is willing to take a short-term / medium-term loss on GPUs in order to bootstrap its data center and overall ambitions. The consumer graphics market is just a side benefit that helps to defray the cost of driver development and all the other stuff that needs to happen.

    But when you see the number of people who have left Intel Graphics in the past year, and the way Gelsinger keeps divesting of non-profitable businesses, I can't help but wonder how much longer he'll be willing to let the Arc experiment continue. I hope we can at least get to Celestial and Druid before any final decision is made, but that will probably depend on how Battlemage does.

    Intel's GPUs have a lot of room to improve, not just on drivers but on power and performance. Basically, look at Ada Lovelace and that's the bare minimum we need from Battlemage if it's really going to be competitive. We already have RDNA 3 as the less efficient, not quite as fast, etc. alternative to Intel, and AMD still has better drivers. Matching AMD isn't the end goal; Intel needs to take on Nvidia, at least up to the 4070 Ti level.
    Reply
  • mwm2010
    If the price of this goes down, then I would be very impressed. But because of the $180 price, it isn't quite at its full potential. You're probably better off with a 6600.
    Reply
  • btmedic04
    Arc just feels like one of the industry's greatest "what ifs" to me. Had these launched during the great GPU shortage of 2021, Intel would have sold as many as they could produce. Hopefully Intel sticks with it, as consumers desperately need a third vendor in the market.
    Reply
  • cyrusfox
    JarredWaltonGPU said:
    I can't help but wonder how much longer he'll be willing to let the Arc experiment continue. I hope we can at least get to Celestial and Druid before any final decision is made, but that will probably depend on how Battlemage does.
    What other choice do they have? If they canned their dGPU efforts, they'd still need staff to support their iGPUs, or are they going to give up on that and license GPU tech? Also, what would they do with their datacenter GPUs (Ponte Vecchio and its successors)?

    The only clear path forward is to continue, and I hope they do bet on themselves, take these licks (financial loss + negative driver feedback), and keep pushing forward. But you are right, Pat has killed a lot of items and spun off some great businesses from Intel. I hope Battlemage fixes a lot of the big issues, and I also hope we see 3rd and 4th gen Arc play out.
    Reply
  • bit_user
    Thanks @JarredWaltonGPU for another comprehensive GPU review!

    I was rather surprised not to see you reference its relatively strong Raytracing, AI, and GPU Compute performance, in either the intro or the conclusion. For me, those are definitely highlights of Alchemist, just as much as AV1 support.

    Looking at that gigantic table, on the first page, I can't help but wonder if you can ask the appropriate party for a "zoom" feature to be added for tables, similar to the way we can expand embedded images. It helps if I make my window too narrow for the sidebar - then, at least the table will grow to the full width of the window, but it's still not wide enough to avoid having the horizontal scroll bar.

    Whatever you do, don't skimp on the detail! I love it!
    Reply
  • JarredWaltonGPU
    bit_user said:
    Thanks @JarredWaltonGPU for another comprehensive GPU review!

    I was rather surprised not to see you reference its relatively strong Raytracing, AI, and GPU Compute performance, in either the intro or the conclusion. For me, those are definitely highlights of Alchemist, just as much as AV1 support.

    Looking at that gigantic table, on the first page, I can't help but wonder if you can ask the appropriate party for a "zoom" feature to be added for tables, similar to the way we can expand embedded images. It helps if I make my window too narrow for the sidebar - then, at least the table will grow to the full width of the window, but it's still not wide enough to avoid having the horizontal scroll bar.

    Whatever you do, don't skimp on the detail! I love it!
    The evil CMS overlords won't let us have nice tables. That's basically the way things shake out. It hurts my heart every time I try to put in a bunch of GPUs, because I know I want to see all the specs, and I figure others do as well. Sigh.

    As for RT and AI, it's decent for sure, though I guess I just got sidetracked looking at the A750. I can't help but wonder how things could have gone differently for Intel Arc, but then the drivers still have lingering concerns. (I didn't get into it as much here, but in testing a few extra games, I noticed some were definitely underperforming on Arc.)
    Reply