Intel Confirms Poor Arc GPU DX11 Performance Is a Work in Progress

Intel Arc A380 by Gunnir — "Into the Unknown"
(Image credit: Gunnir)

According to a recent Intel Q&A, the company confirmed that driver optimizations for Arc GPUs, meant to address poor performance in DirectX 11 and DirectX 9 games, are going to be a constant work in progress with no end goal in mind. Basically, Intel's lack of experience in the discrete GPU driver space will prevent their GPUs from being competitive with older APIs for quite some time.

This was made very apparent by a review from LinusTechTips, which showed roughly a 50% performance delta between the DirectX 11 and DirectX 12 versions of Shadow of the Tomb Raider running on an Arc A770. In DirectX 11, the A770 only managed around 38 FPS, while in DirectX 12 mode that frame rate jumped to a whopping 80 FPS.

For the uninitiated, DirectX 11, DirectX 9, and other older APIs behave very differently from modern ones like DirectX 12 and Vulkan. These older APIs rely heavily on the GPU driver itself to do a lot of the heavy lifting, tweaking and configuring lower-level GPU settings unseen by the user.

This behavior was intentional, sparing game developers from having to manage those details themselves. As a result, driver optimizations play a massive role in dictating a GPU's gaming performance with these older APIs.

This is a night-and-day difference compared to DirectX 12 and Vulkan, where a lot of that driver baggage has been transferred to the game engine itself, and game developers are responsible for handling lower-level optimizations such as video memory allocation (this is why DirectX 12 and Vulkan are referred to as "low-level" APIs).
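To make the contrast concrete, here is a minimal sketch of creating a vertex buffer under each API. It is not from Intel or the article: the helper names are ours, the device pointers are assumed to have been created elsewhere, and error handling is omitted. In Direct3D 11 the application only describes the resource and the driver decides where it lives; in Direct3D 12 the application explicitly picks the heap.

// Sketch only: who owns memory placement in D3D11 vs D3D12.
#include <d3d11.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// D3D11: the application only describes the buffer; the driver decides
// which memory pool it lives in and when it is resident.
ComPtr<ID3D11Buffer> CreateVertexBufferD3D11(ID3D11Device* device, UINT bytes)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = bytes;
    desc.Usage = D3D11_USAGE_DEFAULT;          // driver manages actual placement
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    ComPtr<ID3D11Buffer> buffer;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer;
}

// D3D12: the application chooses the heap explicitly, so placement and
// residency become the engine's responsibility rather than the driver's.
ComPtr<ID3D12Resource> CreateVertexBufferD3D12(ID3D12Device* device, UINT64 bytes)
{
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;       // app decides: GPU-local memory

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = bytes;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}

Everything the D3D12 version spells out by hand (heap type, layout, initial state) is exactly the kind of decision a DX11 driver has to make, and tune per game, on the application's behalf.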

The bad news for Intel is that it has very little experience with these APIs on discrete graphics (as opposed to integrated GPUs). Nvidia and AMD, on the other hand, have more than a decade of experience in the field and know all the little details and odd behaviors DX11 and DX9 might have.

As a result, Tom Petersen from Intel says the road towards better performance in APIs like DirectX 11 will be a "labor of love forever." It is a sad truth, but a truth nonetheless. These optimizations don't happen overnight, and there are infinite ways to optimize GPUs for DirectX 11 and its predecessors. This fact holds true even for experienced companies like AMD, which has seen big DirectX 11 driver gains in recent years.

Integrated Graphics Experience Has Made Things Worse for Intel

At first glance, it's easy to assume Intel's experience with integrated graphics would be beneficial. But unfortunately, it has not helped matters and has even made things worse for the company.

In a report we covered a week ago, CEO Pat Gelsinger noted that the company made a fatal error on the driver side of development, falsely assuming that it could take its integrated graphics driver stack and apply it to its discrete Arc GPUs.

This strategy showed Intel that its integrated graphics driver stack was utterly inadequate for its much more powerful Arc GPUs, since the architectural differences between its integrated and discrete GPUs are massive.

We suspect this could be a big reason Intel's Arc GPUs suffer so much in DirectX 11. If Intel had started from scratch with a dedicated GPU driver stack, its developers would have had more time to optimize for older APIs.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • rluker5
    It is good for that information to come out. More specific information would be better, but we will get that with reviews.
    AMD and Nvidia already have performance differentials across different APIs, with AMD still lagging but improving recently with the older ones on their RDNA2 arch.
    Some like AMD GPUs better because they are a better value in the games and settings they play, while others, whose preferred games and settings are a better performance value with Nvidia, can't understand why somebody would prefer AMD.
    Looks like Intel will be similar to AMD in this, only with the driver deficiencies and improvements being relatively larger.
    Reviews will give more information and buyers can then better choose based on the performance value in the games they play and expect to play.

    Edit: I also think that Intel releasing cheaper, smaller gpus initially works out well with these deficiencies. It gives people an opportunity to take a smaller chance (in terms of money) to check them out and increases the userbase to better detect problems to be solved. I hope they release them soon.
    Reply
  • dehjomz
    Makes you wonder if iGPU performance has been abysmal not because of iGPU hardware, but because of iGPU drivers... If/when Intel optimizes its DX11/DX9 drivers, might UHD graphics benefit as well, and thus age like fine wine?
    Reply
  • rluker5
    dehjomz said:
    Makes you wonder if iGPU performance has been abysmal not because of iGPU hardware, but because of iGPU drivers... If/when Intel optimizes its DX11/DX9 drivers, might UHD graphics benefit as well, and thus age like fine wine?
    I believe the igpu performance has been bad for 2 different reasons, one for desktop and one for mobile.
    For desktop the igpu performs on par per tflop with AMD. AMD igpus are much larger though. You wouldn't expect a 192 shader dgpu to be as fast as a 512 shader dgpu, but people for some reason expect that with an igpu.
    For mobile, Intel has a windows power problem where the igpu is completely ignored and the cpu gets all of the power it can use to run max clocks possible. If the power priority were set to the igpu (since any games on igpu are completely igpu limited) and the cpu would be power throttled to low clocks instead, the overall performance would be much better.
    Reply
  • EirikrHinnRauthi
    One would wonder if Intel could use a "wrapper" similar to Proton/Wine/DXVK for Windows games on Linux --- but in this case instead of running DX9/10/11 on Vulkan on Linux -- run it on DX12 or on Vulkan on Windows!

    Boom.
    Reply
  • rluker5
    EirikrHinnRauthi said:
    One would wonder if Intel could use a "wrapper" similar to Proton/Wine/DXVK for Windows games on Linux --- but in this case instead of running DX9/10/11 on Vulkan on Linux -- run it on DX12 or on Vulkan on Windows!

    Boom.

    Like DXVK with AMD cards?
    Could you imagine a toggle switch in their driver gui that automates this?

    piff.
    Reply
  • salgado18
    Admin said:
    Intel Confirms Poor Arc GPU DX11 Performance Is a Work in Progress : Read more
    The title of the news article suggests Intel is progressing towards poor performance :rolleyes:
    Reply
  • TerryLaze
    EirikrHinnRauthi said:
    One would wonder if Intel could use a "wrapper" similar to Proton/Wine/DXVK for Windows games on Linux --- but in this case instead of running DX9/10/11 on Vulkan on Linux -- run it on DX12 or on Vulkan on Windows!

    Boom.
    That would NOT fix any performance issues the hardware has, it would run just as bad and even worse since now there would be another software layer.
    It's a matter of certain instructions not performing well and it wouldn't make a difference if you run these instructions natively or emulated it would still be the same instructions that would have to run.
    Reply
  • cryoburner
    Basically, Intel's lack of experience in the discrete GPU driver space will prevent their GPUs from being competitive with older APIs for quite some time.
    They could easily still be "competitive" in games utilizing those older APIs, even if performance won't be where it could be. According to Intel, they plan to price the cards based on how they perform in DX9/11 titles, so one could arguably look at their much better DX12/Vulkan performance as a bonus relative to the competition. It's possible that they could even outperform the competition at similar price points in most older titles, making up for the unoptimized drivers by providing more hardware with lower or nonexistent profit margins to help them make a good first impression.

    It's actually kind of similar to what we saw with AMD's Polaris cards, or with the early generations of Ryzen CPUs. AMD's offerings, while decent, weren't exactly leading in terms of high-end performance at the time of their launch, but they priced the hardware accordingly and gave more hardware for the money to make up for it, allowing their products to be very competitive despite their limitations.

    I imagine there will probably be certain titles where Intel's cards perform worse than the similarly-priced competition, but as long as performance is still reasonable in those titles, while being better in most others, that shouldn't hold them back too much. So performance of the cards may not be much of a concern. My main concerns would be over how well the various side-features and control panel settings work, and whether there are any compatibility issues with anything.
    Reply
  • rluker5
    TerryLaze said:
    That would NOT fix any performance issues the hardware has, it would run just as bad and even worse since now there would be another software layer.
    It's a matter of certain instructions not performing well and it wouldn't make a difference if you run these instructions natively or emulated it would still be the same instructions that would have to run.
    It works for AMD. That's probably where he got the DXVK wrapper idea from. It improves performance in APIs that lack good driver support. Sounds like a bit of a hassle, like reshade or texmod, but more worth it imo. That's why I thought it would be nice if Intel used some of their driver staff hours to set up some automated button to do it for us lazies.
    Reply
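
For context on what the "automated button" rluker5 describes would have to do: DXVK ships drop-in replacements for Direct3D's runtime DLLs, and using it natively on Windows amounts to copying those DLLs next to a game's executable so the game's D3D9/D3D11 calls get translated to Vulkan. Below is a hypothetical C++ sketch of just that copy step; the directory paths are invented, and this is not a feature of Intel's driver.

// Hypothetical sketch: what an automated "wrap this game with DXVK" action
// could do under the hood. DXVK is real; the paths below are made up.
#include <filesystem>
#include <initializer_list>
#include <iostream>

namespace fs = std::filesystem;

int main()
{
    const fs::path dxvkDir = "C:/tools/dxvk/x64";      // unpacked DXVK release (assumed location)
    const fs::path gameDir = "C:/Games/SomeDX11Game";  // folder holding the game's .exe (assumed)

    // DXVK's Vulkan-backed replacements for the Direct3D runtime DLLs.
    for (const char* dll : {"d3d9.dll", "d3d11.dll", "dxgi.dll"})
    {
        fs::copy_file(dxvkDir / dll, gameDir / dll,
                      fs::copy_options::overwrite_existing);
        std::cout << "Installed " << dll << " into " << gameDir << '\n';
    }
    // Deleting the copied DLLs restores the native Direct3D path.
    return 0;
}

Whether that helps on Arc is exactly what the thread is debating: the wrapper only reroutes API calls to Vulkan, where Intel's drivers are reportedly in better shape.
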
  • JayNor
    Sounds like Intel is headed the other direction; reducing the frequency of updates for the older GPUs would be consistent with reducing the effort on DX11 and earlier APIs.

    The more interesting stuff coming is in the hardware architecture, with the tGPU on Meteor Lake. There will be a Hot Chips presentation on it in a few weeks. I'm interested to see how wide they went on the CPU-to-GPU connection, presumably using their UCIe design, to reduce the differences between the discrete and tile GPUs.
    Reply