Exclusive: DirectX 12 Will Allow Multi-GPU Between GeForce And Radeon

A source with knowledge of the matter gave us some early information about an as-yet-unnamed API, which we strongly infer is DirectX 12.

We first heard of DirectX 12 in 2013, and it finally appears to be just around the corner. It's expected to launch in tandem with the upcoming Windows 10 operating system.

The new API will work very differently from its predecessors, and it's common knowledge by now that it will sit "closer to the hardware," similar to AMD's Mantle. That should bring major gains in framerate and reductions in latency, but it's not all that DirectX 12 has up its sleeve.

Explicit Asynchronous Multi-GPU Capabilities

One of the big things we will be seeing is DirectX 12's Explicit Asynchronous Multi-GPU capability. What this means is that the API combines all of the graphics resources in a system into one "bucket." It is then left to the game developer to divide the workload however they see fit, letting different hardware take care of different tasks.
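
Just to make the "bucket" idea concrete, here is a minimal, hypothetical sketch (our illustration, not code from our source or from Microsoft) that enumerates every adapter in a system through DXGI and creates a Direct3D 12 device on each one; which GPU gets which workload would then be entirely the developer's call.

```cpp
// Build with: cl /EHsc enum_gpus.cpp d3d12.lib dxgi.lib
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // A DXGI factory lets us walk every graphics adapter in the system.
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip software adapters such as the Basic Render Driver

        // Create a device on this adapter. Under an explicit multi-GPU model,
        // the application would keep one device per GPU and decide for itself
        // which workloads go to which piece of hardware.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            std::wprintf(L"Usable GPU %u: %s (%zu MB dedicated memory)\n",
                         i, desc.Description, desc.DedicatedVideoMemory >> 20);
            devices.push_back(device);
        }
    }

    // 'devices' is the application's "bucket": every GPU it may hand work to,
    // regardless of which vendor made it.
    return devices.empty() ? 1 : 0;
}
```

A real engine would also query each device's capabilities before deciding how to split the work, but the point is that nothing in the loop above cares whether adapter 0 and adapter 1 come from the same vendor.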

Part of the new feature set that aids multi-GPU configurations is that frame buffers (GPU memory) will no longer necessarily need to be mirrored. With older APIs, to benefit from multiple GPUs you'd have the two cards work together, each rendering alternate frames (Alternate Frame Rendering, or AFR). This required both cards to hold all of the texture and geometry data in their frame buffers, meaning that despite having two cards with 4 GB of memory each, you'd still only have a 4 GB frame buffer.

DirectX 12 will do away with that 4 + 4 = 4 math by supporting a frame rendering method called SFR, which stands for Split Frame Rendering. Developers will be able to divide the texture and geometry data between the GPUs, either manually or automatically, and all of the GPUs can then cooperate on each frame. Each GPU then works on a specific portion of the screen, with the number of portions equal to the number of GPUs installed.
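
For illustration only, here's a hypothetical sketch (plain C++, not tied to any particular API) of the kind of partitioning SFR implies: the frame is cut into one horizontal strip per GPU, and each GPU would render only its own strip.

```cpp
#include <cstdio>
#include <vector>

// One rectangular screen region (in pixels) assigned to one GPU.
struct Region { int left, top, right, bottom; };

// Hypothetical helper: divide a frame into horizontal strips, one per GPU.
// A real SFR implementation might weight the strips by each GPU's performance
// instead of splitting the screen evenly.
std::vector<Region> SplitFrame(int width, int height, int gpuCount)
{
    std::vector<Region> regions;
    for (int gpu = 0; gpu < gpuCount; ++gpu)
    {
        int top    = height *  gpu      / gpuCount;
        int bottom = height * (gpu + 1) / gpuCount;
        regions.push_back({0, top, width, bottom});
    }
    return regions;
}

int main()
{
    // A 3840x2160 frame shared by three GPUs.
    for (const Region& r : SplitFrame(3840, 2160, 3))
        std::printf("GPU strip: (%d,%d) to (%d,%d)\n", r.left, r.top, r.right, r.bottom);
    return 0;
}
```

In practice the strips wouldn't need to be equal; a faster card could simply take a larger share of the frame.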

Our source suggested that this technology will significantly reduce latency, and the explanation is simple. With AFR, a number of frames need to be kept in the queue in order to deliver a smooth experience, which means the image on screen will always be about four to five frames behind the user's input.

This might deliver a very high framerate, but the latency still makes the game feel much less responsive. With SFR, however, the queue depth is always just one, or arguably even less, as each GPU is working on a different part of the same frame. And as the queue depth goes down, the framerate should also go up thanks to the freed-up resources.
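
To put rough numbers on that (ours, not the source's): at 60 fps a frame takes about 16.7 ms, so a four-frame AFR queue leaves the on-screen image roughly 67 ms behind your input, while a one-frame-deep SFR pipeline keeps it around 17 ms. A trivial sketch of that arithmetic:

```cpp
#include <cstdio>

int main()
{
    const double frameTimeMs = 1000.0 / 60.0; // ~16.7 ms per frame at 60 fps
    const int afrQueueDepth  = 4;             // example AFR queue depth from the article
    const int sfrQueueDepth  = 1;             // SFR: all GPUs work on the current frame

    std::printf("AFR input-to-screen lag: ~%.0f ms\n", afrQueueDepth * frameTimeMs);
    std::printf("SFR input-to-screen lag: ~%.0f ms\n", sfrQueueDepth * frameTimeMs);
    return 0;
}
```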

The source said that by binding the multiple GPUs together, DirectX 12 treats the entire graphics subsystem as a single, more powerful graphics card. Users thus get the robustness of running a single GPU, but with the performance of multiple graphics cards.

It should be noted that the new Civilization: Beyond Earth title already works in a similar way when running on Mantle: it offers an SFR option because AMD's Mantle API supports the technique. Mind you, Split Frame Rendering is not a new trick by any means. Many industrial film, photography and 3D modeling applications use it, and back in the '90s some game engines supported it as well.

Of course, chances are you won't be able to use all of the options described above at the same time. Split Frame Rendering, for example, will likely still require some of the texture and geometry data to sit in multiple frame buffers, and other sacrifices may have to be made.

Build A Multi-GPU System With Both AMD And Nvidia Cards

We were also told that DirectX 12 will support all of this across multiple GPU architectures, simultaneously. What this means is that Nvidia GeForce GPUs will be able to work in tandem with AMD Radeon GPUs to render the same game – the same frame, even.

This is especially interesting because it allows you to leverage the technology benefits of both hardware platforms if you wish to do so. If you like Nvidia's GeForce Experience software and 3D Vision, but you also want to use AMD's TrueAudio and FreeSync, chances are you'll be able to do just that when DirectX 12 comes around. What will likely happen is that one card will operate as the master, while the other is used for additional power.
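
Each adapter already reports a PCI vendor ID through DXGI, so a game could, at least in principle, tell a GeForce from a Radeon when deciding which card plays which role. Here's a hypothetical sketch; the master/helper split itself is our assumption, not something DirectX 12 is confirmed to mandate.

```cpp
// Build with: cl /EHsc list_vendors.cpp dxgi.lib
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

// Well-known PCI vendor IDs as reported in DXGI_ADAPTER_DESC1::VendorId.
static const wchar_t* VendorName(UINT id)
{
    switch (id)
    {
    case 0x10DE: return L"Nvidia";
    case 0x1002: return L"AMD";
    case 0x8086: return L"Intel";
    default:     return L"Unknown";
    }
}

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        // A game might, for example, pick the adapter with the most dedicated
        // memory as the "master" and hand secondary work to the rest.
        std::wprintf(L"Adapter %u: %s - %s\n", i, VendorName(desc.VendorId), desc.Description);
    }
    return 0;
}
```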

What we're seeing here is that DirectX 12 is capable of aggregating graphics resources, be that compute or memory, in the most efficient way possible. Don't forget, however, that this isn't only beneficial for systems with multiple discrete desktop GPUs. Laptops with dual-graphics solutions, or systems running an APU alongside a discrete GPU, will be able to benefit too. DirectX 12's aggregation will allow GPUs that would be considered completely mismatched today to work together, possibly making technologies like SLI and CrossFire obsolete in the future.

There is a catch, however. Much of the optimization work for spreading the workloads is left to the developers – the game studios. The same went for older APIs, though, and DirectX 12 is intended to be much friendlier. Advanced uses may be a bit tricky, but according to the source, implementing SFR should be a relatively simple and painless process for most developers.

Frame queueing has been a difficult point for various studios, to the point that SLI and CrossFire configurations don't work at all in some games. The resource aggregation, together with SFR, should solve that issue.

That's as far as we can reach into the cookie jar for now, but we expect to see and learn more at GDC.

Follow Niels Broekhuijsen @NBroekhuijsen. Follow us @tomshardware, on Facebook and on Google+.

Niels Broekhuijsen

Niels Broekhuijsen is a Contributing Writer for Tom's Hardware US. He reviews cases, water cooling and PC builds.

  • dwatterworth
    Just as the implementation of these different distributed rendering techniques will be left up to the developers, won't the use of mixed GPUs, especially cross-vendor, be up to the GPU manufacturers? I doubt AMD and Nvidia will allow such a configuration. Suddenly the less expensive AMD flagship combined with an inexpensive Nvidia CUDA / PhysX card would gain a lot more traction, I would imagine.
  • John Wittenberg
    Yep - Nvidia wrote out the ability to use any of their cards as a PhysX card with a beefier AMD GPU as primary years and years ago. I highly doubt Nvidia will play ball - but stranger things have happened.
  • thekyle64
    It sounds too good to be true.
  • Maddux
    I'm excited about this as it means you can always use your last two video cards to give you a nice boost in performance. That way I'm not wasting money buying two of the same cards to set up SLI or Crossfire that will both be antiquated at the same time. Just use your newest card as the master.

    My question is, is DX12 smart enough to use this to give any boost at all to older games? Or does it strictly require a supported game.
  • leo2kp
    I feel like DX12 is going to boost PC gaming like nothing else, once the bugs are worked out ;)
  • dwatterworth
    Maddux said:
    I'm excited about this as it means you can always use your last two video cards to give you a nice boost in performance. That way I'm not wasting money buying two of the same cards to set up SLI or Crossfire that will both be antiquated at the same time. Just use your newest card as the master.

    My question is, is DX12 smart enough to use this to give any boost at all to older games? Or does it strictly require a supported game.

    The article points out that it's all up to the developers to take advantage of DX12's ability to operate in such a way. I imagine PC-only devs will be the first to test the waters.

    Only DX12 titles will be able to take advantage of the tech. There have been a number of articles reviewing backwards compatibility. Previous releases would require too substantial of an overhaul.
  • Foobar88
    Rather than the ability to combine an Nvidia and an AMD GPU, I see the big takeaway here as being able to upgrade and run SLI on two cards from different generations. So, for example, I have a GTX 970 now. When Nvidia comes out with a "1070" or similar GPU on a 14nm or 16nm die, I could simply slap one of those into my machine and run SLI with those two cards, rather than having to simply replace my 970. That seems like the most cost-effective way to get true 4k capability, assuming the devs play along.
  • jkhoward
    Zomg, a use for my poor integrated GPU!
  • edwd2
    I have an HD 7950 that's just sitting there collecting dust right now. It'd be great if I could combine it with my current 290X.
  • Grognak
    So it's gonna work with APUs/iGPUs too? That'd be pretty awesome, those lousy Intel graphics will finally be useful, and APU + GPU systems will get a serious performance boost.