
Exclusive: DirectX 12 Will Allow Multi-GPU Between GeForce And Radeon

By Niels Broekhuijsen | Source: Tom's Hardware US | 73 comments

We have early information about some of the details regarding DirectX 12, and what follows will surprise you.

A source with knowledge of the matter gave us some early information about an "unspoken API," which we strongly infer is DirectX 12.

We first heard of DirectX 12 in 2013, and DirectX 12 appears to finally be around the corner. It's expected to launch in tandem with the upcoming Windows 10 operating system.

The new API will work much differently from its predecessors, and it's common knowledge by now that it will be "closer to the hardware" than older APIs, similar to AMD's Mantle. This will bring massive improvements in framerates and latency, but that's not all that DirectX 12 has up its sleeve.

Explicit Asynchronous Multi-GPU Capabilities

One of the big things that we will be seeing is DirectX 12's Explicit Asynchronous Multi-GPU capabilities. What this means is that the API combines all the different graphics resources in a system and puts them all into one "bucket." It is then left to the game developer to divide the workload up however they see fit, letting different hardware take care of different tasks.
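The "bucket" model described above can be sketched conceptually. The snippet below is plain Python, not the actual DirectX 12 API; the device names, memory sizes, and task assignments are invented purely for illustration of the idea that the developer, not the driver, decides which hardware handles which work:

```python
# Conceptual sketch of DirectX 12's "single bucket" of graphics resources.
# Names and the assignment scheme are illustrative, not a real API.

class Device:
    def __init__(self, name, vram_gb):
        self.name = name
        self.vram_gb = vram_gb
        self.tasks = []

# All graphics resources in the system go into one shared pool ("bucket").
pool = [Device("GeForce", 4), Device("Radeon", 4), Device("iGPU", 1)]

def assign(task, device):
    """The game developer divides the workload however they see fit."""
    device.tasks.append(task)

assign("geometry pass", pool[0])
assign("post-processing", pool[1])
assign("physics compute", pool[2])

for d in pool:
    print(d.name, "->", d.tasks)
```

The point of the sketch is only the division of responsibility: the API exposes every device, and the workload split is an explicit developer decision rather than something hidden behind a driver profile.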

Part of this new feature set that aids multi-GPU configurations is that the frame buffers (GPU memory) won't necessarily need to be mirrored anymore. In older APIs, to benefit from multiple GPUs, you'd have the two work together, each one rendering an alternate frame (Alternate Frame Rendering, or AFR). This required both cards to hold all of the texture and geometry data in their frame buffers, meaning that despite having two cards with 4 GB of memory each, you'd still only have a 4 GB frame buffer.
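The mirroring penalty is simple arithmetic, which a toy calculation makes concrete. The perfect no-duplication split in the second function is our idealization; as the article notes further down, some data will in practice still need to live in more than one card's memory:

```python
# Toy model of usable graphics memory across two 4 GB cards.
cards_gb = [4, 4]

def effective_memory_mirrored(cards):
    # AFR-style mirroring: every card holds the full working set,
    # so the usable pool is only as large as the smallest card.
    return min(cards)

def effective_memory_split(cards, duplicated_gb=0):
    # Idealized split: data is divided across cards, minus whatever
    # still has to be duplicated into every card's memory.
    return sum(cards) - duplicated_gb * (len(cards) - 1)

print(effective_memory_mirrored(cards_gb))               # 4 -> "4 + 4 = 4"
print(effective_memory_split(cards_gb))                  # 8 in the ideal case
print(effective_memory_split(cards_gb, duplicated_gb=1)) # 7 with 1 GB shared
```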

DirectX 12 will do away with the 4 + 4 = 4 arithmetic through a new frame rendering method called SFR, which stands for Split Frame Rendering. Developers will be able to divide the texture and geometry data between the GPUs, manually or automatically, and all of the GPUs can then cooperate on each frame. Each GPU works on a specific portion of the screen, with the number of portions being equal to the number of GPUs installed.
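A minimal sketch of how a frame might be carved up under SFR, assuming simple horizontal bands (an assumption on our part; real engines may split along other boundaries or balance regions by workload):

```python
def sfr_regions(width, height, gpu_count):
    """Split a frame into one horizontal band per GPU (toy example)."""
    band = height // gpu_count
    regions = []
    for i in range(gpu_count):
        top = i * band
        # The last GPU absorbs any leftover rows from integer division.
        bottom = height if i == gpu_count - 1 else top + band
        regions.append((0, top, width, bottom))  # (left, top, right, bottom)
    return regions

# Two GPUs on a 1920x1080 frame: each renders half the screen.
print(sfr_regions(1920, 1080, 2))
# [(0, 0, 1920, 540), (0, 540, 1920, 1080)]
```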

Our source suggested that this technology will significantly reduce latency, and the explanation is simple. With AFR, a number of frames need to be in queue in order to deliver a smooth experience, but what this means is that the image on screen will always be about 4-5 frames behind the user's input actions.

This might deliver a very high framerate, but the latency will still make the game feel much less responsive. With SFR, however, the queue depth is always just one, or arguably even less, as each GPU is working on a different part of the screen. As the queue depth goes down, the framerate should also go up due to freed-up resources.
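The latency claim follows from straightforward arithmetic. Assuming 60 fps and the queue depths mentioned above (the figures are illustrative, not measurements):

```python
def input_latency_ms(queue_depth, fps):
    """Input-to-display delay when `queue_depth` frames sit between
    the user's action and the frame that reflects it."""
    return queue_depth * 1000.0 / fps

# AFR with a 4-5 frame queue at 60 fps:
print(input_latency_ms(4, 60))  # ~66.7 ms behind the player's input
print(input_latency_ms(5, 60))  # ~83.3 ms
# SFR with a queue depth of one:
print(input_latency_ms(1, 60))  # ~16.7 ms
```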

The source said that by binding the multiple GPUs together, DirectX 12 treats the entire graphics subsystem as a single, more powerful graphics card. Users thus get the robustness of running a single GPU, but with multiple graphics cards.

It should be noted that the new Civilization: Beyond Earth title, which runs on Mantle, has an SFR option and works in a similar way, because AMD's Mantle API supports SFR. Mind you, Split Frame Rendering is not a new trick by any means. Many industrial film, photography, and 3D modelling applications use it, and back in the '90s some game engines supported it as well.

Of course, chances are you won't be able to use all of the options described above at the same time. Split frame rendering, for example, will still likely require some of the textures and geometry data to be in multiple frame buffers, and there may be other sacrifices that have to be made.

Build A Multi-GPU System With Both AMD And Nvidia Cards

We were also told that DirectX 12 will support all of this across multiple GPU architectures, simultaneously. What this means is that Nvidia GeForce GPUs will be able to work in tandem with AMD Radeon GPUs to render the same game – the same frame, even.

This is especially interesting as it allows you to leverage the technology benefits of both of these hardware platforms if you wish to do so. If you like Nvidia's GeForce Experience software and 3D Vision, but you want to use AMD's TrueAudio and FreeSync, chances are you'll be able to do that when DirectX 12 comes around. What will likely happen is that one card will operate as the master card, while the other will be used for additional power.

What we're seeing here is that DirectX 12 is capable of aggregating graphics resources, be that compute or memory, in the most efficient way possible. Don't forget, however, that this isn't only beneficial for systems with multiple discrete desktop GPUs. Laptops with dual-graphics solutions, or systems running an APU and a GPU will be able to benefit too. DirectX 12's aggregation will allow GPUs to work together that today would be completely mismatched, possibly making technologies like SLI and CrossFire obsolete in the future.

There is a catch, however. Much of the optimization work for spreading workloads across GPUs is left to the developers, the game studios. The same was true of older APIs, though, and DirectX 12 is intended to be much friendlier. Advanced uses may be a bit tricky, but according to the source, implementing SFR should be a relatively simple and painless process for most developers.

Frame queueing has been a difficult point for various studios, to the extent that SLI or CrossFire configurations don't work at all in some games. Aggregation, together with SFR, should solve that issue.

That's as far as we can reach into the cookie jar for now, but we expect to see and learn more at GDC.

Follow Niels Broekhuijsen @NBroekhuijsen. Follow us @tomshardware, on Facebook and on Google+.


Top Comments
  • 38
    thekyle64 , February 24, 2015 8:54 AM
    It sounds too good to be true
  • 22
    edwd2 , February 24, 2015 9:17 AM
    I have an HD 7950 that's just sitting there collecting dust right now. It'll be great if I can combine it with my current 290X.
  • 21
    John Wittenberg , February 24, 2015 8:47 AM
    Yep - Nvidia wrote out the ability to use any of their cards as a PhysX card with a beefier AMD GPU as primary years and years ago. I highly doubt Nvidia will play ball - but stranger things have happened.
Other Comments
  • 6
    dwatterworth , February 24, 2015 8:37 AM
    Just as the implementation of these different distributed rendering techniques will be left up to the developers, won't the use of mixed GPUs, especially cross-vendor, be up to the GPU manufacturer? I doubt AMD and Nvidia will allow such a configuration. Suddenly the less expensive AMD flagship combined with an inexpensive Nvidia CUDA/PhysX card would gain a lot more traction, I would imagine.
  • 6
    Maddux , February 24, 2015 8:58 AM
    I'm excited about this as it means you can always use your last two video cards to give you a nice boost in performance. That way I'm not wasting money buying two of the same cards to set up SLI or Crossfire that will both be antiquated at the same time. Just use your newest card as the master.

    My question is, is DX12 smart enough to use this to give any boost at all to older games? Or does it strictly require a supported game?
  • 7
    leo2kp , February 24, 2015 9:02 AM
    I feel like DX12 is going to boost PC gaming like nothing else, once the bugs are worked out ;) 
  • 6
    dwatterworth , February 24, 2015 9:04 AM
    Quote:
    I'm excited about this as it means you can always use your last two video cards to give you a nice boost in performance. That way I'm not wasting money buying two of the same cards to set up SLI or Crossfire that will both be antiquated at the same time. Just use your newest card as the master.

    My question is, is DX12 smart enough to use this to give any boost at all to older games? Or does it strictly require a supported game?


    The article points out that it's all up to the developers to take advantage of DX12's ability to operate in such a way. I imagine PC-only devs will be the first to test the waters.

    Only DX12 titles will be able to take advantage of the tech. There have been a number of articles reviewing backwards compatibility. Previous releases would require too substantial of an overhaul.
  • 8
    Foobar88 , February 24, 2015 9:04 AM
    Rather than the ability to combine an Nvidia and an AMD GPU, I see the big takeaway here as being able to upgrade and run SLI on two cards from different generations. So, for example, I have a GTX 970 now. When Nvidia comes out with a "1070" or similar GPU on a 14nm or 16nm die, I could simply slap one of those into my machine and run SLI with those two cards, rather than having to simply replace my 970. That seems like the most cost-effective way to get true 4k capability, assuming the devs play along.
  • 12
    jkhoward , February 24, 2015 9:14 AM
    Zomg, a use for my poor integrated GPU!
  • 9
    Grognak , February 24, 2015 9:22 AM
    So it's gonna work with APUs/iGPUs too? That'd be pretty awesome, those lousy Intel graphics will finally be useful, and APU + GPU systems will get a serious performance boost.
  • 7
    airborn824 , February 24, 2015 9:25 AM
    Will SFR be easy to code? Otherwise it is a waste, as most developers are too lazy to code for multi-GPU or even multi-core CPUs.
  • 6
    booyaah , February 24, 2015 9:25 AM
    Now if they can only find a way to allow SLI/XFire to work in fullscreen windowed mode, all my problems will be solved !!
  • 11
    spiketheaardvark , February 24, 2015 9:29 AM
    This may give reason to slap in that old card you have lying around, but I'd wager the reason for this whole setup is the future arrival of 3D headsets like the Oculus. This makes it possible to assign one card per eye rather than each card rendering alternate frames for both eyes. The bonus reduction in latency is a big deal for 3D headsets to prevent nausea.
  • 5
    Maddux , February 24, 2015 9:54 AM
    Reading the article a second time, it sounds like it could theoretically work on more than two GPUs as well. So I could use two discrete GPUs AND gain a little extra power from my CPU's integrated GPU. Man, this sounds like a dream if it's true!
  • 5
    Kewlx25 , February 24, 2015 10:28 AM
    Quote:
    Just as the implementation of these different distributed rendering techniques will be left up to the developers, won't the use of mixed GPUs, especially cross-vendor, be up to the GPU manufacturer? I doubt AMD and Nvidia will allow such a configuration. Suddenly the less expensive AMD flagship combined with an inexpensive Nvidia CUDA/PhysX card would gain a lot more traction, I would imagine.


    In order to be DX12 compatible, your device must be able to compute. DX12 will treat all compute engines as a resource pool and distribute work among them. It doesn't matter if they're AMD, Nvidia, Intel, or whatever, as long as the device can support DX12.
  • 1
    dwatterworth , February 24, 2015 10:31 AM
    Quote:
    Quote:
    Just as the implementation of these different distributed rendering techniques will be left up to the developers, won't the use of mixed GPUs, especially cross-vendor, be up to the GPU manufacturer? I doubt AMD and Nvidia will allow such a configuration. Suddenly the less expensive AMD flagship combined with an inexpensive Nvidia CUDA/PhysX card would gain a lot more traction, I would imagine.


    In order to be DX12 compatible, your device must be able to compute. DX12 will treat all compute engines as a resource pool and distribute work among them. It doesn't matter if they're AMD, Nvidia, Intel, or whatever, as long as the device can support DX12.


    That doesn't make sense to me; can you explain more? I understand the concept, but I don't see DX12 doing away with drivers, which currently lock out resources when a card from a competing vendor is present.
  • 6
    Joseph Jasik , February 24, 2015 10:34 AM
    Would be great, but I agree, sounds too good to be true...
  • 3
    RedJaron , February 24, 2015 10:40 AM
    I'm with Maddux and Foobar. My first thought was that your old GPU would still be useful after an upgrade. Of course, they'd both need to be DX12 compliant, so just how old can the "old" card be? And depending on how much an APU can contribute to framerates, they might actually be preferable to the low-budget Intel chips. This could also make Z-series motherboards more desirable, as PCIe lane splitting could be more easily used.

    My second thought was how this would reconcile the different rendering effects between AMD and NVidia cards. My best guess there is that only "pure" DX12 rendering methods would be supported when trying to use mixed graphics resources.
  • 5
    jnanster , February 24, 2015 11:02 AM
    This article only referenced gaming, which is huge, especially from an AMD/CUDA/PhysX standpoint. I would also be interested in improved video editing software; OpenCL/CUDA.
  • 7
    Achoo22 , February 24, 2015 11:30 AM
    Although the article makes it sound like we are ready to move away from SLI and Crossfire licensing, I suspect that things will somehow be twisted in such a way as to do the opposite.