An unholy SLI emerges — Intel's Arc A770 and Nvidia's Titan Xp pair up to provide 70% boost in FluidX3D

Intel Arc A770 Limited Edition
(Image credit: Tom's Hardware)

Although multi-GPU technologies for gaming, like SLI and CrossFire, have been dead for many years now, multi-GPU clearly remains useful for other applications, as a recent FluidX3D demo shows. Posting under the handle ProjectPhysX, FluidX3D developer Dr. Moritz Lehmann demonstrated a dual-GPU setup combining Intel's Arc A770 with an Nvidia Titan Xp, a card approaching its seventh birthday. It's a strange pairing, to say the least, but the results show the two GPUs together pack a punch.

For a multi-GPU demo, the setup was surprisingly simple. Dr. Lehmann used Acer's Predator A770 16GB and an Nvidia Titan Xp, with each card simulating and rendering half of the domain. While DX12 and Vulkan are (or were) the highest-profile APIs for multi-GPU workloads, FluidX3D actually runs on OpenCL, which, incidentally, is maintained by the Khronos Group, the same organization behind Vulkan.
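
To make the approach more concrete, here is a minimal sketch (not FluidX3D's actual code) of how a single OpenCL program can see and drive two GPUs from different vendors at once, handing each device half of a simulation grid. It assumes Python with the pyopencl package, both vendors' OpenCL runtimes installed, and a trivial placeholder kernel in place of a real lattice-Boltzmann update.

    import numpy as np
    import pyopencl as cl

    # Every installed OpenCL runtime (Intel, Nvidia, AMD...) appears as a "platform",
    # so a mixed A770 + Titan Xp machine exposes both cards to the same program.
    gpus = [d for p in cl.get_platforms()
            for d in p.get_devices(device_type=cl.device_type.GPU)]
    assert len(gpus) >= 2, "this sketch needs two GPUs"

    NX, NY, NZ = 128, 128, 256      # toy lattice; the real demo used a far larger grid
    half = NZ // 2                  # split the domain along z: one half per GPU
    grid = np.zeros((NX, NY, NZ), dtype=np.float32)

    KERNEL = """
    __kernel void relax(__global float* cells) {
        size_t i = get_global_id(0);
        cells[i] = 0.5f * cells[i];  // placeholder for a real lattice update
    }
    """

    queues, buffers, programs = [], [], []
    for dev, chunk in zip(gpus[:2], (grid[:, :, :half], grid[:, :, half:])):
        ctx = cl.Context(devices=[dev])            # one context per device/vendor
        queues.append(cl.CommandQueue(ctx))
        buffers.append(cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                                 hostbuf=np.ascontiguousarray(chunk)))
        programs.append(cl.Program(ctx, KERNEL).build())

    # Each GPU updates only its own half of the grid; a real solver would also
    # exchange the boundary layer between the two halves every time step.
    for prg, q, buf in zip(programs, queues, buffers):
        prg.relax(q, (NX * NY * half,), None, buf)
    for q in queues:
        q.finish()

Nothing here is vendor-specific, which is exactly why an OpenCL code base like FluidX3D can mix Intel and Nvidia silicon without special-casing either one.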

Precise performance figures and data weren't offered, but Dr. Lehmann says the dual-GPU setup outperformed each individual GPU by roughly 70%, which makes sense, as the A770 and the Titan Xp perform about the same in FluidX3D per the software's scoreboard. The pair took an hour and 13 minutes to compute the simulation and then around 14 minutes to render it, meaning either card on its own would need roughly two hours (73 minutes × 1.7 ≈ 124 minutes) just to run the simulation.

While this combination may seem like it was chosen simply for the sake of humor, there is sound reasoning behind pairing the A770 with the Titan Xp. As the developer says, it makes very little sense to pair a very powerful GPU with a much weaker one, and at least for FluidX3D, it's ideal for the two cards to have similar memory capacity and bandwidth. With the A770 running its 16GB at 560GB/s and the Titan Xp running its 12GB at 548GB/s, the match makes more sense than it first appears.
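
As a rough illustration of why that match matters (a back-of-the-envelope sketch of my own, not the developer's math): with the grid split evenly, the slower card finishes each time step last, so in a bandwidth-bound workload the pair can run at most about twice as fast as its slower member, before any communication overhead.

    # Idealized scaling for an even domain split in a bandwidth-bound solver:
    # the pair can go no faster than twice the slower card.
    def ideal_pair_speedup(bw_a_gbs: float, bw_b_gbs: float) -> float:
        return 2 * min(bw_a_gbs, bw_b_gbs) / max(bw_a_gbs, bw_b_gbs)

    print(ideal_pair_speedup(560, 548))  # A770 + Titan Xp: ~1.96x the faster card
    print(ideal_pair_speedup(560, 288))  # a much slower partner: ~1.03x, barely worth it

The measured ~70% gain sits below that ~96% ceiling, plausibly reflecting the cost of exchanging boundary data between the two halves every step, among other overheads.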

With a 70% performance boost, it might seem hard to believe the gaming industry passed up on multi-GPU technology, a sentiment that was echoed by many commentators on the FluidX3D demo. After all, DX12 and Vulkan have great support for the technology, GPU-to-GPU linking technology is more robust than ever, and the latest versions of PCIe are very fast.

In reply to those comments, Dr. Lehmann offered his analysis and pointed out a few issues with multi-GPU setups for gaming. The biggest is the cost of developing multi-GPU support for games. That burden used to fall on the shoulders of Nvidia and AMD, but it shifted to game developers with the arrival of DX12 and Vulkan, whose multi-GPU features are powerful but require manual tuning to work effectively. Game developers, however, see little return on implementing multi-GPU, which has always been extremely niche, even among PC enthusiasts.

The industry instead took a different route to more performance, focusing on single-GPU setups with ever-bigger flagships, which are now "so hilariously large that you can't even fit a single one in a normal PC case, let alone 2," according to Dr. Lehmann. Considering that GPUs like the RTX 4090 Founders Edition take up three slots, it's hard to disagree. Today, multi-GPU is thriving in data centers, supercomputers, and AI-focused systems, where the cost of implementing support is more than worth it.

Matthew Connatser

Matthew Connatser is a freelance writer for Tom's Hardware US. He writes articles about CPUs, GPUs, SSDs, and computers in general.

  • cryoburner
    With a 70% performance boost, it might seem hard to believe the gaming industry passed up on multi-GPU technology, a sentiment that was echoed by many commentators on the FluidX3D demo. After all, DX12 and Vulkan have great support for the technology, GPU-to-GPU linking technology is more robust than ever, and the latest versions of PCIe are very fast.

    SLI scaling in games was rarely ever that good, and some games would see little to no increase in frame rates from adding a second card. And the value isn't likely to be there either. Two 4070 Tis would cost just as much as a 4090 (going by launch MSRPs), a card that offers more graphics cores than the two combined, along with double the usable VRAM, since VRAM in games generally needed to be mirrored between cards. The performance of the single-card solution would undoubtedly be more consistent as well, and not prone to issues like uneven frame pacing or other bugs or performance anomalies that sometimes affected multi-card setups. Putting a pair of enthusiast-level cards like 4090s or even 4080s into a single system would be even less practical. Top-tier enthusiast cards like the 4090 already basically fulfill the role of what SLI used to offer.
  • TechLurker
    cryoburner said:
    SLI scaling in games was rarely ever that good, and some games would see little to no increase in frame rates from adding a second card. And the value isn't likely to be there either. Two 4070 Tis would cost just as much as a 4090 (going by launch MSRPs), a card that offers more graphics cores than the two combined, along with double the usable VRAM, since VRAM in games generally needed to be mirrored between cards. The performance of the single-card solution would undoubtedly be more consistent as well, and not prone to issues like uneven frame pacing or other bugs or performance anomalies that sometimes affected multi-card setups. Putting a pair of enthusiast-level cards like 4090s or even 4080s into a single system would be even less practical. Top-tier enthusiast cards like the 4090 already basically fulfill the role of what SLI used to offer.
    Eh, part of SLI's theoretical strength was being able to buy a weaker GPU to start off with, then buying another of the same make later on, to improve performance. Great idea in theory to save costs (all the fantasies of being able to get top tier performance for less), but in reality it just never worked out.

    It also didn't help that it took forever to get asymmetrical pairing working, which was really only on AMD's end with being able to X-Fire different card models, and would have theoretically allowed for additional performance gains and less e-waste (being able to use the old card to prop performance up a bit more in theory). But it came too late and the theoretical benefits never actually materialized as AMD could never quite solve the load balancing.

    Considering that some still care about e-waste, it'd be neat if it were possible to reuse old GPUs to offload some tasks to them, like how PHYSX once was a separate card for physics calcs or some experiments now with using a second GPU for AI processing, but most consumer-grade mobos don't have the wired slots needed to run a pair in x16. Ironically, that's the main reason I miss the SLI era; not for the SLI/X-Fire capabilities, but because mobos had enough wiring to run at least 2 cards at x16 and split a third for x8 or x4 duties, due to SLI or add-in cards being popular back then. Now that's basically prosumer territory.
  • Alvar "Miles" Udell
    Device                | FP32 [TFLOPs/s] | Mem [GB] | BW [GB/s] | FP32/FP32 [MLUPs/s] | FP32/FP16S [MLUPs/s] | FP32/FP16C [MLUPs/s]
    Radeon RX 6900 XT     | 23.04           | 16       | 512       | 1968                | 4227                 | 4207
    1x A770 + 1x Titan Xp | 24.30           | 24       | 1095      | 4717                | 8380                 | 8026

    Like the page says, it's a memory-bandwidth-oriented program, so it can perform far better than the 6900 XT, which has about the same FP32 performance: twice the bandwidth equals twice the performance.
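
    As a rough sanity check on that bandwidth-bound reading (a back-of-the-envelope sketch, assuming the roughly 153 bytes of memory traffic per D3Q19 cell update in FP32/FP32 mode that FluidX3D's documentation quotes):

        # Bandwidth-limited ceiling: throughput can't exceed memory bandwidth
        # divided by the bytes each lattice cell update has to move.
        BYTES_PER_CELL = 153  # D3Q19, FP32/FP32 storage

        def peak_mlups(bandwidth_gbs: float) -> float:
            return bandwidth_gbs * 1e9 / BYTES_PER_CELL / 1e6

        print(peak_mlups(512))   # RX 6900 XT:      ~3346 MLUPs/s ceiling vs 1968 measured
        print(peak_mlups(1095))  # A770 + Titan Xp: ~7157 MLUPs/s ceiling vs 4717 measured

    Both setups land at a broadly similar fraction of their ceiling, which is why roughly doubling the bandwidth roughly doubles the score.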
  • Murissokah
    cryoburner said:
    SLI scaling in games was rarely ever that good, and some games would see little to no increase in frame rates from adding a second card. And the value isn't likely to be there either. Two 4070 Tis would cost just as much as a 4090 (going by launch MSRPs), a card that offers more graphics cores than the two combined, along with double the usable VRAM, since VRAM in games generally needed to be mirrored between cards. The performance of the single-card solution would undoubtedly be more consistent as well, and not prone to issues like uneven frame pacing or other bugs or performance anomalies that sometimes affected multi-card setups. Putting a pair of enthusiast-level cards like 4090s or even 4080s into a single system would be even less practical. Top-tier enthusiast cards like the 4090 already basically fulfill the role of what SLI used to offer.

    As a former user of SLI and Crossfire setups, I don't miss them at all. For a long time I couldn't quite explain why they didn't feel as good as a single powerful card, until frame pacing analysis started getting more popular. My experience with multi-GPU setups was that however much they increased the FPS, the result was never fluid.

    The case for multi-GPU was always based on the false premise that it improved your experience nearly twofold, though it never did. It improved average FPS at the cost of horrible pacing. When one of the cards alone could achieve low frame times, you wouldn't feel the bad pacing as much. When it couldn't, it was a mess, and it felt considerably worse.

    For those who may be curious, check the 30 FPS comparison below.
    (embedded video: zOtre2f4qZs)
  • sfjuocekr
    Can we PLEASE stop chanting the same old "multiGPU is dead" nonsense?

    You obviously haven't gotten a clue what you are talking about!

    MultiGPU rendering is still very much possible; the problem is that no developer cared to implement it.

    You can do quite a lot of different things at the same time with two GPUs. Vulkan is by far the most flexible for setting up two GPUs, yet here we are... news outlets crying that it is dead because nVidia stopped active driver support... which had absolutely nothing to do with DX12 and Vulkan support!

    So please, stop spreading fake news, and to all you self-proclaimed "gamers" and developers... stop repeating nonsense spread by smollbrain news outlets and read the frilling documentation!

    Edit: apparently nobody understands.

    Falling back to comparing numbers, slinging percentages... anyone dropping a percentage doesn't know what he or she is on about at all.

    I have 200kg worth of tomatoes; they contain 99% water. After two days I'm left with just 100kg. What percentage of the water was lost?
  • Amdlova
    sfjuocekr said:
    Can we PLEASE stop chanting the same old "multiGPU is dead" nonsense?
    DEAD... a long, long time ago.

    For gamers it never made any sense. It's only there to sell expensive power supplies and big cases.
  • Tim_124
    TechLurker said:
    Eh, part of SLI's theoretical strength was being able to buy a weaker GPU to start off with, then buying another of the same make later on, to improve performance. Great idea in theory to save costs (all the fantasies of being able to get top tier performance for less), but in reality it just never worked out.

    It also didn't help that it took forever to get asymmetrical pairing working, which was really only on AMD's end with being able to X-Fire different card models, and would have theoretically allowed for additional performance gains and less e-waste (being able to use the old card to prop performance up a bit more in theory). But it came too late and the theoretical benefits never actually materialized as AMD could never quite solve the load balancing.

    Considering that some still care about e-waste, it'd be neat if it were possible to reuse old GPUs to offload some tasks to them, like how PHYSX once was a separate card for physics calcs or some experiments now with using a second GPU for AI processing, but most consumer-grade mobos don't have the wired slots needed to run a pair in x16. Ironically, that's the main reason I miss the SLI era; not for the SLI/X-Fire capabilities, but because mobos had enough wiring to run at least 2 cards at x16 and split a third for x8 or x4 duties, due to SLI or add-in cards being popular back then. Now that's basically prosumer territory.
    There is a thriving marketplace for old GPUs on eBay (or whatever second-hand market you want). Even really old parts that lost driver support long ago still have an audience.

    Anyone that throws away a GPU is literally throwing away money.

    I agree on wanting to reduce e-waste, but any card that would be of remote interest in a potential SLI setup can easily find a new home on an auction site or secondary market.
  • Broly MAXIMUMER
    It seriously is a sorely missed opportunity, tossed out just because "it was hard back then."

    All this other stuff works with 2-4+ GPUs no problem, then and now. And I thought the whole point of mGPU being added to the APIs was literally to take all that "hard work" off the devs (sheesh, as I read it back then, it even sounded like somewhere between "checking a box/a little fiddling" and "the API just handles it" in DX12 and Vulkan).

    I want to see it, and I won't ever stop wanting to see it. It feels like we're due for a new "Crysis" moment, so here's hoping someone takes the plunge.
  • cryoburner
    sfjuocekr said:
    Can we PLEASE stop chanting the same old "multiGPU is dead" nonsense?

    You obviously haven't gotten a clue what you are talking about!

    MultiGPU rendering is still very much possible; the problem is that no developer cared to implement it.

    If a piece of hardware has had no real developer support in years, it can effectively be considered "dead", at least as far as being something worth buying is concerned. Today's demanding games do not support multi-GPU rendering, so no one is buying multiple GPUs for playing modern games. And since no one has been buying multiple GPUs to run modern games, there is no reason for developers to put resources toward supporting the feature. The already-small multi-GPU ecosystem completely collapsed, and it would likely need a total reboot to ever be considered viable again. There's always the possibility of that happening down the line, but with video card manufacturers pushing high-powered single cards to fulfill that role, it doesn't seem like that will be happening any time soon.