What You Need For An SLI Build
How SLI Works
How We Tested
SLI Scaling In Synthetic Benchmarks
SLI Scaling In Game Benchmarks
How Many PCIe Lanes Do You Need?
Driver Limitations And SLI-AA Mode
Micro-Stuttering: Is It Real?
Lack Of Support And Image Artifacts
Overclocking In SLI
SLI And Virtual Reality Applications
It's A Great Time To SLI
I've been excited by SLI ever since it was introduced as Scan Line Interleave by 3Dfx. Two Voodoo2 cards could operate together, with the noticeable benefit of upping your maximum 3D resolution from 800x600 to 1024x768. Amazing stuff...back in 1998.
Fast forward almost twenty years. 3Dfx went out of business long ago (it was acquired in 2000 out of bankruptcy by Nvidia), and SLI was re-introduced and re-branded by Nvidia in 2004 (it now stands for Scalable Link Interface). But the overall perception of SLI as a status symbol in hardcore gaming machines, offering massive rendering power, but also affected by numerous technical issues, has changed little.
Today we're looking at the green team specifically, and we plan to follow up with a second part on AMD's CrossFire. In that next piece, you'll see us compare both manufacturers' dual-GPU offerings.
In this article, we'll explore some of the technology's basics as it operates today, take an in-depth look at scaling with two cards compared to one, discuss driver and game-related issues, explore overclocking potential and finally provide some recommendations on how to decide whether SLI is right for you.
While SLI technically supports up to four GPUs in certain configurations, it is generally accepted that three- and four-way SLI don't scale as well as a two-way array. While you are likely to see PCs with three or four GPUs at the top of synthetic benchmark charts, they're a lot less common in the real world, and not just because of their cost.
Furthermore, Nvidia representatives confirm that three-way SLI is not supported in 8x/4x/4x PCIe lane configurations, which are native to Intel's LGA 1150 platform. You'll either need an LGA 1150-based board equipped with an (expensive) PLX bridge chip or an even more expensive LGA 2011-v3 platform if you want to go beyond two-way SLI. Fortunately, most Haswell/Ivy Bridge/Sandy Bridge platforms enable two-way SLI without issue.
Finally, another downside of going beyond two-way SLI is that, because of the way SLI works, input lag increases as the number of cards working together goes up.
MORE: Best Graphics Cards For The Money
MORE: How To Build A PC: From Component Selection To Installation
MORE: Gaming At 3840x2160: Is Your PC Ready For A 4K Display
MORE: All Graphics Articles
MORE: Graphics Cards in the Forum
What You Need For An SLI Build
In order to build an SLI-capable system, you need the following:
- A motherboard with at least two free PCIe x16 slots, operating in at least x8 mode (Nvidia does not support SLI on x4 links). Pretty much all LGA 2011, LGA 2011-v3 and LGA 1150 motherboards satisfy this requirement.
- Two (or more) identical Nvidia-based cards that support SLI, or a dual-GPU card like the GeForce GTX 690 or Titan Z. Generally, different cards won't do the trick.
- A suitable power supply. Increasing the number of GPUs in a system rapidly increases its power requirements. Take that into account when you choose your PSU.
- An SLI bridge. This is generally provided by your motherboard's manufacturer as a bundled accessory.
- The latest Nvidia drivers. If you're reading this article, we're pretty sure that you know that you can grab these from Nvidia's website.
In addition, you'll want a relatively enthusiast-oriented CPU, especially if you're shooting for high frame rates (such as to power 120+ Hz displays) more than better eye candy. For reference, the Core i7-4770K overclocked to 4.4 GHz that we used in these tests appeared to cap out at roughly 150 FPS at 1440p in most applications.
Once all of this is sorted out, you can go ahead and enable SLI in the Nvidia Control Panel.
How SLI Works
There are five SLI rendering modes available: Alternate Frame Rendering (AFR), Split Frame Rendering (SFR), Boost Performance Hybrid SLI, SLI-AA and Compatibility mode. In practice, however, you can forget about the latter four. Modern games almost exclusively use AFR.
AFR, in Nvidia's own definition, is:
[In AFR mode] "the driver divides workload by alternating GPUs every frame. For example, on a system with two SLI enabled GPUs, frame 1 would be rendered by GPU 1, frame 2 would be rendered by GPU 2, frame 3 would be rendered by GPU 1, and so on. This is typically the preferred SLI rendering mode as it divides workload evenly between GPUs and requires little inter-GPU communication."
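The alternation Nvidia describes is a simple round-robin assignment of frames to GPUs. As a minimal sketch (not driver code, just the scheduling idea):

```python
# Minimal sketch of AFR (Alternate Frame Rendering) scheduling:
# each incoming frame is handled by the next GPU in round-robin order.
def afr_schedule(num_frames, num_gpus=2):
    """Return the GPU index responsible for each frame."""
    return [frame % num_gpus for frame in range(num_frames)]

# With two GPUs, frames alternate 0, 1, 0, 1, ...
print(afr_schedule(6))        # six frames on a two-way array
print(afr_schedule(6, 3))     # three-way SLI cycles across all three GPUs
```

This also makes the input-lag point from earlier intuitive: with more GPUs in the rotation, more frames are in flight at once, so the frame you see reflects older input.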
In order to benefit from performance scaling, individual applications need a so-called SLI profile that tells the display driver what specific form of synchronization is required, and which others can be skipped. Nvidia creates these profiles and releases them as part of its periodic driver upgrades. Newer games may not have SLI profiles available when they launch, or the SLI profile initially released may be buggy (creating visual artifacts) or not yet optimized (limiting scaling).
For more information, please refer to this white paper published on Nvidia's developer network. We encourage all of you who wish to learn more about the technology to read it: SLI_Best_Practices_2011_Feb.
How We Tested
|CPU||Intel Core i7-4770K, overclocked to 4.4GHz|
|Graphics Cards||2x EVGA GeForce GTX 980 Superclocked ACX 2.0|
Core: 2048 CUDA Cores, 1266MHz Base Clock, 1367MHz Boost Clock, 162GT/s Texture Fill Rate
Memory: 4096MB, 256-bit GDDR5, 7010MT/s, 224.3GB/s Memory Bandwidth
Bus: Two 8-lane PCIe Gen 3.0 links, 7.88GB/s PCIe Bus Bandwidth (each card)
*Note: LGA 1150 supports up to 16 PCIe lanes without a PLX chip.
Software and Drivers
|Operating System||Microsoft Windows 8.1 Pro x64|
|Graphics Drivers||Nvidia GeForce 347.25 WHQL|
|Middle-earth: Shadow of Mordor||Modified LithTech Jupiter EX engine|
|Built-in benchmark at maxed graphics settings, HD Content not installed|
|BioShock Infinite||Modified Unreal Engine 3 engine|
|Built-in benchmark at UltraDX11_DDOF setting|
|Elite Dangerous v1.1.03||Fourth-generation COBRA engine|
|Coriolis Station custom benchmark, Ultra preset|
|Thief||Modified Unreal Engine 3 engine|
|Built-in benchmark at "Very High" setting|
|Unigine Valley 1.0||ExtremeHD preset, 1920x1080, 8x MSAA|
|3DMark Fire Strike||Custom Graphics Test 2, 2560x1440, 16x AF, 8x MSAA, Max settings|
SLI Scaling In Synthetic Benchmarks
We've all seen plenty of SLI scaling benchmarks, where the analysis focused on some piece of test data, such as "SLI scaling in game x at preset y is z%". Unfortunately, reality is more complex than that, and almost all of those single-number tests don't tell you the whole story. Without some sort of context on what is limiting scaling, it's almost impossible to truly understand the technology's potential.
Together, let's explore SLI scaling in more detail starting with simple synthetic benchmarks. Later, we'll move on to real-world games.
SLI scaling is 73% at 1080p in Unigine Valley, being limited by the CPU – not by the graphics cards
Unigine Valley's Extreme HD preset is run at 1080p (with 8x MSAA). As we'll see from testing other settings, above about 150 FPS, the system's bottleneck shifts from the GPUs to the CPU. Hence the apparent scaling of "only" 73% in this scenario.
What's really happening is that scaling for most scenes is actually close to 100%, while some of the highest-FPS sequences are CPU-limited with SLI active. This leads to a lower average for that benchmark.
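For context, the scaling figures quoted throughout this article follow the usual definition: the percentage gain of the dual-card average over the single-card average, where 100% would be perfect doubling. A quick sketch (the FPS numbers below are made up for illustration):

```python
def sli_scaling(fps_single, fps_dual):
    """Performance gain of two cards over one, in percent; 100% is ideal."""
    return (fps_dual / fps_single - 1.0) * 100.0

# Hypothetical run: one card averages 87 FPS, two cards average 150 FPS.
print(round(sli_scaling(87, 150)))   # roughly 72 percent scaling
```

This is also why a CPU bottleneck drags the average down: if SLI lifts some sequences to a CPU-imposed cap instead of double the single-card rate, those sequences contribute far less than 100% to the benchmark-wide number.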
Notice that, at about 12:21 PM in the data, very high frame rates (above 150 FPS) coincide with decreased GPU core utilization. That's our evidence that the bottleneck is shifting (in this case, to the CPU). Second, despite the 980's comparatively low memory bandwidth, the video memory controller is never a bottleneck in this test; it hovers at 80% utilization. Third, look at how low the PCIe link utilization is, even though each link is only eight lanes wide. We're reporting about 10% there, though Nvidia considers that metric inaccurate and doesn't use it internally.
Now for more demanding resolutions...
Closer to ideal: 88% scaling in a more appropriate test
In order to properly assess SLI scaling, we want to give the GPUs a more taxing workload. How about the Custom Graphics Test 2 of 3DMark Fire Strike at 1440p with 16xAF, 8x MSAA and all of the eye candy turned all the way up? In this scenario, we see closer-to-ideal 88% scaling.
Scaling at 4K is a whopping 96%
It's time to really punish those GPUs by turning the resolution up to 2160p (also known as 4K) with 16x AF, no AA and all of the detail settings maximized. Scaling is an almost theoretically-ideal 96%, though, in this extreme test, we don't observe acceptable framerates, even with two GeForce GTX 980s in SLI.
Note that custom runs of Fire Strike do not produce a numerical "points" score. They should be interpreted as relative to each other.
SLI Scaling In Game Benchmarks
We're deliberately trying to be balanced with our game selection by including an AMD "Gaming Evolved" title (Thief), an Nvidia "The Way It's Meant to be Played" game (Middle-earth) and a couple of titles not specifically affiliated with either vendor (BioShock Infinite and Elite Dangerous).
All benchmark results are averages of three separate runs. Variation was minimal (<1%) across them.
Scaling at 2560x1440 (1440p) varies between 60% and 69% in these tests. In many cases, however, we find ourselves CPU-limited. The less-than-ideal scaling isn't due to any particular issue with SLI as a technology, but rather the fact that our Core i7-4770K at 4.4GHz can't keep up with the GeForce GTX 980s!
Nevertheless, with average frame rates between 97 and 150 FPS, all of these games look exceptionally smooth on the 144Hz display we used to test, even with G-Sync disabled.
Scaling at 3840x2160 is even better than 1440p, varying between 75% and 84% in these tests. At this punishing resolution, the frame rates go down and we're no longer CPU-limited. The GPUs are bottlenecking performance, as evidenced by utilization numbers hovering close to 100% throughout our benchmark runs.
As you can see from our data, here's what a single GeForce GTX 980 delivers in these games at their highest quality preset:
- At 1440p: Allows for what we could call a decent experience. On average, performance falls between 50 and 80 FPS, which is enough to play smoothly with v-sync disabled or with a G-Sync-capable display.
- At 2160p: Allows for what we could call a barely playable experience. On average, performance falls between 28 and 45 FPS, resulting in many sequences falling below 30 FPS and, overall, gameplay that just doesn't feel smooth.
Adding a second GeForce GTX 980 in SLI facilitates the following benefits:
- At 1440p: Allows for v-sync to be enabled with no stuttering on 60Hz displays, or for even smoother gameplay on 120/144Hz displays with performance averaging between 97 and 150 FPS.
- At 2160p: Allows for what we could call a decent experience, with performance averaging between 50 and 83 FPS. That's enough to play with v-sync disabled, though based on testing we'll present shortly, you'll still have to live with micro-stutter in some cases.
In short, while a single GeForce GTX 980 is more than sufficient to drive a 1080p/60Hz display in almost any scenario, adding a second card in SLI introduces benefits to anyone who owns a 1440p/60Hz display or higher, with the gains most pronounced for those playing on 4K screens or enabling 120/144Hz refresh rates. At 4K, SLI is essentially a requirement for a decent experience, although the problem of micro-stutter has not been solved entirely yet for SLI at this resolution.
How Many PCIe Lanes Do You Need?
We've already proven that the performance impact of reducing PCIe bus bandwidth on a single card is essentially negligible (see The Myths Of Graphics Card Performance: Debunked, Part 2). But does the same claim hold true in multi-GPU configurations?
A while back, I was struggling with what seemed like poor performance scaling from GeForce GTX 980s in SLI. I was seeing numbers 30% lower than what I was expecting. And my CPU wasn't the bottleneck, either. Furthermore, the issue only materialized when I had a G-Sync-capable display connected, regardless of whether G-Sync was on or off. I came to suspect some sort of driver issue, before Nvidia and EVGA forum users suggested that I was the only one having this particular issue. If you're a system builder, this is unambiguous guidance: you are the one doing something wrong!
I couldn't make sense of what I was seeing until, about a week later, I remembered that I had left the PCIe Generation setting in my motherboard's firmware at first-gen transfer rates. I was running the two cards at x8 PCIe 1.0! That's a paltry 2 GB/s bandwidth, equivalent more or less to two third-gen lanes. ASRock has a nifty setting in its UEFI that lets you specify this parameter. It's something you never want to mess with unless you are, as I was, testing different PCIe link speeds.
Switching back to PCIe 3.0 in the motherboard UEFI alleviated that 30% performance handicap. But, perhaps even more interestingly, it allowed me to make some interesting inferences.
- Eight lanes (x8) of PCIe 3.0 is more than enough for two top-tier Nvidia cards in SLI. That is, from a PCIe bus effectiveness standpoint, you won't benefit materially from a motherboard with an (expensive) PLX chip or the leap to Intel's LGA 2011-v3 interface.
- Four lanes (x4) of PCIe 3.0 would most likely be fine for SLI, though Nvidia doesn't support it: three-way SLI on x8/x4/x4-capable PCIe 3.0 motherboards is off the table. That's an uncommon scenario, so it matters little anyway. Three-way SLI is pretty rare to start with, and if you're in the market for $1500 worth of graphics cards, you can most likely afford an extra $100 for a PLX bridge-equipped motherboard that'll give you the number of lanes Nvidia requires.
- G-Sync somehow materially increases usage of the PCIe bus in SLI. This isn't an issue per se, but it's a somewhat interesting fact if you are curious about understanding precisely how new technologies work. I still wonder why this is the case.
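The bandwidth arithmetic behind that firmware mishap is easy to reproduce. Using approximate usable per-lane throughput after encoding overhead (8b/10b for Gen 1 and 2, 128b/130b for Gen 3; the per-lane figures below are the commonly cited round numbers):

```python
# Approximate usable bandwidth per lane, in MB/s, after encoding overhead.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 985}

def link_bandwidth_gb_s(gen, lanes):
    """Usable one-direction bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_MB_S[gen] * lanes / 1000.0

print(link_bandwidth_gb_s(1, 8))   # x8 PCIe 1.0: about 2 GB/s
print(link_bandwidth_gb_s(3, 2))   # two Gen 3 lanes: roughly the same
print(link_bandwidth_gb_s(3, 8))   # x8 PCIe 3.0: ~7.88 GB/s, as in our test table
```

That's why leaving the slots at first-gen speeds quietly cut each card's link to the equivalent of about two third-gen lanes.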
Driver Limitations And SLI-AA Mode
SLI, unfortunately, is not without its downsides.
If you're a fan of Dynamic Super Resolution (DSR) and Multi-Frame Anti-Aliasing (MFAA), two of Nvidia's new Maxwell-specific technologies, we have bad news for you: they are not supported in SLI under Windows 8.1. If you enable SLI, those options simply disappear from the Nvidia Control Panel. Users have reported that these do work under Windows 7, although we have not verified it ourselves. We asked Nvidia about driver support in SLI for both technologies and received the following answer:
"MFAA support for SLI configurations will be coming in a future driver release. DSR support for SLI is supported in some circumstances. More robust hardware configuration support will be coming in future driver releases."
What you do get is an SLI-exclusive anti-aliasing mode called SLI-AA that can be enabled through the Nvidia Control Panel. The company went back and forth on including this feature in its drivers; it was missing for a while and is back now. While you generally won't use it much, the option does let you essentially force MSAA on in DirectX 9 games that don't natively support it, in cases where you don't need SLI's extra performance or where SLI AFR rendering isn't supported at all. It won't work in DirectX 10 or 11 games, so the feature's value in modern titles is negligible.
The above example illustrates the use of SLI-AA 16x compared to Blizzard's built-in FXAA support for Diablo III. You'll notice that SLI-AA produces a sharper image overall, but, like all MSAA-based techniques, does not remove aliasing from transparent textures (the banner, in this example). Click on the image to expand it for a better visual comparison.
Micro-Stuttering: Is It Real?
You've probably heard the term micro-stuttering used to describe an artifact experienced by owners of certain multi-GPU configurations. In short, it's caused by the rendering of frames at short but irregular time intervals, resulting in sustained high average FPS, but gameplay that still doesn't feel smooth.
The most common cause of stuttering is turning on v-sync when your hardware can't maintain a stable 60 FPS. The same applies to games that forcibly enable v-sync, the best example of which is Skyrim. This phenomenon has nothing to do with SLI, but it will manifest in SLI-based systems as well. In those cases, the output jumps between 30 and 60 FPS in order to maintain synchronization with the screen's refresh, meaning some frames are displayed once, while others appear twice. The result is perceived stuttering. The workarounds are Nvidia's Adaptive V-Sync setting in the graphics control panel, which allows some tearing, or V-Sync (smooth), which prevents tearing but caps the frame rate at 30 FPS. Of course, if you own a newer G-Sync-capable display, enabling that feature will circumvent the problem altogether.
Micro-stuttering is a different phenomenon altogether. It is evident even when v-sync is disabled. What causes the issue is a variance in so-called frame times. That is, different frames are rendered (and displayed) using different amounts of time, which in turn appears as FPS values that are high (say, above 30-40), but gameplay that is not perceived as smooth. The data defining micro-stuttering is thus the variance in frame times for a given test run. The higher the variance, the less smooth the experience. While frame times do depend on frame rates overall (100 FPS = 10 milliseconds average frame time), frame time variance expressed in relative terms does not.
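As a rough illustration of why relative variance is the right lens (this is a simplified metric, not the exact statistic our graphing tools compute), compare two captures with identical average frame rates:

```python
import statistics

def frame_stats(frame_times_ms):
    """Average FPS and relative frame-time variation for a capture.

    The coefficient of variation (stdev as a percentage of the mean) is
    what tracks perceived smoothness: it stays comparable across
    different average frame rates.
    """
    mean = statistics.mean(frame_times_ms)
    cv = statistics.stdev(frame_times_ms) / mean * 100.0
    return 1000.0 / mean, cv

steady = [10.0, 10.0, 10.0, 10.0]    # both captures average 100 FPS...
stutter = [4.0, 16.0, 4.0, 16.0]     # ...but this one alternates wildly
print(frame_stats(steady))           # low variation: feels smooth
print(frame_stats(stutter))          # high variation: feels choppy
```

Both runs report 100 FPS on a bar chart; only the frame-time variation reveals that the second one would feel anything but smooth.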
Sometimes micro-stuttering is caused by a game's engine optimization issues, irrespective of multi-GPU configurations. Don Woligroski tested Middle-earth: Shadow of Mordor, for instance, and observed that game's issues upon release. The problems with AMD cards persisted until it was patched. See below pre- and post-patch frame time variance for single cards. Clearly, there was an issue that needed to be fixed.
Any multi-GPU-equipped system faces a challenge in trying to minimize frame time variance while maximizing average frames per second and diminishing input lag. In the past, older combinations of hardware and software were really hampered by micro-stuttering, and it wasn't until Nvidia and AMD made an effort to meter the rate at which frames appear that things started getting better.
Middle-earth: Shadow of Mordor's benchmark at 1440p outputs extraordinarily consistent frame times without SLI. Even with the technology enabled, the game behaves very well.
No doubt, this is also attributable to how well Middle-earth and its associated SLI profile are optimized. Again, it took a major game patch before initial issues associated with gameplay smoothness were addressed.
Now we'll increase this title's resolution to 2160p.
This is the first time we see less than ideal performance in SLI. Shadow of Mordor just doesn't feel smooth at 4K, even with SLI, and despite an average frame rate that would suggest otherwise.
In our tests with Elite: Dangerous, frame time variance at 1440p is actually lower in SLI. This is possible because overall frame times are lower in SLI versus single-GPU mode. Frontier's fourth-generation COBRA engine appears to be really well-optimized for operation in SLI.
Unlike Shadow of Mordor, performance in Elite: Dangerous' highest-detail preset at 4K appears just as flawless in SLI as it is with a single GPU. Frame time variance is well below what could be identified as micro-stutter.
Thief also behaves extremely well in SLI at 1440p, at least as far as frame time variance is concerned.
By contrast, Thief struggles at 4K, even with the power of two GeForce GTX 980s behind it. Frame time variance rises to levels where micro-stutter would be noticed.
Over the past two generations of graphics architectures, Nvidia has made a concerted effort to minimize frame time variance in SLI configurations. We didn't encounter any micro-stutter in the games we tested at 2560x1440. It's more of an issue at 4K, however, as our two 980s in SLI posted much higher variances in Middle-earth: Shadow of Mordor and Thief.
Lack Of Support And Image Artifacts
Because multi-GPU setups represent a minority of gaming systems, you can't really blame developers for not testing games extensively with SLI or, occasionally, choosing to not support SLI at all. It took over six months after launch for The Creative Assembly to support SLI in Total War: Rome II, after all. And several Assassin's Creed games had issues with shadows not displaying properly in SLI; these were eventually fixed.
For mysterious reasons, enabling SMAA with HBAO+ in Far Cry 4 on an SLI system results in annoying ghosting effects staying on-screen and dark shadows at night. The only workaround right now is switching to a different shadow and AA setting (like TXAA) in the game's video options.
We've also heard of issues in Dragon Age: Inquisition. Reportedly, there are shimmering textures and "white fog" that is too dense to see through on certain SLI configurations.
Although some issues exist, the good news is that most AAA games eventually get patched. Just don't be surprised if there are issues on launch day if you use a multi-GPU configuration from either Nvidia or AMD.
Overclocking In SLI
Overclocking in SLI may yield a 15-20% performance boost, or more. For reference, my own personal results in 3DMark Fire Strike are available here. I'm working with an EVGA Superclocked version of the 980s, so I'm starting from an already-high 1266MHz base clock rate and 1367MHz GPU Boost rating.
Overclocking cards in SLI is not always straightforward, however. In particular, we've noticed that the bottom card (that is, the one in the lower PCIe slot), regardless of voltage input settings, ends up operating at a lower voltage than the top one. That is undesirable, as it appears to limit the overclocking potential of both cards, since they can only go as fast as the lower card, with its limited voltage increase, will allow.
When asked about the voltage inconsistency, Nvidia answered, "We don’t run each chip at the exact same voltage. There’s variation from chip to chip." For 99% of users, this shouldn't be an issue they need to worry about. If the voltage inconsistency really bothers you though, there is a workaround: unlink the card settings in a tool like PrecisionX and set the card with the lower voltage at a higher core clock offset than the other. This won't lead to core clock rate differences; the frequencies will still be locked at the same number. But it will raise the voltage of the card to match. Not elegant, but it works!
SLI And Virtual Reality Applications
A word of warning is in order if you're using an Oculus Development Kit 2, or are planning to purchase the Oculus Rift once it is released (it's rumored to be coming in late 2015). Virtual reality headsets, by nature, rely on the lowest possible level of latency between head movements and display updates. Anything else can result in uncomfortable motion sickness-type reactions.
Nvidia recently introduced a "Virtual Reality pre-rendered frames" setting in its Control Panel (with a default setting of "1") to help reduce latency to its lowest level possible. Unfortunately, this setting does not apply to SLI; it only affects single GPUs. Because of the way SLI works, the CPU needs to pre-render at least two frames at any given point in time for a measurable performance benefit.
Nvidia's "Virtual Reality pre-rendered frames" setting does not apply to SLI
The Oculus DK2 supports refresh rates no higher than 75Hz. That means the minimum displayable frame time with v-sync is 13.3 milliseconds. Pre-rendering an extra frame from the CPU results in an additional latency of the same amount, which sounds small but is actually quite significant to VR.
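The arithmetic is simple, assuming v-sync locks frame delivery to the display's refresh interval:

```python
def vsync_frame_time_ms(refresh_hz):
    """Minimum displayable frame time with v-sync enabled, in milliseconds."""
    return 1000.0 / refresh_hz

print(round(vsync_frame_time_ms(75), 1))   # Oculus DK2 at 75Hz: 13.3 ms
print(round(vsync_frame_time_ms(90), 1))   # a 90Hz panel: 11.1 ms
# Each frame the CPU pre-renders adds one more refresh interval of latency
# between your head movement and the corresponding display update.
```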
Note: Without delving into a rather complex discussion of why v-sync is essentially necessary to VR, or debunking the common "v-sync is evil" attitude in the desktop gaming space, trust us on this one. You'll want v-sync on in your VR applications.
In situations where your game cannot be rendered at a constant 75 FPS, having SLI enabled will help. A frame pre-rendering delay is always desirable over the stuttering caused by having v-sync enabled and "missing" a frame refresh, thus showing the previous frame again.
The Crescent Bay prototype from Oculus is rumored to be a 90 Hz, 1440p device
Furthermore, if the latest Crescent Bay prototype from Oculus is any indication of where the consumer version is headed, maintaining a 90Hz (90 FPS) frame rate at 1280x1440 per eye with v-sync enabled may in fact require two cards in SLI. In that case, the minimum displayable frame time with v-sync is reduced to 11.1 milliseconds, which is certainly an improvement.
In short, SLI is neither required nor necessarily desirable for the current DK2 version of the Rift. But it might actually be helpful for an ideal experience once the final consumer version is introduced, particularly if you want to play with the eye candy turned up in current-generation games. If you're shooting for the absolute lowest possible latency, however, you'll want instead to drop your detail settings and go with a single GPU.
It's A Great Time To SLI
Nvidia's SLI is in great health as of early 2015. AAA games don't always support SLI properly on launch day, but they either get patched quickly or run well with minor workarounds. Once a proper profile is in place for a game, two-way SLI yields real-world scaling of 75 to 85% at 3840x2160 on our GeForce GTX 980s. Below 4K, you'll probably be CPU-bound in most real-world metrics.
Two 980s in SLI complemented by a decent CPU, can, in many cases, push above 150 FPS at 1440p, effectively powering even the most advanced 144Hz gaming displays commercially available. Overclocking, which isn't as straightforward with SLI compared to a single GPU, can give you an additional 15-20% boost if you want to go that route. The Maxwell architecture's high efficiency paved the way for quieter cooling solutions, making SLI even more attractive this generation.
So should you consider going multi-GPU with SLI?
If you're gaming at a resolution/refresh rate of 1080p/60Hz or lower, you don't need it. One GeForce GTX 980 (or 970) maxes out pretty much everything you throw at it these days. But if you're eyeing 1080p at 120Hz or more, 1440p at 60Hz or more, 4K or gaming across multiple displays, two or three graphics processors will help you achieve the performance levels you want. Just know that you'll lose certain features along the way, and will probably run into technical challenges. Further, micro-stuttering may still be an issue for you at the highest resolutions.
SLI is also a great option if you purchase a single GeForce GTX 960, 970 or 980 card now and plan to upgrade at some point before the next-gen architecture surfaces. A second identical Maxwell-based card will almost certainly yield your best bang-for-the-buck upgrade. Just keep in mind it really does need to be identical though, and the inventories of some GPUs don't always last long. Reference-class cards are probably a safer choice in this situation.
We tested for micro-stuttering and found that there really is none to speak of, at least from our one configuration, at 1440p. But there are still issues with micro-stuttering at 4K.
Early adopters already considering their upgrade path for VR might want to hold off. SLI is not a great choice today for the Oculus DK2, but it might be a viable option for the final Oculus Rift when it surfaces as a retail kit.
Finally, if a dual-GM104 card is in the works, and if Nvidia decides to price it reasonably, unlike what it did for GeForce GTX Titan Z, we'd definitely be interested.
In part two of this series, we'll look at the red team's rival multi-GPU technology, CrossFire (we've already lined up a couple of AMD's next-gen cards to make it happen for you). We'll also compare how the two companies' flagship cards perform in a two-versus-two royal rumble.
So stay tuned!