Radeon R9 295X2 8 GB Review: Project Hydra Gets Liquid Cooling
By Igor Wallossek
1. Not For The Faint Of Heart, AMD Says

Source: Wikipedia

Dreadnought. Perhaps you know the word from Final Fantasy. Or maybe Warhammer. Or Star Trek, even.

But the dreadnoughts I was thinking about during my week locked up in the lab were the 20th-century battleships built by Britain, France, Germany, Italy, Japan, and the U.S. Before the signing of the Washington Naval Treaty in 1922, each of those countries (and several others) poured tons of resources into one-upping each other, commissioning capital ships able to move faster, fire farther, and withstand more damage. Eventually, the exercise became economically exhausting.

But all in the name of claiming superiority, right?

The graphics card market is in the midst of its own arms race. AMD fired a white-hot salvo back in 2011 with the introduction of its Radeon HD 7970, which easily outpaced Nvidia's GeForce GTX 580. A few short months later, Nvidia shot back with the GeForce GTX 680, hitting harder and for less money. Since then, both companies have traded broadsides, introducing the Radeon HD 7970 GHz Edition, GeForce GTX 690, Radeon HD 7990, GeForce GTX Titan, and Radeon R9 290X, all leveraging relatively similar architectures to push the performance envelope. Prices climbed, but so did frame rates, and affluent gamers willingly paid the premium.

If those cards are the dreadnoughts of our industry, then we’re about to enter the era of super-dreadnoughts (yes, that’s a thing).

A couple of weeks ago, Nvidia announced its GeForce GTX Titan Z, a dual-GK110-powered, triple-slot behemoth. Jen-Hsun called it the perfect card for those in need of a supercomputer under their desk. And using his 8 TFLOP specification, I worked backward to a core clock rate around 700 MHz per GPU. That’s more than 100 MHz lower than the GK110 on a GeForce GTX Titan. Wouldn’t you be better off building that supercomputer using two, three, or even four Titans? We have to wait and see; the Titan Z isn’t available yet.
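For the curious, here's that back-of-the-envelope math in a short Python snippet. It's only a sketch, assuming two fully-enabled GK110s (5760 CUDA cores total) and the standard two-FLOPS-per-core-per-clock FMA convention:

```python
# Estimating the GeForce GTX Titan Z's per-GPU clock from Nvidia's 8 TFLOPS claim.
# Peak FP32 throughput = CUDA cores * 2 FLOPS per clock (FMA) * clock rate.
cuda_cores = 2 * 2880          # two fully-enabled GK110 GPUs (assumed)
quoted_flops = 8e12            # Nvidia's headline single-precision figure

clock_hz = quoted_flops / (cuda_cores * 2)
print(f"Implied core clock: {clock_hz / 1e6:.0f} MHz per GPU")  # ~694 MHz
```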

Although one GeForce GTX Titan Z appears destined to be quite a bit slower than a pair of Titans, Nvidia plans to ask an astounding $3000 for it, 50% more than those two cards combined.

In response, AMD is escalating the arms race with its Radeon R9 295X2, another dual-GPU specimen. But this one is quite a bit different. To begin, it sports Hawaii GPUs that run just a bit faster than the single-processor Radeon R9 290X. Also, the 295X2 is a dual-slot board. How is such a feat possible? Closed-loop liquid cooling, of course.

AMD Fires Back With (Relative) Value

The existence of this card wasn’t a carefully-guarded secret. In fact, AMD had a marketing agency shipping out care packages alluding to its arrival. But a lot of the 295X2’s rumored specifications were completely wrong. Let's set the record straight, shall we?

Learn More About Hawaii

For more information on the Hawaii GPU, check out Radeon R9 290X Review: AMD's Back In Ultra-High-End Gaming

Again, AMD starts with two Hawaii processors, each manufactured at 28 nm and composed of 6.2 billion transistors. Those GPUs are unaltered, sporting a full 2816-shader configuration with 176 texture units, 64 ROPs, and an aggregate 512-bit memory bus. Four gigabytes of GDDR5 per processor are attached, yielding a card with 8 GB on-board.

AMD has a respectable track record of keeping its dual-GPU boards almost as fast as two single-GPU flagships. The Radeon HD 6990 ran something like 50 MHz slower than a Radeon HD 6970. But it still managed to accommodate two fully-operational Cayman processors. The Radeon HD 7990 did battle against the GeForce GTX 690 with Tahitis also operating 50 MHz slower than the then-fastest card in AMD’s stable. They too were fully-featured, with all 2048 shaders enabled.


| | Radeon R9 295X2 | Radeon R9 290X | GeForce GTX Titan | GeForce GTX 780 Ti |
| --- | --- | --- | --- | --- |
| Process | 28 nm | 28 nm | 28 nm | 28 nm |
| Transistors | 2 x 6.2 Billion | 6.2 Billion | 7.1 Billion | 7.1 Billion |
| GPU Clock | Up to 1018 MHz | Up to 1 GHz | 837 MHz | 875 MHz |
| Shaders | 2 x 2816 | 2816 | 2688 | 2880 |
| FP32 Performance | Up to 11.5 TFLOPS | 5.6 TFLOPS | 4.5 TFLOPS | 5.0 TFLOPS |
| Texture Units | 2 x 176 | 176 | 224 | 240 |
| Texture Fillrate | Up to 358.3 GT/s | 176 GT/s | 188 GT/s | 210 GT/s |
| ROPs | 2 x 64 | 64 | 48 | 48 |
| Pixel Fillrate | Up to 130.3 GP/s | 64 GP/s | 40 GP/s | 41 GP/s |
| Memory Bus | 2 x 512-bit | 512-bit | 384-bit | 384-bit |
| Memory | 2 x 4 GB GDDR5 | 4 GB GDDR5 | 6 GB GDDR5 | 3 GB GDDR5 |
| Memory Transfer Rate | Up to 5 GT/s | 5 GT/s | 6 GT/s | 7 GT/s |
| Memory Bandwidth | 2 x 320 GB/s | 320 GB/s | 288 GB/s | 336 GB/s |
| Board Power | 500 W | 250 W | 250 W | 250 W |

The Radeon R9 295X2's twin Hawaii GPUs go even further. Whereas a reference Radeon R9 290X runs at up to 1000 MHz, the 295X2 gets a small bump to 1018 MHz. Yes, the processors are still subject to the dynamic throttling behavior we illustrated in The Cause Of And Fix For Radeon R9 290X And 290 Inconsistency. But because cooling is better this time around, we’ve been told that throttling shouldn’t be an issue.
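If you're wondering where the table's "up to" figures come from, they all fall out of that 1018 MHz ceiling. Here's a quick sketch of the arithmetic, using the same FMA convention as the Titan Z estimate earlier:

```python
# Deriving the Radeon R9 295X2's peak spec-sheet figures from its 1018 MHz clock ceiling.
shaders = 2 * 2816
texture_units = 2 * 176
rops = 2 * 64
clock_ghz = 1.018

fp32_tflops = shaders * 2 * clock_ghz / 1000   # 2 FLOPS per shader per clock (FMA)
texel_rate = texture_units * clock_ghz         # GTexels/s
pixel_rate = rops * clock_ghz                  # GPixels/s

print(round(fp32_tflops, 1), round(texel_rate, 1), round(pixel_rate, 1))
# 11.5 TFLOPS, 358.3 GT/s, 130.3 GP/s
```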

Between the two GPUs, their respective memory packages, and a bunch of power circuitry, AMD plants a PEX 8747 switch, the same 48-lane, five-port device found on its Radeon HD 7990 and Nvidia’s GeForce GTX 690. The switch interfaces with each Hawaii processor’s PCI Express 3.0 controller, facilitating a 16-lane connection between the GPUs and platform.

AMD also offers a similar array of display outputs to what we saw on the 7990, including one dual-link DVI-D connector and four mini-DisplayPort interfaces.

For all of that, AMD claims it will charge $1500 (or €1100 + VAT). The Radeon R9 295X2 won’t be available immediately, either. As of right now, the company says you’ll find it for sale online the week of April 21st. Don and I are in agreement here: we’ve seen too many missed price estimates and ship dates from AMD to take this one as gospel. We'll treat $1500 as general guidance for now.

2. Power And Design Decisions

Power Consumption, By The Numbers

Notice the two eight-pin power plugs? A lot of folks were speculating that AMD would use three of those. AMD is coy about the 295X2’s maximum power, but claims it averages around 500 W under load. We’ll give you a definitive answer on consumption in the following pages. However, let’s use 500 W as a nice, round figure. The PCI-SIG electromechanical specification rates a 16-lane PCI Express slot for up to 75 W. A six-pin auxiliary connector is rated for the same 75 W. And you get 150 W from an eight-pin connector. Two of those eight-pin plugs plus a motherboard slot should add up to 375 W, leaving us about 125 W short of our target.
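To put firm numbers on that shortfall, here's the simple addition. These are spec ratings only; what a given power supply can actually push through those leads is another matter:

```python
# Adding up the PCI-SIG ratings for the 295X2's power inputs versus AMD's ~500 W figure.
slot_w = 75                  # 16-lane PCI Express slot
eight_pin_w = 150            # per eight-pin auxiliary connector
in_spec_budget = slot_w + 2 * eight_pin_w

print(in_spec_budget)        # 375 W within the letter of the spec
print(500 - in_spec_budget)  # 125 W shortfall against a 500 W load
```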

According to AMD, that’s not a problem. Representatives from the PCI-SIG declined comment, but AMD says:

"The PCI spec was created as a guideline for wide compatibility and thermal density within a two-slot form factor. The 295X2 is about pushing performance, not wide compatibility, and as a result requires carefully-chosen infrastructure by DIYers. This selection criteria for PSUs and cases…will appear on amd.com after launch. When it comes to PSUs, the 295X2 will separate the wheat from the chaff, so to speak. The best PSUs will use low-gauge wiring and high-output MOSFETs...”

We’ll get into the hardware you need to support a Radeon R9 295X2 shortly. The takeaway for now is that AMD’s new flagship pushes well beyond the specification we’ve long-assumed was a ceiling, but now know doesn’t have to be (so long as you own the right equipment).

Playing Dress-Up

While I readily give AMD credit for building dual-GPU cards that come close to doubling the potential of its single-GPU flagships, I’m far less complimentary of how the company typically achieves those numbers. Check out a couple of these story titles. There’s AMD Radeon HD 6990 4 GB Review: Antilles Makes (Too Much) Noise and the much-debated Radeon HD 7990 In CrossFire: The Red Wedding Of Graphics. Past efforts were some combination of noisy, hot, and simply untenable in dual-card arrays.

My feedback didn’t make me a very popular guy at AMD, judging by some of the phone calls I fielded. But engineers took that input and came up with something better-conceived for the Radeon R9 295X2: a closed-loop cooler able to dissipate thermal energy from two high-end processors and exhaust it right out of your chassis using one big 120 mm fan.

Naturally, I’m amped to see AMD maintain its scaling with two Hawaii GPUs. But I’m even more impressed that the company is treating the reference cooler with respect. A partnership with Asetek results in a semi-custom solution that includes a heat sink covering the whole card, two water blocks in series, approximately 380 mm of tubing, a radiator, and a 120 mm fan.

This gives AMD the flexibility to fit into a dual-slot form factor on a board as long as the Radeon HD 7990. Asetek’s cooler is covered by a boxy metal shroud colored black and silver, giving the card more rigidity than past plastic-laden affairs. A metal backplate sandwiches the PCB, cooling the memory and adding more stiffness. Both hoses exit out the top of the card.

As far as industrial design goes, Nvidia’s GeForce GTX 690, 780, 780 Ti, and Titan still sport sexier aesthetics. But AMD makes up much of its deficit with a metal casing, red-illuminated center fan (mostly for cooling the power circuitry), and lit-up Radeon logo on top of the card. Kudos to the company for building a more substantial enthusiast-oriented product.

3. Does Your System Have What It Takes?

Making Sure You Support The Radeon R9 295X2

Not surprisingly, a ~500 W graphics card attached to rubber hoses and a radiator requires a couple of special considerations. Mainly, your chassis needs to be large enough with the necessary mounting points, and your power supply must deliver ample current.

Right out of the gate, AMD's Radeon R9 295X2 is a long card (12 inches, like the Radeon HD 7990), so you can't cram it into compact enclosures. But now you also need room to mount its radiator and fan as an exhaust. A great many chassis have at least one spot for a 120 mm fan, so this shouldn't be an issue. But if you're also using a closed-loop CPU cooler, you actually need two positions able to take a radiator.

Rosewill sent us a couple of its Throne enclosures to use in the lab, and they served as my platform for testing. I already had an Intel BXRTS2011LC blowing out the back, and needed to mount the 295X2’s radiator up top. In every orientation but one, Asetek's solution interfered with the voltage regulator heat sink on my MSI X79A-GD45 Plus. Of course, I have plenty of hardware here in the office to swap in or out, but the Radeon R9 295X2 came really close to not working with the first configuration I set up.

It wasn't an issue in the Throne, but here's something else to think about: you have roughly 380 mm of tubing to work with, which could become an issue in a particularly tall chassis. The same concern applies to the second card in a quad-GPU configuration, which would also require a third radiator mounting spot.

AMD is more specific when it comes to power supply compatibility. Clearly, you need two eight-pin auxiliary connectors, and the company suggests avoiding adapters to create eight-pin blocks. Each one needs to be capable of delivering 28 A of current, and a combined 50 A beyond the rest of the platform’s draw on the +12 V rail.

As you might imagine, this is a really good time to own a power supply with a single +12 V rail. If yours uses multiple rails, the next step is to figure out how your connectors share them, taking care to truly reserve the right amount of current for AMD’s card.

The company doesn’t call out a recommended power supply capacity in its press material, instead choosing to get specific about amperage. But if you’re setting aside an aggressive 550 W for the Radeon R9 295X2 and you have an overclocked Ivy Bridge-E-based CPU, memory, storage, and a handful of cooling fans, anything under 1000 W starts looking a little dicey.
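If you'd rather think in watts than amps when shopping for a power supply, the conversion is straightforward. A rough sketch, assuming a nominal 12 V rail (real rails sag a little under load):

```python
# Translating AMD's +12 V current guidance into wattage at a nominal 12 V.
rail_v = 12.0
per_connector_amps = 28     # required from each eight-pin lead
combined_amps = 50          # reserved beyond the rest of the platform's draw

print(per_connector_amps * rail_v)  # 336 W available per eight-pin connector
print(combined_amps * rail_v)       # 600 W of +12 V headroom set aside for the card
```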

4. Test Hardware And Benchmarks

As always, the hardware and benchmarks used in today’s review are important. However, methodology is also more relevant than ever, particularly in light of the dynamic clock rate behavior we described when AMD’s Radeon R9 290X first launched.

To that end, all of our testing happens in the aforementioned Rosewill Throne chassis. And rather than simply firing up benchmarks after an idle period, we heat up every card with several minutes of gameplay prior to recording results. If a configuration is prone to throttling, that gets documented. Really though, AMD effectively addressed variability on its reference Hawaii-based board through a driver, and most third-party solutions are better-cooled.

Test Hardware
Processor
Intel Core i7-4960X (Ivy Bridge-E) 3.6 GHz Base Clock Rate, Overclocked to 4.2 GHz, LGA 2011, 15 MB Shared L3, Hyper-Threading enabled, Power-savings enabled
Motherboard
MSI X79A-GD45 Plus (LGA 2011) X79 Express Chipset, BIOS 17.8
Memory
G.Skill 32 GB (8 x 4 GB) DDR3-2133, F3-17000CL9Q-16GBXM x2 @ 9-11-10-28 and 1.65 V
Hard Drive
Samsung 840 Pro SSD 256 GB SATA 6Gb/s
Graphics
AMD Radeon R9 295X2 8 GB

2 x AMD Radeon R9 290X 4 GB (CrossFire)

AMD Radeon HD 7990 6 GB

2 x Nvidia GeForce GTX Titan 6 GB (SLI)

2 x Nvidia GeForce GTX 780 Ti 3 GB (SLI)

Nvidia GeForce GTX 690 4 GB
Power Supply
Rosewill Lightning 1300, 1300 W, Single +12 V rail, 108 A output
System Software And Drivers
Operating System
Windows 8.1 Professional 64-bit
DirectX
DirectX 11
Graphics Drivers
AMD Catalyst 14.4 Beta

Nvidia GeForce 337.50 Beta

AMD claims that the Radeon R9 295X2 is designed for gaming at 3840x2160. However, we also ran benchmarks at 2560x1440, which is still a popular enthusiast-oriented resolution. All tests at QHD are run through our FCAT system; numbers are generated using video captured from a DVI display output. Testing at Ultra HD was conducted through a mix of technologies; the GeForce GTX 690 and Radeon HD 7990 wouldn’t cooperate with the dual-HDMI method of getting FCAT working at 4K. This shouldn’t be an issue when frame pacing is working properly, since there are no dropped or runt frames to report. Where it results in suspect data, however, we’ll call that out.
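For readers unfamiliar with FCAT: a capture overlay tags every rendered frame with a colored bar, and post-processing counts how many scanlines each color occupies in the recorded video, flagging frames that are dropped entirely or too short to perceive. A simplified sketch of that classification step follows; the 21-scanline runt threshold is the commonly used default, not something we tuned per card:

```python
# Simplified FCAT-style frame classification from captured video:
# each frame's overlay color spans some number of scanlines; zero scanlines means
# the frame was dropped, and anything under the threshold counts as a runt.
RUNT_THRESHOLD = 21  # scanlines; the commonly used default (assumed here)

def classify_frames(scanlines_per_frame):
    dropped = sum(1 for s in scanlines_per_frame if s == 0)
    runts = sum(1 for s in scanlines_per_frame if 0 < s < RUNT_THRESHOLD)
    full = len(scanlines_per_frame) - dropped - runts
    return full, runts, dropped

print(classify_frames([540, 532, 8, 0, 545]))  # (3 full frames, 1 runt, 1 dropped)
```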

Benchmarks And Settings
Battlefield 4
2560x1440 and 3840x2160: Ultra Quality Preset, v-sync off, 100-second Tashgar playback. FCAT for 2560x1440; Fraps/FCAT for 3840x2160
Arma 3
2560x1440 and 3840x2160: Ultra Quality Preset, 8x FSAA, Anisotropic Filtering: Ultra, v-sync off, Infantry Showcase, 30-second playback, FCAT and Fraps
Metro: Last Light
2560x1440 and 3840x2160: Very High Quality Preset, 16x Anisotropic Filtering, Normal Motion Blur, v-sync off, Built-In Benchmark, FCAT and Fraps
Assassin's Creed IV
2560x1440 and 3840x2160: Maximum Quality options, 4x MSAA, 40-second Custom Run-Through, FCAT and Fraps
Grid 2
2560x1440 and 3840x2160: Ultra Quality Preset, 120-second recording of built-in benchmark, FCAT and Fraps
Thief
2560x1440 and 3840x2160: Very High Quality Preset, 70-second recording of built-in benchmark, FCAT and Fraps
Tomb Raider
2560x1440 and 3840x2160: Ultimate Quality Preset, FXAA, 16x Anisotropic Filtering, TressFX Hair, 45-second Custom Run-Through, FCAT and Fraps
5. Results: Arma 3

2560x1440

AMD claims its Radeon R9 295X2 is designed for 4K gaming. But I also wanted to run 2560x1440. Not only is that resolution far more common in the high-end space, but it also serves as a good baseline before we get to the Ultra HD numbers.

Arma 3 demonstrates a platform bottleneck at 2560x1440, even with the game's lushest detail settings switched on. Average frame rates from most configurations hover just under 80 FPS, while minimums sit just under 70 FPS.

Only the Radeon HD 7990 and GeForce GTX 690 fall short of the choke point, though both still deliver a readily-playable experience.

Charting frame rate over time shows the ultra-high-end boards in their narrow range up top, as the other two cards trail.

Frame time variance attempts to quantify the smoothness of a given graphics card’s performance. Once upon a time, not long ago, this was a very real issue for AMD in multi-GPU arrays. Its processors would deliver frames as they were made ready, sometimes resulting in runts—frames on-screen for so short of a time that you don’t actually perceive them.

The company first addressed concerns over reported versus experienced frame rates with a special driver that more evenly paced the rate at which output was displayed. And although the Radeon HD 7990 and Radeon R9 290X cards approach CrossFire differently, the incredibly low frame time variance in Arma 3 shows that both solutions pace frames effectively.
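For reference, the frame time variance we chart is built from the differences between consecutive frame times, summarized as an average plus 75th- and 95th-percentile (near-worst-case) values. A minimal, illustrative version of that calculation:

```python
# Illustrative frame time variance calculation: absolute deltas between consecutive
# frame times, reported as an average plus 75th- and 95th-percentile values.
import statistics

def frame_time_variance(frame_times_ms):
    deltas = sorted(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))
    percentile = lambda p: deltas[min(len(deltas) - 1, int(p * len(deltas)))]
    return statistics.mean(deltas), percentile(0.75), percentile(0.95)

print(frame_time_variance([16.6, 17.1, 16.4, 16.9, 33.0, 16.5]))
```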

This sample of frame times reveals a handful of small spikes, but overall consistent performance.

3840x2160

Gone is the bottleneck as we step up to 3840x2160 and watch these cards further differentiate themselves. The Radeon R9 295X2 sits up top, followed by two Radeon R9 290X boards in CrossFire. The GeForce GTX 780 Ti and Titans in SLI take third and fourth place.

We haven't seen any dropped or runt frame issues from Nvidia in the past, and two GK110s should easily best a pair of GK104s. However, Fraps and FCAT results seem to agree that Titans in SLI and the GeForce GTX 690 report similar average frame rates. What we're likely missing is the fact that the 690's 2 GB of memory per GPU causes quite a bit of stuttering. So, while the frame rate appears high through Fraps, the experience of gaming on a 690 at 4K is not nearly as pleasant.

That same phenomenon isn't captured in the frame rate over time chart, where the GeForce GTX 690 appears quite quick. More notable is that the Radeon R9 295X2 is faster than the R9 290Xes in CrossFire, which in turn outperform the two high-end combos from Nvidia.

The frame time variance at 3840x2160 is much higher than it was at 2560x1440, which we’d expect given significantly lower frame rates. However, all the way down to the Titans in SLI, even worst-case variance isn’t all that bad.

The GeForce GTX 690 registers significantly higher variance at Ultra HD. AMD’s Radeon HD 7990 runs into bad worst-case variance, while its average and 75th-percentile numbers are much more reasonable.

6. Results: Assassin’s Creed IV: Black Flag

2560x1440

As in Arma 3, four of our six graphics solutions are bottlenecked in Assassin's Creed IV using the game's most demanding quality features. AMD achieves slightly lower minimum frame rates than Nvidia, though the R9 290Xes in CrossFire and the 295X2 never dip below 50 FPS.

See how the top four configurations maintain a fairly narrow performance band? Those solutions appear limited by some aspect of our overclocked platform. The Radeon HD 7990 and GeForce GTX 690 span a broader range dictated by the graphics workload.

AMD’s frame time variance is slightly higher across the board, though even our worst-case figures are still impressively consistent.

There are far more frame time spikes in Assassin’s Creed IV than there were in Arma 3, again, predominantly from AMD’s cards.

3840x2160

As with Arma 3, the apparent platform bottleneck in Assassin’s Creed IV isn’t as much of an issue at 3840x2160. Instead, these cards demonstrate low averages and less-than-ideal minimum frame rates using the game’s most taxing details.

The GeForce GTX 780 Tis don't appear to be limited by their 3 GB of GDDR5. Instead, the SLI array takes a first-place finish ahead of AMD’s Radeon R9 295X2 and Nvidia’s GeForce GTX Titans.

Our Assassin's Creed IV benchmark requires a ton of manual intervention, so the frame rate over time chart isn't as consistent as we'd like from one run to the next. The four fastest solutions clump up in a less-than-10 FPS range, while the Radeon HD 7990 and GeForce GTX 690 drag along in unplayable territory.

Nvidia’s GeForce GTX 780 Tis and Titans in SLI offer very low frame time variance. The Radeons are also well-behaved in this gauge of smoothness.

It’s the GeForce GTX 690 that encounters the most serious issues. That card simply isn’t a player at this resolution, though. So, while it’s good to illustrate the limitations of 2 GB per GPU at 3840x2160, I’ll refrain from mentioning the board’s performance every time we test a game at 4K.

7. Results: Battlefield 4

2560x1440

Thanks to a slightly higher clock rate, the Radeon R9 295X2 inches past a couple of R9 290X cards in CrossFire. However, a recent driver update from Nvidia gives two GeForce GTX 780 Tis the upper hand in Battlefield 4…at least at this resolution. Two GTX Titans hang in there as well, even if lower core frequencies and fewer shaders force the $1000 boards in behind a pair of AMD's single-GPU flagships.

In comparison, the once-mighty Radeon HD 7990 and GeForce GTX 690 are humbled. At least they’re still plenty-fast at Battlefield 4’s most taxing detail preset.

The highs and lows are best-seen by charting out frame rate over time.

Both AMD and Nvidia do a great job of pacing frames out consistently. Our 95th percentile numbers—a near-worst-case—remain under 3 ms, and only Nvidia’s GeForce GTX 690 approaches that figure.

Aside from a few major spikes from Nvidia’s GeForce GTX 690, measured frame times in Battlefield 4 are low.

3840x2160

Battlefield 4 uses quite a bit of graphics memory. So, it’s not surprising to see the dual-Hawaii-based configurations doing really well, while two GeForce GTX 780 Ti cards (each with 3 GB on-board) experience lower minimum frame rates at 3840x2160. Titans have 6 GB each and manage more playable minimums. But because they come equipped with fewer shaders and lower clock rates, average performance drops to fourth place.

Once, and only briefly, the Radeon R9 290Xes and 295X2 fall under 40 FPS. Otherwise, they’re perfectly playable.

The GeForce GTX 780 Tis are almost as fast on paper. However, you can see more punctuated dips in the frame rate over time chart. In fact, as you play through the Tashgar level used for this test, you’ll see the characters pop in and out of view. A log of memory use through the run shows 3 GB being exceeded easily, which is why I’d hold off on recommending GeForce GTX 780 Tis for 4K.

Overall, frame time variance is reasonable, though the 690 throws off our bar and line charts. Spikes from the 780 Tis correspond to dips seen in the frame rate over time graph.

8. Results: Grid 2

2560x1440

We know Grid 2 to be fairly processor- and memory bandwidth-limited. Fortunately, our X79-based box with its overclocked Core i7 and quad-channel memory controller running at 2133 MT/s should alleviate that bottleneck.

Thanks to optimizations in Nvidia’s newest driver to hammer out overhead, the GeForce GTX 780 Tis score a second victory. Two Radeon R9 290Xes in CrossFire and the R9 295X2 aren’t far behind, though.

As we’ve seen in a few other games already, the top four solutions still appear to be platform-constrained, while the previous-gen dual-GPU boards follow behind.

The slowest contender, Nvidia’s GeForce GTX 690, never dips below 80 FPS. It goes without saying that every option on the board is more than ample for playing Grid 2 at 2560x1440 using the game’s most demanding detail options.

Ultra-low frame time variance results suggest that high frame rates are complemented by smooth gameplay in Grid 2.

All GPU combinations exhibit spikes in frame delivery. However, they aren't so severe as to negatively affect the experience.

3840x2160

The Radeon R9 295X2 does really well in Grid 2, slightly outpacing two 290Xes in CrossFire and more definitively besting some of Nvidia's fastest cards.

Playable performance isn’t a problem in this typically platform-bound title.

Frame time variance is manageable too, except for the Radeon HD 7990 and GeForce GTX 690.

9. Results: Metro: Last Light

2560x1440

Two potent Hawaii GPUs allow the Radeon R9 295X2 to score a win in Metro: Last Light at 2560x1440. The R9 290Xes in CrossFire are right behind, followed by two Nvidia configurations in SLI.

As we’d expect, the Radeon HD 7990 and GeForce GTX 690 bring up the rear, though they continue to facilitate playable frame rates.

The frame rate over time chart shows the two slowest boards falling under 40 FPS. Otherwise, their performance is commendable.

Low frame time variance concurs with results from other games: both AMD and Nvidia show consistency in the rate at which frames are delivered on-screen.

It looks like most of the spikes in our frame time sample come from AMD’s Radeon HD 7990 and Nvidia’s GeForce GTX 690. But they’re not bad enough to cause problems with the game’s smoothness.

3840x2160

Metro: Last Light is notoriously graphics-bound, and it successfully keeps four of the most powerful GPU arrays under an average of 50 FPS at 3840x2160 using the Very High preset. Fortunately, those same configurations also maintain minimum frame rates above 30, yielding a marginal, but still playable experience.

Charting frame rate over time shows just how close two GeForce GTX 780 Tis and a pair of Hawaii GPUs come to mirroring each other’s performance.

The biggest frame time variance issues come from Nvidia’s GeForce GTX 690. Otherwise, the results we measure are indicative of AMD’s frame pacing feature working to prevent the dropped and runt frame issues we started quantifying almost a year ago.

10. Results: Thief

2560x1440

AMD’s Radeon R9 290Xes in CrossFire and R9 295X2 perform similarly in Thief. But solid frame rates don’t stop Nvidia’s GeForce GTX 780 Tis and Titans in SLI from scoring first- and second-place finishes.

Meanwhile, the Radeon HD 7990 and GeForce GTX 690 trail. We’ve seen Thief eat up a ton of graphics memory, so it’s possible that the 690’s 2 GB of GDDR5 per GPU is responsible for the low minimum frame rate figure.

Aside from one hitch encountered by Nvidia’s GeForce GTX 690, all of these results are both smooth and playable.

Every tested configuration exhibits low frame time variance. There is one result more notable than the others, though. Typically, we'd expect the average variance to be lower than the 75th-percentile figure, which in turn should be lower than the 95th percentile. But the GeForce GTX 690's average is actually higher than its 75th-percentile figure.

A look at the frame time chart shows why. In essence, the 690 is plagued by a handful of severe snags. So, while it’s typically a strong performer, those spikes drive up the average and 95th percentile results.

3840x2160

AMD’s Radeon R9 295X2 continues its chart-topping march in Thief, averaging 45 FPS, but more impressively keeping minimum performance above 40 FPS.

As you can see from the GeForce GTX 690’s showing, graphics memory is an important consideration at this title’s Very High preset. We can’t even blame multi- or super-sampled anti-aliasing; the quality setting employs FXAA. Three gigabytes per GPU might not even be enough. Although Nvidia’s GeForce GTX 780 Ti takes second place in the averages, it dips back to a minimum of 31 FPS. Two Titans hold up a little better.

Fortunately, it looks like the 780 Tis only get hit hard in a couple of places. Otherwise, they hang right there with AMD’s cards.

All of the frame time variance figures are acceptable, aside from the GeForce GTX 690.

Huge variance numbers from a couple of GK104 GPUs throw off the scale of this chart.

11. Results: Tomb Raider

2560x1440

AMD’s new dual-GPU juggernaut takes a first-place finish in Tomb Raider, followed not far behind by GeForce GTX 780 Tis in SLI, Radeon R9 290Xes in CrossFire, and twin Titans.

There’s one particularly demanding sequence in our benchmark that pushes every solution hard. It’s responsible for knocking the Radeon HD 7990 and GeForce GTX 690 down under 40 FPS.

However, Nvidia’s cards are at a notable disadvantage in that they don’t handle the Ultimate preset’s TressFX feature properly. Rather than flowing naturally, Lara’s hair shimmers and pops.

Although this is an AMD Gaming Evolved title, the Radeons demonstrate higher frame time variance than Nvidia’s cards. The results aren’t bad, but you can clearly see in the frame time chart there are a few spikes and overall-higher averages.

3840x2160

More so than Thief, Tomb Raider gobbles up memory. The GeForce GTX 690 crashes before our benchmark finishes, in fact. A pair of Hawaii GPUs, each with 4 GB of fast GDDR5, handle this game best. Even if the Nvidia boards were able to deliver comparable frame rates, they’re still not able to render the TressFX effect correctly.

The Radeon R9 295X2 and two 290Xes in CrossFire perform almost identically. Frame rate over time also shows us how hard Nvidia’s cards get hit when the compute-intensive TressFX technology is featured prominently in the benchmark scene (those big dips correspond to up-close views of Lara’s hair).

The longest bars come from AMD’s Radeon HD 7990, which isn’t playable at 4K using Tomb Raider’s Ultimate quality preset anyway.

12. Power Consumption: Introducing Our Equipment

Three Generations Of AMD Dual-GPU Cards, Compared

Naturally, we're going to compare the power consumption of AMD's Radeon R9 295X2 to other CrossFire- and SLI-based setups. But first, we want to use our high-end equipment for a little experiment, comparing the company's newest dual-GPU card to its predecessors. The point is to figure out whether AMD is moving in the right direction with its flagship cards.

Meet Our Test Equipment

Our power consumption test setup was planned in cooperation with HAMEG (Rohde & Schwarz) to yield accurate measurements at small sampling intervals, and we've improved the gear continuously over the past few months.

AMD’s PowerTune and Nvidia’s GPU Boost technologies introduce significant changes to loading, requiring professional measurement and testing technology if you want accurate results. With this in mind, we're complementing our regular numbers with a series of benchmarks using an extraordinarily short range of 100 μs, with a 1 μs sampling rate.

We get this accuracy from a 500 MHz digital storage oscilloscope (HAMEG HMO 3054), while measuring currents and voltages with the convenience of a remote control.

The measurements are captured by three high-resolution current probes (HAMEG HZO50), not only through a riser card for the 3.3 and 12 V rails (which was custom-built to fit our needs, supports PCIe 3.0, and offers short signal paths), but also directly from specially-modified auxiliary power cables.

Voltages are measured from a power supply with a single +12 V rail. We're using a two-millisecond resolution for the standard readings, which is granular enough to reflect changes from PowerTune and GPU Boost. Because this yields so much raw data, though, we keep the range limited to two minutes per chart.
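Conceptually, the charted figures are nothing more exotic than the product of sampled current and voltage on each input, averaged over the capture window. A stripped-down sketch of that post-processing follows (the actual scope and probe handling is, of course, far more involved, and the sample values here are purely illustrative):

```python
# Stripped-down post-processing of the scope captures: instantaneous power is
# sampled current times sampled voltage; the reported figure is the average
# over the capture window for each input (PCIe slot rails, auxiliary cables).
def average_power(samples):
    """samples: list of (current_amps, voltage_volts) pairs at a fixed interval."""
    return sum(i * v for i, v in samples) / len(samples)

slot_12v_samples = [(2.1, 12.05), (2.3, 12.02), (2.2, 12.04)]
print(f"{average_power(slot_12v_samples):.1f} W")  # ~26.5 W from the slot's 12 V rail
```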

Methodology
Contact-free DC measurement at PCIe slot (using a riser card)
Contact-free DC measurement at external auxiliary power supply cable
Voltage measurement at power supply
Test Equipment
1 x HAMEG HMO 3054, 500 MHz digital multi-channel oscilloscope
3 x HAMEG HZO50 current probes (1 mA - 30 A, 100 kHz, DC)
4 x HAMEG HZ355 (10:1 probes, 500 MHz)
1 x HAMEG HMC 8012 digital multimeter with storage function
Power Supply
Corsair AX860i with modified outputs (taps)
13. Power Consumption: Idle

Desktop Mode Without Load

As with the Radeon HD 7990, AMD's Radeon R9 295X2 features AMD’s ZeroCore Power capability, which is active in two different states. It's important to make a distinction between a powered-on display with Windows idle and a powered-down monitor triggered by Windows' profiles.

When ZeroCore Power does its job, consumption drops from 28.5 to 13.5 W at idle, as one Hawaii processor is turned off. Then, the display switches off as well, and power use drops to 6-7 W. At that point, even the card's fan stops spinning.

In case you're missing the 3.3 V rail's blue line (as we were at first), all of the Radeon R9 295X2's components pull from the +12 V rail now.

Comparing To The Radeon HD 6990 and 7990

If you take a close look at the chart below, you'll see that the +3.3 V rail was used by the Radeon HD 7990, if only just barely. Also, nearly all of the power is provided by the card's auxiliary connectors, rather than the motherboard's slot. 

The Radeon R9 295X2's behavior is more similar to cards like the GeForce GTX 780 and 780 Ti, while AMD's older models are mainly driven by the separate power leads.

14. Power Consumption: Gaming

It took quite a bit of experimentation to find a realistic and repeatable gaming workload that'd allow us to generate meaningful power readings. Fortunately, we found what we were looking for while running the benchmarks for our 2014 VGA Charts: Unigine Heaven. Once we switched over to a platform that wasn't processor-limited, we started seeing consistent GPU loads in the 95% range, giving us the power levels expected from most of our benchmarked games. These tests happen at 1920x1080 in full-screen mode, using the Ultra preset, normal tessellation, and 2x AA.

AMD's Radeon R9 295X2 draws less power than two Radeon R9 290X cards together, amazingly enough. We checked with AMD, which confirmed for us that the dual-GPU board's chips are specially-binned. Presumably, that means the processors are lower-leakage parts, though it's also possible that more effective cooling helps bring down consumption compared to the hot-running reference design. After all, we've seen the 290X's power use drop 30 W just from a better heat sink and fan.

The two-generation-old Radeon HD 6990 pulls way more power from the PCI Express slot, with peak values that exceed the PCI-SIG's 75 W specification.

AMD’s Radeon HD 7990 turns out to be more frugal than two single-GPU graphics cards, drawing less power from the motherboard’s PCI Express slot compared to the Radeon HD 6990. But it also tends to throttle back after reaching a certain thermal limit.

Zooming In For More Detail

A sampling rate of 1 μs is as precise as we're able to get. It's impossible to start each card's test at exactly the same time when we're zoomed in this far. Still, the charts are pretty definitive: the Radeon HD 7990 throttles slightly after hitting its temperature ceiling, demonstrating the most inconsistent curve progression, followed by the extremely hot Radeon HD 6990. Meanwhile, the Radeon R9 295X2 runs at a comparatively low temperature, giving us the most stable chart.

15. Power Consumption: General-Purpose Computing

GPGPU Endurance Test

In order to create a consistent load that mimics something you might do in the real world (and doesn't get categorized as a "power virus" by the driver), I fired up one instance of GUIMiner per GPU, creating a 100% load.

The Radeon R9 295X2 draws a little more power here than the gaming test on the previous page, but amazingly doesn't exceed 450 W. PowerTune steps in, as you can see in the chart below, and the dual-GPU card's performance drops compared to the single-processor boards. You could try pushing the Radeon R9 295X2 above 500 W by increasing its power target, but it really wouldn't make any sense to do so.

The Radeon R9 295X2 draws a total of 420 W from the auxiliary power connectors, and only a few watts more from the motherboard's PCI Express slot.

In comparison, the two-generation-old Radeon HD 6990 pulls about 360 W from the auxiliary connectors, but averages a more substantial 62 W from the motherboard's slot, with peaks up to 74 W.

Finally, the Radeon HD 7990 is forced to hit the brakes due to a lower peak clock rate under heavy load.

16. Power Consumption: Drawing Some Conclusions

Who Needs +3.3 V? Not The Radeon R9 295X2

Our first interesting discovery was the complete absence of a load on the +3.3 V rail. Although we did observe a minimum draw of 0.1 W, that's within the margin of error. We can say fairly confidently that all of the R9 295X2's components run on +12 V now.

PCI Express Slot Measurements

The second and third discoveries concern power consumption from the motherboard's PCI Express slot. Even with the GPUs pegged at 100%, the Radeon R9 295X2 doesn't pull more than 28 W from its host platform. At idle, however, the card behaves differently from the non-Hawaii-based boards: in that state, it draws most of the power it needs from the PCI Express slot and only 5 W from the auxiliary connectors.

In the past, we've noticed graphics cards drawing less and less of their power from the motherboard slot. AMD's Radeon R9 295X2 exemplifies this by maxing out at that very conservative 28 W figure. That also means most maximum power consumption calculations based on an assumed 75 W draw from the motherboard are wrong.

CrossFire And SLI With Single-GPU Cards

Because we don't have six current probes at the lab or the ability to store the immense amount of data generated by two cards at the same time, we recorded separate values for each warmed-up graphics card, one after the other.

Power Consumption At Idle

Although a 28.5 W measurement from the Radeon R9 295X2 is far from ideal, it's less power than you'd need for two Radeon R9 290Xes in CrossFire. And that's before ZeroCore Power kicks in. When ZeroCore is active, you only need 13.5 W to keep AMD's latest dual-GPU flagship running.

Gaming

Presumably due to its effective thermal solution helping reduce leakage current, AMD's Radeon R9 295X2 beats two 290Xes in CrossFire by 40 W. Then again, our experiments with a single Radeon R9 290X suggest you can shave off 30 W and increase performance by using a more powerful cooler. Multiply that out for an array of cards in CrossFire. 

According to our measurements, one Radeon R9 295X2 uses roughly as much power as two GeForce GTX 780 Tis in SLI.

Maximum Load: Compute

The Radeon R9 295X2 finishes well ahead of two GeForce GTX 780 Tis in SLI, though this is attributable to PowerTune intervening to drop the AMD card's peak performance (even though temperatures remained below 66 °C).

If you wanted to really hammer the card hard with a power virus and increase the card's power target, you could certainly push the 295X2 above the 500 W average AMD cites. Compare that to a pair of Radeon R9 290Xes in CrossFire, which approach 600 W before slamming into their speed limiter.

17. Temperatures And Noise

Thermals

The Asetek closed-loop liquid cooler does a fairly good job, though it's only equipped with a 120 mm radiator. A peak GPU temperature of 65 °C is admirable, particularly considering that we couldn't push a single Radeon R9 290X below 50 °C in our aftermarket cooling project.

Acoustics

We're using a calibrated studio-quality microphone for all sound level measurements. It's positioned at a 90° orientation 50 cm away from each graphics card. Results are collected after each card hits its peak operating temperature in our gaming benchmark.

Although 45 dB(A) is clearly audible, the Radeon R9 295X2 is significantly quieter than any competing setup, while the Radeon HD 6990 and R9 290X in CrossFire blow you right out of the room. AMD clearly put effort into improving the experience it conveys, and we appreciate that.

Sound Level Videos

Lastly, let's compare three generations of dual-GPU graphics cards from AMD in videos.

Radeon R9 295X2

R9 295X2 - Gaming Loop - 100% Load

Radeon HD 7990

HD 7990 - Gaming Loop - 100% Load

Radeon HD 6990

HD 6990 - Gaming Loop - 100% Load

18. Radeon R9 295X2: AMD Did A Lot Of Things Right

As one of the most vocal critics of AMD’s past board designs, I’m satisfied with the choices it made in enabling two Hawaii GPUs on one graphics card.

Radeon HD 6990, Radeon HD 7970, Radeon HD 7990, Radeon R9 290—all of those products were remarkable in their own rights, boasting big specifications that should have rained fire down on the competition. But in every case, they were noisy, or hot, or unfriendly to the components around them. AMD simply wasn’t paying enough attention to design. Meanwhile, Nvidia followed up its plastic-enveloped GeForce GTX 680 with a series of metal-kissed, whisper-quiet reference-cooled boards that delivered performance and elegance.

Really, AMD’s Radeon R9 295X2 is the company’s first card—ever, I’d say—to emphasize the experience of owning high-end hardware. It takes two massive GPUs, runs them at a slightly higher clock rate than their single-processor implementation, cools them more effectively than the reference Radeon R9 290X, and makes less noise in the process. One Radeon R9 295X2 sips power compared to two Radeon R9 290Xes, based on measurements from our very expensive and very precise measurement equipment.

It's not as polished as some of Nvidia's cards. The closed-loop cooler can be unwieldy, and in addition to the rubber hoses, there are exposed fan leads you'll want to tuck away. But AMD is using a semi-custom Asetek cooler bolted onto its PCB, rather than an in-house thermal solution designed around the card from the start.

Still, I’ll take it. The metal shroud, back plate, and illuminated logo are all premium touches that transcend this company’s past efforts. Because the 295X2 employs a radiator and 120 mm fan designed to exhaust waste heat from your chassis, you aren’t forced to read my complaints about axial fans. Moreover, if you want to run two of them in a quad-GPU arrangement, power supply capacity and chassis selection should be your only two concerns.

Oh, and budget, of course. AMD tells us it plans to ask $1500 for the Radeon R9 295X2 when the card shows up for sale later in April. Just one should outperform Nvidia’s GeForce GTX Titan Z. But you’ll be able to buy two for the same price as the dual-GK110-based battleship. Clearly, that comparison leaves one super-dreadnought smoldering.

We’re smart enthusiasts, though. What about more economical card combinations?

Let’s start with a look at AMD’s line-up. One Radeon R9 295X2 is almost exactly as fast as two 290Xes. The cheapest models are selling for somewhere between $570 and $600. For around $1200, then, you can have the same two Hawaii GPUs driving 4K resolutions in your ultra-high-end gaming PC. There’s just one problem: all of the partner boards worth buying employ axial fans that fill your case full of Radeon jetwash. Two 290Xes set up quite the cooling conundrum, particularly if you’re trying to overclock your CPU as well. Power users married to the idea of AMD graphics are better off paying the $300 premium for closed-loop liquid cooling, a dual-slot board, and a little extra prestige.

Drawing parallels to Nvidia is harder. The GeForce GTX 780 Ti is a great card, but it sells for $700. It's good at 2560x1440. Logging memory use suggests that it comes awfully close to running out of steam at 4K, though. We're expecting 6 GB models for an extra $50 right around the time AMD says its new Radeon should start shipping, putting us at the same $1500 for a pair. Two Titan Blacks could be an alternative, though at $2200 combined, AMD's card makes more sense.

In the end, Radeon R9 295X2 represents an important moment for AMD. Not only is this one product a compelling piece of hardware at a price that can be justified by flush gamers, but the company clearly listened to the feedback we hurled its way and built a board we’d be proud to own. AMD isn’t completely out of the woods, though. We have an estimated price and an estimated date for availability. The past several launches were peppered by misses on both fronts, and we’ve learned our lesson about recommending gear before you can buy it. We’re watching out for delivery on those promises, AMD.