Radeon R9 290X Review: AMD's Back In Ultra-High-End Gaming
By Igor Wallossek
1. Hawaii: A 6.2 Billion Transistor GPU For Gaming

Today, the fastest single-GPU graphics card is Nvidia’s GeForce GTX Titan (Benchmarking GeForce GTX Titan 6 GB: Fast, Quiet, Consistent). It sells for no less than $1000 and comes equipped with 6 GB of fast GDDR5 memory. By all accounts, it’s well-suited for gaming at 2560x1440 and serves up playable performance at 5760x1080 in some games, but it doesn’t quite move fast enough for 3840x2160. In fact, in Gaming At 3840x2160: Is Your PC Ready For A 4K Display?, I came to the conclusion that it’d take a couple of GeForce GTX 780s to serve up satisfactory frame rates on an Ultra HD screen.

And now AMD is billing its new Radeon R9 290X as a ready-for-4K solution. Them’s fighting words, particularly with Ultra HD targeted as the next frontier in PC gaming. The technology is still very expensive, and it’s far from refined. But I challenge you to enjoy your favorite title on a 32”, 8.3-million-pixel screen, and then hand it back willingly. Expect 4K to be the battleground on which AMD and Nvidia drop their high-end GPUs moving forward.

Last week, while Nvidia put on an event in Montreal to announce a handful of technologies and initiatives, including an upcoming GeForce GTX 780 Ti, AMD was taking the wraps off of a few benchmark results that indeed showed the 290X faster than GeForce GTX 780 in BioShock Infinite and Tomb Raider at 3840x2160.

What is at the heart of this new board, which seemed to effortlessly speed past Nvidia’s $650 solution? The Hawaii GPU—a much more complex piece of silicon than Tahiti, based on the same Graphics Core Next architecture. Think of it as a little bit of old and a little bit of new.

Is AMD Back To The "Big GPU" Approach?

All the way back in 2007, AMD altered its GPU strategy, shifting away from large monolithic processors in favor of more scalable designs. It’d build for a fairly mainstream price point/power target, and either scale the design down to create less expensive parts or stick two GPUs next to each other in an ultra-high-end configuration.

Over time, AMD’s engineers trended toward more complex chips, and the ~100 W RV670 gave way to the 150 W RV770, which was succeeded by the Radeon HD 5870’s roughly 200 W Cypress GPU, the 6970’s 250 W Cayman, and the similarly power-hungry Tahiti. Each step of the way, though, AMD managed to get two of its flagship processors onto one PCB, yielding that crazy-fast halo board. Of course, the most recent example is AMD’s Radeon HD 7990, rated for a scorching 375 W.

With Hawaii, AMD appears to eschew its sweet-spot philosophy with a 6.2-billion transistor GPU that’s 44% more complex than Tahiti, and yet manufactured using the same 28 nm process. A die size of 438 mm² is still quite a bit smaller than Nvidia’s GK110. However, it’s larger than any graphics processor we’ve seen from the company (including R600 at 420 mm²; Tahiti occupies just 352 mm²).


| | Radeon R9 290X | Radeon R9 280X | GeForce GTX Titan | GeForce GTX 780 |
| --- | --- | --- | --- | --- |
| Process | 28 nm | 28 nm | 28 nm | 28 nm |
| Transistors | 6.2 Billion | 4.3 Billion | 7.1 Billion | 7.1 Billion |
| GPU Clock | 1 GHz | 1 GHz | 836 MHz | 863 MHz |
| Shaders | 2816 | 2048 | 2688 | 2304 |
| FP32 Performance | 5.6 TFLOPS | 4.1 TFLOPS | 4.5 TFLOPS | 4.0 TFLOPS |
| Texture Units | 176 | 128 | 224 | 192 |
| Texture Fillrate | 176 GT/s | 128 GT/s | 188 GT/s | 166 GT/s |
| ROPs | 64 | 32 | 48 | 48 |
| Pixel Fillrate | 64 GP/s | 32 GP/s | 40 GP/s | 41 GP/s |
| Memory Bus | 512-bit | 384-bit | 384-bit | 384-bit |
| Memory | 4 GB GDDR5 | 3 GB GDDR5 | 6 GB GDDR5 | 3 GB GDDR5 |
| Memory Data Rate | 5 Gb/s | 6 Gb/s | 6 Gb/s | 6 Gb/s |
| Memory Bandwidth | 320 GB/s | 288 GB/s | 288 GB/s | 288 GB/s |
| Board Power | 250 W (Claimed) | 250 W | 250 W | 250 W |

Again, the underlying GCN architecture on which Hawaii is based remains similar. The Compute Unit building block looks exactly the same, with 64 IEEE 754-2008-compliant shaders split between four vector units and 16 texture fetch load/store units.

There are a few tweaks to the design though, including device flat addressing to support standard calling conventions, precision improvements to the native LOG and EXP operations, and optimizations to the Masked Quad Sum of Absolute Difference (MQSAD) function, which speeds up algorithms for motion estimation. Incidentally, all of those features debuted alongside the Bonaire GPU we reviewed back in March (AMD Radeon HD 7790 Review: Graphics Core Next At $150); AMD just wasn’t discussing them yet. And with the introduction of DirectX 11.2, both Bonaire and Hawaii add programmable LOD clamping and the ability to tell a shader if a surface is resident—both of which are tier-two features associated with tiled resources.

But the arrangement of AMD’s CUs is different. Whereas Tahiti boasted up to 32 Compute Units, totaling 2048 shaders and 128 texture units, Hawaii wields 44 CUs organized into four of what AMD is calling Shader Engines. The math adds up to 2816 aggregate shaders and 176 texture units. Operating at up to 1 GHz (this becomes an important distinction later), that’s 5.63 TFLOPS of floating-point performance. We've also come to learn that AMD changed the double-precision rate from 1/4 to 1/8 on the R9 290X, yielding a maximum of 0.7 TFLOPS. The FirePro version of this configuration will support full-speed (1/2 rate) DP compute, giving professional users an incentive to spring for Hawaii's professional implementation.
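
For a quick sanity check of those figures, here's the arithmetic (ours, not an AMD-published formula; it assumes the standard convention of one fused multiply-add, or two FLOPs, per shader per clock):

```python
# Back-of-envelope FLOPS math for Hawaii, assuming 2 FLOPs (one FMA)
# per shader per clock -- our arithmetic, not AMD's.
shaders = 44 * 64              # 44 CUs x 64 shaders each = 2816
clock_ghz = 1.0                # "up to" 1 GHz

fp32_tflops = shaders * 2 * clock_ghz / 1000   # 5.632 TFLOPS
fp64_tflops = fp32_tflops / 8                  # 1/8 DP rate: ~0.704 TFLOPS

print(f"FP32: {fp32_tflops:.2f} TFLOPS, FP64: {fp64_tflops:.2f} TFLOPS")
```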

Hawaii also employs eight revamped Asynchronous Compute Engines, responsible for scheduling real-time and background tasks to the CUs. Each ACE manages up to eight queues, totaling 64, and has access to L2 cache and shared memory. In contrast, Tahiti had two ACEs. The Kabini and Temash APUs we wrote about earlier this year come armed with four. Why is Hawaii so dramatically different? Some evidence exists to suggest that Hawaii’s asynchronous compute approach is heavily influenced by the PlayStation 4’s design, though AMD won't confirm this itself. Apparently, Sony’s engineers are looking forward to lots of compute-heavy effects in next-gen games, and dedicating more resources to arbitrating between compute and graphics allows for efficiencies that weren’t possible before.

Tahiti’s front-end fed vertex data to the shaders through a pair of geometry processors. Through its quad Shader Engine layout, Hawaii doubles that number, facilitating four primitives per clock cycle instead of two. There’s also more interstage storage between the front- and back-end to hide latencies and realize as much of that peak primitive throughput as possible.

In addition to a dedicated geometry engine (and 11 CUs), Shader Engines also have their own rasterizer and four render back-ends capable of 16 pixels per clock. That’s 64 pixels per clock across the GPU—twice what Tahiti could do. Hawaii enables up to 256 depth and stencil operations per cycle, again doubling Tahiti’s 128. On a graphics card designed for high resolutions, a big pixel fill rate comes in handy, and in many cases, AMD claims, this shifts the chip’s performance bottleneck from fill to memory bandwidth.
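
The fill rate doubling falls straight out of that layout; here's the arithmetic (ours again, with the R9 280X's Tahiti figures included for comparison):

```python
# Pixel fill rate from the Shader Engine layout described above
# (our arithmetic; Tahiti / R9 280X shown for comparison).
shader_engines = 4
pixels_per_engine = 16          # four render back-ends, 4 pixels/clock each
clock_ghz = 1.0

hawaii_gps = shader_engines * pixels_per_engine * clock_ghz   # 64 GP/s
tahiti_gps = 32 * clock_ghz                                   # 32 GP/s on R9 280X

print(f"Hawaii: {hawaii_gps:.0f} GP/s vs. Tahiti: {tahiti_gps:.0f} GP/s")
```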

The shared L2 read/write cache grows from 768 KB in Tahiti to 1 MB, divided into sixteen 64 KB partitions. This 33% increase yields a corresponding 33% increase in bandwidth between the L1 and L2 structures, topping out at 1 TB/s.

It makes sense, then, that increasing geometry throughput, adding 768 shaders, and doubling the back-end’s peak pixel fill would put additional demands on Hawaii’s memory subsystem. AMD addresses this with a redesigned controller. The new GPU features a 512-bit aggregate interface that the company says occupies about 20% less area than Tahiti’s 384-bit design and enables 50% more bandwidth per mm². How is this possible? It actually costs die space to support very fast data rates. So, hitting 6 Gb/s at higher voltage made Tahiti less efficient than Hawaii’s bus, which targets lower frequencies at lower voltage, and can consequently be smaller. Operating at 5 Gb/s in the case of R9 290X, the 512-bit bus pushes up to 320 GB/s using 4 GB of GDDR5. In comparison, Tahiti maxed out at 288 GB/s.
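
The bandwidth math works out like this (our arithmetic, using the standard bits-to-bytes conversion):

```python
# Peak GDDR5 bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8.
def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits * rate_gbps / 8

print(bandwidth_gbs(512, 5.0))   # Hawaii / R9 290X: 320.0 GB/s
print(bandwidth_gbs(384, 6.0))   # Tahiti / R9 280X: 288.0 GB/s
```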

2. CrossFire: Farewell Bridge Connector; Hello DMA

Before now, adding a second, third, or fourth Radeon card in CrossFire was a matter of picking a compatible motherboard (with the right PCI Express slot spacing), dropping in the additional hardware, and linking the cards up with a bridge connector draped over the top. That connector shuttled the secondary card’s frames to the first, where an on-die compositing engine put the stream together for output.

This approach worked well in a world of 2560x1600 and less. However, it became problematic above four-megapixel resolutions, where information had to be moved across PCI Express instead, negatively affecting practical frame rates at 5760x1080 and Ultra HD.

So, AMD built a DMA engine into its compositing block, facilitating direct communication between GPUs over PCI Express and enough throughput for those triple-screen and 4K configurations that performed so poorly before. This is not inconsequential. Moving display data is a real-time operation, necessitating bandwidth provisioning, buffering, and prioritization.

The big benefit is that there’s no longer any need for an external bridge. Whereas the interplay between CrossFire connector and PCIe bus continues to stymie the rest of AMD’s cards (except for the Bonaire-based R7 260X—that GPU has the xDMA feature, too), a pair of Radeon R9 290Xes in CrossFire support the company’s frame pacing technology at 3840x2160 right out of the gate.

Windows display properties next to CCC, showing Frame Pacing enabled at 3840x2160

You don’t even need PCI Express 3.0 connectivity—as in, the xDMA engine doesn’t rely on any specific feature of the third-gen standard. AMD says this feature will work on platforms limited to older versions of PCIe, too. With that said, if you’re shopping for two R9 290Xes and you’re still using a motherboard with PCI Express 2.0, it might be time to upgrade (or think about a combination of hardware that won’t bottleneck performance).

AMD’s timing is ideal. When I wrote Gaming At 3840x2160: Is Your PC Ready For A 4K Display?, there wasn’t even a point to testing Radeon cards. Without frame pacing, we fully expected to see one-half of dual-GPU configuration’s frames getting dropped. But now the company has a new flagship seemingly built for high-resolution gaming. And, given that we already considered two GeForce GTX 780s the sweet spot for smooth frame rates on a 4K screen, it seems probable that you’d want at least two 290Xes, too.

3. TrueAudio: Dedicated Resources For Sound Processing

We covered TrueAudio in AMD Radeon R9 280X, R9 270X, And R7 260X: Old GPUs, New Names. What follows comes from that piece, with one important correction: AMD let us know that its R7 260X features three HiFi2 EP Audio DSP cores, rather than two. The higher-end R9 290 and 290X also sport three cores.

If you followed along with AMD’s tech day webcast, then you sat through a lot of TrueAudio discussion. In fact, given the amount of time dedicated to TrueAudio, the feature seemed like it’d be the day’s emphasis.

At the event, we were hearing the partner demos across eight channels, and the positional audio was certainly discernible, if not overwhelmingly busy (on purpose, no doubt). But we all know that 7.1- and even 5.1-channel sound setups outside of a home theater are very uncommon. Two- and 2.1-channel configurations, including headsets, are far more common. Unfortunately, it didn’t sound like anyone tuned in over Livestream was getting the same effect in stereo.

For anyone who was around in the late ‘90s to hear Aureal’s and Sensaura’s technologies, before both were acquired by Creative, you know that the head-related transfer functions used to create effective positional audio over two channels are not new. The point of TrueAudio is to facilitate more complex sound effects (those HRTFs aren’t computationally free) without burdening the host processor. Today, AMD says that audio gets as much as 10% of a game’s CPU utilization budget, limiting what developers can do. But with TrueAudio, AMD wants to guarantee the availability of real-time processing resources specifically for sound, and regardless of the host CPU you have installed.

This is achieved through the Tensilica HiFi2 EP Audio DSP cores mentioned on the previous page. In the R7 260X, there are three cores integrated on the Bonaire GPU. The higher-end R9 290 and 290X will also feature three DSP cores dedicated to TrueAudio. Those DSPs employ Tensilica’s Xtensa ISA with fixed- and floating-point number support, which AMD says is equally useful for high-end gaming and embedded applications. Because the DSP is programmable by nature, you can really feed anything you want into it, so long as there’s a decoder available. To that end, the professional audio software vendors are purportedly showing an interest, eager to see what dedicated hardware can do that host-based processing couldn’t.

The real-time nature of audio in a gaming environment means that fast access to compute cycles and memory is imperative, even if the cores themselves aren't particularly powerful. Each one includes 32 KB of instruction and data cache, along with 8 KB of scratch RAM. A fast routing interface connects the DSPs to 384 KB of shared internal memory organized in 8 KB banks. The local resources are fed by a multi-channel DMA engine able to keep the cores busy. And up to 64 MB of frame buffer memory is addressable through a low-latency bus interface shared with the display pipeline.

One of the first questions that came to mind upon hearing about TrueAudio was, “will game developers, already strapped for time and money as they get their titles to market, put resources into sound when there’s so much going on in graphics, physics, and AI?” AMD seems to think that the impact on ISVs will be minimal, though. Because a majority of developers are utilizing middleware for their audio, TrueAudio needs support from those companies first and foremost. Once you get support in Audiokinetic and Firelight’s FMOD, detecting and utilizing TrueAudio becomes much easier. From there, the feature exerts its influence before getting handed off to a codec, and is consequently compatible with any output type.

What about the fact that AMD is only making TrueAudio available across three products, two of which aren’t even available yet? Representatives say that AMD has to start somewhere with TrueAudio, and this is simply the first public airing. I’d add that high-end graphics cards, destined for high-end PCs, also don’t need audio effects acceleration as much as less powerful platforms. But you can guess where this is going: expect the same technology to start showing up in AMD’s APUs and mobile GPUs, which are less powerful and might even realize power benefits from accelerating audio.

4. PowerTune: Balancing Performance And Acoustics

The last time we went into depth on AMD’s PowerTune technology was last year, when the company introduced its Boost feature to Radeon HD 7970 GHz Edition. Back then, we determined that the card’s base clock was stuck at 1 GHz, and overclocking consisted of moving the target on an extra P-state that’d hold as long as you ducked in under a power ceiling. All the way up, though, you’d see fan speed increase. Altering the fan speed through AMD’s OverDrive applet set a constant duty cycle, which probably wasn’t appropriate all of the time.

With its Radeon HD 7790, AMD changed the behavior of PowerTune based on additional input from a second-generation voltage regulator (VR) controller. That same functionality carries over now to R9 290X.

So, now, PowerTune takes input from thermal sensors, creates an estimation of power use in real-time through activity counters, folds in telemetry data from the voltage regulator, and feeds that data into a digital power management arbitrator. That arbitrator is programmed to know the GPU’s power, thermal, and current limits. Within those bounds, it controls voltages, clock rates, and fan speeds, prioritizing maximum performance. If one of the input limits is exceeded, the arbitrator can pull back on voltage and/or frequency.
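
In pseudocode terms, the loop AMD describes looks something like the sketch below. To be clear, this is our own illustrative model of the behavior, not AMD firmware; the names, step sizes, and update policy are invented for the example:

```python
# A deliberately simplified model of PowerTune's arbitration loop as
# described above. All identifiers and step sizes are our own invention.
POWER_LIMIT_W = 250          # programmed board power limit
TEMP_LIMIT_C = 95            # thermal ceiling
FAN_CEILING = 0.40           # Quiet mode duty cycle cap (0.55 in Uber mode)
VID_STEP_MV = 6.25           # granularity of the second-gen serial VID

def arbitrate(power_w, temp_c, fan_duty, clock_mhz, voltage_mv):
    """One control iteration: maximize performance within all limits."""
    if temp_c >= TEMP_LIMIT_C and fan_duty < FAN_CEILING:
        fan_duty = min(FAN_CEILING, fan_duty + 0.01)   # spin the fan up first
    elif temp_c >= TEMP_LIMIT_C or power_w > POWER_LIMIT_W:
        clock_mhz -= 13                      # then shed clock and voltage
        voltage_mv -= VID_STEP_MV
    elif clock_mhz < 1000:
        clock_mhz += 13                      # headroom: ramp back toward 1 GHz
        voltage_mv += VID_STEP_MV
    return fan_duty, clock_mhz, voltage_mv
```

This also explains the Quiet/Uber split we describe below: the two BIOSes effectively program different fan ceilings, and everything else follows from that.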

All of this can happen very quickly thanks to the aforementioned VR controller. Previously, there was a relatively long delay between the request for a higher voltage and a subsequent clock rate step. AMD’s second-gen serial VID is around two orders of magnitude faster (~10 µs rather than 1 ms), it provides confirmation of the switch, and it’s granular down to 6.25 mV steps.

With the ability to define and customize power, fan speed, GPU clock (performance), and target temperature, it becomes possible to very specifically dictate how an R9 290X behaves. Fan speed is one of the most clearly affected variables. Past cards employed a fan table that correlated temperature to RPM, but failed to deliver optimal acoustics—a point I’ve mentioned more than once. Now, however, the controller is both reactive and predictive, varying acceleration based on workload and, ideally, smoothing out changes to fan speed more than before.

Of course, all of this intelligence is still dependent on a well-designed thermal solution able to translate R9 290X’s 1 GHz clock rate and 95-degree temperature ceiling into friendly acoustics, even under load. By default, the card wants to run as close as possible to 1 GHz, and will let Hawaii get to 95 °C in the interest of spinning the fan slowly. You can imagine that the very nastiest loads will cause the fan to ramp up and up and up as it tries to maintain 95 degrees at 1 GHz. That’d be alright for performance, but it’d probably sound pretty bad. So, AMD implements two different BIOSes on R9 290X: one called Quiet Mode, and the other dubbed Uber. The first puts a default limit of 40% duty cycle on the fan, while the second one stops at 55%.

If the card is running in Quiet mode, hits 95 degrees, and cannot control temperature under 40% fan speed, it’ll start pulling back clock rate to avoid 96 degrees. Performance takes a hit in the interest of low noise. Switching to Uber mode simply gives you 15% more duty cycle before clock rates start dialing back.

I debated about where to put this graph. In one sense, it belongs with my CrossFire data because it shows that heat hurts the way two R9 290Xes perform. But I'm putting it here because this is an illustration of PowerTune in action. The technology, for better or worse, is forcing these cards to abide by a 40% fan speed ceiling. So, when the GPU hits 95 degrees and can't spin its fan any faster, you have to watch the core clock melt away. The effect is even more severe with two cards next to each other (even with space between). Hawaii is still a very fast GPU, in spite of this phenomenon, but it's a shame to observe, regardless.

You’re certainly free to manually specify higher maximum fan speeds than the 40% I used, but it’s pretty telling that even AMD’s Uber mode stops at 55%. Again, we’re dealing with a reference cooler that makes a lot of noise once it gets going. I’d personally leave the card set to its Quiet firmware in my own PC.

5. Overclocking: PowerTune Changes Things

Because power, clock rate, fan speed, and temperature are all so interdependent, overclocking is not as simple as setting a frequency. In fact, even running at stock settings isn’t that easy. As we saw on the previous page, you'll start with great clock rates and taper off to something lower over time.

Catalyst Control Center: OverDrive, Updated

Changes to the Catalyst Control Center affect the performance and power consumption of AMD's Radeon R9 290X. As you can see, the OverDrive applet looks a lot different from what you're used to. There's that heat map, to begin, which defines how much power and clock rate (as percentages) to increase or decrease compared to AMD's stock configuration.

“Over Limit” instead of Overclocking

Overclocking in the traditional sense of the word doesn't really make sense with this board. Under normal conditions, and faced with a full load, the R9 290X already isn't able to sustain the "up to 1 GHz" specified by AMD. You saw that on the previous page. So, depending on the game or application you're running, Hawaii needs an extra nudge from its power limiter to run at higher speeds (and then, only if you have the cooling to allow this).

Here's a look at AMD's original settings:

Increasing the power limit by 25 percent pushes consumption up to (and briefly beyond) the 300 W available from one six-pin auxiliary connector, one eight-pin lead, and the PCI Express slot. We didn't want to risk going much higher than that for our review. Our peak power consumption measurements do show that there’s some additional headroom, though at this point it's all conjecture, since we weren't getting any performance increase attributable to a higher power limit or fan speed setting. Too-high temperatures are to blame for this.
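
The 300 W figure comes straight from the connector budget (our arithmetic, based on the standard PCI Express power allocations):

```python
# Power available to the card, per PCI Express conventions.
pcie_slot = 75                   # W from the x16 slot
six_pin = 75                     # W from the 6-pin auxiliary connector
eight_pin = 150                  # W from the 8-pin auxiliary connector
budget_w = pcie_slot + six_pin + eight_pin      # 300 W total

claimed_board_power = 250
print(budget_w, claimed_board_power * 1.25)     # 300 vs. 312.5 W at +25%
```

So a +25% power limit can, on paper, ask for more than the connectors are specified to deliver, which is why we stopped there.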

So, how do we lower them?

Fan RPM and Target Temperature

You want to bring your temperature target down, rather than increasing it, to trigger more aggressive cooling. Here’s the default setting:

As we now know, you can no longer specify a fan speed in RPM. Instead, a maximum “up to” percentage is selected, which represents the highest duty cycle AMD's fan will spin. Again, the Quiet firmware defaults to 40%, while Uber mode stops at 55%.

You cannot override those ceilings by simply moving the slider all the way over to 100% and expecting lower temperatures. Since AMD's 95-degree default limit can typically be maintained somewhere between 40 and 50% duty cycle, you need to change the target temperature, too.

Set up like this, you're more likely to hear the R9 290X's fan hit the gas and take off. Depending on your power limit setting, it's possible to get this card's cooler spinning at 95% under full load. Those settings are the only way we were able to hit our peak power consumption measurements. 

As with traditional overclocking, increasing the R9 290X's power limit alters AMD's specification. Company representatives assure us that Hawaii is meant to be used at 95 degrees, but that's already really hot. We're not sure yet how pushing power even higher will affect the GPU's reliability. Take a cautious approach until the enthusiast community has more time to experiment (you'd better believe Nvidia is going to figure out what it takes to make this thing pop as soon as it can).

6. The Radeon R9 290X

AMD’s reference Radeon R9 290X is exactly as long as Radeon HD 7970 (11”) and similarly two expansion slots wide. Even the 75 mm centrifugal fan looks like it carried straight over.

Also familiar is the little switch on the card’s top edge. Previously, that might have been used to control maximum clock rates, enabling a minor boost for an extra bit of performance. Given those PowerTune changes we just covered, though, that wouldn't make sense. Instead, that's the switch to toggle between Quiet and Uber mode.

The fan shroud is clearly updated, and I’ve already heard feedback from Tom’s Hardware staffers who really like the more sweeping red and black design. I remain partial to Nvidia’s metal shroud and polycarbonate window though, particularly at this very high-end price point. There are plenty of GeForce GTX 780s with third-party coolers, but a great many ship with the reference ID I wrote about in The Story Of How GeForce GTX 690 And Titan Came To Be. It’d be great to see AMD step up with something similarly inspired.

Despite similar dimensions, Radeon R9 290X is clearly based on a different PCB than AMD’s Tahiti-based cards. Most obvious is the lack of CrossFire connectors. Because Hawaii features an xDMA engine, CrossFire traffic is carried over the PCI Express bus, eliminating the need for those pesky cables. It appears improbable that an aftermarket cooler designed for 7970 would work on R9 290X.

AMD is staying quiet on maximum board power, but claims that R9 290X should push up to 250 W in typical gaming scenarios. Realistically, because PowerTune is constantly making changes, it’s pretty difficult to nail down peak consumption. We recorded a range, though, and found a peak that spanned from 225 to 295 W. Given one eight- and one six-pin auxiliary power connector, plus a 75 W PCI Express slot, those numbers are within the 300 W you probably wouldn’t want to exceed.

The R9 290X cards we received all had two dual-link DVI ports, a full-sized HDMI output, and one DisplayPort connector. Its Hawaii GPU features an updated display controller though, which includes a third independent timing generator. So, although the flagship board comes equipped with one less display output than the R9 280X we recently reviewed, you can actually hook up six screens operating at different resolutions and timings to the R9 290X with an MST hub.

Hawaii’s new display controller will also enable the 600 MHz pixel rates needed to support upcoming single-stream Ultra HD displays at 60 Hz. As you know, currently, the only way to drive a 4K screen is through two HDMI ports or one DisplayPort 1.2 output with MST support. These correspond to a pair of 1920x2160 tiles that come together as a 2x1 Eyefinity array. Next-generation scalers will make 3840x2160p60 possible without tiling—they’ll simply require higher pixel clocks. Radeon R9 290X can do it for sure, but AMD isn’t certain whether its older display controllers will.
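
To see why the pixel clock has to climb that high, consider the raw math (ours; the blanking figures are an assumption based on typical CVT reduced-blanking timings, not an AMD-published number):

```python
# Pixel rate needed for single-stream 3840x2160 at 60 Hz.
active_mpix = 3840 * 2160 * 60 / 1e6       # ~497.7 Mpixels/s of visible data

# CVT reduced blanking pads each frame to roughly 4000x2222 total pixels,
# which is where the ~533-600 MHz pixel clock requirement comes from.
total_mhz = 4000 * 2222 * 60 / 1e6         # ~533.3 MHz

print(f"{active_mpix:.0f} Mpix/s active, ~{total_mhz:.0f} MHz pixel clock")
```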

7. Test System And Benchmarks

Test Hardware And Software

Test Hardware
Processor: Intel Core i7-4960X (Ivy Bridge-E), 3.6 GHz base clock rate, overclocked to 4.3 GHz, LGA 2011, 15 MB shared L3, Hyper-Threading enabled, power savings enabled
Motherboard: ASRock X79 Extreme6 (LGA 2011), X79 Express chipset, BIOS 2.50
Memory: G.Skill 32 GB (8 x 4 GB) DDR3-2133, F3-17000CL9Q-16GBXM x2 @ 9-11-10-28 and 1.65 V
Hard Drive: Samsung 840 Pro SSD, 256 GB, SATA 6Gb/s
Graphics: AMD Radeon R9 290X 4 GB; AMD Radeon R9 280X 3 GB; AMD Radeon HD 7990 6 GB; Nvidia GeForce GTX Titan 6 GB; Nvidia GeForce GTX 780 3 GB; Nvidia GeForce GTX 690 4 GB
Power Supply: Corsair AX860i 860 W

System Software And Drivers
Operating System: Windows 8 Professional 64-bit
DirectX: DirectX 11
Graphics Drivers: AMD Catalyst 13.11 Beta 5 (all AMD cards); Nvidia GeForce 331.40 Beta (all Nvidia cards)
Benchmarks And Settings
Battlefield 3
1920x1080, 2560x1440, 3840x2160, and 7680x1440: Ultra Quality Preset, v-sync off, 90-second Going Hunting playback. FCAT for 1920x1080, 2560x1440, and 7680x1440; Fraps for 3840x2160
Arma III
1920x1080, 2560x1440, and 7680x1440: Ultra Quality Preset, 8x FSAA, Anisotropic Filtering: Ultra, v-sync off, Infantry Showcase, 30-second playback, FCAT
3840x2160: High Quality Preset, 4x FSAA, Anisotropic Filtering: High, v-sync off, Infantry Showcase, 30-second playback, Fraps
Metro: Last Light
1920x1080, 2560x1440, and 7680x1440: Very High Quality Preset, 16x Anisotropic Filtering, Low Motion Blur, v-sync off, Built-In Benchmark, FCAT
3840x2160: High Quality Preset, 16x Anisotropic Filtering, Low Motion Blur, v-sync off, Built-In Benchmark, Fraps
The Elder Scrolls V: Skyrim
1920x1080, 2560x1440, and 7680x1440: Ultra Quality Preset, FXAA Disabled, 25-second Custom Run-Through, FCAT
3840x2160: Ultra Quality Preset, FXAA Disabled, 25-second Custom Run-Through, Fraps
BioShock Infinite
1920x1080 and 2560x1440: Very High Quality Preset, 75-second Opening Game Sequence, FCAT
3840x2160: Very High Quality Preset, 75-second Opening Game Sequence, Fraps
7680x1440: Ultra Quality Preset, 75-second Opening Game Sequence, FCAT
Crysis 3
1920x1080 and 2560x1440: High System Spec, High Texture Resolution, FXAA, 60-second Custom Run-Through, FCAT
3840x2160: High System Spec, High Texture Resolution, FXAA, 60-second Custom Run-Through, Fraps
7680x1440: Very High System Spec, Very High Texture Resolution, FXAA, 60-second Custom Run-Through, FCAT
Tomb Raider
1920x1080, 2560x1440, and 7680x1440: Ultimate Quality Preset, FXAA, 16x Anisotropic Filtering, TressFX Hair, 45-second Custom Run-Through, FCAT
3840x2160: Ultimate Quality Preset, FXAA, 16x Anisotropic Filtering, TressFX Hair, 45-second Custom Run-Through, Fraps
8. Results: Arma III At 1920x1080 And 2560x1440

Originally, I planned to skip testing at 1920x1080—it seems like too mainstream a resolution for these cards. But I was reminded by someone who sells a lot of high-end hardware that FHD remains massively popular.

With that said, we’re able to run Arma III at its Ultra quality preset on even an R9 280X and enjoy playable performance. Nvidia’s GeForce GTX 690 is actually the fastest card at 1920x1080, followed by its GeForce GTX Titan. However, the Radeon R9 290X still averages more than 60 FPS, alongside the GK110-powered GeForce GTX 780. It’d appear that AMD’s Radeon HD 7990 doesn’t have the CrossFire profile it’d need to properly support Arma.

Stepping up to QHD exacts a more taxing workload. The R9 290X’s increased memory bandwidth and higher pixel fill rate allow it to maintain more of its performance than GeForce GTX Titan or 780…

…the thing is, with minimums under 40 FPS, I’d hesitate before recommending any single-GPU solution in this game.

I’m going to leave frame time variance out of this story at 1920x1080. FCAT is reporting odd frame time behavior at that specific resolution, even though we’re able to verify average frame rates with Fraps. At 2560x1440, however, it’s clear that frame time variance in Arma is very low, even for the dual-GPU cards.

9. Results: Arma III At 3840x2160

Ultra HD presents us with some interesting challenges. To begin, while DisplayPort is the most logical interface between your graphics card and a tiled display, there is no way for us to use FCAT and DP for analyzing performance. The workaround is two HDMI inputs, one of which gets routed through a DVI splitter and into a capture card. But while Nvidia says this works fine, AMD is insistent that its controller doesn't support this configuration due to timing issues. That leaves us with Fraps. And of course, there’s no way for us to pick up dropped and runt frames using Fraps. So, we immediately shed the dual-GPU solutions from our charts.

What we’re left with, then, are five single-GPU boards at a dialed-down High detail preset in Arma III. The Ultra configuration we used on the previous page is simply too demanding. Even set to High, we see averages under 40 FPS at best.

Using the Quiet mode firmware, AMD’s Radeon R9 290X maintains more than 30 FPS, beating GeForce GTX Titan.

With that said, I can’t believe that anyone willing to spend $3500 on an Ultra HD screen today would compromise game settings to make a single-card gaming box playable. Calling R9 290X the best solution for a smooth experience at 3840x2160 would be a stretch. In Gaming At 3840x2160: Is Your PC Ready For A 4K Display?, I concluded that you’d want at least two GeForce GTX 780s for 4K. And although the R9 290X is faster than even the $1000 Titan, I maintain that you need a pair in order to crank your settings up to where they should be.

Average frame time variance isn’t bad, but the AMD cards encounter less consistent delivery in worst-case measurements, which could manifest as more stutter when the going gets tough.

10. Results: Battlefield 3 At 1920x1080 And 2560x1440

While both dual-GPU boards rule the roost in Battlefield 3, AMD’s Radeon R9 290X appears as the fastest single-GPU card at both 1920x1080 and 2560x1440, if only barely in both cases.

It used to be that Radeon HD 7990s sold for $600, which would have made them a little more attractive. But now they’re back up to $800 or more, and still broken above 2560x1600 and in multi-monitor configurations (not to mention in multi-card arrays).

The R9 290X is a far better-looking solution at 2560x1440 and 1920x1080, sliding right past the $1000 Titan.

Keeping its nose above 90 FPS at 1920x1080, AMD’s Hawaii-based board is overkill for Battlefield 3, even using the Ultra preset. It’s a far better match for running QHD resolutions.

Frame time variance is low across the board at 2560x1440. The Radeon HD 7990 fares worst, and even its 95th percentile figure is great by all accounts.

11. Results: Battlefield 3 At 3840x2160

The R9 290X’s victory over Titan looks similar in Battlefield 3, though Uber mode doesn’t do anything for performance. Presumably, this is because Battlefield doesn’t tax Hawaii as much, so the chip doesn’t reach its target temperature. In Arma, the extra 15% fan headroom is necessary to prevent R9 290X from backing off of its peak clock rate, yielding higher frame rates.

We’re able to keep the fastest cards in excess of 30 FPS using Battlefield’s Ultra preset. Nvidia’s GeForce GTX Titan dips under, though.

Strangely, while AMD’s R9 290X achieves the highest frame rates, it also encounters the highest worst-case variance. We attempt to minimize statistical noise by taking 95th percentile measurements rather than 99th, but even still, the Uber firmware runs into big variance numbers up there.

12. Results: BioShock Infinite At 1920x1080 And 2560x1440

We used BioShock’s introductory game sequence for benchmarking, since the built-in utility didn’t have a preset for testing at 3840x2160 (and choosing "Current settings" yielded two 1920x2160 tiles on AMD's hardware). We also kept the game settings constant for the purpose of comparison. As a result, we end up with some crazy-high average frame rates at 1920x1080 and 2560x1440. AMD’s R9 290X serves up the highest numbers again, but they’re not statistically relevant given these performance levels.

Most interesting, perhaps, is that AMD’s cards see a couple of sizeable speed-ups during the benchmark run, while Nvidia’s hardware remains flat.

Frame time variance is again very low across the board. Radeon HD 7990 with frame pacing enabled stands out as being least-consistent. But with a worst-case variance under 1 ms, we’d consider that result nothing short of stellar.

13. Results: BioShock Infinite At 3840x2160

The finishing order in BioShock is identical to the previous two games, though Uber mode does have an impact, indicating the benefit of increased fan speed in maintaining higher clock rates. GeForce GTX Titan and 780 follow, the former averaging a few frames per second less than R9 290X, and the latter a few frames behind that.

Here’s where the average frame rates are made and broken. Notice that there is a performance spike favoring AMD at the beginning, and another one about halfway through the benchmark. Those same sequences turn into valleys on Nvidia’s hardware. In talking to both companies, there’s no clear explanation for why this divergence would be happening, except that the intro sequence in BioShock might not be taxing enough. Then again, if it’s not taxing at Ultra HD, why didn’t the Nvidia cards hit a ceiling at FHD or QHD (AMD’s cards still spiked; Nvidia’s were flat)?

Variance is a non-issue in BioShock, even at this extreme resolution.

14. Results: Crysis 3 At 1920x1080 And 2560x1440

As with BioShock, I wanted to maintain the same test setting in Crysis 3 across our three resolutions for comparison purposes. Interestingly enough, the numbers we see from Radeon R9 280X match up with what we saw in AMD Radeon R9 280X, R9 270X, And R7 260X: Old GPUs, New Names, even using a newer driver. However, more powerful graphics hardware appears to push us into a platform bottleneck at 1920x1080 and 2560x1440. The easiest way around this would be using a more taxing detail preset (though as you’ll see in the Ultra HD tests, High is really the ceiling for single-GPU gaming at 3840x2160).

At 1920x1080, all of the cards fall into a narrow range. Pushing to 2560x1440 spreads them out a bit. But it isn’t until we reach 3840x2160 that the bars become better-differentiated.


What about all of those spikes and dips? Crysis 3 is the one game in our suite that requires manual input to run through a preset path. Enemies take different paths and terrain gets affected by explosions. We typically run this test multiple times when it looks like something might affect the outcome noticeably, but the more severe discrepancies at 1920x1080 seem to confirm that the platform is playing more of a role in the frame rates than our powerful graphics cards.

Less consistent frame delivery affects the dual-card solutions most. AMD’s Radeon HD 7990 sees the highest average variance, though Nvidia’s GeForce GTX 690 gets hit by more severe worst-case results.

15. Results: Crysis 3 At 3840x2160

AMD’s Uber firmware does little to help the R9 290X’s performance in Crysis 3. Not that it needs it. In our custom run, the Hawaii-based board is able to slide by GeForce GTX Titan by a few frames per second, on average. Is that enough to call the R9’s finish a win? Not exactly—dropping under 30 FPS, even at a reduced quality level, doesn’t satisfy the maxed-out details you’d want with an expensive display on your desk.

This is where we see those dips, in this case corresponding with a big explosion. Still, R9 290X is a big improvement over Tahiti. The Radeon HD 7970 GHz Edition approaches 20 FPS—definitely too slow.

As with BioShock, our variance numbers aren’t troubling, even if the instantaneous frame rates are.

16. Results: Metro: Last Light At 1920x1080 And 2560x1440

Metro’s Very High detail preset is demanding, so it’s telling that all of these cards serve up such playable performance at 1920x1080. The step up to 2560x1440 knocks the R9 280X under 40 FPS on average. Meanwhile, R9 290X averages more than 50 FPS at both of its BIOS settings.

This game’s built-in benchmarking utility applies a highly variable load. We can see, however, at both resolutions, that the most taxing sequences have AMD’s R9 290X well ahead. Whereas GeForce GTX Titan and 780 dip down to 30 FPS or so at 2560x1440, the R9 290X maintains around 40 FPS at its worst.

Dual-GPU cards again demonstrate the highest frame time variance. But a worst-case result under 2 ms is still plenty smooth, practically.

17. Results: Metro: Last Light At 3840x2160

Running 120 seconds of Metro’s built-in benchmark yields the following numbers. In short, you’re looking at another narrow victory favoring AMD’s Radeon R9 290X over GeForce GTX Titan. The win is again largely symbolic, though…

The Hawaii-based card approaches 20 FPS on one occasion, indicating that when action picks up in Metro: Last Light, the game punishes high-end hardware. An average in excess of 30 FPS sounds alright, but the minimums tell another story.

Most of the frame time variance numbers aren’t a problem. Only the Radeon HD 7970 GHz Edition exhibits worst-case results that’d be noticeable. This is the same sort of phenomenon demonstrated by Radeon R9 290X in Battlefield 3 at 3840x2160, too.

18. Results: The Elder Scrolls V: Skyrim At 1920x1080 And 2560x1440

These are the highest frame rates we’ve ever seen in Skyrim, thanks to our 4.3 GHz Core i7-4960X. At 1920x1080, Nvidia’s GeForce GTX Titan scores a narrow victory. Regardless, even the R9 280X averages more than 80 FPS at 2560x1440.

Because these cards are all so potent, they form a big jumble when we chart frame rate over time. Skyrim runs perfectly well on a Tahiti-powered board, even at its most taxing settings and with the high-res texture pack installed.

The multi-GPU cards exhibit some frame time variance, but the single-GPU boards perform flawlessly.

19. Results: The Elder Scrolls V: Skyrim At 3840x2160

Skyrim is one of the only games in our suite where you can easily play at Ultra HD using just one graphics card. The thing is, you’ll enjoy the game equally using one R9 290X (up at the top of the chart) or one Radeon HD 7970 GHz Edition (at the bottom). Both average plenty-high frame rates throughout.

Alright, so the Tahiti-powered board dips under 60 FPS once. Still, these cards all perform admirably in this game, which looks phenomenal at 3840x2160.

The frame time variances in Skyrim are higher than some of the other titles we’ve looked at, but they’re still nothing we’d be bothered by, translated to on-screen stutter.

20. Results: Tomb Raider At 1920x1080 And 2560x1440

We use a 45-second run-through in Tomb Raider—one of the most taxing passages we could find that’s completely repeatable. At both 1920x1080 and 2560x1440, R9 290X trails the dual-GPU boards, but is faster than GeForce GTX Titan and 780. Nvidia tends to have trouble in this title with TressFX enabled, as we might expect given the feature’s dependence on DirectCompute and AMD’s strength in compute-oriented tasks.

Distinct separation in our frame rate over time chart shows that Tomb Raider is graphics-bound, particularly at the Ultimate quality preset. AMD’s Radeon HD 7990 clearly takes the top spot. However, given its $800+ price tag now, and in light of its outstanding issues, that isn’t a card we’d recommend. Selling for $1000, the GeForce GTX 690 is simply too expensive compared to AMD’s $550 R9 290X.

21. Results: Tomb Raider At 3840x2160

Tomb Raider’s Ultimate quality preset is taxing—so much so that it knocks the 35 FPS average we saw in Gaming At 3840x2160: Is Your PC Ready For A 4K Display? using the Ultra preset down to 27 FPS on Nvidia’s GeForce GTX Titan. Fortunately for AMD, its Radeon R9 290X is faster, averaging just over 30 FPS.

Unfortunately, charting out frame rate over time shows that our sequence nearly drops to 15 FPS at its toughest. That’s just too low to be considered playable.

Variance is a secondary measurement, and it doesn’t really matter once a frame rate is determined to be unplayable. Nevertheless, Nvidia’s cards exhibit worst-case variances that we’d consider detectable.

22. CrossFire: Arma III At 7680x1440

When I first started generating CrossFire data (at the request of /u/Schmakk in /r/buildapc), my reaction was, "...the heck? How can a card that offers so much memory bandwidth and pixel fill rate fall behind the card it was faster than in single-GPU testing?" And then I started my clock rate log and everything made more sense. Beyond the possibility of performance being shader-bound, we know for a fact that two R9 290Xes in CrossFire throttle back their clock rate sooner and more severely than one board due to heat.

With that said, two $550 cards are still keeping pace with two $1000 boards in Arma III.

Even with two high-end cards, the Ultra detail preset is a bit much at 7680x1440. You'd probably want to scale back a bit if you own three 2560x1440 screens (incidentally, those 11 million pixels can be had for $1200 or so, which is almost one-third of an Ultra HD screen).

We record higher frame time variance from AMD's cards. The fact that its single- and multi-card solutions are on fairly even ground keeps us from suspecting an issue with CrossFire, though.

23. CrossFire: Battlefield 3 At 7680x1440

Radeon R9 290X scales well in Battlefield 3. A win with one card turns into a win with two, even if both hardware combinations manage frame rates that appear plenty-playable.

Charting frame rate over time shows that AMD's advantage comes from the beginning of the benchmark, inside an aircraft carrier.

A CrossFire'd configuration does incur higher variance, though. Here's the thing: it's subjectively really hard to tell the practical difference between two Titans and two R9 290X cards in this game. There is clear tearing and stutter from both solutions, but this is a persistent issue with Battlefield that I'd chalk up to DICE's engine, rather than a multi-GPU technology.

24. CrossFire: BioShock Infinite At 7680x1440

R9 290X beat GeForce GTX Titan at 1920x1080, 2560x1440, and 3840x2160. So what the heck happened at 7680x1440? Check it out:

Radeon R9 290X is getting beaten up by the workload with one card. With two installed, it spends most of the run just sitting at 727 MHz (roughly 73% of its rated clock rate). Now, again, I have to concede that achieving this performance level against $2000 worth of hardware from Nvidia is still phenomenal. But it's such a shame that AMD can't get more from this GPU. At any rate, let's look at performance over time.

The discrepancy between Titan and 290X naturally grows in a multi-card configuration, though the $550 board from AMD still has plenty to be proud of.

Enabling CrossFire over PCI Express through a DMA engine works brilliantly for circumventing the bottleneck imposed by AMD's old bridge connectors. However, this mechanism might not be optimized yet. Our frame time variance numbers are by no means problematic. But they are notably higher than what Nvidia can do in SLI.

25. CrossFire: Crysis 3 At 7680x1440

The same sort of reversal affects Crysis, and a peek at the temperature logs reveals a pattern of heating up to 94 degrees or so, maxing out the fan at 40%, and then necessarily dropping clock rate in order to maintain stability.

Although the bars flip-flop, looking at frame rates over time shows that two GeForce GTX Titans and two Radeon R9 290Xes are quite competitive. Switching on AMD's Uber mode would give these boards the additional cooling headroom they need. The thermal solution is simply too loud for that, though.

Here's more evidence that CrossFire likely needs some additional work. This is with frame pacing enabled, and yet the paired-up R9 290Xes demonstrate fairly high worst-case variance, more than doubling what we observe from one card. If you click on the following link, you can see what this looks like in a raw chart of frame times. The deep red line is a tell-tale indicator of more variance compared to the narrower pink line (or Nvidia's lines, which are also pretty clean).

26. CrossFire: Metro: Last Light At 7680x1440

Metro: Last Light behaves a lot like Crysis, though the frame rate over time chart gives both graphics card combinations a fairly similar-looking line.

Bigger spikes affect the CrossFire-based configuration (these look a lot like the Crysis chart linked on the previous page). Less than 10 ms of variance in the 20 frames before and after each measured point isn't bad, but we'd be curious to get two Titans and two 290Xes in front of some readers for blind testing.

27. CrossFire: The Elder Scrolls V: Skyrim At 7680x1440

Skyrim gets hit hard, and it's difficult to pinpoint the cause. Performance jumps up and down, variance is severe, and scaling from one GPU to two is downright bad. A look at the frequency logs doesn't turn up anything nasty. One Radeon R9 290X keeps its head above 900 MHz, while two do dip under 800 MHz briefly. However, that's no worse than some of the situations observed previously.

Update: Actually, the answer to this is simple: although frame pacing and CrossFire now work together at resolutions in excess of 2560x1600, DirectX 9 still is not supported. So, we'll need to wait for AMD's updated beta driver for a fix.

28. CrossFire: Tomb Raider At 7680x1440

Performance returns to the land of predictability in Tomb Raider, where a clock rate hit keeps two R9 290Xes from beating a pair of Titans, though not by much.

Variance is still higher on the paired-up Hawaii boards. You can bet we're going to dig into those higher worst-case numbers in the days to come...

29. Power Consumption

Idle and Multi-Monitor Loads

The Radeon R9 290X’s power consumption at idle is surprisingly high. Even though AMD makes a point of highlighting its ZeroCore Power feature, which does drop the card to a miserly 5 to 6 W, you only enjoy the benefit of this when your monitor is in suspend mode. As soon as the desktop becomes active, power consumption jumps to 20 W with one monitor connected. Connect two and you’re looking at 57 W. Three monitors take you all the way up to 59 W. This means that the R9 290X consumes more power than two overclocked GeForce GTX 780s in SLI with more than one monitor attached.

Hardware Accelerated Video Output

During Blu-ray playback (or other accelerated video work), AMD's Radeon R9 290X consumes 70 W. This is bizarre, since the Radeon R7 240 does the same thing under 17 W. AMD clearly has some driver work to do still.

Onwards and Upwards: Gaming

After that negative attention, PowerTune kicks in to do the job it's supposed to do. The technology makes its adjustments so quickly that it's difficult to express average power consumption using one number. There's a lot of variation, and the reading changes based on several factors.

Because we can't be as precise as we'd want, we're providing a range instead. To achieve this, we left the power limit alone in CCC and lowered the board's target temperature to 70 degrees Celsius. The resulting cooling performance is about on par with what AMD’s partners offer on existing cards in the same thermal class, giving us a preview of what they might achieve with their own cooling solutions and R9 290X.

Power figures between 185 and 218 W are pretty darned good in the ultra-high-end segment. In light of these results, I think we can forgive the idle numbers we recorded earlier.

When Push Comes to Shove: The Peak Values

If you want to take the Radeon R9 290X to its limits, then you need to push it hard by increasing its power limit and dropping the target temperature. Under those conditions, it's possible to exceed 300 W. We even saw 335 W from the card, though that's probably not at all something you want to reproduce.

The 225 W we measured using a compute-heavy load and stock settings can be pushed as high as 295 W by giving the fan more room to spin up and targeting a lower thermal ceiling. Unfortunately, those conditions don't last. Once the Radeon R9 290X hits its target temperature, power consumption drops considerably. This explains the card’s relatively low performance in our GPGPU benchmarks.

30. Noise

Since we've already spent plenty of time talking about fan behavior, explaining noise levels under different loads is really easy. Right upfront: the cooler's moderate stock setting and the GPU's high target temperature make for a relatively quiet card, so long as you don't mess with it. But make no mistake, this thing is in no way as good as the partner solutions that'll undoubtedly be gracing R9 290Xes soon. As usual, the measurements were taken with a studio microphone perpendicular to the graphics card’s middle from a distance of 50 cm.


| | Idle | Load |
| --- | --- | --- |
| Quiet Mode (Default) | 33.4 dB(A) | 45.2 dB(A) |
| Uber Mode (Default) | 34.1 dB(A) | 51.2 dB(A) |
| +25% Power Limit, 70 °C Target Temperature, Fan Speed up to 100% | 34.3 dB(A) | 72.9 dB(A) |

Low Load: 30 Percent Duty Cycle at 38.8 dB(A)

[Audio sample: Radeon R9 290X at 30% fan speed]

Quiet Mode: 40 Percent Duty Cycle at 45.2 dB(A)

[Audio sample: Radeon R9 290X at 40% fan speed]

Uber Mode: 55 Percent Duty Cycle at 51.2 dB(A)

[Audio sample: Radeon R9 290X at 55% fan speed]

We decided to forgo the video demonstrating what a 95% duty cycle sounds like. It’s pointless and potentially bad for your long-term hearing. The noise is simply unbearable without commercial-grade ear protection.

31. CAD: AutoCAD 2013

2D Performance

There are only very marginal differences between the graphics cards. If 2D output is all you need, then it doesn’t really matter whether you pick a gaming or workstation board.

3D Performance

The story changes considerably for 3D. Consumer graphics cards benefit from DirectX optimizations. It needs to be noted that Autodesk is pretty much alone in taking things that direction, though. AMD's Radeon R9 290X holds its own, even though the inconsistent results do make you wonder a bit.

32. CAD: Autodesk Inventor 2013

Autodesk Inventor 2013 uses DirectX as well, which is reflected once again in the consumer graphics cards’ competitive scores. Interestingly, the Radeon products fare better here than they do under AutoCAD 2013, which allows the R9 290X to shine.

33. OpenGL: Maya 2013 And LightWave

Maya 2013

We chose two example scenes that don’t use the new Viewport 2.0 with DirectX to demonstrate the consumer graphics cards’ disadvantage compared to the workstation graphics cards with OpenGL. The AMD FirePro and Nvidia Quadro graphics cards benefit from their much more optimized drivers. The Radeon R9 290X falls in line with the rest of the consumer graphics cards. The second scene demonstrates nicely how well Nvidia's GeForce GTX 580 still does if you give it the right task.

LightWave

LightWave generally just kills consumer graphics cards. Still, in a pinch, the newest Radeon cards can be used for medium-sized models that aren’t too complex. Make no mistake, though; the workstation cards are clearly ahead.

AMD's Radeon R9 290X edges out the Radeon HD 7970 GHz Edition. Once again, we're surprised that the Hawaii-based board doesn’t pull ahead by a wider margin.

34. OpenCL: Bitmining, LuxMark, And RatGPU

Even though bitcoins themselves have seen some lively discussions of late, and there’s specialized equipment that’s much more efficient for mining them than graphics cards, mining still makes for a great benchmark. AMD is clearly the winner here. However, the two graphics card generations perform almost the same. The Radeon R9 290X’s inability to pull ahead is due to PowerTune. The temperature and power targets are reached quickly, and, once that happens, clock rates take a big hit.

For what it's worth, AMD’s new flagship shares this fate with Nvidia. The GeForce GTX Titan reacts just as badly under this kind of load. From where we sit, AMD clearly optimized the R9 290X for gaming. Under a constant, heavy compute load, Hawaii stands no chance of delivering the same exceptional performance.

It’s no secret that the OpenCL-based LuxRender software has always been one of AMD’s strong suits. Its OpenCL implementation is fairly well optimized and serves as a good demonstration of Nvidia’s lack of commitment to this platform. Unfortunately, the Radeon R9 290X’s advantage over the 7970 GHz Edition isn’t as large as it should be based on each card's technical specifications. Once more, PowerTune hits the brakes and keeps Hawaii from achieving its full potential (or self-combusting). This is somewhat sad to see, really.

ratGPU performance is a whole other story, which is to say that the pattern of graphics cards that have an advantage changes due to ratGPU’s completely different architecture. The older AMD Radeon HD 6970’s result is especially noteworthy in this benchmark, since it practically destroys the rest of the field. The current Radeon graphics cards have to be content with the better part of the middle, and the R9 290X doesn’t change this.

35. R9 290X: A Taste Of Paradise That Won’t Break The Bank

A trip to Bora Bora is going to set you back big time. Monte Carlo and Capri are also great places to go if you want to be seen spending lots of cash. But Hawaii—now that can be done relatively affordably. And it can still be pretty damn close to paradise.

Similarly, AMD’s Radeon R9 290X isn’t the most expensive or luxurious graphics card out there. It leans on an old cooling solution that we’d like to see improved, and it’s wrapped in a plastic shroud. There are a few things we think AMD could be doing better, and we’ll get into those. But when it comes to gaming performance, this card has little trouble trouncing its primary competition, GeForce GTX 780, and even Nvidia’s GeForce GTX Titan in a number of cases—both of which are substantially more expensive boards.

Let’s get the bad out of the way first. AMD is pushing its Hawaii GPU pretty hard in order to achieve the performance it’s getting. Although the R9 290X is rated for 1000 MHz, the right load will get Hawaii up to its 95 °C limit pretty fast. From there, you have to rely on the right fan speed to keep that clock rate up.

AMD says it gives you total control over this and, thanks to an updated PowerTune technology that defines maximum fan speed (rather than dialing in an absolute value), indeed it does. But you also get stuck with the same noisy thermal solution that makes reference Radeon HD 7970s so acoustically grating. Company engineers insulate you from having the same loud experience by implementing two firmware modes: Quiet and Uber. Quiet keeps the fan under 40% duty cycle. Uber lets it get up to 55%, and that’s too loud for me. So, I stick with Quiet mode. Once Hawaii is at 95 °C and the fan hits 40%, frequencies start retreating quickly. It’s not uncommon to see them bouncing between mid-700 to mid-800 MHz in single-card configs. In CrossFire, they’ll drop to 727 MHz and stay there. The bummer is that a more effective thermal solution could keep acoustics down and allow Hawaii to operate toward the top of its range more consistently.

How much does any of that matter if R9 290X is still a stellar performer? I guess that depends on how much it costs, right? As it happens, AMD says you’ll find its flagship Hawaii-based board for $550. That’s $100 less than GeForce GTX 780 and $450 less than a Titan. And better performance, in many of the cases we tested, than both. Wowsa.

Practically speaking, if you own a single QHD display, AMD’s Radeon R9 280X remains a good entry point for playable performance in most games at $300. Nvidia’s GeForce GTX 770 is the next step up, but it’s not so much faster that we’d recommend spending an extra $100. If you really want to play taxing new titles like Arma III at their highest quality levels, Radeon R9 290X becomes the most affordable way to get there, with speed to match its price.

It’s certainly possible to play games at 3840x2160 using R9 290X, but nobody is going to spend $3500 on a new monitor and settle for barely-playable performance at dialed-back settings. You’re going to want two Radeon R9 290X or GeForce GTX 780 cards to make that happen. We couldn’t benchmark CrossFire against SLI at Ultra HD resolutions, since AMD doesn’t support the display output configuration we’d need to use for our FCAT-enabled equipment. However, based on our 7680x1440 results, expect the Hawaii-based boards to be faster. And $200 less when you buy a pair.

The coup de grâce is our set of benchmarks across three QHD screens—more than 11 million pixels. With all of our games cranked up to their highest possible settings, two R9 290Xes come close to a pair of $1000 Titans. AMD isn’t helped by the fact that its cards are pretty much pegged at 73% of their stock clock rate due to heat and my insistence on using the Quiet firmware. But maybe the company’s board partners will work some thermal magic and “uncork” some of Hawaii’s performance without compromising acoustics.

In the spirit of getting massive performance at a substantial discount, then, I’m giving AMD’s Radeon R9 290X Tom’s Hardware’s Elite award—the first time a graphics card has received this honor, I believe, during my tenure. The decision was controversial. Nvidia still does thermals, acoustics, and aesthetics better. But now it’s also charging a hefty premium for those luxuries. AMD’s card is faster, cheaper, and it makes an effort to keep acoustics under control, so long as you stick with its Quiet mode. AMD reworked its approach to CrossFire and now has a more elegant solution that, while not perfect (we still measured dropped and runt frames in Skyrim, along with notable variance in other titles), does facilitate frame pacing right out of the gate at resolutions all the way up to 7680x1440. I’ll get more enthusiastic about the R9 290X if third-party designs start showing up with better cooling. Until then, it’d be downright negligent to not recognize this card’s class-leading performance at a price we paid for Radeon HD 7970 two years ago.