The reactions to last month’s Radeon R9 290X launch were polarizing, to say the least. On one hand, you had this new GPU largely based on a familiar architecture, but still equipped with new technology and, overall, typically faster than GeForce GTX 780 and Titan. On the other, it proved to be power-hungry, purportedly designed to run at a cringe-inducing 95 °C, and cooled by a fan that gets very loud, if you let it.
So two factions faced off—those who saw the value in a very fast gaming card priced hundreds of dollars less than the competition, versus others who weren’t impressed by a new GPU edging out Nvidia’s eight-month-old flagship.
Regardless of which side you chose, we can all agree that more performance at a lower price point is good for PC gaming. Just look at the aftermath: Nvidia dropped the GeForce GTX 770 to an attractive $330 and its GeForce GTX 780 to $500. We even know now that the GeForce GTX 780 Ti will go for $700 when it emerges.
Adding value is exactly what today is about, too. Using the same Hawaii GPU it just unveiled, AMD is introducing a Radeon R9 290.
Hawaii Gets A Haircut
The R9 290 is a derivative product, which means its specifications don’t fall far from the 290X’s. As you know, Hawaii is a 6.2-billion-transistor processor manufactured at 28 nm. But instead of enabling all 44 of its Compute Units, AMD fuses off four of them, dropping the chip’s shader count from 2816 to 2560 and, in the process, trimming texture units from 176 to 160. Although AMD isn’t specific about which four CUs get disabled, company representatives do say they’re turned off in a manner that yields consistent performance from one board to the next.
And to clarify a point from my R9 290X review: Hawaii doesn’t offer 1/4-rate double-precision compute like Tahiti did. AMD drops DP performance to one-eighth of the chip’s FP32 throughput, saving the more potent compute potential for its FirePro cards and taking a page out of Nvidia’s playbook. That makes the 290’s peak single-precision floating-point performance about 4.85 TFLOPS, while its DP rate is 606 GFLOPS.
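Those quoted figures are simple arithmetic. A quick sketch, assuming (as with all GCN parts) two FLOPs per shader per clock via fused multiply-add:

```python
# Back-of-the-envelope peak throughput for Radeon R9 290 (Hawaii, four CUs fused off).
# Assumes each GCN shader retires one FMA per clock, counted as two FLOPs.
shaders = 2560            # 40 active CUs x 64 shaders
boost_clock_ghz = 0.947   # AMD's "up to" frequency

fp32_gflops = shaders * 2 * boost_clock_ghz   # ~4849 GFLOPS
fp64_gflops = fp32_gflops / 8                 # Hawaii's 1/8-rate DP on Radeon boards

print(f"FP32: {fp32_gflops / 1000:.2f} TFLOPS")  # FP32: 4.85 TFLOPS
print(f"FP64: {fp64_gflops:.0f} GFLOPS")         # FP64: 606 GFLOPS
```

Run at the rated 1 GHz ceiling of the 290X with all 2816 shaders, the same formula yields the 5.6 TFLOPS in the table below.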
| | Radeon R9 290X | Radeon R9 290 | Radeon R9 280X | GeForce GTX Titan | GeForce GTX 780 |
|---|---|---|---|---|---|
| Process | 28 nm | 28 nm | 28 nm | 28 nm | 28 nm |
| Transistors | 6.2 Billion | 6.2 Billion | 4.3 Billion | 7.1 Billion | 7.1 Billion |
| GPU Clock | Up to 1 GHz | Up to 947 MHz | 1 GHz | 836 MHz | 863 MHz |
| Shaders | 2816 | 2560 | 2048 | 2688 | 2304 |
| FP32 Performance | 5.6 TFLOPS | 4.8 TFLOPS | 4.1 TFLOPS | 4.5 TFLOPS | 4.0 TFLOPS |
| Texture Units | 176 | 160 | 128 | 224 | 192 |
| Texture Fillrate | 176 GT/s | 152 GT/s | 128 GT/s | 188 GT/s | 166 GT/s |
| ROPs | 64 | 64 | 32 | 48 | 48 |
| Pixel Fillrate | 64 GP/s | 61 GP/s | 32 GP/s | 40 GP/s | 41 GP/s |
| Memory Bus | 512-bit | 512-bit | 384-bit | 384-bit | 384-bit |
| Memory | 4 GB GDDR5 | 4 GB GDDR5 | 3 GB GDDR5 | 6 GB GDDR5 | 3 GB GDDR5 |
| Memory Data Rate | 5 Gb/s | 5 Gb/s | 6 Gb/s | 6 Gb/s | 6 Gb/s |
| Memory Bandwidth | 320 GB/s | 320 GB/s | 288 GB/s | 288 GB/s | 288 GB/s |
| Board Power | 250 W (Claimed) | 250 W (Claimed) | 250 W | 250 W | 250 W |
Hawaii’s other vital specs remain remarkably intact, though. A geometry engine in each of four Shader Engines maintains the same number of primitives per cycle. Every Shader Engine is also equipped with four render back-ends, enabling up to 64 pixels per clock across the GPU. The aggregate 512-bit memory bus carries over as well, and Radeon R9 290 sports the same 4 GB of 1250 MHz GDDR5 RAM.
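The table's derived numbers can be sanity-checked with the same sort of arithmetic. This sketch recomputes the 290's memory bandwidth and pixel fillrate from its bus width, data rate, ROP count, and peak clock:

```python
# Recomputing two derived specs for Radeon R9 290.
bus_width_bits = 512
data_rate_gbps = 5.0      # 1250 MHz GDDR5, quad-pumped to 5 Gb/s per pin

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes
print(bandwidth_gbs)      # 320.0 GB/s, unchanged from the 290X

rops = 64
boost_clock_ghz = 0.947
print(rops * boost_clock_ghz)  # ~60.6 GP/s; the table rounds to 61
```

Because the full 512-bit bus and all 64 ROPs survive the trim, only the clock difference separates the two Hawaii cards on these metrics.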

With so many similarities between the R9 290X and 290, AMD also dials back the 290's maximum frequency, beyond its reduced shader count, to keep the two cards from landing on top of each other in performance. The 290X runs at up to 1 GHz, while Radeon R9 290 peaks at 947 MHz.
Clock Rate Inflation: Marketing Gone (Too) Wild
Let’s talk a little bit about core clock rates, though, since that was a point of contention in Radeon R9 290X Review: AMD's Back In Ultra-High-End Gaming. In essence, it appears that AMD has a base clock rate around 727 MHz with its R9 290X, though the Hawaii GPU wants to run as close to 1000 MHz as possible. By the time the chip approaches its 95 °C ceiling, you’ll probably find the fan already spinning at a 40% duty cycle using AMD’s “Quiet” firmware. From there, the GPU clocks down. Depending on the chip’s quality and the workload you run, Hawaii might slide all the way to 727 MHz and stay there if its fan can’t keep it cool enough.

On the R9 290X we received from AMD, and in the seven games we tested, a 40% fan speed is good enough to average about 874 MHz. But when you’re actually gaming on a hot card (and not just benchmarking a cold one), our two-minute Metro: Last Light test suggests you’ll be spending more of your time in the upper-700 MHz range. In fact, in some titles, you’ll dip under 1000 MHz before even getting out of the menu system and into the action (Arma and BioShock).
You could call that questionable marketing. After all, the only way you’ll actually see a sustained 1000 MHz is if you either let the R9 290X’s fan howl like a tomcat looking for action or play platform-bound games. Then again, if you’re still seeing better performance from 290X than competing cards, what does it matter how Hawaii gets there, right?
With that in mind, how does the R9 290 fare in comparison?

I maintained the same scale and enforced the same 40% fan speed limit to give you an idea of how much more variance there is between the troughs and crests. AMD gives the 290 an “up to” rating of 947 MHz, but our seven games average 832 MHz. In the most taxing situations, the clock rate floor, or base clock, appears to be 662 MHz. If the GPU can’t be kept cool, even down at that base frequency, you’ll see the 40% fan limit forcibly exceeded (it crept up to 44% in a three-run stress test of Metro: Last Light).
When you think about it, this is basically the reverse of Nvidia’s GPU Boost technology. AMD is selling its cards using the highest frequency you’ll see, and then slowing them down. Nvidia cites a base clock and then allows the GPU’s headroom to push higher, making it a point to specify both base and typical boost numbers. AMD’s scheme undoubtedly suffers from a lack of clarity, and after piling praise onto the R9 290X’s value story, I now have to hope that Nvidia doesn’t follow AMD down this muddy little rabbit hole.
Ultimately, the performance figures are what matter most. Just be careful before drawing definitive conclusions. The longer you run any of these tests at stock settings on AMD’s reference design, the further averages drift from the rated 947 MHz figure. Our sample doesn’t have a Quiet and Uber mode; both of its firmware switch positions share the same 40% fan speed maximum. And to complicate the situation, prior to launch, AMD rolled out an updated driver that overrides the BIOS setting in software to allow fan speeds up to 47% by default. What is the impact of that modification?

As we know, overcoming AMD’s throttling mechanism requires manually increasing the OverDrive applet’s maximum duty cycle. By upping its shipping fan speed from 40% to 47%, AMD allows its reference cooler to blow harder, maintaining higher frequencies for longer durations, at the expense of greatly increased noise. Once again, we find ourselves pinning hopes for a quieter, more consistent R9 290 on the company’s partners.
But all we have for now is the reference design. With that in mind, let’s have a look at the Radeon R9 290 itself.
Actually, before I dive into the Radeon R9 290 that AMD sent to our lab, I need to broach the subject of variability from one GPU to the next.
Hawaii has the potential to be a very, very fast GPU. If you cool it right and keep it at its frequency ceiling, it can beat Titan. We saw that in our review of the card, and that’s why it earned our Elite award. When you don’t pamper it, though, the chip is quick to let you know that it’s running at redline. Unfortunately, AMD’s reference cooler, spinning at acoustically-friendly speeds, cannot cool Hawaii well enough to promise consistent clock rates in different apps. You start at 1000 MHz and, within minutes, are at some frequency lower than that. It might be 900-something, 800-something, or 700-something megahertz, depending on your specific GPU. That can turn into benchmark results that look nothing like each other from one card to the next.
The card that AMD sent to me is a stallion. Even if you get it nice and hot before running a test, bringing it down off of that 1000 MHz “wishful thinking” spec, it’s still faster than GeForce GTX 780, and oftentimes GeForce GTX Titan. But the Radeon R9 290X I bought from Newegg is a dud. It’ll drop to 727 MHz and stay there…and the reference cooler still can’t cool it fast enough. The result is that it violates its 40% fan speed ceiling as well. The craziness, then, is that my R9 290 press board is typically faster than my R9 290X retail card. In the benchmarks, you’re going to see numbers for all three.
Update: As is Tom's Hardware policy, we shared these potentially problematic findings with AMD prior to publication, and the company insists something is wrong with the retail-purchased cards I tested. We will continue investigating and, if any additional news becomes available, update this story.
Does that mean R9 290X’s recognition is unwarranted? I will say that Nvidia’s price cuts add pressure AMD’s flagship didn’t feel a couple of weeks ago. And in light of the almost-30% difference in frequency between ceiling and floor, it’s a lot harder to put confidence in the representativeness of press-sampled cards.
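For reference, the "almost-30%" figure cited above is just the gap between the 290's rated ceiling and its observed floor:

```python
# Spread between Radeon R9 290's "up to" clock and its apparent base clock.
ceiling_mhz, floor_mhz = 947, 662
spread = (ceiling_mhz - floor_mhz) / ceiling_mhz
print(f"{spread:.1%}")  # 30.1%
```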
Of course, that puts us between a rock and a hard place. For R9 290X, we can go out and buy boards to compare. But there’s no way to know if the R9 290s you buy will operate at the top of their range (947 MHz) or the bottom (662 MHz).
The good news to come from all of this, perhaps, is that existing R9 290X and 290 cards employ AMD’s reference cooler design. This is the weak link in the chain affecting all of the Hawaii-based products we’ve tested thus far (and we’ve been testing pretty much non-stop for three weeks now). Again, third-party designs with more effective coolers will be what change the story.
Rumor has it, though, that AMD is holding its partners at bay until GeForce GTX 780 Ti launches, allowing the company to reevaluate the ultra-high-end space and put a target on where it needs to be for another victory. We have Hawaii running at a constant, stable 1.158 GHz in our lab, and we know a card with two eight-pin power inputs could be a real beast. However, we also don’t anticipate AMD or its partners offering 780 Ti-killing performance at the same $550 price point.
So, here we are, facing a trimmed-down GPU and the same thermal solution. Let’s have a look…
Does it come as any surprise that a second graphics card sporting AMD’s Hawaii GPU, lightly altered, appears identical to the Radeon R9 290X? Given the lack of evolution that went into 290X’s thermal solution, we wholly expected 290 to be indistinguishable. Today’s description gets a whole lot easier as a result.
In short, this is the same 11-inch-long, dual-slot board with a 75 mm centrifugal fan.
Its top edge prominently features the same eight- and six-pin auxiliary power connectors, and a distinct lack of CrossFire connectors. To that point, Radeon R9 290 benefits from the xDMA engine built into Hawaii’s on-die compositing block. Right out of the box, two of these boards support CrossFire configurations with frame pacing enabled at Ultra HD and multi-screen resolutions. What they don’t yet support is frame pacing in DirectX 9 games like Skyrim or OpenGL-based titles. AMD still claims that the beta driver adding that capability will be available before the end of 2013.
Display output connectivity is the same, too. Modified from my 290X coverage:
The R9 290 card we received has two dual-link DVI ports, a full-sized HDMI output, and one DisplayPort connector. Its Hawaii GPU features an updated display controller, though, which includes a third independent timing generator. So, although the board comes equipped with one fewer display output than the R9 280X we recently reviewed, you can actually hook up six screens operating at different resolutions and timings to the R9 290 with an MST hub.
Hawaii’s new display controller will also enable the 600 MHz pixel rates needed to support upcoming single-stream Ultra HD displays at 60 Hz. As you know, currently, the only way to drive a 4K screen is through two HDMI ports or one DisplayPort 1.2 output with MST support. These correspond to a pair of 1920x2160 tiles that come together as a 2x1 Eyefinity array. Next-generation scalers will make 3840x2160p60 possible without tiling; they’ll simply require higher pixel clocks. Radeon R9 290 can do it for sure, but AMD isn’t certain whether its older display controllers will.
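As a rough illustration of why those pixel clocks are needed (my own arithmetic, ignoring the exact blanking overhead a given timing standard adds):

```python
# Active pixel rate for single-stream 3840x2160 at 60 Hz; blanking intervals
# push the required pixel clock from here toward the ~600 MHz mark.
active_pixels_per_sec = 3840 * 2160 * 60
print(active_pixels_per_sec / 1e6)  # 497.664 Mpix/s before blanking
```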
We'll go into more detail in the pages that follow, but it's also worth noting that AMD claims that Radeon R9 290 bears the same 250 W typical board power as the 290X. That was a conservative estimate for the 290X, and the same likely goes for 290, too. Suitably, AMD also arms this board with one eight- and one six-pin power connector.
Because Radeon R9 290X can be so bipolar, it’s important to spell out how we’re testing all of these cards today.
Typically, we run benchmarks in rapid succession. This means that the GPU remains warm after its first run. This matters very little to cards that operate at one clock rate. But it makes more of a difference on boards with Nvidia’s GPU Boost technology, which won’t stretch up as high when certain limits are exceeded. AMD’s clocking mechanism is even more sensitive to thermal and power conditions, necessitating an even more regimented approach to testing.
For every test we run, we spend five minutes in-game, playing, before launching our benchmark sequence. In a test like Metro: Last Light, where we use the built-in benchmark tool, we run multiple iterations prior to starting the measurement.
In this way, we’re sure the card is at its maximum operating temperature, yielding the lowest benchmark results, but best representing the experience you’d get from these cards after just a few minutes of playing your favorite title.
We’re also changing up the benchmarking a bit. In preparation for today’s piece, we re-ran every single test. We standardized settings across resolutions to better track scaling, and, again, we’re subjecting every board to five minutes of in-game time before firing up our testing sequence. As a result, we're presenting slightly different numbers from our 290X review, but the data should be more precise, too.
Test Hardware And Software
| Test Hardware | |
|---|---|
| Processors | Intel Core i7-3970X (Sandy Bridge-E) 3.5 GHz Base Clock Rate, Overclocked to 4.3 GHz, LGA 2011, 15 MB Shared L3, Hyper-Threading enabled, Power-savings enabled |
| Motherboard | MSI X79A-GD45 Plus (LGA 2011) X79 Express Chipset, BIOS 17.5 |
| Memory | G.Skill 32 GB (8 x 4 GB) DDR3-2133, F3-17000CL9Q-16GBXM x2 @ 9-11-10-28 and 1.65 V |
| Hard Drive | Samsung 840 Pro SSD 256 GB SATA 6Gb/s |
| Graphics | AMD Radeon R9 290 4 GB |
| AMD Radeon R9 290X 4 GB | |
| AMD Radeon R9 280X 3 GB | |
| AMD Radeon HD 7990 6 GB | |
| Nvidia GeForce GTX Titan 6 GB | |
| Nvidia GeForce GTX 780 3 GB | |
| Nvidia GeForce GTX 690 4 GB | |
| Power Supply | Corsair AX860i 860 W |
| System Software And Drivers | |
| Operating System | Windows 8 Professional 64-bit |
| DirectX | DirectX 11 |
| Graphics Driver | AMD Catalyst 13.11 Beta 8 (Radeon R9 290) |
| AMD Catalyst 13.11 Beta 7 (All Other AMD cards) | |
| Nvidia GeForce 331.65 Beta (All Nvidia cards) | |
| Benchmarks And Settings | |
|---|---|
| Battlefield 4 | 1920x1080, 2560x1440, and 3840x2160: Ultra Quality Preset, v-sync off, 100-second Tashgar playback. FCAT for 1920x1080 and 2560x1440; Fraps for 3840x2160 |
| Arma III | 1920x1080, 2560x1440, and 3840x2160: Ultra Quality Preset, 8x FSAA, Anisotropic Filtering: Ultra, v-sync off, Infantry Showcase, 30-second playback, FCAT and Fraps |
| Metro: Last Light | 1920x1080, 2560x1440, and 3840x2160: Very High Quality Preset, 16x Anisotropic Filtering, Low Motion Blur, v-sync off, Built-In Benchmark, FCAT and Fraps |
| The Elder Scrolls V: Skyrim | 1920x1080, 2560x1440, and 3840x2160: Ultra Quality Preset, FXAA Disabled, 25-second Custom Run-Through, FCAT and Fraps |
| BioShock Infinite | 1920x1080, 2560x1440, and 3840x2160: Very High Quality Preset, 75-second Opening Game Sequence, FCAT and Fraps |
| Crysis 3 | 1920x1080, 2560x1440, and 3840x2160: High System Spec, High Texture Resolution, MSAA Low (2X), 60-second Custom Run-Through, FCAT and Fraps |
| Tomb Raider | 1920x1080, 2560x1440, and 3840x2160: Ultimate Quality Preset, FXAA, 16x Anisotropic Filtering, TressFX Hair, 45-second Custom Run-Through, FCAT and Fraps |
I wanted to cut down on the page count of this story, so all of the re-run benchmarks are piling into one chart with three resolutions. Again, everything you see in the next seven pages is the product of heating every graphics card up prior to testing.

Right out of the gate, Radeon R9 290 jumps up alongside our press-sampled R9 290X at 1920x1080, 2560x1440, and the unplayable 3840x2160. Of course, achieving this requires a more aggressive 47% fan speed ceiling, which isn’t as bad as the 290X’s Uber mode, but is still significantly louder than Quiet mode.
Meanwhile, the R9 290X we bought off the shelf lands under the $330 GeForce GTX 770 and $400 R9 290. Now you see why we’re making such a big deal about the variance between boards, right?
Fortunately for AMD, the shift to 2560x1440, where we’d expect these products to be used, shakes up the standings. The press-sampled R9 290 finishes in front of the GeForce GTX 780, and indeed the Titan as well. It continues to barely trail the 290X card we received from AMD, too. But then there’s the retail 290X, which manages to tie the $500 GeForce GTX 780, but loses to the 290 it should be beating.
By the time we hit 3840x2160, all of these cards are running too slowly for playable performance. You’d need to back Arma III off of its Ultra graphics quality setting—and after spending $3500 on a monitor, you aren’t going to want to do that.



The frame rate over time charts demonstrate just how close Radeon R9 290 and 290X come to each other—at least the cards we were sent by AMD. Our retail board is consistently in a different (lower) class.

Nvidia’s cards have an issue with Arma at 1920x1080—we cannot FCAT their results without a ton of frames getting inserted into the video output. Charted out, these insertions are what mess with worst-case frame time variance at that resolution.
At 2560x1440, every card drops back to very low variance, which is what we want to see to confirm that there’s little in the way of stuttering going on.
Stepping up to Ultra HD, however, frame rates drop so low, and the workload is so demanding, that variance between frames grows substantially.
Welcome to the first review featuring Battlefield 4 performance figures. I played through the entire single-player campaign in one night to nail down the best possible repeatable sequence to use for benchmarking, settling on the Tashgar chapter’s introduction.

The good news is that we see some great differentiation and consistency between cards at 1920x1080 and 2560x1440. At both resolutions, AMD’s Radeon R9 290 shows up just under the GeForce GTX Titan and in front of GeForce GTX 780. Phenomenal for a $400 graphics card, right?
Our moment of pause comes from the retail 290X card we also tested, which turns up between the GeForce GTX 780 and 770—both of which cost less than the Hawaii-based board.
To AMD’s credit, both the Radeon R9 290X and 290 do appear better-suited to 4K gaming than the competition from Nvidia. Even the retail 290X pops up ahead of Titan once we measure at 3840x2160. It’s just unfortunate that frame rates are too low with a single card to make that resolution playable.



The first two frame rate over time charts show how well both dual-GPU cards still fare. The third reminds us why they can be a hassle: with no way to reliably benchmark them at 3840x2160 using FCAT, we leave their results out rather than posting Fraps-based numbers that don’t include dropped and runt frame data.

Although Battlefield 4 is the newest game in our suite, we observe fantastic variance numbers at 1920x1080 and 2560x1440. Ultra HD would be much more worrisome if the frame rates were higher.

The BioShock numbers line up pretty well at FHD and QHD resolutions, too. Radeon R9 290 shows up just behind our 290X press board, which is to say that both are faster than GeForce GTX 780.
Notice that GeForce GTX Titan shows up in front of the Radeon R9 290X. In our 290X review, it actually trailed by quite a bit. The difference, of course, is the five-minute pre-test heat-up that each card is subjected to in today’s review. Once we shed the overly ambitious frequency you’d never enjoy while playing BioShock, the finishing order shifts. Our press card is still a great performer; its result is merely tempered.
This procedure isn’t at all friendly to the retail card, though. At 1920x1080, it shows up behind GeForce GTX 770. At 2560x1440, it manages a slight win over the GK104-based board. But it still trails GeForce GTX 780. Only at 3840x2160 does our retail 290X leapfrog the 780.



It’s safe to say that the AMD-supplied 290X and 290 cards, along with the retail 290X, are all playable at Ultra HD resolutions. We’re still wary of the big gap between both of our Radeon R9 290X cards, though.

Frame time variance in BioShock is exceedingly low. Even in a worst-case scenario, the latency between successive frames should appear minimal.

Our Crysis 3 benchmark is based on real-world gameplay. Fairly consistently, it appears to be platform-bound, though. It might be tempting to suspect a v-sync issue, given the average frame rates at 1920x1080 clumping up at 60 FPS. However, if you look back to our R9 290X review, you’ll notice averages in the 65 FPS range—roughly corresponding to our switch from a Core i7-4960X to a -3970X processor this time around.
One observation cannot be missed, though: Radeon R9 290 looks a lot like our sampled 290X and GeForce GTX Titan. The retail R9 290X is quite a bit slower though, particularly at 2560x1440.
The frame rates drop too low at 3840x2160 to be usable, though that’s clearly where AMD’s Hawaii GPU excels. We’ve already tested the 290X in CrossFire and seen impressive results. However, we’re waiting for a second retail card before revisiting that configuration in a more realistic way.



Our dual-GPU numbers were generated by the FCAT tool suite, which is designed to factor out dropped and runt frames. And yet, the Radeon HD 7990 is somehow able to transcend the ceiling imposed on every other card at 1920x1080.
This is masked somewhat at 2560x1440, where the GeForce GTX 690 reminds us that it’s a very capable performer, too. Single-GPU boards like the Radeon R9 290X, 290, GeForce GTX Titan, and 780 all clump together though. There's a little more spread at 3840x2160, but only enough to see the retail 290X getting outperformed by the sampled 290. Both cards beat out Nvidia’s GeForce GTX 780.

Worst-case frame time variance is fairly low at 2560x1440. It gets worse at 1920x1080 and 3840x2160, though seemingly not in a consistent way. Only the GTX 690’s higher numbers would make sense from the standpoint of getting two GPUs to render frames consistently. Just remember these are 95th percentile numbers. The average and 75th percentile are being excluded to avoid a data overload.

At 1920x1080, our Metro: Last Light numbers put AMD’s dual-Tahiti card in the lead, followed by the Radeon R9 290X we reviewed a couple of weeks ago. The R9 290 with its 47% maximum fan speed settings places third, surprisingly beating Nvidia’s GeForce GTX 690. From there, Nvidia’s GeForce GTX Titan finishes fifth, trailed by the R9 290X we purchased from Newegg. That’s a 13% difference between the sampled and retail cards.
At 2560x1440, Nvidia’s boards regain some ground. However, the 290X and 290 cards from AMD still beat GeForce GTX Titan. Meanwhile, that retail card files in behind GeForce GTX 780—a board that sells for $50 less.
Hawaii gets its mojo back a bit at 3840x2160, where its better-balanced back-end and copious memory bandwidth land the retail card in front of GeForce GTX 780, but behind Titan. No matter—those frame rates are too low for single-GPU configurations, anyway.



Most of these boards are grouped up fairly tightly in the frame rate over time charts. At 3840x2160, even the fastest solution drops under 20 FPS. It’d take a couple of Radeon R9 290X cards to achieve playability using the settings we’ve picked.

Most of the variance numbers from Metro are solid. There are four exceptions that come from four different GPUs, so the average and 75th percentile results from those boards would probably look a lot more similar than the worst-case figures.

A platform limitation at 1920x1080 causes a bit of havoc in the sorting at 2560x1440 and 3840x2160. Even at Ultra HD resolutions, though, all nine solutions serve up playable average frame rates. Even with the high-res texture pack installed, AMD’s Radeon R9 280X averages more than 50 FPS.



It takes scales starting at 80 FPS (at 1920x1080), 60 FPS (at 2560x1440), and 40 FPS (at 3840x2160) to put some distance between nine cards in Skyrim.

Worst-case variance at 3840x2160 is a mixed bag. Otherwise, all of these cards serve up a reasonably smooth experience.

The finishing order is pretty consistent across all three tested resolutions in Tomb Raider. At 1920x1080, our press-sampled R9 290X and 290 cards beat GeForce GTX Titan. Even with TressFX enabled—a big advantage for AMD—the Titan manages to outmaneuver the 290X we purchased, though.
R9 290 even edges out the 290X from AMD at 2560x1440. But if retail 290s behave more like our store-bought 290X, we’d expect performance somewhere between GeForce GTX 780 and 770.
None of these cards are fast enough for Ultra HD. In fact, the GeForce GTX 770 shows why 2 GB cards (and dual-GPU boards with 2 GB per processor like GeForce GTX 690) are wholly unsuitable for such high resolutions. They run out of on-board memory, pure and simple.



Tracking frame rate over time shows us that, although most of these cards achieve playable average frame rates at 2560x1440, they’re still pushed down under 30 FPS in the most demanding part of our benchmark.

The GeForce GTX 770 is hit by disturbingly bad frame time variance in its 95th percentile chart. Everything else suggests smoothness through the test.
Not much changes in these metrics compared to our Radeon R9 290X launch story. As expected, the R9 290 falls in line behind AMD's flagship board. It doesn’t really look like the switch from OpenGL to DirectX is becoming a trend, though, since Autodesk is the only major company making this drastic change. The advantage of DirectX for end-users is that they can do without specialized workstation cards, so long as they’re willing to forgo drivers optimized for specific applications, greater compute performance, and so on. DirectX’s disadvantage is its use of single-precision coordinates, which can easily lead to display errors in complex models, such as the dreaded push-through effect, where a surface sitting just behind another bleeds through it.



For CAD applications employing OpenGL, the Radeon R9 290’s performance is enough for a place in the middle of the pack, but that’s about it. The actual benchmark results fall far short of what's theoretically possible based on the hardware's specifications, but OpenGL performance remains largely dependent on driver optimization, which just isn’t a very high priority for gaming graphics cards. The situation changes a bit from one application to another, but OpenGL is certainly not the Radeon R9 290’s strong suit.



The Radeon R9 290’s results are basically the same as what we reported for the 290X. Hawaii possesses a lot of theoretical performance, but it’s often diminished by the card's reference design, which is forced to scale back significantly under the load of a compute-oriented workload. This is not what we'd want to see from a GPU otherwise celebrated for its potential in OpenCL-accelerated apps.
It's also worth noting that AMD artificially handicaps Hawaii's FP64 throughput with a 1/8, compared to Tahiti's 1/4, rate. This is to allow FirePro cards based on the same GPU to offer another differentiator (we hear they'll be full-speed, or 1/2).



Measuring Power Consumption
We’re using a current clamp to measure power consumption at the external PCIe power cable and, using a special PCB, directly at the PCIe slot. These measurements are recorded in parallel and in real time, added up for each second, and logged using multi-channel monitoring along with the respective voltages. All of this results in a representative curve over the span of 10 minutes. That's all we really need, since these cards reach their operating temperatures relatively quickly.
The curve isn’t just representative; it's also exact. Measuring power at the system level introduces bias, since a number of factors besides the graphics card can affect consumption. A faster GPU might cause the CPU’s power consumption to rise as well, for example, since a limiting factor holding it back is gone.
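To illustrate the method, here is a minimal sketch of how per-rail clamp readings might be combined into the one-second board-power samples described above; the data layout and the numbers are hypothetical, not our actual logging software:

```python
# Hypothetical one-second sample: (volts, amps) per monitored rail, i.e. the
# external PCIe cables plus the slot's 12 V and 3.3 V supplies.
def board_power_watts(sample):
    """Sum P = V * I across every monitored rail for one time step."""
    return sum(volts * amps for volts, amps in sample.values())

one_second = {
    "pcie_8pin": (12.05, 8.4),
    "pcie_6pin": (12.03, 5.1),
    "slot_12v":  (11.98, 4.2),
    "slot_3v3":  (3.31, 0.6),
}
print(round(board_power_watts(one_second), 1))  # total board power in watts
```

Summing all rails per second, rather than metering at the wall, is what keeps the CPU and the rest of the platform out of the graphics card's numbers.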
We're using Nvidia's GeForce GTX 780 as the “competitor”, since its performance comes the closest to Radeon R9 290, making it a good basis for comparison. You'll also find a GeForce GTX 770 and Radeon R9 280X in the line-up as well.
AMD Radeon R9 290 Gaming Loop Power Consumption
The feedback we received on the forums after our AMD Radeon R9 290X launch article prompted us to question if the efficiency of Hawaii-based graphics cards changes if the temperatures are purposefully kept low. For this reason, we’re presenting three sets of results for the Radeon R9 290.
Default Mode and Settings with Catalyst 13.11 Beta v7
This mode is similar to the Radeon R9 290X’s Quiet mode with its 40 percent maximum fan speed. Let’s have a look at a 10-minute benchmark run consisting of Metro: Last Light looped at maximum settings. It’s interesting to see that power consumption is significantly higher before the target temperature is reached, after which it drops noticeably. The average in the graph only incorporates values recorded after the limiter kicks in, making the outcome more representative of real-life usage.
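That averaging rule, counting only samples recorded once the limiter has engaged, can be sketched like this (the wattage values and limiter-onset index are illustrative):

```python
# Average per-second power readings from the limiter's onset onward, ignoring
# the higher draw recorded before the temperature target was reached.
def steady_state_average(power_log, limiter_index):
    steady = power_log[limiter_index:]
    return sum(steady) / len(steady)

power_log = [250, 255, 260, 230, 218, 215, 217, 216]  # watts, one per second
print(steady_state_average(power_log, limiter_index=3))  # 219.2
```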

Temperature Target of 70 °C and Maximum Fan Speed of 80 Percent with Catalyst 13.11 Beta v7
The second benchmark run shows the card’s power consumption after setting the target temperature to 70 °C and the maximum fan speed to 80 percent, while leaving the power limit alone. Better cooling gets rid of the peak and smooths out the curve. Power consumption is higher overall, though. We aren’t exactly in love with a very busy chart characterized by many erratic jumps, which can only partially be explained by the wildly jumping GPU clock frequency.


The massive power consumption drops shown by the force-cooled Radeon R9 290, along with its more constant clock rate curve, confirm the theory that reaching the target temperature results in frantic regulation attempts by the power limiter, which leads to the observed jerkiness.
Default Mode and Settings with Catalyst 13.11 Beta v8
Right before the 290's original launch date, AMD told us it was pushing back the introduction to accommodate a new driver that was supposed to bolster performance. In reality, the driver update is a compromise between the Quiet and Uber modes of the already-launched Radeon R9 290X. A 47 percent maximum fan speed and a target temperature of 95 °C is exactly in the middle between the two modes. Let’s take a look at the power consumption and the actually achievable clock frequencies.


Two things are interesting here. First, the power consumption peak is gone, and the card levels off at exactly the same point as the force-cooled board once the temperature limit is reached. Second, the clock frequencies are similarly consistent and only slightly lower than those of the much cooler card. Looking at these results, and taking into account how often AMD asked us for our impressions of the Radeon R9 290’s cooler and performance, the new driver's trick becomes clear: you get a cooler, but louder, graphics card with a small performance boost.
Nvidia GeForce GTX 780 Gaming Loop Power Consumption
Now, how much power does the GeForce GTX 780 need to achieve similar performance? Just like before, we’ll help the card out a bit by lowering its temperature to 70 °C, which we achieve by increasing fan speed.

The first thing to note is that the GeForce GTX 780’s curve also sports a small peak when the temperature target is reached. This peak is a lot less pronounced than the 290’s, though. Cooling the card down barely changes the curve.

What makes this comparison different from AMD's card is that the curve is a lot smoother and doesn’t show the same extreme power consumption fluctuations. Even more interesting are the minimum and maximum power consumption, which stay the same when the card is cooled down. Only the average increases.
Nvidia GeForce GTX 770 and AMD Radeon R9 280X Gaming Loop Power Consumption
Let’s take a look at a couple of lower-end cards. This will be important later when it comes time to look at efficiency.
The differences are massive in light of the fact that both boards perform very similarly. This nicely demonstrates just how large a leap AMD has made with this graphics card generation.


Bottom Line
AMD is doing a great job catching up, but it isn’t quite there yet. According to our measurements, the roughly 40 W gap between the two rebranded cards (GeForce GTX 770 and Radeon R9 280X) shrinks to approximately 20 W between the R9 290 and GTX 780. Then again, the Radeon's slightly higher gaming performance is worth taking into account as well.
Either way, Nvidia’s power consumption curves are smoother and free of short, sudden drops. This should give AMD a reason to tune its drivers a bit more.
Thanks to your requests in our feedback section, we’re including the benchmark performance results in our efficiency calculations. Specifically, we’re using the benchmarks recorded during our power consumption measurements. This is especially interesting due to the fact that Metro: Last Light isn't a Gaming Evolved title, so nobody can accuse it of favoring AMD.
We're testing at 1920x1080, given that resolution's popularity. The two faster and two slower graphics cards end up with fairly similar performance, which dispels any lingering doubt about this being an apples-to-oranges comparison. The graphs show the results for the Radeon R9 290 with its new driver and higher fan speed, since displaying three different results would have been confusing.
Gaming Loop Performance
Let’s first take a look at the plain frames per second and the frames per second percentages. This provides a nice overview.
With the new drivers that are supposed to keep the boisterous radial fan under control, AMD's Radeon R9 290 only gives up about one percent of its performance compared to eight percent with the old drivers. The performance difference is six percent for the Nvidia GeForce GTX 780. This doesn’t really make either of the two reference graphics cards look great. One gives up some of its performance, and the other one gets loud.


Efficiency
This is where power consumption enters the scene. We’re now judging the graphics cards based on how much power they need to achieve each of their frame rates. The GeForce GTX 780 does benefit from its better cooling, and manages to stay in the same place that we’ve become accustomed to. Then again, through some smart maneuvering, AMD manages to push its card to, or at least close to, the same level as Nvidia’s offering.


The Radeon R9 290 is only three to four percent less efficient than Nvidia's GeForce GTX 780. This is a pretty massive improvement over the 26 percent separating the AMD Radeon R9 280X and GeForce GTX 770. The fact that Nvidia's GeForce GTX 780 has already been the happy recipient of several optimized drivers, whereas the Radeon R9 290 is only supported by a beta driver should provide some food for thought, too. The gap between the two graphics cards could shrink, or even disappear altogether, at some point in the future.
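The efficiency comparison above boils down to frames per second delivered per watt, with the quoted percentages following from a simple ratio of the two cards' results. A sketch of that arithmetic, using invented numbers rather than our measured data (the function names are ours):

```python
def efficiency(fps, watts):
    """Frames per second delivered per watt of power draw."""
    return fps / watts

def relative_gap(eff_a, eff_b):
    """Percentage by which card A trails card B in efficiency."""
    return (1 - eff_a / eff_b) * 100

# Hypothetical numbers for illustration only, not our measurements
r9_290 = efficiency(60.0, 250.0)   # 0.24 fps/W
gtx_780 = efficiency(58.0, 230.0)  # ≈ 0.252 fps/W
print(round(relative_gap(r9_290, gtx_780), 1))  # → 4.8
```

Note that a card can win on raw frame rate and still lose on efficiency if its power draw grows faster than its performance, which is exactly the trade-off the charts capture.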
AMD Radeon R9 290X Correction
Ultimately, any piece of equipment can fail, and this is what happened to our current clamp while we were benchmarking the Radeon R9 290X. That means some of the power consumption results in our launch article weren't quite right. Power was one of the last things we measured, and not only are the Radeon R9 290X’s numbers for the gaming loop too low, but the error got worse with time, increasingly throwing off the curve. Thus, we're repeating our readings on the 290X and presenting the corrected curve below. Also, we've taken steps so this doesn't happen again, using additional equipment and comparing the results through different methods.
On the bright side, the Radeon R9 290X we're using for our new power consumption benchmark isn’t a press sample sent to us by AMD, but an off-the-shelf retail card.

Power Consumption Overview for All Graphics Cards
Here are the power consumption numbers for all of the benchmarked graphics cards, including the corrected Radeon R9 290X results.





Bottom Line
Once again, the Radeon R9 290X’s power consumption at idle is very high, and it gets a lot worse when a second or third monitor is connected. The Blu-ray playback results are also far from satisfactory. Things get a lot better when it comes to gaming loops, where the Radeon R9 290X is much improved over AMD’s Tahiti. This lets it gain a lot of ground on the more efficient Nvidia graphics cards. Seeing that the gap really isn't that large any more, the remaining difference might eventually be eliminated entirely through driver optimizations.
Noise
The Radeon R9 290X review covered fan speeds and how noise level relates to different loads. AMD's new Catalyst 13.11 Beta 8 driver doesn't really improve the situation; the extra performance doesn't come from any specific optimization, but simply from a higher fan speed and, consequently, a higher noise level.
We’re presenting separate videos for the two drivers to demonstrate the progression from the first to the second. Unfortunately, the louder of the two is the driver AMD apparently plans to ship. For comparison, we also include the Nvidia GeForce GTX 780, as well as a Radeon R9 290 that we upgraded ourselves with a third-party cooling solution. These nicely show just how much performance AMD leaves on the table due to its reference cooler. As always, the measurements are taken with a studio microphone perpendicular to the middle of the card from a distance of 50 cm.


AMD Radeon R9 290 Noise Comparison Before and After the Driver Update
Both videos show the noise level during a long gaming loop and illustrate the result in the graph.
At idle, the AMD Radeon R9 290’s radial fan is definitely noticeable, but bearable.
Noise Comparison with the Nvidia GeForce GTX 780
Same gaming loop, different graphics card. The GeForce GTX 780 at stock speed and settings is a lot quieter, but it pays for it by reaching its thermal limit quickly. The fan needs to be pushed quite a bit to achieve consistent GPU Boost frequencies. Seventy percent is enough for a cold card, but once it’s warmed up, a fan speed of 80 percent is needed to maintain those higher clock rates. This is the only way to get an apples-to-apples comparison of the two competing graphics cards.
Replacing the Reference Cooler with Arctic's Accelero Xtreme III
The Arctic Accelero Xtreme III, now in its third iteration, can keep pretty much anything cool. This is the type of heat sink and fan combination that kept the overclocked Fermi-based Sparkle GeForce GTX 480 from melting not just itself, but half the computer. It fits the Radeon R9 290 with minor modifications and can thus serve as an example of what AMD could have done with this card. We’ll publish the entire upgrade as a guide soon, since it was really, really worth it.
Overclocking Results
We’re using the same gaming loop as before, trying to pinpoint the card's maximum clock rate through a series of small increases. The Arctic Accelero Xtreme III can be controlled via PWM or run with a constant voltage and RPM. The OverDrive applet's new fan control changes the game, though. Maximum fan speed is now bound to the target temperature. It doesn’t make sense to set this target to the 50 or 60 °C that are possible with this cooler just to have it spin slightly faster. That kind of setup is essentially self-limiting due to its (too) good cooling performance. Even under a full load, it’s almost impossible to get the Arctic Accelero Xtreme III to spin at more than 20 to 25 percent by changing the driver settings. This isn’t enough to provide cooling to the voltage converters.
Consequently, we went with the direct connection and a fixed voltage. Even at 7 V, the upgraded Radeon R9 290 is barely louder at prolonged full load than the stock versions are at idle, and the GPU and VRMs stay cool to boot.
Let’s take a look at the benchmark results of the overclocked Radeon R9 290, which turn out to be a big surprise. There’s a 20 percent difference between the original card and the overclocked one. The last-minute driver update reduces this difference to a still-substantial 13 percent. Keep in mind that we’re not just talking about a frequency increase, but also more usable performance and less noise. The Arctic Accelero Xtreme III demonstrates nicely what can be achieved with AMD's Radeon R9 290.


Video Comparison between the Reference and Third-Party Cooler
The first two videos show the AMD Radeon R9 290 with the Arctic Accelero Xtreme III at 12 and 7 V, respectively. The third one shows the original stock version of the card.
Bottom Line
If anything deserves an award, it’s the Arctic Accelero Xtreme III third-party cooler that lets AMD's Hawaii-based boards realize their potential. This is how the card could, and should, perform. Why AMD persists with its sub-par cooling solution is really anyone’s guess, especially since these problems have been going on for years. Dumping the issue on its partners can’t really be the solution either, since a graphics card’s reputation is made, or lost, on launch day.
As long as the only reaction to this is a driver update with questionable benefits, the reference graphics cards will always be the cheap solution. This GPU deserves better. As we said before, we’ll post the upgrade guide as its own story, since none of AMD’s partners currently offer their own PCBs and cooling solutions.
The technology press is in the privileged position of receiving high-end components before anyone else sees them. Although this sometimes translates to all-night marathons of benchmarking and cramped hands, the trade-off is that most of us can build bleeding-edge gaming PCs without spending a dime.
But it’s dropping $550 on an already-released Radeon R9 290X that turns this review on its head. Had we simply tested our R9 290 sample against the previously-reviewed 290X, we would have concluded that the slightly cut-back Hawaii GPU comes pretty darned close to AMD’s flagship, spanking GeForce GTX 780 and going up against Titan. Priced at $400, that would have been something special indeed.
However, the two retail Radeon R9 290X boards in our lab are both slower than the 290 tested today. They average lower clock rates over time, pushing frame rates down. Clearly there’s something wrong when the derivative card straight from AMD ends up on top of the just-purchased flagships. So who’s to say retail 290s won’t follow suit, underperforming GeForce GTX 780 once we start buying them? We can only speculate at this point, though anecdotal evidence gleaned from our experience with the R9 290X is suggestive.
Back To The 290…
Try to set that aside for a moment and assume the R9 290 we’ve been working with is representative of what you’ll find boxed up on retail shelves. Originally, AMD had the card set up with a 40% default maximum fan duty cycle. The experience was similar to R9 290X and its Quiet mode. Though louder than Nvidia’s reference GeForce GTX 770, 780, and Titan, I could have lived with the acoustics, and I appreciated that the fan shroud vented out.
After catching wind of Nvidia’s price cuts, however, AMD went back and re-spun its driver to override the 290’s firmware. It extracted more performance from 290 by increasing maximum fan speed from 40 to 47%, which falls between the 290X’s Quiet and Uber settings. This successfully allows our press card to surge ahead of its pricier competition by maintaining clock rates closer to the top of its range.
I have two issues with this. First, at a 47% duty cycle, the fan is too loud. It’s obviously not as bad as the 290X’s Uber mode, but I don’t see any compelling reason to compromise acoustics when quieter solutions exist. AMD points out that you can turn the fan down if you want, and that's true, but you'd watch the 290's performance erode at the same time. Second, I simply don’t trust the numbers I’m getting from the 290 we have on hand to review. Even if our underperforming retail R9 290X cards turn out to be a total fluke, the mere existence of this much variance means the Radeon R9 290 is either as fast as a GeForce GTX Titan and priced phenomenally, or somewhere behind a retail R9 290X, just ahead of GeForce GTX 770, and priced to slot into the market unspectacularly. I’m not comfortable making a recommendation one way or the other on the 290 until we see some retail hardware.
If that sounds like an about-face after my Radeon R9 290X review, well, in some ways it is. There was simply no way to anticipate so much variation from one card to another at launch. AMD insists what we're seeing isn't right, but we can only determine that with greater retail availability. Moreover, AMD came to market with a fantastic price on 290X compared to its competition. That situation has since changed. And now, the decision to let the 290's fan hit 47% duty cycle feels like a knee-jerk reaction, sacrificing experience for higher sustained clock rates. Less consistency, tighter pricing, more noise...let's just say I'm more wary this time around.
Where Are Those Partner Boards?
So much of what’s being discussed relates to keeping Hawaii as cool and as quiet as possible, and it’s hardly a secret that AMD’s reference solution is the center of attention. Our own lab experiments demonstrate Hawaii’s potential (read the previous page if you haven't already). We know it’ll run fast, and we know this can be done without a ton of noise. So when can we expect the custom-built cards to address our concerns? We hear they’re being held back until more is known about GeForce GTX 780 Ti—and this is entirely plausible. After all, if AMD could just keep Hawaii running at 1 GHz without creating a racket, it’d have another shot at the high-end crown.










