AMD Radeon R9 280X, R9 270X, And R7 260X: Old GPUs, New Names
By Igor Wallossek
1. Tahiti, Pitcairn, And Bonaire Show Up For An Encore

Back in 2000, a gentleman by the name of Brian Hentschel called my dorm room at UCLA to ask my opinion of brand names. Brian was a marketing manager at ATI, and the company was looking for something catchy to succeed its Rage family. I had owned every single Rage-based desktop graphics product up until that point, and was pumped to provide feedback on the company's next-gen nomenclature.

Thirteen years later, I cannot remember the other options ATI was throwing around, but I distinctly recall liking Radeon least of all. Clearly I have no future in marketing, because I’ve been reviewing Radeon-branded cards ever since.

With its latest generation, AMD maintains the Radeon legacy, but changes everything that comes after. According to the company’s PR team, the new naming scheme makes positioning easier—and I’d have to agree. Our own writers were mistyping combinations of Radeon HD 7990, 7970, 7790, and so on. Now, we have the high-end Radeon R9 and mainstream R7 families, which are sub-divided into three-digit models suggesting performance levels.

Say Goodbye To The Old Names And Hello To The Old GPUs

At its press day in Hawaii two weeks ago, AMD publicly announced the Radeon R7 250, R7 260X, R9 270X, R9 280X, R9 290, and R9 290X. There’s also an R7 240 the company didn’t mention. How are you supposed to memorize each card's corresponding specifications in a timely manner? It’s easy: although we’re looking at new model names, all of the products AMD is talking about today employ GPUs already found in the Radeon HD 7000-series line-up.

Take that R9 270X, for example. With 1280 shaders spread across 20 compute units, it employs the same Pitcairn GPU introduced on the Radeon HD 7870 GHz Edition in March of last year. Or how about the R9 280X? Its 2048 shaders, 1 GHz engine frequency, and 384-bit memory bus should remind you of the Radeon HD 7970 GHz Edition, sporting the Tahiti GPU.

Let's at least keep it real, guys. These aren't new GPUs.

Of course, taking existing technology, tweaking it a bit, and giving it a shiny new-sounding name is an old practice. Much of the Radeon HD 8000 family is a rebadge of the 7000s, shipped off to OEMs in the hope that folks buying tier-one machines don’t know any better. And don’t think I’m picking on AMD here. Nvidia’s GeForce GTX 700M series has Fermi-based models in it with core configurations dating back almost three years. The GeForce GTX 770 and 760 employ the same GK104 GPU found at the top of the 600-series. This sort of thing happens a lot in the graphics market.

The good news for today is that familiar GPUs make our job quite a bit easier. Doubly so because the two products based on never-before-seen silicon, R9 290 and 290X, still aren’t ready for their public debut. This leaves us with the remainder of AMD’s R9 and R7 line-ups, well-known (and tested) technology, and price drops across the board. Positioning becomes the main focus of today's discussion, then.

Just don't be quick to marginalize what AMD is doing. Most Radeon HD 7970 GHz Edition cards currently sell for somewhere around $375. R9 280X is going to debut at $300. The GeForce GTX 770 that Nvidia launched to replace its GTX 680 still sells for $400. Remember when the original 7970 sold for $550? Boy, that escalated quickly.

Let’s take a closer look at the R9 280X—for now, the highest-end board in AMD’s re-branded portfolio.

2. R9 280X: The Tahiti GPU’s Second (Or Third?) Lease On Life

I guess it depends whether you consider the Radeon HD 7970 GHz Edition a distinct product introduction. If so, then R9 280X is the third single-GPU graphics card sporting AMD’s complete Tahiti GPU.

When I wrote AMD Radeon HD 7970 GHz Edition Review: Give Me Back That Crown!, that card featured a base clock rate of 1 GHz that’d jump to 1.05 GHz under the effects of PowerTune with Boost. Otherwise, it employed Tahiti in its uncut form—with 2048 Stream processors, 128 texture units, 32 ROPs, and a 384-bit aggregate memory bus populated with 3 GB of GDDR5 at 6 GT/s. AMD slapped a 250 W maximum board power on the card and started shipping it at $500 (already down $50 from the original 7970’s launch).

The R9 280X’s specs are very, very similar. Its Tahiti GPU boasts the same 2048:128:32 configuration, with a 384-bit memory bus rocking 3 GB of GDDR5 that AMD says should operate at 6 GT/s. You’ll need to use the same six- and eight-pin connectors for an identical 250 W board power. The one notable difference is the 280X’s engine clock, which tops out at 1 GHz. As a result, you’ll notice the R9 280X performing slower in our benchmarks. Fortunately, there’s that price drop…

At $300, an R9 280X does battle against the GeForce GTX 760, mostly. I say mostly because the cheapest 770s are selling for around $400 (clearly too high to be in the same league), while the 760s are between $250 and $320. And since we already know the Radeon HD 7970 GHz Edition and GeForce GTX 770 are the cards that trade blows, well, AMD should be in a pretty strong position by the time we get to the end of our benchmarks.

To answer whether existing Radeon HD 7970 cards can be paired with the new R9 280X, yes, they work together. A quick Fraps-based test showed one 280X hitting 52.9 FPS in Battlefield 3 at 2560x1440. Dropping a 7970 GHz Edition next to the newer board pushed frame rates to 102.3 FPS. When it wasn't in use, the 7970 properly spun down according to AMD's ZeroCore technology.
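For what it's worth, the scaling those two Fraps numbers imply is nearly ideal. A quick back-of-the-envelope check (my own arithmetic, using only the figures quoted above):

```python
# Back-of-the-envelope CrossFire scaling from the Fraps figures above.
# The FPS values are the ones quoted in the text; the math is mine.
single_gpu_fps = 52.9   # R9 280X alone, Battlefield 3 at 2560x1440
dual_gpu_fps = 102.3    # R9 280X + Radeon HD 7970 GHz Edition

speedup = dual_gpu_fps / single_gpu_fps
efficiency = speedup / 2  # fraction of a perfect 2x doubling

print(f"{speedup:.2f}x speed-up, {efficiency:.0%} scaling efficiency")
# → 1.93x speed-up, 97% scaling efficiency
```

That kind of near-perfect doubling is what you'd hope for from two cards with matching GPU configurations.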

We received two different takes on the R9 280X, neither of them conforming to AMD’s reference design.

Asus R9280X-DC2T-3GD5

The first card was Asus’ R9 280X DirectCU II TOP, an overclocked board with a GPU capable of up to 1070 MHz and GDDR5 memory at 6400 MT/s. Asus says this variant will sell for $310, bundled with a power adapter and CrossFire cable.

Beyond the tweaked specifications, Asus employs a non-reference PCB and oversized cooling to help manage power, thermals, and acoustics. The company says AMD’s reference 280X employs a five-phase (60 A/phase) PWM, while its own card features eight phases at 45 A/phase. This, along with claimed higher-quality power components, is supposed to benefit aggressive overclocking. Naturally, Asus bundles its GPU Tweak software for adjusting core clock, voltage, memory, power target, and fan speed (mostly settings you can tune in AMD’s OverDrive applet, except for voltage).

Although the R9 280X DirectCU II TOP is a dual-slot card, it’s also long and tall. The fan shroud stretches over the back of the PCB, imparting an 11.2” overall length. Further, a heat pipe coming out of the board’s top increases maximum height to 5.7”.

Display outputs include one dual-link DVI-I port, one dual-link DVI-D connector, HDMI, and a full-sized DisplayPort output.

XFX R9-280X-TDFD

XFX’s R9-280X-TDFD showed up next, based on a slightly different PCB. This is the one we benchmarked for our performance evaluation, if only because we were able to get its speeds and feeds first. An 850 MHz core clock rate accelerates up to 1 GHz when the thermal headroom allows. The memory tops out at AMD’s reference 6 GT/s, serving up to 288 GB/s of bandwidth.

Otherwise, the R9-280X-TDFD is a dual-slot board employing two axial-flow fans that do their job very quietly, but dump waste heat back into your case instead of exhausting it out the back.

Display outputs total five. You get one dual-link DVI-I port, one single-link DVI-D connector, HDMI, and two mini-DisplayPort outputs. Quad-card CrossFire configurations are supported across two bridges, though an extra-tall plastic frame surrounding the heat sink makes even a flexible ribbon connector difficult to attach.

3. R9 270X: Pitcairn Gets A Little Boost

When the Radeon HD 7870 launched, it sold for $350. Now, you can find the cards going for somewhere between $185 and $200. Incidentally, AMD wants to introduce its R9 270X, based on an ASIC it’s calling Curacao (which, on paper, looks identical to Pitcairn), at the upper end of that same range: $200.

Although this isn’t nearly the savings story we heard from the R9 280X, AMD probably isn’t feeling pressured by the 270X’s primary competition. Nvidia’s GeForce GTX 660 with 2 GB is currently selling for about $200 as well, and it doesn’t keep up with Pitcairn (just prior to launch, Nvidia announced the 660 is dropping to $180).

Fortunately, for roughly the same price, AMD does bolster the R9 270X’s performance a little.

The Radeon HD 7870 had a Pitcairn GPU with 1280 shaders, 80 texture units, and 32 ROPs on it. A 1000 MHz core and 1200 MHz memory clock were ample for a solid gaming experience at 1920x1080, and an aggregate 256-bit bus with 2 GB of GDDR5 memory helped facilitate high detail settings.

Likewise, R9 270X brings to bear 1280 shaders, 80 texture units, and 32 ROPs. Its Curacao GPU notches up 50 MHz to 1.05 GHz, and the 2 GB of GDDR5 on our press sample runs at 1400 MHz (or 5.6 GT/s). There will be 4 GB models, AMD says, but they’ll be a bit pricier than the 2 GB version’s $200. Whereas the 7870 bore a 175 W board power, R9 270X is rated for 180 W. Fortunately, the bump is meager enough that you’ll still find yourself using two six-pin auxiliary connectors.

Third-party implementations will likely differ in the display outputs that get exposed, but AMD’s reference model features two DVI outputs, HDMI, and DisplayPort.

4. R7 260X: TrueAudio’s First Outing On The Back Of Bonaire

As you saw during our coverage of AMD’s GPU14 Tech Day in Hawaii, there are three SKUs in AMD’s line-up with TrueAudio support: R9 290X, R9 290 (neither of which are available yet), and R7 260X.

The DSPs from Tensilica (a fabless semiconductor company) that enable TrueAudio are built onto AMD’s GPUs, which might make you think, “Ah ha, a new feature on-die—surely we must be dealing with a new GPU, too.” Disappointingly, no. This functionality was actually part of the Bonaire processor that launched alongside Radeon HD 7790 (AMD Radeon HD 7790 Review: Graphics Core Next At $150, from March of this year), but was simply not enabled. Now, at least, it’s available for middleware developers to play with and, eventually, expose.

Bonaire, as it appeared on the Radeon HD 7790, boasted 896 shaders, 56 texture units, and 16 ROPs. The sample we reviewed had a GPU running at 1 GHz with 1 GB of GDDR5 memory at 6 GT/s on a 128-bit bus. Power, interestingly enough, was rated at 85 W, deliverable through the PCI Express slot and one six-pin lead. Bonaire was also AMD’s first GPU with an updated version of PowerTune featuring a second-generation voltage regulation controller able to control voltage in 6.25 mV steps. The result of this functionality is much faster response to changes in input (temperature, activity, or telemetry) and corresponding voltage/clock rate/fan speed adjustments.

The GPU that shows up on R7 260X features all of the same vital specs, including the newer PowerTune implementation. Only, its engine clock operates at up to 1.1 GHz. Its memory (now 2 GB instead of 1) streaks along at 1625 MHz over an aggregate 128-bit bus, serving up to 104 GB/s of bandwidth. It’s still serviceable by a single six-pin power cable, but the result of higher clock rates all around is a board power that jumps to 115 W. That’ll hardly be an issue for most folks.
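Both of the bandwidth figures quoted in this piece (288 GB/s for the R9 280X, 104 GB/s for the R7 260X) fall straight out of bus width and data rate; GDDR5 is quad-pumped, moving four bits per pin per memory clock. A quick sketch of the arithmetic (my own illustration, using only the specs stated in the text):

```python
# How the quoted GDDR5 bandwidth figures are derived. GDDR5 is
# quad-pumped: the effective data rate is 4x the memory clock.

def gddr5_gts(memory_clock_mhz):
    """Effective data rate in GT/s from the memory clock in MHz."""
    return memory_clock_mhz * 4 / 1000

def bandwidth_gbs(bus_width_bits, data_rate_gts):
    """Peak bandwidth in GB/s: bus width in bytes times transfers/second."""
    return (bus_width_bits / 8) * data_rate_gts

# R7 260X: 1625 MHz memory clock -> 6.5 GT/s on a 128-bit bus
print(bandwidth_gbs(128, gddr5_gts(1625)))  # → 104.0

# R9 280X: 6 GT/s on a 384-bit bus
print(bandwidth_gbs(384, 6))                # → 288.0
```

The same formula covers the R9 270X's 5.6 GT/s over its 256-bit bus, if you want to run the numbers yourself.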

As of this writing, Gigabyte has a Radeon HD 7790 running at 1075 MHz with 2 GB of memory at 1500 MHz selling for $140 on Newegg. AMD plans to sell the R7 260X for the same $140. Most of the other 7790s are 1 GB models, so AMD is still adding some value. But there’s really not much to get excited about in the shift from Radeon HD 7790 to R7 260X, aside from TrueAudio getting switched on. And even then, we need to wait for the feature to get enabled in upcoming games.

5. TrueAudio: Dedicated Resources For Sound Processing

If you followed along with AMD’s tech day webcast, then you sat through a lot of TrueAudio discussion. In fact, given the amount of time dedicated to TrueAudio, the feature seemed like it’d be the day’s emphasis.

At the event, we heard the partner demos across eight channels, and the positional audio was certainly discernible, if not overwhelmingly busy (on purpose, no doubt). But we all know that 7.1- and even 5.1-channel sound setups outside of a home theater are very uncommon. Two- and 2.1-channel configurations, including headsets, are far more common. Unfortunately, it didn’t sound like anyone tuned in over Livestream was getting the same experience in stereo.

For anyone who was around in the late ‘90s to hear Aureal’s and Sensaura’s technologies, before both were acquired by Creative, you know that the head-related transfer functions used to create effective positional audio over two channels are not new. The point of TrueAudio is to facilitate more complex sound effects (those HRTFs aren’t computationally free) without burdening the host processor. Today, AMD says that audio gets as much as 10% of a game’s CPU utilization budget, limiting what developers can do. But with TrueAudio, AMD wants to guarantee the availability of real-time processing resources specifically for sound, and regardless of the host CPU you have installed.

This is achieved through the Tensilica HiFi2 EP Audio DSP cores mentioned on the previous page. In the R7 260X, there are three cores integrated on the Bonaire GPU. The higher-end R9 290 and 290X will also feature three DSP cores dedicated to TrueAudio. Those DSPs employ Tensilica’s Xtensa ISA with fixed- and floating-point number support, which AMD says is equally useful for high-end gaming and embedded applications. Because the DSP is programmable by nature, you can really feed anything you want into it, so long as there’s a decoder available. To that end, professional audio software vendors are purportedly showing an interest, eager to see what dedicated hardware can do that host-based processing couldn’t.

The real-time nature of audio in a gaming environment means that fast access to compute cycles and memory is imperative, even if the cores themselves aren't particularly powerful. Each one includes 32 KB of instruction and data cache, along with 8 KB of scratch RAM. A fast routing interface connects the DSPs to 384 KB of shared internal memory organized in 8 KB banks. The local resources are fed by a multi-channel DMA engine able to keep the cores busy. And up to 64 MB of frame buffer memory is addressable through a low-latency bus interface shared with the display pipeline.

One of the first questions that came to mind upon hearing about TrueAudio was, “will game developers, already strapped for time and money as they get their titles to market, put resources into sound when there’s so much going on in graphics, physics, and AI?” AMD seems to think that the impact on ISVs will be minimal, though. Because a majority of developers are utilizing middleware for their audio, TrueAudio needs support from those companies first and foremost. Once you get support in Audiokinetic and Firelight’s FMOD, detecting and utilizing TrueAudio becomes much easier. From there, the feature exerts its influence before getting handed off to a codec, and is consequently compatible with any output type.

What about the fact that AMD is only making TrueAudio available across three products, two of which aren’t even available yet? Representatives say that AMD has to start somewhere with TrueAudio, and this is simply the first public airing. I’d add that high-end graphics cards, destined for high-end PCs, also don’t need audio effects acceleration as much as less powerful platforms. But you can guess where this is going: expect the same technology to start showing up in AMD’s APUs and mobile GPUs, which are less powerful and might even realize power benefits from accelerating audio.

6. Display Technology

Historically, display technology is one of AMD’s strengths. Back in 2009, the company caught its competition flat-footed with Eyefinity, which supported three independent outputs from the Radeon HD 5870 and as many as six from a special Eyefinity 6 Edition of the card. This was a pivotal moment in my career as a writer who loved to play games. Previously, I used a Quadro NVS card in my personal workstation to drive a three-screen array, while a second system handled 3D. The Radeon HD 5870 let me combine those equally important functions in one machine.

More recently, AMD was on the receiving end of an offensive because its drivers do not yield a favorable experience at 3840x2160 (frame pacing isn't supported yet in Eyefinity). But although you probably wouldn’t want to game on one or more of the company’s graphics cards at 4K resolutions, configuring a tiled display was originally easier on AMD’s hardware than Nvidia’s. Of course, Nvidia has since incorporated DisplayID 1.3 support, which automatically creates a Surround array, making the setup process that much smoother. AMD's latest software likewise streamlines usage with existing tiled panels.

Projected adoption of Ultra HD

The point is that AMD does support 4K TVs (30 and 24 Hz) over HDMI and DisplayPort, and tiled monitors (60 Hz) using DisplayPort with its existing GPUs. Tiled monitors are not supported through two HDMI ports, which is how we tested Nvidia’s cards in Gaming At 3840x2160: Is Your PC Ready For A 4K Display? Frankly, that’s fine. I'd much rather connect one DisplayPort cable in MST mode anyway. We only benchmarked through the HDMI interface to facilitate video capture for our FCAT analysis tools.

Additionally, the R9 and R7 boards make it possible to connect matching monitors to any three outputs. Previously, at least one display needed to hook up via DisplayPort. If you can get your hands on an MST hub, you can even enable five- and six-screen configurations using AMD’s reference cards. The issue remains availability; the only solution comes from Club3D, and you can’t buy it anywhere in the U.S.

Single-stream 4K at 60 Hz requires at least 600 MHz pixel rates

We have confirmation that forthcoming AMD cards will support single-stream, non-tiled 4K displays as they become available and get validated, likely early in 2014. It remains to be seen whether the R9- and R7-series cards being tested today can claim the same thing. The display controller’s frequency, memory arbitration, and latency all play a role in driving that many pixels.

There’s actually quite a bit more to cover on the display technology side. But because AMD hasn’t pulled the veil off of its R9 290 and 290X cards yet, we have to hold off on that discussion. More soon, though.

7. Test Setup And Software

Test Hardware And Software

Test Hardware
Processors
Intel Core i7-4960X (Ivy Bridge-E) 3.6 GHz Base Clock Rate, Overclocked to 4.3 GHz, LGA 2011, 15 MB Shared L3, Hyper-Threading enabled, Power-savings enabled
Motherboard
ASRock X79 Extreme6 (LGA 2011) X79 Express Chipset, BIOS 2.50
Memory
G.Skill 32 GB (8 x 4 GB) DDR3-2133, F3-17000CL9Q-16GBXM x2 @ 9-11-10-28 and 1.65 V
Hard Drive
Samsung 840 Pro SSD 256 GB SATA 6Gb/s
Graphics
AMD Radeon R9 280X 3 GB

AMD Radeon R9 270X 2 GB

AMD Radeon R7 260X 2 GB

AMD Radeon HD 7970 GHz Edition 3 GB

AMD Radeon HD 7870 GHz Edition 2 GB

AMD Radeon HD 7790 2 GB

Nvidia GeForce GTX 760 2 GB

Nvidia GeForce GTX 660 2 GB

Nvidia GeForce GTX 650 Ti Boost 2 GB

Nvidia GeForce GTX 650 Ti 2 GB
Power Supply
Corsair AX860i 860 W
System Software And Drivers
Operating System
Windows 8 Professional 64-bit
DirectX
DirectX 11
Graphics Driver
AMD Catalyst 13.11 Beta 1 (All AMD cards)

Nvidia GeForce 331.40 Beta (All Nvidia cards)
Benchmarks And Settings
Battlefield 3
Ultra Quality Preset, v-sync off, 1920x1080, 2560x1440, DirectX 11, Going Hunting, 90-Second playback, Fraps
Arma 3
Very High Detail Preset, DirectX 11, 2x FSAA, v-sync off, 1920x1080, 2560x1440, Infantry Showcase, 30-Second playback, Fraps
Grid 2
Ultra Quality Preset, 4x MSAA, v-sync off, 1920x1080, 2560x1440, Built-In Benchmark, Fraps
The Elder Scrolls V: Skyrim
Ultra Quality Preset, FXAA Disabled, 1920x1080, 2560x1440, Custom Run-Through, 25-Second playback, Fraps
BioShock Infinite
Ultra Quality Settings with Diffusion Depth of Field, DirectX 11, 1920x1080, 2560x1440, Built-in Benchmark Sequence, 75-Second playback, Fraps
Crysis 3
High System Spec, SMAA Low (1x), High Texture Resolution, 1920x1080, 2560x1440, Custom Run-Through, 60-Second Sequence, Fraps
Tomb Raider
Ultra Quality Preset, FXAA Enabled, 16x Anisotropic Filtering, TressFX Hair, 1920x1080, 2560x1440, Custom Run-Through, 45-Second playback, Fraps
8. Results: Arma 3

For each game we’re testing, we need to evaluate three different products. First up is AMD’s “new” R9 280X. As expected, it’s slower than the Radeon HD 7970 GHz Edition, though just slightly. Nvidia’s closest-priced alternative, GeForce GTX 760, sells for $50 less, but even gets beaten by the R9 270X in Arma 3. The cheapest 7970 GHz Edition card (as of this writing) sells for $330, so at $30 less, the R9 280X is a good example of AMD’s Tahiti GPU made more attractive.

Stepping down one product category means giving up playable performance at 2560x1440 (at least using Very High quality settings). Nevertheless, AMD’s R9 270X has little trouble outpacing GeForce GTX 760 and the Radeon HD 7870. AMD scores a value win, without question. But with 7870s going for as little as $170, spending $30 more on an R9 270X is a step in the wrong direction, price-wise, for the same Pitcairn/Curacao GPU.

Arma is a great-looking title, and its Very High detail setting is pretty taxing. An average frame rate in the 30s might not be satisfactory at 1920x1080, compelling you to scale back on eye candy (a shame, really). R7 260X won’t change your experience compared to the Radeon HD 7790. The thing is, most 7790s are 1 GB cards. The 2 GB Gigabyte model we bought sells for the same $140 AMD plans to charge for its R7 260X. So, for the same price, you’re getting a slight overclock and TrueAudio turned on. The good news for AMD is that, even after a price drop on Nvidia’s GeForce GTX 650 Ti Boost, its Bonaire-based boards are still a better value. Our Best Graphics Cards For The Money column concurs. 

An analysis of frame rate over time at 1920x1080 and 2560x1440 breaks our 10 comparison boards into three distinct groups. Up top, the Tahiti-based offerings appear uncontested by the GeForce GTX 760, which instead competes against $200 Pitcairn-based cards.

Arma is taxing enough that, at 2560x1440, you’re probably going to want a Tahiti-class card. Otherwise, you’re going to spend a fair amount of time under 30 FPS.

In single-GPU configurations, all of these solutions demonstrate low frame time variance. For more on what this measurement includes and how we generate it, check out this page.
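Our full methodology lives on the page linked above. As a rough illustration only (an assumption on my part, not the exact pipeline behind these charts), frame time variance can be sketched as the 95th percentile of the differences between consecutive frame times in a Fraps-style log:

```python
# Illustrative sketch only: one common way to express frame time variance
# is the 95th percentile of the absolute difference between consecutive
# frame times. This is a simplified stand-in, not our exact pipeline.

def frame_time_variance_95th(frame_times_ms):
    """95th-percentile consecutive frame-time difference, in milliseconds."""
    diffs = sorted(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))
    return diffs[int(0.95 * (len(diffs) - 1))]

# Hypothetical capture: steady ~16.7 ms frames with a single 33.4 ms hitch
sample = [16.7, 16.8, 16.6, 16.7, 33.4, 16.7, 16.8, 16.6, 16.7, 16.7]
print(round(frame_time_variance_95th(sample), 1))  # → 16.7
```

A large value here flags occasional hitches even when the average frame rate looks healthy, which is exactly what these charts are meant to expose.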

9. Results: Battlefield 3

Again, a $30 savings seems to be worthwhile, given the performance sacrifice you make going from Radeon HD 7970 GHz Edition to R9 280X. Then again, with vanilla 7970s already selling for $300 online, you didn’t exactly need to wait for a re-brand for access to Tahiti at a great price.

Although AMD fails to impress relative to its prior-generation products, it fares better against GeForce GTX 760. The R9 280X costs $50 more, but is probably what you’d want in order to play Battlefield 3 smoothly at 2560x1440 using the Ultra detail preset. Really, the GeForce card is more of a match-up to AMD’s older Radeon HD 7950. But that model is going to disappear, and Nvidia doesn’t have anything else short of $400 to go up against the R9 280X at $300.

I’m not sure it’s worth spending an extra $20 for an R9 270X when the Radeon HD 7870 comes as close as it does. But you’ll probably want to choose the 7870 over Nvidia’s GeForce GTX 660, even at the adjusted $180 price point.

R7 260X’s advantage over the Radeon HD 7790 is marginal. Its lead over the GeForce GTX 650 Ti is similarly small. But the GeForce GTX 650 Ti Boost is quite a bit faster and only $10 pricier. That’s a smarter play for Battlefield 3 using the Ultra detail preset at 1920x1080.

Charting frame rate over time is good for monitoring dips into unplayable territory. At 2560x1440, you can see the R9 270X does keep minimum performance a bit smoother than the Radeon HD 7870 in a handful of passages. That could help make a case for spending the extra $30 on AMD’s higher-clocked board (or for simply overclocking your 7870).

There’s little of interest to report from our frame time variance charts. In single-card configurations, each of these solutions behaves itself.

10. Results: BioShock Infinite

It would have been easy to recommend AMD’s Radeon HD 7870 over the GeForce GTX 660 when it sold for $200. But a recent drop to $180 balances that price range, making it hard to declare a winner. What we do know is that, for another $20, the R9 270X doesn’t really add anything compelling to the story in BioShock Infinite.

In contrast, it’d be hard to not spend an extra $10 on a GeForce GTX 650 Ti Boost, given its advantage over the R7 260X, GeForce GTX 650 Ti, and Radeon HD 7790.

At the top end, Radeon HD 7970 GHz Edition and R9 280X serve up the best experiences at 2560x1440. To get any more, you’d have to jump all the way up to $400 for a GeForce GTX 770. This big hole in Nvidia’s line-up makes AMD’s new R9 280X the entry point for gamers looking to play demanding first-person titles at 2560x1440 using high detail levels.

The two Tahiti-based cards keep their noses above 50 FPS at 1920x1080, while Nvidia’s GeForce GTX 760 drops to the mid-40s. That’s still fast enough if you’re playing on an FHD display. But the cheaper Pitcairn-based cards and GeForce GTX 650 Ti Boost are quick enough to keep up at 1920x1080.

Gaming at 2560x1440 is a more taxing test of each GPU’s potential. AMD’s Radeon HD 7970 GHz Edition and R9 280X maintain at least 35 FPS. Meanwhile, Nvidia’s GeForce GTX 760 spends a lot more of its time under 40 FPS, dipping closer to 30 on one occasion.

Frame time variance is a little higher in BioShock Infinite, but even at 2560x1440, the worst-case numbers don’t look too bad.

11. Results: Crysis 3

Nvidia's lack of a solution between $250 and $400 gives AMD an opportunity to enable playable frame rates in another demanding title at 2560x1440 with its R9 280X. GeForce GTX 760 dips close to 30 FPS, making it a better solution for cranking the details up even higher at 1920x1080.

You could also get away with a GeForce GTX 660 for $180 at that resolution instead of an R9 270X for $200. However, the smartest play is a Radeon HD 7870 for $180, as long as they're around.

At the lower end of the spectrum, Crysis 3 with the High System Spec preset is a little too demanding for R7 260X, Radeon HD 7790, or GeForce GTX 650 Ti. If this title is important to you, we again find ourselves drawn to the GeForce GTX 650 Ti Boost with 2 GB for an extra $10.

Unlike a lot of our more automated tests, Crysis 3 involves a manual run-through with live action that varies each time. As such, there’s a little more variance between benchmarks. This is most obvious from those dips you see at the very end of our 1920x1080 chart. There’s one area in the map we test that hammers frame rates. It clearly affects the R9 280X’s performance, but not the Radeon HD 7970 GHz Edition. Also not affected is the Radeon HD 7870, while the R9 270X slides down to about 30 FPS before popping right back up.

Throughout the run at 2560x1440, AMD’s fastest Tahiti-based card manages an almost-10 FPS lead over GeForce GTX 760. The Nvidia board is still plenty playable, but I’ll maintain that R9 280X is a good entry point for this resolution using the settings we’ve chosen.

Getting hammered by big performance dips at the end of our run really affects the worst-case frame time variance numbers reported by GeForce GTX 760 and R9 280X at 1920x1080. In reality, those shouldn’t worry you—the slow-down isn’t perceptible as we play through the test.

12. Results: Grid 2

Bigger numbers in Grid 2 mean that even mid-range cards serve up playable performance—so long as you match them up to high-end platforms with plenty of memory bandwidth. In this case, an overclocked Core i7-4960X and four channels of DDR3-1866 memory are what carry the Radeon HD 7870 and R9 270X to almost 50 FPS average rates at 2560x1440.

Tahiti justifies its price premium over the GeForce GTX 760’s GK104 at 2560x1440. The highest-end Nvidia card we’re testing, which sells for $250, barely slides in ahead of the Pitcairn-based boards. Again, Radeon HD 7870 for $180 looks like a pretty sweet deal for as long as it’s around, right?

At the bottom end, R7 260X comes in just ahead of the Radeon HD 7790, which matches its price. The GeForce GTX 650 Ti Boost, selling for $10 extra, does nothing extra for performance at 2560x1440. And its advantage at 1920x1080 isn’t significant enough to change the gaming experience.

Although performance through our Grid 2 benchmark run jumps up and down, creating fairly busy lines, we still see three clumps of cards. Unfortunately for Nvidia, its GeForce GTX 760 is part of the second clump where AMD’s cheaper Pitcairn-based cards show up.

Frame time variance is very low in Grid 2, even when we look at the worst-case 95th percentile numbers.

13. Results: The Elder Scrolls V: Skyrim

Skyrim tends to be more platform-bound than most of our other benchmarks, so an overclocked Ivy Bridge-E-based configuration with lots of fast memory lets these cards perform to their peak potential using the Ultra detail preset.

The thing is, this game just doesn’t tax graphics hardware very much. You’ll still find it playable at 2560x1440, even on a Bonaire-powered Radeon HD 7790 or R7 260X. Most notable, perhaps, is that a $200 R9 270X trades blows with a $250 GeForce GTX 760.

Smooth frame-rate-over-time line graphs demonstrate an entire field of playable performance at 1920x1080, and mostly ample numbers at 2560x1440 using the game’s Ultra quality preset.

The GeForce cards experience higher frame time variance, on average. At 1920x1080, only the 650 Ti’s worst-case result is something you’d likely notice. At 2560x1440, however, the numbers using Nvidia’s latest beta drivers aren’t as good. Again, it’s the GeForce GTX 650 Ti that demonstrates the least-favorable behavior.

14. Results: Tomb Raider

Radeon HD 7970 GHz Edition and R9 280X are in a league of their own through our manually-run Tomb Raider benchmark, never dropping below 50 FPS at 2560x1440. Meanwhile, the R9 270X slides in ahead of Nvidia’s GeForce GTX 760. The real winner is AMD’s Radeon HD 7870, though, which does battle in the same space but costs $20 less than the new R9.

AMD’s R7 260X is a little bit quicker than the Radeon HD 7790 it replaces, but not so much so that it’d warrant a higher price. The same holds true for Nvidia’s GeForce GTX 650 Ti Boost. Unfortunately, that card does cost more. The trio is sufficient for 1920x1080, but higher resolutions are too demanding.

To reiterate a recurring theme, the Radeon HD 7970 and R9 280X are great entry points for gaming at 2560x1440. The R9 270X, GeForce GTX 760, and Radeon HD 7870 certainly suffice as well, but they come a lot closer to marginal. Moreover, to get something faster than the $300 R9 280X, you’d have to spring for a $400 GeForce GTX 770.

Tiny frame time variances indicate a smooth experience across the board. Only Nvidia’s GeForce GTX 650 Ti runs into a bit of trouble at 2560x1440—a resolution too taxing for the frame rates to be playable anyway.

15. CAD: AutoCAD 2013 And Inventor 2013

AutoCAD 2013

2D Performance

The differences between cards are marginal. At this point, as long as you’re dealing with 2D output, the type of card you’re using doesn’t really matter. From workstation- to gaming-class hardware, all of these cards are sufficient.

3D Performance

There are clearer reasons to favor one board over another once you isolate 3D performance. Because AutoCAD’s 3D mode leans on DirectX, desktop-oriented cards benefit. However, the new Radeon cards aren’t stunners by any stretch; they achieve roughly the same results as their genetic predecessors.

Autodesk Inventor 2013

Inventor also employs DirectX, which means that even gaming boards stand a chance. Interestingly, the Radeon cards fare better than they did in AutoCAD.

16. OpenGL: Maya 2013 And LightWave

Maya 2013

Two scenes that don’t employ the new Viewport 2.0 with DirectX support illustrate the disadvantages of consumer cards compared to workstation hardware as soon as OpenGL comes into play. The FirePro and Quadro drivers are simply better-optimized. But especially in the second scene, you also see that a GeForce GTX 580 does well, so long as the workload is right for it.

LightWave

LightWave wrecks consumer cards as well. You can certainly use the R9 and R7 cards to mess with mid-sized models that aren’t too complex, but a workstation card is clearly a better option.

17. OpenCL: Bitcoin Mining, LuxMark, And RatGPU

Bitcoin Mining

Based on All About Bitcoin Mining: Road To Riches Or Fool's Gold?, we know that specialized hardware is much better for mining Bitcoins than graphics cards. But the practice is still worth benchmarking.

AMD leads the way, as it has in the past, though the differences between this generation and last are very small.
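
For context on what this benchmark actually exercises: mining boils down to brute-forcing enormous numbers of double-SHA-256 hashes until one falls below a difficulty target, which is why GPU shader throughput matters. Here’s a toy CPU-only Python sketch of that search loop; the header payload, nonce width, and easy target are all illustrative stand-ins, not Bitcoin’s real 80-byte header format.

```python
# Toy illustration of the proof-of-work search a miner performs:
# double SHA-256 over a header plus nonce, compared against a target.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
    """Search for a nonce whose double-SHA-256 digest falls below the target."""
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "little") < target:
            return nonce, digest.hex()
    return None

# A deliberately easy target so the toy search succeeds quickly.
result = mine(b"toy-block-header", target=1 << 248)
print(result)
```

Real difficulty targets are astronomically smaller, which is why dedicated ASICs have left graphics cards behind for this workload.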

LuxMark

For the past two years, we’ve watched AMD dominate compute-oriented workloads. It does particularly well in the OpenCL-accelerated LuxMark benchmark, based on the LuxRender rendering system. Nvidia’s Kepler architecture isn't as inspiring for this type of task.

RatGPU

ratGPU is a ray-tracing renderer accelerated by OpenCL and available for 3ds Max and Maya. Remarkably, the older Radeon HD 6970 dominates the rest of the line-up, while the new Radeons must be content with a mid-pack finish.

18. Power Consumption

Unfortunately, AMD didn’t send reference R9 280X cards to either our U.S. or German offices. Instead, we have a number of board partner designs running at different clock rates and with varying cooling solutions. We’re getting as close as possible to the reference specs, but be advised that the power consumption values might not match the model coming straight from AMD.

Here’s the thing, though: when you compare the new cards’ power consumption to their predecessors, you notice that there’s hardly any difference at all. Some of the newer boards technically enjoy higher peak clock rates, though this contributes little to the power story, as our test cases don’t allow for sustained operation at those frequencies.

Let's take a closer look at this situation and the resulting power consumption of these graphics cards:

19. Clock Rate And Temperature

The same challenges that faced us in trying to create a reference R9 280X for our power consumption testing also affect our frequency monitoring and thermal readings. Again, our U.S. and German offices received different partner boards with a variety of configurations. AMD, for some reason, placed an embargo on those third-party products that expires later, so we’re setting aside our 280X measurements for a future round-up instead.

What we want to illustrate, though, are the effectively-achievable clock rates under load:

While the R9 280X that we adjusted to behave like a reference model can sustain its higher frequencies in real-world gaming conditions, the R7 260X and R9 270X behave differently. We got the R7 260X to hold an almost-constant 1100 MHz by using PowerTune’s +20 setting, but the same adjustment did little for the R9 270X. Tuning that card’s settings lets it hit its peak clock rate more often, but not often enough to call the ceiling a usable increase. This is why we’re using the R7 260X, with both of its PowerTune setups, for our thermal measurements.

Temperatures Under Realistic Load

We took our thermal readings in a closed-up Corsair Obsidian 900D with its case fans spinning slowly and an ambient temperature of 22 degrees Celsius.

The R9 270X’s temperature rises and then levels off at around 80 degrees. In contrast, the R7 260X hits a peak value (with and without our PowerTune adjustment) and then backs off before stabilizing. These differences are the result of an evolved fan controller, which we’ll look at on the next page. Both reference cards seem to be designed for a maximum temperature of 80 degrees Celsius, though. By raising the PowerTune value, we’re able to push that boundary up.

20. Fan Speed And Noise

Fan Speed

The R9 280X gets left out again, since we want to focus on the reference design, rather than partner boards that all perform differently.

Since fan speed is one of the primary determinants of noise level, this chart is worth a close examination.

The temperature curves on the previous page suggested what we’d see here. The R9 270X ramps up fan speed in a more granular way, smoothing out the thermal plateau. The R7 260X tries to hold a lower fan speed for longer before stepping up suddenly. That’s where you saw temperatures peak before dropping back down to the 80-degree range.
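
The two behaviors described above can be sketched as two simple control strategies: a proportional ramp that scales fan speed smoothly with temperature, versus holding a quiet speed until a threshold and then stepping up. This is a hypothetical illustration of the concept, not AMD’s actual fan controller logic; all RPM figures and thresholds are made up for the example.

```python
# Hypothetical sketch of the two fan strategies: a gradual, proportional
# ramp versus a quiet hold that jumps once a temperature threshold is hit.

def proportional_fan(temp_c, base_rpm=1200, ramp_start_c=60, gain=60):
    """Ramp fan speed smoothly once the temperature passes ramp_start_c."""
    return base_rpm + max(0, temp_c - ramp_start_c) * gain

def stepped_fan(temp_c, low_rpm=1200, high_rpm=2600, threshold_c=80):
    """Hold a quiet speed below the threshold, then step up suddenly."""
    return low_rpm if temp_c < threshold_c else high_rpm

for t in (65, 75, 82):
    print(t, proportional_fan(t), stepped_fan(t))
```

The stepped strategy explains the temperature overshoot we measured: the GPU briefly climbs past its target before the fan jump pulls it back toward 80 degrees.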

Noise Level

AMD tends to put its emphasis on partner boards, which it has embargoed for another couple of days. We still wanted to generate some noise data with the reference cards, though.

They both idle under 32 dB(A), and are practically inaudible. Under the load of our custom gaming loop, the R7 260X registers a modest 44.3 dB(A), while the R9 270X is notably louder at 47.3 dB(A). We wouldn’t recommend the cooling solution on either reference card. But again, partner boards typically have their own heat sinks and fans that need to be evaluated independently.
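
That 3 dB(A) gap is larger than it looks on paper. By the standard decibel relationship, sound power scales as 10^(Δ/10), so the R9 270X is radiating roughly twice the sound power of the R7 260X under load. A quick sanity check (this applies the physical power ratio, not a psychoacoustic loudness model):

```python
# Convert a dB(A) difference into a sound power ratio: 10 ** (delta / 10).

def power_ratio(db_a, db_b):
    """Sound power of source B relative to source A, from their dB levels."""
    return 10 ** ((db_b - db_a) / 10)

print(round(power_ratio(44.3, 47.3), 2))  # ~2.0: roughly twice the sound power
```

Perceived loudness is subjective, but a doubling of sound power is generally the point at which a difference becomes clearly noticeable.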

21. Old GPUs Ride Again, But That’s Not A Bad Thing

“What's in a name? That which we call a rose
By any other name would smell as sweet;”

AMD’s GCN architecture debuted almost two years ago in the Tahiti GPU. I’ve mentioned this before, but I paid $550 each for two Radeon HD 7970s to do my own CrossFire testing. And Tahiti lives on today at the heart of the R9 280X, selling for $300. The Pitcairn GPU followed a few months later, in March of 2012; it gets reused today as well in the R9 270X. Bonaire is a much more recent development, emerging in March of this year as the Radeon HD 7790 and again today as the R7 260X. It’s the only card currently available with AMD’s TrueAudio technology, and the only board in today’s round-up that employs a more sophisticated version of PowerTune, able to switch voltage and clock rate quickly enough for finer-grained response to environmental variables.

Unfortunately, when you don’t have higher-performing products to talk about, your only option is to beat up the pricing to attract new buyers. That’s been going on for a while now, and it’s really the reason why R9- and R7-series cards don’t look like great deals compared to the boards they replace. The good news is that AMD’s story doesn’t end today. We still have R9 290 and 290X cards to look forward to, and those are based on new silicon.

There’s another positive in all of this for AMD: in the process of hacking away at its flagship’s price tag, the company pushed Tahiti into a price band Nvidia doesn’t service. True, the R9 280X is slower than the Radeon HD 7970 GHz Edition. But the card is also about $30 cheaper. At $300, there’s a $50 premium over the GeForce GTX 760, but that board doesn't handle QHD resolutions as well with detail settings cranked up. If I were building a PC to game on a 2560x1440 display and wanted to get in the door as inexpensively as possible without sacrificing graphics quality, the 280X would be my card. That value is why I’ll hand the Tahiti-based board our Smart Buy award. There’s certainly something to be said for revisiting a GPU when it's selling for $200 less than the last time you reviewed it.

I’m not as impressed with the other two cards. R9 270X is a slightly faster Radeon HD 7870 priced as much as $30 more than its predecessor. We know those 7870s won’t last much longer, but for the time being, who’s going to want the 270X at its introductory price? I’m sure Don will let everyone know in his monthly column when the good deals on prior-gen products run out. Until then, the R9 270X is a “meh”, even if it has no trouble blowing past Nvidia's GeForce GTX 660. I'll take a 7870, thank-you-very-much.

The same goes for the R7 260X, except this time the pressure is external. AMD is introducing the 260X at the same price as existing 2 GB Radeon HD 7790 boards, which are a bit slower. However, right before launch, Nvidia announced a cut on its 2 GB GeForce GTX 650 Ti Boost down to $150—a $10 premium. For that $10, you get a card that's notably faster in a number of benchmarks. Although AMD now offers its TrueAudio technology on the R7 260X, developers haven’t done enough with the feature to make it something we can test. Until then, Nvidia’s card has the performance advantage.

The R9 290 and 290X will feature TrueAudio, too. Will either of those cards have what it takes to do battle with GeForce GTX Titan or 780? Time will tell, and those are the boards we're most looking forward to.