Time flies. We published AMD Radeon HD 7970: Promising Performance, Paper-Launched almost a year and a half ago. The graphics card market was quite a bit different back then. AMD’s virginal Graphics Core Next architecture made its debut against Nvidia’s Fermi-based GeForce GTX 580, absolutely blowing the company’s own Radeon HD 6970 out of the water in the process.
And a dual-GPU card, based on two of the 7970’s Tahiti GPUs, was rumored to be right around the corner. We waited. And we waited.
Of course, in the 12 months that followed, no official Radeon HD 7990 surfaced. Rather, board partners tentatively dipped their toes into that high-end space. PowerColor got out ahead of the rest with a dual-Tahiti offering that consumed three expansion slots, required three eight-pin auxiliary power connectors, and screamed like a banshee any time we applied a load to it. HIS followed suit, giving us exclusive access to a couple of prototypes before withdrawing its plans to ship a dual-GPU solution altogether. Finally, Asus threw its hat into the ring with a liquid-cooled card of its own, as obscenely priced and limited as it was. We looked at all of them in Asus' ROG Ares II: Four Dual-GPU Graphics Cards, Compared, eventually coming to the conclusion that Nvidia’s GeForce GTX 690, while a bit slower in our benchmarks, made more sense than any of the Radeons.

Challenge accepted, AMD said. Today we have a real, actual Radeon HD 7990, straight from the company’s own product team. It’s a dual-slot card. It only requires two eight-pin power connectors. And—brace yourself—its fans spin quietly. That’s not to say the 7990 is silent, but more on that later.
Hold Nothing Back In The Name Of Performance
Stripped down to its bare PCB, the Radeon HD 7990 consists of two Tahiti GPUs, each surrounded by 3 GB of GDDR5 memory and connected by a PLX Technology PEX 8747 switch.

The graphics processors are complete—AMD doesn’t disable any of their resources, so each brings 2,048 Stream processors to the table, along with 128 texture units, 32 ROPs, and a 384-bit memory bus. The company even sets the GPUs to operate at 950 MHz, with a 1 GHz boost state. That’s a little faster than the vanilla Radeon HD 7970, and a bit slower than the later GHz Edition version, which starts at 1 GHz and accelerates to 1.05 GHz.
The 3 GB of GDDR5 memory attached to each GPU runs at 1.5 GHz, just like AMD’s Radeon HD 7970 GHz Edition (in comparison, the original 7970 launched with a 1,375 MHz memory clock), delivering up to 288 GB/s per GPU.
Nestled between the two 4.3 billion-transistor chips is that PEX 8747 switch—the same one Nvidia uses to enable inter-GPU communication on the GeForce GTX 690. The 48-lane, five-port device is manufactured at 40 nm and is PCI Express 3.0-capable. So, it attaches to each GPU through a 16-lane link, and then to the host interface with an additional 16 lanes.

All of that hardware is used to enable up to five simultaneous display outputs, one of which comes from dual-link DVI and four of which get exposed through mini-DisplayPort connectors. In comparison, Nvidia’s GeForce GTX 690 can only do four monitors in a three-plus-one configuration. Five screens in a 5 x 1 arrangement make far more sense to productivity-oriented enthusiasts.
At least based on its raw specifications, the Radeon HD 7990 is technically closer to two 7970 GHz Editions in CrossFire than GeForce GTX 690 is to two 680s in SLI. And given the massive performance boosts we’ve seen from AMD’s driver team over the last year, the paper promise is a compelling advantage that should make this the fastest dual-slot graphics card in existence. Now, what about the rest of the board’s vitals?
| Radeon HD 7990 | Radeon HD 7970 GHz Ed. | GeForce GTX Titan | GeForce GTX 690 | GeForce GTX 680 | |
|---|---|---|---|---|---|
| Shaders | 2 x 2,048 | 2,048 | 2,688 | 2 x 1,536 | 1,536 |
| Texture Units | 2 x 128 | 128 | 224 | 2 x 128 | 128 |
| Full Color ROPs | 2 x 32 | 32 | 48 | 2 x 32 | 32 |
| Graphics Clock | 950 MHz | 1,000 MHz | 836 MHz | 915 MHz | 1,006 MHz |
| Texture Fillrate | 2 x 121.6 Gtex/s | 128 Gtex/s | 187.5 Gtex/s | 2 x 117.1 Gtex/s | 128.8 Gtex/s |
| Memory Clock | 1,500 MHz | 1,500 MHz | 1,502 MHz | 1,502 MHz | 1,502 MHz |
| Memory Bus | 2 x 384-bit | 384-bit | 384-bit | 2 x 256-bit | 256-bit |
| Memory Bandwidth | 2 x 288 GB/s | 288 GB/s | 288.4 GB/s | 2 x 192.3 GB/s | 192.3 GB/s |
| Graphics RAM | 2 x 3 GB GDDR5 | 3 GB GDDR5 | 6 GB GDDR5 | 2 x 2 GB GDDR5 | 2 GB GDDR5 |
| Die Size | 2 x 365 mm2 | 365 mm2 | 551 mm2 | 2 x 294 mm2 | 294 mm2 |
| Transistors (Billion) | 2 x 4.31 | 4.31 | 7.1 | 2 x 3.54 | 3.54 |
| Process Technology | 28 nm | 28 nm | 28 nm | 28 nm | 28 nm |
| Power Connectors | 2 x 8-pin | 1 x 8-pin, 1 x 6-pin | 1 x 8-pin, 1 x 6-pin | 2 x 8-pin | 2 x 6-pin |
| Maximum Power | 375 W | 250 W | 250 W | 300 W | 195 W |
| Price (Street) | $1,000 | $430 | $1,000 | $1,000 | $460 |
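As a sanity check on the table, the per-GPU throughput figures fall straight out of the clocks and unit counts. A quick Python sketch of the arithmetic, using the 7990’s 950 MHz base clock (GDDR5 moves four bits per pin per command clock):

```python
# Derive the Radeon HD 7990's per-GPU throughput figures from its specs.
core_clock_hz = 950e6    # base graphics clock
texture_units = 128
mem_clock_hz = 1500e6    # GDDR5 command clock
bus_width_bits = 384     # per-GPU memory bus

# Texture fillrate: one texel per texture unit per clock.
fillrate_gtex = core_clock_hz * texture_units / 1e9
print(f"{fillrate_gtex:.1f} Gtex/s")  # 121.6 Gtex/s

# GDDR5 transfers four bits per pin per command clock (quad data rate).
bandwidth_gbs = mem_clock_hz * 4 * (bus_width_bits / 8) / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")    # 288.0 GB/s
```

The GeForce GTX 690’s 2 x 192.3 GB/s entry follows the same math with a 1,502 MHz memory clock and a 256-bit bus.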
Cooling: A More Elegant Solution
AMD buttons the Radeon HD 7990 up into a dual-slot card that only needs two eight-pin auxiliary power connectors to drive it. With that said, the card jams right up against the PCI-SIG’s electromechanical specification for one x16 slot and two eight-pin connectors: 375 W.

Even so, it’s impressive that AMD created a two-slot flagship after we’d seen nothing but bulkier solutions from PowerColor, HIS, and Asus. The company achieved this by screening its Tahiti GPUs and setting aside the best ASICs before they shipped. For a while now, it’s been reserving the top few percent of the lowest-power, highest-frequency parts, building up an inventory specifically for this launch. And while AMD’s partners are locked to certain voltage levels for their Tahiti-based products (which inherently affects power), AMD was able to drop the voltage on these hand-picked chips and still get up to 1 GHz out of them within a 375 W power limit.
That’s not to say it’s easy to keep the Radeon HD 7990 cool (or even quiet, for that matter). Instead of the one axial fan Nvidia uses on its GeForce GTX 690, or the centrifugal blower on the Radeon HD 6990, AMD employs three axial fans on the 7990. Their blades aren’t particularly thick, and the fans themselves aren’t reinforced for stability (you can rock them side-to-side with your fingers). But they spin slowly enough under typical gaming loads that they address the one thing that bothered me most about the Radeon HD 6990: fan noise. In games, the 7990’s fans are nearly imperceptible.

There is a price to be paid for this sort of design, though. Three fans, side-by-side, are only effective if they aren’t doing battle with each other. And that means channeling air vertically, rather than horizontally. At the end of the day, then, the slotted rear I/O panel that typically helps exhaust hot air from most graphics cards is almost non-functional. Instead, waste heat from both GPUs is jettisoned out the top of the card, right into your case. Consequently, you’ll want to be careful picking the right chassis for the Radeon HD 7990. AMD currently recommends two models: Antec’s Eleven Hundred with two 120 mm side-panel fans, and Cooler Master’s HAF-X, also with side-panel cooling. Enthusiasts who go a different route need to build with airflow in mind. Almost assuredly, small form-factor isn’t a viable option.

Physically, the Radeon HD 7990 measures the same 12 inches long as the Radeon HD 6990. That’s an inch longer than the GeForce GTX 690. Fortunately, both of its eight-pin power connectors are up on top of the board, so you don’t have to worry about leads extending another inch or two behind the already-long add-in card. There’s also a metal plate on the back of the PCB. Given that the first fan’s blades protrude a tad above the plastic shroud, you won’t want to put two 7990s right next to each other in a quad-CrossFire configuration.
Wait, What’s That Hum?
The Radeon HD 7990’s cooling fans spin quietly—something I was glad to see AMD address. But another acoustic issue nagged at me. Previously, PowerColor sent in its AX7990 6GBD5-A2DHJ Devil13 for us to look at, and I was surprised at just how much noise the card’s inductors generated—I couldn’t believe an engineer would kick something like that out the door and expect someone to pay a grand for it.
To a lesser degree, the Radeon HD 7990 runs into something similar. AMD explained it to me as an artifact of oscillation between heavy and light workloads, where current draw spikes and dips, causing ceramic capacitors and the PCB itself to vibrate. The volume and tone of this phenomenon vary according to the task you’re performing, but it was noticeable enough during our real-world game testing with Bakersfield-based volunteers that several asked me to explain what was happening.
The solution is to turn on v-sync, capping the frame rate and preventing those highly variable loads. I don’t think it’s particularly ideal to have to use v-sync, but there it is. Igor in our German office created some video and performed frequency analysis that you’ll be looking at shortly. Decide for yourself if this is a deal-breaker.
| Test Hardware | |
|---|---|
| Processors | Intel Core i7-3770K (Ivy Bridge) 3.5 GHz, overclocked to 4.0 GHz (40 * 100 MHz), LGA 1155, 8 MB Shared L3, Hyper-Threading enabled, Power-savings enabled |
| Motherboard | Gigabyte Z77X-UD5H (LGA 1155) Z77 Express Chipset, BIOS F15q |
| Memory | G.Skill 16 GB (4 x 4 GB) DDR3-1600, F3-12800CL9Q2-32GBZL @ 9-9-9-24 and 1.5 V |
| Hard Drive | Crucial m4 SSD 256 GB SATA 6Gb/s |
| Graphics | AMD Radeon HD 7990 6 GB |
| AMD Radeon HD 7970 GHz Edition 3 GB | |
| Nvidia GeForce GTX 690 4 GB | |
| Nvidia GeForce GTX 680 2 GB | |
| Nvidia GeForce GTX Titan 6 GB | |
| Power Supply | Cooler Master UCP-1000 W |
| System Software And Drivers | |
| Operating System | Windows 8 Professional 64-bit |
| DirectX | DirectX 11 |
| Graphics Driver | AMD Catalyst 13.5 (Beta 2) |
| Nvidia GeForce Release 320.00 | |
| AMD Catalyst Frame_Pacing_Prototype v2 For Radeon HD 7990 | |
Our work with Nvidia’s Frame Capture Analysis Tools last month yielded interesting information, and it continues to shape the way we plan to test multi-GPU configurations moving forward. Because it’s such a departure from the Fraps-based benchmarking we’ve done in the past, though, today’s review includes more than just FCAT-generated data. We’re also bringing a handful of gamers to our SoCal lab to go hands-on with Radeon HD 7990 and GeForce GTX 690 in eight different titles. What we’re hoping to achieve is unprecedentedly comprehensive performance data using FCAT, and then the real-world “reality check” from gaming enthusiasts. We want to know if this new emphasis on latency between successive frames maps to the actual gaming experience.
At the same time, we recognize that the new data we’re generating is far more sophisticated than the simple average frame rates that previously made it easy to pit two graphics cards against each other. Fortunately, we still have average results to report, along with frame rates over time. The newest addition is frame time variance. We’ve heard that this metric isn’t as self-explanatory as we’d hoped, so the following explanation should help clarify.
Why aren’t we simply presenting frame times, as other sites are? Because we feel that raw frame time data includes too many variables for us to draw the right conclusions.
For example, a 40-millisecond frame sounds pretty severe. Is this indicative of stuttery playback? It might be, and it might not. Take the following two scenarios:
First, how would your game look if that 40-ms frame was surrounded on both sides by other frames that took the same amount of time to render? The resulting frame rate would be a very consistent 25 FPS, and you might not notice any stuttering at all. We wouldn’t call that frame rate ideal, but the even pacing would certainly help experientially.
Then consider the same 40-ms frame in a sea of 16.7-ms frames. In this case, the longer frame time would take more than twice as long as the frames before and after it, likely standing out as a stutter artifact of some sort.
Yes, the hypothetical is simplified for our purposes. But the point remains: if you want to call out stuttering in a game, you need more context than raw frame times. You also need to consider the frames around those seemingly high ones. So, we came up with something called frame time variance.
We’re basically looking at each frame and determining whether it’s out of sync with the field of frames before and after it. In the first example, our 40-ms frame surrounded by other 40-ms frames would register a frame time variance of zero. In our second example, the 40-ms frame surrounded by 16.7-ms frames would be reported as a variance of 23.3 ms.
Experimentation with this in the lab continues. But from what we’ve seen, gamers are noticing changes as small as 15 ms. Therefore, this is our baseline. If frame time variance is under 15 ms, a single frame probably won’t cause a perceptible artifact. If the average variance approaches 15 ms, with spikes in excess, it’d be reasonable to expect a gamer to report stuttering issues.
The actual Excel formula we’re using on frame times listed chronologically from top to bottom is as follows:
=ABS(B20-(TRIMMEAN(B2:B38, 0.3))) //The formula describes the frame time variance for the 20th frame in a capture, listed in cell B20.
Breaking this down, the formula averages the frame time values in a window spanning 18 cells before and 18 cells after the targeted frame, excluding 30% of the outliers so that the average isn’t affected by anomalous results. This average frame time is then subtracted from the current frame time, and the result is returned as an absolute (positive) value.
We’re always hoping to see frame time variance of zero. In reality, though, there is always some variation one way or the other. So, we look across the spectrum and report average, 75th, and 95th percentile values.
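To make the method concrete, here’s a minimal Python sketch of the same calculation. The function names are our own, the frame times are in milliseconds, and the trimmed mean is meant to mirror Excel’s TRIMMEAN (dropping the trim fraction evenly from both ends):

```python
import statistics

def trimmed_mean(values, proportion=0.3):
    """Mean after dropping `proportion` of the values, split evenly
    between the lowest and highest ends (mirrors Excel's TRIMMEAN)."""
    k = int(len(values) * proportion / 2)  # values dropped per end
    kept = sorted(values)[k:len(values) - k] if k else list(values)
    return statistics.mean(kept)

def frame_time_variance(frame_times_ms, radius=18):
    """|frame time - trimmed mean of the surrounding 37-frame window|,
    computed for every frame (the window shrinks near the ends)."""
    result = []
    for i, ft in enumerate(frame_times_ms):
        window = frame_times_ms[max(0, i - radius):i + radius + 1]
        result.append(abs(ft - trimmed_mean(window)))
    return result

# One 40-ms frame in a sea of 16.7-ms frames stands out...
spiky = [16.7] * 18 + [40.0] + [16.7] * 18
print(round(max(frame_time_variance(spiky)), 1))  # 23.3, as in the text

# ...while the same frame among other 40-ms frames registers zero.
print(max(frame_time_variance([40.0] * 37)))      # 0.0
```

From a full capture, the average, 75th, and 95th percentile figures we report are then just percentiles of that per-frame list.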
I know—sounds like it gets pretty intense. But you’re going to see some pretty cool details from the nearly 1.5 TB of video we captured from AMD’s Radeon HD 7990, two Radeon HD 7970s in CrossFire, the Nvidia GeForce GTX 690, GeForce GTX Titan, and two GeForce GTX 680s in SLI. All of the testing was done at 2560x1440, and we’re using eight different games to represent each solution’s performance.
| Benchmarks And Settings | |
|---|---|
| Battlefield 3 | Ultra Quality Preset, v-sync off, 2560x1440, DirectX 11, Going Hunting, 90-Second playback, FCAT |
| Far Cry 3 | Ultra Quality Preset, DirectX 11, v-sync off, 2560x1440, Custom Run-Through, 50-Second playback, FCAT |
| Borderlands 2 | Highest-Quality Settings, PhysX Low, 16x Anisotropic Filtering, 2560x1440, Custom Run-Through, FCAT |
| Hitman: Absolution | Ultra Quality Preset, MSAA Off, 2560x1440, Built-In Benchmark Sequence, FCAT |
| The Elder Scrolls V: Skyrim | Ultra Quality Preset, FXAA Enabled, 2560x1440, Custom Run-Through, 25-Second playback, FCAT |
| 3DMark | Fire Strike Benchmark |
| BioShock Infinite | Ultra Quality Settings, DirectX 11, Diffusion Depth of Field, 2560x1440, Built-in Benchmark Sequence, FCAT |
| Crysis 3 | Very High System Spec, MSAA: Low (2x), High Texture Resolution, 2560x1440, Custom Run-Through, 60-Second Sequence, FCAT |
| Tomb Raider | Ultimate Quality Preset, FXAA Enabled, 16x Anisotropic Filtering, TressFX Hair, 2560x1440, Custom Run-Through, 45-Second Sequence, FCAT |
| LuxMark 2.0 | 64-bit Binary, Version 2.0, Sala Scene |
| SiSoftware Sandra 2013 Professional | Sandra Tech Support (Engineer) 2013.SP1, Cryptography, Financial Analysis Performance |
Whenever I jump into my car, I like to let my oil temperature get to 80 degrees before I start flogging the engine. Consider 3DMark today’s warm-up, leading into some very spirited driving. AMD claims supremacy in this synthetic metric, as we’d expect given the card’s outright impressive performance specifications.
The real question is whether a lead in Futuremark’s title bears out in the rest of the benchmark suite once we start factoring out dropped and runt frames, which don’t positively affect gaming, but still would have been counted toward the average frame rate in Fraps.

Talk about precision. The Fire Strike score gives us the exact hierarchy we would have predicted based on each solution’s specs. Indeed, the Radeon HD 7990 claims its first-place finish.
But we know this means little outside of bragging rights. Let’s load up a 256 GB SSD full of eight top gaming titles and start recording 430 MB/s of raw video at 2560x1440 to analyze using the FCAT tool suite.
And by the way, a number of readers have asked for access to the FCAT extractor tool and Perl scripts, eager to dig in and confirm that they’re above-board. If you’d like to get your hands on the tool, just let me know.


I warned you that there’d be a ton of data to process, and I wasn’t lying. Right out of the gate, allow me to distinguish between the Hardware FPS and Practical FPS numbers in the chart below. Hardware FPS is what we call the result that you would have seen previously, had we stuck to Fraps-based testing. Hardware FPS includes dropped frames and runt frames, neither of which contribute positively to your gaming experience. They do, however, get counted by Fraps.
When we use Nvidia’s overlay tool and process the expected color sequence using FCAT, we’re quickly and accurately able to identify when a frame gets rendered but never shows up on-screen (a color gets skipped), or when a runt appears (the expected color appears, but occupies 20 vertical scan lines or fewer, making it imperceptible).
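In simplified terms, that classification boils down to something like the sketch below. The tuple-based input and 16-color palette are our own stand-ins for the per-frame data the real tool chain extracts from captured video:

```python
# Classify frames from a simplified FCAT-style extraction. Each entry is
# (color_index, scanlines_visible); the overlay colors cycle 0..15 in
# render order, so a skipped color means a rendered frame never displayed.
RUNT_SCANLINES = 20  # frames at or below this height count as runts
PALETTE_SIZE = 16    # hypothetical overlay color count

def classify(frames):
    drops = runts = visible = 0
    prev_color = None
    for color, scanlines in frames:
        if prev_color is not None:
            # Every skipped color is a dropped (rendered, never shown) frame.
            drops += (color - prev_color - 1) % PALETTE_SIZE
        if scanlines <= RUNT_SCANLINES:
            runts += 1    # shown too briefly to perceive
        else:
            visible += 1  # contributes to the practical frame rate
        prev_color = color
    return drops, runts, visible

# Alternating full frames and runts, with one color (4) skipped entirely:
capture = [(0, 700), (1, 8), (2, 690), (3, 11), (5, 705)]
print(classify(capture))  # (1, 2, 3): one drop, two runts, three visible
```

In this toy capture, a Fraps-style hardware count would credit all six rendered frames; the practical rate keeps only the three that were fully visible.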

The impact of this distinction massively affects AMD’s standing. Allow me to call out specific results. The Radeon HD 7990 appears to serve up more than 100 FPS in Battlefield 3 using Ultra settings at a 2560x1440 resolution. It looks like it’s bumping into a platform limitation on our Core i7-3770K overclocked to 4 GHz, in fact. But when you play back the 90-second video of our benchmark, you clearly see that each visible frame is succeeded by a small runt that only shows up for a millisecond or two. When all of those are factored out, the average frame rate you actually experience is closer to 56.2—lower even than a GeForce GTX Titan. Two Radeon HD 7970s in CrossFire are subject to the exact same issue, yielding confirmation that this isn’t a product-specific phenomenon, but rather a problem that affects AMD’s technology.
Now, you’ll notice that I have data in there corresponding to a prototype driver. Anticipating our findings, the company shipped us an early build of software it plans to make public in the second half of 2013 with provisions for frame pacing. In essence, the driver adds latency between frames to deliver a more consistent experience, per our hypothetical scenario on the third page of this review. Because the numbers already appeared platform-bound, this doesn’t appear to hurt performance, and it almost completely eliminates the runts encountered under Catalyst 13.5 Beta 2.
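Conceptually, frame pacing holds back frames that arrive too soon, so present-to-present intervals track a smoothed frame time. The toy sketch below is our illustration of the idea, not AMD’s actual driver logic, and it assumes at least two frames of input:

```python
def pace(ready_times_ms, smoothing=0.9):
    """Delay early frames so presentation intervals follow an
    exponentially smoothed frame time (frames never go out before
    they are ready, and late frames go out immediately)."""
    presents = [ready_times_ms[0]]
    target = ready_times_ms[1] - ready_times_ms[0]  # initial interval guess
    for ready in ready_times_ms[1:]:
        target = smoothing * target + (1 - smoothing) * (ready - presents[-1])
        presents.append(max(ready, presents[-1] + target))
    return presents

# A CrossFire-style pattern: frames finish in alternating 30 ms / 2 ms gaps.
raw = [0, 30, 32, 62, 64]
paced = pace(raw)
print([round(b - a, 1) for a, b in zip(paced, paced[1:])])
```

Against the raw 30 ms / 2 ms alternation, the paced intervals all land in the 20–30 ms range—trading a little latency for evenness, which is exactly the effect FCAT measures as fewer runts.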
As a point of comparison, the GeForce GTX 680s in SLI, the GTX 690, and GTX Titan all serve up identical hardware and practical frame rate numbers; frame pacing is already something Nvidia enables, so the incidence of dropped and runt frames is very small.

We’re working on the presentation of this data, I promise. For the time being, though, think of the thin, dotted lines as points of reference. They’re the Hardware FPS numbers—the ones Fraps would have conveyed. The thicker lines are the practical frame rates over time (in this case, a 90-second run).
For the most part, Radeon HD 7990 tracks with two Radeon HD 7970 GHz Edition cards in CrossFire, except for a number of spikes up closer to the Hardware FPS number. One GeForce GTX Titan appears both faster and smoother in comparison.
There’s hope for AMD, though. See the prototype driver’s practical frame rate hovering up alongside two GeForce GTX 680s in SLI?

By pacing out their frames at a largely platform-bound resolution, the three GeForce-based configurations present minimal frame time variation. In comparison, the Radeon cards driven by Catalyst 13.5 Beta 2 demonstrate more variation between frames.
In the past, we would have looked at 95th percentile numbers in the 11 ms range and suggested that the real-world impact of that variation would be minimal. However, after polling gamers who swapped between HD 7990- and GTX 690-powered PCs, all of whom could tell the difference, we’re forced to question what is and is not perceptible latency.
Alright, so you get what’s going on now, right? We have average frame rate, divided between what the hardware is cranking out and what you can actually see on-screen, we have both of those frame rates plotted over time, and we have our unique analysis of frame time variance.

Applying the same methodology to BioShock Infinite, the average frame rates once again land fairly close together, despite a frame rate-over-time chart (below) that demonstrates practical frame rates from under 40 to more than 90 FPS.
Fraps would have shown the Radeon HD 7990 in a narrow first-place finish. However, removing dropped and runt frames yields a practical result that falls under what two GeForce GTX 680s and the GTX 690 achieve. The prototype driver helps a little, but not much.

There’s so much going on with this chart that it’s difficult to analyze. Most stark are the dips encountered by the Radeon HD 7970 GHz Edition cards in CrossFire, which contrast sharply with the two cards’ hardware FPS. When you chart out runts and drops over time, it becomes clear that the 7970s are hammered by the second component of BioShock’s built-in benchmark, which is dominated by runt frames.
The Radeon HD 7990 isn’t subject to nearly as much deviation in hardware and practical frame rate. Two roughly 10-second passages negatively affect the 7990. Otherwise, though, it’s fairly consistent.

Our last puzzle piece puts the Radeon HD 7970s’ behavior into context. Incurring almost twice as much average latency between successive frames, two cards in CrossFire range from about 4 ms up to 24 ms, with outlier spikes as high as 50 ms. Worst-case, the 7990 experiences a similar latency range. But better response to the second sequence in BioShock’s benchmark drives down the average and 75th percentile numbers.
Of course, in comparison, disciplined metering means the GeForce-based solutions offer very similar hardware and practical frame rates.

In the interest of brevity, we’ll keep the commentary short on this one. Even at this title’s most demanding settings, 2560x1440 simply isn’t a high-enough resolution to really tax our swath of ~$1,000 graphics configurations.
A higher-end platform would have been nice. So, why didn’t we use the Sandy Bridge-E-based setup we’ve employed in the past? There remains an issue with PCI Express 3.0 compatibility and today’s multi-card arrays, which Nvidia and AMD address differently. In order to avoid introducing additional variables, we chose to stick with Z77 Express and a PCIe 3.0-capable CPU to side-step communications bottlenecks.

The field is clumped together through our benchmark run, giving us little to comment on.

Although Nvidia does a better job of rendering frames consistently, charting out frame time latency shows that AMD is almost every bit as smooth in Borderlands 2. Incidentally, we know that this is one of the titles AMD’s driver team already optimized for, helping explain why it shines even as the Radeons struggle in other games.

It’s a good thing that we have so much information at our disposal. Otherwise, the average frame rates in Crysis 3 would confound us. On one hand, we see two Radeon HD 7970s in CrossFire rendering more than 46 FPS, but giving us a more modest 23.4 FPS once we remove dropped and runt frames. On the other, AMD’s Radeon HD 7990 yields a similarly poor 22.3 FPS in both the hardware and practical frame rate measurements. Then, we install the prototype Catalyst driver and get chart-topping results.

From this chart alone, it isn’t clear why the Radeon HD 7990’s hardware and practical frame rates are so low. But when we drill down into the raw frame times, we see the dual-GPU flagship bouncing between very precise ~35 and ~50 ms frames, occasionally jumping to ~65 ms. This is being done deliberately, perhaps to circumvent the severe number of runts encountered by the CrossFire config. Surprisingly, the Radeon HD 7990 encounters few drops or runts at all, instead simply suffering low all-around frame rates.
Perhaps the prototype driver suggests what AMD would like to see in the long-term. It does drop some frames and cut others off prematurely (indicated by the divergent dotted line). However, we clearly see it’s much more competitive against Nvidia’s hardware.

We expect a single GPU to serve up the lowest variance between successive frames, and GeForce GTX Titan delivers. Two GTX 680s in SLI leverage Nvidia’s metering technology to achieve solid numbers as well.
Meanwhile, a range from about 5 to 45 ms really hurts the Radeon HD 7970s in CrossFire. The strangely specific peaks and valleys between 35 and 50 ms help keep the 7990’s frame time variance in check. But it’s the prototype driver we should probably be looking forward to most.

For about $920, two GeForce GTX 680s in SLI deliver the highest average frame rate in Far Cry 3, followed by the pricier GeForce GTX 690. AMD’s Radeon HD 7970s in CrossFire and 7990 would appear to fall in just behind Nvidia’s multi-GPU solutions. However, dropped and runt frames chip away at the frame rate you actually experience, taking two Radeon HD 7970s below even what a single GTX Titan achieves. The Radeon HD 7990 fares better, and is bolstered further by work AMD is doing in preparation of a driver release expected later this year.

This does not look good for two Radeon HD 7970s in CrossFire. It’s not immediately apparent why they behave so differently from the Radeon HD 7990. However, a peek at the frame time-over-time chart that FCAT spits out shows the 7970s ranging between 0 and 45 ms per frame throughout our benchmark. The 7990 fluctuates between the same two numbers, but for far less of the test. AMD’s prototype driver cleans up a lot of the dips and spikes, resulting in a better practical frame rate.

And this is the visual representation of those frame time differences. We see how the 7970s and 7990 are close to comparable at the 95th percentile. However, the Radeon HD 7990 spends a lot less time swinging between high-latency frames, pulling the average and 75th percentile numbers down significantly. AMD’s Malta board isn’t the strongest performer in Far Cry 3, but it runs smoothly more of the time than two Radeon HD 7970s in CrossFire.

Performance in Hitman: Absolution favors AMD’s Radeon HD 7970s in CrossFire and new Radeon HD 7990. And, good news: although the hardware frame rates for those two solutions are higher, an almost-complete eradication of runt frames translates into a practical result that comes really close. It’s only lower because of some dropped frames that never get displayed.
All of the Nvidia cards get stuck around 55 FPS. The fact that GeForce GTX Titan leads the pack suggests something other than GPU performance holds GTX 690 and the 680s in SLI back.

We can see where a handful of dropped frames pull the Radeon HD 7990’s practical frame rate down in four distinct areas. But regardless of whether you’re looking at the 7990 or two Radeon HD 7970s in CrossFire, AMD does really well in Hitman.

If you pop open the release notes for Catalyst 13.3, you see that AMD optimized latencies for two more games: Hitman: Absolution and Tomb Raider. When you add Borderlands 2 and Skyrim to the list, both of which were optimized in Catalyst 13.2, four of our eight tested titles should run more smoothly than the others.
Borderlands 2 was a strong game for AMD, and the same largely holds true in Hitman. Nvidia’s cards continue to deliver frames more consistently. But a bottleneck of some sort keeps the GeForce boards from challenging the practical frame rates achieved by Radeon HD 7990 and two 7970s in CrossFire.
Interestingly, the prototype Catalyst driver doesn’t help AMD in Hitman. Company representatives divulged that the package is derived from an older branch of its driver. So, it’s entirely possible that the special tweaks that went into Catalyst 13.3 (and carried over to 13.5 Beta 2) supersede the prototype software.

Long ago we established that Skyrim is predominantly platform-bound. Big, dual-GPU graphics cards are largely wasted on this game, which is why roughly 11 FPS separate the top and bottom finishers.
This time around, Nvidia finishes first, second, and third, albeit by a delta small enough to be imperceptible; the win is largely symbolic.

All of these cards largely track together during our 25-second run. AMD’s boards exhibit some divergence between what the cards render and what shows up on-screen. Using our FCAT analysis tools, we see that those dips are caused by dropped frames, though the impact isn’t worrying.

Skyrim is the third benchmark in our suite with deliberate tuning by AMD to optimize frame time latency. The result is a super-tight range from the Radeon HD 7990. Our 95th percentile numbers only jump because of spikes that occur intermittently throughout the run (as high as ~64 ms in one case).
Two Radeon HD 7970s in CrossFire exhibit comparable 95th percentile latencies. But a wider range through the rest of the run translates to greater average and 75th percentile numbers.
The prototype software demonstrates the same tight latencies on Radeon HD 7990. Slightly lower performance could be related to the driver’s older foundation, though.
I’d call AMD’s work in Skyrim good enough to minimize any disadvantage the Radeons might have suffered previously, though it’s worth noting that Nvidia achieves lower latency numbers across the board.

AMD’s Radeon HD 7990 pushes the highest practical frame rate, even after losing some of its rendering effort to runts and drops. It does significantly better than two Radeon HD 7970s in CrossFire, which get hammered by the number of runts that only show up on-screen for a couple of milliseconds. The prototype software helps the 7990 a little. But because Tomb Raider is the fourth title with latency-specific optimizations already rolled into Catalyst 13.3, it’s possible that a lot of the gains are already baked in.

The thin, dotted lines again reflect hardware frame rates, while solid lines are indicative of what you actually see once runts and dropped frames get disregarded.

Although it incurs the highest 95th percentile latency, AMD Radeon HD 7990 with Catalyst 13.5 Beta 2 yields better average frame time variance than the other two Radeon-based data points.
Unfortunately, while those numbers seem fairly low, the volunteers we brought in routinely identified the Radeon HD 7990 as less consistent than the GeForce GTX 690 after swapping between platforms armed with both cards.
Speaking of the subjective testing that helps us draw more confident conclusions throughout our benchmark analysis…
By rolling FCAT into our regular test suite and phasing Fraps out of multi-card coverage, we have a ton of new quantitative information that presents us with more insight into performance than we’ve ever offered before. In theory, we should be armed with the data to get even more authoritative.
But we’re still missing a vital piece of information: how do real gamers perceive various levels of latency between frames? Are we making a bigger deal about smoothness only because we have the tools to measure it? Is the issue getting overblown in the process?
We’re working on leveraging the audience size of Tom’s Hardware to generate experiential data that’ll go into a story of its own, exploring what gamers think about certain variables based on first-hand play. For this piece, though, I felt it important to bring a select few gamers into my home, where they could try out the Radeon HD 7990 and GeForce GTX 690, one card right after the other, in the same games.

I set two open test beds behind a pair of Auria EQ276W 27” displays. The systems were both running Z77-based motherboards with Ivy Bridge-based Core i7 processors and 16 GB of DDR3-1600 memory. Both featured 256 GB SSDs with identical drive images, too. The test subjects weren’t told which system had which card, or to which test bed their monitor was attached. Though, over the course of seven hours, I did let them know where their opinions were leading us. Each gamer spent between 10 and 15 minutes in front of each screen (I was only able to involve five folks for this; I’d like to at least double that in the future), before switching and repeating.
The Verdict
Unanimously, the entire group identified game play on Nvidia’s GeForce GTX 690 as the smoothest. Although I was worried about group pressures affecting the responses, or any of the other pitfalls associated with subjective analysis, each gamer was asked to identify the factors that affected his judgment, and we received specific answers.
This could have been done more scientifically, given more time, a larger sample size, and enough matching hardware. But I was satisfied enough with the discussion to include its outcome here.
The bulk of our gaming involved AMD’s Catalyst 13.5 Beta 2 driver. However, I surprised the group by dropping AMD’s special prototype driver onto the Radeon HD 7990-equipped machine. Without telling anyone what the software was supposed to do, I asked them to retry titles they had already played. Again, the response was universal: action on the dual-Tahiti board was noticeably smoother in most games, but seemed intermittently choppy in a couple of others (Crysis 3 and Tomb Raider). This actually conflicts with the benchmarks, which show the frame pacing-optimized software delivering higher practical frame rates in those two titles.
My working hypothesis, after also seeing a couple of titles that looked choppier under the prototype driver (Battlefield 3 is the one I singled out), is this: although deliberately inserting latency helps avoid runts and drops, benefiting the frame rate FCAT measures, it’s not always done precisely enough to prevent perceptible blips in the action. AMD is still working on the driver, though, and it certainly seems to achieve the company’s goal. Skyrim, in particular, elicited a few “whoa, nice” reactions from gamers who previously singled out the Radeon HD 7990 under Catalyst 13.5 Beta 2.
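The frame-pacing idea described above can be sketched in toy form (the logic is my own illustration, not AMD’s driver internals): before presenting a frame, hold it until a target interval derived from a moving average of recent frame times has elapsed, trading a little added latency for evenly spaced frames.

```python
from collections import deque

class FramePacer:
    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # recent frame times (ms)

    def present_delay(self, frame_time_ms):
        """Return how long to hold this frame so spacing stays even."""
        self.history.append(frame_time_ms)
        target = sum(self.history) / len(self.history)
        # Fast frames get held up to the average; slow ones ship immediately.
        return max(0.0, target - frame_time_ms)

pacer = FramePacer()
for ft in [16.0, 16.0, 4.0, 30.0, 16.0]:  # a runt-like 4 ms frame in the mix
    print(f"{ft:5.1f} ms frame -> hold {pacer.present_delay(ft):.1f} ms")
```

Note how the 4 ms near-runt frame gets held back the longest, while the 30 ms spike cannot be fixed at all — pacing can only delay frames, never speed them up, which fits the choppiness our testers perceived in some titles.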
All of our measurements are taken in a semi-anechoic chamber at an ambient temperature of 22° C from a distance of 50 cm (19.7 inches), with the microphone perpendicular to the middle fan of the Radeon HD 7990. As always, we report noise levels in dB(A) to account for the human ear’s idiosyncrasies as much as possible.
These tests employ the same calibrated setup used for our PC audio testing, since our studio microphone enables more precise measurements at frequencies above 8 kHz than a sound-level measuring device. Why go to all of that extra effort? Because sound pressure level doesn’t tell the whole story. Although we actually like the acoustics of the Radeon HD 7990’s triple-fan cooler a little more than GeForce GTX 690’s single-fan solution, we also have to live with the fact that, this time around, fan noise isn’t what you’re most likely to hear while you game. As mentioned earlier on, variable loads cause the card’s ceramic capacitors and PCB to vibrate, resulting in a whining sound that grates on your ears.
The company says it actually did quite a bit to minimize this, and you can further help by enabling v-sync to limit frame rates. This brings us back to our analysis and the fact that subjective impressions are a lot different than what an SPL meter would have reported. Don’t worry—we have video to make all of this clearer.
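Since we report everything in dB(A), it’s worth showing what that weighting actually does. The standard IEC 61672 A-weighting curve attenuates low and very high frequencies roughly the way the human ear does; this sketch evaluates the published formula at a few frequencies.

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB, relative to 1 kHz, for frequency f in Hz."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00  # normalized to 0 dB at 1 kHz

for freq in (100, 1000, 8000):
    print(f"{freq:5d} Hz: {a_weighting_db(freq):+.1f} dB")
```

A 100 Hz fan rumble is discounted by roughly 19 dB, while content near 1 kHz passes through almost untouched — which is exactly why a high-pitched whine can dominate an A-weighted reading even when the fans move more air.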
Temperatures, Fan Curves, and Sound Levels
Our first point of interest is comparing the Radeon HD 7990’s maximum temperature to GeForce GTX 690. Both cards are taxed using a GPGPU-oriented application that generates a continuous 100% load. While Nvidia’s dual-GPU board runs at a constant 914.5 MHz, unable to accelerate its core clock any higher, the Radeon HD 7990’s frequency oscillates between 950 and 1000 MHz even after ten minutes, though it tends to trend closer to the base clock rate.

As we can see, the Radeon HD 7990 runs slightly cooler. But how do those temperatures correspond to fan speed?
Unlike the GeForce GTX 690, which sports a center-mounted axial fan using large blades set at a rather sharp angle, AMD’s Radeon HD 7990 is equipped with three axial fans, each of which employs blades that are more curved and set at a shallower angle. This paves the way for lower noise levels and temperatures, even though the trio of fans rotates faster, too.

AMD’s fan curve is less granular than Nvidia’s, though it’s also generally more conservative. The 7990’s fans spin slower at temperatures under 60°C. In theory, this could yield very low noise output at idle or when the card encounters a moderate load. That’d be quite a coup for AMD, which struggled with noise in the past, but appears to have a real winner in its Radeon HD 7990. Unfortunately, though, more taxing applications trigger the whining issue that creates more noise than the fans.
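A temperature-to-fan-speed curve like the ones compared above is typically evaluated by linear interpolation between a handful of (temperature, duty cycle) points. The sketch below illustrates the mechanism; the curve points are invented for illustration and are not AMD’s or Nvidia’s actual values.

```python
def fan_duty(temp_c, curve):
    """Linearly interpolate a fan duty cycle (%) from a sorted curve."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]  # clamp above the last point

# Hypothetical conservative curve: quiet below 60 degrees C, ramping after.
CURVE = [(30, 20), (60, 25), (75, 45), (90, 100)]
print(fan_duty(55, CURVE))  # still in the quiet zone
print(fan_duty(80, CURVE))  # well into the ramp
```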

What AMD refers to as capacitor and PCB vibration ends up costing the Radeon HD 7990 its theoretical advantage. It’d be great to see AMD fix this and really redefine what it means to sell a flagship dual-GPU card that barely makes a whisper.
This takes us to our next question: How much of the noise is actually generated by the fans themselves?
Let’s start with a task that features a continually-changing load, never reaching 100% or falling below 40%. We begin our video analysis with a round of Crysis 3. You’ll hear the card start whining at the five-second mark.
Since the fans start off quietly and only become audible toward the end of our video, we want to concentrate on the parasitic noise that dominates throughout the recording. The fans aren’t responsible for this card’s noise levels at all. A frequency spectrogram helps us to understand this better. Its color scale represents the individual levels ranging from blue (quiet) and purple (middle) through red (elevated noise levels) to orange and yellow (loud). Green stands for the predetermined upper limit, which is never reached:
The fans only kick into high gear when a compute-heavy app pushes both Tahiti GPUs to full load, filling in the lower end of the frequency spectrum. This makes them seem louder to human ears, as if they were responsible for the overall noise level. A video helps us illustrate once again.
And here’s the spectrogram that goes along with the video. It shows us how the fans easily drown out the whining under full load.
We’ve learned two things. First, as a result of the components generating noise at different frequencies, AMD’s Radeon HD 7990 cannot be accurately measured using a simple sound level meter. Second, AMD partly undermines the hard-earned progress that went into quieting its cooling solution. Although the overall result isn’t bad (we’re certainly much happier with the 7990 than the 6990), the high-pitched whine is noticeable enough to elicit raised eyebrows from observers correctly ascertaining that a graphics card shouldn’t be making those noises.
Reference Under Load: GeForce GTX 690
To give you a point of comparison, we applied the same load to GeForce GTX 690 and recorded its output.
Measuring the general-purpose compute performance of multi-GPU solutions is a challenge because not every app knows how to exploit more than one graphics processor at a time. We also have to strike CUDA- or Stream/APP-only software from our list. That doesn’t leave many options, which is why we’re limiting our search to OpenCL-accelerated applications.
The most obvious benefit to OpenCL is that both vendors’ cards compete on a playing field that is as level as we can make it. Besides, a comparison using real-world metrics covering floating-point (FP32) and double-precision (FP64) math is much more interesting than a huge field of synthetic benchmarks. As usual, we also include a number of current workstation-class cards to see how they fare relative to their consumer siblings.
Rendering
We chose two different renderers that take almost opposing approaches to optimization. On one hand, we have the well-known LuxMark benchmark based on the LuxRender engine. On the other, we use the integrated benchmark of RatGPU, an application that tends to favor Nvidia cards but isn’t really optimized for either architecture. LuxMark reports its result in samples per second, while RatGPU measures the time per run.

There’s really not much to say about LuxMark that the chart doesn’t already tell us. AMD’s GCN architecture dominates, and an OpenCL-optimized application able to exploit two Tahiti GPUs simply screams.

Meanwhile, RatGPU shows us what many CUDA-enabled renderers have proven in the past: none of the Kepler-based GeForce cards can keep up with the Fermi-based GeForce GTX 580 in compute-heavy software. It’s a little strange that the VLIW4-based Radeon HD 6970 is faster than the Radeon HD 7970 GHz Edition, though.
Encryption
The software we’re using for this test treats the multi-chip cards as if they have one GPU, so performance scales very well. AMD’s Radeon HD 7990, which seems to excel in integer-based hashing operations, performs really well, followed by a number of other GCN-based boards.


Financial Analysis Performance (Float/FP32)
We see the same sort of near-ideal scaling from the Radeon HD 7990 in our four financial analysis benchmarks (two benchmarks with two levels of precision each). Indeed, AMD’s flagship almost delivers two times the performance of the single-GPU Radeon HD 7970 GHz Edition, despite slightly lower clock rates. Meanwhile, the GeForce GTX Titan and 690 can’t even compete.


Financial Analysis Performance (Double/FP64)
Repeating those two benchmarks using double-precision math makes the differences even more apparent. While Nvidia’s other cards struggle with FP64, the Titan actually does quite decently, especially compared to the GK104-based GeForce GTX 690 and GTX 680. The trick is to activate CUDA’s double-precision mode in the card’s driver, which also extends that functionality to OpenCL. Although this negatively affects clock rates, the card is faster overall in FP64-based workloads.
Meanwhile, the Radeon HD 7990 doesn’t need any tweaking to achieve its impressive and chart-topping performance.


Unigine Heaven 4.0
Unigine Heaven 4.0 is one of those tests that helps us evaluate the performance of cutting-edge graphics features in a real game engine when we’re benchmarking under DirectX 11. What happens when we run it under OpenGL instead, though? Here are the metric’s key features:
- Comprehensive use of hardware tessellation, with adjustable settings
- Dynamic sky with volumetric clouds and tweakable day/night cycle
- Real-time global illumination and screen-space ambient occlusion
- Cinematic and interactive fly/walk-through camera modes

Although the Radeon HD 7990 leads the pack, it’s also obvious that SLI offers better scaling in this OpenGL benchmark than CrossFire.
Unigine Sanctuary (OpenGL)
The second benchmark from Unigine emphasizes a different set of features, and Nvidia’s cards do unexpectedly well. The Radeon boards seem to struggle with Sanctuary’s particle system—something we’ve also observed in other benchmarks. Throw lots of particles at the AMD cards and they slow down noticeably.
- Five dynamic lights
- HDR rendering
- Parallax occlusion mapping
- Ambient occlusion mapping
- Translucence
- Volumetric light and fog
- Particle systems
- Post-processing

Unigine Tropics
The Radeon HD 7990 pulls ahead of Nvidia’s GeForce GTX 690, reversing the trend we see from single-GPU cards based on the same GPUs. Here’s a summary of this test’s key aspects:
- Dynamic sky with light scattering
- Live water with a surf zone and caustics
- Special materials for vegetation
- HDR rendering
- Parallel split shadow map
- Depth of field
- Real-time ambient occlusion
- Up to 2M polygons per frame
- Simulation of changing light conditions

More than a year ago, we heard murmurs about a dual-Tahiti board code-named New Zealand that was right around the corner. As it turns out, New Zealand describes all of AMD’s multi-GPU projects, from the board partner designs we already reviewed to the FirePro S10000 and Radeon Sky 900. Also included under that umbrella is Malta, the high-end gaming card now known as Radeon HD 7990.
AMD wants $1,000 for this new flagship—the same price as GeForce GTX 690, which yields a higher practical average frame rate in six of our eight benchmarks as it delivers frames more smoothly across the board. The GTX 690 is shorter, set up to exhaust at least some of its waste heat out of your chassis, and significantly more power-friendly. It eschews plastic in favor of metal. And it doesn’t whine under variable loads. Nvidia simply sells a better-built dual-GPU graphics card.

With that said, the Radeon HD 7990 is a pleasant surprise. Three different partner boards had me convinced that a dual-Tahiti card running at full speed just wouldn’t be possible without some sort of exotic design. Not only does AMD enable the Radeon HD 7990 in a dual-slot form factor with two eight-pin power connectors, but it also addresses my biggest beef with the company’s most recent high-end reference designs: too much noise. Even under load, the 7990’s three fans slice through the air more quietly than GeForce GTX 690’s. It’s only unfortunate that power-related vibrations generate more volume than the coolers themselves. Massive compute performance, low idle power consumption enabled by ZeroCore technology, and some of the fastest 3D performance available make this a very desirable product for certain environments.
But when we combine the quantitative data enabled by video capture-based performance analysis and the subjective judgments of a panel of gaming enthusiasts who simply want to play their favorite titles on the best hardware possible, Nvidia’s thousand-dollar GeForce GTX 690 outshines the similarly-priced Radeon HD 7990. Our early look at AMD’s prototype driver suggests that more evenly pacing the rate at which frames are shown on-screen helps minimize frame time variance, which our gamers definitely noticed. But that release isn’t expected for months—the second half of 2013 is as specific as AMD gets.
And so we’re faced with a card that represents a huge improvement over its predecessor, but still comes up shy of its competition, and is priced like an equal.
If the story ended there, the winner would be clear. However, AMD is working magic with developers, and the Radeon HD 7990’s game bundle looks like the culmination of a serious ISV push. Every 7990 will include a copy of BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3: Blood Dragon, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution. That’s $335 worth of software, if you don’t own any of it already. I personally find five of the eight titles interesting, which is some sort of record for a game bundle.
No matter what, $1,000 is a lot of money to spend on a graphics card accompanied by a handful of caveats. But if you’re able to extract a couple hundred bucks of value from the bundle, AMD’s suggested retail price gets a little softer. Interested parties should expect to wait a couple of weeks for availability, the company says.






