Bugatti’s Veyron Super Sport. Aston Martin’s One-77. Lamborghini’s Reventón. They’re all million-dollar-plus automobiles that most of us love to read about, probably won’t see on the road, and almost certainly will never own.
And yet, they’re still sexy.
When it comes to drawing parallels between high-end hardware and really nice cars, I’m just as guilty as many others in the technology press. Really, though, the comparisons are hardly realistic. For the price of four Veyron tires, you could buy 35 Core i7-3960X processors. A thousand bucks for a CPU sounds ludicrous, but that’s certainly more accessible than many of life’s other luxuries.
One of our GTX 690's GK104 GPUs
So, while it’s tempting to review Nvidia’s new GeForce GTX 690 as if it were the gaming world’s Bugatti—to describe its features as if it were an untouchable kilogram of PCI Express 3.0-capable unobtanium—the fact of the matter is that this thing could become a viable option for enthusiasts with a thousand bucks in their hands.
Sure, you’ll also need an expensive platform and at least one 30” display in order to enjoy its capabilities. But we can’t just take the GTX 690 for a lap around the track, call it an amazing-looking piece of equipment, and assume you’ll never have to choose between this and a couple of GeForce GTX 680s or Radeon HD 7970s.
No, GeForce GTX 690 has to pass muster—same as any other card that finds its way into our little Nürburgring for GPUs.
We received enough detail about GeForce GTX 690 from Nvidia’s announcement at the GeForce LAN 2012 in Shanghai to write an early preview (Nvidia GeForce GTX 690 4 GB: Dual GK104, Announced). I’ll expand on that here, borrowing from the text used in my news piece. And then, of course, we’ll move on to the benchmarks at 1920x1080, 2560x1600, and 5760x1080.
GeForce GTX 690 4 GB: Under The Hood
From that preview:
GeForce GTX 690 is a dual-GK104 part. Its GPUs aren’t neutered in any way, so you end up with 3072 cumulative CUDA cores (1536 * 2), 256 combined texture units (128 * 2), and 64 full-color ROPs (32 * 2).
Gone is the NF200 bridge that previously linked GF110 GPUs on Nvidia’s GeForce GTX 590. That component was limited to PCI Express 2.0, and these new graphics processors beg for a third-gen-capable connection.

We now know that the GeForce GTX 690’s GPUs are linked by a PEX 8747, one of PLX’s third-gen, 48-lane, five-port switches, manufactured at 40 nm. The switch communicates with each GK104 over 16 lanes of PCI Express 3.0 and presents a 16-lane interface to the host, accounting for all 48 of its lanes. Developed expressly for graphics applications, the 8 W switch is rated for 126 ns of latency, so you shouldn’t have to worry about its insertion between the GPUs and the host negatively impacting performance. And a number of the capabilities that were integrated into NF200, like multicast, are supported by PEX 8747, too. Continuing…
Each graphics processor gets its own 256-bit aggregate memory bus, populated with 2 GB of GDDR5 memory. The company says it’s using the same 6 GT/s memory found on its GeForce GTX 680, able to push more than 192 GB/s of bandwidth per graphics processor. Core clocks on the GK104s drop just a little, though, from a 1006 MHz base down to 915 MHz. Setting a lower guaranteed floor is a concession needed to duck in under a 300 W TDP. However, in cases where that power limit isn’t being hit, Nvidia rates GPU Boost up to 1019 MHz—just slightly lower than a GeForce GTX 680’s 1058 MHz spec.
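If you want to sanity-check that bandwidth figure, it falls straight out of the bus width and data rate; here’s a quick back-of-the-envelope calculation (using the 6008 MT/s effective rate Nvidia quotes for GeForce GTX 680-class memory):

```python
# Per-GPU memory bandwidth from bus width and data rate
bus_width_bits = 256        # each GK104's aggregate memory interface
data_rate_gts = 6.008       # effective GDDR5 data rate, GT/s (6008 MT/s)

bandwidth_gbs = bus_width_bits / 8 * data_rate_gts   # bytes per transfer x transfers/s
print(f"Per GPU:  {bandwidth_gbs:.1f} GB/s")          # ~192.3 GB/s
print(f"Combined: {bandwidth_gbs * 2:.1f} GB/s")      # both GK104s on the card
```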


Power is actually an important part of the GeForce GTX 690’s story. Nvidia’s GeForce GTX 590 bore a maximum board power of 365 W; one 75 W slot and two 150 W eight-pin plugs pushed the 590 mighty close to its ceiling. But because the Kepler architecture takes such a profound step forward in efficiency, Nvidia is able to tag its GTX 690 with a 300 W TDP. And yet the card still employs two eight-pin inputs, which, together with the slot, can deliver up to 375 W.
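To illustrate how much breathing room that leaves, here’s the simple connector math (the 75 W and 150 W figures are the standard limits for a x16 slot and an eight-pin plug, respectively):

```python
# Deliverable power versus rated TDP for Nvidia's dual-GPU flagships
slot_w = 75            # PCIe x16 slot
eight_pin_w = 150      # per eight-pin PEG connector

deliverable = slot_w + 2 * eight_pin_w    # both cards use two eight-pin inputs
for card, tdp in (("GeForce GTX 590", 365), ("GeForce GTX 690", 300)):
    print(f"{card}: {deliverable} W available, {tdp} W TDP, "
          f"{deliverable - tdp} W of headroom")
# The GTX 690's 75 W cushion is what GPU Boost (and overclocking) gets to play with
```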
Although the GeForce GTX 690’s board-level features are certainly impressive, this story is arguably just as much about the effort Nvidia’s engineers put into building a card that’d operate quietly, dissipate heat more effectively, and look good in the process. The result is a sturdy, rigid piece of hardware that demonstrates more showmanship than any reference card preceding it. Back to the preview:

The exterior frame is built of chromium-plated aluminum, rather than the plastic materials covering most other cards (including the GeForce GTX 680). The fan housing itself is a magnesium alloy, which purportedly aids heat dissipation and logically improves vibration dampening compared to plastic shrouds.
Dual vapor chambers, one over each GPU, are similar to what we’ve seen from both Nvidia and AMD in the past. This time, however, a polycarbonate window over each fin stack allows curious enthusiasts to peer “under the hood.” An LED up top can actually be controlled through a new API Nvidia is making available to partners. So, it might respond to load, getting brighter and dimmer, as an example.
Currently, I should note, the LED simply lights up when the card is powered on. And here’s where we get to the part sure to turn some folks off…

As with dual-GPU boards from the past, Nvidia is using a center-mounted axial fan, which it claims is optimized for moving air without generating a lot of noise. The trouble with axial fans in this configuration is that they exhaust the heat from one GPU out the rear I/O panel, while the second chip’s thermal energy is jettisoned back into your chassis. Both AMD and Nvidia went the axial route last generation, so we have to surmise that it’s logistically the only approach that makes sense. At least the TDP on GeForce GTX 690 is lower than the 590’s, indicating less maximum heat to dissipate.
Talk to some of the boutique system builders out there and they’ll tell you that shifting from centrifugal to axial-flow fans is akin to the manufacturer getting the heat off its own silicon and then saying, “Here, now you figure out what to do with it.” If we imagine that the most relevant application for GeForce GTX 690 is quad-SLI, you basically have the equivalent of two GeForce GTX 680s dumping hot air into your chassis instead of exhausting it out the back. Finally…
The 690’s rear I/O panel plays host to three dual-link DVI outputs and a mini-DisplayPort connector, accommodating four screens total. A single SLI connector up top links the GeForce GTX 690 to one other card, enabling quad-SLI arrays.

Connecting three displays to two GeForce GTX 680s can be a real pain. Although each board offers four display outputs, there are recommended combinations, depending on whether your screens employ DVI, HDMI, or DisplayPort. With three DVI connectors on GeForce GTX 690, configuration becomes significantly easier.
Overclocking GeForce GTX 690
Beyond simply building a more capable cooler, Nvidia claims that hand-picking low-leakage GK104 GPUs helps minimize the GeForce GTX 690’s thermal output. As a result, the card slides in under a 300 W TDP. But company representatives say there is plenty of headroom left in the card for clock rates beyond the stock 915 MHz base and 1019 MHz typical GPU Boost frequency.

Using EVGA’s excellent Precision X tool, we managed to push a 150 MHz core and 25 MHz memory offset using a 20%-higher power target. Stability was marginal at those settings, though, so we nudged the voltage up from its 0.988 V default to 1.025 V, which kept the card from crashing.
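For a sense of what that offset means in practice, here’s a rough sketch, assuming (as Precision X’s offsets typically behave) that the same +150 MHz applies to both the base and Boost clocks:

```python
# What the +150 MHz core offset means against the GTX 690's stock clocks (illustrative)
base_mhz, boost_mhz = 915, 1019      # stock base and typical GPU Boost clocks
core_offset_mhz = 150

print(f"Base:  {base_mhz + core_offset_mhz} MHz  (+{core_offset_mhz / base_mhz:.1%})")
print(f"Boost: {boost_mhz + core_offset_mhz} MHz  (+{core_offset_mhz / boost_mhz:.1%})")
# Roughly a 15-16% clock bump, which brackets the 5-13% gains we measured below
```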





The resulting gains aren’t bad, ranging from a 13%+ speed-up in Battlefield 3 to a 5%+ boost in Metro 2033 at 2560x1600.
Tessellation
You can call it tradition by this point. Our examination of tessellation scaling is intended to quantify claims that both Nvidia and AMD make regarding continually improving implementations of geometry processing. We like to use real-world metrics where possible, and HAWX 2 gives us an easy on/off toggle for applying additional vertices.



The only real take-away here is that a GeForce GTX 690 does as well as two GeForce GTX 680s in SLI, which improve on what a single GeForce GTX 680 achieves on its own. We’re not sure why the 680 bleeds off so much of its performance when you turn tessellation on, but there’s clearly a bottleneck hammering the frame rate harder than geometry.
PCI Express 3.0 Representing
There’s a story behind Nvidia’s support for third-gen PCI Express and Intel’s X79 Express platform. But it requires a little bit of history.
Way back when I first previewed Sandy Bridge-E (check out Intel Core i7-3960X (Sandy Bridge-E) And X79 Platform Preview for that little piece of history), everyone I talked to insisted that the processor’s PCIe controller wasn’t going to be validated at 8 GT/s data rates. It'd be a PCIe 2.0 part. Then, suddenly the story changed and it was called 8 GT/s-capable (though mention of the standard itself was left out).
When AMD launched its Radeon HD 7000-series cards, we were able to demonstrate them operating at PCI Express 3.0 signaling speeds. Then, Nvidia launched its GeForce GTX 680—with a press driver that was limited to 5 GT/s. The company sent us a second version to show that PCI Express 3.0 was working, and assured us that it’d operate at 8 GT/s on Ivy Bridge-based platforms (which we’ve since confirmed).
Why not just ship it that way? There was a reason. We’re digging deeper, but we aren’t yet ready to discuss the results.
Let’s put the puzzle pieces together, though.
- X79 and Sandy Bridge-E were originally going to operate at second-gen signaling rates.
- GeForce GTX 680, a card that scales really well in SLI, operates at 5 GT/s data rates attached to Sandy Bridge-E processors and 8 GT/s in Ivy Bridge-based platforms.
- GeForce GTX 690 offers 8 GT/s signaling in both Sandy Bridge-E and Ivy Bridge-based platforms.
The issue doesn’t appear to be related to GK104, Nvidia’s card, or its driver. Rather, it’d seem to relate back to our original report that Sandy Bridge-E was not fully validated for PCI Express 3.0.
GTX 690 at PCIe 3.0 on X79
GTX 680 at PCIe 2.0 on X79
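Incidentally, if you want to verify the negotiated link rate on your own machine, the information is exposed by the operating system; the screenshots above show it under Windows, and the sketch below does the same thing on Linux, assuming the standard sysfs link-speed attributes are present:

```python
# Minimal sketch: report the current and maximum PCIe link speed of any Nvidia device
# by reading Linux sysfs (current_link_speed / max_link_speed attributes).
from pathlib import Path

NVIDIA_VENDOR_ID = "0x10de"

for dev in Path("/sys/bus/pci/devices").iterdir():
    try:
        if (dev / "vendor").read_text().strip() != NVIDIA_VENDOR_ID:
            continue
        cur = (dev / "current_link_speed").read_text().strip()   # e.g. "8 GT/s"
        peak = (dev / "max_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
        print(f"{dev.name}: x{width} at {cur} (max {peak})")
    except OSError:
        pass   # attribute missing on some functions or older kernels
```

Keep in mind that modern GPUs drop to a slower link rate at idle to save power, so take the reading while the card is under load.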
Is This It For Affluent Gamers In 2012?
I saw a lot of comments from folks who read GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation and decided they wanted to wait for Nvidia to launch a desktop-oriented card based on a more complex graphics processor—if only because they were unwilling to pay $500 for the company’s next-gen “Hunter” (if you don’t know what I’m talking about, check out the first page of my GeForce GTX 680 review).
On behalf of those folks, I plied Nvidia for more information about a proper “Tank” in the GeForce GTX 600-series. Although the company’s representatives were deliberately vague about the existence of another GPU, they clearly indicated that GeForce GTX 690 wouldn’t be eclipsed any time soon. Personally, I’d be surprised to see anything based on a higher-end GPU before Q4.
Even then, there’s no guarantee that a tank-class card would outperform two GK104s (GF104 had little trouble destroying GF100 in Amazing SLI Scaling: Do Two GeForce GTX 460s Beat One GTX 480?, after all). The more likely outcome would be a better-balanced GPU able to game and handle compute-oriented tasks.
| Test Hardware | |
|---|---|
| Processors | Intel Core i7-3960X (Sandy Bridge-E), 3.3 GHz base, overclocked to 4.2 GHz (42 * 100 MHz), LGA 2011, 15 MB Shared L3, Hyper-Threading enabled, Power-savings enabled |
| Motherboard | Gigabyte X79-UD5 (LGA 2011) X79 Express Chipset, BIOS F10 |
| Memory | G.Skill 16 GB (4 x 4 GB) DDR3-1600, F3-12800CL9Q2-32GBZL @ 9-9-9-24 and 1.5 V |
| Hard Drive | Intel SSDSC2MH250A2 250 GB SATA 6Gb/s |
| Graphics | Nvidia GeForce GTX 690 4 GB |
| | 2 x Nvidia GeForce GTX 680 2 GB |
| | 2 x AMD Radeon HD 7970 3 GB |
| | AMD Radeon HD 7950 3 GB |
| | AMD Radeon HD 6990 4 GB |
| | Nvidia GeForce GTX 590 3 GB |
| | Nvidia GeForce GTX 580 1.5 GB |
| Power Supply | Cooler Master UCP-1000 W |
| System Software And Drivers | |
| Operating System | Windows 7 Ultimate 64-bit |
| DirectX | DirectX 11 |
| Graphics Driver | Nvidia GeForce Release 301.33 (For GTX 690) |
| | Nvidia GeForce Release 300.99 and 301.10 (For GTX 680) |
| | Nvidia GeForce Release 296.10 (For GTX 580 and 590) |
| | AMD Catalyst 12.2 (For HD 7950 and HD 6990) |
| | AMD Catalyst 12.4 (For HD 7970) |
As a rule, we do our testing with the latest drivers each time we start a new story. This ensures that any fixes or performance improvements introduced by a software update get reflected in our coverage.
This time around, however, we don't have that luxury. So, the GeForce GTX 690 is tested using Nvidia's new 301.33 build. From there, we spot-checked the GeForce GTX 680 on its own and in SLI using the public 301.10 release. Then, we did the same thing with AMD's Radeon HD 7970 in one- and two-card configurations using Catalyst 12.4.
| Games | |
|---|---|
| Battlefield 3 | Ultra Quality Settings, No AA / 16x AF, 4x MSAA / 16x AF, v-sync off, 1920x1080 / 2560x1600 / 5760x1080, DirectX 11, Going Hunting, 90-second playback, Fraps |
| Crysis 2 | DirectX 9 / DirectX 11, Ultra System Spec, v-sync off, 1920x1080 / 2560x1600 / 5760x1080, No AA / No AF, Central Park, High-Resolution Textures: On |
| Metro 2033 | High Quality Settings, AAA / 4x AF, 4x MSAA / 16x AF, 1920x1080 / 2560x1600 / 5760x1080, Built-in Benchmark, Depth of Field filter Disabled, Steam version |
| DiRT 3 | Ultra High Settings, No AA / No AF, 8x AA / No AF, 1920x1080 / 2560x1600 / 5760x1080, Steam version, Built-In Benchmark Sequence, DX 11 |
| The Elder Scrolls V: Skyrim | High Quality (8x AA / 8x AF) / Ultra Quality (8x AA, 16x AF) Settings, FXAA enabled, vsync off, 1920x1080 / 2560x1600 / 5760x1080, 25-second playback, Fraps |
| 3DMark 11 | Version 1.03, Extreme Preset |
| HAWX 2 | Highest Quality Settings, 8x AA, 1920x1200, Retail Version, Built-in Benchmark, Tessellation on/off |
| World of Warcraft: Cataclysm | Ultra Quality Settings, No AA / 16x AF, 8x AA / 16x AF, From Crushblow to The Krazzworks, 1920x1080 / 2560x1600 / 5760x1080, Fraps, DirectX 11 Rendering, x64 Client |
| SiSoftware Sandra 2012 | Sandra Tech Support (Engineer) 2012.SP4, GP Processing and GP Bandwidth Modules |
| LuxMark 2.0 | 64-bit Binary, Version 1.0, Room Scene |

Just as we expected, Nvidia’s GeForce GTX 690 4 GB falls in just behind two GeForce GTX 680s. Two Radeon HD 7970s take third place.
Almost surreally, previous-generation dual-GPU cards like the GeForce GTX 590 and Radeon HD 6990 get dramatically outclassed.
If you saw my day-two coverage of GeForce GTX 680, where I tested SLI, CrossFire, and 5760x1080, you’ll notice that these dual-card numbers are notably higher. We think it’s possible that the dual-card setups were pushing the overclocked Core i7-3960X harder, possibly throttling it. This issue doesn’t affect any of our other scores, and was addressed by simply raising the CPU’s maximum wattage and amperage limits to 200.







Again, the GeForce GTX 690 performs right where we expected: just slightly behind two GeForce GTX 680s.
Here’s the thing, though. We know GeForce GTX 680s are nearly impossible to find. We know that GeForce GTX 690s are going to be crazy hard to track down. But then you have Radeon HD 7970s selling for $480 and at least coming close in CrossFire. You end up saving $40 total on a pair of cards that are readily available and that we’ve shown to have significant overclocking headroom.
Although we were hard on AMD’s flagship back at $550, the competition is much more even at the prices it’s hitting now.



The Crysis 2 charts place Nvidia’s GeForce GTX 690 about as close to a pair of GTX 680s as you can get without matching them exactly. That’s reassuring for anyone who might have been apprehensive about giving up a few megahertz compared to two GeForce GTX 680s in SLI.
AMD’s Radeon HD 7970s in CrossFire show well at 1920x1080 and 5760x1080. However, even with Catalyst 12.4, they continue exhibiting a disturbing (and reproducible) crash at 2560x1600 in DirectX 9 mode.



The GeForce GTX 690 and 680s in SLI trade blows in Skyrim, depending on resolution. In either case, though, performance is very close between them.
Although the Radeon HD 7970s in CrossFire start off pretty rocky, they fare far better against the competing Nvidia boards by the time we reach 5760x1080.



DiRT 3 gives us more of the same, as GeForce GTX 690 takes a close second place to two GeForce GTX 680s in SLI at all three resolutions.
Radeon HD 7970s aren’t too far behind, delivering playable performance all the way through 5760x1080 with 8x MSAA enabled.



World of Warcraft is so processor-bound that Nvidia’s old GeForce GTX 590 finds itself in the mix at 1920x1080. At that same resolution, as we’ve seen many times before, AMD’s cards simply struggle.
The GeForce GTX 690 and twin 680s in SLI swap places at 2560x1600, though by a negligible margin.
It’s really only at 5760x1080 with 8x AA applied that we see where two GeForce GTX 680s are better than one, and where one GTX 690 is just as fast. Mainstream though this game may be, it looks sharp as heck spanned across three screens in all of its DirectX 11 and 64-bit glory.



Although Nvidia’s cards take an early lead in Metro 2033, the AMD Radeon HD 7970s in CrossFire surge forward at 2560x1600 and 5760x1080, demonstrating the strongest advantage with 4x AA turned on.
Meanwhile, the GeForce GTX 690 and GTX 680s in SLI hang very close together, delivering nearly identical performance and showing that you have little to fear from the dual-GPU card’s slightly lower base and Boost clocks.

Not surprisingly, GeForce GTX 690 demonstrates respectable FP32 performance. However, its FP64 throughput, artificially capped, is far behind the capabilities of even one Radeon HD 7970.
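To put rough numbers to that gap, here’s an estimate based on the architectures’ published double-precision rates (1/24 of FP32 on GK104, 1/4 on Tahiti); the clocks and shader counts are reference specs, so treat the output as ballpark figures rather than measured results:

```python
# Rough theoretical FP32/FP64 throughput in GFLOPS, assuming the published FP64 ratios
def gflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1000    # 2 FLOPs per shader per clock (FMA)

gtx690_fp32 = gflops(3072, 915)    # both GK104s at the 915 MHz base clock
hd7970_fp32 = gflops(2048, 925)    # one Tahiti at its 925 MHz reference clock

print(f"GTX 690: FP32 {gtx690_fp32:.0f}, FP64 {gtx690_fp32 / 24:.0f}")
print(f"HD 7970: FP32 {hd7970_fp32:.0f}, FP64 {hd7970_fp32 / 4:.0f}")
# A single Radeon HD 7970 works out to roughly 4x the FP64 throughput of the GTX 690
```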
Unfortunately, Nvidia’s 301.33 driver appears to break support for the OpenCL General Purpose Bandwidth and Cryptography tests, which previously worked just fine. I think it’s pretty clear, though, that the company intends these as gaming cards and hopes to separate out its compute-oriented products.
That’s good news for AMD, since its GCN-based Radeon HD 7000-series boards seem well-suited to both 3D and compute applications.

As a case in point, LuxMark 2.0 shows the Nvidia cards all underperforming current- and last-gen AMD boards, with Radeon HD 7970s in CrossFire throwing down huge samples/sec results.


Both AMD and Nvidia do an admirable job of keeping idle noise low, even from dual-GPU cards and dual-card arrays.
This is a result of idle temperatures that remain manageable.


I flipped back and forth trying to figure out which of the GeForce GTX 690’s two GPUs would get hotter, but the warmest processor only got up to 78 degrees under load. That’s particularly impressive considering a single GeForce GTX 680 peaks at 79 degrees, while two 680s in SLI push the inside card’s chip to 83 degrees.
Maintaining that modest temperature isn’t a problem for the GeForce GTX 690, either. True to Nvidia’s word, its flagship operates more quietly than a pair of GeForce GTX 680s in SLI (though it measured a little louder than a GeForce GTX 590).
The more meaningful victory is over a single Radeon HD 7970—not to mention the acoustic train wreck that is two. Fortunately for AMD, its board partners are abandoning its reference cooling design en masse. In Five Radeon HD 7970 3 GB Cards, Overclocked And Benchmarked, all four of the aftermarket-cooled models ended up quieter than the card with the noisy centrifugal blower (though Gigabyte’s three-fan implementation did get pretty loud under its maximum overclock).

That the latest-generation cards are at the top of this chart speaks volumes about the technologies used by both AMD and Nvidia to reduce the power use of even their highest-end hardware.
Although Nvidia’s GeForce GTX 690 doesn’t fare particularly well overall, it is the second-most power-friendly solution amongst the dual-GPU configurations, just behind two GeForce GTX 680s in SLI.

AMD really struts its stuff when your displays go to sleep. Its ZeroCore technology allows two Radeon HD 7970s to use no more power than a single GeForce GTX 680 at idle. And whereas the dual-GPU AMD setup sheds 20 W right off the bat, GeForce GTX 690 is only able to drop 3 W in the same situation.

Each GeForce GTX 680 is rated with a 195 W maximum board power. Each Radeon HD 7970 is rated for 250 W. The GeForce GTX 690’s board power is set at 300 W.
It makes sense, then, that it’d use less power under load compared to twin GeForce GTX 680s. Two Radeons in CrossFire are noticeably more egregious power consumers. Even the GeForce GTX 590 averages higher use than the new GeForce GTX 690.
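The rated figures alone explain most of that ordering; summing the board power numbers above gives a simple illustration (ratings, of course, not measured draw):

```python
# Sum of rated board power per configuration (W); ratings, not measured consumption
configs = {
    "GeForce GTX 690":      300,        # one 300 W dual-GPU board
    "2 x GeForce GTX 680":  2 * 195,    # 390 W combined
    "2 x Radeon HD 7970":   2 * 250,    # 500 W combined
}
for name, watts in configs.items():
    print(f"{name}: {watts} W")
```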
Truly, this is where the Kepler architecture’s emphasis on performance/watt shines. It would have been nice to see Nvidia spend more time cutting power at idle, like AMD, given the majority of time we spend not gaming. However, the savings under load are certainly impressive.
Benchmarking Nvidia’s new GeForce GTX 690 almost wasn’t even necessary. The company gave us a great idea of what to expect when it told us that its new Death Star would be fully operational, featuring two uncut GK104s. A slightly lower base clock suggested average performance just a smidge below two GeForce GTX 680s—but certainly not enough to be noticeable while you’re gaming.
If you consider $500 for GeForce GTX 680 to be a fair price, then $1000 for GeForce GTX 690 is comparably reasonable. But is it any better? Or should you just stick to a pair of Nvidia’s fastest single-GPU cards?
That’s going to depend on your priorities.
Practically, two GeForce GTX 680s facilitate slightly better performance and they exhaust all of their heat out into the surrounding environment. They’re also scalable at a more granular level. That is to say, if you buy two GeForce GTX 680s today, you could add a third tomorrow and be out-of-pocket for $1500. For most enthusiasts with the right motherboard slot configuration, that’s the smarter play.

But a GeForce GTX 690 is quantifiably quieter than GTX 680s in SLI, even if you have two empty spaces between the cards for ventilation (such is the case with our Gigabyte X79-UD5 test bed). It’s the easiest way to achieve quad-SLI, too, though you’d better be prepared to tackle the thermals inside your chassis with all of the verve of a professional system builder. Overclocking a CPU gets a lot tougher in a bath of GK104 backwash.
And finally, GeForce GTX 690 has an X-factor that’s hard to measure. It’s a beefy mix of metal, polycarbonate, silicon, and lights—certainly sexier than GeForce GTX 680. But enough so to sway a big buying decision? For some folks, sure. Holding the thing in my hand imparts a sense of fine workmanship that just can’t be conveyed by words. The GeForce GTX 690 is a well-built board, and I simply must call out the attention paid to acoustics in particular.
If you count yourself a fan of hardware bling, GeForce GTX 690 is unquestionably the fanciest card to ever come from Nvidia, and you certainly can’t knock its visual appeal (nor can you knock its unmatched performance).
Now, let’s say you want to buy one (or two, for quad-SLI). Not surprisingly, given general unavailability of even the single-GPU GeForce GTX 680, you’re going to have a hard time getting your paws on a GTX 690. How bad will the situation be? Nvidia tells us that the cards will roll out in limited quantities, just like GeForce GTX 590. A few should be floating around by May 3rd, with greater numbers on May 7th. System builders just laugh when we ask how many they’re expecting. Consider our expectations curbed.
With Nvidia claiming availability on day one, we can’t preemptively call this a paper launch like we did when AMD previewed the Radeon HD 7970 three weeks before shipping it. But don’t be surprised if the closest you ever get to a GeForce GTX 690 is this story, regardless of whether you have a Grover Cleveland burning a hole in your back pocket.
And so we come full circle. Nvidia’s new flagship is a lot like an expensive sports car: attractive, exclusive, and not necessarily practical. But if you’re in line for one, there’s a fair chance you already know that and probably don’t care.