GeForce GTX Titan is a super-fast graphics card, right? We know it employs a trimmed-back version of Nvidia’s GK110 GPU, and sure, we’ve often wondered what a fully-functional version of the processor could do. But given the board’s once-uncontested performance lead and its butt-clenching $1000 price tag, it was never a sure thing that GK110, uncut, would ever surface on the desktop.
After all, GK110 is a 7.1-billion-transistor GPU. And Nvidia is already (happily) selling a 2880-core version into $5000 Quadro K6000 cards.
Competition has a way of altering perspective, though. AMD’s Radeon R9 290X launch wasn’t perfect. However, it taught us that the Hawaii GPU, properly cooled, can humble Nvidia’s mighty Titan at a much lower price point.
Not to be caught off-guard, Nvidia was already binning its GK110B GPUs, which have been shipping since this summer on GeForce GTX 780 and Titan cards. The company won’t get specific about what it was looking for, but we have to imagine it set aside flawless processors with the lowest power leakage to create a spiritual successor to GeForce GTX 580. Today, those fully-functional GPUs drop into Nvidia’s GeForce GTX 780 Ti.
GK110 In Its Full Glory
That’s right—we’re finally getting a glimpse of GK110 with all of its Streaming Multiprocessors turned on. So, GeForce GTX 780 Ti features a total of 2880 CUDA cores and 240 texture units. For the sake of completeness, we can work backward: given 192 shaders per SMX, we have 15 working blocks, and with three SMX blocks per Graphics Processing Cluster, there are five of those operating in parallel, too.
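If you want to sanity-check that arithmetic, it reduces to a few lines of Python. Here’s a minimal sketch; the constants come from Nvidia’s published GK110 specifications, and the variable names are mine:

```python
# Working backward from the 2880 CUDA cores Nvidia quotes for the card.
CORES_PER_SMX = 192   # FP32 CUDA cores per SMX
TEX_PER_SMX = 16      # texture units per SMX
SMX_PER_GPC = 3       # SMX blocks per Graphics Processing Cluster

total_cores = 2880
smx_count = total_cores // CORES_PER_SMX   # 15 SMXes enabled
gpc_count = smx_count // SMX_PER_GPC       # 5 GPCs
tex_units = smx_count * TEX_PER_SMX        # 240 texture units

print(smx_count, gpc_count, tex_units)     # 15 5 240
```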
GK110 as it appears in GeForce GTX 780 Ti
This is one SMX more than GeForce GTX Titan, with its 2688 CUDA cores, enjoys. So, you get 192 additional shaders and 16 more texture units. Nvidia turns up the GPU’s clock rates, too. Titan’s base clock is 837 MHz and its typical GPU Boost frequency is specified at 876 MHz. GTX 780 Ti starts at 875 MHz and, Nvidia says, can be expected to stretch up to 928 MHz in most workloads.
GK110’s back-end looks the same. Six ROP partitions each handle up to eight pixels per clock, adding up to 48 ROP units. A sextet of 64-bit controllers facilitates a familiar 384-bit aggregate memory bus. Only, rather than dropping 1500 MHz modules onto it like the company did with Titan, Nvidia leans on the latest 1750 MHz memory, yielding a 7 Gb/s per-pin data rate and up to 336 GB/s of bandwidth.
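For the curious, here’s where that 336 GB/s figure comes from, sketched under the standard assumption that GDDR5 transfers four bits per pin per memory-clock cycle:

```python
# Where 336 GB/s comes from: GDDR5 moves four bits per pin per memory-
# clock cycle, so 1750 MHz works out to a 7 Gb/s per-pin data rate.
memory_clock_mhz = 1750
transfers_per_clock = 4     # GDDR5 is quad-pumped
bus_width_bits = 384

data_rate_gbps = memory_clock_mhz * transfers_per_clock / 1000  # 7.0 Gb/s
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8            # 336.0 GB/s

print(data_rate_gbps, bandwidth_gb_s)
```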
The design decision that’ll probably trigger the most controversy is Nvidia’s choice to use 3 GB of GDDR5, down from Titan’s 6 GB. In today’s games, I’ve tested 3 GB cards like the Radeon R9 280X at up to 3840x2160 and not had issues running out of memory. You will, however, have trouble with three QHD screens at 7680x1440. Battlefield 4, for example, goes right over 3 GB of memory usage at that resolution. You’ll be fine at 5760x1080 and Ultra HD for now, but on-board GDDR5 will become a bigger issue moving forward.
No memory back there. That means you get 3 GB on the other side of the PCB.
Is GeForce GTX 780 Ti More Titanic Than Titan?
At this juncture, the most natural question to ask is: well, what about the $1000 GeForce GTX Titan? Nvidia is calling GeForce GTX 780 Ti the fastest gaming graphics card ever, and it’s selling for $700. That’s less than Titan for a card with technically superior specifications.
Titan lives on as a solution for CUDA developers and anyone else who needs GK110’s double-precision compute performance, but doesn’t need the workstation-oriented ECC memory protection, RDMA functionality, or Hyper-Q features you’d get from a Tesla or Quadro card. Remember—each SMX block on GK110 includes 64 FP64 CUDA cores. A Titan card with 14 active SMXes, running at 837 MHz, should be capable of 1.5 TFLOPS of double-precision math.
You don't get this option with GeForce GTX 780 Ti
GeForce GTX 780 Ti, on the other hand, gets neutered in the same way Nvidia handicapped its GTX 780. The card’s driver deliberately operates GK110’s FP64 units at 1/8 of the GPU’s clock rate. When you multiply that by the 3:1 ratio of single- to double-precision CUDA cores, you get a 1/24 rate. The math on that adds up to 5 TFLOPS of single- and 210 GFLOPS of double-precision compute performance.
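A quick sketch of the peak-throughput math for both cards, assuming the conventional two FLOPS per CUDA core per clock (one fused multiply-add) at base clock rates:

```python
# Peak-throughput math for Titan and GTX 780 Ti, assuming two FLOPS per
# CUDA core per clock (fused multiply-add) at the base clock.
def gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000

titan_fp64 = gflops(14 * 64, 837)   # 14 SMXes x 64 FP64 cores -> ~1500 GFLOPS
ti_fp32 = gflops(2880, 875)         # 5040 GFLOPS single precision
ti_fp64 = ti_fp32 / 24              # FP64 units at 1/8 clock x 3:1 core
                                    # ratio -> 1/24 rate, or 210 GFLOPS

print(round(titan_fp64), round(ti_fp32), round(ti_fp64))  # 1500 5040 210
```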
That’s a compromise, no question. But Nvidia had to do something to preserve Titan’s value and keep GeForce GTX 780 Ti from cannibalizing sales of much more expensive professional-class cards. AMD does something similar (though less severely) with its Hawaii-based cards, limiting DP performance to 1/8 of FP32.
And so we’re left with GeForce GTX 780 Ti unequivocally taking the torch from Titan when it comes to gaming, while Titan trudges forward more as a niche offering for the development and research community. The good news for desktop enthusiasts is that Nvidia’s price bar comes down $300, while performance goes up.
Now, is that enough to flip the script on AMD and its Radeon R9 290X? That card is still selling at a very attractive (for ultra-high-end hardware) $550 price point, after all. Here’s the thing: as you saw two days ago in our R9 290 coverage, retail cards are rolling into our lab, and we’re not seeing the same Titan-beating performance that manifested in Radeon R9 290X Review: AMD's Back In Ultra-High-End Gaming. With a handful of data points pegging 290X anywhere from between GeForce GTX 770 and 780 to quicker than Titan, consistency appears to be AMD’s enemy right now. Company representatives confirm that there's a discrepancy between absolute fan speed and its PWM controller, and say they're working to remedy this with a software update. Our German team continued investigating as I peeled off to cover GeForce GTX 780 Ti, and demonstrated that press and retail cards are spinning at different fan speeds. But there's more to this story relating to ambient conditions, so you'll be hearing more about it soon.
Nvidia is seizing on this issue in the meantime, and with good reason. With clock rates ranging from 727 to 1000 MHz on our Radeon R9 290X cards, and AMD’s reference thermal solution limiting performance at different frequencies in different games, we couldn’t draw a conclusion one way or the other in AMD Radeon R9 290 Review: Fast And $400, But Is It Consistent? Can we be any more definitive about Nvidia’s response to all of the Hawaii news?
Alright, so, Nvidia frankly didn’t need to do much to make its 780 Ti a sharp-looking piece of gear. I traced the history of this industrial design in The Story Of How GeForce GTX 690 And Titan Came To Be, and remain impressed with the work that Nvidia’s engineers did to make its latest high-end card aesthetically pleasing and mechanically effective.
The GeForce GTX 780 Ti is changed minimally from the design we already know. The card’s model name, etched into the fan shroud, is now painted black—a more noticeable contrast against the silver body than before. Also, the heat sink sitting under that big polycarbonate window is black as well, standing out more ominously than Titan’s aluminum fins. Because the 780 Ti is limited to 3 GB of GDDR5, the final difference is a lack of memory packages on the back of its PCB.
Otherwise—yeah, it’s a very similar-looking product that measures 10.5” long, employs the same centrifugal fan, and offers a similar display connectivity suite. You get two dual-link DVI ports, HDMI, and a full-size DisplayPort connector.
Under the hood, of course, there’s a fully-functional GK110 GPU running at higher clocks than Titan. But Nvidia cites the same 250 W TDP as Titan (indeed, that’s the number it uses for GeForce GTX 780, too). The company says that this is correct—careful binning lets it turn on more of the processor and operate at higher clocks without exceeding the 250 W board power figure.
As a result, GeForce GTX 780 Ti employs the same eight- and six-pin power connectors as 780 and Titan.
Although Nvidia sometimes limits the number of cards that can be used together, it supports four-way SLI configurations with GeForce GTX 780 Ti. Of course, you'll need a compatible platform; it isn't enough to simply use a Z87-based motherboard with its 16 lanes of third-gen PCIe divided up, for example. A properly-equipped X79 board will work, as will a mainstream system with the right PLX switch.
Nvidia also makes a big deal about software adding value to GeForce GTX 780 Ti. To begin, there’s a three-game bundle that includes Assassin’s Creed IV: Black Flag, Batman: Arkham Origins, and Splinter Cell Blacklist. I rarely get very excited about game bundles, and this one is no exception. Assassin’s Creed is a console port designed for PlayStation 3 and Xbox 360. Batman hasn’t been getting the warmest reception. And I’m personally not a devotee of the Splinter Cell franchise. Nevertheless, that’s still $170 of free games for folks interested in the trio of titles.
More compelling to me is the beta introduction of ShadowPlay (finally). Not everyone is going to get as much of a kick out of this—mostly because not everyone wants to record and play back moments from their digital conquests. However, as a former WoW raider, I have a directory of boss kill videos from back in the day that simply slammed my PC as I tried to capture them with Fraps. Offloading the encode would have been simply brilliant, and I know there are plenty of folks looking for the same functionality today. For more on ShadowPlay and its impact on gaming performance, check out Nvidia's Shield Revisited: Console Mode, Streaming, And More.
We’re applying the same methodology used to test AMD’s Radeon R9 290: namely, each graphics card is subjected to five minutes of gameplay before we fire up our benchmarks. What results is a more representative look at performance than simply running one test after another. Here’s a little secret: these are the same numbers from that R9 290 launch—I simply added the GeForce GTX 780 Ti data to them.
Regarding the debate about variability and AMD’s Hawaii-based cards: like it or not, R9 290X operates across a range between 727 and 1000 MHz, and 290 runs between 662 and 947 MHz. Depending on the ambient environment you’re in (our lab is climate-controlled to 78 degrees by a Nest thermostat, though it naturally ranges plus or minus a couple of degrees at a time), Radeon R9 290X will react. As it happens, our retail card tends to run at lower clock rates even in a cool room. Increase the ambient to 78-80 degrees, and it drops clock rates more significantly compared to the board we got from AMD. Even if AMD hammers this issue out with a new driver, thermally-constrained workloads will still push Hawaii-based cards under their peak performance levels.
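To illustrate why ambient temperature feeds straight into delivered performance, here’s a deliberately crude toy model. This is emphatically not AMD’s actual PowerTune logic: only the 727 to 1000 MHz range and the 95 °C thermal target reflect AMD’s specifications; the thermal rise and throttle slope are invented for illustration.

```python
# Toy model (NOT AMD's actual PowerTune algorithm) of why ambient
# temperature maps to delivered clock rate: the card sheds frequency
# whenever the GPU sits at its thermal ceiling.
TARGET_C = 95                        # Hawaii's specified thermal target
CLOCK_MAX_MHZ, CLOCK_MIN_MHZ = 1000, 727   # R9 290X's specified range

def settled_clock(ambient_c, rise_under_load_c=72):   # rise is invented
    gpu_temp = ambient_c + rise_under_load_c
    if gpu_temp <= TARGET_C:
        return CLOCK_MAX_MHZ
    # Pretend the card sheds ~10 MHz per degree over target.
    return max(CLOCK_MIN_MHZ, CLOCK_MAX_MHZ - 10 * (gpu_temp - TARGET_C))

print(settled_clock(22))  # cool room: holds 1000 MHz
print(settled_clock(27))  # a few degrees warmer: settles at 960 MHz
```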
| Test Hardware | |
|---|---|
| Processor | Intel Core i7-3970X (Sandy Bridge-E): 3.5 GHz base clock rate, overclocked to 4.3 GHz, LGA 2011, 15 MB shared L3 cache, Hyper-Threading enabled, power-savings enabled |
| Motherboard | MSI X79A-GD45 Plus (LGA 2011), X79 Express chipset, BIOS 17.5 |
| Memory | G.Skill 32 GB (8 x 4 GB) DDR3-2133, F3-17000CL9Q-16GBXM x2 @ 9-11-10-28 and 1.65 V |
| Hard Drive | Samsung 840 Pro SSD, 256 GB, SATA 6Gb/s |
| Graphics | Nvidia GeForce GTX 780 Ti 3 GB |
| | AMD Radeon R9 290X 4 GB |
| | AMD Radeon R9 290 4 GB |
| | AMD Radeon R9 280X 3 GB |
| | AMD Radeon HD 7990 6 GB |
| | Nvidia GeForce GTX Titan 6 GB |
| | Nvidia GeForce GTX 780 3 GB |
| | Nvidia GeForce GTX 770 2 GB |
| | Nvidia GeForce GTX 690 4 GB |
| Power Supply | Corsair AX860i 860 W |
| System Software And Drivers | |
| Operating System | Windows 8 Professional 64-bit |
| DirectX | DirectX 11 |
| Graphics Drivers | Nvidia GeForce 331.70 Beta (GeForce GTX 780 Ti) |
| | Nvidia GeForce 331.65 WHQL (all other Nvidia cards) |
| | AMD Catalyst 13.11 Beta 8 (Radeon R9 290) |
| | AMD Catalyst 13.11 Beta 7 (all other AMD cards) |
| Benchmarks And Settings | |
|---|---|
| Battlefield 4 | 1920x1080, 2560x1440, and 3840x2160: Ultra Quality Preset, v-sync off, 100-second Tashgar playback. FCAT for 1920x1080 and 2560x1440; Fraps for 3840x2160 |
| Arma III | 1920x1080, 2560x1440, and 3840x2160: Ultra Quality Preset, 8x FSAA, Anisotropic Filtering: Ultra, v-sync off, Infantry Showcase, 30-second playback, FCAT and Fraps |
| Metro: Last Light | 1920x1080, 2560x1440, and 3840x2160: Very High Quality Preset, 16x Anisotropic Filtering, Low Motion Blur, v-sync off, Built-In Benchmark, FCAT and Fraps |
| The Elder Scrolls V: Skyrim | 1920x1080, 2560x1440, and 3840x2160: Ultra Quality Preset, FXAA Disabled, 25-second Custom Run-Through, FCAT and Fraps |
| BioShock Infinite | 1920x1080, 2560x1440, and 3840x2160: Very High Quality Preset, 75-second Opening Game Sequence, FCAT and Fraps |
| Crysis 3 | 1920x1080, 2560x1440, and 3840x2160: High System Spec, High Texture Resolution, MSAA Low (2X), 60-second Custom Run-Through, FCAT and Fraps |
| Tomb Raider | 1920x1080, 2560x1440, and 3840x2160: Ultimate Quality Preset, FXAA, 16x Anisotropic Filtering, TressFX Hair, 45-second Custom Run-Through, FCAT and Fraps |

Average frame rates start out exceptional, as GeForce GTX 780 Ti comes close to matching Nvidia’s dual-GPU GTX 690 at 1920x1080.
Dropping to 2560x1440 hits performance pretty hard. However, you’re still looking at more than 40 FPS from GeForce GTX 780 Ti, besting Titan, 780, and 770, in that order.
The GK110-equipped 780 Ti is faster than AMD’s R9 290X. The magnitude of its victory depends on the environment you use it in. Our 290X from AMD maintains its clock rate really well in our 78-degree office, so the 780 Ti only beats it by 5%. The card we bought starts shedding frequency faster under the same conditions; GeForce GTX 780 Ti is 23% faster in that case. AMD claims that an upcoming driver will better normalize absolute fan speeds between cards, which should facilitate more even cooling. Should that prove true, we may see these massive gaps disappear. As of this writing, however, our observations stand.
Our results become inconsistent at 3840x2160, though frame rates are too low at that resolution for a playable experience anyway.



In the two resolutions that matter most with a single-GPU graphics card, GeForce GTX 780 Ti is a standout.

Frame time variance remains wonky on the Nvidia cards at 1920x1080, and it’s pretty ugly across the board at 3840x2160. However, our numbers at 2560x1440 look much more in line with what we’d hope to see.

All of the cards we’re testing are fast enough to average playable frame rates at Battlefield 4’s Ultra quality preset. Several cards also fare really well at 2560x1440, too.
GeForce GTX 780 Ti is 8% faster than the press-sampled R9 290X, which holds on to its clock rate most reliably. Our retail board drops to lower frequencies in the lab, allowing 780 Ti to beat it by 29%.
No matter which card you pick, 3840x2160 is simply not playable.



Again, under the dual-GPU boards, Nvidia’s GeForce GTX 780 Ti is the victor at 1920x1080 and 2560x1440. Ultra HD is more muddled, but only because we’re dealing with eight cards crammed in under the 25 FPS mark.

Overall, observed frame time variance looks great. The only exceptions happen at 3840x2160, where average frame rates are too low anyway.

Our numbers in BioShock Infinite are huge. The GeForce GTX 780 Ti is the fastest single-GPU card at 1920x1080 and 2560x1440, beating the R9 290X we received from AMD by 12% at QHD. It beats the retail board by 31%.
To reiterate, this is not to say that press boards are fast and retail boards are slow. Rather, 290X is operating within an almost-300 MHz range. The retail board simply drops to the lower end of that range faster than the press board in our testing environment.



Even our retail R9 290X outperforms GeForce GTX 780 at 3840x2160, averaging more than 51 FPS. The 290X from AMD is only about 8% quicker (putting it ahead of GeForce GTX Titan). However, 780 Ti trumps the entire field and never drops below 50 FPS in our benchmark.

Frame time variance is tiny—even the worst-case measurements fall under 1 ms.

There remains a ceiling that these high-end cards can’t bust through at 1920x1080. It appears related to v-sync, though the feature is forced off in both card vendors’ drivers. The more interesting resolution is 2560x1440, and at that setting, GeForce GTX 780 Ti is second only to the dual-GPU solutions. It’s about 5% quicker than the resilient AMD-sampled Radeon R9 290X, and nearly 30% faster than our retail-purchased board, which slows down in a warm lab.



With dips to 20 FPS at 3840x2160, Ultra HD is not a playable resolution for single-GPU configurations.

Several cards spike above 5 ms at 1920x1080 in our frame time latency measurement. As with the average frame rate results, there’s no definitive reason this should be the case; it just looks like something other than graphics processing is bottlenecking performance, simultaneously causing less consistency in the way frames are delivered.

At every resolution, GeForce GTX 780 Ti is the fastest single-GPU graphics card you can buy. Granted, that distinction isn’t incontestable—it’s only a couple of frames ahead of AMD’s Radeon R9 290X press board with its higher 40% fan speed, and could easily be challenged by a third-party board with better cooling. However, our retail card shows us that when Hawaii heats up and cannot be cooled fast enough, 780 Ti can be up to 22% faster at 2560x1440 as 290X drops below the performance level of a vanilla GeForce GTX 780.
Bumping up fan speed on the AMD card is going to help that. However, then you’re also messing with acoustics, and noise can be a big issue with the reference cooler, too.



I personally think the average frame rate chart is more telling than any of the frame-rate-over-time graphs, if only because the spread between boards appears so tight. It is worth pointing out that the GeForce GTX 780 Ti doesn’t drop below 30 FPS at 2560x1440, while our retail R9 290X does flirt with this boundary.

These are the same frame time variance outliers observed in our Radeon R9 290 coverage. We have four different GPUs with varying memory configurations represented, so it’s unlikely that any one variable is to blame. More than likely, if we were to zoom in to 96th or 97th percentile numbers, similar worst-case conditions would crop up for the other cards, too.
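For context, the variance metric behind these charts boils down to percentile math over consecutive frame-time deltas. A minimal sketch, assuming frame times in milliseconds from a Fraps or FCAT dump (the sample data is made up):

```python
# Sketch of the math behind our frame-time variance charts: take the
# deltas between consecutive frame times, then inspect tail percentiles.
# A real run has thousands of frames, so the 95th, 96th, and 97th
# percentiles actually differ; this toy sample is too short for that.
frame_times_ms = [16.7, 16.9, 16.5, 24.0, 16.8, 16.6, 33.1, 16.7]

deltas = sorted(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))

def percentile(sorted_vals, pct):
    # Nearest-rank percentile; fine for an illustration like this one.
    idx = min(len(sorted_vals) - 1, round(pct / 100 * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

for pct in (95, 96, 97):
    print(f"{pct}th percentile frame time variance: {percentile(deltas, pct):.1f} ms")
```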

Skyrim is completely playable on even a Radeon R9 280X at 3840x2160, so while faster cards at lower resolutions do demonstrate different frame rates, the deltas between them aren’t as notable as they are in other titles.
We’re definitely platform-bound at 1920x1080. Stepping up to 2560x1440 separates the pack a little bit, as GeForce GTX 780 Ti only loses to the dual-GPU cards. Then, for some reason, GeForce GTX Titan steps up at Ultra HD, while the rest of the contenders file in around 70 FPS and lower.



Again, these are all playable frame rates. The R9 280X never even touches the 40 FPS boundary at 3840x2160.

Frame time variance is very low, for the most part, up until we hit 4K. At that point, even the worst-case figures would likely be imperceptible as stutter.

AMD has an inherent advantage in Tomb Raider when we use the Ultimate detail setting with TressFX enabled. However, the GeForce GTX 780 Ti still manages to stave off the R9 290X AMD sent us at all three tested resolutions.
Of course, our numbers for the retail 290X show Nvidia’s 780 Ti 20% faster at 2560x1440. That’s quite a bit more significant.



The dual-GPU boards are still fastest. But you’re looking at spending $800 or more for the problematic Radeon HD 7990 and $1000 for GeForce GTX 690. Although the 690 is a more attractive offering, its 2 GB per GPU is even more of an issue at high resolutions than 780 Ti’s 3 GB.

Frame time variance is really low, except for the GeForce GTX 770 at 3840x2160. This might be related to memory capacity, since 770 is the only 2 GB board in our comparison. GeForce GTX 690 could be subject to the same issue. But because we can't generate FCAT data for both dual-GPU boards at 4K, we’re leaving the space for those bars blank.
It doesn’t really look like the switch from OpenGL to DirectX is becoming a trend, since Autodesk is the only major company making this drastic change. Of course, the advantage of DirectX for end users is that they can do without specialized workstation cards, so long as they’re willing to forgo the drivers optimized for specific applications, greater compute performance, and so on. DirectX’s disadvantage is its use of single-precision coordinates, which can easily lead to display errors in complex models, such as the dreaded push-through effect, where a surface shows through another surface sitting right in front of it.
Nvidia's GeForce GTX 780 Ti is the company's fastest consumer graphics card in this benchmark, and it rules supreme in the Cadalyst 3D suite. However, the GK110-based board does succumb to the latest Hawaii-based Radeon R9s in Inventor.



In the end, desktop graphics cards fare the same way with OpenGL: they land in the middle of the pack, despite superior specifications. OpenGL performance still relies heavily on driver optimization, which consumer graphics cards simply do not benefit from (by design). Of course, specific applications and engines also play a role, but, by and large, OpenGL-based workloads aren't where you're going to find gaming hardware shining brightest.



The OpenCL results are similar to what we found when we reviewed GeForce GTX Titan. This API has been seemingly neglected for some time by Nvidia and could use attention. AMD's Tahiti- and Hawaii-based boards turn in superior results.



One might expect to see massive performance from Nvidia’s new offering here, but the GeForce GTX 780 Ti’s double-precision performance (1/24-rate) is much more limited than what you can achieve with GeForce GTX Titan (1/3-rate).
In many applications, this really doesn’t matter much, but the otherwise slower Titan is twice as fast in Blender. A look at a computational finance workload (Monte Carlo Price Options) shows a real-world double-to-single precision ratio of 1:25.8 for the GeForce GTX 780 Ti and 1:5.8 for the Titan. This is fairly close to the expected values. Clearly, you'll need to decide for yourself if lower compute performance is a problem before you spend $700 on a 780 Ti.



Measuring Power Consumption
We’re using a current clamp to measure power consumption at the external PCIe power cable and, using a special PCB, directly at the PCIe slot. These measurements are recorded in parallel and in real time, added up for each second, and logged using multi-channel monitoring along with the respective voltages. All of this results in a representative curve over the span of 10 minutes. That's all we really need, since these cards reach their operating temperatures relatively quickly.
The curve isn’t just representative; it's also exact. Measuring power at the system level introduces bias, since a number of factors other than the graphics card can affect consumption. A faster GPU might cause the CPU’s power consumption to go up as well, for example, since a limiting factor holding it back is gone.
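In rough terms, the logging works out to a running sum of volts times amps per channel. Here’s a minimal sketch of that bookkeeping; the channel layout and sample values are hypothetical, not our actual capture format:

```python
# Bookkeeping behind the power curves: every rail (PCIe slot plus the
# external six- and eight-pin connectors) is logged as voltage and
# current in parallel, and per-second volt-amp products are summed into
# board power. Channel layout and values here are hypothetical.
samples = [
    # (slot_V, slot_A, sixpin_V, sixpin_A, eightpin_V, eightpin_A)
    (12.0, 4.1, 12.1, 6.0, 12.1, 9.3),
    (12.0, 4.3, 12.1, 6.2, 12.1, 9.8),
]

def board_power_w(sample):
    rails = zip(sample[0::2], sample[1::2])   # pair each V with its A
    return sum(v * a for v, a in rails)

curve = [round(board_power_w(s), 1) for s in samples]  # one value per second
print(curve)  # [234.3, 245.2]
```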
We’re including three different GK110-based graphics cards in our measurements. Starting from scratch allows for a comparison that’s as objective as possible. We’re using the new GeForce GTX 780 Ti, the Titan, and Gigabyte's GTX 780 WindForce GHz Edition, which might be able to compete with the two other cards thanks to elevated clock rates.
Let’s first take a detailed look at each of the three cards. We’re benchmarking both boards with Nvidia's reference cooler twice: once with default settings and once at 70 °C GPU temperature. The latter necessitates a manual fan speed increase.
GeForce GTX 780 Ti
We start with a look at the frequencies, which might help us explain the somewhat unexpected differences in power consumption later.

Even under full load, the GeForce GTX 780 Ti balances its frequencies well. Consequently, its power consumption is similar in the two scenarios. Nvidia has raised the card's temperature target from 80 to 83 °C, which results in a fan RPM that's a little bit higher. Still, the shape of the curve shows how power consumption decreases once the card backs off of its GPU Boost clock rates.

Things look different when the fan RPM is pushed up. We sought to achieve a 70 °C GPU temperature by setting Nvidia's fan speed to 80% duty cycle, which yields additional performance. We’ll take a closer look at this difference a little later in our efficiency section. For now, here’s a nicely shaped curve:

GeForce GTX Titan
Next up: the former champion. With a temperature target of only 80 °C and a fan that spins only half as fast, the Titan faces an uphill battle. Let’s first take a look at the frequencies:

The difference is almost scary to behold, suggesting the Titan's fan could have probably been pushed a little harder. Aiming for a 70 °C GPU temperature using 80-percent fan speed, GeForce GTX Titan lives up to its name and can even show off its GPU Boost feature a bit. So, what does the card’s power consumption look like after its clock rates are uncorked by pushing a lot of air across its heat sink? First, a look at the stock settings:

Power consumption drops alongside clock rate, which also negatively impacts game performance. Again, we'll evaluate this phenomenon's effect on efficiency shortly.
How about when we dramatically ramp up cooling? GeForce GTX Titan puts its pedal to the metal and pulls quite a bit more power.

This is just a look at power, so all we can tell from these charts is that draw increases by 18 W. Our hope would be that you get a corresponding performance boost, too. We'll see shortly.
Gigabyte GTX 780 WindForce GHz Edition
The round-up of GK110-based boards is completed by Gigabyte's brand-new GTX 780 WindForce GHz Edition. This card features fewer CUDA cores, but they're running at higher clock rates. Is that enough of a compromise to keep a lower-cost, overclocked graphics card competitive? We've seen in the past that GK110’s sweet spot is under 1000 MHz. However, there's also a new stepping of the chip available, and Gigabyte's offering does facilitate a completely consistent frequency, even under load, thanks to its excellent cooler. The card is naturally more expensive than other GTX 780 boards, so the company has to hope its elevated clock rates can do battle.

Gigabyte's GTX 780 WindForce GHz Edition manages to hold a core frequency of almost 1180 MHz. This is reflected in our power consumption measurements, though.

We see an average power draw of 226 W, putting the Gigabyte card at the same level as our more aggressively-cooled GeForce GTX 780 Ti, and 4 W beyond the 780 Ti's stock configuration.
Due to popular demand, we once again use our power consumption results at 1920x1080 for the efficiency computations.
Gaming Loop Performance
First, let’s take a look at the average frame rate and corresponding percentage comparison. As expected, Nvidia's GeForce GTX 780 Ti is the hands-down winner. The GTX 780 WindForce GHz Edition isn’t that far behind, whereas the GeForce GTX Titan is much more limited by its stock fan settings than the 780 Ti. A reference GeForce GTX 780 can’t really keep up.


Efficiency
It’s time to factor power consumption back into the picture. We’re looking at how much power each graphics card needs to generate its frame rate. It quickly becomes apparent that the latest spin of GK110, binned aggressively, does well.
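The metric itself is nothing fancier than average frame rate divided by average board power at 1920x1080. A sketch with placeholder numbers (not our measured results):

```python
# Efficiency as charted here: average gaming frame rate divided by
# average board power at 1920x1080. The figures below are placeholders,
# not our measured results.
results = {
    "GeForce GTX 780 Ti": {"fps": 100.0, "watts": 250.0},
    "GeForce GTX 780":    {"fps": 85.0,  "watts": 200.0},
}

for card, r in results.items():
    print(card, round(r["fps"] / r["watts"], 3), "FPS per watt")
```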


Interestingly, the original GeForce GTX 780 is the winner, since it’s the closest to the processor's sweet spot. The other graphics cards get noticeably less efficient as their core frequencies increase. Gigabyte's GTX 780 WindForce GHz Edition is an interesting case, with an efficiency that’s close to a reference GeForce GTX Titan and a cooling solution that puts it ahead of the Titan.
Here’s the direct comparison of every tested card's results. Looking at the GeForce GTX 780 Ti’s numbers, it immediately becomes clear that pushing flagship-class performance still requires a lot of power. Still, when you take its gaming capabilities into account, Nvidia's newest ultra-high-end board is also the most efficient card in its segment.





The bottom line is that the GeForce GTX 780 Ti offers a very appealing power consumption-to-performance ratio. The new Radeon R9 graphics cards gain ground on Nvidia, but they're nowhere close to catching up.
Another factor to consider is price, and it remains to be seen how the market reacts to all of these introductions in so short of a time frame. Right now, there’s a fitting solution for pretty much everyone, and a more competitive graphics card space is proving to be a boon to enthusiasts who have access to more performance at lower prices than ever before.
As usual, we're measuring noise levels perpendicular to the middle of the graphics cards using a studio microphone from a distance of 50 cm. It immediately becomes apparent that Nvidia opted to increase the GeForce GTX 780 Ti’s fan speed compared to the 780 and Titan to better ensure the card doesn't reach its thermal limit. This does make the new card louder, but that’s the price you pay for more performance. Here are the results:
The end result isn’t surprising, and it’s still preferable to hitting the temperature ceiling. Gigabyte's GTX 780 WindForce GHz Edition’s custom cooler does very well, but also blows hot air back into your case. You'll need to decide whether that's a side effect you're willing to accept from a high-end graphics card.
Gaming Loop Comparison Videos
Feel free to listen for yourself:
Just for fun, here’s one more video showing Nvidia's cooler running at 80 percent fan speed to keep its GPU from hitting a thermal limit.
I’ve been spending so much time trying to figure out why my Radeon R9 290X cards perform differently that I almost didn’t get this story written. The investigation continues, and includes absolute fan speeds that differ under the same PWM control, along with sensitivities to ambient conditions. Regardless of why you might see two Hawaii-based boards delivering frame rates separated by double-digit percentages, the real point is that this behavior is designed into the Radeon R9 290X. AMD’s card is meant to range from 727 to 1000 MHz, depending on the environment. Given the reference cooler, specifically, and the Quiet firmware setting, which together can't quite keep up with Hawaii, you have to expect variance. It takes cranking up the fan speed or installing a third-party cooler to prevent severe performance pull-backs.
Nvidia seems happy capitalizing on this confusing state of affairs, and is positioning GeForce GTX 780 Ti as the fastest single-GPU board out there…consistently. The degree to which it wins depends on how AMD’s flagship is used. Sometimes the 780 Ti takes a single-digit-percent win; other times it’s 30%+ faster. Whether Nvidia’s advantage is worthwhile depends on what you’d see from your R9 290X.
Beyond its performance, GeForce GTX 780 Ti is more efficient than Titan thanks to tightly-binned GK110B GPUs that come fully-enabled, operate at higher frequencies, and yet are rated for the same 250 W TDP. As a result, this is a quiet card. It elegantly blows waste heat out of its I/O bracket. And the board looks good. We know that thermal solution isn’t cheap, but it’s the reason Nvidia keeps gathering praise for its design, while everyone looks forward to third-party board vendors replacing AMD’s reference effort.
GeForce GTX 780 Ti isn’t perfect. Priced at $700, it’s a bargain compared to Titan. But it’s not a bargain given the competition (after all, we already know what it takes to make R9 290X and 290 run faster). Nvidia does handicap the card’s FP64 performance for purposes of segmentation. However, AMD’s doing that now as well with its Hawaii-based boards. Perhaps the biggest issue enthusiasts will find with 780 Ti is memory capacity. Titan ships with 6 GB of GDDR5, while AMD includes 4 GB on its $550 Radeon R9 290X and $400 290. In today’s games, and at resolutions as high as 3840x2160, 780 Ti’s 3 GB should be sufficient. However, it’s already possible to punch above that in Battlefield 4 using three 2560x1440 monitors. When you’re sinking serious coin on ultra-high-end hardware, future-proofing is an important consideration.
When the dust settles, though, GeForce GTX 780 Ti does emerge as the fastest single-GPU graphics card you can buy for common enthusiast-class resolutions. It houses an incredibly complex processor and does a superb job keeping the chip cool, quietly. Living in Bakersfield, where it gets into the 100-degree range during summer, I particularly appreciate 780 Ti’s consistent performance. Though I’m not necessarily a fan of Nvidia’s price point, something tells me that the folks who are truly interested in buying a GeForce GTX 780 Ti know why they want it, and are more than happy to scrape $300 off of Titan for a better-performing gaming product.
Now, if you’ll excuse me, I need to figure out how to get GeForce GTX 780 Ti into my mini-ITX Tiki. The card may be a little rich for my budget, but the fact that it’ll fit—physically and electrically—is nothing short of amazing.