GPU Boost 2.0
I didn’t have a chance to do a ton of testing with Nvidia’s second-generation GPU Boost technology in my GeForce GTX Titan story, but the same capabilities carry over to GeForce GTX 780. Here’s the breakdown:
GPU Boost is Nvidia’s mechanism for adapting the performance of its graphics cards to the workloads they encounter. As you probably already know, games place different demands on a GPU’s resources. Historically, clock rates had to be set with the worst-case scenario in mind. Under “light” loads, then, performance was left on the table. GPU Boost changes that by monitoring a number of different variables and adjusting clock rates up or down as the readings allow.
In its first iteration, GPU Boost operated within a defined power target—170 W in the case of Nvidia’s GeForce GTX 680. However, the company’s engineers figured out that they could safely exceed that power level, so long as the graphics processor’s temperature was low enough. Therefore, performance could be further optimized.
Practically speaking, GPU Boost 2.0 differs only in that Nvidia now adjusts clock rate based on an 80-degree (Celsius) thermal target rather than a power ceiling. That means you should see higher frequencies and voltages up to 80 degrees, within the fan profile you’re willing to tolerate (a higher fan speed pushes temperatures lower, yielding more benefit from GPU Boost). It still reacts within roughly 100 ms, so there’s plenty of room for Nvidia to make this feature more responsive in future implementations.
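The thermally governed behavior described above can be sketched as a simple control loop. This is a hypothetical illustration, not Nvidia's actual firmware logic: the bin size, base clock, and boost ceiling below are made-up values, and the real hardware also weighs power and voltage limits.

```python
# Hypothetical sketch of a temperature-target boost governor in the
# spirit of GPU Boost 2.0. All names, bin sizes, and clock limits are
# illustrative assumptions, not Nvidia's actual implementation.

THERMAL_TARGET_C = 80    # default ceiling; user-adjustable (e.g. 85 or 90)
POLL_INTERVAL_S = 0.1    # the ~100 ms reaction time cited above
CLOCK_STEP_MHZ = 13      # illustrative boost-bin granularity
BASE_CLOCK_MHZ = 863
MAX_BOOST_MHZ = 1006

def next_clock(current_mhz: float, gpu_temp_c: float) -> float:
    """Step the clock up while under the thermal target, down when over it."""
    if gpu_temp_c < THERMAL_TARGET_C:
        return min(current_mhz + CLOCK_STEP_MHZ, MAX_BOOST_MHZ)
    return max(current_mhz - CLOCK_STEP_MHZ, BASE_CLOCK_MHZ)
```

Run every `POLL_INTERVAL_S` seconds, a loop like this converges on the highest clock the thermal target allows, which is why a cooler-running card (or a more aggressive fan profile) sustains higher frequencies.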
Of course, thermally dependent adjustments complicate performance testing more than the first version of GPU Boost did. Anything able to nudge GK110’s temperature up or down alters the chip’s clock rate, making it difficult to achieve consistency from one benchmark run to the next. In a lab setting, the best you can hope for is a steady ambient temperature.
In addition to what I wrote for Titan, it should be noted that you can adjust the thermal target higher. So, for example, if you want GeForce GTX 780 to modulate clock rate and voltage based on an 85- or 90-degree ceiling, that’s a configurable setting.
Eager to keep GK110 as far away from your upper bound as possible? The 780’s fan curve is completely adjustable, allowing you to specify duty cycle over temperature.
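A user-defined fan curve of the kind described here is just a mapping from temperature to duty cycle with interpolation between the points you set. The curve points below are illustrative assumptions, not the GTX 780's stock profile:

```python
# Hypothetical fan-curve sketch: duty cycle (%) specified over
# temperature (deg C), with linear interpolation between user-defined
# points. The points are illustrative, not the GTX 780's stock profile.

FAN_CURVE = [(30, 20), (60, 40), (80, 70), (95, 100)]  # (temp C, duty %)

def duty_cycle(temp_c: float) -> float:
    """Return the fan duty cycle for a given GPU temperature."""
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:
            # Linear interpolation between adjacent curve points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return FAN_CURVE[-1][1]
```

Shifting the curve up trades noise for lower temperatures, which, under a thermal-target boost scheme, translates directly into sustained clock rate.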
Troubleshooting Overclocking
Back when Nvidia briefed me on GeForce GTX Titan, company reps showed me an internal tool able to read the status of various sensors, which made it possible to diagnose problematic behavior. If an overclock was pushing GK110’s temperature too high, causing a throttle response, it’d log that information.

The company now enables that functionality in apps like Precision X, triggering a “reasons” flag whenever one of those boundaries blocks a higher overclock. This is very cool; you’re no longer left guessing about bottlenecks. There’s also an OV max limit readout that lets you know if you’re pushing the GPU’s absolute peak voltage. If this flag pops, Nvidia says you risk frying your card. Consider that a good place to back off your overclocking effort.
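Conceptually, a “reasons” readout like this just compares each sensor reading against its limit and reports every limit currently in the way. The sketch below is a hypothetical illustration; the limit names and values are assumptions, not Precision X’s actual API or GK110’s real thresholds:

```python
# Hypothetical sketch of a throttle-"reasons" readout. The limit names
# and values are illustrative assumptions, not Precision X's actual API.

LIMITS = {"power_w": 250, "temp_c": 80, "voltage_v": 1.2}

def throttle_reasons(readings: dict) -> list:
    """Return the name of every limit the current readings meet or exceed."""
    return [name for name, cap in LIMITS.items() if readings.get(name, 0) >= cap]
```

An overclocker would watch this list while raising clocks: an empty list means headroom remains, while a populated one names the exact boundary to address (raise the power target, improve cooling, and so on).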

Of course, one could argue that as we get closer to higher-end products, the performance increase is always minimal and the price-to-performance ratio worsens. However, for the past three or four years (roughly), the second-highest-end GPU has never been this close in performance to the flagship. The gap is usually significant enough that the highest-end GPU (GTX x80) still has its place.
Tl;dr,
The GTX Titan was released to make the GTX 780 look incredibly good. People (especially on the internet) will spread the news fast enough, claiming the $650 release price for the GTX 780 is good and reasonable, and people who didn't even bother reading reviews and benchmarks will take their word for it and pay the premium for the GTX 780.
Nvidia is taking a different route to compete with AMD or one could say that they're not even trying to compete with AMD in terms of price/performance (at least for the high-end products).
That's a pretty bad analogy. A GPU is still smooth even with some of its cores/VRAM/etc. disabled; it doesn't increase latency, frame times, and so on.
I must've missed something. Why wait a week?
Probably to get the GTX 770 launch into the picture, and maybe price cuts from AMD.
That was my opinion after I read Anandtech's review.
Not all is right at Nvidia, and this is desperate-measures-for-desperate-times stuff. We now await AMD's response; if they play it right and make the node jump, it could end up being very ugly.
But I don't know why people are complaining about the price; Nvidia has no good competition for it at the moment, and when it does, it will have to reduce it.
GK110 isn't a new anything. It's been around as long as the GTX 680 aka GK104 and is still part of the Kepler family. I think the new cards you're thinking of that are due sometime next year (maybe?) are the Maxwell family of cards.
I still maintain that this is what the 680 should have been a year ago, but I've beaten that horse to death too many times so I'll shut up...
No, if I meant Maxwell I would have said Maxwell. The GTX 700 series is GK110, but the long and short of it is that Nvidia talked this up as an almighty part, yet we're only talking about 20% faster than the aging 7970. So now we wait for AMD's response, which may still be some time away.
I'd rather save $200+ and get a 7970GE. If Nvidia really wants to be aggressive they need to sell this for ~$550.
Granted, the price difference between this and Titan is ridiculous, making it a no-brainer purchase. Not for me, though. I'm not upgrading from two 670s yet, hehe.