GPU Boost 2.0 And Troubleshooting Overclocking
GPU Boost 2.0
I didn’t have a chance to do a ton of testing with Nvidia’s second-generation GPU Boost technology in my GeForce GTX Titan story, but the same capabilities carry over to GeForce GTX 780. Here’s the breakdown:
GPU Boost is Nvidia’s mechanism for adapting the performance of its graphics cards based on the workloads they encounter. As you probably already know, games place different demands on a GPU’s resources. Historically, clock rates had to be set with the worst-case scenario in mind. Under “light” loads, then, performance was left on the table. GPU Boost changes that by monitoring a number of different variables and adjusting clock rates up or down as the readings allow.
In its first iteration, GPU Boost operated within a defined power target—170 W in the case of Nvidia’s GeForce GTX 680. However, the company’s engineers figured out that they could safely exceed that power level, so long as the graphics processor’s temperature was low enough. Therefore, performance could be further optimized.
Practically, GPU Boost 2.0 is different only in that Nvidia is now speeding up its clock rate based on an 80-degree thermal target, rather than a power ceiling. That means you should see higher frequencies and voltages, up to 80 degrees, and within the fan profile you’re willing to tolerate (setting a higher fan speed pushes temperatures lower, yielding more benefit from GPU Boost). It still reacts within roughly 100 ms, so there’s plenty of room for Nvidia to make this feature more responsive in future implementations.
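To make the distinction concrete, here’s a rough sketch (in Python) of the two decision rules. To be clear, this is my own illustration, not Nvidia’s actual algorithm: the step size, clock limits, and sensor-reading helpers are invented for the example, and only the 170 W power target and 80-degree thermal target come from the cards themselves.

```python
# Illustrative only: how a power-target governor (Boost 1.0) differs from a
# thermal-target governor (Boost 2.0). Clock values and step size are placeholders.

BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 863, 1006, 13

def boost_1_0(clock_mhz, read_power_w, power_target_w=170.0):
    """GPU Boost 1.0: step the clock up only while board power stays under the target."""
    if read_power_w() < power_target_w and clock_mhz < MAX_BOOST_MHZ:
        return clock_mhz + STEP_MHZ
    return max(BASE_MHZ, clock_mhz - STEP_MHZ)

def boost_2_0(clock_mhz, read_temp_c, thermal_target_c=80.0):
    """GPU Boost 2.0: step the clock (and voltage) up until the GPU reaches the thermal target."""
    if read_temp_c() < thermal_target_c and clock_mhz < MAX_BOOST_MHZ:
        return clock_mhz + STEP_MHZ
    return max(BASE_MHZ, clock_mhz - STEP_MHZ)
```

Raise the thermal target (as described below) and the second rule simply keeps stepping up for longer before it backs off.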
Of course, thermally-dependent adjustments do complicate performance testing more than the first version of GPU Boost. Anything able to nudge GK110’s temperature up or down alters the chip’s clock rate. It’s consequently difficult to achieve consistency from one benchmark run to the next. In a lab setting, the best you can hope for is a steady ambient temperature.
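If you want to see that variance for yourself, one option is to log clock rate and temperature while a benchmark loops, then compare the traces from run to run. Here’s a minimal sketch that shells out to nvidia-smi (assuming it’s on your PATH; the temperature.gpu and clocks.sm query fields are the standard ones, but double-check them against nvidia-smi --help-query-gpu for your driver):

```python
# Minimal sketch: poll GPU temperature and SM clock once a second while a
# benchmark runs, so two runs can be compared for GPU Boost variance.
import subprocess, time, csv, sys

def log_boost_behavior(outfile="boost_log.csv", seconds=120):
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "temperature_c", "sm_clock_mhz"])
        start = time.time()
        while time.time() - start < seconds:
            out = subprocess.check_output([
                "nvidia-smi",
                "--query-gpu=temperature.gpu,clocks.sm",
                "--format=csv,noheader,nounits",
            ]).decode()
            temp, clock = [v.strip() for v in out.split(",")]
            writer.writerow([round(time.time() - start, 1), temp, clock])
            time.sleep(1)

if __name__ == "__main__":
    log_boost_behavior(seconds=int(sys.argv[1]) if len(sys.argv) > 1 else 120)
```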
In addition to what I wrote for Titan, it should be noted that you can adjust the thermal target higher. So, for example, if you want GeForce GTX 780 to modulate clock rate and voltage based on an 85- or 90-degree ceiling, that’s a configurable setting.
Eager to keep GK110 as far away from your upper bound as possible? The 780’s fan curve is completely adjustable, allowing you to specify duty cycle over temperature.
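Under the hood, that curve is just a mapping from temperature to fan duty cycle, interpolated between the points you drag around in a tool like Precision X. A quick illustrative sketch (the points below are made up, not Nvidia’s defaults):

```python
# Illustrative only: a fan curve as (temperature_c, duty_cycle_pct) points,
# with linear interpolation between them. These values are invented.
FAN_CURVE = [(30, 25), (50, 35), (70, 55), (80, 75), (90, 100)]

def duty_cycle_for(temp_c, curve=FAN_CURVE):
    """Linearly interpolate fan duty cycle (%) for a given GPU temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

# Example: duty_cycle_for(75) -> 65.0
```

A steeper curve trades noise for lower temperatures, which in turn gives GPU Boost 2.0 more headroom under its thermal target.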
Troubleshooting Overclocking
Back when Nvidia briefed me on GeForce GTX Titan, company reps showed me an internal tool able to read the status of various sensors, which made it possible to diagnose problematic behavior. If an overclock was pushing GK110’s temperature too high, causing a throttle response, it’d log that information.
The company now enables that functionality in apps like Precision X, triggering a “reasons” flag when you cross one of the boundaries standing in the way of a more effective overclock. This is very cool; you’re no longer left guessing about bottlenecks. There’s also an OV max limit readout that tells you when you’re pushing the GPU’s absolute peak voltage. If that flag pops, Nvidia says you risk frying your card. Consider it a good place to back off your overclocking effort.
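If you’d rather read limiter information programmatically, the driver exposes something similar through NVML’s clocks-throttle-reasons query, which the pynvml Python bindings wrap. A short sketch (assuming a reasonably recent driver and pynvml installed; the constant names follow the NVML header, so verify them against your pynvml version):

```python
# Sketch: read the driver's clock-limiter bitmask the way monitoring tools do,
# via NVML's clocks-throttle-reasons query (pip install pynvml).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

FLAGS = {
    pynvml.nvmlClocksThrottleReasonGpuIdle: "GPU idle",
    pynvml.nvmlClocksThrottleReasonSwPowerCap: "software power cap (power target)",
    pynvml.nvmlClocksThrottleReasonHwSlowdown: "hardware slowdown (thermal/power brake)",
    pynvml.nvmlClocksThrottleReasonApplicationsClocksSetting: "application clocks setting",
}

active = [name for bit, name in FLAGS.items() if reasons & bit] or ["none"]
print("Current clock limiters:", ", ".join(active))
pynvml.nvmlShutdown()
```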
Of course, one could argue that the closer you get to the high end, the smaller the performance gains and the worse the price-to-performance ratio. Still, in the past three or four years (give or take), the second-fastest GPU has never been this close to the flagship. The gap is usually significant enough that the highest-end GPU (the GTX x80) still has its place.
Tl;dr: The GTX Titan was released to make the GTX 780 look incredibly good. People (especially on the internet) will spread the word that the GTX 780’s $650 launch price is good and reasonable, and people who never bothered reading reviews or benchmarks will take their word for it and pay the premium for the GTX 780.
Nvidia is taking a different route to compete with AMD; one could even say they’re not trying to compete with AMD on price/performance at all (at least for the high-end products).
That’s a pretty bad analogy. A GPU still runs smoothly with some of its cores/VRAM/etc. disabled; it doesn’t increase latency, frame times, and so on.