Power Consumption And Heat
Here’s where things get dicey. I knew going into this story that the GeForce GTX 480 and 470 would be hot, power-hungry boards—Nvidia told me as much back in January. But quantifying just how hot and how power-hungry is an inexact science, at best.
FurMark is generally frowned upon as an unrealistic representation of peak power (a power virus, as AMD’s Dave Baumann puts it). However, it does serve as a theoretical worst-case scenario. Indeed, while the GeForce GTX 480 doesn’t use as much power as the dual-GPU GeForce GTX 295, it does out-consume the dual-GPU Radeon HD 5970 (no small feat, at 450W system power draw). Also, the GeForce GTX 470 uses significantly more power than the Radeon HD 5870.
Notably missing from the chart is ATI’s Radeon HD 4870 X2, which we know from past reviews chews up about as much power as Nvidia’s GeForce GTX 295. However, neither of the X2s in the lab seems to respond to FurMark anymore, running at a constant 13 frames per second or so and drawing only slightly-higher-than-idle power. Maybe ATI “cured” that virus by forcing lower clocks in FurMark (while our other cards were pushing 40 or 50 FPS, the X2 couldn’t break that 13 FPS mark). That doesn’t keep the X2 from jumping into the 400+ watt range of system power draw in actual games, though.
Heat jumps in FurMark as well, though it’s worth noting that none of these boards encountered heat-related stability issues. Keeping up with the thermals does mean the GeForce GTX 480’s fan ramps up fairly aggressively, generating quite a bit of noise in the process. However, we were unable to replicate that behavior under any real-world gaming load.
The idle power numbers, as measured, actually surprised us a bit. AMD impressed us with the Radeon HD 5870’s 27W idle board rating, achieved in part by clocking its GPU down to 157 MHz and its GDDR5 memory to 300 MHz. Nvidia goes even further, dropping clocks to 50 MHz on the core, 67 MHz on the memory (a 270 MT/s data rate), and 100 MHz on the shaders. Nvidia doesn’t cite an idle board power figure, but an educated guess would still put the GeForce GTX 480 around 60W at those frequencies.
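For anyone checking the arithmetic, GDDR5 moves four data transfers per command-clock cycle, so the 270 MT/s figure implies an unrounded base clock of about 67.5 MHz. Here’s a quick back-of-the-envelope check (a minimal sketch; the variable names are ours, and the values are simply the ones quoted above):

```python
# GDDR5 performs four data transfers per command-clock cycle, so the
# effective data rate is 4x the base memory clock.
idle_memory_clock_mhz = 67.5   # the "67 MHz" idle clock above, unrounded
transfers_per_clock = 4        # GDDR5 data-rate multiplier

print(idle_memory_clock_mhz * transfers_per_clock)  # -> 270.0 MT/s
```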
Just how do the GeForce GTX 480 and 470 stack up under that kind of real-world load? Great question.
I ran all of the DirectX 11 cards in our story through the Unigine Heaven v2.0 benchmark, measuring average performance in frames per second. During each run, I had the configuration hooked up to a logger polling power consumption every two seconds, yielding an average power draw over the run. Dividing average frame rate by average power draw gives a performance-per-watt index that should give efficiency advocates something to think about.
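To make that calculation concrete, here’s a minimal sketch in Python. The function name and the sample numbers are hypothetical and purely illustrative; this is not the logger’s software, nor our measured data.

```python
def efficiency_index(power_samples_w, avg_fps):
    """Return frames per second per watt for one benchmark run.

    power_samples_w -- system power readings in watts, polled every two seconds
    avg_fps         -- average frame rate reported by the benchmark
    """
    avg_power_w = sum(power_samples_w) / len(power_samples_w)
    return avg_fps / avg_power_w

# Illustrative numbers only, not measured data:
samples = [412, 425, 431, 419, 428]   # watts, one reading every two seconds
print(round(efficiency_index(samples, 38.2), 3))   # -> 0.09 FPS per watt
```

A higher index simply means more frames delivered per watt of system power, which is how the cards end up ranked in the efficiency results.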
Despite its aggressive power use, the Radeon HD 5970’s performance is enough to make it the most efficient board in the lineup, followed by the Radeon HD 5850 and Radeon HD 5870. Nvidia’s GeForce GTX 480 and GTX 470 pull up the rear. What I’m wondering is this: Nvidia rates the GTX 480 at a 250W maximum board power, while AMD cites 294W for the 5970. So why do we keep seeing Nvidia’s card draw more system power?
Admittedly, these results are easily skewed; we could drive them one way or the other by hand-picking settings that cater to one architecture’s strengths or the other’s. For example, turning off tessellation would give the Radeons a sizable advantage, since they take a more substantial hit with the feature enabled. In fact, just a couple of days before the launch, Unigine released v2.0 of its Heaven test, which puts even more emphasis on tessellation than the first revision we used to generate our first batch of results. Thus, our numbers represent a best-case scenario for Nvidia; easing up on the tessellation load shifts the efficiency index even further in favor of the Radeon HD 5800-series cards, and we have charts demonstrating that, too.