We test all cards at all load levels in a 72°F room. All cards are moved to the testing room at least one hour before the test to make sure they are at room temperature when testing begins. After powering on the test rig, we wait 15 minutes at idle load for temperatures to stabilize before we start measuring. We log the temperatures using MSI Afterburner.
To measure the temperature under load, we use a Bitcoin mining application (GPGPU) or, if the card is incompatible, a pre-programmed Perlin noise loop from 3DMark Vantage. This typically generates very high, FurMark-like loads, but, unlike with FurMark, the drivers don't throttle the performance (and, indirectly, the power draw). Thus, the temperatures we see are worst-case, non-throttled temperatures.
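The key property of such a workload is that it runs a tight, fixed-duration compute loop with no pauses for rendering or scene changes. The sketch below is only a CPU-side stand-in for that idea, not the GPGPU tools the article uses; the arithmetic and duration are arbitrary illustrations.

```python
import time

def stress_loop(duration_s):
    """Run a tight floating-point loop for duration_s seconds: a
    CPU-side stand-in for the sustained, unthrottled compute load
    generated on the GPU by a mining kernel or a Perlin noise loop."""
    end = time.monotonic() + duration_s
    x, iterations = 1.0001, 0
    while time.monotonic() < end:
        # Repeated multiply-and-wrap keeps execution units busy
        # while keeping the value bounded (no overflow to inf).
        x = (x * x) % 1.7
        iterations += 1
    return iterations

# e.g. stress_loop(600) to hold full load for a 10-minute window
```

A real GPU stress kernel works the same way in spirit: saturate the execution units for a known duration, then read the peak logged temperature.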
Why don’t we just measure the card's temperature in a demanding game? The answer is simple: even a demanding game can produce an inconsistent load if we aren't diligent about recording the result at the same point every time. Some games also generate extremely high frame rates while displaying menus, and the resulting power draw may not just match, but actually exceed, the power draw seen in a Metro 2033 loop, even approaching that of a Perlin noise loop. With GPGPU, you know how hot the card will get in a worst-case scenario. Moreover, the number of real-world GPGPU applications is increasing, so these extreme load numbers will only become more relevant in the future.
We conduct the temperature measurements at the same time as the power measurements, which are discussed on the next page.
I agree. I know Tom's spends a lot of time benchmarking, but Folding@home is something that is a bit more common. I would love to see F@H in some articles.
BTW, I appreciate all the work you guys do.
The 5760x1080 resolution will also push the GPUs harder than 2560x1440/1600 could, so why limit the resolution there?
So what would YOU like to see used then? If they were trying to push Nvidia, wouldn't HAWX 2 be in the suite?