The Myths Of Graphics Card Performance: Debunked, Part 1

Thermal Management In A Modern Graphics Card

Modern graphics cards from both AMD and Nvidia employ protection mechanisms that ramp up fan speeds and, if the GPU still gets too hot, eventually throttle back clock rates and voltages. This technology isn't there to keep your system stable (particularly when you're overclocking); rather, it's meant to keep the hardware from getting damaged. So it's not unheard of for an over-tuned card to crash, requiring a reset.

There has been much debate about how hot is too hot for a GPU. However, higher temperatures, if the hardware tolerates them, are actually desirable, since a larger difference between GPU and ambient temperature means a given cooler can transfer more heat. At least from a technical perspective, AMD's frustration over reactions to the Hawaii GPU's thermal ceiling is understandable. I'm not aware of any long-term studies speaking to the viability of given temperature set points; aside from my own experiences with device stability, I have to rely on manufacturer specifications.
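
As a rough, first-order illustration: the heat a given cooler can move per unit of time scales with the difference between GPU and ambient temperature (Q ≈ h · A · ΔT, with h and A fixed by the cooler). In a 25 °C room, a GPU allowed to sit at 95 °C works with a 70 K difference versus 55 K at 80 °C, so the same cooler can shed on the order of 25% more heat at the higher set point. (The temperatures here are just example numbers, not recommendations.)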

On the other hand, it is a well-known fact that silicon transistors broadly perform better at lower temperatures. That is the main reason you see competitive overclockers using liquid nitrogen to get the chips they're testing as cold as possible. In general, lower temperatures translate into more overclocking headroom.

Some of the most power-hungry cards in the world are the Radeon HD 7990 (375 W TDP) and GeForce GTX 690 (300 W TDP). Both are dual-GPU cards. Single-GPU boards tend to be quite a bit lower, though the Radeon R9 290-series cards creep up closer to 300 W. In either case, that's a lot of heat to dissipate.

Volumes have been written about graphics card cooling, so we won't delve into that here. Rather, we're interested in what actually happens when you begin applying load to a modern GPU (a simplified sketch of this control loop follows the list below):

  1. You launch a processing-intensive application like a 3D game or your favorite bitcoin miner
  2. The card's clock rates increase to their nominal/boost values; the board starts warming up as it draws more power
  3. Fan speed progressively rises, up to a point defined by firmware; usually it'll taper off when acoustics approach 50 dB(A)
  4. If the programmed fan speed isn't enough to keep the GPU's temperature below a certain level, clock rates scale back until the temperature falls below the set threshold
  5. Your card should operate stably within a relatively narrow frequency and temperature range until the application driving the load is shut down
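
In rough pseudocode, that loop behaves something like the sketch below. This is only a minimal illustration of the behavior described above, not real firmware; every name, threshold, and step size in it is an assumption chosen for readability.

    // Hypothetical sketch of the fan/clock regulation loop described above.
    // All names, thresholds, and step sizes are illustrative, not real firmware values.
    #include <algorithm>

    struct GpuState {
        double tempC;       // current GPU temperature, degrees Celsius
        double fanPercent;  // current fan duty cycle, 0-100
        double clockMHz;    // current core clock
    };

    // Called periodically by the (hypothetical) firmware, a few times per second.
    void regulate(GpuState& s,
                  double baseClockMHz, double boostClockMHz,
                  double fanCapPercent, double throttleTempC)
    {
        // Step 3: fan speed tracks temperature up to the acoustic cap set by firmware
        // (a toy linear mapping of 1% of duty cycle per degree Celsius is used here).
        s.fanPercent = std::min(fanCapPercent, s.tempC);

        if (s.tempC > throttleTempC) {
            // Step 4: the capped fan isn't enough, so back the clock off one bin.
            s.clockMHz = std::max(baseClockMHz, s.clockMHz - 13.0);
        } else if (s.clockMHz < boostClockMHz) {
            // Step 5: temperature is under control, so creep back toward the boost clock.
            s.clockMHz = std::min(boostClockMHz, s.clockMHz + 13.0);
        }
    }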

As you can imagine, the exact thermal throttling point depends on many factors, including the specific load, the enclosure's airflow, the ambient air temperature, and even ambient air pressure. That's why cards throttle at different times, or not at all. This thermal throttling point can be used to define a reference level of performance. And if we set a card's fan speed (and thus its noise level) manually, we can measure performance at a fixed noise level instead. What use is that? Let's find out...

135 comments
  • manwell999
    The info on V-Sync causing frame rate halving is out of date by about a decade. With multithreading, the game can work on the next frame while the previous frame is waiting for V-Sync. Just look at BF3: with V-Sync on you get a continuous range of FPS under 60, not just integer multiples. DirectX doesn't support triple buffering.
  • ingtar33
    awesome article, looking forward to the next half.
  • blackmagnum
    Myth #123: Gamers are lonely boys in Mother's dark basement or attic...
  • AlexSmith96
    Great Article! I love you guys for coming up with such a nice idea.
  • hansrotec
    With overclocking, are you going to cover water cooling? It would seem disingenuous to dismiss overclocking based on a generation of cards designed to run up to a given speed only if there is headroom, and not include water cooling, which reduces noise and temperature. My 7970 (pre GHz edition) is a whole different card water cooled vs air cooled: 1150 MHz without having to mess with the voltage, with temps in the 50s (°C) without the fans or pump ever kicking up, whereas on air that would be in the upper 70s to lower 80s and really loud. On top of that, tweaking memory incorrectly can lower frame rate.
  • hansrotec
    I thought my last comment might have seemed too negative, and I did not mean it in that light. I did enjoy the read, and look forward to more!
  • noobzilla771
    Nice article! I would like to know more about overclocking, specifically core clock and memory clock ratio. Does it matter to keep a certain ratio between the two or can I overclock either as much as I want? Thanks!
  • chimera201
    I can never win over input latency no matter what hardware I buy because of my shitty ISP
  • immanuel_aj
    I'd just like to mention that the dB(A) scale attempts to correct for perceived human hearing. While it is true that 20 dB represents ten times the sound intensity of 10 dB, because of the way our ears work it only seems about twice as loud. At least, that's the way the A-weighting is supposed to work. Apparently there are a few kinks...
  • FunSurfer
    On Page 3: "In the image below" should be "In the image above"
  • Formata
    "Performance Envelope" = GeniusNice work Filippo
  • beetlejuicegr
    I just want to mention that dB is one thing; the health of the GPU over time is another. In many cases I have seen graphics cards going up to 90 °C before the default ATI/Nvidia driver starts to throttle down. I prefer a 50-70 °C scenario
  • cats_Paw
    Awesometacular article. Not only is it a new standard for GPU performance, but the Human Benchmark and audio test were really fun! I'm normally very critical of Tom's articles because many times they feel a bit weak, but this one? 10/10
  • ubercake
    What's up with Precision X? It seems like they would update it every couple of months, and now there hasn't been an update since last June or July? Is EVGA getting out of the utility software business?
  • kzaske
    It's been a long time since Tom's Hardware had such a good article. Very informative and easy to read. Thank you!
  • ddpruitt
    Very good article even though there are some technical errors. I look forward to seeing the second half! I would also be interested in seeing some detailed comparisons of the same cards with different amounts and types of VRAM, and of different case types, to gauge the overall impact on performance.
  • Jaroslav Jandek
    Quote:
    The info on V-Sync causing frame rate halving is out of date by about a decade. With multithreading, the game can work on the next frame while the previous frame is waiting for V-Sync. Just look at BF3: with V-Sync on you get a continuous range of FPS under 60, not just integer multiples. DirectX doesn't support triple buffering.
    The behavior of V-Sync is implementation-specific (GPU drivers/engine). By using render ahead, swap chains, Adaptive V-Sync, etc., you can avoid frame halving.

    DirectX DOES support TB by using DXGI_SWAP_CHAIN_DESC.BufferCount = 3; (or D3DPRESENT_PARAMETERS.BackBufferCount = 2; for DX9). It actually supports more than triple buffering - Direct3D 9Ex (Vista+'s WDDM) supports 30 buffers.
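
    For example, in Direct3D 11 terms that looks roughly like the sketch below (the struct and BufferCount field are the real DXGI API; the window, resolution, and format values are just illustrative placeholders):

        // Sketch: asking DXGI for three buffers (triple buffering).
        // Resolution, format, and window below are placeholders.
        #include <windows.h>
        #include <dxgi.h>
        #include <cstring>

        DXGI_SWAP_CHAIN_DESC DescribeTripleBufferedSwapChain(HWND window)
        {
            DXGI_SWAP_CHAIN_DESC desc;
            std::memset(&desc, 0, sizeof(desc));

            desc.BufferDesc.Width  = 1920;
            desc.BufferDesc.Height = 1080;
            desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.SampleDesc.Count  = 1;                            // no MSAA
            desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            desc.BufferCount       = 3;                            // the triple-buffering part
            desc.OutputWindow      = window;
            desc.Windowed          = TRUE;
            desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;

            // This desc would then be passed to D3D11CreateDeviceAndSwapChain
            // (or IDXGIFactory::CreateSwapChain) when creating the swap chain.
            return desc;
        }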
  • Adroid
    I would love to see a Tom's article debunking the 2GB vs 4GB graphics card race. For instance, people spam the Tom's forum daily giving advice to buy the 4GB GTX 770 over the 2GB. Truth is, the 4GB costs $50 more and offers NO benefit over the 2GB. Even worse, I see people buying/suggesting the 4GB 760 over a 2GB 770 (which runs only $30 more and is worth every penny). I am also curious about the 4GB 770 SLI scenario. From everything I have seen, even in SLI the 4GB offers no real-world benefit (with the exception of MAYBE a few frames per second more in three-monitor scenarios, but the rates are unplayable regardless, so the gain is negligible). The other myth is that the 4GB 770 is more "future proof". Give me a break. GPU and future proof do not belong in the same sentence. Further, if they were going to be "future proof" they would be "now proof". There are games that are plenty demanding enough to show the advantage of 2GB vs 4GB - and they simply don't. It's tiring seeing people giving shoddy advice all over the net. I wish a reputable website (Tom's) would settle it once and for all. In my opinion, the extra 2GB of RAM isn't going to make a tangible difference unless the GPU architecture changes...
  • ubercake
    DisplayLag.com lists 120Hz and 240Hz HDTVs amongst the monitors, but the maximum input speed for the HDTVs' inputs equates to 60fps? Or am I missing something? If I buy a 240Hz refresh TV, that's output. It processes the 60Hz signal to transform it to a 240Hz output (usually through some form of frame duplication) to minimize motion blur. Does this displayLag.com site mentioned in the article compare apples to oranges by listing HDTVs alongside monitors as if they operate the same way, or am I way off here?
  • ubercake
    735169 said:
    This article is more wrong, than right....... Seriously who let's this stuff get posted??


    How so? Most of the concepts explained make sense and are consistent with everything I've ever learned and observed.
  • ElMoIsEviL
    Let me guess... the R9 290x was the stock card with the stock cooler... no wonder it won't overclock. This idea/notion that Titan has more headroom is also bullshit. The reason your R9 290x has no extra headroom is because it is blocked thermally from achieving better results. Folks with stock reference coolers tend to be users of AMD Crossfire. Most folks who purchase standalone cards purchase cards with 3rd party coolers.
  • lowenz
    Quote:
    DirectX DOES support TB by using DXGI_SWAP_CHAIN_DESC.BufferCount = 3; (or D3DPRESENT_PARAMETERS.BackBufferCount = 2; for DX9). It actually supports more than triple buffering - Direct3D 9Ex (Vista+'s WDDM) supports 30 buffers.

    +1

    *http://gamedev.stackexchange.com/questions/58481/does-directx-implement-triple-buffering

    *http://www.gamedev.net/topic/649174-why-directx-doesnt-implement-triple-buffering/
  • houldendub
    Quote:
    I would love to see a Tom's article debunking the 2GB vs 4GB graphics card race. For instance, people spam the Tom's forum daily giving advice to buy the 4GB GTX 770 over the 2GB. Truth is, the 4GB costs $50 more and offers NO benefit over the 2GB. Even worse, I see people buying/suggesting the 4GB 760 over a 2GB 770 (which runs only $30 more and is worth every penny). I am also curious about the 4GB 770 SLI scenario. From everything I have seen, even in SLI the 4GB offers no real-world benefit (with the exception of MAYBE a few frames per second more in three-monitor scenarios, but the rates are unplayable regardless, so the gain is negligible). The other myth is that the 4GB 770 is more "future proof". Give me a break. GPU and future proof do not belong in the same sentence. Further, if they were going to be "future proof" they would be "now proof". There are games that are plenty demanding enough to show the advantage of 2GB vs 4GB - and they simply don't. It's tiring seeing people giving shoddy advice all over the net. I wish a reputable website (Tom's) would settle it once and for all. In my opinion, the extra 2GB of RAM isn't going to make a tangible difference unless the GPU architecture changes...
    Games are currently going through a transitional period due to the new consoles coming out. Those consoles have an awesome 8GB of shared RAM (about 5GB used for games), which developers are going to start using more and more. Games are already pushing the limits with settings turned up (BF4 gives off loads of VRAM problems when using a 2GB card on the Ultra preset), so as more and more demanding games come out on these consoles, 2GB just ain't gonna cut it. I'm fine with that because I upgrade my cards as soon as a newer generation comes out, but for people holding onto cards for 2, 3 or 4 years at a time, being told to invest in the extra VRAM, when 1440p is slowly becoming the new standard and texture resolutions themselves are going up, isn't that bad of an idea really. But hey, I'll happily answer your questions when you come back in a couple of years' time asking why you're having massive juddering playing intensive games ;)