What is too hot for a gpu?

jrmurph3

Specifically for an EVGA GTX 660 Ti Superclocked. I'm not experiencing any problems, just curious.

So, what is too hot for this card or for most cards in general?
 

Limerick

If it's maintaining a prolonged temperature of 100+ °C it may get damaged. GPUs can take quite a bit of heat before reaching the point where, if they stay there, they'll burn out.
 
It's actually a MYTH that cards can handle more and more heat because they are more efficient.

Look at the new i5-3570K as an example. It can't handle as high a temperature as the 2500K. WHY? It's because the DENSITY of the transistors has fallen. The card uses power more efficiently and therefore generates LESS HEAT itself, but the maximum heat it can handle is LOWER as die sizes shrink.

So you can't point to one particular temperature that applies to every card.

Chances are, if you aren't getting any artifacts on your screen then your card should be okay. Many of the problems happen when OVERVOLTING as well.

Don't forget that good case cooling is important. At least ONE front case fan and ONE top-rear case fan. I have a GTX680 so have two front and two rear (very slow, quiet fans).

As for the EXACT TEMPERATURE? As I said, it varies and everyone seems to have a different opinion. My advice is to MONITOR the GPU temp under load in your most demanding game, such as BF3, and when overclocking only let the temperature climb a few degrees above that (unless you get issues).
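
If you'd rather log the temperature than eyeball it, here's a minimal sketch in Python, assuming the nvidia-ml-py (pynvml) bindings are installed; the device index and poll interval are just example values.

# Minimal GPU temperature logger (a sketch; assumes nvidia-ml-py / pynvml is installed)
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; change the index on multi-GPU systems

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print("GPU 0 core temperature:", temp, "C")
        time.sleep(2)  # poll every couple of seconds while the game is running
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()

Run it in a second window during a gaming session and note the highest value it prints.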

Since you already have a "superclocked" version you might not get a lot more out of it. I don't know if I'd even try.

*GPU BOOST:
This completely changes how frequency and heat are handled as well and is too long to discuss here. Basically, the frequency will increase if the temperature isn't too high. You can raise the base clock by overclocking, but because the card can already vary the frequency according to heat, there's no guarantee performance will change.

**The GOOD NEWS is that the card is far more difficult to damage. It has protection circuitry to force the frequency far below the normal gaming base clock (down to your idle clock) if the temperature approaches the DAMAGE POINT.
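
Roughly, the boost/throttle behaviour works like the toy model below. Every number in it is made up for illustration; it is not NVIDIA's actual GPU Boost algorithm, just the general idea of clocks falling as temperature rises.

# Toy model of temperature-driven boost/throttle behaviour (all figures hypothetical).
BASE_CLOCK = 980    # MHz, advertised gaming base clock
MAX_BOOST = 1110    # MHz, highest boost bin
IDLE_CLOCK = 324    # MHz, 2D/idle clock

BOOST_TEMP_LIMIT = 70   # C, below this the card keeps boosting
DAMAGE_POINT = 98       # C, protection kicks in before real damage

def effective_clock(temp_c):
    if temp_c >= DAMAGE_POINT:
        return IDLE_CLOCK   # hard throttle far below the base clock
    if temp_c < BOOST_TEMP_LIMIT:
        return MAX_BOOST    # plenty of thermal headroom, full boost
    # between the boost limit and the damage point, back off toward the base clock
    frac = (DAMAGE_POINT - temp_c) / (DAMAGE_POINT - BOOST_TEMP_LIMIT)
    return int(BASE_CLOCK + frac * (MAX_BOOST - BASE_CLOCK))

for t in (60, 75, 90, 99):
    print(t, "C ->", effective_clock(t), "MHz")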
 

The density of transistors is higher in the Core i5-3570k than the 2500k, not lower. Plus the main reason for the limited overclocking performance of the 3570k is the low-quality thermal compound between the silicon and the IHS.
Higher transistor density means you get the same or slightly less heat output in a smaller volume (with a smaller surface area), so it can be harder to get rid of the heat even though the amount generated is the same or lower. But this doesn't affect what temperatures the CPU can handle, just how much power it can draw while staying cool enough.
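
To put rough numbers on that, here's a quick back-of-the-envelope comparison in Python. The die areas and TDPs are approximate published figures, so treat them as ballpark values only.

# Back-of-the-envelope power density comparison (approximate published figures).
chips = {
    "i5-2500K (Sandy Bridge, 32nm)": {"tdp_w": 95, "die_mm2": 216},
    "i5-3570K (Ivy Bridge, 22nm)": {"tdp_w": 77, "die_mm2": 160},
}

for name, c in chips.items():
    density = c["tdp_w"] / c["die_mm2"]   # watts per square millimetre
    print(f"{name}: {c['tdp_w']} W over {c['die_mm2']} mm^2 = {density:.2f} W/mm^2")

# Total heat drops (95 W -> 77 W), but heat per unit of die area goes slightly UP,
# which is why the smaller die can be harder to keep at the same temperature.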

As for not overclocking a "superclocked" card, I have a Black Edition graphics card that I overclocked by 12% past its factory overclock, so just because a card is already factory overclocked it doesn't mean there'll necessarily be very little room left for a manual overclock. It depends on the card though, and IIRC the GTX 660 Ti doesn't overclock all that well. So you may in fact be right in this case.
 
It all depends on the sample: some that have very poor quality bonding will act up even as low as 70°C, while others will survive going over 110°C for short periods of time. Working to keep the load temps around 70°C is a good guideline for helping your card last for several years.
 


I meant "higher" density. Thanks for the catch. You are also correct on the thermal paste issue. However, die size is also a contributor to the inability to handle higher temps.
http://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)#Heat_issue_when_overclocked

As for overclocking, I think it's just a simple matter of experimentation.

GPU BOOST changes the way people need to think about overclocking though. Normally, a card has a preset frequency (say 1000MHz), so a 10% overclock can give a 10% FPS boost in GPU-limited games. However, GPU BOOST can vary the frequency slightly, so if you overclock and drive the temp too high it might just DROP the frequency slightly and give you the SAME performance you got before the overclock.

NVIDIA warns that overvolting to raise the frequency even higher is often counter-productive because the heat rises and GPU BOOST drops the frequency (and thus performance), in some cases even lower than what could have been achieved by a simple overclock.
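
As a rough illustration of why a manual overclock can get eaten by the boost behaviour, here's a tiny Python example; every clock, temperature and FPS figure in it is hypothetical.

# Hypothetical illustration: a manual overclock cancelled out by boost throttling.
def fps_at(clock_mhz, baseline_clock=1000, baseline_fps=60):
    # In a GPU-limited game, FPS scales roughly linearly with core clock.
    return baseline_fps * clock_mhz / baseline_clock

# Stock card: boosts to 1050 MHz and stays cool enough to hold it.
print("stock:      ", round(fps_at(1050)), "fps")

# +10% base clock, but the extra heat makes boost back off to ~1060 MHz anyway.
print("overclocked:", round(fps_at(1060)), "fps")

# Old fixed-clock behaviour for comparison: 1000 -> 1100 MHz really was +10% FPS.
print("fixed clock:", round(fps_at(1100)), "fps")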

Cheers.
 

Fair enough. However, I'd like to reiterate that the temperature the CPU can take is not what's in play here. It's the ability to get the heat away from the CPU and keep the same temperatures as you'd see with a Sandy Bridge chip.
 

jrmurph3

Wow! Thanks for all the information. I actually learned quite a bit from this. I've been playing Borderlands 2 lately and was noticing my temperatures for my CPU and my GPU, and was just curious about what was actually considered bad for temp ranges for my new GPU.

I've been getting around 71°C max on my GPU and 53°C on my CPU. According to the thread, I'm in a safe range of temperatures, but as far as I know, cooler usually means better in terms of reliability and longevity. I am using a Coolermaster HAF 922 case and it has an option for an additional 200mm fan on the side of the case along with the 3 that came stock. Do you think that would really cool down my components a few degrees, or is it something not to worry about?
 

jrmurph3



I have one 200mm fan in the front for intake, one 200mm fan on the top for exhaust and one 120mm fan in the top rear for exhaust. I planned on buying another 200mm fan for the side of the case for intake.
 

jrmurph3



Very helpful, scout_03, as that is what I had in mind after doing more research.