GArrigotti

Honorable
Apr 24, 2012
108
0
10,710
Hello everyone,

I've got a GTX 580 currently overclocked to 975 MHz with a 5,000 MHz memory clock. Is that about average? Above average? Below average? I'm trying to gauge it, because even though it's overclocked, the gains seem minimal compared to 900 MHz and a 4,500 MHz memory clock.

What would your opinions be on the matter? Is it truly worth the extra ten to fifteen degrees under load? The performance per watt barely increases... at least in the few computer games I play.
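To make the perf-per-watt comparison concrete, here's a minimal sketch of the math I'm doing. The fps and wattage figures are hypothetical placeholders (244 W is just the card's stock TDP), so you'd plug in readings from your own benchmark runs and a power meter:

```python
# Rough performance-per-watt comparison between the two profiles.
# The fps and wattage numbers are hypothetical placeholders; substitute
# readings from your own benchmark runs and a power meter.

profiles = {
    "900 MHz / 4500 MHz memory": {"fps": 60.0, "watts": 244.0},  # assumed
    "975 MHz / 5000 MHz memory": {"fps": 63.0, "watts": 290.0},  # assumed
}

for name, p in profiles.items():
    print(f"{name}: {p['fps'] / p['watts']:.3f} fps per watt")
```

With numbers anything like those, the higher profile actually delivers fewer fps per watt, which is what I seem to be seeing.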
 
Solution

Chainzsaw

Hello,

I wouldn't bother overclocking at all. You still have a decently powerful GPU, and you will shorten its lifespan with the overclock plus all the extra heat.

You are pretty much limited by that GPU's architecture anyway, no matter how much you overclock it.

rubix_1011

Contributing Writer
Moderator
Is it truly worth the extra ten to fifteen degrees under load?

No. The added voltage needed to run the card at the OC specs is causing this spike. Is this jump in Celsius or Fahrenheit? (I will assume Celsius, as that is what most of us use for component temps and it is the international standard.)

If your gains are minuscule at best and you are seeing temp spikes that high over stock, this is definitely not worth the trouble. Most graphics cards have a sweet spot where they perform best; beyond it they simply generate more heat and provide very little performance gain in return.
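One way to find that sweet spot is to log average fps and load temperature at each clock step, then look at the marginal fps gained per extra degree. A minimal sketch, with made-up sample points rather than real GTX 580 measurements:

```python
# Locate the "sweet spot" by computing the marginal fps gained per extra
# degree at each clock step. The sample points are hypothetical, not
# measurements from a real card.

samples = [  # (core MHz, avg fps, load temp C) - invented for illustration
    (772, 55.0, 72),
    (860, 59.0, 76),
    (900, 61.0, 79),
    (975, 62.0, 92),
]

for (c0, f0, t0), (c1, f1, t1) in zip(samples, samples[1:]):
    gain = f1 - f0
    heat = t1 - t0
    print(f"{c0}->{c1} MHz: +{gain:.1f} fps for +{heat} C "
          f"({gain / heat:.2f} fps per degree)")
```

Once the fps-per-degree number collapses the way the last step does here, you are past the sweet spot.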
 

GArrigotti

Honorable
Apr 24, 2012
108
0
10,710
Thank you for the swift replies.

Chainzsaw and Rubix, that is sort of what I figured.

I was hoping to get a few comparisons with where other people have ended up with their GTX 580s. My friend can't break 860 MHz and a 4,008 MHz memory clock; it crashes shortly after reporting that the Nvidia driver has failed. I've already exceeded that drastically, but as I said, past 900 MHz and a 4,500 MHz memory clock the boost is very limited.

Yes, I was using Celsius, as it is the international standard.

But to touch on Chainzsaw's remark:
You are pretty much limited by that GPU's architecture anyway, no matter how much you overclock it.

Is there any documentation where I could read up on those details? I'd think that raising the core, shader, and memory clocks would always let the card compute faster, while the extra heat might just become a problem or cause instability if pushed too far.

An example would be a highway: a four-lane highway can handle far more traffic than a two-lane one. If I added two more lanes, wouldn't a six-lane highway always handle more traffic than a four-lane one? It seems weird that the Fermi architecture would have a limitation like that.
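For what it's worth, the highway analogy does map onto memory bandwidth, which is one of the few numbers you can compute directly: bus width (the lanes) times the effective data rate. A quick sketch for the GTX 580's 384-bit GDDR5 bus:

```python
# Theoretical memory bandwidth = bus width (bytes) x effective data rate.
# The GTX 580 has a 384-bit GDDR5 bus; 4008 MHz effective is its stock
# memory clock, and 4500/5000 are the OC points discussed in this thread.

BUS_WIDTH_BITS = 384  # GTX 580 memory bus width

def bandwidth_gbps(effective_mhz):
    """Theoretical bandwidth in GB/s for a given effective memory clock."""
    return BUS_WIDTH_BITS / 8 * effective_mhz * 1e6 / 1e9

for clock in (4008, 4500, 5000):
    print(f"{clock} MHz effective: {bandwidth_gbps(clock):.1f} GB/s")
```

The catch is that the card's 512 shader cores, caches, and scheduler are fixed in silicon. Once they can no longer consume the extra bandwidth (or run any faster at a stable voltage), widening the pipe stops helping, which is the architectural ceiling Chainzsaw is referring to.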
 

bigj1985

Distinguished
Mar 12, 2010
331
0
18,810
If you're jumping 15°C from a measly 75 MHz increase, I would say hell no. You're just adding extra heat and stress to your card, and the bad part is you won't notice any performance improvement outside the margin of error.
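That margin-of-error point is easy to check directly: run the same benchmark several times at each clock and compare the fps delta against the run-to-run noise. A minimal sketch with invented run numbers:

```python
# Compare the average fps gain from an overclock against run-to-run noise.
# The run lists below are invented for illustration; use your own benchmark
# results at each clock setting.

import statistics

stock_runs = [60.1, 59.6, 60.4, 59.9]  # hypothetical fps at 900 MHz
oc_runs = [61.0, 60.2, 60.7, 61.3]     # hypothetical fps at 975 MHz

delta = statistics.mean(oc_runs) - statistics.mean(stock_runs)
noise = statistics.stdev(stock_runs) + statistics.stdev(oc_runs)

print(f"average gain: {delta:.2f} fps, combined noise: {noise:.2f} fps")
if delta <= noise:
    print("gain is within the noise - not a meaningful improvement")
```

If the delta doesn't clear the noise, the overclock isn't buying you anything you could actually see in-game.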
 
