There are no good core/shader/memory OC guides

t33lo

Distinguished
Jul 15, 2005
Or at least I haven't seen any. I am wondering what the proper way is to OC my EVGA GTX 275 FTW. The first method I tried was to set the fan to 100%, leave the shader and memory at default, and then boost the core until it got unstable. Using GPUTool, I got the core stable at 750MHz with no artifacts reported for over 30 minutes. Next I kept the core and memory at default (713/1260) and boosted the shader as high as it would go. I got to 1630MHz; beyond that, artifacts were reported. Finally, I was able to get the memory up to 1300MHz without any artifacts. Once I had my max clocks, I set them all together (750/1630/1300). GPUTool then ran for over 30 minutes with no artifacts reported, at 85fps and a stable temperature of 81C.
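In case it helps anyone follow what I did, here is the procedure written out as a rough sketch. The set_clocks() and passes_stress_test() helpers are just placeholders for setting the clocks in your OC tool and running a ~30 minute GPUTool artifact scan, and the step sizes and the 1584MHz stock shader clock are my assumptions, not something from the card's spec sheet:

```python
# Rough sketch only: raise one clock domain at a time with the other two at
# stock, and keep the highest value that still passes an artifact scan.
# set_clocks() and passes_stress_test() are hypothetical placeholders for
# whatever OC tool and stress test you actually use.

DEFAULTS = {"core": 713, "shader": 1584, "mem": 1260}  # 713/1260 from above; 1584 shader is an assumed stock value
STEP = {"core": 5, "shader": 10, "mem": 10}            # assumed step sizes in MHz

def set_clocks(core, shader, mem):
    """Placeholder: apply these clocks with your overclocking tool."""
    raise NotImplementedError

def passes_stress_test(minutes=30):
    """Placeholder: run an artifact scan (e.g. GPUTool) for `minutes`
    and return True only if no artifacts were reported."""
    raise NotImplementedError

def find_max(domain):
    """Return the highest clock for one domain that still passes the scan,
    with the other two domains left at their stock values."""
    clocks = dict(DEFAULTS)
    best = DEFAULTS[domain]
    candidate = best + STEP[domain]
    while True:
        clocks[domain] = candidate
        set_clocks(**clocks)
        if not passes_stress_test(minutes=30):
            return best                    # last clock that passed
        best = candidate
        candidate += STEP[domain]

# e.g. core_max = find_max("core"), then find_max("shader"), then find_max("mem")
```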

Herein lies the problem. I play CS:S at those clocks for about 10 minutes and then my screen freezes, flashes for a second, and all the textures get screwed up, at only 65C. I am assuming that the core, shader, or memory was clocked too high. Doing the same steps with CS:S as the test, I had to drop to (750/1600/1270) before the freezing and artifacts stopped. Since GPUTool is more graphically intensive (85fps compared to 180fps in CS:S), how come I have to use lower clocks for CS:S? Would those CS:S clocks cause a more demanding game like Crysis to freeze or artifact, just like CS:S did at the GPUTool clocks?

And on a related note, does increasing the core, shader, or memory one at a time lower the possible increase of the others? For example, if I get the GPUTool results as stated above by leaving two at default and overclocking one at a time, will setting all of the clocks to their maximum achieved frequencies at once cause instability? What I am trying to say is that I get stability at a core of 750 with the rest at default, and at a shader of 1600 with the rest at default. If I set both the core (750) and the shader (1600) at the same time, is that result still accurate, given that the core max was found with the shader and memory at default, and the shader max was found with the core and memory at default? I hope you can understand this question; if not, I will try to restate it.

If anyone can offer some detailed information on this, I would greatly appreciate it. I haven't seen anything online yet that explains how to properly overclock the shaders or gives some kind of list of software to properly test with. I have tried FurMark, which was good, but it doesn't tell you if you are artifacting like GPUTool does. Thanks for your help!
 

orangegator

Distinguished
Mar 30, 2007
Well, obviously, even though GPUTool is stable for 30 minutes, those clocks aren't 100% stable. Lower them and test again with your games. And yes, when overclocking all three parts at once, the maximum may be lower than when overclocking them separately. This is due to the higher temperatures and the extra power used.
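If it helps, here's that idea as a rough sketch, in the same spirit as the placeholders in your first post. run_workload() stands in for whatever you actually test with (a GPUTool scan plus long CS:S/Crysis sessions), and the step sizes are just an assumption: start from the maxima you found separately and step everything down until every workload passes.

```python
# Rough sketch: a combined setting only counts as stable if every workload
# you care about passes at it. All helpers are hypothetical placeholders.

STEP = {"core": 5, "shader": 10, "mem": 10}   # assumed back-off step sizes in MHz

def set_clocks(core, shader, mem):
    """Placeholder: apply these clocks with your overclocking tool."""
    raise NotImplementedError

def run_workload(name):
    """Placeholder: run one workload (GPUTool scan, CS:S session, Crysis, ...)
    and return True only if there were no artifacts, freezes, or crashes."""
    raise NotImplementedError

def passes_all(clocks, workloads):
    """Apply the clocks, then require every workload to pass at them."""
    set_clocks(**clocks)
    return all(run_workload(w) for w in workloads)

def back_off(maxima, workloads, stock):
    """Start from the separately-found maxima (e.g. 750/1630/1300) and lower
    all three domains one step at a time until the combined setting passes,
    never going below the stock clocks."""
    clocks = dict(maxima)
    while not passes_all(clocks, workloads):
        if clocks == stock:          # even stock fails: the problem isn't the OC
            return stock
        for d in clocks:
            clocks[d] = max(clocks[d] - STEP[d], stock[d])
    return clocks
```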
 

t33lo

Distinguished
Jul 15, 2005


Thanks for the reply. I figured that applying all three maximum clocks would limit it somewhat. Is there a certain game that would test it best, so that all other games would then pass without artifacts or freezing? Or is my PSU not strong enough for my system (in signature)? And finally, is my way of finding the max clocks a good starting point to work my way down from? Is the order of importance core > shader > memory?
 

orangegator

Distinguished
Mar 30, 2007
Your PSU is fine. Your method for overclocking sounds fine too. You may want to try EVGA Precision to overclock and use ATITool to scan for artifacts. GPUTool seems very new and may not be reliable.