CPUs Have More Headroom
The performance of a graphics card is determined by the clock speeds at which its graphics processor (GPU) and video memory operate. Generally, higher clock speeds mean higher data throughput, translating into better performance and a smoother frame rate. Simply put, a higher frame rate is always better. Sixty fps is considered the optimum. However, this is really only a guideline, and sensitivity to motion differs from one person to the next.
The 60 fps mark is a topic of frequent discussion. The argument goes that movie theatres show their features at 24 fps, which happens to be the same frame rate many HD videos use, and both appear smooth to the eye. Depending on their genre, certain 3D games may require a higher frame rate than others. For example, a real-time strategy game such as Tom Clancy’s EndWar or the Command and Conquer series will usually look fine with as little as 20 fps. First person shooters like Far Cry 2 or Call of Duty are a different matter entirely. If your character is running sideways while simultaneously turning, you may find that 25 fps is too low and will cause stuttering, which is often fatal in this fast-paced genre.
Depending on how sensitive their eyes are, many gamers can spot the difference between 25, 35, and 60 fps in the faster sequences of a given title. Enthusiasts usually aim for an average of 60 fps, and with good reason. This speed offers something of an insurance policy. If things get hectic on-screen and the graphics card has to handle a heavier workload, frame rates can drop below the "playable" mark. If a card can sustain a higher average frame rate, chances are it won’t dip below that point as often as a weaker model might.
The frame rate a graphics card can achieve in a game should not be confused with your monitor’s refresh rate. As long as the vertical signal is synchronized (v-sync), the frame rate has to adapt to the monitor’s refresh rate. Flat-screen monitors usually use a refresh rate of 60 Hz, which means the frame rate is capped at 60 fps. The upside is that the frame rate is always in sync with the monitor’s refresh rate; the downside is that more intricate scenes may be forced down to 30 or even 15 fps (whole fractions of the monitor’s 60 Hz refresh rate). Disabling v-sync lets the graphics card output other frame rates, such as 23 or 37 fps, as well.
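To make that quantization concrete, here is a minimal sketch (our own illustration, not taken from any driver's code) of how classic double-buffered v-sync forces the effective frame rate down to whole fractions of a 60 Hz refresh: a frame that misses one refresh interval has to wait for the next one.

```python
import math

# Hypothetical model of double-buffered v-sync on a 60 Hz display.
REFRESH_HZ = 60
REFRESH_INTERVAL_MS = 1000 / REFRESH_HZ  # ~16.67 ms per refresh

def vsync_fps(render_time_ms: float) -> float:
    """Effective fps when every frame must land on a refresh boundary."""
    intervals_needed = math.ceil(render_time_ms / REFRESH_INTERVAL_MS)
    return REFRESH_HZ / intervals_needed

for render_ms in (10, 20, 40):
    print(f"{render_ms} ms/frame -> {vsync_fps(render_ms):.0f} fps")
```

Note how abrupt the steps are: a frame that takes even slightly longer than 16.7 ms to render slips straight from 60 fps to 30 fps, which is why v-sync stutter feels so harsh in demanding scenes.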
However, since the frequency at which the 3D scene is being rendered is no longer synchronized with the screen, you can end up with mismatched lines, or “tearing.” If you’ve ever watched the camera pan across a scene in an HD video running at 24 fps on a 60 Hz display, you know what we’re talking about. Power users can overclock their graphics cards to help improve performance, especially at high resolutions, thereby avoiding some of the artifacts of insufficient horsepower.
Maximizing Frame Rate Through Overclocking
In order to overclock your graphics card, you will need a good overclocking tool, the right graphics driver, and a sufficiently powerful CPU to help realize the benefit of better graphics performance. At lower resolutions with reduced quality settings, a brawny graphics card will practically always be held back by the processor. So if you’re still using an older or slower CPU, overclocking your graphics card won’t really help you all that much. Luckily, that can easily be remedied by tweaking the CPU as well, giving it more performance to throw at a game’s AI and physics subsystems.
CPUs tend to offer quite a bit of overclocking headroom, too. For example, a typical Core i7-920 with a stock clock speed of 2.67 GHz usually has no trouble hitting 3.8 GHz, a clock speed increase of a full 42 percent. Things are very different where graphics cards are concerned. On an ATI Radeon HD 4870, overclocking the GPU from its stock speed of 750 MHz up to 820 MHz is a decent achievement, yet that amounts to an improvement of merely 10 percent.
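The percentages quoted above follow from simple arithmetic; a quick sketch (the function name is our own) shows where the two figures come from:

```python
def overclock_gain(stock_mhz: float, oc_mhz: float) -> float:
    """Clock-speed increase as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Core i7-920: 2.67 GHz stock to 3.8 GHz overclocked
print(round(overclock_gain(2667, 3800), 1))  # -> 42.5

# Radeon HD 4870 GPU: 750 MHz stock to 820 MHz overclocked
print(round(overclock_gain(750, 820), 1))    # -> 9.3
```

The same formula is handy when comparing overclocking results across reviews, since absolute MHz gains mean little without the stock clock as a baseline.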
When trying to stabilize an overclocked CPU, the first (and usually best) approach is to increase its core voltage through the motherboard’s BIOS. However, we strongly caution anyone against casual changes to a graphics card’s BIOS. The reason is simple: heat. While you can crank up a processor’s speed and core voltage with relative impunity as long as you slap a heavier cooler on it, that’s not really an option with a graphics card. Their reference coolers are usually designed to keep the GPU below a certain threshold temperature. Depending on the specific model, that threshold can be anywhere between 80 and 105 degrees Celsius. Obviously, the only way to compensate for the additional heat generated by overclocking is to increase fan speed, simultaneously making the card louder.
Whether or not that works depends on the cooler and its fan speed control. It may be possible to swap out a card’s cooler for a better aftermarket model. On the other hand, even the dual-slot reference designs employed by most companies these days are very decent. As far as altering the fan’s speed profile, it depends entirely on the card’s manufacturer and your specific model whether you’ll be able to adjust it through the driver, a utility, or the card’s BIOS.
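As a rough illustration of what such a fan-speed profile does, here is a sketch that linearly interpolates between temperature/duty-cycle breakpoints. The curve points are invented for illustration and do not correspond to any vendor's actual profile:

```python
# Hypothetical fan curve: (temperature in deg C, fan duty in percent).
FAN_CURVE = [(40, 30), (70, 50), (90, 100)]

def fan_duty(temp_c: float) -> float:
    """Interpolate the fan duty cycle for a given GPU temperature."""
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]          # below the curve: minimum speed
    for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:                # interpolate within this segment
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return FAN_CURVE[-1][1]             # above the curve: full speed

print(fan_duty(80))  # midway between the 70 and 90 deg C breakpoints
```

A steeper final segment, as sketched here, is the usual compromise: the card stays quiet at idle but ramps aggressively once it approaches its thermal threshold.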
For this exploration, we chose two special models from MSI’s line-up. Both feature improved coolers and very good cooling profiles that react to increased clock speeds. To determine whether there were any differences between ATI cards and Nvidia models, we chose one of each, namely a Radeon HD 4870 and a GeForce GTX 260 (with 216 SPs). We then used drivers with integrated overclocking functionality as well as third-party tools to increase their clock speeds.
What the hell, I thought this was a guide to overclocking the GPU, as the title reads "Graphics Overclocking: Getting The Most From Your GPU".
Then at the end Tom's Hardware screws me over and writes "Conclusion: It’s A Tie"
Isn't this a tutorial?
They tell you how to overclock using CCC, RivaTuner, or EVGA Precision. They also tell you overclocking = more performance at the cost of more power. What else do you want?
joeman42: What is really needed is a "continuous" OC utility that can detect artifacts during actual use and adjust accordingly. I've noticed that my max OC tends to change each time I test, and depending on the tool I test with (e.g., ATITool, GPUTool, RivaTuner, and my favorite, ATI Tray Tools). Some games, L4D in particular, crash at the slightest error. Others, such as COD and Dead Space, are somewhat tolerant. Games like Far Cry 2 and Fear 2 don't seem to care at all. It would be nice if the utility could take this into account.

As for the tools themselves, ATI Tray Tools has far and away the best fan speed adjuster; the dual-ladder temp/speed control is a model of simplicity. Plus, it can automatically sense a game and auto-OC just for the duration. Nothing like this exists on the NV side (you must explicitly specify each exe). Unfortunately, I am on an Nvidia card now, and RivaTuner is pretty much the only game in town for serious tweaking. IT IS A DESIGN DISASTER! Random layout with no discernible structure, and a help file which consists solely of the author bragging about his creation, without explanation as to where each feature is implemented or how to use it. And no, scattered tooltips are not an acceptable alternative. It took forever to figure out that I needed to create a fan profile, then a macro, then a rule to fire the macro containing the fan profile, just to set one(!) fan speed/temp point (and repeat as needed). Sorry for the rant, but I really hate RivaTuner!
Oh hello. That's what OCCT is for.
RivaTuner works just fine with the latest drivers (incl. 190.38). Just check the Power User tab and, under System, set Force Driver Version to 19038 (or, in the article's case, 18618) - no decimal point. Be sure that the hexadecimal display at the bottom is unchecked. All of RivaTuner's usual features can now be accessed.
I don't think this is intended to be an in-depth tutorial like dingumf perceives. It's just for people to realize that they could still get more from their GPUs using tools.
On the other hand, I don't like the sound of "It's a tie". It looks like it is said just to show neutrality. ATI or Nvidia? It doesn't matter, as long as you're satisfied with it.
I must say, the HD 2900 is a great card. I picked up the 2900 Pro for $250 back in 2007 and flashed the BIOS to a modified XT BIOS with slightly higher clocks (850/1000). The memory is only GDDR3, but with the 512-bit interface, it really does rival the bandwidth of the 4870. I can get it to run Crysis at Very High, 1440x900, with moderately playable frame rates (about 25 fps, but the motion blur makes it seem quite smooth). Really quite amazing for any 2007 card, let alone one for $250.
Just a bit of extra info on the 2900 Pro...
The Pros were basically binned XTs: once ATI realized that the card was too difficult to manufacture cheaply (something about the high layer count it takes to make a 512-bit PCB), they clocked their excess cores lower and branded them as Pros in order to sell them. Additionally, they changed the heatsink specs, adding an extra heatpipe. Because of this, the Pros could often OC higher than the XTs, making them essentially the best deal on the market (assuming you got a decent core).
Overclocking a GPU generally isn't worth it, IMO, but sometimes it can give that extra push past a 60 fps average. Or make you feel better about a purchase, like in my case: one week after I bought a 9800 GTX, they came out with the GTX+. A little tweaking in EVGA Precision brought an impressive 10 percent overclock up to GTX+ levels and left me satisfied.
I run two Palit GTX 295s and have had great success with Palit's "VTune" overclocking software. I believe it works with cards from other vendors as well. Easy to use and driver-independent.
Cheers for the great article
Conclusion: It’s A Tie
I didn't know they were in competition until I read that... I too thought this was about overclocking a GPU in general, not which card you should buy. Once again, Tom's throws that little barb at the end to stoke the fires.
I think they do this constantly to get more website hits. If they can get a good ol' fanboy war going on every article, they'll get people coming back over and over again to add fuel to the fire. After all, the more hits they get, the more they get paid by their sponsors. Which, BTW, seem to be taking up more real estate than actual content on this site these days.