The performance of a graphics card is determined by the clock speeds at which its graphics processor (GPU) and video memory operate. Generally, higher clock speeds mean higher data throughput, translating into better performance and a smoother frame rate. Simply put, a higher frame rate is better, and 60 fps is widely considered the optimum. However, this is really only a guideline, and sensitivity to motion differs from one person to the next.
The 60 fps mark is a topic of frequent discussion. The argument goes that movie theatres show their features at 24 fps, which happens to be the same frame rate many HD videos use, and both appear smooth to the eye. Depending on their genre, certain 3D games may require a higher frame rate than others. For example, a real-time strategy game such as Tom Clancy's EndWar or the Command & Conquer series will usually look fine with as little as 20 fps. First-person shooters like Far Cry 2 or Call of Duty are a different matter entirely. If your character is running sideways while simultaneously turning, you may find that 25 fps is too low and causes stuttering, which can be fatal in this fast-paced genre.
Depending on how sensitive their eyes are, many gamers can spot the difference between 25, 35, and 60 fps in the faster sequences of a given title. Enthusiasts usually aim for an average of 60 fps, and with good reason: this speed offers something of an insurance policy. If things get hectic on-screen and the graphics card has to handle a heavier workload, frame rates can drop below the "playable" mark. If a card can sustain a higher average frame rate, chances are it won't dip below that point as often as a weaker model might.
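To make that insurance policy concrete, here is a minimal Python sketch that computes the average and minimum frame rates of a run from a frame-time log and counts how many frames dip below a playable threshold. The frame times are made up for illustration, not taken from any benchmark.

```python
# A minimal sketch of why a higher average frame rate acts as insurance.
# frame_times_ms is a hypothetical log of per-frame render times (ms).
frame_times_ms = [14.1, 15.8, 16.2, 33.5, 17.0, 41.2, 15.5, 16.7]

fps_per_frame = [1000.0 / t for t in frame_times_ms]

# True average over the whole run (total frames / total seconds).
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
min_fps = min(fps_per_frame)  # worst single frame

PLAYABLE_FPS = 30.0
dips = sum(1 for f in fps_per_frame if f < PLAYABLE_FPS)

print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
print(f"{dips} of {len(frame_times_ms)} frames below {PLAYABLE_FPS:.0f} fps")
```

Note how a run that averages well above 30 fps can still contain individual frames below that mark; those dips are exactly what the extra headroom guards against.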
The frame rate a graphics card can achieve in a game should not be confused with your monitor's refresh rate. As long as the vertical signal is synchronized (v-sync), the frame rate has to adapt to the monitor's refresh rate. Flat-screen monitors usually use a refresh rate of 60 Hz, which means the frame rate is capped at 60 fps. The upside is that the frame rate is always in sync with the monitor's refresh rate; the downside is that more intricate scenes may be forced down to 30 or even 15 fps (divisors of the monitor's 60 Hz). Disabling v-sync lets the graphics card output other frame rates, such as 23 or 37 fps, as well.
However, since the frequency at which the 3D scene is being rendered is no longer synchronized with the screen, you can end up with mismatched lines, or “tearing.” If you’ve ever watched the camera pan across a scene in an HD video running at 24 fps on a 60 Hz display, you know what we’re talking about. Power users can overclock their graphics cards to help improve performance, especially at high resolutions, thereby avoiding some of the artifacts of insufficient horsepower.
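For readers who want to see that quantization at work, here is a simplified Python model of double-buffered v-sync. It is an assumption-laden sketch (real drivers may use triple buffering or other schemes), but it shows how the displayed frame rate snaps to divisors of a 60 Hz refresh.

```python
import math

# Sketch: double-buffered v-sync quantizes frame rate to divisors
# of the refresh rate (60, 30, 20, 15 ... fps on a 60 Hz panel).
REFRESH_HZ = 60.0
REFRESH_INTERVAL_MS = 1000.0 / REFRESH_HZ  # ~16.7 ms per refresh

def effective_fps(render_time_ms: float) -> float:
    """A frame that misses a refresh waits for the next one, so each
    frame stays on screen for a whole number of refresh intervals."""
    refreshes_needed = math.ceil(render_time_ms / REFRESH_INTERVAL_MS)
    return REFRESH_HZ / refreshes_needed

for t in (10.0, 20.0, 25.0, 40.0, 70.0):
    print(f"render {t:5.1f} ms -> {effective_fps(t):.1f} fps with v-sync")
# Prints 60.0, 30.0, 30.0, 20.0, and 12.0 fps respectively: a frame
# that takes just over one refresh interval drops straight to 30 fps.
```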
Gain Graphics Power Through Overclocking
In order to overclock your graphics card, you will need a good overclocking tool, the right graphics driver, and a sufficiently powerful CPU to help realize the benefit of better graphics performance. At lower resolutions with reduced quality settings, a brawny graphics card will practically always be held back by the processor. So if you’re still using an older or slower CPU, overclocking your graphics card won’t really help you all that much. Luckily, that can easily be remedied by tweaking the CPU as well, giving it more performance to throw at a game’s AI and physics subsystems.
CPUs tend to offer quite a bit of overclocking headroom, too. For example, a typical Core i7-920 with a stock clock speed of 2.67 GHz usually has no trouble hitting 3.8 GHz, a clock speed increase of a full 42 percent. Things are very different where graphics cards are concerned. On an ATI Radeon HD 4870, overclocking the GPU from its stock speed of 750 MHz up to 820 MHz is a decent achievement, even though it amounts to an improvement of merely 10 percent.
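Those percentages are straightforward to verify. The short Python snippet below, included purely for illustration, plugs in the clock speeds quoted above:

```python
# Overclocking headroom as a simple percentage gain over stock.
def oc_gain_percent(stock_mhz: float, oc_mhz: float) -> float:
    return (oc_mhz / stock_mhz - 1.0) * 100.0

print(f"Core i7-920:    {oc_gain_percent(2667, 3800):.1f}%")  # ~42.5%
print(f"Radeon HD 4870: {oc_gain_percent(750, 820):.1f}%")    # ~9.3%, i.e. roughly 10 percent
```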
When trying to stabilize an overclocked CPU, the first (and usually best) approach is to increase its core voltage through the motherboard's BIOS. However, we strongly caution anyone against casual changes to a graphics card's BIOS. The reason is simple: heat. While you can crank up a processor's speed and core voltage with relative impunity as long as you slap a heavier cooler on it, that's not really an option with a graphics card. Reference coolers are usually designed to keep the GPU below a certain threshold temperature. Depending on the specific model, that threshold can be anywhere between 80 and 105 degrees Celsius. Obviously, the only way to compensate for the additional heat that results from overclocking is to increase fan speed, which simultaneously makes the card louder.
Whether or not that works depends on the cooler and its fan speed control. It may be possible to swap out a card's cooler for a better aftermarket model; on the other hand, even the dual-slot reference designs employed by most companies these days are very decent. As for altering the fan's speed profile, whether you can adjust it through the driver, a utility, or the card's BIOS depends entirely on the card's manufacturer and your specific model.
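As an illustration only, here is a Python sketch of the kind of temperature-to-fan-speed curve such a profile encodes, with linear interpolation between breakpoints. The breakpoints are hypothetical and do not reflect any card's actual defaults.

```python
# Hypothetical fan profile: (temperature in deg C, fan duty in %).
# Vendor tools typically expose something like these breakpoints.
CURVE = [(40, 30), (60, 45), (80, 70), (95, 100)]

def fan_duty(temp_c: float) -> float:
    """Interpolate fan duty cycle linearly between curve breakpoints."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # pinned at maximum above the last breakpoint

for temp in (35, 55, 75, 90, 100):
    print(f"{temp} C -> {fan_duty(temp):.0f}% fan duty")
```

The steep rise near the top of the curve is the trade-off described above: the closer the GPU gets to its threshold temperature, the more aggressively the fan has to spin up, and the louder the card becomes.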
For this exploration, we chose two special models from MSI’s line-up. Both feature improved coolers and very good cooling profiles that react to increased clock speeds. To determine whether there were any differences between ATI cards and Nvidia models, we chose one of each, namely a Radeon HD 4870 and a GeForce GTX 260 (with 216 SPs). We then used drivers with integrated overclocking functionality as well as third-party tools to increase their clock speeds.
- CPUs Have More Headroom
- Keeping Cool (Enough)
- Graphics Chips And Our Test Setup
- MSI’s D.O.T.-Enabled Driver
- Overclocking The ATI Card Via D.O.T.
- Benchmarks: ATI And D.O.T.
- Overclocking Using RivaTuner And Tray Tools
- Benchmarks: ATI And Catalyst 9.6
- Overclocking: Nvidia And D.O.T.
- Benchmarks: Nvidia And D.O.T.
- Overclocking: Nvidia With CoreCenter And AirForce
- Benchmarks: Nvidia And GeForce 186.18
- RivaTuner And Precision
- Effects Of Overclocking: ATI
- Effects Of Overclocking: Nvidia
- Overall Performance
- Performance Per Watt
- 3D Performance (Sorted By Anti-Aliasing)
- Conclusion: It’s A Tie
Then at the end Tom's Hardware screws me over and writes "Conclusion: It’s A Tie"
Isn't this a tutorial?
Oh hello. That's what OCCT is for.
On the other hand, I don't like the sound of "It's a tie". It looks like it is said just to show neutrality. ATI or Nvidia? It doesn't matter, as long as you're satisfied with it.
The Pros were basically binned XTs once ATI realized that the card was too difficult to manufacture cheaply (something about the high layer count it takes to make a 512-bit PCB), so in order to sell their excess cores, they clocked them lower and branded them as Pros. Additionally, they changed the heatsink specs as well, adding an extra heatpipe. Because of this, the Pros could often OC higher than the XTs, making them essentially the best deal on the market (assuming you got a decent core).
Cheers for the great article
Jordan
I didn't know they were in competition until I read that... I too thought this was about overclocking a GPU in general, not which card you should buy. Once again Toms throws that little barb at the end to stoke the fires.
I think they do this constantly to get more website hits. If they can get a good ol' fanboy war on every article, they will get people coming back over and over again to add fuel to the fire. After all, the more hits they get, the more they get paid from their sponsors. Which, BTW, seem to be taking up more real estate than actual content on this site these days.
I have a 730 MHz core clock on my EVGA GTX 260, 24/7.
I think it becomes important when the game you're playing hovers at 24+ fps; that minor/major OC might take your game from stutter hell to just about playable. I had a game that I was able to set to mid-high settings, but the fps hovered at 25; a minor OC helped me push past 30 fps.
At that time, that was the only game I was playing, and spending money on a new vid card to play this one game imho wasn't worth it. The viable cards I could buy back then were the 8600 GT (this thing is a joke), the 8800 GTS 320 MB (too expensive back then), the 2600 XT (a worse joke), and the 2900 (a heat monster).
As you can see here, I underclock my 4870 to 200/250 with just 0.5v
here it is.......
http://i474.photobucket.com/albums/rr107/agent47_1/att.jpg
the author of the article had to be joking.
The HD 2900 XT was a fast card. Not as fast as Nvidia's flagship 8800 line-up, but not too far off. They were hot and power hungry, but not slow.
I think most people remember the disappointment of the HD 2900 series' real-world performance after all the hype and the performance the card projected on paper at the time. Going by the specs, it looked to be an 8800 killer, but then it came to market and did no such thing. That is what I think people remember the HD 2900 series by.
But, if you look past that, it was still a fast card. At that time it was beating CrossFired X1950 XTXs, SLI'ed 7950 GTs, and even 7950 GX2s in SLI.
So yes, in its day it was a fast card. Was it the fastest? No, but that doesn't mean it was a slow card by any means.
I think that people need to remember that at that time Nvidia broke all the molds with their 8800 series. It is a credit to Nvidia that they were able to bring performance levels to a new high. But, don't disregard the 2900 series as slow cards.
I still consider Nvidia's 8800 GTX one of the finest, yet most expensive, cards that I never owned. The 8800 GTX is legendary. Even by today's standards, it still rocks.