In SLI mode, we encountered two issues with the GeForce GTX 260. First, the fan on the card proved insufficient for the load test, and second, the CPU processing power of the test system was not enough to always properly drive the configuration.
With regard to CPU performance, one argument in favor of a more powerful processor is that anyone who invests in two GTX cards will have the cash available to buy the fastest quad-core chip. While you are not going to get much more graphics performance with a quad-core CPU running below 3 GHz, there are still games that respond better to CPU clock speed than to the number of processing cores. With a pair of GTX cards, you have three options: the game is optimized for quad-core chips and the four cores do their work on a quartet of threads, you buy a dual-core chip that emphasizes frequency for its performance gains, or you wait for a more powerful generation of processors.
MSI’s overclock yields a total benefit of just 2.2% with the GTX 260 running in SLI, while on the single-card setup it boosts performance by 4.5%. In the individual evaluation at various resolutions, SLI is only perceptibly faster at 1680x1050 pixels with anti-aliasing enabled and at 1920x1200 pixels. SLI becomes more interesting in Assassin’s Creed and World in Conflict at higher resolutions with anti-aliasing. In Call of Duty 4 and Mass Effect the values are considerably better as well. Crysis shows the best increase running at Very High Quality settings with AA. In Quake Wars, Half Life 2: Episode 2, and FSX, performance is lower.
If you select your games carefully and give the SLI combo a little more CPU power, then you will surely net a few extra frames. Here a couple of highlights: World in Conflict at 1920x1200 pixels with 4xAA on a single card yields 27.6 fps—with the GTX 260 in SLI it provides 43.3 fps (MSI’s overclock yields 44.5 fps). Mass Effect at 1920x1200 pixels and 8xAA with a single card delivers 49.3 fps, but the GTX 260 in SLI hits 72.8 fps (MSI’s overclock hits 77.8 fps).
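The scaling in those two highlights can be verified with a quick calculation. This is an illustrative Python sketch using only the fps figures quoted above; it is not part of the original test methodology:

```python
# Single-card and SLI results quoted in the text (fps).
single = {"World in Conflict 1920x1200 4xAA": 27.6,
          "Mass Effect 1920x1200 8xAA": 49.3}
sli = {"World in Conflict 1920x1200 4xAA": 43.3,
       "Mass Effect 1920x1200 8xAA": 72.8}

for game, fps in single.items():
    gain = sli[game] / fps
    print(f"{game}: {gain:.2f}x scaling ({(gain - 1) * 100:.0f}% faster)")
```

World in Conflict scales by roughly 1.57x and Mass Effect by roughly 1.48x, which is solid but well short of the theoretical 2x for a second card.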
When testing maximum power consumption, the fan of the GTX 260 reached its limit in SLI mode: the graphics processor throttles performance when its temperature exceeds 105 degrees Celsius. Even with the fan manually set to a 100 percent duty cycle, at a noise level of 56 dB(A), the card still hit the 105-degree ceiling. The test system had a maximum power consumption of 610 watts, which then dropped to between 490 and 505 watts as a result of thermal throttling.
With automatic fan control, the noise level reaches 55 dB(A) and the computer can crash, though this does not always occur. As soon as the 105-degree limit is exceeded, the thermal GPU throttle kicks in and a yellow warning LED starts to flash. Power consumption then fluctuates between 550 and 610 watts. Of course, in both of these situations the system no longer runs at full 3D performance, since the thermal throttle amounts to an unintended "energy saving" mode.
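The behavior described here can be pictured as a simple control loop. The sketch below is purely conceptual, not Nvidia's actual firmware logic, and the clock step factors are invented for illustration:

```python
THROTTLE_TEMP_C = 105  # limit described in the text

def regulate(temp_c, clock_mhz, base_clock_mhz):
    """Return (new core clock, warning LED flashing?) for one control step."""
    if temp_c > THROTTLE_TEMP_C:
        # Over the limit: cut the clock so power draw and heat output drop.
        return clock_mhz * 0.8, True
    # Below the limit: step back up toward the full 3D clock.
    return min(clock_mhz * 1.05, base_clock_mhz), False
```

Because the clock keeps oscillating between the throttled and full states, the measured power draw fluctuates instead of holding at the 610-watt peak, which matches the 550-to-610-watt range observed in the test.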
This overheating phenomenon also occurs with cards plugged in next to each other with no gap between them. For example, in a 3-way SLI setup employing a trio of GeForce 8800 Ultras, the gap between boards is too small and the graphics card fans are unable to draw in enough air. However, this is not just an Nvidia problem—the ATI X1950 XTX in CrossFire mode also cannot really survive without additional cooling. What does help is fresh air from the side provided by an 80 mm fan, which drops the GPU temperature of the GeForce GTX 260 SLI to 101 degrees.
The test for maximum power consumption is an extreme case that does not necessarily occur in real-world use. During normal game play, utilization fluctuates, only very rarely reaching the maximum value—as a result, the graphics chips have more time to cool down. If you wish to use the GTX cards for constant processing, you will soon feel the effects of the thermal limits and the restrictions of the cooling system.
The power consumption in 2D mode is 211 watts, while in 3D mode the top value for the entire system is 610 watts (power at the wall). If you wish to operate the GTX 260 OC in an SLI configuration, you will need a branded power supply rated between 510 and 550 watts with 42 to 46 A on the 12 volt rail.
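As a quick sanity check on those numbers, the recommended amperage follows directly from the wattage range. This is an illustrative Python sketch that assumes the whole load is drawn from the 12 V rail, which is a simplification:

```python
def rail_current(watts, rail_volts=12.0):
    """Current on the 12 V rail if the entire load sits on that rail."""
    return watts / rail_volts

for watts in (510, 550):
    print(f"{watts} W -> {rail_current(watts):.1f} A on the 12 V rail")
```

510 W works out to 42.5 A and 550 W to about 45.8 A, which matches the 42-to-46 A recommendation above.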
- Taxing Modern CPUs With Powerful Graphics
- Comparing The GPUs And Test Setup
- Radeon HD 4850
- CrossFire With Radeon HD 4850
- Radeon HD 4870 OC
- CrossFire With Radeon HD 4870 OC
- GeForce GTX 260 OC
- SLI With GeForce GTX 260 OC
- GeForce GTX 280 Superclocked
- SLI With GeForce GTX 280 Superclocked
- Assassin’s Creed v1.02
- Call of Duty 4 v1.6
- Crysis v1.21 High Quality
- Crysis v1.21 Very High Quality
- Enemy Territory: Quake Wars v1.4
- Half Life 2: Episode 2
- Mass Effect
- Microsoft Flight Simulator X SP2
- World in Conflict v1.05
- 3DMark06 1280x1024 v1.1.0
- How Overclocking Affected The MSI Cards
- Overall Performance
- Price/Performance Comparison
- How About Graphics Image Quality?
- Power Consumption, Noise, And Temperature
- Frames-Per-Watt For The GTX 200-Series And HD 4800-Series
- GTX 200-Series And HD 4800-Series At 1280x1024
- GTX 200-Series and HD 4800-Series At 1680x1050
- GTX 200-Series And HD 4800-Series at 1920x1200
- All Cards Compared At 1280x1024
- All Cards Compared At 1680x1050
- All Cards Compared At 1920x1200
- Is The Upgrade Worthwhile?
- Swapping Old Chips For New
- Evaluation Of The New Generation
- Conclusions – Radeon HD 4850 Is The Winner


My one complaint? Why use that CPU when you know that the test cards are going to max it out? Why not a quad core OC'ed to 4 GHz? It'd give far more meaning to the SLI results. We don't want results that we can duplicate at home, we want results that show what these cards can do. It's a GPU comparison, not a "we didn't have a powerful enough CPU" story.
Oh? And please get a native English speaker to give it the once-over for spelling and grammar errors, although this one had far fewer than many articles posted lately.
Remember, the more you know.
But yes, the article was off to a great start; maybe throw some Vantage in there as well?
Precisely; several other websites tested with 8.7 and 8.8 long before this article was published. Why couldn't you? Look at the 8.6 release notes; it doesn't even mention the HD4000 series cards as supported devices.
Brilliant guys.
This is another reason why the results are tanked: in XP you get 15% more performance compared to these values.
After having the Mythbusters appear, you would think this would be the most comprehensive, "scientific," factual, and up-to-date article meeting Tom's usual standards... I didn't finish reading this.
(82 degrees in 2D, 69 in 3D with no fan fix)
Also, I can understand why TH didn't have time to use 8.8 since it was released publicly on August 20, 2008 (Although ATI would have gladly released a beta version to TH for testing purposes).
However, AMD publicly released stable Catalyst 8.7 (internal version 8.512) on July 21, 2008. That's more than a month ago. It has numerous improvements (for example, a CF performance increase and improved stability and performance under Vista). To be honest, most of the improvements range from 4% to 15% (in the CF case, up to 1.7x scaling).
TH has rarely been unfair and/or inaccurate and they always owned up to their mistakes before, and I trust them to re-test ATI products with at least 8.7 if not 8.8 to continue to uphold their values and integrity.
Now on to my criticism.
I can understand how you want to keep the results homogeneous with previous results, but if you already know that a stock QX6800 will bottleneck the system, be proactive in fixing it. At the very least you should have done a small segment of the review showing the newer cards with a quad core overclocked to 4.0 GHz.
Also, if you have ever read any of the older Tom's articles, you would know that you can still minimise the bottleneck from a slow CPU by raising the resolution. Perhaps you should test the fastest cards at the highest resolutions?
I can also understand why you did not use the latest nVidia drivers. It takes time to create a review of this scale and the GF8/9 series drivers have been stable for some time. As the GT 200 series brings no new features to the table, it would have needed little optimisation, allowing the slightly dated drivers to perform nicely.
What I cannot understand is why you would use ATI's 8.6 drivers.
The 8.7 drivers have been out for more than a month bringing quite a few fixes/optimisations with it. I understand it probably took more than 9 days to complete all of these benchmarks (today is the 29th, the 8.8 drivers were officially released on the 20th) but you should have called ATI and asked for their latest drivers. The 8.8 drivers were leaked at least a week before the official release which means, if you could nurture a relationship with the people you review, they could/probably would have provided them to you. There is still no excuse I can see for testing with the old 8.6 drivers. Seriously, it does not even have official support for the 48X0 cards...
From the title of the article,"The Fastest 3D Cards Go Head-To-Head", I would have assumed that you would have been testing the Fastest 3D cards? What happened to your 4870x2? As you have already attempted to review it, we know you have your hands on one. How can you claim to review the "Fastest 3D Cards" and still leave out the fastest card?
In summation, I liked many things from this article. The layout was nice and a little more technical than we have been seeing as of late. I enjoyed the comparison charts at the end and I think you should adopt a similar method for the CPU and GPU charts. I would have thought this was an excellent and well thought out article if it had not been for the glaring and obvious deficiencies in reason. I give you credit for stepping Toms in the right direction. With a little more unbiased comparison, critical thinking and common sense I could come to see reviews such as this in a very positive light.