Full Review: NVIDIA's New GeForce256 'GPU'

CPU Scaling

Is it true that GeForce will put an end to CPU-scaling? Will you be able to run any game with any CPU as long as you've got a GeForce? We tried to find the answer by running each of the above benchmarks with a Celeron 400 at 640x480x16 to keep the impact of fill rate as low as possible. We ran those benchmarks on GeForce w/DDR and TNT2 Ultra.

The following chart shows the Celeron 400 score as a percentage of the Pentium III 550 score. A score of 69% means that with this card the Celeron 400 reached 69% of the Pentium III 550's result. We would obviously expect GeForce's scores to be as close to 100% as possible. See for yourself whether it achieves that.
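The chart value is simply the ratio of the two benchmark scores. A minimal sketch of the calculation, using made-up frame rates rather than our actual results:

```python
def scaling_percent(celeron_score: float, pentium_score: float) -> float:
    """Return the Celeron 400 score as a percentage of the
    Pentium III 550 score on the same card and settings."""
    return 100.0 * celeron_score / pentium_score

# Hypothetical example: a card scoring 41.4 fps on the Celeron 400
# and 60.0 fps on the Pentium III 550 scales to 69%.
print(round(scaling_percent(41.4, 60.0)))  # 69
```

A card that were truly CPU-independent would print 100 here regardless of the processor used.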

You're probably just as disappointed as I was when I ran the benchmarks. GeForce scales almost identically to TNT2; only in Descent 3 can you find a noticeable difference. The one real exception is the TreeMark. I was impressed to see that the Celeron 400 scored the same result as the Pentium III 550 with GeForce, while with TNT2 the score was only 75%. After talking to NVIDIA about this issue, I was told that most of the CPU power is currently lost inside the rather young GeForce drivers and that CPU-scaling will decrease once the drivers have matured.

The Facts Behind 3D-Games And A Possible Usage Of Geforce256's Integrated T&L-Engine

The term CPU-scaling in combination with NVIDIA's new GeForce256 GPU is becoming more and more an object of discussion and speculation. NVIDIA's comments on this topic add to the confusion, claiming that GeForce's 3D performance is largely independent of CPU power, regardless of whether a K6, a Celeron, a Pentium III or an Athlon is used. The fact is that games will ALWAYS depend on CPU performance, especially in the case of complex or multiplayer games, where the CPU has a lot more to do than computing transform and lighting. The bandwidth of the system buses (memory bandwidth, PCI bandwidth and AGP bandwidth) is another factor that impacts GeForce just as much as any other 3D chip without integrated T&L. Nevertheless, GeForce256 can reduce CPU-scaling. Games with complex graphics and rather simple AI and physics benefit greatly from GeForce's T&L engine, and even other games should at least show a difference in CPU-scaling. Having said that, we should be aware that games have to fulfill some requirements to make use of an integrated T&L engine:
  1. The game must be programmed for an API that supports integrated T&L-engines, like DirectX 7 or OpenGL.
  2. Current DX6-games can only take real advantage of GeForce with a DX7-patch.
  3. The game's own engine (AI, physics, game logic) shouldn't be so demanding that it is the performance bottleneck in the first place.
  4. The rendering-engine of the 3D-chip should be able to draw the frames supplied by its integrated T&L-engine fast enough.
In the case of our testing, only TreeMark, Dagoth Moor Zoological Gardens and, in some way, Quake3 fulfill these requirements. The first two show a significant difference in CPU-scaling over chips without integrated T&L. Quake3 will show more of a difference once it is changed to depend more on OpenGL's transform and lighting routines than on its own. For the other games we'll have to wait until the developers release DX7 patches.
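The four requirements above boil down to a simple checklist. A hypothetical sketch (the flag names are mine, not from DirectX or OpenGL) of how a game either qualifies or doesn't:

```python
def can_benefit_from_hw_tnl(uses_tnl_capable_api: bool,
                            dx7_patched_if_dx6: bool,
                            cpu_bound_by_game_logic: bool,
                            renderer_keeps_up: bool) -> bool:
    """Illustrative checklist of the four requirements for a game to
    benefit from an integrated T&L engine (flag names are assumptions)."""
    return (uses_tnl_capable_api          # 1. API supports hardware T&L
            and dx7_patched_if_dx6        # 2. DX6 titles need a DX7 patch
            and not cpu_bound_by_game_logic  # 3. AI/physics not the bottleneck
            and renderer_keeps_up)        # 4. rendering engine fast enough

# TreeMark-style case: all four requirements met.
print(can_benefit_from_hw_tnl(True, True, False, True))   # True
# Unpatched DX6 game: requirement 2 fails.
print(can_benefit_from_hw_tnl(True, False, False, True))  # False
```

Fail any one of the four and the integrated T&L engine sits largely idle, which is exactly what our benchmark results suggest for most current titles.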