New Nvidia GeForce 8800GT Surprises

GeForce 8800GT: The Chip

A little like Intel's Penryn, the GeForce 8800GT was only made possible by a new manufacturing process. The 90 nm process of the G80, the true parent of the G92 on which this 8800GT is based (rather than the 80 nm process used for the 8600's G84), gives way to 65 nm. TSMC still can't manufacture GPUs at 45 nm, but keep in mind that GPUs are more complex than CPUs in transistor count once you exclude the CPUs' enormous L2 caches. The G92 thus integrates 754 million transistors, an 11% increase over the G80's 681 million. Yet the number of computational units decreases, from 128 stream processors to 112, though remember that only 96 of them are active on the 8800GTS. The number of texturing units, on the other hand, increases from 48 to 56. Finally, thanks to the 65 nm process, the die area shrinks by 33%, although it is still nearly twice the size of a G84.
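For readers who like to check the figures, the percentages above can be recomputed from the transistor counts and die areas quoted in the table below; this is just a quick sketch of the arithmetic, and the raw numbers are the ones already cited.

```python
# Recompute the scaling figures from the raw numbers quoted in the article.
g80_transistors, g92_transistors = 681e6, 754e6
g80_die_mm2, g92_die_mm2, g84_die_mm2 = 484, 324, 169

transistor_increase = g92_transistors / g80_transistors - 1  # ~0.11 -> the quoted 11%
die_shrink = 1 - g92_die_mm2 / g80_die_mm2                   # ~0.33 -> the quoted 33%
g92_vs_g84 = g92_die_mm2 / g84_die_mm2                       # ~1.9 -> almost twice the G84

print(f"Transistors: +{transistor_increase:.0%}, die area: -{die_shrink:.0%}, "
      f"G92/G84 die ratio: {g92_vs_g84:.1f}x")
```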

Main cards specifications

| GPU | 8600 GTS | 8800 GTS 320 MB | 8800 GT |
| --- | --- | --- | --- |
| GPU clock | 675 MHz | 500 MHz | 600 MHz |
| Shader clock | 1450 MHz | 1200 MHz | 1500 MHz |
| Memory clock | 1000 MHz | 800 MHz | 900 MHz |
| Memory bus width | 128 bits | 320 bits | 256 bits |
| Memory type | GDDR3 | GDDR3 | GDDR3 |
| Memory capacity | 256 MB | 320 MB | 512/256 MB |
| Pixel/vertex pipelines | (8) | (24) | (28) |
| Texturing units | 16 | 48 | 56 |
| ROPs | 8 | 20 | 16 |
| Theoretical fill rate | (11,600 MPixels/s) | (28,800 MPixels/s) | (42,000 MPixels/s) |
| Memory bandwidth | 32 GB/s | 64 GB/s | 57.6 GB/s |
| Transistors | 289 million | 681 million | 754 million |
| Process | 80 nm (TSMC) | 90 nm (TSMC) | 65 nm (TSMC) |
| Die area | 169 mm² | 484 mm² | 324 mm² |
| Generation | 2007 | 2007 | 2007 |
| Shader model supported | 4.0 | 4.0 | 4.0 |
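As a sanity check on the table, the memory bandwidth row can be reproduced from the memory clock and the bus width, since GDDR3 transfers data twice per clock; the short sketch below is ours, while the clocks and bus widths come straight from the table.

```python
# Bandwidth = memory clock (MHz) x 2 transfers per clock (GDDR3) x bus width in bytes.
cards = {
    "8600 GTS":        (1000, 128),  # (memory clock in MHz, bus width in bits)
    "8800 GTS 320 MB": (800,  320),
    "8800 GT":         (900,  256),
}

for name, (mem_clock_mhz, bus_bits) in cards.items():
    bandwidth_gb_s = mem_clock_mhz * 2 * (bus_bits // 8) / 1000
    print(f"{name}: {bandwidth_gb_s:.1f} GB/s")  # 32.0, 64.0 and 57.6 GB/s, as in the table
```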

So where does the increase in transistor count come from? First and foremost, from the integration of PureVideo 2 (more on that very soon). Nvidia hasn't repeated its earlier mistake of denying its mid-range and high-end cards the ability to decode HD video in hardware. The engine hasn't evolved, however, and still doesn't decode VC-1, a shortcoming that is more of a marketing issue than a practical one (given the small number of VC-1 videos compared to H.264), but one that seems to preoccupy the chameleon. HDMI support is also integrated into the chip.

Other enhancements also show up, such as an optimization of the ROPs' compression algorithm for extreme resolutions like 2560 x 1600. What use is that on an 8800GT? None. Is it confirmation of a future very high-end card based on two G92s? More than likely... Finally, the GeForce 8800GT introduces PCI Express 2.0 support, which doubles the available bandwidth (up to 8 GB/s in each direction) when paired with an X38 motherboard. This won't change anything from a gaming point of view, but it could be more interesting for workstations, professional software or applications using Nvidia's CUDA GPGPU platform. The card remains, of course, fully compatible with PCI Express 1.x.
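For those wondering where the doubling comes from, here is a brief sketch of the standard PCI Express arithmetic; the signaling rates and 8b/10b encoding are generic PCIe parameters, not figures supplied by Nvidia.

```python
# Per-direction bandwidth of an x16 slot: signaling rate x 8b/10b efficiency / 8 bits x 16 lanes.
def pcie_x16_gb_per_s(gigatransfers_per_s):
    per_lane_gb_s = gigatransfers_per_s * (8 / 10) / 8  # GB/s per lane, per direction
    return per_lane_gb_s * 16

print(pcie_x16_gb_per_s(2.5))  # PCI Express 1.x: 4.0 GB/s each way
print(pcie_x16_gb_per_s(5.0))  # PCI Express 2.0: 8.0 GB/s each way, i.e. double
```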

As for clock speeds, Nvidia has pushed the G92's stream processors to no less than 1.5 GHz. The result: the GeForce 8800GT offers 46% more computational power than the 8800GTS and merely 3% less than the 8800GTX! The only downside is a 10% drop in memory bandwidth (again compared to the 8800GTS). Memory capacity is now 512 MB, with a 256 MB version due in the coming weeks.
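Those percentages can be reproduced by counting each stream processor as a MAD plus a MUL per clock, i.e. three floating-point operations, which is how G8x shader throughput was usually quoted; the sketch below uses the clocks and unit counts already given, plus the 8800GTX's 128 stream processors at 1.35 GHz, which do not appear in the table above.

```python
# Shader throughput = stream processors x shader clock (GHz) x 3 FLOPs (MAD + MUL) per clock.
def gflops(stream_processors, shader_clock_ghz, flops_per_clock=3):
    return stream_processors * shader_clock_ghz * flops_per_clock

gt  = gflops(112, 1.5)   # GeForce 8800GT:  504 GFLOPS
gts = gflops(96,  1.2)   # GeForce 8800GTS: ~346 GFLOPS
gtx = gflops(128, 1.35)  # GeForce 8800GTX: ~518 GFLOPS

print(f"8800GT vs 8800GTS: {gt / gts - 1:+.0%}")  # about +46%
print(f"8800GT vs 8800GTX: {gt / gtx - 1:+.0%}")  # about -3%
```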