A little like the Penryn, the GeForce 8800GT was only possible because Nvidia was able to move to a new manufacturing process. The G80's 90 nm process (the true father of the G92 on which this 8800GT is based, rather than the 80 nm process used for the 8600's G84) gives way to 65 nm. TSMC still can't produce GPUs at 45 nm, but keep in mind that GPUs are more complex than CPUs in terms of transistor count once you exclude the CPUs' enormous L2 caches. The G92 thus integrates 754 million transistors, an 11% increase over the G80's 681 million. Yet the number of computational units decreases, from 128 stream processors to 112, although on the 8800GTS only 96 of them are active anyway. Conversely, the number of texturing units increases from 48 to 56. Finally, thanks to the 65 nm process, the die area shrinks by 33%, although it is still twice as large as a G84.
**Main cards specifications**

| GPU | 8600GTS | 8800GTS 320 MB | 8800GT |
|---|---|---|---|
| GPU clock | 675 MHz | 500 MHz | 600 MHz |
| Shader clock | 1450 MHz | 1200 MHz | 1500 MHz |
| Memory clock | 1000 MHz | 800 MHz | 900 MHz |
| Memory bus width | 128 bits | 320 bits | 256 bits |
| Memory capacity | 256 MB | 320 MB | 512/256 MB |
| Pixel/vertex pipelines | (8) | (24) | (28) |
| Texturing units | 16 | 48 | 56 |
| ROPs | 8 | 20 | 16 |
| Theoretical fill rate | 11,600 MPixels/s | 28,800 MPixels/s | 42,000 MPixels/s |
| Memory bandwidth | 32 GB/s | 64 GB/s | 57.6 GB/s |
| Transistors | 289 million | 681 million | 754 million |
| Process | 80 nm TSMC | 90 nm TSMC | 65 nm TSMC |
| Die area | 169 mm² | 484 mm² | 324 mm² |
| Shader model | 4.0 | 4.0 | 4.0 |
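The memory bandwidth figures above follow directly from the memory clock and the bus width: GDDR3 transfers data on both clock edges, so the effective rate is double the listed clock. A minimal sketch (the function name is ours):

```python
def memory_bandwidth_gbs(clock_mhz, bus_bits):
    """Bandwidth in GB/s: DDR memory doubles the effective rate,
    and the bus width in bits is divided by 8 to get bytes."""
    return clock_mhz * 2 * (bus_bits / 8) / 1000

print(memory_bandwidth_gbs(1000, 128))  # 8600GTS -> 32.0
print(memory_bandwidth_gbs(800, 320))   # 8800GTS -> 64.0
print(memory_bandwidth_gbs(900, 256))   # 8800GT  -> 57.6
```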
Where, then, does the increase in transistors come from? First and foremost, from the integration of PureVideo 2 (more on that very soon). Nvidia hasn't repeated the mistake of denying its mid/high-end cards the ability to decode HD video. The engine hasn't evolved, however, and still doesn't decode VC-1, a shortcoming that's more marketing than anything else (given the small number of VC-1 videos compared to H.264) but that appears to be a preoccupation for the chameleon. HDMI support is also integrated into the chip.
Other enhancements also show up, like an optimized ROP compression algorithm for extreme resolutions such as 2560 x 1600. What is the use of that on an 8800GT? None. Is it the confirmation of a future very high-end card based on two G92s? Most likely... Finally, the GeForce 8800GT introduces PCI Express 2.0 support, which doubles the bandwidth (up to 8 GB/s in each direction) on an X38 motherboard. This won't change anything from a gaming point of view, but it could be more interesting for workstations, professional software, or applications using Nvidia's CUDA GPGPU platform. The card remains, of course, fully compatible with PCI Express 1.0.
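That 8 GB/s figure can be recovered from the PCI Express 2.0 spec: 5 GT/s per lane with 8b/10b encoding (8 data bits per 10 bits transferred), over 16 lanes. A quick check:

```python
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding leaves 80% as payload
lane_rate_gts = 5.0
lane_gbs = lane_rate_gts * (8 / 10) / 8  # GB/s per lane, per direction -> 0.5
x16_gbs = lane_gbs * 16                  # x16 slot, per direction -> 8.0
print(x16_gbs)  # 8.0
```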
Regarding clock speeds, Nvidia has pushed the G92's stream processors to no less than 1.5 GHz. At the end of the day, the GeForce 8800GT's computational power is 46% higher than the 8800GTS's and merely 3% lower than the 8800GTX's! The only downside is memory bandwidth, down 10% compared to the 8800GTS. Memory capacity is now 512 MB, but a 256 MB version will appear in the coming weeks.
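Those percentages can be recovered from stream processor count times shader clock; the constant per-unit throughput factor cancels out of the ratios. The 8800GTX figures (128 SPs at 1350 MHz) are Nvidia's published G80 specs, not values from the table above:

```python
def relative_compute(sps, clock_mhz):
    # Relative shader throughput: units x clock (per-unit factor cancels in ratios)
    return sps * clock_mhz

gt  = relative_compute(112, 1500)  # GeForce 8800GT
gts = relative_compute(96, 1200)   # GeForce 8800GTS
gtx = relative_compute(128, 1350)  # GeForce 8800GTX

print(f"vs 8800GTS: {gt / gts - 1:+.0%}")  # +46%
print(f"vs 8800GTX: {gt / gtx - 1:+.0%}")  # -3%
```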
- GeForce 8800GT: The Chip
- GeForce 8800GT: The Card and New Antialiasing Capability
- GeForce 8800GT: The Card and New Antialiasing Capability, Continued
- GeForce 8800GT: The Review
- Supreme Commander
- Age Of Empires 3
- The Elder Scrolls IV: Oblivion
- World in Conflict
- Unreal Tournament
- Power Consumption