NVIDIA Strikes Back - The GeForce2 Ultra 3D Monster

Introduction

In late April 2000, NVIDIA released its latest high-end 3D chip, the GeForce2 GTS. Although this chip once again reached new performance heights, it still left me with mixed feelings. GeForce2 GTS suffered, and obviously still suffers, from a serious imbalance between chip performance and memory bandwidth, which keeps it from ever reaching the impressive fill rate numbers NVIDIA claims.

This imbalance is the very reason why GeForce2 GTS cards are not a lot faster than 3dfx's Voodoo5 5500 at high resolutions and true color, why GF2 GTS doesn't perform amazingly well in FSAA, and why ATi's new Radeon chip is even able to surpass the GeForce2 GTS at high-res/true-color settings.

To be properly prepared for this article, I suggest that you read the following articles, which address the issues mentioned above, unless you have already done so.

  • Tom's Take On NVIDIA's New GeForce2 GTS
  • Full Review of NVIDIA's GeForce2 MX
  • 3D Benchmarking - Understanding Frame Rate Scores
  • The Fastest GeForce2 GTS Card - Gainward's CARDEXpert GeForce2 GTS/400
  • ATi's New Radeon - Smart Technology Meets Brute Force

Who Is To Blame?

It's not entirely fair to blame NVIDIA for bad performance or bad engineering. What NVIDIA could be blamed for is not telling the whole truth when it claimed the mind-blowing fill rate numbers of GeForce2 GTS at its launch.

Now before we all kick NVIDIA's behind, we should realize that the '3D chip developing game', if you don't mind me calling it that, is not that different from the 'CPU developing game'. In both cases, teams work on a new design many months or even years before it is finally released. By the time GeForce2 GTS taped out, nobody could know exactly what the memory market would look like at its release. Obviously NVIDIA took a gamble, hoping that fast DDR SDRAM would be available as early as the second quarter of 2000. We know today that NVIDIA had no choice but to equip GeForce2 GTS with 6 ns DDR SDRAM, which does not quite deliver the bandwidth it takes to feed this high-speed chip. ATi just released Radeon, which suffers from memory bandwidth restrictions as well, because the best memory ATi could get was 5.5 ns DDR SDRAM, which is still not really good enough.
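
To put some rough numbers on that (my own back-of-the-envelope math, assuming the memory runs right at its rated speed on a 128-bit bus):

  6 ns   -> 166 MHz clock, 333 million transfers/s x 16 bytes = ~5.3 GB/s (GeForce2 GTS)
  5.5 ns -> 183 MHz clock, 366 million transfers/s x 16 bytes = ~5.9 GB/s (Radeon)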

GeForce2 Ultra To The Rescue

Now NVIDIA has finally been able to get hold of DDR SDRAM rated at 4.4 and even 4 ns. Take 64 MB of this new, fast memory, combine it with a GeForce2 GTS chip running at an even higher clock speed of no less than 250 MHz, and there you have it: NVIDIA's new GeForce2 Ultra!
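
The same back-of-the-envelope math shows what this memory is good for (again assuming a 128-bit bus):

  4 ns -> 250 MHz clock, 500 million transfers/s x 16 bytes = 8 GB/s in theory

NVIDIA actually runs the Ultra's memory at 230 MHz (460 million transfers/s), which still works out to about 7.4 GB/s.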

Experienced 3D buffs, and those of you who have followed my advice and read all of the articles listed above, don't really need to read any further, because you only need to put two and two together to grasp what kind of performance boost GeForce2 gets out of that high-octane mix. NVIDIA's new top performer is finally no longer slowed down by dawdling memory, so it can at last unleash the performance that its engineering fathers designed into it.

To bore you all to death, I could now repeat all of GeForce2's 3D features once more, as some of my dear colleagues will certainly do. However, since the GeForce2 Ultra is not a new miracle but simply a faster version of GeForce2 GTS, I will save us both some valuable time. If you still require the details, I suggest that you consult the following list. Everyone else gets the chance to skip the redundant stuff. That's a chance you don't get on every website!

GeForce2 Features

  • 0.18 Micron Process
  • Second Generation Integrated Transform & Lighting Engine
  • NVIDIA Shading Rasterizer
  • Per Pixel Lighting and more
  • Full Scene Anti Aliasing (FSAA)
  • Cube Environment Mapping
  • Transform & Lighting - What is it?
  • Fill Rate, Rendering Pipelines and Triangle Size
  • Wasted Energy - The Rendering Of Hidden Surfaces
  • 3D Benchmarking - Understanding Frame Rate Scores
  • GeForce2 GTS Suffers Badly From Memory Bandwidth Limitation

There's even more, but I grew a bit tired of reiterating it all here. Please complain to me if you thought the list was too short. :)

A Look Back In History

Before we get to the details of the new NVIDIA 3D accelerator, I would like to show you how fill rate and memory bandwidth have developed across NVIDIA's products of the last two years.

Let's first have a look at the theoretical specs found in the press releases:

Well, in terms of theoretical numbers, NVIDIA has come a long way since the release of TNT in fall 1998. TNT came with two rendering pipelines running at 90 MHz. Today the GeForce2 Ultra comes with four pipes, each of which can render two texels per clock, and they are clocked at 250 MHz. This means a theoretical pixel fill rate increase of 455% and a texel fill rate increase of more than 1000%! That's called progress!
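
For those who like to check the math, here is where those percentages come from (assuming one texel per pipe and clock for TNT):

  TNT:        2 pipes x 90 MHz              = 180 Mpixels/s (= 180 Mtexels/s)
  GF2 Ultra:  4 pipes x 250 MHz             = 1000 Mpixels/s
              4 pipes x 2 texels x 250 MHz  = 2000 Mtexels/s

  Pixel fill: 1000 / 180 = ~5.6 times TNT, a ~455% increase
  Texel fill: 2000 / 180 = ~11.1 times TNT, an increase of more than 1000%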

The next thing of interest is how the memory bandwidth of the card's video memory evolved:

First of all, you might spot the step backwards marked by the first GeForce256 chip, which was released with SDR memory. It came with a memory bandwidth of only 2.5 GB/s, although its predecessor, TNT2 Ultra, had already been able to sport 2.7 GB/s.

The next important thing is that the increase in memory bandwidth from TNT to GeForce2 Ultra is only 331%. It is obviously unable to keep up with the high pixel and ultra-high texel fill rate increases that took place from TNT to GF2 Ultra. This is strong evidence that memory technology cannot keep pace with 3D chip technology. It has always been like this, but in the past the graphics-chip makers simply widened the data path. Going from 16 to 32 bits or from 32 to 64 bits was possible, but today we have reached 128 bits, and going wider would be very difficult to implement. Thus the memory HAS TO get faster.
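
Again the quick math, assuming TNT's 128-bit SDR memory at 110 MHz and the Ultra's DDR memory at 230 MHz:

  TNT:        110 MHz x 16 bytes            = ~1.76 GB/s
  GF2 Ultra:  230 MHz x 2 (DDR) x 16 bytes  = ~7.36 GB/s

That is a factor of roughly 4.2, in the same ballpark as the 331% quoted above (the exact percentage depends on which TNT memory clock you assume), and a far cry from the 5.6x pixel and 11x texel fill rate growth.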