S3 Savage 2000 Preview

Fill-rate Performance

This topic has become such an issue lately, with so many marketing terms mixed in with factual numbers, that people have been confused by the hype. S3 has claimed some 700 Mtexels/s, when in fact the shipping part will deliver 500 Mtexels/s (due to the lower-clocked core). On top of this, the true pixel fill rate is limited to 250 Mpixels/s (compared to the GeForce's 480 Mpixels/s). The S2000 can render two pixels with one or two textures each, or one pixel with four textures, which makes the actual fill rate very dependent on the software the card is running. In some cases you may see an incredible fill rate, while in others you may see so-so performance. In the quad-texture case the GeForce would be less efficient, but its brute-force fill rate may keep it on par with the S2000. S3 has clearly geared this card for heavily textured situations where dual or quad texturing is needed. My biggest concern is the card's relatively low single-textured fill rate, which may hurt it under certain circumstances.
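To show just how mode-dependent the fill rate is, the arithmetic can be sketched out. The 500 Mtexels/s and 250 Mpixels/s figures are the claimed numbers discussed above; the min() model is my own simplification of the pipeline, not anything from S3's documentation:

```python
# Rough sketch of effective fill rate under the S2000's claimed figures.
# These are vendor claims, not measured numbers, and the model is an
# assumption: pixel output is bounded by both the texel engines and the
# pixel pipes.
TEXEL_RATE_M = 500   # Mtexels/s (shipping-clock claim)
PIXEL_RATE_M = 250   # Mpixels/s (true single-texture pixel rate)

def effective_pixel_rate(textures_per_pixel):
    """Pixels/s is limited by whichever resource runs out first."""
    return min(PIXEL_RATE_M, TEXEL_RATE_M / textures_per_pixel)

for n in (1, 2, 4):
    print(n, "texture(s):", effective_pixel_rate(n), "Mpixels/s")
```

Under this model, single- and dual-textured pixels both come out at 250 Mpixels/s, while quad-textured pixels drop to 125 Mpixels/s, which illustrates the article's point: the card's advantage shows up only when multi-texturing keeps the texel engines busy.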

Hardware T&L

The S2000 will be the second card to market with hardware T&L, and so far we haven't been able to accurately test the true performance of these units. For the GeForce I have read performance ratings ranging from 10-25M triangles/s, but I currently have no method of testing this. The folks at S3 are claiming 2.5M lit and clipped triangles/s and say that their lighting is every bit as fast as the GeForce's. For comparison, an Athlon at 900 MHz can do about 740k triangles/s unclipped and unlit (clipping and lighting would lower that number further). I would, however, like to reserve judgment on these claims until we are able to further analyze the GeForce and S2000 with better software (or at least something that runs on DMZG).
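To put the quoted rates in perspective, a quick back-of-the-envelope calculation shows the per-frame geometry budget each one implies at 60 fps. The figures are the vendor claim and the CPU number quoted above; the 60 fps target is my own assumption:

```python
# Per-frame triangle budget implied by the quoted throughput numbers.
# 60 fps is an assumed target frame rate, not from the article's tests.
rates = {
    "S2000 claim (lit + clipped)": 2_500_000,
    "Athlon 900 MHz (unlit, unclipped)": 740_000,
}
for name, tri_per_s in rates.items():
    print(name, "->", tri_per_s // 60, "triangles/frame at 60 fps")
```

Roughly 41k triangles per frame versus 12k: a meaningful gap, and the CPU number would shrink further once lighting and clipping are added.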


S3 has quietly been ATI's only real competitor in consumer video features, and things are improving. Although the S2000 doesn't have dedicated MPEG-2 decoding or DTV/HDTV decoding, it does offer the VESA VIP 2.0 standard interface. The Video Interface Port (VIP) allows flexible connectivity between the graphics card and MPEG-2 decoders, HDTV decoders, and digital video encoders.

MPEG-2 decoding has also improved with the S2000. One of the most touted features was high-quality video up-scaling/down-scaling: altering the size of a source image while preserving as much visual accuracy as possible. I just happened to have my new video-scaling benchmark program (no joke) with me at the time, so I gave it a try on one of the test stations. I can safely say that this particular feature was on par with ATI's latest offerings in this area. It's only a small piece of video playback, but important nonetheless. Overall the MPEG-2 support seemed to be pretty good, but until we get this card into the lab, I'm not making any final judgments.
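For readers unfamiliar with what "high-quality scaling" involves, a minimal bilinear resampler illustrates the basic idea: sample between source pixels rather than just replicating them. This is a textbook sketch in pure Python for grayscale data, not S3's hardware algorithm:

```python
# Minimal bilinear scaling sketch (grayscale, nested lists).
# Illustrative only -- real video scalers use better filters and run in
# hardware; this just shows interpolation vs. pixel replication.
def bilinear_scale(src, new_w, new_h):
    old_h, old_w = len(src), len(src[0])
    out = []
    for y in range(new_h):
        fy = y * (old_h - 1) / max(new_h - 1, 1)
        y0 = int(fy); y1 = min(y0 + 1, old_h - 1); wy = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (old_w - 1) / max(new_w - 1, 1)
            x0 = int(fx); x1 = min(x0 + 1, old_w - 1); wx = fx - x0
            # Blend the four surrounding source pixels.
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out
```

Scaling a 2x2 ramp from 0 to 100 up to three columns yields an interpolated 50 in the middle, which is exactly the kind of smooth gradient a naive pixel-doubling scaler cannot produce.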


With all the commotion about S3 having horrible drivers, I decided to ask one of the friendly engineers (who asked that his name be withheld) a few questions on this very topic. I asked about the driver status, the OpenGL ICD availability date, driver support for z-buffering, and a few artifacts I had noticed in Quake Arena while testing.

While using the software I had noticed an artifact that led me to believe there was an issue with their z-buffer support. When I asked about it, he told me that they are using a more efficient depth scheme called a w-buffer, which relieves bus bandwidth: only x/y/w coordinates need to be sent over the bus, while a z-buffer in certain circumstances has to send all that plus z. So what's the difference? The w-buffer is more flexible in certain ways, but it has to be used correctly or visual defects will appear. One other possible side effect of this shortcut is that games that use the z-buffer to create special visual effects might run into rendering issues. This isn't something game coders commonly do, so I wouldn't be too worried about it; in most cases the z-buffer can be used instead of the w-buffer to deal with it. You may lose some performance, but you have a working game. He went on to mention that their OpenGL ICD (which will ship with the card) supports only the w-buffer, a choice made to trade a little quality for greater performance. In most games I don't think this will be an issue, but in certain areas of games like Quake Arena it may be an annoying problem if the drivers have issues. Once the drivers mature, I don't expect to see these problems. In DirectX the z-buffer story is different: the card does support 32-bit z-buffering, but DirectX is the only API that will use it (by S3's choice). The z-buffer will be the default in DirectX unless a program requests the w-buffer (as Unreal does).
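The z-buffer/w-buffer distinction the engineer describes can be sketched numerically. This is textbook projection behavior, not S3's actual hardware path: a z-buffer stores post-projection depth, which bunches its precision near the camera, while a w-buffer stores eye-space depth linearly across the view range (the near/far plane values here are arbitrary assumptions):

```python
# Textbook z-buffer vs. w-buffer depth mapping (assumed 1..1000 view range).
# Not S3's implementation -- just the standard behavior of each scheme.
near, far = 1.0, 1000.0

def z_buffer_value(eye_depth):
    # Post-projection depth in [0, 1]; strongly non-linear in eye_depth.
    return (far / (far - near)) * (1.0 - near / eye_depth)

def w_buffer_value(eye_depth):
    # Linear remap of eye-space depth to [0, 1].
    return (eye_depth - near) / (far - near)

mid = (near + far) / 2
print("z at midpoint:", z_buffer_value(mid))   # almost 1.0
print("w at midpoint:", w_buffer_value(mid))   # almost 0.5
```

Halfway through the scene the z-buffer has already used more than 99% of its numeric range, which is why depth-based tricks that assume linear depth can misbehave when the two schemes are swapped.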

After discussing so much about OpenGL, I decided to inquire about the DX7 status of their drivers. Apparently they are much farther along with their OpenGL ICD than with the DX7 driver. He stated that they will have full DX7 support upon shipping, but I found it interesting that the OpenGL driver took priority. I guess they took so much heat for lacking that support on their other Savage products that they put more resources into getting it out of the way earlier.


Before I get to the juicy stuff, I want to remind you that all my comments and performance observations were made on BETA hardware with BETA software drivers. It's to be expected that there would be some performance issues as well as some quality problems this early on. Several benchmark programs were provided to let testers check out the performance of the S2000, but it wasn't as controlled an environment as you have come to expect from THG. My game plan was to check for high performance and stunning quality in the applications I chose to run.

From the list of benchmarks, I first chose none other than Quake Arena. Upon running it, I noted that the visual quality wasn't exactly stunning (colors bleeding here and there, as well as occasional video corruption) and that at high resolution (regardless of color depth) the performance was slower than I had expected. I realize the driver will improve over time, but at high resolution you can usually get a good picture of what a card can do unless the driver has very serious issues. Keep in mind that at the fill rates S3 is claiming, the card shouldn't have much of a problem running Quake Arena at 1600x1200x16, but it did: it barely hobbled in at less than 25 FPS. That seems particularly odd given that the S2000 claims such a monstrous fill rate in multi-texturing scenarios. I also noted the z-buffer problem among other visual defects, so I don't think publishing numbers this early is fair to the S2000 or its competition. Hopefully the driver fixes won't bring performance drops with them. Look for our full review later this month.
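A quick sanity check supports the suspicion that this result isn't fill-rate limited. The resolution and frame rate are from the test above; the overdraw factor is an assumption (Quake-engine scenes are often cited around 2-3x):

```python
# How much pixel fill does 1600x1200 at ~25 fps actually demand?
# Overdraw of 3x is an assumed typical figure, not a measured one.
width, height, fps, overdraw = 1600, 1200, 25, 3
required_mpixels = width * height * fps * overdraw / 1e6
print(required_mpixels, "Mpixels/s")
```

That works out to about 144 Mpixels/s, comfortably below even the 250 Mpixels/s true pixel rate, which suggests the bottleneck at that setting was the beta drivers or memory bandwidth rather than raw fill rate.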


The S2000 will make for an interesting story when it's released, thanks to some promising features and some interesting choices. S3 made a few potentially risky trade-offs to reach the performance level of something like the GeForce, and in some cases maybe surpass it. However, any one of those gambles could come back to haunt them if it becomes a major issue. I'm all for efficiency as long as it doesn't hinder my visual experience, but that will be for quite a few people to decide. I expected less-than-perfect quality until the card ships on final hardware with released drivers, but I did see some questionable things that I hope won't make it into production. Drivers are what S3 needs to chisel away at before a final verdict can be drawn. I really expected to see more than I did, given all the hype, and I hope S3 can work out all the kinks before the card ships, especially if they hope to compete with the likes of NVIDIA's GeForce GPU.