S3 Diamond Viper II Review

The Competition

Although this card is S3's flagship video solution at the moment, that doesn't necessarily mean it has the ability, or even the intention, to compete with the high-end solutions now available on the market. Keep in mind that this card is nowhere near the price range of something like a $300 (USD) GeForce DDR board.

3dfx won't have its newest line-up available anytime soon, so the Voodoo3 3000 and Voodoo3 3500 are its two contenders. Although neither card offers 32-bit color, both are decently fast at 16-bit 3D and have very good driver support. The 3500 is slightly more expensive but adds a TV tuner and better video options than the Viper II offers. Also keep in mind that the Voodoo3 series does not have hardware T&L.

ATI really doesn't have a card that can compete with the Viper II just yet, but we'll be taking a peek at the ATI Rage Fury MAXX as soon as it makes its way into the lab. The MAXX will be in a similar price range, offering awesome fill-rate performance, but it doesn't come with hardware T&L. Keep your eyes peeled for something about this very soon.

Matrox is currently shipping the G400 series cards, which compete at and above the price range of the Viper II. The G400 sits at the same price while the G400 MAX is slightly more expensive. The G400 cards offer great visual quality, decent fill-rates, good video performance and feature environment bump mapping. Keep in mind that the G400 solutions don't come with hardware T&L either.

NVIDIA is offering two products that will end up competing with the Viper II: the TNT2 Ultra and the GeForce SDR. The TNT2 Ultra is slightly cheaper than the Viper II while the GeForce board is slightly more expensive. In theory, the Viper II should offer better performance than the TNT2 Ultra while remaining relatively competitive with the GeForce SDR boards. The GeForce has a fully functioning T&L unit and currently offers the best real-world fill-rate we've seen to date.

Fill-Rate

Once again the term Mtexel has come back to haunt us, as people have abused the word and started comparing single-textured fill-rates against multi-textured fill-rates. The fill-rate of the Viper II is 250 Mpixels/sec, with the ability to apply up to four textures per pixel in a single pass. Let's reflect on a few terms before we get deeper into this discussion.

Fill-rate - the rate at which pixels are drawn to video memory.

Pixel - short for "picture element", a pixel is a single point in a graphic image. Monitors display pictures by segmenting the screen into hundreds of thousands (even millions) of pixels.

Texel - short for "texture element"; similar to a pixel, but texels are textured pixels on a 3D surface.

The confusion doesn't end here either: you can apply multiple textures in a single pass (through hardware) or over multiple passes (through software). You can even have different filtering methods that eat up more bandwidth, but we'll save that discussion for another time. The main idea is that texel fill-rate can be measured with one or more textures applied in various situations, which allows companies to play with the "Mtexels/sec" measurement and market their product in its best light.

For example, you may have a card based on the GeForce 256 that can push four single-textured pixels in a single pass while a card based on the Savage 2000 can only push two. It's obvious who is faster in a single-textured scenario, so how does S3 market the Viper II as 20 Mtexels/sec faster than a GeForce? Easy: they take a multi-texture situation and start multiplying numbers. The Viper II graphics core runs at 125 MHz and can dual-texture two pixels per pass (or apply one quad-textured pixel per pass), and you end up with the following:

125,000,000 (125 MHz core speed) * 4 (one quad-textured pixel or two dual-textured pixels) = 500 Mtexels/sec

If we look at the GeForce 256, we will see:

120,000,000 (120 MHz core speed) * 4 (four single-textured pixels or two dual-textured pixels) = 480 Mtexels/sec

So why on earth would a GeForce be faster than a Viper II in fill-rate tests or actual applications? First off, keep in mind that most games do not use multi-texturing the way we'd hope they would; they resort to multi-pass multi-texturing (applying each texture in its own pass) to support legacy hardware. This means that in a single-textured application the GeForce runs at its maximum fill-rate while the Viper II works at half of its possible Mtexel rate, since half of its texture units sit unused. Given a dual-texture scenario, things would change and, in theory, the Viper II would actually be faster. Note that I said in theory; there are even more factors that come into play, such as memory bandwidth, filtering methods and T&L bandwidth.
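To make the arithmetic concrete, here is a minimal sketch of the same texel-rate math in Python. The pipeline layouts (pixels per clock, textures per pixel) are the figures quoted above; the function names are just illustrative.

    def peak_mtexels(core_mhz, pixels_per_clock, textures_per_pixel):
        # Marketing-style peak: every texture unit busy on every clock.
        return core_mhz * pixels_per_clock * textures_per_pixel

    def effective_mtexels(core_mhz, pixels_per_clock, textures_per_pixel, textures_used):
        # Texels actually laid down when a game only applies `textures_used`
        # textures per pixel, leaving any extra texture units idle.
        return core_mhz * pixels_per_clock * min(textures_used, textures_per_pixel)

    # Viper II (Savage 2000): 125 MHz, 2 pixels/clock, 2 textures per pixel
    # GeForce 256:            120 MHz, 4 pixels/clock, 1 texture per pixel
    print(peak_mtexels(125, 2, 2))            # 500 Mtexels/sec
    print(peak_mtexels(120, 4, 1))            # 480 Mtexels/sec

    # In a single-textured game the GeForce keeps all four pipes busy
    # while half of the Viper II's texture units go unused.
    print(effective_mtexels(125, 2, 2, 1))    # 250 Mtexels/sec
    print(effective_mtexels(120, 4, 1, 1))    # 480 Mtexels/sec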

Next time you pick up a retail box and check the specified fill-rate, keep in mind the other factors involved before deciding that one card's fill-rate is superior. You just might be surprised.

Transform And Lighting Engine

Transform and lighting (or T&L) has been the latest buzzword since NVIDIA announced and then released the GeForce GPU to consumers this year. With the promise of greater 3D quality thanks to the newfound ability to have high-polygon scenes and realistic lighting without a great performance loss, competitors quickly shuffled to release their own T&L solutions or to explain why they felt it was still too early to release such a product. S3 happened to be one of the companies that embraced this feature (keep in mind it is not new to the workstation market) and incorporated it into their latest product, the Viper II.
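For those wondering what a T&L engine actually computes, here is a rough per-vertex sketch of the idea: every vertex is run through a 4x4 transform matrix and given a lighting value, work that would otherwise fall on the CPU. The single directional light and the identity matrix below are illustrative assumptions, not a description of S3's or NVIDIA's actual pipeline.

    def transform(vertex, matrix):
        # Apply a row-major 4x4 matrix to an (x, y, z, 1) vertex.
        x, y, z, w = vertex
        return tuple(x * m[0] + y * m[1] + z * m[2] + w * m[3] for m in matrix)

    def diffuse(normal, light_dir):
        # Simple Lambertian term: clamp(N . L, 0, 1).
        dot = sum(n * l for n, l in zip(normal, light_dir))
        return max(0.0, min(1.0, dot))

    identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    vertex = (1.0, 2.0, 3.0, 1.0)            # object-space position
    normal = (0.0, 1.0, 0.0)                 # vertex normal
    light = (0.0, 1.0, 0.0)                  # directional light

    print(transform(vertex, identity))       # transformed position
    print(diffuse(normal, light))            # per-vertex light intensity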

So how fast is the Viper II's T&L engine? I have no idea. Why? The current drivers do not have hardware T&L enabled as of yet. S3 has officially stated that they put all their efforts into making the Viper II features that can be taken advantage of today available, and that meant putting T&L aside because they ran into driver issues that couldn't reasonably be fixed in time for release. They also went on to tell me that the merger between Diamond and S3 made things a bit more difficult. I don't quite see how that's possible, seeing as two companies combining behind one product line should mean more manpower, not less, to develop stable, fully functional drivers.

So when will a driver with T&L enabled be available? In the middle of January an OpenGL ICD will be released with it enabled, and towards the end of Q1 a DirectX driver with T&L will make its way onto the web. S3 claims that T&L isn't such a big issue right now because there really isn't any software that uses it yet, and that by the time their drivers are released, T&L games will just be starting to trickle out. Although I didn't like seeing T&L unavailable in the released drivers, it is true that there currently isn't much software that takes advantage of it. However, who really wants to buy a card that can't do what it claims right out of the box? And with S3's recent driver problems with legacy video cards, who really wants to trust them? That is something you, the consumer, will have to decide.

Memory Bandwidth

Memory bandwidth is quickly becoming a very important factor for 3D accelerators as we transition to high resolutions and high color depths. The more complex we make 3D games or graphics applications, the more memory bandwidth we need, and depending on the architecture of the chip, one card may need more than another. For example, GeForce-based boards need a great amount of bandwidth to push such insane fill-rates with high resolution, high color, T&L and various filtering modes, which is why we're seeing DDR-based boards pop up to help alleviate the problem for the GeForce. In the future, 3dfx and ATI are going to take yet another route where "brute force" is used by allocating a set amount of memory per graphics chip and dividing the work between the two, theoretically doubling the memory bandwidth. S3 is using SDR memory on the Viper II, clocked at 155 MHz, giving them roughly 2.5 GB/sec of memory bandwidth. With the use of texture compression and efficient driver tweaks, S3 must not feel the need for a higher-bandwidth memory solution just yet.
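As a quick sanity check of that 2.5 GB/sec figure, here is the back-of-the-envelope math in Python. The 128-bit bus width is an assumption on my part; only the 155 MHz clock and the end figure are quoted above.

    def bandwidth_gb_per_sec(clock_mhz, bus_bits, transfers_per_clock=1):
        # Peak bandwidth: clock * bus width in bytes * transfers per clock.
        # SDR memory moves data once per clock, DDR twice.
        return clock_mhz * 1e6 * (bus_bits / 8) * transfers_per_clock / 1e9

    print(bandwidth_gb_per_sec(155, 128))      # Viper II SDR: ~2.48 GB/sec
    print(bandwidth_gb_per_sec(150, 128, 2))   # e.g. a 150 MHz DDR board: ~4.8 GB/sec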

S3 Texture Compression

S3 texture compression (or S3TC) has been around since the birth of the Savage3D and has been trying to get its foot in the door with software developers ever since. Although the feature hasn't received as much support as S3 would like, developers are slowly beginning to come around. We're seeing game developers like Epic (Unreal), id (Quake 3 Arena), Monolith (Shogo 2) and Raven (Soldier of Fortune) providing support for S3TC in their upcoming titles. S3TC not only offers higher visual quality but also accelerates performance in scenes with large textures. These instances may be few right now, but with 3dfx and other big graphics players pushing texture compression, you can bet it will become a standard feature.
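To give a feel for the savings involved, here is a small sketch of the storage math, assuming the fixed-ratio S3TC mode that packs each 4x4 block of texels into 64 bits; other S3TC formats use 128 bits per block, so the exact ratio depends on the mode a developer picks.

    def uncompressed_kb(width, height, bits_per_texel=32):
        return width * height * bits_per_texel / 8 / 1024

    def s3tc_kb(width, height, bits_per_block=64):
        # S3TC works on 4x4 texel blocks.
        return (width // 4) * (height // 4) * bits_per_block / 8 / 1024

    print(uncompressed_kb(256, 256))   # 256.0 KB for a 256x256 32-bit texture
    print(s3tc_kb(256, 256))           # 32.0 KB compressed, an 8:1 saving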

Video

With ATI standing alone in the consumer high-performance video arena, S3 is the only company beginning to offer some type of real challenge. The Viper II features enhanced DVD capabilities that improve visual quality: motion compensation plus 16-tap upscaling and downscaling greatly improve software DVD playback. While not offering the best DVD performance out there, the Viper II is a very good runner-up when it comes to DVD playback. Software DVD playback on our P!!! 550 test system was smooth, and visual quality differences between ATI and S3 were very difficult to discern.
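For the curious, the "16-tap" part simply means each scaled output sample is a weighted blend of sixteen nearby input samples rather than the two per axis that a plain bilinear filter uses, which is why scaled video looks sharper. The sketch below illustrates the idea with simple sinc weighting; I don't know S3's actual filter kernel, so treat the weights as an assumption.

    import math

    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

    def resample(samples, position, taps):
        # Interpolate a value at a fractional position from `taps` neighbours.
        base = int(math.floor(position))
        total = weight_sum = 0.0
        for i in range(base - taps // 2 + 1, base + taps // 2 + 1):
            if 0 <= i < len(samples):
                w = sinc(position - i)        # illustrative weighting only
                total += samples[i] * w
                weight_sum += w
        return total / weight_sum

    row = [10, 12, 50, 200, 180, 60, 20, 15]   # one scanline of luma values
    print(resample(row, 3.5, 2))    # 2-tap, bilinear-style estimate
    print(resample(row, 3.5, 16))   # wider 16-tap window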

Drivers

Before we take a look at the drivers that S3 provided with the Viper II, I wanted to address a topic that many of you have e-mailed me about in recent weeks. It seems that S3 has still been underachieving when it comes to supporting drivers for their legacy products, and many of you are very upset about this. I understand the pain of dealing with "not-so-refined" video drivers, as we have to use them periodically for reviews. I will say that aside from some problems with 3DMark 2000, there were no real issues with the drivers that I saw. I've confronted S3 about this and they assure me that they are trying their best to support their legacy hardware owners and that they will keep a high standard of quality for the Viper II drivers. Let's take a look at the drivers now.

Here we have the basic color correction adjustment window that lets us set "schemes" for various users or application settings.

Here we have the D3D properties window, which adjusts settings per application. I was personally annoyed by this, but it can have its advantages when you need to turn certain things on or off for a given application. I still feel there should be a general tab for a universal setting.