ATi's X800 Pulls Off Another Coup in the Graphics Performance War

Pixel Shader Units

The pixel shader units of the Radeon X800 XT: 4 x 4 pipes = 16 pixel shader units (upper part of the diagram). The lower image shows a close-up view of the four-unit block.

The pixel shader units of the X800, on the other hand, have been given some real attention. For example, the number of temporary registers has been increased from 12 to 32, and the maximum number of pixel shader instructions has grown from 120 to 1536 (512 each for vector, scalar and texture instructions). The memory management of the F-buffer has also been improved. The F-buffer theoretically allows pixel shader programs of unlimited length by caching pixels as they leave the pixel pipeline and feeding them back to the shader for further calculations. Another improvement is that only the pixels actually touched by a shader program are now saved, rather than the entire frame of a scene.
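To make the F-buffer idea more concrete, here is a small Python sketch of the concept (an illustrative model only, not ATI's actual hardware logic): a shader too long to run in one pass is split into several passes, and intermediate per-pixel values are cached only for the pixels the shader actually covers, rather than in a full-frame buffer.

```python
# Toy model of F-buffer-style multi-pass shading. Function names and the
# pass-splitting scheme are hypothetical; only the caching idea mirrors
# the article's description.

def run_long_shader(passes, covered_pixels, initial):
    """passes: list of functions value -> value, applied in order.
    covered_pixels: iterable of (x, y) pixels produced by rasterization.
    initial: function (x, y) -> starting value for that pixel."""
    # Sparse, pixel-keyed cache: only covered pixels are stored.
    fbuffer = {p: initial(*p) for p in covered_pixels}
    for shader_pass in passes:
        # Each pass reads the cached intermediate and writes a new one.
        fbuffer = {p: shader_pass(v) for p, v in fbuffer.items()}
    return fbuffer

# Hypothetical example: three short passes standing in for one long shader.
passes = [lambda v: v * 2, lambda v: v + 1, lambda v: v * v]
result = run_long_shader(passes, [(0, 0), (3, 1)], lambda x, y: x + y)
```

Only two pixel entries ever exist in the cache here, which is the point of the improved memory management: untouched pixels cost nothing.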

A detailed look at a pixel shader pipeline (see diagram 1*).

ROP in detail (see diagram 2*).

A real innovation that the X800 brings to the table is called 3Dc, which is part of the texture unit. The name denotes a technique that compresses normal maps in hardware. You can find more information about 3Dc further down in the article.
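The basic trick behind two-channel normal map compression of this kind (a general sketch, assumed here from how such schemes typically work rather than from ATI's exact format) is that a unit-length normal only needs two stored components; the third is rebuilt in the pixel shader from the constraint x² + y² + z² = 1.

```python
import math

def reconstruct_normal(x, y):
    """x, y in [-1, 1], read from the two stored texture channels.
    Returns the full (x, y, z) normal; z is derived, not stored."""
    # max() clamps away small negative values caused by rounding error.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# Example: a normal stored as (0.6, 0.0) comes back with z close to 0.8.
n = reconstruct_normal(0.6, 0.0)
```

Dropping one channel is what makes room for compressing the remaining two aggressively without the blocky artifacts that color-oriented texture compression produces on normal maps.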

The Hyper Z unit has also been revised. Not only does it operate faster now, it can also be used at higher resolutions such as 1600x1200 or 1920x1080. Naturally, its performance scales with the number of pixel pipelines, and the same holds true for color compression.
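A core part of what units like Hyper Z do is coarse, tile-level depth rejection. The following Python sketch is an illustrative model of that idea under simple assumptions (4x4 tiles, a "less than" depth test), not ATI's implementation: each tile tracks the farthest depth it contains, so a whole block of incoming fragments can be thrown away in one test when it is conservatively behind everything already drawn.

```python
class HierZTile:
    """Toy 4x4 depth tile with a coarse 'farthest depth' summary."""

    def __init__(self, size=4):
        # Cleared to the far plane (depth 1.0).
        self.size = size
        self.depth = [[1.0] * size for _ in range(size)]
        self.max_depth = 1.0

    def draw_block(self, frag_depths):
        """frag_depths: dict (x, y) -> depth for fragments in this tile.
        Returns the number of fragments actually written."""
        # Coarse test: if even the closest incoming fragment is farther
        # than everything stored, reject the block without per-pixel work.
        if min(frag_depths.values()) >= self.max_depth:
            return 0
        written = 0
        for (x, y), d in frag_depths.items():
            if d < self.depth[y][x]:  # standard 'less' depth test
                self.depth[y][x] = d
                written += 1
        # Refresh the tile summary after the writes.
        self.max_depth = max(max(row) for row in self.depth)
        return written

tile = HierZTile()
tile.draw_block({(x, y): 0.4 for x in range(4) for y in range(4)})
rejected = tile.draw_block({(1, 1): 0.9})  # behind everything: rejected
```

Because the early-out happens per tile rather than per pixel, its benefit grows with fill rate, which fits the article's point that Hyper Z scales with the number of pixel pipelines.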

Like its predecessors, the X800 only supports DirectX 9.0, meaning it is limited to Pixel Shader 2.0. On top of that, per-component floating-point precision is still restricted to a maximum of 24 bits. Again, on paper this looks like a clear disadvantage compared to NVIDIA's newest brainchild, which can woo buyers by offering 32-bit floating-point precision as well as DirectX 9.0c (Shader Model 3.0) support. For now, it's hard to tell whether the lack of these features will actually turn out to be a disadvantage in practice. At some point in the future it will surely become noticeable, but it remains to be seen how far off that time is.
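To get a feel for what the precision gap means, here is a rough Python model (assuming a 16-bit mantissa for the 24-bit format, as commonly cited for ATI's fp24, versus the 23-bit mantissa of fp32; exponent range and denormals are ignored): a value that survives rounding at 32-bit precision can collapse at 24 bits.

```python
import math

def round_mantissa(value, mantissa_bits):
    """Round a float to the given number of stored mantissa bits,
    assuming one implicit leading bit. Exponent range is not modeled."""
    if value == 0.0:
        return 0.0
    m, e = math.frexp(value)             # value = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** (mantissa_bits + 1)   # implicit bit + stored bits
    return math.ldexp(round(m * scale) / scale, e)

# A tiny offset below fp24's resolution but within fp32's.
x = 1.0 + 1.0 / (1 << 20)
fp24 = round_mantissa(x, 16)  # offset is lost
fp32 = round_mantissa(x, 23)  # offset survives
```

In a single shading operation the difference is invisible, which is why the disadvantage is theoretical for now; it only accumulates in long dependent calculation chains of the kind longer shader programs make possible.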