ATI's Optimized Texture Filtering Called Into Question

Optimization Fever

Our simplified explanation of the different texture filtering techniques makes it clear that correct filtering is very computing-intensive: the higher the quality of the filtering, the more the frame rate drops.
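To make the cost concrete, here is a minimal sketch (an illustration, not any vendor's actual hardware path) of what a single bilinearly filtered texture sample involves; the texture is assumed to be a simple 2D grid of grayscale texel values:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def bilinear_sample(texture, u, v):
    """One bilinear sample: 4 texel fetches and 3 interpolations."""
    x0, y0 = int(u), int(v)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = u - x0, v - y0
    # Four texel fetches from the nearest mip level
    t00 = texture[y0][x0]
    t10 = texture[y0][x1]
    t01 = texture[y1][x0]
    t11 = texture[y1][x1]
    # Three lerps blend the four texels
    top = lerp(t00, t10, fx)
    bottom = lerp(t01, t11, fx)
    return lerp(top, bottom, fy)
```

Trilinear filtering runs this twice, on two adjacent mip levels, and adds one more interpolation between the results: 8 fetches and 7 lerps per sample, roughly double the work for every textured pixel on screen.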

Over the years, graphics chip developers, primarily NVIDIA and ATI, have come up with numerous ways to get more performance from their chips. This is what manufacturers mean when they speak of optimization: doing less, not doing it better. In the beginning that was desperately needed, because computing power then was simply inadequate for the task of providing filtering at acceptable frame rates. Here you may recall the S3 Virge, which mutated from an "accelerator" to a "decelerator" when bilinear filtering was used.

For a long time, trilinear filtering was a luxury, though it became ever more practical as 3D chips grew faster. Anisotropic filtering followed, initially adjustable only in small steps. The turning point came with the ATI Radeon 8500: ATI implemented anisotropic filtering, but only in combination with bilinear filtering, and heavily optimized at that. Criticism of this optimization nevertheless remained muted, since it delivered considerably better quality than pure trilinear filtering, and "true" anisotropic filtering was out of the question for performance reasons.
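A back-of-the-envelope count of texel fetches (an illustration, not measured data) shows why full trilinear anisotropic filtering was so daunting: anisotropic filtering takes several bilinear or trilinear probes along the axis of anisotropy, so the costs multiply.

```python
def texel_fetches(aniso_degree, trilinear):
    """Texel fetches per sample: 4 per bilinear probe, 8 per trilinear probe,
    multiplied by the degree of anisotropy (the number of probes taken)."""
    per_probe = 8 if trilinear else 4
    return aniso_degree * per_probe
```

With 16x anisotropy, bilinear probes already require 64 fetches per sample; trilinear probes double that to 128, which explains why ATI's bilinear-only shortcut was tempting at the time.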

It should also be borne in mind that some calculations really can be simplified - or skipped entirely - without impacting image quality too much.