ATI's Optimized Texture Filtering Called Into Question

Bilinear And Trilinear Filtering

Texture scaling works fine with linear filtering alone, but only up to a point. To get a decent result in games, you can't do without mipmaps, as the following example shows:

Here are the most common filtering modes available. Please keep in mind that screenshots do not show how a game really looks when being played! You might think there's no big difference between bilinear and trilinear filtering, but that changes completely once you're moving in a game.
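As a refresher, a mipmap chain is nothing more than a series of progressively halved copies of the base texture. A minimal NumPy sketch of how such a chain can be built by averaging 2x2 texel blocks, assuming a square, power-of-two, single-channel texture (the function name is illustrative, not a real graphics API):

```python
import numpy as np

def build_mipmaps(tex):
    """Build a mipmap chain by averaging 2x2 texel blocks until 1x1.

    Assumes a square, power-of-two, single-channel texture.
    """
    mips = [tex]
    while mips[-1].shape[0] > 1:
        t = mips[-1]
        # Average each 2x2 block into one texel of the next level.
        mips.append((t[0::2, 0::2] + t[1::2, 0::2] +
                     t[0::2, 1::2] + t[1::2, 1::2]) / 4.0)
    return mips
```

A 256x256 base texture thus yields nine levels, down to a single averaged texel.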

That is to say, you need both linear filtering and mipmaps. The simplest (and fastest) variant of linear filtering used by graphics cards is bilinear filtering, which was in use as far back as the days of the Riva TNT and Voodoo1. Bilinear filtering filters each mipmap level separately, so at the transitions from one mipmap level to the next, visible edges appear. Running down our hallway, then, you push these horizontal lines out ahead of you: an unpleasant and very unrealistic effect, commonly called a "bow wave".
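The core of bilinear filtering is a weighted blend of the four texels nearest the sample point within a single mipmap level. A minimal NumPy sketch, assuming a single-channel texture and normalized coordinates (the function name is illustrative, not how any GPU exposes it):

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Sample a 2D texture (H x W array) at fractional coordinates
    u, v in [0, 1] by blending the four nearest texels linearly."""
    h, w = tex.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0          # fractional weights
    # Blend horizontally along the top and bottom texel rows...
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    # ...then blend the two rows vertically.
    return top * (1 - fy) + bot * fy
```

Because only one mipmap level is touched per pixel, the seams between levels remain sharp, which is exactly the banding described above.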

On the left you can see the banding between the two mipmap levels. This band shifts constantly as you move through the game.

Trilinear filtering counteracts this by also taking the texels of the neighboring mipmap level into account in the area of the mipmap transitions. In this area, therefore, the two levels are blended with one another. As can easily be imagined, however, this demands much greater computing power. Bilinear filtering of a texture is practically "free" on today's graphics cards, i.e. the chips manage it without any sacrifice in performance. Trilinear filtering, on the other hand, costs computing time, as more than one mipmap level must be taken into account. With trilinear filtering, the final pixel is formed from a weighted average of eight texels: a 2x2 block from each of the two adjacent mipmap levels.
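The blending of the two levels can be sketched as a third linear interpolation layered on top of two bilinear samples, one per adjacent mipmap level. A self-contained NumPy sketch, again with illustrative function names and a single-channel texture (real hardware also computes the level of detail from screen-space derivatives, which is omitted here):

```python
import numpy as np

def bilinear(tex, u, v):
    """2x2 bilinear sample of one mipmap level (H x W array)."""
    h, w = tex.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear(mips, u, v, lod):
    """Blend bilinear samples from the two mipmap levels bracketing
    `lod`: eight texels in total, a 2x2 block from each level."""
    lo = int(lod)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo                     # how far between the two levels
    return (bilinear(mips[lo], u, v) * (1 - f) +
            bilinear(mips[hi], u, v) * f)
```

Note the cost difference: the bilinear path reads four texels, while the trilinear path reads eight and adds a third interpolation, which is exactly where the vendors' "optimizations" look for savings.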

It is precisely here that the "optimization" of ATI and NVIDIA takes over to save computing time. But more about this later. First, another filtering method.