Why do AA and AF have such an impact on some cards?

bustapr

Distinguished
Jan 23, 2009
1,613
0
19,780
Everywhere I read, people say that anti-aliasing and AF don't have much of an impact on cards. But when I look at actual benchmarks of some high-end cards, they suffer a lot more with AA and AF than without.
 
It depends on the architecture of the card and the way AA is implemented. In DX10.1, AA is implemented through the shaders (which is where ATI cards have a ton of power, hence why they like 10.1). The old method (DX10 and before) is to have a separate unit at the back end to perform these functions. If this secondary unit is not large enough, a huge FPS reduction occurs when AA is turned on (for example, the 2900). If this unit is large, then the impact is smaller or negligible (for example, the 8800).
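Very roughly, the work in question is the "resolve" step: each pixel holds several colour samples, and either the dedicated back-end unit or (in the DX10.1 style) a shader averages them into the single colour that gets displayed. A minimal Python sketch of that idea, with a made-up data layout and function names, purely for illustration:

```python
# Sketch of a multisample "resolve": every screen pixel stores several
# colour samples, and either fixed-function hardware or a shader has to
# average them down to one displayed colour per pixel, every frame.

def resolve_pixel(sample_colours):
    """Average one pixel's samples (e.g. 4 of them for 4xAA) into one colour."""
    n = len(sample_colours)
    return tuple(sum(c[i] for c in sample_colours) / n for i in range(3))

def resolve_frame(samples):
    """samples[y][x] is a list of (r, g, b) tuples; this is the extra per-frame work AA adds."""
    return [[resolve_pixel(pixel) for pixel in row] for row in samples]
```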
 

Dekasav

Distinguished
Sep 2, 2008
1,243
0
19,310
Also, AA takes up a bit of memory, so if turning it on (or setting it higher) causes you to overrun your memory buffer (Video RAM), you take a huge performance hit.
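As a rough back-of-envelope illustration of that (the byte counts per sample are assumptions; real formats vary by card and driver):

```python
# Rough estimate of how much video RAM a multisampled render target needs.
# Assumes 4 bytes of colour plus 4 bytes of depth/stencil per sample, which
# is a common but not universal layout.

def framebuffer_mb(width, height, aa_samples, bytes_colour=4, bytes_depth=4):
    per_pixel = aa_samples * (bytes_colour + bytes_depth)
    return width * height * per_pixel / (1024 * 1024)

for aa in (1, 2, 4, 8):
    print(f"1920x1200 at {aa}x AA: ~{framebuffer_mb(1920, 1200, aa):.0f} MB")

# Roughly 18, 35, 70 and 141 MB just for the render target, before textures,
# geometry and everything else -- easy to see how turning AA up can push a
# card past its video RAM and into that performance hit.
```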
 

Dekasav

Distinguished
Sep 2, 2008
1,243
0
19,310
I believe there's something in the DX10.1 requirements that states a GPU must be able to do 4xAA in some specific way, and that it allows AA (up to 4x) to be done in a single pass (in DX10.1 code), which is more efficient than DX10 and earlier. As for AF, I have no idea.

He could also just be referring to the fact that the HD4000 series does AA astonishingly well and efficiently, and since they're DX10.1 as opposed to just DX10 like Nvidia, he assumes that is the difference.
 

dtq

Distinguished
Dec 21, 2006
515
0
18,990
When using anti-aliasing, the card effectively has to pre-render each frame at 2x the display resolution and then average the frame back down to the display resolution to create the actual displayed image; that's for 2xAA. For 4xAA the pre-render is at 4x the display resolution. This is why AA has such an impact.
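To make the "pre-render bigger, then average down" idea concrete, here's a deliberately naive Python sketch of that kind of supersampling (real cards use smarter sample patterns and filters; the names here are made up):

```python
# Naive supersampling sketch: render at a higher resolution, then average
# blocks of pre-rendered pixels down to one display pixel. factor=2 means
# 2x in each dimension, i.e. four rendered pixels behind every displayed
# pixel (the "4x the display resolution" case described above).

def downsample(high_res, factor=2):
    """high_res is a 2D list of (r, g, b) tuples at factor * display resolution."""
    out_h = len(high_res) // factor
    out_w = len(high_res[0]) // factor
    result = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            # Gather the block of pre-rendered pixels behind this display pixel
            # and average it down to the final colour.
            block = [high_res[y * factor + dy][x * factor + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(tuple(sum(c[i] for c in block) / len(block) for i in range(3)))
        result.append(row)
    return result
```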

The reason it looks "better" is that when rendering at display resolution things can look very "jaggy": the card isn't told the full intended shape of an object, it's just told what the game thinks things should look like at that resolution, which means less detailed curves etc. than at a higher resolution. By pre-rendering first at a higher resolution, the card can "see" what the image should look like and then decide for itself, from that better-quality image, how to display the final one. This leads to different decisions as to which pixel is part of which object, because the card now knows more of the "intended shape" of objects.

This is also why increasing AA is a case of diminishing returns: each notch up of AA is less likely to make much of an impact on the final image, because there is only so much the card can do with the available pixels. Say at 2xAA the card finds 1,000 pixels to change colour to better represent the scene; the extra detail from 4xAA might only add another 200 pixels to change. The card still has to render back down to display resolution, and although it has more pixels to work with while pre-rendering, it can still only output at display resolution. So although it's doing double the pre-rendering work for 4xAA over 2xAA, that might only have a limited impact on the final image. Go up to 8xAA and you are talking huge demands for very limited extra gains.
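Putting some numbers on that trade-off (entirely made up, just like the 1,000 and 200 above):

```python
# Illustrative cost vs. benefit of cranking AA up, using invented figures:
# the pre-render work scales with the AA level, while the number of display
# pixels that actually end up looking better grows much more slowly.

display_pixels = 1920 * 1200
improved = {2: 1000, 4: 1200, 8: 1250}   # assumed cumulative pixels visibly improved

for aa, better in improved.items():
    work = display_pixels * aa            # pixels the card has to pre-render
    print(f"{aa}xAA: ~{work:,} pre-rendered pixels for ~{better:,} improved display pixels")
```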

Of course, no amount of AA can ever make up for physically large pixels on most modern LCDs. No amount of AA in the world will ever completely eliminate jaggies on an LCD screen, but it can still help improve the appearance.

There are many different "types" of AA, but as far as I know all of them use a pre-rendered image and re-render it down to display resolution. Some of them, I think, choose to only re-render parts of an image to save GPU load, and some probably employ shortcuts to try to make AA work faster, but as far as I know they are all variations and optimizations of the same basic principle.
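A tiny sketch of that "only re-work parts of the image" optimisation (purely illustrative, not any card's actual algorithm):

```python
# Spend the expensive smoothing work only on pixels flagged as sitting on an
# edge, and copy everything else straight through to save GPU load.

def selective_aa(image, is_edge, smooth_pixel):
    """image: 2D list of colours; is_edge(x, y) -> bool; smooth_pixel(image, x, y) -> colour."""
    out = [row[:] for row in image]
    for y in range(len(image)):
        for x in range(len(image[0])):
            if is_edge(x, y):
                out[y][x] = smooth_pixel(image, x, y)   # expensive AA work here only
            # otherwise the pixel is left untouched
    return out
```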