Hello all, I just want to get these things straight, so suggestions and shared experiences would be very helpful.
The HD 7xxx GPUs seem to thrive with MSAA, perhaps because of their wider memory bus and higher bandwidth. The GTX 6xx series manages fine up to 4x MSAA; anything higher than that kills their lead over the HD 7xxx GPUs.
But do the tables turn when we choose SSAA/FSAA? How do Radeon cards perform there?
I have a GTX 660 Ti, which comes with a narrow 192-bit memory bus. I ran Unigine Heaven 3.0 at 1920x1080 with tessellation on extreme, 16x anisotropic filtering, and 8x AA (most probably MSAA). The average was 38.1 fps. Then I went to the NVIDIA Control Panel, set transparency AA to 8x SSAA, and ran Heaven again with its own AA turned off.
To my astonishment the average rose to 63.8 fps (a 25.7 fps gain), while the image quality was better than ever. It was hard to believe, so I searched the net about this and came across this AnandTech page -
http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/12
After going through that, it seemed clear to me that the 1344 CUDA cores might be making the difference, and that SSAA depends more on shader power than on memory bandwidth.
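Just to put my two Heaven runs in perspective, here is a quick sketch that works out the absolute and relative gain (the fps values are from my runs above; the variable names are just for illustration):

```python
# Average fps from my Unigine Heaven 3.0 runs on the GTX 660 Ti
msaa_fps = 38.1  # in-engine 8x AA (most probably MSAA)
ssaa_fps = 63.8  # 8x transparency SSAA via NVIDIA Control Panel, in-engine AA off

gain = ssaa_fps - msaa_fps           # absolute gain in fps
pct = gain / msaa_fps * 100          # relative gain in percent
print(f"Gain: {gain:.1f} fps ({pct:.1f}%)")
```

That is roughly a two-thirds improvement just from moving the AA work off the memory bus and onto the shaders, which fits the shader-bound explanation.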
But this option was not available in the control panel for the Batman: Arkham City demo, so I switched to 16xQ CSAA in place of FXAA to reduce the blur that FXAA causes.
Considering all this, is it better for me to apply AA from the NVIDIA Control Panel and keep the in-game AA settings off?
Also, how about a 16xQ CSAA + 2x SSAA combo? And lastly, will I even need post-processing AA with these AA settings enabled?