GPU OCing - FPS with AA depends on memory, not core

Hi all. I posted a thread a few days ago asking why the higher the anti-aliasing I used in FurMark, the less difference my OCs made. For reference, my results were as follows (average FPS; clocks are core/memory in MHz):
FurMark settings: 1280x1024, windowed, 120,000 ms benchmark
Stock: 725/1000
CCC: 775/1125
OC: 985/1200 (with Afterburner, at 1.237 V)

0xAA
stock: 84
CCC: 90
OC: 110

4xAA
stock: 43
CCC: 48
OC: 50

8xAA
stock: 22
CCC: 24
OC: 25
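
To put numbers on how the gains shrink, here is a quick calculation of the OC-over-stock improvement at each AA level, using only the averages posted above:

```python
# Percent FPS gain of the full OC (985/1200) over stock (725/1000)
# at each AA level, taken from the averages posted above.
results = {"0xAA": (84, 110), "4xAA": (43, 50), "8xAA": (22, 25)}

for aa, (stock_fps, oc_fps) in results.items():
    gain = 100 * (oc_fps - stock_fps) / stock_fps
    print(f"{aa}: +{gain:.1f}%")
# 0xAA: +31.0%, 4xAA: +16.3%, 8xAA: +13.6%
```

A ~260 MHz core bump buys 31% at 0xAA but under 14% at 8xAA, which is what prompted the memory testing below.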

So obviously something was up. I wanted to figure out what, and based on the helpful answers I received, I decided to investigate memory speed.

My test setup was the same as before, at 4xAA; however, I first ran it increasing only the core speed, then, with the core at stock, increasing only the memory. A definite trend showed up right away, so I added a few more tests just to check.
FPS achieved with different Memory and Core speeds
memandcorespeedvsfps.png


This is the data chart:
memandcorespeedvsfpschart.png


So, it's pretty obvious, right? Keeping the core speed at stock and increasing the memory speed is the ONLY thing that improves FPS at 4xAA. Even with slightly higher memory, adding core speed did not affect the average FPS. This makes sense if AA is bandwidth-limited: each extra AA sample adds framebuffer traffic, so memory bandwidth, rather than core speed, becomes the bottleneck. I did jot down min and max FPS, though, and there is one interesting bit: the core did increase the minimum FPS by anywhere from 1 to 5, but it did not affect the max or average. The other thing to note is that at 0xAA, the stock settings could achieve an 84 FPS average. I did not do as much testing there, but I can tell you that at 0xAA, memory speed does not affect your FPS.
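
As a quick sanity check on how close the memory scaling is to linear at 4xAA, compare the stock and CCC data points from the first post (attributing the gain to memory, since the results above show core speed barely moved the average at 4xAA):

```python
# Stock: 1000 MHz memory, 43 FPS average at 4xAA
# CCC:   1125 MHz memory, 48 FPS average at 4xAA
# Core speed is treated as irrelevant here, per the 4xAA results above.
mem_gain = 100 * (1125 - 1000) / 1000  # memory clock increase, percent
fps_gain = 100 * (48 - 43) / 43        # average FPS increase, percent
print(f"memory +{mem_gain:.1f}% -> FPS +{fps_gain:.1f}%")
# memory +12.5% -> FPS +11.6%
```

A 12.5% memory bump yielding an 11.6% FPS bump is close to 1:1 scaling, which is what you would expect if memory bandwidth is the limiter.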

What does this mean?

In my opinion, this means that when you OC - or better yet, optimize your GPU for your system - you should first set FurMark to test at your preferred resolution and AA setting. Let's face it, 0xAA is pure crap. At 4xAA games look great, and personally I can't see much reason to go higher except simply because you can. That said, here are my thoughts: set 4xAA and your monitor's resolution, then increase the memory speed with the core at stock until satisfied, and boost the core last to find that final stable setting. It's unlikely you'll get the memory fast enough that it's no longer the bottleneck with AA - at a guess, it might have to be as high as 1400 MHz to reach the 80 FPS seen at 0xAA. So, boost that memory first, then boost the core to help smooth out the frame rate.
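
The memory-first, core-last order can be sketched as a simple loop. This is only an illustration of the procedure: set_clocks(), run_benchmark() and is_stable() are placeholder names, since in practice you change clocks by hand in Afterburner/CCC and run FurMark yourself.

```python
# Sketch of the suggested tuning order: raise memory at stock core until
# unstable or no FPS gain, then raise the core to firm up minimum FPS.
# set_clocks/run_benchmark/is_stable are hypothetical stand-ins for the
# manual Afterburner + FurMark workflow described above.
MEM_STEP, CORE_STEP = 25, 25  # MHz increments per attempt

def tune(stock_core, stock_mem, set_clocks, run_benchmark, is_stable):
    core, mem = stock_core, stock_mem
    best_fps = run_benchmark()
    # Step 1: memory first, core at stock.
    while True:
        set_clocks(core, mem + MEM_STEP)
        fps = run_benchmark()
        if not is_stable() or fps <= best_fps:
            set_clocks(core, mem)  # back off to the last good value
            break
        mem, best_fps = mem + MEM_STEP, fps
    # Step 2: core last, to help the minimum frame rates.
    while True:
        set_clocks(core + CORE_STEP, mem)
        if not is_stable():
            set_clocks(core, mem)  # back off to the last good value
            break
        core += CORE_STEP
    return core, mem
```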

Please discuss, as I am very interested in what others think about this.
 
Well, I'm not entirely sure what you mean. Anti-aliasing is just a process to smooth out the jagged edges of rendered objects. With it off you notice stair-stepped lines along object edges, but with it on everything looks much more pleasant. The degree of AA - 2x, 4x, 8x, etc. - shouldn't really have different requirements aside from being progressively more demanding on the system. So, no matter what level you use, the faster your GPU memory the better it will run. Just a quick example from http://en.wikipedia.org/wiki/Anti_aliasing:
No AA vs with AA
Aliased.png
Antialiased.png
 

Good question, although I think for gaming purposes there will be a minimum based on your resolution and game detail settings.