The Myths Of Graphics Card Performance: Debunked, Part 1

More Graphics Memory Measurements

IO Interactive's Glacier 2 engine, which powers Hitman: Absolution, is memory-hungry, second only (in our tests) to Creative Assembly's Warscape engine (Total War: Rome II) when the highest-quality presets are taken into account.

In Hitman: Absolution, a 1 GB card is not sufficient for playing at the game’s Ultra Quality level at 1080p. A 2 GB card does allow you to set 4xAA at 1080p, or to play without MSAA at 2160p.

To enable 8xMSAA at 1080p you need a 3 GB card, and nothing short of a 6 GB Titan supports 8xMSAA at 2160p.
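
Those jumps aren't surprising if you do the back-of-the-envelope math on the render targets alone; multisampling stores several color and depth samples per pixel. Here's a minimal sketch of that arithmetic (my own illustration, assuming 32-bit color and 32-bit depth/stencil, and ignoring textures, driver overhead, and any compression the hardware applies):

```cpp
#include <cstdint>
#include <cstdio>

// Rough estimate of the raw color + depth footprint of a multisampled render
// target. Illustrative only; real allocations include padding, compression
// metadata, and resolve surfaces.
uint64_t MsaaTargetBytes(uint32_t width, uint32_t height, uint32_t samples,
                         uint32_t bytesPerColorSample = 4,  // e.g. RGBA8
                         uint32_t bytesPerDepthSample = 4)  // e.g. D24S8
{
    return static_cast<uint64_t>(width) * height * samples *
           (bytesPerColorSample + bytesPerDepthSample);
}

int main()
{
    // 3840x2160 with 8xMSAA: roughly 506 MiB for color + depth alone,
    // before a single texture is loaded.
    std::printf("%.0f MiB\n",
                MsaaTargetBytes(3840, 2160, 8) / (1024.0 * 1024.0));
    return 0;
}
```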

Once again, enabling FXAA uses no additional memory.

Note: Unigine’s latest benchmark, Valley 1.0, does not support MLAA/FXAA directly. Thus, the results you see represent memory usage when MLAA/FXAA is force-enabled in the Catalyst Control Center or Nvidia Control Panel.

The data shows that Valley runs fine on a 2 GB card at 1080p (at least as far as memory use goes). You can even use a 1 GB card with 4xMSAA enabled, which is not the case for most games. At 2160p, however, a 2 GB card only runs the benchmark properly so long as you leave MSAA off or stick to a post-processed AA mode; the 2 GB ceiling gets hit once 4xMSAA is turned on.

Ultra HD with 8xMSAA enabled gobbles up over 3 GB of graphics memory, which means this benchmark will only run properly at that preset using Nvidia's GeForce GTX Titan or one of AMD's 4 GB Hawaii-based boards.

Total War: Rome II uses an updated Warscape engine from Creative Assembly. It doesn't support SLI at the moment (CrossFire does work, however). It also doesn't support any form of MSAA. The only form of anti-aliasing that works is AMD's proprietary MLAA, which is a post-processing technique like SMAA and FXAA.

One notable feature of this engine is its ability to auto-downgrade image quality based on available video memory. That's a good way to keep the game playable with minimal end-user involvement. But a lack of SLI support cripples the title on Nvidia cards at 3840x2160. At least for now, you'll want to play on an AMD board if 4K is your resolution of choice.
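
For a sense of how a scheme like that might work, here's a minimal sketch (my own, not Creative Assembly's code) that reads the primary adapter's reported dedicated video memory through DXGI and maps it to a made-up quality tier. The thresholds are invented for illustration, and a real engine would also track how much of that memory it has already committed:

```cpp
#include <dxgi.h>
#include <cstdint>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    // Ask DXGI about the primary adapter's dedicated video memory.
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory)))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))  // adapter 0 = primary
        return 1;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    const uint64_t vramMiB = desc.DedicatedVideoMemory / (1024ull * 1024ull);

    // Map reported VRAM to a hypothetical preset (thresholds are arbitrary).
    const char* preset = (vramMiB >= 3072) ? "Extreme + MSAA"
                       : (vramMiB >= 2048) ? "Extreme"
                       : (vramMiB >= 1024) ? "High"
                                           : "Medium";
    std::printf("%llu MiB of VRAM -> %s preset\n",
                (unsigned long long)vramMiB, preset);

    adapter->Release();
    factory->Release();
    return 0;
}
```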

With MLAA disabled, Total War: Rome II’s built-in “forest” benchmark at the Extreme preset uses 1848 MB of graphics memory at 2160p; enable MLAA and the GeForce GTX 690’s 2 GB limit is exceeded. At 1920x1080, memory use sits in the 1400 MB range.

Note the surprise of running a supposedly AMD-only technology (MLAA) on Nvidia hardware. Since both FXAA and MLAA are post-processing techniques, there is no technical reason they can't run on either vendor's GPUs. Either Creative Assembly is quietly switching to FXAA behind the scenes (despite what the configuration file says), or AMD's marketing department hasn't picked up on this.

You need at least a 2 GB card to play Total War: Rome II at its Extreme quality preset at 1080p, and likely a CrossFire array with 3 GB+ to play smoothly at 2160p. If you only have a 1 GB card, the game might still be playable at 1080p, but you'll have to make some quality compromises.

What happens when graphics memory is completely consumed? The short answer is that graphics data starts getting swapped to system memory over the PCI Express bus. Practically, this means performance slows dramatically, particularly when textures are being loaded. You don't want this to happen. It'll make any game unplayable due to massive stuttering.

So, how much graphics memory do I need?

If you own a 1 GB card and a 1080p display, there's probably no need to upgrade right this very moment. A 2 GB card would let you turn on more demanding AA settings in most games though, so consider that a minimum benchmark if you're planning a new purchase and want to enjoy the latest titles at 1920x1080.

As you scale up to 1440p, 1600p, 2160p or multi-monitor configurations, start thinking beyond 2 GB if you also want to use MSAA. Three gigabytes becomes a better target (or multiple 3 GB+ cards in SLI/CrossFire).

Of course, as I mentioned, balance is critical across the board. An underpowered GPU outfitted with 4 GB of GDDR5 (rather than 2 GB) isn't automatically going to deliver playable performance at high resolutions just because it has the right amount of memory. That's why, when we review graphics cards, we test multiple games, resolutions, and detail settings; we have to flesh out where a card bottlenecks before we can make smart recommendations.

Comments
  • manwell999
    The info on V-Sync causing frame rate halving is out of date by about a decade. With multithreading the game can work on the next frame while the previous frame is waiting for V-Sync. Just look at BF3 with V-Sync on you get a continous range of FPS under 60 not just integer multiples. DirectX doesn't support triple buffering.
    -11
  • ingtar33
    awesome article, looking forward to the next half.
    5
  • blackmagnum
    Myth #123: Gamers are lonely boys in Mother's dark basement or attic...
    27
  • AlexSmith96
    Great Article! I love you guys for coming up with such a nice idea.
    4
  • hansrotec
    With overclocking, are you going to cover water cooling? It would seem disingenuous to dismiss overclocking based on a generation of cards designed to boost to a given speed only if there is headroom, and not include water cooling, which reduces noise and temperature. My 7970 (pre-GHz edition) is a whole different card water cooled vs. air cooled: 1150 MHz without having to mess with the voltage, with temps around 50C without the fans or pump ever ramping up, whereas on air that would be in the upper 70s to lower 80s and really loud. On top of that, tweaking memory incorrectly can lower frame rate.
    2
  • hansrotec
    I thought my last comment might have seemed too negative, and I did not mean it in that light. I did enjoy the read, and look forward to more!
    6
  • noobzilla771
    Nice article! I would like to know more about overclocking, specifically core clock and memory clock ratio. Does it matter to keep a certain ratio between the two or can I overclock either as much as I want? Thanks!
    -1
  • chimera201
    I can never win over input latency no matter what hardware I buy because of my shitty ISP
    5
  • immanuel_aj
    I'd just like to mention that the dB(A) scale attempts to correct for perceived human hearing. While it's true that 20 dB represents 10 times the sound intensity of 10 dB, because of the way our ears work it only seems about twice as loud. At least, that's the way the A-weighting is supposed to work. Apparently there are a few kinks...
    -1
  • FunSurfer
    On Page 3: "In the image below" should be "In the image above"
    0
  • Formata
    "Performance Envelope" = GeniusNice work Filippo
    -1
  • beetlejuicegr
    I just want to mention that dB is one thing; the health of the GPU over time is another. In many cases I have seen graphics cards going up to 90C before the default ATI/Nvidia driver starts to throttle them down. I prefer a 50-70C scenario.
    -1
  • cats_Paw
    Awesometacular article. Not only is it a new standard for GPU performance, but the Human Benchmark and audio test were really fun! I'm normally very critical of Tom's articles because many times they feel a bit weak, but this one? 10/10
    16
  • ubercake
    What's up with Precision X? It seems like they would update it every couple of months, and now there hasn't been an update since last June or July. Is EVGA getting out of the utility software business?
    0
  • kzaske
    It's been a long time since Tom's Hardware had such a good article. Very informative and easy to read. Thank you!
    8
  • ddpruitt
    Very good article, even though there are some technical errors. I look forward to seeing the second half! I would also be interested in seeing some detailed comparisons of the same cards with different amounts and types of VRAM, and of the impact of case type on overall performance.
    -1
  • Jaroslav Jandek
    Quote:
    The info on V-Sync causing frame rate halving is out of date by about a decade. With multithreading the game can work on the next frame while the previous frame is waiting for V-Sync. Just look at BF3 with V-Sync on you get a continous range of FPS under 60 not just integer multiples. DirectX doesn't support triple buffering.
    The behavior of V-Sync is implementation-specific (GPU drivers/engine). By using render ahead, swap chains, Adaptive V-Sync, etc., you can avoid frame halving.

    DirectX DOES support TB by using DXGI_SWAP_CHAIN_DESC.BufferCount = 3; (or D3DPRESENT_PARAMETERS.BackBufferCount = 2; for DX9). It actually supports more than triple buffering - Direct3D 9Ex (Vista+'s WDDM) supports 30 buffers.
    12
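
For reference, here is what that suggestion looks like in a minimal Direct3D 11 sketch (my own code, not the commenter's): a swap chain created with BufferCount set to 3, with the window handle assumed to exist and error handling omitted.

```cpp
#include <windows.h>
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create a device and a three-buffer swap chain for an existing window.
IDXGISwapChain* CreateTripleBufferedSwapChain(HWND hwnd, ID3D11Device** device,
                                              ID3D11DeviceContext** context)
{
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount = 3;                                 // triple buffering, as suggested above
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // 32-bit color
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hwnd;
    desc.SampleDesc.Count = 1;                            // no MSAA on the back buffer
    desc.Windowed = TRUE;

    IDXGISwapChain* swapChain = nullptr;
    D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                  nullptr, 0, D3D11_SDK_VERSION, &desc,
                                  &swapChain, device, nullptr, context);
    return swapChain;
}
```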
  • Adroid
    I would love to see a Tom's article debunking the 2 GB vs. 4 GB graphics card race. For instance, people spam the Tom's forum daily giving advice to buy the 4 GB GTX 770 over the 2 GB. Truth is, the 4 GB costs $50 more and offers NO benefit over the 2 GB. Even worse, I see people buying/suggesting the 4 GB 760 over a 2 GB 770 (which runs only $30 more and is worth every penny). I am also curious about the 4 GB 770 SLI scenario. From everything I have seen, even in SLI the 4 GB offers no real-world benefit (with the exception of MAYBE a few frames per second more in three-monitor scenarios, but the rates are unplayable regardless, so the gain is negligible). The other myth is that the 4 GB 770 is more "future proof". Give me a break. GPU and future proof do not belong in the same sentence. Further, if they were going to be "future proof" they would be "now proof". There are games plenty demanding enough to show the advantage of 2 GB vs. 4 GB - and they simply don't. It's tiring seeing people give shoddy advice all over the net. I wish a reputable website (Tom's) would settle it once and for all. In my opinion, the extra 2 GB of RAM isn't going to make a tangible difference unless the GPU architecture changes...
    8
  • ubercake
    DisplayLag.com lists 120 Hz and 240 Hz HDTVs amongst the monitors, but the maximum speed of the HDTVs' inputs equates to 60 fps? Or am I missing something? If I buy a 240 Hz refresh TV, that's output: it processes the 60 Hz signal and transforms it into a 240 Hz output (usually through some form of frame duplication) to minimize motion blur. Does the DisplayLag.com site mentioned in the article compare apples to oranges by listing HDTVs alongside monitors as if they operate the same way, or am I way off here?
    0