The Myths Of Graphics Card Performance: Debunked, Part 2

Revisiting Graphics Card Myths

Modern graphics cards are complex beasts. No wonder myths abound when it comes to their performance. This is the second article in a series that seeks to debunk a few of those myths. You can find the first part in The Myths Of Graphics Card Performance: Debunked, Part 1.

In Part I, we…

  • …introduced the concept of performance envelopes for graphics cards and illustrated why they matter.
  • …explained how the (arguably complex) V-sync technology works, and discussed when to and when not to enable it.
  • …looked at some surprising facts about how much graphics memory Windows 8.1 (and Windows 7 with Aero enabled) consumes.
  • …talked about reaction times, input lag, the variables that affect input lag, and when input lag matters.
  • …looked, in depth, at graphics memory utilization and requirements, and then provided the information to decide how much you need.
  • …explained how modern graphics cards handle thermal energy and talked about "equal noise" performance of a few reference boards.
  • …illustrated how overclocking sometimes doesn't help when cards are already operating in their thermal throttling range.

It was a dense and technical read; so dense and technical, in fact, that we split it into two parts. Today’s follow-up covers these additional topics:

  • We look into PCI Express and explore how many lanes of PCIe connectivity are required for maximum performance from a modern video card.
  • We explain why Nvidia’s Maxwell architecture does just fine with lower memory bandwidth, experimenting with a little-known API function that measures graphics memory bandwidth and PCIe bus utilization (a sketch of how such counters can be read follows this list).
  • We tackle display-related questions: Is a bigger display better? What about HDTVs? And what about different types of antialiasing?
  • We look at different display connector technologies: DVI, HDMI and DisplayPort, and what each standard can do.
  • We talk about performance engineering and how to think about value-for-the-money in hardware.
  • We wrap up what we learned and what we heard, and divulge what's next.
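
Speaking of that measurement, here is a minimal sketch of how such memory and PCIe utilization counters can be polled from user space. It uses Nvidia's NVML library through the pynvml Python bindings; that choice is our assumption for illustration purposes, not necessarily the little-known API behind the article's own numbers:

```python
# Minimal sketch: poll GPU memory-controller load and PCIe throughput.
# Uses Nvidia's NVML library via the pynvml Python bindings; this is one
# documented way to read such counters, not necessarily the API used for
# the article's measurements.
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    for _ in range(10):  # one sample per second for ten seconds
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        rx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)
        tx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
        # NVML reports PCIe throughput in KB/s; convert to MB/s for display.
        print(f"core {util.gpu:3d}%  mem-controller {util.memory:3d}%  "
              f"PCIe rx {rx / 1024:7.1f} MB/s  tx {tx / 1024:7.1f} MB/s")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```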

You Gave Us Some Great Ideas Regarding Part I Topics

A lot of enthusiasts commented on the 40 dB(A) test in our own forums, on Reddit and elsewhere around the Web. Some of you really appreciated it. Some thought that 40 dB(A) was more in the realm of quiet computing than performance computing, and would have appreciated a higher reference point. Almost all of you wanted to see non-reference Radeon cards (with their aftermarket coolers) added to the round-up.

We heard you. We're in talks with AMD and will invite OEMs to submit Radeon cards for a generalized round-up at a given noise level, and we'll likely test at both 40 and 50 dB(A). That latter test point, as a reminder, is perceived as roughly twice as loud as 40 dB(A); the quick calculation below shows where that comes from. We also set up a poll to gather your views on the "right" reference level, so feel free to weigh in directly!
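
For reference, the "twice as loud" figure follows from the common psychoacoustic rule of thumb that perceived loudness roughly doubles for every 10 dB increase in sound pressure level. A quick sketch of the arithmetic:

```python
# Rule of thumb: perceived loudness roughly doubles per +10 dB.
def loudness_ratio(db_a: float, db_b: float) -> float:
    """Approximate perceived-loudness ratio of level B relative to level A."""
    return 2 ** ((db_b - db_a) / 10.0)

print(loudness_ratio(40, 50))  # 2.0  -> 50 dB(A) sounds about twice as loud as 40 dB(A)
print(loudness_ratio(40, 45))  # ~1.41 -> noticeably, but not dramatically, louder
```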

The reference cooler on most high-end Nvidia cards is already pretty good, so the incremental benefit of moving to a non-reference GeForce card is smaller than it is with AMD, whose high-end offerings ship with less sophisticated reference coolers. That’s why we’re focusing on AMD here, though we might throw a few Nvidia cards into the mix.

We also saw that many of you appreciated the try-for-yourself audio/visual test links (even if they're unscientific). We leveraged that concept in What Does It Take To Turn The PC Into A Hi-Fi Audio Platform?, which contains a set of tests we hope you'll find interesting.

Valid points were raised about the importance of input lag to "twitch games" and virtual reality applications. It certainly does matter; it's surprisingly easy to get nauseous with a laggy visor!

We received a lot of questions about the 2GB-versus-4GB memory discussion, in particular its relevance to Nvidia’s GeForce GTX 760 and 770 cards. As you saw from the Steam hardware survey we posted in Part I, most gamers still have 1GB of graphics memory or less, and 2GB adoption is still in its early stages. While 4GB may benefit some uncommon scenarios today (for example, two or three 770s in SLI driving a 4K display, or games with high-resolution texture packs/mods at 1440p), it is unlikely that the industry will come to rely on 4GB as a baseline any time soon. Also, keep in mind that the 8GB of memory in the Xbox One and PS4 is shared; only a portion of it is usable as traditional graphics memory, with the rest reserved for the OS and for application/game code and data. With 2GB cards, you may need to trade MSAA for FXAA/MLAA at the highest resolutions; the back-of-the-envelope estimate below shows why. You’ll have to decide whether that compromise is worth making.
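
To see where the MSAA pressure comes from, consider this rough estimate of render-target memory. The buffer layout is a simplifying assumption (a single RGBA8 color buffer stored per-sample under MSAA, plus a matching depth/stencil buffer); real engines allocate more targets than this, but the scaling with resolution and sample count is the point. Post-process filters like FXAA and MLAA operate on the already-resolved image, which is why they don't multiply this cost:

```python
# Back-of-the-envelope render-target memory estimate. Assumes one RGBA8
# color buffer (stored per-sample under MSAA) plus a 24/8 depth-stencil
# buffer: 4 + 4 = 8 bytes per sample. Real engines allocate more.
def render_target_mb(width: int, height: int, msaa_samples: int) -> float:
    bytes_per_sample = 4 + 4
    return width * height * msaa_samples * bytes_per_sample / (1024 ** 2)

for name, (width, height) in [("1080p", (1920, 1080)),
                              ("1440p", (2560, 1440)),
                              ("4K", (3840, 2160))]:
    for samples in (1, 4, 8):  # no AA, 4xMSAA, 8xMSAA
        print(f"{name} at {samples}x: {render_target_mb(width, height, samples):6.1f} MB")
```

At 4K with 8xMSAA, that lone color/depth pair already exceeds 500MB before a single texture is loaded, which is why 2GB cards run out of headroom there first.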

We also heard some criticism that we felt was off-target, specifically regarding overclocking a card while maintaining a set fan speed (and thus noise level). Maybe the point we were making wasn’t clear enough. What we wanted to convey was that performance at a given noise level tends to be relatively unaffected by overclocking once thermal throttling kicks in (more power and heat simply translate to more throttling). And, using the envelope concept, we mentioned that you can frequently obtain higher overclocked performance at the cost of increased noise, a tradeoff that is yours to make. So if we led some of you to misinterpret our intentions there, our apologies. Just to be clear, we definitely don’t recommend overclocking your card while maintaining a fixed fan speed; the toy model below illustrates why that combination is a dead end.
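
To make the throttling behavior concrete, here is a deliberately simplified toy model. It is not Nvidia's GPU Boost or AMD's PowerTune algorithm, and the power-per-clock slope and cooling capacity are invented numbers; it only illustrates how a fixed fan speed caps the clock a card can sustain:

```python
# Toy model: at a fixed fan speed, the cooler removes at most a fixed
# amount of heat, so the sustained clock is capped no matter how high
# the requested overclock is. All numbers are invented for illustration.
TEMP_LIMITED_COOLING_W = 180.0   # assumed heat removal at this fan speed
WATTS_PER_MHZ = 0.15             # assumed (linearized) power/clock slope

def steady_state_clock(requested_mhz: float) -> float:
    """Clock the card settles at once temperatures stabilize."""
    power = requested_mhz * WATTS_PER_MHZ
    if power <= TEMP_LIMITED_COOLING_W:
        return requested_mhz                    # under the limit: full speed
    # Over the limit: the card throttles until heat output matches cooling.
    return TEMP_LIMITED_COOLING_W / WATTS_PER_MHZ

print(steady_state_clock(1100))  # 1100.0 MHz: below the thermal ceiling
print(steady_state_clock(1300))  # 1200.0 MHz: throttled back
print(steady_state_clock(1400))  # 1200.0 MHz: the extra overclock buys nothing
```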

  • iam2thecrowe
    I've always had a beef with GPU RAM utilization: how it's measured and what driver tricks go on in the background. For example, my old GTX 660s never went above 1.5GB usage; searching forums suggests a driver trick, as the last 512MB is half the speed due to the card's weird memory layout. Upon getting my 7970, memory usage with identical settings, loading from the same save game, shot up to near 2GB. I found the 7970 to be smoother in games with high VRAM usage compared to the dual 660s, despite frame rates measured by Fraps being a little lower. I would love to one day see an article, "the be-all and end-all of GPU memory," covering everything.

    Another thing: I'd like to see a similar PCIe bandwidth test across a variety of games, including some with PhysX. I don't think Unigine would throw much across the bus unless the card is running out of VRAM and has to swap to system memory, which is where I think higher bus/memory speeds would be an advantage.
  • blackmagnum
    Suggestion for Myths Part 3: Nvidia offers superior graphics drivers, while AMD (ATI) gives better image quality.
  • chimera201
    About HDTV refresh rates:
    http://www.rtings.com/info/fake-refresh-rates-samsung-clear-motion-rate-vs-sony-motionflow-vs-lg-trumotion
  • photonboy
    Implying that an i7-4770K is little better than an i7-950 is just dead wrong for quite a number of games.

    There are plenty of real-world gaming benchmarks that prove this so I'm surprised you made such a glaring mistake. Using a synthetic benchmark is not a good idea either.

    Frankly, I found the article very technically heavy where it wasn't necessary, like the PCIe section, while it glossed over other things very quickly. I know a lot about computers, so maybe I'm not the guy to ask, but it felt to me like a non-PC guy wouldn't get the simplified and straightforward information he wanted.
  • eldragon0
    If you're going to label your article "graphics performance myths," please don't limit it to just gaming. It's a well-made and researched article, but as photonboy touched on, the 4770K and the 950 are about as similar as night and day. Try using that comparison for graphics development or design, and you'll get laughed off the site. I'd be willing to say the 4770K's rendering capabilities are multiple times faster at those clock speeds.
  • SteelCity1981
    photonboy, this article isn't for non-PC people, because non-PC people wouldn't care about detailed stuff like this.
  • renz496
    14561510 said:
    Suggestion for Myths Part 3: Nvidia offers superior graphics drivers

    Even if Tom's Hardware really did its own test, it wouldn't be very useful, because the test setup can't represent the millions of different PC configurations out there. You can see one set of drivers working just fine on one setup and totally broken on another, even with the same GPU being used. And even if TH presented its findings, you would most likely see people challenge the results if they didn't reflect their own experience. In the end, the thread would just turn into a flame-war mess.

    14561510 said:
    Suggestion for Myths Part 3: while AMD (ATI) gives better image quality.

    This has been discussed a lot on other tech forum sites, and the general consensus is that there isn't actually much difference between the two. I've only heard that in-game colors on AMD cards can be a bit more saturated than on Nvidia's, which some people take as "better image quality."
  • ubercake
    Just something of note... You don't necessarily need Ivy Bridge-E to get PCIe 3.0 bandwidth. Sandy Bridge-E people with certain motherboards can run PCIe 3.0 with Nvidia cards (just like you can with AMD cards). I've been running the Nvidia X79 patch and getting PCIe gen 3 on my P9X79 Pro with a 3930K and GTX 980.
  • dovah-chan
    There is one AM3+ board with PCI-E 3.0. That would be the Sabertooth Rev. 2.
  • ubercake
    Another Tom's Hardware article in which the "ASUS ROG Swift PG..." link, listed at an unbelievable price, takes you to the PB278Q page.

    A little misleading.