Revisiting Graphics Card Myths
Modern graphics cards are complex beasts. No wonder myths abound when it comes to their performance. This is the second article in a series seeking to debunk a few of those myths. You can find the first part in The Myths Of Graphics Card Performance: Debunked, Part 1.
In Part I, we…
- …introduced the concept of performance envelopes for graphics cards and illustrated why they matter.
- …explained how the (arguably complex) V-sync technology works, and discussed when to and when not to enable it.
- …looked at some surprising facts about how much graphics memory Windows 8.1 (and Windows 7 with Aero enabled) consumes.
- …talked about reaction times, input lag, the variables that affect input lag, and when input lag matters.
- …looked, in depth, at graphics memory utilization and requirements, and then provided the information to decide how much you need.
- …explained how modern graphics cards handle thermal energy and talked about the "equal-noise" performance of a few reference boards.
- …illustrated how overclocking sometimes doesn't help when cards are already operating in their thermal throttling range.
It was a dense and technical read; so dense and technical, in fact, that we split it into two parts. Today’s follow-up covers these additional topics:
- We look into PCI Express and explore how many lanes of PCIe connectivity are required for maximum performance from a modern video card.
- We explain why Nvidia’s Maxwell architecture does just fine with lower memory bandwidth, experimenting with a little-known API function that measures graphics memory bandwidth and PCIe bus utilization.
- We tackle display-related questions: Is a bigger display better? What about HDTVs? And what about different types of antialiasing?
- We look at different display connector technologies: DVI, HDMI and DisplayPort, and what each standard can do.
- We talk about performance engineering and how to think about value-for-the-money in hardware.
- We wrap up with what we learned, what we heard from you, and what's coming next.
You Gave Us Some Great Ideas In Regard To Part I Topics
A lot of enthusiasts commented on the 40 dB(A) test in our own forums, on reddit and elsewhere around the Web. Some of you really appreciated it. Some thought that 40 dB(A) was more in the realm of quiet computing than performance computing, and would have appreciated seeing the reference point set higher. Almost all of you wanted to see non-reference Radeon cards (with their aftermarket coolers) added to the round-up.
We heard you. We're in talks with AMD and will invite OEMs to submit Radeon cards for a generalized round-up at a given noise level, and we'll likely test at both 40 and 50 dB(A) (that latter test point, as a reminder, is perceived as roughly twice as loud as 40 dB[A]). We also set up a poll to gauge the "right" reference level, so feel free to weigh in directly!
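If you're wondering where that "twice as loud" figure comes from, a common psychoacoustic rule of thumb (an approximation, not an exact law) holds that perceived loudness roughly doubles for every 10 dB increase in sound pressure level. The short sketch below is purely our own illustration of that rule, not part of our test methodology:

```python
# Rule-of-thumb psychoacoustic approximation: perceived loudness roughly
# doubles for every +10 dB increase in sound pressure level.
def perceived_loudness_ratio(delta_db: float) -> float:
    """Approximate how many times louder a sound feels after a delta_db change."""
    return 2 ** (delta_db / 10.0)

# 50 dB(A) vs. 40 dB(A): about 2x as loud to the ear.
print(perceived_loudness_ratio(50 - 40))  # ~2.0
```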
The reference cooler on most high-end Nvidia cards is already pretty good. Hence, the incremental benefit of a non-reference GeForce card is smaller than it is with AMD, whose high-end offerings ship with less sophisticated reference coolers. That’s why we’re focusing on AMD here, though we might throw a few Nvidia cards into the mix.
We also saw that many of you appreciated the try-for-yourself audio/visual test links (even if unscientific). We leveraged that concept in What Does It Take To Turn The PC Into A Hi-Fi Audio Platform?, which also contains a set of tests that we hope you found interesting.
Valid points were raised about the importance of input lag to "twitch" games and virtual reality applications. It certainly does matter; it's surprisingly easy to get nauseated wearing a laggy visor!
We received a lot of questions on the 2 versus 4GB memory discussion, in particular about its relevance to Nvidia’s GeForce GTX 760 and 770 cards. As the Steam hardware survey we posted in Part 1 showed, most gamers still have 1GB of graphics memory or less, and 2GB adoption is still in its early stages. While 4GB may benefit some uncommon scenarios today (for example, two or three 770s in SLI driving a 4K display, or games with high-resolution texture packs/mods at 1440p), it is unlikely that the industry will come to rely on 4GB as a graphics memory baseline any time soon. Also, keep in mind that the 8GB of memory in the Xbox One and PS4 is shared: only a portion of it is usable as traditional graphics memory; the rest is claimed by the OS and by application/game code and data. With a 2GB card, you may need to trade MSAA for FXAA/MLAA at the highest resolutions. You’ll have to decide whether that compromise is worth making.
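If you want to see how close your own card gets to its memory ceiling, logging actual VRAM usage during a gaming session is straightforward. The sketch below is one hypothetical way to do it on an Nvidia card, assuming the nvidia-smi utility that ships with Nvidia's drivers is on your PATH; AMD owners can read the same counters from tools such as GPU-Z or MSI Afterburner.

```python
# Hypothetical sketch: sample current graphics memory usage on an Nvidia card.
# Assumes the nvidia-smi utility (installed with Nvidia's drivers) is on the PATH.
import subprocess

def vram_usage_mib() -> tuple[int, int]:
    """Return (used, total) graphics memory in MiB for the first GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    first_gpu = out.strip().splitlines()[0]  # one line per GPU
    used, total = (int(v.strip()) for v in first_gpu.split(","))
    return used, total

if __name__ == "__main__":
    used, total = vram_usage_mib()
    print(f"{used} MiB of {total} MiB graphics memory in use")
```

Run it (or simply nvidia-smi itself) while your game is loaded at your preferred settings, and the peak figure you see gives a rough idea of whether 2GB leaves you any headroom.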
We also heard some criticism that we felt was off-target, specifically regarding overclocking a card while maintaining a set fan speed (and thus noise level). Maybe the point we were making wasn’t clear enough. What we wanted to convey was that performance at a given noise level tends to be relatively unaffected by overclocking once thermal throttling kicks in (more power and heat simply translate to more throttling). And, using the envelope concept, we noted that you can frequently obtain higher overclocked performance at the cost of increased noise levels; that tradeoff is yours to make. So if we led some of you to misinterpret our intentions there, our apologies. Just to be clear, we definitely don’t recommend overclocking your card while maintaining a fixed fan speed.