The Myths Of Graphics Card Performance: Debunked, Part 2

Vendor-Specific Technologies: Mantle, ShadowPlay, TXAA And G-Sync

Let us start by making a clear statement: we applaud both AMD and Nvidia for their groundbreaking work in pushing the envelope of what is possible on PC gaming platforms.

Low-Level APIs: AMD's Mantle

Mantle is designed to give developers more direct control of hardware, following in the distant footsteps of Glide. Some of you may be too young to understand why that comparison is important, but it is.

Glide was introduced by 3dfx to complement and closely mirror the graphics capability of its Voodoo Graphics card. OpenGL was a massive beast for 1990s hardware, and Glide contained a smaller subset of features that were easy to learn and implement. The API’s main downside was its specificity to 3dfx hardware, just as Mantle is currently specific to AMD hardware.

Eventually, DirectX and full OpenGL drivers matured, and a variety of additional hardware appeared (does anyone remember the Riva TNT?). These developments led to the demise of Glide as a mainstream API.

Mantle is an interesting gamble by AMD. With established ecosystems already built around OpenGL and DirectX, the need for a new low-level API is debatable, although AMD claims that developers are clamoring for it.

Mantle support is currently limited to a handful of titles. The SDK is still in beta and restricted to developers selected by AMD. And, as our own tests in AMD Mantle: A Graphics API Tested In Depth show, Mantle's primary benefit is reduced CPU overhead, so its most noticeable gains come from pairing a low-end CPU with a high-end GPU.
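To see why lower CPU overhead matters, consider a toy model of the difference between the two kinds of API. This is a conceptual sketch only; the function names, loop counts and workloads are hypothetical and have nothing to do with Mantle's real interface. In a high-level API, each draw call passes through driver-side validation and state translation on the CPU; a low-level API lets the application record draw commands into a buffer it owns and hand them to the driver in bulk.

```cpp
// Toy comparison of per-draw CPU cost: a "thick" validated submission path versus
// simply recording commands into an application-owned buffer. Hypothetical code,
// not Mantle or Direct3D; only the relative CPU cost per draw is the point.
#include <chrono>
#include <cstdio>
#include <vector>

struct DrawCall { int mesh; int material; };

// "Thick" path: stand-in for per-draw validation/translation work in the driver.
void submit_validated(const DrawCall& dc, volatile long long& sink) {
    for (int i = 0; i < 2000; ++i)
        sink = sink + dc.mesh * 31 + dc.material * 17 + i;
}

// "Thin" path: per-draw work is little more than appending a few words to a
// command buffer the application owns.
void record_into_command_buffer(const DrawCall& dc, std::vector<int>& cmdbuf) {
    cmdbuf.push_back(dc.mesh);
    cmdbuf.push_back(dc.material);
}

int main() {
    const int kDraws = 100000;
    std::vector<DrawCall> frame(kDraws, {1, 2});
    volatile long long sink = 0;
    std::vector<int> cmdbuf;
    cmdbuf.reserve(kDraws * 2);

    auto t0 = std::chrono::steady_clock::now();
    for (const auto& dc : frame) submit_validated(dc, sink);
    auto t1 = std::chrono::steady_clock::now();
    for (const auto& dc : frame) record_into_command_buffer(dc, cmdbuf);
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("thick API path: %lld ms for %d draws\n", (long long)ms(t0, t1), kDraws);
    std::printf("thin API path:  %lld ms for %d draws\n", (long long)ms(t1, t2), kDraws);
}
```

The GPU workload would be identical either way, which is why the benefit shows up mostly on CPU-limited systems paired with fast graphics cards.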

I believe that Mantle's success will ultimately hinge on two factors:

  1. Whether Mantle is easy enough to code for that DirectX/OpenGL ports aren't too cumbersome for developers
  2. Whether Mantle’s performance gains extend to enthusiast-class platforms

We took our time covering Mantle so that we could do it with real performance data, and the story linked above is illuminating if you haven't already read it.

Advanced Temporal Antialiasing: Nvidia's TXAA

I often point out that humanity put a man on the moon before figuring out it was a good idea to put wheels on suitcases; sometimes brilliantly simple ideas go overlooked for a very long time for no good reason. MLAA and FXAA, the Class B post-processing-based anti-aliasing techniques, are one such innovation.
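To make the idea concrete, here is a minimal sketch of the post-processing approach, assuming a plain RGB framebuffer in system memory. It is our own simplified illustration, not the actual MLAA or FXAA algorithm (those add edge-shape classification, sub-pixel offsets and smarter blending): find hard luminance edges in the finished frame and soften them.

```cpp
// Minimal post-process anti-aliasing sketch: detect hard luminance edges in the
// finished image and blend across them. Real MLAA/FXAA are far more sophisticated.
#include <algorithm>
#include <vector>

struct Pixel { float r, g, b; };

static float luma(const Pixel& p) { return 0.299f * p.r + 0.587f * p.g + 0.114f * p.b; }

std::vector<Pixel> postprocess_aa(const std::vector<Pixel>& img, int w, int h,
                                  float edge_threshold = 0.125f) {
    std::vector<Pixel> out = img;
    auto at = [&](int x, int y) -> const Pixel& {
        return img[std::clamp(y, 0, h - 1) * w + std::clamp(x, 0, w - 1)];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const Pixel& c = at(x, y);
            const Pixel& n = at(x, y - 1);
            const Pixel& s = at(x, y + 1);
            const Pixel& e = at(x + 1, y);
            const Pixel& we = at(x - 1, y);
            float lc = luma(c);
            float lmin = std::min({lc, luma(n), luma(s), luma(e), luma(we)});
            float lmax = std::max({lc, luma(n), luma(s), luma(e), luma(we)});
            if (lmax - lmin < edge_threshold) continue;  // not on a hard edge
            // On an edge: replace the pixel with a small neighborhood average.
            out[y * w + x] = { (c.r + n.r + s.r + e.r + we.r) / 5.0f,
                               (c.g + n.g + s.g + e.g + we.g) / 5.0f,
                               (c.b + n.b + s.b + e.b + we.b) / 5.0f };
        }
    }
    return out;
}
```

Because it operates on the final image, this kind of filter needs little extra memory or GPU time compared to multi-sampling, which is part of what made the idea so overdue.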

Another take on anti-aliasing, available only from Nvidia and only in a few game titles, goes a step further. It builds on the fact that some of the most annoying aliasing artifacts, referred to as "shimmering", happen because of movement across frames. By analyzing not a single frame but a sequence of them, it is possible to predict where those artifacts will appear and compensate accordingly.

Nvidia's TXAA is a variation of MSAA. The company says that "TXAA uses a contribution of samples, both inside and outside of the pixel, in conjunction with samples from prior frames". Hence you can expect its image quality to exceed even Class A anti-aliasing algorithms at the cost of even more memory and throughput (FPS).

Should they mature enough, we could define temporal multi-sampling anti-aliasing technologies as a new "Class A+". We'd also love to see an implementation of MLAA/FXAA that leverages the prior frame, in addition to the current one, in its post-processing calculation. We bet such additional information could be put to good use for improving image quality.
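As an illustration of the temporal idea only (not Nvidia's actual TXAA, whose internals aren't public in detail; the function and parameter names below are ours), frame-to-frame accumulation can be sketched as blending each pixel with its value from the previous frame, then clamping the result so stale history doesn't turn into ghosting.

```cpp
// Sketch of temporal accumulation: average each pixel with a history buffer so that
// frame-to-frame flicker ("shimmering") smooths out. One channel per pixel for brevity.
#include <algorithm>
#include <cstddef>
#include <vector>

void temporal_accumulate(const std::vector<float>& current,
                         std::vector<float>& history,
                         float blend = 0.1f) {            // weight given to the new frame
    for (std::size_t i = 0; i < current.size(); ++i) {
        // Exponential moving average across frames.
        float accumulated = history[i] + blend * (current[i] - history[i]);
        // Crude anti-ghosting: keep the result close to the current frame's value.
        history[i] = std::clamp(accumulated, current[i] - 0.2f, current[i] + 0.2f);
    }
}
```

A real renderer would first reproject the history buffer using per-pixel motion vectors and derive the clamp from the current frame's local neighborhood rather than a fixed constant; the history buffer then doubles as the image sent to the display.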

G-Sync And FreeSync: No More Compromising Between V-Sync On And Off

We covered Nvidia's G-Sync technology extensively in the already-mentioned G-Sync Technology Preview: Quite Literally A Game Changer, and we'd point you in that direction for a deep-dive if you want to learn more.

We also mentioned FreeSync, which was added to the DisplayPort 1.2a standard as the Adaptive-Sync amendment. AMD recently announced that it's collaborating with MStar, Novatek and Realtek on scalers able to drive the next generation of FreeSync-capable monitors. According to the company, its newest graphics cards already support dynamic frame rates in games, and the rest of the ecosystem should begin materializing in 2015.

Kudos to Nvidia for leading the way in innovation on this front, and to AMD for proposing a free, open standard for the benefit of gamers at every price point.
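The compromise these technologies remove is easy to see with a little arithmetic. In the sketch below, the panel limits and frame times are illustrative assumptions: with V-sync on a fixed 60 Hz panel, a frame that misses a refresh has to wait for the next one, while an adaptive-refresh panel simply refreshes when the frame is ready, as long as the interval stays within its supported range.

```cpp
// Worked example of frame presentation timing (all numbers are illustrative).
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;       // fixed 60 Hz panel: 16.7 ms per refresh
    const double min_interval_ms = 1000.0 / 144.0; // assumed fastest adaptive refresh
    const double max_interval_ms = 1000.0 / 30.0;  // assumed slowest adaptive refresh

    const double render_times_ms[] = {12.0, 18.0, 25.0, 16.0};
    double vsync_clock = 0.0;

    for (double render : render_times_ms) {
        // V-sync on, fixed refresh: the finished frame waits for the next refresh boundary.
        double finished = vsync_clock + render;
        double displayed = std::ceil(finished / refresh_ms) * refresh_ms;
        // Adaptive refresh: the panel waits for the frame, within its supported range.
        double adaptive = std::clamp(render, min_interval_ms, max_interval_ms);
        std::printf("render %5.1f ms -> v-sync frame time %5.1f ms, adaptive frame time %5.1f ms\n",
                    render, displayed - vsync_clock, adaptive);
        vsync_clock = displayed;
    }
}
```

An 18 ms frame therefore occupies a full 33 ms slot with V-sync on, but only 18 ms with adaptive refresh; with V-sync off it would appear immediately, at the cost of tearing. That stutter-or-tearing trade-off is exactly what G-Sync and FreeSync eliminate.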

Other Vendor-Specific Technologies Worth Mentioning

Below is a list of vendor-specific technologies that are applicable to specific cases, such as running more than one monitor or card, stereoscopic gaming, gameplay recording and so on. We’ll let you explore them through each vendor’s website.

  • GPU-Based Physics Calculations
  • Multi-Display Technologies
  • Gameplay Recording Technologies
  • Cooperative Rendering Technologies
  • Stereoscopic Gaming Technologies
  • Computing Libraries Support
  • Multi-Card Rendering Technologies

  • iam2thecrowe
    I've always had a beef with GPU RAM utilization, how it's measured, and what driver tricks go on in the background. For example, my old GTX 660s never went above 1.5 GB usage; searching forums suggests a driver trick, as the last 512 MB is half the speed due to the card's weird memory layout. Upon getting my 7970, with identical settings, memory usage loading from the same save game shot up to near 2 GB. I found the 7970 to be smoother in the games with high VRAM usage compared to the dual 660s, despite frame rates being a little lower as measured by Fraps. I would love one day to see an article, "the be-all and end-all of GPU memory", covering everything.

    Another thing: I'd like to see a similar PCIe bandwidth test across a variety of games, including some with PhysX. I don't think Unigine would throw much across the bus unless the card runs out of VRAM and has to swap to system memory, where I think the higher bus/memory speeds would be an advantage.
  • blackmagnum
    Suggestion for Myths Part 3: Nvidia offers superior graphics drivers, while AMD (ATI) gives better image quality.
  • chimera201
    About HDTV refresh rates:
    http://www.rtings.com/info/fake-refresh-rates-samsung-clear-motion-rate-vs-sony-motionflow-vs-lg-trumotion
  • photonboy
    Implying that an i7-4770K is little better than an i7-950 is just dead wrong for quite a number of games.

    There are plenty of real-world gaming benchmarks that prove this so I'm surprised you made such a glaring mistake. Using a synthetic benchmark is not a good idea either.

    Frankly, I found the article very technically heavy where it wasn't necessary, like the PCIe section, while it glossed over other things very quickly. I know a lot about computers, so maybe I'm not the guy to ask, but it felt to me like a non-PC guy wouldn't get the simplified and straightforward information he wanted.
  • eldragon0
    If you're going to label your article "graphics performance myths", please don't limit it to just gaming. It's a well-made and researched article, but as Photonboy touched on, the 4770K and the 950 are about as similar as night and day. Try using that comparison for graphical development or design and you'll get laughed off the site. I'd be willing to say its rendering capabilities are multiples faster at those clock speeds.
  • SteelCity1981
    Photonboy, this article isn't for non-PC people, because non-PC people wouldn't care about detailed stuff like this.
  • renz496
    14561510 said:
    Suggestion for Myths Part 3: Nvidia offers superior graphics drivers

    Even if Tom's Hardware really did its own test, it wouldn't be very useful, because their test setup can't represent the millions of different PC configurations out there. You can see one set of drivers working just fine on one setup and totally broken on another, even with the same GPU being used. And even if TH presented its findings, you'd most likely see people challenge the results if they didn't reflect their own experience. In the end the thread would just turn into a flame-war mess.

    14561510 said:
    Suggestion for Myths Part 3: while AMD (ATI) gives better image quality.

    This has been discussed a lot on other tech forum sites, and the general consensus is that there actually isn't much difference between the two. I've only heard that in-game colors on AMD cards can be a bit more saturated than on Nvidia, which some people take as 'better image quality'.
  • ubercake
    Just something of note... You don't necessarily need Ivy Bridge-E to get PCIe 3.0 bandwidth. Sandy Bridge-E people with certain motherboards can run PCIe 3.0 with Nvidia cards (just like you can with AMD cards). I've been running the Nvidia X79 patch and getting PCIe gen 3 on my P9X79 Pro with a 3930K and GTX 980.
  • dovah-chan
    There is one AM3+ board with PCI-E 3.0. That would be the Sabertooth Rev. 2.
  • ubercake
    Another article on Tom's Hardware where the 'ASUS ROG Swift PG...' link, listed at an unbelievable price, takes you to the PB278Q page.

    A little misleading.