Thief Patch Enables TrueAudio And Mantle: First Benchmarks

This is the second time an existing game has added optimizations for AMD's Mantle API, and it's the first time we've seen a title updated with TrueAudio support. Both features arrive in a patch, released today, for Thief.

If you're not already familiar with Mantle, it's an API designed to circumvent some of DirectX's and OpenGL's inefficiencies by giving software developers more direct access to graphics hardware resources. It's still in what AMD calls an early beta stage, and we witnessed some of its growing pains as DICE became the first ISV to expose Mantle support in Battlefield 4.

TrueAudio describes a hardware feature in certain AMD products able to accelerate sound processing, offloading that task from the host. You can read all about it in TrueAudio: Dedicated Resources For Sound Processing. In short, though, it consists of three Tensilica HiFi EP Audio DSP cores built into a handful of GPUs. In Thief, TrueAudio is said to handle the game's new convolution reverb audio effect. You can turn the feature on or off on any machine, but on a system without TrueAudio hardware, enabling it purportedly increases CPU utilization.
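To give a sense of why convolution reverb is expensive enough to be worth offloading to dedicated DSPs: the effect convolves the dry audio signal with a recorded room impulse response, which costs one multiply-accumulate per sample of the impulse response for every output sample. Below is a minimal textbook sketch in NumPy with a synthetic impulse response; the function name, mix parameter, and signals are illustrative only, and Thief's actual DSP pipeline (which real engines implement with low-latency partitioned FFT convolution) is not public.

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.5):
    """Textbook convolution reverb: convolve the dry signal with a room
    impulse response (IR), then blend the wet and dry signals."""
    wet = np.convolve(dry, impulse_response)
    wet = wet[:len(dry)]              # trim the reverb tail to input length
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak              # normalize the wet signal to [-1, 1]
    return (1.0 - wet_mix) * dry + wet_mix * wet

# Synthetic example: a 1 kHz tone through an exponentially decaying noise IR
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
dry = np.sin(2 * np.pi * 1000 * t)                       # one second of tone
ir = np.exp(-5 * t[:sample_rate // 2]) * np.random.randn(sample_rate // 2)
out = convolution_reverb(dry, ir)
```

Even this naive version performs on the order of a billion multiply-accumulates for one second of audio with a half-second impulse response, which is why doing it on dedicated DSP cores rather than the CPU is attractive.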

Putting AMD's Proprietary Technologies To The Test

Now that there are two games out there with Mantle support, I wanted to embark on a proper analysis of AMD's API. Unfortunately, we just didn't have enough time to thoroughly benchmark this game's performance with and without the patch's new functionality. I want to understand the impact of both Mantle and TrueAudio across a range of hardware, and the best way to facilitate that is through tests of multiple graphics cards on multiple platforms. It's a time-intensive task that doesn't come together in a weekend. With that said, I don't want to leave you hanging. So, I put together a teaser using a small sample of the data I've already collected.

The following charts represent Thief run on an Intel Core i7-4770K, AMD FX-8350, and AMD FX-4170 CPU. Each processor is complemented by a Radeon R7 250X and Radeon R9 270 (once in DirectX and once using Mantle), in addition to a GeForce GTX 650 and 660 in DirectX for comparison. Not only does this illustrate some of the performance ramifications in Thief, but it should also give you an idea of the scope we're targeting for our upcoming deep-dive, which will include more CPUs, APUs, graphics cards, and detail settings (along with a look at real-world quality, too).

A glimpse at our preliminary data leaves us optimistic about Mantle's impact on Thief. The API appears to stretch the playability of an affordable CPU like AMD's FX-4170, allowing it to behave more like Intel's Core i7 in this title. Most encouraging to me are the minimum frame rate results, which represent worst-case performance. That's exactly where you want to see higher numbers in order to make a marginal configuration more playable.

But it's still too early for absolutes when it comes to Mantle's impact on gaming. There's a lot more to this story, including image quality comparisons. The Battlefield 4 introduction was affected by bugs that changed the game's graphics output, and we want to make sure Thief isn't subject to something similar. I also have some interesting/anomalous performance figures that I'll hold off on publishing until we can get some answers from AMD. Finally, I haven't yet tested TrueAudio, and Thief gives us an excellent opportunity to do so.

Those caveats aside, Mantle looks to be especially promising for owners of entry-level CPUs, based on the data I've collected so far. Fans of Thief playing the game on a modern Radeon card get a quantifiable performance boost through a free patch, and that's pretty cool. I'm working hard to gather and assimilate as much data as I can to give you a meaningful analysis in the near future, so stay tuned.

  • scrumworks
    Huge gains on AMD CPUs. Well done AMD. Now just make a decent 65W CPU I can buy.
    Reply
  • ta152h
    The most interesting thing is the FX-8350 beats the i7-4770K with Mantle (minimum frame rates), and gets smoked on DirectX. Being 10% faster probably isn't within margin of error either, so it's interesting.

    The newest drivers (14.2) also are helping the Kaveri APUs pretty dramatically, at least with regards to the A10-7850K. The performance improvement fundamentally changes the limitations of the GPU, and makes it more consistent with what we expected when we looked at the specs. I'm a little surprised it hasn't been reviewed here, since the one site that did showed between 20% and 40% improvements in frame rates. A more thorough review would be interesting, to say the least.

    Either way, with hardware pushing against a wall, software is going to be an increasingly important driver (forgive the pun) in performance. It always was important, but now it's critical. Software companies can't be lazy anymore, and hope the next generation of hardware will carry the load to the next level of performance.

    It amazes me Microsoft had to wait for AMD to release Mantle before they realized they might want to improve DirectX. It wasn't always right there before their eyes? A hardware company had to figure it out, for them to copy? Intel always made better compilers than the trash Microsoft sells, but for AMD to show up Microsoft is something new, and should be embarrassing for them.
    Reply
  • benedict78
    There's something wrong with the charts. Low detail has lower FPS than normal.
    Reply
  • Shodoman
    There's something wrong with the charts. Low detail has lower FPS than normal.

    because there are different cards involved
    Reply
  • cats_Paw
    I think we want a bit more in-depth comparison than 2 graphs from different GPUs and different detail presets...
    Reply
  • bemused_fred
    Since this is all about shifting the load from CPU to GPU, it seems to have no effect whatsoever when you've got a CPU powerful enough to prevent bottlenecking (e.g., the I7 in this graph, or probably any I5 post-sandy bridge). Really, this is only useful if you've mis-managed your build and have a GPU far more powerful than your CPU (e.g., an FX-4170 and an R9-270). It's a cool technology, but the applications seem limited.
    Reply
  • outlw6669
    Wow, those are some great performance gains from the Red team!
    Reply
  • maxalge
    Since this is all about shifting the load from CPU to GPU, it seems to have no effect whatsoever when you've got a CPU powerful enough to prevent bottlenecking (e.g., the I7 in this graph, or probably any I5 post-sandy bridge). Really, this is only useful if you've mis-managed your build and have a GPU far more powerful than your CPU (e.g., an FX-4170 and an R9-270). It's a cool technology, but the applications seem limited.

    Actually, if you take a look at the gpu help section on tom's own website there are a LOT of people that made such a choice.
    Reply
  • silverblue
    Setting games to maximum detail levels the playing field somewhat, thus hiding CPU inadequacies.

    It amazes me Microsoft had to wait for AMD to release Mantle before they realized they might want to improve DirectX. It wasn't always right there before their eyes? A hardware company had to figure it out, for them to copy? Intel always made better compilers than the trash Microsoft sells, but for AMD to show up Microsoft is something new, and should be embarrassing for them.

    Well, some might say that it took a lot of wailing and gnashing from consumers about CrossFire frame pacing before AMD did something about that, but what forced them to do so was that their main competitor developed a way of measuring said issues...
    Reply
  • cleeve
    I think we want a bit more in-depth comparison than 2 graphs from different GPUs and different detail presets...

    I should hope so!

    And that's why, as the article states, this is merely a tiny sample of the data we've collected. Full analysis coming in the near future.

    Reading the article helps put the charts in context. :)
    Reply