TrueAudio: Dedicated Resources For Sound Processing
If you followed along with AMD’s tech day webcast, then you sat through a lot of TrueAudio discussion. In fact, given the amount of time dedicated to TrueAudio, the feature seemed like it’d be the day’s emphasis.
At the event, we were hearing the partner demos across eight channels, and the positional audio was certainly discernible, if not overwhelmingly busy (on purpose, no doubt). But we all know that 7.1- and even 5.1-channel sound setups are rare outside of a home theater. Two- and 2.1-channel configurations, including headsets, are far more common. Unfortunately, it didn’t sound like anyone tuned in over Livestream was hearing the same output in stereo.
Anyone who was around in the late ‘90s to hear Aureal’s and Sensaura’s technologies, before both were acquired by Creative, knows that the head-related transfer functions (HRTFs) used to create convincing positional audio over two channels are not new. The point of TrueAudio is to facilitate more complex sound effects (those HRTFs aren’t computationally free) without burdening the host processor. Today, AMD says, audio gets as much as 10% of a game’s CPU utilization budget, limiting what developers can do. With TrueAudio, AMD wants to guarantee the availability of real-time processing resources dedicated to sound, regardless of which host CPU is installed.
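Those HRTFs are, at heart, per-ear FIR filters: every sound source gets convolved with a left- and a right-ear impulse response, so the cost scales with the number of simultaneous sources. A minimal sketch (with synthetic, purely illustrative impulse responses standing in for measured HRIR data) shows where that per-source work comes from:

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source to stereo by convolving it with a
    head-related impulse response (HRIR) pair. Two FIR convolutions
    per source is why HRTF processing isn't computationally free."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Illustrative (synthetic) HRIRs, not real measurements: the right
# ear hears the source later and quieter, as if it sat off to the left.
fs = 48_000
hrir_l = np.zeros(64)
hrir_l[0] = 1.0
hrir_r = np.zeros(64)
hrir_r[12] = 0.6            # ~0.25 ms interaural time difference

mono = np.sin(2 * np.pi * 440 * np.arange(fs // 10) / fs)  # 100 ms tone
stereo = binauralize(mono, hrir_l, hrir_r)
```

With dozens of sources and longer, measured impulse responses, those convolutions add up quickly, which is exactly the load AMD wants moved off the CPU.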
This is achieved through the Tensilica HiFi2 EP Audio DSP cores mentioned on the previous page. In the R7 260X, there are three cores integrated on the Bonaire GPU. The higher-end R9 290 and 290X will also feature three DSP cores dedicated to TrueAudio. Those DSPs employ Tensilica’s Xtensa ISA with fixed- and floating-point support, which AMD says makes them equally useful for high-end gaming and embedded applications. Because the DSP is programmable by nature, you can feed it anything you want, so long as there’s a decoder available. To that end, professional audio software vendors are purportedly showing interest, eager to see what dedicated hardware can do that host-based processing couldn’t.
The real-time nature of audio in a gaming environment means that fast access to compute cycles and memory is imperative, even if the cores themselves aren't particularly powerful. Each one includes 32 KB of instruction and data cache, along with 8 KB of scratch RAM. A fast routing interface connects the DSPs to 384 KB of shared internal memory organized in 8 KB banks. The local resources are fed by a multi-channel DMA engine able to keep the cores busy. And up to 64 MB of frame buffer memory is addressable through a low-latency bus interface shared with the display pipeline.
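Working out the arithmetic on that shared pool, 384 KB in 8 KB banks comes to 48 banks. The sketch below models that layout with simple sequential banking; the bank-selection scheme is our assumption for illustration, not something AMD has documented:

```python
# Illustrative model of the TrueAudio shared memory pool.
# Capacities come from AMD's figures; the addressing scheme
# (sequential 8 KB banks) is an assumption for illustration.
BANK_SIZE = 8 * 1024                  # 8 KB per bank
SHARED_TOTAL = 384 * 1024             # 384 KB shared internal memory
NUM_BANKS = SHARED_TOTAL // BANK_SIZE # 48 banks

def bank_of(addr: int) -> int:
    """Map a byte address in the shared pool to its bank index,
    assuming banks are laid out back-to-back."""
    if not 0 <= addr < SHARED_TOTAL:
        raise ValueError("address outside the 384 KB shared pool")
    return addr // BANK_SIZE
```

Banked organization like this lets multiple DSPs hit different banks in the same cycle, which matters when three cores share one pool under real-time deadlines.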
One of the first questions that came to mind upon hearing about TrueAudio was, “Will game developers, already strapped for time and money as they get their titles to market, put resources into sound when there’s so much going on in graphics, physics, and AI?” AMD seems to think the impact on ISVs will be minimal, though. Because most developers use middleware for their audio, TrueAudio needs support from those companies first and foremost. Once support lands in Audiokinetic’s Wwise and Firelight’s FMOD, detecting and utilizing TrueAudio becomes much easier for individual titles. From there, the feature exerts its influence before the audio is handed off to a codec, and is consequently compatible with any output type.
What about the fact that AMD is only making TrueAudio available across three products, two of which aren’t even available yet? Representatives say that AMD has to start somewhere with TrueAudio, and this is simply the first public airing. I’d add that high-end graphics cards, destined for high-end PCs, also don’t need audio effects acceleration as much as less powerful platforms do. But you can guess where this is going: expect the same technology to start showing up in AMD’s APUs and mobile GPUs, which are less powerful and might even realize power benefits from accelerating audio.