AMD TrueAudio Next Reserves Part Of Your GPU For Physics-Based Acoustic Rendering

AMD announced TrueAudio Next, an open-source, physics-based acoustic rendering technology that leverages the parallel processing power of AMD GPUs. TrueAudio Next is part of AMD’s LiquidVR initiative, which is open source and free for anyone to use.

For a long time, this kind of advancement in audio technology wasn’t really necessary for personal computing. Surround sound algorithms do an acceptable job of approximating audio sources when your head stays more or less stationary. In VR, however, immersive audio becomes much more important. In fact, many experts agree that the auditory experience is responsible for more than half of the sense of immersion. If objects and environments don’t sound right, the immersion breaks. For example, if you step outside of a building in a VR experience, you would expect the audio to change with the environment. Likewise, if there is an object between you and a sound source, you’d expect the sound to differ from what you’d hear without the obstruction.

In May, during the 10-series (Pascal) launch, Nvidia revealed VRWorks Audio, a physics-based audio technology that uses Nvidia’s GPUs to trace rays through the environment and then applies the resulting geometry to the sound profile, producing physically accurate sound propagation. Now, AMD has technology that does more or less the same thing.

AMD TrueAudio Next is AMD’s answer to VRWorks Audio. It uses AMD’s Radeon Rays ray tracing technology to map the digital environment’s physical space and the objects within it, and then uses that information to render a real-time, dynamic, physics-based soundscape, which can include “more than 32 stereo 2-second convolution sources.” AMD said this allows TrueAudio Next to “deliver spatially- and positionally-accurate audio.”
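The core operation behind those “convolution sources” is convolving each source’s dry signal with a room impulse response derived from the ray-traced geometry. The following is a minimal CPU sketch of that idea in NumPy; it is our own illustration, not AMD’s API, and the impulse response here is a hand-built stand-in for what Radeon Rays would compute from the scene:

```python
import numpy as np

SAMPLE_RATE = 48000  # 48 kHz audio

def render_source(dry_signal, impulse_response):
    """Convolve a mono source with a room impulse response.

    In TrueAudio Next this convolution runs on the GPU; here we use
    NumPy's FFT-based convolution on the CPU to show the math.
    """
    n = len(dry_signal) + len(impulse_response) - 1
    return np.fft.irfft(
        np.fft.rfft(dry_signal, n) * np.fft.rfft(impulse_response, n), n
    )

# Toy example: a 1-second dry signal and a 2-second impulse response
# (matching the "2-second convolution sources" the article mentions).
rng = np.random.default_rng(0)
dry = rng.standard_normal(SAMPLE_RATE)
ir = np.zeros(2 * SAMPLE_RATE)
ir[0] = 1.0                   # direct path
ir[SAMPLE_RATE // 10] = 0.5   # one early reflection at 100 ms

wet = render_source(dry, ir)
```

A real renderer would run 32 or more of these convolutions per audio block, one per source, which is why offloading them to the GPU’s parallel compute units pays off.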

AMD said that TrueAudio Next can process audio signals without adding any latency to the graphics rendering process thanks to a feature called CU Reservation. CU Reservation allows a portion of the GPU’s compute units to be segregated from the graphics rendering pipeline and reserved for other tasks, such as TrueAudio Next’s Radeon Rays tracing workloads.
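The point of reserving compute units is predictability: audio work never has to wait behind a burst of graphics work. As a rough CPU analogy (this is our own sketch, not AMD’s API; the pool sizes and task names are invented), think of splitting one pool of workers into a graphics pool and a smaller reserved audio pool:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical analogy: a machine with 16 "compute units" where 4 are
# reserved for audio, so graphics bursts cannot starve audio deadlines.
TOTAL_UNITS = 16
RESERVED_FOR_AUDIO = 4

graphics_pool = ThreadPoolExecutor(max_workers=TOTAL_UNITS - RESERVED_FOR_AUDIO)
audio_pool = ThreadPoolExecutor(max_workers=RESERVED_FOR_AUDIO)

def graphics_task(frame):
    return f"frame {frame} rendered"

def audio_task(block):
    return f"audio block {block} convolved"

# Flood the graphics pool; audio blocks still find free reserved workers.
gfx = [graphics_pool.submit(graphics_task, f) for f in range(100)]
aud = [audio_pool.submit(audio_task, b) for b in range(8)]
audio_results = [f.result() for f in aud]
```

On the GPU the partition is done in hardware rather than with thread pools, but the contract is the same: the reserved units belong to audio, so its processing time stays bounded.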

AMD said that the CU Reservation technology is available only to approved developers, but TrueAudio Next doesn’t rely on CU Reservation to function; the feature simply adds more predictability to how the audio signals are handled.

AMD TrueAudio Next is part of AMD’s LiquidVR SDK, which is open source and available through AMD’s GitHub repository. For more information about LiquidVR, Radeon Rays, or TrueAudio Next, visit the GPUOpen website.

TrueAudio Next

  • Puiucs
    A really cool way of using the ACEs in AMD's GPUs
  • fixxxer113
    Interesting idea! I remember playing the original Half-Life where one of the sound engine features was the ability to change the way effects sounded depending on the shape, size and material of the room. Back then I thought maybe one day we could do away with pre-recorded effects completely and have a physics engine handle the generation of sound. Some kind of algorithm that would calculate the weight, density and other properties of objects and would create the appropriate sounds when they touched, collided, etc. I guess it would be a lot more complex than what physics engines do for movement and collision, but I think the only limit is computational power. Maybe AMD made the first step towards that.
  • memadmax
    "Maybe AMD made the first step towards that."

    Once again, AMD is one step behind nvidia and intel in both innovation and performance, notwithstanding just their CPUs and Vid cards... AMD might as well rename themselves CHINA, as that would give them an excellent synonym in terms of the products they punch out: Underwhelming, cheap knockoffs of something else...