Compute Shader and Texture Compression
We mentioned this open secret in the conclusion of our article on CUDA. Microsoft wasn't about to let the GPGPU market get away, and it now has its own way of using the GPU to crunch tasks other than drawing pretty pictures. And guess what? The model it chose, like OpenCL's, looks a lot like CUDA, confirming the clarity of Nvidia's vision. The advantage over the Nvidia solution lies in portability: a Compute Shader will run on an Nvidia or ATI GPU, and on the future Larrabee, and it offers better integration with Direct3D, even if CUDA already provides a certain amount of Direct3D interoperability. But we won't spend any more time on this subject, huge as it is; we'll look at it all in more detail in a few months, in a story on OpenCL and Compute Shaders.
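To give a flavor of the model before then, here is a minimal host-side sketch, in C++ against the Direct3D 11 API, of what launching a Compute Shader looks like. The function name and the 64-thread group size are our own illustrative choices, and the shader and output view are assumed to have been compiled and created elsewhere.

#include <d3d11.h>

// Illustrative sketch: bind a compute shader and its output buffer,
// then launch enough 64-thread groups to cover elementCount elements.
// This matches a shader declared with [numthreads(64, 1, 1)] in HLSL.
void RunComputeShader(ID3D11DeviceContext* context,
                      ID3D11ComputeShader* shader,
                      ID3D11UnorderedAccessView* outputView,
                      UINT elementCount)
{
    context->CSSetShader(shader, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 1, &outputView, nullptr);

    // One thread group per 64 elements, rounded up.
    context->Dispatch((elementCount + 63) / 64, 1, 1);

    // Unbind the output view so the resource can be read elsewhere.
    ID3D11UnorderedAccessView* nullView = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullView, nullptr);
}

Because the dispatch goes through the same device context as draw calls, the shader's output buffer can then be consumed directly by the rendering pipeline, which is the Direct3D integration mentioned above.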
Improved Texture Compression
First included with DirectX 6 ten years ago, DXTC texture compression quickly spread to GPUs and has been used heavily by developers ever since. Admittedly, the technology developed by S3 Graphics was effective, and its hardware cost was modest, which no doubt explains its success. But needs have changed: DXTC wasn't designed with HDR image sources or normal maps in mind. So Direct3D's goal was twofold: enable compression of HDR images and limit the “blockiness” of the traditional DXTC modes. To do that, Microsoft introduced two new modes: BC6 for HDR images and BC7 for higher-quality compression of LDR images.
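Both new modes keep the property that made DXTC so cheap in hardware: a fixed-rate encoding, with every 4x4 block of texels packed into 128 bits, or 8 bits per texel. A minimal sketch of that storage arithmetic (the function is ours, for illustration):

#include <cstddef>
#include <cstdio>

// Bytes occupied by one mip level compressed with BC6 or BC7:
// each 4x4 texel block takes 128 bits (16 bytes).
std::size_t BcCompressedSize(std::size_t width, std::size_t height)
{
    const std::size_t blocksX = (width + 3) / 4;   // round up to whole blocks
    const std::size_t blocksY = (height + 3) / 4;
    return blocksX * blocksY * 16;
}

int main()
{
    // A 1024x1024 FP16 RGB source weighs 6 MB raw; BC6 stores it
    // in 1 MB, a 6:1 ratio (BC6 carries no alpha channel).
    std::printf("%zu bytes\n", BcCompressedSize(1024, 1024));
    return 0;
}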