DirectCompute And OpenCL: The Not-So-Secret Sauces
On and off over the past couple of years, we've tried to evaluate AMD's Stream efforts against Nvidia's CUDA infrastructure. Ultimately, though, they're tough comparisons to make. Proprietary efforts severely limit the number of components you can match up against one another. We were able to combine Stream, CUDA, and Quick Sync into one transcoding piece thanks to the diligent engineering of CyberLink and ArcSoft: Video Transcoding Examined: AMD, Intel, And Nvidia In-Depth.
Since then, AMD has moved away from a proprietary approach to general-purpose GPU (GPGPU) computing in favor of the industry-standard DirectCompute and OpenCL APIs. With these, developers can more easily tap the GPU's programmable logic to perform highly parallelized tasks faster, and often more efficiently, than an x86 CPU on its own. Such tasks often exist within graphics-intensive workloads, but developers are gradually expanding how GPUs (and now APUs) can be applied in other areas. In fact, APUs may turn out to be the more optimized solution because they put silicon suited to both single instruction, single data (SISD) and single instruction, multiple data (SIMD) processing on the same die. Whereas applications used to stress one style of processing or the other, we're now seeing increasingly graphical interfaces applied to structured-data software, making a hybrid approach to processing more forward-looking. Nvidia, in comparison, is still pushing CUDA hard. But it isn't ignoring OpenCL; the company's drivers incorporate OpenCL 1.1 support.
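To make the idea of a "highly parallelized task" concrete, here's a minimal OpenCL C kernel sketch of the kind of data-parallel work these APIs expose. It isn't taken from any shipping application; the kernel name and parameters are purely illustrative. Each work-item handles one pixel, and the GPU schedules thousands of work-items concurrently, which is exactly the SIMD-friendly pattern described above.

```c
// Illustrative OpenCL C kernel (hypothetical names, not from any real app).
// Each work-item brightens one RGBA pixel; the runtime launches one work-item
// per pixel, so the same scalar math runs across millions of pixels in parallel.
__kernel void brighten(__global const uchar4 *src,
                       __global uchar4       *dst,
                       const float            gain)
{
    size_t i = get_global_id(0);          // unique index of this work-item
    float4 px = convert_float4(src[i]);   // unpack one pixel to floating point
    px *= gain;                           // apply the brightness scale factor
    dst[i] = convert_uchar4_sat(px);      // clamp back to 8-bit and store
}
```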
In late 2006, ATI began catering to developers who wanted to dive more deeply into SIMD-oriented, vector-based, highly parallel computing tasks. Soon, the ATI Stream SDK and Brook+ language started providing tools that let software vendors get, as ATI put it, “closer to the metal” in graphics processors. But a broader, standards-oriented approach was needed. This is where DirectCompute, part of Microsoft's DirectX API, and its counterpart from the Khronos Group, OpenCL, came into play. As with DirectX and OpenGL, Windows-based apps are likely to adopt DirectCompute, while OpenCL takes a more platform-agnostic approach.
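As a quick illustration of that platform-agnostic point, the host-side sketch below (plain C against the standard OpenCL headers, with error handling trimmed) enumerates whatever OpenCL platforms happen to be installed, whether they come from AMD, Nvidia, or anyone else. It's a generic example, not code from either vendor's SDK.

```c
/* Generic host-side sketch: list the installed OpenCL platforms.
 * The same code runs unmodified on any vendor's OpenCL implementation. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint count = 0;

    /* Ask the OpenCL runtime which vendor platforms are present. */
    clGetPlatformIDs(8, platforms, &count);

    for (cl_uint i = 0; i < count && i < 8; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("OpenCL platform %u: %s\n", i, name);
    }
    return 0;
}
```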
With standard APIs on the table, developers are finally comfortable adopting GPU/APU acceleration in ways that simply didn’t happen when AMD and Nvidia were each pursuing their own competing interests. To give you a taste of what’s on deck in this article series, we're going to be exploring graphics hardware-based acceleration in:
- Video post-processing
- Gaming
- Personal smart cloud apps
- Videoconferencing
- Video editing
- Media transcoding
- Productivity and security software
- Photography and facial recognition
- Advanced user interface design
If AMD and Nvidia are to be believed, we should expect to see GPU/APU acceleration spread through a more diverse range of applications, introducing significant performance gains. Will more expensive graphics cards or more complex APUs deliver better results? Probably. Thousands of stream processors should naturally do more work in less time than hundreds. But even modest mainstream APUs should deliver quantifiable benefits.
Note that AMD’s architecture allows for the APU and certain discrete GPUs to work in tandem, much like CrossFire or SLI. So, it should be possible to start on a budget and scale up acceleration down the road. We don't really touch on this multi-GPU functionality here today, but we might as the series progresses.