Q&A: Under The Hood With AMD
One of our core objectives in this series on heterogeneous computing is to get a better understanding of some of the decisions surrounding OpenCL and DirectCompute. Why would a vendor choose to utilize them instead of other APIs, such as OpenGL or DirectX? What are these programming interfaces doing with data behind the scenes? What are their limits and how much untapped potential do they leave on the table?
Those are the questions you don’t see answered in marketing materials. Fortunately, we were able to corner two excellent authorities for this article and start gathering some answers. First up is Alex Lyashevsky, a performance application engineer at AMD and a senior member of the technical staff brought in from AMD’s acquisition of ATI in 2006. Lyashevsky is no talking head from marketing. He holds patents on parallel lossless image compression and the world’s first GPU-based H.264 decoder. Few people understand GPGPU computing as well as Lyashevsky, and fewer still can discuss it in the same breath as OpenCL acceleration.
Tom's Hardware: Photoshop CS6 is our headliner benchmarking app for this article, and Photoshop is no stranger to OpenGL. So why are we now getting OpenCL added into the mix?
Alex Lyashevsky: OpenGL is pretty widely used, and it actually has many of the same compute capabilities as OpenCL. However, OpenGL is targeted more towards graphics. When you run OpenGL, you usually assume there is some kind of an image or buffer you are trying to draw upon. OpenCL actually provides much more of a generic programming platform, more in the sense of computational domain. You can have an absolutely free way of defining your own computational domain instead of being attached to some kind of image or two-dimensional, pixel-based representation. Other than that, frankly, I sometimes encourage people to use OpenGL, because it has very good hardware-supported input buffer filtering, for example, and very efficient color buffer compositing on output.
Tom's Hardware: For developers, is there a significant difference between the two APIs in coding?
Alex Lyashevsky: Programming the OpenGL shader language is a difficult thing to get on top of. OpenCL may be a bit easier for developers. You see, OpenGL assumes that you have to set up some graphics context, meaning you have to set up a viewport, model matrix transformations, and so on. OpenGL is a graphical language, and this works well for some types of operations where graphics are related to the computation problem. But from a general programmer’s point of view, OpenGL is kind of nonsense. If they want to do data manipulation, why should they set up a triangle, viewport, or matrix? A more general way to program the GPU, which is enabled by OpenCL, is necessary for more widespread adoption. For example, it’s probably not very useful to use OpenGL to accelerate something like deflate and encryption in compression apps, but it is probably useful for image processing apps.
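To illustrate Lyashevsky's point, here is a minimal sketch of an OpenCL C kernel (the kernel name and parameters are our own, purely hypothetical example, not code from AMD or Adobe). Notice that there is no viewport, triangle, or matrix setup anywhere: the work-item index alone defines the computational domain.

```c
// Hypothetical OpenCL C kernel: scales an array of float values by a gain.
// The host enqueues this over a 1-D global range of its choosing; no
// graphics state (viewport, model matrix, render target) is involved.
__kernel void brighten(__global const float *in,
                       __global float *out,
                       const float gain)
{
    size_t i = get_global_id(0);  // this work-item's position in the domain
    out[i] = in[i] * gain;        // plain data manipulation, no drawing
}
```

The same operation in OpenGL would typically require binding a texture or framebuffer and drawing a screen-aligned quad just to touch every element, which is exactly the overhead a general-purpose programmer has no reason to want.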