Add it all up, and the results definitively show that GPU-based acceleration of some sort should be mandatory for anyone with a significant amount of editing work to process. Not only do content creators need to keep an eye out for hardware able to accelerate their favorite applications, but they also need to pay attention to how their software of choice utilizes that hardware. And if the tools you're using don't yet take advantage of GPU-based acceleration, it's worth finding out why. Not every workload is well suited to the sort of parallelism a GPU offers, but many media-oriented tasks clearly benefit. The question now becomes how quickly vendors will make this support widely available throughout their wares.
“I don't think the OpenCL API in itself is hard,” says GIMP/GEGL developer Victor Oliveira. “In fact, in my opinion, it is cleaner for general-purpose computation than other APIs like OpenGL and CUDA. Things can get hairy when you have to integrate OpenCL in an existing application that doesn't take performance and parallelism into account. Especially when data processing is split in many functions and you have to put all this in a kernel, that can complicate things and may explain why OpenCL adoption is slow.”
Oliveira expects proprietary acceleration APIs to keep their current footholds in niche vertical markets, such as HPC, because such organizations tend to be more forgiving of proprietary systems. In the consumer world, though, he expects open standards to become the dominant paradigm. The more vendors that step up their efforts in this space, the faster that transition will happen.
“I think it's very positive that AMD pushes open standards,” says Oliveira. “It really helps to make developers—at least me—more confident about OpenCL, especially in the open source world. As OpenCL support becomes commonplace, we’ll see more applications like GIMP using it, starting with areas that can easily take advantage of the GPGPU parallel programming model: image/video/audio editing, machine learning, games, and so on.”
“We'll continue to look at all new GPGPU advances, as well as CPU advances, for ways to make our products faster,” adds Corel’s Jeff Stephen. “Faster doesn't just mean doing the same thing in less time; it also means opening up new options and opportunities that would otherwise be too slow to consider.”
For us, these changes can’t come fast enough. As long as users keep their processing local rather than in the cloud, we see growing demand for current-gen systems with GPGPU support, especially in the mobile arena. Previously, we never would have dreamed of throwing these sorts of graphics loads at notebooks, and now heterogeneous platforms are able to knife through them elegantly. By the time our next heterogeneous compute story is on deck (anticipate a focus on media transcoding tools), we should have the next generation of GPU and APU parts ready, and then...well, we expect awesomeness. But there’s only one way to know for sure.