Q&A: Under The Hood With AMD, Cont.
Tom's Hardware: How much enthusiasm have you found in the developer community? Are they coming to you saying, “Help us out—get us running with OpenCL”?
Alex Lyashevsky: Well, there is no resistance. Frankly, using something like OpenCL is not easy, but I try to break the psychological barrier. People start using OpenCL and often, because it is cumbersome, they don’t get the performance the marketing claims led them to expect, so they assume it doesn’t work. They think it’s a marketing ploy and they won’t use it. My goal is to break down those barriers and show that there is a lot of benefit. How to do that is another matter. The problem is that it is not exactly C. It’s not that it’s so different from C; it’s a problem of mindset. You have to understand that you are really dealing with massively parallel problems. It’s more a problem of people understanding how to run 32 or 64 threads in parallel, in lockstep, and that is what prevents wider, easier adoption.
From there, you probably know there are architectural problems. There are system-wide performance concerns, because we need to move data from the CPU or system side (the “system heap,” as we say) to the video heap to get the best performance. That movement, first of all, creates friction and resistance. Second, at a system level, it costs performance, because you have to move the data at all. The question is how efficiently you move it, and that is what we teach developers to do, even when perfect efficiency isn’t possible. The future will bring fully unified memory as part of HSA; memory is already physically unified on our APUs, but not on our discrete offerings. So, to get the best performance from our device, you have to use specialized memory or a specialized bus. This is another piece of information people miss when they start developing with OpenCL. People come in with the simplified assumption that they cannot, or should not, differentiate CPU architectures from GPU architectures. And fortunately or unfortunately—it’s difficult to say—while OpenCL is guaranteed to work on both sides, there is no guarantee that CPU and GPU performance will be equal. You have to program knowing that you are coding for the GPU, not the CPU, to make the best use of OpenCL on the GPU.
Tom's Hardware: In a nutshell, what is your mission? What message must get through to developers?
Alex Lyashevsky: First of all, you must understand that you have to move data. Second, understand that your program must process that data in a massively parallel way. And third, almost the same as the second, you have to understand that optimization is achieved through parallel processing. You have to understand the architecture you are programming for. That’s the purpose of the help we are providing to developers, and it’s starting to really pay off. We have surveys of developers in multiple geographies showing that OpenCL is gaining a lot of momentum.
Tom's Hardware: Let’s talk about the APU in the context of mobile computing and power. Have the rules of power-savings algorithms changed? Does an APU throttle differently depending on whether it’s on AC or battery power?
Alex Lyashevsky: On the CPU side, I’m afraid it is much the same. The memory and power management system is very sophisticated, and sometimes it depends on the operating system, which holds most of the rights to decrease activity. But our GPU is pretty adaptive, and its activity sometimes even affects our performance management. If you don’t put reasonable load pressure on our hardware, it will try to run at the lowest power possible. It basically slows itself down when it doesn’t have enough work to do. So yes, it is adaptive. I couldn’t say that it’s absolutely great, but we try to make it as power-efficient as possible.