HSA's Big Picture
With all of the focus on APUs and GPGPU, it’s easy to forget that there’s more to life than parallel code. The CPU remains a critical part of heterogeneous computing. Much of the code in modern applications is still serial and scalar in nature and will only run well on strong CPU cores. But even for the CPU, there are different types of workloads. Some do best on a few fast cores, while others excel on a larger number of lower-power cores. In both cases, and as mentioned earlier, applications need to be tailored to fit the power envelope of a particular device, whether it’s an all-in-one, notebook, tablet, or phone. As APUs gradually take over most (but not all) of the CPU arena, we’re seeing them diversify and segment to address these different power requirements. The difference now is that APUs seem likely to soon offer nearly twice the diversity of recent CPU families, since they must address scalar as well as vector workloads across device markets.
I attended AMD’s 2012 Fusion Developer Summit (AFDS) in Seattle, and, to my ears, it sounded like the last thing on anyone’s mind was the desktop market. There was a lot of buzz about AMD leveraging HSA to find better roads into the mobile markets. The biggest news by far was ARM, the leading name in ultramobile processors, joining the HSA Foundation (remember, HSA is architecture-agnostic). This has significant ramifications in many directions.
Everybody knows that mobile is hot and desktop is flat, at least in industry sales terms. To me, much of the messaging at AFDS seemed to reflect this, perhaps because desktop is the segment that cares least about the power and efficiency benefits HSA promises. So when I was able to sit down with Phil Rogers, I asked him whether one outcome of HSA would be a gradual shift by AMD toward battery-powered devices and away from desktops.
"This is a common misconception," he said. "Power matters a lot on the whole range of platforms. On the battery situation, everybody gets it. Even with desktops, and more and more, what we’re calling desktops are becoming all-in-ones, people want to know not just how fast it runs but how quiet it is and how attractive it is as a product. We’re seeing that even gamers don’t want a box next to their leg pumping hot air on their shins while they’re playing. What they really want is a 30” screen on the wall with a PC built into it that runs fantastic. And in that environment, you do care about power. Even if you don’t care about the electricity bill, you don’t want fans whining and screaming or heating up and taking away clock speed."
At the other end of the market, servers stand to benefit greatly from HSA. Consider data centers and the continuing growth of cloud computing. With even smallish data centers now hosting more than 10,000 servers each, power efficiency grows ever more urgent. Generally speaking, hardware accounts for only about one-third of a server's total cost over its service life; another third goes to the electricity it consumes, and the remaining third goes toward cooling. If HSA can improve compute efficiency, allowing systems to complete tasks more quickly and then power down large logic blocks or entire cores, power consumption can drop dramatically.
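To make that economics argument concrete, here is a minimal back-of-envelope sketch of the "finish fast, then idle" effect. The only figures taken from the text are the rough one-third/one-third/one-third cost split; every dollar amount, wattage, and duty cycle below is an illustrative assumption, not data from AMD or the article.

```python
# Illustrative back-of-envelope model of a server's lifetime cost.
# Only the rough thirds split comes from the text; all numeric values
# here are assumptions chosen purely for illustration.

HARDWARE_COST = 3000.0     # assumed purchase price per server (USD), ~one-third
ELECTRICITY_COST = 3000.0  # assumed lifetime electricity cost (USD), ~one-third
COOLING_COST = 3000.0      # assumed lifetime cooling cost (USD), ~one-third


def lifetime_cost(active_fraction, active_watts=300.0, idle_watts=60.0):
    """Scale the energy-dependent thirds by average power draw.

    active_fraction: share of time the server runs at full power; the
    rest of the time it idles with large logic blocks powered down.
    Cooling cost is assumed to track electricity cost proportionally.
    """
    baseline_avg_watts = active_watts  # always-busy baseline
    avg_watts = active_fraction * active_watts + (1 - active_fraction) * idle_watts
    scale = avg_watts / baseline_avg_watts
    return HARDWARE_COST + scale * (ELECTRICITY_COST + COOLING_COST)


# A server that finishes its work in 60% of the time and idles otherwise:
always_busy = lifetime_cost(1.0)
race_to_idle = lifetime_cost(0.6)
print(f"always busy: ${always_busy:,.0f}   finish fast then idle: ${race_to_idle:,.0f}")
# With these assumed numbers, the two energy-dependent thirds of the cost
# shrink roughly in proportion to the drop in average power draw.
```

Under these assumptions the always-busy server costs about $9,000 over its life, while the one that finishes work faster and idles costs roughly $7,000, which is the kind of saving that makes per-task efficiency matter at data-center scale.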
"You can only pack processors so densely," says Hegde. "HSA allows you to process more densely and at a lower power envelope. I don’t need to tell you that CUDA has been doing HPC applications for five years. All those applications are so much easier with HSA because HSA is heterogeneous between GPU and CPU. Nvidia always leans toward the GPU. And certainly, there are some embarrassingly parallel applications out there, but the vast majority of applications are not, including many HPC applications. So don’t think that HSA is just about client; it’s an architecture that spans many platforms."