
AMD Fusion: How It Started, Where It’s Going, And What It Means

Heterogeneous Roots

In the end, did Fusion matter? Quite simply, it changed the direction of modern mainstream computing. All parties agree that discrete graphics will remain firmly entrenched at the high end. But according to IDC, by the end of 2011, nearly three out of every four PC processors sold were integrated, hybrid processors (APUs, as AMD calls them). AMD adds that half of all processors sold across all computing device segments, including smartphones, are now what it refers to as APUs.

APU Sales

Ubiquitous as that might sound, though, the APU is not the endgame; it's only the beginning. Simply having two different kinds of core on the same die may improve latency, but the aim of Fusion was always to leverage heterogeneous computing in the most effective ways possible. Having a discrete CPU and GPU each chew on the application tasks best suited to it is still heterogeneous computing; putting those two cores on the same die is merely an expression of heterogeneous computing better suited to certain system goals, such as delivering high performance in a lower power envelope. Of course, all of this assumes that programs are being written to leverage a heterogeneous compute model, and most are not.
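The division of labor described above can be sketched in plain Python. This is an illustrative model only, not a real GPU API: the function and job names are hypothetical, and the "GPU" path is simply a bulk per-element map, the dependency-free shape that data-parallel hardware executes well, while the "CPU" path is branchy, serially dependent logic.

```python
# Illustrative sketch of the heterogeneous split: route each job to the
# engine it suits. All names here are hypothetical, not a real API.

def cpu_task(state):
    """Branchy, serial logic: each step depends on the last (CPU-suited)."""
    total = 0
    for x in state:
        total = total * 2 + x if x % 2 else total + x
    return total

def gpu_kernel(x):
    """Pure per-element function, no cross-element dependency (GPU-suited)."""
    return x * x + 1

def dispatch(workload):
    """A heterogeneous scheduler routes each job to the better-suited engine."""
    results = {}
    for name, kind, data in workload:
        if kind == "serial":
            results[name] = cpu_task(data)
        else:  # "data_parallel": a bulk map, the shape a GPU runs in parallel
            results[name] = [gpu_kernel(x) for x in data]
    return results

jobs = [("ai_logic", "serial", [1, 2, 3, 4]),
        ("particles", "data_parallel", [0, 1, 2, 3])]
print(dispatch(jobs))  # → {'ai_logic': 13, 'particles': [1, 2, 5, 10]}
```

In a real heterogeneous runtime such as OpenCL or CUDA, the `data_parallel` branch would launch a kernel across thousands of GPU threads instead of looping; the point of the sketch is only that the application, not the hardware, decides which work goes where.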

Ageia was one of the first companies in the PC world to address this problem. In 2004, the fledgling semiconductor firm purchased a physics middleware company called NovodeX, and thus was born the short-lived field of physics processing units (PPUs), sold on standalone third-party add-in cards. For games coded to leverage Ageia's PhysX engine, these cards could radically accelerate physics simulation and particle motion. PhysX caught on with many developers, and Nvidia bought Ageia in 2008. Over time, Nvidia phased out the dedicated PPU hardware and instead supported PhysX on any CUDA-capable GeForce card, from the 8 series onward.


Ageia’s fame drew the attention of Dave Orton and others at ATI. Even before the AMD merger, ATI had been working to enable general-purpose GPU computing (GPGPU) in its Radeon line. In 2006, the R580 GPU became the first ATI product to support GPGPU, which the company soon branded as Stream. The confusing nomenclature of Stream, FireStream, stream processors, and so on gives some indication of the initial lack of cohesion in this effort. Stanford’s Folding@home distributed computing project became ATI’s first showcase for just how dramatic the GPGPU performance advantage could be.

The trouble was that Stream never caught on. Nvidia seized its 2006/2007 execution upswing, capitalized on the confusion then reigning at AMD, and solidified CUDA as the go-to technology for GPGPU computing. But that is a bit like calling a goldfish the largest creature in the tank when all the other fish are guppies: despite plenty of attention in gaming and academic circles, GPGPU development remained a niche pursuit, far from mainstream awareness.

"AMD has been promoting GPU compute for a really long time," says Manju Hegde, former CEO of Ageia and now corporate vice president of heterogeneous applications and developer solutions at AMD. "But eight years ago, it wasn’t right. Five years ago, it wasn’t right. Now, with the explosion of the low-power market, smartphones and tablets, it’s right. And for developers to create the kinds of experiences that normal PC users expect, they have to go to GPU computing—but it has to be based on something easy like HSA."