AMD Bulldozer Review: FX-8150 Gets Tested
Perhaps the most hotly-anticipated launch in 2011, AMD’s FX processor line-up is finally ready for prime time. Does the company’s new Bulldozer architecture have what it takes to face Intel’s Sandy Bridge and usher in a new era of competition?
Single Floating-Point Unit, AVX Performance, And L2
Two Cores, One FPU
The shared floating-point unit is separate from the integer pipelines. So, when operations hit the dispatch interface at the end of the decode stage on their way to the integer units, any floating-point operations in that stream instead go to the floating-point scheduler. There, they compete for resources and bandwidth independently of the thread to which they belong.
As you can see in the diagram below, AMD’s floating-point logic is different from the integer side. Its purpose is execution-only; it reports its completion and exception information back to the parent integer core, which is responsible for instruction retirement.
The floating-point unit features two MMX pipelines and a pair of 128-bit fused multiply-accumulate (FMAC) units. Those FMAC pipes support four-operand instructions, which give you a non-destructive result. Intel plans to incorporate the three-operand format in its Haswell micro-architecture (the one to follow Ivy Bridge). AMD says it’ll also support FMA3 in the successor to Bulldozer, called Piledriver, expected in 2012.
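To make the operand-count difference concrete, here is a minimal sketch of the same multiply-accumulate written against the FMA4 and FMA3 compiler intrinsics. It assumes a GCC/Clang-style toolchain built with -mfma4 or -mfma; the wrapper function names are ours, purely for illustration.

```c
/* Sketch: one multiply-accumulate, expressed through FMA4 and FMA3
 * intrinsics. Wrapper names are illustrative, not from any vendor code. */
#include <immintrin.h>   /* FMA3: _mm_fmadd_ps */
#include <x86intrin.h>   /* FMA4: _mm_macc_ps on Bulldozer-class CPUs */

#ifdef __FMA4__
/* FMA4: dst = a * b + c. Four distinct registers at the ISA level,
 * so a, b, and c all survive the operation (non-destructive). */
__m128 mac_fma4(__m128 a, __m128 b, __m128 c)
{
    return _mm_macc_ps(a, b, c);
}
#endif

#ifdef __FMA__
/* FMA3: same math, but the encoding only carries three operands, so the
 * destination must reuse one of the source registers; the compiler emits
 * an extra copy if the overwritten value is still needed afterward. */
__m128 mac_fma3(__m128 a, __m128 b, __m128 c)
{
    return _mm_fmadd_ps(a, b, c);
}
#endif
```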
Any time we see vendors take divergent paths like this, we have to wonder how it’ll affect developers. So, we asked Adrian Silasi of SiSoftware what he expected to happen, and he pointed out that most developers won’t want to implement three code paths (AVX-only, AVX plus FMA3, and AVX plus FMA4). This makes good sense. And when you consider that few applications exploit AVX today, and none of them utilize FMA, AMD should be in a better position to support all three paths once Piledriver becomes a reality.
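For what it’s worth, choosing between those paths at run time is not especially exotic. The sketch below shows one way an application might do it via CPUID feature bits (AVX and FMA3 in leaf 1, FMA4 in extended leaf 80000001h). The process_* kernels are hypothetical placeholders, and the OSXSAVE/XGETBV check a production AVX loader should also perform is omitted for brevity.

```c
/* Sketch: run-time selection between AVX-only, AVX+FMA3, and AVX+FMA4
 * code paths. The process_* functions are hypothetical placeholders. */
#include <cpuid.h>

void process_avx(float *buf, int n);       /* AVX-only path    */
void process_avx_fma3(float *buf, int n);  /* AVX + FMA3 path  */
void process_avx_fma4(float *buf, int n);  /* AVX + FMA4 path  */
void process_scalar(float *buf, int n);    /* fallback         */

void (*select_kernel(void))(float *, int)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return process_scalar;

    int has_avx  = ecx & (1u << 28);   /* CPUID.1:ECX.AVX */
    int has_fma3 = ecx & (1u << 12);   /* CPUID.1:ECX.FMA */

    int has_fma4 = 0;
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        has_fma4 = ecx & (1u << 16);   /* CPUID.80000001h:ECX.FMA4 */

    if (has_avx && has_fma4) return process_avx_fma4;   /* Bulldozer     */
    if (has_avx && has_fma3) return process_avx_fma3;   /* FMA3-capable  */
    if (has_avx)             return process_avx;        /* Sandy Bridge  */
    return process_scalar;
}
```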
The more pressing question today is how Bulldozer’s AVX support stacks up against Intel’s. Sandy Bridge executes two 256-bit AVX operations per clock, while Bulldozer handles one.
Leading up to this launch, I started talking to Noel Borthwick, a talented musician and CTO of Cakewalk, Inc., about his company’s work optimizing Sonar X1 for AVX. According to a whitepaper that Noel co-authored, AVX instruction support helps reduce the processing load imposed by audio bit-depth conversions performed while streaming audio buffers through the playback graph, rendering, and mixing. Common conversions include 24-bit integer to 32-bit floating-point, 24-bit integer to 64-bit double-precision, and 32-bit float to 64-bit double-precision.
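For a sense of what such a conversion looks like when vectorized, here is a minimal sketch of widening a 32-bit float buffer to 64-bit doubles with AVX. This is not Cakewalk’s code; the function name is ours, it assumes the sample count is a multiple of four, and a real routine would add a scalar tail loop.

```c
/* Sketch: widening a 32-bit float audio buffer to 64-bit doubles with AVX,
 * in the spirit of the conversions the Cakewalk whitepaper describes.
 * Illustrative only; assumes n is a multiple of 4. */
#include <immintrin.h>

void float32_to_float64_avx(const float *src, double *dst, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128  f = _mm_loadu_ps(src + i);   /* load 4 x float      */
        __m256d d = _mm256_cvtps_pd(f);      /* widen to 4 x double */
        _mm256_storeu_pd(dst + i, d);
    }
}
```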
To that end, Noel sent over the binary for a test application that compares two of Cakewalk’s AVX-optimized routines to the unoptimized version. Both AMD and Intel have access to this very same metric, so its results shouldn’t come as a surprise to either company.
| Architecture | Operation | Result (CPU Cycles Gained/Lost) |
|---|---|---|
| AMD Bulldozer | Copy Int24toFloat64 | 61% Gain |
| AMD Bulldozer | Copy Float32toFloat64 | 77% Loss |
| Intel Sandy Bridge | Copy Int24toFloat64 | 69% Gain |
| Intel Sandy Bridge | Copy Float32toFloat64 | 14% Gain |
In the Copy Int24toFloat64 operation, Intel’s Core i7-2600K sees a 69% gain, while AMD’s FX-8150 realizes an also-impressive 61% gain. What does “a gain” actually constitute? We’re talking about the number of CPU cycles AVX shaves off, which translates into more potential processor throughput. Phrased differently, Sandy Bridge cuts the cycle count by a factor of 1.69, while Bulldozer cuts it by a factor of 1.61.
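As a rough illustration of how a figure like that might be derived (this is an assumed methodology, not Cakewalk’s actual test harness), you could time the scalar and AVX routines with the CPU’s time-stamp counter and compare; serialization and warm-up details are omitted for brevity.

```c
/* Sketch: deriving a "CPU cycles gained" figure by comparing a scalar
 * routine against its AVX counterpart with the time-stamp counter.
 * convert_scalar/convert_avx are hypothetical placeholders. */
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() */

void convert_scalar(const float *src, double *dst, int n);
void convert_avx(const float *src, double *dst, int n);

static unsigned long long cycles(void (*fn)(const float *, double *, int),
                                 const float *src, double *dst, int n)
{
    unsigned long long start = __rdtsc();
    fn(src, dst, n);
    return __rdtsc() - start;
}

void report(const float *src, double *dst, int n)
{
    unsigned long long scalar = cycles(convert_scalar, src, dst, n);
    unsigned long long avx    = cycles(convert_avx, src, dst, n);

    /* e.g. scalar = 169 cycles, avx = 100 cycles -> a "69% gain" */
    printf("gain: %.0f%%\n", 100.0 * ((double)scalar / avx - 1.0));
}
```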
On the other hand, in the Copy Float32toFloat64 operation, Core i7-2600K realizes a 14% gain as FX-8150 suffers a 77% loss. In trying to explain that loss, it seems that either Cakewalk’s vectorization code or (less likely) Microsoft’s native Visual Studio 2010 intrinsics aren’t optimized for AMD’s architecture. In either case, an application patch or a Visual Studio service pack could be needed.
If you flip to the Sandra 2011 results, you’ll see that AVX support does help FX-8150’s integer and floating-point performance. Sandy Bridge simply realizes a much larger floating-point gain in this synthetic metric.
Just before we wrapped up testing, AMD forwarded along two versions of x264, the software library behind front-ends like HandBrake (you’ll see us test the latest version of HandBrake shortly). However, these builds incorporate support for AVX and XOP instructions, the latter of which is exclusive to AMD’s architecture.
I modified Tech ARP’s x264 HD Benchmark 4.0 to utilize each of the new code paths (and updated it with CPU-Z 1.58 for system information), then ran FX-8150 through both builds and Core i5-2500K through the AVX-optimized build.
The results between AMD’s AVX and XOP code paths are pretty similar. Intel manages to finish the first pass faster, but AMD delivers better performance on the second pass.
Now, bear in mind that the number of AVX-optimized tests is small. It’s going to take a lot of software development work before we get a clearer picture of how AVX instruction support affects each of these architectures.
Sharing The L2
We already mentioned the shared L2 TLB responsible for servicing instruction- (front-end) and data-side (integer core) requests. However, there’s also a unified L2 cache shared between the two cores. This repository is 2 MB per module, giving you 8 MB of total L2 on a four-module FX-8000-series processor.
AMD says the Bulldozer module’s data prefetcher is also the product of significant power and silicon investment, which it gets away with by amortizing across both cores.