We've seen processor performance double every three to four years. And yet, some of the most demanding game engines we've tested are as old as the Core 2 Duo that still resides in my office PC. Surely, CPU bottlenecks would be a thing of the past, right? Well, as it turns out, GPU performance speeds ahead at an even faster rate than that of host processors. And so, the debate over whether to buy a faster CPU or even more graphics muscle rages on.
There comes a point where continuing the battle is moot, though. For us, that happened when our games ran smoothly at our largest monitor's 2560x1600 native resolution. It simply didn't matter if a faster component took us from an average of 120 to 200 frames per second.
With component speeds outpacing the resolutions available to exploit them, AMD introduced its Eyefinity technology and Nvidia responded with Surround. Both expand beyond a single display, making 5760x1080 a very playable resolution on high-end GPUs. In fact, a trio of 1920x1080 displays is both less expensive and more engrossing than a single 2560x1600 screen, giving us the perfect excuse to splurge on some extra pixel-pushing power.
But does a display surface stretching 5760x1080 require any additional processing muscle in order to prevent bottlenecks? Ah, suddenly that becomes an interesting question again.

Up until now, when we've used AMD's GPUs, we've typically paired them with its competition's processors. Is such a move backed by hard data? Previously, based on plenty of benchmark results, we would have said so. However, the company has a new architecture available, so we bought a boxed FX-8350 to challenge prior convention. After all, there was a lot to like in AMD FX-8350 Review: Does Piledriver Fix Bulldozer's Flaws?
Entering this contest at a significant economic disadvantage, Intel’s Core i7-3770K needs to prove that it's not only faster than the AMD chip in games, but fast enough to overcome its price premium in our value analysis.
Although both of the motherboards we're using come from Asus' Sabertooth family, the company charges more for its LGA 1155-equipped model, further complicating the value story for Intel. We picked these platforms specifically to achieve the ultimate fairness from a performance standpoint, without pricing getting in the way.
We began our testing with an older boxed Core i7-3770K as we waited for the FX-8350 we purchased. Relatively certain the AMD processor would hit at least 4.4 GHz without thermal issues, we started off Intel's processor at the same clock rate. Later, it became clear that our estimate was too conservative, as both CPUs exceeded 4.5 GHz at our chosen voltage levels.
Retesting at higher frequencies would have further delayed this story, so we stuck with 4.4 GHz on both the Intel and AMD chips, at least in the clock-matched portion of our benchmarking.
| Test System Configuration | |
|---|---|
| Intel CPU | Intel Core i7-3770K (Ivy Bridge): 3.5 GHz, 8 MB Shared L3 Cache, LGA 1155, Overclocked to 4.4 GHz at 1.25 V |
| Intel Motherboard | Asus Sabertooth Z77, BIOS 1504 (08/03/2012) |
| Intel CPU Cooler | Thermalright MUX-120 w/Zalman ZM-STG1 Paste |
| AMD CPU | AMD FX-8350 (Vishera): 4.0 GHz, 8 MB Shared L3 Cache, Socket AM3+, Overclocked to 4.4 GHz at 1.35 V |
| AMD Motherboard | Asus Sabertooth 990FX, BIOS 1604 (10/24/2012) |
| AMD CPU Cooler | Sunbeamtech Core-Contact Freezer w/Zalman ZM-STG1 Paste |
| RAM | G.Skill F3-17600CL9Q-16GBXLD (16 GB) DDR3-2200 CAS 9-11-9-36 1.65 V |
| Graphics | 2 x MSI R7970-2PMD3GD5/OC: 1010 MHz GPU, GDDR5-5500 |
| Hard Drive | Mushkin Chronos Deluxe DX 240 GB, SATA 6Gb/s SSD |
| Sound | Integrated HD Audio |
| Network | Integrated Gigabit Networking |
| Power | Seasonic X760 SS-760KM: ATX12V v2.3, EPS12V, 80 PLUS Gold |
| Software | |
| OS | Microsoft Windows 8 Professional RTM x64 |
| Graphics | AMD Catalyst 12.10 |
Great performance and quick installation have kept Thermalright’s MUX-120 and Sunbeamtech’s Core Contact Freezer on my shelf for several years. The brackets that came with these older samples make them non-interchangeable, however.
G.Skill’s F3-17600CL9Q-16GBXLD has a remarkable DDR3-2200 CAS 9 rating, using Intel XMP technology for semi-automatic configuration. Because the Sabertooth 990FX is not an Intel platform, it applies the same XMP values through Asus' DOCP setting.

Seasonic’s X760 provides the consistent efficiency required to assess platform power differences.

StarCraft II doesn’t support AMD's Eyefinity technology, so I looked at the recent work of our other editors before bringing back a few classics in today’s test: Aliens vs. Predator and Metro 2033.
| Benchmark Configuration (3D Games) | |
|---|---|
| Aliens vs. Predator | Using AvP Tool v.1.03, SSAO/Tessellation/Shadows On Test Set 1: High Textures, No AA, 4x AF Test Set 2: Very High Textures, 4x AA, 16x AF |
| Battlefield 3 | Campaign Mode, "Going Hunting" 90-Second Fraps Test Set 1: Medium Quality Defaults (No AA, 4x AF) Test Set 2: Ultra Quality Defaults (4x AA, 16x AF) |
| F1 2012 | Steam version, in-game benchmark Test Set 1: High Quality Preset, No AA Test Set 2: Ultra Quality Preset, 8x AA |
| The Elder Scrolls V: Skyrim | Update 1.7, Celedon Aethirborn Level 6, 25-Second Fraps Test Set 1: DX11, High Details No AA, 8x AF, FXAA enabled Test Set 2: DX11, Ultra Details, 8x AA, 16x AF, FXAA enabled |
| Metro 2033 | Full Game, Built-In Benchmark, "Frontline" Scene Test Set 1: DX11, High, AAA, 4x AF, No PhysX, No DoF Test Set 2: DX11, Very High, 4x AA, 16x AF, No PhysX, DoF On |
Although it reflects differences in the overall and Physics scores, 3DMark 11 indicates relatively little difference in graphics performance between the Core i7-3770K and FX-8350. As a result, AMD's desktop flagship already looks like a better value. Of course, we need to see how it fares in real-world games first.


Because we're sticking to the benchmark-only version of the old Aliens vs. Predator title, I wedged the results between the purely synthetic 3DMark and Metro 2033's in-game flyby sequence.


Even at the lowest resolution (the one that'd be most susceptible to a processor bottleneck), AMD's FX is only negligibly slower, on average, than the Intel platform.
Naturally, we know that averages aren't everything though. Stick with us; we're going somewhere with this...


Processor bottlenecks are most common at low resolutions. But nobody games at 1920x1080 using an $800 combination of high-end cards. Scaling up from High to Very High details and full eye-candy in Metro 2033 tips the scales in favor of AMD's FX processor at 4800x900. As we approach 5760x1080, performance becomes marginal, and so we copied a few of the benchmark’s performance graphs to gauge playability more accurately.
Our Radeon HD 7970s in CrossFire appear to have a little trouble achieving playable frame rates at our highest quality settings in Metro 2033. So, we captured a few images from its benchmark output. At 1920x1080, for example, the FX-8350 is only able to carry these cards to around 30 FPS.

Overclocking the FX-8350 has little effect. The slow spot on the benchmark map simply narrows a bit.

Intel’s Core i7-3770K suffers similarly, though. CPU overclocking yields very little benefit.


Minimum frame rates have to be taken in context. When they occur during the first second or two of a benchmark, as they often do in Metro 2033, we tend not to count them. Instead, we refer to the game's benchmark diagrams to see if performance is actually playable at 4800x900. Unfortunately, even backed by overclocked processors, our CrossFire configuration frequently drops below 20 FPS.


We're not here to evaluate graphics performance, though. Neither Intel's Core i7-3770K nor AMD's FX-8350 makes enough of a difference to make this game playable at the next-highest setting, so the FX-8350's lower price is going to count toward better value.
If you want to look at Metro 2033 in more depth, a full set of result graphs is available in this story's image gallery.
AMD’s lower-cost FX-8350 continues to maintain performance parity in Battlefield 3, even as our highest resolution and detail settings lean hard against a pair of Radeon HD 7970s.


Both AMD and Intel employ integrated memory controllers. However, Intel's exhibits better performance. We recently stumbled across a memory bottleneck in DiRT 3, and that could be reflected in F1 2012. If nothing else, this sets us up for another story idea.


Overclocking gives Intel's Core i7-3770K a quantifiable boost in F1 2012, but a clock rate increase barely nudges AMD's FX-8350. Memory frequency is held constant throughout, in case you need any hint as to what's happening behind the scenes.


Skyrim appears to be the most CPU-dependent game in today’s suite. It also appears to be the most heavily slanted toward Intel's architecture. AMD's FX-8350 appears adequate across all of the tested settings, though we do have a little more data to discuss.
Though we usually talk about average frames per second, an even more important measure of playability is milliseconds per frame. That's because frames that take a relatively long time to render can be quite jarring. In theory, a 91 FPS rate could include a single 100-ms frame and ninety 10-ms frames, and that one 100-ms frame would be what kills your experience.
Long frame times can happen on a single-GPU card. However, the complexities of synchronizing multiple GPUs make them more common in CrossFire and SLI configurations. We covered this micro-stutter effect in Micro-Stuttering And GPU Scaling In CrossFire And SLI, and have plans to cover this phenomenon in more depth in the next couple of months.
Since an evenly-spread 20 FPS rate would consist of 20 50-ms frames, we’re using 50 ms as the cut-off for actual playability in today’s analysis. Many gamers get annoyed with frame intervals far shorter (say, 30 ms), but that isn't as likely to get you killed as it is to simply bug you.
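The arithmetic behind the 91 FPS example above is easy to verify with a few lines of code. This is a minimal sketch (the frame times are the hypothetical values from our example, not measured data) showing how a healthy-looking average can hide a single jarring frame:

```python
# Sketch: how an average frame rate can hide a jarring frame.
# Frame times in milliseconds, mirroring the 91 FPS example above:
# one 100-ms spike plus ninety 10-ms frames.
frame_times_ms = [100] + [10] * 90

total_seconds = sum(frame_times_ms) / 1000   # 1000 ms = exactly one second
avg_fps = len(frame_times_ms) / total_seconds
worst_ms = max(frame_times_ms)

print(f"Average: {avg_fps:.0f} FPS")          # 91 FPS looks perfectly smooth...
print(f"Worst frame: {worst_ms} ms")          # ...but one frame took 100 ms
print(f"Under the 50 ms cut-off? {worst_ms <= 50}")  # False
```

The average alone says "91 FPS, no problem," while the worst-frame check immediately fails our 50-ms playability cut-off, which is exactly why we chart frame times rather than relying on averages.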


The performance of our Radeon HD 7970s in CrossFire appears fairly similar on our AMD- and Intel-based platforms when we run at 1920x1080. Our system based on the FX-8350 encounters a couple of higher spikes, but the worst of these we see only reaches up to 40 ms.
It's worth noting that we're using Fraps to take these measurements (currently the only solution, short of capturing the output with a PCI Express-based frame grabber). Consequently, we're not representing the entire rendering pipeline. After comparing our recorded results to actual gameplay, however, we're confident that the most egregious performance interruptions are being illustrated. Moreover, we're not comparing SLI to CrossFire, so the frame-time spikes are truly attributable to each platform.


At 4800x900, frame times appear more variable overall (the bulk of the graph is wider), yet the extremes shrink (the largest spikes are smaller). Both platforms seldom cross the 30 ms barrier, and the AMD-based machine only spikes to 40 ms once.


You'll probably want to stop at 4800x900 or dial detail settings back to the Medium preset if 30-ms and greater frame times bother you. Ultra-quality details at this super-high resolution appear barely playable.
As mentioned, our frame-time measurements come from Fraps, which isn't necessarily ideal given where in the CrossFire rendering pipeline Fraps derives its data. So, we wanted to at least run a second title to gauge whether the numbers and "feeling" aligned. Charting these values tells us more than the worst frame time alone; it also shows how often the spikes occur and where they fall during the benchmark.


The benchmark that seems best-optimized for Intel's platform, Skyrim already appears devastating to AMD's FX-8350 at a mere 1920x1080. We're hoping that this artifact is a little easier to tolerate in an RPG, but spikes above 70 ms are certainly jolts you can "feel" while you're playing.


The FX-8350 gets hammered even harder at 4800x900, and the difference between AMD and Intel CPUs tells us that the graphics subsystem isn't to blame. Regardless of where Fraps takes its measurement, we simply cannot ignore the notably-higher spikes on AMD's flagship.


Both the AMD and Intel platforms fail our 50 ms upper limit at 5760x1080, at least when the game is set to the High quality preset. Somehow, the Core i7-3770K ducks in under 50 ms throughout the test using Ultra quality settings, showing far smaller spikes, even as its average frame time increases.
Today is AMD’s big chance to prove the value of its FX-8350 in a gaming environment, particularly with a price tag far lower than the competition from Intel. You see, it might appear that Intel has an advantage in our test because we picked its highest-end Ivy Bridge-based chip (the same opportunity given to AMD, by the way). But the price difference between the two doesn't escape us. Intel will need to justify its higher price in relation to the FX-8350.
But first, a look at power consumption and efficiency.

The FX-8350's stock power consumption doesn't look too terrible compared to Intel's, even though it's indeed higher. But we don't get the whole story from this chart, either. We didn't see AMD's chip running at its rated 4 GHz when it was under duress at stock settings. Rather, it dropped both its multiplier and voltage level under an eight-thread Prime95 workload to stay within its rated power envelope. Throttling artificially curbs the CPU's power consumption, and the big increases we see when the Vishera-based processor is overclocked come from fixed multiplier and voltage settings.
At the same time, games don't really utilize the FX-8350's ability to handle eight threads concurrently, and consequently never seem to trigger the same throttling mechanism. Also interesting is that the FX-8350, at its stock voltage setting, often exceeds the 1.35 V we set manually for overclocking. That explains why system power consumption doesn't change much between the stock and overclocked GPU load tests.

As mentioned, the stock FX-8350 doesn't throttle at all during gaming, since most titles aren't able to fully tax the chip. In fact, games actually enjoy a benefit from Turbo Core technology, which takes the CPU to 4.2 GHz. AMD’s biggest problem in the performance chart, then, is that Intel walks away with a noticeably higher average.

Using the average power consumption and average performance of all four configurations as the baseline for our efficiency chart, AMD's FX-8350 generates around two-thirds as much performance per watt as Intel's Core i7-3770K. If you’d like to run these calculations yourself, please note that we zeroed-out the average by subtracting one (100%) from the charted values.
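For readers who want to reproduce the normalization, here is a minimal sketch of the method. The performance and wattage numbers below are made-up placeholders, not our measured results; only the procedure (performance per watt, averaged across all four configurations as a 100% baseline, then "zeroed-out" by subtracting one) matches what we describe:

```python
# Hypothetical inputs: relative performance scores and average system wattage.
configs = {
    "FX-8350 stock":   {"perf": 90.0,  "watts": 220.0},
    "FX-8350 OC":      {"perf": 95.0,  "watts": 280.0},
    "i7-3770K stock":  {"perf": 100.0, "watts": 180.0},
    "i7-3770K OC":     {"perf": 110.0, "watts": 210.0},
}

# Performance per watt for each configuration.
ppw = {name: c["perf"] / c["watts"] for name, c in configs.items()}

# The average of all four configurations serves as the 100% baseline.
baseline = sum(ppw.values()) / len(ppw)

# "Zeroed-out" chart values: subtract one (100%) from each relative figure,
# so a configuration exactly at the baseline charts as 0%.
for name, value in ppw.items():
    print(f"{name}: {(value / baseline - 1) * 100:+.1f}%")
```

By construction, the four relative values average to exactly 100% before zeroing, so positive and negative bars on the chart balance around the baseline.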
When we talk about affordable hardware that performs well, we like to use phrases like "80% the performance for 60% the price." Those are always very honest numbers, since we make it a habit to measure performance, power, and efficiency. But they only capture the value of a single component, and components cannot operate on their own.
After adding up the parts used in today's benchmark analysis, the Intel-based system crested $1,900, while the AMD platform ran us $1,724, both without cases, peripherals, or operating systems. If we wanted to call both setups "complete" solutions, we could add an $80 chassis to give us $1,984 and $1,804 machines, respectively. Since we're adding cost to both boxes, AMD's overall $180 cost savings becomes a smaller percentage of the total price tag. In other words, the other pieces that go into a nice high-end PC serve to diminish AMD's value leadership.
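The shrinking-percentage effect described above is simple to quantify. This sketch uses the build totals from our parts list ($1,904 and $1,724, consistent with the $180 difference and the $1,984/$1,804 chassis-included figures) to show how the same fixed savings becomes a smaller share of the total as shared costs are added:

```python
# AMD's fixed $180 savings shrinks as a percentage of the build
# when identically-priced parts (like a chassis) are added to both.
intel_build = 1904.0   # bare Intel build from our parts list
amd_build = 1724.0     # bare AMD build
savings = intel_build - amd_build   # $180 either way

for extra, label in [(0, "bare build"), (80, "with an $80 chassis")]:
    pct = savings / (intel_build + extra) * 100
    print(f"{label}: ${savings:.0f} savings = {pct:.1f}% of the Intel total")
```

The savings stays at $180 in absolute terms, but its share of the total drops as the build fills out, which is why full-system pricing erodes AMD's value lead.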

That leaves us with two completely biased ways to compare price to performance. We can only hope that pointing this out upfront keeps us transparent as we present the numbers.
An AMD bias would only include the price of the motherboard and CPU, maximizing value, like so:

A third alternative would allow us to talk about the motherboards and CPUs as upgrades, assuming you already have cases, power supplies, memory, and storage lying around. Of course, you probably don't have a pair of Radeon HD 7970s left over from some old machine, so the most balanced approach we can take at least takes processors, platforms, and graphics into consideration. Therefore, we're adding the $800 Tahiti-based duo to our shopping list.

The only way we can make AMD's FX-8350 look like a better gaming value than Intel's Core i7-3770K (specifically in the games and at the settings we used to test) is if the rest of the system is free. Because the rest of the system is never free, the FX-8350 never serves up better high-end gaming value.
From now on, we'll need to limit the use of AMD's flagship to systems already bottlenecked by their graphics cards. A less expensive CPU is more attractive when it isn't affecting performance negatively.
Intel Bias is in the (AMD) Cards?
Our benchmark results have long shown that ATI's graphics architectures are more dependent on a strong processor than Nvidia's. As a result, we usually arm our test beds with high-end Intel CPUs when it comes time to benchmark high-end GPUs, sidestepping platform issues that might adversely affect results designed to isolate graphics performance.
We were hoping that AMD's Piledriver update would break that trend, but even a handful of impressive advancements aren't enough to match the effectiveness of AMD's graphics team. Might Steamroller be the evolutionary step forward needed to unleash the GCN architecture's peak performance?



