AMD and Intel continue serving up increasingly faster CPUs. But graphics card performance is accelerating even faster. Is there still such a thing as processor-bound gaming? We take two Radeon HD 7970s, high-end desktop CPUs, and a few games to find out.
We've seen processor performance double every three to four years. And yet, some of the most demanding game engines we've tested are as old as the Core 2 Duo that still resides in my office PC. Surely, CPU bottlenecks would be a thing of the past, right? Well, as it turns out, GPU performance speeds ahead at an even faster rate than that of host processors. And so, the debate over whether to buy a faster CPU or even more graphics muscle rages on.
There comes a point where it's pointless to continue the battle, though. For us, that happened when our games ran smoothly at our largest monitor's 2560x1600 native resolution. It simply didn't matter if a faster component took us from an average of 120 to 200 frames per second.
In response to the stagnation caused by increasingly fast components running into limited resolutions, AMD introduced its Eyefinity technology, and Nvidia responded with Surround. Both expand beyond a single display, making 5760x1080 a very playable resolution on high-end GPUs. In fact, a trio of 1920x1080 displays is both less expensive and more engrossing than a single 2560x1600 screen, giving us the perfect excuse to splurge on some extra pixel-pushing power.
But does a display surface stretching 5760x1080 require any additional processing muscle in order to prevent bottlenecks? Ah, suddenly that becomes an interesting question again.

Up until now, when we've used AMD's GPUs, we've typically paired them with its competition's processors. Is such a move backed by hard data? Previously, based on plenty of benchmark results, we would have said so. However, the company has a new architecture available, so we bought a boxed FX-8350 to challenge prior convention. After all, there was a lot to like in AMD FX-8350 Review: Does Piledriver Fix Bulldozer's Flaws?
Entering this contest at a significant pricing disadvantage, Intel’s Core i7-3770K needs to prove not only that it's faster than the AMD chip in games, but that it's fast enough to overcome its price premium in our value analysis.
Although both of the motherboards we're using come from Asus' Sabertooth family, the company charges more for its LGA 1155-equipped model, further complicating the value story for Intel. We picked these platforms specifically to keep the playing field as level as possible from a performance standpoint, without pricing getting in the way.
- Chasing Bottlenecks To Eyefinity (But Not Beyond)
- Test Settings And Benchmarks
- Results: 3DMark, Aliens Vs. Predator, And Metro 2033
- Metro 2033, Second By Second
- Results: Battlefield 3, F1 2012, And Skyrim
- Battlefield 3, Frame By Frame
- Skyrim, Frame By Frame
- Power And Efficiency
- Can AMD's FX Keep Up With Its Radeon HD 7970?


I disagree. What's needed is even stronger push on the developers to use more than four cores, effectively, not some 100% load on one core and 10% on the other five cores.
I thought more cores were for multi-tasking, as in having multiple programs running simultaneously. It would suck to fire up BF3 and have everything else running on my PC simply shut down because the CPU is at 100% utilization. How would I be able to play BF3 while streaming/playing some HD content on the TV that's hooked up to that same computer?
Single-core performance... look up some other benchmarks, like the ones that use iTunes to encode, or from when I believe WinZip went from single-core to multi-core. They show the GREAT difference more cores can make to performance.
The problem is that few games and few programs really scale. Sure, pro applications almost always take advantage of whatever you put in them, but consumer software is a different story.
More cores can offer more multitasking, but they also allow the load to be shifted from one core to all four cores, giving better overall performance when properly coded.
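The point the commenters are circling can be made concrete. As a minimal sketch (not from the article; the function names and the sum-of-a-range workload are illustrative assumptions), this is what "shifting the load from one core to all four" looks like in code: a CPU-bound job is split into chunks, and each chunk runs in its own worker process so the OS can schedule them on separate cores.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """CPU-bound work for one core: sum one slice of the range."""
    start, end = bounds
    return sum(range(start, end))

def parallel_sum(n, workers=4):
    """Split [0, n) into one chunk per worker and combine the results.

    With one worker this is the '100% load on one core' case;
    with four workers the same total load is spread across cores.
    """
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer either way; only the core utilization differs.
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The catch, as the comments note, is that the work has to be divisible like this in the first place: a game loop with tightly coupled simulation steps is much harder to carve into independent chunks than an archive or video encode, which is why WinZip-style workloads scaled long before most game engines did.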
Would've liked to see how a 3570K does against the FX-8350 running the same CrossFire setup. IMO, the price/performance would've tipped further in favor of Intel in configs like this.
Lastly, would've liked some newer games like Sleeping Dogs, Far Cry 3, and Max Payne 3 in the benches instead of the old BF3 single-player. I hear BF3's single-player doesn't stress CPUs that much; maybe BF3 skewed the benches in favor of AMD as much as Skyrim favored Intel.
Why not just use two computers?
If you could buy $4 RAM instead of $40 RAM, but the $4 RAM made your system 50% slower, would you buy it? No, because it would make your $1000 PC perform like a $500 PC.
You can only do per-component value when you're only comparing one component. In this case, the graphics cards and CPUs were being tested as a pairing (just like the title says).
Again, I enjoyed reading the article. Get ready for b!tching by fanboys... Tom.
No need. My sister's FX-8350 consistently beats my 3570K at 4.2 GHz in most benchmarks. We both run GTX 480s.
What world do you live in? I paid 200 euros for my i5-3570K while my sister's 8350 cost ~160 and gets better performance.
Why don't use AMD FX 6x00? They are cheaper, almost 60€ in my country. You have compared AMD FX 4x00 already, but i don't see any review or article using a FX6x00 and i think it's the sweet spot for an all-in-one PC (game and work, with 8GB at least of RAM).
Sorry for my english.
Your processor is only as good as the programming that supports it, and Intel pays developers to use code that favors its architecture and that is missing on AMD's.
Can you run the whole test again with a $200 Intel quad-core, and ditch the old DX9 game engines too?