At least on the desktop, dual-core processors rarely helped bolster performance when they were first introduced. Most mainstream apps simply hadn't been optimized for multiple cores; that sort of parallelism lived mainly in the server and workstation space, where multi-socket motherboards full of single-core chips cranked on complex problems in parallel. Games, meanwhile, were almost exclusively written to run on one core.
Programming with threading in mind isn't easy, and it took developers years to adapt to a world where CPUs seemed destined to improve performance through parallelism rather than the 10 GHz clock rates Intel had foreshadowed back in 2000. Slowly, though, the applications most able to benefit from multiple cores working in concert have been rewritten to utilize modern hardware.
Want proof? Just have a look at our benchmark suite. Only two pieces of the software we test remain single-threaded: Lame and iTunes. Everything else, to one degree or another, is threaded. Content creation, compression, and even productivity apps tax the highest-end four- and six-core CPUs.
Games, on the other hand, have taken longer to "get there." With a primary emphasis on graphics performance, it's not surprising that single-threaded engines still exist. However, spawning additional threads and utilizing a greater number of cores allows ISVs to implement better artificial intelligence or add more rigid bodies that can be affected by physics.
Increasingly, then, we're seeing more examples of games exhibiting better performance when we use a quad-core processor. They're still the exception, though, rather than the rule. And that's why the great single-threaded performance of Intel's Sandy Bridge architecture (and later Ivy Bridge) dominated most of our processor-bound game testing. Back in the day, dual-core Pentiums went head-to-head against quad-core CPUs from AMD, and came out in the lead.
It's now clear that gunning for higher and higher clock rates is not the direction AMD and Intel are going. They're both building desktop-oriented CPUs with as many as four modules (in AMD's case) or six cores (in Intel's). In turn, game developers continue getting better about utilizing available on-die resources. We're clearly at a point where you need at least a dual-core CPU to enjoy today's hottest titles, if for no other reason than sticking with a single-core chip would put you about eight years back in processor technology. But is there a reason to skip over the dual-core models and jump right into the world of gaming on a quad-core CPU?
That's what we're hoping to answer today, and we have a new tool to help us.
As you may have noticed in a handful of our more recent graphics stories, we're working toward a new type of test standard that measures the impact of changes in frame rate latency. Historically, average frame rate, represented by frames per second (FPS), was our primary go-to for comparing the performance of graphics cards relative to each other. However, Scott Wasson of The Tech Report has put in a tremendous effort demonstrating where average frame rate comes up short in characterizing your experience gaming on a specific graphics subsystem.
By now, whether through Scott's work or our own, you're probably familiar with the phenomenon known as micro-stuttering, which is often associated with multi-card CrossFire or SLI configurations. It occurs when the amount of time that passes between frames appearing on-screen is inconsistent, resulting in what appears to be choppy gameplay despite high average frame rates. For example, two PCs that both generate an average of 24 frames per second may convey very different experiences, one stuttery and one smooth, if the time between frames is less regular on one machine than on the other.
In the chart below, System A sees a consistent amount of time between those 24 frames, while System B does not. Therefore, you may notice a stuttering effect on System B, even though both machines still average 24 FPS.

As you can see, System B has four frames that take significantly longer to render, while System A is more consistent. The issue is easy to identify once we go beyond the frames-per-second average and look at the frame times within each second. But because most of our time-based benchmarks run for at least a minute in order to generate plenty of data, a single run at 60 FPS produces 3,600 data points or more. That's simply too much to squeeze onto an easily digestible chart. Zooming in on a portion of the graph helps, but how do you pick the most relevant slice to dissect? There's no easy answer.
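To make that distinction concrete, here's a minimal Python sketch using hypothetical frame times patterned after the chart above; the exact values are illustrative, not measured data:

```python
# Hypothetical frame times (milliseconds) for two systems that both
# render 24 frames in one second, echoing the chart above.
system_a = [1000 / 24] * 24                # evenly spaced: ~41.7 ms every frame
system_b = [115 / 3] * 20 + [175 / 3] * 4  # mostly ~38.3 ms, plus four ~58.3 ms stalls

for name, times in (("System A", system_a), ("System B", system_b)):
    fps = len(times) / (sum(times) / 1000)  # frames divided by elapsed seconds
    print(f"{name}: {fps:.1f} FPS average, frame times "
          f"{min(times):.1f}-{max(times):.1f} ms")
```

Both systems report an identical 24 FPS average, yet System B's four long stalls are exactly the sort of thing the average hides.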
In addition, raw frame times aren't the be-all and end-all of performance analysis, since high frame rates naturally correspond to low frame times and low frame rates to high ones; the absolute numbers aren't directly comparable across systems. What we're really after is the variance: the amount of time that anomalous frames stray from the ideal norm.
Our preference is to take this data and put it into a simple, meaningful format that's easy to understand and analyze. To do this, we won't scrutinize the individual frame times, but we'll look closely at the difference (or variance) between the time it takes to display a frame compared to the ideal time it should take to display the frame based on the average of the frames surrounding it.
For example, in the chart above, the average frame time for System B is just under 40 milliseconds. But four of System B's frames suffer abnormally long lag times, running roughly 10 to 20 milliseconds beyond that 40-millisecond average.
To describe this phenomenon, the frame time variance chart we're introducing today includes the average time variance across the entire benchmark, the 75th percentile time variance, and the 95th percentile time variance. Percentiles show us how bad things get across a larger sample. The 75th percentile result, for instance, is the longest (worst) frame time variance we see within the best 75 percent of the samples; the 95th percentile extends the same idea to the best 95 percent.
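Here's a minimal Python sketch of that calculation. The five-frame window of "surrounding" frames and the decision to count only slower-than-normal frames are simplifications for illustration, not the exact parameters of our tooling:

```python
import statistics

def frame_time_variances(frame_times, window=5):
    """For each frame, compare its render time to the average of the
    frames surrounding it. The window size is an assumption made for
    this illustration."""
    variances = []
    for i, t in enumerate(frame_times):
        lo = max(0, i - window)
        hi = min(len(frame_times), i + window + 1)
        neighbors = frame_times[lo:i] + frame_times[i + 1:hi]
        local_avg = sum(neighbors) / len(neighbors)
        # Count only frames slower than the local norm (another simplification).
        variances.append(max(0.0, t - local_avg))
    return variances

def percentile(data, pct):
    """Worst variance within the best pct percent of samples (nearest rank)."""
    ordered = sorted(data)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

# Hypothetical trace: steady ~40 ms frames with a couple of long stalls.
trace = [40.0] * 50 + [60.0] + [40.0] * 50 + [55.0] + [40.0] * 50

v = frame_time_variances(trace)
print(f"average variance: {statistics.mean(v):.2f} ms")
print(f"75th percentile:  {percentile(v, 75):.2f} ms")
print(f"95th percentile:  {percentile(v, 95):.2f} ms")
```

A perfectly consistent trace would produce variances of zero across the board; the stalls in this synthetic trace show up only in the average and the upper percentiles, which is exactly the behavior we want the metric to expose.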
Below, you'll find an example of how our frame time variance chart would describe the difference between System A and System B in the consecutive frame time chart presented above.

As you can see, this chart does not reflect raw frame rates. That's not its job. And that's fine with us because we're still going to continue capturing average frame rate for the foreseeable future. It may not tell the entire performance story, but it remains an important metric. We're simply adding the new data to help fill in the blanks.
Our hope is that, by comparing the results across different CPUs, we'll be able to identify issues where some models experience significantly higher latencies than we previously quantified. As you'll see in the results, each game has a different average frame time variance, too.
Testing LGA 1155-, Socket AM3-, Socket FM1-, and Socket FM2-based CPUs requires four different platforms. We're using the same memory, hard drive, and graphics card on all four to eliminate as many variables as possible.
We're using Gigabyte's Z77X-UP7 for our LGA 1155-based platform. This is the company's flagship offering for that processor interface. The similar (but lower-priced) Z77X-UP5 earned our Recommended Buy award in Six $220-280 Z77 Express-Based Motherboards, Reviewed, so the -UP7 appears to be a great choice for our testing.

For the Socket AM3+-based system, we're using Gigabyte's 990FXA-UD5 motherboard. This product achieved the highest CPU overclock in our Five $160 To $240 990FX-Based Socket AM3+ Motherboards comparison, and distinguishes itself with true four-way SLI support.
Our Socket FM1-based platform is Asus' F1A75-V Pro, a board that demonstrated excellent overclocking potential in Professional Help: Getting The Best Overclock From AMD's A8-3870K.
Finally, the Socket FM2-based platform is represented by Asrock's FM2A85X Extreme6. This motherboard garnered our Tom's Hardware 2012 Approved award in Six Socket FM2 Motherboards For AMD's Trinity APUs thanks to a combination of high performance and low price.
As for the graphics card, we chose one of the fastest single-GPU boards available, MSI's GeForce GTX 680 Lightning. Our goal is to isolate CPU performance, so we made this choice with the intention of de-emphasizing GPU bottlenecks.

We're testing the 18 most notable CPUs under $200, including some previous-gen products (the Core i5-2500K costs more than $200, but it's included for comparison purposes) to gauge the improvements that have been made. The Athlon II X4s didn't make it because they employ the same micro-architecture and cache configuration as the Llano-based APUs, so they'd be redundant. AMD's A8-3870K processor accurately represents the performance of a 3.0 GHz Athlon II X4.
The Corsair Vengeance memory kit we used is rated at 1000 MHz (2000 MT/s) with 10-10-10-27 2T timings via a built-in XMP profile, and we set it to that speed whenever possible. Due to limitations on certain platforms, we dropped the modules to 933 MHz (1866 MT/s) on the Pentium G860, and to 800 MHz (1600 MT/s) at 9-9-9-24 2T timings on the Athlon II X3 450.
| Interface | Socket FM1 | Socket FM2 | Socket AM3+ | LGA 1155 |
|---|---|---|---|---|
| CPU/APU | AMD A4-3400 (Llano), 2.7 GHz<br>AMD A8-3870K (Llano), 3.0 GHz | AMD A4-5300 (Trinity), 3.4 GHz base, 3.6 GHz Turbo Core<br>AMD A10-5800K (Trinity), 3.8 GHz base, 4.2 GHz Turbo Core | AMD Athlon II X3 450 (Rana), 3.2 GHz<br>AMD Phenom II X4 980 (Deneb), 3.7 GHz<br>AMD Phenom II X6 1100T (Thuban), 3.3 GHz base, 3.7 GHz Turbo Core<br>AMD FX-4170 (Zambezi), 4.2 GHz base, 4.3 GHz Turbo Core<br>AMD FX-6200 (Zambezi), 3.8 GHz base, 4.1 GHz Turbo Core<br>AMD FX-8120 (Zambezi), 3.1 GHz base, 4.0 GHz Turbo Core<br>AMD FX-4300 (Vishera), 3.8 GHz base, 4.0 GHz Turbo Core<br>AMD FX-6300 (Vishera), 3.5 GHz base, 4.1 GHz Turbo Core<br>AMD FX-8350 (Vishera), 4.0 GHz base, 4.2 GHz Turbo Core | Intel Pentium G860 (Sandy Bridge), 3.0 GHz<br>Intel Core i3-2120 (Sandy Bridge), 3.3 GHz<br>Intel Core i3-3220 (Ivy Bridge), 3.3 GHz<br>Intel Core i5-2500K (Sandy Bridge), 3.3 GHz base, 3.7 GHz Turbo Boost<br>Intel Core i5-3550 (Ivy Bridge), 3.3 GHz base, 3.7 GHz Turbo Boost |
| Motherboard | Asus F1A75-V Pro, Chipset: AMD A75 | ASRock FM2A85X Extreme6, Chipset: AMD A85X | Gigabyte 990FXA-UD5, Chipset: AMD 990FX | Gigabyte Z77X-UP7, Chipset: Intel Z77 Express |
| Networking | On-board Gigabit LAN controller | | | |
| Memory | Corsair Vengeance 2 x 2 GB, 2000 MT/s, CL 10-10-10-24-2T (Pentium G860 at 1866 MT/s, CL 13-13-13-34-2T; Athlon II X3 at 1600 MT/s, CL 9-9-9-24-2T) | | | |
| Graphics | MSI GTX 680 Lightning, 1110 MHz GPU (1176 MHz max boost), 2 GB GDDR5 at 1502 MHz (6008 MHz effective), set to -105 MHz core to approximate the 1006 MHz reference clock | | | |
| Hard Drive | Western Digital Caviar Black 1 TB, 7,200 RPM, 32 MB cache, SATA 3Gb/s | | | |
| Power | ePower EP-1200E10-T2, 1200 W, ATX12V, EPS12V | | | |
| Software and Drivers | | | | |
| Operating System | Microsoft Windows 8 x64 | | | |
| DirectX | DirectX 11 | | | |
| Graphics Drivers | Nvidia 310.70 WHQL | | | |
| Benchmark Configuration | |
|---|---|
| 3D Games | |
| Metro 2033 | Version 1.0.0.1, Built-In Benchmark |
| Far Cry 3 | Version 0.1.0.1, Tom's Hardware Guide Custom Benchmark, Fraps runs |
| The Elder Scrolls V: Skyrim | Version 1.3.22.0, Tom's Hardware Guide Custom Benchmark, Fraps runs |
| DiRT Showdown | Version 1.2.0.0, Built-In Benchmark |
| StarCraft II | Version: 1.5.4, Tom's Hardware Guide Custom Benchmark, Fraps runs |
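For the Fraps-based tests above, the raw data is a log of per-frame timestamps. Here's a minimal sketch of how such a log can be turned into individual frame times; we're assuming the common Fraps frametimes layout of a header row followed by a frame index and the cumulative elapsed milliseconds, and the file name is hypothetical:

```python
import csv

def load_fraps_frame_times(path):
    """Convert a Fraps 'frametimes' log into per-frame durations (ms).
    Assumes a header row, then one row per frame holding a frame index
    and the cumulative elapsed time in milliseconds."""
    timestamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            timestamps.append(float(row[1]))
    # Successive differences give the time each individual frame took.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

# Hypothetical file name for illustration.
frame_times = load_fraps_frame_times("skyrim frametimes.csv")
print(f"{len(frame_times)} frames, "
      f"{len(frame_times) / (sum(frame_times) / 1000):.1f} FPS average")
```

From there, the same variance and percentile math described earlier can be applied directly to the returned list.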
Metro 2033 is an older title, but it still presents modern hardware with a taxing workload. It's notorious for being GPU-limited, so we're not expecting CPU performance to make a massive difference in this game's outcome (at least, based on the data we've generated up until now).

Most of our CPUs maintain an average of at least 50 FPS. The dual-core A4s and the Athlon II X3 fall under 40, and the Pentium G860 sits in between with a 47 FPS average.
The minimums are similar across the board, suggesting that something other than the varying CPUs is dictating the performance floor.
Compare these results to what we saw in Picking A Sub-$200 Gaming CPU: FX, An APU, Or A Pentium? a year ago. You'll notice that the average frame rates are up on almost all of the CPUs except for Intel's Pentium G860 and AMD's Athlon II X3. At the time, the Radeon HD 7970 we used was brand new, and its drivers were decidedly immature, which may help explain why a similar-performing GeForce GTX 680 fares better today.

It's quite clear that the processors we're testing limit performance through most of the first half of this benchmark, while the GPU load picks up about two-thirds of the way in, pulling all of our test beds down under 20 FPS.

In general, the results are very good news for folks with inexpensive processors, since we're seeing low latencies between consecutive frames. The dual-core A4s are the worst performers by far, with a 95th percentile lag time of about 20 milliseconds.
How about a bit of perspective? On average, the A4 APUs run at about 35 FPS. That 20-millisecond lag would insert a single frame corresponding to around 20 FPS. If that happened once every 100 frames at 35 FPS, it would occur almost once every three seconds. That's an oversimplification, since the percentile is pulled from the entire course of the benchmark, but it's a useful way of imagining what's going on.
On the other hand, the Pentium G860 and Athlon II X3 450 experience a roughly 10-millisecond lag at the 95th percentile, representing a single frame dropping from 40 FPS to an equivalent of about 30 FPS. That's not as big of a difference. Only the FX-4300, A10-5800K, and FX-4170 have a 95th percentile lag of more than eight milliseconds. The rest of the field is under six, a result we consider practically insignificant.
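If you'd like to verify that back-of-the-envelope math, here it is in a few lines of Python:

```python
# Re-creating the back-of-the-envelope math from the text above.
def slow_frame_fps(avg_fps, lag_ms):
    """Equivalent FPS of a single frame rendered at the average
    frame time plus the 95th-percentile lag."""
    avg_frame_ms = 1000 / avg_fps
    return 1000 / (avg_frame_ms + lag_ms)

print(f"A4 APUs (35 FPS, +20 ms):     {slow_frame_fps(35, 20):.1f} FPS-equivalent frame")
print(f"G860/X3 450 (40 FPS, +10 ms): {slow_frame_fps(40, 10):.1f} FPS-equivalent frame")
print(f"One slow frame per 100 at 35 FPS recurs every {100 / 35:.1f} seconds")
```

The first case works out to roughly 20.6 FPS and the second to about 28.6 FPS, matching the figures cited above.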
With Metro 2033 behind us, along with our first look at average consecutive frame time difference, let's apply the same methodology to Far Cry 3.

This time, the Pentium, Athlon II X3, and dual-core APUs are brutalized at 30 FPS and under, while the rest of the pack delivers smooth frame rates. This is in line with our CPU scaling results from Far Cry 3 Performance, Benchmarked, except that the Phenom II X4 fares much better. We're benchmarking with more demanding graphics settings this time around, so perhaps that is helping normalize the processor performance.

The order falls in place just as we'd expect it to when we look at frame rates over time.

We didn't expect this; the Intel chips generate the highest consecutive frame latencies in Far Cry 3. With that said, those latencies are pretty low. They're almost irrelevant, in fact, until we get down to the Core i3-2120 and Pentium G860.
When it first came out, Skyrim was brutally hard on any processor other than a Core i5 or i7. But we've seen a number of patches that help fix the performance issues that previously plagued this title.

Only the Athlon II X3 and A4 APUs fall below the 50 FPS average, and only the A4s drop under 40. That's a lot different from what we saw in Picking A Sub-$200 Gaming CPU: FX, An APU, Or A Pentium? a year ago.
Intel's dual-core Pentium fares much better in Skyrim than it did in Metro 2033 or Far Cry 3.

An analysis of frame rates over time turns up no surprises. The only processor that falls below a 30 FPS minimum is the A4-5300.

The consecutive frame latencies are both short and consistent. Even the slowest dual-core APUs demonstrate only a six-millisecond lag at the 95th percentile.
DiRT Showdown features an advanced lighting model, but how does the CPU factor in?

Once again, the Pentium, Athlon II X3, and dual-core APUs struggle at the back of the pack. All of the other chips achieve at least 60 FPS on average, and no fewer than 50 FPS. Again, that's a different story altogether compared to what we saw in last year's sub-$200 shoot-out, where the Pentium tied AMD's A8-3870K in DiRT 3.

The frame rate over time chart suggests that even AMD's A4 APUs might be playable, since they hug 30 FPS most of the time.

DiRT Showdown exhibits low consecutive frame latencies. Only the A4-5300 hits 10 milliseconds; the rest of the field is under eight.
StarCraft II is one of those titles that consistently shows Intel's CPUs in the lead. We've already seen a number of significant changes from our previous look at sub-$200 processors, though. Might a string of patches, including a post-processing anti-aliasing option, alter the outcome today?

Intel continues to dominate in StarCraft. But its Pentium G860 no longer embarrasses the competition from AMD like it did last year. In fact, it's now in the lower third of our line-up.

Charting out frame rate over time shows how this test speeds up as the benchmark progresses. This is because our test starts with a large number of computer-controlled units. The demand on the system lessens as they are destroyed.

On average, we see high consecutive frame time differences. Given the demanding start of our test, though, and the frame rate increase that occurs as the benchmark progresses, that's exactly what we'd expect to see.
The Llano-based APUs and Athlon II X3 get hit the hardest.
With our testing complete, we wanted to plot out average performance at 1920x1080, with Intel's Pentium G860 standing in as our 100% baseline.
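By way of illustration, here's roughly how a baseline-normalized aggregate like this can be computed. The games and numbers below are placeholders, and averaging the normalized scores is a simplification for demonstration rather than a description of our exact method:

```python
# Illustrative only: per-game average frame rates for the baseline
# Pentium G860 and one other CPU. These numbers are placeholders,
# not our measured results.
baseline = {"Metro 2033": 47.0, "Skyrim": 60.0, "DiRT Showdown": 52.0}
candidate = {"Metro 2033": 55.0, "Skyrim": 75.0, "DiRT Showdown": 68.0}

# Normalize each game to the G860 (= 100%), then average the ratios so
# no single high-frame-rate title dominates the aggregate.
relative = [candidate[game] / baseline[game] * 100 for game in baseline]
print(f"aggregate score: {sum(relative) / len(relative):.1f}% of the Pentium G860")
```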
What you see below is a very different aggregate chart compared to the one that showed up in Picking A Sub-$200 Gaming CPU: FX, An APU, Or A Pentium?, particularly when it comes to the Pentium. The rest of the results fall close to where we would have expected them, based on our previous testing. AMD's processors do come closer to Intel's last-gen and Ivy Bridge-based Core i3s. Indeed, the FX-8350, FX-6300, and FX-4300 are nipping at the heels of the entry-level Intel chips. The Phenom II X4 and X6 are as well, though neither is available anymore. Even quad-core APUs like the A10-5800K and A8-3870K hold their own.
The performance curve starts to fall off pretty quickly once we look at the Pentium G860, Athlon II X3 450, and the two A4 APUs.

It isn't entirely clear what changed over the last year, since our previous look at processors under $200, to affect performance. But we are using some new games, older games that have been patched, new drivers, and a new operating system, so all of that is in play. Regardless, AMD's FX processors, its two-generation-old Phenom II X6 and X4 CPUs, and the company's Athlon II X4 (represented here by the A8-3870K) look a little better compared to Intel's Core i3 than they used to. In contrast, the Sandy Bridge-based Pentium G860 loses ground relative to where it was.
The Pentium isn't bad, to be sure. In fact, for $70, it still does really well against the FX chips we tested, which cost $125 and up, use quite a bit more power, and generate significantly more heat. Nevertheless, we see the trend toward more threaded titles continuing, compelling us to start distancing ourselves from dual-core, non-Hyper-Threaded CPUs in 2013. At least for the time being, whatever quad-core Athlon II and Phenom II processors are still available seem like smart buys.
Once those dry up, what then? Intel still holds the aces. For your dollar, the Core i5 has no competition above $160. At $130, the Core i3-3220 is tough to beat. It no longer humiliates the FX line-up in games thanks to AMD's most recent architectural update, but it's still cheaper, faster, and more power-friendly than most of the Vishera-based models.
Fortunately for AMD, its chips fare better in the non-gaming components of our benchmark suite, where its modular architecture is better able to benefit from today's threaded software. In a general-purpose workstation, that's certainly something to think about. But in a pure gaming machine, there's just no ignoring the effectiveness of Intel's Sandy and Ivy Bridge designs.
