- Are We There Yet?
- Test Methodology: How Do You Compare Multiple Cores?
- Test System Setup And Benchmarks
- Synthetic Benchmarks: 3DMark And PCMark Vantage
- Synthetic Benchmarks: SiSoft Sandra
- Application Benchmarks: Audio Encoding
- Application Benchmarks: Video Encoding
- Application Benchmarks: 2D And 3D Graphics
- Application Benchmarks: General Usage
- Game Benchmarks
- Performance Analysis And Conclusion
In the early years of the new millennium, with CPU clock speeds finally accelerating past the 1 GHz mark, some folks (Ed.: including Intel itself) predicted that the company's new NetBurst architecture would reach speeds of 10 GHz in the future. PC enthusiasts looked forward to a new world where CPU clocks kept increasing at an accelerating pace. Need more power? Just add clock speed.
Newton’s apple inevitably fell soundly on the heads of those starry-eyed dreamers who looked to MHz as the easiest way to continue scaling PC performance. Physics doesn’t allow for exponential increases in clock rate without exponential increases in heat, and there were a number of other challenges to consider, such as manufacturing technology. Indeed, the fastest commercial CPUs have been hovering between 3 GHz and 4 GHz for a number of years now.
Of course, progress can’t be stopped when money is involved, and with folks willing to shell out cash for more powerful computers, engineers set out to find ways to increase performance by improving efficiency rather than relying solely on clock speed. Parallelism presented itself as a solution--if you can’t make the CPU faster, well, why not add additional compute resources?
The Pentium EE 840, the first commercially available dual-core CPU
The trouble with parallelism is that software has to be specifically written to run in multiple threads--it doesn't offer an immediate return on investment the way clock speed does. Back in 2005, when the first dual-core CPUs were seeing the light of day, they didn't offer much in the way of tangible performance increases because so little desktop software properly supported them. In fact, dual-core CPUs were slower than single-core CPUs in the great majority of tasks, since single-core CPUs were available at higher clock speeds.
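To make the point concrete, here is a minimal sketch (in Python, purely for illustration--the function names are our own, not from any benchmark in this article) of what "written for multiple threads" means: the program itself has to carve the work into independent chunks before extra cores can do anything with it. A higher clock speed helps unmodified code; a second core does not.

```python
# Illustrative only: splitting one computation into independent chunks
# so that a thread pool can work on them concurrently.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one independent chunk of work."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    """Sum 0..n-1 by dividing the range among `workers` threads.

    Note: in CPython the Global Interpreter Lock prevents pure-Python
    CPU-bound threads from actually running in parallel; real speedups
    need processes or code that releases the GIL. The structure of the
    decomposition, however, is the same either way.
    """
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Produces the same answer as the serial sum(range(1_000_000)).
    print(parallel_sum(1_000_000))
```

The decomposition step is exactly what most 2005-era desktop software lacked: a single-threaded program presents the CPU with one chunk, so a second core simply sits idle.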
However, that was four years ago, and a lot has changed. Many software developers have since been hard at work optimizing their applications to take advantage of multiple cores. Single-core CPUs are now actually hard to find, and two-, three-, and four-core CPUs are the norm.
This raises the question: how many CPU cores are right for me? Is a triple-core processor good enough for gaming, or should you splurge on a quad-core chip? Is a dual-core CPU good enough for the average user, or do more cores really make a difference? Which applications are optimized for multiple cores, and which ones respond only to specifications like frequency or cache size?
We thought it would be a good time to run some tests with apps from our updated benchmark suite (there are still more to come, too), running the gamut of one-, two-, three-, and four-core configurations to illustrate what multi-core CPUs really offer in 2009.