Are We There Yet?
In the early years of the new millennium, with CPU clock speeds finally accelerating past the 1 GHz mark, some folks (Ed.: including Intel itself) predicted that the company's new NetBurst architecture would eventually reach 10 GHz. PC enthusiasts looked forward to a world where CPU clocks kept climbing at an accelerating pace. Need more power? Just add clock speed.
Newton’s apple inevitably fell soundly on the heads of those starry-eyed dreamers who looked to MHz as the easiest way to keep scaling PC performance. Physics doesn’t allow for exponential increases in clock rate without corresponding increases in power draw and heat, and there were other challenges to consider as well, such as manufacturing technology. Indeed, the fastest commercial CPUs have hovered between 3 GHz and 4 GHz for a number of years now.
Of course, progress can’t be stopped when money is involved, and with folks willing to shell out cash for more powerful computers, engineers set out to increase performance by improving efficiency rather than relying solely on clock speed. Parallelism presented itself as a solution--if you can’t make the CPU faster, why not add more compute resources?
The trouble with parallelism is that software has to be specifically written to run in multiple threads--it doesn't offer an immediate return on investment the way clock speed does. Back in 2005, when the first dual-core CPUs were seeing the light of day, they didn’t offer much in the way of tangible performance increases because so little desktop software properly supported them. In fact, most dual-core CPUs were slower than single-core CPUs in the great majority of tasks, because single-core chips were available at higher clock speeds.
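The point that extra cores only help when software is explicitly written for them can be sketched in a few lines. The snippet below is a hypothetical illustration, not something from the article: it splits a CPU-bound sum across a pool of workers, one per core reported by the OS. The programmer has to partition, dispatch, and merge the work by hand--which is exactly why clock-speed gains were "free" and core-count gains were not. (Threads keep the example self-contained; in CPython a genuinely CPU-bound job would use processes instead, because of the global interpreter lock.)

```python
import os
from concurrent.futures import ThreadPoolExecutor

def sum_squares(bounds):
    """Serial kernel: sum of i*i over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=None):
    """Split [0, n) into one chunk per worker and merge the partial sums.

    Without this explicit split, the extra cores would simply sit idle --
    the work does not parallelize itself.
    """
    workers = workers or os.cpu_count() or 1
    step = n // workers
    chunks = []
    for w in range(workers):
        hi = n if w == workers - 1 else (w + 1) * step  # last chunk takes the remainder
        chunks.append((w * step, hi))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares, chunks))
```

The same pattern holds in any language: a single-threaded program sees no benefit from a second, third, or fourth core, while code structured like the above scales (bandwidth and synchronization permitting) with the number of workers.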
However, that was four years ago and a lot has changed since. Many software developers have been hard at work optimizing their applications to take advantage of multiple cores. Single-core CPUs are now actually hard to find, and two-, three-, and four-core CPUs are the norm.
Which raises the question: how many CPU cores are right for you? Is a triple-core processor good enough for gaming, or should you splurge on a quad-core chip? Is a dual-core CPU good enough for the average user, or do more cores really make a difference? Which applications are optimized for multiple cores, and which respond only to specifications like frequency or cache size?
We thought it would be a good time to run some tests with apps from our updated benchmark suite (there are still more to come, too), running the gamut of one-, two-, three-, and four-core configurations to illustrate what multi-core CPUs really offer in 2009.
And that's a neat trick for creating a standardized platform for the tests, eliminating the architectural differences between single and various multi-core processors.
Since I see a lot of Tom's articles considering power efficiency, and read a lot of comments asking for underclocking results, it would have been nice to include some data on power usage for each configuration. Does disabling a core (or three) significantly reduce power consumption? What about temps?
Oh, such things already exist, whaddya know :)
But what I'm most interested in is what would happen when you move this to a Core i7. It seems to me that some of the apps that see a slowdown moving to four cores are likely bumping into bandwidth and bus arbitration overheads, as the Q6600 is essentially two Core 2 Duos packaged on the same chip, sharing the FSB. The Core i7 eliminates this bottleneck, and I'd be willing to bet the performance decrease going from three to four cores goes away as well. Once you're playing around with the i7, you can toy with Turbo Boost and Hyper-Threading too, but it'd be most interesting to directly compare the two architectures based on real cores.