Sandy Bridge-E: Does The E Stand For Efficiency?
Intel sells the fastest desktop processors you can buy; this much is known. Although some of the company's offerings are disappointingly neutered by locked ratio multipliers, the ones that aren't regularly turn in new speed records thanks to ambitious overclockers. Topping 4 GHz is no problem, even with six cores and 15 MB of shared L3 cache pushing the complexity of Intel's chips into the billions of transistors. But what happens to efficiency when such a large piece of silicon is pushed to its limits?
It's a good question. As we showed in Core i7-2600K Overclocked: Speed Meets Efficiency, you can actually get better efficiency from this architecture if you overclock it sensibly. Now we're gunning to see whether those results can be duplicated with Sandy Bridge-E, a configuration with six cores (and, once Xeon E5 emerges, eight).
Overclocking: For Sport Or Necessity?
Gone are the days when you'd search high and low for that one processor model able to overclock like a beast at a price that was too good to be true. There are so many models now, and so much feature-level differentiation, that it makes more sense to find an affordable CPU that can do what you need it to, and then push from there. For most of us, there's little the much more expensive Core i7-2600K can do that a Core i5-2500K can't. It also doesn't help that mainstream hardware is already well ahead of the software most of us run. Little of what we run on our desktops requires a 4.5 or 5 GHz version of what we already have running at 3 or 4 GHz.
That isn't stopping AMD and Intel from becoming more overclocking-friendly, though (or perhaps it'd be more accurate to say that they're getting more savvy about using overclocking as a differentiator worth a price premium). AMD boasts unlocked ratios up and down its FX stack, for example. Meanwhile, Intel just announced that it will offer, for a small fee, CPU insurance that covers processor replacement in the event of overclocking damage.
Furthermore, Intel finds itself without a competitor in the high-end segment. AMD is currently selling more value-oriented processors, but its best effort competes (in terms of absolute performance) with models in the middle of Intel's mainstream portfolio. It can't compete where more affluent enthusiasts are spending money. In terms of manufacturing, Intel is about 18 months ahead, which is why AMD's 32 nm-based CPUs and APUs are still relatively new even as Intel readies its 22 nm Ivy Bridge-based line-up.
This competitive advantage gives Intel considerable flexibility in product planning, and it pays dividends for efficiency: processors that ship below their design ceiling naturally use less power, leaving us plenty of headroom to measure the effect of tuning for even more speed.
Finding The Optimal Clock Rate
Every processor has an ideal clock rate (or at least an optimal range) at which the chip provides the best possible performance per watt. Find that point for your platform and you get the most work done for the power consumed. We're using a Core i7-3960X to search for the ideal combination: low energy consumption at idle paired with the highest clock rate that still keeps power consumption within reasonable limits.
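To make that idea concrete, here is a minimal sketch in Python of the calculation behind it. The clock rates, benchmark scores, and power figures below are hypothetical placeholders rather than measurements from this article; the point is simply that dividing a benchmark score by average power draw and comparing across settings exposes the frequency beyond which extra megahertz cost more watts than they return.

# Minimal sketch: finding the efficiency sweet spot from measured data.
# All numbers below are hypothetical placeholders, not real measurements.

measurements = [
    # (clock in GHz, benchmark score, average power draw in watts)
    (3.3, 100, 130),
    (3.9, 115, 150),
    (4.2, 122, 175),
    (4.5, 128, 215),
    (4.8, 133, 270),
]

def performance_per_watt(score, watts):
    """Higher is better: work done per unit of electrical power."""
    return score / watts

for clock, score, watts in measurements:
    print(f"{clock:.1f} GHz: {performance_per_watt(score, watts):.3f} points/W")

best = max(measurements, key=lambda m: performance_per_watt(m[1], m[2]))
print(f"Sweet spot in this sample: {best[0]:.1f} GHz")

In this made-up data set the middle setting wins: the highest overclock delivers the most absolute performance, but the ratio of score to watts peaks lower down, which is exactly the trade-off the following pages measure on real hardware.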