Probably use more cores. The heat generated by higher clock frequencies (which require higher voltages) becomes harder to manage as the transistors, wiring, and other components of a processor shrink with each new process node. More cores may be the answer: Moore's Law observes that the number of transistors that can be crammed onto a chip keeps growing exponentially (roughly doubling every couple of years), and if those transistors cannot be spent on higher clock frequencies, they can be spent on more cores. More cores still generate as much or more heat, but it is not intense heat emanating from one small area; it is less intense heat spread across the whole chip. Under efficient cooling, this can be controlled, making for a more reliable system.
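To make the trade-off concrete, here is a rough sketch using the classic CMOS dynamic-power approximation P ≈ C·V²·f, under the simplifying (and illustrative, not measured) assumption that voltage has to rise roughly in proportion to frequency. The capacitance and voltage figures are made up for the example.

```python
def dynamic_power(capacitance, voltage, frequency):
    """Classic dynamic-power approximation for CMOS logic: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

C = 1e-9                      # effective switched capacitance (illustrative)
base_v, base_f = 1.0, 3.0e9   # 1.0 V at 3 GHz (illustrative)

# Doubling frequency, with voltage assumed to scale along with it,
# multiplies dynamic power by roughly 2^2 * 2 = 8x, concentrated in one core.
p_one_core = dynamic_power(C, base_v, base_f)
p_overclocked = dynamic_power(C, base_v * 2, base_f * 2)
print(p_overclocked / p_one_core)   # ~8x

# Doubling the core count at the same V and f only doubles the power,
# and that heat is spread across the die instead of one hot spot.
p_two_cores = 2 * p_one_core
print(p_two_cores / p_one_core)     # ~2x
```

This is why, past a point, the same transistor budget buys far more throughput per watt as extra cores than as extra gigahertz.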
The problem is that it is difficult, bordering on impossible, to optimize software well for multi-core systems. Most notably in games, but in almost all demanding computational tasks, you cannot predict how the load will hit the system in every situation, or even in a majority of them. A game programmer cannot accurately predict the number or nature of the calculations required at any given moment, even within the boundaries of the game, so it is easy for some cores to become swamped while others sit unused. This is a problem now with multi-core and multi-threaded systems, especially the Core i7: much of its theoretical performance doesn't translate into reliable real-world performance much of the time.
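The swamped-versus-unused problem can be shown with a toy example. Suppose a game frame produces tasks of unpredictable cost and the engine splits them across cores with a simple static round-robin scheme (the task costs and core count here are invented purely for illustration):

```python
def static_split(costs, n_cores):
    """Assign task costs round-robin to cores; return per-core totals.
    The frame isn't done until the busiest core finishes."""
    cores = [0] * n_cores
    for i, cost in enumerate(costs):
        cores[i % n_cores] += cost
    return cores

# One expensive task (say, physics for a big explosion) among cheap ones.
task_costs = [9, 1, 1, 1, 1, 1, 1, 1]   # ms of work this frame (made up)

cores = static_split(task_costs, 4)
ideal_frame_time = sum(task_costs) / 4  # perfect balance: 4.0 ms
actual_frame_time = max(cores)          # busiest core: 10 ms

print(cores)              # [10, 2, 2, 2] -> one core swamped, three near-idle
print(ideal_frame_time, actual_frame_time)
```

Because the costs can't be known in advance, one core ends up doing most of the work while the others wait, and the frame takes 2.5x longer than the theoretical ideal. Real engines use work-stealing schedulers to soften this, but the unpredictability remains.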
Another possible answer is more instructions per clock (IPC), but we seem to be approaching the limits of what can still be squeezed out of the x86 architecture.
We'll just have to wait and see, I suppose.