Here is a slide from ATI demonstrating that, at particular instants, there are shader unit cycles going unused. The space above the lines represents the percentage of shader capacity that is not being utilized.
The next major advancement on deck is effects physics via the 3D engine. We are excited to see how future games will incorporate more elements that simulate the real world. What we have seen already is enough to wet our whistles, but it may be a year before we really see it implemented. There is even talk of building programs that exploit the parallel nature of the graphics processor to handle similarly parallel workloads in place of the CPU.
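To make the idea concrete: effects physics is a naturally data-parallel workload, since each particle's update is independent of the others. Here is a minimal sketch of that kind of step, using NumPy's array parallelism as a stand-in for a GPU's shader units; the function name, timestep, and particle counts are our own illustration, not anything from ATI's tools.

```python
import numpy as np

def euler_step(pos, vel, dt, gravity=np.array([0.0, -9.8, 0.0])):
    """Advance every particle one timestep in a single data-parallel pass.

    Each particle's update is independent of the rest, which is what
    makes this workload a good fit for a GPU's many parallel units.
    """
    vel = vel + gravity * dt  # same instruction applied across all particles
    pos = pos + vel * dt
    return pos, vel

# Hypothetical example: 10,000 debris particles stepped at 60 Hz
pos = np.zeros((10_000, 3))
vel = np.random.standard_normal((10_000, 3))
pos, vel = euler_step(pos, vel, dt=1.0 / 60.0)
```

On a real GPU the same per-particle arithmetic would run as a shader program across thousands of threads instead of a NumPy array operation.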
While this is all fascinating, what happens when you add even more work to your power-hungry graphics card? When I start piling effects physics and all of the other tasks we are dreaming up onto a graphics card, I can begin to smell the silicon melting. Filling off-cycles isn't going to cut it when we have an overlord of an operating system telling the graphics card what to do.
Couple that with all of the nifty tasks to throw at the GPU during its off-cycles, and I can foresee engine throttling to prevent the graphics processor from overheating. While ATI demonstrated how a third graphics card can manage effects physics calculations, that means even higher power requirements and a bigger PSU. We might as well develop add-in toasters and coffee makers for PCs (gamers love Pop-Tarts and a fresh cup of mud while fragging...)
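The throttling I'm picturing is, in spirit, just a control loop: read a temperature, and scale the clocks down as it approaches a danger point. Purely a hypothetical sketch; the thresholds, the linear falloff policy, and the function name are all invented for illustration, not any vendor's actual scheme.

```python
def throttle_factor(temp_c, safe_c=85.0, critical_c=105.0):
    """Map a GPU temperature reading to a clock-scaling factor.

    Hypothetical policy: run at full speed below safe_c, scale down
    linearly between safe_c and critical_c, and clamp to half speed
    at or above critical_c.
    """
    if temp_c <= safe_c:
        return 1.0
    if temp_c >= critical_c:
        return 0.5
    # Linear falloff across the window between the two thresholds
    span = (temp_c - safe_c) / (critical_c - safe_c)
    return 1.0 - 0.5 * span
```

A driver would apply the returned factor to the core clock each time it polls the thermal sensor, trading frame rate for headroom instead of letting the silicon cook.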