Think of it this way: to see what effect shader count and clock speed each have on a card, you just need to run some tests and look at benchmarks. Clock speed generally scales linearly; bump the clock up 20% and you get about 20% more performance. Shaders, and the accompanying parts of the SIMD, don't. I've seen that you generally get about 50% efficiency out of extra shaders: add 20% more shaders (and everything that goes along with them) and you get roughly 10% more performance. This is illustrated by this benchmark:
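The two scaling rules above can be sketched like this (the base performance number and the 50% efficiency figure are just the illustrative assumptions from this post, not measured data):

```python
def clock_scaling(base_perf, clock_increase):
    # Clock speed scales roughly linearly: +20% clock -> ~+20% perf.
    return base_perf * (1 + clock_increase)

def shader_scaling(base_perf, shader_increase, efficiency=0.5):
    # Extra shaders only deliver about half their nominal increase.
    return base_perf * (1 + shader_increase * efficiency)

print(clock_scaling(100, 0.20))   # roughly 120
print(shader_scaling(100, 0.20))  # roughly 110
```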
Now, a 6970 should have 1920 shaders. And since they are using the 2+2 design instead of the 4+1 design, these shaders should be at least 10-15% more powerful, since previously one of the five units in the 4+1 design was almost never used. So with 1920 shaders, each 10-15% more powerful, it would be like having about 2200 Cypress shaders. Barts (which uses Cypress shaders) has 1120 of them, so that comes to about 96% more shaders. At ~50% shader efficiency, 96% more shaders means you should see around 48% more performance than the 6870. The GTX 580, the topic of this conversation, is about 30% faster than the 6870:
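Here's the back-of-envelope version of that estimate. The inputs are the rumored/assumed figures from this post (1920 Cayman shaders, a 10-15% per-shader gain, 1120 Barts shaders, 50% scaling efficiency), not confirmed specs:

```python
cayman_shaders = 1920
barts_shaders = 1120          # 6870 (Cypress-class shaders)

for per_shader_gain in (1.10, 1.15):
    cypress_equiv = cayman_shaders * per_shader_gain   # ~2112-2208
    more_shaders = cypress_equiv / barts_shaders - 1   # ~89-97% more
    perf_gain = more_shaders * 0.5                     # 50% efficiency
    print(f"x{per_shader_gain:.2f} per shader: ~{perf_gain:.0%} over the 6870")
```

The 15% end of the range reproduces the ~96% more shaders / ~48% more performance figure quoted above; the 10% end lands closer to 44%.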
If the 6970 is 50% faster than the 6870, and the GTX 580 is 30% faster than the 6870, then the most we should expect is the 6970 being about 15% faster than the 580 (1.50/1.30 ≈ 1.15). I'd even be joyous with that; I'm expecting closer to 10% at launch.