Please, let me say that benchmarks/concrete tests/measured power should be the intelligent and honest basis for a comparison.
These facts matter more than the line "Tesla is for computing and GeForce is for gaming... end of discussion". I think that line is exactly how Nvidia wants it.
The point is that Tesla and GeForce cards have architectures so similar that performance cannot differ that much in terms of GFLOP/s, single or double precision. (I won't post the specs of the cards, you already know them.)
If you click on the link above it is clear that:
a) In single precision the GTX 480/580/680 are much faster than a Tesla C2070 (the 580 and 680 are around 2x faster).
b) In double precision the Tesla C2070 is around 1.5x faster than the GTX 580/480. Surprisingly, the 680 finishes LAST.
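Numbers like these are easy to sanity-check yourself. Here is a minimal sketch of the kind of throughput test behind such benchmarks: time a matrix multiply and convert it to GFLOP/s, once in single and once in double precision. It runs on the CPU via NumPy just to show the measuring method; a GPU benchmark does the same accounting with device kernels.

```python
import time
import numpy as np

def matmul_gflops(dtype, n=512, reps=5):
    """Return measured GFLOP/s for an n x n matrix multiply in the given dtype."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    elapsed = time.perf_counter() - t0
    flops = 2.0 * n**3 * reps  # a matmul costs ~2*n^3 floating-point operations
    return flops / elapsed / 1e9

print(f"single precision: {matmul_gflops(np.float32):6.1f} GFLOP/s")
print(f"double precision: {matmul_gflops(np.float64):6.1f} GFLOP/s")
```

On most hardware the float64 figure comes out noticeably lower than float32, which is exactly the single/double gap the benchmark link above is measuring.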
A little detail: a Tesla C2070 costs around $2,000, while a GTX 580 costs around $200-300 on eBay... Hypothetically, with the same money I could buy ten GTX 580s and kick a Tesla's ass.
What I mean is that the price difference between Tesla and GeForce is not justified even for heavy computation: a single GTX 580 gets you near the performance of a Tesla at 1/10 of the cost.
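To put the price argument in numbers, here is a back-of-the-envelope GFLOP/s-per-dollar calculation. The figures below are my own rough assumptions (Tesla C2070 around 515 DP GFLOP/s at ~$2,000; GTX 580 double precision capped at roughly 1/8 of single precision, ~$250 used), so plug in your own measured results and street prices:

```python
# Assumed figures, not measured values: (double-precision GFLOP/s, price in USD)
cards = {
    "Tesla C2070": (515.0, 2000.0),
    "GTX 580":     (198.0, 250.0),
}

for name, (gflops, price) in cards.items():
    # Value metric: how much double-precision throughput each dollar buys.
    print(f"{name:12s}: {gflops:6.1f} DP GFLOP/s, "
          f"{gflops / price:.3f} GFLOP/s per dollar")
```

Even with the GeForce's crippled double precision, the GFLOP/s-per-dollar ratio comes out several times better for the GTX 580 under these assumptions.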
If all this is true, Nvidia is aware of it, and it would explain why their benchmarks never pit GeForce cards against Tesla cards in compute performance. They want to keep the two businesses as separate as possible; otherwise they could not keep selling Tesla cards at that premium price.
The part of the story that really pisses me off is that not only are they very unclear about the performance difference between Tesla and GeForce cards, they have also introduced limitations on the new GeForce cards, which would explain why the 680 is the slowest in double precision.
Why, if you want to use the card to play games, do you pay $200-300, but if you want to use the same card for another purpose you pay thousands of dollars?
Dear Nvidia: IMHO this is not fair behaviour. Unless someone demonstrates to me with facts/benchmarks that the premium price of a Tesla is justified, I am going to buy a GTX 580 for my research. And you should pray that ATI does not get its GPUs into Matlab, because at that point, if you keep this up, you will lose a long-time customer!
GeForce GPUs used for scientific computation do not draw the full TDP specified by Nvidia: parts of the chip used in gaming are not powered up in compute mode. For example, the GTX 580 LC 3GB is rated at 244 W TDP, yet with MilkyWay@Home [double precision] running at 99% load the GPU draws only about 150 W. The same goes for the GTX 590 (dual GPU), reported at about 250 W in compute versus 365 W in gaming mode.