A down-vote eh? I guess the proverbial NVIDIA-haters still lurk, unwilling to
present any rationale as usual.
And falchard is right: Viewperf tests showed enormous differences between
pro and gamer cards in previous years, but vendors now seem to be deliberately
blurring the distinction, optimising for consumer APIs (i.e. not OpenGL), which
means pro tests often run well on gamer cards. In that case, where is the
rationale for the cost difference? Apart from support and supposedly better
drivers, raw performance used to be a major factor in choosing a pro card
and a sensible justification for the extra cost, but that no longer appears
to be the case; check the Viewperf 11 scores for any gamer vs. pro card, and
the only test where a gamer card isn't massively slower is ENSIGHT-04. For
MAYA-03, a Quadro 4000 is 3X faster than a GTX 580; for PROE-05, the Quadro
is 10X faster; for TCVIS-02, 30X faster.
Today though, with Viewperf 12, a 580 is faster than a K5000 for MAYA-04,
about the same for CREO-01 and SHOWCASE-01, and not much slower for SW-03.
Only for CATIA-04 and SNX-02 does the expected difference persist.
Meanwhile OpenCL gets touted everywhere, even though plenty of apps can
exploit CUDA, yet there is little attempt to properly compare the two where
both are available, e.g. 3DS Max, Maya, Cinema4D, AE, LW, SI, etc.
Ian.
PS. nebun, the core structure on these cards is completely different. The number
of cores is a totally useless measure on its own; it tells one nothing. One can't
even compare different cards from the same vendor: e.g. a GTX 770 has way more
cores than a GTX 580, yet the 580 hammers the 770 for CUDA. Indeed, a 580 beats
all the 600-series cards for CUDA despite having far fewer cores, because the
newer cards use a much lower core clock, have less bandwidth per core, etc.
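To make that concrete, here's a rough back-of-the-envelope sketch in Python. The spec figures (core counts, clocks, bandwidth, FP64 ratios) are approximate published numbers and should be treated as illustrative assumptions; the point is only that once you account for per-core clock, double-precision ratio and bandwidth per core, the card with 3X the cores can come out behind:

```python
# Rough sketch: why raw core counts don't predict CUDA throughput.
# All spec numbers below are approximate public figures, used purely
# for illustration -- check vendor datasheets before relying on them.

cards = {
    # name: (cuda_cores, shader_clock_ghz, mem_bw_gbs, fp64_ratio)
    # Fermi cores run at the doubled "hot" shader clock;
    # Kepler cores run near the base clock, with a much lower FP64 ratio.
    "GTX 580 (Fermi)":  (512,  1.544, 192.4, 1 / 8),
    "GTX 770 (Kepler)": (1536, 1.046, 224.0, 1 / 24),
}

for name, (cores, clk_ghz, bw_gbs, fp64_ratio) in cards.items():
    fp32_gflops = cores * clk_ghz * 2        # 2 ops/cycle per core (FMA)
    fp64_gflops = fp32_gflops * fp64_ratio   # double-precision throughput
    bw_per_core = bw_gbs / cores * 1000      # MB/s of memory bandwidth per core
    print(f"{name}: {cores} cores @ {clk_ghz} GHz")
    print(f"  FP32 ~{fp32_gflops:.0f} GFLOPS, FP64 ~{fp64_gflops:.0f} GFLOPS")
    print(f"  bandwidth per core ~{bw_per_core:.0f} MB/s")
```

On these assumed figures the 770 wins comfortably on paper for single precision, yet the 580 comes out ahead on double precision and has more than double the memory bandwidth per core, which is the kind of thing that decides real CUDA workloads.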