Could someone please educate me on an issue I have yet to find a clear answer to?
I am wondering about core clock speeds and how the different iterations of GPUs are affected by them. For example:
Say I have a GTX 780 3GB with a core clock of 980MHz and a GTX 760 4GB with a core clock of 1085MHz.
Is the GTX 760 faster, as the numbers would lead someone to assume? Or is the 780 faster than the 760 in spite of its lower core clock because of the chip architecture?
Thanks to anyone who responds.