I don't understand why Nvidia didn't scale up texture addressing and filtering when they reworked the SP clusters for GT200. The old G92 had 8 texture address/filter units for every 16 SPs, but GT200 has 8 for every 24 SPs. In modern games those texture units did a lot more for performance than G92's higher SP clocks did.
GT200 has the same 8 address / 8 filter texturing per cluster as G92, but in 10 clusters of 24 SPs instead of 8 clusters of 16 SPs, which works out to 80 TMUs. Texture fillrate was the biggest difference between G92 and G80, and it's why G92 could beat G80 at lower resolutions, or come very close at high resolutions, despite much lower memory bandwidth and fewer ROPs. If each GT200 cluster had 12 address and 12 filter units, the same SP-to-TMU ratio as G92, it would have 120 TMUs instead of 80. In texturing ability relative to shader count, GT200 is a step back from G92.
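The cluster arithmetic above can be sketched in a few lines (a rough sketch; the cluster and unit counts are the public G92/GT200 specs, the variable names are just mine):

```python
# G92: 8 clusters of 16 SPs, 8 TMUs each; GT200: 10 clusters of 24 SPs, 8 TMUs each
g92_clusters, g92_sp_per_cluster = 8, 16
gt200_clusters, gt200_sp_per_cluster = 10, 24
tmu_per_cluster = 8

g92_sps = g92_clusters * g92_sp_per_cluster        # 128 SPs
g92_tmus = g92_clusters * tmu_per_cluster          # 64 TMUs -> 2:1 SP:TMU
gt200_sps = gt200_clusters * gt200_sp_per_cluster  # 240 SPs
gt200_tmus = gt200_clusters * tmu_per_cluster      # 80 TMUs -> 3:1 SP:TMU

# Keeping G92's 2:1 ratio across GT200's 240 SPs would mean 12 TMUs per
# cluster, i.e. 120 TMUs total instead of 80:
gt200_tmus_at_g92_ratio = gt200_sps // 2
print(g92_sps, g92_tmus, gt200_sps, gt200_tmus, gt200_tmus_at_g92_ratio)
```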
Card              Pixel fill    Bilinear fill  FP16 fill    Bandwidth
GeForce 9800 GTX  10.8 Gpix/s   43.2 Gtex/s    21.6 Gtex/s   70.4 GB/s
GeForce GTX 260   16.1 Gpix/s   41.5 Gtex/s    20.7 Gtex/s  111.9 GB/s
GeForce GTX 280   19.3 Gpix/s   48.2 Gtex/s    24.1 Gtex/s  141.7 GB/s
Radeon HD 4850    10.0 Gpix/s   25.0 Gtex/s    25.0 Gtex/s   64.0 GB/s
Radeon HD 4870    12.0 Gpix/s   30.0 Gtex/s    30.0 Gtex/s  115.2 GB/s
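Those figures all fall out of core clock times unit count (a quick sketch; the clocks, ROP and TMU counts below are the reference specs, and I'm assuming half-rate FP16 filtering on the Nvidia parts versus full-rate on RV770, which is what the numbers imply):

```python
# name: (core MHz, ROPs, TMUs, half-rate FP16 filtering?, bandwidth GB/s)
specs = {
    "GeForce 9800 GTX": (675, 16, 64, True, 70.4),
    "GeForce GTX 260":  (576, 28, 72, True, 111.9),
    "GeForce GTX 280":  (602, 32, 80, True, 141.7),
    "Radeon HD 4850":   (625, 16, 40, False, 64.0),
    "Radeon HD 4870":   (750, 16, 40, False, 115.2),
}

for name, (mhz, rops, tmus, half_fp16, bw) in specs.items():
    pixel = mhz * rops / 1000.0     # Gpixels/s: clock x ROPs
    bilinear = mhz * tmus / 1000.0  # Gtexels/s: clock x TMUs (INT8 bilinear)
    fp16 = bilinear / 2 if half_fp16 else bilinear
    print(f"{name}: {pixel:.1f} Gpix/s, {bilinear:.1f} Gtex/s, "
          f"{fp16:.1f} FP16 Gtex/s, {bw} GB/s")
```

Note the RV770 cards filter FP16 at full speed, which is why the 4850 matches the GTX 280's FP16 rate with half the TMUs.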
Games don't need all that shader processing power yet. Most game workloads mostly shuttle texture data out to memory and back (that's more or less straight from Nvidia via nRollo). So more fillrate makes the biggest difference if you want performance now, as long as you aren't shader limited. Sure, the GTX 280 has more fillrate than the 9800 GTX, but in reality not that much more, and the GTX 260 actually has less. This is where bandwidth comes into play: GT200 isn't nearly as bandwidth limited as the 9800 GTX, and that's where the performance gains in most games come from. Just look at any of the reviews. The GTX 260 isn't far ahead of the 9800 GTX, and it only looks meaningfully faster when AA is applied at some ridiculously high resolution, thanks to its bandwidth advantage. Nvidia built a forward-looking product, much like the 2900 XT tried to be. But that future still isn't here.