1 Nvidia shader = 2.348 ATI shaders

Did some crazy maths: assuming the 4850 is 95% the performance of the 9800GTX+ and the 4870 1GB is likewise 95% the performance of the GTX 260 216 (a fair approximation, I reckon), then running at the same clock speeds the relative shader performance looks something like this:

9800GTX+ vs. 4850

9800GTX+

128 shaders x 1836 shader clock = performance index 100

4850

800 shaders x 625 shader clock = performance index 95

(Normalised 800 x 1.05 = 840 = performance index 100)

Nvidia shader clock speed divided by ATI shader clock speed: 1836 / 625 = 2.9376

840 shaders / 2.9376 shader speed disparity = 285.9478

285.9478 ATI shaders @ Nvidia 9800GTX+ speeds / 128 Nvidia 9800GTX+ shaders = 2.234

1 Nvidia G92 shader = 2.234 ATI RV770 Pro shaders running at the same clock speeds.

GTX 260 216 vs. 4870 1gb

GTX 260 216

216 shaders x 1242 shader clock = performance index 100

4870 1gb

800 shaders x 750 shader clock = performance index 95

(Normalised 800 x 1.05 = 840 = performance index 100)

Nvidia shader clock speed divided by ATI shader clock speed: 1242 / 750 = 1.656

840 shaders / 1.656 shader speed disparity = 507.2464

507.2464 ATI shaders @ Nvidia GTX 260 216 speeds / 216 Nvidia GTX 260 216 shaders = 2.3484

1 Nvidia GT200 shader = 2.3484 ATI RV770 XT shaders running at the same clock speeds.

This maths is pretty ugly and both assumes and ignores a lot. But what the hell.
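The steps above can be collected into one function. This is just a sketch of the post's own back-of-the-envelope method: the 1.05 normalisation factor and all the clock and shader figures come from the post, while the function name is made up here.

```python
# Sketch of the per-shader comparison worked through above.
# Shader clocks are in MHz; the 1.05 factor normalises the ATI card
# from performance index 95 up to 100, per the post's assumption.

def ati_shaders_per_nvidia_shader(nv_shaders, nv_clock,
                                  ati_shaders, ati_clock,
                                  normalise=1.05):
    normalised_ati = ati_shaders * normalise   # e.g. 800 -> 840
    clock_ratio = nv_clock / ati_clock         # shader-clock disparity
    # ATI shaders (run at Nvidia clocks) equivalent to one Nvidia shader
    return normalised_ati / clock_ratio / nv_shaders

g92 = ati_shaders_per_nvidia_shader(128, 1836, 800, 625)    # 9800GTX+ vs 4850
gt200 = ati_shaders_per_nvidia_shader(216, 1242, 800, 750)  # GTX 260 216 vs 4870
print(f"G92:   1 Nvidia shader ~ {g92:.3f} ATI shaders")
print(f"GT200: 1 Nvidia shader ~ {gt200:.3f} ATI shaders")
```

Running it reproduces the two results above (about 2.234 for G92 and 2.348 for GT200), which makes the transcription of the arithmetic easy to check.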
  1. More importantly, how much die space does each take?
  2. Well, yeah, there is that. But over at Guru3D they say 5 ATI shaders for 1 Nvidia shader. They admit it's a crap description, but I wanted to try and work it out for myself. It seems that with GT200 a lot of the die is taken up with interconnects and control logic, at least going by the die shots.
  3. Well... the 4850 is not weaker than the 9800 GTX+, so I'm pretty sure it doesn't make sense :)

    Or is it just a "let's say" situation?

    Yet this doesn't apply to real-world tests :D :P

    Funny how it's double the shaders and half the cost sometimes ;)
  4. lol, I OWN a 4870, dude, and I love the little thing, but I'm pretty sure the 180 drivers have tipped G92 and GT200 over the edge. Please show me a recent benchmark saying otherwise. The AMD cards being 95% as fast is very fair. Yes, it's very much a "let's say" situation with my maths, lol. Don't get me wrong, I would rather have a 1GB 4870 over a GTX 260 216, or a 4850 over a 9800GTX+, the 4850 vs. 9800GTX+ choice being the more clear-cut of the two.
  5. Just because it works for the 260 doesn't mean it does for the 4850. That's all I'm saying :)

    How about this: you show me one with the 4850 :) You can't generalise from one card...

    The 9800 GTX+ had a hard time keeping up with the 4850... so we'll see.

    But there are no 4850s... that I've seen recently. It's mostly the 260 vs. the 4870.

    Good review; it shows that most of the time the 4870 does come out on top at high res with AA, and when the Nvidia wins it's by about 3% over the ATI (other than in Nvidia-based games :P ).
  7. how dare you quote my favourite website against me! :)

    Mmm, I'll have to find a 4850 vs. 9800GTX+ benchmark now, lol.

    Still think the GeForce is quicker than the Radeon here. I'd rather have a 4850, let's get that straight though. Just thought with some funky maths it would be fun to see who compares to whom, and what's interesting is that on a performance-per-shader kind of level GT200 is only fractionally above G92. All thanks to my crappy maths, lol.
  8. Well, no argument against you, you know that. Just that there won't be that many benchmarks on the 4850 vs. 9800 GTX directly unless there is a revamp, a new cooler, a new card in the same category, etc. Unless someone does a driver comparison, or if they are behind the rest of the sites.
  9. lol, ok ok, no probs. Yep, those benchmarks are thin on the ground right now, but I'll keep my eye out for any in the future. Cheersy-bye. Spoon.
  10. I find this to be accurate, regardless of the architecture.
  11. There are a lot more important things than that,

    like price and performance....

    unless you can buy shaders one by one

    "hey nVidia/AMD, I want this many shaders on my card!"

    PS: your title makes it sound like Nvidia products are more "efficient", which they aren't; they are more power-hungry and more expensive to make.
  12. Right, yet no one has brought up that none of them perform anywhere near their true potential, due to the nature of the architecture: I/O and the tasking of each shader unit require that some resources be spent on scheduling. For the G92, about 88% of the total computing power was available to the user, and the ratios differ with configuration and between Nvidia GPUs. ATI's architecture works much the same way, but I do not know the ratios, and poor driver support complicates this, negatively impacting its performance. So the correct conclusion is that none of them perform to their full potential. The same applies to DX11-gen hardware, though things improved in the 68x0 cards. Fermi has lower ratios than older-gen hardware, so it underperforms relative to its specs.
  13. This topic has been closed by Mousemonkey
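Reply 12's scheduling-overhead point can be put as rough arithmetic. The 88% utilisation figure for G92 comes from that reply; the peak-throughput formula (shaders x shader clock x 3 FLOPs per clock, the commonly quoted MADD+MUL figure for G92) is an assumption here, so treat this as illustrative, not measured.

```python
# Illustrative only: peak shader throughput scaled by the fraction
# reply 12 says is actually available after scheduling/I-O overhead.
shaders = 128            # 9800GTX+ (G92)
shader_clock_ghz = 1.836
flops_per_clock = 3      # MADD + MUL, the commonly quoted G92 figure

theoretical_gflops = shaders * shader_clock_ghz * flops_per_clock  # ~705
utilisation = 0.88       # fraction usable, per reply 12
effective_gflops = theoretical_gflops * utilisation

print(f"{effective_gflops:.0f} GFLOPS effective of {theoretical_gflops:.0f} peak")
```

On those assumptions, roughly 620 of the ~705 peak GFLOPS would actually be available to the user, which is the gap reply 12 is describing.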