
1 nvidia shader = 2.384 ati shaders

Posted in Graphics & Displays
January 23, 2009 10:56:42 AM

Did some crazy maths: assuming the 4850 is 95% of the performance of the 9800GTX+, and the 4870 1GB is likewise 95% of the performance of the GTX 260 216 (a good approximation, I reckon), then running at the same clock speeds the relative shader performance looks something like this:

9800GTX+ vs. 4850


9800GTX+

128 shaders x 1836 shader clock = performance index 100

4850

800 shaders x 625 shader clock = performance index 95

(Normalised 800 x 1.05 = 840 = performance index 100)

Nvidia shader clock speed divided by ATI shader clock speed: 1836 / 625 = 2.9376

840 shaders / 2.9376 shader speed disparity = 285.9478

285.9478 ATI shaders @ Nvidia 9800GTX+ speeds / 128 Nvidia 9800GTX+ shaders = 2.234

1 Nvidia G92 shader = 2.234 ATI RV770 Pro shaders running at the same clock speeds.
-------------------------------------------------------------------------------------------------


GTX 260 216 vs. 4870 1gb


GTX 260 216

216 shaders x 1242 shader clock = performance index 100

4870 1gb

800 shaders x 750 shader clock = performance index 95

(Normalised 800 x 1.05 = 840 = performance index 100)

Nvidia shader clock speed divided by ATI shader clock speed: 1242 / 750 = 1.656

840 shaders / 1.656 shader speed disparity = 507.2464

507.2464 ATI shaders @ Nvidia GTX 260 216 speeds / 216 Nvidia GTX 260 216 shaders = 2.3484

1 Nvidia GT200 shader = 2.3484 ATI RV770 XT shaders running at the same clock speeds.
-------------------------------------------------------------------------------------------------
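Both comparisons above follow the same arithmetic, which can be sketched in a few lines of Python. The 1.05 normalisation factor and the 95%-performance assumption are taken straight from the post, not from benchmarks:

```python
def shader_ratio(nv_shaders, nv_clock, ati_shaders, ati_clock, norm=1.05):
    """How many ATI shaders equal one Nvidia shader at matched clocks."""
    normalised = ati_shaders * norm       # scale the ATI count up to index 100
    disparity = nv_clock / ati_clock      # Nvidia hot-clocks its shader domain
    equivalent = normalised / disparity   # ATI shaders "at Nvidia clock speed"
    return equivalent / nv_shaders        # ATI shaders per Nvidia shader

# G92 (9800GTX+) vs RV770 Pro (4850)
print(round(shader_ratio(128, 1836, 800, 625), 3))   # -> 2.234

# GT200 (GTX 260 216) vs RV770 XT (4870 1GB)
print(round(shader_ratio(216, 1242, 800, 750), 3))   # -> 2.348
```

Changing the 95% assumption just scales `norm`, so the conclusion only moves as far as that guess does.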


This maths is pretty ugly and assumes and ignores a lot, but what the hell.
January 23, 2009 11:01:40 AM

More importantly, how much die space does each take?
January 23, 2009 11:07:47 AM

Well, yeah, there is that. But over at Guru3D they say 5 ATI shaders per 1 Nvidia shader. They admit it's a crap description, but I wanted to try and work it out for myself. It seems with GT200 a lot of the die is taken up with interconnects and control logic, at least judging from the die shots.
January 23, 2009 11:36:42 AM

Well... the 4850 is not weaker than the 9800 GTX+, I'm pretty sure, so that assumption doesn't make sense :)

Or is it just a "let's say" situation?

Yet this doesn't apply to real-world tests :D  :p

Funny how it's double the shaders and half the cost sometimes ;)
January 23, 2009 11:47:30 AM

lol, I OWN a 4870, dude, and I love the little thing, but I'm pretty sure the 180 drivers have tipped G92 and GT200 over the edge. Please show me a recent benchmark saying otherwise. The AMD cards being 95% as fast is very fair. Yes, it's very much a "let's say" situation with my maths, lol. Don't get me wrong, I would rather have a 1GB 4870 over a GTX 260 216, or a 4850 over a 9800GTX+, the 4850 vs. 9800GTX+ choice being the more clear-cut of the two.
January 23, 2009 11:56:33 AM

Just because it works for the 260 doesn't mean it does for the 4850. That's all I'm saying :)

How about this: you show me one with the 4850 :)  You can't generalise from one card...

The 9800 GTX+ had a hard time keeping up with the 4850... sooooo, we'll see.

But there are no 4850 benchmarks... that I've seen recently. It's mostly the 260 vs. the 4870.
January 23, 2009 12:14:16 PM

How dare you quote my favourite website against me! :)

Mmm, I'll have to find a 4850 vs. 9800GTX+ benchmark now, lol.

I still think the GeForce is quicker than the Radeon here. I'd rather have a 4850, let's get that straight, though. I just thought with some funky maths it would be fun to see who compares to whom, and it's interesting that on a power-per-shader kind of level GT200 is only fractionally above G92. All thanks to my crappy maths, lol.
January 23, 2009 12:19:42 PM

Well, no argument against you, you know that. It's just that there won't be many benchmarks of the 4850 vs. the 9800 GTX+ directly unless there's a revamp, a new cooler, a new card in the same category, etc. Unless someone does a driver comparison, or a site is behind the rest.
January 23, 2009 12:46:13 PM

lol, OK, OK, no probs. Yep, those benchmarks are thin on the ground right now, but I'll keep my eye out for any in the future. Cheersy-bye. Spoon.
October 28, 2010 1:26:40 AM

I find this to be accurate, regardless of the architecture.
October 28, 2010 1:41:29 AM

There are a lot more important things than that,

like price and performance...

unless you can buy shaders one by one:

"Hey Nvidia/AMD, I want this many shaders on my card!"

PS: your title makes it sound like Nvidia products are more "efficient", which they aren't; they are more power-hungry and more expensive to make.
October 28, 2010 2:38:57 AM

Right, yet no one has brought up that none of them perform anywhere near their true potential, due to the nature of the architecture: I/O and the tasking of each shader unit require that some resources be spent on scheduling. For the G92, about 88% of the total computing power was available to the user, while the ratios differ by configuration and across Nvidia GPUs. ATI's architecture works much the same way, but I do not know the ratios, and poor driver support complicates this, negatively impacting performance. So a correct conclusion can be drawn that none of them perform to their full potential. The same applies to DX11-generation hardware, though things improved in the 68x0 cards. Fermi has lower ratios than older-generation hardware, so it underperforms relative to its specs.
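A rough back-of-the-envelope illustration of the point above: the ~88% utilisation figure is the one quoted in the post, and the 3-FLOPs-per-shader-per-clock peak rating (MADD + MUL) is the standard way G92's headline GFLOPS number is computed.

```python
# Effective vs peak throughput for G92 (9800GTX+): shaders x shader clock (GHz)
# x 3 FLOPs/clock (MADD + MUL), scaled by the ~88% utilisation quoted above.
peak_gflops = 128 * 1.836 * 3          # 128 shaders at 1836 MHz -> ~705 GFLOPS
effective_gflops = peak_gflops * 0.88  # scheduling overhead eats the rest

print(f"peak {peak_gflops:.0f} GFLOPS, effective ~{effective_gflops:.0f} GFLOPS")
```

So even before comparing architectures, the paper spec overstates what either side actually delivers.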
October 28, 2010 2:40:42 AM

This topic has been closed by Mousemonkey