Which card is more powerful?

intelatifan

Distinguished
May 27, 2011
26
0
18,530
Definitely the GTX 460. It has more shaders (224), a wider bus (256-bit), more ROPs (32), and PhysX, whereas the HD 5770 has 160 shaders, a 128-bit bus, and 16 ROPs. These are the three key elements that show the GTX 460 is a lot better than the HD 5770. Also remember the HD 5770 has no PhysX, but that doesn't make too much of a difference.
 


You can't just compare the numbers like that; the architecture of the cards is totally different. That being said, your numbers are all wrong anyway.
Take a look at the table and you will see the 5770 actually has way more shaders but is still a lot slower. Also note that the 460 comes with two different bus sizes: the 768 MB memory card has a smaller bus than the 1 GB memory card.

http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_460_1_GB/

Mactronix :)
 

majesticlizard

Distinguished
Mar 28, 2010
62
0
18,640


This is all true, and it's because ATI counts threads as shaders. Their shaders are supposed to support 3 simultaneous threads, but it doesn't really work out like that.

Example:

The original 8800 GTS 640 has 96 shaders but performed almost as well as the HD 3850, which is supposed to have 320 shaders. This is because the HD 3850 doesn't really have 320 shaders; it has around 110 that each, hypothetically, support multiple threads.

Also the HD 5770 has 800 shaders (or fragment pipelines), not 160. Even taking into consideration that the ATI card actually has shaders running multiple threads, it would still be roughly equivalent to 267 shaders on an Nvidia card, not 160.

The bandwidth on the 5770 is what causes it to perform so much worse than the GTX 460. It creates a huge bottleneck in the card's architecture.
 



Not exactly correct. You have the general idea, but you're a little off on the numbers, and of course it depends on whether you are talking about theoretical throughput or actual throughput when discussing the AMD cards. Cypress can theoretically run 5 threads, but in real-world scenarios it's usually 3-4 and can be as low as 1. Then there are the 6-series Cayman cards, which only have a 4-wide shader setup.
The 5770 does have 800 shaders; it's got nothing to do with threads. What each company counts as a shader is different, and if you go by company naming schemes the 460 doesn't even use "shaders" at all: it uses CUDA cores.
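One way to see why raw shader counts mislead across vendors is to compare theoretical peak throughput instead of unit counts. Here is a rough sketch in Python using the commonly quoted public specs for these two cards; treat the clocks and unit counts as approximate, and remember real-world performance depends on far more than peak FLOPS:

```python
# Rough theoretical single-precision throughput.
# Commonly quoted specs (approximate):
#   GTX 460 1GB: 336 CUDA cores at a 1350 MHz shader clock
#   HD 5770:     800 stream processors at an 850 MHz core clock
# Each unit can do up to 2 FLOPs per clock (a multiply-add).

def gflops(units, clock_mhz, flops_per_clock=2):
    """Peak GFLOPS = units * clock (MHz) * ops-per-clock / 1000."""
    return units * clock_mhz * flops_per_clock / 1000.0

gtx_460 = gflops(336, 1350)   # roughly 907 GFLOPS
hd_5770 = gflops(800, 850)    # roughly 1360 GFLOPS
print(f"GTX 460: ~{gtx_460:.0f} GFLOPS, HD 5770: ~{hd_5770:.0f} GFLOPS")
```

Note the 5770's higher theoretical peak despite being the slower card in games, which is exactly why peak numbers and unit counts can't settle the comparison on their own.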
Here are a couple of good articles for those of you that are really interested.
http://www.beyond3d.com/content/reviews/53/1
http://www.beyond3d.com/content/reviews/55/1

Mactronix :)
 

majesticlizard

Distinguished
Mar 28, 2010
62
0
18,640



Notice that I used the word "around"; that means I was making an estimate. They can be called CUDA cores, shaders, stream processors, pixel pipelines, fragment pipelines, etc. If we just stick to "shaders", people generally know what you are talking about.

Trademarking a slightly different version of the same architecture under a different name does not change the function.

Hence the number 3 and the word "around".
 
OK, let's go from the top again. Please don't get upset; this is a technical forum, and you need to get things right because, while we know what we are talking about, others won't. That's why I made my original post.

Things you posted that are incorrect.

1. This is because ATI counts threads as shaders.
2. This is because the HD 3850 really doesn't have 320 shaders, it has around 110
3. Also the HD 5770 has 800 shaders (or fragment pipelines), not 160. Even taking into consideration that the ATI card actually has shaders running multiple threads, it would still be roughly equivalent to 267 shaders on an Nvidia card, not 160.
4. The bandwidth on the 5770 is what causes it to perform so much less than the GTX 460. It creates a huge bottleneck on the card's architecture.

The reasons are as follows:

1. A shader is hardware, a physical piece of the card, while a thread is not. Threads can't affect the shader count any more than the petrol can affect the number of cylinders in an engine.
2. No, it has 320 shaders. You're getting confused with the thread thing again. Saying it has 320 shaders but probably only utilizes approximately 110 of them at any given time would be correct.
3. You plain 100% can't compare AMD and Nvidia shaders. They are totally different things and go about doing the same job in completely different ways. It's the old apples-to-oranges scenario.
4. People have moaned about this from the moment the card was released, but it's not well founded.
http://www.hardocp.com/article/2009/10/12/amd_ati_radeon_hd_5770_5750_review/9
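To put point 1 in more concrete terms: the 5770's 800 ALUs are physically grouped into 160 five-wide (VLIW5) units, and how many of those five slots get filled each clock is down to the compiler and workload, not a change in the hardware count. A toy sketch of that idea follows; the fill rates are illustrative guesses, not measurements:

```python
# Toy model: the HD 5770's 800 ALUs are grouped into 160 five-wide
# (VLIW5) units. How many of the 5 slots the compiler fills per clock
# determines how many ALUs do useful work, but the physical count
# never changes. Fill rates here are illustrative, not measured.

VLIW_UNITS = 160   # physical 5-wide shader units on the 5770
SLOT_WIDTH = 5     # ALU slots per unit

def effective_alus(slots_filled):
    """ALUs doing useful work when `slots_filled` of 5 slots are used."""
    return VLIW_UNITS * slots_filled

# Best case (all 5 slots filled) down to worst case (1 slot):
for filled in (5, 4, 3, 1):
    print(f"{filled} slots filled -> {effective_alus(filled)} effective ALUs")
```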

Mactronix :)
 

majesticlizard

Distinguished
Mar 28, 2010
62
0
18,640


No, I'm not incorrect. You are thinking in black and white and taking approximate statements as exact mathematical statements.

To say that you 100% can't compare Nvidia shaders to ATI shaders is not reasonable.

Comparison:

HD 4850 = 800 shaders (really more like 200-some shaders running roughly 3 threads), 256-bit bus, running at 625 MHz
GTX 260 = 216 shaders, 448-bit bus, running at 576 MHz

HD 4890 = 800 shaders (really more like 200-some shaders running multiple threads), 256-bit bus, but at 850 MHz
GTX 285 = 240 shaders, 512-bit, 648 MHz

GTX 480 = 480 shaders, 384-bit, 700 MHz
HD 6970 = 1536 shaders (really more like 500-some shaders running roughly 3 threads), 256-bit bus, clocked at 880 MHz
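The pairs above can be run through the poster's own divide-by-roughly-3 rule of thumb. To be clear, this is the poster's approximation for comparison's sake, not an official conversion of any kind:

```python
# The rule of thumb above: divide the AMD shader count by roughly 3
# to get an "Nvidia-equivalent" figure. This is an informal
# approximation, not an official conversion.

pairs = {
    "HD 4850 vs GTX 260": (800, 216),
    "HD 4890 vs GTX 285": (800, 240),
    "HD 6970 vs GTX 480": (1536, 480),
}

for name, (amd_shaders, nvidia_shaders) in pairs.items():
    approx = round(amd_shaders / 3)
    print(f"{name}: ~{approx} 'equivalent' vs {nvidia_shaders} actual")
```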

Basically, you can look at pretty much any Nvidia card and its competing ATI counterpart in the same price segment and see that the power really isn't that much different.

Again, this is an approximation of one technology to another; it is not an exact mathematical comparison of physical units.

Do not take it LITERALLY.


 
The one major issue I'd have with your posts is that you are acting like AMD is cheating by saying how many shaders they have. A shader is a unit that AMD uses to make their GPUs go. It is what it is. A CUDA core is what Nvidia uses. They aren't the same thing. There is no "it's really only XXX shaders."

If you said "it performs like XXX CUDA cores", then it might not rub people the wrong way, and you wouldn't sound so wrong.
 



I pixel pipeline (agp) is where the cardShaders mem Core) sends the data to the slot
 



I really don't know if i should laugh or cry at this :pfff: :pfff: :cry: :cry: [:lectrocrew:1] [:lectrocrew:1]

Mactronix :)
 
In response to the original post: I had a 5770 for a short while before I got my 570, and I thought it was a great bang-for-buck performer. Low power consumption, low temperatures, and small enough to fit in a congested case if you are working with an OEM HP case or something cramped like that. Sorry for not having the scientific breakdown for you like the others; I just thought I'd throw in my 2 cents about my experience with the 5770, which was positive.
 
Oh, and another point: if you notice, the clock speeds are a little lower on the 4xx-series Nvidia cards compared to the 5xxx-series ATI cards. Seems to me they were trying to combat cooling/power-consumption issues with lower clock speeds. I could be wrong; it's just an observation, probably a poorly researched one. I'm no fanboy of either ATI or Nvidia; I just calls 'em as I sees 'em.