HD6970 30% faster than the GTX580
successful_troll
I got news for you: how does 30-40% faster at stock sound?
I can't reveal my source, but it's 100% correct and you can quote me on this.

Did your magical source tell you if it OCs worth a bit too? Here's the magical source.
http://forums.overclockers.co.uk/showpost.php?p=17930581&postcount=38 
ragingmercenary said:True, if it turns out to be that much better than the GTX 580, what kind of price tag will it carry? Will AMD continue its lineup of great price/performance cards? Also, will the HD 6970 be efficient or a hot mess like the GTX 480?
Well, it's using vapor chambers, so cooling shouldn't be too bad. I'd expect more than 58xx power consumption, but less than Fermi if that helps, Fermi 1.0 specifically. His source is a guy who said he knows a guy who got an email which showed some benchmarks.
ragingmercenary said:True, if it turns out to be that much better than the GTX 580, what kind of price tag will it carry? Will AMD continue its lineup of great price/performance cards? Also, will the HD 6970 be efficient or a hot mess like the GTX 480?
Think about it. For 30% more performance than the GTX 580 at everything, would you not sacrifice efficiency? Anyway, if AMD continues its previous performance scaling between different cards, then consider: if the 6870 is the one to replace the 5770, and 5770 crossfire almost equals the 5870, then 6870 crossfire should equal the 6970. So the 6970 should be slightly faster than the 580 if they follow that pattern.
Think of it this way: to see what effect shader count and clock speed each have on cards, you just need to run some tests and look at some benchmarks. Clock speed generally scales linearly; bump clock speed up 20% and you will get 20% more performance. Shaders, and the accompanying parts of the SIMD, don't. I've seen that you generally get about 50% efficiency out of extra shaders: add 20% more shaders (and the rest that goes along with them) and you get about 10% more performance. This is illustrated by this benchmark:
Now, a 6970 should have 1920 shaders. Also, since they are using the new 4-wide design instead of the 4+1 design, these shaders should be at least 10-15% more powerful, since previously 1 of the 5 in the 4+1 design was almost never used. So, assuming there are 1920 shaders, and each shader is 10-15% more powerful, it would be like having about 2200 Cypress shaders. Seeing that shaders scale at about 50% efficiency, and that comes to 96% more shaders than Barts (which uses Cypress shaders), 96% more shaders should give you around 48% more performance than the 6870. The GTX 580, the topic of this conversation, is about 30% faster than the 6870:
If the 6970 is 50% faster than the 6870, and the GTX 580 is 30% faster than the 6870, that means the most we should expect is that the 6970 is 20% faster than the 580. I'd even be joyous with that; I'm expecting closer to 10% at launch.
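As a rough sanity check, the estimate above can be written out in a few lines. Every number here is a guess from this thread, not a measurement, and I've taken the midpoint of the 10-15% per-shader figure:

```python
cayman_shaders = 1920               # rumored 6970 shader count
per_shader_uplift = 1.125           # assumed 10-15% VLIW4 gain, midpoint
barts_shaders = 1120                # HD 6870

cypress_equiv = cayman_shaders * per_shader_uplift   # ~2160 "Cypress" shaders
extra = cypress_equiv / barts_shaders - 1            # ~0.93, i.e. ~93% more
gain_vs_6870 = extra * 0.5          # 50% shader-scaling rule -> ~0.46
```

With the upper 1.15 figure you land at roughly the ~2200 shaders and ~48% gain quoted above.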
ares1214 said:If the 6970 is 50% faster than the 6870, and the GTX 580 is 30% faster than the 6870, that means the most we should expect is that the 6970 is 20% faster than the 580. I'd even be joyous with that; I'm expecting closer to 10% at launch.
You are reading the numbers on that chart a bit wrong. It shows the GTX 580 as 37% faster than the HD6870. 50% faster than the HD6870 would come in at 110% on that chart. 
jyjjy said:You are reading the numbers on that chart a bit wrong. It shows the GTX 580 as 37% faster than the HD6870. 50% faster than the HD6870 would come in at 110% on that chart.
Didn't I say the 580 is 30-something % faster than the 6870? I was referring to the 6970 estimates when I said 50%.
strangestranger said:Also the chart shows nothing. All resolutions, no mention of games or setups used.
Why do people always post those charts?
Not quite, it is derived from this:
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/27.html
Which shows all resolutions and is just the overall average of the games tested. That's the average of the games tested, at the average of the resolutions used, therefore giving the broadest overview, hence the reason I used it.
strangestranger said:Also the chart shows nothing. All resolutions, no mention of games or setups used.
Why do people always post those charts?
Those charts come from techpowerup and people use them because they are convenient, fairly accurate and based on more games than most sites use. If you want to know the test setup or see the individual benchmarks head over to the site.
I agree about the resolution thing though. Especially for cards of this caliber using the 1920x1200 chart would have been more relevant. 
ares1214 said:Didn't I say the 580 is 30-something % faster than the 6870? I was referring to the 6970 estimates when I said 50%.
You said "about 30%." I know you were referring to the HD6970 with the 50%. I was just pointing out that, using your numbers and the provided chart, it would come in at 10% faster than the GTX 580, not at 120% of it like you calculated, which is actually exactly in line with your final guess.
jyjjy said:You said "about 30%." I know you were referring to the HD6970 with the 50%. I was just pointing out that, using your numbers and the provided chart, it would come in at 10% faster than the GTX 580, not at 120% of it like you calculated, which is actually exactly in line with your final guess.
How? 580>6870 by about 30% (27% if you really want to be exact), the estimate puts 6970>6870 by 50%, 50-30=20, and therefore 6970>580 by 20%, by extension putting it at 120% of the 580. I'm confused, do you have a problem with my math or the charts?
ares1214 said:How? 580>6870 by about 30% (27% if you really want to be exact), the estimate puts 6970>6870 by 50%, 50-30=20, and therefore 6970>580 by 20%, by extension putting it at 120% of the 580. I'm confused, do you have a problem with my math or the charts?
That chart shows the HD6870 is 27% slower than the GTX 580. This is not the same as saying the GTX 580 is 27% faster than the HD6870. To readjust and use the HD6870 as the baseline you must divide.
(GTX 580 performance)/(HD6870 performance) = 100/73 = 1.37 = 37% faster.
Similarly to calculate 50% more than 73 you must multiply by 1.5 not add 50. 73 x 1.5 = 109.5 
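jyjjy's point, that percent-faster comparisons need the slower card as the baseline, can be sketched in Python (the 100/73 figures are the chart values quoted above, with the GTX 580 normalized to 100):

```python
def pct_faster(a, b):
    """How much faster score a is than score b, as a percentage (divide, don't subtract)."""
    return (a / b - 1) * 100

gtx580, hd6870 = 100.0, 73.0

lead = pct_faster(gtx580, hd6870)   # ~37%, not 27%
hd6970_est = hd6870 * 1.5           # "50% faster than the 6870" -> 109.5 on the chart
```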
@ares1214: Actually, at 1920x1200 it's 100/71, which is 40%, and that's not at the same quality settings. Two more unfounded assumptions went into your calculations. First, that the fifth shader was never used; that's simply wrong, and it will reduce your speculated increase in efficiency. Second, you assumed that performance increases linearly with clock speed; that can't be proven by a single benchmark. I'd urge you to provide a link where a 10% increase in clock speed resulted in a 10% increase in overall performance.

Well, if 4 shaders could now do the work of 5 in the previous architecture, that would be a 25% increase in performance per shader rather than the 10-15% he uses. So your complaint there is covered, though whether by design or accident I don't know.
I agree that performance does not tend to scale perfectly with clock speed, but the estimate of only 50% scaling with shader count is quite low IMO. Probably to the point where he is underestimating at least a bit in total, if anything.
His final guess of 10% better than the GTX 580 looks decent though. Then again, none of us truly knows exactly how the changes made to the architecture will affect any of the numbers discussed. 
If 4 shaders "perfectly equal" 5, that means a 20% increase in efficiency; however, AMD stated in their slides that the fifth was not often used. The 20% applies only in the case where the fifth shader wasn't used at all. We have yet to see how the new drivers will manage to get the maximum out of this new architecture. The 6970 may very well end up 5~10% faster than the 580, but my view is that all these calculations are simply unfounded speculation.
On a side note, the charts show a 40% advantage for the 580 over the 6870, and that's not comparing apples to apples, as the default quality settings are not equal. Reports say AMD cards lose around 5% of their performance at comparable quality settings, so that's 95/100 x 71 = 67.5, and then 100/67.5 = a 48% advantage for the 580 over the 6870. If the 6970 is 50% faster than the 6870 according to those speculations, that's 67.5 x 1.5 = 101.25% of the 580 at comparable quality settings, and that's on the overall average at 1920x1200.
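That quality-settings adjustment, as a quick sketch (the ~5% penalty and the 50% 6970 estimate are both speculation from this thread, not measured numbers):

```python
hd6870 = 71.0                  # 1920x1200 chart value, GTX 580 = 100
adjusted = hd6870 * 0.95       # assumed ~5% loss at comparable quality -> ~67.5
advantage = 100 / adjusted     # ~1.48, i.e. ~48% advantage for the 580
hd6970_est = adjusted * 1.5    # ~101, i.e. roughly even with the 580
```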
Now, we know that the gap between the 6870 and 580 increases in DX11 titles (where the performance difference really matters, as DX9 and DX10 games are easily maxed anyway). That means the 6970 may end up slower than the 580 in the applications where performance increases are most relevant.
I'm not trying to suggest the 6970 will end up slower than the 580; the fact is that neither I, nor you, nor anyone around here really knows. I'm just showing how the calculations can go both ways. 
I wonder if the improvement in performance over the GTX 580 includes the "optimization" that reduces image quality?
Quote:
"The optimization however allows ATI to gain 6% up to 10% performance at very little image quality cost."
http://www.guru3d.com/article/exploring-ati-image-quality-optimizations/ 
Yeah zen, I'm leaving a cushion here, ya know. AMD themselves said the 5th shader was used 5-10% of the time, hence the reason I said 10-15%: not entirely useless, but still, we should see a decent bit here. Secondly, jyjjy, I understand what you mean, sorry for the math fail on my part. Still, that's not too far off. And Nvidia cards generally scale a bit better with clock speed. IE, the 460: OC it 30%, and it almost always gets 30% more performance. The 6850, on the other hand, can OC 30% but only gets 25% more performance, give or take. However, up until about a 20% OC, it scales linearly. Same thing with AMD CPUs: up until about 3.7 GHz they scale linearly, after that not so much. Anyway, if we all want to assume these new shaders are 25% more efficient, as the 15% wasn't really including any optimisations, just getting rid of the waste, the 6970 should have the equivalent of 114% more shaders. That equates to about 57% more performance. Since the 580 is 37% faster than the 6870, dividing puts the 6970 about 15% ahead. So the range I'd say is 10-20% faster, not 30-40%.
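For what it's worth, that revised chain can be checked in a few lines (all inputs are this thread's guesses, not benchmarks):

```python
cypress_equiv = 1920 * 1.25             # treat each VLIW4 shader as 25% stronger
extra = cypress_equiv / 1120 - 1        # ~1.14, i.e. ~114% more than Barts
gain_vs_6870 = extra * 0.5              # 50% scaling rule -> ~0.57
vs_580 = (1 + gain_vs_6870) / 1.37 - 1  # divide by the 580's 37% lead -> ~0.15
```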

BTW, the 50% isn't far off at all. Assuming clock speed scales linearly for the first 20% or so, an OC of 16% from 6850 to 6870 clocks should yield 16% more performance. Then it's just a simple calculation to see that the 16% more shaders the 6870 has gets it about 7% more performance, showing the shaders are 50-ish% efficient in terms of performance.
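That back-of-the-envelope derivation looks like this (the ~24% overall 6850-to-6870 gap is my rough reading of reviews, an assumption, not a number from this thread):

```python
clock_gain = 900 / 775 - 1      # 6850 -> 6870 core clock, ~16%
shader_gain = 1120 / 960 - 1    # ~16.7% more shaders
overall = 1.24                  # assumed 6870 vs 6850 overall gap (reviews vary)

from_shaders = overall / (1 + clock_gain) - 1   # ~7% left over for the shaders
efficiency = from_shaders / shader_gain         # ~0.4-0.5, the "50ish%" above
```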

JAYDEEJOHN said:I think efficiency is more around 98%, or a 2% or so real loss.
I've read that somewhere as well, and believe it to be correct.
The transcendentals aren't used that often at all, and when they are used there is something close to a 10% or so loss, but they're not used even 10% of the time.
So you're telling me that, clock for clock, 960 shaders (and corresponding parts) and 1120 shaders scale linearly? 
This is why this troll could be right... apparently there is a switch on the 6970 that increases the shader count from 1920 to 2520. This also blows it past the PCIe 300W cap.
http://www.semiaccurate.com/2010/12/13/secret-amd-6900s-second-switch/ 
So, in the past day, we've had a "6970 is 30% faster than the 580" thread, and a "6970 is slower than the 580 at launch" thread.
And right now, the second at least LOOKED like it could be true, even though I doubt that very much. Then again, let's look at the name of the OP and call it a day. 
Whether the article is true or not, it still doesn't change the fact that the leaks we have gotten:
Put it firmly ahead of the 570, and it's almost certain we will see some more performance from drivers and such. However, one more plausible reason for the performance deficit is that it's using AMD's new PowerTune feature to downclock itself due to immature drivers. 
Here you go, guys: http://forums.overclockers.co.uk/showthread.php?t=18217817
I guess the successful troll was unsuccessful this time. 
here are some prices @ OCUK http://www.overclockers.co.uk/productlist.php?groupid=701&catid=56&subid=1752&sortby=priceAsc
very cheap IMO 
Derbixrace said:here are some prices @ OCUK http://www.overclockers.co.uk/productlist.php?groupid=701&catid=56&subid=1752&sortby=priceAsc
very cheap IMO
That's $379 across the pond in all likelihood. For some comfort room, we might see them between $379-429 at launch; either way a LOT cheaper than the 580. Also, a lot of estimates give the 580 a 5-7% lead over it, which may dissolve and even go negative with the 10.12 drivers. No point in guessing now though; reviews are already popping up.