HD6970 30% faster than the GTX580


ares1214

Splendid


Yeah, this isn't going to happen. I think what your "source" meant was that in one cherry-picked game it is that fast, as in a game that mildly favors AMD. I'd say 20% faster is the max I'd expect overall.
 

notty22

Distinguished
My [strike]sourse[/strike] is 100% correct.
And how can you make that claim?
facepalm
 

ares1214

Splendid


Well, it's pretty obvious that that is your source. If it's just one game, maybe, but I doubt it overall. Believe me, I'd love to see it happen, but I HIGHLY doubt it.
 

ragingmercenary

Distinguished
True, but if it turns out to be that much better than the GTX 580, what kind of price tag will it carry? Will AMD continue its lineup of great price/performance cards? Also, will the HD 6970 be efficient, or a hot mess like the GTX 480?
 

ares1214

Splendid


Well, it's using vapor chambers, so cooling shouldn't be too bad. I'd expect more than 58xx power consumption, but less than Fermi if that helps (GF100 specifically). His source is a guy who said he knows a guy who got an email which showed some benchmarks. :lol:
 

BeCoolBro

Distinguished


Think about it. For 30% more performance than the GTX 580 at everything, would you not sacrifice efficiency? Anyway, if AMD continues its previous performance scaling between cards: the 6870 is the one replacing the 5770, and 5770 CrossFire almost equals the 5870, so 6870 CrossFire should equal the 6970. So the 6970 should be slightly faster if they follow that pattern.
 

ares1214

Splendid
Think of it this way: to see what effect shader count and clock speed each have on cards, you just need to run some tests and look at some benchmarks. Clock speed generally scales linearly; bump clock speed up 20% and you get 20% more performance. Shaders, and the accompanying parts of the SIMD, don't. I've seen that you generally get 50% efficiency out of more shaders; if you add 20% more shaders (and the rest of the SIMD that goes along with them), you get about 10% more performance. This is illustrated by this benchmark:

[chart: HD6800-OC-60.jpg]


Now, a 6970 should have 1920 shaders. Also, since they are doing the 2+2 design instead of the 4+1 design, these shaders should be at least 10-15% more powerful, since previously one of the five in the 4+1 design was almost never used. Assuming there are 1920 shaders and each shader is 10-15% more powerful, it would be like having about 2200 Cypress shaders. That comes to 96% more shaders than Barts (which uses Cypress shaders), and seeing that shaders get about 50% efficiency, 96% more shaders should give around 48% more performance than the 6870. The GTX 580, the topic of this conversation, is about 30% faster than the 6870:

[chart: perfrel.gif (TechPowerUp relative performance summary)]


If the 6970 is 50% faster than the 6870, and the GTX 580 is 30% faster than the 6870, then the most we should expect is that the 6970 is 20% faster than the 580. I'd even be joyous with that; I'm expecting closer to 10% at launch.
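For anyone who wants to check the arithmetic, here is the shader-scaling estimate above as a quick Python sketch. The 1120 value is the HD 6870's (Barts) stream processor count, which isn't stated in the thread; everything else uses the numbers from the post:

[code]
# Back-of-envelope estimate of the HD 6970 (Cayman) vs HD 6870 (Barts),
# using the assumptions from the post above:
#   - rumored 1920 shaders on the 6970
#   - each 2+2 (VLIW4) shader worth ~10-15% more than a 4+1 (VLIW5) one
#   - extra shaders yield roughly 50% of their count in performance

BARTS_SHADERS = 1120         # HD 6870 stream processors (known spec)
CAYMAN_SHADERS = 1920        # rumored HD 6970 shader count
SHADER_EFFICIENCY = 0.5      # ~50% perf scaling with shader count

for uplift in (1.10, 1.15):  # assumed VLIW4 per-shader uplift
    cypress_equivalent = CAYMAN_SHADERS * uplift
    extra_shaders = cypress_equivalent / BARTS_SHADERS - 1.0
    perf_gain = extra_shaders * SHADER_EFFICIENCY
    print(f"uplift {uplift:.2f}: ~{cypress_equivalent:.0f} Cypress-class shaders, "
          f"{extra_shaders:.0%} more than Barts, "
          f"~{perf_gain:.0%} faster than the 6870")
[/code]

With the 15% uplift, that lands on roughly the 96% more shaders / ~48% more performance figures quoted above.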
 

You are reading the numbers on that chart a bit wrong. It shows the GTX 580 as 37% faster than the HD6870. 50% faster than the HD6870 would come in at 110% on that chart.
 

ares1214

Splendid


Didn't I say the 580 is 30-something % faster than the 6870? I was referring to the 6970 estimates when I said 50%.

Also, the chart shows nothing: all resolutions, no mention of games or setups used.

Why do people always post those charts?

Not quite, it is derived from this:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/27.html

Which shows all resolutions and is just the overall average of the games tested. That's the average of the games tested at the average of the resolutions used, therefore giving the broadest overview, hence why I used it.
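If anyone is wondering how a summary chart like that gets put together, it is presumably something along these lines (the fps numbers below are made up purely to show the idea; TechPowerUp's exact weighting may differ):

[code]
# Toy example of how a "relative performance" summary chart is built:
# normalize each game/resolution result against a baseline card,
# then average everything into one number. All fps values are made up.

# results[card][game][resolution] = fps
results = {
    "GTX 580": {"Game A": {"1680x1050": 90, "1920x1200": 75, "2560x1600": 50},
                "Game B": {"1680x1050": 120, "1920x1200": 95, "2560x1600": 60}},
    "HD 6870": {"Game A": {"1680x1050": 70, "1920x1200": 55, "2560x1600": 35},
                "Game B": {"1680x1050": 85, "1920x1200": 68, "2560x1600": 42}},
}

baseline = "GTX 580"
for card, games in results.items():
    ratios = [fps / results[baseline][game][res]
              for game, resolutions in games.items()
              for res, fps in resolutions.items()]
    print(f"{card}: {100 * sum(ratios) / len(ratios):.0f}%")
[/code]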
 

Those charts come from TechPowerUp, and people use them because they are convenient, fairly accurate, and based on more games than most sites use. If you want to know the test setup or see the individual benchmarks, head over to the site.
I agree about the resolution thing, though. Especially for cards of this caliber, using the 1920x1200 chart would have been more relevant.
 

You said "about 30%." I know you were referring to the HD6970 with the 50%. I was just pointing out that using your numbers and the provided chart it would come in at 10% faster than the GTX 580 not 120% like you calculated which is actually exactly in line with your final guess.
 

ares1214

Splendid


How? The 580 > 6870 by about 30% (27% if you really want to be exact), the estimate puts the 6970 > 6870 by 50%, 50 - 30 = 20, and therefore the 6970 > 580 by 20%, by extension putting it at 120% of the 580. I'm confused: do you have a problem with my math or the charts?
 

ares1214

Splendid


Clearly he isn't sharing the good stuff. :lol: If he said 20%, I might actually get my hopes up, as that's within the realm of possibility, even if it's a best-case scenario. 30-40% is too much IMO... :non:
 

That chart shows the HD6870 is 27% slower than the GTX 580. This is not the same as saying the GTX 580 is 27% faster than the HD6870. To rebase and use the HD6870 as the baseline, you must divide:
(GTX 580 performance)/(HD6870 performance) = 100/73 ≈ 1.37, i.e. 37% faster.
Similarly, to calculate 50% more than 73 you must multiply by 1.5, not add 50: 73 × 1.5 = 109.5.
 

Zen911

Distinguished
@ares1214: Actually, at 1920x1200 it's 100/71, which is 40%, and that's not even at the same quality settings. :) Two other unfounded assumptions went into your calculations: first, that the fifth shader was never used; that's simply wrong and will reduce your speculated increase in efficiency. Second, you assumed that performance increases linearly with clock speed; again, that can't be proven by a single benchmark. I'd urge you to provide a link where a 10% increase in clock speed resulted in a 10% increase in overall performance.
 
Well, if 4 shaders could now do the work of 5 in the previous architecture, that would be a 25% increase in performance per shader rather than the 10-15% he uses. So your complaint there is covered, though whether by design or accident I don't know.
I agree that performance does not tend to scale perfectly with clock speed, but the estimate of only 50% scaling with shader count is quite low IMO, probably to the point where he is underestimating at least a bit in total, if anything.
His final guess of 10% better than the GTX 580 looks decent though. Then again, none of us truly knows exactly how the changes made to the architecture will affect any of the numbers discussed.
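Since the disagreement really comes down to the assumptions, here is a quick sweep over them. The 10-15% and 25% per-shader uplifts and the 50% efficiency are the figures floated in this thread; the 60% and 70% efficiency values are just illustrative fill-ins, and the shader counts and chart numbers are as before:

[code]
# How sensitive is the "X% faster than the GTX 580" estimate to the
# two disputed assumptions? Sweep per-shader uplift (10-25%) and
# shader-count scaling efficiency (50-70%).

BARTS_SHADERS, CAYMAN_SHADERS = 1120, 1920   # HD 6870 spec, rumored 6970
HD6870, GTX580 = 73.0, 100.0                 # summary-chart numbers

for uplift in (1.10, 1.15, 1.25):        # assumed VLIW4 per-shader gain
    for efficiency in (0.5, 0.6, 0.7):   # assumed perf scaling with shaders
        extra = CAYMAN_SHADERS * uplift / BARTS_SHADERS - 1.0
        hd6970 = HD6870 * (1.0 + extra * efficiency)
        print(f"uplift {uplift:.2f}, efficiency {efficiency:.1f}: "
              f"{hd6970 / GTX580 - 1:+.0%} vs GTX 580")
[/code]

That spans roughly +5% to +31% over the GTX 580, i.e. everything from "barely ahead" to the 30% claim that started this thread, depending entirely on which assumptions you buy.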
 