Does GTX 280 SLI have unnecessarily high CPU overhead?

ovaltineplease

Distinguished
May 9, 2008
1,198
0
19,280
http://techreport.com/r.x/r700-preview/crysis-1920-veryhigh.gif

Something about this screenshot, which was linked in another thread, might go a long way toward explaining why the GTX 280 is quite evidently not performing as it should, in SLI or otherwise.

There is something fundamentally odd about 9800 GTX SLI outperforming GTX 280 SLI - and I don't think it has anything to do with driver maturity.

Then again, maybe there is a driver maturity issue - is the driver somehow dragging CPU performance into the gutter and driving down the framerates? Is the SLI bridge hardware on this card flawed somehow? Just looking at that screenshot tells me there is something seriously wrong with the GTX 280's SLI performance.

Any comments?
 

ovaltineplease

Distinguished
May 9, 2008
1,198
0
19,280
Surely, we've seen this before.

However, the 4870 and 4870 X2 aren't having this problem - which implies to me there has to be a /specific/ reason why the GTX 280 simply is not scaling worth a damn.

Even in some of the other tests, this particular setup is quite evidently being held back by CPU power - but the real question is why it is being held back this much when the AMD solution's scaling isn't. If anything, I would have expected AMD's solution to take the bigger hit, all other things being equal.
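
To put a rough number on "not scaling worth a damn", here is how I would work out scaling efficiency from review framerates - a minimal sketch, and the FPS figures in it are made-up placeholders, not TechReport's numbers:

```python
# Rough multi-GPU scaling check: efficiency = dual-card FPS / (2 x single-card FPS).
# The framerates below are made-up placeholders - plug in the review's numbers.

def scaling_efficiency(single_fps: float, dual_fps: float) -> float:
    """Fraction of the ideal 2x speedup actually delivered."""
    return dual_fps / (2.0 * single_fps)

configs = {
    "GTX 280 SLI (placeholder)": (30.0, 36.0),  # (single-card FPS, dual-card FPS)
    "HD 4870 CF (placeholder)":  (28.0, 50.0),
}

for name, (single, dual) in configs.items():
    eff = scaling_efficiency(single, dual)
    print(f"{name}: {dual / single:.2f}x speedup, {eff:.0%} of ideal")
```

Roughly speaking, the further below ideal the dual-card number sits at a GPU-heavy setting like 1920 very-high Crysis, the more likely something other than the GPUs themselves (CPU, driver, bridge) is the limiter.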
 

invisik

Distinguished
Mar 27, 2008
2,476
0
19,810
Is it possible Nvidia is doing it on purpose? You know how Nvidia and Intel are enemies now - maybe they're doing this on purpose, just "testing". Maybe VIA and Nvidia have something up their sleeve to throw Intel off. They might be working on something where the cards don't perform as well on Intel CPUs, and when VIA releases a new CPU, the cards work well with it. If Intel doesn't want to use SLI, what option does Nvidia have? And VIA is against Intel. Just my wild theory, lol.
 

ovaltineplease

Distinguished
May 9, 2008
1,198
0
19,280
That is a pretty wild one - but given that Nvidia is losing a great deal of money right now, I don't think that's the reason.

I imagine there has to be a logical explanation for why this affects NV's cards more than AMD's, but I'll be damned if I can figure it out.
 

ovaltineplease

Distinguished
May 9, 2008
1,198
0
19,280
Eklipz330 - go to techreport.com and look at the 4870 X2 preview article; you will see that the 4870 X2 is typically a little ahead of 4870 CF, and that 4850/4870 CF scaling is usually noticeable in everything but Crysis (which is very software-bound for AMD at the moment).

Invisik - it was actually something of a suspicion I had, because I've heard the phrase used before when describing SLI technology. To paraphrase: SLI is not just about throwing two cards together; it's about how it interacts with the entire platform.

NV claimed they were reluctant to give Intel a license for SLI tech for this reason. Whether that's valid or not, I really couldn't tell you.

Now, that said, I run GTX 260s in SLI myself - but on a 4.05 GHz C2D. I can say with absolute certainty that my min and avg framerates are higher than the GTX 280 SLI numbers from that TechReport article - but I'm limited to testing at 1680x1050, since I only own a 22" display, so I can't validate the higher resolutions.

Regardless, if I clock my processor back to stock frequencies, I notice a marked drop in my framerates, especially in Crysis.

In all practical terms this doesn't really bother me, since I have a solid, stable build that handles the 4 GHz OC quite well. It does puzzle me a great deal, though, why this configuration at least /seems/ so CPU-sensitive when the AMD solution is nowhere near as CPU-sensitive. That obviously makes the AMD solution the better of the two for most users, but it doesn't mean I'm not still curious why this phenomenon is so evident.
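
For what it's worth, here is the back-of-the-envelope model behind my "CPU sensitive" claim: assume each frame has a CPU part that shrinks with clock speed and a GPU part that doesn't, so two framerate readings at two clocks let you split the frame time into those two parts. A rough sketch only - the clocks and framerates below are placeholders, not careful measurements:

```python
# Split frame time into a CPU-clock-dependent part and a fixed GPU part,
# from framerates measured at two CPU clocks.
# Model assumption: frame_time(f) = t_gpu + k / f, with f the clock in GHz.
# All numbers below are rough placeholders, not real measurements.

def split_frame_time(f1: float, fps1: float, f2: float, fps2: float):
    t1, t2 = 1000.0 / fps1, 1000.0 / fps2  # frame times in ms
    k = (t1 - t2) / (1.0 / f1 - 1.0 / f2)  # CPU work per frame (ms * GHz)
    t_gpu = t1 - k / f1                    # clock-independent remainder
    return k, t_gpu

# Stock C2D vs the 4.05 GHz overclock (placeholder framerates):
k, t_gpu = split_frame_time(3.0, 38.0, 4.05, 46.0)
print(f"CPU-bound portion at 3.0 GHz: {k / 3.0:.1f} ms per frame")
print(f"GPU/other portion: {t_gpu:.1f} ms per frame")
```

If the CPU portion dominates the frame time at stock clocks, that would match what I'm seeing: framerates that move almost in step with the CPU clock.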
 

invisik

Distinguished
Mar 27, 2008
2,476
0
19,810
I get what you're saying now; it makes me curious too. Hmm, just wondering - what does the CPU do while playing games? I mean the AI, the physics? Isn't most of it being done on the GPU?
 
So the card that is meant to be lower down the pecking order scales better than the new all-singing, all-dancing one. And you haven't seen this before???

I don't see any logical reason, as far as specs go, to say it should or shouldn't be the case; either way, I can't say I'm that bothered about it.

It would seem strange if it had much to do with CPU usage, though, unless the card is really bad in that department, in which case it would affect the single-card score as well.
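
To answer the question above: even with the GPU handling all the rendering, the game loop itself is still CPU work. A rough sketch of the usual per-frame shape - not any real engine, just an illustration with made-up step names:

```python
# Rough shape of the per-frame CPU work in a game - not any real engine,
# just an illustration of what keeps the CPU busy while the GPU renders.

def process_input(world): pass      # read and route player input
def update_ai(world): pass          # pathfinding, scripted behaviour
def step_physics(world): pass       # collisions, rigid bodies (mostly CPU-side in 2008)
def cull_and_sort(world): pass      # decide what is visible this frame
def submit_draw_calls(world): pass  # build commands and feed the driver; with
                                    # SLI/CF the driver also splits and syncs
                                    # this work across two GPUs

def run_frame(world):
    process_input(world)
    update_ai(world)
    step_physics(world)
    cull_and_sort(world)
    submit_draw_calls(world)  # if this driver-side step is slow, the frame is
                              # CPU-bound no matter how fast the GPUs are

run_frame(world={})
```

If that last driver-side step is where the GTX 280 SLI path gets expensive, it would show up as exactly the kind of CPU sensitivity being described.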

Mactronix
 

ovaltineplease

Distinguished
May 9, 2008
1,198
0
19,280
NOVEL IDEA - I'm taking hardware donations for that project :bounce:
 
G

Guest

Guest
Well, I think it was always suspected that the new generation of graphics cards would be CPU-bottlenecked. I mean, the faster the processor, the more frames - and usually that dies off at higher resolutions, but here it doesn't. I'm thinking 4 GHz+ processors are needed to really drive these things, and since most people don't have 4 GHz+ processors, we never really see how good these cards are. Hopefully the Nehalem architecture will not only reduce the bottlenecking, but its supposedly insane overclocking headroom (which seems to be real) should allow high enough clocks to really reduce it.

Although this may not actually be the case, and the cards could just plain suck :kaola:
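
A toy model of that resolution point: treat the framerate as capped by whichever of the CPU or GPU takes longer per frame. The CPU cost stays roughly fixed as resolution rises while the GPU cost grows, so a faster CPU only buys frames where the CPU side is the longer one. Every number below is invented purely to show the shape of it:

```python
# Toy bottleneck model: framerate is capped by whichever of CPU or GPU
# takes longer per frame. All timings are invented for illustration.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 12.0          # per-frame CPU cost, roughly resolution-independent
gpu_ms = {
    "1280x1024": 8.0,  # GPU cost grows with resolution
    "1680x1050": 11.0,
    "1920x1200": 16.0,
}

for res, g in gpu_ms.items():
    bound = "CPU" if cpu_ms > g else "GPU"
    print(f"{res}: {fps(cpu_ms, g):.0f} fps ({bound}-bound)")
```

If the GTX 280 SLI driver path pushes that CPU cost up, the CPU stays the limiter even at resolutions where the GPUs should have taken over - which would match the benchmarks in question.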