Does GTX 280 SLI have unnecessarily high CPU overhead?

http://techreport.com/r.x/r700-preview/crysis-1920-veryhigh.gif

This particular screenshot, which was linked in another thread, may go a long way toward explaining why the GTX 280 is quite evidently not performing as it should, in SLI or otherwise.

There is something fundamentally odd about 9800 GTX SLI outperforming GTX 280 SLI - and I don't think it has anything to do with driver maturity.

Then again, maybe there is a driver maturity issue - is the software driver somehow burning CPU time and dragging down the framerates? Is the SLI bridge hardware on this card flawed somehow? Just looking at that screenshot tells me that something is quite seriously wrong with the GTX 280's SLI performance.

Any comments?
  1. Maybe the new cards need more kick from the CPU to put forward their full potential?
  2. Surely, we've seen this before.

    However, the 4870 and 4870 X2 aren't having this problem - which implies to me there has to be a /specific/ reason why it simply is not scaling worth a damn.

    Even in some of the other tests, this particular setup is quite evidently being held back by CPU power - but the real question is why it gets held back this much while the AMD solution's scaling doesn't. If anything, I would expect their solution to take more of a hit, all other things being equal.
  3. Is it possible NVIDIA is doing it on purpose? You know how NVIDIA and Intel are enemies now - maybe they're doing this deliberately, just "testing". Maybe VIA and NVIDIA have something up their sleeve to throw Intel off: they might be working on something where the cards don't perform as well with Intel CPUs, and if VIA releases a new CPU, the cards work well with theirs. Intel doesn't want to license SLI, so what option does NVIDIA have? And VIA is against Intel. Just my wild theory, lol.
  4. That is a pretty wild one - but given that NVIDIA is losing a great deal of money right now, I don't think that's the reason.

    I imagine there has to be a logical explanation for why this is affecting NVIDIA's cards more than AMD's, but I just can't be damned to figure it out.
  5. Now, when you say 4870 X2, do you mean CrossFire, or the one that's coming out in less than a month?
  6. Do you think it's just an architecture problem on the GT200?
  7. Eklipz330 - go to techreport.com and look at the 4870 X2 preview article; you will see that the 4870 X2 is typically a little ahead of 4870 CrossFire, and that 4850/4870 CrossFire scaling is usually noticeable in everything but Crysis (which is very software-bound for AMD at the moment).

    Invisik - it was actually something of a suspicion I had, because I've heard the phrase used before when describing SLI technology. To paraphrase: SLI is not just about throwing two cards together; it's about how it interacts with the entire platform.

    NVIDIA claimed they were reluctant to give Intel a license for SLI for this very reason. Whether that is valid or not, I really couldn't tell you.

    Now, that said, I run GTX 260s in SLI myself, on a 4.05 GHz C2D. I can say with absolute certainty that my minimum and average framerates are higher than the GTX 280 SLI numbers from that TechReport article - but I'm limited to testing at 1680x1050, since I only own a 22" display, so I can't validate the higher resolutions.

    Regardless, if I clock my processor back to stock frequencies, I notice a marked difference in my framerates, especially in Crysis.

    In practical terms this doesn't really bother me, since I have a solid, stable build that handles the 4 GHz OC quite well. It does puzzle me a great deal, though, why this configuration at least /seems/ so CPU-sensitive when the AMD solution is nowhere near as CPU-sensitive. That obviously makes the AMD solution the better of the two for most users, but I'm still curious why this phenomenon is so evident.
  8. I get what you're saying now; it makes me curious too. Hmm, just wondering: what does the CPU actually do while playing games? I mean, the AI, the physics? Isn't most of it being done on the GPU?
  9. Hold up - I looked at the "Our testing methods" page:

    http://www.techreport.com/articles.x/15105/2

    Driver version 177.39

    That is definitely going to affect the results quite a bit - I wish there were some official sources using the 177.41 drivers :/
  10. http://www.anandtech.com/video/showdoc.aspx?i=3354&p=3

    Even on this site they are using 177.34 drivers for the GT200 cards - and the 177.41 drivers were readily available when they benchmarked the 4870 X2 preview.

    Why do I have a feeling I'm going to be rolling back drivers to see if there are performance differences... X_X
  11. Does 177.41 make that much of a difference? Usually drivers improve performance by something like 5-10%.
  12. I'm not sure; I started out on 177.41 - I didn't get mine at launch.

    I'll have to check another day to be sure - I doubt that's it, though.
  13. So the card that is meant to be lower down the pecking order scales better than the new all-singing, all-dancing one. And you haven't seen this before???

    I don't see any logical reason, as far as specs go, to say it should or shouldn't be the case. Either way, I can't say I'm that bothered about it.

    It would seem strange if it had much to do with CPU usage, though, unless the card is really bad in that department - in which case it would affect the single-card score as well.

    Mactronix
  14. Rig it up on a Skulltrail with dual QX9770s at 6.0 GHz and see if it's bottlenecked by the CPU.
  15. sabot00 said:
    Rig it up on a Skulltrail with dual QX9770s at 6.0 GHz and see if it's bottlenecked by the CPU.



    NOVEL IDEA - I'm taking hardware donations for that project :bounce:
  16. Well, I think it was always suspected that the new generation of graphics cards would be CPU-bottlenecked. The faster the processor, the more frames; usually that effect dies off at higher resolutions, but here it doesn't. I'm thinking 4 GHz+ processors are needed to really drive these things, and since most people don't have 4 GHz+ processors, we never really see how good these cards are. Hopefully the Nehalem architecture will reduce the bottlenecking, and its supposedly insane overclocking headroom (which seems to be real) should allow higher clocks that reduce it further.

    Although this may not actually be the case, and the cards could just plain suck :kaola:
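The CPU-bottleneck reasoning running through this thread can be sketched as a toy model. The assumption: each frame is limited by whichever of the CPU or GPU takes longer, so a faster card (or an SLI pair) only raises framerates until the CPU becomes the limiter. All the millisecond figures below are hypothetical illustrations, not measured numbers from any of the cards discussed:

```python
# Toy model of CPU-vs-GPU bottlenecking (hypothetical timings, not
# benchmark data). Frame time is set by the slower of the two stages,
# assuming CPU and GPU work overlap each frame.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when the slower stage sets the frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical per-frame costs in milliseconds.
cpu_stock = 18.0    # CPU at stock clocks
cpu_oc = 12.0       # CPU overclocked to ~4 GHz
gpu_single = 30.0   # one GPU
gpu_sli = 16.0      # two GPUs with imperfect scaling

# Single card: GPU-bound, so overclocking the CPU changes nothing.
print(fps(cpu_stock, gpu_single))  # 33.3 fps
print(fps(cpu_oc, gpu_single))     # 33.3 fps

# SLI: now the stock CPU is the limiter, and the OC pays off -
# matching the stock-vs-4GHz Crysis observation in post 7.
print(fps(cpu_stock, gpu_sli))     # 55.6 fps
print(fps(cpu_oc, gpu_sli))        # 62.5 fps
```

The same model also shows why a CPU bottleneck normally "dies off" at high resolution: raising the resolution inflates `gpu_ms` until the GPU is the slower stage again, which is what makes the persistent CPU sensitivity in the TechReport numbers look odd.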