
Analysis: PhysX On Systems With AMD Graphics Cards
By Igor Wallossek

Summary And Conclusion

CPU-Based PhysX Summary

To sum up the headlines of the last few months and our test results, we can conclude the following:

  • The CPU-based PhysX mode mostly uses only the older x87 instruction set instead of SSE2.
  • Recompiling the Bullet benchmark shows a maximum performance increase of only 10% to 20% when using SSE2.
  • The optimization performance gains would thus only be marginal in a purely single-core application.
  • Contrary to many reports, CPU-based PhysX supports multi-threading.
  • There are scenarios in which PhysX is better on the CPU than the GPU.
  • A game like Metro 2033 shows that CPU-based PhysX could be quite competitive.


Then why is the performance picture so dreary right now?

  • With CPU-based PhysX, the game developers are largely responsible for fixing thread allocation and management, while GPU-based PhysX handles this automatically.
  • This is a time and money issue for the game developers.
  • The current situation is also architected to help promote GPU-based PhysX over CPU-based PhysX.
  • With SSE2 optimizations and good threading management for the CPU, modern quad-core processors would be highly competitive compared to GPU PhysX. Predictably, Nvidia’s interest in this is lackluster.


The AMD graphics card + Nvidia graphics card (as dedicated PhysX card) hybrid mode

Here, too, our verdict is a bit more moderate compared to the recent hype. We conclude the following:

Pros:

One can claim that using the additional card results in a huge performance gain if PhysX was previously running on the CPU instead of the GPU. In such cases, the performance of a Radeon HD 5870 with a dedicated PhysX card is far superior to a single GeForce GTX 480. Even if you combine the GTX 480 with the same dedicated PhysX card, the lead of the GTX 480 is very small. The GPU-based PhysX solution is possible for all AMD users if the dedicated Nvidia PhysX-capable board is powerful enough. Mafia II shows that there are times when even a single GeForce GTX 480 reaches its limits and that “real” PhysX with highly-playable frame rates is only possible with a dedicated PhysX card.

Cons:

On the other hand, we have the fact that Nvidia incorporates strategic barriers in its drivers to prevent these combinations and performance gains if non-Nvidia cards are installed as primary graphics solutions.

It's good that the community does not take this lying down, but instead continues to produce pragmatic countermeasures. But there are more pressing drawbacks. In addition to the high costs of buying an extra card, we have added power consumption. If you use an older card, this is disturbingly noticeable, even in idle mode or normal desktop operation. Everyone will have to decide just how much money an enthusiast project like this is worth. It works, and it's fun. But whether it makes sense for you is something only you can decide for yourself.

Comments
  • 41
    eyefinity , November 18, 2010 5:17 AM
    So it's basically what everybody in the know already knew - nVidia is holding back progress in order to line their own pockets.
  • 32
    Emperus , November 18, 2010 5:25 AM
    Is it 'Physx by Nvidia' or 'Physx for Nvidia'..!! Its a pity to read those lines wherein it says that Nvidia is holding back performance when a non-Nvidia primary card is detected..
  • -2
    Anonymous , November 18, 2010 5:40 AM
    It looks like the increase in CPU utilization with CPU PhysX is only 154%, which could be one thread plus synchronization overhead with the main rendering threads.
  • 30
    eyefinity , November 18, 2010 5:48 AM
    The article could barely spell it out more clearly.

    Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores.


    The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics.

    Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tessellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.

    They are a pure scumbag company.
  • 31
    rohitbaran , November 18, 2010 5:49 AM
    In short, a good config to enjoy Physx requires selling an arm or a leg and the game developers and nVidia keep screwing the users to save their money and propagate their business interests respectively.
  • 27
    iam2thecrowe , November 18, 2010 6:00 AM
    The world needs OpenCL physics NOW! Also, while this is an informative article, it would be good to see which single Nvidia cards make games using PhysX playable. Will a single GTS 450 cut it? Probably not. That way budget gamers can make a more informed choice, as there's no point choosing Nvidia for PhysX and finding it doesn't run well anyway on mid-range cards when they could have just bought an ATI card and been better off.
  • 9
    guanyu210379 , November 18, 2010 6:15 AM
    I have never cared about Physics.
  • 27
    archange , November 18, 2010 6:46 AM
    Believe it or not, this morning I was determined to look into this same problem, since I just upgraded from an 8800 GTS 512 to an HD 6850. :o 

    Thank you, Tom's, thank you Igor Wallossek for making it easy!
    You just made my day: a big thumbs up!
  • 2
    jamesedgeuk2000 , November 18, 2010 6:58 AM
    What about people with dedicated PPU's? I have 8800 GTX SLi and an Ageia Physx card where do I come into it?
  • 10
    skokie2 , November 18, 2010 7:09 AM
    What the article failed to mention (and if what I see is real, it's much more predatory) is that simply having onboard AMD graphics, even if it's disabled in the BIOS, stops PhysX from working. This is simply outrageous. My main hope is that AMD finally gets better at Linux drivers so my next card doesn't need to be Nvidia. I will vote with my feet... so long as there is another name on the slip :(  It's a sad state of graphics generally, and it's been getting worse since AMD bought ATI... it was then that this game started... Nvidia just takes it up a notch.
  • 5
    TKolb13 , November 18, 2010 7:35 AM
    Is it possible to run 2 5870's in CrossFire and a GTX 260 all at once? (I upgraded to the CrossFire)
  • 8
    super_tycoon , November 18, 2010 7:37 AM
    @jamesedgeuk2000, the PPU is abandoned. You would have to run some pretty old drivers to have it recognized. That being said, almost all of the older PhysX games require a PPU for acceleration. I'm still annoyed Nvidia hasn't come up with some sort of compatibility layer so a GPU can act as a PPU.

    I've been running a hybrid system for almost half a year now. I have a 5770 (replacing it with a 6970, assuming it ever comes out while I still have money to my name) and a GT 240 with 512MB GDDR5. (I got it for 30 before tax on a whim.) The only game I've ever found improved by the 240 is Mirror's Edge. I can get some pretty glass shattering while my friend's GTS 250 just craps out. However, a hybrid system does have the advantage of CUDA. Start up a CUDA app and, boom, you get awesome OpenGL (or DirectX) performance and CUDA acceleration. One caveat with my 240 is that you need at least 768MB (I THINK) of VRAM to enable the Mercury Playback Engine in CS5.
  • -2
    IzzyCraft , November 18, 2010 7:53 AM
    Quote:
    The article could barely spell it out more clearly.

    Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores.


    The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics.

    Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tessellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.

    They are a pure scumbag company.

    I always enjoy reading your hate; it's like I'm reading Charlie, except not quite as eloquent.

    So would you rather have a game with zero (or really shitty) physics, or one that Nvidia provided to the devs for free? Should I point out all the games that utterly lack AA altogether? Funny how such a basic thing can be overlooked; things cost money. The article points out that a large portion of the bad CPU utilization is due to no dev work to make it better, and that cuts both ways, not just toward Nvidia.

    Nvidia is a publicly traded company; any action they take is made in the interest of profits, and anything else gets people fired.

    It's cool how the article is about PhysX but you bring up tessellation and then end it with "scumbag company". Maybe I should bring up how ATI cuts texture quality.

    So why would Nvidia, who is already spending buttloads of money developing a game for another company, cut down its own bottom line? The stuff is all there; it's just a matter of devs actually doing the legwork, which Nvidia would be stupid to do themselves. With people like you, they could cure cancer and still be Satan, so you are already the case study for why they shouldn't do any real work on improving CPU utilization with their PhysX, because I'm sure to you it would just fall on deaf ears.

    Granted, even I don't quite get the gambit of cutting ATI support for physics, but business is business, and as with all things proprietary, the end users always lose.
  • -5
    alyoshka , November 18, 2010 7:55 AM
    I still have to try a hybrid setup... and I really have my doubts about the mix-and-match thing... two 5750's in CF and an Nvidia card for PhysX :)  that should do the trick...
  • 14
    blibba , November 18, 2010 8:09 AM
    jamesedgeuk2000: What about people with dedicated PPU's? I have 8800 GTX SLi and an Ageia Physx card where do I come into it?


    Looks like you might be better off leaving PhysX to the 8800s...
  • -9
    varfantomen , November 18, 2010 8:23 AM
    eyefinity: They are a pure scumbag company.


    Heh, why should Nvidia spend their time and money to help AMD? It's as much nonsense as saying Toyota should help Ford because that too would be for the greater good. Yeah, damn those scumbags at Toyota!
  • 11
    ern88 , November 18, 2010 8:37 AM
    Screw PhysX anyway. I don't care much for it, and it's only a handful of games that Nvidia bribed to get it into.
  • 6
    jgv115 , November 18, 2010 8:52 AM
    PhysX? Haven't heard of it...
  • 9
    sudeshc , November 18, 2010 9:00 AM
    I believe that PhysX should be improved to take advantage of all CPU cores, and the GPU-CPU load balancing should be more practical and performance-based... after all, we want to play better games :D
  • 13
    gti88 , November 18, 2010 9:24 AM
    Only 1 out of 100 gamers may have a dedicated PhysX card.
    The other 99 don't bother at all and avoid PhysX to get smoother gameplay.
    Is it really worth it in the eyes of Jen-Hsun?