
CPU-Based PhysX: Relevance

Analysis: PhysX On Systems With AMD Graphics Cards
Relevance of the CPU PhysX solution

Let’s first examine the fact that Nvidia currently allows GPU-accelerated PhysX only on its own graphics cards, forcing everyone else to run the PhysX calculations implemented in games on the CPU. For non-Nvidia gamers, the result is usually an unplayable game once PhysX is enabled without a GeForce card installed. Obviously, the goal of this article is not to judge business decisions, but rather to understand the lack of performance experienced on systems not equipped with Nvidia graphics cards.

Why is CPU PhysX so much slower than GPU PhysX in modern games?

Assuming that a calculation can be parallelized, a GPU with its many shader units is faster than a conventional CPU with two, three, four, or even six cores. According to Nvidia, physics calculations run two to four times faster on GPUs than on CPUs. That’s only half the truth, though, because there are no physics features that couldn’t be implemented solely on the CPU. Quite often, games use a combined CPU + GPU approach, with the highly parallelizable calculations, such as particle effects, performed by the GPU and the more static, non-parallelizable calculations, such as ragdolls, performed by the CPU. This is the case in Sacred 2, for example. In theory, the share of highly parallelizable calculations should in many cases be too low to take noticeable advantage of the GPU’s immense speed.
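The split described above comes down to data dependence, which a minimal sketch can make concrete. All function names below are invented for illustration and are not the PhysX API: each particle update touches only that particle's own state, so the loop could be spread over any number of cores or shader units, while a Gauss-Seidel-style constraint relaxation of a ragdoll chain reads each correction before computing the next and is therefore inherently serial.

```python
def step_particles(particles, dt=0.016, gravity=-9.81):
    """Each particle depends only on its own (x, y, vx, vy) state, so
    every iteration of this loop could run on a separate core or
    shader unit with no communication."""
    out = []
    for x, y, vx, vy in particles:
        vy += gravity * dt
        out.append((x + vx * dt, y + vy * dt, vx, vy))
    return out

def relax_ragdoll_chain(joints, rest=1.0, iterations=10):
    """Iterative constraint relaxation on a 1-D chain of joints: each
    correction reads the result of the previous one, so the inner loop
    cannot simply be split across threads."""
    joints = list(joints)
    for _ in range(iterations):
        for i in range(len(joints) - 1):
            # pull neighbouring joints toward the rest distance
            error = (joints[i + 1] - joints[i]) - rest
            joints[i] += 0.5 * error
            joints[i + 1] -= 0.5 * error
    return joints
```

The first function is what GPUs excel at; the second is the kind of work that, per the article, stays on the CPU even in GPU-PhysX titles.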

But then why is the difference often so drastic in practice?

There are at least two reasons for this. The first one is that, in almost all of the games tested, CPU-based PhysX uses just a single thread, regardless of how many cores are available. The second one is that Nvidia seems to be intentionally not optimizing the CPU calculations in order to make the GPU solution look better. We’ll have to investigate multithreading at a later time with a suitable battery of benchmarks. Right now, we want to explore Nvidia deliberately leaving its code in a state where CPUs just can’t compete with GPUs.
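To make the single-thread point concrete, here is a hedged sketch (invented names, not actual PhysX code) of how one integration step could be carved into per-core chunks. The article's claim is precisely that the CPU PhysX path never performs this kind of split, leaving all but one core idle.

```python
from concurrent.futures import ThreadPoolExecutor

def integrate_chunk(chunk, dt=0.016):
    """Semi-implicit Euler under gravity on a list of (pos, vel) pairs."""
    out = []
    for pos, vel in chunk:
        vel = vel - 9.81 * dt
        out.append((pos + vel * dt, vel))
    return out

def step_single_thread(bodies, dt=0.016):
    # the single-threaded path: one core does all the work
    return integrate_chunk(bodies, dt)

def step_multi_thread(bodies, dt=0.016, workers=4):
    # carve the body list into contiguous chunks, one batch per worker
    n = max(1, len(bodies) // workers)
    chunks = [bodies[i:i + n] for i in range(0, len(bodies), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        merged = []
        for result in pool.map(lambda c: integrate_chunk(c, dt), chunks):
            merged.extend(result)
    return merged
```

Because each chunk is independent, the merged result is bit-identical to the single-threaded one. Whether it is actually faster depends on the runtime (CPython's GIL would serialize this pure-Python version; a native engine would use real worker threads), but the point stands: nothing about the workload itself forbids the split.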

Top Comments
  • 41 Hide
    eyefinity , November 18, 2010 5:17 AM
    So it's basically what everybody in the know already knew - nVidia is holding back progress in order to line their own pockets.
  • 32 Hide
    Emperus , November 18, 2010 5:25 AM
Is it 'Physx by Nvidia' or 'Physx for Nvidia'..!! It's a pity to read those lines wherein it says that Nvidia is holding back performance when a non-Nvidia primary card is detected..
  • 31 Hide
    rohitbaran , November 18, 2010 5:49 AM
    In short, a good config to enjoy Physx requires selling an arm or a leg and the game developers and nVidia keep screwing the users to save their money and propagate their business interests respectively.
Other Comments
  • -2 Hide
    Anonymous , November 18, 2010 5:40 AM
It looks like the increase in CPU utilization with CPU PhysX is only 154%, which could be 1 thread plus synchronization overhead with the main rendering threads.
  • 30 Hide
    eyefinity , November 18, 2010 5:48 AM
    The article could barely spell it out more clearly.

    Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores.


    The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics.

Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tessellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.

    They are a pure scumbag company.
  • 27 Hide
    iam2thecrowe , November 18, 2010 6:00 AM
The world needs OpenCL physics NOW! Also, while this is an informative article, it would be good to see which single Nvidia cards make games using PhysX playable. Will a single GTS 450 cut it? Probably not. That way budget gamers can make a more informed choice, as there's no point choosing Nvidia for PhysX and finding it doesn't run well anyway on mid-range cards, when they could have just bought an ATI card and been better off.
  • 9 Hide
    guanyu210379 , November 18, 2010 6:15 AM
    I have never cared about Physics.
  • 27 Hide
    archange , November 18, 2010 6:46 AM
    Believe it or not, this morning I was determined to look into this same problem, since I just upgraded from an 8800 GTS 512 to an HD 6850. :o 

Thank you, Tom's, thank you Igor Wallossek for making it easy!
    You just made my day: a big thumbs up!
  • 2 Hide
    jamesedgeuk2000 , November 18, 2010 6:58 AM
    What about people with dedicated PPU's? I have 8800 GTX SLi and an Ageia Physx card where do I come into it?
  • 10 Hide
    skokie2 , November 18, 2010 7:09 AM
What fails to get mentioned (and if what I see is real, it's much more predatory) is that simply having onboard AMD graphics, even if it's disabled in the BIOS, stops PhysX working. This is simply outrageous. My main hope is that AMD finally gets better at Linux drivers so my next card does not need to be nVidia. I will vote with my feet... so long as there is another name on the slip :(  Sad state of graphics generally, and it's been getting worse since AMD bought ATI.. it was then that this game started... nVidia just takes it up a notch.
  • 5 Hide
    TKolb13 , November 18, 2010 7:35 AM
    is it possible to run 2 5870's in crossfire and a gtx 260 all at once? (I upgraded to the crossfire)
  • 8 Hide
    super_tycoon , November 18, 2010 7:37 AM
@jamesedgeuk2000, the ppu is abandoned. You would have to run some pretty old drivers to have it recognized. That being said, almost all of the older physx games require a ppu for acceleration. I'm still annoyed Nvidia hasn't come up with some sort of compatibility layer so a gpu can act as a ppu.

I've been running a hybrid system for almost half a year now. I have a 5770 (replacing it with a 6970, assuming it ever comes out while I still have money to my name) and GT 240 with 512 GDDR5. (I got it for 30 before tax on a whim) The only game I've ever found improved by the 240 is Mirror's Edge. I can get some pretty glass shattering while my friend's GTS 250 just craps out. However, a hybrid system does have the advantage of CUDA. Start up a CUDA app, boom, get awesome opengl (or directX) performance and cuda acceleration. One caveat with my 240 is that you need at least 768mb (I THINK) of vram to enable the Mercury Playback Engine in CS5.
  • -2 Hide
    IzzyCraft , November 18, 2010 7:53 AM
    Quote:
    The article could barely spell it out more clearly.

    Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores.


    The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics.

Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tessellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.

    They are a pure scumbag company.

    I always enjoy reading your hate it's like i'm reading Charlie except not quite as eloquent.

So would you rather have a game with zero (or really shitty) physics, or one that Nvidia provided to the devs for free? Should I point out all the games that utterly lack AA altogether? Funny how such a basic thing can be overlooked; things cost money. The article points out that a large portion of the bad CPU utilization is due to no dev work to make it better, and that cuts both ways, not just to Nvidia.

    Nvidia is a publicly traded company any action that they make is made in the interest in profits anything else gets people fired.

    It's cool how the article is about phsyx but you bring up tessellation and then end it with scumbag company. Maybe i should bring up how ATI cuts texture quality.

So why would Nvidia, who is already spending buttloads of money developing a game for another company, cut down its own bottom line? The stuff is all there, it's just a matter of devs actually doing the leg work, which Nvidia would be stupid to do themselves. With people like you they could cure cancer but still be Satan, so you already are the case study for why they shouldn't do any real work to improve CPU utilization with their PhysX, because I'm sure to you it would just fall on deaf ears.

Granted, even I don't quite get the gambit of cutting ATI support for physics, but business is business, and like all things proprietary, the end users always lose.
  • -5 Hide
    alyoshka , November 18, 2010 7:55 AM
I still have to try a Hybrid setup.... and really have my doubts with the mix and match thing..... 2 5750's in CF and a Nvidia for PhysX :)  that should do the trick.......
  • 14 Hide
    blibba , November 18, 2010 8:09 AM
jamesedgeuk2000: What about people with dedicated PPU's? I have 8800 GTX SLi and an Ageia Physx card where do I come into it?


    Looks like you might be better leaving physx to the 8800s...
  • -9 Hide
    varfantomen , November 18, 2010 8:23 AM
eyefinity: They are a pure scumbag company.


Heh, why should nvidia spend their time and money to help AMD? It's as much nonsense as saying Toyota should help Ford because that too would be for the greater good. Yeah, damn those scumbags at Toyota!
  • 11 Hide
    ern88 , November 18, 2010 8:37 AM
Screw PhysX anyway. I don't care much for it, and it is only a handful of games that Nvidia bribed to get it in.
  • 6 Hide
    jgv115 , November 18, 2010 8:52 AM
    PhysX? Haven't heard of it...
  • 9 Hide
    sudeshc , November 18, 2010 9:00 AM
I believe that PhysX should be improved and should take advantage of all cores of the CPU, and the GPU-CPU load balancing should be more practical and performance based...........After all, we want to play better Games :D 
  • 13 Hide
    gti88 , November 18, 2010 9:24 AM
    Only 1 out of 100 gamers may have a dedicated "phisix" card.
    Other 99 don't bother at all and avoid phisix to get smoother gameplay.
Is it really worth it in the eyes of Jen-Hsun?