Analysis: PhysX On Systems With AMD Graphics Cards

CPU PhysX: Multi-Threading?

Does CPU PhysX Really Not Support Multiple Cores?

Our next problem is that, in almost all previous benchmarks, only one CPU core has really been used for PhysX in the absence of GPU hardware acceleration--or so some say. Again, this seems like somewhat of a contradiction given our measurements of fairly good CPU-based PhysX scaling in Metro 2033 benchmarks.

Graphics card:         GeForce GTX 480 1.5 GB
Dedicated PhysX card:  GeForce GTX 285 1 GB
Graphics driver:       GeForce 258.96
PhysX version:         9.10.0513


First, we measure CPU core utilization. We switch to DirectX 11 mode with its multi-threading support to get a real picture of performance. The top section of the graph below shows that CPU cores are rather evenly utilized when extended physics is deactivated.
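Per-core load graphs like the one described above can be reproduced on Linux by diffing two snapshots of `/proc/stat`. The sketch below illustrates the method only, not the tooling used for this article; it assumes the standard `cpuN` jiffie columns (user, nice, system, idle, iowait, ...).

```python
def per_core_utilization(stat_before: str, stat_after: str) -> dict:
    """Busy percentage per core between two /proc/stat snapshots.

    Each 'cpuN' line lists jiffies spent in user, nice, system, idle,
    iowait, irq, softirq, ... states. Busy time is everything except
    idle and iowait.
    """
    def parse(text):
        cores = {}
        for line in text.splitlines():
            # Match 'cpu0', 'cpu1', ... but skip the aggregate 'cpu' line.
            if line.startswith("cpu") and len(line) > 3 and line[3].isdigit():
                fields = line.split()
                ticks = [int(v) for v in fields[1:]]
                idle = ticks[3] + (ticks[4] if len(ticks) > 4 else 0)
                cores[fields[0]] = (sum(ticks), idle)
        return cores

    before, after = parse(stat_before), parse(stat_after)
    util = {}
    for name, (total_a, idle_a) in after.items():
        total_b, idle_b = before[name]
        d_total = total_a - total_b
        d_idle = idle_a - idle_b
        util[name] = 100.0 * (d_total - d_idle) / d_total if d_total else 0.0
    return util
```

Sampling the file twice a second or so and plotting the result per core gives the kind of utilization trace shown in the graph.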

To keep the graphics card from acting as a bottleneck, we start out at a resolution of just 1280x1024. The less the graphics card limits performance, the better the game scales with additional cores. This would change in DirectX 9 mode, which caps scaling at two CPU cores.

We notice a small increase in CPU utilization when activating GPU-based PhysX because the graphics card needs to be supplied with data for calculations. However, the increase is much larger with CPU-based PhysX activated, indicating a fairly successful parallelization implementation by the developers.
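The kind of parallelization the developers achieved can be pictured as partitioning each frame's physics work across worker threads. The Python sketch below is purely illustrative, not the game's or PhysX's code: it splits a naive explicit-Euler integration step over a thread pool and produces the same result as the serial version. (Python threads share the GIL, so this shows the work split, not a real speedup; a native engine runs such chunks on separate cores.)

```python
from concurrent.futures import ThreadPoolExecutor

def euler_step(positions, velocities, dt, gravity=-9.81):
    """Serial reference: one explicit-Euler integration step."""
    new_pos = []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vy += gravity * dt
        new_pos.append((x + vx * dt, y + vy * dt))
    return new_pos

def euler_step_parallel(positions, velocities, dt, workers=4):
    """Same step, with the particle set partitioned across worker threads."""
    n = len(positions)
    chunk = max(1, (n + workers - 1) // workers)
    slices = [(i, min(i + chunk, n)) for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(
            lambda s: euler_step(positions[s[0]:s[1]],
                                 velocities[s[0]:s[1]], dt),
            slices))
    # Chunks are independent, so reassembly is a simple concatenation.
    return [p for part in parts for p in part]
```

Rigid-body solvers have cross-object dependencies that make the split harder than this independent-particle case, which is part of why good CPU scaling takes real developer effort.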

Looking at Metro 2033, we also see that a moderate level of PhysX effects remains playable even when no PhysX hardware acceleration is available. This is because Metro 2033 is mostly limited by the main graphics card and its 3D performance rather than by the added PhysX effects. There is one exception, though: the simultaneous explosion of several bombs. In that case, the CPU suffers serious frame rate drops, although the game remains playable. Most people won't want to play at such a low resolution, so we switched to the other extreme.

Performing these benchmarks with a powerful main graphics card and a dedicated PhysX card was a deliberate choice, given that a single Nvidia card normally suffers from some performance penalties with GPU-based PhysX enabled. Things would get quite bad in this already-GPU-constrained game. In this case, the difference between CPU-based PhysX on a fast six-core processor with well-implemented multi-threading and a single GPU is almost zero.

Assessment

Contrary to some headlines, the Nvidia PhysX SDK actually offers multi-core support for CPUs. When used correctly, it even comes dangerously close to the performance of a single-card, GPU-based solution. Despite this, however, there's still a catch. PhysX automatically handles thread distribution, moving the load away from the CPU and onto the GPU when a compatible graphics card is active. Game developers need to shift some of the load back to the CPU.
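Shifting load back to the CPU amounts to handing the physics engine a pool of worker threads to fill; in the 3.x SDK, as we understand it, the developer simply requests a CPU dispatcher with a chosen thread count. The Python toy below mimics that concept only (a fixed worker pool draining a shared task queue) and is not PhysX code.

```python
import queue
import threading

class CpuDispatcher:
    """Toy stand-in for an engine's CPU task dispatcher (NOT the PhysX
    API): a fixed pool of worker threads draining a shared task queue.
    The thread count is the knob a developer turns to push physics work
    onto otherwise-idle cores."""

    def __init__(self, num_threads):
        self._tasks = queue.Queue()
        self._workers = [threading.Thread(target=self._run, daemon=True)
                         for _ in range(num_threads)]
        for w in self._workers:
            w.start()

    def _run(self):
        while True:
            task = self._tasks.get()
            if task is None:          # shutdown sentinel
                self._tasks.task_done()
                return
            task()
            self._tasks.task_done()

    def submit(self, task):
        self._tasks.put(task)

    def shutdown(self):
        for _ in self._workers:       # one sentinel per worker
            self._tasks.put(None)
        self._tasks.join()
        for w in self._workers:
            w.join()
```

With a dispatcher like this, leaving the thread count at one when a GPU is present is a single default, which is why the redistribution the article describes is a developer decision rather than a technical barrier.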

Why does this so rarely happen?

The effort and expense of implementing these coding changes obviously acts as a deterrent. We still think developers should be honest and openly admit this, though. Studying certain games (with a certain logo in the credits) raises the question of whether this additional expense was skipped for commercial or marketing reasons. On one hand, Nvidia has a duty to developers, helping them integrate compelling effects that gamers can enjoy and that might not have made it into the game otherwise. On the other hand, Nvidia wants to prevent (and with good reason) prejudices from getting out of hand. According to Nvidia, SDK 3.0 already offers these capabilities, so we look forward to seeing developers implement them.

149 comments
Comment from the forums
    Top Comments
  • eyefinity
    So it's basically what everybody in the know already knew - nVidia is holding back progress in order to line their own pockets.
    41
  • Emperus
    Is it 'Physx by Nvidia' or 'Physx for Nvidia'..!! Its a pity to read those lines wherein it says that Nvidia is holding back performance when a non-Nvidia primary card is detected..
    32
  • rohitbaran
    In short, a good config to enjoy Physx requires selling an arm or a leg and the game developers and nVidia keep screwing the users to save their money and propagate their business interests respectively.
    31
  • Other Comments
  • Anonymous
    It looks like the increase in CPU utilization with CPU physX is only 154%, which could be 1 thread plus synchronization overhead with the main rendering threads.
    -2
  • eyefinity
    The article could barely spell it out more clearly.

    Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores.


    The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics.

    Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tesellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.

    They are a pure scumbag company.
    30
  • iam2thecrowe
    The world needs need opencl physics NOW! Also, while this is an informative article, it would be good to see what single nvidia cards make games using physx playable. Will a single gts450 cut it? probably not. That way budget gamers can make a more informed choice as its no point chosing nvidia for physx and finding it doesnt run well anyway on mid range cards so they could have just bought an ATI card and been better off.
    27
  • guanyu210379
    I have never cared about Physics.
    9
  • archange
    Believe it or not, this morning I was determined to look into this same problem, since I just upgraded from an 8800 GTS 512 to an HD 6850. :O

    Thank you, Tom's, thank you Igor Wallossek for makinng it easy!
    You just made my day: a big thumbs up!
    27
  • jamesedgeuk2000
    What about people with dedicated PPU's? I have 8800 GTX SLi and an Ageia Physx card where do I come into it?
    2
  • skokie2
    What is failed to be mentioned (and if what I see is real its much more predatory) that simply having an onboard AMD graphics, even if its disabled in the BIOS, stops PhysX working. This is simply outragous. My main hope is that AMD finally gets better at linux drivers so my next card does not need to be nVidia. I will vote with my feet... so long as there is another name on the slip :( Sad state of graphics generally and been getting worse since AMD bought ATI.. it was then that this game started... nVidia just takes it up a notch.
    10
  • TKolb13
    is it possible to run 2 5870's in crossfire and a gtx 260 all at once? (I upgraded to the crossfire)
    5
  • super_tycoon
    @jamesedgeuk2000, the ppu is abandoned. You would have to run some pretty old drivers to have it recognized. That being said, almost all of the older physx games require a ppu for acceleration. I'm still annoyed Nvidia hasn't come up with some sort of compatibility layer so a gpu ppu can act as a ppu.

    I've been running a hybrid system for almost half a year now. I have a 5770 (replacing it with a 6970, assuming it ever comes out while I still have money to my name) and GT 240 with 512 GDDR5. (I got it for 30 before tax on a whim) The only game I've ever found improved by the 240 is Mirror's Edge. I can get some pretty glass shattering while my friend's GTS 250 just craps out. However, a hybrid system does have the advantage of CUDA. Start up a CUDA app, boom, get awesome opengl (or directX) performance and cuda acceleration. One caveat with my 240 is that you need at least 768mb (I THINK) of vram to enable the Mercury Playback Engine is CS5.
    8
  • IzzyCraft
    Anonymous said:
    The article could barely spell it out more clearly.

    Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores.


    The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics.

    Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tesellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.

    They are a pure scumbag company.

    I always enjoy reading your hate it's like i'm reading Charlie except not quite as eloquent.

    So would you rather have a game with 0 physics/really shitty or one that nvidia proved to the devs for free? Should i point out all the games that utter lack AA all together funny how such a basic thing can be overlooked, things cost money. The article points out a large portion of the bad cpu utilization is due to no dev work to make it better that cuts both ways not just to nvidia.

    Nvidia is a publicly traded company any action that they make is made in the interest in profits anything else gets people fired.

    It's cool how the article is about phsyx but you bring up tessellation and then end it with scumbag company. Maybe i should bring up how ATI cuts texture quality.

    So why would nvidia who already is spending butt loads of money developing a game for another company cut down it's own bottom line? The stuff is all there it's just a matter of devs actually doing the leg work, which nvidia would be stupid to do themselves. With people like you they could cure cancer but still be satin, so you already are the case study to why they shouldn't do any real work do improving cpu utilization with their Phsyx, because i'm sure to you it would just fall on deaf ears.

    Granted even i don't quite get the gambit of cutting ATI support for physics but business is business, and like all things proprietary the end users always loose.
    -2
  • alyoshka
    I still have to try a Hybrid setup.... and really have my doubts with the mix and match thing..... 25750's in CF and a Nvidia for PhysX :) that should do the trick.......
    -5
  • blibba
jamesedgeuk2000: What about people with dedicated PPU's? I have 8800 GTX SLi and an Ageia Physx card where do I come into it?


    Looks like you might be better leaving physx to the 8800s...
    14
  • varfantomen
eyefinity: They are a pure scumbag company.


    Heh why should nvidia spend their time and money to help AMD? It's as much nonsense as saying Toyota should help Ford be cause that too would be for the greater good. Yeah damn those scumbags at Toyota!
    -9
  • ern88
    Screw Physix anways. I don't care much for t and It is only a hand full of games that Nvidia bribed to get it in.
    11
  • jgv115
    PhysX? Haven't heard of it...
    6
  • sudeshc
    I believe that Physix should be improved and should take advantage to all cores of CPU and the GPU-CPU load balancing should be more practical and performance based...........After we want to play better Games :D
    9
  • gti88
    Only 1 out of 100 gamers may have a dedicated "phisix" card.
    Other 99 don't bother at all and avoid phisix to get smoother gameplay.
    Is it really worth it in eyes of Jen-Hsung?
    13