
Nvidia Responds to AMD's Claim of PhysX Failure

Source: Tom's Hardware US | 80 comments

AMD accuses Nvidia of disabling multi-core CPU support in PhysX API -- Nvidia says it's untrue.

With PhysX being an Nvidia property, there are obvious reasons why AMD wouldn't be first in line to sing the praises of that specific proprietary physics technology.

Earlier this month, AMD worldwide developer relations manager Richard Huddy said in an interview with Bit-tech that Nvidia is squandering CPU resources.

"The other thing is that all these CPU cores we have are underutilised and I'm going to take another pop at Nvidia here. When they bought Ageia, they had a fairly respectable multicore implementation of PhysX. If you look at it now it basically runs predominantly on one, or at most, two cores," said Huddy. "It's the same thing as Intel's old compiler tricks that it used to do; Nvidia simply takes out all the multicore optimisations in PhysX. In fact, if coded well, the CPU can tackle most of the physics situations presented to it."

We asked Nvidia for its response to the allegations made by AMD. Nadeem Mohammad, PhysX director of product management, stepped up to the mic in hopes of setting the record straight:

I have been a member of the PhysX team, first with Ageia, and then with Nvidia, and I can honestly say that since the merger with Nvidia there have been no changes to the SDK code which purposely reduces the software performance of PhysX or its use of CPU multi-cores.

Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves. One of the best examples is 3DMark Vantage, which can use 12 threads while running in software-only PhysX. This can easily be tested by anyone with a multi-core CPU system and a PhysX-capable GeForce GPU. This level of multi-core support and programming methodology has not changed since day one. And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case.

PhysX is a cross platform solution. Our SDKs and tools are available for the Wii, PS3, Xbox 360, the PC and even the iPhone through one of our partners. We continue to invest substantial resources into improving PhysX support on ALL platforms--not just for those supporting GPU acceleration.

As is par for the course, this is yet another completely unsubstantiated accusation made by an employee of one of our competitors. I am writing here to address it directly and call it for what it is, completely false. Nvidia PhysX fully supports multi-core CPUs and multithreaded applications, period. Our developer tools allow developers to design their use of PhysX in PC games to take full advantage of multi-core CPUs and to fully use the multithreaded capabilities.
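
The crux of Mohammad's technical claim is that the PhysX SDK leaves thread control to the application developer rather than handling it inside the SDK. As a rough illustration of that division of responsibility (a minimal C++11 sketch, not the actual PhysX API; stepPhysicsIsland is a hypothetical stand-in for whatever simulate/fetch-results calls the SDK exposes), the game decides how many worker threads to create and what work each one receives:

// Minimal sketch of application-controlled threading, as described above.
// stepPhysicsIsland is a hypothetical placeholder, not a real PhysX call;
// the point is only that the application, not the SDK, owns the threads.
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for stepping one independent group of physics objects.
void stepPhysicsIsland(int islandId, float dt) {
    std::printf("island %d stepped by %.4f s\n", islandId, dt);
}

int main() {
    const float dt = 1.0f / 60.0f;       // one 60 Hz frame
    const int islands = 4;               // e.g. one island per CPU core
    std::vector<std::thread> workers;

    // The game explicitly spawns one worker per island ...
    for (int i = 0; i < islands; ++i)
        workers.emplace_back(stepPhysicsIsland, i, dt);

    // ... and waits for all physics work before rendering the frame.
    for (auto &w : workers)
        w.join();
    return 0;
}

Whether shipping games actually spread their physics work across cores like this is exactly what Huddy disputes; the sketch only shows where the responsibility sits under Nvidia's description of the SDK.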

Comments
  • randomizer, January 21, 2010 12:47 AM (+22)
    Nadeem Mohammad: And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case.


    Oh absolutely, nonsense indeed. :sarcastic:  In fact it's such utter nonsense that he won't even bother to provide evidence to substantiate any of his claims - which, funnily enough, is what he accused Richard Huddy of.
  • FoShizzleDizzle, January 21, 2010 12:54 AM (+29)
    Not to take sides here, as I own an Nvidia card fwiw. But I came to the same conclusion as Richard Huddy before ever knowing he made this statement. It struck me when toying around with PhysX on Batman Arkham Asylum.

    I disabled GPU PhysX and let the CPU handle the physics just to see how it performed. Strangely, my CPU usage barely increased at all and framerates suffered immensely as a result - the same thing reportedly occurs with ATI cards.

    The physics being calculated in this application are not particularly intensive from a visual standpoint, especially not when compared to, say, what GTA IV does (which relies solely on the CPU). They are just terribly optimized and, by my estimation, intentionally gimped when handled by the CPU.

    Anyone can connect the dots and understand why this is so. It's just stupid, because I bet a quad-core CPU, or even a triple-core, paired with say a measly 9800 GT could max out PhysX and the in-game settings if the CPU handled the PhysX without being gimped. But since it is gimped, owners of such a card pretty much cannot run PhysX.
  • demosthenes81, January 21, 2010 12:57 AM (+15)
    If game developers added true multicore support in the first place, I bet this would never even have come up. Even the newest games like Borderlands have bad multicore support. I know almost nobody with single-core CPUs these days; the devs need to step up.
  • ElMoIsEviL, January 21, 2010 12:59 AM (+25)
    I am of the same opinion as AMD here.
  • Honis, January 21, 2010 12:59 AM (+28)
    randomizer: Oh absolutely, nonsense indeed. In fact it's such utter nonsense that he won't even bother to provide evidence to substantiate any of his claims - which, funnily enough, is what he accused Richard Huddy of.

    I think Batman Arkham Asylum benchmarks are evidence enough that something fishy is going wrong in Nvidia's APIs.

    http://www.tomshardware.com/reviews/batman-arkham-asylum,2465-10.html
  • Murissokah, January 21, 2010 1:02 AM (-9)
    The response sounded quite well founded. Don't think Nvidia is to blame on this one.
  • porksmuggler, January 21, 2010 1:04 AM (+8)
    My first thought is PhysX has been a market failure since its Ageia days. Nvidia is just using this proprietary gimmick to hawk more GPUs. I was stunned when Nvidia bought Ageia, but I guess the price was right, and their in-house development was lagging. The list of games using PhysX is just sad, and the performance hit with PhysX enabled is rough. Makes you wonder how big of a carrot Nvidia has to dangle out there to get the developers to bite.
  • randomizer, January 21, 2010 1:04 AM (+2)
    Honis: I think Batman Arkham Asylum benchmarks are evidence enough that something fishy is going wrong in Nvidia's APIs. http://www.tomshardware.com/review [...] 65-10.html

    Oh I wasn't doubting that at all. My post was meant to have a sarcastic tone, but text doesn't convey sarcasm well. I'll have to fix it up.

    EDIT: A smilie makes all the difference :D 
  • mlopinto2k1, January 21, 2010 1:05 AM (0)
    randomizer: Oh absolutely, nonsense indeed. In fact it's such utter nonsense that he won't even bother to provide evidence to substantiate any of his claims - which, funnily enough, is what he accused Richard Huddy of.
    Funnily? :p 
  • randomizer, January 21, 2010 1:05 AM (+15)
    Murissokah: The response sounded quite well founded.

    Of course it did. It's PR.
  • randomizer, January 21, 2010 1:07 AM (+20)
    mlopinto2k1: Funnily?

    Yes, it's a dictionary word with basically the same meaning as "strangely", but has more of a "hehe, you fail" tone to it. ;) 
  • mlopinto2k1, January 21, 2010 1:08 AM (+12)
    I think PhysX is a bunch of bull anyway. Any game I have seen that utilizes it isn't anything special. Like, I mean.. the physics aspects implemented in the game, i.e. they wouldn't be missed. (Arkham Asylum is a good game.) They wouldn't need a GPU. It's just a money maker in my opinion. Garbage.
  • mlopinto2k1, January 21, 2010 1:10 AM (+5)
    randomizer: Yes, it's a dictionary word with basically the same meaning as "strangely", but has more of a "hehe, you fail" tone to it.
    I had no clue that was a real word! Haha.. well in that case... :) 
  • AMW1011, January 21, 2010 1:14 AM (+11)
    It is true; it has been proven many times, like FoShizzleDizzle explained.

    I own nVidia as well, but their anti-competitive acts are really starting to piss me off.

    Luckily DX11 will make PhysX completely useless anyway.
  • ptroen, January 21, 2010 1:23 AM (+21)
    As an amateur game developer I was intrigued by PhysX since it's a significantly cheaper route than Havok. However, I found that PhysX just has more problems than it's worth. For instance:
    1) PhysX makes heavy use of the PCI bus when in pure hardware mode. How fast can it really be if you're utilizing the PCI bus, which has a maximum bandwidth of 133 megabytes/second (or about 4 megs per frame; see the quick arithmetic below)?
    2) Nvidia has already been caught locking the competition out of Ageia physics.
    3) The PCI Express bus is described by Microsoft as SLOW and has to be shared with the graphics card.
    4) There is no direct HLSL interface to the physics (you have to use a C++ call to get around it).
    5) Bullet physics is free and offers cross-platform GPU-based physics.
    6) To write custom physics with Ageia you have to write an event handler that is invoked by the C++ API on a PER ACTOR/ENTITY basis. This can be a problem if you wish to have LOTS of entities/actors.

    So yeah, not too crazy about Ageia, and Havok is costly as well. Anyway, that's my 2 cents.
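
For reference, the per-frame figure in point 1 is simple division (a back-of-envelope sketch only; the ~133 MB/s peak of classic 32-bit/33 MHz PCI and the 30 fps target are assumed values, not measurements):

// Back-of-envelope check of the per-frame PCI transfer budget mentioned above.
// Assumes classic 32-bit/33 MHz PCI (~133 MB/s theoretical peak) and a 30 fps
// target; real sustained throughput would be lower, so the budget is optimistic.
#include <cstdio>

int main() {
    const double pci_peak_mb_per_s = 133.0;  // theoretical PCI peak bandwidth
    const double target_fps = 30.0;          // assumed frame rate
    std::printf("Per-frame transfer budget: ~%.1f MB\n",
                pci_peak_mb_per_s / target_fps);  // prints ~4.4 MB
    return 0;
}

At 60 fps the budget halves to roughly 2.2 MB per frame, which is the concern the comment is raising: very little physics state fits across a plain PCI card each frame.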
  • welshmousepk, January 21, 2010 1:31 AM (+13)
    'We continue to invest substantial resources into improving PhysX support on ALL platforms--not just for those supporting GPU acceleration.'

    this made me lol.

    If they are so intent on supporting multiple platforms, why is their primary platform (PC GPU acceleration) locked out in the presence of competitor hardware?
  • climber, January 21, 2010 1:35 AM (+2)
    Each corporation or business wants to drive its competitors out of the marketplace, but doesn't want to pay the price for this in anti-competitive practice lawsuits. Nvidia is no better than ATI/AMD (or A^2 = A Squared as I like to think of them), or Intel.
  • elno, January 21, 2010 1:36 AM (+9)
    "Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves."

    To me, this means that it is up to the game developers to optimize thread control for multi-core CPUs. It is not nVidia's fault that game developers choose to only spend time making PhysX work with the GPU and not optimize it for multi-core CPU use.

    Can AMD point to changes within the code showing that the performance of PhysX has deteriorated on multi-core CPUs if you compare the pre-Nvidia Ageia API versus the present-day Nvidia PhysX API?

    Then we'll know who is telling the truth. If there is no deterioration, then nVidia is not in the wrong. Why would they spend resources making PhysX work better on multi-core CPUs? That is just a dumb business decision unless they see the value of doing so. It may be that they should do that or risk PhysX being ditched as a widely used physics engine.
  • Anonymous, January 21, 2010 1:39 AM (+1)
    Any modern CPU can handle physics pretty well without slowdowns or affecting fps. All it takes is some multi-core optimizing. With octa-core CPUs coming, games are barely taking advantage of 2 cores; that's really, really sad, but good for GPU makers. GTA's graphics are badly optimized because it was ported from consoles. The best games never used PhysX in my book: Crysis, Stalker, MW1/2, NFS. Unreal 3 had PhysX in only 1 map, and the PhysX rain was really, really annoying during gameplay.
  • Anonymous, January 21, 2010 1:43 AM (0)
    Developers, wake up please. It is time to make use of the quad cores that have been on the market for 3 years! That's ridiculous; seriously, the Q6600's release date was January 7, 2007! God, developers, someone is paying them not to do so, to sell more GPUs?