Nvidia Responds to AMD's Claim of PhysX Failure

With PhysX being an Nvidia property, there are obvious reasons why AMD wouldn't be first in line to sing the praises of that specific proprietary physics technology.

Earlier this month, AMD's worldwide developer relations manager Richard Huddy said in an interview with Bit-tech that Nvidia is squandering CPU resources.

"The other thing is that all these CPU cores we have are underutilised and I'm going to take another pop at Nvidia here. When they bought Ageia, they had a fairly respectable multicore implementation of PhysX. If you look at it now it basically runs predominantly on one, or at most, two cores," said Huddy. "It's the same thing as Intel's old compiler tricks that it used to do; Nvidia simply takes out all the multicore optimisations in PhysX. In fact, if coded well, the CPU can tackle most of the physics situations presented to it."

We asked Nvidia for its response to the allegations made by AMD. Nadeem Mohammad, PhysX director of product management, stepped up to the mic in hopes of setting the record straight:

I have been a member of the PhysX team, first with Ageia, and then with Nvidia, and I can honestly say that since the merger with Nvidia there have been no changes to the SDK code which purposely reduce the software performance of PhysX or its use of CPU multi-cores.

Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves. One of the best examples is 3DMark Vantage, which can use 12 threads while running in software-only PhysX. This can easily be tested by anyone with a multi-core CPU system and a PhysX-capable GeForce GPU. This level of multi-core support and programming methodology has not changed since day one. And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case.

PhysX is a cross-platform solution. Our SDKs and tools are available for the Wii, PS3, Xbox 360, the PC and even the iPhone through one of our partners. We continue to invest substantial resources into improving PhysX support on ALL platforms--not just those supporting GPU acceleration.

As is par for the course, this is yet another completely unsubstantiated accusation made by an employee of one of our competitors. I am writing here to address it directly and call it what it is: completely false. Nvidia PhysX fully supports multi-core CPUs and multithreaded applications, period. Our developer tools allow developers to design their use of PhysX in PC games to take full advantage of multi-core CPUs and their multithreaded capabilities.
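
To make that threading model concrete, the sketch below shows what "thread control done explicitly by the application developer" can look like, under our own assumptions: stepScene() is a hypothetical stand-in for an SDK's blocking per-frame simulation call, not a real PhysX entry point, and the surrounding structure is invented for illustration.

```cpp
// A minimal sketch of application-driven threading, under our own
// assumptions: stepScene() is a hypothetical stand-in for an SDK's
// blocking per-frame simulation call, NOT a real PhysX function.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

static void stepScene(float dt) {
    (void)dt;
    // Placeholder for the SDK's simulation step; here it just burns time.
    std::this_thread::sleep_for(std::chrono::milliseconds(2));
}

int main() {
    std::atomic<bool> running{true};

    // The application chooses to drive physics from a dedicated thread...
    std::thread physicsThread([&running] {
        while (running.load()) {
            stepScene(1.0f / 60.0f);
        }
    });

    // ...leaving the main thread free for rendering, input, AI, and so on.
    for (int frame = 0; frame < 10; ++frame) {
        std::printf("render frame %d\n", frame);
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    running.store(false);
    physicsThread.join();
    return 0;
}
```

Under such a design, a game that leaves all its physics on the render thread will look single-threaded no matter what the SDK itself supports, which is precisely the ambiguity at the heart of this dispute.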

  • Top Comments
  • FoShizzleDizzle
    Not to take sides here, as I own an Nvidia card, fwiw. But I came to the same conclusion as Richard Huddy before ever knowing he made this statement. It struck me when toying around with PhysX on Batman: Arkham Asylum.

    I disabled card PhysX and let the CPU handle it just to see how it performed. Strangely, my CPU usage barely increased at all and framerates suffered immensely as a result - the same thing reportedly occurs with ATI cards.

    The physics being calculated in this application are not particularly intensive from a visual standpoint, especially not compared to, say, what GTA IV does (which relies solely on the CPU). They are just terribly optimized and, by my estimation, intentionally gimped when handled by the CPU.

    Anyone can connect the dots and understand why this is so. It's just stupid, because I bet a quad-core CPU, or even a triple-core paired with, say, a measly 9800 GT, could max out PhysX and the in-game settings if the CPU handled the PhysX without being gimped. But since it is gimped, owners of such a card pretty much cannot run PhysX.
  • Honis
    Quoting randomizer: "Oh absolutely, nonsense indeed. In fact it's such utter nonsense that he won't even bother to provide evidence to substantiate any of his claims - which, funnily enough, is what he accused Richard Huddy of."

    I think the Batman: Arkham Asylum benchmarks are evidence enough that something fishy is going on in Nvidia's APIs.

    http://www.tomshardware.com/reviews/batman-arkham-asylum,2465-10.html
  • ElMoIsEviL
    I am of the same opinion as AMD here.
  • Other Comments
  • randomizer
    Quoting Nadeem Mohammad: "And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case."

    Oh absolutely, nonsense indeed. :sarcastic: In fact it's such utter nonsense that he won't even bother to provide evidence to substantiate any of his claims - which, funnily enough, is what he accused Richard Huddy of.
  • demosthenes81
    If game developers added true multicore support in the first place, I bet this would never even have come up. Even the newest games, like Borderlands, have bad multicore support. I know almost nobody with single-core CPUs these days; the devs need to step up.