Nvidia Responds to AMD's Claim of PhysX Failure
AMD accuses Nvidia of disabling multi-core CPU support in PhysX API -- Nvidia says it's untrue.
With PhysX being an Nvidia property, there are obvious reasons why AMD wouldn't be first in line to sing the praises of that specific proprietary physics technology.
Earlier this month, AMD worldwide developer relations manager Richard Huddy said in an interview with Bit-tech that Nvidia is squandering CPU resources.
"The other thing is that all these CPU cores we have are underutilised and I'm going to take another pop at Nvidia here. When they bought Ageia, they had a fairly respectable multicore implementation of PhysX. If you look at it now it basically runs predominantly on one, or at most, two cores," said Huddy. "It's the same thing as Intel's old compiler tricks that it used to do; Nvidia simply takes out all the multicore optimisations in PhysX. In fact, if coded well, the CPU can tackle most of the physics situations presented to it."
We asked Nvidia for its response to the allegations made by AMD, and Nadeem Mohammad, PhysX director of product management, stepped up to the mic in hopes of setting the record straight:
I have been a member of the PhysX team, first with Ageia and then with Nvidia, and I can honestly say that since the merger with Nvidia there have been no changes to the SDK code that purposely reduce the software performance of PhysX or its use of CPU multi-cores.
Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves. One of the best examples is 3DMarkVantage which can use 12 threads while running in software-only PhysX. This can easily be tested by anyone with a multi-core CPU system and a PhysX-capable GeForce GPU. This level of multi-core support and programming methodology has not changed since day one. And to anticipate another ridiculous claim, it would be nonsense to say we “tuned” PhysX multi-core support for this case.
PhysX is a cross platform solution. Our SDKs and tools are available for the Wii, PS3, Xbox 360, the PC and even the iPhone through one of our partners. We continue to invest substantial resources into improving PhysX support on ALL platforms--not just for those supporting GPU acceleration.
As is par for the course, this is yet another completely unsubstantiated accusation made by an employee of one of our competitors. I am writing here to address it directly and call it for what it is, completely false. Nvidia PhysX fully supports multi-core CPUs and multithreaded applications, period. Our developer tools allow developers to design their use of PhysX in PC games to take full advantage of multi-core CPUs and to fully use the multithreaded capabilities.
I disabled GPU PhysX and let the CPU handle the physics just to see how it performed. Strangely, my CPU usage barely increased at all and framerates suffered immensely as a result; the same thing reportedly occurs with ATI cards.
The physics being calculated in this application are not particularly intensive from a visual standpoint, especially not compared to, say, what GTA IV does (which relies solely on the CPU). They are just terribly optimized and, by my estimation, intentionally gimped when handled by the CPU.
Anyone can connect the dots and understand why this is so. It's just stupid, because I bet a quad-core CPU, or even a triple-core paired with, say, a measly 9800 GT, could max out PhysX and the in-game settings if the CPU handled the PhysX without being gimped. But since it is gimped, owners of such a card pretty much cannot run PhysX.
I think the Batman: Arkham Asylum benchmarks are evidence enough that something fishy is going on in Nvidia's APIs.
http://www.tomshardware.com/reviews/batman-arkham-asylum,2465-10.html
Oh absolutely, nonsense indeed.
Oh I wasn't doubting that at all. My post was meant to have a sarcastic tone, but text doesn't convey sarcasm well. I'll have to fix it up.
EDIT: A smilie makes all the difference
Of course it did. It's PR.
Yes, it's a dictionary word with basically the same meaning as "strangely", but has more of a "hehe, you fail" tone to it.
I own nVidia as well, but their anti-competitive acts are really starting to piss me off.
Luckily DX11 will make PhysX completely useless anyway.
So yeah, not too crazy about Ageia, and Havok is costly as well. Anyway, that's my 2 cents.
this made me lol.
If they are so intent on supporting multiple platforms, why is their primary platform (PC GPU acceleration) locked out in the presence of competitor hardware?
To me, this means it is up to the game developers to optimize thread control for multi-core CPUs. It is not nVidia's fault that game developers choose to spend their time only making PhysX work on the GPU rather than optimizing it for multi-core CPU use.
Can AMD point to changes within the code showing that PhysX performance on multi-core CPUs has deteriorated if you compare the pre-Nvidia Ageia API with the present-day Nvidia PhysX API?
Then we'd know who is telling the truth. If there is no deterioration, then nVidia is not in the wrong. Why would they spend resources making PhysX work better on multi-core CPUs? That would be a dumb business decision unless they saw value in doing so. Then again, maybe they should, or risk PhysX being ditched as a widely used physics engine.