
PhysX or other physics engines?

Last response: in Graphics & Displays
February 17, 2010 9:41:21 AM

hello.

I'm sorry if some dumb English grammar bothers you.

As far as I know, "PhysX" is a "physics engine".
It is made by a great company (NVIDIA).
I think it does its job on the GPU (not the CPU), and it seems it can be more efficient than others that work on the CPU.
At least it doesn't waste the CPU's time simulating the physics rules; the GPU, which has more cores, can do it better, and
the CPU can do other things.
Do you think programmers will still want to make their own "physics engine"?
Will they stop making "physics engines" and use "PhysX" instead?
I mean, when there is a FREE "physics engine" that works on the GPU, is made by a great company, and works fine,
it seems crazy for programmers to keep making their own "physics engine".
Can you explain it to me?
Thanks.

Good luck.


February 17, 2010 10:59:53 AM

this is so starting a flame war...

PhysX was an open API developed by Ageia for the sole purpose of creating more realistic physics effects in games. Because simulating physics on multiple objects at all times is a massively parallel task by nature, they created a PPU [Physics Processing Unit] to do the work instead.
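To see why this workload is "massively parallel", here's a minimal sketch of my own (not the PhysX API): each rigid body's next state depends only on its own state, so every update could run on a separate core or GPU thread.

```python
# A minimal sketch (not the PhysX API) of why rigid-body updates are
# massively parallel: each object's next state depends only on its own
# state, so the updates are independent of one another.
from dataclasses import dataclass

GRAVITY = -9.81  # m/s^2
DT = 1.0 / 60.0  # one 60 Hz physics tick

@dataclass
class Body:
    y: float   # height in metres
    vy: float  # vertical velocity in m/s

def step(body: Body) -> Body:
    """Semi-implicit Euler update for one body -- independent of all others."""
    vy = body.vy + GRAVITY * DT
    return Body(y=body.y + vy * DT, vy=vy)

# On a CPU this is a serial loop; a PPU/GPU runs one thread per body.
bodies = [Body(y=10.0, vy=0.0) for _ in range(1000)]
bodies = [step(b) for b in bodies]
```

Because no `step` call reads another body's data, a PPU or GPU can execute all thousand updates simultaneously instead of one after another.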

Ageia was bought by NVIDIA, which split the API into two parts [note: this might have occurred earlier]: a general PhysX which works on the CPU, and the advanced PhysX features, which run on the GPU. As of now, NVIDIA only supports PhysX on its GPUs [which makes sense; NVIDIA should not be on the hook for other companies' implementations of the API].

For whatever reason, ATI has been unwilling/unable to port the API to its cards, so NVIDIA GPUs are currently the only ones that can implement GPU PhysX effects. As a result, that part of the API is not used much, and then only with very limited [and hard-to-compute] effects. The standard PhysX functions can still run on the CPU, though, and PhysX has actually overtaken Havok as the physics implementation of choice for developers.

And yes, at this point, it's crazy that each individual developer needs to create their own "physics" implementation on a per-game basis. Even OpenCL won't solve that problem, as it only ALLOWS physics to be offloaded to the GPU but doesn't give any specific tools/API to make this happen, so it will have zero impact on the quality of effects in games. At some point, just like Glide/OpenGL/DirectX, a universal API is needed to improve in-game physics effects. Imagine if no 3D API had ever been created, and where we would be if each game needed its own unique 3D engine...
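A hypothetical sketch of what such a universal API would buy you (every name below is invented for illustration; no such standard exists): game code targets one interface, and vendors supply interchangeable backends, the same way Direct3D/OpenGL sit between games and GPU drivers.

```python
# Hypothetical illustration of a vendor-neutral physics interface.
# All names here are invented for this sketch; no such standard exists.
from abc import ABC, abstractmethod

class PhysicsBackend(ABC):
    @abstractmethod
    def simulate(self, dt: float) -> str:
        """Advance the simulation by dt seconds and report what ran."""

class CpuBackend(PhysicsBackend):
    def simulate(self, dt: float) -> str:
        return f"CPU step {dt:.4f}s"   # serial fallback path

class GpuBackend(PhysicsBackend):
    def simulate(self, dt: float) -> str:
        return f"GPU step {dt:.4f}s"   # hardware-accelerated path

def game_loop(physics: PhysicsBackend, ticks: int) -> list[str]:
    # The game targets the interface; it never knows whose hardware runs it.
    return [physics.simulate(1.0 / 60.0) for _ in range(ticks)]

log = game_loop(GpuBackend(), ticks=2)
```

The point is the shape, not the names: once the interface is fixed, swapping `GpuBackend` for `CpuBackend` (or any vendor's implementation) changes nothing in the game code.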
February 17, 2010 11:07:44 AM

To use Nvidia's PhysX you have to pay for a licence, which you may or may not be granted (especially if your company's name has a T, an I and an A in it [:mousemonkey] ), and PhysX does use CPU cycles even if there is an Nvidia card in the system, so on the whole it's pants.
February 17, 2010 11:08:38 AM

Only the GPU version works well, and even then it incurs a big performance hit. The CPU PhysX runs poorly (which makes sense: why should NVidia support something they don't sell, CPUs?). Thus, for most game developers, if they want decent physics and decent performance, their only choices are to make their own or use a very limited PhysX.
February 17, 2010 2:22:13 PM

Quote:
Thus, for most game developers, if they want decent physics and decent performance, their only choices are to make their own or use a very limited PhysX

I agree. If all of them use the same APIs, all games will become boring and repetitive, and
they will have nothing new to show users who are looking for new features in games.

Quote:
And yes, at this point, it's crazy that each individual developer needs to create their own "physics" implementation on a per-game basis. Even OpenCL won't solve that problem, as it only ALLOWS physics to be offloaded to the GPU but doesn't give any specific tools/API to make this happen, so it will have zero impact on the quality of effects in games. At some point, just like Glide/OpenGL/DirectX, a universal API is needed to improve in-game physics effects. Imagine if no 3D API had ever been created, and where we would be if each game needed its own unique 3D engine...

Good reasons.
But compare PhysX with the physics engines made by great companies as commercial software.
Will developers drop that commercial software and use PhysX?
Because those physics engines use CPU time (or I should say waste CPU time), while PhysX uses GPU time, and the GPU can still do its main job without even saying "OUCH! That's too much for me".

And I have a BIG question:
(as far as I know, right now) why do GPUs have more cores than CPUs?
It seems the GPU has become GOOD hardware for heavy algorithms like physics?
I think CPUs should have more cores, and this would increase the efficiency of games.
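A rough illustration of the difference (my own sketch, not from either vendor): CPU cores are few but fast and general-purpose, while GPU cores are many but simple, so they shine when the same operation applies to every element independently, which is exactly the shape of most physics and graphics math.

```python
# Sketch: the same position update written two ways. The element-wise
# form maps naturally onto thousands of GPU cores, because no element
# depends on any other.
positions = [0.0, 1.0, 2.0, 3.0]
velocities = [1.0, 1.0, -1.0, 0.5]
dt = 0.1

# "CPU-style": one core walks the list in order.
serial = []
for p, v in zip(positions, velocities):
    serial.append(p + v * dt)

# "GPU-style": conceptually, one core per element, all at once.
# (The comprehension here stands in for a GPU kernel launch.)
parallel = [p + v * dt for p, v in zip(positions, velocities)]

assert serial == parallel  # same math, different execution shape
```

A CPU with a handful of cores has to chew through such a list a few elements at a time; a GPU with hundreds of cores can assign one element to each, which is why it suits this kind of workload even though each individual GPU core is weaker.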

Quote:
this is so starting a flame war...

I can see Nvidia didn't stop at PhysX; it also created SceniX™, CompleX™, OptiX™, and the Cg Toolkit.
I have no idea what these are, but ATI is going to lose if it just sits back and does nothing!!!

I'm sure these words prove I've missed some information,
but please guide me.
February 17, 2010 2:30:11 PM

Mousemonkey said:
To use Nvidia's PhysX you have to pay for a licence, which you may or may not be granted (especially if your company's name has a T, an I and an A in it [:mousemonkey] ), and PhysX does use CPU cycles even if there is an Nvidia card in the system, so on the whole it's pants.


I thought it was free and that it was just a thing to persuade developers to use Nvidia products (GeForce).
February 17, 2010 3:19:59 PM

If, say, ATi wanted to use PhysX, not only would they (more than likely) have to cough up a not insubstantial sum of money, but they would also have to let Nvidia see their GPU designs (or IP) so that Nvidia could make sure that things work as they should; basically, they would have to give their secrets to their competitor.
February 17, 2010 3:34:39 PM

Mousemonkey said:
If, say, ATi wanted to use PhysX, not only would they (more than likely) have to cough up a not insubstantial sum of money, but they would also have to let Nvidia see their GPU designs (or IP) so that Nvidia could make sure that things work as they should; basically, they would have to give their secrets to their competitor.


Wrong. NVIDIA is responsible for ensuring the API is correct, but individual implementations are left up to the companies that implement the API, not NVIDIA.

Quote:
I agree. If all of them use the same APIs, all games will become boring and repetitive, and
they will have nothing new to show users who are looking for new features in games.


Then let's get rid of DirectX and OpenGL, and have every developer create their own 3D rendering engine from scratch. APIs provide a common interface that allows for easy code creation, portability, and room for growth (by extending the API). Without an API, there would be no mechanism to bring change to the market as a whole, and we'd be stuck with the same implementation we've had for the past decade.

Quote:
To use Nvidia's PhysX you have to pay for a licence, which you may or may not be granted (especially if your company's name has a T, an I and an A in it [:mousemonkey] ), and PhysX does use CPU cycles even if there is an Nvidia card in the system, so on the whole it's pants.


To use DirectX or Havok, or any proprietary API, you also have to pay for a license, so that part of the argument fails.

Also, PhysX has to use CPU cycles at some point; more advanced physics effects mean more inputs on every affected object. While the PPU/GPU is responsible for calculating how those inputs affect the movement/shape of the object, the CPU still needs to gather the data and send it for processing. This is no different from how the CPU is involved in GPU rendering: it gathers the data, and the GPU computes the result.
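That division of labour can be sketched like this (my own illustration, not NVIDIA's actual driver code; `accelerate` is a stand-in for the GPU/PPU, not a real API call): the CPU gathers the frame's inputs into one batch, and only then does the accelerator do the per-object math.

```python
# Sketch of the CPU-side gather/dispatch pattern described above.
# 'accelerate' stands in for the GPU/PPU; in reality this would be a
# driver call, not a Python function.

def gather_inputs(objects):
    # CPU work: walk the game state and collect forces acting on each object.
    return [(obj["mass"], obj["force"]) for obj in objects]

def accelerate(batch, dt):
    # Stand-in for the GPU: one independent a = F/m update per object.
    return [force / mass * dt for mass, force in batch]

objects = [{"mass": 2.0, "force": 10.0}, {"mass": 1.0, "force": -4.0}]
batch = gather_inputs(objects)       # CPU cycles spent here...
delta_v = accelerate(batch, dt=0.5)  # ...before the GPU ever sees the data
```

So even "GPU PhysX" costs CPU cycles every frame: the `gather_inputs` half never goes away, and it grows with the number of simulated objects.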
February 17, 2010 3:39:02 PM

Mousemonkey said:
If, say, ATi wanted to use PhysX, not only would they (more than likely) have to cough up a not insubstantial sum of money, but they would also have to let Nvidia see their GPU designs (or IP) so that Nvidia could make sure that things work as they should; basically, they would have to give their secrets to their competitor.

Well, I don't think I meant anything like that. :o 
I meant ATi should make their own products to beat Nvidia.

February 17, 2010 3:48:00 PM

With DX11, PhysX is basically a dying thing; it won't last much longer, as there are several alternatives which not only ATI will use but also, though not mentioned here (and I'm not sure why), Intel, besides nVidia.
So, add Intel into the "not welcome" arena as well.
February 17, 2010 3:53:59 PM

Enabling PhysX usually slashes frame rates by half.
Doing so on an ATI card activates a deliberate frame-rate cap which leaves your frame rate stuck at 14. Try it: you'll suddenly find a game which should be sucking 100% on three cores go down to a mere 60% on only one core.
February 17, 2010 4:10:48 PM

http://www.youtube.com/watch?v=gLgb9AdnaBI

This is the sole reason, besides Ageia, why Nvidia moved into the GPU/CPU physics arena. I have yet to see any released software for us home and office users for physics on ATI cards. They build the demos and run them on the hardware, yet they NEVER see the light of day, so WTF ATI!?! We ATI users should already have this, but nope, Nvidia has the monopoly on the GPU-based physics market. However, since I enjoy using both ATI and Nvidia, I run a 9800GT 1GB SLI setup on my main rig while having an 8800GTX on the side. I bought a 3870 last week for my collection so I can upgrade from my X1900XT, which has been modded. I am a collector, so don't hate on me; I even have Trident cards from the mid-'90s. My MGA200 (Matrox) turns 12 this year and is still going strong.
February 17, 2010 4:15:59 PM

Quote:
Then let's get rid of DirectX and OpenGL, and have every developer create their own 3D rendering engine from scratch. APIs provide a common interface that allows for easy code creation, portability, and room for growth (by extending the API). Without an API, there would be no mechanism to bring change to the market as a whole, and we'd be stuck with the same implementation we've had for the past decade.


This is not what I said. :ouch: 
Certainly, DirectX and OpenGL provide APIs that made using graphics cards so easy that, right now, developers don't need to worry about the hardware.
I hope you get the rest of the idea; I can't explain it more than this because of my poor English.
But physics engines are the thing that allows developers to put in their new ideas to make wonderful games and earn more money.

If you say something like that, well, why don't you look at this subject through this window?

  • Let Nvidia and ATi make the story, and then it can help developers stop wasting their time on that.
  • Let Nvidia and ATi make the game engine, so developers don't need to worry about this anymore.
  • Let Nvidia and ATi make APIs for everything in a 3D game, and then developers don't need to waste their time; they just need to collect money without doing anything.
  • So if having APIs for everything helps developers do their jobs, then what should developers do now?

    The GPU is hardware, and I think there should be an easy way to get the benefits of this processor, like the way DirectX and OpenGL do it now; then developers could implement their own physics engines with those tools, and that could reduce the time wasted on the CPU.