PhysX on ATI

August 23, 2009 9:07:21 PM

I just installed X-Men Wolverine and I keep seeing the Nvidia PhysX engine, but of course I use an ATI Radeon card and AMD. So what should I do: delete it or update it for the game?


August 23, 2009 9:14:41 PM

ATI cards do not support hardware PhysX, so having it on, off, or uninstalled shouldn't change anything.
August 23, 2009 9:26:05 PM

Or you could just get an Nvidia card and enjoy the game The Way It's Meant To Be Played. [:mousemonkey]
August 23, 2009 9:30:56 PM

Aw c'mon, I was subtle and everything.
August 23, 2009 9:45:46 PM

PhysX is a non-starter; Havok is where it's at.
August 24, 2009 12:05:42 PM

Please. Havok is too limited, and you're tied to that particular engine. At least PhysX is an outside API, free of engine restraints...
August 24, 2009 4:30:33 PM

Not really; it's just that no developer has actually used PhysX to REPLACE an existing engine. Right now, PhysX is just an add-in layer, which limits its potential.
August 24, 2009 5:04:32 PM

And they never will use it to replace anything. That would require every company to agree to use it (ATI, Nvidia, Intel), because no dev is going to lose that kind of money over Intel and ATI customers not being able to play a game.

And it would have to be a PC-only game, or else they would have to update the consoles as well.
August 24, 2009 5:09:28 PM

flykbng said:
I just installed X-Men Wolverine and I keep seeing the Nvidia PhysX engine, but of course I use an ATI Radeon card and AMD. So what should I do: delete it or update it for the game?


I run an ATI/AMD card and I play games which use the PhysX engine; I just let it install and update as it wants. The way I see it is this: if the game checks for it and it's not there, it could cause issues, so I just let it do what it wants. I don't have any problems. I have played a few games now that say they are better with PhysX, but if you didn't have it in the first place you won't miss it, and the gameplay and quality don't suffer, in my opinion.
(And before anyone asks: yes, I have seen the games I played both with and without PhysX.)
Mactronix
August 24, 2009 6:44:05 PM

darkvine said:
And they never will use it to replace anything. That would require every company to agree to use it (ATI, Nvidia, Intel), because no dev is going to lose that kind of money over Intel and ATI customers not being able to play a game.

And it would have to be a PC-only game, or else they would have to update the consoles as well.


PhysX has already been ported to consoles; the only one not adopting it is ATI on the PC platform. And again, NVIDIA offered to help in the porting process... I fully expect to see more console ports including a PhysX option, and more console exclusives using the PhysX engine.
August 24, 2009 7:33:00 PM

It's OK, Gamer, why don't you tell us again where ATI put the firecracker. You realise that by your logic Intel must also be a huge player in blocking PhysX, right? Otherwise, why hasn't Nvidia released a PhysX for PCs that uses the CPU, à la the consoles? But no, no, I'm sure Nvidia is trying to give away the API to all interested parties with absolutely no strings attached... :\

I'll care about GPU-accelerated physics when someone can show me how much faster it is, apples to apples, than the same thing on a CPU. Obviously we are not going to see physics on the GPU that matters to gameplay until someone releases a physics API that is truly open (you know, the no-strings-attached kind of thing) and works effectively (though perhaps not as fast) on both a CPU and a GPU.

Besides that, though, by the time physics in games is so advanced as to require the GPU over current CPU acceleration, CPUs will be parallel enough that it won't matter anyway.
August 24, 2009 7:47:01 PM

Quote:
PhysX is for CPUs. It has always been for CPUs.


Aye, I know. My point was not that it wasn't, but that I have never been able to tell the difference between accelerating it through a GPU and through a relatively high-end CPU. I have always been of the belief that current physics works fine on a CPU; by the time it doesn't, CPUs will be radically different anyway. Though perhaps I am underestimating how much parallelism physics requires, should it actually be implemented correctly.

The thing that bothers me is how some games may totally remove certain effects without hardware acceleration enabled. It can surely work on a CPU. I'd really like to see how much of a difference hardware vs. software acceleration makes to my FPS in an equal comparison. I would think it would be a far better advertisement of how awesome the game was if one could have the option of leaving the effects intact but see the FPS drop a dozen or more points.
August 24, 2009 7:59:36 PM

Aye, that is what I seem to see going on, and what really bothers me.

Instead of the powers that be saying "Here are the physics of the game; we used PhysX to make them, and you can accelerate them in hardware if you want," we get "Here are the physics for the game; you can accelerate extra features in the hardware if you want."

I am firmly of the opinion that if I were able to run all of the effects through my CPU, instead of having the "accelerated" effects simply turned off at a driver/dev level, I would not see more than a few FPS difference.

If it really made a difference, why wouldn't Nvidia and the devs that use PhysX be jumping at the chance to show everyone how crappy the FPS is without PhysX GPU acceleration on? If they could get a game out the door that had "enable advanced physics" and "enable hardware acceleration with the Nvidia GPU" as separate options, we could all see first-hand how crippling physics on the CPU can be... but I have yet to see an example of that. Please correct me if there are examples of this.
August 24, 2009 8:05:27 PM

Leave it alone. The game will run fine. Honestly, I'm sure you will be too busy dismembering baddies to notice the lack of hardware PhysX acceleration.
August 24, 2009 8:16:32 PM

Not 100% sure, but I think the Batman demo gave the option of using software PhysX. If it does, I will benchmark it and see.

Mactronix
August 24, 2009 8:51:12 PM

Well, it's not got a software option; it's just turn-it-on-if-you-want (Normal or High). It does warn of a performance hit, which you get, but then that's down to drivers, I think, as daedalus685 said.
It's not even like the effects are physics-based, so calling them PhysX effects is a bit rich really. Things like dust when you hit people or they hit the floor are not what I would call PhysX, but it seems that is what they are doing.

Mactronix
August 24, 2009 9:23:47 PM

daedalus685 said:
Aye, that is what I seem to see going on, and what really bothers me.

Instead of the powers that be saying "Here are the physics of the game; we used PhysX to make them, and you can accelerate them in hardware if you want," we get "Here are the physics for the game; you can accelerate extra features in the hardware if you want."

I am firmly of the opinion that if I were able to run all of the effects through my CPU, instead of having the "accelerated" effects simply turned off at a driver/dev level, I would not see more than a few FPS difference.

If it really made a difference, why wouldn't Nvidia and the devs that use PhysX be jumping at the chance to show everyone how crappy the FPS is without PhysX GPU acceleration on? If they could get a game out the door that had "enable advanced physics" and "enable hardware acceleration with the Nvidia GPU" as separate options, we could all see first-hand how crippling physics on the CPU can be... but I have yet to see an example of that. Please correct me if there are examples of this.


I totally agree with this; I wouldn't even be surprised if it showed no decrease in FPS at all if done right. Seeing as most games only utilize 2 threads rather than the 4-8 a lot of people have today, it would be much easier to write an application (without it being integrated into a game, so it stays universal) that can use those idling cores and basically "emulate physics".

What I'm trying to explain here could just be impossible, though...
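Roughly what I mean, as a bare-bones C++ sketch (every name here is made up for illustration; this isn't any real engine or PhysX API): the physics step just lives on its own worker thread, so it ends up on whichever core the game isn't already using, while the main thread keeps rendering.

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct World { /* positions, velocities, etc. */ };

void stepPhysics(World& w, float dt) { /* integrate the simulation by dt */ }
void renderFrame(const World& w)     { /* draw the latest state */ }

int main() {
    World world;
    std::mutex worldMutex;             // a real engine would double-buffer instead of locking
    std::atomic<bool> running{true};

    // Physics worker: ticks at roughly 60 Hz on whatever spare core the OS gives it.
    std::thread physicsThread([&] {
        const float dt = 1.0f / 60.0f;
        while (running.load()) {
            {
                std::lock_guard<std::mutex> lock(worldMutex);
                stepPhysics(world, dt);
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
    });

    // Main/render loop carries on independently on the "game" core.
    for (int frame = 0; frame < 1000; ++frame) {
        std::lock_guard<std::mutex> lock(worldMutex);
        renderFrame(world);
    }

    running.store(false);
    physicsThread.join();
}

Whether that actually buys anything depends on how much the physics and the game fight over memory bandwidth, of course.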
August 25, 2009 2:01:39 AM

Hell freezing over? That is less than I think it will take. That game had better be the best FPS ever made, and not the "more of the same but a bit better" that HL2:EP2 felt like.
August 25, 2009 11:54:25 AM

The reason current physics implementations work is because they are limited in scope. If the full PhysX API is used as the base physics engine, the CPU will simply not be able to keep up. Using your logic, CPUs would be just fine for rendering too...
August 25, 2009 1:36:20 PM

gamerk316 said:
The reason current physics implementations work is because they are limited in scope. If the full PhysX API is used as the base physics engine, the CPU will simply not be able to keep up. Using your logic, CPUs would be just fine for rendering too...


I have no doubt that physics computation would benefit greatly from parallel calculation. However, I do not see this happening any time soon. For physics to matter in a game it has to be the baseline of the game; otherwise it can't have any impact on the gameplay. Thus it requires the minimum specs to jump greatly. It also requires either the vast majority to have a GPU capable of hardware acceleration, or the CPU to be able to demonstrate the same physics, though perhaps more slowly. We can already use the CPU to do the physics, but for whatever reason Nvidia and the like choose to only allow the advanced effects while hardware acceleration is on.

Additionally, I have never seen any proof that the CPU cannot keep up. This was my point. If the CPU chugs along at 5 FPS because of PhysX, yet the same effects on the GPU can be run at 40 FPS, I really would like to know. Does running the physics on the CPU simply increase whatever bottleneck you might have, or is the CPU fundamentally flawed at the computation?

Rendering is different. With rendering you get what amounts to ridiculously simple calculations done a few thousand (million) times, over and over. Thus it would be silly to have a complicated few-core computer calculate it. With physics, I can envision approximations that may be ridiculously simple to compute, but once you add more than two particles, more than one force, and so on, the complexity increases exponentially. It couldn't be done effectively on today's GPUs, at least I don't think it could. Once the complexity, among other things, is there for a GPU, I really feel that a CPU will have changed enough that the distinction of "what is better" will be lost.

Something as simple as a cloud of particles, each with a position and velocity, would obviously be best on a massive number of simple cores. But what if we apply charge to these particles? Even adding something as simple as Newtonian gravity would increase the complexity impressively. At some point the API would not be able to run well on parallel cores. We need something with the freedom to run complex tasks on a fast, complicated core, and the simple approximations of particle movement on a GPU-like core; something that has the freedom to use whatever resources are there, without totally hogging either. Frankly, we need more powerful computers, more dedicated software folks, and the bottom to entirely drop off the average Joe's computer. I can't see physics in a game really mattering to the gameplay until the minimum specs of a game call for an advanced multi-GPU configuration, a very fast quad-core CPU, and loads of RAM on a 64-bit OS.

I don't want us to fight about which acceleration is better, CPU or GPU. I want someone to make a game using an advanced physics API and show everyone how much it changes games. Perhaps even show everyone how useless our CPUs are at computing particle approximations of solids. People seem to forget that CPUs, and the OS as well, are advancing. By the end of next year we will have a CPU with 8 physical cores, 16 effective. That is approaching the number of cores an Ageia card had. It would not surprise me in the least if the lines between GPU and CPU in fact start to grey.
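Just to make the gravity point concrete, here is a toy C++ sketch (types invented purely for illustration, no engine or PhysX API implied): pushing free particles around is one pass over n of them, but pairwise Newtonian gravity is n-squared interactions every single step, which is where the cost explodes.

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Particle { Vec3 pos, vel; float mass = 1.0f; };

// Free-flying particles: one pass, O(n) per step -- trivially parallel.
void integrate(std::vector<Particle>& ps, float dt) {
    for (auto& p : ps) {
        p.pos.x += p.vel.x * dt;
        p.pos.y += p.vel.y * dt;
        p.pos.z += p.vel.z * dt;
    }
}

// Pairwise Newtonian gravity: every particle pulls on every other one,
// so each step costs O(n^2) -- 1,000 particles is ~1,000,000 interactions.
void applyGravity(std::vector<Particle>& ps, float dt, float G = 6.674e-11f) {
    for (std::size_t i = 0; i < ps.size(); ++i) {
        for (std::size_t j = 0; j < ps.size(); ++j) {
            if (i == j) continue;
            Vec3 d{ps[j].pos.x - ps[i].pos.x,
                   ps[j].pos.y - ps[i].pos.y,
                   ps[j].pos.z - ps[i].pos.z};
            float r2   = d.x * d.x + d.y * d.y + d.z * d.z + 1e-6f; // softened to avoid divide-by-zero
            float invR = 1.0f / std::sqrt(r2);
            float a    = G * ps[j].mass / r2;        // acceleration magnitude toward particle j
            ps[i].vel.x += a * d.x * invR * dt;
            ps[i].vel.y += a * d.y * invR * dt;
            ps[i].vel.z += a * d.z * invR * dt;
        }
    }
}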
August 25, 2009 2:55:43 PM

daedalus685 said:
I have no doubt that physics computation would benefit greatly from parallel calculation. However, I do not see this happening any time soon. For physics to matter in a game it has to be the baseline of the game; otherwise it can't have any impact on the gameplay. Thus it requires the minimum specs to jump greatly. It also requires either the vast majority to have a GPU capable of hardware acceleration, or the CPU to be able to demonstrate the same physics, though perhaps more slowly. We can already use the CPU to do the physics, but for whatever reason Nvidia and the like choose to only allow the advanced effects while hardware acceleration is on.

Additionally, I have never seen any proof that the CPU cannot keep up. This was my point. If the CPU chugs along at 5 FPS because of PhysX, yet the same effects on the GPU can be run at 40 FPS, I really would like to know. Does running the physics on the CPU simply increase whatever bottleneck you might have, or is the CPU fundamentally flawed at the computation?

Rendering is different. With rendering you get what amounts to ridiculously simple calculations done a few thousand (million) times, over and over. Thus it would be silly to have a complicated few-core computer calculate it. With physics, I can envision approximations that may be ridiculously simple to compute, but once you add more than two particles, more than one force, and so on, the complexity increases exponentially. It couldn't be done effectively on today's GPUs, at least I don't think it could. Once the complexity, among other things, is there for a GPU, I really feel that a CPU will have changed enough that the distinction of "what is better" will be lost.

Something as simple as a cloud of particles, each with a position and velocity, would obviously be best on a massive number of simple cores. But what if we apply charge to these particles? Even adding something as simple as Newtonian gravity would increase the complexity impressively. At some point the API would not be able to run well on parallel cores. We need something with the freedom to run complex tasks on a fast, complicated core, and the simple approximations of particle movement on a GPU-like core; something that has the freedom to use whatever resources are there, without totally hogging either. Frankly, we need more powerful computers, more dedicated software folks, and the bottom to entirely drop off the average Joe's computer. I can't see physics in a game really mattering to the gameplay until the minimum specs of a game call for an advanced multi-GPU configuration, a very fast quad-core CPU, and loads of RAM on a 64-bit OS.

I don't want us to fight about which acceleration is better, CPU or GPU. I want someone to make a game using an advanced physics API and show everyone how much it changes games. Perhaps even show everyone how useless our CPUs are at computing particle approximations of solids. People seem to forget that CPUs, and the OS as well, are advancing. By the end of next year we will have a CPU with 8 physical cores, 16 effective. That is approaching the number of cores an Ageia card had. It would not surprise me in the least if the lines between GPU and CPU in fact start to grey.


I more or less agree, but I still argue that CPUs will start to bottleneck once you move away from F=ma plus the force of gravity on objects, due to limited onboard cache and a small data bus. To move forward, a unified API is needed, so it can start to be used as a baseline process. Until then, we are at the mercy of different game engines and the restrictions they put on us.
August 25, 2009 3:08:10 PM

Here's the example I currently use:

I'm playing an FPS. I'm in the mountains and an enemy patrol is coming at me. They are far more heavily armed, and there's no way past them.

I notice that the snow on the cliff above them appears loose. I shoot an RPG at the top of the cliff, starting an avalanche. The enemies try to run, but get crushed.

-----------------------

Now, I can make a game that scripts this entire event, but I want this handled by the API itself. The snow would be an object that's placed on top of the cliffs, treated as a separate object. When I shoot the RPG, I exert a force upon the snow, which in turn knocks some of it loose. That snow, in turn, knocks more snow loose, creating an avalanche.

Now, a simplified formula would start to kick in. I call this the "Prediction Formula": it simply estimates the path of the avalanche as a whole (treating it as one giant object). Eventually, that path would overlap with the enemies, which in turn would kick in an AI routine that basically tells the AI, "There's a giant mass of dense, high-speed material moving toward us. Run." This would ensure the AI simply doesn't ignore their impending doom.

Now, PhysX could already accomplish this. Other engines, however, would simply cause an avalanche when certain weapons hit the peaks, as opposed to actually checking whether or not the force of the explosion was enough to trigger the event.
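Something like this is what I have in mind; it's only a made-up C++ sketch (none of these types come from any real engine), but it shows the force check and the "Prediction Formula" step instead of a scripted trigger.

#include <cmath>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

struct SnowPack { Vec2 pos; float cohesion; };               // force needed to knock the snow loose
struct Agent    { Vec2 pos; bool fleeing = false; };

// "Prediction Formula": treat the slide as one big body sweeping straight downhill
// and ask whether a point lies inside that corridor.
bool inPredictedPath(const Vec2& start, const Vec2& dir, float width, const Vec2& p) {
    Vec2 toP{p.x - start.x, p.y - start.y};
    float along = toP.x * dir.x + toP.y * dir.y;             // distance along the slide direction
    if (along < 0) return false;                             // behind the slide, safe
    float side = std::fabs(toP.x * dir.y - toP.y * dir.x);   // distance off the centre line
    return side <= width * 0.5f;
}

void onExplosion(const Vec2& at, float force, const SnowPack& snow, std::vector<Agent>& agents) {
    float dx = snow.pos.x - at.x, dy = snow.pos.y - at.y;
    float dist    = std::sqrt(dx * dx + dy * dy) + 1e-3f;
    float applied = force / (dist * dist);                   // force falls off with distance

    if (applied < snow.cohesion) return;                     // not strong enough: nothing happens, no script

    const Vec2 downhill{0.0f, -1.0f};                        // assumed slope direction for this sketch
    for (auto& a : agents)                                   // warn anyone standing in the predicted path
        if (inPredictedPath(snow.pos, downhill, /*width=*/10.0f, a.pos))
            a.fleeing = true;                                // kicks in the "giant mass coming, run" AI routine
}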

An even easier implementation would be bulletproof vests, helmets, bricks, etc., anything you can shoot through, really. A bullet hits an object and either pierces through or does not. Either way, the object it hits takes some damage: maybe a bullet lodged in the middle, or maybe the object shatters entirely. So right there, I just solved low/high-caliber bullets, protective armor, and destroyable cover. Again, I'm not talking COD4's "high-caliber bullets go through cement and others don't"; I'm talking "Based on the strength of the material, the bullet either does or does not pierce it."
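A rough sketch of the bullet case, again with invented types and numbers, just to show the idea of energy vs. material strength rather than a per-weapon whitelist:

#include <algorithm>
#include <cmath>

enum class HitResult { Lodged, Pierced, Shattered };

struct Material { float stopEnergy; float shatterEnergy; }; // joules absorbed before penetration; joules to destroy it
struct Bullet   { float mass; float speed; };               // kg, m/s

HitResult resolveHit(const Bullet& b, const Material& m, float& exitSpeed) {
    float energy    = 0.5f * b.mass * b.speed * b.speed;          // kinetic energy of the round
    float remaining = std::max(0.0f, energy - m.stopEnergy);      // what's left after the material soaks some up
    exitSpeed       = std::sqrt(2.0f * remaining / b.mass);

    if (energy >= m.shatterEnergy) return HitResult::Shattered;   // brittle cover (brick, cinder block) breaks apart
    if (remaining > 0.0f)          return HitResult::Pierced;     // goes through with whatever energy is left
    return HitResult::Lodged;                                     // vest/wall absorbs it; bullet stays put
}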

A physics API would greatly enhance gaming, which is why I am not happy that ATI is dragging this out.
August 25, 2009 3:10:51 PM

Please don't bite my head off, but how exactly is it ATI's fault?

Mactronix
August 25, 2009 4:52:43 PM

The level of physics (approximation using large masses instead of representation using fluids) does not, to me, scream that it requires a GPU.

I see no reason to believe that a properly implemented physics API could not work just as well on a CPU. However, I do suspect that the vast majority of gamers have a more powerful GPU than CPU (they are already CPU-bottlenecked to some degree), and thus using the overhead available makes some sense instead of pushing the system farther into a bottlenecked scenario.

That being said, I do not see this as a feature of GPU parallelism, but merely of their relative power and expected load compared to a CPU in current-generation games.

I don't see parallelism really mattering until true particle representation is used, where millions of positions and whatnot will have to be calculated. However, for this to really make things work well it will have to be many orders of magnitude better than what we have now, as a half-assed approach would not look as good as an approximation. A favourite example of mine is a cloud of ions, something we looked at a lot in astrophysics. The mind-bending amounts of power and memory needed to come up with even a somewhat realistic simulation are absolutely astounding.

For parallelism to matter we need to represent things as a system of particles; for that to get us anywhere, n-body problems would crop up rather quickly (each particle's position will depend on all the other particles, thus the position and direction of all of them must be known to give an idea of what any one is doing). I'm sorry to say we are miles away from the computing power to make that work in real time. For the time being we will have to settle for approximations, which is fine, and 99% of the time our eyes could not tell the difference; but it is something I do not think is any better suited to a GPU or a CPU. Rather, it is something that should be done on both, given multithreading improvements, depending on the overhead available at a given time.

Effectively, you either have lots of simple calculations or a few complicated ones to solve. The way physics works, you often have many complicated ones. A CPU could do the complicated calculations, although slowly. A GPU could do many simple ones fast, but gets nowhere on the complex ones. Until the GPU is more advanced, we get what we get to play with. To design a system that ignores the massive positives of either the GPU or the CPU, by being set on using only one, is very short-sighted.
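For what it's worth, here is a rough C++ sketch of what "use both" could look like (hypothetical names throughout, and obviously a real scheduler would be smarter): the independent per-particle work gets chunked across whatever hardware threads are free, while the coupled solve stays on one core.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Embarrassingly parallel part: each particle is independent, so any core can take a slice.
void integrateRange(std::vector<Particle>& ps, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        ps[i].px += ps[i].vx * dt;
        ps[i].py += ps[i].vy * dt;
        ps[i].pz += ps[i].vz * dt;
    }
}

// Coupled, branchy work (contacts, constraints): keep it on one fast core.
void solveConstraints(std::vector<Particle>& ps) { /* ... */ }

void stepWorld(std::vector<Particle>& ps, float dt) {
    unsigned workers  = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (ps.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = std::min(ps.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(integrateRange, std::ref(ps), begin, end, dt);
    }
    for (auto& t : pool) t.join();       // wait for the simple, parallel part

    solveConstraints(ps);                // then the complicated, serial part
}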

As for ATI, I can't see any logical reason to blame them. What do they have to do with anything? Could Nvidia not simply allow PhysX effects to be in a game and only allow hardware acceleration on their cards? If it is as good as you seem to think, Gamer, would this not be the best ad for Nvidia ever: "Play games accelerated on our physics through our GPUs, or get 5 FPS instead of 50"? I remain unconvinced that the CPU could not, at least today, do just as good a job as the GPU, hence why the fuss seems to be around this.

Additionally, does anyone really believe that Nvidia simply wants to give it away without asking for anything in return? If they offer an olive branch and ATI refuses, well then, sure, we can talk about it being ATI's own fault. But as it stands, Nvidia never officially offered anything, and the rumours around were all about how they "offered to help ATI with it", with the catch being anything from a fee on each card ATI sells to ATI having to open their entire architecture to Nvidia. If this was just about ATI's ego then I would understand the situation, but it is clearly far more than that; to simplify it as such seems silly.
August 25, 2009 5:06:41 PM

To me, the only reason ATI would refuse is because of the new Havok engine (which again, locks us into an engine and not an outside API).

The sad part is this reminds me of the old VGA debates I had ages ago, where I called for an outside API to handle graphics acceleration. People said we wouldn't need that for at least another decade...

Barring ray-tracing becoming a legitimate technique, I think we've hit the ceiling graphically. All that's left now is the physics/AI backend, in my mind.
August 25, 2009 5:15:59 PM

I must be having a dim day today, as I really can't see how working things the way Gamer is saying would ever be the best way to do things.
I'm talking about the avalanche thing here. I just don't see a game getting coded for the second scenario when it's so much easier the first way and would make no difference to gameplay; they still get hit, right? Same with the bullets: why not just have it where a certain ammo with a certain weapon either does or doesn't go through? Why put extra, unneeded calculations in?

I'm not having a go, I'm just trying to get my head around it, is all. I mean, you see stuff in the news section where devs are saying they won't ever specifically code a game for multiple cores because it takes twice as long and so costs twice as much, and the next thing you know there are games out that seem to be taking advantage of the extra cores.

The devs will always take the easy route, which is why we are stuck with hardware that outstrips the games 90% of the time. I wonder, if they had taken on the task of coding for multithreading and properly implementing DX10 in the first place, how far down this road we would have come already. It just seems a bit like "Hey, we have all this performance left over; what shall we do with it?"

Mactronix