
Nvidia PhysX + CPU Usage

August 13, 2008 12:39:48 PM

Nvidia has released a full WHQL version of ForceWare 177.83 with a built-in PhysX driver, so I thought I would try out the countless PhysX demos out there. But here is the thing.

First I tried PhysX FluidMark v1.0. I got an OK frame rate of 20 fps, but that's not the issue: CPU usage on one core jumped to 90% and stayed there for as long as I ran the application, while the other core sat at 40%. No other apps were running, and my CPU normally idles at 15% at most.

Second, I tried the NVIDIA PhysX Particle Fluid Demo, only to find the same result: my CPU is being hogged for a mere 20 fps.

But this is the most interesting part: when I turn off GeForce PhysX, the performance of the NVIDIA PhysX Particle Fluid Demo drops from 20 to 5 fps. You would imagine that the CPU is now doing the PhysX work, so you would expect CPU usage to be the same as before or maybe higher. BUT the CPU usage actually falls to 40-50%.

Also, after running the NVIDIA PhysX Particle Fluid Demo for 10 minutes with GPU PhysX enabled, my GPU temperature is merely 53°C, whereas the GPU temperature at full load is 60-65°C.

My specs:
AMD Athlon 64 X2 5600+ @ 2.8GHz
2GB RAM @ 800MHz
XFX 9600 GT XT

Now, if I understand correctly, PhysX on the GPU means that the PhysX code runs on the GPU and not on the CPU. So why is my CPU usage so high?

Is Nvidia playing some kind of game here, making customers believe that PhysX on Nvidia hardware is really good by cheating in these PhysX demos and taking help from the CPU?


August 13, 2008 12:55:37 PM

I'm eagerly waiting to get home so I can try this as well; I was hoping for a much more impressive GPU offload.
August 13, 2008 1:15:50 PM

Check this out: http://www.fudzilla.com/index.php?option=com_content&ta...

Quote:
After our initial PhysX test results, we figured we'd investigate this a little bit further, as something is going on here. A new benchmark that tests PhysX has just been released and it's called FluidMark and we decided to give it a go.

The results are intriguing to say the least. With no extra load on the CPU, the Ageia PPU scores 1,753 points, or an average of 30fps, while the Geforce 8800 GT card manages 4,921 points or an average of 83fps. A clear lead for the Geforce card, but things are set to get interesting.

When we loaded the CPU to 100 percent things changed dramatically for one of the cards, and it's not the Ageia card. Even though the PPU took a performance hit down to 1,282 points or 22fps, it was nothing compared to the performance hit that the Geforce card took, as it dropped all the way down to 1,344 points or 22fps.


I'd still wait a little longer before jumping to any conclusions, but... honestly? The whole PhysX thing looks like green BS plus cheating to me.

Edit: Thankfully, ATI is not focusing on this piece of crap, since Havok already gets the job done with the CPU.
August 13, 2008 1:23:22 PM

sarwar_r87 said:

But this is the most interesting part: when I turn off GeForce PhysX, the performance of the NVIDIA PhysX Particle Fluid Demo drops from 20 to 5 fps. You would imagine that the CPU is now doing the PhysX work, so you would expect CPU usage to be the same as before or maybe higher. BUT the CPU usage actually falls to 40-50%.

Also, after running the NVIDIA PhysX Particle Fluid Demo for 10 minutes with GPU PhysX enabled, my GPU temperature is merely 53°C, whereas the GPU temperature at full load is 60-65°C.


If the CPU doesn't do well in floating-point or double-precision calculations, no wonder it isn't at 100%. Or there is a bottleneck somewhere. In that case, the demo runs in "software mode", and that software might not be "optimized" to use your CPU.
And going from 5 fps to 20 fps is just a 300% performance bump, while also processing D3D data. That sounds pretty good to me, mate.

sarwar_r87 said:

Now, if I understand correctly, PhysX on the GPU means that the PhysX code runs on the GPU and not on the CPU. So why is my CPU usage so high?

Is Nvidia playing some kind of game here, making customers believe that PhysX on Nvidia hardware is really good by cheating in these PhysX demos and taking help from the CPU?


It will offload to the GPU; it's not GPU-exclusive!!!!!! In your case it just offloaded to the GPU with a 300% performance bump!!!!
God, I think you're right!!! It is really bad to have (in some apps/cases) a 300% performance bump from a freaking driver update!! You should ask for your money back.

I'm an ATI fan, and I guess in this case you have nothing to rant about. Geez.

August 13, 2008 2:25:38 PM

radnor said:
If the CPU doesn't do well in floating-point or double-precision calculations, no wonder it isn't at 100%. Or there is a bottleneck somewhere. In that case, the demo runs in "software mode", and that software might not be "optimized" to use your CPU.
And going from 5 fps to 20 fps is just a 300% performance bump, while also processing D3D data. That sounds pretty good to me, mate.


It sounds good, but is it "fair" if it's the CPU that's doing most of the work? Also, it won't sound so good when you notice you have one or two cores running close to 90% when they shouldn't (like this: http://www.fudzilla.com/index.php?option=com_content&ta...)

Not even with the "Quad going mainstream" argument.

Quote:
It will offload to the GPU; it's not GPU-exclusive!!!!!! In your case it just offloaded to the GPU with a 300% performance bump!!!!
God, I think you're right!!! It is really bad to have (in some apps/cases) a 300% performance bump from a freaking driver update!! You should ask for your money back.


Offload to the GPU? Try checking the link I posted above again (as well as the one in the post before this one).

There's not much evidence of the work being offloaded from the CPU to the GPU, but rather of the inverse situation. Sorry, but according to Nvidia's marketing the CPU was supposed to be replaced by the GPU in this task. I see your point and there's a performance gain, but it comes at the cost of higher CPU utilization (when it shouldn't be). If they say that their hardware (GeForce 8, 9 and on) is an alternative to the usual solution (Havok's CPU-based physics) and can do it better, then why is their solution even more dependent on the hardware used by the first one (the CPU)? Geez, I thought the CPU was dead.

Quote:
I'm an ATI fan, and I guess in this case you have nothing to rant about. Geez.


Yes, he does have something to rant about. It's called corporate BS. It happens once in a while. Perhaps they should call it "PhysX: making your GeForce work happier with your CPU" - then it would be OK.
August 13, 2008 2:44:19 PM

Yeah, I read a couple of the Fudzilla articles on this, and it does seem odd that the CPU usage is so heavy when the GPU is supposed to be doing the physics. That makes it seem like the CPU is actually doing the physics work, which I think makes more sense anyway. The GPU should focus on drawing frames rather than trying to do more, but that seems to be what is going on with PhysX anyway.

Plus, I prefer Havok. You don't need special hardware; it uses the CPU and lets the GPU do its own thing.
August 13, 2008 3:05:45 PM

dattimr said:
It sounds good, but is it "fair" if it's the CPU that's doing most of the work? Also, it won't sound so good when you notice you have one or two cores running close to 90% when they shouldn't (like this: http://www.fudzilla.com/index.php?option=com_content&ta...)

Not even with the "Quad going mainstream" argument.



It will behave like that because the Ageia card has a modified G5 CPU on it!! It is not an x86 CPU, but it is a CPU!!!
Your G80/G92 aren't true GPGPUs!! That is still to come! This is a good first step. Only people who don't know the differences between the Ageia cards and the G80/G92 chips will find it strange!!!

It is a great gimmick. I really, really don't understand you folks. What did you want? That all those consumed processing cycles just disappeared into thin air? That it was being done magically? No.

Damn, you're asking for more than is possible.

- You have your GPU doing physics and D3D.
- You have a 300% performance bump in some cases.
- Your total performance in some cases just quadrupled. What did you expect in terms of CPU usage?
- And you're quoting FUDzilla as your official source? Link me an official/less dubious/backed-up source, please.


August 13, 2008 3:07:59 PM

BUT the CPU cannot simulate complex physics fast enough (cloth movement, advanced physics explosions with each particle having its own physics calculated), while the GPU can. Having the GPU do that many PhysX calculations and increasing your FPS by 300% will of course make more data flow through the CPU, increasing its usage.

I see Havok as an Intel ploy to try to counteract PhysX, because if PhysX succeeds they are screwed.
August 13, 2008 3:29:56 PM

CPU physics has reached its peak. As above, CPUs can't run physics calculations quickly enough for non-linear equations.

Here's an example: in Havok, if you fall a long distance, you go straight down the entire way. If something blocks that path, it merely adjusts the angle of the drop until you can continue straight down (turn off gravity in CSS and slide off a building).

In PhysX, if you fall a long distance, you go straight down. But if you hit an obstacle, it not only adjusts the angle at which you fall, it can also cause the body to bounce/turn in midair, and it allows you to continually bounce off objects the entire way down in a realistic manner (think of falling off a cliff and bouncing off every rock on the way down).


Also, some extra work is expected, because the 8000/9000 series of cards weren't designed with PhysX in mind, so some software workarounds are almost surely needed to get the software to run correctly (which could account for the CPU usage). Also note: I don't know HOW the data for the calculations gets to the card, so that could also account for the extra CPU usage (the CPU 'feeding' the GFX card with data?).

One final point: because most games don't use PhysX, or disable it by default, you should expect games to lose performance when running PhysX. Makes sense. The point is that we can get it for free now, instead of having to buy a $150 PPU.
August 13, 2008 3:35:16 PM

All I can say is PhysX + UT3 = AWESOME!
August 13, 2008 4:06:19 PM

radnor said:
It will behave like that because the Ageia card has a modified G5 CPU on it!! It is not an x86 CPU, but it is a CPU!!!
Your G80/G92 aren't true GPGPUs!! That is still to come! This is a good first step. Only people who don't know the differences between the Ageia cards and the G80/G92 chips will find it strange!!!

It is a great gimmick. I really, really don't understand you folks. What did you want? That all those consumed processing cycles just disappeared into thin air? That it was being done magically? No.

Damn, you're asking for more than is possible.

- You have your GPU doing physics and D3D.
- You have a 300% performance bump in some cases.
- Your total performance in some cases just quadrupled. What did you expect in terms of CPU usage?
- And you're quoting FUDzilla as your official source? Link me an official/less dubious/backed-up source, please.


Actually, it appears the GPU isn't doing much more physics; it looks a lot more like it's getting offloaded to the CPU. When the Ageia card does PhysX there is no load on the CPU, but when you do it with a GeForce the CPU usage soars, which sounds like the CPU is doing the work. No processing cycles went into thin air; they went to the CPU, not the GPU. Hmm, I like how you diss Fudzilla when they are actually right a lot more often than they are wrong, but since it doesn't back up what you say, it's BS. Either way, I'm sure we'll see some more in-depth reviews of this.
August 13, 2008 4:08:59 PM

radnor said:
It will behave like that because the Ageia card has a modified G5 CPU on it!! It is not an x86 CPU, but it is a CPU!!!
Your G80/G92 aren't true GPGPUs!! That is still to come! This is a good first step. Only people who don't know the differences between the Ageia cards and the G80/G92 chips will find it strange!!!

It is a great gimmick. I really, really don't understand you folks. What did you want? That all those consumed processing cycles just disappeared into thin air? That it was being done magically? No.

Damn, you're asking for more than is possible.

- You have your GPU doing physics and D3D.
- You have a 300% performance bump in some cases.
- Your total performance in some cases just quadrupled. What did you expect in terms of CPU usage?
- And you're quoting FUDzilla as your official source? Link me an official/less dubious/backed-up source, please.


I didn't even know it was on Fudzilla.
So basically, whatever gain the GeForce has is because of the CPU. The Fudzilla people loaded up the CPU and performance dropped like hell.

But that's not the point. Nvidia has used PhysX for marketing, which means they have misled people who chose Nvidia over ATI because of PhysX. I have seen loads of people list PhysX as a plus point for Nvidia, but is it valid? Because:
a) a 300% increase doesn't give any good playable quality;
b) UT3 runs fine on all current GPUs;
c) it still takes the whole CPU, whereas if you turn off hardware acceleration, CPU utilization comes down. I don't see why I would want something like that; I can't even run those PhysX demos and a web browser without hitting 100% CPU usage.

It's like saying Intel's new IGP is three times faster than the previous generation, which means a jump from 3 fps to 9 fps in Lost Planet.
August 13, 2008 4:14:20 PM

San Pedro said:
Actually, it appears the GPU isn't doing much more physics; it looks a lot more like it's getting offloaded to the CPU. When the Ageia card does PhysX there is no load on the CPU, but when you do it with a GeForce the CPU usage soars, which sounds like the CPU is doing the work. No processing cycles went into thin air; they went to the CPU, not the GPU. Hmm, I like how you diss Fudzilla when they are actually right a lot more often than they are wrong, but since it doesn't back up what you say, it's BS. Either way, I'm sure we'll see some more in-depth reviews of this.


+1
August 13, 2008 4:27:06 PM

Here's my explanation:

What I'm guessing is that the PhysX drivers make the CPU send the physics data to the GFX card (as a replacement for the PPU), which computes the result and then sends the data back to the CPU, which then acts on it.

As there is a higher workload (the CPU now has to send and receive large amounts of physics data), CPU usage goes up as a result, despite the CPU not actually calculating anything. Worse, if the drivers are unoptimized, it's possible the CPU could also be trying to execute the standard, non-PhysX physics, even though that is unneeded.


Once I get home, I'll take a look through the SDK. I'll post if I figure out what is going on here. I do know the CPU isn't doing the work, though, but I'm still confused about where the extra workload is coming from.

Again, the CPU is NOT executing the PhysX data.
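
To picture what that would mean in code, here is a minimal sketch of the per-frame loop I'm describing, with made-up helper names standing in for the driver/D3D calls (this is NOT the actual PhysX driver code, just an illustration of where the CPU time could go):

    #include <vector>

    struct Particle { float pos[3]; float vel[3]; };

    // Hypothetical stand-ins for the real driver/D3D calls (illustration only).
    static void uploadToGpu(const std::vector<Particle>&) { /* CPU copies data over PCIe */ }
    static void runPhysicsKernelOnGpu(float) { /* GPU solves the particle system */ }
    static void readBackFromGpu(std::vector<Particle>&) { /* CPU copies results back */ }
    static void buildVertexBuffersAndDraw(const std::vector<Particle>&) { /* CPU-side D3D work */ }

    // Per-frame loop: only runPhysicsKernelOnGpu() runs on the GPU; every other
    // step is CPU work that scales with particle count and with frame rate.
    void frame(std::vector<Particle>& particles, float dt) {
        uploadToGpu(particles);
        runPhysicsKernelOnGpu(dt);
        readBackFromGpu(particles);
        buildVertexBuffersAndDraw(particles);
    }

    int main() {
        std::vector<Particle> particles(60000);   // rough guess at a fluid demo's particle count
        for (int f = 0; f < 600; ++f)             // roughly 10 seconds at 60 fps
            frame(particles, 1.0f / 60.0f);
    }

If the GPU quadruples the frame rate, the CPU-side steps also run four times as often, which alone could explain a core sitting near 90% even though the solver itself moved off the CPU.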
August 13, 2008 4:28:00 PM

San Pedro said:
Actually, it appears the GPU isn't doing much more physics; it looks a lot more like it's getting offloaded to the CPU. When the Ageia card does PhysX there is no load on the CPU, but when you do it with a GeForce the CPU usage soars, which sounds like the CPU is doing the work. No processing cycles went into thin air; they went to the CPU, not the GPU. Hmm, I like how you diss Fudzilla when they are actually right a lot more often than they are wrong, but since it doesn't back up what you say, it's BS. Either way, I'm sure we'll see some more in-depth reviews of this.


+2

That's exactly what I was talking about.
August 13, 2008 4:30:46 PM

San Pedro said:
Actually, it appears the GPU isn't doing much more physics; it looks a lot more like it's getting offloaded to the CPU. When the Ageia card does PhysX there is no load on the CPU, but when you do it with a GeForce the CPU usage soars, which sounds like the CPU is doing the work. No processing cycles went into thin air; they went to the CPU, not the GPU. Hmm, I like how you diss Fudzilla when they are actually right a lot more often than they are wrong, but since it doesn't back up what you say, it's BS. Either way, I'm sure we'll see some more in-depth reviews of this.


I guess you missed some of my earlier posts. The Ageia card was a dedicated solution, based on the G5 CPU, specially modified for the kind of calculations that physics needs.

Our GPUs/VPUs can do "some" of those calculations but still have to rely on an x86 CPU (AMD/Intel). The G5 base was excellent, with five excellent pipelines, because it was already a multi-purpose CPU used in PCs (not fully x86 compatible, but it did the trick), in this case the PowerMac G5!!! So it can cope with the multitude of calculations and operations that a GPU just can't.

As for dissing Fudzilla, I can tell you I read it daily, as I do The Inquirer. I consider them both good sources of "somewhat" reliable information, but they are hardly official. I do use them sometimes for an argument, but I always underline that it is The Inquirer or Fudzilla. All with a grain of salt.
Just as you don't bring a knife to a gun fight, you don't base your argument on the interpretation of homemade benchmarks (which, honestly, are usually the best, with real-world hardware) on FUDzilla. You bring benchmarks from credited (or the most relevant) sites for comparisons/conclusions.

On one thing we agree: I too want to see more "official" benchmarking, from Tom's, Anand, Hardware Canucks, Guru3D and others.


August 13, 2008 4:43:57 PM

^ The problem here is the way Nvidia is pitching PhysX, Radnor. They said they would move that load from the CPU to the GPU and do it better. What's the point of the technology if it uses not the same CPU power as the alternatives, but almost double? Given the arrival of multi-core CPUs - and of Larrabee - I wonder what the point is of trying to create another clearly dubious "standard".
August 13, 2008 5:01:53 PM

radnor said:
I guess you missed some of my earlier posts. The Ageia card was a dedicated solution, based on the G5 CPU, specially modified for the kind of calculations that physics needs.

Our GPUs/VPUs can do "some" of those calculations but still have to rely on an x86 CPU (AMD/Intel). The G5 base was excellent, with five excellent pipelines, because it was already a multi-purpose CPU used in PCs (not fully x86 compatible, but it did the trick), in this case the PowerMac G5!!! So it can cope with the multitude of calculations and operations that a GPU just can't.

As for dissing Fudzilla, I can tell you I read it daily, as I do The Inquirer. I consider them both good sources of "somewhat" reliable information, but they are hardly official. I do use them sometimes for an argument, but I always underline that it is The Inquirer or Fudzilla. All with a grain of salt.
Just as you don't bring a knife to a gun fight, you don't base your argument on the interpretation of homemade benchmarks (which, honestly, are usually the best, with real-world hardware) on FUDzilla. You bring benchmarks from credited (or the most relevant) sites for comparisons/conclusions.

On one thing we agree: I too want to see more "official" benchmarking, from Tom's, Anand, Hardware Canucks, Guru3D and others.


The problem is Nvidia's marketing line for the GeForce 8/9, which is "PhysX will take the load off the CPU and move it to the GPU."
BUT it doesn't, and it actually makes things worse compared to the G5-based PPU that already gives OK performance.

It's misleading. BIG TIME!!!!!!!!
August 13, 2008 5:03:12 PM

I just found this in a post on TechReport's forums, and it looks pretty much like the truth to me: "This is the point of PhysX: buy 2 GPUs and no quad (at least that's what Nvidia said)." - bogbox.

In other words: PhysX is nothing but marketing BS.

Also, Techreport said in its article about PhysX (http://www.techreport.com/articles.x/15261): "CPU utilization was paradoxically higher in the hardware physics mode, even though the GPU shouldered the simulation work." No numbers, though.

I'd rather have a quad over 2 GPUs any day (supposing that one of them would be *only* dedicated to PhysX). That's just my 2 cents, anyway.
August 13, 2008 5:06:26 PM

dattimr said:
^ The problem here is the way Nvidia is pitching PhysX, Radnor. They said they would move that load from the CPU to the GPU and do it better. What's the point of the technology if it uses not the same CPU power as the alternatives, but almost double? Given the arrival of multi-core CPUs - and of Larrabee - I wonder what the point is of trying to create another clearly dubious "standard".


I'll try to search for it. While this thread has been rolling (really slow day at work), I've been checking the Nvidia homepage. There is nothing about exclusively using the GPU for those calculations. In this case it is still a great deal, because the test subject is a *drumroll* Athlon X2 6000+ at 90nm. It is hardly one of the most powerful CPUs.

It doubled the CPU load for quadruple the frame rate!!!!!
If you think in proportion, that is a pretty big jump for what it costs: nothing.

The "driver" update included 2 free games, addons for other games, and compatibility with a crap load of other software.

Now imagine the OP's system, but with a quad core in it. That 6000+ is probably a bottleneck. In my POV it is a freaking great driver update. Really. As for the dubious standard, it will probably be regulated by a future version of DirectX, or will use DirectX as an interface, whether it is Havok or CUDA.

I understand your point, I really do. But the gain is just too good for the money invested. Every decision - matters of life, hardware, cars, women, jobs - has its strong and weak points. You just need to weigh them all and see what's better. Weighing this package (from an angry consumer, don't forget) seems to me like technological bliss!!! It doubles the load while quadrupling the performance!!

Edit @dattimr: Reading your link to TechReport. BRB!!

August 13, 2008 5:10:19 PM

dattimr said:
I just found this in a post on TechReport's forums, and it looks pretty much like the truth to me: "This is the point of PhysX: buy 2 GPUs and no quad (at least that's what Nvidia said)." - bogbox.

In other words: PhysX is nothing but marketing BS.

Also, Techreport said in its article about PhysX (http://www.techreport.com/articles.x/15261): "CPU utilization was paradoxically higher in the hardware physics mode, even though the GPU shouldered the simulation work." No numbers, though.

I'd rather have a quad over 2 GPUs any day (supposing that one of them would be *only* dedicated to PhysX). That's just my 2 cents, anyway.


That's funny... this is what I reported, and people say homemade benchmarks don't mean anything... :cry:  :cry: 

I say get a quad core if you want to stick with Nvidia PhysX, and let one of the cores stay busy doing what the Nvidia GPU is supposed to be doing.

Maybe that's why ATI didn't go for the PhysX thing... it makes sense now, huh?
August 13, 2008 5:15:58 PM

radnor said:
I'll try to search for it. While this thread has been rolling (really slow day at work), I've been checking the Nvidia homepage. There is nothing about exclusively using the GPU for those calculations. In this case it is still a great deal, because the test subject is a *drumroll* Athlon X2 6000+ at 90nm. It is hardly one of the most powerful CPUs.

It doubled the CPU load for quadruple the frame rate!!!!!
If you think in proportion, that is a pretty big jump for what it costs: nothing.

The "driver" update included two free games, add-ons for other games, and compatibility with a crapload of other software.

Now imagine the OP's system, but with a quad core in it. That 6000+ is probably a bottleneck. In my POV it is a freaking great driver update. Really. As for the dubious standard, it will probably be regulated by a future version of DirectX, or will use DirectX as an interface, whether it is Havok or CUDA.

I understand your point, I really do. But the gain is just too good for the money invested. Every decision - matters of life, hardware, cars, women, jobs - has its strong and weak points. You just need to weigh them all and see what's better. Weighing this package (from an angry consumer, don't forget) seems to me like technological bliss!!! It doubles the load while quadrupling the performance!!

Edit @dattimr: Reading your link to TechReport. BRB!!


So basically Nvidia not only wants us to buy an overpriced badass GPU, but also a kick-ass quad core. But that contradicts their theory that the GPU is more IMPORTANT than the CPU - remember the whole Nvidia vs. Intel thing.
August 13, 2008 5:18:08 PM

I managed to download the SDK while at work (everyone is at lunch now :D ), and I'm currently looking through it, as well as a batch of documents, to figure out why the CPU is getting so much of the load.

Right now, at least some of the data, if not all, is going to the GPU. I'll post if I find anything that explains the CPU numbers for sure...
August 13, 2008 5:23:46 PM

dattimr said:
I just found this in a post on TechReport's forums, and it looks pretty much like the truth to me: "This is the point of PhysX: buy 2 GPUs and no quad (at least that's what Nvidia said)." - bogbox.

In other words: PhysX is nothing but marketing BS.

Also, Techreport said in its article about PhysX (http://www.techreport.com/articles.x/15261): "CPU utilization was paradoxically higher in the hardware physics mode, even though the GPU shouldered the simulation work." No numbers, though.

I'd rather have a quad over 2 GPUs any day (supposing that one of them would be *only* dedicated to PhysX). That's just my 2 cents, anyway.


I already read the TechReport article. Sorry, it just stated the obvious. You want more eye candy? It will be slower, or it will tax your system more. But it has always been like that!! From good old Transform & Lighting to FSAA (full-screen anti-aliasing).
You want more eye candy, it will tax your system more. So?

This small graph from the article you linked explains everything:
- No physics: gaming as usual, like we all knew.
- Physics running on the GPU: your "experience" gets taxed, like FSAA, MSAA, T&L, and every other ugly acronym in the GPU world.
- Physics in software: this MUST be handled by the CPU. It isn't the Holy Ghost rendering this; it is just the CPU.

So that graph, in the article you showed, from a, let's say, respectable site, shows on average a 300% improvement in frame rate over the CPU doing it!! And that reviewer was using an Intel Core 2 Duo E6400 at 2.13GHz, which should not be much different from our OP's CPU (at stock settings). That graph is very consistent with what sarwar reports: a similar system, with similar performance.

The only thing we can conclude is that, with physics running on the GPU, on an average dual core with an 8800 GT/9600 GT, you will get 300% MORE fps.

But again, I must stress that I also want to see hard evidence/benchmarking on this one with several different systems!!! We all do.

August 13, 2008 5:36:37 PM

radnor said:
I'll try to search for it. While this thread has been rolling (really slow day at work), I've been checking the Nvidia homepage. There is nothing about exclusively using the GPU for those calculations. In this case it is still a great deal, because the test subject is a *drumroll* Athlon X2 6000+ at 90nm. It is hardly one of the most powerful CPUs.

It doubled the CPU load for quadruple the frame rate!!!!!
If you think in proportion, that is a pretty big jump for what it costs: nothing.

The "driver" update included two free games, add-ons for other games, and compatibility with a crapload of other software.

Now imagine the OP's system, but with a quad core in it. That 6000+ is probably a bottleneck. In my POV it is a freaking great driver update. Really. As for the dubious standard, it will probably be regulated by a future version of DirectX, or will use DirectX as an interface, whether it is Havok or CUDA.

I understand your point, I really do. But the gain is just too good for the money invested. Every decision - matters of life, hardware, cars, women, jobs - has its strong and weak points. You just need to weigh them all and see what's better. Weighing this package (from an angry consumer, don't forget) seems to me like technological bliss!!! It doubles the load while quadrupling the performance!!

Edit @dattimr: Reading your link to TechReport. BRB!!


Oh, man. It's been quite a slow day over here too, and it even looks like it's going to rain. Too bad I won't see my girl before Friday. :C But hey, we are both missing a point here: the tests with 4x the FPS are nothing but *physics-only* benchmarks, aren't they? That's to be expected.

I found that TechReport took quite a hit in FPS with PhysX enabled in Unreal Tournament 3: the chart shows an average of 66.2 fps running with no PhysX and an average of 40.1 fps with GPU PhysX. There's also the "Software PhysX" mode, but it's hard to tell how the calculations are being done in it. You must have seen that page by now.

I don't really know what the advantages of PhysX would be in such a scenario, taking into consideration the higher CPU utilization. Of course developers will have new and great resources to explore and so on, but, honestly, won't that be possible with Havok - which is already used by most games - and also improve with the arrival of Nehalem, Deneb and so on? Damn, I bet gamers will be able to buy an 8-core CPU by this time next year. What do you think? Seriously, I think there's a reason why DAMMIT doesn't believe in all that PhysX - or even Havok - stuff inside their GPUs right now.

Edit: I just saw your last post. The issue here is that we can't tell what's going on with the "Software PhysX" mode. Of course any "PhysX mode" is optimized to work better - read *quite-a-damn-lot better* - with the GeForces, so it doesn't surprise me that it runs like crap on that Core 2 Duo. They would *never* make any effort to optimize it for CPU-only operation. But keep in mind that CPU-based physics calculations are done through Havok, not PhysX (and it surely doesn't put your game under 2 FPS).
August 13, 2008 6:15:59 PM

The point is that Havok can't do the precise calculations needed for real world physics quickly enough on a CPU; no platform can. Because a PPU or a GFX card can do physics calculations faster, you can create better physics effects.

Physics in games has been horrible for years, especially in FPSes (which is mostly what Havok is used for). You can see how far ahead PhysX is right now, because it has the hardware to do the calculations that are required.
August 13, 2008 6:38:23 PM

gamerk316 said:
The point is that Havok can't do the precise calculations needed for real world physics quickly enough on a CPU; no platform can. Because a PPU or a GFX card can do physics calculations faster, you can create better physics effects.

Physics in games has been horrible for years, especially in FPSes (which is mostly what Havok is used for). You can see how far ahead PhysX is right now, because it has the hardware to do the calculations that are required.


But we should take into consideration that we now have quad-cores and soon we'll have octo-core CPUs. That's 8x the power of a few years ago (and I believe developers can easily target any number of cores with physics calculations). Also, it's not just the number of cores that will improve, but the CPUs' designs too. Although it's proven that the GPU can do many things "better" than a CPU (and I'm not talking about PhysX and its 2x CPU utilization yet), let us also not forget that games are mostly GPU dependent and many already have issues when played at maximum settings. Of course the cards will improve with time, but so will the CPUs (with their soon-to-arrive dozens of cores).
August 13, 2008 6:43:58 PM

dattimr said:
But we should take into consideration that we now have quad-cores and soon we'll have octo-core CPUs. That's 8x the power of a few years ago (and I believe developers can easily target any number of cores with physics calculations). Also, it's not just the number of cores that will improve, but the CPUs' designs too. Although it's proven that the GPU can do many things "better" than a CPU (and I'm not talking about PhysX and its 2x CPU utilization yet), let us also not forget that games are mostly GPU dependent and many already have issues when played at maximum settings. Of course the cards will improve with time, but so will the CPUs (with their soon-to-arrive dozens of cores).


Irrelevant. Even if you triple the number of cores, you wouldn't see anywhere close to a linear increase in physics calculation speed, because of the time each physics calculation takes and the way current CPUs deal with non-linear functions. Any CPU can handle 3+2; it's evaluating F(x) = A + 3^(32.5-43)/X*3.14 that they have issues with. GFX cards are simply better suited for massive numbers of non-linear equations.

Now, if you double the number of threads that can be executed while doubling the execution rate of each CPU core and doubling the number of cores, you might be on to something. But that won't be around for at least one more development cycle after Nehalem. So we can wait 4-5 years for CPU physics, or go with GPU physics, which is ready now. What's the problem?
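
To make the point concrete, here is a toy example (my own sketch with an assumed body count; it has nothing to do with the real PhysX solver): a single CPU core grinds through an expression like that one body at a time, while a GPU evaluates the same expression across thousands of bodies in one step.

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Toy stand-in for a per-body non-linear expression, modeled on the
    // F(x) = A + 3^(32.5-43)/X*3.14 example above. Purely illustrative.
    static float f(float a, float x) {
        return a + std::pow(3.0f, 32.5f - 43.0f) / x * 3.14f;
    }

    int main() {
        const std::size_t bodies = 60000;                 // assumed particle count
        std::vector<float> x(bodies, 1.5f), out(bodies);

        // One CPU core evaluates this serially, once per body per time step.
        // A GPU runs the same expression on thousands of bodies in parallel,
        // which is the whole argument for hardware-accelerated physics.
        for (std::size_t i = 0; i < bodies; ++i)
            out[i] = f(2.0f, x[i]);

        std::printf("f(2.0, 1.5) = %f over %zu bodies\n", out[0], bodies);
    }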
August 13, 2008 7:00:32 PM

gamerk316 said:
Irrelevant. Even if you triple the number of cores, you wouldn't see anywhere close to a linear increase in physics calculation speed, because of the time each physics calculation takes and the way current CPUs deal with non-linear functions. Any CPU can handle 3+2; it's evaluating F(x) = A + 3^(32.5-43)/X*3.14 that they have issues with. GFX cards are simply better suited for massive numbers of non-linear equations.

Now, if you double the number of threads that can be executed while doubling the execution rate of each CPU core and doubling the number of cores, you might be on to something. But that won't be around for at least one more development cycle after Nehalem. So we can wait 4-5 years for CPU physics, or go with GPU physics, which is ready now. What's the problem?


The problem is that a company is selling something that does something dubious at best. They are trying to create a new standard, and yet they are more dependent than ever on the hardware they talked crap about (the CPU). I'm not a specialist on the subject and you seem far better at it than me, so I won't extend the CPU vs. GPU physics discussion much further. Still, don't forget we are 1-2 years away from Larrabee - as well as from Sandy Bridge and Bulldozer, probably.
August 13, 2008 7:49:25 PM

You make it sound like Larrabee will be the end-all GFX chip, which I doubt...

Let's use a hypothetical based on the graph above:

Software (CPU PhysX) = 10 FPS
With GPU = 45 (lowered to 40 for my example for easy math).
No PhysX = 65 (lowered to 60 for easier math as well :D  ).

Naturally, doing almost no physics calculation makes the game faster. Running with no PhysX is the easy case.

According to the graph, you get around a 300-400% increase in PhysX performance when the GPU is used along with the CPU. That is more or less clear evidence that most of the work is being done GPU-side, yet users are still reporting high CPU usage, which to them means the CPU is doing all the work (false).
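
Putting those hypothetical numbers into frame-time terms (my own arithmetic, not benchmark data):

    #include <cstdio>

    // Quick arithmetic on the made-up numbers above (10 / 40 / 60 fps).
    int main() {
        const double softwareFps = 10.0, gpuFps = 40.0, noPhysxFps = 60.0;

        // Frame time in milliseconds is the reciprocal of fps.
        std::printf("software PhysX: %.1f ms/frame\n", 1000.0 / softwareFps); // 100.0 ms
        std::printf("GPU PhysX:      %.1f ms/frame\n", 1000.0 / gpuFps);      //  25.0 ms
        std::printf("no PhysX:       %.1f ms/frame\n", 1000.0 / noPhysxFps);  //  16.7 ms

        // GPU PhysX vs. software PhysX: 40/10 = 4x the frame rate (a "300% increase"),
        // while the physics only costs 25.0 - 16.7 = ~8.3 ms per frame vs. no physics.
        std::printf("GPU vs software: %.0fx\n", gpuFps / softwareFps);
        std::printf("PhysX cost per frame: %.1f ms\n", 1000.0 / gpuFps - 1000.0 / noPhysxFps);
    }

In other words, under those assumed numbers the GPU path buys back about 75 ms per frame compared to software PhysX, at a cost of roughly 8 ms over running no physics at all.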

BTW, what are you using to measure CPU usage? If it's Task Manager, please list the process eating up the CPU (so I can check against the SDK; still at work :(  ).
August 13, 2008 9:15:16 PM

gamerk316 said:
You make it sound like Larrabee will be the end-all GFX chip, which I doubt...

Let's use a hypothetical based on the graph above:

Software (CPU PhysX) = 10 FPS
With GPU = 45 (lowered to 40 for my example for easy math).
No PhysX = 65 (lowered to 60 for easier math as well :D  ).

Naturally, doing almost no physics calculation makes the game faster. Running with no PhysX is the easy case.

According to the graph, you get around a 300-400% increase in PhysX performance when the GPU is used along with the CPU. That is more or less clear evidence that most of the work is being done GPU-side, yet users are still reporting high CPU usage, which to them means the CPU is doing all the work (false).

BTW, what are you using to measure CPU usage? If it's Task Manager, please list the process eating up the CPU (so I can check against the SDK; still at work :(  ).


I'm already at home and finishing cooking :bounce:  Merluza a la Gallega, anybody up for it?

I scoured the SDK and white papers a few weeks ago. There are several possible/logical hypotheses:

  • Some functions offload partial calculations to the CPU, due to a lack of code portability/compatibility.
  • The frame rate increases, which adds more load to the CPU for the traditional CPU-side work; there is simply more stuff to handle at a higher frame rate.
  • Inability of some chips (G80/G92; we still have no numbers on GT200) to fully use all the CUDA functions/macros/extensions.
  • Bugs still present in CUDA/the compiler that only time will iron out.

I guess that among those four reasons, one of them, or some mix of them, explains the increased CPU load.

PS: Sorry about the typo festival, but I don't have an English dictionary at home.
August 14, 2008 7:41:06 PM

Same thing I came up with; I guess the PhysX SDK needs a bit of cleaning up. I'll have to give my 9650 a run and see how it goes.
August 14, 2008 10:20:19 PM

I didn't get my cards for PhysX, so if it is all marketing BS then that is just fine. I do know that I didn't purchase an old Ageia card, yet PhysX can now run on my compy. I did enjoy some of the stuff that came with the driver pack; hopefully we'll see some good games in the future with PhysX enabled. Right now it is very slim pickin's (does anyone say 'slim pickin's' outside of the Southern US?).