PhysX PPU Review

May 5, 2006 8:42:21 AM

Link.

Nice, runs slower.


May 5, 2006 9:09:00 AM

In a way, not surprising. This is a first-generation PPU, so naturally things won't be as smooth as Ageia would want. Whilst the PPU has to process all the lovely physics, the CPU has to organise and make sure all the data required for the physics calculations is sent to the PPU, and then the GPU has to render the results, which include more geometry from fragmented objects, particles, smoke, and general shader effects. With eventual driver revisions we will probably see improved performance.
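
To picture where that hand-off overhead creeps in, here is a rough per-frame sketch in C++. The stage names, body layout, and object count are all made up for illustration; this is not Ageia's actual API, just the general shape of the CPU -> PPU -> GPU round trip described above.

#include <cstdio>
#include <vector>

// Toy rigid-body state: the kind of data the CPU has to gather and ship out each frame.
struct Body { float pos[3]; float vel[3]; };

// Stage stubs; in a real engine each of these is a substantial system.
void gather_scene_state(std::vector<Body>& bodies) { /* CPU: collect game-object state */ }
void upload_to_ppu(const std::vector<Body>& bodies) { /* CPU: marshal data and send it over the bus */ }
void simulate_on_ppu() { /* PPU: collisions, fragments, particles */ }
std::vector<Body> read_back_results(const std::vector<Body>& sent) { return sent; } // results return over the bus
void render_extra_geometry(const std::vector<Body>& debris) { /* GPU: extra debris and particles to draw */ }

int main() {
    std::vector<Body> bodies(1000);
    for (int frame = 0; frame < 3; ++frame) {
        gather_scene_state(bodies);                          // CPU work: organise the data
        upload_to_ppu(bodies);                               // bus traffic plus driver overhead
        simulate_on_ppu();                                   // the part the PPU actually accelerates
        std::vector<Body> debris = read_back_results(bodies); // more bus traffic
        render_extra_geometry(debris);                       // GPU now has more geometry to render
        std::printf("frame %d done\n", frame);
    }
    return 0;
}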
May 5, 2006 9:53:23 AM

I know, but talk about a worst-case scenario. Expensive and runs slower; not exactly the best start.
May 5, 2006 10:23:19 AM

I hope this thing is only a software problem and can be fixed by a driver update. Not good, but I'll wait for some other reviews to get more info on this card.
May 5, 2006 12:05:15 PM

Is it just me, or is it disturbing that this card is an old-school PCI card?
May 5, 2006 12:58:50 PM

Didn't that also apply to the first graphics cards though? Besides, things should improve with DX10, I reckon, but until then Ageia has to solve some driver issues and get more cooperation from Nvidia and ATI, so that the drivers for their graphics cards allow better compatibility between the GPU and PPU, because that's where I think things could be going wrong at the moment. All in all though, not a good start for Ageia.
May 5, 2006 1:16:44 PM

I agree; I was hoping for 100% positive news that the physics card just blew everything away. However, I do feel that the game could be at issue. I wonder if PhysX was an afterthought in the Ghost Recon game and was added in when the game was near completion?

I like the idea of physics accelerators. I have a feeling it will catch on once there is a clear benefit. It would not surprise me if 5 or 6 years from now we have physics processing units integrated into video cards.

Also, I wonder if a physics card would assist in the Folding@Home project...
May 5, 2006 1:19:28 PM

Quote:
Also, I wonder if a physics card would assist in the Folding@Home project...


Hmmm... the uses for a physics card are almost limitless :lol:
May 5, 2006 1:44:53 PM

Maybe they should make the PPU like a general processor, but still targeted towards physics. You should be able to use it as a secondary CPU, and switch its function between helping the CPU, helping the GPU, or just doing what it was made for. That would be interesting.
May 5, 2006 2:03:52 PM

I think that making the PPU a 'coprocessor' was more or less the goal.

What I understood from the results is that while the PPU is indeed powerful, the GPU is now the bottleneck, not helped by the current driver implementation.
May 5, 2006 2:04:32 PM

I can't see there being so much data moved back and forth that the PCI bus couldn't handle it.

I think there will of course be optimizations in the software, and in the games too. I'm not surprised at all by these results.

I said a few weeks ago in a thread about the PPU that I'd guess there might be lower overall performance, or at least a net-zero gain, but with better eye candy.
May 5, 2006 2:06:37 PM

It's not really surprising, or even a fair review, considering that they compared the performance without the physics card at a low detail and resolution to the performance with the physics card at a high detail and resolution, so it's kind of to be expected that it's slower.
May 5, 2006 2:14:39 PM

Meh... I'm not against it, but I think Ageia could have done it better. Using a PCI-e 1x slot would have been better, with the benefit of extra bandwidth. But think about it: when a physics card was added to take on all the physics calculations, the GPU became extremely stressed coping with the visualised results it had to render, so if you instead offloaded the physics onto the GPU and forced it to run all those calculations along with rendering, it would be heavily stretched. Even with a second graphics card, things could be stretched a bit too far for a GPU. Again, I would like to see how SLI/Crossfire physics is implemented, not just for physics effects like particles, but also for actual physics that affects gameplay, like landscape deformation and the like.
May 5, 2006 2:38:10 PM

Don't forget that the settings with the card are higher than without. Based on the numbers, I'd guess if both were at the same graphics settings the card WOULD outperform the CPU... not anywhere near $300 worth, but a few frames. Basically not worth it yet, but the first graphics cards were like that too. Frankly, there are so few games that support it, or will support it for a while, that even if it catches on it would be silly not to wait for at least the second generation of these before grabbing one (unless the cost drops dramatically).
May 5, 2006 2:43:06 PM

I don't like AnandTech's reviews; they are nowhere near as complete and well done as Tom's. The article says the higher level of detail in physics calcs was automatically enabled, so it's not even a direct comparison, making their pretty little bar graphs useless.
May 5, 2006 3:41:30 PM

It will run faster. There were cards with ClearSpeed(tm) chips, so I think these will be optimized as well, and more cards like this with much better performance will be available soon. But we need software support to make these cards useful and worth their prices.
May 5, 2006 3:59:04 PM

Quote:
Don't forget that the settings with the card are higher than without. Based on the numbers, I'd guess if both were at the same graphics settings the card WOULD outperform the CPU... not anywhere near $300 worth, but a few frames. Basically not worth it yet, but the first graphics cards were like that too. Frankly, there are so few games that support it, or will support it for a while, that even if it catches on it would be silly not to wait for at least the second generation of these before grabbing one (unless the cost drops dramatically).


I am aware the settings were different in the test when hardware physics was activated, which makes the results difficult to compare directly; the system then had a heavy amount of graphical work to deal with afterwards and so became GPU-limited. This is to be expected, and it shows the physics card would work best in a high-end computer rather than being a solution for all systems.

Those who say CPUs can do physics, even dual-core processors, fail to understand the limitations of a general-purpose chip. A CPU is geared for logical tasks rather than massive floating-point calculations, just as a CPU cannot handle heavy graphical work like real-time rendering. Even the Havok physics engine has its limits. A PPU solution is the best, though offloading physics calculations to a secondary GPU is also a good idea, since both GPU and PPU chips are incredibly powerful. However, getting a second GPU or a PPU is a hefty investment, so for the moment this limits hardware physics to high-end systems. As for the performance issue, that should clear up once Ageia starts revising its drivers, and once we have games which have been designed and optimised with Ageia PhysX in mind, not just had it added in.
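
To make "massive floating point" concrete: an integration step for a few thousand rigid bodies is basically the loop below, the same handful of multiply-adds repeated over thousands of independent elements with no branching. That is the shape of workload a dedicated parallel chip is built for and a general-purpose CPU grinds through one element at a time. The struct layout and numbers here are illustrative only, not how Ageia's engine actually stores its data.

#include <cstdio>
#include <vector>

// Minimal rigid-body state; a real engine would also carry orientation, angular velocity, etc.
struct Body {
    float px, py, pz;   // position (m)
    float vx, vy, vz;   // velocity (m/s)
};

// One semi-implicit Euler step for every body: pure floating-point arithmetic.
void integrate(std::vector<Body>& bodies, float dt, float gx, float gy, float gz) {
    for (Body& b : bodies) {
        b.vx += gx * dt;    // apply gravity to velocity
        b.vy += gy * dt;
        b.vz += gz * dt;
        b.px += b.vx * dt;  // advance position with the new velocity
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}

int main() {
    std::vector<Body> bodies(5000, Body{0.0f, 100.0f, 0.0f, 0.0f, 0.0f, 0.0f});
    for (int step = 0; step < 60; ++step)                    // simulate one second at 60 steps/s
        integrate(bodies, 1.0f / 60.0f, 0.0f, -9.81f, 0.0f);
    std::printf("body 0 height after 1 s: %.2f m\n", bodies[0].py);
    return 0;
}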
May 5, 2006 4:50:05 PM

The PPU has been in machines since the Dell Renegade, or at least it was an option. I'm sure that anybody who could afford that could also afford games like G.R.A.W. and the like, so we should ask them. How do they like it? Have you noticed any performance issues? Have there been bugs that haven't been fixed?
If these people say they work fine and all, the new cards that are coming out will be the next generation, which should mean that the first-gen bugs will be fixed and there will be better compatibility.
May 5, 2006 5:33:18 PM

Quote:
The PPU has been in machines since the Dell Renegade, or at least it was an option. I'm sure that anybody who could afford that could also afford games like G.R.A.W. and the like, so we should ask them. How do they like it? Have you noticed any performance issues? Have there been bugs that haven't been fixed?
If these people say they work fine and all, the new cards that are coming out will be the next generation, which should mean that the first-gen bugs will be fixed and there will be better compatibility.


Either you replied to an earlier post and didn't see my previous post, or you misread it. :?
May 5, 2006 5:43:04 PM

Has anyone actually gotten their Renegade system? I wasn't aware they were shipping...not that I would buy one anyway...too much $$$$$...
May 5, 2006 6:04:34 PM

It's nice to see people post Anand's reviews so fast, before I even see them on his own website, and I check it every day or two! It's like an I-saw-it-first game.

As far as PhysX goes, all I see is another scam designed for your poor pocket. Maybe they will sell you a piece of hardware as well when you buy a game: FEAR PhysX! Then another one optimized for BF2. Smartass.


May 5, 2006 9:01:04 PM

I think the important thing to note here is that with the PPU hardware enabled, this game drops to (in my opinion) unplayable frame rates. Regardless of the reasons, the game isn't playable at those frame rates.

Even at 800x600 the minimum frame rate drops to 17fps and 12fps on the Opteron and FX system respectively!

What good is a physics PPU if the farking game's frame rates drop to unplayable levels??? What good is all the extra "eye candy" if the AVERAGE frame rates are 47 and 38?

Just my opinion on the whole matter.
May 6, 2006 5:21:13 PM

I guess the term "version 1.0" means nothing to people. IT'S THE FIRST EFFING REVISION!!!!!!!! YOU CAN'T EVEN BUY IT YET!!!!!! Give it a rest, it'll get better.
May 6, 2006 5:55:20 PM

I agree. This is part of the reason why certain games run better on different cards (e.g. HL2 favors ATI, Id likes nVidia).

Then again, it would favor competition between companies, which means lower prices for us and more sales for them...
May 6, 2006 6:31:15 PM

It would be awesome if the AGEIA PhysX could run general-purpose code, but it probably never will.
May 6, 2006 6:47:22 PM

Not necessarily. I heard somewhere that a University was using the processor on a video card to do some really complex equations. Since the video card is actually way faster than the CPU, it makes a lot of sense that it would work. As long as software is designed to take advantage of a certain piece of equipment, it should work.
May 6, 2006 7:27:35 PM

I doubt a PPU would do as well as a CPU in terms of logic performance, but as soon as it comes to floating-point calculation, the PPU would smoke the CPU quite easily. Same goes for a GPU against a CPU. At the end of the day, a dedicated chip will only perform exceptionally well at the tasks it was designed to handle. A CPU tackles general code, logic, and mathematical calculations, whilst a GPU tackles real-time rendering and a PPU deals with complex physics calculations. Each to their own, as one would say.
May 6, 2006 8:52:42 PM

I know, a physics engine could be great if it worked well for gamers. But I'm not a gamer. Does the PPU have any other potential advantage for non-gamers?

Secondly, any predictions on the success of the PPU?
May 6, 2006 9:46:58 PM

Quote:
It would not surprise me if 5 or 6 years from now we have physics processing units integrated into video cards.

Sorry for the wrong vote; I wanted to give 5 stars. An integrated physics processor is a very good idea.
PCI can be a limiting factor for these cards.
May 6, 2006 10:24:04 PM

Quote:
I know, but talk about a worst-case scenario. Expensive and runs slower; not exactly the best start.

This test was a waste of time. The fact is it only shows that a program written for the card can expect a sizable % increase, while it should be turned off for programs that aren't. The program needs to tell the card which effects need to look real and which look fine as is, else the PPU will try to do the work of both the CPU and the GPU. Think of all the cheap workarounds GPUs do, and look great doing, and using a PPU to do them correctly is just a waste of resources. But less just not use parts that don't show an increase in gaming whenever the software wasn't written for them. Less not use dual-core CPUs, Raptor drives, shader models in GPUs, or any extra memory the games want to use. My god, less just buy a crappy Xbox and forget about anything other than games. Sorry for the rant, but I think it's wrong to dismiss a great product when a tester should know the product isn't backwards compatible with the software.
May 6, 2006 10:46:16 PM

Quote:
first do you mean lets instead of less and second i think you have no clue.

OK, grammar. So do you mean i or I? And as for a clue, I see you couldn't point out anything I have no clue about.
May 6, 2006 11:23:09 PM

Quote:
Not too great. Yes, I am against this card completely, but even I will concede that trying to communicate all that data over the PCI bus is just crazy. I really don't know what Ageia was thinking. Still, looking at that, I wonder how well a dual-core CPU would do with some optimised code.

I still think gfx cards could wipe the floor with it regardless. That had better be a bad review that didn't show it off at its best, because otherwise it has no hope. I may be wrong, but we'll see.


They're meant to be bringing out 4x PCIe cards by the end of this year. The only two things I would buy as PCI are a TV card and a sound card; there's no way I would put anything that has to handle the same amount of data as a GFX card on a PCI slot.
May 6, 2006 11:47:20 PM

Quote:
GPUs have outperformed CPUs for a while and I don't see them being used to run Windows.


Windows Vista :p

Anyway, back on topic: I do hope the low score is because it is the first release. I'm looking forward to Unreal Engine 3, since UT is one of my all-time favourite games. I just hope it migrates to a better interface; PCI does suck, and developers know this, so why they didn't bring the card out on a PCIe slot straight away is a mystery.
May 7, 2006 11:26:12 AM

Quote:
GPUs have outperformed CPUs for a while and I don't see them being used to run Windows.


Windows Vista :p

Anyway, back on topic: I do hope the low score is because it is the first release. I'm looking forward to Unreal Engine 3, since UT is one of my all-time favourite games. I just hope it migrates to a better interface; PCI does suck, and developers know this, so why they didn't bring the card out on a PCIe slot straight away is a mystery.

The card would have done much better on a PCI-e 1x or 4x interface, since PCI is just not good enough to supply the PPU with the bandwidth it obviously needs. But even then, this card would be restricted to high-end systems, since the GPU would have to render all the results of the physics calculations: smoke, particles and shattered objects, which is what the test showed us.
May 7, 2006 1:08:00 PM

Quote:
StrangeStranger wrote:
Not too great. Yes, I am against this card completely, but even I will concede that trying to communicate all that data over the PCI bus is just crazy. I really don't know what Ageia was thinking. Still, looking at that, I wonder how well a dual-core CPU would do with some optimised code.

I still think gfx cards could wipe the floor with it regardless. That had better be a bad review that didn't show it off at its best, because otherwise it has no hope. I may be wrong, but we'll see.


They're meant to be bringing out 4x PCIe cards by the end of this year. The only two things I would buy as PCI are a TV card and a sound card; there's no way I would put anything that has to handle the same amount of data as a GFX card on a PCI slot.

While I would like the cards to be PCIe, I don't believe sending and receiving physics problems over a PCI bus would be a bottleneck. Graphics cards deal with large bitmaps and lots of bandwidth-hogging graphics; the physics card doesn't.
May 7, 2006 6:03:02 PM

Quote:
If you look at the components listed, I have to disagree. It has DDR3 memory just like a gfx card, and it has a supposedly fast proc. The thing is, it can only ever move 133MB/s over the bus. That is it. The bandwidth of the memory is far higher than that, if I'm not mistaken. The component with the lowest abilities is always going to be the bottleneck, and in this situation it is the interface.

True, but the data sent over that 133MB/s link can stay in the card's memory and be used over and over, in the same way programmers work around the hard drive limiting the motherboard, CPU, and memory. The real problem I see is programs using it correctly, and that's all up to the programmers. I believe DX10 supports the physics card, but until its release I see very little use in testing its abilities.
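
A back-of-the-envelope way to see that argument: if the bulk of the scene is uploaded once and kept in the card's own memory, with only small per-frame updates crossing the bus, PCI has headroom to spare. All the sizes below are guesses purely for illustration, not measurements from GRAW or any real PhysX title.

#include <cstdio>

int main() {
    // Assumed, illustrative figures -- not measured from any real game.
    const double static_scene_mb    = 64.0;   // geometry/collision data uploaded once, then reused on the card
    const double per_frame_delta_kb = 200.0;  // new forces, spawned debris, changed objects each frame
    const double frames_per_second  = 60.0;
    const double pci_budget_mbps    = 133.0;  // theoretical shared PCI bandwidth

    const double steady_state_mbps = per_frame_delta_kb * frames_per_second / 1024.0;
    std::printf("one-off upload:         %.0f MB\n", static_scene_mb);
    std::printf("steady-state traffic:   %.1f MB/s\n", steady_state_mbps);
    std::printf("share of PCI's budget:  %.1f%%\n", 100.0 * steady_state_mbps / pci_budget_mbps);
    return 0;
}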
May 7, 2006 7:22:24 PM

Quote:
It cannot work like that, as the information has to travel to both the CPU and the GPU. Whilst the static memory capacity is 128 or even 256MB, the throughput capability is far higher. I believe a speed nearer the bandwidth of the memory is needed. It is true that even gfx cards don't use the whole available bandwidth of PCI-e or AGP, but it is useful to have no limit, as it were.

Well, the truth is the creators of the physics chip have smarter people than both of us working for them, and as far as we know the physics chip's use of PCI could be about the same as a GPU's use of PCIe. Just as with hard-drive limits, where programmers have to keep in mind how much of, say, a board can be loaded into memory and limit board size accordingly, the same holds true for the physics card. Whether the physics chip could even use more bandwidth is another question, because unlike a CPU the PPU only does math.
May 7, 2006 7:31:36 PM

Quote:
It cannot work like that, as the information has to travel to both the CPU and the GPU. Whilst the static memory capacity is 128 or even 256MB, the throughput capability is far higher. I believe a speed nearer the bandwidth of the memory is needed. It is true that even gfx cards don't use the whole available bandwidth of PCI-e or AGP, but it is useful to have no limit, as it were.

Well, the truth is the creators of the physics chip have smarter people than both of us working for them, and as far as we know the physics chip's use of PCI could be about the same as a GPU's use of PCIe. Just as with hard-drive limits, where programmers have to keep in mind how much of, say, a board can be loaded into memory and limit board size accordingly, the same holds true for the physics card. Whether the physics chip could even use more bandwidth is another question, because unlike a CPU the PPU only does math.

It's more or less latency at this point: the physics calculations are needed in a timely manner, so offloading that workload to a physics processor that will obviously calculate the results faster sounds ideal.

Yet to be addressed is the latency that comes from moving that data over the Southbridge interconnect. Considering that the Southbridge isn't designed to handle this type of traffic, I would have to say either a direct bus or substantial improvements to the Southbridge will be necessary for this technology to become feasible.
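
Some rough frame-budget arithmetic shows why the latency matters more than the raw bandwidth. The per-hop costs below are invented purely to show the shape of the problem; they are not measurements of any real Southbridge or of Ageia's driver.

#include <cstdio>

int main() {
    const double frame_budget_ms = 1000.0 / 60.0;   // about 16.7 ms per frame at 60 fps

    // Assumed, illustrative costs for one CPU -> PPU -> CPU round trip per frame.
    const double submit_over_bus_ms    = 1.0;  // driver work plus the hop out across the Southbridge
    const double simulate_on_ppu_ms    = 3.0;  // the physics work itself
    const double read_back_over_bus_ms = 1.0;  // results coming back before the GPU can use them

    const double round_trip_ms = submit_over_bus_ms + simulate_on_ppu_ms + read_back_over_bus_ms;
    std::printf("frame budget:             %.1f ms\n", frame_budget_ms);
    std::printf("physics round trip:       %.1f ms\n", round_trip_ms);
    std::printf("left for game and render: %.1f ms\n", frame_budget_ms - round_trip_ms);
    return 0;
}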
May 7, 2006 7:34:38 PM

Quote:
I know, a physics engine could be great if it worked well for gamers. But I'm not a gamer. Does the PPU have any other potential advantage for non-gamers?

Secondly, any predictions on the success of the PPU?


Anyone have a clue?
May 7, 2006 7:44:24 PM

Quote:
I know, a physics engine could be great if it worked well for gamers. But I'm not a gamer. Does the PPU have any other potential advantage for non-gamers?

Secondly, any predictions on the success of the PPU?


Anyone have a clue?

No, not really, since if you aren't gaming you aren't going to be running into a lot of physics calculations.

It could take off, but it needs to come down in cost and reduce the latency that appears to be the cause of the drop in performance.
May 7, 2006 7:47:11 PM

If it is successful, do you think Windows will be altered to take advantage of the PPU? They are working on making it look all nice with the "glass" theme, you know.
May 7, 2006 7:48:52 PM

Quote:
If it is successful, do you think Windows will be altered to take advantage of the PPU? They are working on making it look all nice with the "glass" theme, you know.


I can't see why not. MS is in it for the money, and if there is money to be made from the PPU then MS will be there.
May 7, 2006 8:13:56 PM

I'm talking about really cool new screen savers or amazing new graphics features in Windows (like a totally revised Movie Maker, or art, or Media Player, etc.), all standard. If I'm thinking a little too futuristic for you, I understand... people didn't think going to the moon was possible either. :roll: I'm sure Microsoft will find some use for it if the PPU becomes an everyday part.

Quote:
I can't see why not. MS is in it for the money, and if there is money to be made from the PPU then MS will be there.


That is so true.
May 7, 2006 8:19:21 PM

Quote:
People already complain it takes too many resources to run.


Ahh, but technology is ever advancing. In addition, it could possibly be a Windows Extreme edition (or something to that effect), so that people would not have to buy this version.
May 7, 2006 8:36:16 PM

I can't see a card like this going too far. For the time being, they may make games look sharper and be able to maintain a decent framerate, but as games get more and more complex, with more and more physics calculations to be done, the bottleneck of the PCI/PCIe bus is going to kill any chance for a PPU.

For example, someone will create a game with, say, a crate that explodes into 1000 fragments, each fragment requiring a calculation of its speed, direction, rotation, etc.... In order for this to work, the PPU will have to do the calculations on all 1000 fragments and update their positions how many times per second?? Also, the PPU is supposed to handle fluid dynamics, including smoke and fire type effects, right? How much data would that be moving across the bus? Way too much in my opinion. Then of course, someone would have to beat that and make a crate explode into 2000 parts, then 4000, etc....
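
To put rough numbers on that crate, here is a quick estimate of the per-frame bus traffic as the fragment count grows. The per-fragment layout (13 floats) and the 60 updates a second are assumptions for illustration, not how the PhysX SDK actually packs or streams its data, and they ignore smoke and fluid particles entirely.

#include <cstdio>

int main() {
    // Assumed per-fragment state: position (3), velocity (3), orientation quaternion (4),
    // angular velocity (3) -- 13 floats at 4 bytes each.
    const double bytes_per_fragment = 13.0 * 4.0;
    const double updates_per_second = 60.0;
    const double pci_budget_mbps    = 133.0;   // theoretical shared PCI bandwidth

    const int fragment_counts[] = {1000, 2000, 4000};
    for (int count : fragment_counts) {
        const double mbps = count * bytes_per_fragment * updates_per_second / (1024.0 * 1024.0);
        std::printf("%5d fragments: %.2f MB/s (%.1f%% of the PCI budget)\n",
                    count, mbps, 100.0 * mbps / pci_budget_mbps);
    }
    return 0;
}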

IMO, the best solution is a dual-core GPU/PPU, which would require little PCIe usage. The next best is a specialized CPU/PPU chip.
May 7, 2006 8:43:35 PM

A CPU/PPU chip wouldn't work well enough. A card would be the best option, but on a PCI-e 4x interface or something along those lines. The best way to integrate a PPU chip would be to put it on a graphics card, because then it would have all the benefit of the bandwidth of the card's architecture; however, then there's the question of whether the PPU would affect the GPU's performance by taking up its resources...
May 7, 2006 8:49:42 PM

Quote:
Good point. There are already going to be six versions or so, so what's one more.


Yeah, that's kind of how I feel about it too.


As to the PPU being combined with a graphics card....this may take time because the price would be astounding. So right now PCI was probably the best choice. I guess the PPU could be combined with the graphics card without modification to the motherboard? If not, it will really take a while for it to become mainstream.
May 7, 2006 9:05:08 PM

Quote:
Elbert, you're back.... good to see ya'. Hope things are well in programming land.

I just finished teaching a class of freshmen Intro to Assembly, and only 2 out of 17 had any idea how to code. God am I glad it's over with. Next term I get beginning VB, advanced COBOL, beginning Pascal, and advanced Pascal, which is a step up. How have things been? Come up with any crazy chemicals for overclocking PCs?
May 7, 2006 9:14:46 PM

Why don't you guys PM each other instead of distracting the thread from its main focus?
May 7, 2006 9:24:26 PM

Quote:
I can't see a card like this going too far. For the time being, they may make games look sharper and be able to maintain a decent framerate, but as games get more and more complex, with more and more physics calculations to be done, the bottleneck of the PCI/PCIe bus is going to kill any chance for a PPU.

For example, someone will create a game with, say, a crate that explodes into 1000 fragments, each fragment requiring a calculation of its speed, direction, rotation, etc.... In order for this to work, the PPU will have to do the calculations on all 1000 fragments and update their positions how many times per second?? Also, the PPU is supposed to handle fluid dynamics, including smoke and fire type effects, right? How much data would that be moving across the bus? Way too much in my opinion. Then of course, someone would have to beat that and make a crate explode into 2000 parts, then 4000, etc....

IMO, the best solution is a dual-core GPU/PPU, which would require little PCIe usage. The next best is a specialized CPU/PPU chip.

Only if the game creator thought it was worth making the crate explode in a physically correct way, which would be a waste of the PPU. GPUs do a great job of cheating these effects to make them look real. The PPU is more about, say, ragdoll physics and bullets glancing off objects: things that just don't look right unless they are done right. Throwing a bomb and having it bounce before it explodes, or being shot while jumping. Remember the little side-to-side motion in Doom, arms pumping at the same time even while going round a corner. The physics chip is built on a 130nm process, and what AGP level were we using when 130nm CPUs first came out? As for latency, again it's all up to the programmer, or maybe Microsoft, which things the PPU is used for. Sure, I'd like to see it on PCIe, but the limited number of those slots could turn away customers.