GPU PhysX: What Card Is Best?
How much do you need?
Generally, faster is better. Of course, it would be nonsense to use a GeForce GTX 480 as a dedicated PhysX card. Even using a rather expensive GeForce GTX 285 could hardly be called economically sensible. But let's take a look at our Mafia II benchmarks.
Again, we chose this game because of its very good compromise between physics and traditional graphical effects. Cryostasis uses a disproportionate amount of PhysX. Conversely, Metro 2033 is too heavy on graphics to make a good gauge of PhysX-based performance.
Looking at the graph, you can see very clearly that a card slower than a GeForce GT 240 or 9600 GT makes little sense, even if it supports PhysX in theory. Adding a GeForce 8400 GS as a dedicated PhysX card actually yields performance 15% lower than running a single GeForce GTX 480 on its own, which is extremely counterproductive. We therefore left those results out of the chart.
What PCIe slot is good enough?
A popular question centers on how fast the PCIe slot for the PhysX card needs to be. We used a motherboard with PCIe slots of different speeds and measured performance simply by moving the PhysX card between them.
Clearly, a faster card is slightly bottlenecked by a x4 slot compared to the other two. The difference between x8 and x16 is so marginal that it can be disregarded. A GeForce GT 220 is too slow to show any difference, as are the GeForce GT 240 and 9600 GT. Even the GeForce GTX 285 doesn't suffer that badly. A x4 slot seems to be OK, though a x8 slot is the safer bet for faster cards.
In the end, it comes down to cost. Spending $80 on a used GeForce GTS 250 will bring a computer with a Radeon HD 5870 to the same level of PhysX performance as a single GeForce GTX 480. However, the combined cost of these two cards is higher than that of the single GTX 480. Real added value comes only from adding a GeForce GTX 260 or better, and this is where costs get out of hand and scare away everyone but true enthusiasts. We would only recommend adding a card if you already have a spare lying around, after a recent upgrade, for example. Then the effort might be worthwhile, even if the extra idle power consumption might gnaw at your conscience.
Everyone could be enjoying CPU-based physics, making use of their otherwise idle cores.
The problem is, Nvidia doesn't want that. They have a proprietary solution that slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at game devs so they don't include better CPU physics.
Everybody loses except Nvidia. This is not unusual behaviour for them; they are doing it with tessellation now too, slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.
They are a pure scumbag company.
Thank you, Tom's, thank you Igor Wallossek for making it easy!
You just made my day: a big thumbs up!