Analysis: PhysX On Systems With AMD Graphics Cards
Rarely does an issue divide the gaming community like PhysX has. We go deep into explaining CPU- and GPU-based PhysX processing, run PhysX with a Radeon card from AMD, and put some of today's most misleading headlines about PhysX under our microscope.
Introduction
The history and development of game physics is often compared to that of the motion picture. The comparison may be a bit exaggerated and self-important, but there’s some truth to it. As 3D graphics have evolved to almost photo-realistic levels, the lack of truly realistic and dynamic environments has become increasingly noticeable. The better games look, the more jarring their unrealistic animation and movement becomes.
When comparing early VGA games with today's popular titles, it’s amazing how far we’ve come in 20 to 25 years. Instead of animated pixel sprites, we now measure graphics quality by looking at breathtaking natural phenomena like water, reflections, fog, and smoke, and how they move and animate. Since all of these effects rest on highly complex calculations, most game developers use so-called physics engines: prefabricated libraries containing, for example, character animations (ragdoll effects) or complex movements (vehicles, falling objects, water, and so on).
Of course, PhysX is not the only physics engine. So far, Havok has been used in many more games. But while both the 2008 edition of the Havok engine and the PhysX engine support CPU-based physics calculations, PhysX is the only established platform in the game sector that also supports faster GPU-based calculations.
This is where our current dilemma begins. There is only one official way to take advantage of PhysX (with Nvidia-based graphics cards) but two GPU manufacturers. This creates a potential for conflict, or at least enough for a bunch of press releases and headlines. Like the rest of the gaming community, we’re hoping that things pan out into open standards and sensible solutions. But as long as the gaming industry is stuck with the current situation, we simply have to make the most of what’s supported universally by publishers: CPU-based physics.
Preface
Why did we write this article? You may have seen conflicting news stories and articles on this topic, and we want to shine some light on the details of recent developments, especially for readers without any programming background. To do that, we will have to simplify and skip a few things. On the following pages, we’ll examine whether, and to what extent, Nvidia is limiting PhysX CPU performance in favor of its own GPU-powered solutions; whether CPU-based PhysX is multi-thread-capable (which would make it competitive); and finally whether all physics calculations really can be implemented on GPU-based PhysX as easily and with as many benefits as Nvidia claims.
Additionally, we will describe a clever tweak that lets users with AMD graphics cards use Nvidia-based secondary boards as dedicated PhysX cards. We are interested in the best combinations of different cards and which slots to use for each of them.
eyefinity: So it's basically what everybody in the know already knew: nVidia is holding back progress in order to line their own pockets.
Emperus: Is it 'PhysX by Nvidia' or 'PhysX for Nvidia'? It's a pity to read those lines saying that Nvidia is holding back performance when a non-Nvidia primary card is detected.
It looks like the increase in CPU utilization with CPU PhysX is only 154%, which could be one thread plus synchronization overhead with the main rendering threads.
eyefinity: The article could barely spell it out more clearly.
Everyone could be enjoying CPU-based physics, making use of their otherwise idle cores.
The problem is, nVidia doesn't want that. They have a proprietary solution that slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at game devs so they don't include better CPU physics.
Everybody loses except nVidia. This is not unusual behaviour for them; they are doing it with tessellation now too, slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody.
They are a pure scumbag company.
rohitbaran: In short, a good config to enjoy PhysX requires selling an arm or a leg, while the game developers and nVidia keep screwing users to save money and advance their business interests, respectively.
iam2thecrowe: The world needs OpenCL physics NOW! Also, while this is an informative article, it would be good to see which single Nvidia cards make games using PhysX playable. Will a single GTS 450 cut it? Probably not. That way budget gamers can make a more informed choice, as there's no point choosing Nvidia for PhysX only to find it doesn't run well on mid-range cards anyway, when they could have just bought an ATI card and been better off.
archange: Believe it or not, this morning I was determined to look into this same problem, since I just upgraded from an 8800 GTS 512 to an HD 6850. :O
Thank you, Tom's, thank you Igor Wallossek, for making it easy!
You just made my day: a big thumbs up!
jamesedgeuk2000: What about people with dedicated PPUs? I have 8800 GTX SLI and an Ageia PhysX card; where do I come into it?
skokie2: What fails to get mentioned (and if what I'm seeing is real, it's much more predatory) is that simply having onboard AMD graphics, even disabled in the BIOS, stops PhysX from working. This is simply outrageous. My main hope is that AMD finally gets better at Linux drivers so my next card doesn't need to be nVidia. I will vote with my feet... so long as there is another name on the slip :( Sad state of graphics generally, and it's been getting worse since AMD bought ATI. It was then that this game started; nVidia just takes it up a notch.