It's not going to work for a long time. Developers are doing their own thing for the most part. The whole "physics on an extra GPU" idea was just vapourish FUD thrown out there by ATI and Nvidia the second they needed to squash the PhysX card.
But I'm sure they'd be grateful if everyone would buy 3 expensive video cards. 8O
Most likely you'd be able to just upgrade your current card and use your old one for physics. But I think the more probable route is multi-core CPUs picking up the physics work (like the plan for Crysis).
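To make the multi-core idea concrete, here's a minimal sketch (hypothetical, not anyone's actual engine code) of physics running on its own thread so a spare core can do the simulation while the main thread is free for rendering and game logic:

```python
# Hypothetical sketch: offloading physics to a spare CPU core/thread.
# All names and the toy simulation are illustrative only.
import threading

def physics_step(positions, velocities, dt):
    """One simulation tick: simple Euler integration under gravity."""
    g = -9.81
    for i in range(len(positions)):
        velocities[i] += g * dt
        positions[i] += velocities[i] * dt

def run_physics(positions, velocities, steps, dt):
    for _ in range(steps):
        physics_step(positions, velocities, dt)

positions = [10.0, 20.0]   # heights of two falling objects, in metres
velocities = [0.0, 0.0]

# The physics thread runs on a spare core; the main thread would
# normally be busy rendering in the meantime.
worker = threading.Thread(target=run_physics,
                          args=(positions, velocities, 100, 0.01))
worker.start()
worker.join()  # a real engine would synchronise once per frame instead
```

After one simulated second the objects have fallen and picked up speed; the point is just that the simulation loop needs no help from the main thread while it runs.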
Well... GPU physics is a concept which can use one graphics card, two graphics cards, or three (2 for graphics and one for physics). In any of these setups, the card you assign to physics calculations doesn't need to be the same model as the one doing the graphics. This means you can throw in a cheaper card for the physics work, whilst the more expensive one(s) handle the graphical work.
Driver wise, this sort of configuration needs its own physics API, something like the DirectX standard for graphics, so interoperability can be accomplished between different solutions. Then you need the hardware drivers, the necessary hardware to support the configuration, and, most important of all, software support. Until developers start taking GPU physics seriously, it ain't gonna come around.
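In spirit, such a common API would be an abstraction layer the game codes against, with vendor-specific backends plugged in underneath. A rough sketch (every class and method name here is made up for illustration; no such standard existed):

```python
# Illustrative sketch of a vendor-neutral physics API layer, in the
# spirit of what DirectX does for graphics. All names are hypothetical.
from abc import ABC, abstractmethod

class PhysicsBackend(ABC):
    """What every vendor's driver would have to implement."""
    @abstractmethod
    def simulate(self, bodies, dt):
        ...

class CpuBackend(PhysicsBackend):
    """Fallback path: integrate on a CPU core."""
    def simulate(self, bodies, dt):
        for body in bodies:
            body["vy"] -= 9.81 * dt
            body["y"] += body["vy"] * dt

class GpuBackend(PhysicsBackend):
    """Stand-in for an ATI/Nvidia accelerated path; here it just
    reuses the CPU maths so the sketch stays runnable."""
    def simulate(self, bodies, dt):
        CpuBackend().simulate(bodies, dt)

def pick_backend(has_spare_gpu):
    # The game asks the API layer for whatever hardware is present;
    # it never talks to a vendor's driver directly.
    return GpuBackend() if has_spare_gpu else CpuBackend()

bodies = [{"y": 5.0, "vy": 0.0}]
engine = pick_backend(has_spare_gpu=False)
engine.simulate(bodies, 0.016)
```

The point of the layer is exactly the interoperability mentioned above: the game's code is identical whether a spare GPU, a PhysX-style card, or plain CPU cores end up doing the work.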
Don't buy a MoBo until you need one. Once it's closer to implementation, MoBo makers will be providing better solutions. And why just a 3-slot MoBo when you can have 4 for asymmetric SLI + physics? :twisted:
BTW, Crysis is supposed to use Crytek's own version of VPU/GPU physics, so no 5-year wait, just 5 months.