When these babies first hit the market some 7-8 months ago, I was very intrigued by the concept of a physics-dedicated processing unit. Really, these cards are mini-GPUs that dedicate their attention and power to rendering visuals like smoke, dust, debris, bullet casings, etc. Equally as important as what it currently does is what it is supposed to do in the future: games are apparently to be developed to directly utilize this technology, effectively creating more realistic physics and peripheral effects. Unfortunately, though, it seems there is very little if anything currently under development.
My main point of interest regarding the product is its ability to increase overall graphical performance. Since the add-in card occupies a standard PCI slot, it wouldn't conflict with an SLI or CrossFire setup. Furthermore, as the card would concentrate on processing physics, it would allow the GPU and CPU to focus on other game qualities, since they would no longer need to process physics and other peripheral effects.
Does anyone own one of these babies? I'd love to hear some comments on whether or not they boost graphical performance. I would also be interested in hearing about any software under development that would use the PhysX card.
When these babies first hit the market some 7-8 months ago, I was very intrigued by the concept of a physics-dedicated processing unit. Really, these cards are mini-GPUs that dedicate their attention and power to rendering visuals like smoke, dust, debris, bullet casings, etc.
They do no such thing. They don't render anything. They tell the CPU what to tell the GPU to render; the PPU does no rendering on its own.
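The division of labor being described here can be sketched in a toy frame loop. Everything below is illustrative, not a real API: the "physics backend" (the PPU's role, or a CPU fallback) only produces simulation state, the CPU turns that state into draw commands, and the GPU is still the only thing doing any rendering.

```python
class PhysicsBackend:
    """Stand-in for whatever computes the simulation: a PPU or the CPU."""
    def step(self, bodies, dt):
        # Integrate positions; this is the work a PPU would take off the CPU.
        # Each body is a (position, velocity) pair.
        return [(x + vx * dt, vx) for (x, vx) in bodies]

def frame(physics, bodies, dt):
    bodies = physics.step(bodies, dt)  # physics result comes back to the CPU
    # The CPU builds the draw commands; the GPU consumes them and renders.
    draw_calls = [f"draw sprite at x={x:.2f}" for (x, _) in bodies]
    return bodies, draw_calls

bodies = [(0.0, 1.0), (2.0, -0.5)]
bodies, calls = frame(PhysicsBackend(), bodies, dt=0.1)
print(calls)  # these commands are what actually gets handed to the GPU
```

The point of the sketch: swapping `PhysicsBackend` for dedicated hardware changes who runs `step`, but the render path is untouched.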
Unfortunately, though, it seems there is very little if anything currently under development.
There is a lot under development; there is very little currently available that uses it (about a dozen titles), but there are some important titles, like Unreal Tournament 3, that will be the do-or-die application of the technology.
My main point of interest regarding the product is its ability to increase overall graphical performance.
It cannot increase graphics performance; if anything, it taxes the graphics card more by making it render more effects. What it frees up is CPU resources, not GPU, so it increases calculating performance.
Furthermore, as the card would concentrate on processing physics, it would allow the GPU and CPU to focus on other game qualities, since they would no longer need to process physics and other peripheral effects.
It depends on the type of physics. Having a second or third graphics card can often do the job better, and it also has the benefit of SLI/CrossFire mode when you have no use for physics. This is the biggest barrier to the PPU: when a game isn't coded for it, it's simply sucking power and creating heat in a rig. A graphics card can usually be used for multi-VPU rendering or even multi-monitor, and an old card can be reused, which is another benefit to the VPU model's credit.
Does anyone own one of these babies?
Yes, but they are a select few.
I'd love to hear some comments on whether or not they boost graphical performance.
No, they don't. Like I said, if anything they reduce graphical performance. What they sometimes do, in some situations, is increase the number of objects/effects rendered, which can make things more realistic if done right.
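A back-of-envelope way to see why offloading physics can still lower the frame rate, using completely made-up numbers: if CPU and GPU work largely in parallel, the slower side sets the frame time, so a GPU-bound game gets no benefit from a lighter CPU load while the extra debris and effects add GPU work.

```python
def frame_time_ms(cpu_ms, gpu_ms):
    # Simplified model: CPU and GPU overlap, so the slower one limits the frame.
    return max(cpu_ms, gpu_ms)

# Hypothetical GPU-bound game: 10 ms of CPU work (4 ms of it physics), 14 ms of GPU work.
no_ppu   = frame_time_ms(cpu_ms=10.0, gpu_ms=14.0)
# PPU takes the 4 ms of physics off the CPU, but the extra effects cost the GPU ~3 ms.
with_ppu = frame_time_ms(cpu_ms=10.0 - 4.0, gpu_ms=14.0 + 3.0)
print(no_ppu, with_ppu)  # 14.0 17.0 -> frame time goes UP despite the freed CPU
```

Flip the numbers to a CPU-bound game (say 16 ms CPU, 10 ms GPU) and the PPU genuinely helps, which matches the "frees CPU, not GPU" point above.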
I would also be interested in hearing about any software under development that would use the PhysX card.
Unfortunately, I have one. I sold it to a friend and it apparently broke, so I sent it back and refunded him the money he paid me for it. The new one that was sent back to me is currently being used in my computer right now. I'm really only holding onto it to see if there is any improvement in the physics department, and mostly to see how it does in Crysis and UT3. If it's a no-go, then the card is going on eBay. Pretty much plain and simple.
I got mine from eBay, and I would recommend waiting until they are cheaper, or even better, until there's actual game support. There are some cool demos for it, like CellFactor, which completely blew me away... but other than a few demos there isn't anything. Hopefully Unreal Engine 3 will utilize it.
Personally, I don't see a bright future for the physics card, but anything could happen.
With the inevitable shift to multi-threaded games and multi-core CPUs, there is less need than ever for the physics card. It made a lot more sense for single-core CPUs and single-threaded software.
I think even Ageia sees the writing on the wall. The latest news I've seen from their camp is that they are in talks with GPU manufacturers like Nvidia, trying to work out a deal to add their technology directly onto future GPUs, which I think makes a lot more sense than a standalone card at this point.
If it became a standard part of a GPU and something everyone would have by default in their system, I think a lot more game manufacturers would make use of it and make having it worthwhile.
Personally, I think the dedicated PPU was DOA. Yes, it has some nifty features if you have a game that supports it, but it doesn't justify the high price of the card. I would much rather see something along this line: drop the three-card über SLI or CrossFire setup. Dedicating an entire GPU to only physics is a bit extreme; maybe a lower-end card, but come on, three 8800 GTXs? That's just nuts.
I think we would be better served by building a PPU into a regular graphics card, kind of like running a dual-CPU system. Whether that's feasible with PCB size and space constraints, I don't have an answer. But I think we the consumers would be better served by having it built into maybe the mid-high to high-end graphics cards, since it might not be possible price-wise to build one into the low-end cards. I think most people who buy high-end cards would be willing to spend a little extra to have a PPU built in, but not if it brings the price up a hundred bucks or more. Plus, the GPU makers would have a built-in user base for whatever physics format they want to use, although I would rather see a unified physics format rather than one for ATI and one for Nvidia. They could even go with a dual-card setup like the 7950 GX2: one board a GPU and the other a PPU.
It just makes more sense to me to have it all integrated into one card, dual PCB or not, rather than having you buy another card to do the processing. Granted, going to a three-card setup would make them more money, but it gets a bit unwieldy running the big GPUs considering current power usage and heat output. Besides, I would think that with a PPU built into the card itself, you'd get better data throughput from the PPU to the GPU. You could maybe even move to a dual-core GPU model, with one core doing the GPU functions and the other core running physics calcs.
Now, the problem for the PPU is that it requires the PhysX physics engine, and requires its licensing, which may or may not be free depending on whether Ageia waives the fee.
The competitors come in different forms. The first is VPU/GPU physics: HavokFX, which is part of Havok 4 and can be tacked onto existing engines later. They've been working heavily with nV and ATI, and supposedly ATI's and nV's original plans for their own APIs relied on a lot of help from Havok.
Next, Microsoft stated they are going to create a universal API called Direct Physics, which would be added to Vista/DX10 later (likely around SP1 or DX10.1). They also said this would be for VPUs, not PPUs, which may or may not still be true (I think if Ageia had had more success, M$ would've been almost forced to add support at some level).
And then came the info that Crytek was going to use its own physics solution for Crysis, and later that it would be VPU-based, not PPU-based.
Now the thing is, who has decided to go in which direction?
Ageia is having a slow start adding support, but they are one of the large physics engine companies along with Havok, and so they still make non-PPU physics for many titles. Likely the expansion of PPU support will come once they have a truly killer app like UT3, but seriously, like everyone else's, this is very, VERY slow in coming, so we're all just sitting here waiting. The problem is that Ageia is leveraging everything to keep the PPU going and isn't getting anywhere near the revenue they need to keep it going long term. I also agree with Grifter that they currently appear to be looking for an exit strategy by bringing Intel, nV, and AMD into the fold. This is supposedly for large scientific apps, but really, I think they are running out of time and money on the PPU front. Their most valuable asset right now is IP, and that's what they have to sell to the boys with deep pockets, IMO; then they can go back to focusing on the physics engine.
Much of this can change in an instant: one killer app from any side and voilà, you suddenly have a ton of people clamoring for your solution. If the PhysX card offers way better effects or performance in UT3, it'll be right back in the game; but if M$ or Crytek or Havok comes out with a better solution first, then that will likely get the first rush. And while graphics cards don't need more than that to sell millions a year, should the PPU not be first with the killer app, it'll be starting from behind, and UT3 will be a tech demo that could only revive the PPU if shown to be better than whatever VPU solution was just demoed.