
SLI/Xfire Physics Vs. Ageia PhysX

March 22, 2006 12:00:37 PM

What are your personal thoughts on ATI's and Nvidia's solutions to complicated in-game physics, compared to having a completely separate processing unit devoted to real-time dynamic physics processing?

IMO, the "SLI Physics" approach won't suffice for physics processing. Since games are becoming more graphically and shader heavy, even SLI configurations will eventually struggle to render frames and process dynamic physics at the same time, whereas having a separate processing unit devoted to this purpose would obviously have its advantages. So, by keeping the GPUs for rendering and offloading physics from the CPU onto a PPU, in theory the system would run more efficiently when presented with a physics-heavy game, since the only things the CPU would have to handle are process management, A.I., and the rudimentary game-engine work.
March 22, 2006 12:44:37 PM

Not sure how performance will play out, SLI vs. PhysX; that remains to be seen after benchmarking. But I do appreciate NVidia's approach to this: their solution is basically a driver install, with no additional hardware purchase for SLI users. By waiting this long after ATI announced they were opening up the API, they neatly sidestep the chicken-and-egg dilemma; there's already an installed base of over 1 million SLI rigs out there. I predict that fact alone will attract developers to start using it, plus they're already familiar with NVidia coding. PhysX will be a new set of programming methods, and who's gonna speculatively invest in that when there are no boards out there in the wild? That's a big gamble, and it could mean a big hit to your R&D budget if those boards don't sell well. PhysX may perform better, but attracting developers to support it will be an uphill battle, and selling it will be hard since you'll need some big titles out there to support it and show it off in demos. Chicken-and-egg paradox.

I did notice today that the announced $10,000 monster rig from Dell is including the PhysX board:
http://www.tgdaily.com/2006/03/22/dell_xps600_renegade/
March 22, 2006 12:46:39 PM

I think an independent processor will work better than a shared GPU. Also, a standalone processor will benefit people without SLI:

While the name "SLI Physics" indicates that two graphics processors are needed to run physics, Nvidia officials told TG Daily that owners of single-GPU systems will still see "some benefit," but couldn't provide more detail at this time. - article

Of course, the standalone processor will need to be fast enough to feed data to the GPU, or at least stay synchronized with the GPU. I would expect lower latencies on the SLI version, since the vid cards have their own interconnect and the calculations are being done on the same processor/GPU.
March 22, 2006 1:19:56 PM

I suspect that SLI Physics will act as the intermediate solution to complex physics in the meantime, since many games even today don't make full use of a dual-GPU setup. In terms of communication between devices, I would agree that an SLI Physics config would have lower latencies, but in exchange for those lower latencies it is limited by the fact that it can't dedicate itself completely to processing dynamic physics, which is where the PhysX card would have the advantage. Also, the CPU would have to manage the communication between the PPU and GPU, so heavy physics processing still puts pressure on CPU usage, though not of the magnitude it would have without either SLI Physics or a standalone PPU. I would reckon that the only way to really get the PPU solution going is by keeping the boards reasonably priced, along with having a selection of games that would benefit loads from having the card. Personally, I can't wait to see some benchmarks or tech demos to demonstrate its capabilities, because even though the PPU requires a different code set than what Nvidia is offering, the power of the card would certainly be enough to encourage game developers to at least experiment with the technology in their games.
March 22, 2006 1:34:37 PM

I think we need to wait and see what ATI has going on. From an interview I read in CPU, they're working on GPUs as a co-processor for more than just physics: things like audio, CAD, and adding more to Avivo. They also want to do it with any ATI GPU, so Crossfire could use an X1600 and an X1900: the lower card does the math and the higher one renders. I read this in November, so I may be off on some of it, but that's what I remember.
What I would like to see is for the processors to work together and share the load, or for developers to give the option, like GL or DX. It seems Nvidia rushed to be first on the block.
Ageia seems to be the better solution in the long run, since it's a dedicated processor, if it can get noticed. With ATI and NVIDIA trying to hold it down, it will be a fight.
This is a link to a site about GPUs as coprocessors: http://www.gpgpu.org/. It has some interesting info on it.
March 22, 2006 1:38:08 PM

Well, Ageia is getting some love; there are already a few game engines that provide direct support for it, the Unreal 3 engine being the most noted. Also, the PS3 will be rocking some form of the Ageia PPU engine pre-loaded into one of the Cell's SPUs; not sure how that's all gonna work. But Ageia is definitely getting its product out there. In my opinion, if BFG sees something in it worth manufacturing, I'll trust them (even though they're outrageously overpriced), since they're the former owners of the original SLI back in the '90s. But that's neither here nor there.

I'll fork out around $200, give or take $50, for an Ageia PPU over SLI Physics. One is essentially software-based and the other is hardware, and hardware-based processing is always better than software-based.

On a side note: doesn't it seem odd that nVidia released details about SLI Physics right after launching Quad-SLI...
March 22, 2006 1:49:33 PM

Concerning the PS3, it's getting the codebase loaded onto some of the Cell's cores, so from what I can imagine and speculate, the work would be divided through threading according to which core handles which task, so I guess part of the processor will be dedicated to physics processing... however, I am concerned whether a general-purpose CPU, even a multi-threaded one, could cope with what Ageia reckons physics will become.

As you can see, I'm really interested in in-game physics, partly because of what Half-Life 2 managed to do with Havok. I also know Havok FX deals with using GPUs to process some physics, but specifically visual physics effects, like explosions and the creation/movement of the debris from an explosion, so I reckon that software would take advantage of an SLI Physics configuration, or of whatever ATI offers up. Overall, I'll be keeping an eye on this, along with the whole Conroe/AM2 battle.
March 22, 2006 1:50:18 PM

Yes, interesting about the Quad-SLI / SLI Physics connection. Things that make you go hmmm...

I also agree that Ageia is getting plenty of exposure, and with native support in engines like Unreal 3, "coding for it" is not an issue. It's just a matter of how much hardware gets out there and in how much time. With Dell offering their gaming rigs with them (and Voodoo and others, I assume), I think there is plenty of promise for Ageia.

Personally, I will wait and see what happens once the PhysX cards are at retail. See what the industry does and what "real" benchmarks show for performance gains... then I will buy.
March 22, 2006 1:52:46 PM

I think that the Ageia PhysX card will be the future, simply because of the psychology of enthusiasts. People are gonna look at the SLI and similar solutions and say, "Why is my video card trying to process physics? Wouldn't a dedicated physics card be better at that?" Whether it's true or not is beside the point. (Although I personally think that the physics card WILL end up being a better solution for high-end gaming.)

I have to wonder, though, with multicore CPUs coming out, what's to stop a developer from dedicating a core to computing physics? I mean, won't a dedicated physics card require a separate thread to run correctly anyway? I'm probably getting a dual core on my next system (and soon enough, a quad core). Unless I'm convinced that these physics cards have a significantly different architecture that is optimized for physics, I'll probably just wait for games to be written for the CPU to handle the load.
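
For what it's worth, here is roughly what "dedicating a core to physics" looks like in practice: a minimal C++ sketch I made up for illustration (not from any real engine or SDK) that runs a fixed-timestep simulation on its own thread while the main thread keeps drawing frames. The tick rates and body counts are arbitrary assumptions.

Code:
// Minimal sketch: park the physics simulation on its own thread/core while the
// main thread keeps "rendering". All constants here are made-up illustration values.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

struct Body { float pos[3]{0.0f, 100.0f, 0.0f}; float vel[3]{0.0f, 0.0f, 0.0f}; };

int main() {
    std::vector<Body> bodies(5000);      // e.g. debris from an explosion
    std::mutex bodiesLock;               // a real engine would double-buffer instead
    std::atomic<bool> running{true};

    // Dedicated physics thread: fixed 100 Hz timestep, simple gravity integration.
    std::thread physics([&] {
        const float dt = 0.01f;
        while (running) {
            {
                std::lock_guard<std::mutex> lock(bodiesLock);
                for (Body& b : bodies) {
                    b.vel[1] -= 9.81f * dt;
                    for (int i = 0; i < 3; ++i) b.pos[i] += b.vel[i] * dt;
                }
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    });

    // Main thread: pretend render loop at ~60 fps, reading the latest state.
    for (int frame = 0; frame < 60; ++frame) {
        float y;
        {
            std::lock_guard<std::mutex> lock(bodiesLock);
            y = bodies[0].pos[1];
        }
        std::printf("frame %d: body 0 at y=%.2f\n", frame, y);
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    running = false;
    physics.join();
}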
March 22, 2006 2:01:04 PM

I would reckon a multi-core CPU could handle some physics, or at least the management of a separate PPU, by concentrating a core or thread on that task. CPUs at the moment are powerful... but they lack the architecture required to cope with dynamic physics processing. However, what's stopping chip manufacturers from putting maths/physics coprocessors inside their chip architectures to help them cope with such CPU-intensive tasks? Which makes me wonder... what would happen if someone got a CPU with a maths/physics coprocessor and put it into a decent high-end system with an Ageia PhysX card.... :twisted:
March 22, 2006 2:19:54 PM

NVidia's announcement was that they'll be opening up the GPU to doing alternate tasks; it just happened that the demo of it was a physics demo (I won't get into the potential politics of that decision). Gee, you think it might be geared to help sales of SLI rigs? And to poke fun at ATI, whose announcement of this months ago has so far been the usual ATI vaporware?

Using the vidcard wouldn't necessarily accelerate things beyond providing additional processor resources to work with, plus support for the math functions used in 3D rendering, some of which are useful in physics calculations. It's almost like dedicating a CPU core to the task, but better, since a GPU is far better at floating-point calculations.

I don't see the processor manufacturers wanting to put physics into an onboard coprocessor; 99% of their sales are corporate and non-gamer sales, and the R&D cost to modify an existing product line is staggering.

Dedicating a general-purpose processor core to doing physics wouldn't give much benefit; these specialized coprocessors have hardwired logic to accelerate some of the hairy math involved in calculating physics, like matrix transforms that take hundreds or thousands of CPU cycles to process on a general-purpose CPU. (*Shudder* - I had to do matrix math in engineering college, ick.) We're already beating the onboard math coprocessors to death now and it's still not enough horsepower.
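
To give a feel for the "hairy math" in question, here is a trivial C++ sketch of one matrix transform of the kind a physics step chews through thousands of times per frame. The body count and flop tally in the comments are my own rough illustration, not figures from the thread.

Code:
// Rough illustration: transforming one point by a 4x4 matrix is 16 multiplies
// and 12 adds. Do that (and much worse) for thousands of bodies per frame and a
// general-purpose CPU burns cycles that hardwired/SIMD units handle far more cheaply.
#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

Vec4 transform(const Mat4& M, const Vec4& v) {
    Vec4 r{};
    r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
    r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
    r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
    r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
    return r;
}

int main() {
    Mat4 world{{{1,0,0,2},{0,1,0,0},{0,0,1,0},{0,0,0,1}}}; // translate x by 2
    Vec4 p{1, 2, 3, 1};
    Vec4 q = transform(world, p);
    // 28 flops per point; 10,000 bodies is 280,000 flops for this one tiny step alone.
    std::printf("(%g, %g, %g)\n", q.x, q.y, q.z);
}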
March 22, 2006 2:20:30 PM

To me a Physics processor built from the ground up is better than a makeshift solution using a driver.

However, yes, Nvidia has an advantage because it has the developer support for its products, doesn't require a new code base, and there's already hardware out there.

But the Physx processor is gaining momentum. Personally I'll probably buy one and put it in my system.

With SLI-Physics I think it should be that one card renders the screen while the other is devoted to doing the physics calculations. That alone would speed things up because the rendering GPU wouldn't have to wait for the CPU to do the calculations and get the result back to the GPU (which might be what Nvidia does).

SLI definitely gained some more justification for its expense to me now. But if developers embrace a dedicated PPU, then there's no need for SLI Physics and one card will still be fine.
March 22, 2006 2:29:02 PM

Quote:
Dedicating a general-purpose processor core to doing physics wouldn't give much benefit; these specialized coprocessors have hardwired logic to accelerate some of the hairy math involved in calculating physics, like matrix transforms that take hundreds or thousands of CPU cycles to process on a general-purpose CPU. (*Shudder* - I had to do matrix math in engineering college, ick.) We're already beating the onboard math coprocessors to death now and it's still not enough horsepower.


I see your point, which is why I am slightly concerned about the PS3 having the Ageia PhysX SDK codebase integrated into some of the cores in the Cell CPU. In theory, it should work, but when confronted with the "hairy maths" of complex dynamic physics, how will it cope in reality? The only way to find out is to sit and watch....
March 22, 2006 2:54:57 PM

Well, Ageia is an IP company, not a manufacturer, so I'm sure they'll figure out some way to maximize the potential of the Cell CPU.

I'm not sure of the capabilities of Cell's math copro, but since it's not saddled with legacy issues like the i386 architecture, and it's designed to be part of a gaming system, I'm sure it's a lot more capable than i386, and that there's more hardwired support for advanced math functions.

Personally, I'd love to see PhysX take off and be supported. Hopefully they'll climb that steep slope.
March 22, 2006 3:10:41 PM

Just a couple of thoughts...

1) a separate card is going to be cheaper than a full-blown graphics card;
2) a separate card will surely not consume as much power, nor generate as much heat, so will probably be either passively cooled or only require a small/slow fan. Quieter, in other words.

So, there's certainly no reason to get a second graphics card for processing physics, and if I already have an SLI config I'm obviously not too scared of spending money on bits of circuit board; I'm almost certainly going to want a separate card purely so my graphics cards can concentrate on what I bought them for.

Just what I came up with after 5 minutes' thought, but I can't really see the argument for having your expensive graphics card process physics. As good as it may be at it, it's still not going to be as efficient as a PPU designed from the ground up for the purpose. Obviously my thoughts about the power/heat are entirely conjecture, but I can't see the market for a dual-slot, loud-as-hell physics card at all. :lol: 
March 22, 2006 3:10:49 PM

This looks like a good thing for the NB makers (SiS, VIA and the like) to license and integrate into the northbridge (really only for AMD, because there's no memory controller in it). I could also see AMD adding something like a physics processor to the FX line, since it's really only a gamer's chip.
Is the new race in GPUs which one can do more non-graphics processing? From the GPGPU site it looks like this is something that's catching on.
March 22, 2006 3:51:41 PM

Quote:
Just a couple of thoughts...

1) a separate card is going to be cheaper than a full-blown graphics card;
2) a separate card will surely not consume as much power, nor generate as much heat, so will probably be either passively cooled or only require a small/slow fan. Quieter, in other words.


They're saying $100-$400 for these cards, so they're in the realm of a second vidcard.

The picture of the card at http://www.tgdaily.com/2006/03/22/ageia_physx/ shows a heatsink/fan on the board and a 4-pin Molex connector to the power supply, so it's going to be using a fair amount of power and generate some heat, but probably not as much as a high-end vidcard.

Going to a dedicated physics board will definitely be the better performer, but using unused vidcard power also has some attractiveness. I'll be watching for other areas of computing to invent interesting uses for this. Wouldn't accelerated DVD ripping/transcoding be nifty? Or HD/BluRay video ripping, which I'm sure will require a ton of horsepower once they crack the encryption within a week of its release :)  Or SETI- or Folding@home processing...

Lots of possibilities.
March 22, 2006 4:33:07 PM

Quote:


Going to a dedicated physics board will definitely be the better performer, but using unused vidcard power also has some attractiveness. I'll be watching for other areas of computing to invent interesting uses for this. Wouldn't accelerated DVD ripping/transcoding be nifty? Or HD/BluRay video ripping, which I'm sure will require a ton of horsepower once they crack the encryption within a week of its release :)  Or SETI- or Folding@home processing...

Lots of possibilities.


Using the PhysX chip for something other than physics processing... just as inventive as using the unused resources in a dual-GPU setup to process physics... oooo.... I'm starting to get lots of ideas on this one... :twisted:
March 22, 2006 4:39:01 PM

Quote:
Using the PhysX chip for something other than physics processing... just as inventive as using the unused resources in a dual-GPU setup to process physics... oooo.... I'm starting to get lots of ideas on this one... :twisted:


How about 3D rendering on the PhysX chip? :D 

I'll use it to run my Basic bubble sort faster......
March 22, 2006 5:00:01 PM

Damn it! All these ideas!! I have decided I must get the card... along with the SDK to go with it... just so I can mess around with all that potential power...
March 22, 2006 5:06:40 PM

How about an AOL accelerator? (sorry, I feel dirty just mentioning that cesspool)

Imagine if viruses or malware could get hold of this power!

Or real-time encryption of data for your hard drive, or better VPN connectivity.

Or making your Sims do stupid things even faster...

Oooh! Playing Solitaire with real physics applied to the cards!

But I digress.....
March 22, 2006 5:48:37 PM

Quote:
How about an AOL accelerator? (sorry, I feel dirty just mentioning that cesspool)


Shame on you! :evil:  The mere mention of using specialised hardware for physics processing to try to improve the performance of such an abomination is blasphemy! You should scrub yourself thoroughly with a metal wire brush!
March 22, 2006 5:58:51 PM

As for GPGPU and doing physics on GPUs, there are a lot of reasons why it will win out over AGEIA's hardware-specific solution. There are two main issues to keep in mind for my points:

(1) Faster does not matter; it just has to be fast enough. Physics is not an end-all, be-all task at this point. Devs are working within normal game models and have some limitations. While the fastest solution, in a perfect world, is best, the reality is each solution only has to be fast enough for the end product. PhysX has limits too (e.g. you are not going to get an ocean of complete fluid dynamics), so the question is: can GPUs keep up with what devs are trying to do today?

(2) Game physics vs. effects physics. HL2 had some game physics. Yet so far, games like UT2007 and City of Villains are talking about ONLY putting in effects physics. E.g. the UT example: instead of 600 rocks falling off a cliff, 6,000. But it was noted this would NOT change gameplay. Ditto City of Villains: blow up a news stand and paper goes everywhere. Here is the problem: GPUs can already do effects physics great. Go to ATI.com and look at the "ToyShop" demo -- there is a LOT of GPU-based physics there. The 360 has an example in Kameo, where it had 1M interactive particle bodies running in realtime. So until someone makes a game with gameplay physics (like dynamically destructible buildings) that cannot be done on GPUs, devs are not doing anything that requires AGEIA. If the power goes untapped...
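
To make the "effects physics" distinction concrete, here is a tiny C++ sketch I made up of that kind of debris simulation: thousands of particles integrated every frame purely for show, with no feedback into gameplay. The counts and constants are illustrative assumptions, not anything from GRAW, UE3, or the AGEIA SDK.

Code:
// Made-up sketch of pure "effects physics": debris that bounces around but
// never changes gameplay. All numbers are arbitrary.
#include <cstdio>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

void step(std::vector<Particle>& ps, float dt) {
    for (Particle& p : ps) {
        p.vy -= 9.81f * dt;                      // gravity
        p.px += p.vx * dt; p.py += p.vy * dt; p.pz += p.vz * dt;
        if (p.py < 0.0f) {                       // bounce off the ground plane
            p.py = 0.0f;
            p.vy = -p.vy * 0.4f;                 // lose energy on impact
        }
    }
}

int main() {
    // "6,000 rocks falling off a cliff", all starting 10 units up with a push.
    std::vector<Particle> debris(6000, Particle{0.0f, 10.0f, 0.0f, 1.0f, 0.0f, 0.0f});
    for (int frame = 0; frame < 120; ++frame) step(debris, 1.0f / 60.0f);
    std::printf("rock 0 ended up at y=%.2f\n", debris[0].py);
}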

On to my points of why AGEIA's PhysX won't survive.

- PR and Marketing. No contest. ATI/NV have a ready made fan base.

- Established developer support. It is all about the games, namely getting a game that screams, "You MUST have AGEIA". But no developer can risk building around AGEIA due to install base issues, which leads to...

- Distribution channels, board-maker, and OEM support. This cannot be overlooked. NV/ATI have a decade or more of working supply lines and great connections with board makers. AGEIA has an under-supported product that was delayed, with only 2 board makers signed on to ship it.

- Install base. SLI/Crossfire > PhysX. Developers will develop where the money is; developers will then code to the expectations of the lowest common denominator.

- Open platform. There is competition on the GPU side (like MS's unannounced physics API, and Havok), which means more developer choice. Thus far AGEIA has NOT gone out of their way to make their platform open (they say that after release anyone can work on it, but they are not making the runtimes and so forth open at this point).

- GPUs are more versatile. Every 3D game uses them. PhysX? Only games using Novodex will use it. GPUs are now also used in Vista and can be used for GPGPU tasks. PhysX? Nope.

- GPUs have 100% overlap in market. PPUs, on the other hand, will only work with a small percentage of games.

- Cost. $250-$300 (Alienware has it at $275) for a chip that only works in a few games is nuts. Return on investment is low. Would you rather have 1 GPU + 1 PPU, or 2 GPUs, like 2x 7900GT? Which leads us to...

- SLI/Crossfire. Does a game use physics? Use 1 GPU for physics, 1 GPU for 3D. No physics needed? Then use both GPUs for 3D! PPUs cannot do this; if the game does NOT use it, well, it sits there like a $250 paperweight.

- Old GPUs may in the future be left in your machine as a dedicated physics accelerator.

- ATI/NV may put smaller chips onboard as "dedicated" physics chips. Basically just an ALU engine.

- X1900XT(X). This chip has 300% the pixel shader performance of an X1800XT, but in games it only gets a 0-20% lead in most cases. Why? All those ALUs sit idle since most games currently don't use them. The X1900XTX has a TON of power just sitting there, unused... waiting to be used by something like physics!

- ALUs are CHEAP. It cost ATI less than 63M transistors to add 32 ALUs to R580, about 2M transistors per ALU. That is well over 200 billion flops (200 GFLOPs). Basically, ATI/NV could add ALUs just to have a huge surplus for physics. Since DX10 GPUs are more robust and much better suited to general processing, a few tweaks and changes could make them VERY physics-friendly. The power is there. The X1900XT has a total of 500 GFLOPs. This will only increase: by fall 2007 we will break the 1 TFLOPs (teraflops, as in TRILLION) range in PROGRAMMABLE shaders. AGEIA does not have the R&D, product range, or the know-how (they are still on the large, hot, and outdated 130nm node... I believe some of the NV35s were made on that!).

- Another level of latency. With PhysX you have a CPU, GPU, and PPU, ALL working over a bus, fighting for traffic and coherency. With a GPU you have only a CPU and GPU, limiting some of the traffic and coherency issues.

- The chips are expensive for what they are. 125M transistors on the 130nm process, 25W for the chip alone. To compare, GPUs are on the 90nm process, moving to 80nm, and are 3x as big. What they lack in elegance they make up for in brute force.

GPUs are not perfect for physics. Advanced rigid-body collision detection takes more work, and they may (or may not) be faster than AGEIA's solution. But pure speed is not always what determines the winner. VALUE, market share, and PR are. And in games, the most important factor is developer support.

Thus far AGEIA has over-promised and never delivered a product (well, until now). Delays have been common and the chip itself is waaaaay overpriced.

Yep, not a fan. More useless hardware. At least they woke up the GPU makers.

What is more interesting is how ATI and NV will fight it out. ATI is about 5x faster in GPGPU tasks than NV (that was comparing the 7800GTX with the X1800XT), and ATI has done a lot of work on the 360 with MEMEXPORT specifically to accelerate this type of task. 6 months ago, one of the "bullet points" of the X1000 series was that it can be used to accelerate a number of tasks, and ATI has had working physics demos up on hardware as old as the 9700. With all the extra ALUs on the R580 plus the excellent dynamic branching, I get the feeling ATI could have a significant lead here.
March 22, 2006 6:10:45 PM

Eloquently well put!

I have nothing else to add or disagree with.
March 22, 2006 6:19:02 PM

Damn.... nicely put.... I can't exactly argue with that... :)  Well, at least I know my X1900XT won't need extra help...
March 22, 2006 6:21:02 PM

I'd say this topic is pretty much put to rest. It's been a fun ride, though. :D 

Oh, well. Goodnight, it's time to leave work now.

BTW, why is your finger blue?
March 22, 2006 6:25:46 PM

Beware of rubber bands, fingers, and forgetfulness..... no need to say more...
March 22, 2006 6:30:00 PM

I'll assume alcohol was involved. Great nickname, even if it was earned the hard way.
March 22, 2006 6:49:29 PM

How do you mod a DFI LANParty to SLI?
And about the physics thing, forget SLI physics, it's only for eye candy.
March 22, 2006 6:58:23 PM

Perhaps I like eye candy?
March 22, 2006 7:29:49 PM

Here is a link to Ageia's PhysX card in action in Ghost Recon Advanced Warfighter, compared to a CPU doing it.
I like the way the doors and tires come flying off :D 
March 22, 2006 7:35:17 PM

Quote:
Here is a link to Ageia's PhysX card in action in Ghost Recon Advanced Warfighter, compared to a CPU doing it.
I like the way the doors and tires come flying off :D 


It looks nice, but now let's see that with a multi-core CPU and a GPU doing physics ;)  There is no doubt that games that use AGEIA will look better than standard, non-accelerated games. The problem is AGEIA is facing both the GPU makers and Intel/AMD. It is going to come down to cost, market penetration, and support.

One thing AGEIA has going for them is they got into both consoles & UE3 and are affordable. In the past Havok has been much more robust, but just like the hardware, sometimes the more robust solution does not win out.
March 22, 2006 7:42:58 PM

Well, for me, I will definitely get an AGEIA card, since I'm a Ghost Recon fan and GRAW is especially made for it.
It is also going to be used in UT2007, which is a CPU-limited game and one of the most popular games, so I assume AGEIA's card will get off to a good start.

As for Nvidia SLI Physics, personally I wouldn't like my graphics card to render frames and do physics; it's bound to eat into GPU resources. AGEIA's PhysX card has 128MB of GDDR3 dedicated to it; a CPU would use system RAM, and your graphics card would use its VRAM, which could otherwise go toward AA, AF and higher resolutions in upcoming games.
So it really does make sense to have a dedicated card to handle Physics.
Also (last point), you don't have to keep upgrading AGEIA's card like a graphics card and CPU, so it is a good, safe long-term investment.
:) 
March 22, 2006 7:45:09 PM

Well what I think it will allow for is fully deformable terrain and environments in multiplayer games (not necessarily MMOs though cause it'd be bad if people blew a hole through the world).
March 22, 2006 8:02:24 PM

SLI physics is just another gimmick to make SLI look more attractive, cuz SLI sucks the money outta your wallet.
March 22, 2006 9:42:18 PM

Quote:
As for Nvidia SLI Physics, personally I wouldn't like my graphics card to render frames and do physics


Interestingly, in today's news it was noted on the Alienware site that 7900GTX SLI is not available with PhysX. So, one or the other.

Quote:
Also (last point), you don't have to keep upgrading AGEIA's card like a graphics card and CPU, so it is a good, safe long-term investment.
:) 


A little birdie (ok, the AGEIA press releases) already announced "value" (read: slower) SKUs and that they are working on future products.

What you are saying is akin to when the first GPUs came out: "I will never need to upgrade!" Sadly it does not work like that.

As the market expands we will see how it plays out, I guess. I am curious if Creative will get in on the action... which brings up an interesting point. Sound cards used to be all the rage. I remember my dad's first SB and when I got my SB Pro. I remember in 1998 getting an MX300 A3D. I have an Audigy 2 ZS currently. The problem, from a market perspective, is that even a $50 sound card--which sounds better than integrated audio and offers better performance--is dying off. Why? Because the standard parts in a PC work well enough.

Call me skeptical, but if $50-$100 sound cards, which had great market penetration at one point, were weeded out by slower but cheaper integrated sound... how is AGEIA going to fight an uphill battle with no market penetration, more competition (Creative had a virtual monopoly), and an expensive product?

EAX has a lot of dev support, but in the long run it is pretty much moot. You can get the same experience, for the most part, elsewhere. The fact that they do have faster PPUs in the pipeline indicates that this is not a one-shot deal.

Quote:
Well what I think it will allow for is fully deformable terrain and environments in multiplayer games (not necessarily MMOs though cause it'd be bad if people blew a hole through the world).


The problem with online play is that whatever you do in your world has to be transmitted everywhere else. You can do pre-canned destruction (see: Black), but once you talk about dynamic, realtime deformable terrain and destructible content, ALL that content has to be passed over the internet. Servers lag with 64 people shooting guns; the thought of people blowing up buildings with tens of thousands of individual parts is asking too much (unless it is not interactive and pre-canned, ala Soldner).
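
To put rough numbers on that, here is a quick back-of-the-envelope sketch in C++. Every figure in it (debris count, bytes per piece, tick rate, player count) is an assumption of mine for illustration, not something from the thread.

Code:
// Back-of-the-envelope sketch of the bandwidth problem described above.
// All numbers are assumptions picked purely for illustration.
#include <cstdio>

int main() {
    const double debrisPieces   = 10000;   // parts from one collapsing building
    const double bytesPerPiece  = 24;      // position + orientation, roughly
    const double ticksPerSecond = 20;      // a typical 2006-era server tick rate
    const double players        = 64;

    double perClient = debrisPieces * bytesPerPiece * ticksPerSecond; // bytes/s
    double total     = perClient * players;

    std::printf("per client: %.1f MB/s, server total: %.1f MB/s\n",
                perClient / 1e6, total / 1e6);
    // ~4.8 MB/s per client and ~307 MB/s for the server -- hopeless over 2006
    // broadband, which is why destruction stays pre-canned or client-side only.
}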

That is the problem with physics. Once it impacts the gameplay level (i.e. not just junk blowing up and disappearing, making no impact on the world), you make online play a huge hurdle, and you also divide the user base, since many users won't have the hardware to accelerate it.

Sticky problems. Publishers, in my estimation, will follow the money, which is where the install base is. It is important for AGEIA to sell millions of PPU cards.
March 22, 2006 10:46:28 PM

I wanted to add something, but then I read those two posts and said... umm it's pretty much well covered.

Primarily, the biggest selling point for me is that EVERYONE needs a GPU, while not everyone needs a PhysX engine; the fact that you can use something you may already have to increase playability a bit, without buying a dedicated card, is great.

And if it came down to buying a second card, I would buy a second graphics card for the benefits when games didn't need that much physics punch but might need more rendering punch. I'd rather have the proven dual benefits than buy a physics card that might be helpful in something like tree-sorting. 8O

In other words, I concur.

I think the great interview articles on this are the ones from FiringSquad:

http://www.firingsquad.com/features/havok_fx_interview/

http://www.firingsquad.com/features/ageia_physx_press_conference/

I'd say Havok, with their HavokFX, as the lead on this is far more realistic and attractive than Ageia's solution. And if Havok makes it a game-engine upgrade, then games like Oblivion could potentially add it to their current build far more seamlessly.
March 23, 2006 2:17:19 AM

Hey, this may have been answered already, but will any of the physics solutions do anything for my current games? Will current games need patches or have to be totally reprogrammed? And can the game servers handle all the extra processing? Increased bandwidth? Being a server admin and paying out of my own pocket, it will be hard to justify more bandwidth just to see a glass window break in UT2k4.
March 23, 2006 6:27:14 AM

For current games, there won't be much difference, since most of the physics is software based, and not reliant on hardware.
March 23, 2006 11:27:24 PM

I'm a little skeptical about the performance and abilities of PhysX. Doing a little reading, I saw that Nvidia stated the GPU doing the physics calculations would send the data to the DX9 driver without the intervention or use of the CPU.... :?: :?: :?:

Next off, most GPUs, but namely Nvidia's that I can speak of, are able to process physics for lighting and not much else. The biggest problem: current memory controllers don't allow for the necessary (or minimum) caching to make any physics calculations truly effective or possible. (Perhaps the following generations will, though.)

A dedicated PPU seems to be the best possible solution.
March 23, 2006 11:44:05 PM

So if a PPU takes the load off the CPU, what else is there for a CPU to do in the game?

- AI
- loading
- that's all I can think of

What's next, an AI processing unit? And a loading unit?
March 24, 2006 12:02:54 AM

The actual map location and centerpoint information.
Keeping relational track of all other aspects and the overall changes for an affected area of the map (whom have you talked to, where are the bodies/bullet holes).
Keeping time for the story, linear progression (this is saved on the hard drive but written and retrieved through the CPU).
Network communication.
And now relaying the tasks so the dedicated parts (audio card, graphics card/PhysX card) interact with the rest.

There's still a lot to do, and AI is a progressively larger task, especially in games like Oblivion, where Radiant AI affects characters not directly interacting with the current scene or main character.
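
Put another way, a single frame of CPU work in that world might look like the hypothetical outline below; every function name is invented for illustration and is not from any real engine or the AGEIA SDK.

Code:
// Hypothetical sketch of the per-frame CPU to-do list once physics is offloaded
// to a PPU or second GPU. Function names are made up for illustration.
#include <cstdio>

void updateAI()             { /* pathfinding, NPC decisions (Radiant-AI-style)   */ }
void updateWorldState()     { /* who you've talked to, bodies, bullet holes      */ }
void pumpNetwork()          { /* send/receive multiplayer state                  */ }
void kickPhysics()          { /* hand rigid bodies to the PPU/GPU and collect results */ }
void streamAssets()         { /* load map/texture data from disk                 */ }
void submitRenderCommands() { /* build the command list for the graphics card    */ }

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        updateAI();
        updateWorldState();
        pumpNetwork();
        kickPhysics();          // offloaded: the CPU only dispatches and manages
        streamAssets();
        submitRenderCommands();
        std::printf("frame %d done\n", frame);
    }
}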
March 24, 2006 5:51:28 AM

ATI announced physics acceleration as a possibility 6 months ago, but here is their new response:

http://www.pcper.com/article.php?aid=226

Since I have seen some of the GPGPU tests, I can say that in general ATI can at times have a 5-6x lead. How that translates to physics, who knows. But there is good info in here--namely confirmation that MS is working on a universal API so ATI/NV/AGEIA can all compete on the same level ground and let consumers choose the best product; also confirmation that ATI plans to allow customers to use their old GPUs as dedicated PPUs, i.e. recycle your old GPU. Also a nice tidbit about how physics can be run either on a standalone GPU or a dedicated one. I know some cringe at having one GPU do both, but let's say your new X1900XTX does 375 GFLOPs. 10% of that is nearly 4x the floating-point performance of a CPU. So it is like going from 66fps to 60fps (a 10% hit) while getting 4x the performance of a CPU. OK, not quite that simple or clean, but you get the point.

But it is not unimaginable that if you are at 60fps at 1600x1200, then by moving to 1280x1024 (a roughly 30% reduction in pixel shader and bandwidth load) you have freed up a ton of resources for physics-type tasks. We are talking multiple orders of magnitude. And DX10 should open up more doors and efficiency.
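
As a quick sanity check on those two numbers (the CPU peak figure is my own assumption, not from the post), a few lines of C++:

Code:
// Checking the two back-of-the-envelope claims above. The ~10 GFLOPs CPU figure
// is an assumed peak for a high-end 2006 desktop chip.
#include <cstdio>

int main() {
    double gpuGflops = 375.0;                 // quoted X1900XTX figure
    double cpuGflops = 10.0;                  // assumed 2006 CPU peak
    std::printf("10%% of the GPU = %.1f GFLOPs, about %.1fx the CPU\n",
                0.10 * gpuGflops, 0.10 * gpuGflops / cpuGflops);

    double pixelsHigh = 1600.0 * 1200.0;      // 1,920,000 pixels
    double pixelsLow  = 1280.0 * 1024.0;      // 1,310,720 pixels
    std::printf("dropping to 1280x1024 frees about %.0f%% of the pixel load\n",
                100.0 * (1.0 - pixelsLow / pixelsHigh));   // roughly 32%
}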

Obviously GPUs are not the "ideal" solution in terms of efficiency, but they are a proven design with great market penetration and a 100% overlap with the intended physics audience.

And most importantly, there is economy of scale. PhysX is a small chip (125M transistors) with a 128-bit bus to 128MB of memory. Yet they are going for $275. I am looking at a 7900GT right now that, while only being in a similar price ballpark, has a chip 2.25x bigger, 2x as much memory, and 2x the memory bandwidth. With a PPU you are paying a LOT for what you are getting. Even if GPUs are only 60% efficient they would still come out ahead on a dollar-to-performance ratio.

Who knows how efficient they are and how they compare at this point, but we do know that GPUs, like the PPU (at least from the little we know from their patents), are streaming processors with a lot of brute force and a high-bandwidth connection to dedicated memory. PPUs are a specialized solution and will be more efficient; the question is whether that efficiency can overcome the brute force of the GPUs, the market conditions, and the cost factors (not to mention versatility/usefulness).

Ultimately it comes down to cost and a killer app that drives consumer interest. If games end up having to juggle multiple options (PPU | GPU | SLI | Dual Core | Quad Core | None) before we get such an app, well...
March 24, 2006 5:44:27 PM

Quote:
Also confirmation that ATI plans to allow customers to use their old GPUs as dedicated PPUs, i.e. recycle your old GPU. Also a nice tidbit about how physics can be run either on a standalone GPU or a dedicated one. I know some cringe at having one GPU do both, but let's say your new X1900XTX does 375 GFLOPs. 10% of that is nearly 4x the floating-point performance of a CPU. So it is like going from 66fps to 60fps (a 10% hit) while getting 4x the performance of a CPU. OK, not quite that simple or clean, but you get the point.


Yeah, and you know what, all the old VPUs I've owned I've GIVEN away, and some things I couldn't give away if I tried (actually I'm glad to have the PCI AIW back because it is still a solid card for Win2K/W98SE). So if I KNOW this is an option, I might get an X1900 AIW very, VERY secure in the knowledge that it will still be useful in the DX10 era as a PPU. Better than the hassle of selling it off to a friend (I'm not much of an eBayer, unlike guys like Cleeve and Pauldh, who seem to be able to spin their old hardware into almost the entire replacement cost of sweet fresh gear [new lamps for old?]). So for me this holds a lot of additional promise, at no additional cost, that I didn't have before.

Quote:
But it is not unimaginable that if you are at 60fps at 1600x1200, then by moving to 1280x1024 (a roughly 30% reduction in pixel shader and bandwidth load) you have freed up a ton of resources for physics-type tasks. We are talking multiple orders of magnitude. And DX10 should open up more doors and efficiency.


Definitely, and if you think about a 'cheating way' of implementing the most efficient use: you overbright the screen (dropping the visual/pixel load), calculate the initial trajectories and close interactions, and then continue with the rendering of the explosion. This would give you the flash-bang/stun effect of an explosion and also give the card time to compute trajectories at the same time, helping to ease that instantaneous large load. Of course, the self-interacting particles will multiply in a confined space or if they have a heavy number of collisions (like a falling wall), but for most 'explo-zee-uhns' this should give some initial breathing room until efficiencies improve.

Quote:
Obviously GPUs are not the "ideal" solution in terms of efficiency, but they are a proven design with great market penetration and a 100% overlap with the intended physics audience.


I wouldn't say they are ideal, but for me they are a 'better' solution than a dedicated physics card, simply because they have other uses, and because in some situations adding a second VPU will increase frame rates in games that aren't PPU-accelerated, since it can then push pixels. When SLI and Crossfire go with unbalanced pairings (like X1800 + X1900, or a 1600), then you'll truly be able to put your previous card to good use and increase the utility of both the physics side and the graphics side.

Quote:
And most importantly, there is economy of scale. PhysX is a small chip (125M transistors) with a 128-bit bus to 128MB of memory. Yet they are going for $275. I am looking at a 7900GT right now that, while only being in a similar price ballpark, has a chip 2.25x bigger, 2x as much memory, and 2x the memory bandwidth. With a PPU you are paying a LOT for what you are getting. Even if GPUs are only 60% efficient they would still come out ahead on a dollar-to-performance ratio.


Being an economist, I appreciate that, and it's a winner for ATI and nV. However, for a consumer the key point is that the PhysX card doesn't have as quick a product cycle; graphics cards have a faster production refresh, so that GF7900GT/GTX will likely cost as much or far less in only a few months, since there'll be a better part out to knock it out of contention, and it will still be able to do more overall. That will keep happening, to the point where the PPU is competing against a $50-100 graphics card on eBay. Because while most people will NEED a new VPU every year or two, most won't consider a new PPU, and eventually, IMO, the VPU will far surpass the PPU until Ageia makes enough money off their current line to refresh their product to next-generation performance. And the whole time you can add to a VPU setup and swap cards in and out. And the fact that even just having a VPU that can do both is better for cheap people who don't want to pay for two things. Of course, if they added functionality quickly the PPU might be competitive, but it will remain a costly way of competing for the manufacturer.

Quote:
Ultimately it comes down to cost and a killer app that drives consumer interest. If games end up having to juggle multiple options (PPU | GPU | SLI | Dual Core | Quad Core | None) before we get such an app, well...


Exactly, and I can easily see multi-VPU + multi-core being able to handle all the elements well enough that a separate third card being part of the solution just makes it another 'variable' for people.