Does ATI and NVIDIA have a PPU onboard the graphics card?

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
Do the ATI HD2900s and the nVidia 8800s have an actual PPU? Or is it the architecture of the GPU that allows the cards themselves to do physics... er, can the cards actually act like a PPU, is what I'm really trying to ask. This is what I pulled from both the ATI and nVidia websites. However, ATI's site isn't as clear as nVidia's.

ATI Unified Superscalar Shader Architecture
- Physics processing support

NVIDIA Quantum Effects™ Technology
Advanced shader processors architected for physics computation
Simulate and render physics effects on the graphics processor

I remember reading something about how ATI can go into Crossfire and have one card do graphics rendering while the other does physics, or have two cards doing graphics and one doing physics. But as far as nVidia goes, I don't know. I'm guessing nVidia can run single or SLI and the card itself has enough GPU power to calculate the physics on its own.

Anyone care to shed some light on this?
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
Well, after reading the comments, let me see if I can rephrase the question.

Are the GPUs powerful enough to perform the physics calculations as well as graphics without any noticeable loss of performance?

The reason I ask is that I was trying to play CellFactor the other day, and you're limited to like 2 or 3 game modes if you don't have the PhysX card, which I think is sort of a waste if you can get the graphics card to do the physics calculations itself. I mean, wouldn't it be faster for the GPU to calculate it instead of the CPU, due to the GPU's architecture?
 

tamalero

Distinguished
Oct 25, 2006
1,125
133
19,470
Well, after reading the comments, let me see if I can rephrase the question.

Are the GPUs powerful enough to perform the physics calculations as well as graphics without any noticeable loss of performance?

The reason I ask is that I was trying to play CellFactor the other day, and you're limited to like 2 or 3 game modes if you don't have the PhysX card, which I think is sort of a waste if you can get the graphics card to do the physics calculations itself. I mean, wouldn't it be faster for the GPU to calculate it instead of the CPU, due to the GPU's architecture?

If I remember correctly, you CANNOT use a single video card for graphics AND physics at the same time...
That's why you need to do 1+1 or 2+1 Crossfire.
 
Actually that isn't the case.

It depends a lot on the power required by the graphics, of course (hey, turn the resolution/effects up high enough on any game and obviously a single card can't do it if SLI/Crossfire are a benefit).

However, both ATi and nVidia now have designs that make it very easy to do both simultaneously. There will of course be a hit, but if someone is, say, gaming at 1280x1024 or 1280x720, they would likely have a ton of extra horsepower left over (regardless of AA, which, especially on the G80, is a back-end issue), and so you would have idle portions of the shader unit that could be used for physics.

Without getting into various applications of AA, the R600 also has an abundance of shader power that is under-utilized due to its ROP limitations, so adding a physics load handled at an X1600 level (equivalent to the PhysX card) would likely not impact it too much.

Of course it's a question of balance; both the G80 and R600 are easily able to do both, but we might still be better off with a single R600/G80 rendering the highest level of graphics while a GF8600/HD2600 handles the physics load.

But of course the best results will likely still come from the top-end card doing 2+1.
 
G

Guest

Guest
No, but the R700 and G90 are rumored to have a separate physics processing unit.
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
I understand that; however, I don't understand what NVIDIA and ATI are trying to say when I look at the specs for both of their cards. Look back at my first post and you will see what I'm talking about.
 
Hi, I was intrigued by the question and have done some digging, and apparently the game is made for the PhysX card and so can only be played to its fullest with the card.
Just type CellFactor into Google and check out a review or two for yourself.
There is a suggestion in the one I read that if you had enough RAM/CPU and GPU power you could get away without it, but I only had a quick look.
Hope it turns out you can.
Mactronix
 

blade85

Distinguished
Sep 19, 2006
1,426
0
19,280
I've been playing it on my system, and it's a struggle to stay over the 20fps mark with everything set on high at 1024x768. Notch down the graphics and the game runs much better... and the physics don't look too shabby either :)
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
Sorry fellas, didn't mean to hijack my own thread.

I was just saying that I've been reading up on the hardware architecture of the 8800 and the 2900, and both say they are, in one way or another, capable of processing physics. Like, they have a dedicated pipeline or shader or whatever for physics calculations. However, I think that only works if you do Crossfire/SLI. Not sure. This was the point of the thread: to figure out if there was a way to have just one card process the graphics as well as the physics, and to ask whether it was possible for someone to make the graphics card take the place of a PhysX card.

Hope that brings some clarity to this thread.
 

Heyyou27

Splendid
Jan 4, 2006
5,164
0
25,780
The reason I ask is that I was trying to play CellFactor the other day, and you're limited to like 2 or 3 game modes if you don't have the PhysX card, which I think is sort of a waste if you can get the graphics card to do the physics calculations itself. I mean, wouldn't it be faster for the GPU to calculate it instead of the CPU, due to the GPU's architecture?
My current problem with CellFactor is that Nvidia's latest drivers don't support antialiasing, and performance is very poor with per-pixel motion blur enabled. Now that I own a PhysX card, my overclocked GeForce 8800GTX has become the bottleneck when per-pixel motion blur is enabled. :?
 

gayan

Distinguished
Dec 19, 2006
128
0
18,680
This is weird...
Try this: create a new folder and name it "con". You just can't do it.
Is it a bug?
 

daniel2br

Distinguished
Apr 15, 2007
26
0
18,530
gayan
This is weird...
Try this: create a new folder and name it "con". You just can't do it.
Is it a bug?

It's true! So weird... going to look into it, but something tells me it's a question only M$ can answer :twisted:
 

dobby

Distinguished
May 24, 2006
1,026
0
19,280
They don't specifically have a chip to do physics, but the GPU does process the bulk of the physics because it affects the rendering.

So this means you're better off with SLI than with a physics card.
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
Thank you for staying on topic.

Unfortunately, my motherboard doesn't support SLI, and I don't think the modified SLI drivers will work under Vista. I might go Crossfire, but after seeing the HD2900XT's power draw and heat output... I'm not sure. I'll wait a while.
 

mcain591

Distinguished
Aug 21, 2006
303
0
18,780
Wait for the R650. It's the R600 at 65nm, and they are going to do a slight overhaul of the architecture; for example, they are giving it more horsepower.

And strange, now that I think about it: the only reason I might hate Crysis is that my rig won't be able to run it with any eye candy. :(
 

Original

Distinguished
Nov 27, 2006
22
0
18,510
I agree with mcain591; we should wait for the next chip shrink. Less heat and power, more MHz and performance. Current boards exist only to have a presence in the market war. :roll:



Athlon64 X2 3800+@2.4 GHz/Evercool SHARKS
ASUS A8R32-MVP DELUXE
1 GB Ching Lin DDR466@433 MHz/2-3-3-6-1T/2.7 V
250 GB HD SATA Seagate/200 GB HD EIDE Maxtor
Gigabyte nVidia GeForce 6600 Silent-Pipe
400 W Ching Ling PSU
LiteON 16X DVD
Teac Floppy
LG L1925H 19" LCD 8 ms
XP SP3
1400 VA true sinusoidal wave UPS
 
Yeah, you should wait. I love the performance of this card, but the power/heat of two would be scary. ATI seems to be having good luck with 65nm; supposedly the 2600 and 2400 don't even need a 6-pin power connector. :D
 

shompa

Distinguished
Apr 2, 2007
72
0
18,630
I own a physx card.

I think that both Nvidia and ATI are trying to get people not to buy the Ageia card, and instead go for SLI or 3-card SLI.
Is that not the reason for the extra SLI slot on the newer Nvidia cards?
2 SLI for the game and 1 for physics?

So they "lie" a bit about their capabilities. They can't do the same as a dedicated physics processor can.

If you want physics today, you should get the Ageia card.
There are still very few games that support it, but it is a great innovation.

If you can wait: NV90?
 

gomerpile

Distinguished
Feb 21, 2005
2,292
0
19,810
Yes, this card has a physics processor: the Calibre P860 graphics card by NVIDIA.

Based on NVIDIA's next-generation G84 architecture, the Calibre P860 graphics card adopts an innovative unified architecture that dynamically allocates processing power to geometry, vertex, physics, or pixel shading operations, delivering up to 2x the gaming performance of prior-generation GPUs. It is built upon technologies such as the NVIDIA Lumenex Engine, providing support for DirectX 10 Shader Model 4.0, NVIDIA Quantum Effects technology for physics computation, and GigaThread Technology for extreme processing efficiency in advanced, next-generation shader programs.
This info came from Guru3D.
 
gayan
This is weird...
Try this: create a new folder and name it "con". You just can't do it.
Is it a bug?

It's true! So weird... going to look into it, but something tells me it's a question only M$ can answer :twisted:

Nope, it's a reserved word going back to the MS-DOS days. 'con' is short for 'console' and is used to read input from (mainly) the keyboard, acting as a file.
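For the curious, the reserved device-name check can be sketched in a few lines of Python. This is just an illustration of the rule (the name list follows Microsoft's documented MS-DOS device names; the helper function `is_reserved` is my own, not part of Windows):

```python
# Reserved MS-DOS device names that Windows refuses to use as
# file or folder names, with or without an extension.
RESERVED = {"CON", "PRN", "AUX", "NUL"} | \
           {f"COM{i}" for i in range(1, 10)} | \
           {f"LPT{i}" for i in range(1, 10)}

def is_reserved(name: str) -> bool:
    # Windows compares the base name case-insensitively,
    # so "con.txt" is just as illegal as "CON".
    base = name.split(".")[0].upper()
    return base in RESERVED

print(is_reserved("con"))      # True
print(is_reserved("Con.txt"))  # True
print(is_reserved("console"))  # False
```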
Back to the topic please.
About physics calculations on the GPU: programmable shaders and physics basically use the same computing resources, mainly floating-point processing; if you like, a GPU (or a PPU like Ageia's PhysX) is nothing more than a turbocharged, oversized FPU.
However, you can't hit a GPU like you do a PPU for physics computations, if only because they are just not the same hardware, and both drivers and hardware must allow simultaneous computation of physics and 3D (physics could be considered an off-screen context, but the card must support this kind of hack).
The former explains why, at the time the concept of doing physics processing on a GPU came out (around the NV 6/7xxx days), both Nvidia and Ati supported it only on dual-GPU configs: one GPU in graphics mode, one in physics mode. More recent chips may be better designed to allow both.
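To see why physics maps so naturally onto shader hardware, consider what a physics step actually is: the same few floating-point operations applied independently to thousands of objects, which is exactly the data-parallel pattern shaders run. A toy sketch in Python/NumPy (NumPy's vectorized math stands in here for the GPU's wide floating-point units; this is an illustration of the idea, not actual GPU code):

```python
import numpy as np

N = 100_000  # number of simulated particles
pos = np.zeros((N, 3), dtype=np.float32)         # positions
vel = np.random.randn(N, 3).astype(np.float32)   # velocities
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = np.float32(1.0 / 60.0)                      # one 60 Hz frame

def step(pos, vel):
    # Semi-implicit Euler: one multiply-add per component per
    # particle -- the bread and butter of any FPU, GPU or PPU alike.
    vel = vel + gravity * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = step(pos, vel)
```

Every particle's update is independent of every other's, so the work spreads across however many shader units happen to be idle; that is the "spare horsepower" argument in a nutshell.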

Now, for the details, you'd better go roam the Nvidia or Ati fora.
 

patrickisfrench

Distinguished
Dec 29, 2008
2
1
18,515
As the OP asked back in 2007: I know Nvidia now makes use of Ageia's PhysX processing onboard, but has ATI done anything similar yet? Do any new Radeons now have an onboard PPU?