Physics Processor not the future

July 14, 2006 3:30:07 AM

What is with Ageia, NVidia and ATi and this physics stuff? Physics is not really that important, and you can simulate physics with simple mathematics. I do not really care if my grenade sends a vehicle flying three feet with a PPU or two feet with a simulated calculation (I will never see it in real life to say whether it was realistic or not). Physics is not important enough for a dedicated processor.
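
For what it's worth, the kind of "simple mathematics" I mean fits in a handful of lines. Here is a minimal C++ sketch (every number and name is invented for illustration; this is not any real engine's code) that applies a grenade impulse to a vehicle and integrates the result on the CPU:

Code:
#include <cstdio>

// Toy "grenade throws a vehicle" physics: apply an impulse, then
// integrate position with plain Euler steps. No PPU required.
struct Body {
    double x = 0, y = 0;   // position (m)
    double vx = 0, vy = 0; // velocity (m/s)
    double mass = 1500;    // kg (rough guess for a vehicle)
};

int main() {
    Body car;
    // An impulse J from the blast changes velocity by J / m.
    const double jx = 9000, jy = 12000; // N*s, made-up blast strength
    car.vx += jx / car.mass;
    car.vy += jy / car.mass;

    const double dt = 1.0 / 60.0; // one 60 Hz frame
    const double g = -9.81;       // gravity (m/s^2)
    do {                          // integrate until it lands again
        car.x += car.vx * dt;
        car.y += car.vy * dt;
        car.vy += g * dt;
    } while (car.y > 0);
    std::printf("vehicle thrown %.2f m\n", car.x);
}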

Now, AI would be good: some processor which handles the standard AI algorithms and branch predictions. That would make games better.

As for graphics, HDR and Bloom are going in the right direction. I would like to see more of this, and also better material handling of textures (done on the GPU), maybe material properties for texture maps, so a texture can have more than one material property. There is still a long way to go in the visuals department, and companies such as NVidia and ATi should be concentrating on these instead.

What do you think? What improvements can you think of to make gaming a better experience?
July 14, 2006 5:12:30 AM

ATI and Nvidia are both GPU makers. With a PPU from Ageia, or an ATI/Nvidia GPU acting as a PPU, they'll give consumers the option to crank out better graphics using their $500 GPU rather than have it waste precious power on physics and whatnot. Basically, it is a good step toward improving graphics: giving the GPU one less task lets it concentrate more on graphics.

AI is more a matter for the programmers, not the graphics card makers.

I'm all for a dedicated PPU, but I'd like one that actually works... not the disappointing crap Ageia just released, which has shown itself to be worthless.
July 14, 2006 6:35:23 AM

Physics is for the programmers as well. I am saying that if ATi is pushing for GPUs to handle physics, why not AI? Why doesn't someone work on an AI chip?

Many years ago when I studied computer science, I remember there were some algorithms for standard AI, such as searching possible outcomes, finding the fastest way from A to B, etc. I am sure most FPS AI is pretty similar, and branch prediction could be used for RTS games and series like Civ.
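
To make that concrete, here is a tiny C++ sketch of the textbook "fastest way from A to B" case: a breadth-first search over a tile map (the map and names are invented for illustration; real game pathfinding would typically use A* with a heuristic):

Code:
#include <cstdio>
#include <queue>
#include <string>
#include <vector>

// Breadth-first search: fewest steps from 'A' to 'B' on a grid
// where '#' is a wall. This is the "standard AI" building block.
int main() {
    std::vector<std::string> map = {
        "A..#....",
        ".#.#.##.",
        ".#.....B",
    };
    const int rows = (int)map.size(), cols = (int)map[0].size();
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    std::queue<std::pair<int, int>> q;
    dist[0][0] = 0;                 // 'A' is at row 0, col 0
    q.push({0, 0});
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (map[nr][nc] == '#' || dist[nr][nc] != -1) continue;
            dist[nr][nc] = dist[r][c] + 1; // first visit = shortest
            q.push({nr, nc});
        }
    }
    std::printf("steps from A to B: %d\n", dist[2][7]); // 'B' at (2,7)
}
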
July 14, 2006 7:14:17 AM

I'm excited about multicores. In essence, you can give each of these systems (AI, physics, graphics, etc.) its own processor. Physics cards are just going to saturate the bus.
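
A rough C++ sketch of that idea (the subsystem names and tick rates are invented, and the loop bodies are placeholders for real work), with one thread per system so each can land on its own core:

Code:
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// One thread per game subsystem, so a multicore CPU can schedule
// each system onto its own core. Tick bodies are stand-ins.
std::atomic<bool> running{true};

void subsystem(const char* name, int hz) {
    const auto period = std::chrono::milliseconds(1000 / hz);
    while (running) {
        // ... update this subsystem for one tick ...
        std::this_thread::sleep_for(period);
    }
    std::printf("%s thread finished\n", name);
}

int main() {
    std::thread physics(subsystem, "physics", 60);
    std::thread ai(subsystem, "ai", 10); // AI can tick slower than physics
    std::thread audio(subsystem, "audio", 30);

    std::this_thread::sleep_for(std::chrono::seconds(1)); // "play" briefly
    running = false;
    physics.join(); ai.join(); audio.join();
}
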
July 14, 2006 7:16:20 AM

Because there is no need to take AI off the CPU yet; it isn't advanced or system-straining enough to change anything.
July 14, 2006 7:20:50 AM

Quote:
I'm excited about multicores. In essence, you can give each of these systems (AI, physics, graphics, etc.) its own processor. Physics cards are just going to saturate the bus.


Totally agree with this (besides, I need my one PCI slot for my X-Fi, lol). If Ageia could have gotten something out the door a little faster, they may have had a market for a bit (assuming the software had been made to take advantage of it). Now dual core is standard and four cores seem right around the corner. Well, two years, but that's not that long; a good game can take a year or more to make!

Edit: You know, MMM said that multicore was the way to go. At the time I thought the Ageia add-in card was a good thing, but after thinking about it more, I agree with him! Hmm, maybe the physics data could be hooked into the geometry calcs? Or do GPUs do their own geometry these days? (I just install them and use them; it's been quite some time since I had to deal with anything more detailed, lol.)
July 14, 2006 7:59:16 AM

I believe both physics and AI are more important than better graphics. It's always frustrating when you have a good-looking game but things don't behave realistically.

Instead of add-on cards, I think programmers should start to use a single dedicated CPU core for this kind of calculation. CPUs are becoming more and more useless in games; the video card is what is important when it comes to gaming. So this could be a good way to make CPU upgrades useful for us gamers.
July 14, 2006 1:04:06 PM

Quote:
Instead of add-on cards, I think programmers should start to use a single dedicated CPU core for this kind of calculation. CPUs are becoming more and more useless in games; the video card is what is important when it comes to gaming. So this could be a good way to make CPU upgrades useful for us gamers.


I completely agree. Let me quote something I said in the "what is a physics processor" thread:

Quote:
Before I say anything further, let me state that I 100% concur that specialized hardware for a task (a specialized sound card for sound, a specialized GPU for graphics) is the best thing since the invention of the mouse.

But I'm very sceptical about whether we will ever really need a separate card to do the job. If my mid-2004 AMD Athlon 3400+ can handle 25 objects flying through the air in FEAR without a single hitch, freeze, or drop in performance (depending on detail settings), then who's to say my processor couldn't handle hundreds of objects with maybe a little hike in CPU usage?

I love real-time physics in games... I take it as a personal offense when a game (like Call of Duty 2) doesn't include it. But before we go off half-cocked buying an add-in card for it, why don't we see how well regular processors can handle it first? I haven't seen any studies on that...

Although paying $50 for an X1600 does make it pretty attractive :) 


Basically, I'm saying that specialized hardware is excellent for doing things, but I don't think we will ever need a separate card to process physics. No matter how much physics we have to process, I think any standard processor can handle it. I personally believe one of three things should happen:

1) A PPU is built into all future high-end graphics cards;
2) we leave things how they are and let CPUs handle it; or
3) we put a dedicated physics processor chip on motherboards.

Unlike graphics, which are always changing and evolving, physics is always a bunch of mathematical algorithms (heh, I sound like a scientist saying that). It isn't going to need constant new hardware, and the more powerful CPUs get, the less we need an add-in card...
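
A "study" like that is actually easy to start on your own machine. This C++ sketch (object counts and numbers invented; there's no collision detection, so it only gives a floor on the cost, not a full engine's) times one 60 Hz frame's worth of integration for ten thousand debris objects:

Code:
#include <chrono>
#include <cstdio>
#include <vector>

// Crude benchmark: integrate N free-flying objects for one frame
// and print the cost. Collisions and constraints would add more.
struct Obj { float x, y, z, vx, vy, vz; };

int main() {
    const int N = 10000;
    const float dt = 1.0f / 60.0f, g = -9.81f;
    std::vector<Obj> objs(N, {0.f, 10.f, 0.f, 1.f, 5.f, 0.f});

    const auto t0 = std::chrono::steady_clock::now();
    for (auto& o : objs) {
        o.x += o.vx * dt;
        o.y += o.vy * dt;
        o.z += o.vz * dt;
        o.vy += g * dt; // gravity
    }
    const auto t1 = std::chrono::steady_clock::now();
    const double us =
        std::chrono::duration<double, std::micro>(t1 - t0).count();
    std::printf("%d objects integrated in %.1f microseconds\n", N, us);
}
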
July 14, 2006 2:42:48 PM

The cards are too expensive. Give it a year or two and prices are sure to drop.
Also, Nvidia is trying to put the physics processor on the graphics card itself instead of having it on a separate PCI card.
July 14, 2006 3:39:29 PM

I totally agree with you, Cody. My second core doesn't do much work when gaming, and giving it the physics calculations wouldn't harm performance. I also agree with the points about AI; it basically sucks in all games. Sure, FEAR's was good, but it's just an extra for loop. I would love to see realistic AI. Has anyone here played the new Tomb Raider? The AI really is desperately bad. At one point I was standing about five feet from a guy while he continued to fire in the opposite direction; I was the only person in the room. It was like he got stuck.

edit

I really need to go back to English lessons. You wouldn't think I got a B in English at school, would you? lol
July 14, 2006 4:21:02 PM

Quote:
I really need to go back to English lessons. You wouldn't think I got a B in English at school, would you? lol


Nope. :lol: 

I'm a fan of nVidia's approach to physics calculation: SLI physics. I currently have a 7900 GT, an Asus M2N-SLI Deluxe, and a currently unused SLI bridge. Next year, when DX10 cards become available, I'm going to purchase a card of roughly the same value as my 7900 GT (most likely in the $200-300 range).

As I understand it, physics calculation is not a part of DirectX, but a different set of calculations to produce the desired effects - that's why the physics cards from Ageia don't use DirectX.

So... instead of selling the 7900 GT or giving it to my brother, I can shift it down to the second PCIe slot, slip in the new DX10 card, connect the bridges, boot, install the newer drivers (Windows and Linux), and voila! Gorgeous physics processing, all for the price of whatever I paid for the 7900 GT last month plus the price of the newer card, which I'll need for the new DX10 effects anyway.

I see that as a much more useful system than a dedicated PhysX board. So what if the PhysX method is prettier? It'll bog down the machine because the video card may not be able to render all the new stuff on the screen. With SLI physics, I have a hunch that performance will be better at initial release because there are two buses at work: the PCIe bus takes the info from the two cards to the rest of the system and back, and the bridge provides its own bandwidth between the two cards. More synchronization means more performance.

And since I can reuse parts, I save money. That, and the specs of a PhysX card are equivalent to a cheaper GeForce FX card at three times the price. Waste of money.
July 14, 2006 6:32:16 PM

What? You mean you can't install DX10 on a 7900 GT in the future?

The box is labelled 'Vista Ready'; in other words, DX10 ready.
July 14, 2006 6:54:07 PM

A 7900 GT will someday have a DX10 driver, but it will never have all of the DX10 features.

It's a DirectX 9c class card...
July 14, 2006 6:55:28 PM

Quote:

I'm a fan of nVidia's approach to physics calculation: SLI physics. I currently have a 7900 GT, an Asus M2N-SLI Deluxe, and a currently unused SLI bridge.


Well, actually, ATi announced that you could use a second GPU (as cheap as the X1600, and it performs better than Ageia's PhysX) several months before nVidia... Look Here to see their info page about it. It's good that both companies have physics solutions; we know nVidia and ATi will blow Ageia right out of the water!

Quote:
What? You mean you can't install DX10 on a 7900 GT in the future?

The box is labelled 'Vista Ready'; in other words, DX10 ready.


Yes, of course you can use DX10 on a 7900 GT! When it's officially released, it will work on any card that can run DX9, but of course DX10 games are going to run a lot slower on it because there's no direct hardware support. DX10 is completely changing the way cards render things (there's a smaller "overhead"; I'm not really even sure what that means). Basically, DX10 will run really well on cards that support it when it's released. But that's a year or more in the future.
July 14, 2006 7:16:39 PM

Quote:
What? You mean you can't install DX10 on a 7900 GT in the future?

The box is labelled 'Vista Ready'; in other words, DX10 ready.


Yes, of course you can use DX10 on a 7900 GT! When it's officially released, it will work on any card that can run DX9, but of course DX10 games are going to run a lot slower on it because there's no direct hardware support. DX10 is completely changing the way cards render things (there's a smaller "overhead"; I'm not really even sure what that means). Basically, DX10 will run really well on cards that support it when it's released. But that's a year or more in the future.

Vista will be using DirectX 9 for its effects, not DirectX 10. It will support DX10, but will not use it to create the Aero glass interface. Windows XP supports DirectX 10 but doesn't use it. A card that says "Vista-Ready" does not mean that it supports DX10.

And Cody, well... you're sort of right. The architecture of the card must be designed around the renderers it will be using; in the case of the 7900 GT, DirectX 9.x and OpenGL 2.x. You can install DirectX 10 and OpenGL 3.x on a current-gen card like you said, and it will run slower like you said, but you left out one piece of information.

DirectX 9 cards with DirectX 10 installed won't show any features of the new renderer that the old one doesn't have. Example: I have an old Inspiron 2600 notebook with integrated Intel Extreme graphics. I installed DirectX 9.0c (the latest version) and Halo. On that older machine, which only has full support for DirectX 7, it doesn't show any features exclusive to DirectX 8 and 9.

Specular is disabled, many textures are shown at less than half their full appearance, there's no shininess on metal surfaces, and colors aren't fully drawn (the Master Chief is gray, not green, and the red and blue Covenant are indistinguishable except by shape). On a fully DirectX 9-compliant video card (e.g. my new 7900 GT), it shows Halo in its full glory, which by today's standards isn't really all that impressive. :?

Now, since DirectX 10 is a complete rewrite, not based on any previous version, just imagine the visual effects it will support, and what kind of hardware needs to support it!

And no BIOS flash or driver update can make that card DX10-compliant; only new hardware can.
July 14, 2006 7:24:02 PM

Quote:
What is with Ageia, NVidia and ATi and this physics stuff? Physics is not really that important, and you can simulate physics with simple mathematics. I do not really care if my grenade sends a vehicle flying three feet with a PPU or two feet with a simulated calculation (I will never see it in real life to say whether it was realistic or not). Physics is not important enough for a dedicated processor.

Now, AI would be good: some processor which handles the standard AI algorithms and branch predictions. That would make games better.

As for graphics, HDR and Bloom are going in the right direction. I would like to see more of this, and also better material handling of textures (done on the GPU), maybe material properties for texture maps, so a texture can have more than one material property. There is still a long way to go in the visuals department, and companies such as NVidia and ATi should be concentrating on these instead.

What do you think? What improvements can you think of to make gaming a better experience?


2 words: Destructible Terrain

From some articles I have read (sorry, too pressed for time to Google sources), when physics processors are in common use we will begin to see destructible terrain in online games like BF2 (more likely BF4), etc...


That is where the physics processors will come into their own.
July 14, 2006 7:39:42 PM

Right on. Ageia, nVidia, or ATI: it doesn't really matter in the grand scheme of things.

Say someone has a motherboard with only one PCIe slot, or even an AGP slot for that matter, but wants physics processing without purchasing a new motherboard. Go with Ageia's PCI solution.

More and more people are getting SLI/Crossfire motherboards, so they can take advantage of those solutions. It really doesn't matter all that much which is chosen, because those parts are just big, powerful calculators.

And physics processing is just taking whichever calculator you choose and telling it, "Here, you do this problem and hand it over to this other calculator to put it on my screen." I feel that nVidia's solution is the least complicated and offers the most cost-effective way to calculate physics.

Now, some people are saying that nVidia's and ATI's methods aren't true physics. Supposedly, those systems won't be able to do all the physics calculations of a dedicated board.

I doubt that. It's the same math; the OS driver just sends it to a different calculator. The physics math isn't the problem, it's the calculator.
July 14, 2006 7:49:32 PM

Quote:
Now, some people are saying that nVidia's and ATI's methods aren't true physics. Supposedly, those systems won't be able to do all the physics calculations of a dedicated board.

I doubt that. It's the same math; the OS driver just sends it to a different calculator. The physics math isn't the problem, it's the calculator.


I doubt that as well. I think the only difference is that ATi (I don't know about nVidia, sorry) will use a physics engine called Havok, as opposed to Ageia using PhysX. All games up to this point have been using their own integrated physics engines, and I personally think there's now going to be a standard for physics engines, which is great. If it were up to MS and Apple, each would have their own version of the internet. Instead, some smart people came along and said "Let's use HTML so we can all see it!", and it's getting that way for physics, thank god.

@ your other post:
Thanks for reminding me about the information I left out (I actually knew about it, just forgot to mention it). DX10 on DX9 hardware will of course run slower (like you said, Vista currently is DX9, not 10), but also, if you're using a DX9 card, you won't get some of the new features [which are still in development]. Kind of like if you used a DX8 card for a DX9 game: you wouldn't get effects like heat blur and bump mapping, but it would still run.

Don't forget that DX X (heh, kind of like Mac OS X) is still about a year away, people.
July 14, 2006 7:54:22 PM

Quote:
What is with Ageia, NVidia and ATi and this physics stuff? Physics is not really that important, and you can simulate physics with simple mathematics. I do not really care if my grenade sends a vehicle flying three feet with a PPU or two feet with a simulated calculation (I will never see it in real life to say whether it was realistic or not). Physics is not important enough for a dedicated processor.

Now, AI would be good: some processor which handles the standard AI algorithms and branch predictions. That would make games better.

As for graphics, HDR and Bloom are going in the right direction. I would like to see more of this, and also better material handling of textures (done on the GPU), maybe material properties for texture maps, so a texture can have more than one material property. There is still a long way to go in the visuals department, and companies such as NVidia and ATi should be concentrating on these instead.

What do you think? What improvements can you think of to make gaming a better experience?


2 words: Destructible Terrain

From some articles I have read (sorry, too pressed for time to Google sources), when physics processors are in common use we will begin to see destructible terrain in online games like BF2 (more likely BF4), etc...


That is where the physics processors will come into their own.

That would be sweet. I am sick of my rocket not blowing sh*t up.
July 14, 2006 8:06:52 PM

Quote:
In all honesty, I would think that the whole physics thing (the PPU, anyway) will likely be integrated into future graphics cards, or maybe onto mobos.


Well, nVidia's method does that without changing the hardware. SLI physics was only named that to get people to buy another video card, and it works. A video card can compute gigaflops of information (a lot of information) much faster than a CPU can. As such, there are a lot of unused clock cycles that could be put to use.

Supposedly, SLI physics will employ some of the unharnessed power of the video card to do both rendering and physics calculation on the same card. It won't be as fast as using one dedicated card to render and the other to do only physics, but if they can do that with one card, that's pretty good.
July 14, 2006 8:10:25 PM

Quote:
I have mixed feelings about PPU's. I think it's an excellent idea on paper. And for someone to have thought about it, well kudos to them.

I agree; I do have mixed feelings. It's a great idea to dedicate a chip to physics, but I think integrating it into all modern video cards might be a better option. EDIT: as Astronout said above.
Quote:

Yeah, anyway, I also agree that physics isn't really an important factor in gaming at the moment. I mean, graphics the way they are now, although good, aren't of the quality or realism to need physics to back them up anyway.

I disagree there totally. It's truly amazing to be running from a grenade in FEAR, knocking over stools and chairs in the process, and seeing bottles and weapons and sometimes even a hammer fly past you from the explosion. It's even cooler to knock someone out in Splinter Cell: Chaos Theory and throw them off a roof to see them flop on the ground.

I have mixed feelings as well. My opinion is that physics is essential for realism (in many cases, not all), but it's not time to dedicate a whole separate card to it yet.
July 15, 2006 4:42:00 PM

We have dual-core CPUs now, and one out of 50 games actually uses more than what one core can churn out.
In 2010 Intel promises to have 32-core CPUs.
What in the hell will we ever need PPUs for when we have plenty of processing power now?
July 15, 2006 5:16:49 PM

Quote:
Ummm... I think you missed the boat on what we're talking about, but yeah.


No I didn't. I'm saying PPUs seem completely useless with all this CPU power, physics being necessary or not.
July 15, 2006 10:13:30 PM

I think a card that does nothing but process physics would be awesome. I can't wait to see destructible terrain. :D 
July 15, 2006 10:15:12 PM

Red Faction 1 and 2...
July 15, 2006 10:49:58 PM

Quote:
I really need to go back to English lessons. You wouldn't think I got a B in English at school, would you? lol


Nope. :lol: 

I'm a fan of nVidia's approach to physics calculation: SLI physics. I currently have a 7900 GT, an Asus M2N-SLI Deluxe, and a currently unused SLI bridge. Next year, when DX10 cards become available, I'm going to purchase a card of roughly the same value as my 7900 GT (most likely in the $200-300 range).

As I understand it, physics calculation is not a part of DirectX, but a different set of calculations to produce the desired effects - that's why the physics cards from Ageia don't use DirectX.

So... instead of selling the 7900 GT or giving it to my brother, I can shift it down to the second PCIe slot, slip in the new DX10 card, connect the bridges, boot, install the newer drivers (Windows and Linux), and voila! Gorgeous physics processing, all for the price of whatever I paid for the 7900 GT last month plus the price of the newer card, which I'll need for the new DX10 effects anyway.

I see that as a much more useful system than a dedicated PhysX board. So what if the PhysX method is prettier? It'll bog down the machine because the video card may not be able to render all the new stuff on the screen. With SLI physics, I have a hunch that performance will be better at initial release because there are two buses at work: the PCIe bus takes the info from the two cards to the rest of the system and back, and the bridge provides its own bandwidth between the two cards. More synchronization means more performance.

And since I can reuse parts, I save money. That, and the specs of a PhysX card are equivalent to a cheaper GeForce FX card at three times the price. Waste of money.

From my understanding, with the GPU rendering physics, it won't affect gameplay. What I understand is that the GPU doesn't talk back to the CPU much, which only lets it render things such as cloth, hair, maybe the ground. However, a dedicated processor like AMD has suggested, connected to the CPU via an HT link, would be able to render the wall that blows up and is hiding multiple NPCs...

My understanding of this is very basic, so it may be skewed.
July 17, 2006 5:51:56 AM

Although I can understand the uses of PPUs, there is still an inherent problem with physics calculations, which was pointed out in the Tom's article: multiplayer.
Physics calculations need to be done per frame; you cannot draw the frame unless you have calculated everything. Whilst this means a PPU is good, it also means that everyone must have a PPU, which is a big problem, not just the physical hardware but also the driver VERSION etc.; otherwise different setups will produce different results (positions of fragments, etc.).
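
(For what it's worth, the standard trick engines use to keep simulations in lockstep is a fixed timestep: physics always advances in identical increments no matter how fast each machine renders, so identical inputs give identical results. A bare-bones C++ sketch of the accumulator pattern follows; the frame times and step function are invented placeholders.)

Code:
#include <cstdio>

// Fixed-timestep loop: physics advances in constant DT chunks, so
// two machines fed the same inputs step the world identically,
// regardless of their render framerates.
const double DT = 1.0 / 60.0;

void stepWorld(double /*dt*/) { /* placeholder: advance all bodies */ }

int main() {
    double accumulator = 0.0;
    // Pretend these are measured render-frame times, in seconds:
    const double frameTimes[] = {0.016, 0.034, 0.009, 0.041};
    for (double frame : frameTimes) {
        accumulator += frame;
        while (accumulator >= DT) { // catch up in fixed chunks
            stepWorld(DT);
            accumulator -= DT;
        }
        std::printf("rendered a frame; %.4f s left over\n", accumulator);
    }
}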

Things like AI are less important in multiplayer, although "bots" may be more intelligent for those with AI hardware (if they ever produce it). This means that even in multiplayer, different hardware and drivers do not affect it as much. Also, it is not "real time": AI can be calculated more slowly, and frame drawing does not depend on it.
Graphics is the same; nicer graphics do not affect multiplayer.

I think too much emphasis is put on PPUs at the moment, when hardware manufacturers should be working on something different.
July 17, 2006 6:27:59 PM

Quote:

From my understanding, with the GPU rendering physics, it won't affect gameplay. What I understand is that the GPU doesn't talk back to the CPU much, which only lets it render things such as cloth, hair, maybe the ground.


That's not the case. The VPU can give information back to the processor, and at a higher rate than the PPU; however, right now the implementation under Havok chooses not to. So it's not a hardware limitation but a software/API limitation, and it's unknown whether M$ or ATi and nV will change this anytime soon, but neither ATi nor nV found it to be a big concern, as the game-dependent physics were easily handled by current high-end CPUs.

[H] does a good job of covering most of this by letting the principal players talk about it:

http://enthusiast.hardocp.com/article.html?art=MTA5Nywx...
July 17, 2006 6:49:46 PM

Quote:
Windows XP supports DirectX 10, but doesn't use it.


Windows XP does not support or use DX10 in any way, shape, or form. As of now, it never will.

And dedicated PPUs are a great idea. Ageia's isn't doing well because of cost, lack of support, and under-usage. The effects we've seen it generate are far from the extent of what a PPU is capable of. And Ageia's solution only lowers performance because it's generating a lot more calculations and has to talk to the CPU for each of them across the slow PCI bus. Now, if a PPU had a direct link to the CPU (a la AMD's 4x4 platform), or if it could bypass the CPU altogether, it would have much less of a performance impact on the game, beyond the GPU having to draw all those extra objects on the screen.

I for one agree with the guy who prefers more AI and physics in a game over pretty graphics. What's the point of making a car look completely real when, if I fire a rocket at it, it just gets its windows blown out and a black texture applied? I want to see it explode realistically, in a non-canned way. My dream of video gaming is to be able to completely interact with every single object in a game world. If I want to pick up a glass and throw it at an enemy's face, I want it to hurt him and break when it hits the ground. If I want to fire a rocket into a bus full of school children, I want to see it explode realistically and see little Johnny's head roll past me......... did I go too far there?
July 17, 2006 9:01:11 PM

Quote:

Now, if a PPU had a direct link to the CPU (a la AMD's 4x4 platform), or if it could bypass the CPU altogether, it would have much less of a performance impact on the game, beyond the GPU having to draw all those extra objects on the screen.


The PPU couldn't completely bypass the CPU, or else it would be no different from the current implementation of VPU physics; that CPU link is supposedly what's so 'great' about the PPU.

I think if the PPU had existed before the potential of VPU-based physics, it would have been welcomed with open arms and given the time and support to succeed. But with the possibility of our used graphics cards doing the work potentially better and definitely cheaper, it's going to be a tough battle for them, especially since there isn't a killer app for them, and by the time there might be, there will probably be more solid and serious competition from the VPUs.

Quote:
I for one agree with the guy who prefers more AI and physics in a game over pretty graphics. What's the point of making a car look completely real when, if I fire a rocket at it, it just gets its windows blown out and a black texture applied? I want to see it explode realistically, in a non-canned way.


I prefer a balance. Would I give up a little Oblivion bling for better AI? Sure. But for better physics? No way. Would I give up graphics in a flight sim, or in something like UT2K4, for better physics? Fo' sure! But then you have a title like Crysis. Would I prefer realistic-looking foliage and lighting with slightly canned physics (still destructible and all, but just 10-20% of what's possible)? Yeah, I'd prefer the really nice scenery and better AI. The fact that a branch may not snap back exactly as it would in real life, or that a building falls down in an eight-point segment with a probability engine for surrounding damage rather than truly calculated physics? I don't know if I'll care as much.

The question, IMO, is the threshold for each in the attempt to make it realistic. Do I want proper height, drop, wind, caliber, and angle calculations for my sniping experience? Sure! Do I need to ensure that the corpse rolls down the hill properly, disturbing the foliage as it passes and potentially causing a dirt slide? Nope! But do I also want pretty shiny water while a grenade beside the wooden crate the bad guy's hiding behind doesn't so much as disturb the box? Nope.

It's about varying degrees, IMO, and that's why I don't think gameplay physics will be so intensive on the CPU: it should be a momentary thing for everything other than racing/flying games.

Quote:
My dream of video gaming is to be able to completely interact with every single object in a game world. If I want to pick up a glass and throw it at an enemy's face, I want it to hurt him and break when it hits the ground. If I want to fire a rocket into a bus full of school children, I want to see it explode realistically and see little Johnny's head roll past me......... did I go too far there?


Yes, but not because of the graphic violence (personally I'd prefer a busload of nuns and kittens, though). The level of physics would be wasted if those rolling craniums were simply rendered as featureless spheres, so even there you'd want balance, where displacement mapping would allow progressively greater damage to the skull's apparent integrity before the reward of grey goo is brought forth.

The whole point is that there needs to be balance, and this idea of 8 billion calculations on every last atom is overkill; anything less than what seems to require dedicated hardware can be done by many techniques, including VPU physics, or even GPU+CPU with a dual/quad core.

So while there's no right answer, at this time there's also no real reason to decide, let alone fork out the coin, unless you want an early model for the fun of playing with it. And personally, I'd prefer playing with pre-release VPU physics software support rather than a PPU.
July 17, 2006 10:22:20 PM

As GGA mentioned, but didn't quite directly get at, it appears that the PPU is a case of too little, too late.

PPUs really only fill a small niche; people already have a hard enough time coughing up the cash for an expensive graphics card setup. You may all have heard of those dual-GPU SLI boards, but in reality only a small number have sold. Even if the PhysX card did have more to show for itself than it does now, most people didn't buy one because they simply bristled at the concept of shelling out yet another $300 US to make their gaming rig complete.

Yes, lots and lots of physics simulation is the way of the future. However, it doesn't have to come from a PPU, and a PPU is not the only way to provide the intensive calculations necessary.

Obviously, we've all hit upon the two other big sources: CPU-based (multi-core) calculations, and GPU-based calculations.

When it comes to CPU-based calculations, if one (or more, in the future) entirely separate processor thread is devoted to nothing but physics, it could be fairly easy to get a lot of physics calculation out of the processor alone. Since this is the CPU we're talking about, everyone has one, and even if they don't have enough cores (remember, in five-plus years, no single-cores will be made), the work can simply be run on fewer cores at merely a performance penalty, rather than the player being shut out entirely because they don't have the hardware.
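
As a sketch of that graceful scaling, this C++ snippet (all names and numbers invented) chops a body list into one chunk per available hardware thread; on a single-core machine the same code simply runs as one chunk, slower:

Code:
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Scale physics to whatever cores exist: split the bodies into one
// chunk per hardware thread. Fewer cores => fewer, larger chunks.
struct Body { float y = 100.f, vy = 0.f; };

void integrate(std::vector<Body>& b, size_t lo, size_t hi, float dt) {
    for (size_t i = lo; i < hi; ++i) {
        b[i].vy += -9.81f * dt;
        b[i].y  += b[i].vy * dt;
    }
}

int main() {
    std::vector<Body> bodies(100000);
    const unsigned cores =
        std::max(1u, std::thread::hardware_concurrency());
    const size_t chunk = bodies.size() / cores + 1;

    std::vector<std::thread> pool;
    for (unsigned c = 0; c < cores; ++c) {
        const size_t lo = c * chunk;
        const size_t hi = std::min(bodies.size(), lo + chunk);
        if (lo >= hi) break;
        pool.emplace_back(integrate, std::ref(bodies), lo, hi, 1.f / 60.f);
    }
    for (auto& t : pool) t.join();
    std::printf("stepped %zu bodies on %u core(s)\n", bodies.size(), cores);
}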

As for GPU-based calculations, a modern GPU provides power vastly in excess of what the PhysX chip offers. With the advent of the unified shader architecture in future cards (most notably the Radeon X2000 series), there will be a massive pool of INCREDIBLY powerful, specialized processors; the architecture will mean that they are more flexible than pixel shader units, and perhaps using them for physics will be incredibly simple. The impact of this on graphics isn't actually as great as PhysX's boosters make it out to be.

The way I see it, the PhysX card could have succeeded had it come out several years ago. But now, with the advent of more desirable alternatives, Ageia is trying to grab hold of a rapidly closing window; competing technologies are ramping up their solutions at the same time that actual applications are appearing for it.
July 17, 2006 10:45:52 PM

I'm sure you didn't see my final statements; when I left work I didn't quote it right, so the last line was gone, and some things may appear different at the end than what you saw before.

Quote:
As GGA mentioned, but didn't quite directly get at, it appears that the PPU is a case of too little, too late.


And even too early. (If there were killer apps, if they were here now, with nothing for the VPU proponents to reply with, hey, no challenge: PPU rulz!)

Quote:
Yes, lots and lots of physics simulation is the way of the future. However, it doesn't have to come from a PPU, and a PPU is not the only way to provide the intensive calculations necessary.


Exactly. When the PPU was announced, I said COOL, especially when there were grumblings that it would be added to Oblivion (back when just about everything was going to be added). But as we got competing solutions, it became a question of what the 'NEED' for each piece of the puzzle was, and even when it was believed that VPUs had no option for interactive physics, the need for the PPU still seemed limited, as that aspect of the game seemed to be something the new CPUs could easily handle: 90+% shiny physics of the ripple, particle, visual kind, and <10% the kind of game-dependent interactive physics. Now that there is that option, it makes even less sense.

Quote:
the work can simply be run on fewer cores at merely a performance penalty, rather than the player being shut out entirely because they don't have the hardware.


Or even just run it as a slider level, where the calculations are reduced somewhat. Not ideal, but if we're looking at support for the low end, then it's similar to supporting people with lesser graphics.

Quote:
As for GPU-based calculations, a modern GPU provides power vastly in excess of what the PhysX chip offers. With the advent of the unified shader architecture in future cards (most notably the Radeon X2000 series), there will be a massive pool of INCREDIBLY powerful, specialized processors; the architecture will mean that they are more flexible than pixel shader units, and perhaps using them for physics will be incredibly simple. The impact of this on graphics isn't actually as great as PhysX's boosters make it out to be.


And if you think about how the geometry shaders work, you could reduce some calculations on the fly. And the ability to adjust the load of those ALUs on the fly would offer great new possibilities (think about the 48 shader units now in the R580: some assigned to vertex, 4 to geometry, and then 4 to physics; you'd still have a more powerful solution than an X1900XTX based on speed alone). And we all know that being able to dynamically adjust these proportions would allow for more graphical or physics power on demand. That to me is an attractive proposition.

Quote:
The way I see it, the PhysX card could have succeeded had it come out several years ago. But now, with the advent of more desirable alternatives, Ageia is trying to grab hold of a rapidly closing window; competing technologies are ramping up their solutions at the same time that actual applications are appearing for it.


And the thing is that, given M$'s support, it seems like Ageia's only hope is if UT2K7 offers something special. It's possible, but unlikely. Even their deal with the PS3 seems very niche and of little hope, and being tied to the fate of the PS3 looks risky right now, with the PS3 appearing to be in a little trouble itself.

That VPU physics can rely on what are otherwise essentially throw-away cards is one heck of a benefit. Even if it were 50% performance and not the 190+% ATi claims, would you prefer to pay ~$50 for a used X1600 (less than $100 new) or $200-250 for a PhysX card?

Sure, it's all theory right now, but only the PPU requires a risk on something with no other uses (if it helped encode video or do anti-virus, then it would have a use until PPU physics finally blossomed).
July 17, 2006 11:30:47 PM

Quote:

From my understanding, with the GPU rendering physics, it won't affect gameplay. What I understand is that the GPU doesn't talk back to the CPU much, which only lets it render things such as cloth, hair, maybe the ground.


That's not the case. The VPU can give information back to the processor, and at a higher rate than the PPU; however, right now the implementation under Havok chooses not to. So it's not a hardware limitation but a software/API limitation, and it's unknown whether M$ or ATi and nV will change this anytime soon, but neither ATi nor nV found it to be a big concern, as the game-dependent physics were easily handled by current high-end CPUs.

[H] does a good job of covering most of this by letting the principal players talk about it:

http://enthusiast.hardocp.com/article.html?art=MTA5Nywx...

Kind of sounds like a yes and no:

Imagine a case where you blow up a wall in a game and thousands of pieces of debris fall to create a mountain that your character climbs to escape a certain area. In such a situation, every player of that game would need to have a system that supports GPU physics acceleration. If they don't, the game comes to a grinding halt because there's too much information to process. For developers, this is unacceptable. The only fair way is to include elaborate effects physics only, and leave basic gameplay physics to be processed on the CPU so that the widest number of people can enjoy the game. That way, if you don't have a system that supports GPU physics acceleration, you can still enjoy the game; you just won't get quite the level of immersion you would with the effects physics turned on.
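
That split is easy to picture in code. Here's a toy C++ sketch of the dispatch being described (the flag and function bodies are invented): gameplay physics always takes the CPU path so every player gets the same result, while effects physics only runs where acceleration exists:

Code:
#include <cstdio>

// Gameplay physics (anything that changes outcomes) always runs on
// the CPU so all players agree; effects physics (eye candy) is
// optional and offloaded only if acceleration is present.
bool gpuPhysicsAvailable = false; // imagine this is probed at startup

void stepGameplayPhysicsOnCPU() {
    std::puts("CPU: doors, debris mountains you can climb");
}
void stepEffectsPhysicsOnGPU() {
    std::puts("GPU: sparks, cloth, cosmetic rubble");
}

void stepFrame() {
    stepGameplayPhysicsOnCPU();    // mandatory, identical for everyone
    if (gpuPhysicsAvailable)
        stepEffectsPhysicsOnGPU(); // extra immersion only
}

int main() {
    stepFrame();                   // machine without GPU physics
    gpuPhysicsAvailable = true;
    stepFrame();                   // machine with GPU physics
}
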
July 17, 2006 11:45:51 PM

Anyone remember 3dfx and their 3D pass-through add-in card? One can only hope that Ageia travels the same path. $300.00 for a few more specks on the screen? Blowing up terrain or seeing a flashlight fly by me is not in any way worth $300.00; it's a gimmick, nothing that a decent dual-core CPU couldn't handle, and the CPU will do even more than just physics calcs :) . With recent price cuts for both brands, I don't see any reason to buy this dead-end product. Just go get a better CPU and you'll be tons happier than if you wasted the money on this physics card. I'll pay for a sound card, a video card, or a TV card, but I won't pay for an extra CPU on a PCI card dedicated to one thing.

Don't get me wrong, I like things to look pretty in a game, but I usually spend my time playing the game, not watching it happen so I can see some bits fly around. Perhaps, and only perhaps, the software version might improve single-player games, but I don't see it doing much that most people will pay attention to while playing an online game.

Oh, and did anyone mention PCIe x1 to these Ageia boys? At least then it wouldn't use up a useful PCI slot :) , which are getting fewer and fewer on new motherboards. And I don't understand why: only NICs, controller cards, and one TV card are available for PCIe, when last I checked I already had a SATA controller, a bunch of USB ports, and GB LAN on the motherboard. Sorry, got off topic, but you have to despise a useless slot taking up space on your board for absolutely no reason; it's almost like having a bunch of ISA slots :)
August 2, 2006 1:18:29 AM

The physics card is a good idea as far as I'm concerned. I agree that a multicore processor could deal with AI and physics nicely, but on my CURRENT system (which I don't plan to upgrade anytime soon, processor- and GPU-wise), I think an add-on card would be great.

They need to make it a lot cheaper though (a LOT cheaper), like 50-100 bucks. Then we can all get one and be happy.
August 2, 2006 4:09:49 AM

Quote:
What is with Ageia, NVidia and ATi and this physics stuff? Physics is not really that important, and you can simulate physics with simple mathematics. I do not really care if my grenade sends a vehicle flying three feet with a PPU or two feet with a simulated calculation (I will never see it in real life to say whether it was realistic or not). Physics is not important enough for a dedicated processor.

Now, AI would be good: some processor which handles the standard AI algorithms and branch predictions. That would make games better.

As for graphics, HDR and Bloom are going in the right direction. I would like to see more of this, and also better material handling of textures (done on the GPU), maybe material properties for texture maps, so a texture can have more than one material property. There is still a long way to go in the visuals department, and companies such as NVidia and ATi should be concentrating on these instead.

What do you think? What improvements can you think of to make gaming a better experience?


I agree with you on most points, with the exception of physics not being important enough for serious consideration. Integration of physics processing into the GPU is very important, IMO. There is an incredible amount of realism missing from games because of a lack of power to compute complex physics. I think anyone who played HL2 when it first came out can understand what I mean. When I first picked up the gravity gun in HL2, I said to myself, "Wow! I can't believe this has been missing from games for so long." And HL2's physics engine is just the beginning.

Ever wonder why in modern games you still can't blow holes in walls with rocket launchers, or completely demolish a car? The reason is simple: it's a complex thing to do and thus requires a complex and powerful processor.

Now... I do agree that a dedicated physics processor is unnecessary. GPUs are powerful enough to do it on their own. However, it's going to take a lot of work to make that happen. ATI is already working on integrating physics acceleration into its GPUs; nVidia is reportedly working on it too.

Make no mistake, physics acceleration is the next big thing in gaming. It'll do for gaming what pixel shading did for graphics. However, it coming in the form of add-in boards is simply not going to happen.
August 2, 2006 8:15:29 AM

Quote:
The physics card is a good idea as far as I'm concerned. I agree that a multicore processor could deal with AI and physics nicely, but on my CURRENT system (which I don't plan to upgrade anytime soon, processor- and GPU-wise), I think an add-on card would be great.

They need to make it a lot cheaper though (a LOT cheaper), like 50-100 bucks. Then we can all get one and be happy.


I wholeheartedly agree. It's annoying that all these solutions, like SLI physics, will require new graphics cards, etc., at a high cost (although if Nvidia made it so that I could use a 7900 GTX for physics alongside a DX10 card, I would be happy :) ). A dedicated PPU for 100 bucks would be fine: slot it in, and your old system is ready to go with the newer games.
August 2, 2006 5:06:55 PM

Meh. I'd consider getting one if they were both cheaper and didn't hinder performance. Currently, neither is the case.

That's why I think the PPU will be integrated into the GPU by nvidia and ATI. Neither company has any intention of creating a whole new chip and package just for physics.

And we all know where Ageia is heading... (bankruptcy)
August 3, 2006 8:57:30 PM

Quote:

All games up to this point have been using their own integrated physics engines, and I personally think there's now going to be a standard for physics engines, which is great.

I don't know about all of them using their own, but...

I do know that many game developers have chosen to implement third-party physics engines in the past. Havok, for instance, was already used back in Max Payne and was also used by Valve in some (all?) of their current titles. So standards do exist, at least for the software. It's the hardware that needs to be brought closer together, so that companies producing these kinds of platforms can learn how to optimize their code more effectively.

Now, about the card itself... I'm really starting to doubt whether it would be worth the money. I have given it some thought and come to the conclusion that in order for this kind of hardware to become successful, it will have to be integrated into current GFX cards. Maybe it could be done in the form of an "add-on chip" that you plug into a socket on future cards.

Surely that would increase the lifespan of the GPU, don't you think? :roll:
September 30, 2009 9:02:23 PM

Physics processing has improved just like everything else. If you compare the physics of modern games to games from 20 years ago, it's amazing: so many more particles and objects! The Euphoria software technology is incredible. Just as graphics will look better in the future, they'll behave better too. Internet bandwidth will have to improve for online games to keep up, because the server will have to ensure that each client is seeing the same thing in the environment; otherwise, using the environment as a tool would be unfair. You'd want to make sure that things break the same way from one client to another. Physics processing shouldn't be different from one system to another; rather, it should run faster depending on how many processor cores you have available, and it doesn't matter whether those cores are on an expansion card or part of your mainboard CPU. Someone said that it would be different, but I don't see that as a problem. Speed is what would change, not the actual physics. This means you'd have to strike a balance so that lots of people can play your game at acceptable framerates.

I see no reason why physics cards shouldn't be made. Just as we have graphics cards to act like another CPU core, we could also have physics cards to act as a physics core. We could also use extra CPU cores to do the same thing. Ultimately, it doesn't really matter where the parallelism comes from. It also needs consumer interest: if consumers don't want good physics, there won't be any demand for it. Our software at this point, in most cases, isn't advanced enough to do everything we want it to do. I think people like shiny graphics more than physics, but I think we will shift towards physics as people see what's possible with a good physics engine in place. It really is the difference between a sandbox and a linear plot. In one you have a very limited range of physics, and in the other you have an expansive range of possibilities, all of which you can employ to overcome challenges in the game. It's like overcoming a challenge 1,000,000,000,000 different ways versus 49. I'd rather have more variety in the game.