MarkG

Distinguished
Oct 13, 2004
841
0
19,010
I posted it in the CPU section, but yeah, not exactly the best of starts.

So let's get this right: the game runs slower with high-quality physics simulation and a PhysX card than it does with low-quality physics simulation on the CPU, and you're surprised by that?

That's like complaining because a game runs slower at 2048x1536 with 4xAA with all details maxed out on a Geforce 7900 than it does at 800x600 with minimum details on a GF4 MX.

Personally I suspect the PhysX card is always going to be limited by the low performance of the PCI bus, but it's a shame they don't allow you to run with high-quality physics on the CPU to give an actual, real, valid, useful comparison between the two.
 

Jak_Sparra

Distinguished
Mar 31, 2006
519
0
19,060
Remember years ago when people used to have to have a 2D card as well as a separate 3D accelerator card? (I think mine was a Creative Monster 3D {4MB onboard})

Well, that's what I think will happen with physics cards: they'll eventually be combined with graphics cards (and probably cost a lot less than two separates). It makes sense for companies to do this. If you were Nvidia or ATI, what would you do?
 

dvdpiddy

Splendid
Feb 3, 2006
4,764
0
22,780
Remember years ago when people used to have to have a 2D card as well as a separate 3D accelerator card? (I think mine was a Creative Monster 3D {4MB onboard})

Well, that's what I think will happen with physics cards: they'll eventually be combined with graphics cards (and probably cost a lot less than two separates). It makes sense for companies to do this. If you were Nvidia or ATI, what would you do?
Yeah that would be good.
 

BGP_Spook

Distinguished
Mar 20, 2006
150
0
18,680
Remember years ago when people used to have to have a 2D card as well as a separate 3D accelerator card? (I think mine was a Creative Monster 3D {4MB onboard})

Well, that's what I think will happen with physics cards: they'll eventually be combined with graphics cards (and probably cost a lot less than two separates). It makes sense for companies to do this. If you were Nvidia or ATI, what would you do?

The problem is that right now all the PPU is doing is functioning as a graphics co-processor. If that is all it ever does, then I see it becoming integrated with video cards.

If it matures and becomes as central as a video card is today, then it will have to lose the PCI bus altogether. The PCI bus is bandwidth-limited, and for the PPU to really shine it is going to have to stay very busy. What would also help is direct communication with the GPU rather than having to proxy through the CPU.
 

Pain

Distinguished
Jun 18, 2004
1,126
0
19,280
But it's going to be doing stuff that must be communicated to the CPU more than to the GPU. It's calculating the locations of items that will interact with the game and the player, not just calculating things that get displayed and then forgotten.

I personally don't see the BW of the PCI bus being an issue, but we'll see.
 

Primitivus

Distinguished
Apr 21, 2006
324
0
18,780
Remember years ago when people used to have to have a 2D card as well as a separate 3D accelerator card? (I think mine was a Creative Monster 3D {4MB onboard})

Well, that's what I think will happen with physics cards: they'll eventually be combined with graphics cards (and probably cost a lot less than two separates). It makes sense for companies to do this. If you were Nvidia or ATI, what would you do?

According to ATI, GPUs already do physics calculations, so a separate PPU is not necessary.

And in response to MarkG, the whole point of the PPU is to be able to run games as fast as before but with higher visual quality, isn't it? In the same way as someone buying a 7900GTX instead of a 6200: because they want to run games FAST with all the settings cranked up, not because they want to play good-looking slide shows.
 
I think video cards will have a physics accelerator on-board.

Why? If the hardware already present in the graphics card can do it, then there's no need for a separate physics co-processor; just have the unified architecture call for a physics calculation, just as it does for pixel, vertex, and geometry work. No need for a special part that contributes nothing when physics aren't called for (like lighting effects).
 
So let's get this right: the game runs slower with high-quality physics simulation and a PhysX card than it does with low-quality physics simulation on the CPU, and you're surprised by that?

The surprise is the level of impact.

That's like complaining because a game runs slower at 2048x1536 with 4xAA with all details maxed out on a Geforce 7900 than it does at 800x600 with minimum details on a GF4 MX.

No it's not. It's like complaining that going from an X800 Pro to a GF7800GT doesn't yield much performance improvement in Oblivion because one's doing HDR and the other is just doing bloom+AA. The game comes to a standstill, and if the physics card is doing more calculations, that's fine, but doing 20 times as many calculations 4 times faster still means 20% of the performance. I'd prefer a well-rounded architecture based on less physics to a choppy game just because the boxes are falling at precisely v = √(2gh) minus air resistance, with the correct impact rebound then calculated from surface characteristics × mass × velocity with respect to F = mg, and so on.

I'll take a guesstimate if it means it looks very similar, when the choice is accurate gravity at 6fps versus estimated gravity at 30fps.
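(To make that concrete, here's a minimal sketch in Python of the guesstimate-versus-fine-grained gap. The drag model, step size, and numbers are all invented for illustration; this is not how Ageia or any real engine does it, just the cost difference in miniature.)

[code]
import math

G = 9.81  # m/s^2

def impact_velocity_cheap(height_m):
    # Closed-form free fall, drag ignored: v = sqrt(2*g*h). One operation.
    return math.sqrt(2 * G * height_m)

def impact_velocity_fine(height_m, mass_kg=1.0, drag=0.05, dt=0.0001):
    # Tiny-timestep integration with a toy linear drag term: thousands
    # of operations per object for an only-slightly-different answer.
    v, y, steps = 0.0, height_m, 0
    while y > 0:
        v += (G - (drag / mass_kg) * v) * dt
        y -= v * dt
        steps += 1
    return v, steps

cheap = impact_velocity_cheap(10.0)
fine, steps = impact_velocity_fine(10.0)
print(f"guesstimate:  {cheap:.2f} m/s in 1 step")
print(f"fine-grained: {fine:.2f} m/s in {steps} steps")
[/code]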

Personally I suspect the PhysX card is always going to be limited by the low performance of the PCI bus, but it's a shame they don't allow you to run with high-quality physics on the CPU to give an actual, real, valid, useful comparison between the two.

Well, the PCI bus is not the major issue, but it does play a part: the fact that the card needs to use it makes all communication slower than it could be, though it shouldn't be this impactful. And there is a lot of communication between the PPU, the CPU, and the VPU (yes, people, they do have to talk, because you need a resultant to tell the VPU what to draw; what's the point of the physics if you can't see them?).

As for the CPU, they've already shown that a VPU can do the physics far, FAR faster than a CPU, so they already know the CPU impact, and it would be preferable to use Havok FX for VPU-based physics if you have to compare against host-based processing in order to get a favourable showing.

For simulators, PPUs will likely be very important for getting correct physics calculations done quickly; however, adding that level of realism to such fast-paced games at such a high performance cost isn't worth the minor difference in game physics (rough estimate versus 'real') IMO.

Right now it doesn't look good; hopefully it's just early game-support issues more than anything else, because I doubt the physics differences would even be as noticeable as the HDR vs bloom differences.
 

jkay69

Distinguished
May 8, 2006
82
0
18,630
I have read all your posts with great interest and I feel that some very good points are being made, so here's my 2 cents' worth ;-)

I believe the 'IDEA' of having a dedicated PPU in your increasingly expensive monster rig is highly appealing, even intoxicating, and I believe this 'IDEA', coupled with some clever marketing, will ensure a good number of highly overpriced, or at least expensive, sales of this mystical technology in its current (inefficient) form.

For some, the fact that it's expensive and also holds such high promises will ensure its place as a 'must have' component for the legions of early adopters. The brilliant idea of launching them through Alienware, Falcon Northwest and the top-of-the-line Dell XPS600 systems was a stroke of marketing genius, as this adds to the allure of owning one when they finally launch to the retail market: if it's good enough for a system most of us can never afford but covet nonetheless, it's damn well good enough for my 'monster RIG'. This arrangement will allow the almost guaranteed sales of the first wave of cards on the market. I have noticed that some UK online retailers have already started taking pre-launch orders for the £218 OEM 128MB version; I just have to wonder how many of these pre-orders have actually been sold.

The concept of a dedicated PPU is quite simply phenomenal. We spend plenty of money upgrading our GPUs and CPUs, and quite recently Creative brought us the first true APU (the X-Fi series), so it makes sense for there to be a dedicated PPU, and perhaps even an AIPU to follow.

The question is, will these products actually benefit us to the value of their cost?

I would say that a GPU, or in fact up to 4 GPUs running over PCIe x32 (2x PCIe x16 channels), becomes increasingly less value for money with each GPU added to the equation; i.e. a 7900GTX 512MB at £440 is great bang for the buck compared to Quad SLI 7900GTX 512MB at over £1000, since the framerates in the Quad machine are not 4x those of the single GPU. Perhaps this is where GPUs could truly be considered worthy of nVidia or ATI's physics SLI load-balancing concept. SLI GPUs are not working flat out 100% of the time, and due to the extremely high bandwidth of dual PCIe x16 ports there should be a reasonable amount of bandwidth to spare for physics calculations, perhaps more if dual PCIe x32 (or even quad x16) motherboards inevitably turn up. I am not saying that GPUs are more efficient than a DEDICATED, designed-for-the-purpose PPU, just that if ATI and nVidia decided the market showed enough potential, they could simply 'design in' or add PPU functionality to their GPU cores or graphics cards. This would allow them to tap into the extra bandwidth PCIe x16 affords.

The Ageia PhysX PPU in its current form runs over the PCI bus, a comparatively narrow-bandwidth bus, and MUST communicate with the GPU in order for it to render the extra particles and objects in any scene. This in my mind creates a bottleneck, as it can only communicate at the bandwidth and speed afforded by the narrower, slower PCI bus; the slowest path governs the speed of even the fastest. This means that adding a dedicated PPU, even a very fast and efficient one, would be severely limited by the bus it runs over. This phenomenon is displayed in all the real-world benchmarks of the Ageia PhysX PPU I have seen to date: the framerates actually DROP when the PPU is enabled.
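(For a rough sense of the numbers involved: the bus peaks below are the standard theoretical figures, while the physics payload is completely made up for illustration. A back-of-envelope sketch, not a measurement.)

[code]
# Theoretical bus peaks; real-world throughput is lower, and the PCI
# figure is shared by every device sitting on that bus.
pci_shared = 133e6         # classic 32-bit/33 MHz PCI, ~133 MB/s total
pcie_x1    = 250e6         # PCIe 1.x, 250 MB/s per lane, per direction
pcie_x16   = 16 * pcie_x1  # ~4 GB/s each way for a graphics slot

# Hypothetical physics traffic: 10,000 objects x 64 bytes of state,
# updated 60 times per second (invented numbers).
traffic = 10_000 * 64 * 60  # ~38 MB/s

print(f"share of shared PCI: {traffic / pci_shared:.0%}")  # ~29%
print(f"share of PCIe x16:   {traffic / pcie_x16:.1%}")    # ~1.0%
[/code]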

To counter this, I believe Ageia, through ASUS, BFG and any other manufacturing partner they sign up, will have to release products designed for the PCIe bus. I believe Ageia knows this, as the early manufacturing samples could be installed in a PCI slot as well as a PCIe slot (although not at the same time ;-) ). I believe the PCI bus was chosen for launch due to the very high installed base of PCI motherboards; practically every standard PC that might want a PPU has a PCI slot. I believe this is a mistake, as the users most likely to purchase this part in the 'premium price' period would likely have PCIe in their system, or at least would be willing to shell out an extra £50-£140 for the privilege. Although I could be completely wrong in this, as it may allow for some 'double selling': when they release the new and improved PCIe version, the early adopters will be forced to buy into it again at a premium price.

This leads me neatly onto the price. I understand that Ageia, quite rightly, is handing out the PhysX SDK freely; this is to allow maximum compatibility and support in the shortest period of time. It does however mean that the end user who purchases the card in the beginning will have to pay the full price for it: £218 for the 128MB OEM version. As time goes by and more units are sold, the installed userbase of the PPU will grow and the balance will shift; Ageia will be able to start charging developers to use its 'must have' hardware physics support in their games/software, and this will subsidise the cost of the card to the end user, making it even more affordable to the masses and therefore even more of a 'must have' for developers. This will take several generations of the PPU before we feel the full impact, I believe.

If ATI and nVidia are smart, they can capitalise on their high initial installed userbase and properly market the idea of hardware physics for free with their SLI physics; they may be able to throw a spanner in the works for Ageia while it attempts to attain market share. This may benefit the consumer, although it may also knock Ageia out of the running, depending on how effective ATI and nVidia's driver-based solutions first appear. It could also prompt a swift buyout by either ATI or nVidia, like nVidia did with 3dfx.

Using the CPU for physics, even on a multicore CPU, in my opinion is not the way forward. The CPU is not designed for physics calculations, and from what I hear CPUs are not (comparatively) very efficient at performing them. A dedicated solution will always be better in the long run. This will free up the CPU to run the OS, and also for AI calculations as well as antivirus, firewall, background applications, and generally keeping the entire system secure and stable. Multicore will be a blessing for PCs and consoles, but not for such a specific and difficult (for a CPU) task.

"Deep breath" ;-)

So there you have it: my thoughts on the PPU situation as it stands now and into the future. Right now I will not be buying into the dream, but simply keeping the dream alive by closely watching how it develops until such a time as I believe the 'Right Time' comes. £218 for an unproven, generally unsupported, and possibly seriously flawed incarnation of the PPU dream is not in my opinion The Right Time, Yet ;-)

JKay6969
 

jkay69

Distinguished
May 8, 2006
82
0
18,630
I must apologise, TheGreatGrapeApe: I accidentally voted your post as 'GOOD'. I didn't realise this was the low vote; I would like you to know that I wanted to vote 'BEST'. I read your post and thought you had some interesting points. You were not flaming anyone's views, simply giving your educated opinion. I appreciate that; too many posts to forums these days end up being slagging matches. Not cool...

Keep sharing your opinion with us; intelligent comments, in my opinion, lead to better forums.

JKay6969
 
I must apologise, TheGreatGrapeApe: I accidentally voted your post as 'GOOD'. I didn't realise this was the low vote; I would like you to know that I wanted to vote 'BEST'.

No worries, I don't think the ratings matter for anything anyway. It's either an overjoyed pat on the back or an overly pessimistic Booo!
Funny you take it more seriously than I do. :lol:

I believe the 'IDEA' of having a dedicated PPU in your increasingly expensive monster rig is highly appealing, even intoxicating, and I believe this 'IDEA', coupled with some clever marketing, will ensure a good number of highly overpriced, or at least expensive, sales of this mystical technology in its current (inefficient) form.

I have no problem with the idea, because on the surface it makes a lot of sense, but to me it would be best if it basically took the load off the rest of the computer (freeing up resources) rather than simply increasing its own load and thus dragging down performance in general. The idea is great; the implementation may currently be slightly flawed.

The concept of a dedicated PPU is quite simply phenomenal. We spend plenty of money upgrading our GPUs and CPUs, and quite recently Creative brought us the first true APU (the X-Fi series), so it makes sense for there to be a dedicated PPU, and perhaps even an AIPU to follow.

The question is, will these products actually benefit us to the value of their cost?

And those two things combined are somewhat the issue: does it make sense to buy a PPU & AIPU that are dedicated to solely one task (and therefore offer nothing for video processing/editing/rendering and other CPU-intensive tasks), when the same money could buy a quad-core CPU whose greater power has more of a global impact in an increasingly multi-processing world?

SLI GPUs are not working flat out 100% of the time, and due to the extremely high bandwidth of dual PCIe x16 ports there should be a reasonable amount of bandwidth to spare for physics calculations, perhaps more if dual PCIe x32 (or even quad x16) motherboards inevitably turn up. I am not saying that GPUs are more efficient than a DEDICATED, designed-for-the-purpose PPU, just that if ATI and nVidia decided the market showed enough potential, they could simply 'design in' or add PPU functionality to their GPU cores or graphics cards. This would allow them to tap into the extra bandwidth PCIe x16 affords.

I think the first part of that is exactly why the GPU route is a good idea, because often there is that unused overhead available for physics calculations that HavokFX could tap into. The PCIe lanes I don't think will ever be a concern, but the current method of the PPU is more of a concern, and here's why...

The Ageia PhysX PPU in its current form runs over the PCI bus, a comparatively narrow-bandwidth bus, and MUST communicate with the GPU in order for it to render the extra particles and objects in any scene.

According to the Ageia info, the PPU communicates with the CPU, which then tells the GPU what to draw. So it's doubly delayed: usually the VPU simply waits for the CPU, but now the GPU has to wait for the PPU and the CPU to decide what needs to be rendered, whereas if the physics are done on the GPU itself it 'might' allow for a 'skip a step' benefit. But who knows until it's truly exposed.
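(Here's a schematic of that data path; every name is invented, and it models the hop count being described, not Ageia's actual driver.)

[code]
class Bus:
    # Counts transfers; real hardware would also charge latency per crossing.
    def __init__(self, name):
        self.name, self.crossings = name, 0

    def transfer(self, data):
        self.crossings += 1
        return data

pci = Bus("PCI")  # the PPU's slow, shared leg
agp = Bus("AGP")  # the usual CPU -> graphics submission

def frame_with_ppu(scene):
    batch   = pci.transfer(scene)  # hop 1: CPU hands work to the PPU
    results = pci.transfer(batch)  # hop 2: PPU hands results back to the CPU
    return agp.transfer(results)   # hop 3: CPU finally tells the GPU to draw

frame_with_ppu({"objects": 500})
print(pci.crossings, "PCI crossings per frame")  # 2 extra hops, paid 30-60x/sec
[/code]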

This in my mind creates a bottleneck, as it can only communicate at the bandwidth and speed afforded by the narrower, slower PCI bus.

I'm not as worried about the bandwidth as the latency, since I doubt that even now the PPU-to-CPU communication comes anywhere near saturating the bus, but the fact that it is sharing the bus may be causing further issues for other traffic. At least the PCIe option you're speaking of would give it a greater ability to run in parallel with other traffic straight over the PCIe lanes; however, as long as there is still that CPU involvement mid-stream, I have a feeling we'll still have some latency.

If ATI and nVidia are smart, they can capitalise on their high initial installed userbase and properly market the idea of hardware physics for free with their SLI physics; they may be able to throw a spanner in the works for Ageia while it attempts to attain market share. This may benefit the consumer, although it may also knock Ageia out of the running, depending on how effective ATI and nVidia's driver-based solutions first appear. It could also prompt a swift buyout by either ATI or nVidia, like nVidia did with 3dfx.

And this is the issue: HavokFX physics could already work on a lot of the rigs and games out there. For some games it would likely be of great benefit, as they aren't fully using the available resources; for other games like Oblivion, though, the VPUs are already pretty much maxed out at 1280x1024, so there's no room for physics IMO, even with SLI/Xfire.

Using the CPU for physics, even on a multicore CPU, in my opinion is not the way forward. The CPU is not designed for physics calculations, and from what I hear CPUs are not (comparatively) very efficient at performing them.

And despite my early statements I do agree with this, but the question is at what cost and at what possible loss of lateral utility?

A dedicated solution will always be better in the long run. This will free up the CPU to run the OS, and also for AI calculations as well as antivirus, firewall, background applications, and generally keeping the entire system secure and stable.

I agree with the first part, but as for the second part, I don't see how a PPU would help with antivirus, firewall and background stuff in non-gaming situations, which is the exact area I'm talking about when I say a quad-core CPU might offer some people more global benefit.

Multicore will be a blessing for PCs and consoles, but not for such a specific and difficult (for a CPU) task.

And that's just it: I think for a true 'gaming rig' a PPU or fast VPU-based physics assist will be a must, because multi-core CPUs alone won't offer as much bang for a game like Crysis, which will have deformable terrain and interactive objects. But for casual gamers like myself, I think multi-core CPUs could do 'just enough' to get it done.

Let's say it cost me as much to upgrade from a 2GHz dual core to a 2GHz quad core as it does to add a PPU. If that means I can only add Medium physics (where single or dual core is 'Low') versus the PPU's High or Very High physics, but it allows me to edit video and multi-task better and gives me better performance in non-PPU-supported games, then I'd personally go with the quad core, since unlike many gamers I probably wouldn't be splashing out for both; I'd likely prefer getting some new ski boots or a second ski pass like I did this year. For people who don't balance their expenditures and will spend whatever it takes on their gaming rig, because it's their primary source of entertainment or they are less frugal with their coin, they will likely get both and there's no issue. Most other people will need a good enough demonstration of benefit before they buy. These are the same people who buy GF7900GTs or X1800XLs because they want top gaming but want to be wise with their money. I don't know how good a choice a dedicated PPU will be for them.


Right now I will not be buying into the dream, but simply keeping the dream alive by closely watching how it develops until such a time as I believe the 'Right Time' comes. £218 for an unproven, generally unsupported, and possibly seriously flawed incarnation of the PPU dream is not in my opinion The Right Time, Yet ;-)

And that's just it: I think you fall into that second category, like so many others here. The PPU sounds good, but right now its implementation, doing massively more physics calculations instead of simply off-loading the CPU's burden, makes it an unappealing feature. The original selling of the idea to the masses was offloading CPU work, not making things worse by adding a ton of extra physics. It would have been an attractive thing for people who felt they were CPU-limited in some games even with an FX-60, and this might have helped. If it only makes things worse, I doubt it'll be attractive even to the people looking at 'money no object' rigs, and really, if you can't sell to them, how are you going to convince the rest of us who try to balance cost and performance?

I think the story's long from over, but the opening chapter is starting out as a Greek tragedy IMO.
 

jkay69

Distinguished
May 8, 2006
82
0
18,630
@TheGreatGrapeApe

I have to say that I believe we are essentially singing from the same sheet; your clarification of my points shows that.

From all the marketing blurb coming out of Ageia, they 'seem', on the surface at least, to have the same goal in mind: to offload the physics calculations from the CPU to free up the system. However, as stated, they either don't seem to have implemented it very well, or the games/demos currently available are not properly optimised.

If it is the case that optimisations are required, then I believe it is in Ageia's best interest to help speed up this process; NO ONE wants a lame duck, regardless of whether it's hardware or software to blame.

I have noticed that Ageia has already released a driver revision, although I have not seen any evidence of it helping or degrading the situation; until I do I will reserve my judgement.

As for the quad core support, I believe developers are having a hard time programming for multithreading; this means that any advances in CPU core quantity are, for the short term at least, not going to provide massive performance gains regardless of how many cores are added to a system. I would very much like this situation to be reversed, as the more options available to us, the consumers, the better. Until such a time as multi-core is more evenly supported, I will stick to buying the most powerful single-core CPU in my price range. At this time the 3700+ was my weapon of choice due to its price/performance ratio.

I agree completely that in its current form the Ageia PPU seems to be dragging the system down. I seriously hope for their sake they can reverse this trend, although if that means dropping the number of objects displayed on screen, then I'd have to wonder what the point of having a costly PPU in my system would be.

My statement regarding a dedicated PPU freeing up system resources was intended purely for during gameplay, not in general; I realise I was not clear on that point. During gameplay, system processes are still going on in the background, and I believe no user should have to disable antivirus or firewall software while playing games to free up resources, doubly so as these days many games are designed to be played online, and more and more systems are always online through broadband. A multi-core CPU would offer the best solution for this, although if the PPU did what it promised, there would be less requirement for it.

You are 100% right on the point that a multi-core system would bring more balance and enable non-gaming duties to be performed much more efficiently, and that for all but the least frugal among us the PPU currently does not offer any reason to upgrade. There are far better ways to spend £220 to maximise your PC than to add a PPU with currently next to no support, and what little support there is seems to be bolted on at the last minute. Until there is a 'killer app' that shows the true benefit of hardware physics, I believe this card will struggle to make the big leagues.

I hope that Ageia can pull the rabbit out of the hat and satisfy all our concerns; however, I have a sinking feeling this product was launched prematurely due to other factors, like the delay of the PS3 and perhaps the growing concerns of investors. That would make it unlikely that Ageia can quickly turn this situation around.

JKay6969
 

MarkG

Distinguished
Oct 13, 2004
841
0
19,010
No it's not. It's like complaining that going from an X800 Pro to a GF7800GT doesn't yield much performance improvement in Oblivion because one's doing HDR and the other is just doing bloom+AA.

No it's not. The game is doing substantially more work and then people whine because it's slower: let's run the same game doing the same amount of physics work _on the CPU_ and then see how fast it runs.

Hint: I'm pretty sure it will be a damn sight slower than the same physics calculations running in dedicated hardware.
 

ivoryjohn

Distinguished
Sep 20, 2001
174
0
18,680
Using the CPU for physics, even on a multicore CPU, in my opinion is not the way forward. The CPU is not designed for physics calculations, and from what I hear CPUs are not (comparatively) very efficient at performing them.
JKay6969

The CPU is the perfect place for these calculations. If the current CPU cores don't have the architecture to perform them efficiently, then let's see some new logic units added to support the cores.

ATI has shown with the X1900 that adding more shader units makes it a shader-processing powerhouse.

IBM has shown that surrounding the CORE with multiple mini-cores can allow the processor to be much more efficient.

We already know what AMD and INTEL are doing with dual cores now, and multi cores in the near future.

We have even seen some manufacturers put 8 cores in a single package (not counting the CELL even).

When Intel and AMD wanted to improve Multi-Media performance, they added instructions to the CPU.

The best direction for physics processing is to take all of this into account and add parallel processing sub-cores, plus new instruction extensions to use them. Additional sub-cores are going to be far more useful than physics-only processors.
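(To show the style of workload such sub-cores and instruction extensions would target, here's a sketch with NumPy arrays standing in for wide vector units; purely illustrative, not any vendor's actual instruction set.)

[code]
import numpy as np

n   = 100_000
rng = np.random.default_rng(0)
pos = np.zeros((n, 3))         # x, y, z per object
vel = rng.normal(size=(n, 3))  # initial velocities
g   = np.array([0.0, -9.81, 0.0])
dt  = 1.0 / 60.0

# Scalar thinking: loop over objects one at a time on a general core.
# Data-parallel thinking: update the whole population in a few wide
# operations, which is exactly the shape of work a physics-oriented
# unit (or SIMD extension) is built to chew through.
vel += g * dt
pos += vel * dt
print(pos[0], pos[n - 1])
[/code]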

I hope that AMD and/or Intel see this as a positive way to go, before they come to the conclusion that adding more full-blown cores eats up too much real-estate.
 

jkay69

Distinguished
May 8, 2006
82
0
18,630
An interesting argument, ivoryjohn, although I must say that I don't agree with you. Not in the long term, anyway.

I remember the days before GPUs existed in the consumer domain, when Intel decided to add some instructions to their core to allow support for better graphics. This was seen as a revolution at the time, with impressive results, so how come 3DFX, and ultimately nVidia and ATI, were able to successfully create a new market for dedicated 3D GPUs? By your argument, surely the best way forward would have been to add more instructions, and ultimately even full GPU functionality, to the core of the CPU? That didn't happen. Why?

I realise that at this time there is a lot of new technology coming out, and CPUs are certainly not being left out. With multicore technology promising amazing functionality and tremendous scope, it is easy to forget that multicore is just that: multiples of a core. If Intel and AMD were to have a multicore architecture where, say, one core was optimised for audio, one for physics, one for AI, another for security, etc., I would be forced to sit up and take notice, but sadly this is not the case. Multicore CPUs do not double the performance of a PC. Why not? Because the software run on them is largely not written for it, and from what I hear, it is much more difficult than simply adjusting the code: the entire program must be written to be multithreaded, as the sketch below suggests. This in itself will slow down the rate at which we see the true performance of multicore CPUs.
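(A sketch of what that means in practice, with a deliberately toy 'engine'. The per-object part splits across workers easily; anything coupling two objects couples the threads too. All names are invented.)

[code]
from concurrent.futures import ThreadPoolExecutor

objects = [{"id": i, "y": 100.0, "vy": 0.0} for i in range(8)]

def integrate(obj, dt=1.0 / 60.0):
    # The embarrassingly parallel part: each object's free fall is
    # independent, so it splits across cores with no real rewriting.
    obj["vy"] -= 9.81 * dt
    obj["y"]  += obj["vy"] * dt
    return obj

with ThreadPoolExecutor(max_workers=2) as pool:
    objects = list(pool.map(integrate, objects))

# The hard part: resolving a collision between, say, objects 0 and 5
# needs results from both workers, so every frame ends in a
# synchronisation point over shared state. Restructuring a whole game
# around that is what "written to be multithreaded" actually means.
[/code]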

The fact is that eventually some sort of dedicated physics solution will be successfully marketed; the demand is truly there and growing every day, and it was only a matter of time before a company released a product to cater for it. Software physics has its limitations, although this first-generation hardware physics card (as with all first generations) seems to have its limitations too.

The question, in my opinion, is when the Ageia solution will really show gamers what it is truly capable of.

From their first hardware, I admit it is a very shaky start, although most of this judgement comes from the very shady performance in GRAW, which could (hopefully) end up being down to a seriously sloppy last-minute bolt-on. The true test for them is how they deal with this problem and how they will be affected by ATI and nVidia's efforts. I do not believe Ageia sees any of the current software physics solutions as a threat to its hardware solution, more a possible way to push its own hardware. They are already in talks with Havok, although admittedly, at this time, without much success.

I believe that in 10-20 years, if processor miniaturisation continues, all these dedicated solutions could well be packaged into one chip: a processor containing many different cores, one for each of the main tasks. But even then I believe that individual, upgradable chips would be the better way.

I believe that Intel and AMD will not attempt to seriously optimise their cores for physics, as it would be futile at best to compete with a dedicated solution. The fact that the PPU currently costs £218 doesn't mean it will always cost so much, or that what you get for that money will not improve over time. Look at the thriving GPU market: when the first 3DFX card arrived for £300, many people doubted it would succeed, for many of the same reasons I hear against the PPU, and just look at it now.

A dedicated PPU could enable absolutely amazing things, with all 3D games benefiting. Currently all we see from the technology is a few more objects on the screen during an explosion, or loads of objects being blasted around, but think about what true hardware-accelerated physics could provide to flight sims, racing sims, FPSs, RPGs, in fact any 3D game. Open your imagination beyond the polygons and flashy eye candy provided today, and see a truly interactive world rendered on your screen. Software physics will just not be able to give us that, not to the level that a hardware solution could.

JKay6969
 
No it's not. The game is doing substantially more work and then people whine because it's slower: let's run the same game doing the same amount of physics work _on the CPU_ and then see how fast it runs.

You are completely missing the point: regardless of the extra work, if the effect isn't worth it, then who cares if it's working eight thousand times harder when the performance is greatly reduced for minimal perceptible difference? That's the point. I don't care how much more work is involved; if performance goes from playable to unplayable, then it's not a benefit. Ageia should have focused on easing the burden on the CPU, not overburdening the PPU and CPU in order to show a slideshow of granular detail. Bullet time looks great in Max Payne, but not in a game that's supposed to play in real time.

Hint: I'm pretty sure it will be a damn sight slower than the same physics calculations running in dedicated hardware.

Well yippee, that's not the point now, is it!?!

If guesstimated physics can give nearly the same effect while doing 1/10 the amount of work, but twice as fast, then I'll take that elegant solution over some component holding back the rest of the system. For whatever reason, the Ageia solution slowed down the game and didn't accomplish what everyone expected, which was faster gameplay due to less CPU stress. Instead, Ageia thought everyone here played demos and didn't mind the game playing like 3DMark06, as if we would simply be impressed by the 'theory' and not the results.

Fact is, they should've stuck with what people expected them to do: not increase the physics beyond their own abilities, thus killing a playable game, but instead take a playable game, offload the small amount of physics the CPU was tasked with doing, and thus increase the performance. That would sell the card. Right now they are selling a product that is like an anti-virus that does 3 simultaneous scans: it may make your PC extra secure, but now you can't use it because it's bogged down doing nothing other than scans. Do you understand how more work isn't better if the result is worse gameplay?

Seriously, think about it, and then think about how granular the physics really need to get to be believable, and which is more important: a 50-object explosion at 30-60fps or a 1,000-object explosion at 5-10fps?

Sure, it's 20 times as much work while leaving only 1/6 the performance, but that's still a stoopid implementation. Compare what they did to what people expected, which would be to do 50-100 objects at 40-80fps, thus maintaining or increasing the level of physics while also increasing performance due to the CPU being freed up for maps and AI, etc.
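(The arithmetic behind that, using the rough numbers above; the midpoints are mine.)

[code]
small = {"objects": 50,   "fps": 45.0}  # midpoint of 30-60 fps
huge  = {"objects": 1000, "fps": 7.5}   # midpoint of 5-10 fps

print(huge["objects"] / small["objects"])  # 20x the work
print(huge["fps"] / small["fps"])          # ~1/6 the framerate
print(small["objects"] * small["fps"])     # 2,250 object-updates per second
print(huge["objects"] * huge["fps"])       # 7,500 object-updates per second
# The hardware really is pushing ~3.3x the throughput, but the player
# only ever experiences the 7.5 fps. Impressive math, unplayable game.
[/code]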
 

jkay69

Distinguished
May 8, 2006
82
0
18,630
Well, TheGreatGrapeApe, I find myself agreeing with you on this one also...

The implementation of physics in GRAW is simply excruciating!

What is the point of ANY hardware that slows the performance of a game by any margin? Don't get me wrong, I don't write off the Ageia PPU yet, but I have to agree that this example of physics acceleration should make even the biggest money-blaster wince!!! £218 for a card that will instantly drop your framerates, regardless of how 'monster' your rig is?

No matter what this card is, GPU, audio, RAID or any other card you care to mention, it would be complete madness to pay £218 for it and happily stick it in your rig, watching your framerates drop by a fair amount!?!? Could anyone accept that, even for free? ;-)

The point is, with this game, the card looks like a 40 Foot Lemon!!!

It doesn't mean the card is hopeless, or that it will never be any good, just simply that in THIS instance, it doesn't convince anyone to part with their hard earned cash for it.

And if you consider this to be the Premier launch title, something tells me there are some corporate asses under fire!!!!

I just hope that the next title to be released can reverse this trend, as surely no titles at all would be better than titles that make the card look this bad!

JKay6969

P.S. and yes... I am an optimist ;-)
 
Don't get me wrong, I don't write off the Ageia PPU yet, but I have to agree that this example of physics acceleration should make even the biggest money-blaster wince!!!

And that's just it. Don't get me wrong either; like I said:

I think the story's long from over, but the opening chapter is starting out as a Greek tragedy IMO.

And really, that's it. Even worse than the poor launch of the Prescott, this 'launch title' looks horrible. And while the math may be impressive, we're not theorists looking for real-time physics in a slideshow to emulate the reality of some model (as if we were analysing earthquakes); we're game players looking for performance. The math behind it shouldn't be a concern: the visuals alone should be impressive, either from an FPS point of view, or because the effects are so much better as to be astounding, and not at the cost of realism.

£218 for a card that will instantly drop your framerates, regardless of how 'monster' your rig is?

Yep, basically add it to any $4000 rig and make it feel like a $1000 rig! :evil:

It doesn't mean the card is hopeless, or that it will never be any good, just simply that in THIS instance, it doesn't convince anyone to part with their hard earned cash for it.

Exactly. Hopefully the future is there, but think of it this way: what happens if physics isn't as big a CPU issue as we thought (sure it's a hit, but think along these lines for a second), where adding the PPU to a current game doesn't offer much performance boost (less than 5% of CPU cycles freed up, leading to a 1-2% performance boost), and there are no intermediate options for physics until you crank it to an 'insane 10-20X particle count' with a heavy performance penalty? Are you more likely to spend that $3-400 US on a PPU or on a second VPU? That's the immediate problem. Heck, I'm not a proponent of SLI, but if the global performance is better on SLI, how can I even consider a PPU? And like I said, am I guaranteed to notice a difference? It'd be like moving from 4XAF to 16XAF causing framerates to drop to 40%: would anyone play at 16XAF for an image-quality improvement that's noticeable but still rather minor?

And if you consider this to be the Premier launch title, something tells me there are some corporate asses under fire!!!!

Oh yeah, that's like the M$ launch with Conan O'Brien at last E3, where there were multiple BSODs and CTDs. :lol:
Wow impressive, gotta get me one of those! :twisted:

I just hope that the next title to be released can reverse this trend, as surely no titles at all would be better than titles that make the card look this bad!

And just hope that Havok doesn't come out with a convincing FX demo in the meantime as well. Seriously, if they want it to shine, force this huge burden on a CPU and then show the benefit of the PPU. But really, which game developer is going to ship a title so crippled only to make a single product shine? No one who plans on selling a lot of copies. Ageia's best solution is to do what everyone expected: take the load off the CPU, and that's it. Even if it only adds 5fps to some games, there's a ton of people out there who'd jump on it for the fastest rig they can make. But kill performance, and you'll never get those people, because they'll be equally scared that they'll get fragged first and that the added physics will only be used to show them their own crumpled body.

P.S. and yes... I am an optimist ;-)

I'm a realist, but I prefer to be critical in a way that exposes the problem and offers a better implementation. Like I said, go for the boost, period, regardless of whether it's mathematically impressive or not (who cares if the PPU is 60+% idle); show a framerate boost and you get sales, trust me.

BTW, the glass is either half empty or half full; it just depends on what you're doing with it. Filling it? Then obviously it's half full. Drinking it? Then obviously half empty. :twisted:

That's been my answer since about junior high. 8)
 

Primitivus

Distinguished
Apr 21, 2006
324
0
18,780
It'd be like moving from 4XAF to 16XAF causing framerates to drop to 40%: would anyone play at 16XAF for an image-quality improvement that's noticeable but still rather minor?

And to give another example: it's like buying a new GPU that can ONLY do 3200x2400 with 16xAA and 32xAF. Sure, it will look better than anything else on the market, but if it's not playable, who will use it?
Perhaps a better implementation for the Ageia PPU would be to let the user choose the level of physics calculations the card performs, so one could strike the ideal balance between good framerates and improved visuals, the same as we all do with our regular graphics cards by tuning resolution, AA, AF, etc. Something like the sketch below.
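(The setting names and numbers here are invented; the idea is just a physics quality knob exposed like resolution or AA.)

[code]
PHYSICS_LEVELS = {
    "low":    {"debris_per_explosion": 50,   "solver_iterations": 4},
    "medium": {"debris_per_explosion": 200,  "solver_iterations": 8},
    "high":   {"debris_per_explosion": 1000, "solver_iterations": 16},
}

def spawn_explosion(level):
    cfg = PHYSICS_LEVELS[level]
    # A real engine would simulate this many interacting bodies and hand
    # anything beyond the cap to cheap, non-interacting eye-candy particles.
    return cfg["debris_per_explosion"]

print(spawn_explosion("low"), spawn_explosion("high"))  # 50 1000
[/code]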

BTW, the glass is either half empty or half full; it just depends on what you're doing with it. Filling it? Then obviously it's half full. Drinking it? Then obviously half empty. :twisted:

If the glass is half full and you're filling it, then you're probably focusing on the empty space in the glass that can be filled, so it's half empty. But if it's half full and you are drinking it, then what matters to you is the water already inside the glass, so it is half full :wink:
 

KingGreatYat

Distinguished
Mar 27, 2006
65
0
18,630
I'd just like to point out that GRAW's implementation of PhysX is pretty limited, really, considering Havok is claiming that the core physics driving the gameplay is handled in software. So essentially what you have happening is two physics engines doing their thing at the same time, and I would guess that is potentially detrimental to performance. It certainly sounds like a cheap way of trying to sell your game off the hype of another product!

And... Not sure if anyone has confirmed this, but Ageia have posted an updated driver that remedies the frame rate issues.