Check this out
http://www.anandtech.com/video/showdoc.aspx?i=2751
It is just one game but still... First impressions matter
I posted it in the CPU section, but yeah, not exactly the best of starts.
Yeah, that would be good. Remember years ago when people used to have to have a 2D card as well as a separate 3D accelerator card? (I think mine was a Creative Monster 3D with 4MB onboard.)
Well, that's what I think will happen with physics cards: they'll eventually be combined with graphics cards (and probably cost a lot less than two separate cards). It makes sense for companies to do this. If you were Nvidia or ATI, what would you do?
I think video cards will have a physics accelerator on-board.
So let's get this right: the game runs slower with high-quality physics simulation and a PhysX card than it does with low-quality physics simulation on the CPU, and you're surprised by that?
That's like complaining because a game runs slower at 2048x1536 with 4xAA with all details maxed out on a Geforce 7900 than it does at 800x600 with minimum details on a GF4 MX.
Personally I suspect the PhysX card is always going to be limited by the low bandwidth of the PCI bus, but it's a shame they don't let you run the high-quality physics on the CPU to give a valid, like-for-like comparison between the two.
I must apologise, TheGreatGrapeApe; I accidentally voted your post as 'GOOD'. I didn't realise this was the low vote, so I would like you to know that I meant to vote 'BEST'.
I believe the IDEA of having a dedicated PPU in your increasingly expensive monster rig is highly appealing, even intoxicating, and I believe this idea, coupled with some clever marketing, will ensure a good number of expensive, arguably overpriced, sales of this mystical technology in its current (inefficient) form.
The concept of a dedicated PPU is quite simply phenomenal. We spend plenty of money upgrading our GPUs and CPUs, and quite recently Creative brought us the first true APU (the X-Fi series), so it makes sense for there to be a dedicated PPU, and perhaps even an AiPU to follow.
The question is, will these products actually benefit us to the value of their cost?
SLI GPUs are not working flat out 100% of the time. Given the extremely high bandwidth of dual PCIe x16 ports, there should be a reasonable amount of bandwidth to spare for physics calculations, perhaps more if dual PCIe x32 (or even quad x16) motherboards inevitably turn up. I am not saying that GPUs are more efficient than a dedicated, purpose-designed PPU, just that if ATI and nVidia decided the market showed enough potential, they could simply design in or add PPU functionality to their GPU cores or graphics cards. This would allow them to tap into the extra bandwidth PCIe x16 affords.
The Ageia PhysX PPU in its current form runs over the PCI bus, a comparatively narrow-bandwidth bus, and MUST communicate with the GPU in order for it to render the extra particles and objects in any scene.
This, to my mind, creates a bottleneck, as the two can only communicate at the bandwidth and speed afforded by the narrower, slower PCI bus.
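To put some rough numbers on that bottleneck, here's a back-of-the-envelope sketch. The bandwidth figures are theoretical peaks (32-bit/33MHz PCI at ~133 MB/s, PCIe 1.0 x16 at ~4 GB/s per direction), and the object count and per-object size are made-up illustrative values, not anything Ageia has published:

```python
# Rough comparison of how long it takes to move one frame's worth of
# physics state over the two buses discussed above.
PCI_BANDWIDTH = 133e6        # bytes/sec, 32-bit/33MHz PCI, shared by all PCI devices
PCIE_X16_BANDWIDTH = 4e9     # bytes/sec, PCIe 1.0 x16, per direction

def transfer_time_ms(num_objects, bytes_per_object, bandwidth):
    """Milliseconds to move one frame's worth of physics state."""
    return (num_objects * bytes_per_object) / bandwidth * 1000

# Hypothetical scene: 30,000 rigid bodies at ~64 bytes each
# (position, orientation, velocity).
objects, size = 30_000, 64
pci_ms = transfer_time_ms(objects, size, PCI_BANDWIDTH)
pcie_ms = transfer_time_ms(objects, size, PCIE_X16_BANDWIDTH)

print(f"PCI:      {pci_ms:.2f} ms per frame")
print(f"PCIe x16: {pcie_ms:.3f} ms per frame")
# At 60fps you only have ~16.7 ms per frame in total, so a transfer that
# eats over 14 ms on PCI is crippling; on PCIe x16 it is under half a ms.
```

Even allowing for the made-up numbers, the ratio between the two buses (roughly 30x) is the point: anything chatty enough to stress PCI is barely noticeable on PCIe x16.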
If ATI and nVidia are smart, they can capitalise on their large installed userbase and properly market the idea of 'hardware physics for free' with their SLI physics; they may be able to throw a spanner in the works for Ageia while it attempts to gain market share. This may benefit the consumer, although it may also knock Ageia out of the running, depending on how effective ATI and nVidia's driver-based solution first appears. It could also prompt a swift buyout by either ATI or nVidia, like nVidia did with 3dfx.
Using the CPU for physics, even on a multicore CPU, is in my opinion not the way forward. The CPU is not designed for physics calculations, and from what I hear it is not (comparatively) very efficient at performing them.
A dedicated solution will always be better in the long run. This will free up the CPU to run the OS, handle AI calculations, and deal with antivirus, firewall, and background applications, generally keeping the entire system secure and stable.
Multicore will be a blessing for PCs and consoles, but not for such a specific and (for a CPU) difficult task.
Right now I will not be buying into the dream, but simply keeping it alive by closely watching how it develops until I believe the right time has come. £218 for an unproven, generally unsupported, and possibly seriously flawed incarnation of the PPU dream is not, in my opinion, the right time. Yet ;-)
No it's not; it's like complaining that going from an X800Pro to a GF7800GT doesn't yield much performance improvement in Oblivion because one's doing HDR and the other is just doing bloom+AA.
JKay6969
No it's not. The game is doing substantially more work and then people whine because it's slower: let's run the same game doing the same amount of physics work _on the CPU_ and then see how fast it runs.
Hint: I'm pretty sure it will be a damn sight slower than the same physics calculations running in dedicated hardware.
Don't get me wrong, I don't write off the Ageia PPU yet, but I have to agree that this example of physics acceleration should make even the biggest money-blaster wince!
£218 for a card that will instantly drop your framerates, regardless of how 'monster' your rig is?
And if you consider this to be the premier launch title, something tells me there are some corporate asses under fire!
I just hope that the next title to be released can reverse this trend, as surely no titles at all would be better than titles that make the card look this bad!
P.S. And yes... I am an optimist ;-)
Read his post again and you will notice a £ before it; it will retail for 218 pounds.
It'd be like if moving from 4xAF to 16xAF caused framerates to drop to 40%: would anyone play with 16xAF for an improvement that's noticeable, but still rather minor?
BTW, the glass is either half empty or half full; it just depends on what you're doing with it. If you're filling it, then obviously it's half full; if you're drinking it, then obviously it's half empty.
And... not sure if anyone has confirmed this, but Ageia have posted an updated driver that remedies the framerate issues.