Possible to use second core for PPU running Physx software?

June 2, 2006 5:30:08 PM

Not sure if this has been asked before, but if someone can figure this out we could find out how powerful Ageia's card really is. What I'm suggesting is this: when I installed GRAW it also installed Ageia's software PhysX runtime. Could that software be run with its affinity set to the second core and then given a high priority? Second, you would have to find a way to turn up the physics level in GRAW for software mode, and then we could see what the Ageia card is really doing and whether it's that much better. I guess the first part of this is someone hacking GRAW so we can trick it into thinking we have PhysX hardware. Anyone working on that?
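
Something along these lines could handle the affinity/priority part. This is just a rough Win32 sketch under my own assumptions (a dual-core Windows box, and that the software PhysX work runs in a process you can open; here it only targets itself for simplicity):

// Rough sketch: pin a process to the second core and raise its priority.
// Assumption: dual-core Windows system; in practice you would OpenProcess()
// the game or PhysX worker by its PID instead of using the current process.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hProc = GetCurrentProcess();

    // Affinity mask 0x2 = logical CPU 1 only (the "second core").
    if (!SetProcessAffinityMask(hProc, 0x2))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    // HIGH_PRIORITY_CLASS (not REALTIME) so the rest of the system stays usable.
    if (!SetPriorityClass(hProc, HIGH_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());

    // ... run or hand off to the physics workload here ...
    return 0;
}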
June 2, 2006 6:55:38 PM

The Ageia SDK will probably use multithreaded CPU power when running in software mode anyway, but the dedicated physics processor is supposedly many times more powerful than a regular CPU at this kind of calculation, so running it on a second core won't make all that much difference.
June 2, 2006 7:42:58 PM

The sad truth is that even a Conroe isn't capable of the kind of numbers AGEIA claims its card can do.
June 2, 2006 7:58:35 PM

Quote:
The sad truth is that even a Conroe isn't capable of the kind of numbers AGEIA claims its card can do.

lol, kinda depressing.
June 5, 2006 3:43:29 PM

Quote:
The sad truth is that even a Conroe isn't capable of the kind of numbers AGEIA claims its card can do.

lol, kinda depressing.

Well, I don't expect mind-blowing performance, but then again nobody is getting that from AGEIA's cards anyway. With their software we are already using our CPUs for physics; all I want to do is turn it up some and make sure it's on the second core. Most of the first 3D cards were piles of poo stuck on PCI cards, and I'm wondering if this PhysX card is the same. I can't see the game getting that much worse if we turn the physics up a bit in the software version, because it's running like a$$ in the hardware version too. BTW, Ageia wouldn't want to make it multithreaded: if it ran well that way, who would buy their overpriced pile of poo then? :D

BTW, this would also give us a way to actually benchmark the Ageia card apples to apples. Right now we are comparing normal vs. high physics, and the card looks like it sucks because we only get 2 fps. If we try the same settings in software, we will all realize how powerful their card is and be willing to shell out some cash.
June 6, 2006 10:06:31 PM

Quote:
The sad truth is that even a Conroe isn't capable of the kind of numbers AGEIA claims its card can do.


Ageia has been very secretive about performance, tbh. In regard to theoretical peak, a Conroe at 3GHz closes in on 50 GFLOPs. It has been suggested that the "20 giga-instructions" figure Ageia is pimping is actually FLOPs. If true, then the comparison may not be as clear-cut as some suggest. As it is, GPUs offer higher peak performance--the question is whether they are a good solution for physics overall. The jury is still out on this, but Havok recently demoed physics running on ATI's X1600 (which is $100 cheaper than PhysX), and I must say it is as impressive as anything else currently shown in real time on the PPU:

http://www.tweaktown.com/articles/908/
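
For what it's worth, here is one plausible way to arrive at that ~50 GFLOPs figure. The assumptions are mine, not Ageia's or Intel's: a dual-core Conroe issuing one 4-wide single-precision SSE add and one 4-wide SSE multiply per cycle per core.

\[
2\ \text{cores} \times 3\,\text{GHz} \times 8\ \tfrac{\text{FLOPs}}{\text{cycle}}
  = 48\ \text{GFLOPs} \approx 50\ \text{GFLOPs (theoretical peak)}
\]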

Theoretical peaks aside, a PPU adds an extra level of latency to the workflow. Instead of a CPU<>GPU type situation, a PPU creates something like PPU<>CPU<>GPU, which involves more layers of potential bottlenecks. A powerful CPU, even a multicore one, is going to be able to avoid more of the resource-bound scenarios relating to bandwidth and latency.

Numbers are not everything. PR-marks only get your foot in the door; delivering software that shows your product really works and is being used in games is key. In this regard I see a Windows-like scenario: install base will drive developer support.
June 6, 2006 11:08:40 PM

Quote:
The sad truth is that even a Conroe isn't capable of the kind of numbers AGEIA claims its card can do.


Ageia has been very secretive about performance, tbh. In regard to theoretical peak, a Conroe at 3GHz closes in on 50 GFLOPs. It has been suggested that the "20 giga-instructions" figure Ageia is pimping is actually FLOPs. If true, then the comparison may not be as clear-cut as some suggest. As it is, GPUs offer higher peak performance--the question is whether they are a good solution for physics overall. The jury is still out on this, but Havok recently demoed physics running on ATI's X1600 (which is $100 cheaper than PhysX), and I must say it is as impressive as anything else currently shown in real time on the PPU:

http://www.tweaktown.com/articles/908/

Theoretical peaks aside, a PPU adds an extra level of latency to the workflow. Instead of a CPU<>GPU type situation, a PPU creates something like PPU<>CPU<>GPU, which involves more layers of potential bottlenecks. A powerful CPU, even a multicore one, is going to be able to avoid more of the resource-bound scenarios relating to bandwidth and latency.

Numbers are not everything. PR-marks only get your foot in the door; delivering software that shows your product really works and is being used in games is key. In this regard I see a Windows-like scenario: install base will drive developer support.

Not necessarily. If you look at Cray's design for SeaStar and the XT3 and then transfer it to the desktop with either the HTT bus or the Hyper-X HTT slot (a board AMD demoed last Thursday), the PPU and GPU talk directly to each other, so there is no added latency. As for gigaflops, if that were a valid measurement of performance, the Intel Xeons and the Itanium would own the supercomputer market, the 8xx Opteron series would be going nowhere, and Cray would not have designed and built the XT3. You are right about PR marks being useless unless there is code that can deliver the advertised performance to the user. The physics card is a better idea than SLI, dual SLI, or quad SLI, but putting it on the PCI bus isn't going to work.
June 12, 2006 3:50:54 PM

FiringSquad: Have there been any talks with Intel or AMD about perhaps offloading physics to one of their multi-core processors?

Jeff Yates: Of course. This is something we launched some time ago, via our HydraCore technology in Havok Physics, our game-play physics product. Havok 3 and beyond are fully multithreaded and utilize extra cores optimally to accelerate game-play physics. We've done tests on dual-CPU, dual-core systems (4 cores) that deliver an astounding level of game-play physics, into the thousands of objects. We have a regular dialog with both AMD and Intel, and we continue to be bullish about the upside potential of multicore architectures in the PC for accelerated, world-class game-play physics. The future is very bright on this side of the fence as well. This is why we distinguish between game-play and effects physics. We know the CPU can and will continue to push into the thousands of objects for game-play, while the GPU can carve out tens of thousands of collidable object simulations to add visual fidelity that no one thought was possible with an off-the-shelf graphics card.
FiringSquad interview about Havok FX
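
To be clear, the snippet below is not Havok's HydraCore API. It is just a hypothetical sketch of the general idea being described: spreading per-object game-play physics updates across however many cores the CPU exposes.

// Hypothetical illustration of splitting a physics step across CPU cores.
// Not Havok code -- just the general divide-and-join pattern.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct RigidBody { float pos[3]; float vel[3]; };

// Simple explicit Euler step for one contiguous slice of the world.
static void integrate_slice(std::vector<RigidBody>& bodies,
                            std::size_t begin, std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i)
        for (int k = 0; k < 3; ++k)
            bodies[i].pos[k] += bodies[i].vel[k] * dt;
}

void step_world(std::vector<RigidBody>& bodies, float dt)
{
    // One worker per hardware thread; hardware_concurrency() may return 0,
    // so fall back to a single worker in that case.
    const std::size_t workers =
        std::max<std::size_t>(1, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (std::size_t t = 0; t < workers; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&bodies, begin, end, dt] {
            integrate_slice(bodies, begin, end, dt);
        });
    }
    for (std::thread& th : pool)
        th.join();  // all slices finished before the next (collision) phase
}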
June 12, 2006 4:10:42 PM

That quote is pretty much my whole view of the situation in a nutshell: the CPU handles the far less taxing game-dependent physics with the more-than-enough headroom it already has, and the GPU does the overtaxing stuff that would choke a CPU.

BTW, the Havok physics engine in Oblivion already takes advantage of multi-core. The impact still seems minimal, though, which IMO shows that the CPU is only briefly and lightly loaded during those game-dependent moments.