8800 GTX with PhysX??
Is there a need to get a PhysX card if you have an 8800 GTX? Does the Quantum Effects Technology really do physics?
I really hope DX10 cards can handle all that physics, either via driver hacks from the enthusiast community/sites, or "supported" officially/unofficially by ATi & nVidia, especially when games "need" PhysX. I'll buy an 8xxx/R6xx, but having to buy another card just for some effects that my new card could have handled flawlessly, that's just not my cup of tea (even if it was promised PhysX won't be just for "better" physics). But hey, I won't hold anybody back if they want to spend their money there... it's their money, not mine, just my 2 cents :wink:
Expect nVidia to have their own version out soon. They announced CUDA at SC'06 before Thanksgiving.
"NVIDIA's also recently announced their first generation GPGPU technology called CUDA. It consists of a new GeForce 8800 graphics card and future Quadro Professional Graphics solutions. NVIDIA claims computing with CUDA overcomes some limitations of traditional GPU stream computing by enabling GPU processor cores to communicate, synchronize, and share data. A CUDA-enabled GPU operates as either a thread processor, where thousands of threads work together to solve complex problems, or as a streaming processor in specific applications such as imaging where threads do not communicate. CUDA-enabled applications use the GPU for fine grained data-intensive processing, and the multi-core CPUs for coarse grained tasks such as control and data management."
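That claim about cores that "communicate, synchronize, and share data" maps onto CUDA's shared memory and barrier primitives. A minimal sketch of a per-block sum in CUDA C (the kernel name and block size are my own for illustration; only `__global__`, `__shared__`, and `__syncthreads()` are standard CUDA keywords):

```
// Each thread block cooperatively sums 256 input values.
__global__ void blockSum(const float *in, float *out) {
    __shared__ float buf[256];       // shared memory: visible to every thread in the block
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                 // barrier: wait before reading other threads' data

    // Tree reduction; assumes blockDim.x == 256 (a power of two).
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = buf[0];    // one thread writes the block's combined result
}
```

This kind of intra-block cooperation is exactly what traditional GPU stream computing (pixel shaders with no cross-thread communication) couldn't do.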
Since ATI demoed Stream at SC'06 at 375 GFLOPS (Tyan/Intel's 10x QX6700 box is only good for 256 GFLOPS), nVidia shouldn't be far behind. nVidia only resolved the ISA issues to get the 8800 PCI-E certified as of Monday, 15 Jan 2007. http://www.pcisig.com/developers/compliance_program/integrators_list/pcie/#components You may want to wait for 2nd-gen DX10 for some of the bugs to be worked out of production cards. This, more than buggy drivers, has been responsible for the complaints about the 8800 cards.
In a nutshell, CUDA and Stream make dual-core and quad-core technology obsolete for desktop gaming. They are typically 10+ times faster than a quad core http://sc06.supercomp.org/schedule/event_detail.php?evid=5052 and use faster GDDR4 memory to cut latency, with direct access to the CPU without getting delayed by the northbridge memory controller. HTT 3.0 will support direct access to system memory for the GPU accelerator. As one of the speakers at SC'06 wondered: where does this leave Intel? We now know that nVidia is at least on the playing field. "If GPUs are destined to achieve parity with CPUs, it will be interesting to see what happens with Nvidia and Intel. Being late to the GPU party could have devastating effects for the procrastinators, since building a software base for your graphics engine will be critical in establishing product momentum. So far Intel has not made a move, but as I write this, rumors of Intel acquiring Nvidia are circulating around the Web. Stay tuned ..." http://www.hpcwire.com/hpc/960279.html
The other killer for quad core is power consumption. It takes about 2200 watts to get the same level of performance with quad cores as either the CUDA or Stream cards will do with 75-100 watts. ClearSpeed has demoed this with the Sun-built TSUBAME computer in Japan. "Performance results for the world's most powerful commercially available computer systems published on October 3, 2006 by Jack Dongarra of the University of Tennessee demonstrated that ClearSpeed acceleration technology scales efficiently from single servers to hundreds of nodes. Using ClearSpeed Advance accelerators, the Tokyo Institute of Technology's TSUBAME supercomputer achieved a performance of 47.38 Teraflops (TFLOPS, trillion floating point operations per second) on the LINPACK benchmark. This is an increase of over 9 TFLOPS from the non-accelerated result of 38.18 TFLOPS published in June 2006, delivering an unprecedented performance boost of 24 percent. From an efficiency perspective, the ClearSpeed Advance boards delivered 1 TFLOP per kilowatt adding only one percent to the cluster's overall power consumption." http://www.linuxelectrons.com/News/Hardware/ClearSpeed_Accelerates_New_Breed_of_IBM_System_Cluster_1350_Hybrid_Supercomputers
So figure each accelerator card is worth about 10 quad cores based on Tyan's demo, while using a tenth or less of the power.
With BioWare using PhysX in their new engine, I believe that we will see more games using PhysX.
With PhysX being royalty-free to use, I see more developers opting for it rather than Havok (I don't know if Havok charges to use it or not).
The only question is whether Ageia can stay in business long enough to see the technology blossom. Worst/best case scenario: Ageia goes under and Nvidia buys them out.