
9800GT and 9800GTX+ released: should I still go for the ATI 4850?

Posted in Graphics & Displays
August 2, 2008 5:37:32 PM

So, I'm building a new rig and buying the components this week.

Suddenly: http://www.engadget.com/2008/08/02/nvidia-gets-official...

I was going to whack an ATI 4850 in my rig, but now I'm not so sure.
The 4850 would cost me about £112,
the 9800GT costs £100,
and the 9800GTX+ costs £140 (prices from dabs.com).

Is the performance of the 9800GT any different from that of an 8800GT?
And if so, is it worth using this card instead of the 4850?
August 2, 2008 5:47:20 PM

Looks like the 9800GT is nothing but a rebranded 8800GT - so it's old tech with a new name! The 9800GTX+ is a little more powerful than the 8800GT/9800GT, maybe 10-15%. Of those three cards I would go for the 4850.
August 2, 2008 6:01:21 PM

+1 to what smartel7070 said
August 2, 2008 6:11:03 PM

Okey dokey, thanks!
My system will look something like this:
E8400 (will OC to 3.6 GHz)
Asus P5K SE
2GB Corsair 800MHz
ATI 4850
Windows XP :p
August 2, 2008 6:24:55 PM

With the price of DDR2 being so cheap, I would try to go for 4GB (2x2GB) as well. Windows XP may not need it, but within the next two years you will probably upgrade to Vista, or whatever comes next.
August 2, 2008 6:25:52 PM

Why not 4 GB of RAM and Vista 64?
August 2, 2008 7:51:39 PM

Well, I'm absolutely skint, and 2GB is the only possibility at the moment.

Bit noobish of me, but is it possible to have 4x1GB sticks?
And do all four have to be the same brand?
I may upgrade to 4GB around November when I get some birthday money.
August 2, 2008 8:24:28 PM

I have 4x1GB sticks, so yes. But I would recommend going with 2x2GB.
That would let you upgrade to 8GB IF you go 64-bit down the road.

It might be easier to get all four working if they match (i.e. two matched pairs).
I had minor problems getting four to work; I had to bump the FSB and (G)MCH voltages up by 0.1V to get stable.
August 2, 2008 8:30:30 PM

OK. I'm not a heavy multi-tasker and won't be alt-tabbing out of games much, so 8GB isn't really necessary for me.

Well, I've ordered 2x1GB sticks, so it looks like I may have to do your voltage tweak if/when I get 4GB and Vista 64.

Anyway, thanks everyone for the help!
August 2, 2008 9:24:32 PM

In a lot of tests the 9800GTX+ is on par with or better than the HD4850. It also has a better cooler, and, as most people will agree, Nvidia's partner brands tend to be better regarded. However, the HD4850 has newer technology and will likely perform better in the future. With the hotfix drivers, even the HD4850 with the single-slot cooler should stay within fairly decent temperatures. The HD4850 is also cheaper in most cases. I would go with the HD4850 because it is more future-proof and cheaper. Don't expect to get much overclocking out of it compared to the 9800GTX+, though.
August 2, 2008 10:25:26 PM

If you can afford £140, get a 4870.

Do not get a 4850 unless you are prepared to invest in and install a new cooling solution for it. The stock cooler is good for use only in a fridge, and then only on its own.
August 2, 2008 10:33:04 PM

The_Abyss said:
If you can afford £140, get a 4870.

Do not get a 4850 unless you are prepared to invest in and install a new cooling solution for it. The stock cooler is good for use only in a fridge, and then only on its own.


Or you could just turn up the fan speed.
August 2, 2008 11:06:55 PM

And be deafened as well as slowly toast the rest of your PC.

Lovely.
August 2, 2008 11:07:29 PM

Even though I used to like only Nvidia, the 4850 brings much better results than the 9800GTX+ (in most cases). It also uses less power and costs a lot less money. If I were you, I'd take the 4850, or even the 4870 if you have the money.
August 3, 2008 12:20:39 AM

A single 4850, or dual 8800GTs (you have to have an nForce mobo, though :( ).
August 3, 2008 4:10:31 AM

The_Abyss said:
And be deafened as well as slowly toast the rest of your PC.

Lovely.



Quite - the heat doesn't just evaporate into thin air; it has to go somewhere.

I have a 4850 in my Phenom box, and while it is a nice card, it produces an incredible amount of heat. Stable? Absolutely. But it does run hot, and that could be a concern if you wanted to use two.

Bear in mind HIS is launching an IceQ 4850, which will use the same cooler as the 4870 (at least, it looks the same). I would recommend the HIS IceQ 4850 if you can get one and want a 4850 with the option of stable CrossFire later.
August 3, 2008 4:49:55 AM

I'd definitely go for the 9800GTX+.

The reason is you get more bang for your buck.
I am not talking only about graphics performance: when you buy the 9800 you also get PhysX (and CUDA) support. With all the PhysX games coming out, you will get better performance and a better experience from the Nvidia card.

This alone is worth more than the 48x0.
August 3, 2008 5:08:48 AM

beuwolf said:
I'd definitely go for the 9800GTX+.

The reason is you get more bang for your buck.
I am not talking only about graphics performance: when you buy the 9800 you also get PhysX (and CUDA) support. With all the PhysX games coming out, you will get better performance and a better experience from the Nvidia card.

This alone is worth more than the 48x0.


I'm not sure just how much real value PhysX acceleration on the GPU is going to bring. Keep in mind that when playing a game, the GPU is going to be heavily loaded just rendering the graphics; adding a physics workload on top is likely to slow down the rendering or run the physics really slowly. I suspect that unless you are playing a game your card can max out easily (i.e., an older game), having the physics run on the GPU instead of the CPU is not going to make a noticeable difference. In fact, if the physics work takes GPU resources away from rendering, it could potentially decrease game performance instead of increasing it.
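A quick back-of-the-envelope frame-time budget illustrates the point. This is only a sketch with made-up numbers - the millisecond figures are assumptions, not benchmarks of any real card or game - but it shows why sharing one GPU between rendering and physics can cut the frame rate:

```python
# Sketch: frame rate when rendering and physics share one GPU.
# The per-frame millisecond costs below are illustrative assumptions,
# not measurements of any real card or game.

def fps(render_ms, physics_ms=0.0):
    """Frames per second when both workloads serialize on the same GPU."""
    return 1000.0 / (render_ms + physics_ms)

# A demanding game that already keeps the GPU busy 16.7 ms per frame:
print(round(fps(16.7)))        # ~60 FPS with no GPU physics
print(round(fps(16.7, 5.0)))   # ~46 FPS once physics takes 5 ms per frame

# An older game the card maxes out easily (5 ms per frame) barely suffers:
print(round(fps(5.0, 5.0)))    # ~100 FPS, still far above a 60 Hz refresh
```

In other words, GPU physics is close to free only when the card has rendering headroom to spare, which matches the point above.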
August 3, 2008 5:22:33 AM

I agree that it takes away from graphics performance, but it is all about balance. UT3 with PhysX is absolutely awesome and looks amazing. Worst case, you get SLI with one card doing PhysX plus graphics and the other pure graphics.

In both cases it is better than CPU physics, which slows the computer down even more.

Having CUDA support is an added benefit, and I for one am looking to use the Badaboom video transcoder a lot.

I truly believe that the only reason the 4870 managed to stay close to the 280 and match the 260 is that the Nvidia cards have the added support for PhysX and CUDA.

Look at it this way: for the same price as the 4870 you can get a 260 AND a PhysX card :)
August 3, 2008 5:28:00 AM

Yeah, the IceQ is a great choice. It blows the heat out of the back of the case, and their customer support is great.
August 3, 2008 7:07:24 AM

Something that really annoys me is when people talk about how hot a video card will make your case. Take the 4850, for example, which many people know can run in the 80C range. Just because the TEMPERATURE is high doesn't mean it's putting out more HEAT than another card running at 40C. If you put a nice cooler on the 4850 (also single-slot) and the card begins to run at 50C, your case isn't going to be any cooler... the card is still dissipating the same amount of heat; it just runs cooler because the better cooler carries the heat away faster. Same deal if you increase the fan speed on the 4850's stock cooler: the temps will go down, and the heat production will stay the same.
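The same point can be sketched with a simple steady-state thermal model: the heat dumped into the case equals the card's power draw, while the reported die temperature also depends on the cooler's thermal resistance. The wattage and the R_theta values below are illustrative assumptions, not specs of any real card or cooler.

```python
# Steady-state model: T_die = T_ambient + P * R_theta.
# P (watts) is fixed by the workload; a better cooler only lowers R_theta
# (degrees C per watt). All numbers here are illustrative assumptions.

def die_temp(power_w, r_theta, ambient_c=30.0):
    """Die temperature given power draw and cooler thermal resistance."""
    return ambient_c + power_w * r_theta

POWER = 110.0  # assumed board power under load, in watts

stock  = die_temp(POWER, 0.45)  # weak single-slot stock cooler
better = die_temp(POWER, 0.18)  # assumed aftermarket cooler

print(round(stock, 1))   # 79.5 degC on the stock cooler
print(round(better, 1))  # 49.8 degC on the better cooler
# Either way, 110 W of heat still ends up in the case; only the
# die temperature changed, exactly as argued above.
```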
August 3, 2008 11:39:25 AM

sseyler said:
Something that really annoys me is when people talk about how hot a video card will make your case. Take the 4850, for example, which many people know can run in the 80C range. Just because the TEMPERATURE is high doesn't mean it's putting out more HEAT than another card running at 40C. If you put a nice cooler on the 4850 (also single-slot) and the card begins to run at 50C, your case isn't going to be any cooler... the card is still dissipating the same amount of heat; it just runs cooler because the better cooler carries the heat away faster. Same deal if you increase the fan speed on the 4850's stock cooler: the temps will go down, and the heat production will stay the same.



The 4850 has a cooler design that vents hot air inside the case, and that does inherently raise the ambient temperature of your case - that is the only point I, or anyone, has made, and it's perfectly valid. Don't try to sugarcoat the fact that this card DOES produce a great deal of heat and has a mediocre stock cooler - I own this card and it's great, but I wouldn't run it in CrossFire on stock cooling.
August 3, 2008 1:15:28 PM

beuwolf said:

Having the CUDA support is an added benefit and I am for one looking to use the badaboom video transcoder a lot.


Keep in mind that ATI has a GPU transcoder as well. Furthermore, ATI's transcoder is available now as a free download from ATI's website, while Badaboom is still not available - and when it is, you will have to pay for it.
August 4, 2008 3:30:12 PM

Ah, I remember the days when a card having a two-slot cooler was counted against it rather than in its favor... Then again, I suppose the 4850, at some 110W TDP, is above the mark where they started using two-slot coolers; if memory serves, the X800XT I have is some 89W, and the X850XT PE, which is pretty close to that, has a two-slot cooler to blow air out of the case.
Quote:
could someone explain it to me? does the OP mean that even in games that have no PhysX support they somehow lose performance because of it?

Well, as we've seen in some benchmarks before, adding a PhysX card actually reduces the computer's overall framerate in that game compared to otherwise identical machines that lack the physics card... :pt1cable: 
August 7, 2008 9:45:58 PM

Just_An_Engineer said:
Keep in mind that ATI also has a GPU transcoder as well. Furthermore, ATI's transcoder is available now for free download off of ATI's website while Badaboom is still not available and when it is you will have to pay to download it.


There is a huge difference between what ATI is providing and what Nvidia is: ATI's solution is a hardware transcoder, on some of their cards only, that works with that one specific program. Nvidia's CUDA allows anyone to write any program that uses the GPU for computing; Badaboom is just one example. You can go to the CUDA Zone and see how many have already been written.

This may not be important for everyone, but for me it is a real added benefit to get more than just graphics from my GPU (and I did not even mention PhysX again :) ).
August 8, 2008 12:52:48 AM

+1 for going with the 4850 - I think it is the most powerful card in your price range.
August 8, 2008 1:25:24 AM

The_Abyss said:
And be deafened as well as slowly toast the rest of your PC.

Lovely.


By the time the heat actually affects anything, I'm sure the system will have been replaced with something newer and much faster. GPUs are meant to handle high heat. If you have good airflow in the case, the heat shouldn't affect the other components enough to degrade their life much. My fan is at 50% and I can't even hear it. Although my card idles at 50C, I can see the molten metal slowly drip and burn holes into my computer desk... and I can't hear very well, now that I think of it.
August 8, 2008 1:28:31 AM

Get a cooler-running card that you don't have to tinker with... or get one that you do.
August 8, 2008 12:34:22 PM

stove said:
Get a cooler running card that you don't have to tinker with..... or get one that you do.



I can't seem to get a straight idea of just how hot these cards get. I'm just going to be running a single 4850 (no need for CrossFire) in a normal ATX case with the usual couple of fans... am I going to have major problems with this card as-is? I don't plan on any OC (or tinkering, as you say), and in fact I have the side of my case open from time to time for various reasons. Is the heat issue more of an issue for dual cards, or am I really going to have to mess with the card to keep it from joining my northbridge in starting fires left and right?
August 8, 2008 3:01:13 PM

4850 +1. Fastest card on your list, and at the best price point! Buy another card for more money and less performance if you must.
August 8, 2008 3:25:59 PM

beuwolf said:
There is a huge difference between what ATI is providing and what Nvidia is: ATI's solution is a hardware transcoder, on some of their cards only, that works with that one specific program. Nvidia's CUDA allows anyone to write any program that uses the GPU for computing; Badaboom is just one example. You can go to the CUDA Zone and see how many have already been written.

This may not be important for everyone, but for me it is a real added benefit to get more than just graphics from my GPU (and I did not even mention PhysX again :) ).

I'd note that GPGPU development started ages before CUDA ever came around, and that CUDA is simply nVidia's latest generation of such technology, namely in that it provides a layer to translate C-based code to the GPU. People have been able to use Radeon cards for GPGPU purposes since at least the X1800 series, if not earlier.

And likewise, PhysX is hardly the only physics game in town; one could just as well argue that (for actually less than the price of a GTX 260) one could get a 4870 and a Havok card, which you couldn't get from Nvidia for that price... And it's well worth noting that Havok already has a much bigger installed base than PhysX.

lonewulf said:
I can't seem to get a straight idea of just how hot these cards get. I'm just going to be running a single 4850 (no need for CrossFire) in a normal ATX case with the usual couple of fans... am I going to have major problems with this card as-is? I don't plan on any OC (or tinkering, as you say), and in fact I have the side of my case open from time to time for various reasons. Is the heat issue more of an issue for dual cards, or am I really going to have to mess with the card to keep it from joining my northbridge in starting fires left and right?

Well, the thing is, the Radeon 4850, at 110 watts, has possibly the highest TDP ever for a video card with only a single-slot cooler, rather than a double-slot cooler that blows air out of the case. So it should be a serious consideration with that card. Of course, there are several easy ways to take care of the problem, the two biggest being to place case fans carefully so they exhaust the heat the card is blowing into the case, or, simpler yet, to replace its cooler with an aftermarket one that blows the heat out of the case.

You don't really have to go overboard, especially if your room is air-conditioned during the summer, but know that a case with poor airflow and cooling will likely have stability problems with the 4850 that it wouldn't have with a cooler-running card or one with a two-slot cooler. If your case has good airflow and cooling, it shouldn't be much of a problem.
August 8, 2008 5:56:32 PM

nottheking said:
I'd note that GPGPU development started ages before CUDA ever came around, and that CUDA is simply nVidia's latest generation of such technology, namely in that it provides a layer to translate C-based code to the GPU. People have been able to use Radeon cards for GPGPU purposes since at least the X1800 series, if not earlier.

And likewise, PhysX is hardly the only physics game in town; one could just as well argue that (for actually less than the price of a GTX 260) one could get a 4870 and a Havok card, which you couldn't get from Nvidia for that price... And it's well worth noting that Havok already has a much bigger installed base than PhysX.



Again, you don't really understand CUDA if you are comparing it to ATI's GPGPU efforts. It took Nvidia three years to make their cards run real C code so that any app can be written to use the GPU. Even if ATI began implementing it today, it would take them one to two years to change their architecture to support all the C data types on the GPU. This is exactly why the 4870 can even compete with the GTX 260: if the 260 did not support CUDA it would trash the 4870 by a huge factor. The 4870 can only do graphics and was designed for it. The GTX 260 can do graphics AND computational C, and that's why it only matches the 4870 in graphics (part of its transistors and architecture were made to support something other than graphics).

Regarding physics:
As I said before, PhysX is GPU-accelerated, unlike Havok, which is CPU-based. Therefore, buying an Nvidia card will enable you to play BOTH Havok and PhysX games (since Havok runs on the Intel CPU). Buying ATI will only let you play Havok games, and not any of the PhysX ones.
Not to mention that PhysX is way superior to Havok, since it runs on the GPU.
August 8, 2008 6:34:04 PM

I missed a lot of this thread, but it sounds like beuwolf has bought into the nV PR machine, as if CUDA were something new and GPGPU were restricted to nV only.

nottheking said:
I'd note that GPGPU development started ages before CUDA ever came around, and that CUDA is simply nVidia's latest generation of such technology, namely in that it provides a layer to translate C-based code to the GPU. People have been able to use Radeon cards for GPGPU purposes since at least the X1800 series, if not earlier.


Exactly - this stuff has been going on since the BrookGPU work. The R300 was the first chip that could do it (although with 24-bit precision, while the FX introduced FP32), and ATI's CTM (Close To Metal) has been around longer than CUDA. AMD now primarily uses CAL, under their Stream SDK umbrella. Both are simply front ends for turning the shaders/ALUs into processors for other calculations, and there are a TON of applications doing so. I'm working on something right now for work, and I'm finding both ATI and nV to be of little help in making their hardware work with third-party apps (they both want us to develop apps for them). Microsoft even has its own work with a C# back-propagation library for GPGPU (similar to AMD's ACML-GPU lib), and I actually prefer someone else making this work, because the proprietary stuff is a pain and the vendors are more worried about people using it on hardware that isn't theirs. There is little that can be done on CUDA or CTM/CAL that can't be done on another platform; it's just tougher, especially on older hardware where you have to manually program the ALUs. But both ATI and Nvidia have built their new GPUs as general-purpose stream processors, so it's much easier than it was in the GF7 and X1K era.

Quote:
And likewise, PhysX is hardly the only physics game in town; one could just as well argue that (for actually less than the price of a GTX 260) one could get a 4870 and a Havok card, which you couldn't get from Nvidia for that price... And it's well worth noting that Havok already has a much bigger installed base than PhysX.


And while I haven't kept up with their work on the PhysX cross-over since the early days, if NGOHQ has PhysX working seamlessly on the HD4K series, then an HD4850 would outperform a GTX280, and way surpass a GTX260, on raw compute power alone. So anyone touting CUDA as a benefit doesn't understand the underlying implication of bringing GPGPU ops/apps into the ring when the HD4K is a monster in that area. Look at a more practical gamer application on a non-proprietary platform, like ray-casting rendering, and see the big (sometimes 2-4x) difference in performance going from the GTX280 to the HD4K. GPGPU is definitely not an area that GTX supporters want to discuss. The Tesla 4GB card, maybe, for the truly memory-limited apps, but other than that I wouldn't mention compute options when comparing the two. Even RapidHD has said they would not limit their application to nV only.
August 8, 2008 7:38:33 PM

beuwolf said:
Again, you don't really understand CUDA


Actually, that describes all your statements so far, especially the one about ATI being limited to transcoding only (that was their AVIVO effort, not their GPGPU efforts, which have a far longer and far wider history than CUDA - which you seem to think is the only thing in the space, and something that matters to gamers at all). That PhysX has been ported to run on ATI hardware shows that your point is moot, and you're simply blowing BS smoke everywhere.


Quote:
if you are comparing it to ATI's GPGPU efforts. It took Nvidia three years to make their cards run real C code so that any app can be written to use the GPU. Even if ATI began implementing it today, it would take them one to two years to change their architecture to support all the C data types on the GPU.


Actually, it's not that simple: much C code must be adapted to run in the limited C environment that works with CUDA. There's a lot of basic C++ code that will crash simply because of DLL issues or because of length limits. Even very basic code still needs to be converted to run on the GPU. So it doesn't just run out of the box; like everything else, it must be adapted to run in the stream environment on CUDA. It's like using Cg: it has a lot of interoperability with C as well, but it still needs to be coded with Cg in mind, just as code needs to be adapted to CUDA. All CUDA does is let you use C as the interface to run your computations on the GPU. It's still an interface more than something tied to the core.
As for the time it takes, you're including ALL of nV's development time in your figure; ATI did a lot of their groundwork in other areas. And BTW, you can thank a Calgary company, Acceleware, for a lot of nV's early work with C on the GPU - that IP is a major reason nV bought them. I wouldn't say it would require the exact same amount of time (shorter or longer) for any other company to follow that path should they so choose. I doubt it would take much effort to make a C-centric compiler interface for the HD4K series; it seems more a software limitation than a hardware one.

Quote:
This is exactly why the 4870 can even compete with the GTX 260: if the 260 did not support CUDA it would trash the 4870 by a huge factor. The 4870 can only do graphics and was designed for it.


What a load of BS - the X1K through HD4K can do a lot more than just graphics; there are a ton of apps that use the GPGPU power to do raw math that is not graphics-specific.
We use it for something graphics-related with mapping software, but it's not limited to vector calculations alone.

Quote:
The GTX 260 can do graphics AND computational C, and that's why it only matches the 4870 in graphics (part of its transistors and architecture were made to support something other than graphics).


Who cares? If it added transistors to be a cookbook organizer, I wouldn't care - it's not helping me with the games or applications I use. The OP isn't asking for a Swiss Army knife.

Quote:
Regarding physics:
As I said before, PhysX is GPU-accelerated, unlike Havok, which is CPU-based.


Havok does both CPU and GPU.
Just like PhysX does both CPU and GPU physics. The difference is that PhysX has tiny demo levels in some games and in 3DMark, whereas HavokFX is still just tech demos.

Quote:
Therefore, buying an Nvidia card will enable you to play BOTH Havok and PhysX games (since Havok runs on the Intel CPU).


Havok runs on any x86/x64 CPU. It's not limited to Intel; AMD and VIA can both use Havok's CPU physics.

Quote:
Buying ATI will only let you play Havok games, and not any of the PhysX ones.


Actually, Nvidia themselves said PhysX could easily be done on ATI cards - they wanted to get AMD to use CUDA for that - but the boys at NGOHQ showed you can do it without either CUDA or an nV card.

Quote:
Not to mention that PhysX is way superior to Havok, since it runs on the GPU.


Limited implementations of PhysX on the GPU, in an already limited physics API. PhysX is second banana to Havok's much larger game-title base.

Running small add-on levels in GRAW and UT3, and not throughout the game, doesn't make for a compelling argument for PhysX GPU acceleration. And that Epic chose to use their OWN physics implementation in UT3 and only used PhysX for the demo levels, and that GRAW uses Havok at its core and PhysX just for the demo island, doesn't say much for how respected PhysX is even among the developers who decided to give it a test drive.

August 8, 2008 9:05:45 PM

TheGreatGrapeApe said:

Actually, it's not that simple: much C code must be adapted to run in the limited C environment that works with CUDA. There's a lot of basic C++ code that will crash simply because of DLL issues or because of length limits. Even very basic code still needs to be converted to run on the GPU. So it doesn't just run out of the box; like everything else, it must be adapted to run in the stream environment on CUDA. It's like using Cg: it has a lot of interoperability with C as well, but it still needs to be coded with Cg in mind, just as code needs to be adapted to CUDA. All CUDA does is let you use C as the interface to run your computations on the GPU. It's still an interface more than something tied to the core.


What are you talking about? CUDA is just an open-source C compiler with very few modifications. Any C program will compile with ZERO work. The only difference between regular C and CUDA is that they added a few types and changed the syntax for functions so you can decide how to run them in parallel (and all of that is optional). Do yourself a favor and download the SDK.

Quote:
Who cares? If it added transistors to be a cookbook organizer, I wouldn't care - it's not helping me with the games or applications I use. The OP isn't asking for a Swiss Army knife.


But a lot of people like me do care. The GPU is not only for graphics; it can help with a ton of other things.

Quote:
Havok does both CPU and GPU.


No, it does not. Havok FX has been talked about for as long as I can remember, but it has not moved forward much. I bet you won't see it come out in the next year or so. PhysX is the only option for GPU-accelerated physics in games NOW.


Quote:
Havok runs on any x86/x64 CPU. It's not limited to Intel; AMD and VIA can both use Havok's CPU physics.


Even better - that just made my point for buying Nvidia cards even stronger (maybe with an AMD CPU :) ).

Quote:
Actually, Nvidia themselves said PhysX could easily be done on ATI cards - they wanted to get AMD to use CUDA for that - but the boys at NGOHQ showed you can do it without either CUDA or an nV card.


Coulda, woulda, shoulda :) - right now, and for the foreseeable future, it does not.

Anyway, I admit to being an Nvidia fanboy, so this conversation will get us nowhere. Just wanted to throw in my two cents.
August 8, 2008 11:23:21 PM

beuwolf said:
What are you talking about? CUDA is just an open-source C compiler with very few modifications. Any C program will compile with ZERO work. The only difference between regular C and CUDA is that they added a few types and changed the syntax for functions so you can decide how to run them in parallel (and all of that is optional). Do yourself a favor and download the SDK.


Actually, I have used the SDK, as well as RapidMind's, and that's why I know it still has a lot of trouble spots (for me it was with some DLLs), although that seems less of an issue on Linux. It's more robust than RM, but RM is a little more flexible and can be used on both CPUs and GPUs. You cannot simply drop arbitrary code into CUDA; you still need to learn all the special cases and exceptions, like the length limits - it's still a C front end for access to the core. CUDA 2.0 improved a lot from the early days, when it was problematic as heck, but your statements, like the others you make, oversimplify it and pretend that nV is the only one that could do it - as if it were a hardware limit and not a software limitation of the compiler. And so far CAL looks to be even more powerful overall, but its learning curve is ridiculously daunting.

Quote:
But a lot of people like me do care. GPU is not only for gfx and it can help for ton other things.


But it's irrelevant to the thread, or even to your initial use for it (transcoding), because others can do the same without CUDA - so who cares if nV has dedicated transistors? If the task is completed as efficiently or more so, it doesn't matter whether that's due to dedicated hardware or raw power. I don't care if AA is done in shaders or in dedicated hardware, as long as the result is the same and one is faster than the other. It's of note, but it's no more important than telling the OP he should buy the HD4K because it does ray-casting faster. Unless that's specifically what he's asking about, it's even less important than tessellation, let alone DX10.1.

Quote:
No, it does not. Havok FX has been talked about for as long as I can remember, but it has not moved forward much. I bet you won't see it come out in the next year or so. PhysX is the only option for GPU-accelerated physics in games NOW.


Which is still barely more than a tech demo itself - it's not the underlying physics of either of its big-title games, and in the ones where it is the underlying physics, it's CPU-only, just like Havok.

Quote:
Even better - that just made my point for buying Nvidia cards even stronger (maybe with an AMD CPU :) ).


Once again you're skipping over the facts to colour your comments: you said it's Intel CPUs, whereas, just like PhysX, it's not limited to a specific architecture. As before, nV has offered both CUDA and PhysX to AMD for their ATI hardware. And remember the PPU - essentially nV is just doing a wrapper treatment to make it work on the GPU. So it doesn't support your point; it actually shows how myopic your view of the reality is: the things you are focusing on are software limits that can change if AMD so wishes (nV is offering; AMD is not taking). My point is very contrary to yours.

Quote:
Coulda, woulda, shoulda :) - right now, and for the foreseeable future, it does not.


What, like PhysX was GTX-only, then GF9-only? Oops, forgot to mention it's doable on the GF8600 and the G92-based GF8800 too - now what?
Artificial limits are not the same as hardware limits, and you're confusing the two.

Quote:
Anyway, I admit to being an Nvidia fanboy, so this conversation will get us nowhere. Just wanted to throw in my two cents.


Admitting you're a fanboi is the first step; the next is to let go of that, and the final one is to be IHV- and app-agnostic.

The biggest thing is, if you were even a competent nV fanboi you would have discussed the benefits of the GTX 260 from the start, not just the GF9800GTX+, because the 260 is the only one with value for the uses you later describe. The 9800GTX+ is pointless from that perspective - it's even more limited, especially with its lack of double-precision support, which, for what you say you're focused on, would matter more than the limited PhysX support.