
GPU Physics is a Stupid Idea

Last response: in Graphics & Displays
May 20, 2006 6:21:18 AM

GPU physics is a stupid idea. Why? Because it only gives eye candy.

The PPU from Ageia is somewhat of a great idea, but now is not the time to release it; plus, they bottleneck its 20 giga-instructions-per-second throughput with the PCI 2.2/3.0 slot. No GPU can do 20 giga-instructions per second with data going forwards and backwards at the same time. Simply said, no data that is fed to the pixel/vertex shaders will EVER go backwards (for now; the future, I don't know), so physics via a GPU can only bring EYE CANDY for now.
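For what it's worth, the bottleneck claim is easy to sanity-check with rough numbers (the figures below are assumptions, not from the post: classic 32-bit/33 MHz PCI peaks at roughly 132 MB/s, shared across the bus):

```python
# Rough sanity check of the bandwidth claim. Assumed figures: classic
# 32-bit / 33 MHz PCI peaks at ~132 MB/s, shared across the whole bus.
pci_bandwidth_bytes = 33_000_000 * 4      # bytes/s of PCI peak throughput
instructions_per_sec = 20_000_000_000     # the quoted "20 giga-instructions"

# Average bytes of bus traffic the slot can supply per PPU instruction.
bytes_per_instruction = pci_bandwidth_bytes / instructions_per_sec
print(f"{bytes_per_instruction:.4f} bytes of PCI traffic per instruction")
# Well under a hundredth of a byte per instruction: the slot cannot feed or
# drain anywhere near the chip's throughput, so most data must stay onboard.
```

Whether that is a real bottleneck depends on how often the PPU actually needs to cross the bus, which later posts in this thread get into.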
May 20, 2006 1:30:09 PM

Quote:
GPU physics is a stupid idea. Why? Because it only gives eye candy.

The PPU from Ageia is somewhat of a great idea, but now is not the time to release it; plus, they bottleneck its 20 giga-instructions-per-second throughput with the PCI 2.2/3.0 slot. No GPU can do 20 giga-instructions per second with data going forwards and backwards at the same time. Simply said, no data that is fed to the pixel/vertex shaders will EVER go backwards (for now; the future, I don't know), so physics via a GPU can only bring EYE CANDY for now.

Ageia is a waste of time. GPUs will basically do what the Ageia card does. That will happen as time goes on.

Errr, no. You didn't look: interactable objects are different from eye-candy physics. Go do your homework before saying the GPU can do what the PPU can do.
May 20, 2006 2:39:26 PM

I love how any of you people feel you're qualified to say what is a stupid idea. People thought VGA cards were a dumb idea. No idea is stupid if it can give tangible results (well, in this case, tangible in the virtual world). An idea can only be a failed idea if people choose not to accept it due to ignorance or cost.

I think Ageia's card and GPU physics are both great ideas. Granted, only one can really exist on a system (it would be extremely difficult to implement a way to have both share the physics workload). So the problem is: which way do we go? I like the idea of a standalone card handling it. But I also like ATI's idea of having a second GPU handle the physics, because their implementation doesn't call for two identical cards. This lets us recycle a previous-generation graphics card instead of selling it for next to nothing, throwing it in a box that doesn't need it, or letting it gather dust.

And as far as bottlenecking goes, as long as the average framerate stays around 60 fps, I don't care. Do I want 200 fps and canned physics effects? Or do I want 60 fps and near realistic physics effects? I would rather have the latter.
May 20, 2006 3:11:39 PM

No kidding, I'm with you. There's nothing wrong with trying to do physics calculations. They really help the immersion of games, but they're super taxing on a system.
I would have loved to play FEAR with all the effects up, but my CPU simply couldn't do better than medium, and that at ~30 fps.
I support ATI in this. The fact is that GPUs are very powerful specialized processors, and if we can harness that power for physics, then I say do it!
May 20, 2006 3:29:42 PM

Quote:
GPU physics is a stupid idea. Why? Because it only gives eye candy.

The PPU from Ageia is somewhat of a great idea, but now is not the time to release it; plus, they bottleneck its 20 giga-instructions-per-second throughput with the PCI 2.2/3.0 slot. No GPU can do 20 giga-instructions per second with data going forwards and backwards at the same time. Simply said, no data that is fed to the pixel/vertex shaders will EVER go backwards (for now; the future, I don't know), so physics via a GPU can only bring EYE CANDY for now.

Ageia is a waste of time. GPUs will basically do what the Ageia card does. That will happen as time goes on.


I would agree :-D

Sadly you almost have to spend $3-5K to be a gamer these days :cry: 
May 20, 2006 4:03:50 PM

"It only gives Eye-Candy."
Uhmmmm, what's wrong with that? Isn't eye candy good? I'd much rather look at a screen full of eye candy than a boring screen full of cr@p. Until they fix the problem of lower frame rates, and the price comes down, I'm not ready to "upgrade".
May 20, 2006 4:10:45 PM

You certainly can spend $3K or more on a good setup, but if you are content with previous-generation equipment (like I am) you only have to spend ~$1K to have a decent machine. I don't get the best frame rates on the newer games (~30 fps), but I can play anything out right now at high detail. I think FEAR is the only game that really kicked my machine in its glowing red nads, so I had to turn it down to medium detail.
May 20, 2006 4:32:27 PM

Quote:
GPU physics is a stupid idea. Why? Because it only gives eye candy.

The PPU from Ageia is somewhat of a great idea, but now is not the time to release it; plus, they bottleneck its 20 giga-instructions-per-second throughput with the PCI 2.2/3.0 slot. No GPU can do 20 giga-instructions per second with data going forwards and backwards at the same time. Simply said, no data that is fed to the pixel/vertex shaders will EVER go backwards (for now; the future, I don't know), so physics via a GPU can only bring EYE CANDY for now.

Ageia is a waste of time. GPUs will basically do what the Ageia card does. That will happen as time goes on.

Errr, no. You didn't look: interactable objects are different from eye-candy physics. Go do your homework before saying the GPU can do what the PPU can do.

I may not be able to handle as many calculations per second as xgas's mighty PhysX paperweight, but I can do this calculation:

4754 posts by prozac26
divided by
44 posts by xgas
= 108


Looks to me like prozac has done more research than xgas by a factor of greater than 100.
I know whose opinion I'm more likely to value...
May 20, 2006 4:51:08 PM

:-D

lol

:trophy: :trophy:

Indeed
May 20, 2006 5:12:16 PM

prozac26 is da man ....

:wink:
May 20, 2006 5:54:45 PM

Well, post count doesn't prove anything, but experience does. I have been using computers since the age of 4; that's the DOS age when I was using them. Simply said, I have 18 years of experience.
May 20, 2006 6:02:29 PM

Check my avatar for your reward.
May 20, 2006 6:51:41 PM

Quote:
I'm assuming you are either 22 or 18 years of age, then. Your point? I used to use DOS as well when I was young. My first comp was a used one from my dad's friend's work. It had a 386 and Windows 3.1. Does this make me as qualified as you?

You still haven't provided the proof I asked for. It is not optional: either you back up what you say or shut it.


LOL, then I choose to shut up. You know, those back then who didn't think GFX cards were the way to go still bought them, so I might eat my words too. My theory is that the GFX card and CPU have to wait for the PPU to transfer data to them (check out the I/O delay between the CPU, the GFX card and a standard PCI slot, lol). That's what's causing some slowdowns and not pushing the system high enough in CellFactor. Unless you have a super-fast multi-core proc, you will need the PPU. Plus, the software version of Ageia's engine can spread the pain across multi-core/multi-proc systems, but Havok hasn't done this in any recent titles, which is a waste.

You see the multi-core proc in the Xbox 360? It's a CPU designed to run "CLEAN" code, meaning it's good at graphics geometry processing, but the drawback is that it's pretty poor at physics processing. The standard CPUs in the gaming comps we see nowadays have what we call SSE optimization: it rearranges the random data and turns it into "clean" code for easy processing.
So it's kind of smart to design a chip that does this kind of processing without the optimization, because it's "DESIGNED" to run data like that; it does not need the optimization. Oh yeah, transistor count does not prove anything: the new 7900GT/GTX has 120+ million fewer transistors than the 7800GT/GTX, but the 7900s run faster. That is all.

In the end, it's "wait and see".
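The "rearranging random data into clean code" idea is essentially the array-of-structures vs. structure-of-arrays distinction. A minimal sketch of the rearrangement (my own illustration, nothing SSE- or Ageia-specific; the particle fields are made up):

```python
# Hypothetical particle data in array-of-structures (AoS) layout:
# interleaved fields, awkward for a 4-wide SIMD unit to load.
particles_aos = [
    {"x": 1.0, "y": 2.0, "z": 3.0},
    {"x": 4.0, "y": 5.0, "z": 6.0},
    {"x": 7.0, "y": 8.0, "z": 9.0},
    {"x": 10.0, "y": 11.0, "z": 12.0},
]

# Structure-of-arrays (SoA): each field contiguous -- the "clean" layout
# an SSE-optimized engine rearranges data into before processing it.
particles_soa = {
    "x": [p["x"] for p in particles_aos],
    "y": [p["y"] for p in particles_aos],
    "z": [p["z"] for p in particles_aos],
}
print(particles_soa["x"])   # [1.0, 4.0, 7.0, 10.0] -- one contiguous lane
```

A chip designed for the scattered layout in the first place would skip the rearrangement step entirely, which is the poster's point.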
May 20, 2006 7:31:26 PM

While it is true that ATI and nVidia can do physics calculations, they are not HW physics, but SW pumped through the FP units on the GPU. A PPU has hard coding for physics, so it is more specialized than a GPU. Since it is for games, I don't know why they didn't use PCIe x4. Most serious gamers have PCIe. No one is really making AGP slots anymore.
May 20, 2006 11:16:29 PM

Quote:
GPU physics is a stupid idea
Quote:
The PPU from Ageia is somewhat a great idea


What a complete fukcing moron.....

Do you argue against yourself often? :roll:
May 21, 2006 2:25:19 PM

"18 years ago I wasn't alive,"

Whippersnapper! :-)
May 22, 2006 2:15:23 AM

Quote:
were you replying to me or all of us.


No, yours is just the one I happened to click on to reply to. Sorry.
May 22, 2006 3:00:53 AM

The physics processing GPU wasn't a stupid idea.
To release a PPU that does practically nothing... now that's a stupid idea.
Great Going Ageia! :lol: 
May 22, 2006 3:24:11 AM

Quote:
The physics processing GPU wasn't a stupid idea.
To release a PPU that does practically nothing... now that's a stupid idea.
Great Going Ageia! :lol: 


That pretty much sums up this entire thread...
May 22, 2006 3:35:52 AM

Quote:
The physics processing GPU wasn't a stupid idea.
To release a PPU that does practically nothing... now that's a stupid idea.
Great Going Ageia! :lol: 


That pretty much sums up this entire thread... Exactly! btw what happened to your thread about running PhysX games without PhysX?
May 22, 2006 3:47:30 AM

Quote:
The physics processing GPU wasn't a stupid idea.
To release a PPU that does practically nothing... now that's a stupid idea.
Great Going Ageia! :lol: 


That pretty much sums up this entire thread... Exactly! btw what happened to your thread about running PhysX games without PhysX?

I'm sooo glad you asked. :D  I think that thread dovetails nicely into this one. If anyone still thinks PhysX is a viable solution as it is right now they should check it out.
May 22, 2006 3:54:43 AM

Thank you. OK man, it's almost 12 here in New York; I gotta go to sleep now. Bye.
May 22, 2006 5:55:00 AM

I support the use of PPUs, but AGEIA screwed up by making the game slower without really improving the overall game. IMHO, the little square bits flying out (GRAW) actually made the game look worse.

For those who did not know, many "ancient" computers used multiple processors. My favorite was the AMIGA, with chips lovingly called

COPPER
BLITTER
AGNUS
DENISE
(and the CPU)

which made it an awesome computer when it came out, because the different chips performed different functions, offloading work from the CPU.
May 22, 2006 7:23:59 AM

Quote:
Can you show me where it says they have hard coding, and what the hell that means? Physics would just be a lot of maths. Why can't those same equations be run on a GPU and not a PPU? The PPU still needs correctly programmed software to work, AFAIK.



Both do 3D vectors, but not for the same thing. There is a big difference between matrix math for physics and texture and vertex mapping. If you set up an ASIC to do a specific mathematical function, it will excel at it. Again, I realize that FP-wise GPUs are more than capable, but if this were to go to a 3rd gen, it would be so far ahead in physics that 378 million transistors (X1900) would seem like a few.
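The access-pattern difference is easy to show in a toy sketch (my own illustration, not how either chip actually works): a vertex transform is a one-shot matrix multiply whose result flows onward, while a physics step feeds its own output back in as the next step's input.

```python
def transform_vertex(m, v):
    """GPU-style work: one matrix * vector pass, result flows onward."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def step_body(pos, vel, dt=0.01, g=-9.8):
    """Physics-style work: this step's output is next step's input."""
    vel = vel + g * dt               # integrate acceleration
    pos = pos + vel * dt             # integrate velocity (semi-implicit Euler)
    return pos, vel

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(transform_vertex(identity, [1.0, 2.0, 3.0]))   # one-way: [1.0, 2.0, 3.0]

pos, vel = 10.0, 0.0                 # drop a body from 10 m
for _ in range(100):                 # 1 simulated second, fed back every step
    pos, vel = step_body(pos, vel)
print(round(pos, 2))                 # ~5.05 m: each result re-enters the loop
```

The feedback loop is why physics wants fast read-back of its own results, which is exactly what 2006-era shader pipelines were not built for.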
May 23, 2006 10:30:39 PM

Quote:
While it is true that ATI and nVidia can do physics calculations, they are not HW physics, but SW pumped through the FP units on the GPU. A PPU has hard coding for physics, so it is more specialized than a GPU. Since it is for games, I don't know why they didn't use PCIe x4. Most serious gamers have PCIe. No one is really making AGP slots anymore.


You're basing that statement on the SLI physics implementation. HavokFX is no less software-bound (Ageia needs the software too, BTW) than the alternative, but it's far more advanced than SLI physics, which is very limited in its implementation. I doubt anyone will adopt SLI physics; instead they will turn to the more adept HavokFX to do VPU-based physics calculations, which can use the card to set up calculations in hardware through the dispatcher, especially in DX10 hardware.

And while on the surface it may not be able to do as much at the core level, VPU-based physics doesn't have to do two-path communication and can tell the VPU what to render and what not to without leaving the card (or so is the 'plan').

And the whole point behind my support of HavokFX over PhysX is not that one is better than the other, only that one is a more attractive solution for people like me or close to me (especially as I'm personally heading to a laptop-only world). Most people aren't going to buy a dedicated PPU; it's just not a consideration. Heck, if they don't buy a dedicated sound card now, do you really think slightly better physics is a BIG enough concern to spend $200-300 on a dedicated card?

I can see a variable VPU (either unified-shader or similar design) being a more useful object in allowing slight increases in physics for the market that wants it but isn't going to sacrifice to get it.

Think of it in these terms: you have $400 (if you're lucky) to upgrade the 'visible package', let's call it, where you could spend half on the PPU and half on the VPU, or all of it on the VPU. So what would be your choice?

Currently the GF7600GT + PPU or the X1900XTX/GF7900GTX would arguably be the best-buy combos for that money (not being allowed to send extra to CPU/memory/HD, which would make an XT or plain GT attractive too). Now even this, IMO, is more money than the average gamer would spend on a single card each upgrade cycle, but for argument's sake it fits.

So considering those two choices at this very moment, it's obvious which one offers the most return this second. But even looking to the future, would this purchase make sense from a perspective of actual gameplay if it meant 1024x768 for one choice and 1600x1200 for the other, or, if you're lucky, no AA versus AA? With the physics card you're pretty much always stuck at the low resolution, but you may have the benefit of physics, whereas the VPU-only solution may allow you higher resolution in games where the enhanced physics are just fluffy add-ons, but may grind you down to the same resolution with half the physics in a title that requires physics and makes the VPU balance the load. For that, my choice would usually favour the graphics option.

That's especially so if it's not game-dependent physics (like my bouncing grenade actually bouncing correctly into the other room, off the wall and into that side room, thus killing the bad guys), because if you are simply sharpening up / 'particling-up' the smoke plume, or making it 200 pieces of box instead of 20, I'm not sure that's as big a concern. And I think the VPU could handle those game-dependent issues, which are usually very specific.

Now, a rare exception, IMO, would be if you could make it so that my grenade beside any random tree makes it collapse, providing me with visually accurate cover against individual bullets from enemies in various changing positions (not pre-determined); then perhaps that'd be worthwhile, if it were only achievable with PPUs.

But even if they made GRAW run at the same speed with the added physics, the arbitrary nature of those improvements wouldn't be good enough for me. It'd be like buying a card solely on its ability to render a demo; and regardless of Dusk/Nalu or Ruby's beauty, there's no fricking way most people are going to do that who aren't fanbois who would've bought it anyway.

Just my two vectors worth. :wink:
May 24, 2006 9:58:14 PM

I actually think what Ageia is trying to do with their PhysX card is intriguing, albeit flawed. From what I've seen in the GRAW comparison videos, the card does make a substantial improvement in the realism and visual quality on screen. However, from what I've heard and read about the real-world performance, it would seem that the card is just trying to do too much. Adding that much on-screen action in real time is like trying to play F.E.A.R. at max settings with an FX5200: it may look nice, but it will play like a 6-hour slideshow. Furthermore, charging $300 for it kinda limits its adoption by the masses. Ageia should go back to the drawing board, ditch the PCI interface, pump up the performance, and drop the price... substantially (MSRP ~$100). After that, most serious gamers would consider it a viable option.
May 24, 2006 10:11:29 PM

Well, I would buy it if it were $100 and on PCIe.
May 24, 2006 10:36:24 PM

Quote:
From what I've seen in the GRAW comparison videos, the card does make a substantial improvement in the realism and visual quality on screen.


And that's where we disagree. To me it's just the 'shiny water' of the physics world. Like I said, little real impact on the game itself: the smoke looks a little better, there are a lot more objects once things blow up, but a cut scene could achieve the same thing for the amount of interaction it has in actual gameplay.

Quote:
However, from what I've heard and read about the real-world performance, it would seem that the card is just trying to do too much. Adding that much on-screen action in real time is like trying to play F.E.A.R. at max settings with an FX5200: it may look nice, but it will play like a 6-hour slideshow.


Well, they have improved the performance somewhat (still a drain on SLI/Xfire setups).

Quote:
Furthermore, charging $300 for it kinda limits its adoption by the masses.


And that's the biggest barrier, IMO. Unless it can do more than 'shiny water' effects and do something that MATTERS to my gameplay, then I'm looking at 10+ cases of beer, 20 DVDs, 5-10 new games, a new pair of skis or boots, a season pass to a small hill, or a playoff ticket in Edmonton. It's all relative, but it just isn't a 'must have' for me yet, and as long as it's just a question of 200 boxes exploding versus 20, or more billowy smoke, I probably won't ever spend that much on an add-in card. $100, yeah, maybe, but still, it's gotta affect gameplay more. Get me real physics where I can lob a grenade beside a piece of steel pipe on the ground and have that pipe go flying across and kill the guy. Make my sniping more difficult because of wind gusts, angle of elevation, and distance, instead of, no matter what the scenario, aim two reticle notches above the kill triangle and there you go, head shot. Now that's physics I'd pay for; everything else makes me think about how much shinier my water would be with better graphics.
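The kind of sniping physics described here is cheap to express; a toy drop-and-windage sketch (all numbers and the drag-free model are my own simplifications, no relation to any actual game engine):

```python
def sniper_correction(distance_m, muzzle_velocity_ms, crosswind_ms):
    """Toy flat-fire model: bullet drop and wind drift at a given range.

    Assumes constant bullet velocity (no drag) -- a deliberate
    simplification; real ballistics is considerably messier."""
    t = distance_m / muzzle_velocity_ms      # time of flight in seconds
    drop = 0.5 * 9.8 * t ** 2                # gravity drop in metres
    drift = crosswind_ms * t                 # naive full-value wind drift
    return drop, drift

drop, drift = sniper_correction(400, 800, 5)  # 400 m shot, 5 m/s crosswind
print(f"hold {drop:.2f} m high, {drift:.2f} m into the wind")
```

Even this crude model already makes every shot scenario-dependent, which is the gameplay difference the poster is asking for.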

Quote:
Ageia should go back to the drawing board, ditch the PCI interface, pump up the performance, and drop the price... substantially (MSRP ~$100). After that, most serious gamers would consider it a viable option.


Yeah, but it still needs a little more for most. At that price you no longer want just the 'serious gamers'; you want 'all gamers'. Make it a killer app in more ways than one; convince people it has a larger impact than even HDR does.
May 24, 2006 10:47:01 PM

Quote:
Haro,

GPUs already can do physics. ATI's cards can do it. That's why Nvidia bought Ageia, so they can do it.
When did Nvidia buy AGEIA?
May 24, 2006 10:51:16 PM

Both manufacturers' cards can do physics; it's just a question of implementation. And while SLI physics doesn't work very well, Havok FX works well on both nV's and ATi's cards.

And as has been asked already, since when did nV buy Ageia?
May 24, 2006 10:52:52 PM

Quote:
news to me too. can we have a link please. i would have thought their would have been a post already if they had.

Why did you reply to me? It's the Admiral you should be asking.
May 24, 2006 11:01:47 PM

Quote:
GPU physics is a stupid idea
Quote:
The PPU from Ageia is somewhat a great idea


What a complete fukcing moron.....

Do you argue against yourself often? :roll:
LMAO!!!!!!!!!! HAHAHA. OMG!!!! LOL :lol:  :lol:  Seriously, I'm gonna suffocate.

@ xgas

Post counts don't matter. Neither does experience. 18 years ago I wasn't alive, and CPUs couldn't do simple math calculations. So basically most of those 18 years don't count, since there is little relationship between DOS and a PhysX card.


Post count doesn't matter, and neither does the "experience" of a 4-year-old at a computer. CPUs still can only do simple math; they just do it really fast.
May 24, 2006 11:05:13 PM

Quote:
i meant to reply to you as i was saying that like you i hadn't heard the news. after my first sentence i should have made it clear that the rest was for the admiral.
OK man, I just hate when people do that sometimes. Sorry about that. BTW, I think that Admiral is lying.
May 25, 2006 12:10:23 AM

Quote:
i meant to reply to you as i was saying that like you i hadn't heard the news. after my first sentence i should have made it clear that the rest was for the admiral.
OK man, I just hate when people do that sometimes. Sorry about that. BTW, I think that Admiral is lying.

He's not intentionally lying. He's either misinformed or full of $h!t. I hope it's not the latter.
May 25, 2006 3:08:49 AM

This thread is still going?

Of course the GPU can do it. But you need R&D costs; then again, ATi or NVidia could just buy up Ageia.

Me, I can't wait for a BPU (biological processing unit): real-time biological process simulation (only joking).
May 25, 2006 3:24:40 AM

Quote:

Of course the GPU can do it. But you need R&D costs;


R&D into GPU-based physics is likely older than Ageia and their PPU, though I have no numbers; look at bridge processing and its use for precisely that.

Quote:
but ATi or NVidia can buy up Ageia.


But they won't until Ageia has fizzled (that's probably already happening a bit after the lackluster launch). Buying so 'late' in the 'early' stage of the company would mean paying a maximum price based on a lot of hype. Buying a year ago or more would've made more sense; buying later on makes sense too. But right now they are still riding the hype, and only once nV or ATi determine they for sure can't do better, or can't compete, would they even consider it, IMO.

But both companies have been talking about more acquisitions for quite some time. IMO, though, SGI is a stronger play for both.
May 25, 2006 3:30:05 AM

Quote:
..


Hey, cecil. Why delete the entire post? Why not just say that you messed up? We'll forgive you!
May 25, 2006 3:54:13 AM

Haro,

Surprised no one saw this article. NVIDIA buys over AGEIA, as the title says:

http://www.rojakpot.com/showarticle.aspx?artno=309&pgno...


""NVIDIA Buys Over AGEIA!

This morning, we had an emergency conference call with NVIDIA. Apparently, they had something urgent to announce. You couldn't possibly guess what it was. Neither did we.

Over the last few months, NVIDIA has been steadily buying up companies that add value and expertise. Their most recent purchase was the Finnish mobile graphics outfit, Hybrid Graphics. But that changes today with NVIDIA's acquisition of AGEIA!

What does that mean for us users? Let's talk to NVIDIA.""
May 25, 2006 4:04:03 AM

Quote:
Haro,

Surprise no one saw this article. Nvidia buys over Aegia as the title says:

http://www.rojakpot.com/showarticle.aspx?artno=309&pgno...


""NVIDIA Buys Over AGEIA!

This morning, we had an emergency conference call with NVIDIA. Apparently, they had something urgent to announce. You couldn't possibly guess what it was. Neither did we.

Over the last few months, NVIDIA has been steadily buying up companies that add value and expertise. Their most recent purchase was the Finnish mobile graphics outfit, Hybrid Graphics. But that changes today with NVIDIA's acquisition of AGEIA!

What does that mean for us users? Let's talk to NVIDIA.""


Spent a lot of time on that, didn't you?
May 25, 2006 4:08:27 AM

Not at all; it was a few weeks ago that I saw the article about NVIDIA acquiring AGEIA. This happened not long after Rojakpot had published the article where ATI wins on physics vs. NVIDIA cards.
May 25, 2006 4:21:48 AM

Quote:
Haro,



Here's the ATI vs Nvidia Physics test: http://www.rojakpot.com/showarticle.aspx?artno=308&pgno...



Please, please PLEASE tell me that you didn't fall for that. You're just kidding, right? If you aren't, did you happen to check out the date of the post? April 1, 2006. Does that date mean anything to you?
May 25, 2006 5:52:34 AM

OK, first off, to address the issue of the use of the PCI bus for this card: the bus will not be a limiting factor. Yes, the chip does 20 gigaflops, but the card doesn't send or receive data over the bus after every instruction. It may take 2,000 or 20,000 (who knows) instructions before the PPU has any data for the CPU/GPU. Think of how your GPU operates: many operations are executed before a single pixel is processed and sent to the RAMDAC. Another point is that the PPU will not be managing bus traffic. When the PPU makes a request for data, it will give instructions to an onboard DMA controller / bus arbiter to handle the retrieval of the data; the same goes for sending data. This is how GPUs and CPUs handle data management: you just can't tie up CPU/GPU machine states while data is being sent or received. Also, once a level in your game is loaded, most of the data and instructions for said level will be loaded into the PPU's onboard RAM, much like with your graphics card. Of course, during the level there will still be bus traffic, but not as much as you think, and mostly outbound.

As for the people pooh-poohing the idea of any form of graphics-based implementation: adopt a wait-and-see attitude. As with any product, there are going to be early adopters. These are the people who will keep the product alive in its retail infancy. If software titles step up and leverage the power of Ageia's PPU and there is a discernible difference, then I will happily trundle off and get myself one. When it comes to making games more realistic and interactive, I'm all in. I am not an early adopter, though; these people (and I'm not bashing them) are the ones who keep new ideas alive long enough to either fail because they are bad ideas or take off and become the next piece of the computing equation. I'm the sort of person who doesn't rush out and buy the first stepping of any CPU, GPU or chipset. I don't buy Rev 1.00 of any motherboard, nor do I rush out and buy Rev 1.0 of any software. I prefer letting those who have to have the latest and greatest find "all" the bugs in any product I wish to buy; once they are fixed, then I buy.

As for GPU-implemented physics, hey, why not, if it works. To be honest, the only people it will benefit are the people who already have multi-GPU systems. Most people are not going to run out to get another graphics card (and maybe a motherboard) to run SLI/Crossfire just to have GPU physics. Unless you want to have more than one graphics card to begin with, you would be better off getting Ageia's card (especially if you'd have to get another motherboard).
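The batching argument about the PCI bus can be put in rough numbers. Everything below is hypothetical: the object count, state-record size and frame rate are my guesses, not Ageia's specs.

```python
# Suppose the PPU simulates many rigid bodies per frame but only ships a
# small state record (position + orientation) back over the bus for each.
objects        = 10_000         # simulated rigid bodies (hypothetical)
state_bytes    = 64             # result record per object per frame (guess)
frames_per_sec = 60

bus_bytes_per_sec = objects * state_bytes * frames_per_sec
pci_peak          = 133_000_000  # ~133 MB/s peak for 32-bit/33 MHz PCI

utilization = bus_bytes_per_sec / pci_peak
print(f"{bus_bytes_per_sec / 1e6:.1f} MB/s, {utilization:.0%} of PCI peak")
# ~38 MB/s for 10k objects at 60 fps: well under the bus ceiling, which is
# why batched DMA transfers (rather than per-instruction traffic) matter.
```

With these assumed numbers the steady-state traffic fits in plain PCI with room to spare; the earlier per-instruction framing of the bottleneck only applies if the PPU had to cross the bus for every operation.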
May 25, 2006 9:51:52 AM

Sticking a PPU on a GFX card with the GPU is a really good idea (yeah, right). But I think that Adrian guy is an nVidia fanboy.
May 25, 2006 11:27:05 AM

Quote:
"18 years ago I wasn't alive,"

Whippersnapper! :-)


Indeed! The llama turned 27 on Monday, and no one here bought him a cake! 8O
May 25, 2006 1:00:43 PM

Quote:
Check my avatar for your reward.


lmao

And I used a Commodore when I was young. And a calculator before that. And paper and pencil before that. Does that make my opinion of physics cards any better than anyone else's?
May 25, 2006 1:22:30 PM

That's the BIGGEST joke I've ever seen in my whole life!!! :lol:  :lol: 
Nvidia buys Ageia!!!...
Actually, you know, I started a plastic soap packaging company yesterday, and this morning Nvidia made me an offer! Of course, I didn't sell... Principles, you know... 8)

:lol:  :lol:  :lol: 
May 25, 2006 1:23:15 PM

Ultimately, this whole story is going to be determined by economics.

"Human behaviour is economic behaviour" - CEO Nwabudike Morgan (Sid Meier)

Whichever available option offers the best performance per monetary unit will win. Just like VHS versus Betamax: VHS was WAY cheaper, and sure, the quality may have been slightly worse, but it was still WAY cheaper and it did essentially the same job. So whatever happens, your tightfistedness as a human (I swear I am not human) will help determine which system gets implemented. Every now and then this is not true, but money being the root of all evil, people will usually go for the easier option, i.e. the one that consumes less money.

Though technically money is just potential, i.e. power, so power is the root of all evil; and since computers are getting more powerful, more power = more evil, so the other path might just be true too. I think I just out-argued myself.

But anyway, who buys what is going to sort this out, so technically what happens is in our hands, and since we are cheap, Ageia might just bite the dust, hard. A third 16x PCIe slot on a Crossfire motherboard might help sales of these things, but since they look to compete with ATI, I don't think that will happen. So basically it is in ATI's hands too. I am beginning to ramble, so I will now proceed to stfu.

Quote:

Me I can't wait for a BPU (Biological processing unit) real time biological process simulation (only joking).


My biochem lecturers would pay out of their noses for one of those, as would I; you had better patent that before anyone else actually makes one. It would just be a CPU optimised for biological calculations, but you would still need to write the software. Methinks this is a very good idea.

Shutting up ... now.