
GTX 3xx Series Cards

Tags:
  • Graphics Cards
  • ATI
  • Graphics
October 9, 2009 1:59:00 AM

Hello all :) 

Ok, I am in the market for a computer and am trying to decide on a video card. I have been hearing about how good ATI's 5xxx series is and was curious about some different things with Nvidia's 3xx series. So here are some questions:

Anybody know of an exact (and if not that, an approximate) release date?
Anybody know how they will compare with ATI's 5xxx series?
Anybody know of an approximate price?

Just basically any and all information about them would be greatly appreciated! :D 


Thanks in advance! :D 


October 9, 2009 2:03:42 AM

No one knows a damn thing about the nVidia cards except that they will be out eventually. There are not even any semi-reliable rumors about them; the sites posting the rumors tend to contradict themselves in the next article. You can also expect them to cost significantly more than the ATI cards.
October 9, 2009 2:06:56 AM

Keep waiting.....it's anybody's best guess as to when GT3xx will be released. MAYBE this year, but frankly I would be surprised if they are available in any quantity this year.

As for performance, there isn't enough information floating around. The chip is quite a bit larger and has more memory bandwidth than the Radeon 5xxx series. But there are too many unknowns to say for sure. If I had to pull a number out of my nose, I'd say it will be 30% faster than the Radeon 5870.

That brings us to price. Given that the chip is significantly larger than the 5870's, and it has a wider memory bus (translation: more expensive PCB), I would be shocked to see it available for under $500 US.
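
To put the bus-width point in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The 5870 figures are its published specs; the wider 384-bit bus for the GT300 is purely an assumption for illustration, not an announced spec:

    # Peak memory bandwidth scales with bus width x effective memory clock.
    # A wider bus needs more memory chips and more PCB traces, which is the
    # "more expensive PCB" part of the argument.
    def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
        """Peak theoretical memory bandwidth in GB/s."""
        bytes_per_transfer = bus_width_bits / 8
        return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

    # Radeon HD 5870: 256-bit bus, 4800 MHz effective GDDR5 (published spec)
    print(bandwidth_gb_s(256, 4800))   # ~153.6 GB/s

    # Hypothetical 384-bit GT300 board at the same memory speed (assumption)
    print(bandwidth_gb_s(384, 4800))   # ~230.4 GB/s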
October 9, 2009 2:07:17 AM

Ok that would explain the lack of information that I have been able to find.

Thanks! :D 
October 9, 2009 2:11:09 AM

Nice rig btw....I have one almost exactly like that. I just bought some more RAM for it on Monday because my parents are still wanting to use it :p 

Single-channel RAM, isn't that a thing of the past?
October 9, 2009 2:48:23 AM

ElectroGoofy said:


Anybody know of an exact (and if not that, an approximate) release date?
Anybody know how they will compare with ATI's 5xxx series?
Anybody know of an approximate price?

Thanks in advance! :D 


It's going to be "when it's done." Any rumor is either made by fanboys to make it look "good"/"overkill", or by ATI people as a joke at Nvidia's expense (ATI people have always done that.......). Personally, I don't believe it will release anytime soon...

The G200 pushed that architecture to its limit, and Nvidia has been losing money ever since. Any new development would require an amount of funding that Nvidia is not big enough for. They are also at a fabrication disadvantage: AMD gets new process tech first-hand while Nvidia has to play catch-up. Not to mention they have put a lot of resources into the SoC field and, later, the x86 platform (they still haven't given up on that yet). The only thing they can do is what AMD did in the years after it merged with ATI: cut prices, rebrand the low-cost, older, mature technology (because the G200 is still too costly and has trouble with the 40nm fab while the G92 has more flexibility), and start over from the beginning.

Or they can just throw out some overloaded new design (cram as many ROPs/TMUs/shaders into the current G200 as possible) and end up with another FX series.
October 9, 2009 3:06:10 AM

It's all speculation from Nvidia fanboys as to what exactly the GT300 cards ARE. Nvidia has released some bogus figures here and there (like they always do....) about how amazing this series is. From what I heard they have really screwed up the architecture of their new designs - only 9 usable GPUs out of a whole wafer is considered acceptable. I used to like Nvidia a lot and my first card ever was a GeForce 4 Titanium... but wow, have they gotten their ass kicked as of late by ATi.
October 9, 2009 3:08:23 AM

1ce said:
Keep waiting.....it's anybody's best guess as to when GT3xx will be released. MAYBE this year, but frankly I would be surprised if they are available in any quantity this year.

As for performance, there isn't enough information floating around. The chip is quite a bit larger and has more memory bandwidth than the Radeon 5xxx series. But there are too many unknowns to say for sure. If I had to pull a number out of my nose, I'd say it will be 30% faster than the Radeon 5870.

That brings us to price. Given that the chip is significantly larger than the 5870's, and it has a wider memory bus (translation: more expensive PCB), I would be shocked to see it available for under $500 US.


Ok, may as well go with a 275 or maybe a 5870 if that approximation is accurate. Thanks :) 

1ce said:
Nice rig btw....I have one almost exactly like that. I just bought some more RAM for it on Monday because my parents are still wanting to use it :p 

Single-channel RAM, isn't that a thing of the past?


Yup, slower than molasses in January, especially since it is 133MHz ;) 
I do 3D work and the render times are crazy :ouch: 

cheesesubs said:
It's going to be "when it's done." Any rumor is either made by fanboys to make it look "good"/"overkill", or by ATI people as a joke at Nvidia's expense (ATI people have always done that.......). Personally, I don't believe it will release anytime soon...

The G200 pushed that architecture to its limit, and Nvidia has been losing money ever since. Any new development would require an amount of funding that Nvidia is not big enough for. They are also at a fabrication disadvantage: AMD gets new process tech first-hand while Nvidia has to play catch-up. Not to mention they have put a lot of resources into the SoC field and, later, the x86 platform (they still haven't given up on that yet). The only thing they can do is what AMD did in the years after it merged with ATI: cut prices, rebrand the low-cost, older, mature technology (because the G200 is still too costly and has trouble with the 40nm fab while the G92 has more flexibility), and start over from the beginning.

Or they can just throw out some overloaded new design (cram as many ROPs/TMUs/shaders into the current G200 as possible) and end up with another FX series.


Oh, ok, thanks! :D 
October 9, 2009 11:51:32 AM

To be fair, the whole-wafer affair isn't shocking for that stage, never mind the source of said article's notable anti-NVIDIA bias...

Everything on the G300 points to a December-January release, with slightly greater than 5870 performance. We won't know for sure until the card comes out, of course, but those seem to be the signals...
Anonymous
October 9, 2009 12:37:57 PM

I'm sure that Nvidia has game benchmarks, but they are waiting for ATI to release the 5870 X2 card to see its performance, and then we will see more from Nvidia. So better to wait.
October 9, 2009 1:27:58 PM

Feb/March 2010, and $500... Performance will be a little behind the 5870 in 3D games, but not by much, and its Folding@home performance will be amazing...
October 9, 2009 1:29:37 PM

^^^^PS it's as good as anyone else's guess!!!
Anonymous
October 9, 2009 1:38:01 PM

jamesgoddard said:
Feb/March 2010, and $500... Performance will be a little behind the 5870 in 3D games, but not by much, and its Folding@home performance will be amazing...


A little behind... the GTX 295 is faster than the 5870 in 3D games and you say that the GT300 will be slower than the 5870... better not write anymore, it doesn't make any sense...
October 9, 2009 1:44:38 PM

I feel the same, that the 5870 will be faster than the G300 in games that use heavy tessellation; it won't even be a DX11 card.
October 9, 2009 1:47:23 PM

Quote:
A little behind... the GTX 295 is faster than the 5870 in 3 games and you say that the GT300 will be slower than the 5870... better not write anymore, it doesn't make any sense...


John.. do you read what you write? 3 games??? :na:  And you consider that a win? Wait until OCed versions of the 5870 come, which will incredibly be... CHEAPER than the 295... with greater performance. And regarding Fermi... please show me a game benchmark... any game... Show me where those 3 billion transistors blow away the 5870. As jamesgoddard said, it is anybody's guess, so why should your opinion be the right one?
Anonymous
October 9, 2009 1:49:31 PM

hallowed_dragon said:
John.. do you read what you write? 3 games??? :na:  And you consider that a win? Wait until OCed versions of the 5870 come, which will incredibly be... CHEAPER than the 295... with greater performance. And regarding Fermi... please show me a game benchmark... any game... Show me where those 3 billion transistors blow away the 5870. As jamesgoddard said, it is anybody's guess, so why should your opinion be the right one?


That is simple logic: if the GTX 295 (old generation) is faster than the 5870, or matches it, then why would the GT300 be slower than the 5870... that doesn't make any sense.
October 9, 2009 2:14:45 PM

The only person that does not make sense on here is you. You go on about a card that has not been released yet; yes, it could be faster than the 5870 or it could be slower, and remember it's not even a DX11 card.
October 9, 2009 2:19:33 PM

Well, the 5870 is sometimes slower than the old-generation 4870 X2.... As you said, 'that doesn't make any sense'.

You can't compare the G200 and G300 architectures. From what I can tell, the G300 is going to be horribly inefficient with its 3 billion transistors, as Nvidia has tried to chase down the GPGPU market and has partially taken its eye off the traditional 3D market.



So.... for my 2010 prediction

ATI will clean up the gaming market; the 5xxx are the chips ATI wanted to punt from the very start when DirectX 10 was brought in, but Nvidia had its hissy fit, so this chip has effectively been in design for the last 5 years....

Nvidia will try to tackle the supercomputer market with their super GPGPU chip...

....But fail, as Intel's Larrabee ultimately takes the whole GPGPU market with its native x86 coding.

DirectX 11 games will come out in droves, as the programming interface has been much streamlined, saving a huge amount of developer time.

Developers will take to open APIs like OpenCL - and more importantly MS's DirectCompute - for physics; the days of CUDA and PhysX are well and truly numbered (a huge mistake Nvidia made in keeping them proprietary).

Nvidia will struggle on with its currently profitable chipset (aka ION) products, but as socket 775 dies and the Atom goes SoC, their last profitable avenue will be taken from them.

Sometime in 2011 Nvidia will be bleeding cash, and will fold or be taken over by Intel, having missed the market direction completely.
October 9, 2009 2:19:50 PM

Quote:
That is simple logic: if the GTX 295 (old generation) is faster than the 5870, or matches it, then why would the GT300 be slower than the 5870... that doesn't make any sense.


Reasons why GT 300 might be slower than 58xx:
- nVidia's response to Dx11 - "not that important";
- GPU power shifted to supercomputing and "maybe" games;
- lack of tessellation;

So, there are reasons why it could be slower. There are reasons to think it will be faster, but with a premium price. But the lack of any information regarding this new generation of cards from nVidia (except Fudzilla - which is not a reliable source of news) makes any opinion a fanboy opinion. Until there is serious evidence we can't say anything. I for one hope the GT 300 will be faster just so I can buy the 5850 cheaper :D .
Anonymous
October 9, 2009 2:23:29 PM

rangers said:
The only person that does not make sense on here is you. You go on about a card that has not been released yet; yes, it could be faster than the 5870 or it could be slower, and remember it's not even a DX11 card.


Not a DX11 card? How do you know... source?
Anonymous
October 9, 2009 2:33:16 PM

What happened, rangers... where is the source?????
October 9, 2009 2:36:09 PM

The G300 will be DirectX 11 compatible, and probably compatible with DirectX 12 or whatever comes next. The reason for this (and the reason for the inefficiency, as I see it) is Nvidia making the chip a more general, CPU-like workhorse. Basically it's DirectX 11 compatible because the software it's running is; it's like a big software-emulation chip, whereas the ATI cards are dedicated hardware through and through.... The G300 will do tessellation, as again it will do whatever you program its CPU/GPGPU cores to do - but without DEDICATED HARDWARE, tessellation will take somewhat of a performance hit, where the ATI cards get it for free.
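
A rough way to picture that performance hit (a purely illustrative sketch; every number below is an assumption, not a measurement):

    # Compare relative shader work when tessellation is handled by a dedicated
    # fixed-function unit versus when the triangle amplification itself has to
    # be emulated on the shader cores.
    input_triangles  = 250_000        # triangles submitted by the game
    amplification    = 4              # tessellation factor (assumed)
    output_triangles = input_triangles * amplification

    shade_cost_per_tri   = 1.0        # relative cost to shade one triangle
    emulate_cost_per_tri = 0.5        # assumed extra shader cost to generate one triangle

    dedicated_hw    = output_triangles * shade_cost_per_tri
    shader_emulated = output_triangles * (shade_cost_per_tri + emulate_cost_per_tri)

    print(f"dedicated tessellator : {dedicated_hw:,.0f} cost units")
    print(f"shader-emulated       : {shader_emulated:,.0f} cost units "
          f"(+{shader_emulated / dedicated_hw - 1:.0%})")

Either way the amplified triangles have to be shaded; the difference is whether generating them costs shader time at all.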
October 9, 2009 2:54:49 PM

Quote:
Not a DX11 card? How do you know... source?


Quote:
In the R870, if you compare the time it takes to render 1 Million triangles from 250K using the tesselator, it will take a bit longer than running those same 1 Million triangles through without the tesselator. Tesselation takes no shader time, so other than latency and bandwidth, there is essentially zero cost. If ATI implemented things right, and remember, this is generation four of the technology, things should be almost transparent.

Contrast that with the GT300 approach. There is no dedicated tesselator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general purpose hardware. You will then need the same shaders again to render the triangles. 250K to 1 Million triangles on the GT300 should be notably slower than straight 1 Million triangles.


http://www.theinquirer.net/inquirer/news/1137331/a-look...


No tessellation = no DX11

Anonymous
October 9, 2009 4:59:23 PM

rangers said:
Quote:
In the R870, if you compare the time it takes to render 1 Million triangles from 250K using the tesselator, it will take a bit longer than running those same 1 Million triangles through without the tesselator. Tesselation takes no shader time, so other than latency and bandwidth, there is essentially zero cost. If ATI implemented things right, and remember, this is generation four of the technology, things should be almost transparent.

Contrast that with the GT300 approach. There is no dedicated tesselator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general purpose hardware. You will then need the same shaders again to render the triangles. 250K to 1 Million triangles on the GT300 should be notably slower than straight 1 Million triangles.


http://www.theinquirer.net/inquirer/news/1137331/a-look...


No tessellation = no DX11


Did you see who wrote this article and how old it is... poor Charlie... he is paid to screw Nvidia.
Not interested in what this "writer" has to say about NV...
October 9, 2009 6:00:58 PM

Personally I think the GT300 cards will be faster than the CURRENT 5870 (depending on the model, like GTX 380 or GTX 360) and will include DX11 in the package. Unless Nvidia is a stupid donkey and wants to get out of the graphics card market, they should compensate for their late entry onto the DX11 platform and late reply to the Radeons' new series with speed. So generally speaking, I don't see any reason why the GT300 would not be faster than the 5870; unless it is a refreshed G200 chip, it should package DX11 and should be faster than the current 5870 (speed will vary from high-end GT300 cards to low-end GT300 cards; the high end should be faster than the CURRENT 5870, while the mid-range will be there to fight with cards like the 5770 or 5850). That is what I think, and I think that is what Nvidia must do to compensate for their lateness and to survive this initial DX11 war.
October 9, 2009 6:37:26 PM

Fact 1] the Nvidia card will do tessellation in software; fact 2] in games with heavy tessellation the card will be extremely slow; fact 3] lack of tessellation in hardware will make the card not DX11 compliant, and will also make the 5870 faster in games that use heavy tessellation.
October 9, 2009 7:03:53 PM

hallowed_dragon said:
Reasons why GT 300 might be slower than 58xx:
- nVidia's response to Dx11 - "not that important";
- GPU power shifted to supercomputing and "maybe" games;
- lack of tessellation;


To be fair, NVIDIA said that DX11 wouldn't be important as far as sales of Windows 7 are concerned, and frankly, I agree on that point.

As for supercomputing, NVIDIA has designed their cards to do both tasks with ease; it's not a one-or-the-other situation. This of course is to cover themselves for their lack of a CPU...

As for tessellation, we'll see. I still have questions about computational cost, and the total lack of significant DX11 titles leaves that question currently unanswered.

Right now, this (http://www.anandtech.com/video/showdoc.aspx?i=3651&p=1) is my reference material. So far, it looks like NVIDIA knows what they're doing. I'm looking at a Dec-Jan release, and then we'll see.
October 9, 2009 9:36:38 PM

gamerk316 said:
To be fair, NVIDIA said that DX11 wouldn't be important as far as sales of Windows 7 are concerned, and frankly, I agree on that point.



Umm, let's see - you ask Mr. Development Company, "this DirectX 11 will save you 20% of your dev time, are you interested in using it?".... What are they going to say???
October 10, 2009 12:11:46 AM

Quote:
I'm sure that Nvidia has game benchmarks, but they are waiting for ATI to release the 5870 X2 card to see its performance, and then we will see more from Nvidia. So better to wait.


Why would Nvidia hold back benchmark numbers and "wait" for the HD 5870 X2 to be released?! Isn't it obvious that the 5870 X2 will have almost identical numbers to the already-posted benchmarks of two HD 5870s in CF? It's pretty obvious they are scrambling to get their stuff out before Xmas, but it will most likely be out some time in Q1 2010.
October 10, 2009 2:33:40 AM

Quote:
Did you see who wrote this article and how old it is... poor Charlie... he is paid to screw Nvidia.
Not interested in what this "writer" has to say about NV...

You're interested in whatever Fuad writes though, and he isn't even a journalist, just a rumour-monger.

Quote:
I'm sure that Nvidia has game benchmarks, but they are waiting for ATI to release the 5870 X2 card to see its performance, and then we will see more from Nvidia. So better to wait.


Where's the source????? (You have to admit you had that one coming :D )
October 10, 2009 2:44:07 AM

werxen said:
It's all speculation from Nvidia fanboys as to what exactly the GT300 cards ARE. Nvidia has released some bogus figures here and there (like they always do....) about how amazing this series is. From what I heard they have really screwed up the architecture of their new designs - only 9 usable GPUs out of a whole wafer is considered acceptable. I used to like Nvidia a lot and my first card ever was a GeForce 4 Titanium... but wow, have they gotten their ass kicked as of late by ATi.




So true. The first 4 video cards I ever bought were all Nvidia cards, until they started rehashing their GPU lineup to the point that it got ridiculous! As soon as ATI released the 4K series I bought a 4870 and haven't looked back, and from the way things are going I don't see myself buying any Nvidia products anytime soon!
October 10, 2009 4:31:51 AM

Quote:
A little behind... the GTX 295 is faster than the 5870 in 3D games and you say that the GT300 will be slower than the 5870... better not write anymore, it doesn't make any sense...


Considering the GTX 295 is a dual-GPU card, it is logical that the GT300 may not surpass the previous-gen dual-GPU card (much like the 9800 GX2 vs the GTX 280); considering the 5870 barely holds its own against a dual-GPU card, I don't think it is going to be easy for the GT300 to do that.

Unlike AMD's market strategy, Nvidia has never actually put their flagship GPU into a dual-GPU card (or been unable to?), from the 7950 GX2 (basically two 7950 GTs.....) to the 9800 GX2 (two 9800 GTs...) and now the GTX 295 (well known as two GTX 260 216s on one board). As we know from the sources, Nvidia would likely go with the old G92 for volume production and a modified current G200 for the top-end market. It will be hard to find a way to get a huge performance increase like they did in the last few years (or they need a new design, a new architecture, not just more ROPs and texture units), although next-gen gaming will be unlike the past: floating-point processing will play an important role in the future, for things such as physics and shader effects, where Nvidia does not have an advantage and only relies on CUDA and PhysX, those add-on features, while increasing the transistor count as much as possible. That makes it nearly impossible to lower power consumption and temperature, not to mention to build a dual-GPU card based on the flagship GPU. 40 ROPs still seems to be too much for 40nm fabrication and a dual-GPU card.

I feel that they got lazy after the NV43; since then all they do is spam ROPs and memory bus width to overwhelm their competitor, like with the G80 and G200, and it is getting old.

Hopefully they can learn from it and stand up again. People need new designs... not just brute raw power.
October 10, 2009 8:22:20 PM

Going from tradition, ATi will have powerful cards for a good price and Nvidia will have even more powerful cards, but they will be more expensive.
October 10, 2009 9:54:14 PM

I seriously doubt Nvidia will outperform ATI in DX11 titles, and by the time Nvidia releases their next-gen software-emulated DX11-compatible card, DX11 will be in full swing, so it WILL be a big deal.
October 11, 2009 2:59:06 AM

jamesgoddard said:
Feb/March 2010, and $500... Performance will be a little behind the 5870 in 3D games, but not by much, and its Folding@home performance will be amazing...


Out of the gate I don't know; you might see an OpenCL folding client by then, and I don't think the G300 will be 'amazing'. It'll likely at best be an improvement similar to what the HD5K can offer over previous generations.

Unless they build even more complex proteins to fold, you won't see much of an advantage overall (it looks like the dual issue will be slower than originally anticipated). The main thing will be: can protein folding take advantage of the areas that the G300 will be particularly good at? Remember, the reason the HD4K and HD5K do so poorly is that the GPU2 client was built with the nVidia series in mind (the original GPU client was built for the HD2K/3K), and it doesn't take advantage of the changes in the HD4K GPU.

I suspect the F100/G300 will be great at folding and other similar apps, but I think it won't be as amazing as it sounds, just good. Amazing was the first iteration of GPU folding, which was a many-fold (no pun intended) boost over anything previous.
October 11, 2009 3:04:21 AM

The ATI GPU2 client is pretty poor. I read that they were planning to move from Brook+ to OpenCL though.
October 11, 2009 3:42:31 AM

gamerk316 said:
To be fair, NVIDIA said that DX11 wouldn't be important as far as sales of Windows 7 are concerned, and frankly, I agree on that point.




Why do you lie/mislead about such things? :non: 

You do know that the internet keeps a copy of these things, right? :heink: 


http://www.xbitlabs.com/news/video/display/200909161403...

“DirectX 11 by itself is not going be the defining reason to buy a new GPU. It will be one of the reasons... ” said Mike Hara, vice president of investor relations at Nvidia, at Deutsche Bank Securities Technology Conference on Wednesday.

He was talking about GPUs, not about Windows. I'd almost have let it pass, but you're once again trying to correct someone about DX11, and you've been told not to in the past because you constantly post BS that is the exact opposite of what was said. :pfff: 
You might have been confusing it with Steve Ballmer's comments on Win7 not boosting PC sales, but you don't get a pass on that, unlike someone with no prior history of doing this. :non: 

Quote:
As for tessellation, we'll see. I still have questions about computational cost, and the total lack of significant DX11 titles leaves that question currently unanswered.


Which is still far better than your 'no DX11 titles for at least a year'-style comment earlier, whereas titles are already out and working prior to the final launch of the OS and D3D. Now you change it to 'significant DX11 titles', which is an arbitrary measure, just like there are no 'significant' SM3.0/OGL2.0 titles out there, because after HL2 & Doom3, nothing is significant.

Seriously what is your issue with DX11? :heink: 
Did D3D kill your father, so now DX11 must prepare to die? :kaola: 

BTW, Anand is OK, especially for the more personal nV stuff, but it's far from the best technical reference on the F100; nV's own white paper and a few other resources are far better:

http://www.nvidia.com/content/PDF/fermi_white_papers/NV...

All of which points to no hardware tessellator.

Quote:
As for supercomputing, NVIDIA has designed their cards to do both tasks with ease; it's not a one-or-the-other situation. This of course is to cover themselves for their lack of a CPU...


That's all well and good to say, but there is a transistor budget, and a lot of that budget is being spent on general computing tasks at the cost of the graphics budget (like tessellation hardware), which means they have to compromise in order to do both so extensively. If they are efficient with their code and thread management it might not be ponderously slow, but it is an either/or thing, because a lot of that graphics transistor budget is useless for more generalized computing, and a lot of that computing budget is useless for graphics. There's a lot of overlap, but there is a lot of 'either/or', which is why they didn't include some parts, like the tessellator, the TMDS, RAMDACs, etc. Whether or not it matters depends on how easily something else can do those tasks (like the NVIO for the displays).

I have no doubt it will be a good chip, but you, like too many people, are overselling it beyond what it's technically capable of.

So, other than the obvious, why pimp Fermi/nV and crap on DX11?

October 11, 2009 3:51:24 AM

randomizer said:
The ATI GPU2 client is pretty poor. I read that they were planning to move from Brook+ to OpenCL though.


Well, it's because when Stanford was finalizing the GPU2 client they didn't have the HD4K optimizations for the local data share, and it's not possible to optimize outside of the client. Supposedly there will be a better GPU client coming out, but it might arrive after the OpenCL client, which should solve the problem for the HD4K & HD5K GPUs, and should also be somewhat IHV-agnostic, and thus be a good solution for S3 and other possible entrants as well (including Larrabee).
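
As a tiny illustration of what "IHV-agnostic" buys you (a sketch that assumes the pyopencl Python bindings are installed; an actual folding core obviously does far more than this):

    # Enumerate every OpenCL platform and device the runtime can see. The same
    # OpenCL kernel source can be built for any of them, which is why an
    # OpenCL-based folding core could target ATi, nVidia, S3 or Larrabee
    # without a vendor-specific rewrite.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(platform.name, "->", device.name)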

Which is all well and good, but anyone wanting to do folding now will either get less than optimal output from an ATi card (if they are more interested in other things and just want to contribute), or get far more points per day, if they're somewhat competitive, from the many nVidia solutions out there, with even the high-end G92 currently far outstripping the high-end HD4K and even the HD5K (which is underutilized on small proteins). So no one would really build a folding rig on ATi hardware right now.

This may change, but right now it's something that ATi is doing poorly, but also has little to no control over (Mike Houston can help, but it's still up to Stanford to update the clients).

October 11, 2009 3:56:09 AM

I thought the clients were developed by AMD and NVIDIA, and Stanford only worked on the cores. Maybe I've been out of the folding loop for too long and my memory for folding info is beyond its expiry date.

EDIT: Typo
October 11, 2009 4:52:40 AM

My understanding is that the GPU client has to be approved and standardized by Stanford, and that's why they missed the last window (from what I remember when this last came up during the HD5K launch).

Also, the term cores/clients gets thrown around so much, I'm sure it creates a lot of confusion for a bunch of people.

I'm just posting on the couch after picking up a movie (Brothers Bloom) & 'zza followed by SNL, so no time now, but later I'll see if I can find the information from Mr Houston; I posted it here a while back, but don't have time to grab it this sec.

There was also a good piece at Rage3D from the nV CUDA guy who defected to ATi; he detailed the process, and was optimistic that a new addition to the GPU client was coming soon (i.e. end of year instead of sometime much later in 2010).

Google might be able to find that one, but I will post both later.
October 11, 2009 5:04:22 AM

I refer to the scientific core Fah_Core when I say "core."
October 11, 2009 7:29:10 AM

Here's the one from Rage3D;

http://www.rage3d.com/previews/video/ati_hd5870_perform...

And part of the Folding thread with both M.Houston and a Stanford researcher commenting;

http://foldingforum.org/viewtopic.php?f=51&t=11572&star...

http://foldingforum.org/viewtopic.php?f=51&t=11572&star...

I don't doubt the Fermi-based GPUs will be great at folding, but I doubt they will use the current GPU client; they will likely rely on the OpenCL benefits that AMD is also hoping for, so the difference will likely be just a bit more, not significantly more, unless they get even larger protein models, which would increase the benefit of the HD4/5K over the GTX2xx/GF8/9 but should also add a little more distance between the F100/G300 and the HD5K cards.

But of course we'll need to wait and see.

But it sure would be nice to get a little more productivity out of existing cards, although there wouldn't be anything for my current laptop or work desktops, so I may need to buy something new for myself for Xmas. Yeah, that's it, I'm buying a new laptop for science, and to help the children of the world; it's all about helping the children, not the gaming... or the pr0n. :lol: