Give some comments about these graphics cards, please

What should I get?

  • PowerColor Radeon X800Pro [AGP]
    Votes: 11 (78.6%)

  • ASUS N6600GT/TOP/TD [AGP]
    Votes: 3 (21.4%)

  • Total voters: 14

mOngga

Distinguished
Jan 3, 2006
2
0
18,510
PowerColor Radeon X800Pro and ASUS N6600GT/TOP/TD. Both are AGP.

I want to upgrade my FX5200 Magic to one of these. Help me choose one, please. I'm a gamer, btw.

My current rig:

P4 2.4GHZ
1.5GB RAM
550W P.S.
Geforce FX5200 Magic[OC]
 
I would say get neither; go with an X800GTO2 or GF6800GS, because they'll beat both of the options you have.

However, from those you've picked, the X800Pro will be the top card in just about every game out there. If they're the same price, it'd be the one to go with, but check the prices of those other two cards as well just to be sure. You might also be able to mod the X800Pro if it's a ViVo card.
 
The GTO2 is PCI Express only

Sapphire GTO or GTO2, same difference. Nit-picking really; in any case, SSDD.

As for the GF6800GSs they aren't that rare anymore;

http://www.us.ncix.com/products/index.php?sku=17411


Either way, both would still be better, even as a plain GTO, which is still more moddable than the X800Pros. Even if you have to wait, it's probably worth it; just ask the bunch of people here who've already modded their GTOs.
 

the_guru

Distinguished
Dec 18, 2005
434
0
18,780
The GTO2 is PCI Express only

Sapphire GTO or GTO2, same difference. Nit-picking really; in any case, SSDD.

As for the GF6800GSs they aren't that rare anymore;

http://www.us.ncix.com/products/index.php?sku=17411


Either way, both would still be better, even as a plain GTO, which is still more moddable than the X800Pros. Even if you have to wait, it's probably worth it; just ask the bunch of people here who've already modded their GTOs.

No. There is a BIG difference between a GTO and a GTO2 since the GTO2 has the R480 core with 16 pipelines. So by flashing the BIOS you get a X850XT PE for the price of a GTO2. 100% success rate.
 
No. There is a BIG difference between a GTO and a GTO2 since the GTO2 has the R480 core with 16 pipelines.


And the Sapphire GTO has an R420 with 16 pipes. The difference is minimal, and pointless, since like I said BOTH, modded or unmodded, would be better than a plain X800Pro. So regardless of success rate, a plain GTO is still a better choice.


So by flashing the BIOS you get a X850XT PE for the price of a GTO2. 100% success rate.

Actually NOT 100% success rate, since you can unlock them, but many people still experience artifacting. I love the 100% number because when people end up with a bum mod they then complain as if it were guaranteed by the MFR. Anywhoo, yes it's highly successful, but even without it, still better than the other two cards.
 

the_guru

Distinguished
Dec 18, 2005
434
0
18,780
Actually NOT 100% success rate, since you can unlock them, but many people still experience artifacting. I love the 100% number because when people end up with a bum mod they then complain as if it were guaranteed by the MFR. Anywhoo, yes it's highly successful, but even without it, still better than the other two cards.

Well, show me someone who has failed to unlock a pipe on the GTO2.

The artifacts are because people OC the cards too high. All cards can be unlocked, but not all cards can handle the OC, for several reasons. The most common reason is poor case cooling.
 
Well, show me someone who has failed to unlock a pipe on the GTO2.

The artifacts are because people OC the cards too high.

And that's the point.

You said "So by flashing the BIOS you get a X850XT PE for the price of a GTO2. 100% success rate."

So unless you overclock it to the X850XT PE level, it's not a 100% success rate, a$$clown! Or am I not allowed to nit-pick like you? :roll:


EDIT: Instead of bringing this to the top of the pile again, and not to draw attention to your nickname, as you told the teacher, I'll simply edit this post and re-state the fact that I said you CAN unlock them, but people are experiencing artifacts. Therefore it's not a 100% success rate to X850XT PE level like you imply.
 

the_guru

Distinguished
Dec 18, 2005
434
0
18,780
Well, show me someone who has failed to unlock a pipe on the GTO2.

The artifacts are because people OC the cards too high.

And that's the point.

You said "So by flashing the BIOS you get a X850XT PE for the price of a GTO2. 100% success rate."

So unless you overclock it to the X850XT PE level, it's not a 100% success rate, a$$clown! Or am I not allowed to nit-pick like you? :roll:
You were referring to the unlocking procedure:

Actually NOT 100% success rate, since you can unlock them
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
Well, in just about any benchmark, synthetic or real-world, that you can think of, the X800pro would hand the 6600GT its rear end.

However, that's clearly not the whole story of which card is a better thing to get. First off, the big "if" is how much video RAM each card has. Given that you're comparing two rather different cards, I would presume that you might've found the more expensive 256MB version of the 6600GT, as all X800pros (correct me if I missed one) have 256MB of video RAM. While the advantage of taking a 512MB video card might be debatable today, the advantages of a 256MB card are not. Most next-gen games we'll be seeing, as well as ones already released, will require 256MB of video RAM to properly run with maxed texture and detail settings. Otherwise, not enough texture data can be loaded onto the card itself, resulting in the card having to shuffle textures back and forth from the main memory, which can be especially painful for an AGP card.
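To put rough numbers on how painful that shuffling is (the clock and bus figures below are approximate, from memory, so take this as a sketch rather than a spec sheet):

# Rough comparison of local VRAM bandwidth vs. the AGP link that spilled
# textures have to cross. Figures are approximate, from memory.

def bandwidth_gbs(bus_bits, effective_mhz):
    """Theoretical peak bandwidth in GB/s for a memory bus."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

local_vram = bandwidth_gbs(256, 900)   # X800pro: 256-bit bus, ~900 MHz effective GDDR3
agp_8x = 2.1                           # AGP 8x theoretical peak, ~2.1 GB/s

print(f"Local VRAM : ~{local_vram:.1f} GB/s")
print(f"AGP 8x bus : ~{agp_8x:.1f} GB/s")
print(f"Ratio      : ~{local_vram / agp_8x:.0f}x slower once textures spill to system RAM")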

Then again, there's also the fact that the 6600GT comes with support for shader model 3.0, while it's only shader model 2.0 extended for the X800pro. Normally, the differences are fairly small in real-world terms (as far as I can tell, it's only that the X800pro can't use texture filtering and floating-point color at the same time, enough reason not to use floating-point color), and if no one bothers to program for it, it's no better than an older technology; SM 2.0 extended has been ignored, so it's effectively as good as SM 2.0. Point for the GeForce.

Lastly, of course, there's the issue of price. Comparing a standard (128MB) GeForce 6600GT to a Radeon X800pro would be an insane idea, given that you'll see a massive gap in pricing, which may approach as much as 100%. The difference is smaller with the 256MB version, depending on the availability of each card at each particular store and time. But usually, the GeForce will be the cheaper card. So even if it's weaker, it's a matter of paying for what you get here.
 
(as far as I can tell, it's only that the X800pro can't use texture filtering and floating-point color at the same time, enough reason not to use floating-point color)

The thing about "FP-Blending" is that it can be reproduced using other techniques; for the ATIs it would require 3 passes to achieve the same effect, and I'm pretty certain the X800Pro could do 3 passes in nearly the time it takes the GF6600GT to render a scene in FartCry with HDR enabled. Of course you still need to code for that too.

if no one bothers to program for it, it's no better than an older technology; SM 2.0 extended has been ignored, so it's effectively as good as SM 2.0. Point for the GeForce.

FartCry does use SM2.0 extended; that's where you get geometric instancing support, and a lot of games are adding that since the benefits apply to all R3xx series cards and above and all GF6 series cards and above. The edge to the GeForce in that case is negligible at best.

Lastly, of course, there's the issue of price. Comparing a standard (128MB) GeForce 6600GT to a Radeon X800pro would be an insane idea, given that you'll see a massive gap in pricing,

Except when the Sapphire X800Pros were selling for the same price as the GF6600s in the middle of last year. Now they're rare as sin, but second-hand, OEM, or refurb deals may create the same scenario, so 'insane' is a little strong.

which may approach as much as 100%.

Now that's insane talk! :p

The difference is smaller with the 256MB version, depending on the availability of each card at each particular store and time. But usually, the GeForce will be the cheaper card. So even if it's weaker, it's a matter of paying for what you get here.

Yeah, but price/performance will likely still favour the X800Pro, and it also gives you the benefit of having playable framerates for longer, IMO.

Of course like I originally said, the X800GTO or GF6800GS would be the better choices for price/performance.
 

cleeve

Illustrious
The X800 PRO will absolutely RUIN the 6600 GT.
It would still ruin it if the 6600 GT had 1024 megs of ram and the X800 PRO had 128 megs of ram.

The interface (AGP or PCI-E) makes no difference, more RAM or no.

SM 3.0 is maybe worth less than a half-point to the 6600 GT, especially since my 6800 Ultra is not fast enough to run the SM 3.0 HDR in Far Cry. The 6600 GT doesn't have the horsepower to do squat with SM 3.0.
It's a checkbox feature, nothing more.

The X800 PRO has 12 pipelines and a 256-bit memory interface, the 6600 GT has 8 pipes and a 128-bit interface. No contest. Even if you were a total Nvidia fanboy, you'd be supremely retarded to choose a 6600 GT over an X800 PRO. The X800 GTO would also kill a 6600 GT.
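If you want the back-of-the-envelope version (clocks and memory speeds are approximate, from memory, not official spec sheets):

# Theoretical fill rate and memory bandwidth; figures are approximate.
cards = {
    # name: (pixel pipelines, core MHz, memory bus bits, effective memory MHz)
    "X800 PRO": (12, 475, 256, 900),
    "6600 GT":  ( 8, 500, 128, 900),   # the AGP 6600 GT shipped with slower memory than the PCI-E one
}

for name, (pipes, core, bus, mem) in cards.items():
    fillrate = pipes * core                  # Mpixels/s, theoretical
    bandwidth = bus / 8 * mem * 1e6 / 1e9    # GB/s, theoretical
    print(f"{name:8s}: ~{fillrate} Mpix/s fill, ~{bandwidth:.1f} GB/s memory")

Run that and the X800 PRO comes out around 40% ahead on fill and double the memory bandwidth, which is exactly where the AA and eye candy live.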

I'd even be hard pressed to choose the 6600 GT over an X800 GT, but at least that's a much closer race. The X800 GT has the 256-bit memory interface (good for AA & other eye candy), but the 6600 GT has very efficient architecture and, in such a close race, SM 3.0 might come into play as a factor.
 
but the 6600 GT has very efficient architecture and, in such a close race, SM 3.0 might come into play as a factor.

Yeah, the only thing I can think of where SM3.0 might play a factor is in per-pixel specular lighting, like in SplinterCell and FartCry where the cost is gruesome, but then again we're talking about maybe a single-digit percentage benefit in an area where the X800GT and GF6600GT would both still be chugging. So maybe 11fps versus 10fps for that benefit, IMO.
 

pauldh

Illustrious
the best AGP option is the Geforce 6800 GT, it's the counterpart of X800 pro which I think is much better.

How did you come up with that? The X850XT PE, X800XT PE, X850XT, X800XT and the 6800U are all better than the 6800GT performance-wise. Maybe prices are different where you live, but currently in North America anyway, better, cheaper, and more available than a 6800GT is an AIW X800XT. I ordered one yesterday from Buy.com for $259.99 less a $30 rebate: $230 shipped free, no tax. Close to the 6800GS price and much better performance. If priced the same as the GS or Pro, then sure, the 6800GT is a nice card. But the only 6800GT AGP on Pricewatch now is $508. 8O
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
Sorry to resurrect this topic, but I neglected to respond to some things directed at me. I’m not sure what the general atmosphere for resurrecting topics on THG’s Forumz is, but usually, one week isn’t all that bad if it’s brought up for a good reason. Again, I apologize in advance.

Well, prices definitely have changed, at least in the USA. Unfortunately, this is AGP in question, so the only improvement of note would be the new GeForce 6800GS, which isn’t all that badly priced, at about $225US. (link)

And on another note, how is it that some are taking me to be an nVidia fanboy? I do know that my post regarding the two cards did perhaps give more credit to the 6600GT than it was due, but that would primarily be because the performance advantage of the X800pro over the 6600GT needed very little to be said on it. For any interested parties, I use a Radeon X800XT anyway.
The thing about "FP-Blending" is that it can be reproduced using other techniques; for the ATIs it would require 3 passes to achieve the same effect, and I'm pretty certain the X800Pro could do 3 passes in nearly the time it takes the GF6600GT to render a scene in FartCry with HDR enabled. Of course you still need to code for that too.
“FartCry”... I’ll have to remember that one. I was fairly certain that the X series cards could handle filtering and blending in floating-point mode, as I’ve seen what are clearly examples in their own tech demos. (being the video card junkie I am, I have all of ATi’s tech demos for the Radeon 7200 through the X850, the highest that will run on my X800XT) However, as I said, if no one bothers to program using a feature, it’s worthless, and thus far, I’ve seen no one use it.

However, recent information regarding the upcoming Elder Scrolls IV: Oblivion found at Beyond3D suggests that the game will be an exception. Apparently, it will not only use floating-point color for its HDR method, it will possibly even work with MSAA on any SM 2.0-compliant card, though what was said makes it inconclusive. It was noted that, for the abovementioned reasons, “blending” was impossible to do without a significant performance hit, so it’s likely what we’ll see is simply a failure of some blending effects, such as alpha blending... And honestly, when’s the last time anyone really cared that any game got alpha blending correct? If memory serves me correctly, it was the acceptance of this problem that led to the development of special AA techniques like nVidia’s “transparency super-sampling” and ATi’s “selective AA.”

FartCry does use SM2.0 extended; that's where you get geometric instancing support, and a lot of games are adding that since the benefits apply to all R3xx series cards and above and all GF6 series cards and above. The edge to the GeForce in that case is negligible at best.
It does? I honestly wasn’t aware of that; the game looked the same on a Radeon 9600XT as it does on an X8x0 card. Then again, I’ve heard some suggestions that the “XT” 9 series cards were SM 2.0 extended as well... And I have no other truly SM 2.0 cards; I think the “PCI” part of my old GeForce FX 5200 kinda nullifies things, as it won’t even run Far Cry, Halo, or any other game with a SM 2.0 mode.

Except when the Sapphire X800Pros were selling for the same price as the GF6600s in the middle of last year. Now they're rare as sin, but second-hand, OEM, or refurb deals may create the same scenario, so 'insane' is a little strong.
And lo and behold, we now have X800GTOs (128MB version) from Sapphire selling for close to the same price as 6600GTs right now! :D I find that to be a pleasant surprise, now if only they became available for AGP...

Now that's insane talk! :p
Well, it is what I’ve seen at points. It’s much better now, but primarily because the 6600GT seems to be drying up and rising in price. But at some points, I noted that the X800pro would cost in the $240US neighborhood while the 6600GT was around $120US.

Yeah, but price/performance will likely still favour the X800Pro, and it also gives you the benefit of having playable framerates for longer, IMO.

Of course like I originally said, the X800GTO or GF6800GS would be the better choices for price/performance.
Indeed; my very first line was the comment that as far as performance goes, the X800pro can typically hand a 6600GT its rear end. I guess I didn’t mean to give the illusion of emphasis to the other qualities of the 6600GT so much, but in retrospect, it seems that I did considering how much space I devoted to them.

The X800 PRO will absolutely RUIN the 6600 GT.
It would still ruin it if the 6600 GT had 1024 megs of ram and the X800 PRO had 128 megs of ram.

The interface (AGP or PCI-E) makes no difference, more RAM or no.

SM 3.0 is maybe worth less than a half-point to the 6600 GT, especially since my 6800 Ultra is not fast enough to run the SM 3.0 HDR in Far Cry. The 6600 GT doesn't have the horsepower to do squat with SM 3.0.
It's a checkbox feature, nothing more.

The X800 PRO has 12 pipelines and a 256-bit memory interface, the 6600 GT has 8 pipes and a 128-bit interface. No contest. Even if you were a total Nvidia fanboy, you'd be supremely retarded to choose a 6600 GT over an X800 PRO. The X800 GTO would also kill a 6600 GT.

I'd even be hard pressed to choose the 6600 GT over an X800 GT, but at least that's a much closer race. The X800 GT has the 256-bit memory interface (good for AA & other eye candy), but the 6600 GT has very efficient architecture and, in such a close race, SM 3.0 might come into play as a factor.
Well, as I commented, I do not disagree with the fact that the X800pro would slaughter the 6600GT in a fair performance fight. As for the interface comment, it can make a difference where texture reading is concerned; a PCI-e interface not only has about double the overall bandwidth, its multi-lane serial nature also makes it far more flexible concerning re-proportioning bandwidth on the fly, to accommodate reading textures from the main RAM, as would be the case if the game has to buffer more textures than the video card can hold.
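For a rough sense of scale, using the usual quoted theoretical maxima rather than measured figures:

# Peak interface bandwidth, theoretical figures only.
agp_8x_total     = 2.1        # AGP 8x, ~2.1 GB/s, effectively shared between directions
pcie_x16_per_dir = 16 * 0.25  # PCI-e 1.x: ~250 MB/s per lane, per direction
pcie_x16_total   = 2 * pcie_x16_per_dir

print(f"AGP 8x              : ~{agp_8x_total:.1f} GB/s")
print(f"PCI-e x16 (one way) : ~{pcie_x16_per_dir:.1f} GB/s")
print(f"PCI-e x16 (both)    : ~{pcie_x16_total:.1f} GB/s")

So “about double” holds for a single direction, and the gap widens further if you count both directions at once.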

The SM 3.0 is a wild card, heavily dependent on the game. As far as I’ve actually been able to find out, it really offers nothing truly improved over SM 2.0 extended, and it merely makes me bitter that so many game developers have slighted those with SM 2.0 extended cards, making a SM 3.0 path instead. (and it’s not just that I use an X800XT) That path often only offers things that could’ve been accomplished simply with SM 2.0; from all the information I’ve found, it seems that the abovementioned Oblivion game is a prime example; it will apparently incorporate all the soft-shadow and HDR technologies seen before and more, and it will all run in SM 2.0. (it will run much faster with SM 2.0b or SM 3.0 support, as SM 2.0 will require many of the things to be done in many passes instead of one) And again, what amount of power is really necessary to do something with SM 3.0, just as with how much power is necessary to use so much video RAM, is dependent on the game, not the card.
 

pauldh

Illustrious
Well, prices definitely have changed, at least in the USA. Unfortunately, this is AGP in question, so the only improvement of note would be the new GeForce 6800GS, which isn’t all that badly priced, at about $225US.
The 6800GS in AGP is going to be clocked lower than the PCI-e version. We already have benchmarks for the 350/1000 12-pipe 6800 in the Asus V9999 Gamer Edition. In general, the X800 Pro still beat out that Asus, meaning an X800 Pro should still be able to defeat a 6800GS in more games than it loses. If priced the same, the X800 Pro would provide more bang for the buck. Neither competes with the similarly priced X800XT or X850XT when on sale. Your X800XT will be above any of these other cards. If the 6800GS feature set pushes you over the edge compared to an X800 Pro or X800GTO, nobody would blame you; they should be very close. It's good to see a 6800GS come out in AGP, but the clock speeds were a huge disappointment, which makes me believe that NV still has very little to offer the AGP upgrader unless they MUST have NV. Besides the 6600GT when found for $120-140, I'm thinking ATI still owns the AGP upgrade market.
 
If you can find an X800Pro (they're getting rarer all the time)... otherwise:
1. 6800GS
2. X800GTO
3. X800GT
4. 6600GT
in that order.

Before modding, the list is different for AGP: X800GTO > GF6800GS, because the GS is underclocked in AGP form. There is the advantage of unlocking pipes on the AGP version, but that's hit-or-miss depending on whether they are locked or cut. Also, the AGP card doesn't OC as well as the PCIe one because it's 130nm vs 110nm.
 
“FartCry”... I’ll have to remember that one.

From an early nV application-detection cheat that was discovered by renaming the executable to FartCry.

I was fairly certain that the X series cards could handle filtering and blending in floating-point mode, as I’ve seen what are clearly examples in their own tech demos.

The X8 series cards have FX16 alpha blending and FP16 textures, and only FX16 in the TMU (I'm not sure, but I think that's the case with the TMUs even on the R520; not enough info on it). It's not a major limitation, just a technical difference. The alpha blending is the killer when trying to use OpenEXR-based HDR.
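A toy way to see why the blend format matters for OpenEXR-style HDR (my own simplification: it treats FX16 as a normalized fixed-point format clamped to [0,1], which is not a claim about the exact hardware encoding):

import numpy as np

hdr_sample = 4.75  # an over-bright HDR intensity, i.e. greater than 1.0

# FP16 keeps the over-bright range (at reduced precision)...
fp16_stored = float(np.float16(hdr_sample))

# ...while a clamped 16-bit fixed-point value tops out at 1.0,
# so any blend done in that format loses the HDR headroom.
fx16_stored = round(min(max(hdr_sample, 0.0), 1.0) * 65535) / 65535

print(f"FP16 : {fp16_stored}")   # ~4.75
print(f"FX16 : {fx16_stored}")   # 1.0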

(being the video card junkie I am, I have all of ATi’s tech demos for the Radeon 7200 through the X850, the highest that will run on my X800XT) However, as I said, if no one bothers to program using a feature, it’s worthless, and thus far, I’ve seen no one use it.

ATI's tech demo is similar to Masa's Rthdribl. Valve used it for the first attempt at the Lost Coast demo, which allowed HDR on SM2.0 cards. The progression from there is detailed in this Ars Technica article:
http://arstechnica.com/articles/culture/lostcoast.ars/3

To the point where it's a combination of both FP16 and FX16.

However, recent information regarding the upcoming Elder Scrolls IV: Oblivion found at Beyond3D suggests that the game will be an exception. Apparently, it will not only use floating-point color for its HDR method, it will possibly even work with MSAA on any SM 2.0-compliant card, though what was said makes it inconclusive.

Likely it's similar to Valve's eventual compromise. I should check out that thread at B3D, likely very enlightening, and being a Morrowind / Oblivion fan I've been keeping track of their developments (likely more raw new applications of technology than any other single game since FartCry).

And lo and behold, we now have X800GTOs (128MB version) from Sapphire selling for close to the same price as 6600GTs right now! :D I find that to be a pleasant surprise, now if only they became available for AGP...

They are available in AGP now, have been for a while in my neck of the woods, but of course no GTO2s in AGP yet. And hey how about those supposed GF7800GSs coming from eVGA and XFX? :lol:

Oblivion game is a prime example; it will apparently incorporate all the soft-shadow and HDR technologies seen before and more,

Oh yeah, specular lighting, parallax mapping, HDR and more. It should be as fun as Morrowind in that respect.

and it will all run in SM 2.0. (it will run much faster with SM 2.0b or SM 3.0 support, as SM 2.0 will require many of the things to be done in many passes instead of one) And again, what amount of power is really necessary to do something with SM 3.0, just as with how much power is necessary to use so much video RAM, is dependent on the game, not the card.

Well, that's true to some extent, except that some of the ways the GF6 series does its target-dependent calculations force excessive loops, whereas the GF7 is far more efficient. Just look at the enormous speed boost of the GF7 with HDR (beyond its boost in non-HDR), and its OGL 2.0++ performance in Riddick. There's a lot more going on there than just the ability to use the video RAM texture buffer (a GF7800GT is still more efficient even at low res, and even at lower texture detail).

I would say it's a combination of both, but it all comes down to 'enough power to turn on the checkbox'. Many features are low impact and many even improve performance, but to date the only SM3.0 feature I think the GF6 can take full advantage of compared to the competition is specular lighting (used in FartCry and SplinterCell Chaos Theory), and the performance benefit from it is small. Also, the GF6 can run the specular lighting at default, whereas the X8 series cards need to be properly coded for in the game, or else slight differences are noticeable.

It all comes down to what the developers are willing to code for, just like R2VB on the R520; it could be far faster, but without support it will be wasted.
 
Huh? Where? Where? WHERE???
I've been waiting for that for ages!

None for you while you're stuck in NZ with your new girlfriend. [sheep emoticon]


BTW, isn't GW's already modded and you still don't have one? :twisted:
 

the_guru

Distinguished
Dec 18, 2005
434
0
18,780
Guru my arse. You're lucky you got pwned by GGA and not by me when it comes to pipeline unlocking. :twisted:

I'm the only real Guru here BTW. :tongue:

I didn't get "pwned" by GGA. The GTO2 has a 100% success rate in unlocking the pipes. If you were a guru you would know that.
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
ATI's tech demo is similar to Masa's Rthdribl. Valve used it for the first attempt at the Lost Coast demo, which allowed HDR on SM2.0 cards. The progression from there is detailed in this Ars Technica article:
http://arstechnica.com/articles/culture/lostcoast.ars/3

To the point where it's a combination of both FP16 and FX16.
Well, I did note how very close to identical the variety of elements used in both Masa's RTHDRIBL and ATi's Debevec HDR demo for the 9700 are.

Likely it's similar to Valve's eventual compromise. I should check out that thread at B3D, likely very enlightening, and being a Morrowind / Oblivion fan I've been keeping track of their developments (likely more raw new applications of technology than any other single game since FartCry).
Indeed; the article primarily covers shadows, but it's big nonetheless; the most important thing to note is that the game casts off stencil shadowing entirely, and uses shadow mapping instead. Hence, with any card that has support for MRTs, each shadow can be drawn in a single pass. Not to mention the fact that it's far more accurate to life, as well as providing a better-distributed workload.

They are available in AGP now, have been for a while in my neck of the woods, but of course no GTO2s in AGP yet. And hey how about those supposed GF7800GSs coming from eVGA and XFX? :lol:
I should've been clearer; I meant the 128MB versions; I haven't seen any for AGP, only 256MB versions. The price difference I saw was significant. Since the GTO²s seem to all use the R480 core, I have my doubts that they might be made available for AGP. Then again, I have no idea how much it costs to add the PCI-e to AGP bridge chip, so it may actually be feasible. And the 7800GS cards are indeed strange, though I'm not holding my breath for an AGP version; even if they do come, they will likely still cost a ton, and not be able to compete performance-wise with ATi's X850XT PE, which already comes close enough to the 7800GT in a fair fight.

Oh yeah, specular lighting, parallax mapping, HDR and more. It should be as fun as Morrowind in that respect.
Ah, yes, the days when shaded water was all the rage, with all those cross Xbox/PC titles like Halo and Unreal Championship...

...Good thing that the game will certainly be more fun and less stiff than Morrowind was, and have some more of the gameplay goodness that we got in Daggerfall. I must say that it's my most awaited game of 2006, and was my most awaited game for 2005, though last year it shared the spot with Age of Empires III and Perfect Dark Zero. (both from Microsoft, coincidentally...)

Well, that's true to some extent, except that some of the ways the GF6 series does its target-dependent calculations force excessive loops, whereas the GF7 is far more efficient. Just look at the enormous speed boost of the GF7 with HDR (beyond its boost in non-HDR), and its OGL 2.0++ performance in Riddick. There's a lot more going on there than just the ability to use the video RAM texture buffer (a GF7800GT is still more efficient even at low res, and even at lower texture detail).

I would say it's a combination of both, but it all comes down to 'enough power to turn on the checkbox'. Many features are low impact and many even improve performance, but to date the only SM3.0 feature I think the GF6 can take full advantage of compared to the competition is specular lighting (used in FartCry and SplinterCell Chaos Theory), and the performance benefit from it is small. Also, the GF6 can run the specular lighting at default, whereas the X8 series cards need to be properly coded for in the game, or else slight differences are noticeable.

It all comes down to what the developers are willing to code for, just like R2VB on the R520; it could be far faster, but without support it will be wasted.
Ah, yes, I'm aware that nVidia has definitely been making major changes to the whole GPU with each new generation; the G70 looks little like the NV45. I've been impressed with how much nVidia's been able to pull up their weak points; SM 3.0 modes in games did little good for the GeForce 6 cards given the massive performance loss that would come from them. I've seen this gap shrink, and it seems to be rather small in the X1k series cards, and hopefully it will be with the GeForce 8 cards as well; it might even be viable to use such features on a low-end card, presuming you're also reducing the number of pixels to render those shaders across, of course. :p