
And this is how Nvidia falls off the high horse ... GTsuX 280

May 24, 2008 5:35:05 PM

The last time the inq published real news was... What comes before never?

Don't trust such sensationalist sources and wait until official sources publish real reviews.
May 24, 2008 5:36:53 PM

mihirkula said:
Nvidia...ya gotsta be kiddin ....

http://www.theinquirer.net/gb/inquirer/news/2008/05/24/...

Go 4870 :) 


Well, it's very interesting reading, but as I said to all those who went off half-cocked about how great these cards would be when some specs were posted a week or so back, let's just wait and see what we get when the benchies are out.
Mactronix
May 24, 2008 5:38:04 PM

I never put nV on a pedestal for G92 or GT200. I can admit G92 wasn't as good as claimed, but this is still early predictions from the Inq, probably the least reputable, AMD-fanboy reporters out there. And you're just another AMD/ATI fan clinging to it.
May 24, 2008 5:41:20 PM

I almost hope it is true.... daamit needs some money..... (and I'm probably going to be a budget participant in this generation....)
May 24, 2008 5:47:09 PM

I thought it was pretty funny.
I like how they made it seem like they were serious, for the lulz.
May 24, 2008 6:03:41 PM

Honestly, I'm an ATI supporter (almost to a fault), but I pursue truth first. This article seems a little biased to me. They only talk about the negatives. I find it hard to believe that the card will be such a failure. As history shows, Nvidia cards tend to look weak on paper, but perform well in real life. And vice versa with ATI lately.

Though I hope this is somewhat true, my gut tends to tell me otherwise.
May 24, 2008 6:55:00 PM

Gosh, I sure hope the GTS isn't that much. I was hoping to step up, but now....
May 24, 2008 7:23:38 PM

Come on, Charlie Demerjian is obviously throwing a tantrum since he wasn't invited to the party. He's kicking, screaming and crying while spewing forth garbage. Wait to get the FACTS about the new GPUs, not this twaddle.
May 24, 2008 7:35:55 PM

Another three or four weeks and no more speculation, although that is half the fun.
May 24, 2008 8:36:52 PM

Well, it is just a rumor from The Inq, so it's still too early to say how much of the article proves to be fact. That said, I agree that ATi seems to have the better price/performance in the next generation.
May 25, 2008 12:04:00 AM

If the Nvidia GT200 series has onboard PhysX, then this will more than likely dramatically improve FPS in current CPU-heavy titles like UT3, WiC and Crysis, as the load of processing physics calculations will be taken off the CPU.

I think in titles with less physics the differences will be significantly less, and also if you have a very high-end CPU (a 4.0 GHz-ish overclocked quad and the like) the difference will be less.


On an extremely high-end system the GT200 will still be better than any G92 by a noticeable amount, I'm sure, but on a lower-end system I am willing to bet a dollar that the G260 midrange card will be dramatically better than current midrange cards. Actually, if the G260 is reasonably affordable, then it will make a really excellent SLI setup!


On the other hand, 4870 is more than likely going to be an excellent single card solution for its cost; and I bet a crossfire 4870 will be very impressive.
May 25, 2008 12:19:44 AM

Yeah, but devs would have to code for PhysX or whatever, so just having it means jack s***. I have a feeling that ATI is gonna get stomped again no matter the die size. Nvidia has done it with the 6, 7, and 8 series... so the real 9 series... it wouldn't floor me if it kicks ass.
May 25, 2008 1:09:52 AM

nawww the X1950XTX kicked the crap out of the 7900GTX and the 7950GX2, proven history. Plus the X19 had better IQ.
May 25, 2008 1:11:48 AM

Nah we both know that the X1950XTX kicked the crap out of the 7900GTX and the 7950GX2. And the X19 had better IQ. Nvidia lost hands down in the 7900s.
May 25, 2008 3:48:40 AM

royalcrown said:
Yeah, but devs would have to code for PhysX or whatever, so just having it means jack s***. I have a feeling that ATI is gonna get stomped again no matter the die size. Nvidia has done it with the 6, 7, and 8 series... so the real 9 series... it wouldn't floor me if it kicks ass.



Admittedly true, but maybe next-gen titles will better support this; or, you never know, maybe Nvidia has worked out deals with Crytek/Massgate and the like to add the support with a patch to promote their new cards. It's totally possible, but let's wait until the cards are on the table, right! =)
May 25, 2008 6:59:45 AM

The 7 series was and still is kicked to the curb by the 19xx series. Someone just needs to look. I'm not even going to bother, as I won't waste my time. The G280 is going to be a killer card, but bring your pocketbook and your watercooling.
May 25, 2008 7:54:03 AM

JAYDEEJOHN said:
The 7 series was and still is kicked to the curb by the 19xx series. Someone just needs to look. I'm not even going to bother, as I won't waste my time. The G280 is going to be a killer card, but bring your pocketbook and your watercooling.


And don't forget the X800 outperformed the GF6800; the GF6800 relied on SM3.0 to make up the difference and on the price/performance of the 6800GT (not the weak performance of the Ultra) to sell cards. The HD series has only been successful when learning from that GF6800GT strategy.

As for the GTX280, aka G200 (not GT200 like some people thought), it's too early to tell anything right now, but it's definitely going to be a huge die (and thus expensive to make), plus a complex PCB and the added expense of the NVIO again; it's never going to be cheap.

I have no doubt it will outperform the HD48xx series, but they're definitely approaching the market from two opposite ends of the equation. Relying on crippling this huge chip to make the GTX260 means that you're still spending about the same amount to make that lower-end card to compete against a significantly cheaper product. And they can't rely on the current G92 to be the cheap competition if the yields are low; they'll need to hope the yields of the G92B shrink are better, and that the rumour is true that the move to 55nm for the G200/GTX280 will come quickly to improve yields.
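Just to put rough numbers on the "huge die = expensive" point, here's a quick sketch using the standard first-order dies-per-wafer estimate; the die areas are rumoured ballpark figures, not confirmed specs.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """First-order estimate of candidate dies per wafer (ignores yield):
    gross area term minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Rumoured ballpark die sizes, not confirmed figures:
# ~576 mm^2 for a G200-class die vs ~256 mm^2 for an RV770-class die.
for name, area in (("G200-class die (~576 mm^2)", 576.0),
                   ("RV770-class die (~256 mm^2)", 256.0)):
    print(f"{name}: ~{dies_per_wafer(area):.0f} candidate dies per 300 mm wafer")
```

Even before yield losses, the smaller die gives well over twice as many candidate chips per wafer, and defects hit a bigger die harder, which is the whole cost argument in a nutshell.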
May 25, 2008 8:04:28 AM

The 92b is nVidia's true hope here. We all know that. They'll be lucky to make a 20th of the money from the G2xx series that they make from the 92b's. What'll be interesting is whether nVidia can get the yields up, raise their clocks, and throw in a few tweaks to even compete with the 4xxx series.
May 25, 2008 9:06:53 AM

babybudha said:
They only talk about the negatives. I find it hard to believe that the card will be such a failure.


Hmmm... Well, it's in line with what we know so far. The G200 series is a big monster: probably quite fast and very, very expensive to produce. So no new information here. Are we getting shaky? Such a big GPU needs time to mature and needs a smaller production node to make it more affordable. But all in all, what this article says is that the 4800 is the more mainstream card and the G280 the more extreme one...

May 25, 2008 10:09:41 AM

My power supply couldn't even take a 280gtx or whatever. Nvidia's next chip sounds more and more like a power guzzling nuclear reactor than a video card.

I just hope AMD stays competitive. At least AMD will have a price segment where they dominate, unless Nvidia releases a slower alternative or drops GTX 260 prices.

Why didn't Nvidia plan cards with 384-bit and 320-bit buses along with 512-bit? Nvidia knew AMD was going to stick to a 16-ROP config. It would have been much easier on the pockets and power consumption to compete with the 4870 and still have the performance crown.
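For reference, on G80-style designs the bus width isn't a free choice: it comes in 64-bit steps tied to ROP partitions, so narrower versions fall out of disabling partitions. A rough sketch below; the 4-ROPs-per-partition figure is the G80 arrangement and only assumed to carry over to the new chip.

```python
# Rough sketch: on G80-style designs the memory interface is built from 64-bit
# partitions, each paired with a ROP block (4 ROPs per partition on G80; only
# assumed to carry over to the new chip). Narrower buses fall out of disabling
# partitions, so the options come in 64-bit steps.
BITS_PER_PARTITION = 64
ROPS_PER_PARTITION = 4  # G80 arrangement, assumed here

for partitions in (8, 7, 6, 5):
    bus_bits = partitions * BITS_PER_PARTITION
    rops = partitions * ROPS_PER_PARTITION
    print(f"{partitions} partitions -> {bus_bits}-bit bus, {rops} ROPs")
```

So a 384-bit or 320-bit part would also mean dropping to 24 or 20 ROPs, which is presumably what a salvage version would look like anyway.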
May 25, 2008 10:15:45 AM

DarthPiggie said:
Nah we both know that the X1950XTX kicked the crap out of the 7900GTX and the 7950GX2. And the X19 had better IQ. Nvidia lost hands down in the 7900s.


Nvidia actually won because they sold more video cards, even though theirs were inferior. Thank god I skipped the entire 7 series. :p 
May 25, 2008 10:27:25 AM

That's just it: with all the hoopla about the G2xx, the 92b could be an 8800GTX refresh on a lower node, with higher clocks and a few tweaks. Who knows?
May 25, 2008 3:12:40 PM

If Nvidia releases G92 with GDDR5 it would easily compete with 4850 and 4870 or whatever. AMD would be in a tight spot again and be out of luck in all price segments.
May 25, 2008 4:12:59 PM

marvelous211 said:
If Nvidia releases G92 with GDDR5 it would easily compete with 4850 and 4870 or whatever. AMD would be in a tight spot again and be out of luck in all price segments.


Why do you assume that?

Is there any evidence that the G92 is currently bandwidth starved?


You're only as fast as your slowest area. If the G92(a) is a well-balanced design (as it should be, since it has evolved over 18 months or so) then speeding up one particular part of the card (in this case memory) will not result in a significant advance, as the bottleneck will move to other parts.
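A crude way to picture the point, with completely made-up per-frame numbers just for illustration:

```python
# Toy bottleneck model: per-frame time is dominated by the slowest stage, so
# speeding up one stage only helps until another stage becomes the limit.
# Every number here is made up purely for illustration.
def frame_time_ms(shader_ms, texture_ms, memory_ms):
    return max(shader_ms, texture_ms, memory_ms)

balanced   = frame_time_ms(shader_ms=20.0, texture_ms=19.0, memory_ms=21.0)
faster_mem = frame_time_ms(shader_ms=20.0, texture_ms=19.0, memory_ms=12.0)  # e.g. a GDDR5 swap
print(balanced, faster_mem)  # 21.0 -> 20.0: a big memory speedup, a tiny frame-time gain
```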
May 25, 2008 7:21:55 PM

Amiga500 said:
Why do you assume that?

Is there any evidence that the G92 is currently bandwidth starved?


You're only as fast as your slowest area. If the G92(a) is a well-balanced design (as it should be, since it has evolved over 18 months or so) then speeding up one particular part of the card (in this case memory) will not result in a significant advance, as the bottleneck will move to other parts.


G92 is bandwidth starved.
May 25, 2008 9:56:34 PM

^ True that. But it'd take a PCB change, a huge one, and other tinkering for GDDR5 to work, and that's only if the nVidia cards aren't too noisy for the RAM, like we saw with GDDR4, although I hear GDDR5 is much more resilient.
May 25, 2008 9:58:50 PM

It comes down to the fact that ATI has an advantage here: they have been working on dual-core GPUs for a longer amount of time. It seems to me that once again Nvidia will be on top performance-wise, but the much cheaper ATI cards will be very close to the GTX280. The story is very biased, but its information is correct. The only thing it said wrong was saying that 1GB of memory was a bad thing, and it surely is NOT, since ATI shot themselves in the foot again and are now using GDDR5! GDDR4 did not yield much of a performance boost, but 512-bit/1GB of GDDR yields great performance over 256-bit/512mb GDDR at higher resolutions. You want to see a killer card? 1Gbit/2GB, that way the standard high resolution of 1920x1080 is no longer demanding at all. The last bit was all B.S. of course, but it would be nice! Anyway, I am all the way for ATI, because I can not stand Nvidia's crappy P.O.S. chipsets when there are X48s to be had!!!
May 25, 2008 10:18:39 PM

Nvidia has been working on SLI longer than ATI.
May 25, 2008 10:28:55 PM

Dualcore is not the same as SLI; drivers are sort of similar, but that's about it. Also, Crossfire still performs better than SLI.
May 25, 2008 10:29:01 PM

True, but Intel is being buttheads about this too. CUDA is free, and Intel wants to charge for USB3 from what I hear, so why not SLI? Anyways, I agree for the most part: ATI will have the heart of the market if their cards deliver, but the top end will most likely belong to nVidia, because the G280 is as much as you can put on a card, period.
May 25, 2008 10:35:39 PM

marvelous211 said:
If Nvidia releases G92 with GDDR5 it would easily compete with 4850 and 4870 or whatever. AMD would be in a tight spot again and be out of luck in all price segments.


Not only are you missing the extra core point, but GDDR5 is not determined to be a GOOD thing yet, and keeping GDDR4 and only a 256-bit/512MB interface will most likely be the 4xxx series' downfall. So... no, not at all.
May 25, 2008 10:39:56 PM

ovaltineplease said:
admittedly true, but maybe next gen titles will better support this; or you never know really, maybe nvidia has worked out deals with crytek/massgate and the like to add the support with a patch to promote their new cards. Its totally possible, but lets wait until the cards are on the table right! =)


Yeah... those deals have already been done, ever notice that pretty much only Crysis gets a major performance boost with the "9" series?
May 25, 2008 11:10:53 PM

The_Blood_Raven said:
Dualcore is not the same as SLI; drivers are sort of similar, but that's about it. Also, Crossfire still performs better than SLI.


Same $h1t. The AMD X2 lineup is 2 GPUs on one PCB using a CrossFire bridge. Excuse my French. :lol: 
May 25, 2008 11:15:22 PM

The_Blood_Raven said:
Not only are you missing the extra core point, but GDDR5 is not determined to be a GOOD thing yet, and keeping GDDR4 and only a 256-bit/512MB interface will most likely be the 4xxx series' downfall. So... no, not at all.


What core points? :heink: 

The 4870 is a 16-ROP, 32-TMU card with more shaders and GDDR5. The G92 is a 16-ROP, 64-TMU card with GDDR3.

When it isn't bandwidth starved it can easily match a 4850. Shaders might play a role, but that extra texture fillrate will too, which would make for good competition if the G92 were outfitted with GDDR5.
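Rough numbers, if you want to see it on paper; the clocks and memory speeds here are my guesses for the rumoured parts, not confirmed specs.

```python
# Back-of-envelope texel fillrate and memory bandwidth. The clocks and memory
# speeds are guesses for the rumoured parts, not confirmed specs.
def texel_fillrate_gt_s(tmus, core_mhz):
    return tmus * core_mhz / 1000.0                  # gigatexels per second

def bandwidth_gb_s(bus_bits, effective_mt_s):
    return (bus_bits / 8) * effective_mt_s / 1000.0  # gigabytes per second

# G92 (9800GTX-style): 64 TMUs @ ~675 MHz, 256-bit GDDR3 @ ~2200 MT/s effective
print(texel_fillrate_gt_s(64, 675), bandwidth_gb_s(256, 2200))  # ~43.2 GT/s, ~70 GB/s
# Rumoured 4870: 32 TMUs @ ~750 MHz, 256-bit GDDR5 @ ~3600 MT/s effective
print(texel_fillrate_gt_s(32, 750), bandwidth_gb_s(256, 3600))  # ~24.0 GT/s, ~115 GB/s
```

On those guesses the G92 has roughly double the raw texturing rate but noticeably less memory bandwidth, which is the whole point.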
May 26, 2008 2:26:51 AM

I do not mean to be offensive, marvelous, but your point is very... stupid. That is like saying you could just slap more pipes and memory on a 7800 GT and get an 8800 GT; it does not work that way. Not only can you not compare the specifications of two different brands of cards, or any cards that have different core architectures, but you can not "add" anything to a card without totally redoing its core architecture, which would make it a totally different card.

Let's say you put GDDR5 on a G92 based card and, hopefully, increased performance. You still could not compare that to the 4xxx series, not only because they are not out yet and so we do not know how they will perform, but also the G92 core is NOT optimized for GDDR5, but only GDDR3 so it would not even work without a total architectural revamping. This creates a totally new core and a totally new card.

Also I have absolutely never seen anything pointing out that ANY current GPU is bandwidth starved; if anything, bandwidth is the only thing we have plenty of. That is, if you are referring to the interconnect lane, PCIe 1.1 or PCIe 2.0; otherwise you must be talking about the GDDR memory bandwidth, which is definitely not "starved". To add more bandwidth to the card, or in a sense make it a 512-bit card, would only increase performance at higher resolutions, and even this is not possible without a total architectural revamping.

Your points make no sense, so I must say that this entire topic is useless.
May 26, 2008 2:51:06 AM

Well, to be honest, you (blood raven) sound like you have no idea what the hell you are talking about, talking about core points and crap that doesn't even exist.

You can essentially slap more pipes and memory on a 7800GT and make it perform faster; not like an 8800GT, since the 8800GT uses SPs, but you can basically turn it into 7900GT or even 7900GTX performance. That's what the 7900GT and GTX were anyway.

The 3870 and 4870 are not too far apart. It's basically a respin; they added more texture units, more SPs and faster memory. :hello: 

G92 is bandwidth starved. Anyone who knows what they are talking about will tell you this. It's the only reason the 8800 Ultra, with less texture fillrate, still wins most of the benchmarks with AA on.

Texture swapping over PCIe 1.1 or 2.0 is useless in real-world gaming. Please get a clue before calling people stupid.

Oh wait, you are a 16-year-old genius who calls a 33-year-old stupid, someone who's been here from the very start of 3D engines and has a background in electronic engineering and computers. :pt1cable: 
May 26, 2008 2:54:52 AM

OMG... if that revelation is true, then I'll be switching sides!
I wanna play Assassin's Creed!
May 26, 2008 3:10:50 AM

My issue with that report is the price. $450 for something that is supposed to replace the 8800GT? Over $600 for the next GTX competing with their previous generation's dual-GPU card? Sales of their lower-end cards and the value-oriented route their competition is taking should give Nvidia a clue that market share does not lie in the extremes. They need to price the GTX 260 closer to the 8800GTS and the GTX 280 at the original rumored $499, or it will simply be a better value for enthusiasts to go CrossFire. Either this card really has to hit it out of the park in performance (as in double the 9800/8800GTX FPS) or they need to offer an efficient, economical gaming solution; don't they know Hummers aren't cool anymore?
May 26, 2008 3:52:26 AM

The_Blood_Raven said:
The only thing it said wrong was saying that 1GB of memory was a bad thing, and it surely is NOT,...


I don't think you understand what he's saying. With the 512-bit architecture, 32 ROPs, and the current batch of GDDR3 chips at the speed they need, they have to use 1GB to match the 512-bit interface because you can't get the chips smaller than the current layout, which means no cheap 512MB option to bring that performance level into the upper midrange; all cards must have 1GB of memory. Not all games use much more than 512MB, some do, but not all. So it does limit your options, but I agree, for the market it doesn't matter much since the GTX-280 is meant for the Ultra-insane crowd anyways, not the value seekers. The bigger problem, though, is that the GTX-260 is equally stuck at 896MB, which is not good for a card you want to make cheaper.

Looking at recent production you don't see much in the GDDR3 area that would allow for smaller memory amounts without going backwards in the production scale. Of course this is also possible if production demand were high enough to get manufacturers to make modules specifically for a 448MB card, but I think you would lose some of your savings by going that route, so it's not impossible, just impractical, especially at launch. I didn't think about that aspect before, but it does make one skeptical of there being a usable GTS-320-style card appearing anytime soon, which didn't matter for the G80 launch because there weren't other options out there from ATi; it means more this time around.
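A quick back-of-envelope on why the bus width pins the memory size, assuming the usual x32 GDDR3 chips; the chip densities below are just the commonly available parts today, not anything confirmed for these boards.

```python
# Why the bus width pins the frame-buffer size: with one x32 GDDR3 chip per
# 32 bits of interface, the chip count is fixed by the bus, and total memory is
# chip_count * chip_density. Densities below are just the common parts today;
# treat this as illustration, not a board spec.
def framebuffer_mb(bus_bits, chip_mbit):
    chips = bus_bits // 32              # one x32 chip per 32 bits of bus
    return chips, chips * chip_mbit // 8

for bus in (512, 448):
    for density in (512, 1024):         # 512 Mbit and 1 Gbit chips
        chips, mb = framebuffer_mb(bus, density)
        print(f"{bus}-bit bus with {density} Mbit chips: {chips} chips, {mb} MB")
```

So with today's 512 Mbit parts, the 512-bit and 448-bit interfaces land exactly on 1GB and 896MB, and the only routes to a cheaper 512MB card are smaller-density chips or a narrower bus.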

But for the cards themselves at the high end, sure, 1GB is great; heck, 2GB would be even better (as long as you're not on a 32-bit OS) and remains an option for the GTX-280. But it speaks to the issue of price, and 1GB and 2GB cards are not cheap unless you have the option of much smaller bitwidth interfaces like those found on the low/mid-range cards. So for an article about keeping costs low, yes, 1GB-or-greater only and 896MB-or-greater IS a bad thing. I think that's what he was speaking to in that article.

Quote:
since ATI shot themselves in the foot again and are now using GDDR5! GDDR4 did not yield much of a performance boost, but 512-bit/1GB of GDDR yields great performance over 256-bit/512mb GDDR at higher resolutions.


GDDR4 and GDDR5 are very different animals. How do you perceive ATi having shot themselves in the foot when they are going to have the option of nearly the same bandwidth at launch (basically 90% of the GTX-280's 1107 MHz GDDR3 memory) yet far more headroom to go forward from there? GDDR3 is near its max at 1200MHz, especially for higher bitwidths, yet GDDR5 pretty much starts at 2GHz (1.8 is the lowest model from Samsung, which is likely what's on the HD4850 running @ 1.75). Long term, I don't see how ATi shoots themselves in the foot by giving themselves room to grow and using a technology that greatly benefits the X2 series, where they have the option of 2 cores sharing the same memory pool; the GTX series and its GDDR3 memory do not have the same option (you could share the pool using a bridge interface, but it would be terribly inefficient compared to just copying the memory to 2 pools).

Quote:
You want to see a killer card? 1Gbit/2GB, that way the standard high resolution of 1920x1080 is no longer demanding at all.


How is it demanding at all right now? It's not the memory that's the issue with that resolution (or the more useful 1920x1200), but the processing demands of rendering the pixels and effects. Memory bandwidth and size for the Geforce series are most important for AA and little else; even large textures were well handled without AA at 1920x1200 with the G92-GTX. They don't have the same DX10.1 buffer requirements as the HD series, so I don't think that resolution is an issue until you add the burden of AA. I think nV realizes that their marketing benefit is in pushing traditional hardware DX9-style AA and pumping out those numbers, especially keeping their ROP count high, which is way more than enough for just outputting the rendered pixels, but is crucial for AA resolve performance.
IMO a 1Gbit/2GB memory resource isn't necessary for large resolutions; it's only necessary for enormous texture sizes, huge amounts of AA, or for a DX10.1 card's numerous buffers from which to pull to reduce rendering objects multiple times or recalculating, and this sort of goes backwards from where people want to head, which is greater efficiency of resources. Do you even realize the amount of PCB complexity required for 1Gbit memory traces? That's a pretty packed and layered PCB, which is pretty expensive as well.

Anyways, I agree that the 1GB on the GTX-280 is likely not a limitation since it's aimed at deep pockets, but the lack of an option to go with a smaller amount for the GTX-260 may be more of an issue when trying to compete with a GDDR3/4/5 HD4850, which can vary its memory amount to whatever partial 256/512/1GB fraction it wants thanks to the more flexible ringbus.
May 26, 2008 4:06:55 AM

marvelous211 said:
Nvidia has been working on SLI longer than ATI.


Actually, no they haven't.

3Dfx worked on SLI, but none of that technology made its way over to nVidia's SLi, which actually uses ATi's AFR and Metabyte's SFR methods to render images.

ATi's been working on it longer, they just didn't have a retail part to market for gamers sooner. Their work with Evans & Sutherland on the R200 and R300 cores way predates nV's SLi on the GF6 series, and used both AFR and their Supertiling methods to render scenes.

http://img221.imageshack.us/img221/8320/ensradsimfusbgr...
http://img98.imageshack.us/img98/8705/es7000dk4.jpg

However, more experience doesn't mean capability/success, as the poor X800 CF showed and as nV's early issues with Quad SLi after their GX2 experience show.
May 26, 2008 4:34:28 AM

The_Blood_Raven said:
...but you can not "add" anything to a card without totally redoing its core architecture, which would make it a totally different card.


Sure you can, and with a process shrink you usually have more space to add things in new nooks and crannies made by the shrink. That's when you usually see slight tweaks in designs. I don't think they will change the memory crossbar and add support for GDDR5, but it's not impossible, and it would be the practical point at which to do it. However it's unlikely, since it would require adding an unknown to a design that is already offering low yields; why add a crucial variable when the most important thing would be to get more parts out the door with the existing design? You tweak successful chips, not those that have issues (talking yields, not performance).

Quote:
also the G92 core is NOT optimized for GDDR5, but only GDDR3 so it would not even work without a total architectural revamping. This creates a totally new core and a totally new card.


Actually the G92 core is not optimized for either; once it gets past the memory crossbar, it's simply bandwidth available for use by the various components.

Quote:
Also I have absolutely never seen anything pointing out that ANY current GPU is bandwidth starved; if anything, bandwidth is the only thing we have plenty of. That is, if you are referring to the interconnect lane, PCIe 1.1 or PCIe 2.0; otherwise you must be talking about the GDDR memory bandwidth, which is definitely not "starved". To add more bandwidth to the card, or in a sense make it a 512-bit card, would only increase performance at higher resolutions, and even this is not possible without a total architectural revamping.


He's talking about memory, that's the whole point behind the GDDR5 memory discussion, and yes the G92 can be starved by memory with either high resolutions and AA levels or with high texture loads (which make it so that the TMUs' requests cannot be completed fast enough). There are memory management methods to help this, but there are many examples where even the GF9800GTX loses out to the greater memory bandwidth of the GF8800GTX and Ultra. The easiest way to show this is by cranking the AA levels higher: at 8X they both fall off the end of the earth, but the G92s feel it more because of their much lower memory bandwidth and their inability to feed the RBEs doing all the AA work.

I think support for GDDR5 would help the G92 somewhat, but it would be in such specific and limited cases, where their shader power could also use improvement, that they'd be better off just increasing their yields of standard G92s without risking adding something as touchy as memory, which plagued the G80s early on and caused the R420 to be delayed forever (6+ months). It's possible, just not worth it IMO.
May 26, 2008 4:37:10 AM

Considering the 4850 is supposed to be only 25% faster than the 9800GTX, I think GDDR5 would help the G92 compete in or hold on to ATI's price bracket. That would be the easiest way to dominate the price segment.
May 26, 2008 4:53:03 AM

I don't disagree performance-wise (it also depends on which HD4850 you're talking about [remember there are GDDR3 and GDDR5 models]), and I still think they could win some areas of the performance segment with the addition of GDDR5, but there are 2 issues: first, they are now in an even deeper shader hole against the HD4K series with a G92, and second, if they don't improve those yields they may win, but it may still be expensive for them to win, because the HD4850 will be cast-offs of HD4870s, while the G92 would need to be the cream of their crop in order to add to that GTX performance and not be clocked lower to improve yields.

As to what performance boost GDDR5 would provide, I think it would be much less than 25%, but if coupled with some added core/shader MHz from the process shrink, perhaps it would yield that much or more in combination. As mentioned before, it would definitely help in the higher-resolution AA situations where performance falls off greatly, so maybe there you'd see much greater boosts, well above 25%, perhaps substantially more, getting closer to triple digits.
May 26, 2008 4:55:30 AM

I don't doubt the 4870, with more SPs, would do better in some games, but the G92 has much more texture fillrate, so with more bandwidth it would take the cake in others. Particularly with AA. Kind of like the 9600GT vs the 3870.
May 26, 2008 5:00:22 AM

Why is it that at high resolutions AA apparently matters less? I can't tell the difference switching from 2x or 4x to 8x at 1920x1200, save for the performance drops.
May 26, 2008 5:23:37 AM

FusoyaX said:
My issue with that report is the price. $450 for something that is supposed to replace the 8800GT? Over $600 for the next GTX competing with their previous generation's dual-GPU card? Sales of their lower-end cards and the value-oriented route their competition is taking should give Nvidia a clue that market share does not lie in the extremes. They need to price the GTX 260 closer to the 8800GTS and the GTX 280 at the original rumored $499, or it will simply be a better value for enthusiasts to go CrossFire. Either this card really has to hit it out of the park in performance (as in double the 9800/8800GTX FPS) or they need to offer an efficient, economical gaming solution; don't they know Hummers aren't cool anymore?



To put it in perspective, if the GTX 280 performs like an 8800 GTX/Ultra did when they were launched, then it will be well worth $600.

Consider how long the 8800 GTX/Ultra has lasted as an enthusiast video array (2+ years and still going strong?) and you can see that at $600 it was a bargain, to say the least.


If the GTX 280 can deliver a comparable card, then it will be 100% worth $600.
May 26, 2008 5:25:38 AM

scooterlibby said:
Why is it that at high resolutions AA apparently matters less? I can't tell the difference switching from 2x or 4x to 8x at 1920x1200, save for the performance drops.



In my opinion this mattered more in older games, but in newer games some of the post-processing effects and anisotropic filtering really help bring the image quality up considerably without enabling AA.

I think 2x AA at 1680*1050 looks better than 1920*1200 with no AA; but that's just my opinion.
May 26, 2008 6:04:32 AM

marvelous211 said:
I don't doubt the 4870, with more SPs, would do better in some games, but the G92 has much more texture fillrate, so with more bandwidth it would take the cake in others. Particularly with AA. Kind of like the 9600GT vs the 3870.


I understand, but remember texture power isn't as crucial once you get over the basic hump.
The HD2/3K series were definitely limited, but the HD4K has more than doubled its TMUs, so that's no longer as prominent a bottleneck.

If it were strictly the texturing power alone then the G92 should do much better than the G80s without AA consuming bandwidth, yet it's only a small difference.

They can definitely use the bandwidth, but you really have to force the situation for it to be a texture limitation, like running large or many textures.
In very limited situations it'll make a noticeable difference, but in most situations it won't be a huge difference IMO.
May 26, 2008 6:07:30 AM

ovaltineplease said:
Consider how long the 8800 GTX/Ultra has lasted as an enthusiast video array (2+ years and still going strong?)


Huh? :heink: 
The G80 has been around less than 2 years; it launched in the late fall of 2006. Just over a year and a half so far.