vaker5

Distinguished
Oct 28, 2007
I still don't see why, with its insane specs, it failed... 512-bit interface, 750MHz core, etc. Can someone explain this to me?
 

yipsl

Distinguished
Jul 8, 2006
Other people can give a more technical explanation, but my opinion is that it didn't fail by as much as people think in terms of real-world performance. It fell short in some games against the higher-end 8800 GTX parts, but it did well against the 8800 GTS 320.

Specs aren't everything. As the Phenom launch compared to the HD 3800 launch shows, a new process works wonders. The 2900 XT just can't compete against the 3870 and does only minimally better than the 3850.

Right now, ATI is taking a more long-term, shader-heavy approach, whereas Nvidia puts more texture units in its cards. That's one spec people often overlook. ATI does better in shader-heavy games than the equivalent Nvidia card.
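Just to put some rough numbers on that shader-vs-texture trade-off (the unit counts and clocks below are the commonly quoted reference specs, so treat the figures as approximate):

```python
# Rough peak MADD shader throughput: shader ALUs x 2 flops x shader clock (GHz).
# Reference specs, approximate; peak numbers say nothing about real-game utilisation.
def gflops_madd(shader_alus, shader_clock_ghz):
    return shader_alus * 2 * shader_clock_ghz

print(gflops_madd(320, 0.742))  # HD 2900 XT: ~475 GFLOPS (320 ALUs in 64 five-wide units)
print(gflops_madd(128, 1.35))   # 8800 GTX:   ~346 GFLOPS (scalar ALUs on a separate, faster shader clock)
```

On paper the 2900 XT has the shader edge, but Nvidia's card has roughly twice the texturing hardware, which still matters more in most of today's games.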

 

vaker5

Distinguished
Oct 28, 2007
What I don't get is why Nvidia and ATI release hot, not-as-fast cards, only to release a cheaper, cooler, faster version at the end of the year. I mean, are they that greedy?
 

Slobogob

Distinguished
Aug 10, 2006


Actually Nvidia is that greedy and AMD just can't make anything faster.
 

spoonboy

Distinguished
Oct 31, 2007


The 2900 XT has specs that made it seem like it would trounce any other card, but its pricing showed it was only intended to compete with the 320MB & 640MB nVidia GTS cards.

However, the 2900 has three important limitations.

1) It only has 16 texture units, which is a low number for a high-performance card; the 8800-series cards from nVidia have a lot more. Texture units from the two companies aren't identical, and judging by the reasonable performance of the 2900 XT's 16 units, ATI's versions are more efficient, but there still aren't enough of them to challenge the 8800 GTX or the new GT (some rough fill-rate numbers are at the end of this post). This is also likely the reason the 1GB 2900 XT and Pro cards were almost no faster than the 512MB versions at the same memory clock: the card doesn't have the texturing power to benefit from the extra memory. In short, the 512MB frame buffer is, unusually for a high-end video card, not a limit on its performance.

2) It only has 16 ROPs (the units that usually apply anti-aliasing and texture filtering), which really limits performance with those filters turned on. Furthermore, the 2900's ROPs don't apply AA themselves; that is done 'in software/emulation' by the huge shader core. Enabling high levels of filtering can therefore cut deeply into performance: the ROPs reach maximum capacity quite easily at higher resolutions, and the shader core has to divert a huge amount of its time to implementing AA instead of, well... shading pixels (a rough sketch of what that resolve step costs is at the end of this post). This sounds like a terrible way to implement AA, but ATI aimed this card at a possible future method of producing AA with better image quality than the current 'traditional' kind. The pros and cons of doing that at the time are debatable, but I guess someone's got to strike out at some point, and it was ATI. The performance hit of enabling AA & AF depends, almost wildly one could say, on the game to which it is applied. This is again mostly down to the final 'limitation'...

3) The core of the 2900 is mind-bendingly complex, which means good drivers are even more essential for good performance than you might normally expect. With the R600 (Radeon 2400, 2600 & 2900) ATI set out to produce a technical marvel that would gain efficiency through complicated scheduling of tasks, carrying them out in the most efficient order, looking something like seven steps ahead across 320 programmable shaders, while also finding time to implement AA on the side. This also explains why a wide 512-bit interface was used: it gives all these chunks of information room to be queued up and moved to the appropriate area at the appropriate time without interfering with ongoing operations (rough bandwidth numbers are at the end of this post).

However, coordinating all this activity proved a huge challenge for ATI to implement through drivers; arguably it has taken them until just a short time ago, with Catalyst 7.10, something like 5 months after release, to get it working at near full potential. The release drivers for the 2900 were, in all honesty, barely functional: the straightforward task of getting 3DMark to run well was fine, but an actual dynamically changing game engine was another problem entirely.

To see how far the drivers have come (which is a very, very long way), compare the first reviews of the 2900 with, say, reviews of the 8800 GT or Radeon 3850 & 3870, where the 2900 XT results, if present, were produced with Catalyst 7.10, and see for yourself how the performance has improved. Case in point: STALKER was barely playable at full detail on the 2900 XT at a mere 1280x1024 with the release Catalyst 7.3/7.4, but now 1920x1200 is dismissed with a smooth 60fps+. Even AA & AF performance has come a long way, showing how the drivers have improved task scheduling and other things more or less across the board. Much more work still needs to be done on DX10 performance, though, which is night and day worse than DX9 performance on the 2900 XT.

The new 7.11 drivers, if you look at tweaktown.com, although a mixed bag this time, more or less bring the 2900 XT in line with the new 3870, making them roughly equal in performance. The post above about the 2900 XT being behind the 3870 and just about keeping up with the 3850 is wrong: in terms of straight performance on the latest drivers, the 3870 and 2900 XT can easily be considered equals, and on average the superiors of the nVidia 640MB GTS, although it's taken a long time to get to that point. Those already with 2900s are definitely not being left behind by the 3870 or 3850. Yes, the 3850 & 3870 have been tweaked and improved, but they have lost the 512-bit bus of the 2900 XT, so it more or less balances out, I reckon. Hope that was useful to you!
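To put some rough numbers on point 1: texture fill rate is basically texture units times core clock. The unit counts and clocks below are the commonly quoted reference specs, so treat the figures as approximate.

```python
# Rough texture fill rate: texture units x core clock (reference specs, approximate).
def gtexels_per_sec(texture_units, core_clock_mhz):
    return texture_units * core_clock_mhz / 1000.0  # billions of texels per second

print(gtexels_per_sec(16, 742))  # HD 2900 XT: ~11.9 GTexel/s
print(gtexels_per_sec(32, 575))  # 8800 GTX:   ~18.4 GTexel/s (with 64 filtering units behind those)
print(gtexels_per_sec(56, 600))  # 8800 GT:    ~33.6 GTexel/s
```

Even if ATI's units do a bit more per clock, that's a big gap to make up.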
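And a very rough sketch of the point 2 problem, i.e. why resolving AA on the shader core instead of in the ROPs eats into shading time. This is purely illustrative pseudocode of my own, not anything from ATI's actual drivers:

```python
# Illustrative only: counting the extra per-frame work when MSAA subsamples are
# averaged by the shader core instead of by fixed-function ROP hardware.
def shader_resolve_ops(samples_per_pixel, width, height):
    pixels = width * height
    # roughly one fetch and one add per subsample, plus a divide per pixel
    ops_per_pixel = samples_per_pixel * 2 + 1
    return pixels * ops_per_pixel

print(shader_resolve_ops(4, 1920, 1200))  # ~20.7 million extra shader ops per frame at 4x AA
```

Every one of those operations is time the 320 shaders are not spending on actual pixel shading, and the cost grows with both resolution and AA level, which is part of why the AA hit varies so much from game to game.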
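Finally, some rough bandwidth numbers for the 512-bit bus mentioned in point 3 (again using the commonly quoted reference clocks, so approximate):

```python
# Rough memory bandwidth: (bus width / 8) x effective memory clock (reference specs, approximate).
def gb_per_sec(bus_width_bits, effective_mem_clock_mhz):
    return bus_width_bits / 8 * effective_mem_clock_mhz / 1000.0

print(gb_per_sec(512, 1656))  # HD 2900 XT, GDDR3: ~106 GB/s
print(gb_per_sec(384, 1800))  # 8800 GTX:          ~86 GB/s
print(gb_per_sec(256, 2250))  # HD 3870, GDDR4:    ~72 GB/s
```

So the 2900 XT had bandwidth to spare; its bottlenecks were elsewhere.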