This source and its information are pending verification.
http://www.fudzilla.com/index.php?option=com_content&task=view&id=661&Itemid=1
R600XTX, the 1024 MB GDDR4 card, has been pushed to the next quarter. This is just one in a series of ATI's failures, but of course DAAMIT will call this a strategic decision. R600XTX won't see the face of retail / etail stores till Q3 2007.
We know that something went wrong with the samples and that there were some severe performance problems with the latest batch of GDDR4 cards. Basically, the 512 MB GDDR3 version with 1600 MHz memory was beating the GDDR4 version with 1024 MB of GDDR4 memory clocked at 2200 MHz.
That cannot be good, as the GDDR4 card is much more expensive to build and sell. The worst part is that there is no big performance difference between the 512 MB GDDR3 card and the 1024 MB GDDR4 card. The trouble is that 1 GB of GDDR4 at 2200 MHz costs a lot of money, so ATI at least wants to save some. It will launch this card after the Geforce 8800 Ultra, but remember, don't expect too much.
This is my speculation, but I think it's true. There's no valid reason the R600XT was the only card released to reviewers and preview sites, other than that it was the only card that actually did what they wanted it to do.
But is this source really so hard to believe?
Before you jump to any conclusions, ask yourself the following:
What has a 65nm process done for other hardware?
Reduced heat and lower power consumption are the only hard facts. No piece of hardware built on a 65nm process has shown any considerable performance gain from the shrink alone.
The Brisbane is 65nm, yet it still only overclocks to around 3.0 GHz, albeit at lower temperatures and with lower power consumption. A Windsor accomplishes the same overclocks on a 90nm process, so really it's the architecture of the chip, not the process size, that counts.
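This matches how dynamic CMOS power scales: roughly capacitance times voltage squared times frequency, so a shrink cuts power at the same clock without raising the clock ceiling. A minimal sketch of that relationship, using made-up illustrative capacitance and voltage figures, not actual Brisbane or Windsor specs:

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# The capacitance and voltage numbers below are illustrative
# assumptions, NOT real Brisbane/Windsor specifications.

def dynamic_power(cap_rel, volts, freq_ghz):
    """Relative dynamic power: capacitance x voltage^2 x frequency."""
    return cap_rel * volts ** 2 * freq_ghz

# Same 3.0 GHz clock on both parts; the shrink lowers C and V.
windsor_90nm = dynamic_power(cap_rel=1.00, volts=1.40, freq_ghz=3.0)
brisbane_65nm = dynamic_power(cap_rel=0.70, volts=1.30, freq_ghz=3.0)

ratio = brisbane_65nm / windsor_90nm
print(f"65nm part draws ~{ratio:.0%} of the 90nm power at the same clock")
```

Under these toy numbers the 65nm part draws roughly 60% of the power at an identical clock, which is exactly the Brisbane-vs-Windsor pattern: the process buys you heat and power, while the clock ceiling stays wherever the chip's architecture puts it.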
What has GDDR4 done for other video cards?
Look at the 8800 series cards: GDDR3, and they put a considerable beating on the X1950XTX/XT. The only conclusion is that the GPU's core and shader specifications make more of a difference than the type of memory. And look at the 6600GT: it's plain DDR, and it beats 7300GTs equipped with GDDR2.
ATI put too much money on the table for a 65nm process and GDDR4 without the actual GPU power or architecture to back it up. You can't just pair GDDR4 with a 65nm GPU and expect it to be great. You have to USE all that memory and all those resources to their full potential.
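If the clocks in the quoted article are right, the raw numbers back this up. Theoretical peak memory bandwidth is just bus width times effective data rate; assuming the widely rumored 512-bit R600 bus (an assumption, not something the article states), the GDDR4 card should enjoy a large bandwidth advantage, so if it still loses, the bottleneck can't be the memory:

```python
# Back-of-the-envelope peak memory bandwidth.
# Assumption: 512-bit bus on both cards (rumored R600 bus width).
# Effective data rates are taken from the quoted Fudzilla figures.
BUS_WIDTH_BITS = 512
GDDR3_MHZ = 1600  # effective rate of the 512 MB GDDR3 card
GDDR4_MHZ = 2200  # effective rate of the 1024 MB GDDR4 card

def bandwidth_gbs(bus_bits, effective_mhz):
    """Theoretical peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

gddr3 = bandwidth_gbs(BUS_WIDTH_BITS, GDDR3_MHZ)  # 102.4 GB/s
gddr4 = bandwidth_gbs(BUS_WIDTH_BITS, GDDR4_MHZ)  # 140.8 GB/s
print(f"GDDR3: {gddr3:.1f} GB/s, GDDR4: {gddr4:.1f} GB/s "
      f"(+{(gddr4 / gddr3 - 1) * 100:.0f}%)")
```

A card with roughly 37% more theoretical bandwidth that shows no performance gain, or even loses, points squarely at the GPU core as the limit, which is the whole argument above.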
And remember, lack of evidence isn't a valid argument against sources like this. You can't come in here and say, "How do you know the X2900XTX sucks if you haven't seen benchmarks?"
That's just it: ATI won't release any engineering samples, and the consensus from leaked sources and speculation is that they aren't releasing engineering samples because the card just sucks.
AMD/ATI is like an ostrich with its head in the sand, and all the fanboys run around defending them when they don't even have the nerve to defend themselves! There's no such thing as a "strategy" of not releasing a sample of your product before launch.
No businessperson on the planet would commit to it. The argument defending why they haven't put out a sample is tired and dead now, and I'm too confident of what's going on to be willing to hear an ATI fanboy's defense.
There's no valid excuse for this lateness except that the chip was a failure.