I don't suppose anyone knows whatever happened to all the other mid-range designs ATI has going? There's the RV570 that we've been talking about, but there's also the RV535, which is an 80nm die shrink of the current X1600s, and then there's the mysterious RV560. On the low end there's also supposed to be an RV505, also an 80nm part, that's supposed to truly replace the X300 since the prices of the X1300 are too high.
But what's the motivation to launch a replacement for the X300 now? The people a replacement would target don't care about the gaming 3D features, so I doubt there's any demand/need for ATI to replace it. A die shrink to reduce costs would make a lot of sense though, but would the savings be that great on a mature chip? They're likely skipping 90nm and going straight from 110nm to 80nm, or even better 65nm. Wisest choice IMO.
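The savings from a shrink can be ballparked. This is a minimal sketch assuming ideal linear scaling, where die area shrinks with the square of the feature-size ratio; real shrinks save less, since pads and analog blocks don't scale, so treat these as best-case numbers:

```python
# Best-case die-area savings from a process shrink: area scales with
# the square of the feature-size ratio under ideal linear scaling.
# Real chips save less (I/O pads and analog sections barely shrink).

def ideal_area_ratio(old_nm: float, new_nm: float) -> float:
    """Best-case die area ratio after shrinking from old_nm to new_nm."""
    return (new_nm / old_nm) ** 2

for new in (90, 80, 65):
    ratio = ideal_area_ratio(110, new)
    print(f"110nm -> {new}nm: die is ~{ratio:.0%} of original area "
          f"(~{1 - ratio:.0%} smaller, best case)")
```

A 110-to-90nm hop only gets you to roughly two-thirds of the original area at best, while 110-to-80nm roughly halves it, which is why skipping the intermediate node looks attractive.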
Obviously ATI can't bring to market all those parts. They don't even have enough X1xxx product numbers left. Why they decided to choose X13xx, X16xx, and X18xx to start with instead of the more spacious X12xx, X15xx, and X18xx is beyond me.
There's lotsa room left. You forget that there's room for X1x50 parts and even X1x25/75 parts too. I don't see it as a big deal.
The thing is, I don't see how ATI expects to make a mid-range part that can perform 10% faster than the X1800XT. The 80nm process offers them more room, but not that much room for more shaders. With the X1800XT already clocking the core at 625MHz, I don't see the 80nm process offering much more clock speed with sufficient yields for the higher volume mid-range market.
But if their yield ratio is the same @ 650-700MHz then the die size savings offer something additional. Add to that the fact that the X1800 no longer matters other than as a legacy part (since the X1900 is much larger and better performing); like so many other parts, it gets abandoned. The most important factors for this new chip are cost to produce vis-a-vis the X1900 and X1600, and performance level with respect to both the X1600 and X1900 (which it may or may not replace [might only shift down the ladder]).
Going to 256-bit is nice, but with the X1800XT having its RAM at 1500MHz, I don't see a mid-range card having more memory bandwidth than that.
The X1800XT RAM is spec'ed for 800/1600, and the X1900XTX's is 900/1800, so getting some cheaper memory like that found on the X1600XT (1.4ns) would let it operate near the same speeds at around 700MHz. At 16x12 resolutions and settings, the 36 shaders might help it outperform despite that slight memory penalty. And in the memory business, the difference between 1.1ns or 1.0ns GDDR3 or GDDR4 (still too young to offer great prices IMO) and 1.4ns or even 1.6ns memory is likely enough to warrant the difference for board makers, who bear the brunt of those costs.
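The clock and bandwidth figures above can be sanity-checked with some back-of-the-envelope math: a chip's ns rating bounds its clock (clock is roughly one over the cycle time), and peak bandwidth is bus width in bytes times the effective data rate (DDR doubles the clock). The card figures are the ones quoted in this thread:

```python
# Rough GDDR3 math for the cards under discussion.

def max_clock_mhz(ns_rating: float) -> float:
    """Approximate max clock for a DRAM chip with the given ns rating."""
    return 1000.0 / ns_rating

def bandwidth_gbs(bus_bits: int, mem_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for DDR memory (data rate = 2x clock)."""
    return (bus_bits / 8) * (mem_clock_mhz * 2) / 1000.0

print(f"1.4ns chips top out near {max_clock_mhz(1.4):.0f}MHz")        # ~714MHz
print(f"X1800XT, 256-bit @ 750MHz: {bandwidth_gbs(256, 750):.1f} GB/s")  # 48.0
print(f"RV570 guess, 256-bit @ 700MHz on 1.4ns parts: "
      f"{bandwidth_gbs(256, 700):.1f} GB/s")                          # 44.8
```

So the cheaper 1.4ns memory on a 256-bit bus gives up only a few GB/s versus the X1800XT's spec, which is the "slight memory penalty" mentioned above.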
What ATI is probably doing is designing the RV570 to compete not around the $200 mark where the 7600GT is,
I doubt that; I'd suspect they are looking slightly higher, with a $249-299 price tag like they initially had in mind for the X700XT. With a card that warrants that price (only achievable with the specs we mention here IMO) it's doable. Bring out another GF7600GT and you would be FORCED to price it at that $199. So if the cost is $5 more to produce, and you can leverage that into $50-100 more in MSRP, that would be a worthwhile boost IMO.
but in the $250-$300 mark high-mid range sector to replace the X1800XL.
OK, maybe we're agreeing here. Did you mean they AREN'T targeting the GF7600GT @ $200 but the higher XL (and GTO) market?
The higher-end market gives them more money to play with, allowing faster memory and more die space.
The $300 part would need to be clocked upwards of 600MHz; probably 650MHz is reasonable given the 80nm process. Coupled with comparable 1500MHz RAM, the RV570 would probably still be weaker than the X1800XT in texture ability, but obviously faster in pixel shader ability, which is what ATI really cares about.
Not sure if that's 'all they care about' but it is the focus of their design direction.
It of course has less vertex power.
A $250 part could tone down the core to 550MHz and the memory to 1200MHz.
To really compete against the 7600GT, ATI will probably have to use the RV560, whatever it is. However the RV570 is designed, it doesn't seem possible for it to offer 10% more performance than the X1800XT yet still fit a $200 price tag.
Except for the fact that the X800 (R430-based) series had similar pricing for a larger chip. Also remember the chips for that area of the market would be cast-off cripples or overproduction parts, in the same manner as the plain X800.
The RV535 die shrink of the X1600 might offer higher clock speeds, but I don't see how much higher than 590MHz you can go with sufficient yields to overcome the fact that the RV530 has a third of the texture units of the 7600GT and half the ROPs. The real issue is of course memory bandwidth. Like the RV570, the RV560 would probably also need to have a 256-bit memory interface.
And therein lies the problem: even with all the memory bandwidth in the world, the X1600 will still not perform well against the GF7600GT, as its low-res/low-settings performance shows. Memory alone will not bridge the gap, and they need to close that gap, since the X800GTO and GF6800GS are already in that segment and will remain for a short while. And if you're paying to add the 512-bit ring bus and the 256-bit support, you might as well pay the premium. IMO, the X1600 replacement is not going to go after the above-X1600XT performance crowd, but the X1600XT down to plain X1600 crowd. The process shrink saves them money; they don't need to increase performance.
Personally, I'd like to see the RV560 stick with the 128-bit interface, but move to higher clock speed GDDR4. Some 2.4GHz or 2.8GHz stuff would certainly relieve the strain just as well as a 256-bit interface but allow ATI to keep die size down and introduce GDDR4 to the market and ensure viability before integrating it into the RV580.
That's true, but the GDDR4 memory you're talking about is too expensive; GDDR4 at 1066 speeds would be far cheaper and make sense for what you're describing. Moving to 1200+ is too much money for such a cheap part, and availability is unlikely to be high early on, so for such a large-volume part I wouldn't think it wise to start near the top of the speed spectrum. If anything, top-end cards like the G80 will be the ones worthy of such exotic/expensive fast memory, especially when the X1800 and X1900 already support it (supposedly the GF7900 not only doesn't support it, but also cut low-end DDR support [kinda doubt that, but it would save transistors]). So using GDDR4 on the X1900 as a refresh (a la R9800Pro-256) or on the G80 makes more sense due to likely low volumes at such an early stage. The 800, 900 and 1066 GDDR4 seems more likely to me.
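The bandwidth math behind the narrow-bus-plus-fast-GDDR4 idea is worth checking: a 128-bit bus needs exactly double the data rate to match a 256-bit bus. A minimal sketch, taking "2.8GHz" to mean a 2800MT/s effective transfer rate (i.e. a 1400MHz GDDR4 clock):

```python
# A 128-bit bus at double the effective data rate delivers the same
# peak bandwidth as a 256-bit bus, which is the trade-off discussed
# above: die-area savings vs. the cost of much faster memory chips.

def bandwidth_gbs(bus_bits: int, effective_mts: float) -> float:
    """Peak bandwidth in GB/s from bus width and effective rate (MT/s)."""
    return (bus_bits / 8) * effective_mts / 1000.0

narrow_fast = bandwidth_gbs(128, 2800)  # 128-bit GDDR4 @ 2.8GHz effective
wide_slow   = bandwidth_gbs(256, 1400)  # 256-bit GDDR3 @ 1.4GHz effective
print(narrow_fast, wide_slow)           # both 44.8 GB/s
```

The peak numbers come out identical, so the argument really comes down to whether early GDDR4 at those speeds is cheaper than the extra die area and board traces for a 256-bit interface.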
ATI could of course also improve performance through optimization much like nVidia did with the 7900, but I don't think that would lead to a substantial increase in performance.
No, because the optimization wasn't about performance but about saving transistors (cost and power); any speed gain was just the small clock increase allowed by lower thermals.
The RV530 core also already includes the Fetch4 feature that the X1900 uses to try to ease its texture bottleneck. I'd be interested to know whether the RV570 has a ring bus expanded to 512-bit like the R520 and R580 to complement its expanded 256-bit memory interface, or whether it will just stick to the 256-bit ring bus. The latter may be more likely, to save die space.
And that might be the case, because as much as the ring bus itself is an efficient way to move data around, it offers less of a performance boost than the 256-bit interface to the memory chips. I would hope they wouldn't cripple it, but to save transistors they might, and it would be an acceptable compromise IMO if that's what's required to squeeze the pennies.
[quote]I wonder if ATI also plans to convert the R580 to 80nm? It'd certainly help them control costs and temperatures. Some modest increases in core clock (hopefully to 700MHz) and the addition of GDDR4 would increase its performance nicely. Although 24 texture units would be nice, I don't see ATI backing down from their 3:1 implementation, and it would require too much work when they're focusing on the R600.[/quote]
And that focus IMO is why they wouldn't bother with another refresh regardless of what the G80 brings to the table. If anything it makes more sense to accelerate the R600 (if it's already ready to go as they say) than to simply bring another R580 to the market. That to me would be a mistake, since the G80 alone will have so many paper features as attractors that even a 10-20% performance boost from a new core wouldn't do it. A low-ball priced R580 with faster memory is the most economical way to approach it IMO: they'd have a proven part with added performance for the time being, possibly priced below the G80 (even as a loss leader). But they have to know that unless the G80 is a flop of some kind (unlikely), they can't compete head to head with it with just a refresh.
A simple die shrink refresh to the X1950 shouldn't be particularly time-consuming and would allow them to definitively put themselves on top of the 7900GTX.
I don't think there's any worry of that right now; simply OC'ing the XTX's exact same memory chips to GTX levels pretty much does that already. A re-spin for the shrink would cost a minimum of $4-5 million, which would be a huge waste of money in light of the real competition at that time, the G80, unless they can spin off the benefits to another market like mobile chips, which to me would be the only thing that would justify it. Giving away the crown uncontested for a few months and accelerating the R600 to arrive just in time for the fall/X-mas buying season makes much more sense IMO.
In any case, this is mostly speculation on my part, but I kind of just felt like going on a tangent. I hope I'm making some sense.
Lotsa sense, and these are the tangents most of us like: nothing with much meat, but it makes us all think. I'm sure at least both Cleeve and Action Man find this interesting; I know I like thinking about what 'might be'.
Of course we could all be wrong, but no one gets points for being right, even if I did notice the SLI bridge and call that one long before they announced it (and kept using those stupid "engineering test connector" excuses). 8)
As always, only time will tell.