R600 rumors from the Rumor Mill

Anonymous
November 15, 2006 9:59:52 PM

Since we don't have much to talk about when it comes to the R600, I figured I would link some rumors from everyone's favorite website, The Inq!!! And it's 2 for 1 today!

16ROP
R600 is a monster

Disclaimer: these guys have, at the very best, a 50/50 chance of being right; most of the time they're only partially right, with the details mangled =)

To sum it up:
13-layer PCB, shorter than a GX2/8800GTX
512-bit memory controller (32 bits per memory chip); external bus is a "bi-directional 1024-bit equivalent Ring Bus" (wth... probably marketing)
~114 GB/s with GDDR3 at ~900 MHz, or ~140 GB/s with GDDR4 at ~1.1 GHz
"longer-than-the-PCB quad-heat-pipe, Arctic-Cooling-style fan"
Up to 2 GB of memory
200-220W of power
16 ROPs
700-800 MHz core
64 4-way SIMD shader units (G80's are scalar); they could theoretically do 4 scalar operations per clock each

They finally say that in complex shading it would be better, but 'lighter' shaders would run faster on G80. Which raises an interesting question: do you buy for the few DX10-optional games out now and coming soon, or for the 'true' heavy DX10 titles further out?

Discussion is welcome, and let's make it clear: THESE ARE RUMORS! I just thought it would be interesting to discuss the possible advantages/disadvantages of this kind of implementation, how much 1 GB of GDDR4 will set you back, and so on.
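If anyone wants to sanity-check those bandwidth numbers, here's the back-of-the-envelope math in a few lines of Python (rumored figures only, nothing official):

```python
# Rumored R600 memory bandwidth, assuming a 512-bit bus and double-data-rate memory.
bus_width_bits = 512
bytes_per_transfer = bus_width_bits / 8            # 64 bytes moved per transfer

for name, effective_gt_per_s in [("GDDR3 @ ~900 MHz", 1.8), ("GDDR4 @ ~1.1 GHz", 2.2)]:
    bandwidth_gb_s = bytes_per_transfer * effective_gt_per_s
    print(f"{name}: ~{bandwidth_gb_s:.0f} GB/s")

# Prints ~115 GB/s and ~141 GB/s, which lines up with the 114/140 GB/s rumors above.
```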


November 15, 2006 10:03:43 PM

Specs look quite nice.

Too bad the INQ is reporting it :?

I wonder when other sites will have some bits of info.
November 15, 2006 10:45:50 PM

I fucking hate it when they say "why would AMD engineer such a thing?"

It's not AMD, you retards!!! This design was in place long before the sale.
November 15, 2006 10:47:25 PM

Quote:
This design was in place long before the sale.

Yea, AMD probably has nothing to do with the R600. Future GPUs might, but not R600.
November 15, 2006 11:02:23 PM

Brought to us by none other than Theo Valich! :lol: 
November 15, 2006 11:40:59 PM

That article seems like it was written by an ATi enthusiast, to say the least, with numerous comments putting down the G80 and Nvidia. If it wasn't written in that tone I might find it a little more believable. I doubt it will have 2GB of memory (I know it mentioned that'd be on the FireGL chips), since we barely make use of 512MB as of now unless at very high resolutions. 1GB will most likely be the norm but won't have an effect on performance. Once again they say that ATi will have GDDR4 running at such and such speed that can do 140GB/s or something, but in the end that again doesn't affect performance, since most of these graphics cards, G80 included, are all "core" speed limited...
November 16, 2006 12:01:23 AM

Sounds like it might run a bit hot... :p 

As for ATI, they have a bit of a history in reacting to NVidia. As a result, their cards are more powerful, but not necessarily cutting edge. I see no reason to believe that R600 will be any different.
-cm
November 16, 2006 12:05:14 AM

Just in time for Crysis and UT2007, if the rumors are true. Either way, the next year is when things really start getting interesting.
November 16, 2006 12:10:18 AM

I've played RoboBlitz; it's the Unreal 3 engine, and it looked SPICY. UT2007 will be killer.

<drool>

Anyone else feel like drooling with me?
-cm
November 16, 2006 12:15:18 AM

Quote:
bi-directional 1024 equivalent RingBus

This isn't marketing. I'm guessing they went bidirectional instead of a true 1024-bit bus because the extra logic for a 1024-bit bus would take up too much space; adding another 512-bit ring is probably more space-efficient.

If you are thinking that Ring Bus is a marketing term, it isn't. I've done some research about it. From what I can gather, it is the same concept as a Token Ring Network. In the current implementation of it in the X1000 series, the data can travel either direction around the bus to the location it is trying to reach.

I found most of my info here.
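To illustrate what bidirectional buys you, here's a toy sketch (just the Token Ring idea in Python, not ATi's actual ring-bus logic):

```python
# Toy model: hops needed to reach a memory stop on a ring of N stops.
def ring_hops(src, dst, n_stops, bidirectional=True):
    forward = (dst - src) % n_stops                # one-way distance around the ring
    if not bidirectional:
        return forward
    return min(forward, n_stops - forward)         # take the shorter direction

# With 8 stops, the worst case drops from 7 hops (one-way) to 4 (bidirectional).
print(ring_hops(0, 7, 8, bidirectional=False))     # 7
print(ring_hops(0, 7, 8, bidirectional=True))      # 1
print(max(ring_hops(0, d, 8) for d in range(8)))   # 4
```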
November 16, 2006 12:50:44 AM

I'm in. /drools hopefully?
November 16, 2006 12:54:10 AM

These cards sound pretty cool, making me wait and not get a 8800GTX
Anonymous
November 16, 2006 3:11:59 AM

I meant the 1024 part as marketing, not the bi-directional part; like dual-channel DDR at 200 MHz ending up billed as an 800 FSB... you know.

I totally understand the concept of Token Ring, and I kinda see how it would work in a memory system. Any chance you could dig up the links/references? I'd like to read about it. Token Ring and FDDI were really good for their time and offered good optical fiber performance!
Btw, I'm really into networking (network tech, working on an engineering degree, CCNA, going for CCNP and probably CCIE, so shoooot :)  )
Anonymous
November 16, 2006 3:20:25 AM

I am in a weird position: I'm building in the next month and absolutely need a rig when I come back from my current internship in the USA.

With those rumors, I'm considering getting/borrowing a 7300GS or whatever from a friend and waiting for the R600 instead of going 8800GTS!
I can only see this release either dropping the price on the 8800 if the R600 is that much of a killer, or else just making the G80 a safe bet!

I think it might boil down to future performance and what your favorite games are. I don't see the G80 as being an FX5800 vs. a 9700 Pro!!! And the version with 1 GB of GDDR4 will probably be really expensive!

Other comments: Didn't like the tone either, but heck, it's the Inq; the tone is not my biggest concern, it's the reliability 8)

About UT2k7: I had a flashback to how nice UT2k4 was, and how many nights I spent playing it (the mode with the ball and the goal, was that 2k3? or wut); the huge maps in 2k4 rock.
Then I kept drifting back to my sniping in the original UT (Facing Worlds?).
Man, does that franchise rock. Can't wait for 2k7, so many good memories/time spent playing!!! *drools with all the others*

Last comment: It also coincides with the Vista release... but who cares hehe
November 16, 2006 4:02:32 AM

From what I've seen, Crysis is better than UT 2007 in every graphical way. I remember awesome UT 2007 demos, but that was like early 2005.
UT 2004 also seemed lame after Far Cry 1 came out.
(I hope I'm wrong about this, please prove me wrong with a link! :D  :D  )
November 16, 2006 4:05:28 AM

Just get any cheap thing you can. If you can see your game with the card, it works.
Save the big bucks for a big bleeding-edge card.
November 16, 2006 4:08:40 AM

Ok, how was UT 2004 better? I'm not talking about graphics.
I'm just missing something.
November 16, 2006 4:11:05 AM

I'm in for droolin'.

Whoever's card is better doesn't matter a whole lot, since I'm a red. (Not trying to get anyone flaring, seriously.)
But if the Inquirer is right (as they are once upon a blue moon :p  ), it's one more reason for me to get it over a G80.
How I hope there's a 2 GB version for the gaming cards.

This is because a 2 GB card would be a much more future-proof setup. And think of this: sure, 512 MB is perfect for DX9, but that's DX9. DX10 is much more complex. Much more.

Finally, is their mention of an R700 just Inquirer bull?
Wait, that would make sense, having a new series shortly after the release of their FIRST DX10 cards; as the saying goes, we all learn from our mistakes.

If the R700 is real, I hope it's out by next Christmas, March '08 at the latest.
November 16, 2006 4:18:52 AM

I think you are mostly talking about multi-player.
November 16, 2006 5:16:04 AM

Hm....

2GiB seems pointless to me, probably aimed at those stupid people who are convinced that the more RAM a card has, the better it is, regardless of RAM speed or GPU (I still know loads of them who won't be dissuaded from this).

Don't forget, high-speed GDDR3 or GDDR4 RAM is one of the main costs of a graphics card. 2GiB is a waste of money for consumers.

1GiB maybe... but not till DX10.

There is a reason the competition didn't do a 512-bit bus: cost. Adding that many PCB traces to a board massively increases cost.

If the R600 is clocked at 700MHz and can fill "256 complex shaders per clock", then 700M*256 = 179,200M shader ops per second.

8800GTX's shaders are clocked at 1350MHz, and 1350M*128 = 172,800M shader ops per sec.

The R600 is faster here, and will be ~14% faster again if it is clocked at 800MHz, but at 700MHz there is not much in it, and Nvidia's extra ROPs compensate in other areas, along with its 'better performance in simple-shader games'.
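Same math in Python, if anyone wants to play with the clocks (theoretical peaks straight from the rumored specs, no efficiency factored in):

```python
# Rumored R600: 64 4-way SIMD units = 256 scalar ops per clock, at 700-800 MHz.
r600_700 = 700e6 * 64 * 4       # 179.2 billion shader ops/s
r600_800 = 800e6 * 64 * 4       # 204.8 billion, ~14% up on the 700 MHz figure
# G80: 128 scalar shaders at 1350 MHz.
g80      = 1350e6 * 128         # 172.8 billion shader ops/s

print(r600_700 / 1e9, r600_800 / 1e9, g80 / 1e9)
```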

If all this is true, then ATi will be in a similar position to the GeForce FX, which could push 4 pixels per clock in single-textured games and 8 in multi-textured games (just as ATi will push 64 shader ops per clock in simple-shader games and 256 in complex ones).

220W is also a lot; we have seen the G80 to be only marginally above the ~150W of the X1950.

From these specs I think the R600 and G80 will be pretty close in terms of performance, with the R600 probably clinching it in DX10, while the G80 will win by a larger margin in DX9 (but who buys a DX10 card for DX9 performance, after DX10's release...).

I also believe the R600 is going to be expensive, however; a 13-layer PCB and 2GiB of RAM don't come cheap. I suppose AMD/ATi could just absorb the cost, but they don't want to make their margins too tight or it could jeopardise future GPUs.
November 16, 2006 6:56:37 AM

Quote:
That article seems like it was written by an ATi enthusiasts to say the least with numerous comments putting down the G80 and nvidia.


LOL!

It's funny hearing everyone go off talking about Theo being an ATi fan, when in fact he has a reputation as the opposite, which I think is why BMFM commented on it.

Remember Folks: Fuad is ATi's fan favourite, Theo is nV's (along with Charlie IMO).
November 16, 2006 7:13:41 AM

2GB on a FireGL makes a lot of sense; on a gamer card, less so. But if you want to run different things at the same time (like graphics and physics), it makes a lot of sense to have more memory than you need right now, hence why the G80 also has more than 512MB even for the GTS.

Quote:
Adding that many PCB traces to a board massively increases cost.


Once again, so is 384-bit bus support. I don't see it as a massive increase in cost compared to the alternative of ever more rarefied memory. And I'd say both size and speed will be helpful for targeting that 25x16 marquee resolution of the 30" LCDs.

As for the rest, just like the InQ's, the theoretical math says one thing, but only real-world performance matters. If the G80's reality came anywhere near matching the theory, the G80 would outperform the previous generations by far more than it does.

Of course, no need to worry about your hopeful 'next' purchase; it'll all come out in the wash soon enough when they go toe to toe. :twisted:
November 16, 2006 7:31:05 AM

Quote:
I also believe the R600 is going to be expensive however, a 13 layer PCB and 2GiB of RAM doesnt come cheap. I suppose AMD/ATi could just absorb the cost, but they dont want to make their margins too tight or it could jepordise future GPUs.


If a guesstimation has to be made what would you guess?

650?
700?
750?
November 16, 2006 12:00:51 PM

He called the G80 Graphzilla or something (don't have time to look it up), but doesn't that sound like someone who likes ATi way more than Nvidia?
November 16, 2006 12:44:08 PM

Dude, Graphzilla is the nickname for nVidia, because CHIPzilla is the nickname for Intel. They are vestiges of a past when, basically, each was the unopposed Godzilla of its realm, repelling all challengers as if they were Mothra or whatever, with nV getting the 'Graph-ics' version.

Perhaps that will give you some context.
November 16, 2006 1:12:01 PM

Ha! Never heard of either nickname for Nvidia or Intel, but thanks for the info.
November 16, 2006 1:17:25 PM

All Inquirer journalists call Nvidia "Graphzilla" and ATi/AMD "DAMMIT".

It's nothing personal against or for either company, just Inquirer style...
November 16, 2006 1:17:58 PM

After re-reading it: memory speed is not the limiting factor yet, so increasing the memory speed to that high a level doesn't net a linear performance gain. I bet in the near future it will be nice to have memory that fast, but right now GDDR4 is expensive and doesn't net any more benefit than GDDR3. Also, I was very surprised that they said the card would not be any more power hungry than an 8800GTX, especially since right now an X1950XTX and an 8800GTX are very close at load; I bet we'll see that this will be the most power-hungry card on the market, just like ATi's X1900XTX was.

One nice thing was that they said the card is the same length as a 7900GTX or X1950XTX...
November 16, 2006 1:42:29 PM

Quote:
2GB on a FireGL makes alot of sense, on a gamer less so, but if you want to run different things at the same time (like Graphics and Physics) it makes alot of sense to have more memory than you need right now, hence why the G80 also has more than 512 even for the GTS.


IMHO the 8800GTX has 768MiB simply because chips are not available in the densities that would be needed to run 512MiB on the GTX's 384-bit memory bus; you'd have to go down to 384MiB of RAM. The GTS has 640MiB because of its 320-bit bus. On a 512-bit bus, 512MiB is possible, as is 1GiB.
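Rough sketch of the chip-count arithmetic, assuming 32-bit-wide GDDR3 chips at the common 512 Mbit (64 MiB) density; other densities change the totals:

```python
# Frame-buffer size falls out of bus width / chip width * chip capacity.
def framebuffer_mib(bus_bits, chip_bits=32, chip_mib=64):
    n_chips = bus_bits // chip_bits
    return n_chips * chip_mib

for bus in (384, 320, 256, 512):
    print(f"{bus}-bit bus -> {framebuffer_mib(bus)} MiB")

# 384 -> 768 MiB (GTX), 320 -> 640 MiB (GTS), 256 -> 512 MiB,
# 512 -> 1024 MiB with 64 MiB chips (or 512 MiB with 32 MiB chips).
```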

Quote:
Adding that many PCB traces to a board massively increases cost.


Once again, as is 384bit bus support. I don't see it as a massive increase in cost compared to the alternative of ever more rarified memory. And I'd say both size and speed will be helpful to target that 25x16 marquee resolution of the 30" LCDs.

Nvidia could have gone for a 512-bit bus without spending more money on RAM on each card (hell, the cards could run with 512MiB and save money), but the board routing *is* expensive. I work with electronics, and a 13-layer PCB is just insane IMHO. Cool, I'll grant you, but insane :) 

I'm also skeptical as to how useful all that memory bandwidth is anyway. Unless we are talking about the lower models with hamstrung memory interfaces, I generally find that GPU core overclocks have more effect than RAM ones.

At the end of the day I'm a geek and I spend too much cash on my computer. If the ATi card comes out 3 months after I buy my 8800GTX and is significantly faster, I will upgrade. It is my sincere belief, however, that there won't be much more than a single-figure percentage in it, whoever has the advantage, and I'd rather take the G80 now than piss about waiting for the R600 to decide.

If I don't buy the G80 this month then I'll end up buying the monitor I have my eye on instead, a 24" 1920x1200 model. I'm not 100% happy with my 7900GT's performance at 1600x1200, so then I'll end up REALLY wanting a G80.
November 16, 2006 1:56:07 PM

Quote:
These cards sound pretty cool, making me wait and not get a 8800GTX


totally agree
however your word choice may be incorrect :wink:
November 16, 2006 2:28:31 PM

So uh... why do we still pay attention to the inquirer?
November 16, 2006 2:38:11 PM

Quote:
So uh... why do we still pay attention to the inquirer?


I personally find them amusing.

And, more often than not, their hearsay and speculation will give you a pretty good idea of unreleased hardware as time goes on... you can't just read one of their articles and take it as gospel, but if you pay attention to all of them in context, good-sized chunks of truth will eke their way out of the BS. :) 
November 16, 2006 3:23:30 PM

Quote:

IMHO the 8800GTX has 768MiB simply beacuse chips are not availible in the densities that would be needed to run 512MiB on the GTX's 384-bit memory bus.


Granted, but that goes on the assumption that the 384-bit width is the target, and not just a convenient mid-way point (you have to stop somewhere, and with fewer layers, that means more area, and you'd be brushing up against the original GX2 for size). Also, I was talking about the GTS, where there would be more flexibility, and 320MB on a 320-bit bus or even 512MB on a 256-bit bus would likely be fine, but there is a marketing benefit to the 640MB/320-bit numbers which IMO outweighs their practicality and 'value', since you don't want the PR crap of "512MB on an X1950/GF7950 is better than the 320MB on the GTS".

I would never say that either company is looking for efficiency and value in this segment first, or even at all sometimes. This is the trophy segment, and I'd argue that even with the cost, the 1GB on an R600 will be put to much better use than the 1GB on a GX2, and likely for a lower cost to boot.

Quote:
Nvidia could have gone for a 512bit bus without spending more money on RAM on each card - hell the cards could run with 512MiB and save money - but the board routing *is* expensive.


I know it's the traces and the layers that are expensive, not the RAM itself, but the thing is the price of the RAM needed to make a 256-bit bus equal a 512-bit bus for throughput. Right now, at the transition to GDDR4, it's not that bad, but towards the end of GDDR4's lifespan you would likely find that the difference in memory prices is large compared to the cost difference between the boards, and this will help the GT, GTO, etc. children of the R600. Also something to think about is the flexibility this offers them versus the memory shortages that have affected cards so much in the past. I think at the time, 512-bit with the possibility of GDDR3 or GDDR4 was attractive if they were expecting an earlier launch, like in the summer, when GDDR4 would have been rarefied. I think both designs show a target of earlier launch dates, and at that time 512-bit GDDR3 seems like a good safety net; of course, other delays made this consideration less important. I think with the G80 the GDDR3-only support may be an issue in the future, but a move to GDDR4 support in an early refresh may negate the impact of that too.
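For the "256-bit plus faster RAM vs. 512-bit plus slower RAM" point, here's the parity math as a quick sketch (illustrative numbers only, not anyone's actual roadmap):

```python
# What data rate a narrow bus needs to match a wide bus at a given rate.
def bandwidth_gb_s(bus_bits, data_rate_gt_s):
    return bus_bits / 8 * data_rate_gt_s

target = bandwidth_gb_s(512, 1.8)                  # 512-bit GDDR3 @ 1.8 GT/s ~= 115 GB/s
needed_rate = target / (256 / 8)                   # GT/s a 256-bit bus would need
print(f"256-bit bus needs ~{needed_rate:.1f} GT/s to match ~{target:.0f} GB/s")
# ~3.6 GT/s: the narrow bus has to buy its way to parity with much faster memory.
```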

Quote:
I work with electronics, and a 13 layer PCB is just insane imho. Cool, I'll grant you, but insane :) 


Oh I know, we've discussed this a few times, and I'm not unfamiliar with PCB fab design. BTW, it's 12 layers, not 13, according to the InQ, but still it's quite a lot. However, it's a question of limited space; if they didn't make it a 12-layer board, like I mentioned before, it would be a significantly larger card, and they likely wanted to stay closest to current configurations. It's an added expense, definitely, but it's one of the many things you have to trade off.

Quote:
I'm also skeptical as to how usefull all that memory bandwidth is anyway. Unless we are talking about the lower models that have hamstrung memory interfaces, I generally find that GPU core overclocks have more effect than RAM ones.


Well, it would depend on when it's used. The core would be best for low-resolution and low-requirement situations (no AA, no HDR), whereas the memory would kick in for high levels of AA and the use of HDR, especially if it's FP32 HDR like nV's. Plus a lot of it comes down to how the memory is accessed and used, and that may have been changed too.

Quote:
At the end of the day I'm a geek and I spend too much cash on my computer. If the ATi card comes out 3 months after I buy my 8800GTX, and is significantly faster, I will upgrade. It is my sincere belief however that there wont be much more the a single figure percentage in it, whoever has the advantage, and I'd rather take the G80 now than piss about waiting for the R600 to decide.


Oh I agree, but right now I'd be looking at the GTS more than the GTX, just due to the marquee overpricing at this moment, and then roll the coin difference into the GTX once I knew the fates. Like, save enough coin to buy a copy of UT2K7 and Crysis. As for the percentage difference, of course that depends on how it's measured; the GTS versus the XT/XTX can often show very little difference, but then under other conditions there's a large delta, and like the previous generation of GF7 vs. X1K, I think we'll see the same variability from app to app. So I don't expect much performance difference out of the gate; it will be later that it would possibly matter, and like I've mentioned before, by then most people who bought first now will have upgraded again to the G90/R700 anyway.

Quote:
If I dont buy the G80 this month then I'll end up buying the monitor I have my eye on instead - a 24" 1920x1200 model. I'm not 100% happy with my 7900GTs performance at 1600x1200 so then I'll end up REALLY wanting a G80.


Well, and that's the issue. IMO the GTS plus a monitor refresh is a good path; the GTX instead of the monitor refresh, staying at 16x12, almost seems like a waste, but it is of course a pretty good step on the way to an upgrade. Also, IMO the GTS is sufficient for 16x12 and 19x12, but the GTX really pulls away from its brother at the truly huge resolutions of the even larger CRTs and LCDs with 20x15 / 25x16.

I don't doubt the value of the GTS, but when it comes to the very top, I don't think either ATi or nV is saving money, and I don't think the expense will actually hurt their future product lines; heck, pretty much everything all the way to the G90 and R700 is already in the can and planned out, with only minor tweaks to come. And these parts are always money losers (heck, the GX2 likely more so than anything recent), but they aren't about the margins; that's for the 600 and 300 series cards, where all the profits are. These cards are loss-leaders, where they only act as PR for the most part, and while people pay less than would be needed to sustain the company if this were their only line, it's also money better spent than on a few million ads IMO, because it achieves the same goals with the tangible side benefit of R&D.
November 17, 2006 5:45:16 PM

Quote:

IMHO the 8800GTX has 768MiB simply beacuse chips are not availible in the densities that would be needed to run 512MiB on the GTX's 384-bit memory bus.


Granted, but that goes on the assumption that the 384-bit width is the target, and not just a convenient mid-way point (you have to stop somewhere, and with fewer layers, that means more area, and you'd be brushing up against the original GX2 for size). Also, I was talking about the GTS, where there would be more flexibility, and 320MB on a 320-bit bus or even 512MB on a 256-bit bus would likely be fine, but there is a marketing benefit to the 640MB/320-bit numbers which IMO outweighs their practicality and 'value', since you don't want the PR crap of "512MB on an X1950/GF7950 is better than the 320MB on the GTS".

I (possibly incorrectly?) understood that the reason for this was that each set of shaders supports a 64-bit bus, and that to hit a 256-bit bus they would have to disable more shaders, or something like that. (64 bits for each set of 32 shaders and 128 bits for some other part of the GPU... this is from a vague memory, however, so it may be wrong.)

I pretty much agree with everything else there, except that if I bought an 8800GTS this month I still couldn't buy my monitor the same month; that's the price of a GTX *and* a GTS :p 
November 17, 2006 10:33:47 PM

I am still waiting for a single card with MORE power than X1950XTX SLI (in DX9).
From what I see, the 8800GTX is only a little faster than a single X1950XTX. Maybe the R600 will be better than X1950XTX SLI!