R600 64 PIPES!

incinerator

Distinguished
Aug 5, 2004
68
0
18,630
The specs seem a little bit high, though. That's got to take up some serious real estate on the die if true. I can't imagine 192 shaders... that's just insane. Off to a meeting...

-][nCiNeRaToR-
 

jamesgoddard

Distinguished
Nov 12, 2005
1,105
0
19,290
Unless the same news shows up on a reliable site with reliable sources, I am not convinced.

It's well known that the R600 die is HUGE, so something has to be taking up all that silicon real estate... The die is much bigger than the X1900, and that already has 56 shaders on it... I think this is true, and thus this chip will kill Nvidia….
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
The Inquirer's been right, and they've been wrong. I'm hedging my bets that they're wrong here; the R600 looks like it will be an 80nm part, meaning it will only have some 40% better real-estate efficiency than the R520 and R580, the latter of which is already pretty darn large.

On a note that makes me particularly suspicious of The Inquirer's claims, they refer to them as "pipelines." Most of us who've paid close attention know that ATi completely ditched the traditional symmetric pipeline architecture back with the release of the R520; this was originally promised as a revolutionary advancement when they first announced it, calling it the R400...

The R5xx cards use a vastly multi-threaded approach that has proven to be far more effective. Additionally, there is really no need for any form of symmetry in the design.

These factors together make it that if The Inquirer is right, this will be the biggest surprise they've ever given me; I've been largely thinking that ATi's been going for a larger number of shaders than TMUs, given their increased importance in next-gen gaming (Oblivion, etc. benchmarks as my witness), as well as their usefulness for a variety of scientific applications, including GPU-powered physics. However, the ratio need not stay 3:1 as we see in the RV530 and R580; given that they still lose some current-gen benchmarks to nVidia, ATi may "fall back" on this for a generation; I've been feeling that the R600 will go for 64 pooled shaders (128 ALUs total) but 24 TMUs, giving it a 2.66:1 ratio, rather than a 3:1 ratio. This would match the texturing fill-rate per clock of the G70, and given that ATi's GPU will almost certainly post far higher clock speeds than G80, we may just see them make the texturing difference between R600 and G80 close to nothing, leaving ATi's shader advantage (and possibly RAM advantage, if we get either that rumored 512-bit interface, or even if nVidia can't incorporate GDDR4 support for G80).
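To put rough numbers on that ratio and fill-rate argument, here's a quick back-of-the-envelope sketch (Python). The clock speeds are made-up placeholders, not leaked specs; only the unit counts come from the speculation above.

```python
# Shader:TMU ratio and peak texel fill rate, using the speculative counts above.
# Clock speeds below are hypothetical placeholders, not real R600/G80 numbers.

def fill_rate_mtexels(tmus, core_clock_mhz):
    """Peak texture fill rate = TMUs * core clock, in Mtexels/s."""
    return tmus * core_clock_mhz

r600_ratio = 64 / 24   # speculative R600: 64 pooled shaders, 24 TMUs -> ~2.67:1
r580_ratio = 48 / 16   # R580 for comparison: 48 pixel shaders, 16 TMUs -> 3:1

# Per-clock texel throughput depends only on TMU count, so 24 TMUs matches the
# G70's 24 TMUs; absolute fill rate then comes down to clock speed.
print(fill_rate_mtexels(24, 700))  # hypothetical 700 MHz part -> 16800 Mtexels/s
print(fill_rate_mtexels(24, 575))  # hypothetical 575 MHz part -> 13800 Mtexels/s
```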

I mean 56, that is 48 pixel and 8 vertex.....
Vertex shaders can't quite be equated to pixel shaders; they only have one ALU apiece, while pixel shaders have two apiece. So in total, the R580 had 104 ALUs: 96 from the 48 pixel shaders, and 8 from the vertex shaders.

Coincidentally, it's also ALUs that are the measure of shaders on the Xbox 360's "Xenos" R500 core; it has 48 ALUs, not 48 shaders.
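Just to spell out that tally (nothing here beyond the per-unit counts stated above):

```python
# ALU bookkeeping for the R580 and Xenos, using only the counts given above.
pixel_shaders, alus_per_pixel = 48, 2    # R580 pixel shaders: two ALUs apiece
vertex_shaders, alus_per_vertex = 8, 1   # R580 vertex shaders: one ALU apiece

r580_alus = pixel_shaders * alus_per_pixel + vertex_shaders * alus_per_vertex
print(r580_alus)  # 104 ALUs total

xenos_alus = 48   # Xbox 360 "Xenos" R500: 48 unified ALUs, not 48 shaders
```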

So, according to the unified architecture
 

nategarst

Distinguished
Jan 26, 2006
166
0
18,680
Unless the same news shows up on a reliable site with reliable sources, I am not convinced.

That, and their writing skills are confusing. Quoted from their site: "WHEN we first time heard that R600 was going to be a big chip we could figure out that ATI wanted to completely redesign the chip and fill it full of lot pipes."
 

incinerator

Distinguished
Aug 5, 2004
68
0
18,630
I think this is true, and thus this chip will kill Nvidia….
...and some regional power distribution grids...
Beyond the price, I wonder how many watts this card is going to consume. It would be like buying an add-on space heater: good for the winter, bad for the summer. The X1900 XT and XTX put out enough heat as it is.
AMD is making one hella of a GPU there
:lol:
Nice one.

-][nCiNeRaToR-
 

Dahak

Distinguished
Mar 26, 2006
1,267
0
19,290
Nice. Maybe my next video card will be an ATI. I haven't bought one for myself in a while, so it would be nice to see how it goes. But that's for a later time, like next year, lol. OK, thanks for the info.

Dahak

EVGA NF4 SLI MB
X2 4400+@2.4
2 7800GT'S IN SLI
2X1GIG DDR400 RAM IN DC MODE
520WATT PSU
EXTREME 19IN.CRT MONITOR
3DMARK05 11533
 

raven3x7

Distinguished
Aug 8, 2006
38
0
18,530
Well, since these are unified shaders, this is correct; only this is the total shader count (pixel + vertex). I think this was common knowledge already. Nvidia are going for a more traditional 48+16 approach with the G80, if I remember correctly. I'm guessing ATI can go with the unified shader approach due to their work on the Xbox 360, which gives them prior experience and a lot of info from the source (MS). Nvidia are playing it safe, on the other hand; it seems they learned the FX lesson well. Which card will be the better performer is hard to guess right now. I'd say, though, that until enough DX10 games arrive Nvidia will probably have the performance, and ATI will have the newer tech.
 
Personally I don't believe it, because ATi engineers supposedly don't like to talk about pixel pipelines and pigeon-hole them like that. BUT, let's discuss this as if it's true for the technical aspect, because while it could be totally FUD/false, it's fun to imagine what it means.

On a note that makes me particularly suspicious of The Inquirer's claims, they refer to them as "pipelines." Most of us who've paid close attention know that ATi completely ditched the traditional symmetric pipeline architecture back with the release of the R520;

Yeah, but I think it is going to be more R500 than R520. We suspected that it was going to be a combo, but the way textures are handled on the R500 is VERY different from the R520, although this would be even further removed from any previous design, building on unification at the texture level as well.

These factors together make it that if The Inquirer is right, this will be the biggest surprise they've ever given me; I've been largely thinking that ATi's been going for a larger number of shaders than TMUs, given their increased importance in next-gen gaming (Oblivion, etc.

However, picture this... 64 fully unified shader units that can also do texturing, not as a separate texture crossbar that needs to cross-communicate, but something that can do it 1:1 mid-stream. This would help two-fold, giving you the flexibility of what you had before in the R580 while adding the power to do what you were missing. As an example, remember that HDR requires lots of shader ops, but at its core the first step is a texture fetch, which is why IMO the GF7 series isn't as handicapped as expected compared to the massive shader power of the R580 (for those sensitive people out there, by massive I just mean in number). I'd love to see that, but oy, like I said, the transistor penalty could be huge, unless they found another efficiency.

The second benefit I could see: you would remove the texture ALU crossbar and avoid another layer of potential problems going back and forth for texture lookups from both pixel & vertex shaders, which means having to maintain the context information for each simply for that operation.

given that they still lose some current-gen benchmarks to nVidia, ATi may "fall back" on this for a generation;

And that was somewhat my thinking too when I first read this: is this ATi's own 'hybrid' response, since the X1600 and X1900 didn't give them as much of a boost in most applications (although there are some; just check AOE3 X1800 vs X1900 performance, almost 2 times the difference @ the same clock)? This to me would be a huge price to pay in transistors, but it would give them the PR wins; then, when they feel better suited to a return to the unbalanced design, they would go for the transistor savings. I don't like the plan, but it would explain a decision to do so.

I've been feeling that the R600 will go for 64 pooled shaders, (128 ALUs total) but 24 TMUs, giving it a 2.66:1 ratio, rather than a 3:1 ratio.

And that was pretty much the consensus IMO on the shader part, 1 full + 1 mini + 1 branch unit(s), but the TMU count is higher than previously believed.

This would match the texturing fill-rate per clock of the G70, and given that ATi's GPU will almost certainly post far higher clock speeds than G80, we may just see them make the texturing difference between R600 and G80 close to nothing,

Or with an advantage, depending on which G80 design you put the most faith in: 32/24/16 unified (V+G) or 32/32/16U.

leaving ATi's shader advantage. (and possibly RAM advantage, if we get either that rumored 512-bit interface,

Yeah, I just don't buy the 512-bit yet (the transistor count increases a lot, and the card traces go up enormously on an already packed board, likely meaning another 1-2 layers on the PCB IMO).
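For what it's worth, here's why the bus-width rumor matters so much, as a rough sketch; the memory data rates are illustrative examples only, not R600 or G80 specs.

```python
# Peak memory bandwidth scales linearly with bus width and memory data rate.
# All data rates below are hypothetical examples.

def bandwidth_gbs(bus_width_bits, data_rate_mtps):
    """Peak bandwidth in GB/s = (bus width in bytes) * (data rate in MT/s) / 1000."""
    return (bus_width_bits / 8) * data_rate_mtps / 1000

print(bandwidth_gbs(256, 1600))  # 256-bit, GDDR3-class example -> 51.2 GB/s
print(bandwidth_gbs(256, 2000))  # 256-bit, GDDR4-class example -> 64.0 GB/s
print(bandwidth_gbs(512, 2000))  # rumored 512-bit with the same GDDR4 -> 128.0 GB/s
```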

or even if nVidia can't incorporate GDDR4 support for G80)

I would suspect GDDR4 is a given for the G80; you'd need it built into the VPU for at least a future refresh, unless they think the G80 won't last that long (quickly replaced by the G90). They could launch a board that doesn't sport GDDR4, but I'd think it'd be in the chip.

So, according to the unified architecture

Did you have something more there? It seemed to end abruptly.

Anywhoo, hope my post was fodder for thought/discussion, but it's been a wicked WICKED busy day at work, so kinda rushing this out before getting the heck out of here! 8)
 

Sedako

Distinguished
Aug 6, 2006
54
0
18,630
In case no one has seen this yet, the R600 might be external since it requires a huge amount of power. This technology will probably make it cost a lot as well.

http://www.engadget.com/2006/07/28/ati-to-release-power-hungry-external-video-card/

Good chance this could be false, but who knows?
 

jamesgoddard

Distinguished
Nov 12, 2005
1,105
0
19,290
In case no one has seen this yet, the R600 might be external since it requires a huge amount of power. This technology will probably make it cost a lot as well.

http://www.engadget.com/2006/07/28/ati-to-release-power-hungry-external-video-card/

Good chance this could be false, but who knows?

To NEED an external case would be shooting themselves in the foot IMHO; won't happen... Sure, some peeps may choose an external option, but that's not the same...
 

Slava

Distinguished
Mar 6, 2002
914
0
18,980
Beyond the price, I wonder how many watts this card is going to consume. It would be like buying an add-on space heater: good for the winter, bad for the summer.

:lol: Like I said a couple of weeks ago. 800-1000 Watt PSU... and many said it was overkill. 800W would be a good idea for some of the more advanced and well-packed rigs of today. Think X-Firing a pair of these new monsters. 800W may not be enough anymore.
 

hellcatjr

Distinguished
Sep 14, 2002
63
0
18,630
Beyond the price, I wonder how many watts this card is going to consume. It would be like buying an add-on space heater: good for the winter, bad for the summer.

:lol: Like I said a couple of weeks ago. 800-1000 Watt PSU... and many said it was overkill. 800W would be a good idea for some of the more advanced and well-packed rigs of today. Think X-Firing a pair of these new monsters. 800W may not be enough anymore.

An SLI setup right now isn't touching 800W, so anyone with an 800W PSU is clearly fine.
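A rough power-budget sketch of that claim, with every wattage being a placeholder rather than a measured figure:

```python
# Hypothetical dual-card system power budget vs. an 800W PSU.
components_watts = {
    "CPU":          90,
    "GPU_1":       200,   # rumored-class high-end card
    "GPU_2":       200,   # second card for CrossFire/SLI
    "motherboard":  40,
    "drives_fans":  40,
}

total_draw = sum(components_watts.values())  # 570 W in this example
psu_rating = 800
print(total_draw, psu_rating - total_draw)   # 570 W draw, 230 W of headroom left
```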
 

raknarius

Distinguished
Aug 2, 2006
451
1
18,795
Well guys, I have seen the engineering sample, and although I cannot give you any test results, the pipeline terminology seems like it won't go away.
The R600 has 64 unified pipelines with 256 unified multipurpose shaders. The card is drawing a bit over 200 watts, but because of its technology it will, and you can quote me on this, easily double the G80's performance. Look to see Nvidia realize their 2:1 approach for DX10 is insufficient and come out with their G90 shortly after, but until they do ATI will own DX10-compatible cards. Will this be the end of Nvidia? Nope, just a bump in the road.

BTW, regardless of when you hear it's coming out, there will be a limited release for Xmas and then a full release in January.

Although the R600 is sweet, in 2008 ATI/AMD will create a whole new industry standard and PCI-E boards will become outdated. I just hope the one-two combo of the R600 and the new standard in 2008 doesn't destroy Nvidia, because competition is good.