
R600 64 PIPES!

August 18, 2006 12:27:36 PM

Quote:
We previously wrote that the chip will have sixty four Shader units but we never realised at the time that the design is actually built around a full sixty four physical pipes. That is what various high-ranking sources are telling us.
8O

So says The Inquirer (I know, it's The Inquirer). If true, this thing will be beyond amazing....
http://www.theinquirer.net/default.aspx?article=33784

We shall see.
-][nCiNeRaToR-


August 18, 2006 12:35:06 PM

Then, following their 3:1 ratio, we'll get an R680 with 64 pipes and 192 shaders. Wow... :roll:
August 18, 2006 12:45:54 PM

The specs seem a little high, though. That's got to take up some serious real estate on the die if true. I can't imagine 192 shaders... that's just insane. Off to a meeting...

-][nCiNeRaToR-
August 18, 2006 12:53:18 PM

Holy crap dude. 8O
August 18, 2006 1:04:33 PM

Unless the same news shows up on a reliable site with reliable sources, I am not convinced.
August 18, 2006 1:17:48 PM

Quote:
Unless the same news shows up on a reliable site with reliable sources, I am not convinced.


They have been right before; where have you been?
August 18, 2006 1:19:29 PM

Quote:
Unless the same news shows up on a reliable site with reliable sources, I am not convinced.


It's well known that the R600 die is HUGE, so something has to be taking up all that silicon real estate... The die is much bigger than the X1900's, and that already has 56 shaders on it.... I think this is true, and thus this chip will kill Nvidia….
August 18, 2006 1:22:09 PM

WTF, 56 shaders? You mean 48.
August 18, 2006 1:22:59 PM

Quote:
WTF, 56 shaders? You mean 48.


I mean 56, that is 48 pixel and 8 vertex.....
August 18, 2006 1:34:11 PM

Looks like you're correct on the shaders.
x1900 series specs

If all of this stuff is true, I wonder what the cost is going to be? Maybe I don't want to know :( 

-][nCiNeRaToR-
August 18, 2006 1:42:22 PM

The Inquirer's been right, and they've been wrong. I'm hedging my bets that they're wrong here; the R600 looks like it will be an 80nm part, meaning it will only have roughly 40% more real-estate efficiency than the R520 and R580, the latter already pretty darn large.

On a note that makes me particularly suspicious of The Inquirer's claims, they refer to them as "pipelines." Most of us who've paid close attention know that ATi completely ditched the traditional symmetric pipeline architecture back with the release of the R520; this was originally promised as a revolutionary advancement when they first announced it, calling it the R400...

The R5xx cards use a heavily multi-threaded approach that has proven to be far more effective. Additionally, there is really no need for any form of symmetry in the design.

These factors together make it so that, if The Inquirer is right, this will be the biggest surprise they've ever given me; I've been largely thinking that ATi's been going for a larger number of shaders than TMUs, given their increased importance in next-gen gaming (Oblivion, etc. benchmarks as my witness) as well as their usefulness for a variety of scientific applications, including GPU-powered physics. However, the ratio need not stay 3:1 as we see in the RV530 and R580; given that they still lose some current-gen benchmarks to nVidia, ATi may "fall back" on this for a generation; I've been feeling that the R600 will go for 64 pooled shaders, (128 ALUs total) but 24 TMUs, giving it a 2.66:1 ratio, rather than a 3:1 ratio. This would match the texturing fill-rate per clock of the G70, and given that ATi's GPU will almost certainly post far higher clock speeds than G80, we may just see them make the texturing difference between R600 and G80 close to nothing, leaving ATi's shader advantage. (and possibly RAM advantage, if we get either that rumored 512-bit interface, or even if nVidia can't incorporate GDDR4 support for G80)
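
(For anyone who wants to check that back-of-the-envelope math, here's a quick sketch in Python; the R600 unit counts and 700MHz clock are just the rumored/guessed figures from this thread, not confirmed specs, and the G70 numbers are the 7800 GTX reference ones.)

```python
# Quick sanity check of the speculated shader:TMU ratios and fill rates.
# The R600 numbers and 700MHz clock are rumors/guesses from this thread,
# not confirmed specs; the G70 figures are the 7800 GTX reference ones.

def ratio(shaders, tmus):
    return shaders / tmus

print(f"R580  shader:TMU = {ratio(48, 16):.2f}:1")   # 3.00:1 (known part)
print(f"R600? shader:TMU = {ratio(64, 24):.2f}:1")   # 2.67:1 (the rumored config)

# Texture fill rate per clock is just the TMU count, so absolute fill rate
# scales with core clock:
g70_tmus,  g70_clock  = 24, 430e6   # GeForce 7800 GTX reference clock
r600_tmus, r600_clock = 24, 700e6   # purely hypothetical R600 clock

print(f"G70  fill rate ~ {g70_tmus  * g70_clock  / 1e9:.1f} GTexels/s")
print(f"R600 fill rate ~ {r600_tmus * r600_clock / 1e9:.1f} GTexels/s (if the rumor holds)")
```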

Quote:
I mean 56, that is 48 pixel and 8 vertex.....

Vertex shaders can't quite be equated to pixel shaders; they only have one ALU apiece, while pixel shaders have two apiece. So in total, the R580 has 104 ALUs: 96 from the 48 pixel shaders, and 8 from the vertex shaders.

Coincidentally, it's also ALUs that are the measure of shaders on the Xbox 360's "Xenos" R500 core; it has 48 ALUs, not shaders.
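
(A quick tally of those ALU counts, using the per-unit figures above:)

```python
# Tallying ALUs the way described above: two ALUs per pixel shader,
# one per vertex shader. Counts are the R580 figures from this post.
pixel_shaders,  alus_per_pixel  = 48, 2
vertex_shaders, alus_per_vertex = 8, 1

total_alus = pixel_shaders * alus_per_pixel + vertex_shaders * alus_per_vertex
print(total_alus)  # 104

# Xenos (Xbox 360) counted the same ALU-centric way: 48 unified ALUs.
```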

So, according to the unified architecture
August 18, 2006 1:46:13 PM

Quote:
Unless the same news shows up on a reliable site with reliable sources, I am not convinced.


That, and their writing is confusing. Quoted from their site: "WHEN we first time heard that R600 was going to be a big chip we could figure out that ATI wanted to completely redesign the chip and fill it full of lot pipes."
August 18, 2006 1:47:19 PM

Quote:
I think this is true, and thus this chip will kill Nvidia….


...and some regional power distribution grids...
August 18, 2006 1:50:18 PM

AMD is making one hella of a GPU there
August 18, 2006 2:04:53 PM

Quote:
I think this is true, and thus this chip will kill Nvidia….

...and some regional power distribution grids...
Beyond the price, I wonder how many watts this card is going to consume. It would be like buying an add-on space heater: good for the winter, bad for the summer. The X1900 XT and XTX put out enough heat as it is.
Quote:
AMD is making one hella of a GPU there

:lol: 
Nice one.

-][nCiNeRaToR-
August 18, 2006 2:22:43 PM

Quote:
...and some regional power distribution grids...
+
Quote:
AMD is making one hella of a GPU there


=AMD's response to Prescott, only it actually performs! :lol: 
August 18, 2006 2:39:40 PM

Nice. Maybe my next video card will be an ATI. I have not bought one for myself in a while, so it would be nice to see how it goes. But that's for a later time, like next year. lol. OK, ty for the info.

Dahak

EVGA NF4 SLI MB
X2 4400+@2.4
2 7800GT'S IN SLI
2X1GIG DDR400 RAM IN DC MODE
520WATT PSU
EXTREME 19IN.CRT MONITOR
3DMARK05 11533
August 18, 2006 3:17:41 PM

Well, since these are unified shaders, this is correct; only this is the total shader count (pixel + vertex). I think this was common knowledge already. Nvidia are going for a more traditional 48+16 approach with the G80, if I remember correctly. I'm guessing ATI can go with the unified shader approach due to their work on the Xbox 360, which gives them prior experience and a lot of info from the source (MS). Nvidia are playing it safe, on the other hand. It seems they learned the FX lesson well. Which card will be the better performer is hard to guess right now. I'd say, though, that until enough DX10 games arrive, Nvidia will probably have the performance, and ATI will have the newer tech.
August 18, 2006 9:29:25 PM

Personally I don't believe it, because ATi engineers supposedly don't like to talk about pixel pipelines and pigeon-hole them like that. BUT, let's discuss this as if it's true for the technical aspect, because while it could be totally FUD/false, it's fun to imagine what it means.

Quote:

On a note that makes me particularly suspicious of The Inquirer's claims, they refer to them as "pipelines." Most of us who've paid close attention know that ATi completely ditched the traditional symmetric pipeline architecture back with the release of the R520;


Yeah, but I think it is going to be more R500 than R520. We suspected that it was going to be a combo, but the way textures are handled on the R500 is VERY different than on the R520, although this would be even further removed from any previous design, building on unification at the texture level as well.

Quote:
These factors together make it so that, if The Inquirer is right, this will be the biggest surprise they've ever given me; I've been largely thinking that ATi's been going for a larger number of shaders than TMUs, given their increased importance in next-gen gaming (Oblivion, etc.


However, picture this... 64 shader fully unified units that can also do texturing, not as a separate texture crossbar that needs to cross-communicate, but something that can do 1:1 mid-stream. This would help two-fold, giving you the flexibility of what you had before in the R580 while adding the power to do what you were missing. As an example, remember HDR requires lots of shader ops, but at the core the first step is texturing, which is why IMO the GF7 series isn't as handicapped as expected compared to the massive shader power of the R580 (for those sensitive people out there, by massive I just mean in number). I'd love to see that, but oie, like I said, the transistor penalty could be huge, unless they found another efficiency.

The second benefit I could see: you would remove the texture ALU crossbar and avoid another layer of potential problems going back and forth for texture lookups from both pixel & vertex, while having to maintain the context information for each simply for that operation.

Quote:
given that they still lose some current-gen benchmarks to nVidia, ATi may "fall back" on this for a generation;


And that was somewhat my thinking too when I first read this: is this ATi's own 'hybrid' response, since the X1600 and X1900 didn't give them as much of a boost in most applications (although there are some; just check AOE3 X1800 vs X1900 performance, almost 2 times the difference @ the same clock)? This to me would be a huge price to pay in transistors, but would give them the PR wins; then, when they feel better suited to return to the unbalanced design, they would go for the transistor savings. I don't like the plan, but it would explain a decision to do so.

Quote:
I've been feeling that the R600 will go for 64 pooled shaders, (128 ALUs total) but 24 TMUs, giving it a 2.66:1 ratio, rather than a 3:1 ratio.


And that was pretty much the consensus IMO on the shader part, 1 full + 1 mini + 1 branch unit(s), but the TMU count is higher than previously believed.

Quote:
This would match the texturing fill-rate per clock of the G70, and given that ATi's GPU will almost certainly post far higher clock speeds than G80, we may just see them make the texturing difference between R600 and G80 close to nothing,


Or with an advantage, depending on which G80 design you put the most faith in: 32/24/16 unified (V+G) or 32/32/16U.

Quote:
leaving ATi's shader advantage. (and possibly RAM advantage, if we get either that rumored 512-bit interface,


Yeah, I just don't buy the 512-bit yet (the transistor count increases a lot, and the card traces go up enormously on an already packed board, likely meaning another 1-2 layers on the PCB IMO).

Quote:
or even if nVidia can't incorporate GDDR4 support for G80)


I would suspect GDDR4 is a given for the G80; you'd need it built into the VPU for at least a future refresh, unless they think the G80 won't last that long (quickly replaced by the G90). They could launch a board that doesn't sport GDDR4, but I'd think it'd be in the chip.

Quote:
So, according to the unified architecture


Did you have something more there? It seemed to end abruptly.

Anywhoo, hope my post was fodder for thought/discussion, but it's been a wicked WICKED busy day at work, so kinda rushing this out before getting the heck out of here! 8)
August 18, 2006 9:50:36 PM

Quote:
In case no one has seen this yet, the R600 might be external since it requires a huge amount of power. This technology will probably make it cost a lot as well.

http://www.engadget.com/2006/07/28/ati-to-release-power...

Good chance this could be false, but who knows?


To NEED an external case would be shooting themselves in the foot IMHO; won't happen... Sure, some peeps may choose an external option, but that's not the same....
August 18, 2006 9:56:41 PM

Quote:
Beyond the price, I wonder how many watts this card is going to consume. It would be like buying an add-on space heater: good for the winter, bad for the summer.


:lol:  Like I said a couple of weeks ago. 800-1000 Watt PSU... and many said it was overkill. 800W would be a good idea for some of the more advanced and well-packed rigs of today. Think X-Firing a pair of these new monsters. 800W may not be enough anymore.
August 18, 2006 10:26:16 PM

Quote:
Beyond the price, I wonder how many watts this card is going to consume. It would be like buying an add-on space heater: good for the winter, bad for the summer.


:lol:  Like I said a couple of weeks ago. 800-1000 Watt PSU... and many said it was overkill. 800W would be a good idea for some of the more advanced and well-packed rigs of today. Think X-Firing a pair of these new monsters. 800W may not be enough anymore.

An SLI setup right now isn't touching 800W, so anyone with an 800W PSU is clearly fine.
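
(Just to put rough numbers on that, here's a hypothetical power-budget tally; every wattage below is a ballpark guess for illustration, not a measured figure.)

```python
# Hypothetical power budget for a dual-card rig. Every wattage here is a
# ballpark guess for illustration, not a measured figure.
components = {
    "CPU (dual core, loaded)":     90,
    "Motherboard + RAM":           50,
    "Drives, fans, misc":          40,
    "GPU #1 (next-gen high end)": 200,
    "GPU #2 (CrossFire partner)": 200,
}
load = sum(components.values())
print(f"Estimated load: {load} W")                  # 580 W
print(f"Headroom on an 800 W PSU: {800 - load} W")
```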
August 18, 2006 10:38:09 PM

Well guys, I have seen the engineering sample, and although I cannot give you any test results, the pipeline terminology seems like it won't go away.
The R600 has 64 unified pipelines with 256 unified multipurpose shaders. The card is drawing a bit over 200 watts, but because of its technology it will, and you can quote me on this, easily double the G80's performance. Look to see Nvidia realize their 2:1 approach for DX10 is insufficient and come out with their G90 shortly after, but until they do, ATI will own DX10-compatible cards. Will this be the end of Nvidia? Nope, just a bump in the road.

BTW, regardless of when you hear it's coming out, there will be a limited release for Xmas and then a full release in January.

Although the R600 is sweet, in 2008 ATI/AMD will create a whole new industry standard and PCI-E boards will become outdated. I just hope the one-two combo of the R600 and the new standard in 2008 doesn't destroy Nvidia, because competition is good.
August 18, 2006 10:47:03 PM

Quote:
Looks like you're correct on the shaders.
x1900 series specs

If all of this stuff is true, I wonder what the cost is going to be? Maybe I don't want to know :( 

-][nCiNeRaToR-


Can u imagine it in crossfire mode?

sweeet!
August 18, 2006 11:02:13 PM

Quote:
Although the R600 is sweet, in 2008 ATI/AMD will create a whole new industry standard and PCI-E boards will become outdated. I just hope the one-two combo of the R600 and the new standard in 2008 doesn't destroy Nvidia, because competition is good.


Yay. Yet another platform refresh rendering my "upgradable" motherboard obsolete.

I don't know about the whole "External" vid card idea... can you say latency?
August 18, 2006 11:04:34 PM

ROTFLMAO... It's amazing that when the Inq posts anything good about Intel, we never hear a peep out of any of the Intel fanboys... makes you really wonder.
August 19, 2006 12:08:13 AM

Intel fanboys only rip the INQ when it writes something pro-AMD or anti-Intel.

It's been like that for years.
August 19, 2006 12:16:25 AM

Each pipe will allow for either vertex or shader calculation and then on the other side we have some dedicated to geometry shaders and texture units;
only god (and ATI engineers) know the ratio they'll be split into.

You're not going to have 64 "pipelines" and 192 shaders by any stretch of the imagination.
August 19, 2006 12:20:05 AM

I thought limits were meant to be broken.
August 19, 2006 12:32:14 AM

Quote:
Each pipe will allow for either vertex or shader calculation and then on the other side we have some dedicated to geometry shaders and texture units;
only god (and ATI engineers) know the ratio they'll be split into.

You're not going to have 64 "pipelines" and 192 shaders by any stretch of the imagination.


Raven, you seem to know your fair share on GPUs and vid cards. What are your thoughts on the external vid card notion?
August 19, 2006 3:28:47 AM

@ Ill

Limits are made to be broken (mainly the speed limit :lol:  ) but having flexible working "pipelines" is going to be an awesome concept.
To be honest, we can't even consider GPU's having "pipes" anymore.
Perfect example being the X1900. It has 16 "pipes" but 4 are for texture units and the other 12 are dedicated to supply shader functions.

So really the R600 can be considered a 64... 'stage', for lack of a better word, because any one of them can be assigned to textures, shaders, geometry, etc....

I'm sure Grape should be popping in soon to throw in his 2 cents.
I always need him to clean up my sloppy posts... lol.

Plus he's a little more qualified to discuss such topics than I am... :D 

@ Whizz

Are you referring to an external GPU for a laptop or desktop running through a USB interface?
August 19, 2006 8:36:01 AM

Quote:

Limits are made to be broken (mainly the speed limit :lol:  ) but having flexible working "pipelines" is going to be an awesome concept.


Damn straight on both.
Keep pushing the limits and give us great new tech. The way I look at the unification process is that it's trying to save transistors: getting close to X performance with a smaller number of transistors than a traditional design, hopefully meaning faster, cooler, less power-hungry chips for cheaper, compared to a traditional design of equal numbers.

Quote:
To be honest, we can't even consider GPU's having "pipes" anymore.
Perfect example being the X1900. It has 16 "pipes" but 4 are for texture units and the other 12 are dedicated to supply shader functions.


I'm a little confused by that statement; it sounds like you're talking about the X1600's specs. That was a good first step (although a little disappointing compared to what we all expected). ATi then just refined the design even further with the X1900, which shows head to head with the X1800 what the benefit of those 60mil extra transistors and 3X the pixel shader power can do.

Quote:
So really the R600 can be considered a 64... 'stage', for lack of a better word, because any one of them can be assigned to textures, shaders, geometry, etc....


And that's the thing we don't really know, which could make this very surprising in a few weeks' time.

The previous view was that it'd be just like the R500 in design, just with a larger number of units: a Pixel/Vertex/Geometry component with a separate but parallel vertex crossbar attached to a 3:1 ALU-to-TMU ratio.

This article implies that the ratio is at least 1:1, which means a lot of units and a lot of transistors, and a long process for procedural calculations requiring multiple passes, although maybe benefiting somewhat from early outs.

Or something truly revolutionary (and very risky, thus unlikely IMO): the units aren't really differentiated into simple ALU/TMU SIMD units, each requiring unit duplication, but are more complex and functional units (MIMD - Multiple Instruction Multiple Data), which would allow much more complex implementations, but would likely be slower in less complex situations, and most importantly would be such a departure from the norm as to risk another FX fiasco, where 4x2 couldn't come near the R300's 8x1 design. Jumping back from below, they could even do ROP functionality if designed as such, which would be very interesting, essentially duplicating in a quad what is done in a whole X1300 or X1600. Now that's near 1 billion transistors for sure.

The funny thing is that these seemingly harmless words written by Fuad could mean a lot of things, which gives them less credibility. Heck, even GEO @ B3D was suggesting 64 ROPs in addition to units that can process 64:64:64:64 to go to the most traditional of pipeline designs; now that would require a massive transistor count.

Personally I think Fuad's confused, and as nottheking mentioned, it's likely 64 pooled shader 'arrays' (unlike the R580 or previous designs, but like the R500), each consisting of 2 ALUs (unlike the R500 but like the R580).
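
(Laying the competing rumors side by side; every configuration below is pure speculation from this thread and the linked articles, not a confirmed spec.)

```python
# The competing R600 rumors in this thread, side by side. Every entry is
# speculation (Inquirer article / forum guesses), not a confirmed spec.
candidates = [
    # (label, shader units, ALUs per unit, TMUs)
    ("Inquirer: 64 'physical pipes', read as 1:1",            64, 1, 64),
    ("64 pooled arrays x 2 ALUs, 24 TMUs (nottheking/above)", 64, 2, 24),
    ("GEO @ B3D: 64:64:64:64 'traditional' layout",           64, 1, 64),
]
for label, units, alus_per_unit, tmus in candidates:
    total_alus = units * alus_per_unit
    print(f"{label}: {total_alus} ALUs, {units / tmus:.2f}:1 shader:TMU")
```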

Quote:
I'm sure Grape should be popping in soon too throw in his 2cents.


Nah, I was out at a retirement party (Beer, Cigars, Pool, and the CoorsLight girls 8) [didn't drink any of that stuff though]).

Quote:
Are you referring to an external GPU for a laptop or desktop running through...


Yeah, but it'd be external PCIe, which is a standard that already exists and is currently being used at 4X and 8X for networking hardware. The spec already has 16X and 32X on the board, but like has been mentioned, latency becomes a concern. Unless you are crossfiring, using HyperMemory/TurboCache, or doing game-dependent VPU physics, though, the impact would be lessened, as those would be the only sources of truly latency-sensitive processing IMO.
August 19, 2006 6:37:32 PM

@ Grape

No, I meant the R580 design as my example, although the process for the X1600 and X1900 goes more hand in hand when comparing the X1800 to the 1900.

I'm not sure about cooler GPUs, as the ongoing die shrink makes it increasingly hard to mate a decent cooling solution.... :( 

Cheaper? Bah, for the manufacturer, but they'll still sell around current GPU prices regardless of how much they would save. Why? They know we'll keep buying...

Lowering the transistor count theoretically means a less power hungry chip, but how much less can you get with an evolving and complex architecture?

The Inq article does imply 1:1, but I don't see that happening ever again with an ATI design. I believe the R600 will be the first step towards the MIMD you're referring to, making it an exceptional high-end card, but I doubt another FX series. The problem wasn't the physical design, it was the DX9 implementation (or lack thereof) with that series.

I think king is on track here with the shader arrays. But with 2 attached ALUs each? Why not more?

Coorslight girls? ---Make sure to call me next time kthx!

I realize there's an external PCIe standard, but I've been hearing of an external solution through the USB interface. Which I find a decent idea, but then again you have high and low CPU utilization that could hurt performance and CPU access times, reads and writes, etc....
I wouldn't be too sure of that at this point. Plus, you're going to power a GPU through a USB port???????
August 20, 2006 12:33:50 AM

Quote:
Are you referring to an external GPU for a laptop or desktop running through...


Yeah, but it'd be external PCIe, which is a standard that already exists and is currently being used at 4X and 8X for networking hardware. The spec already has 16X and 32X on the board, but like has been mentioned, latency becomes a concern. Unless you are crossfiring, using HyperMemory/TurboCache, or doing game-dependent VPU physics, though, the impact would be lessened, as those would be the only sources of truly latency-sensitive processing IMO.

Well there's no external PCIe standard, so I'm assuming it would go over something like USB2.0 or something proprietary.

I'm referring to what THG proposed, with high-end cards going external due to growing power and heat requirements. I would think that latency would be a concern. It's hard enough to get low latency with traces on a board, let alone an external interface where RF becomes a concern.
August 20, 2006 12:44:50 AM

Quote:

I'm not sure about cooler GPUs, as the ongoing die shrink makes it increasingly hard to mate a decent cooling solution.... :( 


Yeah, and you see, that's something I've always mentioned and people seem to miss; nice to see someone else considering that. 8)
Sure, it'd be nice to get an X1600 or GF7600 onto a chip the size of a pinhead, but if you still get the same amount of heat produced, then it's going to be hard to get enough surface area on an HSF assembly to do cooling anywhere near what we have currently.

Quote:
Cheaper? Bah, for the manufacturer, but they'll still sell around current GPU prices regardless of how much they would save. Why? They know we'll keep buying...


Definitely, agreed, although it truly does lower their base price if the cost is lower, so introductory prices will likely remain the same but over time they have the chance to go lower. So if it costs $50/chip to make, we can expect an eventual value price lower than if it cost $100/chip to make. Also, hopefully the transistor 'cost' means that their balance of price/cost reaches a slightly lower equilibrium, so they can still sell more chips for profit, and thus overall the chips should be slightly more profitable at a lower price, unless there's a shortage of chips (which hopefully the die shrink helps alleviate as well by getting more yield per wafer). Now that's the theory, but like you say, the reality is sometimes different, like the GTX-512 and any 'just'-launched card, where they start at usually higher prices and then fall to a good equilibrium.

Quote:
Lowering the transistor count theoretically means a less power hungry chip, but how much less can you get with an evolving and complex architecture?


Yep, and here's the other thing: we usually see ATi and nV waste their benefits by OCing the crap out of these things to compete with each other. Sure, the GT is an efficient chip, but hey, let's put two together on a card; hey, the X1900 is efficient for its properties, but we want to compete against the ridiculous GTX-512, so overclock the snot out of it. I think if they could get 4 times the chips @ 75% of the speed, they'd still clock them beyond belief just to beat each other. That's why we saw limited reductions going from 130nm to 110nm to 90nm; each time the gains are simply wasted by taking that benefit and OCing the crap out of it. That's why I usually laugh at anyone talking about power consumption when considering an SLi/Xfire top-of-the-line rig. Seriously, if you want efficient, then run them at the same speed as the previous generation; then you get those savings. Sure, I like a fast card, but it's humorous when ATi and nV OC these things so high as to hurt their yields and make them impossible to find, ie (GF6800UltraExtreme, X800PlatinumEdition at launch, GTX-512, and what seems to be the case with the Mobility X1900 [where the F is it?])

Quote:
The Inq article does imply 1:1, but I don't see that happening ever again with an ATI design. I believe the R600 will be the first step towards the MIMD you're referring to, making it an exceptional high-end card,


I agree; that's why this is so shocking if true. But like I said earlier in the thread, I don't believe it, but imagine what this means if true, and how it would work; it's mind-boggling. 8O

Quote:
but I doubt another FX series. The problem wasn't the physical design, it was the DX9 implementation (or lack thereof) with that series.


Well, the thing was that with the FX's design it could do very complex things (like full 32-bit precision, 2 functions [colour & Z + just Z]) faster than running through 2 passes (which even had the benefit of lower precision), but when compared to the R300 series it was slow, because few games then, and even now, play to those complex strengths, which really only show up in professional apps for the most part.

Quote:
I think king is on track here with the shader arrays. But with 2 attached ALUs each? Why not more?


I can't really speak for him, but I think he, like myself, sees it as the logical progression: taking the benefits of the R5xx/G7x dual-ALU designs and applying them to the simpler single-ALU design of the R500, but without the penalty of a very large transistor count, nor the drawback of having too many calculations and dependencies in the array to slow things down. Remember, even in the R500 they are organized into functional groups, so I can't see a benefit to having 3-4 ALUs per array versus having more arrays or having them be more complex. The move to more ALUs would mirror the FX imo.

Quote:
Coorslight girls? ---Make sure to call me next time kthx!


We didn't even know they were going to be there; it was a promotion by Molson's (Coors partner) and the owner just sent them over to our little party. Yeah, it was great; actually two of them kept hugging me around the neck and playing with my shaved head (women love the spiky brush hair). It was nice but also weird, since they all looked like they were in high school at most, so it was a little creepy too since it was a mixed work party. But hey, there's nothing wrong with LOOKING and THEM touching! :twisted:

Quote:
I realize there's an external PCIe standard, but I've been hearing of an external solution through the USB interface.


Really cool, I hadn't seen anything like that yet. The only USB solutions I'd seen were those multi-monitor extension solutions that were more about 3D than any serious work. A USB solution would be nice if they can get it to work, especially for people like myself who are primarily laptop guys. You could plug in power when you need it and it's plugged in, yet not have the power burden when you don't, like on the road. That'd be awesome!

Quote:
I wouldn't be too sure of that at this point. Plus, you're going to power a GPU through a USB port???????


Well, I doubt they'd power it through USB; I think the max on a USB 2.0 port is only about 2.5 watts (500mA at 5V). They'd probably have an external power brick or dedicated plug just to supply power; the USB/external connector would likely be only for communicating, and thus only carry a low electrical signal.
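
(Quick math on why bus power alone won't cut it; the 200W GPU draw is just the rumored figure floated earlier in this thread.)

```python
# Why bus power alone won't cut it: USB 2.0 allows at most 500 mA at 5 V.
# The 200 W GPU draw is just the rumored figure from earlier in the thread.
usb_volts, usb_max_amps = 5.0, 0.5
usb_watts = usb_volts * usb_max_amps
print(f"One USB 2.0 port: {usb_watts:.1f} W")                        # 2.5 W

gpu_watts = 200
print(f"Ports needed for power alone: {gpu_watts / usb_watts:.0f}")  # 80
```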
August 20, 2006 1:32:05 AM

Quote:

Well there's no external PCIe standard,


It seems you might be right: PCI-SIG is developing it. They may not have established it yet, but there are already external systems being designed on it that I've read about for work, plus a ton of papers on the subject. I thought they were already established, but they seem to be proposed specs (proposed for a few years);

http://www.pcisig.com/news_room/faqs/faq_express/
Q9: What is the PCI Express External Cable specification?
A9: The PCI Express External Cabling specification is being developed to address multiple market segment cable requirements to extend PCI Express protocol and functionality across arbitrary distances and packaging that cannot be met using existing backplane trace connectivity.


And looking at that link, they mention the future option of external housings for testing the mobile card solutions (whereby a similar system could technically be used for a PC, the way the MSI SLI MXM solution was);

In the future we will provide developers the alternative option of using external test houses.

The only thing I could find to clearly show the concept and how it's designed (along with the 4X-8X I mentioned [I didn't know 16X was 8+8, not its own]) is M$'s paper on the subject;

http://download.microsoft.com/download/1/8/f/18f8cee2-0...




Quote:
so I'm assuming it would go over something like USB2.0 or something proprietary.


I don't know; I'm thinking it will be external PCIe, because even at 4X it's faster and has much less latency than USB 2.0 from the early stuff I saw (similar to the eSATA advantage over external USB 2.0), and you wouldn't want proprietary as either a consumer or a system builder, where you'd want options rather than being stuck with just one solution.

nVidia technically already has a proprietary solution, which isn't an artist's rendering like the Tom's example we're using, but their QuadroPlex, which is mentioned and shown in that Ars Technica article with some interesting examples paralleling our discussions here;
http://arstechnica.com/news.ars/post/20060802-7409.html

Quote:
I would think that latency would be a concern. It's hard enough to get low latency with traces on a board, let alone an external interface where RF becomes a concern.


Exactly, and I agree with you, it's a big concern, especially for high-end solutions in the situations I mentioned; heck, traces on the mobo versus traces on a card (ie Xfire versus Gemini) are a concern, let alone a few feet of cable. But like I was saying, it depends on the situation and application as to what level of drawback latency poses IMO.
August 20, 2006 2:23:19 AM

Quote:
and what seems to be the case with the Mobility X1900 [where the F is it?])
I'd love to see the Mobility X1900. :D 
August 20, 2006 3:01:02 AM

Quote:
and what seems to be the case with the Mobility X1900 [where the F is it?])
I'd love to see the Mobility X1900. :D 

EXACTLY!

I saw like 7-10 companies, from Alienware to Voodoo, have them in their literature or options list, only to see them disappear later. WTF!?!

Oh well hopefully they've got something new and efficient coming in the fall.
August 20, 2006 3:18:34 AM

Quote:
Can u imagine it in crossfire mode?


More like CrossFIRE!!!
August 20, 2006 3:23:14 AM

CROSS D = XD
August 20, 2006 3:48:11 AM

Quote:


More like CrossFIRE!!!


LOL! Exactly.

These new cards will be interesting to see thermal-wise. OIE, 500+ million transistors running @ 700+ MHz! 8O
August 20, 2006 4:08:24 AM

Some people use laptops as glorified desktops, and to an extent I understand, because if I could shrink my Aurora...I sure as hell would :wink:

However, the problem with a USB-interfaced GPU would be CPU utilization imho, high and low; then you'd have to make the USB a higher-priority bus.

What about data transfer? Is it like PCI-E or AGP? PCI-E is an internal serial connection that allows bidirectional data transfers, both up and down at 16x simultaneously, while AGP only allows 8x unidirectional; then you need a bus grant and a data release to "head in the other direction" - without sounding too dumb there.

I can think of several reasons a USB GPU would be performance limited....
Then again, I can see the benefits if all the small issues could be ironed out; we'll just have to wait and see what develops, because I know I sure as hell am not the one to produce such a product. But it would be cool to have a USB GPU: I could go on the road and do my email, spreadsheets, and surfing and save on battery power. Then when I went to do something 3D intensive, I could plug in the outside source.

Only problem = battery life :cry: 

Last I checked, a USB port could handle something along the lines of 100mA (500mA max)... so yeah, a battery cord or its own UPS "on die" battery would be a genius idea.
August 20, 2006 4:33:31 AM

I can see it now... motherboards w/ external graphics card plugs, or a PCI-E card that has an external GFX plug. I also think the external graphics card would have its own power supply, haha; then people wouldn't have to spend so much on their PSUs. I thought die shrinks were supposed to cool down the GPU?!? Maybe by the time they get down to 45nm, heat will no longer be an issue, but then again, people will just OC until heat becomes an issue haha...
August 20, 2006 4:39:53 AM

Die shrinks don't necessarily bring a cooler GPU.

What they're mainly targeted at is bringing a higher chip yield per 300mm wafer (is it 300mm that GPUs are produced on?). There's a rough dies-per-wafer estimate sketched below.

Basically, a die shrink brings other gains, like lower operating voltages and traditionally lower power consumption. Lower voltage in turn = less heat.
However, there's still some there, and it has to be removed from the chip or else :cry: 
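
(A very rough dies-per-wafer estimate to show why a shrink helps yield; the die areas below are illustrative guesses, not actual R5xx/R600 figures.)

```python
import math

# Very rough dies-per-wafer estimate to show why a shrink boosts yield.
# The die areas are illustrative guesses, not actual R5xx/R600 figures,
# and defects are ignored entirely.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    r = wafer_diameter_mm / 2
    # Gross area estimate minus a simple edge-loss correction term.
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(300, 350))  # ~166 dies at a hypothetical 350 mm^2
print(dies_per_wafer(300, 250))  # ~240 dies after a shrink to ~250 mm^2
```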

@ Beer

Hence the U in USB (Universal Serial Bus).

But I must concede that I never thought about the 1394. I'm just wondering if it would make much (if any) difference compared to standard USB.
August 20, 2006 7:02:42 PM

Quote:

its funny you guys talking about using usb for graphics but you are forgetting about the 1394 port. 1394 was designed with graphics in mind but no one ever used it for graphics while usb was designed as universal and gets used for everything


While FireWire may have been designed with graphics in mind, that was in the ISA/PCI/single-AGP era; nowadays it just doesn't have the bandwidth.

Even 1394b, aka FireWire 800, can only do about 800Mbps peak throughput (sustained is closer to 600Mbps, roughly 75-80MBps), which is much less than AGP 2X, and much MUCH less than each direction of PCIe 4X (1GBps each way, ~2GBps total, effectively 1.5+GBps). Think about the fact that you can put a FireWire-A card in a PCI slot and get basically full throughput, but put even a low-end graphics card from 3 generations ago in a PCI slot and it will slow it down.

While this wouldn't have been an issue when it was designed, FireWire would pose a problem now, especially since it has terrible latency compared to PCIe or even PCI. Sure, it has low CPU overhead (compared to USB), but PCIe would probably be better.

USB2.0 < FireWireA/B < PCIe.
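
(The peak figures behind that ordering, per the published specs; sustained throughput is lower in every case.)

```python
# Peak (theoretical) bandwidths behind that ordering, per the published
# specs; sustained throughput is lower in every case.
interfaces_mbytes_per_s = {
    "USB 2.0 (480 Mbps)":           480 / 8,    # ~60 MB/s
    "FireWire 800 (1394b)":         800 / 8,    # ~100 MB/s
    "AGP 2X":                       533,
    "AGP 8X":                       2133,
    "PCIe 1.x x4 (per direction)":  4 * 250,    # 1 GB/s
    "PCIe 1.x x16 (per direction)": 16 * 250,   # 4 GB/s
}
for name, bw in sorted(interfaces_mbytes_per_s.items(), key=lambda kv: kv[1]):
    print(f"{name:31s} {bw:7.0f} MB/s")
```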

Regardless of the implementation, current consumer solutions are not viable IMO; there needs to be adoption of something much better, like external PCIe or something that the graphics IHVs come up with (Intel being one of the biggest developers/pushers of standards in the industry, for both graphics and computing hardware in general).