What nm process do card makers use when making cards now?

WithExtremePrejudice

Now maybe this glaringly highlights my extremely-ultra-noobiness, but I had to ask. :) There was a time when manufacturer and seller sites would mention this, but now, as with all preferably unmentionables, it seems it's no longer an issue(?). nVidia now touts its 24 pixel pipelines and ATi screams on and on about its 48 pixel whatchamacallits, but nowhere is there any mention of what process their GPUs are made on!! :cry:
Now AMD and Intel go on and on about how their smaller and smaller process nodes make their kick-butt action heroes gobble less power, go faster, cost less to produce and sh*t like that, so I was wondering - shouldn't it be the same for graphics cards? The last mention of process size I saw put graphics cards at 90nm, and that was a year or more back, but cards just keep getting more and MORE power-hungry!! The best example is the new ASUS dual-GPU cards that need an external power brick!! I mean, c'mon, don't we have enough PC entrails sticking out the back already??? And over 300W consumption!!! CURSES!!! :evil: And it's not as if prices are REALLY going down in the processor arena either... :cry: OK OK, enough mad ranting!! So can someone tell me the what's what of this? Or is my extremely-ultra-omega-kick-butt-action-noobiness really making my ass shoot hot gas?
 

Vokofpolisiekar

I believe that Nvidia (7900GTX) and ATi (X1800/1900) are both on 90nm tech. This will most likely change when both of these companies unveil their DX10 cards in the next few months. I believe the die shrink will then be to 80nm.

The 90nm tech has its advantages, like you already stated, i.e. lower voltage, higher performance and a higher transistor count. But both of these companies had trouble reaching high frequencies on the 90nm die - 650MHz core for both at this stage (top-end cards, and assuming the 7900GTX is in fact going to run 650MHz stock).

I also read something about Nvidia getting assistance from AMD fabs for wafer production - or was that the other way around? Dunno...
 

WithExtremePrejudice

Thanks!! That's a load off my 90nm brain!! :D lol. But it still seems sad that not even the high-end cards and card makers can match AMD's and Intel's rate of development. And it seems weird too that they should find it harder to get higher clock speeds out of smaller processes, yeah? And only down to 80nm? That won't bring about much of a difference imo!! How come they can't go dual-core like AMD and Intel?? Way I see it, that's the best path to greater performance! But that's just the ultra-omega-kick-butt-action dude of noobiness talking (me :p ) so... <shrug>
 

WithExtremePrejudice

Oro? :S 7800GS was the FIRST nVidia card with 90nm???? EEEEEEEEEKK!!! That sucks!! What's nVidia doing?!?!?! And isn't the "GS" suffix on the 7k series for the AGP version releases only??? NOOOOOOOO!!! :cry:
 

Vokofpolisiekar

With a die shrink, theoretically you should be able to clock higher. But due to the density of the transistors packed onto the smaller die, heat becomes a major problem, as well as leakage current. So it is tricky...
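To put a rough number on it - and this is just a back-of-envelope sketch with made-up figures, not real chip data - the usual switching-power approximation P ~ a * C * V^2 * f shows why a shrink that lets you drop the voltage can still end up hotter once you pile on more transistors and more MHz:

# Minimal sketch of the dynamic (switching) power relation P ~ a * C * V^2 * f.
# Every number below is an illustrative assumption, not a measurement of any real GPU.
def dynamic_power_watts(switched_cap_farads, voltage, freq_hz, activity=0.2):
    return activity * switched_cap_farads * voltage**2 * freq_hz

older_node   = dynamic_power_watts(1.0e-7, 1.4, 430e6)  # bigger process, lower clock
smaller_node = dynamic_power_watts(1.5e-7, 1.3, 650e6)  # shrink: lower voltage, but
                                                         # ~1.5x the switched capacitance
                                                         # (more transistors) and more MHz
print(round(older_node, 1), round(smaller_node, 1))      # ~16.9 W vs ~33.0 W

And that's before leakage, which gets worse as the transistors shrink.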

That's why I support ATi in this regard (tech roadmap): they are using other methods to squeeze out more performance per clock, as was the case in the good old days when programmers could write small, efficient code (not for GPUs, but the analogy carries across). ATi does more per clock than Nvidia can, and thus its pixel shader efficiency is almost 100%.

Thus, we have a 16pp card that can beat a 24pp card to bits, and hopefully it will do so again if Nvidia uses 32pp on the 7900. It's a very simple line I'm drawing - I'm not that deep into the tech.
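Just to put rough paper numbers on that (assumed, round clock figures, not real specs): theoretical pixel fillrate is simply pipelines x core clock, so a 16-pipe part at a higher clock lands right next to a 24-pipe part at a lower one, and per-clock efficiency decides the rest.

# Theoretical pixel fillrate = pipelines * core clock. This is an on-paper upper bound,
# and the clock numbers here are assumptions for illustration, not measured specs.
def fillrate_mpix_per_s(pipelines, core_mhz):
    return pipelines * core_mhz

ati_16pp = fillrate_mpix_per_s(16, 650)   # 10400 Mpix/s on paper
nv_24pp  = fillrate_mpix_per_s(24, 430)   # 10320 Mpix/s on paper
print(ati_16pp, nv_24pp)
# Nearly even on paper, so whichever design keeps its pipes busier per clock
# (shader efficiency) wins in practice.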
 

ak47is1337

It depends on what card, fools.
Nvidia 6800 Ultra/GT = 130nm
Nvidia 6800GS = 90nm
ATi X1xxx series = 90nm
Nvidia 7800GS/7300 = 90nm
Nvidia 7800GT/GTX = 110nm
I believe the ATi X8xx series is on 150nm
Any questions? The 7900s will be on 90nm - ironic that the 7800GT/GTX isn't already on it.
 

ak47is1337

I believe that Nvidia (7900GTX) and ATi (X1800/1900) are both on 90nm tech. This will most likely change when both of these companies unveil their DX10 cards in the next few months. I believe the die shrink will then be to 80nm.

The 90nm tech has its advantages, like you already stated, i.e. lower voltage, higher performance and a higher transistor count. But both of these companies had trouble reaching high frequencies on the 90nm die - 650MHz core for both at this stage (top-end cards, and assuming the 7900GTX is in fact going to run 650MHz stock).

I also read something about Nvidia getting assistance from AMD fabs for wafer production - or was that the other way around? Dunno...

AMD is potentially using an old 90nm facility to produce 90nm Nvidia chips. However, because the story is from The Inquirer, it's most likely bullshit.
 

ak47is1337

Thanks!! That's a load off my 90nm brain!! :D lol. But it still seems sad that not even the high-end cards and card makers can match AMD's and Intel's rate of development. And it seems weird too that they should find it harder to get higher clock speeds out of smaller processes, yeah? And only down to 80nm? That won't bring about much of a difference imo!! How come they can't go dual-core like AMD and Intel?? Way I see it, that's the best path to greater performance! But that's just the ultra-omega-kick-butt-action dude of noobiness talking (me :p ) so... <shrug>

What are you smoking? GPUs are in no way comparable to the technology used in the newest CPUs: Intel is on 65nm and will be on 45nm next year, and AMD is already moving to 65nm and ditching 90nm just as Nvidia and ATi move onto it.
 

ak47is1337

With a die shrink, theoretically you should be able to clock higher. But due to the density of the transistors packed onto the smaller die, heat becomes a major problem, as well as leakage current. So it is tricky...

That's why I support ATi in this regard (tech roadmap): they are using other methods to squeeze out more performance per clock, as was the case in the good old days when programmers could write small, efficient code (not for GPUs, but the analogy carries across). ATi does more per clock than Nvidia can, and thus its pixel shader efficiency is almost 100%.

Thus, we have a 16pp card that can beat a 24pp card to bits, and hopefully it will do so again if Nvidia uses 32pp on the 7900. It's a very simple line I'm drawing - I'm not that deep into the tech.

Nvidia's are FAR more efficient per clock. Nvidia has 24 pixel pipelines tops, ATi has 16. That's pretty big. However, the 48 pixel shaders of the X1900XTX can take it far, and thus it pulls ahead for now... not to mention that it's on 90nm while the 7800GTXs are on 110nm. Once Nvidia actually matches ATi's clock speed with the 7900, ATi's throne will be toppled.
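The same kind of paper math works for shader throughput (again, assumed clocks, purely illustrative):

# On-paper shader throughput = shader units * core clock. Clock figures are assumptions
# for illustration only, and each pipeline is counted as one shader unit (simplified).
def shader_ops_per_s(shader_units, core_mhz):
    return shader_units * core_mhz * 1e6

x1900xtx  = shader_ops_per_s(48, 650)   # 48 pixel shader units
gf7800gtx = shader_ops_per_s(24, 430)   # 24 pipelines, counted as 24 shader units
print(round(x1900xtx / gf7800gtx, 1))   # ~3.0x on paper, which is why it pulls ahead
                                        # in shader-heavy games despite "only" 16 pipes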
 

ak47is1337

I'm no fool, esse, just the mother in the "assumption is the mother of all f*ck-ups" statement :lol: :lol:
Just screwing with ya, guys. I keep seeing retarded posts that claim the smallest chip is the best chip, which is clearly false - Intel would be owning AMD if that were the case.
 

Vokofpolisiekar

Hopefully my posts weren't retarded.... 8) A die shrink is a very risky process as far as I know.

Busy plowing through Rainbow Six Lockdown - and very p*ssed at it. Damn CONSOLE ROOTS :evil: :evil: :evil:
 

ak47is1337

Hopefully my posts weren't retarded.... 8) A die shrink is a very risky process as far as I know.

Busy plowing through Rainbow Six Lockdown - and very p*ssed at it. Damn CONSOLE ROOTS :evil: :evil: :evil:
Not risky - difficult and expensive. It must be done, otherwise a chip company could never compete...
AMD is currently having trouble perfecting 65nm SOI, btw.
 

ak47is1337

SOI - now that's something Intel would love to get their hands on. I heard it runs much cooler than strained silicon?
It does. But 90nm SOI has problems at -10°C.
So for those of us building phase-change coolers that run at -100°C, some AMDs will not boot at that temperature.
 

WithExtremePrejudice

My thanks, O Grandfather-of-All-Knowledge!! :p Though I am lost in the myriad twists and turns of your brilliant whatchamacallit, and my 90nm brain has shrunk to 45nm under the stress of trying to comprehend the mysteries you have valiantly tried to impress upon more unworthy beings, I would also like an answer to the small matter of why graphics cards are getting more power-hungry if their process sizes are getting smaller. Is it because of more memory? More pixel pipelines? Or something to do with a more complex architecture thingy? Just tell moi which factor matters most and don't go in too deep, as that will only go over my head like the avenging geese of every butcher's nightmares!! :D
 

Vokofpolisiekar

AK47 is going to prove me wrong and smite me, but as I said (assumed) before - more transistors (for more functionality, i.e. SM2.1/3.0, HDR etc.). The closer they are packed together on a smaller die, the worse the heat gets, because we are also running increased MHz.
 

ak47is1337

My thanks, O Grandfather-of-All-Knowledge!! :p Though I am lost in the myriad twists and turns of your brilliant whatchamacallit, and my 90nm brain has shrunk to 45nm under the stress of trying to comprehend the mysteries you have valiantly tried to impress upon more unworthy beings, I would also like an answer to the small matter of why graphics cards are getting more power-hungry if their process sizes are getting smaller. Is it because of more memory? More pixel pipelines? Or something to do with a more complex architecture thingy? Just tell moi which factor matters most and don't go in too deep, as that will only go over my head like the avenging geese of every butcher's nightmares!! :D
Flattering, really. However, even as the process size decreases, more transistors are put on a single die, and so total power keeps ramping up. Massive numbers of pixel shaders, pixel pipeline quads and 1.2ns memory don't help much either.
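A rough way to see it (the scaling factors below are assumptions for illustration, not real fab data): each shrink cuts the power per transistor, but the transistor count grows faster than the savings, so the chip's total draw still climbs.

# Illustrative only: assumed per-generation scaling, not real process or GPU data.
power_per_transistor = 1.0
transistors = 1.0
for generation in range(3):          # three hypothetical die shrinks
    power_per_transistor *= 0.7      # assume ~30% less power per transistor per shrink
    transistors *= 1.8               # assume ~1.8x more transistors per chip per shrink
    print(generation + 1, round(power_per_transistor * transistors, 2))
# Relative total chip power: 1.26, 1.59, 2.0 - still climbing every generation.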
 

ak47is1337

AK47 is going to prove me wrong and smite me, but as I said (assumed) before - more transistors (for more functionality, i.e. SM2.1/3.0, HDR etc.). The closer they are packed together on a smaller die, the worse the heat gets, because we are also running increased MHz.
The transistor part is true. However, HDR, SM3.0 and such are really BIOS/software based, and can increase performance without needing more powerful hardware per se. This explains how one of the new Intel Extreme Graphics chips can actually support DirectX 10 for Vista and such.
 

Vokofpolisiekar

What did I say? :)

In any case, I'm leaving before I really start posting stuff that might earn me a smiting :lol: and going to whoop Real Madrid's ass in PES5 with ze Arsenal team.

"One might love computers, but it doesn't allways love you back - it's like dating a German chick" - My addaptation of Billy Bob Thornton in the movie Bad News Bears.
 

WithExtremePrejudice

OK, that explains why they don't go dual-core - they can't! No space for it! Pity. They could've split the 16 PCIe lanes 8/8, split the pixel pipelines the same way, and reduced the load per core. Would've also cut down power consumption in less demanding games, especially ones needing only one core's worth of GPU power.