
NVIDIA GTX 350

Tags:
  • Graphics Cards
  • Gtx
  • Nvidia
  • Memory
  • Graphics
July 18, 2008 10:51:47 PM

NVIDIA GTX 350
GT300 core
55nm technology
576 mm² die
512-bit memory bus
2GB GDDR5 memory, double the GTX 280
480 SPs, double the GTX 280
64 ROPs (raster operation units), same as the GTX 280
216 GB/s bandwidth
Default clocks 830/2075/3360 MHz
Pixel fill rate 36.3 Gpixels/s
Texture fill rate 84.4 Gtexels/s
No DirectX 10.1 support; DX10.0/SM4.0 only

http://en.hardspell.com/doc/showcont.asp?news_id=3764
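As a quick sanity check on those numbers (assuming the 3360 MHz figure is the memory's effective data rate, which is how such rumours are usually written), the bandwidth claim is at least internally consistent:

```python
# Rough sanity check of the rumoured bandwidth figure.
# Assumes 3360 MHz is the effective (data) rate of the memory.
bus_width_bits = 512
effective_rate_mhz = 3360

bandwidth_gbs = effective_rate_mhz * 1e6 * bus_width_bits / 8 / 1e9
print(f"Theoretical bandwidth: {bandwidth_gbs:.2f} GB/s")  # ~215 GB/s, close to the claimed 216
```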


July 18, 2008 10:58:15 PM

So wait, this is their next generation coming in winter or is this the GT200 refresh?
July 18, 2008 11:04:47 PM

It's coming late August to early September; when the 4870X2 is out, this will be at most two weeks behind it.

I have heard specs from a good source on OcUK, and the specs/design here don't match what I have been told. The 2GB is real, the GDDR5 isn't, and it's dual-GPU.

The rest is pretty much spot on.



July 18, 2008 11:06:58 PM

Mmmm... interesting... yummy, lol.
July 18, 2008 11:19:16 PM

Seems a little too quick... I'm suspicious.
July 18, 2008 11:23:18 PM

mathiasschnell said:
Seems a little too quick... I'm suspicious.


Why?

It will arrive shortly... just after the release of a competitor's dual-GPU solution.

They are saying it will be faster than the competitor's by a bit.
July 18, 2008 11:26:39 PM

Damn, I just bought a GTX 260!
July 18, 2008 11:32:37 PM

peter_peter said:
Damn, I just bought a GTX 260!


The price is unknown at the moment. It will be a lot more expensive than your 260, more than double.

The 260 is a hell of a card anyway; it's a beast when overclocked to the hilt.
July 18, 2008 11:44:55 PM

mathiasschnell said:
Seems a little too quick... I'm suspicious.


You should be, as it's not true. That same site earlier reported that the GTX 350 would be 45nm, and it has been running a long series of BS rumors. If you look around the other forums, you will see that nobody is taking it seriously.
July 18, 2008 11:47:41 PM

It's coming, don't worry; once the 4870X2 is out, expect to hear all about it.

Doubt it all you want, I will bump this thread in September.
July 18, 2008 11:56:25 PM

There may be something coming, as I've heard as well, but it won't be this. The power envelope simply won't allow something like this; you'd need special power hookups for it. Also, one look at the clock speeds on something that huge and you're talking a huge heat problem. Default core on the G280 is what, 602 MHz? Gimme a break. Look at this and compare it to what's already been released.
July 19, 2008 12:00:25 AM

If this does ship in the coming months, who cares? The hardware is getting too far ahead of the software now.
July 19, 2008 12:07:22 AM

Nik_I said:
If this does ship in the coming months, who cares? The hardware is getting too far ahead of the software now.


What happens in games that need the cores, like Crysis or Warhead or FC2 or Alan Wake?


July 19, 2008 12:14:02 AM

738 with OC, yes, and then look at the power draw after the OC. What I'm saying is, that's over a 30% core increase, which would more than take away any die shrink, even adding in the reduced power from using GDDR5, plus the question of the card even being compatible with GDDR5, which no one's heard a thing about. We all knew ATI was using it well before their cards arrived, and I find it hard to think we wouldn't have heard about this as well. If all the speeds are ramped up, and you throw in the die shrink, you're still talking about a G280 X2 power demand, which I just can't see happening.
July 19, 2008 12:16:58 AM

JAYDEEJOHN said:
There may be something coming, as I've heard as well, but it won't be this.


Speculation is that the 55nm flavors of the GTX280 and GTX260 will be out earlier than initially expected. Perhaps as soon as late August.
July 19, 2008 12:17:04 AM

JAYDEEJOHN said:
There may be something coming, as I've heard as well, but it won't be this. The power envelope simply won't allow something like this; you'd need special power hookups for it. Also, one look at the clock speeds on something that huge and you're talking a huge heat problem. Default core on the G280 is what, 602 MHz? Gimme a break. Look at this and compare it to what's already been released.

Actually, Nvidia joined the SOI consortium. I doubt they joined it just to get something developed; maybe they got a "special" manufacturing deal out of it too. That would be the only possible way to get a card even more monstrous than the GTX 280 working on 55nm within the same or an only slightly higher power envelope.

Until I read something official, I think Nvidia is just playing the marketing card to keep people from running off to buy 48xx cards.
July 19, 2008 12:30:11 AM

I believe it's possible to have a 55nm G280 come out in very limited quantities by September, with clocks ramped to 800 core, and such a card could give the X2 problems, as we saw at the previews. However, I'm of the mind ATI is sandbagging as well and is going to surprise yet again.
July 19, 2008 12:33:28 AM

Remember, ATI stated that we'd see improvements of 15% with the X2. Looking at those previews, I didn't see that. And of course, none of the drivers for what we saw are even finalised yet, including the regular CF setups, so yes, there's much more performance to come.
July 19, 2008 12:39:35 AM

All we need now is for Intel and AMD to come out with CPUs that can harness the full potential of these monsters.
July 19, 2008 12:40:16 AM

dos1986 said:
What happens in games that need the cores, like Crysis or Warhead or FC2 or Alan Wake?


What happens to something like Crysis is that it continues being the same unoptimized piece of junk it always has been. :kaola: 

That game doesn't "need cores." It needs to be written to actually use them, which is precisely its problem. It does not scale well at all. When SLI/Crossfire setups of cards a generation ahead of what it was made for still cannot really get it to run right at its higher settings, I think it's pretty ridiculous to consider the hardware to be the problem.
July 19, 2008 1:39:36 AM

The source doesn't seem substantial...
July 19, 2008 1:47:48 AM

Core clock is high even for a die shrink. There's no reason for GDDR5 on a 512-bit bus. The RAM speeds are skewed, as the slowest GDDR5 comes in at 900. It's bogus.
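To unpack that last point with a rough sketch: GDDR5 transfers four data words per command-clock cycle, so even the slowest 900 MHz parts of the day imply an effective rate above what the rumour lists (the 900 MHz figure is the poster's; the x4 multiplier is standard GDDR5 behaviour):

```python
# GDDR5 moves 4 data words per command-clock cycle.
slowest_gddr5_cmd_mhz = 900          # the figure quoted above
effective_mts = slowest_gddr5_cmd_mhz * 4
print(f"Slowest GDDR5: {effective_mts} MT/s effective")  # 3600 MT/s
print("Rumoured memory speed: 3360 MT/s, i.e. below anything GDDR5 actually shipped at")
```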
July 19, 2008 1:52:56 AM

This is vaporware made up by an Nvidia loyalist. It's a bunch of random numbers with acronyms and abbreviations.
July 19, 2008 2:01:51 AM

This card would still be stuck with AFR rendering, which is going to make it inferior to the GTX 280.

They need to rethink the technology like AMD did before they start making more dual-GPU cards. =/
July 19, 2008 2:03:40 AM

Do we seriously have to fight over this? C'mon, it's obviously not going to happen. If ATI can't get hold of 1GB of GDDR5, how is Nvidia going to get hold of 2GB?
July 19, 2008 2:17:20 AM

mathiasschnell said:
So wait, this is their next generation coming in winter or is this the GT200 refresh?



Aren't they always refreshes? Gets a little hazy with NV, doesn't it?
July 19, 2008 2:27:01 AM

the last resort said:
Do we seriously have to fight over this? C'mon, it's obviously not going to happen. If ATI can't get hold of 1GB of GDDR5, how is Nvidia going to get hold of 2GB?



It won't be a 2GB card even in the best case. It'd be at most a 1GB-per-core design, just like the 9800 GX2 was a 512MB-per-core design. Even then, it's more likely to be 1GB of GDDR3 on a 512-bit bus, not 2GB of GDDR5. I thought JAYDEEJOHN did some digging on this and concluded that the GT200 core doesn't support GDDR5 in its architecture, or something like that.

Quote:
NVIDIA GTX 350
GT300 core
55nm technology
576 mm² die
512-bit memory bus
2GB GDDR5 memory, double the GTX 280
480 SPs, double the GTX 280
64 ROPs (raster operation units), same as the GTX 280
216 GB/s bandwidth
Default clocks 830/2075/3360 MHz
Pixel fill rate 36.3 Gpixels/s
Texture fill rate 84.4 Gtexels/s
No DirectX 10.1 support; DX10.0/SM4.0 only


Realistically you could maybe expect...

GT200 architecture, 55nm, 576 mm²

512-bit bus, 1GB GDDR3 per core (1GB framebuffer), 240x2 SPs

And it'd run so hot you could cook eggs on it, like this: http://www.youtube.com/watch?v=IDoOV0FFPvA

Clocks might end up being 600/1300/1000, but I really think it couldn't get much faster than stock GTX 280 clocks.
July 19, 2008 2:41:39 AM

Wow, if this is true, people who bought the GTX 260 or 280 will be pissed.
July 19, 2008 2:43:20 AM

The FUD got out of its containment cell; everybody to the bunker, quick!!!

Seriously, I don't buy this. It would require a direct connection to the power plant just to boot, let alone OC it.
July 19, 2008 2:47:27 AM

Well, given that the 4870X2 prototype has a 500-watt (ed ^^) power draw at idle and it boots, I doubt that would be an issue.
July 19, 2008 2:51:34 AM

shadowthor said:
Wow, if this is true, people who bought the GTX 260 or 280 will be pissed.


Well, I think if they did make another dual-GPU card, it'd likely have the same problems the 9800 GX2 had, namely lack of scaling in SLI, huge power draw, high temperatures, and "can be beaten in fps by a combination of two lesser cards for a cheaper cost."

A GTX 280 would be comparable to the 9800 GTX of the 9800 GX2 era.

Honestly, I'm more than a little surprised that Nvidia could even be thinking about launching a GTX 280 X2, namely because they already know they are going to get beaten by the AMD 4870X2, so they should really be concentrating on a new architecture instead. Well, in my opinion anyway.
July 19, 2008 3:11:45 AM

At 55nm, at the same clocks, the power may be doable. But at higher clocks, no way, especially what's listed here. Also, it'd have to be a sandwich style like the 9800 GX2, so again, the power goes up. Not so sure this is doable at 55nm; plus better cooling, thus more power needed. It just doesn't look theoretically possible.
July 19, 2008 3:21:33 AM

Possible or not, I just don't see the point. The 4870X2 is already competing with the GTX 280, and 4870X2 quadfire is beating GTX 280 SLI solidly.

Even if the GTX 280 X2, or whatever it's called, could be manufactured and "beat" a 4870X2, it would be a microstutter fest in quad SLI... meaning 4870X2 quadfire would run over a GTX X2 quad-SLI setup.

I don't know, I just... well, maybe they could do it, and maybe they planned to do it anyway, but I think it'd be a waste of money and resources, especially if they want to have DX10.1 cards ready for next spring...
July 19, 2008 3:47:58 AM

Don't forget it would cost $1000+.

Why buy an entire PC with reasonable graphics power when you can pay the same amount for JUST a video card, eh? Sounds like one helluva deal.....

...if you're a dolt.
July 19, 2008 6:39:04 AM

ovaltineplease said:
why would it cost 1000$?


I think he is just exaggerating a little. I would say at least $700 or $800, considering the GTX 280 is finally around $450. Still way too much to even consider unless you like throwing money into the wind. I don't really see Nvidia pulling out a brand new card two months after they released their SUPER GTX series. It doesn't make good business sense to make a new architecture, throw it away two months later, and replace it with something possibly a little better. Who knows, though; we'll just speculate a little more and wait and see.
July 19, 2008 6:46:07 AM

Just_An_Engineer said:
You should be, as it's not true. That same site earlier reported that the GTX 350 would be 45nm, and it has been running a long series of BS rumors. If you look around the other forums, you will see that nobody is taking it seriously.


Good call, because TSMC is delaying the start of its 45nm operations, so it's definitely not going to be that if it comes out this year; whereas 55nm, hmm, maybe a refresh... Still, though, 480 Nvidia SPs, that's just insanity...
July 19, 2008 6:49:30 AM

ovaltineplease said:
why would it cost 1000$?


lol, ovaltine, I expect you to understand that effectively doubling the SP count would almost double the price. Not exactly, but man, that would be expensive: $800-ish at least if it comes out some time soon. Though JAYDEEJOHN seems to say, and I think so too, that it's bogus... at least the GDDR5 part. Something is off here, and the hardware is moving too fast; software simply cannot keep up.
July 19, 2008 7:51:23 AM

JAYDEEJOHN said:
If all the speeds are ramped up, and you throw in the die shrink, you're still talking about a G280 X2 power demand, which I just can't see happening.


I can see it happening. Typical Nvidia workaround, and the fans be darned as far as PSUs go. They'll probably charge $650 for it, take a bit of a loss, and say you need two PSUs.

All they need is the high end, regardless of how many are manufactured or how much power it draws. They've never been concerned with thermals, price, or power supply requirements before.

All they want is sheer fps in Crysis, or whatever current-generation game tops the reviews. It's all perception: 'Nvidia has the high end, their GTX 350 beats the 4870X2 by 6 fps in Crysis on a 30" LCD, so that means we must all buy GTX 280s for Christmas, or GTX 260s if we can't afford that, or 9800 GTX+s if we're really broke.'

Marketing trumps engineering every time. At least with Nvidia.
July 19, 2008 8:39:48 AM

Color me skeptical; G92 had a 2.5 shader:core clock ratio, but GT200 went with 2.16:1, quite likely because they couldn't get the shaders to run stably at that clock speed.

Likewise, at 3360 MHz memory on a 512-bit bus, it'd be 215.04 GB/s, not the 216 GB/s stated... A little nitpicking, I know.

I'm also HIGHLY doubtful that they'd be able to squeeze that sort of clock rate from a 576mm² chip, even after a revision to 55nm. And even then, I look at the fill-rates, and none of it even lines up; it's like the numbers were produced without any math involved at all.

For now, I'll maintain that at least part of those "specs" are fabricated, likely by someone over-eager to fill in the gaps because nVidia won't tell anyone what goes in them.
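A small sketch of that nitpick, back-solving unit counts from the quoted fill rates (assuming the usual formulas, pixel fill = ROPs x core clock and texture fill = TMUs x core clock; all input numbers are from the rumoured spec list):

```python
# Back-solve the unit counts implied by the rumoured fill rates.
core_clock_ghz = 0.830
pixel_fill_gps = 36.3    # claimed Gpixels/s
texture_fill_gts = 84.4  # claimed Gtexels/s

implied_rops = pixel_fill_gps / core_clock_ghz
implied_tmus = texture_fill_gts / core_clock_ghz
print(f"Implied ROPs: {implied_rops:.1f}")  # ~43.7, neither a whole number nor the claimed 64
print(f"Implied TMUs: {implied_tmus:.1f}")  # ~101.7, also not a whole number

# The listed clocks also revert to G92's 2.5:1 shader:core ratio,
# which GT200 moved away from (~2.16:1).
print(f"Shader:core ratio: {2075 / 830:.2f}")  # 2.50
```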
July 19, 2008 12:16:41 PM

FrozenGpu said:
lol, ovaltine, I expect you to understand that effectively doubling the SP count would almost double the price.


Judging by that, let's look at the 3850's price at launch. That was $179 with 320 shaders.
The 4850 came out with 800 shaders, which is 2.5 times as many, so it should cost ~$450.
That would make the 4870X2 quite the expensive card, almost cracking the magic $1000 barrier.
To make it worse, I could start comparing the 8600 to the 8800 series based on their shader counts and prices, but that would be nuts, wouldn't it?

You don't happen to work in Nvidia's financial division calculating launch prices, do you?

The shader count may be a factor in the cost, but it is clearly not the dominating one.
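Spelling out the reductio with the figures quoted above (a toy linear extrapolation, nothing more):

```python
# Toy extrapolation: scale launch price linearly with shader count,
# using the figures quoted above (HD 3850: $179, 320 shaders; HD 4850: 800 shaders).
hd3850_price, hd3850_shaders = 179, 320
hd4850_shaders = 800

price_per_shader = hd3850_price / hd3850_shaders
predicted_4850_price = price_per_shader * hd4850_shaders
print(f"Predicted HD 4850 price: ${predicted_4850_price:.0f}")  # ~$448, vs. the real ~$199 launch price
```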
July 19, 2008 12:25:09 PM

Those specs don't look legit to me. Even if they were true, then wow, this card looks like a killer, not to mention twice as expensive. I would love a 512-bit bus and GDDR5, but I don't see that happening anytime soon.
July 19, 2008 1:09:32 PM

Let's play the pricing game. The cooling solution would have to be more expensive: we're talking around a 33% shrink but a doubling of all the physical numbers here, so you're still cooling around 40% more, and I'm rounding and being conservative. It's still a 512-bit bus. The PCBs will be expensive, even more than the 65nm part, because one, it's a smaller process/startup, and two, it's GDDR5, which, even if easier to design for, still needs a completely new wire layout. Not really any savings there except maybe a little power, so the PCB requirements don't double, but they don't shrink either.

The chips would be approximately 35% larger. I know it says 576mm², but unless it's a completely new arch, it's two G200s, which after the die shrink and doubling is more like 760mm², or around 380 apiece. No power savings, and at 760 it's currently 35% or so larger, so no money savings, and it will be harder, or take more, to cool.

Now the interesting part. If all things come out the same, meaning leakage/power usage per transistor stays proportional to die size, you could say that per transistor you'd be saving 35%, but you'd still be using more power overall. Not a lot, but more, and that proportional scaling never happens anyway; it's why when we see, say, a 30% shrink with no other changes, we don't see a 30% clock change, since thermals don't scale linearly with transistor size.

And that brings us to the clock speeds. There you have a 33% increase, which, as I said, you just can't do and stay within thermals. So even if it were kept within thermals, which isn't possible, the gains from the shrink, the memory, and the wiring placement, pretending it could all be done, would still leave us with the power draw of two G280s, which is impossible because of PCIe 2.0 compliance.

To give you an example: the 4870X2 requires an 8-pin and a 6-pin, though you can run it 6 and 6, as we saw in some previews. Eight and six is currently as high as we can go, period. Doubling a G280's power means adding another 90 watts, another 8-pin, and it just can't be done.
July 19, 2008 1:29:20 PM

Slobogob said:
You don't happen to work in Nvidia's financial division calculating launch prices, do you?

To be honest, I think that right now, nVidia's people know less about what they're doing than the average enthusiast does. So I think that appeals to nVidia's authority at this point are pretty much moot.

Slobogob said:
The shader count may be a factor in the cost, but it is clearly not the dominating one.

However, RAM prices affect things QUITE a bit. I'd note that both the 3850 and 4850 came with 512MB of GDDR3, a middling amount of what is now a commonplace and cheap kind of memory. Meanwhile, the supposed GTX 350 outright DOUBLES the amount of memory, as well as switching from cheap GDDR3 to expensive GDDR5. GDDR5 is a memory technology in its infancy, meaning that currently the 512 Mbit (64MB) chips, the smallest ones, are the only kind that are plentiful, and they're considerably cheaper than 1024 Mbit (128MB) chips. A 512-bit memory interface means you're going to have a whole 16 chips of RAM, which is fine for 1024MB; that just uses 512 Mbit chips, which are even cheaper in GDDR3 form. However, the 1024 Mbit chips that would be required to get a whole 2048MB on a 512-bit interface would be over twice as expensive, on top of the cost increase in going from GDDR3 to GDDR5. You're probably looking at paying 3-5 times as much for the VRAM, which on a board with so much of it is going to be a significant portion of the price.

So yeah, when comparing that supposed GTX 350 to the GTX 280 as far as price goes, comparing the HD 4850 to the HD 3850 is a very flawed analogy that doesn't take very much into account.
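A quick sketch of the chip-count arithmetic behind that VRAM point (assuming standard 32-bit-wide GDDR devices):

```python
# How many memory chips a 512-bit bus implies, and what density each chip needs
# for a given total capacity. Assumes standard 32-bit-wide GDDR devices.
bus_width_bits = 512
chip_width_bits = 32
chips = bus_width_bits // chip_width_bits
print(f"Chips on a 512-bit bus: {chips}")  # 16

for total_mb in (1024, 2048):
    per_chip_mbit = total_mb * 8 // chips
    print(f"{total_mb} MB total -> {per_chip_mbit} Mbit per chip")
# 1024 MB only needs 512 Mbit chips (cheap and plentiful);
# 2048 MB needs 1024 Mbit chips (scarce and much pricier in GDDR5 at the time).
```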

John Vuong said:
Those specs don't look legit to me. Even if they were true, then wow, this card looks like a killer, not to mention twice as expensive. I would love a 512-bit bus and GDDR5, but I don't see that happening anytime soon.

Plus, as I noted, there was the part where none of the throughputs were divisible into the clock speeds, or even into any logical number of processing units the chip might have. (Gimme a break; I'm assuming it'll be a whole number, and even a multiple of 8, as nVidia's parts have been since the G70.)

JAYDEEJOHN said:
The chips would be approximately 35% larger. I know it says 576mm², but unless it's a completely new arch, it's two G200s, which after the die shrink and doubling is more like 760mm², or around 380 apiece.

Actually, the way I calculated it out, a shift from a 65nm process to a 55nm process would take the chip from 24x24 mm (576mm²) to 20.3x20.3 mm (412mm²), only 28.5% in terms of die area savings. This could be eaten up by an additional 39.8% more transistors, which would bring the total up from 1400 million to 1957 million... To put that into perspective, the extra ~557 million transistors alone are about 81.8% of a G80's entire transistor count.
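The same die-shrink arithmetic, spelled out as a rough sketch (the 24x24 mm die and the 1400-million transistor count are the figures quoted above; an ideal optical shrink is assumed):

```python
# Rough die-shrink arithmetic for an ideal 65nm -> 55nm optical shrink of a 576 mm^2 die.
old_node, new_node = 65.0, 55.0
old_side_mm = 24.0                    # 24 x 24 mm = 576 mm^2
scale = new_node / old_node

new_side_mm = old_side_mm * scale
new_area = new_side_mm ** 2
print(f"Shrunk die: {new_side_mm:.1f} x {new_side_mm:.1f} mm = {new_area:.0f} mm^2")  # ~412 mm^2

area_saving = 1 - new_area / old_side_mm ** 2
print(f"Area saving: {area_saving:.1%}")  # ~28.4%, matching the ~28.5% above

extra_budget = 1 / (1 - area_saving) - 1
transistors_m = 1400
print(f"Extra transistor budget at the old area: {extra_budget:.1%}")  # ~39.7%
print(f"Total: ~{transistors_m * (1 + extra_budget):.0f} M transistors")  # ~1955 M, i.e. the ~1957 M figure above
```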

JAYDEEJOHN said:
And that brings us to the clock speeds. There you have a 33% increase, which, as I said, you just can't do and stay within thermals.

Plus, each transistor shrink does not grant an increase in maximum stable clock speed that matches the decrease in size; otherwise we'd have seen a doubling in clock speed every 18 months outside of the NetBust era. Rather, if memory serves, the typical full-node step only lets transistors switch around 20-30% faster than before on average while retaining the same level of reliability. As we saw on top-end Radeon cards, we went from a maximum of:
  • 540 MHz on 130nm (Radeon X850 XT PE), to
  • 650 MHz on 90nm (Radeon X1950 XTX), to
  • 743 MHz on 80nm (Radeon HD 2900 XT), to
  • 775 MHz on 55nm (Radeon HD 3870).
    Similarly, on top-of-the-line GeForce cards:
  • 425 MHz on 130nm (GeForce 6800 Ultra), to
  • 550 MHz on 110nm (GeForce 7800 GTX 512), to
  • 612 MHz on 90nm (GeForce 8800 Ultra), to
  • 675 MHz on 65nm (GeForce 9800 GTX).
    Obviously, I left out some processes because they weren't used for high-end GPUs; I'm not counting the (almost always higher) core speeds found in mid-range GPUs, since the GTX 350, as supposed, is not a mid-range GPU by any stretch of the imagination.

JAYDEEJOHN said:
The 4870X2 requires an 8-pin and a 6-pin, though you can run it 6 and 6, as we saw in some previews. Eight and six is currently as high as we can go, period. Doubling a G280's power means adding another 90 watts, another 8-pin, and it just can't be done.

Yeah, the ceiling for a card fed by the slot plus one 6-pin and one 8-pin connector is 300 watts: 75 from the slot, 75 from the 6-pin, and 150 from the 8-pin. The GTX 280 already comes fairly close to that, and doubling a G200's draw would blow right past it. And if you made a card that needed even more, you'd shut out the 90% of people with a perfectly good motherboard and CPU (Core 2 Extremes included) whose systems simply couldn't feed it.
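For reference, a small sketch of that power budget (75 W from the slot, 75 W per 6-pin and 150 W per 8-pin are the standard PCIe figures; the GTX 280's ~236 W TDP is from memory rather than from this thread, and the "doubled GTX 280" is purely hypothetical, as is the little board_limit helper):

```python
# PCIe power budget vs. a hypothetical doubled-GTX-280 board.
# Standard limits: 75 W from the x16 slot, 75 W per 6-pin, 150 W per 8-pin connector.
SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def board_limit(connectors):
    """Total power available to a card from the slot plus the listed aux connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

gtx280_tdp = 236             # W, single GTX 280 (approximate, from memory)
doubled = 2 * gtx280_tdp     # naive doubled-card draw

limit = board_limit(["6-pin", "8-pin"])
print(f"6-pin + 8-pin ceiling: {limit} W")  # 300 W
print(f"Hypothetical doubled GTX 280: {doubled} W -> fits: {doubled <= limit}")  # 472 W, does not fit
```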
July 19, 2008 1:48:02 PM

In a way, unless nVidia changes the ratio again, which would mean going back to the old 2.5 to 1, the SPs do have a direct relation to die size, given the ratios are constant like you were saying. They changed their ratio with the 200 series, like Marvelous pointed out, and that's where he thinks it's hurt them the most, and he could well be right. Even going back to the old ratio, you're still talking about a number of things: changes all over the arch, and a new shrink, which nVidia doesn't usually do on their high-end/new arch, if ever.
July 19, 2008 2:23:35 PM

Slobogob said:
Judging by that, let's look at the 3850's price at launch. That was $179 with 320 shaders.
The 4850 came out with 800 shaders, which is 2.5 times as many, so it should cost ~$450.
That would make the 4870X2 quite the expensive card, almost cracking the magic $1000 barrier.
To make it worse, I could start comparing the 8600 to the 8800 series based on their shader counts and prices, but that would be nuts, wouldn't it?

You don't happen to work in Nvidia's financial division calculating launch prices, do you?

The shader count may be a factor in the cost, but it is clearly not the dominating one.


lol, that is clearly not the point, son; ATI SPs do not equal Nvidia SPs, period!

You have to take ATI SPs and divide them by 5 to get the equivalence to Nvidia SPs...

But there are clearly more transistors, and the PCB has to be packed with more resistors, capacitors, and so on just to provide processing overhead, so yeah, it's going to cost more. Thanks for humoring me, but clearly SPs are not the only determining spec to look at...
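Taking that divide-by-five rule of thumb at face value (it is the poster's heuristic, not an official equivalence), the comparison being gestured at looks like this:

```python
# The poster's divide-by-five rule of thumb for comparing ATI and Nvidia shader counts.
# This is a forum heuristic, not an official conversion.
ati_sps_4870 = 800
nv_equivalent = ati_sps_4870 / 5
print(f"HD 4870's 800 ATI SPs ~ {nv_equivalent:.0f} 'Nvidia-style' SPs")  # ~160
print("vs. 240 SPs on a GTX 280 and the rumoured 480 on the 'GTX 350'")
```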
July 19, 2008 2:32:15 PM

nottheking said:
[snip: the full post quoted above, covering VRAM pricing, die-shrink math, clock scaling across process nodes, and PCIe power limits]

While it was entertaining to read your essay (I hate to say it, but I couldn't be bothered to read it all; not trying to be rude), it is unlikely that the MSRP would be any higher than $649.99. Otherwise they wouldn't have a hope of keeping up with the 4870X2, and they obviously know this, or the GTX 260/280 wouldn't have dropped in price like they did. Doubling the card's physical statistics certainly makes it more expensive, but it's not going to throw it into the $1000 range anyway. If Nvidia made a GTX 280 X2 it wouldn't use GDDR5; it'd use GDDR3, as there is no rhyme or reason for them to move to GDDR5 when it's questionable whether the architecture even supports it.

Building graphics cards might seem as simple as "throw a bunch of crap on a PCB" to some people, but it's not; it's obviously much more complex than that, and making large sweeping architectural changes like switching to GDDR5 obviously requires a different type of construction and design.
July 19, 2008 2:49:14 PM

Exactly. I said I was rounding, and being conservative on top of that, all in nVidia's favor. Your numbers look more precise, and I didn't want to research power-draw/shrink ratios, so good to know, ty. Yeah, the PCB would have to be reworked for GDDR5, and it'd be redundant as you've already got a 512-bit bus. As OTP says, they're hard enough to balance, and adding all this overkill would not only push the costs out of control for the market, but most likely bottleneck the card and bring it so out of balance that we'd all be buying R600s and praising them, heheh.
July 19, 2008 3:04:37 PM


Nvidia doesn't really care about the production costs of the top-end unit; they just want the performance crown no matter what. They make most of their money at the sub-$150 level. Losing money and making money both happen at various times; there's no company in history without any losses.

There's too much speculation in this thread, too much sticking to the past and basing everything on what has happened before. Forget about codenames like GT200b; they are only there to mislead.