randomizer

Champion
Moderator
IT LOOKS LIKE we were right about Fermi being too big, too hot, and too late: Nvidia just castrated it to 448 SPs. Even at that, it is a 225-watt part, slipping into the future.

The main point is from an Nvidia PDF first found here. On page 6, there are some interesting specs: 448 stream processors (SPs), not 512; a 1.40GHz shader clock, slower than the G200's 1.476GHz; and the big 6GB GDDR5 variant delayed until 2H 2010. To be charitable, the last one isn't Nvidia's fault; it needs 64x32 GDDR5 to make it work, and that isn't coming until 2H 2010 now.

http://www.semiaccurate.com/2009/12/21/nvidia-castrates-fermi-448sps/

This could, of course, be a lower-end part.
 

4745454b

Titan
Moderator
My god, it does look that bad...

So they have a 448 SP part using ~225W on a 10-layer PCB (how expensive is that? I'm no expert, but I don't think most video cards use that many), and what's up with the memory? It says they are using GDDR5, but the clock speed is wrong: 2GHz is too fast for the real clock, while it's too slow for the quad-pumped data transfer rate. 2GHz would make sense, but only if they are still using GDDR3. The shaders run slower than what's found in the G200, and the big question remains: what were they finally able to get the core running at? If A2 was only able to hit 500MHz, they'd better hope A3 got it a lot faster. If the design called for 750MHz and you're only able to hit 500MHz, you have a big problem.
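The clock-speed confusion comes down to how GDDR5 numbers are quoted: GDDR5 transfers data at 4x its command clock ("quad pumped"). A rough sketch of what each reading of "2GHz" would imply for bandwidth — the 384-bit bus width and the helper function here are my assumptions for illustration, not figures from the thread:

```python
# Back-of-envelope GDDR5 bandwidth check (hypothetical figures).
def gddr5_bandwidth_gbs(command_clock_ghz, bus_width_bits):
    """Peak bandwidth in GB/s: quad-pumped per-pin rate x bus width / 8 bits per byte."""
    effective_rate = command_clock_ghz * 4  # GT/s per pin
    return effective_rate * bus_width_bits / 8

# If "2GHz" were the real command clock on an assumed 384-bit bus:
print(gddr5_bandwidth_gbs(2.0, 384))  # 384.0 GB/s - implausibly high for 2009
# If "2GHz" were already the quad-pumped rate (0.5GHz command clock):
print(gddr5_bandwidth_gbs(0.5, 384))  # 96.0 GB/s - low for a high-end 2009 card
```

Either way the number looks off for GDDR5, which is the poster's point: 2GHz only lines up with a double-pumped GDDR3-style rating.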

4.25cm²? Isn't that a little big? Add in bad yields and that 10-layer board, and I don't see this selling for anything cheap.
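To put a number on why die area hurts so much, here is a minimal sketch using the classic Poisson yield model; the defect density and the smaller comparison die are hypothetical, chosen only to show how hard a 4.25cm² die gets hit:

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Poisson yield model: expected fraction of dice with zero defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Assume a hypothetical 0.5 defects/cm^2 on an immature process:
print(poisson_yield(4.25, 0.5))  # ~0.12: roughly 1 in 8 dice fully working
print(poisson_yield(1.8, 0.5))   # ~0.41 for a much smaller die
```

Yield falls off exponentially with area, so a die more than twice the size doesn't just halve the yield, it craters it — which is why salvage parts with disabled units exist at all.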

I'd love to see some info on the layers. Doing a quick search, I saw the GTX260 went from 16 layers down to 10. What is normal for a video card? I ask for my own education on this matter.
 
That's the worst news in all this.
Dropping the 64 SPs can only mean that, for Tesla, yields are poor, and we've already heard from an nVidia employee saying they needed close to 100% no-fault parts, and TSMC isn't delivering.
These cards need perfection; they run full out 24/7, and the clocks are lowered because of this.
So, to me, this means a 512 SP part will be coming with the graphics part, and the clocks will be decent too, but the power draw may be so high there'll never be an x2 solution, as something like a full 512 SP, higher-clocked graphics card will run 250 watts or more.
This is reminding me of the 280s, where some were really good, most were hot, and a lot were defective.
I also think it's simply due to its size.
Last we come to performance: unless it's truly a killer card, it'll play permanent second fiddle to the 5970. It will most likely beat the 5870, but it had better be close to the 5970, or they won't be able to sell it for much.
 

randomizer

Champion
Moderator
NVIDIA needed time to get the GTX260 from 192 to 216 SPs, so it may be the same situation here. The launch part may end up replaced by an equally priced part with more SPs as the process improves (if it does).
 
Well, remember, the 280s didn't OC for crap, while the 260s did very well.
Getting to this size, I think it's pushing the limits, which is why we don't see anything much larger in almost any solution.
Processes will mature though, you're right there.

Another example is the 1900 series, where the XTX vs the XT was a small difference, neither OC'd that well, and both were large dies.
 

xrodney

Distinguished
Jul 14, 2006
A huge, hot chip that may be more powerful than the HD5870, but at what cost?
The 5970 will still beat it up, and there is no way this is coming as a dual-chip card; it just won't fit the specification. And given its huge chip and low yields, it will be much more expensive. Not even mentioning the lack of output options: a single DVI, FTW. I would say it's an epic fail yet again for Nvidia.
 

JeanLuc

Distinguished
Oct 21, 2002



I think the situation with the GTX260 going from 192 SPs to 216 SPs was more down to the fact that the Radeon HD4870 offered more performance at a lower price point than a GTX260 with only 192 SPs, and Nvidia had to reposition it in order for it to be competitive.
 
Very possible to see a 480 SP part though.
Also, it's a 190-225 watt part.
Here's the problem: usually the "perfect" chips don't OC for crap but have decent thermals, whereas your "leaky" chips run hotter and OC well, and that's how they'll be binned, just like before: the same thermal solution on both, with one running hotter and not OCing, while the other can run even hotter when OC'd, but has better cooling per working part and OCs better.
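The 448/480/512 figures aren't arbitrary: SPs are disabled a cluster at a time to salvage defective dice. A quick sketch — the 32-SPs-per-SM organization matches Fermi's published specs, while the code itself is just illustration:

```python
SPS_PER_SM = 32  # Fermi groups 32 SPs per streaming multiprocessor (SM)
FULL_SMS = 16    # a fully enabled die: 16 SMs x 32 SPs = 512 SPs

# Salvage parts step down one SM (32 SPs) at a time:
for disabled in range(3):
    sms = FULL_SMS - disabled
    print(sms, "SMs ->", sms * SPS_PER_SM, "SPs")  # 512, 480, 448
```

So a 480 SP part would simply be a die with one bad SM fused off, sitting between the full 512 and the 448 in this news.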
 

jonpaul37

Distinguished
May 29, 2008
I hope ATI's 5890 works out to be better than Nvidia's top single-GPU Fermi-based card. This would etch ATI's graphics dominance in stone and force Nvidia to possibly (yeah, that's right, I said it) drop their prices.

I'm no fanboi, but Nvidia needs to price things better; it's the only reason I don't use Nvidia cards as of late.
 
Well, from what I'm seeing, it won't topple the 5970, and it'll be costly, rare/hard to find, and power hungry.
At least these are the early signs, especially with this news and our history of huge silicon chips.
I'm still in doubt as to its scaling as well, but time will tell.
 

4745454b

Titan
Moderator
Keep in mind this is at 40nm as well. Are there any nodes coming up that would allow Nvidia to re-enable them? If the heat goes down enough at 32nm, or some other node, they might get the chip they wanted in the first place.
 
This is where other problems come in for nVidia. ATI will have tons of time to tweak the 5xxx series and make, say, a 5890 that'll surprise, as the 40nm process matures. Meanwhile, if GF300 doesn't show up soon, it'll be hot, on a troubled process, using lots of power, and with no real time for an upgrade, as the new process will be rolling in and ATI will (or should) have a new chip out, as will nVidia on the newer process, or at the least a finely tuned, already-tweaked 5xxx version, with their new architecture coming mid-2011 at the latest, though that's looking far ahead.
So nVidia needs this one to work, and work well.
 
