
Nvidia next generation to use 512bit and 448bit memory controller

April 8, 2008 4:27:15 PM

http://www.fudzilla.com/index.php?option=com_content&ta...


Can it be that Nvidia will bring a real next generation product just a quarter after it released its 9800 series? Well, we don’t have the answer to that particular question, but as we reported here, Nvidia is working on a new GPU codenamed D10U-30. The D10U-30 will feature 1,024MB of GDDR3 memory and we learned that there will be one more SKU below it.

The second one is codenamed D10U-20 and it will have 896MB of memory, again of the GDDR3 flavor. This new card indicates that Nvidia can play with the memory configuration and that the new chip might support more than the regular 256-bit memory interface.

This one might support a 384-bit or some other memory configuration, but we still don't have enough details about it. It looks like Nvidia doesn't feel that going for GDDR4 is necessary and would rather jump directly from GDDR3 to GDDR5.
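
For reference, the two reported memory sizes do line up neatly with wider buses if you assume the usual layout of one 32-bit channel per chip and 512Mbit (64MB) GDDR3 chips; both figures are assumptions, not anything Fudzilla confirmed. A quick sketch:

```python
# Sketch: implied bus width from the rumoured memory sizes, assuming
# 512Mbit (64MB) GDDR3 chips on 32-bit channels (assumed, not confirmed).

CHIP_SIZE_MB = 64      # 512Mbit GDDR3 chip (assumption)
BITS_PER_CHIP = 32     # one 32-bit channel per chip (assumption)

def implied_bus_width(total_mb):
    """Chips needed for the total capacity, and the bus width they imply."""
    chips = total_mb // CHIP_SIZE_MB
    return chips, chips * BITS_PER_CHIP

for total_mb in (1024, 896):
    chips, bus_bits = implied_bus_width(total_mb)
    print(f"{total_mb}MB -> {chips} chips -> {bus_bits}-bit interface")

# 1024MB -> 16 chips -> 512-bit interface
# 896MB  -> 14 chips -> 448-bit interface
```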
April 8, 2008 4:31:20 PM

My guess would be....

32 ROP, either 48 or 96 TMU, 192SP, 512bit memory controller

28 ROP, either 40 or 80 TMU, 160SP, 448bit memory controller.
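
For what those guesses would mean on paper, here is a quick back-of-the-envelope calculation; the 600MHz core and 2200MT/s memory figures below are placeholder clocks, only the ROP counts and bus widths come from the guess above:

```python
# Back-of-the-envelope throughput for the two speculated configurations.
# Clock speeds are hypothetical placeholders, not leaked figures.

def theoretical_throughput(rops, bus_bits, core_mhz=600, mem_mts=2200):
    pixel_fill_gpixs = rops * core_mhz / 1000        # Gpixels/s
    bandwidth_gbs = bus_bits / 8 * mem_mts / 1000    # GB/s
    return pixel_fill_gpixs, bandwidth_gbs

for rops, bus_bits in ((32, 512), (28, 448)):
    fill, bw = theoretical_throughput(rops, bus_bits)
    print(f"{rops} ROPs / {bus_bits}-bit: {fill:.1f} Gpix/s, {bw:.1f} GB/s")

# 32 ROPs / 512-bit: 19.2 Gpix/s, 140.8 GB/s
# 28 ROPs / 448-bit: 16.8 Gpix/s, 123.2 GB/s
```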
April 8, 2008 6:09:06 PM

Unfortunately, after the (unfounded!) rumours about the 9800GX2, I'll believe it when I see it!
If it's true, it looks like Nvidia reckons the HD 4xxx series will be monsters! :D 
April 9, 2008 2:03:38 AM

Well, I hope NVIDIA will use a 512 bit memory bus and then I will be happy. And FFS, use GDDR5, ATI is going to use it, why stick with GDDR3? That is getting old.
April 9, 2008 2:17:02 AM

Will Nvidia do what they did before with the 8800GTX: release a "REAL KING" that lasts for two years, and in those two years bring out many renamed, remasked, lightly tweaked cards to fool people and empty their pockets?

And to John Vuong: even GDDR3 is old, but the GDDR3 used in the 9800GTX is a 0.8ns memory chip. Its spec clock is very high, high enough to reach the GDDR4 territory of 2400MHz, whereas standard GDDR4 starts at 2000MHz. It's like DDR2-1066 and DDR3-1066.
April 9, 2008 2:45:16 AM

I'll wait until the card is released; I don't trust Nvidia or the rumors after all the BS about the 9800 series.
April 9, 2008 2:50:16 AM

Yeah, I read that last night before going to bed and sent a quick e-mail to FUAD saying that if the memory sizes are correct, it's unlikely to be 384-bit; 448-bit fits the numbers.

Marv, I like the breakdown above. I think the former TMU number rather than the latter, though, but potentially a higher internal ratio vis-a-vis addressing within the TMU.
April 9, 2008 2:54:26 AM

John Vuong said:
Well, I hope NVIDIA will use a 512 bit memory bus and then I will be happy. And FFS, use GDDR5, ATI is going to use it, why stick with GDDR3? That is getting old.


It sounds like they're still having tolerance issues in supporting GDDR4/5.

The cost issue is no longer prohibitive, and the power/heat issue wouldn't be a problem, since GDDR5 doesn't require 1.8V to run at high speed, unlike GDDR4.

It'll be interesting to see how the bandwidths match up, because GDDR5 will launch at just under twice the speed of GDDR3. However, I wouldn't be surprised if, with good cooling, the GDDR3 overclocks pretty well, and when multiplied by the 512-bit interface the bandwidth would scale amazingly well under overclocking, something that was rather limited in the early HD2900s.
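
To put rough numbers on that scaling argument, here is a sketch with assumed data rates (roughly 2000-2400MT/s for GDDR3 and about 3600MT/s for launch GDDR5; these are illustrative figures, not announced specs):

```python
# Sketch of how a wide bus amplifies memory overclocks. Data rates are
# illustrative assumptions, not announced specs.

def bandwidth_gbs(bus_bits, data_rate_mts):
    return bus_bits / 8 * data_rate_mts / 1000   # GB/s

print("GDDR3, 512-bit, stock (2000MT/s):", bandwidth_gbs(512, 2000), "GB/s")  # 128.0
print("GDDR3, 512-bit, OC   (2400MT/s):", bandwidth_gbs(512, 2400), "GB/s")  # 153.6
print("GDDR5, 256-bit       (3600MT/s):", bandwidth_gbs(256, 3600), "GB/s")  # 115.2

# Each 100MT/s of memory overclock is worth 6.4GB/s on a 512-bit bus,
# versus 3.2GB/s on a 256-bit bus, which is the scaling point above.
```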
April 9, 2008 3:28:23 AM

The problem with GDDR5 is latency.
April 9, 2008 11:20:56 AM

At twice the speed? No argument here, just wondering: if GDDR5 is 2x faster and higher, will latency really be such a prohibitive factor?
April 9, 2008 11:54:57 AM

I love these discussions about VGA e-peen.

More MGhz doesn't mean more performance, just more MGhz. Netburst is good proof of that.

Next-gen cards? They will come with a PPU attached.

The GeForce 9xxx are just refurbished G92. Nvidia is holding the PPU card (or the two-core card) while peeps continue to buy VGAs to show off their e-peen.
April 9, 2008 12:00:07 PM

I'm waiting for next gen, as I want better performance too; it has nothing to do with the size of my e-peen. Maybe, or more to the point, the size of my LCD.
April 9, 2008 12:51:52 PM

Well, I still have an old chip that can still muscle with an 8600GT. Fast GPUs and fast RAM are nice, but if the bus is really slow... well, nothing much more to say about it, is there?

The resolution of LCDs is a nice point, but doubling the cards for 50% (or less) improved performance is the perfect sale. Really, it is. Not to mention that games, due to their development cycles, never (or very rarely) take advantage of cutting-edge hardware/CUDA/development tools.

God, I hope ATI/AMD really get their act together, so Nvidia NEEDS to draw out good cards (pun intended). I hate the refurbishing of old chips with just more and faster RAM and an overclocked GPU.

I won't upgrade my build so far; the "upgrades" are nice and dandy, but they aren't sufficient performance- and money-wise.
April 9, 2008 12:58:52 PM

radnor said:
I love these discussions about VGA e-peen.

More MGhz doesn't mean more performance, just more MGhz. Netburst is good proof of that.

Next-gen cards? They will come with a PPU attached.

The GeForce 9xxx are just refurbished G92. Nvidia is holding the PPU card (or the two-core card) while peeps continue to buy VGAs to show off their e-peen.


I don't know about e-peens, but I buy cards because the games run slow. I don't know about others, but I do try to hold out as long as I can or get a good deal on one that isn't going to hurt my pockets.

These DX10 cards aren't Pentium 4s. FUD didn't even mention MHz. A PPU is not necessary when we have 4-core CPUs. The biggest leaps came from the SPs, and you could call the GeForce 7 a Pentium 4. Looking at the speculated specs, though, it seems to be leaps and bounds faster than the 8800GTX. A card that can finally play Crysis at very high detail. A single-core card with no SLI nonsense.

April 9, 2008 1:21:51 PM

marvelous211 said:
I don't know about e-peens, but I buy cards because the games run slow. I don't know about others, but I do try to hold out as long as I can or get a good deal on one that isn't going to hurt my pockets.

These DX10 cards aren't Pentium 4s. FUD didn't even mention MHz. A PPU is not necessary when we have 4-core CPUs. The biggest leaps came from the SPs, and you could call the GeForce 7 a Pentium 4. Looking at the speculated specs, though, it seems to be leaps and bounds faster than the 8800GTX. A card that can finally play Crysis at very high detail. A single-core card with no SLI nonsense.


Just one thing: a PPU, or a specialized CPU if you prefer, would be the next logical leap. Two GPUs on the same PCB is a very old tale, called the V5000 with 2x VSA chips. I had one; it still works, by the way, although drivers haven't been made for it in a long time. As many are showing their enthusiasm and their opinions (some of them filled with valid reasons), I'm showing mine. I created this account on this forum just for it. It's an older opinion, you may say, less enthusiastic, with more... hmm... memory of old breakthroughs.

The 8800GTX is a great card, but in the VGA market it's already a very old card (or chip, if you prefer). Crysis isn't heavy in itself; the VGA market this last year has just run really slowly compared to other years. ATI and Nvidia are both to blame on this matter. I remember the hype about Far Cry, and it could be run fully maxed a few months after launch on new hardware.

Honestly, I want the best bang for my buck as well (Euro in this case), but I look at the options, and my R480 chip still runs smoothly enough (Crysis at medium, 30-40 fps).

About the PPU, a small hint: for rasterization (or ray tracing, at which I think Intel will fail deeply), your CPU is pure muscle. Some operations it will do nicely; others will take loads of time. It's what we call a generic processor: it can do everything, but not nicely. The next gen, eye-candy- and performance-wise, will be (I hope) an embedded PPU, or, instead of this silly SLI/Crossfire fight (double the money for less than 50% performance, plus software problems, plus games aren't made for SLI/Crossfire yet), an "add-on card" with dedicated support. A second card, made for dedicated work. Wanna play at max settings? Sure, buy the "add-on" card. I know I will. But it's silly to add another card with the same specs, with all the trouble that comes with it.

Bah, just venting my disappointment about the GPU industry at the moment. Sorry if I hit some soft spots.
April 9, 2008 1:24:21 PM

Well, for the sake of the customer (us), AMD & Nvidia (placement due to the alphabet) should do unified coding so every game engine can be optimized as on consoles. What do you think, guys?
April 9, 2008 1:30:44 PM

radnor said:
The resolution of LCDs is a nice point, but doubling the cards for 50% (or less) improved performance is the perfect sale. Really, it is. Not to mention that games, due to their development cycles, never (or very rarely) take advantage of cutting-edge hardware/CUDA/development tools.

I agree that SLI has historically been a bad investment, but this latest generation of cards (and more importantly their associated drivers) has made it a much more viable option. I would go so far as to say that 2 8800GTs in SLI is the best value high performance solution on the market right now.
radnor said:
I love these discussions about VGA e-peen.

E-peen has little to do with it. If my 8800GT ran Crysis fine at very high settings with a little bit of AA then I wouldn't upgrade again until there was a game that needed more power.
radnor said:
More MGhz doesn't mean more performance, just more MGhz. Netburst is good proof of that.

Actually, a 3GHz Netburst processor is faster than a 2GHz Netburst processor, all other things being equal. Higher frequencies do mean more performance if the architecture is unchanged.
radnor said:
Next-gen cards? They will come with a PPU attached.

More likely the "real" next gen cards will be based on an architecture that lends itself well to physics calculations. Nvidia is already planning a CUDA implementation of Physx for the 8 and 9 series, and those cards pretty much suck at branching calculations.
radnor said:
God, I hope ATI/AMD really get their act together, so Nvidia NEEDS to draw out good cards (pun intended). I hate the refurbishing of old chips with just more and faster RAM and an overclocked GPU.

I agree with you there. Hopefully Larrabee will provide some competition as well; the more the merrier :) 
April 9, 2008 1:32:45 PM

I don't know about an X850XT playing Crysis at medium. Does Crysis even support Shader Model 2.0?
April 9, 2008 1:44:49 PM

marvelous211 said:
I don't know about an X850XT playing Crysis at medium. Does Crysis even support Shader Model 2.0?


Pretty much. Want me to post a screenshot? I'm at work at the moment, but I can do it later. Only BioShock doesn't work; it hangs up a lot. With the exception of BioShock I can run almost everything with decent playability. It's got 512MB of GDDR3 and overall still handles things pretty well.

@homerdog:

Appreciated the comments; some I agree with, some I don't, but hey, that's life. I wrote two more posts after that little "flame". At least people are picking at the e-peen statement. Sometimes, in a VGA forum, it just seems like that.
April 9, 2008 2:32:34 PM



No. Dunno if it was at 1440x900 or a bit lower. It's an X850XT 512MB, not an X800XT 256MB.
By the way, the link you showed me reports really low results on other VGAs I've already played Crysis, and other games, on.
Sorry, but some results there just don't add up.

I don't want to sound crazy, but some results seem really low compared to other systems I've already played on, within those specs.
April 9, 2008 3:26:39 PM

I had an X850XT about two years ago. The X850XT is just clocked slightly higher than the X800XT, about a 5% difference. At medium settings in Crysis a 256MB card is enough for 1280x1024; I'm not too sure, since I haven't tested it. At high settings it uses slightly over 320MB of VRAM at 1280x1024.

The benchmarks aren't off at all when you consider they were run when the game had just been released. Crysis eats GPUs alive and spits them back out.

I had an 8600GTS that would chug at 1440x900 medium. Playable? Yes, but still slow as hell.
April 9, 2008 3:40:31 PM

radnor said:
Just one thing: a PPU, or a specialized CPU if you prefer, would be the next logical leap. Two GPUs on the same PCB is a very old tale, called the V5000 with 2x VSA chips. I had one; it still works, by the way, although drivers haven't been made for it in a long time. As many are showing their enthusiasm and their opinions (some of them filled with valid reasons), I'm showing mine. I created this account on this forum just for it. It's an older opinion, you may say, less enthusiastic, with more... hmm... memory of old breakthroughs.


I had a voodoo 5500 until I sold it off and got a Radeon 64meg vivo. :sol: 


radnor said:
The 8800GTX is a great card, but in the VGA market it's already a very old card (or chip, if you prefer). Crysis isn't heavy in itself; the VGA market this last year has just run really slowly compared to other years. ATI and Nvidia are both to blame on this matter. I remember the hype about Far Cry, and it could be run fully maxed a few months after launch on new hardware.


It is old, but it's a 24-ROP beast with a 384-bit memory bus. The whole point of G92 was to bring the price down for the mainstream, which it did; it was a re-spin of the old. Look how dirt cheap G92 chips are: they're downright cheap and get within 10% of a full 8800GTX at a fraction of the cost. Crysis is a GPU killer. Medium settings look decent, but you don't know what you are missing until you can play this game at very high. I get 17fps at the very high settings on my card; I just like looking at the pretty water and jungle. Nvidia's GT200 should be able to play this game at very high. It's going to be expensive: 512-bit memory controllers aren't cheap, and neither is double the transistor count.


radnor said:
Honestly, I want the best bang for my buck as well (Euro in this case), but I look at the options, and my R480 chip still runs smoothly enough (Crysis at medium, 30-40 fps).


I really doubt that. My 8600GTS, which is as fast as an X1950 Pro, did 28fps at max settings, overclocked.


radnor said:
About the PPU, a small hint: for rasterization (or ray tracing, at which I think Intel will fail deeply), your CPU is pure muscle. Some operations it will do nicely; others will take loads of time. It's what we call a generic processor: it can do everything, but not nicely. The next gen, eye-candy- and performance-wise, will be (I hope) an embedded PPU, or, instead of this silly SLI/Crossfire fight (double the money for less than 50% performance, plus software problems, plus games aren't made for SLI/Crossfire yet), an "add-on card" with dedicated support. A second card, made for dedicated work. Wanna play at max settings? Sure, buy the "add-on" card. I know I will. But it's silly to add another card with the same specs, with all the trouble that comes with it.


Ray tracing is up to Nvidia and not Intel. As far as I know, Nvidia is going with a hybrid technology; at least this is what I've heard.
April 9, 2008 3:55:48 PM

Playable. I won't say it's a breeze in a loaded scene (smoke, particles, mobs, etc.), but the FPS doesn't drop too much.

Bottom line, what I do mean, and I think we agree on this point, is that Nvidia & ATI should be shipping much better products than they are now. Right now the top-of-the-line boards are just two GPUs on the same PCB (or two PCBs in the same PCIe slot; this shows you can make omelettes in several ways). There is no major breakthrough.

Smells like Vista. Really.

On-topic: 512-bit or 448-bit buses on GPUs? Now that would be a nice upgrade, but I think Nvidia will leave us out to dry unless ATI can pull a rabbit out of the hat.
April 9, 2008 3:59:36 PM

radnor said:
I love these discussions about VGA e-peen.


It's about architecture actually, which seems to have escaped you, which isn't surprising given what you wrote.

Quote:
More MGhz doesn't mean more performance, just more MGhz. Netburst is good proof of that.


Actually it does, more than anything else. Like homerdog mentioned, 3GHz of the same architecture is faster than 2GHz of the same architecture, and a 2GHz Athlon XP will be faster than a 2MHz Core2Quad. So while everything plays a part, performance is determined most importantly by speed. And anything running at a MGhz would likely outperform the others in computational power and be restricted by other areas.

Quote:
Next-gen cards? They will come with a PPU attached.


No, they won't. The PPU is dead. Period.
nVidia bought Ageia and all their IP belong to US, err... them.
nV is putting it all into their GPGPU architecture, and it makes more sense to assign SPs to physics when needed and to graphics when needed than to waste an IC, memory and board space on a PPU whose utility was non-existent when Ageia pushed a hardware PPU. The PPU as an IC is dead; GPGPU is the way things are going.

Use your eBrain more than your eWang.
April 9, 2008 4:06:26 PM

It just depends, really. How much are people willing to pay? How tolerant are gamers of power consumption and heat? That's all it really comes down to. On the current fab process they are pushing things to the limit.

Supposedly ATI is not making any more high-end cards. Instead they want to CrossFire a bunch of mid-range chips together because it will cost less to produce, while Nvidia will sell their $800-$1000 monsters. But on current drivers ATI is playing a losing game, since a lot of games don't scale like they are supposed to, or the drivers are just not up to snuff.
April 9, 2008 4:31:35 PM

marvelous211 said:
I had a voodoo 5500 until I sold it off and got a Radeon 64meg vivo. :sol: 


Nice :)  My 5500 lasted a bit longer, then I exchanged it for a Ti4600. It was a nice and expensive upgrade :) 

marvelous211 said:

It is old, but it's a 24-ROP beast with a 384-bit memory bus. The whole point of G92 was to bring the price down for the mainstream, which it did; it was a re-spin of the old. Look how dirt cheap G92 chips are: they're downright cheap and get within 10% of a full 8800GTX at a fraction of the cost. Crysis is a GPU killer. Medium settings look decent, but you don't know what you are missing until you can play this game at very high. I get 17fps at the very high settings on my card; I just like looking at the pretty water and jungle. Nvidia's GT200 should be able to play this game at very high. It's going to be expensive: 512-bit memory controllers aren't cheap, and neither is double the transistor count.


I believe ya that at medium I don't know what I'm missing, probably. I remember Far Cry pulling off max settings with my Barton core and this R480 I'm still on now. The time I spent swimming...


marvelous211 said:

I really doubt that. My 8600GTS, which is as fast as an X1950 Pro, did 28fps at max settings, overclocked.


I play it like that. I won't say there isn't a drop in frame rate when things get "hot", but it's quite playable.

marvelous211 said:

Ray tracing is up to Nvidia and not Intel. As far as I know, Nvidia is going with a hybrid technology; at least this is what I've heard.


Nvidia bought Ageia PhysX a while ago, so there is your PPU; they are just holding that card. The G200 will be a nice chip from the specs I've read. But my Netburst talk in the first post was to say that more clock isn't more performance, just like more RAM, or more cores, doesn't automatically mean more performance.

AFAIK, and I might be wrong, Intel wants to bully its way in with the GPU boys. It's a bit old, but it seems like solid info.
http://blogs.intel.com/research/2007/10/real_time_raytr...

April 9, 2008 4:47:16 PM

homerdog said:
I agree that SLI has historically been a bad investment, but this latest generation of cards (and more importantly their associated drivers) has made it a much more viable option. I would go so far as to say that 2 8800GTs in SLI is the best value high performance solution on the market right now.



I agree with that too. Up until this year I had been terribly opposed to SLI/Crossfire, but seeing the 8800GT and 9600GT SLI scaling improvements has really changed my mind. The mixing of the 3870 and 3850 has me even more impressed. It has presented itself as a decent upgrade path for those that can't afford a $400 behemoth GPU.

I am still opposed to the X2 solutions, however. Even though I am impressed with ATI's ability to innovate, you still end up with one expensive option as opposed to the flexibility of upgrading within smaller price segments. Then we have NV using the get-'er-done approach. JFC.

Back to the subject though: it will be nice to see NV get past their GDDR3 wire-tracing issues. GDDR3 is so ancient now... I don't care if GDDR4 isn't any better performance-wise; a company needs to address issues, and this one has gone uncorrected for quite a while.

I am actually surprised that people don't ask more often why NV hasn't been using GDDR4. NV isn't deciding to skip GDDR4; they blocked themselves off from using it, and by the time they corrected the issue... oh look, GDDR5 is ready!
April 9, 2008 4:50:57 PM

The future of ray tracing isn't up to just one company. There needs to be a widely accepted API that supports ray tracing before we start to see it in games. OpenRT could work, I suppose, but Intel seems to want to build its own API (IntelRT) for Larrabee.

Since DirectX seems to be the API of choice now, I suspect that we will need to see some ray tracing support from Microsoft's API before it really takes off.
April 9, 2008 4:56:04 PM

TheGreatGrapeApe said:
It's about architecture actually, which seems to have escaped you, which isn't surprising given what you wrote.


Good God, you peeps really took the e-peen comment to heart. Keep reading; I've already talked a lot in other posts. Having a nice chat with marvelous211, by the way. I love talking to smart people who disagree with me; the conversation is always a nice one.

TheGreatGrapeApe said:

Actually it does, more than anything else. Like homerdog mentioned, 3GHz of the same architecture is faster than 2GHz of the same architecture, and a 2GHz Athlon XP will be faster than a 2MHz Core2Quad. So while everything plays a part, performance is determined most importantly by speed. And anything running at a MGhz would likely outperform the others in computational power and be restricted by other areas.


Really? Check how the P4 (Socket 423) and the Mac G5 work: you'll see the Mac rolling at 1GHz and taking the P4 down (at 1.5GHz) just because of a freaking pipeline issue. The same issue plagued the whole P4 architecture: the freaking Netburst, 3 pipelines with 17 stages. They were nuts to think that would work decently in an OOO (out-of-order) CPU. It worked, of course, when they pushed the Prescott core from 2.8GHz to 3.4GHz, or 3.6GHz if I recall correctly; of course it was a burning-hot, power-hungry CPU. From the freaking Tualatin cores to the Prescott cores Intel was wiping the floor. Your example is completely unrealistic and inefficient if you were trying to prove anything. By the way, the G5 at 1GHz worked with 5 pipelines of 8 stages, which afterwards became 8 pipelines. The microcode was much better in the G5; too bad it was expensive.
NetBurst sucked from day one.

TheGreatGrapeApe said:

No, they won't. The PPU is dead. Period.
nVidia bought Ageia and all their IP belong to US, err... them.
nV is putting it all into their GPGPU architecture, and it makes more sense to assign SPs to physics when needed and to graphics when needed than to waste an IC, memory and board space on a PPU whose utility was non-existent when Ageia pushed a hardware PPU. The PPU as an IC is dead; GPGPU is the way things are going.


The PPU is a wildcard ready to be pulled. At the moment Nvidia doesn't have a competitor, so it doesn't need to show its teeth to stay on top.

TheGreatGrapeApe said:
Use your eBrain more than your eWang.


Made me laugh, I can give ya that.
April 9, 2008 5:06:18 PM

marvelous211 said:
The problem with GDDR5 is latency.


Not compared to GDDR4, but a little versus GDDR3; however, the increase in latency wouldn't justify the increase in cost and lower yields of a more complex memory interface and PCB, IMO.

Also, while latency does play a part at similar clocks, the difference at the start could favour GDDR3, but once GDDR5 ramps up the latency factor will be eclipsed by speed.

The main difference IMO is the architecture. The ringbus is less affected by latency because it is very efficient, has multiple stops and keeps things in flight, so while it would also benefit from lower latency, it isn't compounded by latency. The current traditional GeForce architecture would be greatly affected by latency, but also by bandwidth (there are a lot of back ends there, and more if either of those numbers above are true); bandwidth is likely to be king for feeding all the TMUs and ROPs.

At launch they are both playing to their strengths somewhat, but as the memory and cards mature, I think that if they don't support GDDR5 internally they will be very limited longer term. But if it's only a bridge solution anyway, then maybe that doesn't matter too much, as long as the GT200 arrives with GDDR5 support by the fall.
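
A rough illustration of that "eclipsed by speed" point, with made-up cycle counts and clocks rather than real GDDR3/GDDR5 timings: absolute latency is cycles divided by clock, so a part needing more cycles but running much faster lands in the same ballpark of nanoseconds while bringing far more bandwidth.

```python
# Illustrative only: why extra latency cycles can be hidden by higher clocks.
# The cycle counts and clock speeds below are invented for the example.

def access_latency_ns(latency_cycles, clock_mhz):
    return latency_cycles / clock_mhz * 1000.0

older = access_latency_ns(latency_cycles=9,  clock_mhz=1000)   # ~9.0 ns
newer = access_latency_ns(latency_cycles=15, clock_mhz=1800)   # ~8.3 ns

print(f"9 cycles @ 1000MHz  -> {older:.1f} ns")
print(f"15 cycles @ 1800MHz -> {newer:.1f} ns")
# More cycles of latency, yet a similar absolute delay, while the faster
# clock also brings much higher bandwidth.
```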
April 9, 2008 5:06:41 PM

GrapeApe knows his stuff. He'll make you eat your eWang. :p 
April 9, 2008 5:09:43 PM

Radnor, I think you're missing the point about clockspeed. Nobody is claiming that Netburst was good; we're simply stating that a 3GHz P4 is faster than a 2GHz P4. You are correct in thinking that Netburst processors have poor IPC, but clock a P4 high enough and it will outperform a G5 or Athlon64.

Also, the Netburst architecture was originally designed to scale to VERY high frequencies. That is why it had such a deep pipeline. Intel did not foresee the heat and power consumption issues that they ran into, which ultimately forced them to revert to the older but more efficient P6 architecture.

As for the PPU, the concept of a physics processor will live on through software (CUDA) implementations on Nvidia GPUs, but dedicated PPU hardware is dead and will likely never come back.
April 9, 2008 5:15:04 PM

TheGreatGrapeApe said:
Not compared to GDDR4, but a little versus GDDR3; however, the increase in latency wouldn't justify the increase in cost and lower yields of a more complex memory interface and PCB, IMO.

Also, while latency does play a part at similar clocks, the difference at the start could favour GDDR3, but once GDDR5 ramps up the latency factor will be eclipsed by speed.

The main difference IMO is the architecture. The ringbus is less affected by latency because it is very efficient, has multiple stops and keeps things in flight, so while it would also benefit from lower latency, it isn't compounded by latency. The current traditional GeForce architecture would be greatly affected by latency, but also by bandwidth (there are a lot of back ends there, and more if either of those numbers above are true); bandwidth is likely to be king for feeding all the TMUs and ROPs.

At launch they are both playing to their strengths somewhat, but as the memory and cards mature, I think that if they don't support GDDR5 internally they will be very limited longer term. But if it's only a bridge solution anyway, then maybe that doesn't matter too much, as long as the GT200 arrives with GDDR5 support by the fall.


Well I ran into this...



Check out the 3870 using GDDR4. It is actually slower than the 3850, which also makes some games slower.

April 9, 2008 5:24:06 PM

radnor said:
Good God, you peeps really took the e-peen comment to heart. Keep reading; I've already talked a lot in other posts.


Which doesn't mean anything; you may have 'talked' a lot, but none of it so far is worth reading in this thread, nor does it relate to the topic at hand.

Quote:
Really? Check how the P4 ...


You're missing the point. Typing more doesn't change that; start reading and thinking instead and you'll understand why what you say about Netburst is irrelevant, and what you're ranting about is equally irrelevant. GDDR3 vs GDDR5 does not experience the same issue.

Quote:
...NetBurst sucked from day one.


No one is defending Netburst, but you obviously have a big burr in your saddle over it.
Move on and focus on the graphics; I don't care about the old marchitecture, and you miss the big blatant point about MHz versus GHz.

Quote:
The PPU is a wildcard ready to be pulled. At the moment Nvidia doesn't have a competitor, so it doesn't need to show its teeth to stay on top.


The PPU is dead, and nV has already said that all of its functionality will be handled by GPGPU through CUDA, as has been mentioned many times in this thread, so there is no wildcard. And in the physics game they are the underdog, and their competition is Intel, the largest chip company in the world, so your statement ignores both the published facts and even logic.

If you have nothing to add to the original topic, then don't bother with the tangents. For this discussion speed vs bandwidth is relevant; PPUs are not.

April 9, 2008 5:40:49 PM

marvelous211 said:
Well I ran into this...

Check out the 3870 using GDDR4. It is actually slower than the 3850, which also makes some games slower.


Interesting; needs more study. I forgot the initial HD2900 was GDDR3; I was looking at its initial benchies. Gotta check whether B3D maybe looked a little deeper.

Hmm, gonna have to look deeper into that.

Another good source would be the HD2600 series, but its RBE requirements would be much lower.

April 9, 2008 5:42:01 PM

Look for this in nVidia's entrance into the Console market -- by themselves this time -- they like the profit numbers Microsoft are finally seeing out of their XBOX360.
April 9, 2008 7:05:32 PM

Unfortunately there are no fill rates or sub-tests other than the theoretical ones, but The Tech Report did compare a few of the various HD3870 and 3850 cards, and the results still put the higher-clocked ones out front; unfortunately the numbers are muddied a bit by those cards also having higher-clocked cores:

http://techreport.com/articles.x/14120/2

But it's interesting to see the overall impact. I doubt it would be enough to make up for the speed delta long term, but short term both should be playing to their strengths, although of course with slightly different price benefits. And if the RV770 is aimed at the middle and the D10U-30 is aimed at the top, then as long as the performance is there the cost structure even suits their targets, especially if the R700 requires a more complex PCB rather than a dual-die, single-package solution.
April 9, 2008 7:08:17 PM

V8VENOM said:
Look for this in nVidia's entrance into the Console market -- by themselves this time -- they like the profit numbers Microsoft are finally seeing out of their XBOX360.

:hello:  If you could provide a link or two elaborating on this I'd appreciate it.
April 9, 2008 7:38:39 PM

V8VENOM said:
Look for this in nVidia's entrance into the Console market -- by themselves this time -- they like the profit numbers Microsoft are finally seeing out of their XBOX360.


Wouldn't they have some sort of contractual restriction on doing so, seeing that they would be competing with Sony? I have no clue what I am talking about here, but it seems that since they helped make the guts of the PlayStation, they wouldn't be allowed to romp about with their own units apart from Sony.
April 9, 2008 8:07:44 PM

marvelous211 said:
My guess would be....

32 ROP, either 48 or 96 TMU, 192SP, 512bit memory controller

28 ROP, either 40 or 80 TMU, 160SP, 448bit memory controller.



If I have to guess:

256-bit and 112-bit... they are so much cheaper and easier to produce :-)


April 9, 2008 8:32:38 PM

hannibal said:
If I have to guess:

256-bit and 112-bit... they are so much cheaper and easier to produce :-)


Not quite. GT200 is the real next generation, not like G92, which was just a rehash of the old gen.

With 256-bit the ROP count is limited to 16, and then what? Raise clock speeds on a better process? Not to mention the memory count. With 1024MB of VRAM it makes sense to put in eight 64-bit memory controllers, one for every 4 ROPs. The SPs handle the TMUs.

It's just logical, by my estimate. 256-bit? And where the hell did you get a 112-bit memory bus? That doesn't make sense. :pt1cable: 
April 9, 2008 10:08:34 PM

Yeah, and I don't see much advantage to 112-bit versus 128-bit; heck, even the module distribution would be insane and would involve much smaller memory crossbars to allow such a mathematical distribution (well, without disabling part of a 128-bit setup for no benefit).
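
For what it's worth, the arithmetic backs that up; a quick check, assuming standard 32-bit or 64-bit GDDR channels (the channel widths are an assumption, not something from the thread):

```python
# Why a 112-bit bus is awkward: it is not a whole number of standard
# 32-bit or 64-bit GDDR channels (channel widths assumed).

for bus_bits in (112, 128, 448, 512):
    for channel_bits in (32, 64):
        channels = bus_bits / channel_bits
        note = "ok" if channels.is_integer() else "fractional -> impractical"
        print(f"{bus_bits:3d}-bit / {channel_bits}-bit channels = {channels:5.2f}  ({note})")
```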
April 10, 2008 4:41:44 AM

Nvidia raises memory bus width in steps of 64 bits, not 16.

Just look at their video card lineup. Even the 8800GS is 3x 64-bit on 12 ROPs, and the 8800GTX is 6x 64-bit on 24 ROPs.
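
If the G80/G92 pattern of one 64-bit controller plus 4 ROPs per memory partition carries over (that carry-over is an assumption; the GT200 row is the speculation from this thread), the lineup and the 512-bit guess line up like this:

```python
# Partition math, assuming one 64-bit controller + 4 ROPs per partition
# as on G80/G92. The GT200 entry is speculation from this thread.

cards = [
    ("8800GS",             3),
    ("8800GT/GTS (G92)",   4),
    ("8800GTX",            6),
    ("GT200 (speculated)", 8),
]

for name, partitions in cards:
    bus_bits = partitions * 64
    rops = partitions * 4
    print(f"{name:20s} {partitions} partitions -> {bus_bits:3d}-bit bus, {rops:2d} ROPs")

# 8 partitions would give the rumoured 512-bit bus, 32 ROPs and, with
# 128MB per partition, the 1024MB frame buffer from the Fudzilla piece.
```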
April 10, 2008 6:03:55 AM

Let's wait and see; put this in your "someone wrote this once" category.

There is nothing Sony owns on a new design, unless nVidia worked out a deal with Sony to prevent them from entering the market -- which is highly unlikely.

Just research nVidia's CEO -- it'll be clear what they plan to do. Ageia isn't just a waste of money. And it is technically "not a CPU".

I think what finalized the direction was that Microsoft is now profitable with the Xbox 360, and it has at least a 4-5 year life span. Toss in the direction the gaming market is going -- there is plenty of room/profit waiting for a capable company. This does NOT mean nVidia will be out of the PC GPU market; it just means they're putting their eggs in other baskets.

Besides, do you really think nVidia wants to take on Intel? I mean, seriously? The resource difference between the two companies is way too much for nVidia to handle.