Can you tell me more about 512-bit RAM and pin count?

Tags:
  • Graphics Cards
  • Bandwidth
  • Memory
  • Graphics
October 17, 2008 1:07:14 PM

I read about pin count:

"More Bandwidth Without More Pins
There are three ways to increase memory bandwidth in a system, generally. First, you can increase memory clock rate. This has its drawbacks—some memory types become error-prone beyond certain clock frequencies, and require more power to run at these high speeds. Memories that work at higher frequencies without losing coherency or using more power require substantial changes and new standards, and that's basically what GDDR5 is.

Second, you can increase the bus width. The Radeon HD 2900 (or "R600" chip) used a 512-bit memory interface, and the GeForce 8800 GTX and Ultra (or "G80" chip) used a 384-bit interface. This requires the chip to have a lot of pins for the memory interface, which is undesirable. No matter how small the lithography of your manufacturing process, you can only fit so many physical pins in so little space, so these wide memory busses with their many pins guarantee a large chip. Large chips means fewer per wafer, and higher costs. This isn't so bad if your chip was going to be really big and expensive anyway, but it's murder on those mainstream and budget graphics cards (which is why they all have 256-bit or 128-bit memory interfaces)."

Can't they make the pins smaller? We've had 256-bit RAM since the Radeon 9700 days. Today's pins should be much smaller!

"What about cost. This stuff is going to cost a fortune, right? Well, yes and no. High-speed GDDR3 and GDDR4 memory is certainly expensive. We're told to expect GDDR5 to initially cost something like 10–20% more than the really high speed GDDR3. Of course, you don't buy memories, you buy a graphics card. Though the memory will cost more, it will be offset somewhat in other places on the product you buy. You don't need as wide a memory interface which means a smaller chip with fewer pins. The board doesn't need to contain as many layers to support wider memory busses, and the trace wire design can be a bit more simple and straightforward, reducing board design costs. As production ramps up, GDDR5 could be as cost effective overall as GDDR3. It will only be appropriate for relatively high-end cards at first, but should be affordable enough for the $80–150 card segment over the next couple years."

What's to stop Nvidia from going 512-bit on GDDR5? And will 1024-bit memory be a reality soon? Can't they use some other interface besides pins? I've read GDDR5 is similar to quad-pumped memory.

I'm also wondering: will they ever quit crippling video cards (other than onboard) with 64-bit memory? Low end should be 128 bits, midrange 256, and high end 512! If Nvidia goes 512-bit on GDDR5, they will own ATI big time!


October 17, 2008 1:26:26 PM

I'm having quite a hard time understanding your post. (Mind linking to where you quoted that from?)

I believe making pins smaller than a certain size isn't possible, since they need to be soldered to the board; if they're too fragile they will break.

AFAIK nVidia can't use GDDR5 due to the GPU design. (Someone correct me if I am wrong.)

As far as crippling goes, it's economics.


Btw, the GTX 280 has a 512-bit interface on GDDR3. See:

http://www.nvidia.com/object/geforce_gtx_280.html
October 17, 2008 1:42:11 PM

If I were to guess, smaller pins would lead to more resistance.
October 17, 2008 2:44:14 PM


I see you have opened a new thread on the subject.
This extract from the article I linked you to in the other thread addresses your first point: "No matter how small the lithography of your manufacturing process, you can only fit so many physical pins in so little space." My understanding is that if you go below a certain size you start running into leakage and crosstalk issues, which basically means the wires are so close together that the current will leak out and/or signals will migrate across each other.

Shadow posted this:
AFAIK nVidia can't use GDDR5 due to the GPU design. (Someone correct me if I am wrong.)

That's my understanding also.

Your last point stands up to logic, but it comes down to economics (again, as Shadow has said). Intel works the same way: it's much more cost effective to work with one process and sell the chips with reduced functionality as lower-spec models.
It's as if you think they take perfectly good chips and cripple them just to make a lower-end card? In truth it would be madness to do that; the "crippled" chips are ones that have failed to pass testing fully, and so are released as lower-spec parts. That's why the clocks are set lower and why most of the time you can get a good overclock. It's much more beneficial for the companies to give a bit of performance away free to overclockers than to have a steady flow of returned chips because they tried to sell them too close to the point where they already knew stability issues would occur. The HD 4670 is made the way it is again down to cost: reducing it to 128-bit makes it cheaper to make and buy, and it competes at the performance point ATI is targeting.

Mactronix :) 
October 17, 2008 9:14:15 PM

mactronix said:
I see you have opened a new thread on the subject.
This extract from the article I linked you to in the other thread addresses your first point: "No matter how small the lithography of your manufacturing process, you can only fit so many physical pins in so little space." My understanding is that if you go below a certain size you start running into leakage and crosstalk issues, which basically means the wires are so close together that the current will leak out and/or signals will migrate across each other.

Shadow posted this:
AFAIK nVidia can't use GDDR5 due to the GPU design. (Someone correct me if I am wrong.)

That's my understanding also.

Your last point stands up to logic, but it comes down to economics (again, as Shadow has said). Intel works the same way: it's much more cost effective to work with one process and sell the chips with reduced functionality as lower-spec models.
It's as if you think they take perfectly good chips and cripple them just to make a lower-end card? In truth it would be madness to do that; the "crippled" chips are ones that have failed to pass testing fully, and so are released as lower-spec parts. That's why the clocks are set lower and why most of the time you can get a good overclock. It's much more beneficial for the companies to give a bit of performance away free to overclockers than to have a steady flow of returned chips because they tried to sell them too close to the point where they already knew stability issues would occur. The HD 4670 is made the way it is again down to cost: reducing it to 128-bit makes it cheaper to make and buy, and it competes at the performance point ATI is targeting.

Mactronix :) 



If they can make transistors smaller and smaller, why can't they find a way to do this with pins? I remember when the 9700 Pro was the first 256-bit card and the HD 2900 XT was the first 512-bit one. I am sure they will find a way somehow. That's the beauty of technology, it never stays still :)  People once said there was no way 512-bit would ever be possible. Today's technology makes 512-bit difficult, but it will get easier and easier.

Nvidia is designing new GPUs for GDDR5; we'll probably see it on the 350GTX.

I understand it'll always be easier and cheaper to make parts with fewer transistors and such. Midrange video cards aren't the same core as high-end cards with disabled portions; they are a completely different, simplified core with fewer of every kind of unit and fewer transistors. The HD 4850/70 has 800 shaders, for instance, while the HD 4650/70 has 320 shaders. Their lower series has even fewer, such as 40, 80, or 120.

But I am interested in seeing 512-bit RAM available on ALL high-end cards, and an end to 64-bit RAM, with 128-bit being the lowest!


October 17, 2008 10:43:12 PM

Why?

Current high-end cards are very rarely memory-bandwidth limited, and therefore wouldn't see much benefit from more bandwidth. Current cores just can't use more bandwidth than either 512-bit, ~1000 MHz GDDR3 (2000 effective) or 256-bit, 900-1000 MHz GDDR5 (3600-4000 effective) can supply, so adding 512-bit GDDR5 would only increase costs without actually giving a significant performance gain.

Keep in mind that what actually matters isn't bus width, it's bandwidth. Increasing bus width is just one of several ways of increasing bandwidth. You mention that a 256-bit bus was introduced on the 9700 Pro, but it had 310 MHz DDR, giving a total bandwidth of only 19.8 GB/s. Modern cards, such as the HD 4870, may still use a 256-bit bus, but there is a staggering difference in bandwidth thanks to the increases in memory clock and memory technology since then. The 4870 uses 900 MHz (3600 effective) GDDR5, allowing it to reach 115 GB/s on that same bus, an increase of nearly 6x.

That is why a wider bus really isn't needed. Memory technology has been advancing at a similar pace to the graphics cores, so as cores require more and more bandwidth, higher-clocked memory has been available to provide it. At some point a wider bus may actually be required, because memory may not be able to supply the needed bandwidth, but we are not at that point yet.
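Just to show the arithmetic, here's the same peak-bandwidth formula with the bus widths and clocks quoted above plugged in:

def bandwidth_gbps(bus_width_bits, effective_rate_mtps):
    # Peak bandwidth in GB/s: bytes per transfer times effective transfers per second.
    return (bus_width_bits / 8) * effective_rate_mtps / 1000.0

# Radeon 9700 Pro: 256-bit bus, 310 MHz DDR = 620 MT/s effective
print(bandwidth_gbps(256, 620))    # ~19.8 GB/s
# Radeon HD 4870: 256-bit bus, 900 MHz GDDR5 = 3600 MT/s effective
print(bandwidth_gbps(256, 3600))   # 115.2 GB/s -- almost 6x, on the same bus width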
October 18, 2008 3:48:13 AM

I don't think we'll see buses wider than 512 bits. I just don't think it's feasible. Graphics chips can't get any bigger, in a literal sense. They'll get more transistors and more ROPs/TMUs/shaders, etc., but I don't believe we're going to see many (if any) dies bigger than the GTX 280's, and hence no buses wider than 512 bits.

I can't be sure, but I don't think the HD 4870 is anywhere near bandwidth limited. The HD 4850 doesn't appear to be either, because if it were, the HD 4870 would show more than a 20% increase in speed from its roughly double bandwidth (it has a 20% higher core clock and gains less than 20%). So I think memory will keep increasing both in clock and in data per clock (à la DDR and GDDR5), and hence, again, no buses wider than 512 bits.
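To put rough numbers on that scaling argument (the HD 4850 clock and bandwidth figures here are my approximations, not from this thread):

# If a card were bandwidth limited, performance should roughly track the bandwidth increase;
# if it only tracks the core clock, the core is the bottleneck.
hd4850_core_mhz, hd4870_core_mhz = 625.0, 750.0    # approximate core clocks
hd4850_bw_gbps, hd4870_bw_gbps = 64.0, 115.0       # approximate peak memory bandwidth

core_scaling = hd4870_core_mhz / hd4850_core_mhz   # ~1.20 -> 20% higher core clock
bw_scaling = hd4870_bw_gbps / hd4850_bw_gbps       # ~1.80 -> close to double the bandwidth

# Reported gains of about 20% or less track the core clock, not the bandwidth,
# which is why the HD 4850/4870 don't look bandwidth limited.
print(core_scaling, bw_scaling)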
October 18, 2008 7:31:27 AM


I don't know what it is you want to hear? 1. It physically doesn't fit. 2. It's not needed. 3. It's not economical.
And I never said all graphics cards are the same core; I said the chips that fail testing are released as lower spec. As you seem to know this, I would have thought you would have realised that, or are you just trying to be funny?
I'm sorry if this is coming across a bit strong, but people keep telling you why it's not used today. Yes, in the future they may well end up using 1024-bit, but right now it's just not viable on the fabrication processes they are using.

Mactronix