Intel's FB-DIMM, any kind of RAM will work for your controller

G

Guest

Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Intel is introducing a type of DRAM called FB-DIMMs (fully buffered).
Apparently the idea is to be able to put any kind of DRAM technology (e.g.
DDR1 vs. DDR2) behind a buffer without having to worry about redesigning
your memory controller. Of course this intermediate step will add some
latency to the performance of the DRAM.

It is assumed that this is Intel's way of finally acknowledging that it has
to start integrating DRAM controllers onboard its CPUs, like AMD already
does. Of course, adding latency to the interface is exactly the opposite of
the main advantage of integrating the DRAM controller in the first place.

http://arstechnica.com/news/posts/1082164553.html

Yousuf Khan

--
Humans: contact me at ykhan at rogers dot com
Spambots: just reply to this email address ;-)
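The scheme described above amounts to putting an adapter between the memory controller and the DRAM. A toy sketch of the idea, in Python (all class names and timings here are illustrative, not Intel's actual design):

```python
# Toy model of the FB-DIMM idea: the controller speaks one fixed protocol
# to a buffer, and the buffer translates to whatever DRAM sits behind it,
# at the cost of an extra hop of latency. All names/timings are made up.

class DDR1Chip:
    ACCESS_NS = 50                      # illustrative access time
    def read(self, addr):
        return ("ddr1", addr), self.ACCESS_NS

class DDR2Chip:
    ACCESS_NS = 40
    def read(self, addr):
        return ("ddr2", addr), self.ACCESS_NS

class FullyBufferedDIMM:
    """Hides the DRAM technology from the controller, adding latency."""
    BUFFER_NS = 5                       # the intermediate step's cost
    def __init__(self, dram):
        self.dram = dram
    def read(self, addr):
        data, dram_ns = self.dram.read(addr)
        return data, dram_ns + self.BUFFER_NS

class MemoryController:
    """Never redesigned, no matter which DRAM generation is installed."""
    def __init__(self, dimm):
        self.dimm = dimm
    def read(self, addr):
        return self.dimm.read(addr)

# The same controller code drives either DRAM generation:
old = MemoryController(FullyBufferedDIMM(DDR1Chip()))
new = MemoryController(FullyBufferedDIMM(DDR2Chip()))
print(old.read(0x100))   # DRAM time plus buffer time: 55 ns total
print(new.read(0x100))   # 45 ns total, same controller untouched
```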
 

A buffer is meant to reduce overall latency, not to increase it AFAIK.


On Sun, 18 Apr 2004 10:48:44 GMT, "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote:

>Intel is introducing a type of DRAM called FB-DIMMs (fully buffered).
>Apparently the idea is to be able to put any kind of DRAM technology (e.g.
>DDR1 vs. DDR2) behind a buffer without having to worry about redesigning
>your memory controller. Of course this intermediate step will add some
>latency to the performance of the DRAM.
>
>It is assumed that this is Intel's way of finally acknowledging that it has
>to start integrating DRAM controllers onboard its CPUs, like AMD does
>already. Of course adding latency to the interfaces is exactly the opposite
>of what is the main advantage of integrating the DRAM controllers in the
>first place.
>
>http://arstechnica.com/news/posts/1082164553.html
>
> Yousuf Khan
 

<geno_cyber@tin.it> wrote in message
news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
> A buffer is meant to reduce overall latency, not to increase it AFAIK.

Not necessarily, a buffer is also meant to increase overall bandwidth, which
may be done at the expense of latency.

Yousuf Khan
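That bandwidth-for-latency trade can be put in toy numbers: batching requests behind a buffer amortizes the fixed cost of a bus transaction, so throughput rises while the worst-case wait for any single request grows. A sketch (the constants are invented, not real DRAM timings):

```python
# Toy model: a buffer that batches requests raises throughput at the
# expense of per-request latency. All numbers are illustrative only.

SETUP_NS = 40          # fixed cost to start a bus transaction
XFER_NS = 10           # time to move one request's data

def unbuffered(n):
    """Every request pays the setup cost alone."""
    latency = SETUP_NS + XFER_NS
    total = n * latency
    return latency, n / total            # (latency, requests per ns)

def buffered(n, batch):
    """Requests queue in a buffer; a full batch shares one setup cost."""
    total = (n // batch) * (SETUP_NS + batch * XFER_NS)
    worst_latency = SETUP_NS + batch * XFER_NS   # last byte of a batch
    return worst_latency, n / total

lat_u, thr_u = unbuffered(100)
lat_b, thr_b = buffered(100, batch=10)
print(lat_u, thr_u)   # 50 ns per request, 0.02 requests/ns
print(lat_b, thr_b)   # 140 ns worst case, but ~0.071 requests/ns
```

In this toy model the buffered channel moves roughly 3.5x the data per unit time, while any individual request can take almost 3x longer, which is the tradeoff being argued about in this thread.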
 

On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote:

><geno_cyber@tin.it> wrote in message
>news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>
>Not necessarily, a buffer is also meant to increase overall bandwidth, which
>may be done at the expense of latency.
>

Cache on a CPU is not meant to increase bandwidth but to decrease the overall
latency of retrieving data from slower RAM. More cache-like buffers in the path
thru the memory controller can only improve latency, unless there are some
serious design flaws. I've never seen a CPU that gets slower at accessing data
when it can cache and has a good hit/miss ratio.
 

<geno_cyber@tin.it> wrote in message
news:lft5801qjivarf2mhfoiko04riq02srkp5@4ax.com...
> On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan"
> <news.tally.bbbl67@spamgourmet.com> wrote:
>
>><geno_cyber@tin.it> wrote in message
>>news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
>>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>>
>>Not necessarily, a buffer is also meant to increase overall bandwidth,
>>which
>>may be done at the expense of latency.

> Cache on CPU is not meant to increase bandwidth but to decrease overall
> latency to retrieve data
> from slower RAM.

Yes, but not by making the RAM any faster; rather, by avoiding RAM accesses.
We add cache to the CPU because we admit our RAM is slow.

> More cache-like buffers in the path thru the memory controller can only
> improve
> latency, unless there's some serious design flaws.

That makes no sense. Everything between the CPU and the memory will
increase latency. Even caches increase worst case latency because some time
is spent searching the cache before we start the memory access. I think
you're confused.

> I never seen a CPU that gets slower in accessing data when it can cache
> and has a good hit/miss
> ratio.

Except that we're talking about memory latency due to buffers. And by
memory latency we mean the most time it will take between when we ask the
CPU to read a byte of memory and when we get that byte.

DS
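The distinction David is drawing, average versus worst-case latency with a cache in the path, can be put in numbers (illustrative timings, not any real part):

```python
# A cache in the path improves *average* latency when the hit rate is
# good, but the *worst case* (a miss) is slower than raw RAM, because
# the cache is searched before the memory access starts. Made-up timings.

RAM_NS = 100      # raw memory access time
LOOKUP_NS = 3     # time to search the cache; a hit is served in this time

def average_latency(hit_rate):
    return hit_rate * LOOKUP_NS + (1 - hit_rate) * (LOOKUP_NS + RAM_NS)

def worst_case_latency():
    return LOOKUP_NS + RAM_NS   # miss: lookup wasted, then the full access

print(average_latency(0.95))   # ~8 ns on average, far better than raw RAM
print(worst_case_latency())    # 103 ns, strictly worse than raw RAM
```

Both posters are right about different quantities: geno's "can only improve latency" holds for the average at a good hit rate, David's point holds for the worst case.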
 

On Sun, 18 Apr 2004 21:43:19 GMT, geno_cyber@tin.it wrote:

>On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote:
>
>><geno_cyber@tin.it> wrote in message
>>news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
>>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>>
>>Not necessarily, a buffer is also meant to increase overall bandwidth, which
>>may be done at the expense of latency.
>>
>
>Cache on CPU is not meant to increase bandwidth but to decrease overall latency to retrieve data
>from slower RAM. More cache-like buffers in the path thru the memory controller can only improve
>latency, unless there's some serious design flaws.
>I never seen a CPU that gets slower in accessing data when it can cache and has a good hit/miss
>ratio.

You're using "buffer" interchangeably with "cache" - a mistake our Yousuf
would never, ever make. Caches and their effects aren't pertinent to a
discussion of the buffering technique found on Fully Buffered DIMMs and their
effects on latency and bandwidth...

/daytripper (hth ;-)
 

On Sun, 18 Apr 2004 22:32:32 GMT, daytripper <day_trippr@REMOVEyahoo.com> wrote:

>On Sun, 18 Apr 2004 21:43:19 GMT, geno_cyber@tin.it wrote:
>
>>On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote:
>>
>>><geno_cyber@tin.it> wrote in message
>>>news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
>>>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>>>
>>>Not necessarily, a buffer is also meant to increase overall bandwidth, which
>>>may be done at the expense of latency.
>>>
>>
>>Cache on CPU is not meant to increase bandwidth but to decrease overall latency to retrieve data
>>from slower RAM. More cache-like buffers in the path thru the memory controller can only improve
>>latency, unless there's some serious design flaws.
>>I never seen a CPU that gets slower in accessing data when it can cache and has a good hit/miss
>>ratio.
>
>You're using "buffer" interchangeably with "cache" - a mistake our Yousuf
>would never, ever make. Caches and their effects aren't pertinent to a
>discussion of the buffering technique found on Fully Buffered DIMMs and their
>effects on latency and bandwidth...

FB-DIMMs are supposed to work with an added cheap CPU or DSP with some fast
RAM. I doubt embedded DRAM on-chip, simply due to higher costs, but you never
know how cheap they could make a product if they really want to, and no
expensive DSP or CPU is needed there anyway for the FB-DIMM to work.
I know how both caches and buffers work (circular buffering, FIFO buffering
and so on), and because they're sometimes used to achieve similar results
(like on DSP architectures, where buffering is key to performance with proper
assembly code...), it's not that wrong to refer to a cache as a buffer: even
if the mechanism is quite different, the goal is almost the same. The truth is
that both ways of making data faster to retrieve are useful, and a proper
combination of these techniques can achieve higher performance at both the
bandwidth and latency levels.
 

On Mon, 19 Apr 2004 00:38:16 GMT, geno_cyber@tin.it wrote:

>On Sun, 18 Apr 2004 22:32:32 GMT, daytripper <day_trippr@REMOVEyahoo.com> wrote:
>
>>On Sun, 18 Apr 2004 21:43:19 GMT, geno_cyber@tin.it wrote:
>>
>>>On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote:
>>>
>>>><geno_cyber@tin.it> wrote in message
>>>>news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
>>>>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>>>>
>>>>Not necessarily, a buffer is also meant to increase overall bandwidth, which
>>>>may be done at the expense of latency.
>>>>
>>>
>>>Cache on CPU is not meant to increase bandwidth but to decrease overall latency to retrieve data
>>>from slower RAM. More cache-like buffers in the path thru the memory controller can only improve
>>>latency, unless there's some serious design flaws.
>>>I never seen a CPU that gets slower in accessing data when it can cache and has a good hit/miss
>>>ratio.
>>
>>You're using "buffer" interchangeably with "cache" - a mistake our Yousuf
>>would never, ever make. Caches and their effects aren't pertinent to a
>>discussion of the buffering technique found on Fully Buffered DIMMs and their
>>effects on latency and bandwidth...
>
>FB-DIMMs are supposed to work with an added cheap CPU or DSP with some fast RAM, I doubt embedded
>DRAM on-chip simply due to higher costs but you never know how much they could make a product cheap
>if they really want to and no expensive DSP or CPU is needed there anyway for the FB-DIMM to work.
>I know how both caches and buffers work (circular buffering, FIFO buffering and so on) and because
>they're used to achieve similar results sometimes (like on DSPs architectures where buffering is a
>key to performance with proper assembly code...) , it's not that wrong to refer to a cache as a
>buffer even if its mechanism it's quite different the goal it's almost the same. The truth is that
>both ways of making bits data faster to be retrieved are useful and a proper combination of these
>techniques can achieve higher performance both at the bandwidth and latency levels.

Ummm.....no. You're still missing the gist of the discussion, and confusing
various forms of caching with the up and down-sides of using buffers in a
point-to-point interconnect.

Maybe going back and starting over might help...

/daytripper
 

geno_cyber@tin.it wrote:

>FB-DIMMs are supposed to work...

Do you ever get it right, Geno? I don't think I've seen it...
 

"Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote in message
news:A1zgc.114205$2oI1.47233@twister01.bloor.is.net.cable.rogers.com...
> <geno_cyber@tin.it> wrote in message
> news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
> > A buffer is meant to reduce overall latency, not to increase it AFAIK.
>
> Not necessarily, a buffer is also meant to increase overall bandwidth,
> which may be done at the expense of latency.

This particular buffer reduces the DRAM interface pinout by a factor
of 3 for CPU chips having the memory interface on-chip (such as
Opteron, the late and unlamented Timna, and future Intel CPUs). This
reduces the cost of the CPU chip while increasing the cost of the DIMM
(because of the added buffer chip).

And yes, the presence of the buffer does increase the latency.

There are other tradeoffs, the main one being the ability to add lots
more DRAM into a server. Not important for desktops. YMMV.
 

On Mon, 19 Apr 2004 07:33:46 -0500, chrisv <chrisv@nospam.invalid> wrote:

>geno_cyber@tin.it wrote:
>
>>FB-DIMMs are supposed to work...
>
>Do you ever get it right, Geno? I don't think I've seen it...


-------

http://www.faqs.org/docs/artu/ch12s04.html

Caching Operation Results
Sometimes you can get the best of both worlds (low latency and good throughput) by computing
expensive results as needed and caching them for later use. Earlier we mentioned that named reduces
latency by batching; it also reduces latency by caching the results of previous network transactions
with other DNS servers.

------
 

On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote:

><geno_cyber@tin.it> wrote in message
>news:u34580ltlccpd5p5e47mjv9j2c4lk4b4d9@4ax.com...
>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>
>Not necessarily, a buffer is also meant to increase overall bandwidth, which
>may be done at the expense of latency.
>
> Yousuf Khan
>

http://www.analog.com/UploadedFiles/Application_Notes/144361534EE157.pdf


As you can see, this Analog Devices DSP uses a mixed buffering/caching
technique to improve latency in the best case. Obviously, if the caching
doesn't work and the data is not locally available, the latency has to be
higher, because you have to get the data from slower memory; but when the data
is locally available, the latency can be reduced to approximately zero in some
cases.
 

On Mon, 19 Apr 2004 07:33:46 -0500, chrisv <chrisv@nospam.invalid> wrote:

>geno_cyber@tin.it wrote:
>
>>FB-DIMMs are supposed to work...
>
>Do you ever get it right, Geno? I don't think I've seen it...

It's a lost cause...
 

rush


geno_cyber@tin.it wrote :

> FB-DIMMs are supposed to work with an added cheap CPU or DSP with
> some fast RAM, I doubt embedded DRAM on-chip simply due to higher
> costs but you never know how much they could make a product cheap
> if they really want to and no expensive DSP or CPU is needed there
> anyway for the FB-DIMM to work. I know how both caches and buffers
> work (circular buffering, FIFO buffering and so on) and because
> they're used to achieve similar results sometimes (like on DSPs
> architectures where buffering is a key to performance with proper
> assembly code...) , it's not that wrong to refer to a cache as a
> buffer even if its mechanism it's quite different the goal it's
> almost the same. The truth is that both ways of making bits data
> faster to be retrieved are useful and a proper combination of
> these techniques can achieve higher performance both at the
> bandwidth and latency levels.

a cache is a form of a buffer
a buffer is not necessarily a cache; imagine a one-byte buffer, would you
call it a cache?

Pozdrawiam.
--
RusH //
http://pulse.pdi.net/~rush/qv30/
Like ninjas, true hackers are shrouded in secrecy and mystery.
You may never know -- UNTIL IT'S TOO LATE.
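RusH's distinction can be made concrete in a few lines. A sketch, assuming nothing about any real hardware: a FIFO buffer only holds data in flight, while a cache indexes data by address so repeats can be served locally, and RusH's one-byte case corresponds to a one-entry cache with a hopeless hit rate.

```python
from collections import OrderedDict, deque

class FIFOBuffer:
    """A buffer: holds items in transit, in order; knows nothing of reuse."""
    def __init__(self):
        self.q = deque()
    def push(self, item):
        self.q.append(item)
    def pop(self):
        return self.q.popleft()

class TinyCache:
    """A cache: remembers the last `size` values by address."""
    def __init__(self, size):
        self.size = size
        self.store = OrderedDict()
    def get(self, addr, fetch):
        if addr in self.store:
            self.store.move_to_end(addr)         # keep recently used
            return self.store[addr], True        # hit
        value = fetch(addr)
        self.store[addr] = value
        if len(self.store) > self.size:
            self.store.popitem(last=False)       # evict the oldest entry
        return value, False                      # miss

fetch = lambda addr: addr * 2       # stand-in for a slow RAM access
cache = TinyCache(size=1)           # the degenerate one-entry "cache"
print(cache.get(1, fetch))   # (2, False) - miss, had to fetch
print(cache.get(1, fetch))   # (2, True)  - hit, served locally
print(cache.get(2, fetch))   # (4, False) - miss, and addr 1 is evicted
```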
 

RusH wrote:

> geno_cyber@tin.it wrote :
>
>
>>FB-DIMMs are supposed to work with an added cheap CPU or DSP with
>>some fast RAM, I doubt embedded DRAM on-chip simply due to higher
>>costs but you never know how much they could make a product cheap
>>if they really want to and no expensive DSP or CPU is needed there
>>anyway for the FB-DIMM to work. I know how both caches and buffers
>>work (circular buffering, FIFO buffering and so on) and because
>>they're used to achieve similar results sometimes (like on DSPs
>>architectures where buffering is a key to performance with proper
>>assembly code...) , it's not that wrong to refer to a cache as a
>>buffer even if its mechanism it's quite different the goal it's
>>almost the same. The truth is that both ways of making bits data
>>faster to be retrieved are useful and a proper combination of
>>these techniques can achieve higher performance both at the
>>bandwidth and latency levels.
>
>
> cache is a form of a buffer
> buffer is not necesarly a cache, imagine one byte buffer, would you
> call it a cache ?

Sure; you can think of it as a *really* small cache, which will
therefore have a terrible hit ratio, thus (most likely) increasing latency.

--
Mike Smith
 

On Sun, 18 Apr 2004 22:32:32 GMT, daytripper
<day_trippr@REMOVEyahoo.com> wrote:

>You're using "buffer" interchangeably with "cache" - a mistake our Yousuf
>would never, ever make. Caches and their effects aren't pertinent to a
>discussion of the buffering technique found on Fully Buffered DIMMs and their
>effects on latency and bandwidth...

Ah! I was getting quite confused by his statement about the buffer &
cache until you said this. Makes it perfectly clear now! :pppP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 

<geno_cyber@tin.it> wrote in message
news:udj7801kk4mg1ba4sdsh2fcuga90knoc8f@4ax.com...
> On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan"
> <news.tally.bbbl67@spamgourmet.com> wrote:
> As you can see this Analog Devices DSP uses a mixed technique of
> buffering/caching to improve latency in the best case scenario. Obviously
> if the caching doesn't work and the data it's not locally available then
> the latency has to be higher because you've to get data from slower memory
> but when the data is locally available the latency can be reduced down to
> zero approx in some cases.

In this case the buffer is used to eliminate DRAM interface differences when
going from one technology to a new one.

Yousuf Khan
 

On Mon, 19 Apr 2004 17:58:52 GMT, "Yousuf Khan"
<news.tally.bbbl67@spamgourmet.com> wrote:

><geno_cyber@tin.it> wrote in message
>news:udj7801kk4mg1ba4sdsh2fcuga90knoc8f@4ax.com...
>> On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan"
>> <news.tally.bbbl67@spamgourmet.com> wrote:
>> As you can see this Analog Devices DSP uses a mixed technique of
>> buffering/caching to improve latency in the best case scenario. Obviously
>> if the caching doesn't work and the data it's not locally available then
>> the latency has to be higher because you've to get data from slower memory
>> but when the data is locally available the latency can be reduced down to
>> zero approx in some cases.
>
>In this case the buffer is used to eliminate DRAM interface differences when
>going from one technology to a new one.

"But wait! There's more!"

The "FB" buffer on an FBdimm is also a bus repeater (aka "buffer") for the
"next" FBdimm in the chain of FBdimms that comprise a channel. The presence of
this buffer feature allows the channel to run at the advertised frequencies in
the face of LOTS of FBdimms on a single channel - frequencies that could not
be achieved if all those dimms were on the typical multi drop memory
interconnect (ala most multi-dimm SDR/DDR/DDR2 implementations).

Anyway...

I thought I knew the answer to this, but I haven't found it documented either
way: is the FB bus repeater simply a stateless signal buffer, thus adding its
lane-to-lane skew to the next device in the chain (which would imply some huge
de-skewing tasks for the nth FBdimm in - say - an 8 FBdimm implementation). Or
does the buffer de-skew lanes before passing the transaction on to the next
node?

/daytripper
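The chained-repeater arrangement daytripper describes has a simple first-order latency model: every buffer a transaction passes through adds a fixed delay in each direction. A sketch with invented timings:

```python
# Toy model of a daisy-chained FB-DIMM channel: each DIMM's buffer
# repeats the signal to the next, so many DIMMs can share one channel
# at full speed, but each hop adds latency. All timings are invented.

HOP_NS = 4       # pass-through delay of one buffer
DRAM_NS = 45     # access time at the target DIMM itself

def read_latency(dimm_index):
    """Request crosses dimm_index buffers outbound; reply crosses them back."""
    return 2 * dimm_index * HOP_NS + DRAM_NS

for i in (0, 3, 7):
    print(f"DIMM {i}: {read_latency(i)} ns")   # 45, 69, 101 ns
```

In this toy model the eighth DIMM in the chain pays more than double the latency of the first, which is why daytripper's de-skew question matters: any per-hop overhead compounds down the chain.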
 

On Mon, 19 Apr 2004 21:55:29 GMT, daytripper
<day_trippr@REMOVEyahoo.com> wrote:

>The "FB" buffer on an FBdimm is also a bus repeater (aka "buffer") for the
>"next" FBdimm in the chain of FBdimms that comprise a channel. The presence of
>this buffer feature allows the channel to run at the advertised frequencies in
>the face of LOTS of FBdimms on a single channel - frequencies that could not
>be achieved if all those dimms were on the typical multi drop memory
>interconnect (ala most multi-dimm SDR/DDR/DDR2 implementations).

Does this also mean that I could in theory put a very fast say 1.6Ghz
buffer on the FBDIMM and sell it as say DDR3-1.6Ghz because of that.
Even though the actual ram chips are only capable of say 200Mhz?
:pPpPpP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 

"The little lost angel" <a?n?g?e?l@lovergirl.lrigrevol.moc.com> wrote in
message news:4084b2f1.41363671@news.pacific.net.sg...
> Does this also mean that I could in theory put a very fast say 1.6Ghz
> buffer on the FBDIMM and sell it as say DDR3-1.6Ghz because of that.
> Even though the actual ram chips are only capable of say 200Mhz?

Wasn't there also some talk back in the early days of the K7 Athlon about
Micron coming out with an AMD chipset with a huge buffer built into its own
silicon. Micron went so far as to give it a cool codename, Samurai or Mamba
or something. But nothing else came of it after that.

Yousuf Khan
 

On Tue, 20 Apr 2004 05:21:19 GMT, a?n?g?e?l@lovergirl.lrigrevol.moc.com (The
little lost angel) wrote:

>On Mon, 19 Apr 2004 21:55:29 GMT, daytripper
><day_trippr@REMOVEyahoo.com> wrote:
>
>>The "FB" buffer on an FBdimm is also a bus repeater (aka "buffer") for the
>>"next" FBdimm in the chain of FBdimms that comprise a channel. The presence of
>>this buffer feature allows the channel to run at the advertised frequencies in
>>the face of LOTS of FBdimms on a single channel - frequencies that could not
>>be achieved if all those dimms were on the typical multi drop memory
>>interconnect (ala most multi-dimm SDR/DDR/DDR2 implementations).
>
>Does this also mean that I could in theory put a very fast say 1.6Ghz
>buffer on the FBDIMM and sell it as say DDR3-1.6Ghz because of that.
>Even though the actual ram chips are only capable of say 200Mhz?
>:pPpPpP

The short answer is: certainly.

The longer answer is: this is *exactly* the whole point of this technology: to
make heaps of s l o w but cheap (read: "commodity") drams look fast when
viewed at the memory channel, in order to accommodate large memory capacities
for server platforms (ie: I doubt you'll be seeing FBdimms on conventional
desktop machines anytime soon).

Like the similar schemes that have gone before this one, it sacrifices some
latency at the transaction level for beaucoup bandwidth at the channel level.

No doubt everyone will have their favorite benchmark to bang against this to
see if the net effect is positive...

/daytripper (Mine would use rather nasty strides ;-)
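daytripper's benchmark jab can be made concrete: streaming access amortizes the buffered channel's per-transaction latency across every byte of a fetched line, while a nasty stride pays the full latency per element touched. A toy model (all numbers invented):

```python
# Sequential reads reuse each fetched line; large strides touch a new
# line per element, exposing full per-transaction latency. Numbers invented.

LINE_BYTES = 64      # bytes delivered per memory transaction
LATENCY_NS = 100     # cost of one transaction on the buffered channel

def time_ns(n_bytes, stride):
    touches = n_bytes // stride
    lines = len({(i * stride) // LINE_BYTES for i in range(touches)})
    return lines * LATENCY_NS, touches

for stride in (4, 64, 256):
    total, touches = time_ns(65536, stride)
    print(f"stride {stride:3}: {total / touches:.2f} ns per element")
    # stride 4 -> 6.25 ns/element; stride 64 or 256 -> 100.00 ns/element
```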
 

On Tue, 20 Apr 2004 14:28:18 GMT, "Yousuf Khan"
<news.tally.bbbl67@spamgourmet.com> wrote:

>Wasn't there also some talk back in the early days of the K7 Athlon about
>Micron coming out with an AMD chipset with a huge buffer built into its own
>silicon. Micron went so far as to give it a cool codename, Samurai or Mamba
>or something. But nothing else came of it after that.

Hmm, don't remember that much. Only remember for sure what you forgot,
it was Samurai :p

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 

On Tue, 20 Apr 2004 14:28:18 GMT, "Yousuf Khan"
<news.tally.bbbl67@spamgourmet.com> wrote:
>"The little lost angel" <a?n?g?e?l@lovergirl.lrigrevol.moc.com> wrote in
>message news:4084b2f1.41363671@news.pacific.net.sg...
>> Does this also mean that I could in theory put a very fast say 1.6Ghz
>> buffer on the FBDIMM and sell it as say DDR3-1.6Ghz because of that.
>> Even though the actual ram chips are only capable of say 200Mhz?
>
>Wasn't there also some talk back in the early days of the K7 Athlon about
>Micron coming out with an AMD chipset with a huge buffer built into its own
>silicon. Micron went so far as to give it a cool codename, Samurai or Mamba
>or something. But nothing else came of it after that.

I believe they even built a prototype. It never made it to market,
though. Either way, the chipset in question just had an L3 cache (8MB
of eDRAM, if my memory serves), nothing really to do with the buffers
in Fully Buffered DIMMs. Buffer != cache.

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca