
Low-Latency DDR2?

Tags:
  • CPUs
  • DDR2
  • Latency
  • Intel
April 18, 2004 3:41:53 AM

A little off topic here, but it's good news for Intel, which will be using DDR2-533 as a performance solution in Grantsdale, paired with a 1066 MHz FSB:

Kingston is <A HREF="http://www.vr-zone.com/?i=685" target="_new">currently working on</A> a low-latency DDR2-533 variant. While normal DDR2-533 has 4-4-4 latencies - equivalent in absolute access delay to DDR1-400 at 3-3-3 - this new variant has 3-3-3 timings! :cool:
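That "absolute access delay" equivalence is just cycles divided by clock. A quick back-of-the-envelope sketch (the clock figures are standard DDR ratings, my own arithmetic, not from the linked article):

```python
# CAS delay in nanoseconds = CAS cycles / I/O clock (MHz) * 1000.
# A DDR module's I/O clock is half its transfer rate:
# DDR2-533 -> ~266 MHz, DDR-400 -> 200 MHz.
def cas_ns(cas_cycles, transfer_mt_s):
    io_clock_mhz = transfer_mt_s / 2
    return cas_cycles / io_clock_mhz * 1000

ddr2_533_cl4 = cas_ns(4, 533)   # ~15.0 ns, the same as...
ddr1_400_cl3 = cas_ns(3, 400)   # 15.0 ns
ddr2_533_cl3 = cas_ns(3, 533)   # ~11.3 ns for the low-latency part
```

So CL4 at DDR2-533 and CL3 at DDR-400 both come out to roughly 15 ns, while a CL3 DDR2-533 part would shave that to about 11 ns.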

Given that Intel still has time until June, doesn't it seem possible that some memory manufacturers might actually break JEDEC's standard and go beyond it? I mean, JEDEC was very conservative and shy about DDR400, but it's mainstream now with excellent latencies. No one expected DDR400 latencies to show such great numbers - it was expected that DDR2 would be needed to overcome the limitations of DDR1, and that DDR400 would only be a niche product. Everyone was worried that dual DDR400 wasn't a good idea.

Which brings us to the point: it seems likely that latency is an issue that will be sorted out. DDR2-533 @ 4-4-4 is almost commonplace now, and the memory manufacturers still have almost 2 months to improve their techniques. Remember DDR400 and its problems with timings like 3-3-3? And with Intel fueling Infineon's research, well... I don't think DDR2 will be such a disappointment at the start (all the more so if it indeed launches at DDR2-533 or DDR2-667 in June). Intel always gives the memory industry a hard time - which is good for us.

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>


April 18, 2004 5:35:46 AM

Well, DDR2 is still higher latency by design, really. So while it may be possible to produce DDR2-533 3-3-3 DIMMs, it should be easier to produce DDR1-500 or 550 DIMMs with even lower latency. And if you consider the price projections of regular DDR2 DIMMs (roughly 2x the price of DDR1), God knows what those low-latency variants will cost. Would anyone get excited by this if it costs 4x as much as equally or better performing DDR1?

> doesn't it seem possible that some memory manufacturers
>might actually break JEDEC's standard and go beyond?

Sure, but at a cost: 1) limited compatibility, 2) high price. That wouldn't be too bad if it were matched with "3) best performance", but that remains to be seen, so I'm still not overly excited by this. It still seems to me DDR1 is the way to go for most of this year at least, but let's wait for benchmarks and actual shipping products/prices before making a final judgement.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 18, 2004 5:32:41 PM

Hm, you're right, we should wait and see.

In any case, several memory manufacturers have announced the readiness of even DDR2-667. And at least TwinMOS reported getting DDR2-667 @ 4-4-4 instead of the expected 5-5-5. So... well, let's just sit back and see what happens.

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
April 19, 2004 10:29:01 AM

You have to understand something: a DDR2-533 3-3-3 part has a ''real'' internal latency of 1.5-1.5-1.5 at 133 MHz, while the system sees the I/O buffer at 266 MHz with twice the latency. The best you can ask for is DDR2 at 4-4-4 with no additive latency (AL), with internal frequencies ranging from 100 MHz to 300 MHz. Samsung should be able to do it; they have worked on DDR2 for a while.
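The cycle bookkeeping in the post above can be sketched like this (a toy illustration, assuming DDR2-533's internal array clock is 133 MHz and its I/O buffer clock 266 MHz):

```python
# CAS latency is counted at the I/O buffer clock. Re-expressed at the
# slower internal array clock, the same absolute delay is half as many
# cycles, since DDR2's core array runs at half the I/O buffer clock.
def to_internal_cycles(cas_io_cycles, io_mhz, internal_mhz):
    return cas_io_cycles * internal_mhz / io_mhz

print(to_internal_cycles(3, 266, 133))   # -> 1.5
```

In other words, 3-3-3 at the 266 MHz I/O clock and 1.5-1.5-1.5 at the 133 MHz array clock describe the same absolute delay, just counted in different cycle lengths.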

We should have stuck with Rambus.

i need to change useur name.<P ID="edit"><FONT SIZE=-1><EM>Edited by juin on 04/19/04 06:46 AM.</EM></FONT></P>
April 19, 2004 12:45:31 PM

Quote:
DDR-2 533 3-3-3 have a ''real'' internal latency of 1.5-1.5-1.5 at 133 mghz

How's that? Is the latency 1.5-1.5-1.5 or 3-3-3 (i.e., how long does the CPU have to wait)?
Quote:
We should have stick with rambus.

It's the 16-bit memory interface that pushes me away from this. EDO, SDRAM, DDR SDRAM and every other type of DRAM that ever existed uses a 64-bit bus. That means double the bandwidth (on single channel) and quadruple (on dual channel). Your only argument to sustain this is that RDRAM is dual channel.
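The width argument is easy to put in numbers. A sketch using peak ratings only (my own arithmetic, not benchmark figures):

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * transfers per second.
def peak_gb_s(width_bits, mt_s):
    return width_bits / 8 * mt_s / 1000

rdram_pc800 = peak_gb_s(16, 800)   # 1.6 GB/s on one 16-bit channel
ddr_400     = peak_gb_s(64, 400)   # 3.2 GB/s on one 64-bit channel
dual_rdram  = 2 * rdram_pc800      # 3.2 GB/s -- needs two channels to match
```

So RDRAM needs a dual-channel setup just to match single-channel DDR-400 on peak bandwidth, which is the point being made here.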
April 20, 2004 2:48:28 AM

I don't think Rambus is offering any bandwidth-limited solutions. In fact, their newest XDR DRAM technology is meant to enable transfers of up to 100 GB/s, far more than the ~8-10 GB/s we'll be seeing in the next few years if we stick to DDR.

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
April 20, 2004 5:10:36 AM

But who wants to pay royalties to them? Even Intel got tired of it and fed up. I'd like to see FB-DIMMs used, though; I think they could match nicely with AMD's line especially, but again it's all about money. I'm still in wait-and-see mode, since I'm hearing all these manufacturers with word of DDR2 ready, but of course no one will be buying for a couple of months, so basically it will just sit and the price will stay high, maybe even a bit higher once the boards are out. It'll be interesting to watch.
April 20, 2004 7:12:28 AM

FB-DIMMs seem like a neat solution for servers, where capacity is a bigger concern than ultimate performance. For the desktop, I don't think it's a worthwhile technology, at least not as long as higher-density DIMMs enable me to install plenty of RAM using 2 or 3 DIMM slots. FB-DIMM would only increase latency there.

Even for servers, it seems much more useful for Intel than for AMD, because AMD doesn't really have a capacity/speed problem with its ODMC; 4 Opterons give you up to 32 DIMM slots, enough for 64 GB today, 128 GB tomorrow. The problem is more finding enough space on the motherboard than anything else. Even a 2-way Opteron could handle up to 16 DIMM slots. Intel OTOH, as well as the rest of the industry, would have a much harder time connecting 32 DIMM slots to a single or even two northbridge chips. FB-DIMM seems like a godsend for Xeon and Itanium.

The only advantage FB-DIMM would offer over plain vanilla DDR for Opteron is simpler motherboard designs, but already a 4-way Opteron board is a lot easier to do than a 4-way Xeon or Itanium board, thanks to HTT. I'm not expecting AMD to embrace FB-DIMM any time soon, and neither do I expect Intel to put it on the desktop anytime soon. But it will be great for Xeon and Itanium.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 20, 2004 8:09:47 PM

Quote:
How's that? Is the latency 1.5-1.5-1.5 or 3-3-3 (how long the CPU has to wait)?

Lostcircuits has an explanation of the I/O buffer and cell-array timing and the bus protocol on DDR2.

Quote:
It's the 16-bit memory interface that pushes me away from this. EDO, SDRAM, DDR SDRAM and every other type of DRAM that ever existed uses a 64-bit bus. That means double the bandwidth (on single channel) and quadruple (on dual channel). Your only argument to sustain this is that RDRAM is dual channel.

RDRAM uses 64 bits from the cell array's point of view but moves the data over a serial 16-bit bus. A bit like PCI Express, such a bus is more scalable and accepts more DIMMs (as there are more channels).

i need to change useur name.
April 20, 2004 8:12:05 PM

That's a misconception that has been carried around for a long time, just as many will argue that RDRAM or XDR has more latency, which is also a misconception or myth.

i need to change useur name.
April 20, 2004 8:16:41 PM

Well, Rambus does a good job of trying to wow users with 100 GB/s bandwidth.
April 20, 2004 8:23:11 PM

Quote:
Even for servers, it seems much more useful for Intel than for AMD, because AMD doesn't really have a capacity/speed problem with its ODMC; 4 Opterons give you up to 32 DIMM slots, enough for 64 GB today, 128 GB tomorrow. The problem is more finding enough space on the motherboard than anything else. Even a 2-way Opteron could handle up to 16 DIMM slots. Intel OTOH, as well as the rest of the industry, would have a much harder time connecting 32 DIMM slots to a single or even two northbridge chips. FB-DIMM seems like a godsend for Xeon and Itanium.

Any SGI or HP building block comes with more than 4 channels per CPU, or 8 channels per MX1. That's twice the number of channels.

That gives Intel an advantage either way: more speed, as each channel holds fewer DIMMs so higher-speed DIMMs can be used, or more DIMMs overall.

You should know that Opteron drops the RAM speed to 133 MHz if 5 to 8 DIMMs are used, while this does not happen with FB-DIMM. So an Opteron could have used large-capacity memory at the same speed, overcoming the natural latency increase that comes with scaling the CPU count.

i need to change useur name.
April 20, 2004 9:48:23 PM

>You should know that Opteron drop any ram speed at 133 mghz
>if 5 to 8 dimm is use

Sure. But having 8 memory controllers with just 4 CPUs still gives you an awful lot of bandwidth, low latency, AND high capacity, even if you have to drop the speed to DDR266.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 21, 2004 2:17:43 AM

No not really.

i need to change useur name.
April 22, 2004 12:45:10 PM

Quote:
Sure. But having 8 memory controllers with just 4 CPUs still gives you an awful lot of bandwidth, low latency, AND high capacity, even if you have to drop the speed to DDR266.

This is a little bit above my knowledge. Can anyone explain this to me?
April 22, 2004 1:00:21 PM

Quote:
I don't think Rambus is offering any bandwidth-limited solutions.

RDRAM PC800 had 1.6 GB/s of bandwidth. It was above SDRAM, and the 400 MHz frequency allowed you to sync it with Intel's 400 MHz FSB. I perfectly agree with you.
Quote:
In fact, their newest XDR-RAM technology is to enable transfers of up to 100GB/s, and much more than the ~8-10GB/s we'll be seeing in the next few years, if we stick to DDR.

I haven't heard of XDR yet, but any link is welcome and I might look into it.
Is XDR out already?
April 22, 2004 1:03:13 PM

Quote:
Lostcircuits has an explanation of the I/O buffer and cell-array timing and the bus protocol on DDR2.

Is this a site, a magazine, or what? (I have no idea.)
April 22, 2004 3:00:28 PM

Rather simple, really. Opteron has an on-die memory controller (two of them, or one dual-channel one, or one 144-bit one, depending on how you want to look at it). Therefore, in a multi-CPU system, each Opteron has its own dedicated memory bandwidth, so bandwidth scales with each added CPU. A 4-way Opteron with DDR400 has an aggregate memory bandwidth of 25.6 GB/s, whereas a typical 4-way Xeon using a Broadcom quad-channel chipset only has 6.4 GB/s. As you use more DIMMs per channel, however, the maximum speed for Opteron drops from DDR400 to DDR333 (if I'm not mistaken, could be DDR266 as well for maximum capacity), but that hardly changes the picture.
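The aggregate figures above work out as follows (a back-of-the-envelope sketch using standard peak per-channel ratings, not benchmarks):

```python
# Peak per-channel DDR bandwidth: 64-bit bus * transfer rate.
def channel_gb_s(mt_s, width_bits=64):
    return width_bits / 8 * mt_s / 1000

# 4-way Opteron: every CPU brings its own dual-channel DDR400 controller,
# so aggregate bandwidth scales with CPU count.
opteron_4way = 4 * 2 * channel_gb_s(400)   # 25.6 GB/s aggregate
# 4-way Xeon: all four CPUs share one northbridge; the quad-channel
# chipset's 6.4 GB/s matches the 800 MT/s shared FSB it feeds.
xeon_4way = channel_gb_s(800)              # 6.4 GB/s total, shared
```

The point of the comparison: the Opteron figure grows with every socket, while the shared-FSB figure is a fixed ceiling no matter how many CPUs are behind it.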

As for capacity, same thing. Each Opteron can address up to 8 DIMMs, so a 4-way Opteron could handle up to 32 DIMMs, good for 64 GB (not that you'll find many boards supporting this, though; kind of overkill). A 4-way Xeon MP could only handle half as much.

As for latency, obviously having your memory controller on-die drastically reduces latency compared to having to go off-chip over a slow FSB to an external memory controller shared by several other CPUs. Of course, Opteron also has to go "off die" when it needs access to memory connected to one of the other CPUs, but even there HyperTransport + ODMC provides a much faster solution than a shared FSB + external MC. The worst-case scenario for a 4-way Opteron is still far better than the best-case scenario (well, they are all equal) for Xeon MP. Having a NUMA-aware OS will also reduce the number of times memory has to be fetched over HTT instead of locally, and using local memory, Opteron can cut memory latency to a third or less of the best Xeon chipsets'.

So, while FB-DIMM may offer Intel a solution for improving memory capacity and/or simplifying the creation of memory controllers/motherboards with more than the current number of channels, it really offers a solution to a problem Opteron hardly has, and IMHO it's not nearly as efficient or elegant as Opteron's topology. The two technologies are not necessarily mutually exclusive, but I don't see much benefit in an Opteron + FB-DIMM solution: simpler motherboards at the expense of lower performance (higher latency).

The only situation where FB-DIMM would offer something for Opteron is single, maybe dual-CPU servers where you need an excessive amount of memory (more than 8 or 16 DIMMs, respectively). I'm not sure there is any demand for such machines. If you need more than 32 GB of RAM (64 soon, when 4 GB DIMMs hit the shelves), I really doubt you'll want a dual-CPU machine, especially considering the cost of such ungodly amounts of server RAM, and the performance you'll be missing. 32 GB of registered ECC RAM costs $16,000 (Crucial). Buy it from HP/IBM/Dell/Sun, and it will be 2x to 4x as much.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 27, 2004 3:50:05 AM

And the same Itanium systems have 4x4 channels, for a max of 2.1 GB/s x 16 ≈ 34 GB/s with the DIMMs fully populated, and that makes a total DIMM count of 4x4x4; some may use 4x4x8 DIMMs per cell.
When it comes to I/O, Opteron and EV7 offer the worst performance. CPU-RAM performance has been improved at the cost of I/O, as transfers have to be routed first to the CPU and then to the RAM, while on Itanium they go directly to the RAM. HP offers much more with that dedicated I/O path: a high-speed serial bus to the chipset, and from there to the memory scale port. Low bandwidth, low pin count, easy, and flexible.
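For what it's worth, the channel arithmetic above roughly checks out if you take PC2100's peak per-channel rating (a sketch with my own rounding):

```python
# PC2100 = 64-bit DDR at 266 MT/s -> ~2.1 GB/s peak per channel.
pc2100_gb_s = 64 / 8 * 266 / 1000     # 2.128 GB/s
# 4 cells x 4 channels = 16 channels machine-wide.
aggregate = 16 * pc2100_gb_s          # ~34 GB/s fully populated
```

That aggregate is again spread over 4 CPUs per cell, which is why the per-node comparison in the reply below ends up close to Opteron's limits.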

Like the AMD CEO said, soon Opteron will be able to take on the larger machines, but for now it will stay in the 2-way market.<P ID="edit"><FONT SIZE=-1><EM>Edited by juin on 04/26/04 11:59 PM.</EM></FONT></P>
April 27, 2004 6:51:39 AM

>And the same itanium systemes have 4*4 channel for a max of
>2.1GB*16=32 GB/S on full dimm and that make a total of dimm
>of 4*4*4 some may use 4*4*8 dimm per cell.

PC2100, and those DIMMs are supported per NODE; in other words, divide by 4 on a typical configuration, and surprise, you end up with pretty much the same limits as on an Opteron. Not to mention it's a whole lot more difficult to implement if all those DIMMs have to be connected to the same memory controller.

>When it come for the I/O opteron and EV7 offer the worse >performance.CPU-RAM performance been improve at the cost of
>I/O as they have to be route 1 to the cpu and them to the
>ram while on itanium is go directly to the ram

LOL !

>but for now it while stay on the 2 way market

Sure. LOL

<A HREF="http://www.appro.com/product/server_4145h.asp" target="_new">appro 4 way opteron </A>
<A HREF="http://www.verari.com/4u.asp" target="_new">verari (racksaver) 4 way opteron </A>
<A HREF="http://h71016.www7.hp.com/dstore/ctoBases.asp?ProductLi..." target="_new">HP 4 way opteron</A>
<A HREF="http://www.opteronics.com/qop_rc0452.htm" target="_new">Opteronics 4 way opteron </A>
<A HREF="http://colfax-intl.com/jlrid/SpotLight_more.asp?L=71&S=..." target="_new">Colfax (newisys) 4 way opteron</A>
<A HREF="http://www.polywell.com/us/rackservers/poly8400am.asp" target="_new">Polywell 4 way opteron</A>
..

I could go on and on; the list is endless, and it's basically everyone and their dog except IBM, Sun (both of which will offer quad Opterons very soon), and, yeah, Dell.

= The views stated herein are my personal views, and not necessarily the views of my wife. =