
RDRAM has lower latency than DDR

Last response: in Memory
April 18, 2001 5:07:39 AM

I'd like to clear up a number of misconceptions regarding RDRAM that are floating around this forum. Many people seem to think RDRAM has terrible latency. This is actually the reverse of the truth.

The accepted definition of latency is the time between the moment the RAS (Row Address Strobe) is activated (ACT command sampled) to the moment the first data bit becomes valid. Synchronous device timing is always a multiple of the device clock period.

The fundamental latency of a DRAM is determined by the intrinsic speed of the memory core. All commodity DRAMs use the same memory core technology, so all DRAMs are subject to the same intrinsic latency. Any differences in latency between DRAM types are therefore only the result of the differences in the speed of their interfaces.

At the 800 MHz data rate, the interface to a Rambus RDRAM operates with an extremely fine timing granularity of 1.25 ns, resulting in a component latency of 38.75 ns. The PC100 SDRAM interface runs with a coarse timing granularity of 10 ns. Its interface timing matches the memory core timing very well, so that its component latency ends up to be 40 ns. The 133 MHz SDRAM interface, with its coarse timing granularity of 7.5 ns, incurs a mismatch with the timing of the memory core which increases the component latency significantly, to 45 ns.

The latency timing values can be computed easily from the device data sheets. For the PC100 and 133 MHz SDRAMs, the component latency is the sum of the tRCD and CL values. The RDRAM's component latency is the sum of the tRCD and TCAC values, plus one half clock period for the data to become valid.
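In code form, and assuming typical timings of the era (CL2/tRCD2 for PC100, CL3/tRCD3 for PC133, and representative RDRAM-800 values for tRCD and tCAC chosen to match the figures above rather than taken from a specific data sheet), the arithmetic is:

```python
# Component latency = time from the ACT command to the first valid data bit.

def sdram_component_latency_ns(clock_ns, trcd_cycles, cl_cycles):
    """SDRAM component latency: (tRCD + CL) clock periods."""
    return (trcd_cycles + cl_cycles) * clock_ns

pc100 = sdram_component_latency_ns(10.0, 2, 2)  # 40.0 ns
pc133 = sdram_component_latency_ns(7.5, 3, 3)   # 45.0 ns

# RDRAM-800: tRCD + tCAC, plus half of the 2.5 ns channel clock period
# for the data to become valid.  17.5 ns and 20.0 ns are assumed values.
rdram = 17.5 + 20.0 + 2.5 / 2                   # 38.75 ns
```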

Although component latency is an important factor in system performance, system latency is even more important, since it is system latency that stalls the CPU. System latency is determined by adding external address and data delays to the component latency. For PCs, the system latency is measured as the time to return 32 bytes of data, also referred to as the "cache line fill" data, to the CPU.

In a system, SDRAMs suffer from what is known as the two-cycle addressing problem. The address must be driven for two clock cycles (20 ns at 100 MHz, 15 ns at 133 MHz) in order to provide time for the signals to settle on the SDRAM's highly loaded address bus. After the two-cycle address delay and the component delay, three more clocks are required to return the 32 bytes of data in the case of SDR, two more clocks in the case of DDR. The system latency of 100 MHz and 133 MHz SDRAM adds five (SDR) or four (DDR) clocks to the component latency. The total SDRAM system latency is 90 ns for 100 MHz SDR, 82.5 ns for 133 MHz SDR, 80 ns for 100 MHz DDR (equivalent of 200 MHz - PC1600) and 75 ns for 133 MHz DDR (equivalent of 266 MHz - PC2100.)
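Those totals follow directly from the model just described: two address clocks, plus the component latency, plus the clocks needed to stream the 32-byte cache line. A quick sketch of that arithmetic:

```python
def system_latency_ns(clock_ns, component_ns, data_clocks):
    """System latency = two-cycle address delay + component latency
    + burst clocks needed for the 32 bytes of cache-line data."""
    return 2 * clock_ns + component_ns + data_clocks * clock_ns

sdr100 = system_latency_ns(10.0, 40.0, 3)  # 90.0 ns
sdr133 = system_latency_ns(7.5, 45.0, 3)   # 82.5 ns
ddr100 = system_latency_ns(10.0, 40.0, 2)  # 80.0 ns (PC1600)
ddr133 = system_latency_ns(7.5, 45.0, 2)   # 75.0 ns (PC2100)
```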

The superior electrical characteristics of a Rambus system eliminate the two-cycle addressing problem, requiring only 10 ns to drive the address to the RDRAM. The 32 bytes of data stream back to the CPU at 1.6GB/second, which works out to be 18.75 ns. Adding in the component latency, the RDRAM system latency is 70 ns, faster than PC100, PC133, PC1600, and PC2100 SDRAM.

Measured at either the component or system level, Rambus DRAMs have the lowest latency. Surprisingly, due to the mismatch between its interface and core timing, the 133MHz SDRAM (SDR or DDR) is significantly slower than the 100MHz SDRAM (SDR or DDR.) The RDRAM's low latency coupled with its 1.6 gigabyte per second bandwidth per channel provides the highest possible sustained system performance.

-Raystonn

-- The center of your digital world --
April 18, 2001 6:59:54 PM

there has to be a way to say that so it makes sense and doesn't come across like a lot of mumbo jumbo.
And the math behind it might be nice too, instead of all words.
April 18, 2001 8:05:10 PM

The 'math' is right there. What part of this did you not understand? If you have specific questions on it, let me know. This isn't the place to start teaching basic mathematics though.

-Raystonn

-- The center of your digital world --
Anonymous
April 18, 2001 8:40:40 PM

Here are two links to other documents that go into great detail on the differences between RDRAM and SDRAM. <A HREF="http://www.tomshardware.com/mainboard/00q1/000315/rambu..." target="_new">This one at tomshardware.com</A>, and <A HREF="http://www.realworldtech.com/page.cfm?ArticleID=RWT1107..." target="_new">this one at realworldtech.com</A>.

Both of these are really good reading. In a nutshell, concerning the latencies of the two memories, each individual memory chip (RD- or SD-RAM) has "roughly" the same latency, since both are DRAM arrays in nature.
The difference is that SDRAMs are connected in parallel, while RDRAMs are connected in series. Thus with higher memory sizes in a system, SDRAM maintains a constant latency, while RDRAM's latency increases with the amount of memory installed.

Hope this helps.

--------
I have not yet begun to procrastinate.
April 18, 2001 8:45:43 PM

"RDRAM has a higher latency that increases with the amount of memory installed"

When you add memory to the furthest banks, you do increase latency by around 5ns. Even with this penalty, RDRAM still has lower latency than its SDRAM counterparts.

-Raystonn

-- The center of your digital world --
April 18, 2001 9:58:15 PM

His numbers are completely inaccurate. Note he doesn't even state how he gets these numbers. He also starts mixing L1 and L2 caches into the discussion. RAM doesn't have these caches. Those are features of CPUs and have nothing to do with RDRAM vs SDRAM. Do the math yourself. It's not too difficult to figure out the latency of the memory modules. Follow along with my post and you can see how the numbers were obtained. My numbers are accurate.

-Raystonn

-- The center of your digital world --
Anonymous
April 18, 2001 10:15:37 PM

What is important is not just the latency between an individual chip and the memory controller, but the latency that the CPU actually sees, which is the factor that matters.

from <A HREF="http://pcquest.ciol.com/content/technology/10004404.asp" target="_new">http://pcquest.ciol.com/content/technology/10004404.asp</A>
Quote:
All memories take some time to process a request for data and transfer the same. This is called latency. Due to the serial nature of RDRAM, the chips closest to the memory controller take much less time to respond to the controller, compared to those that are located further away. This difference in time can be quite a lot, since the farthest RDRAM chip can be about a foot away from the memory controller. Hence, the controller must find a way to manage all these different latencies. To do this, the controller finds out the highest latency value in all the RDRAM chips during the boot phase, and then programs the rest to work at that latency. Thus, even though the actual latency for RDRAM may be very low, more often than not, the RIMM ends up working at a much higher latency value.

Both the latest SDRAM as well as RDRAM have 20 nanoseconds latency. But because of the reasons given above, RDRAM always has latency greater than published figures.

If I can repeat his last line, <b>"RDRAM always has latency greater than published figures"</b>

Here's some more articles, for your reading pleasure:

<A HREF="http://www.hardwarecentral.com/hardwarecentral/reviews/..." target="_new">http://www.hardwarecentral.com/hardwarecentral/reviews/...</A>

<A HREF="http://www.tomshardware.com/mainboard/00q2/000529/" target="_new">http://www.tomshardware.com/mainboard/00q2/000529/</A>

<A HREF="http://www.overclockers.com/articles146/" target="_new">http://www.overclockers.com/articles146/</A>


--------
I have not yet begun to procrastinate.
April 18, 2001 10:25:08 PM

As I've already said, when you add memory to the furthest banks, you do increase latency by around 5ns. This is not as bad an impact as he claims. With this extra 5ns RDRAM still has the lowest latency. It also has bandwidth far exceeding DDR.

-- The center of your digital world --
Anonymous
April 18, 2001 10:34:01 PM

Quote:
Note he doesn't even state how he gets these numbers

How about this, written directly below his table:
"In my model the L1 hit ratio is 97%, the L2 hit ratio is 84% and 78% respectively and the main memory page hit ratio is 55%. These hit ratios are taken from a 1998 presentation by Forrest Norrod, senior director, Cyrix Corp. entitled "The Future of CPU Bus Architectures - A Cyrix Perspective". The column marked 'average DRAM access' refers to average critical word first latency in CPU clocks, plus 6 cycles for the L2 miss, and an extra cycle for data forwarding. The average latency is calculated based on 55% page hits, 22.5% row hits, and 22.5% page misses. The column labeled 'average memory access' is the average DRAM access multiplied by the L1 and L2 cache miss rates. The average CPI is calculated by adding the 50% of the average access time (since about half of x86 instructions perform a data memory access) to the base architectural figure of 0.5 CPI. The average MIPs is calculated by dividing 800 MHz by the average CPI figure. The average DRAM BW figure is derived as the product of MIPs x 50% data accesses x L1 miss rate x L2 miss rate x 32 bytes per cache line x 1.33 (66% reads, 33% writes with write miss allocate policy selected)."
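The averaging model quoted there can be sketched in a few lines. The 60-cycle average DRAM access below is a hypothetical placeholder (the article's per-configuration latency columns aren't reproduced here); only the hit ratios are the quoted ones:

```python
# Hit ratios quoted from the article; the DRAM access time is made up.
l1_hit, l2_hit = 0.97, 0.84
avg_dram_access_cycles = 60.0  # hypothetical placeholder input

# Average memory access = DRAM access scaled by both cache miss rates.
avg_mem_access = avg_dram_access_cycles * (1 - l1_hit) * (1 - l2_hit)

# Average CPI = 0.5 base CPI plus half the average access time, since
# about half of x86 instructions perform a data memory access.
avg_cpi = 0.5 + 0.5 * avg_mem_access

# Average MIPS at an 800 MHz clock.
avg_mips = 800 / avg_cpi
```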

Quote:
He also starts mixing L1 and L2 caches into the discussion. RAM doesn't have these caches

I guarantee your system will be slower without L1 and L2 caches. They most certainly do affect system performance. No, they won't change the latency of the memory, but they will definitely affect overall performance, which is one thing he was describing.



--------
I have not yet begun to procrastinate.
Anonymous
April 18, 2001 10:40:06 PM

Quote:
when you add memory to the furthest banks, you do increase latency by around 5ns

That's great, except that the whole channel is adjusted to reflect this higher latency. The timing of the memory controller is adjusted on boot-up to be synchronized with the slowest chip in the channel. This is why RDRAM weakens as more memory (>256MB) is added to the system.
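In other words, at boot the controller levels the whole channel to its slowest device. A toy sketch of that behavior (the per-device latencies are purely illustrative, not measured values):

```python
def channel_latency_ns(device_latencies_ns):
    """The controller programs every RDRAM device on the channel to the
    highest latency it finds, so the slowest chip sets the pace."""
    return max(device_latencies_ns)

lightly_loaded = channel_latency_ns([38.75, 40.0])               # 40.0 ns
heavily_loaded = channel_latency_ns([38.75, 40.0, 42.5, 43.75])  # 43.75 ns

# Adding devices farther from the controller raises the effective
# latency of every access on the channel.
```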

Granted, if you only want 32MB, like the PS2, RDRAM is the way to go, and I think if Rambus ever pulls its head out of its @ss, it might be beneficial to put RDRAM on video cards. However, for the server environment, where large memories are installed and quick access is a necessity, RDRAM may not be the best option.

--------
I have not yet begun to procrastinate.
April 18, 2001 10:41:57 PM

I'm comparing RDRAM and SDRAM. Every system will be affected similarly by the CPU's L1 and L2 caches, regardless of what kind of memory it uses.

-Raystonn

-- The center of your digital world --
Anonymous
April 18, 2001 10:46:25 PM

Exactly my point. The addition of it in that comparison chart was to show the effect on CPU efficiency, and not memory latency. It was an extraneous detail that you caught wind of and questioned. Nothing wrong with that, it just wasn't the part of the chart I was referring to :smile:

--------
I have not yet begun to procrastinate.
April 18, 2001 10:49:41 PM

"That's great, except that the whole channel is adjusted to reflect this higher latency"

And.... it still has lower latency even with the 5ns penalty. So that doesn't really matter much.

-Raystonn

-- The center of your digital world --
Anonymous
April 18, 2001 10:56:56 PM

Do you have any benchmarks to back up your claims?

Perhaps you haven't looked at the in-depth articles Tom has published outlining the differences between the two systems. <A HREF="http://www.tomshardware.com/mainboard/00q2/000529/index..." target="_new">Try here</A> for starters.

--------
I have not yet begun to procrastinate.
April 18, 2001 11:15:33 PM

Benchmarks? Use simple mathematics. You can compute the latencies yourself as I did.

On a 100MHz frontside bus each clock is 10ns. On a 133MHz frontside bus each clock is 7.5ns.

At the 800 MHz data rate, the interface to a Rambus RDRAM operates with an extremely fine timing granularity of 1.25 ns, resulting in a component latency of 38.75 ns. The PC100 SDRAM interface runs with a coarse timing granularity of 10 ns. Its interface timing matches the memory core timing very well, so that its component latency ends up to be 40 ns. The 133 MHz SDRAM interface, with its coarse timing granularity of 7.5 ns, incurs a mismatch with the timing of the memory core which increases the component latency significantly, to 45 ns.

In a system, SDRAMs suffer from what is known as the two-cycle addressing problem. The address must be driven for two (CAS2) clock cycles (20 ns at 100 MHz, 15ns at 133MHz) in order to provide time for the signals to settle on the SDRAM's highly loaded address bus. After the two-cycle address delay and the component delay, three more clocks are required to return the 32 bytes of data in the case of SDR, two more clocks in the case of DDR. (This is based on the data transfer rate. We compute how long it takes to transfer 32 bytes of data at 100MHz, 133MHz, 2*100MHz, and 2*133MHz.)

The system latency of 100MHz and 133MHz SDRAM thus adds five (SDR) or four (DDR) clocks to the component latency when used at CAS2. Therefore, the total SDRAM system latency is 90ns for 100MHz SDR, 82.5ns for 133MHz SDR, 80ns for 100MHz DDR (equivalent of 200MHz - PC1600) and 75ns for 133MHz DDR (equivalent of 266MHz - PC2100.)

RDRAM eliminates the two-cycle addressing problem, requiring only 10ns to drive the address to memory. The 32 bytes of data stream back to the CPU at 1.6GB/second, which works out to be 18.75ns. Adding in the component latency, the RDRAM system latency is 70 ns, faster than PC100, PC133, PC1600, and PC2100 SDRAM.

It's simple mathematics. When placed in a system that makes use of the bandwidth and low latency, such as a P4 system, you will see incredible benchmark scores in memory tests.

-Raystonn

-- The center of your digital world --
Anonymous
April 18, 2001 11:33:24 PM

It's great that you repeated your original statement, but have you ever noticed that any real-world or synthesized measurement of a system always falls short of the published theoretical values? There are other factors to account for than just the absolute response time of the DRAM chips.

For instance, InQuest has published a <A HREF="http://www.inqst.com/articles/p4bandwidth/p4bandwidthma..." target="_new">cool article</A> about the P4 and its bus utilization. There is a nice chart about two-thirds of the way down that compares the P4's access latency to that of the P3.

When a benchmark is heavily bandwidth dependent and accesses sequential data, the P4 screams ahead. Note that applications of this sort are uncommon.
However, under more normal circumstances, the P4's huge bandwidth and bus utilization do not materialize into huge real-world results.


--------
I have not yet begun to procrastinate.
April 18, 2001 11:40:16 PM

What does any of this have to do with RDRAM? This thread is not about the P4. It's about RDRAM. You'd best reread what I said above. It's not just a repeat of my first statement; I go into more detail on how I obtained these numbers.

RDRAM does in fact have lower latency and higher bandwidth than SDRAM (SDR or DDR.)

-Raystonn

-- The center of your digital world --
Anonymous
April 18, 2001 11:48:49 PM

uhhh.... you are the one that brought up the p4
Quote:
When placed in a system that makes use of the bandwidth and low latency, such as a P4 system, you will see incredible benchmark scores in memory tests.

So I provided some more insight into that statement, noting that the huge bandwidth of RDRAM + P4 does not always mean increased performance.

--------
I have not yet begun to procrastinate.
April 19, 2001 12:09:32 AM

"the huge bandwidth of RDRAM + p4 does not always mean increased performance"

It does mean increased performance in any memory benchmark. Non-memory benchmarks test the system as a whole, with more emphasis on other subsystems, and are not very reliant on the speed of memory. Hence they are not being discussed in this thread.

The purpose of all of this is to stop many of the circular arguments I've seen floating around. Many people yell that "RDRAM has worse latency than DDR, so the P4 sucks." Many people yell that "The P4 sucks, so RDRAM is crap." This is a circular argument and is invalid. I've shown that RDRAM has better latency than SDRAM (SDR and DDR.) It does not hinder the performance of the P4. In fact, RDRAM will help any system with any processor more than SDRAM (SDR or DDR) will because of its superior latency and bandwidth.

I'm not comparing processors here. I'm comparing RAM. An Athlon built to use RDRAM would perform better than an Athlon built to use DDR.

-Raystonn

-- The center of your digital world --
Anonymous
April 19, 2001 12:32:15 AM

Quote:
It does mean increased performance in any memory benchmark

<A HREF="http://www.tomshardware.com/mainboard/00q2/000529/image..." target="_new">Not this one!</A> Although it's a P3, it still shows the differences between SDRAM and RDRAM. Taken from <A HREF="http://www.tomshardware.com/mainboard/00q2/000529/index..." target="_new">this page on Tom's</A>.

I agree that circular arguments are futile, but on the same note, you must realize where RDRAM fails to impress.

Quote:
I'm not comparing processors here. I'm comparing RAM. An Athlon built to use RDRAM would perform better than an Athlon built to use DDR.

I'd like to see that! However, I doubt AMD would ever sell its soul to the devil. Right now, the DDR manufacturers on trial against Rambus are close business partners with AMD.



--------
I have not yet begun to procrastinate.
April 19, 2001 12:40:56 AM

Perhaps I should rephrase: It does mean increased performance in any memory benchmark when used in a system that can take advantage of it.

The benchmark you showed (by the way your first URL was bad) was for a PIII system using RDRAM. Motherboards should never have been made for PIII systems with RDRAM. The extra bandwidth was wasted and the bus timings were off, not calibrated for RDRAM.

Test any processor actually _made_ to work well with RDRAM and you'll see the benefits.

-Raystonn

-- The center of your digital world --
April 19, 2001 1:02:06 AM

http://www.anandtech.com/showdoc.html?i=1373&p=14

If RDRAM didn't suffer from high latencies, then why on earth doesn't it perform any better on a P3? Without a good chipset like the i850, RDRAM would go nowhere! RDRAM is not an efficient design; it required a WHOLE new processor and chipset to get any performance. DDR SDRAM just needed a new chipset, which took 6 months to design. How long has Intel been working with Rambus on RDRAM? Since like '97.

-MP Jesse

"Signatures Still Suck"
April 19, 2001 2:29:30 AM

A quote from Thomas Pabst: "Overall RDRAM latency is not "comparable" with SDRAM latency; it is actually much worse. This too is in contradiction with Rambus claims."

I don't know what makes you think you're smarter than him.

- Tempus fugit donec vestrum relictus tripudium. Autem amor praeterea magis pretium.
April 19, 2001 3:58:36 AM

*yawn*

Ok, it has lower latency. Does it get 5 more fps on Unreal Tourney at 1600x1200 compared to DDR?
Anonymous
April 19, 2001 4:51:34 PM

I would also like to know if Rambus combined with the P4 outperforms DDR or SDRAM systems in applications such as games (Unreal Tournament, Quake, etc.). And then what about other applications?
April 23, 2001 6:55:15 AM

"I don't know what makes you think you're smarter than him."

I don't know why you think Tom is God. But none of this is very relevant to the real discussion. RDRAM requires a high-speed bus to perform well. The P3 does not supply this.

-Raystonn

-- The center of your digital world --
April 23, 2001 6:56:36 AM

I don't think memory executes programs. For a comparison of CPU's, see the CPU forum.

-Raystonn

-- The center of your digital world --
Anonymous
April 24, 2001 5:22:13 PM

So answer me this! If the latency increases the more memory you add (with RDRAM), what is the optimal amount of RDRAM for a system, to get minimal latency with a sufficient amount of RAM? Is it better to have two 64 MB pieces or two 128 MB pieces of RDRAM? Of course the first has lower latency, but is it worth having less RAM for lower latency?
April 24, 2001 6:26:55 PM

"what is the optimal amount of RDRAM for a system"

I suggest getting the 2 largest RIMMs you can afford. This allows memory manufacturers to optimize the memory on chip (though they don't have to) whereas getting 4 RIMMs forces serialization.

-Raystonn

-- The center of your digital world --
Anonymous
April 24, 2001 7:18:51 PM

But didn't you say that the more RAM, the more latency? Or does that only apply to how many banks you fill up? Aren't the memory chips on the individual RDRAM pieces in serial as well?
April 24, 2001 8:19:29 PM

"Aren't the memory chips on the individual RDRAM pieces in serial as well"

The memory chips on the RIMMs come in all kinds of sizes. They don't even have to be implemented serially. They can be implemented in parallel and exposed to the memory subsystem as one large memory chip on the RIMM. Then you only get further latency by adding new RIMMs to the system.

It's all up to the company making the memory. They can optimize the memory however they like.

-Raystonn

-- The center of your digital world --
April 25, 2001 3:12:51 AM

Ray~ I enjoyed reading your original post that kicked off this thread. I'm serious when I say that you write extremely well. But my feelings about this topic are best summed up by Tom's signature of the week:

"In theory, there is no difference between theory and practice. In practice, there is" - ergeorge.
April 25, 2001 3:14:19 AM

oooo. Good one. I mean it. Clever.

- Tempus fugit donec vestrum relictus tripudium. Autem amor praeterea magis pretium.
April 25, 2001 3:19:23 AM

My numbers are all firmly based in reality, thanks. I haven't even begun to mention the effects of system load on memory latency for RDRAM and SDRAM (SDR or DDR.) I can if you'd like.

-Raystonn

-- The center of your digital world --
Anonymous
April 25, 2001 4:23:17 PM

So, how would I find out how the RAM maker designed the module? Because I would prefer, of course, a RIMM with the lowest possible latency, which I would think would be one with the chips on the RIMM in parallel, right?
Anonymous
April 25, 2001 7:15:53 PM

If RDRAM indeed doesn't have to be implemented serially, then there might be a future for it, after all. I don't know how anyone could have conceived of such an obviously unscalable idea. The "only 5ns extra" is not an argument, unless one wants to talk only about the present-day, home-computer market. And even then, I'm not sure how convincing that theory is in practice.

Leo
April 25, 2001 8:38:50 PM

You'll have to ask the manufacturers. Try visiting their websites.

-Raystonn

-- The center of your digital world --
April 25, 2001 8:41:09 PM

Actually, RDRAM scales much better under system load than SDRAM (SDR or DDR.) Under a typical system load with a great deal of reading and writing to memory, RDRAM will vastly outperform the SDRAM. SDRAM will find itself the victim of many collisions and dead wait states transitioning from reading to writing and back.

-Raystonn

-- The center of your digital world --
Anonymous
April 25, 2001 8:51:40 PM

Sorry, I was talking about scalability with respect to adding more RAM. Not everyone is gonna be happy with 256MB.

Leo
April 25, 2001 9:05:25 PM

You are free to have as much as 2GB of memory on Intel's P4 motherboard. Please check out the "Bandwidth and Latency: FAQ 1 and 2" thread in this forum for more details on latency issues.

-Raystonn

-- The center of your digital world --
Anonymous
April 25, 2001 9:14:31 PM

Forget it... it's not what I was talking about.

Remember the time when 512 KB was more than anyone could wish for? :)  I was talking about the RDRAM technology having a future in principle. And I was saying "maybe," but only if it didn't have to be limited to the serial connection. The very concept of memory getting slower when more is added (*even* if marginally) is the stuff that jokes are made of.

Leo
April 25, 2001 9:22:10 PM

Not really. The "stuff that jokes are made of" would include memory getting slower as system load and memory accesses increase. SDRAM (SDR and DDR) gives you low latency while there's negligible system load or a synthetic benchmark doing only reading or only writing. When system load increases, or when reads are mixed with writes (normal in standard applications), SDRAM latency goes sky high. RDRAM is not affected at all by high loads. You get the power when you need it.

-Raystonn

-- The center of your digital world --
Anonymous
April 25, 2001 9:33:44 PM

Again you're not listening.

You mentioned briefly before that RDRAM doesn't in fact have to be connected serially. Personally, I found that interesting and commented that if indeed that were the case, then RDRAM might have a future, after all.

Now, if we could kindly go back to that issue, *is* it really the case? Because if it is, then I would have one less reason to avoid RDRAM.

Leo
April 25, 2001 9:48:32 PM

That's an implementation issue that's entirely up to the manufacturer. You get less latency in medium and high memory-use applications with the serial architecture than with the parallel architecture, so I would not recommend a manufacturer do that. But hey, it's up to them.

-Raystonn

-- The center of your digital world --
Anonymous
April 26, 2001 2:08:19 PM

A question still remains. If RDRAM is better at high loads, has higher memory bandwidth and good latency, then why is it not being used in any graphics cards? And we're talking an industry where there is a memory bandwidth crunch, and companies are going after each other's throat.

Leo
April 26, 2001 9:58:29 PM

Because it has bad press.

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =
Anonymous
April 27, 2001 10:41:19 AM

YOU ARE ABSOLUTELY CORRECT, AND THIS IS WHAT I HAVE BEEN TRYING TO TELL PEOPLE..

TOM started all this latency bullshit and he was way off..

rambus does effectively have lower latency when you consider the CAS 2, clock-doubled 800 mhz,
smaller instructions, and higher speed,
and just as the 20-stage pipeline of the P4 is necessary in order to speed up performance, in the end it is actually faster if the design takes that into account..
good job and good post my friend..
finally people are starting to listen and forget this
AMD DDR CRAP and LIE

best
CAMERON


CYBERIMAGE
<A HREF="http://www.4CyberImage.com " target="_new">http://www.4CyberImage.com </A>
Ultra High Performance Computers-
Anonymous
April 27, 2001 10:48:10 AM

NOT TRUE,

RAMBUS is serial, but can be concurrent and operate in multiple channels like the 3.2 GB/s 850 memory design, which effectively eliminates much of the serial latency..
that, combined with much higher speed, smaller data,
clock doubling, and an optimized and buffered chipset, is much faster than DDR or SDRAM...

TOM was way off in his latency crap..
the proof is in the pudding,,
I have tested dozens of P4 machines in memtach, Stream,
SANDRA 2001b, and SPEC, and it is way faster under every
iteration than sdram or ddr, even in random bytes and out-of-order streams

the memory pages of rambus and P4 chipsets like the 850 are optimized too, which is not often taken into account,
and there are bioses like Award on the ASUS which allow for turbocharging rambus with certain optimizing settings
that make it faster still
this latency thing about rambus needs to die, as in real-world applications and with P4 chipsets and systems it simply is NOT TRUE
best
CAMERON

CYBERIMAGE
<A HREF="http://www.4CyberImage.com " target="_new">http://www.4CyberImage.com </A>
Ultra High Performance Computers-