Speed Vs. Latency: Myths And Facts
There's a myth that every new memory format brings with it a latency penalty. The myth is perpetuated by the way latency ratings are expressed: clock cycles.
Consider the latency ratings of the three most recent memory formats: upper-midrange DDR-333 was rated at CAS 2, similar-market DDR2-667 was rated at CAS 4 and today's mid-range DDR3-1333 is often rated at CAS 8. Most people would be shocked to learn that these vastly different rated timings result in the same actual response time: 12 nanoseconds.
Because cycle time is the inverse of clock speed (1/2 of DDR data rates), the DDR-333 reference clock cycled every six nanoseconds, DDR2-667 every three nanoseconds and DDR3-1333 every 1.5 nanoseconds. Latency is measured in clock cycles, and two 6ns cycles occur in the same time as four 3ns cycles or eight 1.5ns cycles. If you still have your doubts, do the math!
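The arithmetic above can be sketched in a few lines (a hypothetical helper, with data rates in MT/s):

```python
# Sketch of the article's math: true latency in nanoseconds is the CAS
# rating multiplied by the clock period (the DDR clock is half the data rate).
def cas_latency_ns(data_rate_mt_s, cas_cycles):
    clock_mhz = data_rate_mt_s / 2    # DDR reference clock
    cycle_ns = 1000.0 / clock_mhz     # clock period in nanoseconds
    return cas_cycles * cycle_ns

for name, rate, cas in [("DDR-333", 333, 2), ("DDR2-667", 667, 4), ("DDR3-1333", 1333, 8)]:
    print(f"{name}: CAS {cas} -> {cas_latency_ns(rate, cas):.1f} ns")
```

All three generations land at roughly 12 ns, just as the text says.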
The problem perceived by many less-informed buyers is that faster memory responds more slowly, but these examples show that this usually isn't the case. The real problem isn't that response times are getting slower; it's that they've failed to get quicker! When we see astronomical "speeds," we hope that our entire systems will become "more responsive" as a result. Yet memory latency is one place where things really haven't changed much.
We still hope to find some truly "quick" modules, so today's tests will include both "highest stable speed" and "lowest stable latency" configurations.
But What Are All Those Numbers?
So latency is measured in clock cycles rather than time, but what do all its numbers refer to? Most buyers look at only the first four latency values, and these appear in order of importance with numbers such as 9-9-9-24 in the case of high-speed DDR3 modules. These are typically labeled CAS Latency (tCL), RAS to CAS Delay (tRCD), RAS Precharge Time (tRP) and Active Precharge Delay (tRAS). A full definition of these functions is found on Page 2 of our article "PC Memory: Just the Facts."
"Because cycle time is the inverse of clock speed (1/2 of DDR data rates), the DDR-333 reference clock cycled every six nanoseconds, DDR2-667 every three nanoseconds and DDR3-1333 every 1.5 nanoseconds. Latency is measured in clock cycles, and two 6ns cycles occur in the same time as four 3ns cycles or eight 1.5ns cycles. If you still have your doubts, do the math!"
Based on the cycle-based latencies of DDR-333 (CAS 2), DDR2-667 (CAS 4), and DDR3-1333 (CAS 8), and their frequencies, you come to the conclusion that each of these memory types retrieves data in the same amount of time. The higher CAS ratings are offset by the higher frequencies of the newer technologies, so even though DDR2 and DDR3 take more cycles, they also go through more cycles per unit of time than DDR. How is it, then, that DDR2 and DDR3 are "better" and provide more bandwidth if they deliver data in the same amount of time? I don't know much about the technical details of how RAM works, and I have always had this question in mind.
Bandwidth = Rate at which you can get the "goodies"
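In other words, first-word latency stayed flat while the transfer rate doubled each generation. A back-of-the-envelope sketch (assuming a standard 64-bit, i.e. 8-byte, module width):

```python
# Peak bandwidth scales with the data rate: each generation doubles the
# transfers per second, even though first-word latency stays near 12 ns.
def peak_bandwidth_gb_s(data_rate_mt_s, bus_bytes=8):
    # MT/s * bytes per transfer -> GB/s (decimal gigabytes)
    return data_rate_mt_s * bus_bytes / 1000.0

for name, rate in [("DDR-333", 333), ("DDR2-667", 667), ("DDR3-1333", 1333)]:
    print(f"{name}: {peak_bandwidth_gb_s(rate):.2f} GB/s peak")
```

Roughly 2.7, 5.3 and 10.7 GB/s: same response time, four times the goodies per second.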
7-7-6-24-2T at 1333 MHz, or
9-9-9-24-2T at 1600 MHz?
This is with the FSB at 1600 MHz, unlinked. Is there a method to calculate the best setting without running hours of benchmarks?
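It's no substitute for benchmarks, but you can at least compare first-word response times on paper. A quick sketch (hypothetical helper; it ignores the secondary timings):

```python
# First-word latency in nanoseconds: CAS cycles times the clock period.
# For DDR, cycle time in ns = 2000 / data rate in MT/s.
def first_word_latency_ns(data_rate_mt_s, cas):
    return cas * 2000.0 / data_rate_mt_s

for label, rate, cas in [("7-7-6-24 @ DDR3-1333", 1333, 7),
                         ("9-9-9-24 @ DDR3-1600", 1600, 9)]:
    print(f"{label}: {first_word_latency_ns(rate, cas):.2f} ns")
```

By this measure the 1333 MHz CL7 setting responds slightly sooner (about 10.5 ns versus 11.25 ns), while the 1600 MHz setting offers more raw bandwidth; only testing shows which one your workload prefers.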
However, I'd like to see what temperatures the other modules reached at a voltage of ~2.1 V. That doesn't mean the Platinum series isn't performant, but I saw a ReaperX that easily reached 940 MHz at 1.9 V (EVP), which amounts to nearly DDR-1900; and when it comes to stability and temperature over hours of operation, the ReaperX beats them all.
First you issue a command to open a row (this is your latency); then, within that row, you can access any data you want at a rate of one datum per cycle, with latency depending on pipelining.
So, for instance, if you want to read 1 datum at address 0, it will take your CAS latency + 1 cycle.
If you want to read 8 datums at address 0, it will take your CAS latency + 8 cycles.
Since CPUs like to fill their cache lines with the next data that will probably be accessed, they always read more than what you asked for anyway, so the extra throughput provided by higher clock speeds helps.
But if the CPU stalls waiting for RAM it is the latency that matters.
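The commenter's simplified model (one datum per cycle; real DDR actually transfers two per clock) can be sketched to show why a higher clock helps bursts even when the CAS rating goes up:

```python
# Toy model: a burst read costs (CAS latency + n transfers) clock cycles,
# so latency dominates tiny reads while clock speed dominates cache-line fills.
def burst_read_ns(data_rate_mt_s, cas, n_data):
    cycle_ns = 2000.0 / data_rate_mt_s  # DDR reference-clock period in ns
    return (cas + n_data) * cycle_ns

for rate, cas in [(1333, 8), (1600, 9)]:
    print(f"DDR3-{rate} CL{cas}: "
          f"1 datum = {burst_read_ns(rate, cas, 1):.2f} ns, "
          f"8 datums = {burst_read_ns(rate, cas, 8):.2f} ns")
```

For a single datum the two are close, but on an 8-datum burst the faster clock pulls ahead despite its higher CAS, which is the cache-line-fill case the comment describes.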