The New Mainstream Standard?
The portfolio of DDR3 speeds has opened up far sooner than it did for DDR2, as DDR3 data rates of 1066, 1333, and 1600 MHz have all appeared within the past few months to replace DDR2's 533, 667, and 800 MHz data rates. As with DDR2, higher "nonstandard" speeds are also available, but the standard speeds are what most buyers need to be familiar with.
Today, we bring you what should eventually become the "mainstream choice" among DDR3 speeds, as its 1333 MHz data rate falls between the "low-cost" 1066 MHz and "high-performance" 1600 MHz standards that fill out the spectrum. A total of 13 top brands were invited to participate, and eight responded with a total of ten kits for your consideration.
As with most of our shootouts, we pushed each kit to the edge of stability to find its ultimate performance, but before we move to the test results, let's consider the market for DDR3. What advantages does it have over DDR2? Why was it introduced? And when new technology comes at a price premium, who should buy it?
What's In A Name?
The "official" name for DDR memory is based on its bandwidth rather than its clock speed. The easy method to convert data rate to bandwidth is to multiply by eight. Thus, DDR-400 is called PC-3200, DDR2-800 is called PC2-6400, and DDR3-1600 is called PC3-12800.
The math behind this conversion factor is simple: PC memory modules based on SDRAM technology use a 64-bit connection; there are eight bits in a byte, so 64 bits equal eight bytes. For example, DDR2-800 transfers 800 megabits per pathway per second; its 64 pathways provide eight bytes per transfer, and 800 times eight is 6,400 MB/s.
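The conversion described above fits in a few lines of code. This is just a sanity-check sketch; the module names and the `pc_rating` helper are our own illustration, not any official tool:

```python
# A standard SDRAM module has a 64-bit interface,
# so each transfer moves 64 / 8 = 8 bytes.
BYTES_PER_TRANSFER = 64 // 8

def pc_rating(data_rate_mts):
    """Peak bandwidth in MB/s for a 64-bit module: data rate (MT/s) x 8."""
    return data_rate_mts * BYTES_PER_TRANSFER

for name, rate in [("DDR-400", 400), ("DDR2-800", 800), ("DDR3-1600", 1600)]:
    print(f"{name} -> PC rating {pc_rating(rate)}")
# DDR-400 -> 3200, DDR2-800 -> 6400, DDR3-1600 -> 12800
```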
The problem comes with rounding, and it was first noticed with DDR-266 (PC-2100). The quoted data rate of 266 MHz is actually 266.6 (continuously repeating decimal) MT/s, so the true peak bandwidth is 2,133 MB/s, not the 2,100 MB/s the label implies.
Today's DDR3-1333 has a peak bandwidth of 10,666 MB/s, which can be rounded down to PC3-10600, rounded up to PC3-10700, or stated without rounding as PC3-10666, depending on the manufacturer's preference.
Buyers will find that searching some vendors for multiple DDR3-1333 brands requires checking all three "ratings" to see every module of the same actual speed, but most brands label their DDR3-1333 products as either PC3-10600 or PC3-10666.
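The three competing labels all come from the same exact number. A short sketch (using Python's standard `fractions` module; the rounding styles shown are our reading of the vendor labels above):

```python
from fractions import Fraction

# "DDR3-1333" is really 1333 1/3 MT/s (a 166 2/3 MHz base clock times 8).
exact_rate = Fraction(4000, 3)   # MT/s
exact_bw = exact_rate * 8        # MB/s = 10666 2/3

print(float(exact_bw))           # ~10666.67 MB/s
# Rounded down to hundreds -> PC3-10600
# Rounded up to hundreds   -> PC3-10700
# Truncated                -> PC3-10666
```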
"Because cycle time is the inverse of clock speed (half the DDR data rate), the DDR-333 reference clock cycled every six nanoseconds, DDR2-667 every three nanoseconds, and DDR3-1333 every 1.5 nanoseconds. Latency is measured in clock cycles, and two 6 ns cycles take the same time as four 3 ns cycles or eight 1.5 ns cycles. If you still have your doubts, do the math!"
Based on the cycle-based latencies of DDR-333 (CAS 2), DDR2-667 (CAS 4), and DDR3-1333 (CAS 8) and their frequencies, you come to the conclusion that each of these memory types retrieves data in the same amount of time. The higher CAS values are offset by the higher frequencies of the newer technologies, so even though DDR2 and DDR3 take more cycles, they also complete more cycles per unit of time than DDR. How is it, then, that DDR2 and DDR3 are "better" and provide more bandwidth if they deliver data in the same amount of time? I don't know much about the technical details of how RAM works, and I have always had this question in mind.
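The equal-absolute-latency claim in the quoted comment is easy to check numerically. A small sketch, using the speeds and CAS values from the text (the helper name is ours):

```python
def cas_latency_ns(data_rate_mts, cas_cycles):
    """Absolute CAS latency in nanoseconds.
    The clock runs at half the DDR data rate, so one cycle
    lasts 2000 / data_rate nanoseconds."""
    cycle_ns = 2000 / data_rate_mts
    return cas_cycles * cycle_ns

for name, rate, cas in [("DDR-333", 333.333, 2),
                        ("DDR2-667", 666.667, 4),
                        ("DDR3-1333", 1333.333, 8)]:
    print(f"{name} CAS {cas}: {cas_latency_ns(rate, cas):.1f} ns")
# All three work out to 12.0 ns of first-access latency.
```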
Bandwidth = Rate at which you can get the "goodies"
7-7-6-24-2T at 1333 MHz, or
9-9-9-24-2T at 1600 MHz
This is with the FSB at 1600 MHz, unlinked. Is there a method to calculate the best setting without running hours of benchmarks?
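There is no substitute for actual benchmarks, but as a first-order comparison you can convert both candidate settings to absolute nanoseconds, as the quoted comment suggests. A sketch using the two timing sets from the question above:

```python
def timings_in_ns(data_rate_mts, timings):
    """Convert cycle-count timings (CL-tRCD-tRP-tRAS) to nanoseconds.
    One clock cycle lasts 2000 / data_rate ns at a DDR data rate."""
    cycle_ns = 2000 / data_rate_mts
    return [t * cycle_ns for t in timings]

setting_a = timings_in_ns(1333, (7, 7, 6, 24))
setting_b = timings_in_ns(1600, (9, 9, 9, 24))
print([f"{t:.2f}" for t in setting_a])  # CL ~10.50 ns at DDR3-1333
print([f"{t:.2f}" for t in setting_b])  # CL 11.25 ns at DDR3-1600
# The 1333 setting has slightly lower absolute latency;
# the 1600 setting still wins on raw bandwidth.
```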
However, I would have liked to see what temperatures the other modules ran at around 2.1 V. That doesn't mean the Platinum series isn't performant, but I saw a ReaperX that easily reached 940 MHz at 1.9 V (EVP), which is nearly DDR3-1900; that is something. And when it comes to stability and temperature over hours of operation, the ReaperX beats them all.
First you issue a command to open a row (this is your latency); then, within that row, you can access any data you want at a rate of one datum per cycle, with latency depending on pipelining.
So, for instance, reading 1 datum at address 0 takes your CAS latency + 1 cycle.
Reading 8 datums at address 0 takes your CAS latency + 8 cycles.
Since CPUs like to fill their cache lines with the data that will probably be accessed next, they always read more than you asked for anyway, so the extra throughput provided by a higher clock speed helps.
But if the CPU stalls waiting for RAM, it is the latency that matters.
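The trade-off described above can be modeled with the simplified "CAS latency + n cycles" cost formula from that reply. This is a toy model only; real controllers overlap and reorder accesses, and the timing sets here are the ones from the earlier comment:

```python
def read_time_ns(data_rate_mts, cas_cycles, n_words):
    """Simplified burst-read cost: CAS latency plus one cycle per word,
    per the 'CAS lat + n cycles' model above. Cycle = 2000 / rate ns."""
    cycle_ns = 2000 / data_rate_mts
    return (cas_cycles + n_words) * cycle_ns

# One word: latency dominates, so the setting with fewer ns of CAS wins.
print(f"1 word  @1333 CL7: {read_time_ns(1333, 7, 1):.1f} ns")
print(f"1 word  @1600 CL9: {read_time_ns(1600, 9, 1):.1f} ns")
# A 64-byte cache line (8 x 64-bit words): the faster clock starts to pay off.
print(f"8 words @1333 CL7: {read_time_ns(1333, 7, 8):.1f} ns")
print(f"8 words @1600 CL9: {read_time_ns(1600, 9, 8):.1f} ns")
```

Under this model the lower-latency setting wins small reads while the higher clock wins cache-line fills, which is exactly the latency-versus-throughput split the reply describes.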