
Lowest Latency Test Results

DDR3-1333 Speed and Latency Shootout
Using a relatively safe 1.80 volt setting, the DDR3-1333 test modules reached the following "best stable timings" at 1600 MHz, 1333 MHz and 1066 MHz data rates.

Lowest Stable Latencies at 1.80 Volts
Module                        DDR3-1600   DDR3-1333   DDR3-1066   Rated Settings
Aeneon X-Tune DDR3-1333       9-8-8-15    8-7-6-13    6-5-5-10    8-8-8-15
G.Skill PC3-10600             Failed      8-7-7-14    7-6-6-12    9-9-9-24
Kingston HyperX PC3-11000     Failed      7-7-6-13    6-6-5-12    8-8-8-24
Kingston ValueRAM PC3-10600   9-7-6-15    8-6-6-12    6-5-4-9     8-8-8-24
Mushkin EM3-10666             9-8-7-14    8-6-5-14    6-5-4-14    9-9-9-24
OCZ Platinum PC3-10666        8-7-6-15    6-5-4-12    4-4-3-9     7-7-7-20
OCZ ReaperX PC3-10666         8-7-6-13    6-5-4-12    5-4-3-8     6-5-5-18
Patriot PC3-10666             Unstable    6-6-5-12    5-5-4-9     7-7-7-20
Super Talent PC3-10600        7-6-6-13    6-5-5-10    5-4-4-9     8-8-8-18
Wintec AMPX PC3-10600         8-7-6-15    6-5-4-12    5-4-3-9     9-9-9-24

OCZ pulls amazing 4-4-3-9 timings at a 1066 MHz data rate, while the potentially lower-cost Wintec AMPX finds itself in a three-way tie with both OCZ kits at DDR3-1333. Overclockers looking for the lowest latency might prefer Super Talent's 7-6-6-13 timings at a 1600 MHz data rate.

Patriot's DDR3-1333 reached a stable 1652 MHz data rate on Gigabyte's top-end P35 motherboard, but the Asus Maximus Extreme's X38 chipset appears to be a little more finicky: the modules didn't even reach a 1600 MHz data rate on the newer platform, though they tied for second place in DDR3-1333 latencies.

Lower latencies are meant to improve system performance, so let's consider what the benchmarks can tell us.

Comments
  • dv8silencer, May 7, 2008 12:45 AM
    I have a question: on page 3, where you discuss the memory myth, you do some calculations:

    "Because cycle time is the inverse of clock speed (1/2 of DDR data rates), the DDR-333 reference clock cycled every six nanoseconds, DDR2-667 every three nanoseconds and DDR3-1333 every 1.5 nanoseconds. Latency is measured in clock cycles, and two 6ns cycles occur in the same time as four 3ns cycles or eight 1.5ns cycles. If you still have your doubts, do the math!"

    Based on the cycle-based latencies of DDR-333 (CAS 2), DDR2-667 (CAS 4) and DDR3-1333 (CAS 8), and their frequencies, you come to the conclusion that each memory type retrieves data in the same amount of time. The higher CAS latencies are offset by the higher clock frequencies of the newer technologies, so even though DDR2 and DDR3 take more cycles, they also complete more cycles per unit time than DDR. How is it, then, that DDR2 and DDR3 are "better" and provide more bandwidth if they deliver data in the same amount of time? I don't know much about the technical details of how RAM works, and I have always had this question in mind.
    Thanks
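The arithmetic in the quoted passage can be checked directly. A quick Python sketch, using only the figures given in the quote (data rates and CAS latencies), shows the first-word latency works out the same for all three generations:

```python
# First-word latency = CAS cycles x clock period.
# The reference clock is half the data rate, so period (ns) = 2000 / data rate (MT/s).
modules = {
    "DDR-333":   (333, 2),    # (data rate in MT/s, CAS latency in cycles)
    "DDR2-667":  (667, 4),
    "DDR3-1333": (1333, 8),
}

for name, (rate, cas) in modules.items():
    cycle_ns = 2000.0 / rate
    print(f"{name}: {cas} x {cycle_ns:.2f} ns = {cas * cycle_ns:.1f} ns")
```

All three come out at roughly 12 ns, which is exactly the point the article makes.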
  • Anonymous, June 27, 2008 12:08 PM
    Latency = How fast you can get to the "goodies"
    Bandwidth = Rate at which you can get the "goodies"
  • Anonymous, June 27, 2008 9:23 PM
    So, I have OCZ memory I can run stable at
    7-7-6-24-2T at 1333 MHz or
    9-9-9-24-2T at 1600 MHz.
    This is with the FSB at 1600 MHz, unlinked. Is there a method to calculate the best setting without running hours of benchmarks?
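One rough way to compare the two settings without benchmarking is to convert the CAS latency into nanoseconds, as the article's "memory myth" page does. This is only a back-of-the-envelope sketch: it ignores bandwidth and the other timings, which also affect real performance.

```python
def cas_ns(data_rate_mts, cas_cycles):
    """First-word latency in ns: CAS cycles x clock period (clock = data rate / 2)."""
    return cas_cycles * 2000.0 / data_rate_mts

# The commenter's two stable settings:
print(f"CAS 7 @ 1333 MT/s: {cas_ns(1333, 7):.2f} ns")  # lower absolute latency
print(f"CAS 9 @ 1600 MT/s: {cas_ns(1600, 9):.2f} ns")  # higher bandwidth, slightly slower CAS
```

The 1333 MHz setting has the lower absolute latency, while the 1600 MHz setting offers more bandwidth; which wins depends on the workload, so a short benchmark run is still the only definitive answer.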
  • Anonymous, October 3, 2008 5:13 PM
    Sorry, dude, but you are underestimating the ReaperX modules.
    I would really like to see what temperatures the other modules reached
    at a voltage of ~2.1 V. That doesn't mean the Platinum series isn't performant, but I saw a ReaperX easily reach 940 MHz at 1.9 V (EVP), which is nearly DDR3-1900; and when it comes to stability and temperature over hours of operation, the ReaperX beats them all.
  • Anonymous, October 6, 2008 5:47 PM
    All SDRAM (including the DDR variants) works more or less the same way: it is divided into banks, banks are divided into rows, and rows contain the data (as columns).
    First you issue a command to open a row (this is your latency); then, within a row, you can access any data you want at a rate of one datum per cycle, with latency depending on pipelining.

    So, for instance, reading one datum at address 0 takes your CAS latency + 1 cycle.

    Reading eight datums at address 0 takes your CAS latency + 8 cycles.

    Since CPUs like to fill their cache lines with the data that will probably be accessed next, they always read more than you asked for anyway, so the extra throughput provided by higher clock speeds helps.

    But if the CPU stalls waiting for RAM it is the latency that matters.
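The commenter's simplified cost model (pay the CAS latency once to reach an open row, then one datum per cycle) can be sketched in a few lines of Python. The function name and the figures are illustrative, not from the article:

```python
def read_time_cycles(cas_latency, num_words):
    """Cycles to read num_words sequential words from an already-open row:
    the CAS latency is paid once up front, then one word arrives per cycle."""
    return cas_latency + num_words

# Per the comment, for a CAS-8 module (e.g. DDR3-1333 at 8-x-x-x):
# reading 1 word costs CAS + 1, and a cache-line-sized burst of 8 costs CAS + 8.
print(read_time_cycles(8, 1))  # 9 cycles
print(read_time_cycles(8, 8))  # 16 cycles
```

This is why the fixed CAS cost matters most for small, scattered reads (where the CPU stalls), while the per-cycle transfer rate dominates for the long bursts that fill cache lines.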
  • Anonymous, December 15, 2012 10:41 AM
    What does the "S" in PC3-10600S stand for?