CL9 vs CL11: what's the difference?

bit_user

I'm having trouble finding any remotely recent comparison or benchmark of different CAS latencies.

Once upon a time, slower memory with a lower CL could equal or exceed faster memory with a higher CL in some benchmarks. Does that still hold true with the prefetching capabilities of modern CPUs and their integrated memory controllers?
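(For reference, here's the back-of-the-envelope arithmetic I'm working from: first-word latency is roughly CL divided by the actual memory clock, i.e. CL * 2000 / transfer rate in MT/s, since DDR moves two transfers per clock. A minimal Python sketch that ignores every timing other than CAS; the helper name is just mine:)

def cas_ns(cl, mts):
    # First-word latency: CL cycles at the memory clock (MT/s / 2), expressed in ns
    return cl * 2000.0 / mts

for cl, mts in [(7, 1333), (9, 1333), (9, 1600), (11, 1600)]:
    print(f"DDR3-{mts} CL{cl}: {cas_ns(cl, mts):.2f} ns")

# DDR3-1333 CL7 : 10.50 ns
# DDR3-1333 CL9 : 13.50 ns
# DDR3-1600 CL9 : 11.25 ns
# DDR3-1600 CL11: 13.75 ns

By that arithmetic alone, DDR3-1333 CL7 still edges out DDR3-1600 CL9 on first-word latency, which is the old rule of thumb; what I can't tell is how much of that survives once prefetching and the integrated memory controller get involved.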
 
Solution
Although I credited Nikorr's most helpful post with the answer, anyone genuinely interested in the subject should check out my post with the analysis.

Just check these; they show that it doesn't matter much.

There's only a small difference. Having timings that high (CL11) negates most of the speed advantage. You're not likely to notice the difference, and that unnoticeable difference isn't worth the price.


RAM speed comparison - we're looking at less than a 2% difference from the fastest to the slowest.
39732.png

39735.png
 

bit_user


Thanks for the info.

Whenever I look at benchmarks, I usually approach them with two questions:

1) What is the most relevant benchmark for my purposes?

2) What sort of benchmark highlights the differences?


The reason I ask myself #2 is that I rarely find a benchmark or set of benchmarks that perfectly fits my usage. So what I want to know is: if I start to stray outside the usage patterns of the common benchmarks, when am I likely to get into territory where the difference matters most, and how big might it be? Another way to think of it is as the worst-case scenario, or the upper bound.

Anyway, the reason I'm rambling about benchmarks is that I doubt any of the ones you cited stress the memory subsystem very hard. They hit the caches, yes, but not main memory as much.

Thanks for posting, but I will keep searching.
 
And watch these. This is real-life use, not a synthetic benchmark, and the RAM results are all within a small difference of each other. Theoretical numbers will show the difference, but in real life it doesn't matter. Most of the tasks you do are done in milliseconds, and longer tasks are done in seconds, which is hard to differentiate without a stopwatch.

39740.png


39741.png
 

bit_user


Okay, so on the Physics benchmark (which I feel probably involves a lot of random access and is therefore a pretty good stress test of CAS), we get a difference between DDR3-1333 CL9 and CL7 of 2.7%, a difference between DDR3-1600 CL9 and CL7 of 2.0%, and a difference between DDR3-1866 CL9 and CL8 of 0.6%. That's some good info.

In the WinRAR graph I posted, the DDR3-1066 CL5 is 5.8% faster than CL7. The DDR3-1333 CL6 is 7.8% faster than CL9. The DDR3-1600 CL6 is only 3.2% faster than CL9.

Now, to produce data most relevant to my question (the diff between 9 and 11), let's look at just differences of 2, at the highest end of the ranges measured. DDR3-1333 CL7 is 3.8% faster than CL9. DDR3-1600 CL7 is 2.4% faster than CL9.

In summary, we're seeing a difference between CL7 and CL9 of 2.7% and 2.0% on DDR3-1333 and DDR3-1600 on the physics benchmark, and a difference of 3.8% and 2.4% on the WinRAR benchmark. I guess that establishes a pretty good upper bound for a CL difference of 2 on DDR3-1600 (the memory I'll be using).
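That trend also squares with the raw arithmetic: a fixed 2-cycle CL gap is worth fewer nanoseconds as the clock goes up, so the percentage difference should shrink at higher speeds. A quick sketch using the same CAS-only simplification as before (the helper name is mine):

def cas_delta_ns(cycles, mts):
    # Nanoseconds that a CL gap of `cycles` costs at a given transfer rate (MT/s)
    return cycles * 2000.0 / mts

for mts in (1333, 1600, 1866):
    print(f"DDR3-{mts}: a 2-cycle CL gap = {cas_delta_ns(2, mts):.2f} ns")

# DDR3-1333: a 2-cycle CL gap = 3.00 ns
# DDR3-1600: a 2-cycle CL gap = 2.50 ns
# DDR3-1866: a 2-cycle CL gap = 2.14 ns

In absolute terms, CL9 to CL11 at DDR3-1600 is that same 2.5 ns step, which is why I'm treating the CL7-to-CL9 numbers as a reasonable proxy.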
 

bit_user

I appreciate all the help, but I pretty much got the info I wanted. Or reasonably close, at least.

In case you're asking out of curiosity, I plan to use an E5-1620 Xeon (it's OEM-only, but I'm pretty sure I can get one) on a Supermicro X9SRA motherboard that I already bought.

The reason I was asking about CAS latency is that I found some dual-ranked ECC RAM that's CAS 9, but all the rest of the ECC DDR3-1600 is CAS 11. I was wondering whether I should risk the dual-ranked stuff, but I think I'll stick with CAS 11.

What I'm building is a GPU-compute workstation. ECC is non-negotiable, and overclocking or running outside of stock timings is off the table. It's just a shame Intel decided to limit ECC support to its Xeons (and a handful of low-power SKUs).
 
