Low CAS latency vs. high speed

Stoic Muffins

Reputable
Aug 25, 2014
I am looking into buying new RAM and was wondering whether it would be better to get DDR3 CAS 7 running at 1600 or DDR3 CAS 11 running at 2400. I plan to both game and run everyday programs. Which would be faster and more stable in the long run? Is 2400 fast enough, or should I move up to 2800? The CAS latency gets up to 16 at the highest speeds, though, so is it worth it? I am also curious whether DDR4 is really faster than DDR3.

Thank you
- Stoic Muffins
 

millwright

Distinguished
It is almost always better to go for the faster RAM, although in practice you won't notice the difference.

Latency goes up as RAM speeds increase, but the RAM still gets faster overall.

From what I have read, the first DDR4 to come out performs about the same as existing fast DDR3, but DDR4 will continue increasing in speed even with its increased latency.
 

millwright

Distinguished
In simple numbers, the difference between 1600 and 2400 MHz is 800 clock ticks.

The difference between CAS 7 and CAS 11 is 4 clock ticks.

I'll take the 800 clock tick speed-up over the 4 any day.

The 4 is really more like 8 or 10 once you account for the clocks, but still a pittance.
 

CompuTronix

Intel Master
Moderator
Guys, here's the problem:

For over a decade, memory chips haven't been able to get much below 8.5 nanoseconds of true latency, regardless of how much money you spend to slice and dice the frequency / latency variables.

Check out this article: Five Overclockable 32 GB DDR3 Kits, Reviewed - http://www.tomshardware.com/reviews/32-gb-ddr3-ram,3790.html

Here's the math on memory:

Example #1 - 1600.00 (MHz) / 7 (ClockLatency) = 228.57 (MHz) / 2 (DoubleDataRate) = 114.29 (MHz), then 1 (second) / 114.29 (MHz) = 8.75 ns (nanoseconds)

Example #2 - 1866.67 (MHz) / 8 (ClockLatency) = 233.33 (MHz) / 2 (DoubleDataRate) = 116.67 (MHz), then 1 (second) / 116.67 (MHz) = 8.57 ns (nanoseconds)

Example #3 - 2133.33 (MHz) / 9 (ClockLatency) = 237.04 (MHz) / 2 (DoubleDataRate) = 118.52 (MHz), then 1 (second) / 118.52 (MHz) = 8.44 ns (nanoseconds)

Example #4 - 2400.00 (MHz) / 11 (ClockLatency) = 218.18 (MHz) / 2 (DoubleDataRate) = 109.09 (MHz), then 1 (second) / 109.09 (MHz) = 9.17 ns (nanoseconds)

Example #5 - 2800.00 (MHz) / 12 (ClockLatency) = 233.33 (MHz) / 2 (DoubleDataRate) = 116.67 (MHz), then 1 (second) / 116.67 (MHz) = 8.57 ns (nanoseconds)
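The arithmetic above collapses into one formula: true latency in nanoseconds is the CAS cycle count divided by the memory clock, where the clock is half the DDR data rate. Here's a quick sketch reproducing the figures (plain Python; the kit list is just the examples above):

```python
# True (first-word) latency: CAS cycles / memory clock.
# For DDR, the clock in MHz is half the data rate in MT/s.
def true_latency_ns(data_rate_mts, cas_cycles):
    clock_mhz = data_rate_mts / 2
    return cas_cycles / clock_mhz * 1000  # cycles/MHz gives microseconds; x1000 -> ns

kits = [(1600, 7), (1866, 8), (2133, 9), (2400, 11), (2800, 12)]
for rate, cl in kits:
    print(f"DDR3-{rate} CL{cl}: {true_latency_ns(rate, cl):.2f} ns")
```

Every kit lands within a few tenths of a nanosecond of the others except the 2400 CL11 part, which is exactly the point: frequency only helps if the CL scales with it.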

Faster memory is selling bandwidth. Overvolting, overclocking and / or tightening up secondary and tertiary timings can shave these numbers slightly, but 8.5 ns will remain hard to break until the memory chip manufacturers have their next technological breakthrough. Regardless, the most important factor is to have enough memory; beyond that, memory has little effect on overall system performance.

Remember that on most computers, unless you run software that specifically requires a page file (swap file), if you have 12 GB or more of memory you can set your page file to zero. This speeds the system up because your CPU and drive don't waste read / write cycles on swapping, and it also extends the life of your SSD. The money is better spent on a better cooler to overclock your K-series CPU, and / or on a better GPU to overclock.

CT :sol:
 
Solution

Tradesman1

Legenda in Aeternum


The logical progression, though, would be to compare 2400/10 against the 2400/11 you show, which will show a difference. These figures also only express the nanoseconds for a single action; they don't account for the fact that as frequencies go up there is wider bandwidth behind each action, so more can be done.
 

Tradesman1

Legenda in Aeternum
People always seem to want to look at one or the other, when they need to look at a combination of both and take into consideration what they do. Generally you want the bandwidth for multi-tasking and memory-centric work, but if you are single-tasking or gaming, a 1600/7 set will for the most part outperform a 2133/11 set. When comparing sets, you generally want to step both frequency and CL up or down together: in your example, 1600/7 is great, next would be 1866/8, then 2133/9, then 2400/10, then 2666/11. That's a high-performance DRAM ladder. Throw a 2133/11 into that mix and it falls below the 1866/8 or 1600/7, performance-wise.
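That ladder can be sanity-checked with the same true-latency arithmetic used earlier in the thread. A small sketch (plain Python; the kit list is taken from the post above):

```python
# First-word latency in ns: CAS cycles / (data rate / 2) * 1000.
def latency_ns(data_rate_mts, cas_cycles):
    return cas_cycles / (data_rate_mts / 2) * 1000

ladder = [(1600, 7), (1866, 8), (2133, 9), (2400, 10), (2666, 11), (2133, 11)]
# Sort best (lowest latency) first; the off-ladder 2133/11 lands dead last.
for rate, cl in sorted(ladder, key=lambda kit: latency_ns(*kit)):
    print(f"DDR3-{rate} CL{cl}: {latency_ns(rate, cl):.2f} ns")
```

Each rung up the ladder also brings more bandwidth, so the higher rungs win on both axes; the 2133/11 set gives up latency without gaining anything over 2400/10.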
 

CompuTronix

Intel Master
Moderator
Stoic Muffins,

There's yet another variable to consider. Some individual CPUs have IMCs (Integrated Memory Controllers) that don't respond well to higher-frequency memory, which can limit the CPU's overclock and cause the core temperatures to run hotter. If your luck of the draw is such that you happen to own one of these CPUs, then what you end up doing is underclocking your 2800 modules and tightening up the timings ... which isn't necessarily a bad thing.

Check out this Overclocking Guide: 3 Step Guide to Overclock Your i7 / i5 Haswell Platform - http://www.overclockers.com/3step-guide-to-overclock-intel-haswell

Tradesman1, I know the Stickies were recently shuffled, but if I'm not mistaken, didn't 4Ryan6 have a good one in the Overclocking Forum about IMC / high-frequency memory / CPU related issues?

CT :sol:
 


I hate to be a party pooper but there are some inaccuracies here.

First off, don't confuse the data rate and the IO bus clock. DDR3-1600 has an 800 MHz differential IO bus clock. Data is transferred on the IO pins using a bidirectional strobe. The strobe tracks both the rising and falling edges of the clock, for up to two transfers per clock whenever data is transferred. There's nothing intrinsically periodic about the data lanes or the data strobes, so expressing them in units of frequency in the time domain is inappropriate; transfers is a better unit. DDR3-1600 operates at up to 1600 MT/s using an 800 MHz clock.
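To make the distinction concrete, here's a small sketch relating data rate, IO clock, and peak bandwidth (plain Python; the 64-bit bus width is the standard DDR3 channel, and PC3-12800 is the matching JEDEC module name):

```python
# Relate a DDR data rate (MT/s) to its IO clock and peak per-channel bandwidth.
def ddr3_stats(data_rate_mts, bus_width_bits=64):
    io_clock_mhz = data_rate_mts / 2                 # two transfers per clock cycle
    peak_mb_s = data_rate_mts * bus_width_bits // 8  # one 64-bit channel, in MB/s
    return io_clock_mhz, peak_mb_s

clock_mhz, bandwidth = ddr3_stats(1600)
print(f"DDR3-1600: {clock_mhz:.0f} MHz IO clock, {bandwidth} MB/s peak (PC3-12800)")
```

The "1600" in DDR3-1600 is transfers per second in millions, not a clock frequency; the clock is half that, and the module name encodes the resulting 12800 MB/s.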

Second, CL stands for CAS (Column Address Strobe) latency, not Clock Latency. There are two separate interfaces at work in modern DRAM. The first is the outward-facing interface between the memory controller and the DRAM integrated circuit's IO controller. The second is the interior interface between the DRAM's IO controller and the DRAM's memory banks. The number of banks per IC depends on the particular type of SDRAM: good old-fashioned SDR SDRAM has 2 or 4 memory banks, DDR SDRAM has 4, DDR2 SDRAM has 4 or 8, DDR3 SDRAM has 8, and DDR4 SDRAM has 16 memory banks organized into 4 bank groups (borrowed from GDDR5). With the exception of GDDR5 and DDR4, each memory bank is fully independent (in those two, the bank groups are fully independent). The memory controller can issue a read command to bank 0, then issue a row activate command to bank 1, then issue a precharge command to bank 2, and then go back to bank 0 to start receiving the burst transfer.

What you have correctly assessed is that the internal DRAM operation has not improved significantly over the years, but by interleaving multiple DRAM banks inside a single IC behind a multiplexer, the performance of the IO bus itself has improved dramatically. A good memory controller can keep the IO bus busy nearly 100% of the time regardless of the specifics of the timings. A tighter CL in particular means that the memory controller need not handle as many operations in flight at once, but this effect has been mitigated substantially over the years as memory controllers have improved. Intel's DDR3 controller in particular is regarded as being very, very good.

Third, I really wouldn't advise disabling the page file. It won't speed up the system at all.
 

CompuTronix

Intel Master
Moderator
Pinhedd,

Thanks for the excellent info! I seldom contribute to memory threads, which is why I have no Memory Badge, but I do like to read and be informed. Although memory isn't my particular area of expertise, I thought I'd have a go at this thread.

You should write a Sticky for the Memory Forum. The existing Memory FAQ Sticky - http://www.tomshardware.com/forum/275873-30-memory-please-read-posting - written by BrentUnitedMem seems quite good, but it's a little dated at 4 years old. I'm sure our readers would benefit from a new Memory Sticky!

One question though: the following Tom's articles touch on memory and swap files relative to SSDs and HDDs. As I and many others have been running 12 GB or more of RAM without a swap file for a few years with no problems, would you please elaborate on this for the benefit of the thread, as well as for our silent readers?

Experiment: Can Adding RAM Improve Your SSD's Endurance? - http://www.tomshardware.com/reviews/ssd-ram-endurance,3475-4.html
Memory Upgrade: Is It Time To Add More RAM? - http://www.tomshardware.com/reviews/ram-memory-upgrade,2778-7.html

Thanks!

CT :sol:

EDIT: Stoic Muffins, as Tradesman1's and Pinhedd's answers are more accurate and informed than mine, I have respectfully unselected my answer as the "Solution".
 


I'm currently in the process of rewriting the memory FAQ in its entirety. I've got a half finished draft on one of my cloud storage volumes somewhere. I've been picking at it for months, I just haven't had the motivation to finish it. Maybe I'll get back to it sometime soon.

Anyway, moving on to swapping.

Physical memory is a use-it-or-lose-it resource. In terms of volume, it is also a scarce resource. More is almost always better. A single benchmark won't necessarily show an improvement, but that does not mean that the system as a whole will not show an improvement.
If a volume of physical memory is available but uncommitted to any task, it is doing nothing useful. If that volume is committed to a task, but the contents are infrequently or never used, it is still doing nothing useful. It makes sense then to fill the physical memory with the most frequently used data in the system, and provide a mechanism for ejecting infrequently used data from the memory while still retaining it for the purposes of task completion. Page caching and disk caching solve the first problem (filling uncommitted memory with frequently used data) and page swapping solves the second problem (ejecting committed but infrequently used data from physical memory).
Ejecting infrequently used memory frees up the corresponding physical memory for more demanding use, either for another process that will make use of it, or the cache which will improve the responsiveness of programs that access the file system.

There are some methodology problems in both of the articles that you linked to.
Neither one of them describes the size of the swap volume. The sweet spot seems to be somewhere between 1x and 1.5x the installed capacity, and allowing the system to manage the size will usually put it somewhere in that range.
Neither one of them reran the tests to see whether the impact of caching (the freed-up memory being filled with something) had any effect, or whether the system changed the size of the page file over time to accommodate the programs. On Windows the swap files are ordinary files on the file system, so the OS must go through the file system to change the size, which can incur temporary overhead.
The second article is laughably incomplete. Many of the datapoints that should be there (e.g., 16 GiB with a 16 GiB swap) are missing. The 12 GiB run should have been repeated to improve the certainty in the small difference between the runtimes. It's just not possible to draw a sound conclusion from what they did.
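For what it's worth, that sizing rule is trivial to express (plain Python; the 1x and 1.5x multipliers are just the rough range quoted above, not an official recommendation):

```python
# Rule-of-thumb page file range: roughly 1x to 1.5x installed RAM.
def pagefile_range_gib(installed_ram_gib):
    return installed_ram_gib * 1.0, installed_ram_gib * 1.5

for ram_gib in (8, 12, 16):
    low, high = pagefile_range_gib(ram_gib)
    print(f"{ram_gib} GiB RAM -> page file roughly {low:.0f} to {high:.0f} GiB")
```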

In terms of SSD endurance, most SSDs last a very, very long time, and the existence of a page file will not have a significant impact on the lifespan of the device. Many SSDs have built-in wear levelling as well as spare blocks to replace those that become unreadable over time. The only SSDs I would advise running without a page file are those using Samsung's TLC NAND flash (Samsung 840), as the endurance of TLC NAND is crap.

The page file is also useful for holding kernel crash dumps. A 1024 MiB page file is the recommended minimum.
 

CompuTronix

Intel Master
Moderator
I always appreciate a detailed explanation. Thank you for taking the time to write it.

It's clear that many of these articles are written to fit a proposed idea, which is assigned a deadline and ultimately results in too little testing to support meaningful conclusions based on adequate empirical data, or sometimes even to provide fundamental apples-to-apples comparisons. There's no room for shortcuts or any substitute for attention to detail.

Fortunately, I run Samsung 840's in my computers and those that I build for others. Gotta love their Magician utility!

Sorry, Stoic Muffins, for hijacking your thread ... we're not supposed to do this, but I'm sure we've provided plenty of information ... probably way more than you expected! We aim to please! :D
 


If you have the Samsung 840 Pro you have nothing to worry about, as it still uses 2-bit MLC NAND flash, which has acceptable P/E endurance. The Samsung 840 (non-Pro) and Samsung 840 EVO use 3-bit MLC (also called TLC), which is several times less robust.
 

CompuTronix

Intel Master
Moderator


Yes, all my Samsungs are 840 Pros. I paid a premium for some of them, but didn't mind. I plan on snapping up several more of the 256s.
 


I see no reason why it shouldn't, as long as the motherboard firmware supports the configuration. The bulk of the work is done by the memory controller (which is now located in the CPU package) and the memory modules themselves. The motherboard just provides connectivity for the most part.