If I have an iMac with a max memory speed of 1333 MHz and I install 1600 MHz DDR3 SO-DIMMs, will the latency improve relative to the 1600 timing specs? In other words, will the memory run at CAS 9 or 8 when running at 1333 instead of its rated CAS 11 at 1600?
This needs a response from someone who has actually run a memory test showing latency figures for the same memory clocked normally and then clocked down to the next lowest speed. Also, is there an app that reports the resulting CAS latency?
I don't know how iMacs behave since I've never bought one except for production use, but there isn't much of a difference in my view. I bought 1600 9-9-9-24 2T DDR3 for my AMD 1055T, and on automatic settings it runs at 1333 8-8-8-20 1T. It feels snappy enough for me, but when I change it to 1600 it's only about half a second faster at loading icons in Control Panel or in games. I have memory benchmark screenshots if you want to see the difference; it isn't much with my settings, since I don't overclock.
Well, to be very honest, memory sticks below 1000 MHz are usually slow. For example, suppose you have a 4 GB stick running at 780 MHz and you run a graphics benchmark (I personally believe a graphics benchmark is the best way to see whether your memory clock speed affects frame rates) and you get 40 fps. If you put in a 1333 MHz stick, you'll see a definite improvement of 4-5 frames. Above that, though, with a 1600, 1800, or 2430 MHz stick, you might only squeeze out a maximum of 2 or 3 more fps. In short, it doesn't make a noticeable difference.

Next comes memory size. Increasing the size decreases processing time, but only to an extent. If I had a 4 GB stick running at 1333 MHz and it took me 30 seconds to copy a 500 MB file from one place to another, then increasing to 8 GB would produce a significant difference and might reduce the time by 10 or 12 seconds, which is a huge improvement. But with 16 GB installed, barely a 0.5-1 second improvement would be observed. So what I can conclude is that the difference in times, latencies, or fps only changes to a definite extent, because software these days isn't well optimized for the more advanced hardware available.
I hope this clears everything up for you.
Any questions or problems, I'm always available.
My sys specs:
i7 3770K @ 4.6 GHz
ASRock Z77 Extreme4
2x R9 290 OC versions
Corsair Vengeance 32 GB
Seagate Barracuda 3 TB
1 TB Samsung SSD
Corsair H55i with two Noctua fans
Technically, most latency in DRAM comes from combinational logic and analog processing delays, and those delays are measured in nanoseconds. But because the DRAM front end is synchronous, those delays are expressed, for convenience, as cycle counts at a given clock rate: divide the delay by the clock period and round up.
So if you have 1600-10-10-10 RAM that you want to run at 1333, the 1333 timings should be 10 × 1333/1600 = 8.33, rounded up to 9, i.e. 9-9-9. Depending on how much rounding-up went into the original 10-10-10 timings, 8-8-8 might also be possible.
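The conversion above can be sketched in a few lines. This is just an illustration of the arithmetic, assuming timings simply rescale with the clock (real firmware may read different values from the module's SPD/JEDEC tables instead):

```python
import math

def rescale_timing(cycles, old_rate_mt, new_rate_mt):
    """Convert a timing in clock cycles at old_rate_mt (MT/s) to the
    equivalent cycle count at new_rate_mt, rounding up since the
    memory controller cannot use fractional cycles."""
    # DDR transfers twice per clock, so clock (MHz) = rate (MT/s) / 2.
    delay_ns = cycles / (old_rate_mt / 2) * 1000  # absolute delay in ns
    new_clock_mhz = new_rate_mt / 2
    return math.ceil(delay_ns * new_clock_mhz / 1000)

# DDR3-1600 CL10 downclocked to DDR3-1333:
print(rescale_timing(10, 1600, 1333))  # → 9
# If the stick were really CL9-capable at 1600, 1333 could run CL8:
print(rescale_timing(9, 1600, 1333))   # → 8
```

Note that the nanosecond intermediate cancels out, so this reduces to ceil(cycles × new_rate/old_rate); it is written out in full to show that the absolute delay in nanoseconds stays the same and only the cycle count changes.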