
Single memory vs dual channel

Is there any difference in performance between
2x4GB in dual channel and a single 8GB stick?

I want 8GB of RAM, and the single stick is much cheaper, so if the performance is almost the same I'd rather get the single stick.

This is what I want to buy:
http://www.overclockers.co.uk/showproduct.php?prodid=MY-080-PA
  1. Dual channel is twice as fast as single. But in the real world you won't see much difference except in benchmarks. Also, the single-stick route gives you a better upgrade path...
  2. Hmm, so my single 8GB stick will be 2 times slower than 2x4GB in dual channel?


    -If I buy another identical 8GB stick and place it correctly on the motherboard, will it run in dual channel for a total of 16GB?
  3. No, not exactly 2 times faster, just 2 times the theoretical bandwidth. You will see a 10% gain at best in benchmarks, but you can't tell the difference in real life. And it's odd that one module costs a lot less; usually they cost the same here in the US.

    -And yes, you can just add another 8GB stick to make it two channels. They don't have to be the same brand and model, but make sure they run at the same frequency and timings, or else your mobo will throttle the faster one until they both match.
  4. Yes, it's best to add another 8GB stick since you already own one now, and memory is fairly cheap these days. Go ahead and don't look back...
  5. thanks guys!
  6. It is better to go with the 8GB single channel, because the total amount of RAM always counts the most.
  7. Dual channel provides a 10-15% performance boost, not anywhere near twice as much. What happens is that a stick of DRAM is treated as a 64-bit device; when dual channel is enabled, the 2 or 4 sticks in dual channel are seen as a single 128-bit device, so it runs faster. And yes, you can try adding another stick; they may or may not work together, even if they are the exact same model. Also, if the sticks are 1600 or better, XMP won't work: XMP is programmed into the sticks as a packaged set, which is why DRAM is available in so many different configurations of sticks.
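(A quick aside for later readers: the "64-bit vs 128-bit device" point above translates directly into theoretical peak numbers. The sketch below is my own illustration in Python, not from any poster here; as the thread says, real-world gains are far smaller than the theoretical doubling.)

```python
# Theoretical peak DRAM bandwidth: each channel is a 64-bit (8-byte) bus,
# and a DDR3-1600 stick performs 1600 million transfers per second.
def peak_bandwidth_gbs(mega_transfers, channels):
    """Peak bandwidth in GB/s for a given MT/s rating and channel count."""
    bytes_per_transfer = 8  # one 64-bit channel
    return mega_transfers * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gbs(1600, channels=1))  # single 8GB stick: 12.8
print(peak_bandwidth_gbs(1600, channels=2))  # 2x4GB dual channel: 25.6
```

Doubling the channels doubles the theoretical ceiling, which is exactly the "2 times the theoretical bandwidth" mentioned above; actual applications rarely saturate even one channel, which is why benchmarks show only ~10-15% gains.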
  8. Anonymous said:
    Is there any difference in performance between
    2x4GB in dual channel and a single 8GB stick?

    I want 8GB of RAM, and the single stick is much cheaper, so if the performance is almost the same I'd rather get the single stick.

    This is what I want to buy:
    http://www.overclockers.co.uk/showproduct.php?prodid=MY-080-PA


    Well, what motherboard and CPU do you have, or which ones are you looking at? If your system supports a single 8GB module then you should be fine; otherwise there is no option, and you will have to go with two 4GB modules.
  9. Yo man, this thread is so old, why is it still being discussed lol.
  10. Good point ;) I didn't even look at the dates; I saw it when Victorst responded...
  11. Azn Cracker said:
    Yo man, this thread is so old, why is it still being discussed lol.


    I don't agree. This thread is old, but the question and the answers are still valuable for everyone looking for them; Google and other search engines still link to old threads. It's a waste of time to read through threads without finding a clear answer to your question. Thanks to everyone taking the time to share your knowledge and post your answers (even in old threads). Hundreds, even thousands, of newbies will read and appreciate your contribution. Keep it up, everyone!!
  12. And with an integrated GPU ....

    http://www.youtube.com/watch?v=QMCvvtaZ5Z0

    Also, if you are in the market for second-hand RAM, you can often pick up dual channel kits of what were originally more expensive, branded, lower-latency (gaming) RAM cheaper than single sticks of the equivalent overall size from a probably less prestigious brand. i.e., cheaper dual channel kits, faster CL.
  13. If, as they say, there is little difference...

    A cheap way of populating 2 mobos is a 2x4GB kit (say), and splitting it over the 2 mobos.

    Makes you wonder if a RAM-starved PC may be almost as quick swapping out to an SSD cache file?

    I see $50 oz $ for 64gb kingston & $20+ for 128gb - ie - $1600~ for 126 gb ram vs A$70 in ssd
  14. msroadkill612 said:
    If, as they say, there is little difference...

    A cheap way of populating 2 mobos is a 2x4GB kit (say), and splitting it over the 2 mobos.

    Makes you wonder if a RAM-starved PC may be almost as quick swapping out to an SSD cache file?

    I see $50 oz $ for 64gb kingston & $20+ for 128gb - ie - $1600~ for 126 gb ram vs A$70 in ssd




    WTF?
  15. From what I have heard, single-channel is faster than dual-channel.

    Dual-channel must read everything twice and then pass it on. (It has dual channel because companies have not been able to build a better single channel with a bigger size.)

    So from what I have understood and learned, single-channel is the fastest, and what matters is the MHz.

    Some PCs do not support single-channel (might be good to know).

    Please correct me if I'm wrong.

    Regards
    Limpan
  16. Limpan said:

    please correct me if I'm wrong

    Regards
    Limpan


    A single channel can often achieve the best latency characteristics compared with multi-channel configurations, however, it's rare to find a workload that scales better with the improved latency than it would with the significant bandwidth advantages that come from running multiple channels.

    Configuration changes affecting performance, in order from most effect to least effect in my experience:
    1. Channel interleave
    2. Rank interleave
    3. Speed
    4. Timings

    That's not to say I haven't managed to find some workloads where that ordering inverts and channel interleave lands at the bottom of the list, but in those cases, the performance advantage of going that route isn't significant enough to offset the advantages of channel interleave in other workloads.
  17. mdocod said:
    Limpan said:

    please correct me if I'm wrong

    Regards
    Limpan


    A single channel can often achieve the best latency characteristics compared with multi-channel configurations, however, it's rare to find a workload that scales better with the improved latency than it would with the significant bandwidth advantages that come from running multiple channels.



    I found this site. No numbers as evidence, but it makes perfect sense:

    http://www.techarp.com/showFreeBOG.aspx?lang=0&bogno=231

    Thanks for the quick reply, mdocod.
  18. Limpan said:
    From what I have heard, single-channel is faster than dual-channel.

    Dual-channel must read everything twice and then pass it on. (It has dual channel because companies have not been able to build a better single channel with a bigger size.)

    So from what I have understood and learned, single-channel is the fastest, and what matters is the MHz.

    Some PCs do not support single-channel (might be good to know).

    Please correct me if I'm wrong.

    Regards
    Limpan


    Dual channel is a little better, by 3-5%.
  19. Yes, dual channel is faster, by up to 10-15% on Intel. Single channel runs the DRAM as the single 64-bit device that it is; in dual channel, the memory controller sees all the DRAM as a single 128-bit device and runs it accordingly.
  20. Tradesman1 said:
    Yes, dual channel is faster, by up to 10-15% on Intel. Single channel runs the DRAM as the single 64-bit device that it is; in dual channel, the memory controller sees all the DRAM as a single 128-bit device and runs it accordingly.


    That's Ganged mode. Intel uses unganged interleaved mode.
  21. Intel in dual channel sees the DRAM as a 128-bit device, in tri-channel on 1366 as a 192-bit device, and in quad-channel on 2011 as a 256-bit device.
  22. Tradesman1 said:
    Intel in dual channel sees the DRAM as a 128-bit device, in tri-channel on 1366 as a 192-bit device, and in quad-channel on 2011 as a 256-bit device.


    If they were ganged that would be true. However, Intel has used interleaved memory for quite some time; AMD supports it as well as far as I know. Interleaving ping-pongs physical address space assignments along cache block sized chunks (64 bytes) between banks, ranks, and channels (each can be enabled independently).

    No interleaving:

    Addresses 0-63 will be on channel0, rank0, bank0
    Addresses 64-127 will be on channel0, rank0, bank0
    Addresses 128-191 will be on channel0, rank0, bank0
    Addresses 192-255 will be on channel0, rank0, bank0

    This pattern repeats until channel0,rank0,bank0 is full, then moves on to channel0,rank0,bank1 and so on.

    Channel interleaving:

    Addresses 0-63 will be on channel0, rank0, bank0
    Addresses 64-127 will be on channel1, rank0, bank0
    Addresses 128-191 will be on channel2, rank0, bank0
    Addresses 192-255 will be on channel3, rank0, bank0
    Addresses 256-319 will be on channel0, rank0, bank0

    This pattern repeats until rank0,bank0 is full on each channel, then moves on to rank0,bank1, and so on.

    Bank interleaving:

    Addresses 0-63 will be on channel0, rank0, bank0
    Addresses 64-127 will be on channel0, rank0, bank1
    Addresses 128-191 will be on channel0, rank0, bank2
    Addresses 192-255 will be on channel0, rank0, bank3
    Addresses 256-319 will be on channel0, rank0, bank4
    Addresses 320-383 will be on channel0, rank0, bank5
    Addresses 384-447 will be on channel0, rank0, bank6
    Addresses 448-511 will be on channel0, rank0, bank7
    Addresses 512-575 will be on channel0, rank0, bank0

    This pattern repeats until all of the banks on channel0, rank0 are full, and then moves on to channel0, rank1 (if installed) and eventually to channel1, rank0.

    Channel and bank interleaving:

    Addresses 0-63 will be on channel0, rank0, bank0
    Addresses 64-127 will be on channel1, rank0, bank0
    Addresses 128-191 will be on channel2, rank0, bank0
    Addresses 192-255 will be on channel3, rank0, bank0
    Addresses 256-319 will be on channel0, rank0, bank1
    Addresses 320-383 will be on channel1, rank0, bank1
    Addresses 384-447 will be on channel2, rank0, bank1
    Addresses 448-511 will be on channel3, rank0, bank1

    This pattern repeats through the 8 DDR3 banks until all are full, then moves on to the next rank.

    The benefit of using four independent 64-bit DRAM channels over a single large 256-bit DRAM channel is reduced latency for small datasets, especially those that have little spatial locality.

    Loading a physically contiguous 256 byte dataset aligned on a 256 byte boundary using channel interleaving would be incredibly quick. Just select rank0,bank0 and read the desired column address. However, loading the same data set using bank interleaving requires opening four rows on four banks on one channel (banks 0,1,2,3) and burst transferring from all of them sequentially. Multiple channels does not help here. Without any interleaving, the memory controller would have to perform four separate read operations from the same row (or two rows if it crosses a row boundary) from a single bank.

    Similarly, loading a physically contiguous 64 byte data set (a single cache block) using channel interleaving would be quick if the memory controller could operate all four channels independently, but if the channels are ganged together, 192 bytes out of 256 bytes will be masked off. As a result, loading two or more unrelated 64-byte datasets may result in a block which incurs a latency penalty. Loading the same 64 byte data set using bank interleaving is very simple as well. However, if no interleaving is done, the memory controller may be blocked while it waits for another read operation on the same bank to complete. Ganging in a bank-interleaved configuration would be quite useless as there's non-unit stride between the addresses associated with each channel.

    Channel and bank interleaving provides the best of both worlds and results in lower random access times across the board.
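(For anyone who wants to play with the address tables in the post above, they can be reproduced with a toy decoder. This is my own Python sketch; real memory controllers select address bits in more elaborate ways, so treat it as an illustration of the interleaving pattern, not Intel's actual logic. Rank interleaving is omitted for brevity.)

```python
BLOCK = 64  # cache-block-sized chunk, as described above

def decode(addr, channels=4, banks=8,
           channel_interleave=True, bank_interleave=True):
    """Toy mapping of a physical address to (channel, rank, bank).

    With channel interleave on, the low-order block bits select the channel;
    with bank interleave on, the next bits select the bank. Rank selection
    from the remaining capacity bits is skipped (rank is fixed at 0).
    """
    block = addr // BLOCK
    channel = bank = 0
    if channel_interleave:
        channel, block = block % channels, block // channels
    if bank_interleave:
        bank, block = block % banks, block // banks
    return channel, 0, bank

# Channel + bank interleaving, matching the last table above:
for addr in range(0, 512, 64):
    print(addr, decode(addr))
# 0   -> (0, 0, 0)
# 64  -> (1, 0, 0)
# 256 -> (0, 0, 1)
```

Switching the flags off reproduces the other tables: with `channel_interleave=False`, address 64 lands on bank 1 of channel 0 (the bank-interleaving table), and with `bank_interleave=False`, address 256 wraps back to channel 0, bank 0 (the channel-interleaving table).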
  23. Pinhedd said:
    If they were ganged that would be true. However, Intel has used interleaved memory for quite some time... Channel and bank interleaving provides the best of both worlds and results in lower random access times across the board.


    Pinhedd - what amazing detail you've provided!
    Pardon me for an add-on question: I was looking at DDR3 1600MHz Corsair RAM for my Asus M4A89 GTD Pro motherboard, which houses an AMD Phenom II X6 1090T processor (Black Edition).
    From the motherboard documentation I gather that it supports dual channel RAM. I currently have 8GB (2x4GB) of Corsair 1600MHz DDR3 RAM (model CMX4GX3M1A1600C9) installed, and I wish to upgrade to 16/24GB. I was looking at buying three modules of Corsair dual channel RAM (CMX16GX3M2A1600C11); this model comes as 2x8GB. Will it be OK to have three modules of RAM installed, or should I install 32GB (2x(2x8GB))?
    Also, I have read many forum threads regarding the maximum memory supported by the Asus M4A89 GTD Pro and the AMD Phenom II X6, and they suggest it can easily be scaled up to 32GB, because memory management is handled by the OS, not the BIOS, on most AM3/AM3+ socket motherboards; the Asus motherboard documentation, however, says it supports only 16GB!

    Appreciate your help and your time! Thanks, mate.
  24. sammat said:


    Pinhedd - what amazing detail you've provided!
    Pardon me for an add-on question: I was looking at DDR3 1600MHz Corsair RAM for my Asus M4A89 GTD Pro motherboard, which houses an AMD Phenom II X6 1090T processor (Black Edition).
    From the motherboard documentation I gather that it supports dual channel RAM. I currently have 8GB (2x4GB) of Corsair 1600MHz DDR3 RAM (model CMX4GX3M1A1600C9) installed, and I wish to upgrade to 16/24GB. I was looking at buying three modules of Corsair dual channel RAM (CMX16GX3M2A1600C11); this model comes as 2x8GB. Will it be OK to have three modules of RAM installed, or should I install 32GB (2x(2x8GB))?
    Also, I have read many forum threads regarding the maximum memory supported by the Asus M4A89 GTD Pro and the AMD Phenom II X6, and they suggest it can easily be scaled up to 32GB, because memory management is handled by the OS, not the BIOS, on most AM3/AM3+ socket motherboards; the Asus motherboard documentation, however, says it supports only 16GB!

    Appreciate your help and your time! Thanks, mate.


    AMD's memory support tends to be a bit pickier than Intel's. Intel does allow unbalanced channels thanks to its Flex Mode technology. As far as I know, memory channels on AMD's platforms must be balanced. I haven't used an AMD microprocessor in years though so I'm just going off of memory (pun intended) here.

    What I would recommend is adding 2x8 GiB to your existing 2x4 GiB and see if that works. If it does, you'll end up with 24GiB. Asus is generally pretty good about expanding DRAM support over time through firmware updates. The box may have listed 16GiB maximum at the time of manufacturing based on market realities (4GiB DIMMs may have been the standard at the time) rather than chipset or firmware limitations.
    If more than 16GiB doesn't work, and there's no firmware update available to address it (this is handled by the BIOS/UEFI firmware, not the operating system) then you'll have 16GiB anyway.
    Please note though that most 8GiB DIMMs use 2x4GiB ranks (one on each side of the PCB) with each rank being constructed from 8x4Gib DRAM ICs. Some older Intel platforms (Intel 5 series chipsets) do not support 4Gib DRAM ICs, only 2Gib DRAM ICs or smaller. I do not know whether or not AMD platforms have similar limitations. What this means is that 8GiB dual-rank DIMMs and 4GiB single-rank DIMMs may not work on your motherboard. In order to reach 16GiB you'll have to add two more dual-rank 4GiB DIMMs that are constructed from 2x2GiB ranks using older 2Gib DRAM ICs.
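(The rank/IC arithmetic in that last paragraph multiplies out as follows. A quick sketch of my own, just to make the Gib-vs-GiB bookkeeping explicit:)

```python
def dimm_capacity_gib(ranks, ics_per_rank, ic_density_gibit):
    """GiB capacity of a DIMM: ranks x DRAM ICs per rank x gigabits per IC,
    divided by 8 bits per byte."""
    return ranks * ics_per_rank * ic_density_gibit / 8

print(dimm_capacity_gib(2, 8, 4))  # common 8 GiB dual-rank DIMM (2x4Gib ICs x8): 8.0
print(dimm_capacity_gib(2, 8, 2))  # older-style 4 GiB dual-rank DIMM from 2Gib ICs: 4.0
```

So a platform limited to 2Gib DRAM ICs tops out at 4GiB per dual-rank DIMM, which is why reaching 16GiB on such boards takes four of those DIMMs rather than two 8GiB ones.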