
RAM Parallelism: Single And Dual Channel

Parallel Processing, Part 2: RAM and HDD

Some Memory History

Developments in system main memory (also known as RAM, or random access memory) had been mostly linear until AMD and Intel introduced dual-channel controllers in 2003. In the server space, you can also find Xeon platforms (Bensley, or the latest Stoakley platform for 45 nm processors) that utilize a quad-channel memory controller.

Memory modules, as opposed to individually installed memory chips, were introduced in the 1990s to facilitate memory deployment. The first Single Inline Memory Modules (SIMMs) had 30 pins and were eight bits wide, which meant that pairs had to be used for 286 and 386SX computers (16-bit data buses), and four SIMMs were required for 386DX systems and up (32-bit architectures). 30-pin SIMM and SIPP (Single Inline Pin Package) modules were available at 256 kB to 4 MB each, and they were replaced by 72-pin PS/2 SIMMs in the mid ’90s. The fact that at least two or four modules had to be used has nothing to do with parallelism; it was simply a matter of matching the system bus width.

72-pin SIMMs were used for Fast Page Mode (FPM) DRAM, which was quickly replaced by Extended Data Out (EDO) memory in the mid-1990s. Although 64 MB PS/2 SIMMs existed, modules typically maxed out at 32 MB. EDO delivers better read performance when multiple words are read out of the same page, since the row address doesn’t have to be changed between accesses. EDO reached a peak bandwidth of 266 MB/s.

EDO was replaced by synchronous DRAM (SDRAM) on 168-pin DIMMs (64-bit data bus at 3.3 V), where the clock is defined by the system bus or the memory controller. First-generation PC66 memory was already twice as fast as EDO DRAM, and the following generations scaled nicely: PC100 and PC133 became popular. After that, double data rate (DDR) SDRAM was introduced on 184-pin DDR DIMMs. These reduced the voltage to 2.5 V and doubled performance by transferring data on both the rising and the falling edge of the clock signal, at base clock speeds of up to 200 MHz (DDR400). DDR2 memory on 240-pin DIMMs, as well as DDR3, is still based on this same technology, but offers a larger prefetch and much higher clock speeds: up to 400 MHz with DDR2 (DDR2-800) and a projected 800 MHz for DDR3 (DDR3-1600).

All of these technologies worked on a single memory channel, meaning each generation increased bandwidth over the previous one by widening the memory bus and by accelerating memory speed.
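To put those generations side by side: peak theoretical bandwidth is simply the effective transfer rate multiplied by the bus width, times the number of channels. A minimal Python sketch of that arithmetic (the per-module figures are the standard ratings for the families named above; `peak_bandwidth_mb_s` is an illustrative helper name, and EDO is assumed in a 66.6 MHz, 32-bit 72-pin SIMM configuration):

```python
# Peak theoretical memory bandwidth = effective transfer rate x bus width x channels.
def peak_bandwidth_mb_s(mega_transfers_per_s, bus_width_bits, channels=1):
    """Peak theoretical bandwidth in MB/s for a given memory configuration."""
    return mega_transfers_per_s * (bus_width_bits / 8) * channels

MODULES = [
    # (name, effective transfer rate in MT/s, bus width in bits)
    ("EDO-66",      66.6,   32),
    ("PC100 SDRAM", 100,    64),
    ("PC133 SDRAM", 133,    64),
    ("DDR400",      400,    64),
    ("DDR2-800",    800,    64),
    ("DDR3-1600",   1600,   64),
]

for name, rate, width in MODULES:
    single = peak_bandwidth_mb_s(rate, width, channels=1)
    dual = peak_bandwidth_mb_s(rate, width, channels=2)
    print(f"{name:12s} {single:8.0f} MB/s single-channel, {dual:8.0f} MB/s dual-channel")
```

The table it prints shows why dual channel matters on paper: DDR400 tops out at 3,200 MB/s on one channel but 6,400 MB/s on two, matching the EDO figure of roughly 266 MB/s cited earlier for the oldest entry.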

  • perzy, August 8, 2008 8:11 AM:
    This is a great article!
  • Anonymous, September 15, 2008 9:40 PM:
    (First of all, excuse my poor English...)
    Hmm, your memory tests don't convince me. You should run, for example, WinRAR and LAME in parallel (i.e., multitasking); otherwise the caches don't get flushed, and that is when dual channel really matters. Note that this is not a contrived situation; under normal use a system commonly has several memory-hungry applications running concurrently (Word, a browser with a lot of tabs open, an anti-virus, etc.).
  • hellwig, November 21, 2008 6:33 PM:
    I doubt anyone from Tom's will see this comment on such an old article, but it would have been interesting to see single-vs.-dual channel memory using an AMD processor. Since Tom's likes Intel, the new Core i7s would also be beneficial. The point is, the article acknowledges that the Core 2s have a tremendous amount of L2 cache to combat FSB (and consequently memory) latencies. How does the comparison look with an AMD or new Core i7, where there is no FSB and the L2 cache is significantly reduced? I would imagine this is where dual/triple-channel shows its strength. I hope we see a single vs. dual vs. triple channel comparison soon.
  • meodowla, September 15, 2009 9:27 AM:
    Won't it be different when using an AMD processor with the memory controller inside the CPU?
  • junghm69, March 24, 2011 4:48 AM:
    My Windows Experience Index 3D gaming graphics score goes up from 3.8 to 5.1 when I switch from dual channel to single channel. This makes absolutely no sense. I thought dual channel was supposed to be better than single channel. Can anyone explain this?

    I seriously doubt that this score is accurate. I am using the built-in graphics controller on the motherboard, which is an AMD 760G chipset (ATI HD 3000 or 3200, I think). I've used Radeon HD 5450 video cards on similar systems and they give me a score of 5.4. How can a built-in graphics controller give me a 5.1?

    AMD Athlon II X3 435 Rana (2.9 GHz)
    Asus M4A78LT-M motherboard
    4 GB G.Skill DDR3-1333 (2x2GB) F3-10600CL9D-4GBNT CL9-9-9-24 1.5V
    Windows 7 Ultimate 64-bit
  • Anonymous, April 24, 2011 12:57 AM:
    Because if you use two different memory modules, both will run at the speed of the slower one when you enable dual channel on your motherboard.