Difference between low and high-density SDRAM?

Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
BX440. It requires low-density: how can I discern the difference
between low and high-density (I have the mobo manual, and believe I
have a handle on the other spec requirements and compatibility
w/existing 2x128s)? Thank you
  1. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    On 6 Dec 2004 00:34:58 -0800, alancemor@yahoo.com (Lance Morgan) wrote:

    >I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    >BX440. It requires low-density: how can I discern the difference
    >between low and high-density (I have the mobo manual, and believe I
    >have a handle on the other spec requirements and compatability
    >w/existing 2x128s)? Thank you

    In the "normal" sense of "low density" it would mean the inidivual capacity
    of each chip - a 440BX mbrd can use chips up to 128M-bit, so you need to
    make sure that the DIMM you buy is double-sided with 16 chips.

    A while back, there were also DIMM mfrs selling "high density DIMMs", a
    term they coined to describe a DIMM made with memory chips with a data
    width of 4 bits. They also had 16 chips and were populated on both sides
    of the module but were configured as a single 64-bit wide row/rank of
    memory. The 440BX does *not* support such a configuration so, as well as
    the 16 chip count, you need to be sure that the memory chips are a 16Mx8
    configuration and not 32Mx4.
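
    For illustration, a rough Python sketch of that arithmetic (the helper
    name is made up for the example): both layouts use sixteen 128Mbit chips,
    but only the 16Mx8 part splits into the two 64-bit ranks a 440BX expects.

        # Sketch: module capacity and rank count from chip organization.
        def dimm_layout(mwords, width_bits, chip_count):
            chip_mbits = mwords * width_bits          # e.g. 16M x 8 = 128 Mbit per chip
            chips_per_rank = 64 // width_bits         # chips needed to fill the 64-bit bus
            ranks = chip_count // chips_per_rank
            total_mbytes = chip_mbits * chip_count // 8
            return total_mbytes, ranks

        print(dimm_layout(16, 8, 16))   # 16Mx8, 16 chips -> (256, 2): two ranks, BX-friendly
        print(dimm_layout(32, 4, 16))   # 32Mx4, 16 chips -> (256, 1): one wide rank, not for a BX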

    The best advice is just to go to www.crucial.com and plug your mbrd
    name/model into their selector. Their prices are reasonable and they will
    not sell you a dud or mismatch.

    Rgds, George Macdonald

    "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
  2. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    Lance Morgan wrote:
    > I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    > BX440. It requires low-density: how can I discern the difference
    > between low and high-density (I have the mobo manual, and believe I
    > have a handle on the other spec requirements and compatability
    > w/existing 2x128s)? Thank you

    In the SDRAM days, they came up with the term low-density vs.
    high-density to describe the difference between DIMMs with memory
    modules on only one side of the module vs. those that have them on both
    sides of the module. Now, you could have 256MB of RAM entirely on one
    side of the module, or 256MB split half between the front and back of
    the module.

    The single-sided module was considered "low-density", despite the fact
    that it packed 256MB into half the number of chips of the dual-sided
    module. Most of us would normally call a chip with a higher number of
    circuits higher-density, but in this case they aren't referring to the
    internal electronic density, just the density of the number of chips on
    the module. The reason this is important at all is that those
    "high-density" modules, having more chips on them, drew a lot more
    power; many motherboards couldn't supply the required amount of power
    to those types of modules. Or if they could, they could only supply it
    to one module, but not more.

    I'll echo what others have told you about going to Crucial.com, but also
    add you might want to check out Kingston.com, they both have online
    forms which allow you to choose precisely what type of modules are
    certified for their particular motherboards.

    Yousuf Khan
  3. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    Yousuf Khan wrote:
    > Lance Morgan wrote:
    >
    >> I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    >> BX440. It requires low-density: how can I discern the difference
    >> between low and high-density (I have the mobo manual, and believe I
    >> have a handle on the other spec requirements and compatability
    >> w/existing 2x128s)? Thank you
    >
    >
    > In the SDRAM days, they came up with the term low-density vs.
    > high-density to describe the difference between DIMMs with memory
    > modules on only one side of the module vs. those that have them on both
    > sides of the module. Now, you could have 256MB of RAM entirely on one
    > side of the module, or 256MB split half between the front and back of
    > the module.
    >
    > The single-sided was considered "low-density", despite the fact that it
    > has packed 256MB into half the number of chips as the dual-sided module.
    > Most of us would normally call a chip with a higher number of circuits
    > to be higher-density, but in this case they aren't referring to the
    > internal electronic density, but the just the density of the number of
    > chips. The reason this is important at all is because those
    > "high-density" modules, having more chips on them, drew a lot more
    > power, many motherboards couldn't supply the required amount of power to
    > those types of modules. Or if they could, they could only supply them to
    > one module, but not more modules.

    I'm sorry but that is not right.

    Chips on one vs two sides was called single and double sided, which is also
    confusing because, while it was physically true for the early chips (and
    'common'), it really refers to having two 'groups' of chips with each
    'group' being 64 bit wide. I.E. a double 'sided' module could still have
    all chips on one physical side of the module. What made it 'double' was
    having two 64 bit wide 'groups', as in 2 groups of, say, 8 meg x 64, which
    would give 128 Mbytes total. The two groups are addressed as if they are
    two (single sided) sticks even though in the one socket, which is why the
    board must support double sided sticks. The actual physical layout is
    irrelevant, other than the practicality of assembly, since the electronics
    has no way of even knowing where they're located, much less care.

    The term 'density', as it is used in this context, relates to the chip
    organization.

    As noted above, the data bus is 64 bits wide. If the chips are organized
    8x8 (64 Mbit) then 8 of them make up what I called a 'group' (I'm avoiding
    "bank" because that's used internally for a completely different thing).
    I.E. 8x8 chips, times 8 of them, is 8x64 for 64 Mbytes. Put two groups on
    the stick and you get 128 Mbytes, as in the above example for "double sided."
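
    The same tally as a small Python sketch of the arithmetic just described
    (nothing beyond the numbers above):

        mbits_per_chip = 8 * 8          # 8M words x 8 bits = 64 Mbit per chip
        chips_per_group = 64 // 8       # eight x8 chips fill the 64-bit bus
        mbytes_per_group = mbits_per_chip * chips_per_group // 8
        print(mbytes_per_group)         # 64 (MB per group)
        print(2 * mbytes_per_group)     # 128 (MB with two groups, the "double sided" case)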

    Now, one way to think of 'high density' (and the one they mean) is to be
    able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    on the 'single side' (twice as much. e.g. 'high density'). Note again that
    physical location is irrelevant and if you can figure out how to get 32 on
    the stick you could have a 256 Mbyte 'double sided' module.

    The problem is it takes an additional address line to address a x4 chip vs
    the same size x8 so if the motherboard expects 8x8 chips, and has only that
    many address lines, then it will only see half of a 16x4 chip. It needs the
    'low density' 8x8.
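
    One way to see that extra address line, as a sketch (it assumes a flat
    address count; real SDRAM multiplexes it into row/column addresses):

        import math

        def address_bits(megalocations):
            # bits needed to pick one of N mega-locations inside the chip
            return int(math.log2(megalocations * 2**20))

        print(address_bits(8))    # 8M x 8 (64 Mbit) chip  -> 23 bits
        print(address_bits(16))   # 16M x 4 (64 Mbit) chip -> 24 bits, one extra line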

    Now, the thing we all are familiar with because it's 'in the spec' for the
    motherboard is a statement like "supports 256 Meg RAM modules" or "supports
    768 Meg (total)". That presumes the chip organization that was 'standard'
    at the time since that would be all they knew about. So putting a 512 meg
    module in a 256 meg socket won't work because it can't address a module
    that large and putting in a 256 Meg 'high density' module will result in
    only half being seen, or not work, for a similar reason: it can't fully
    address the x4 chips being used. (there can be other differences, such as
    chip banking and refresh rate, but this is enough for the gist of it)


    > I'll echo what others have told you about going to Crucial.com, but also
    > add you might want to check out Kingston.com, they both have online
    > forms which allow you to choose precisely what type of modules are
    > certified for their particular motherboards.

    I imagine it'll be fine for his board but I've noted they're not always
    right. I have an original issue BH6 rev 1.0, BX440 chipset, and most
    'selection guides' claim it supports 256 Meg per socket and 768 meg total,
    but it doesn't. It only supports 128 meg per socket, 384 total. It's the
    BH6 rev 1.1 that supports 256 meg sticks, but they don't distinguish
    between them.


    > Yousuf Khan
  4. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    alancemor@yahoo.com (Lance Morgan) wrote:

    >I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    >BX440.

    Out of curiosity, how much do you expect to pay?

    Oh in advance [playing].
  5. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    On 6 Dec 2004 00:34:58 -0800, alancemor@yahoo.com (Lance Morgan)
    wrote:

    >I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    >BX440. It requires low-density: how can I discern the difference
    >between low and high-density (I have the mobo manual, and believe I
    >have a handle on the other spec requirements and compatability
    >w/existing 2x128s)? Thank you

    It's all relative to what's available at the time, I think. A double
    sided DIMM should be using "low" density chips for that period. The
    density refers to the Mbits per chip, so lower density requires more
    chips per DIMM to make up the same amount of RAM.

    --
    L.Angel: I'm looking for web design work.
    If you need basic to med complexity webpages at affordable rates, email me :)
    Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
    If you really want, FrontPage & DreamWeaver too.
    But keep in mind you pay extra bandwidth for their bloated code
  6. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    magic_philter@hotmail.com wrote:

    > This looks like the same issue I am dealing with
    > I am looking to put a 512MB DIMM in my ASUS CUV4X-E
    > It supports 1.5GB in 4 slots (max 512MB per slot)
    > I am located in Canada, and I don't want to bring the RAM across the
    > border, so I am trying to determine whether the board supports
    > high-density chips, as they are less expensive!
    > Crucial.com says these chips will work:
    > 512MB - CT64M64S4D7E SDRAM, PC133 · CL=2 · Unbuffered ·
    > Non-parity · 133MHz · 3.3V · 64Meg x 64
    > 512MB - CT64M64S4D75 SDRAM, PC133 · CL=3 · Unbuffered ·
    > Non-parity · 133MHz · 3.3V · 64Meg x 64
    > and some other sites I have tried tell me that the board supports 16x8
    > or 32x8 configurations with a CAS latency of 2.
    > My local parts store has 2 different modules, low or high density ( I
    > haven't seen the sticks to count the modules) - does any of the info
    > above tell me if this board will support HD modules?
    > Thanks!
    >

    The 16x8 and 32x8 chips are what's used on the 'low density' modules and
    that's what you need. A 512 Meg module would use 16 32x8s.

    Note, many listings will show the 'high density' organization (x4) being
    compatible with the chipset on your board, and they are, EXCEPT FOR ASUS. I
    mention that in case your local shops are not familiar with the Asus
    'exception'.

    So the specs for the RAM you want would look like:

    PC133 512MB,168 pins DIMM SDRAM NonECC 'low density'

    32X8 (The memory chip organization: 256Mbit 32x8, low density)

    16 Chips (Simply a 'must be' for it to add up. 512/32 = 16)

    3.3V
    CL3 (or CL2)
    NonECC

    Note, the x64 numbers you see, like 32x64 or 64x64, are merely the memory
    size and rather redundant as the bus is always 64 bits wide. It is NOT
    telling you the density. That is in the x4 (high) or x8 (low) chip type.
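
    As a quick sanity check of that shopping list, a small sketch using just
    the numbers above:

        chip_mbits = 32 * 8                     # 32M x 8 = 256 Mbit per chip
        module_mbytes = chip_mbits * 16 // 8    # 16 chips -> 512 MB
        ranks = 16 // (64 // 8)                 # eight x8 chips per 64-bit rank -> 2 ranks
        print(module_mbytes, ranks)             # 512 2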
  7. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    On Mon, 06 Dec 2004 09:55:31 -0600, David Maynard <dNOTmayn@ev1.net> wrote:

    >Yousuf Khan wrote:
    >> Lance Morgan wrote:
    >>
    >>> I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    >>> BX440. It requires low-density: how can I discern the difference
    >>> between low and high-density (I have the mobo manual, and believe I
    >>> have a handle on the other spec requirements and compatability
    >>> w/existing 2x128s)? Thank you
    >>
    >>
    >> In the SDRAM days, they came up with the term low-density vs.
    >> high-density to describe the difference between DIMMs with memory
    >> modules on only one side of the module vs. those that have them on both
    >> sides of the module. Now, you could have 256MB of RAM entirely on one
    >> side of the module, or 256MB split half between the front and back of
    >> the module.
    >>
    >> The single-sided was considered "low-density", despite the fact that it
    >> has packed 256MB into half the number of chips as the dual-sided module.
    >> Most of us would normally call a chip with a higher number of circuits
    >> to be higher-density, but in this case they aren't referring to the
    >> internal electronic density, but the just the density of the number of
    >> chips. The reason this is important at all is because those
    >> "high-density" modules, having more chips on them, drew a lot more
    >> power, many motherboards couldn't supply the required amount of power to
    >> those types of modules. Or if they could, they could only supply them to
    >> one module, but not more modules.
    >
    >I'm sorry but that is not right.
    >
    >Chips on one vs two sides was called single and double sided, which is also
    >confusing because, while it was physically true for the early chips (and
    >'common'), it really refers to having two 'groups' of chips with each
    >'group' being 64 bit wide. I.E. a double 'sided' module could still have
    >all chips on one physical side of the module. What made it 'double' was
    >having two 64 bit wide 'groups', as in 2 groups of, say, 8 meg x 64, which
    >would give 128 Mbytes total. The two groups are addressed as if they are
    >two (single sided) sticks even though in the one socket, which is why the
    >board must support double sided sticks. The actual physical layout is
    >irrelevant, other than the practicality of assembly, since the electronics
    >has no way of even knowing where they're located, much less care.
    >
    >The term 'density', as it is used in this context, relates to the chip
    >organization.

    It's been used interchangeably for both.

    >As noted above, the data bus is 64 bits wide. If the chips are organized
    >8x8 (64 Mbit) then 8 of them make up what I called a 'group' (I'm avoiding
    >"bank" because that's used internally for a completely different thing).
    >I.E. 8x8 chips, times 8 of them, is 8x64 for 64 Mbytes. Put two groups on
    >the stick and you get 128 Mbytes, as in the above example for "double sided."

    What you are calling "group" has been called "row" by Intel in their docs
    in the past. The term "rank" seems to be accepted if not preferred now.

    >Now, one way to think of 'high density' (and the one they mean) is to be
    >able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    >to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    >chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    >64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    >on the 'single side' (twice as much. e.g. 'high density'). Note again that
    >physical location is irrelevant and if you can figure out how to get 32 on
    >the stick you could have a 256 Mbyte 'double sided' module.

    Some DIMM mfrs used this nomenclature but they were bottom-feeders, notably
    SyncMAX. In fact they were using (surplus ?) chips normally used for high
    capacity 32-chip registered DIMMs in unbuffered 16-chip single-rank DIMMs.

    >The problem is it takes an additional address line to address a x4 chip vs
    >the same size x8 so if the motherboard expects 8x8 chips, and has only that
    >many address lines, then it will only see half of a 16x4 chip. It needs the
    >'low density' 8x8.

    IIRC the addressing was never a problem; VIA chipsets generally "supported"
    the x4 chip configurations and in practice, often only with one or two
    DIMMs inserted in the mbrd - a third DIMM would not work due to signal
    loading and SyncMAX had tables of "motherboard compatibility" demonstrating
    this. Intel's 440BX and other desktop chipset docs specifically ruled such
    configurations out as placing too much load on the signals - Intel never
    approved x4 memory chips for any of their desktop chipsets which used
    unbuffered DIMMs and had spec updates which covered this for some... IIRC
    the 430VX was one.

    >Now, the thing we all are familiar with because it's 'in the spec' for the
    >motherboard is a statement like "supports 256 Meg RAM modules" or "supports
    >768 Meg (total)". That presumes the chip organization that was 'standard'
    >at the time since that would be all they knew about. So putting a 512 meg
    >module in a 256 meg socket won't work because it can't address a module
    >that large and putting in a 256 Meg 'high density' module will result in
    >only half being seen, or not work, for a similar reason: it can't fully
    >address the x4 chips being used. (there can be other differences, such as
    >chip banking and refresh rate, but this is enough for the gist of it)
    >
    >
    >> I'll echo what others have told you about going to Crucial.com, but also
    >> add you might want to check out Kingston.com, they both have online
    >> forms which allow you to choose precisely what type of modules are
    >> certified for their particular motherboards.
    >
    >I imagine it'll be fine for his board but I've noted they're not always
    >right. I have an original issue BH6 rev 1.0, BX440 chipset, and most
    >'selection guides' claim it supports 256 Meg per socket and 768 meg total,
    >but it doesn't. It only supports 128 meg per socket, 384 total. It's the
    >BH6 rev 1.1 that supports 256 meg sticks, but they don't distinguish
    >between them.

    Hmmm, according to Intel docs, the 3-DIMM mbrds didn't have to have FET
    switches and some early 440BX mbrds didn't have them - could be the reason.

    Rgds, George Macdonald

    "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
  8. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    George Macdonald wrote:

    > On Mon, 06 Dec 2004 09:55:31 -0600, David Maynard <dNOTmayn@ev1.net> wrote:
    >
    >
    >>Yousuf Khan wrote:
    >>
    >>>Lance Morgan wrote:
    >>>
    >>>
    >>>>I'd like to add a used PC100 or PC133 256MB DIMM to my AOpen AX6BC
    >>>>BX440. It requires low-density: how can I discern the difference
    >>>>between low and high-density (I have the mobo manual, and believe I
    >>>>have a handle on the other spec requirements and compatability
    >>>>w/existing 2x128s)? Thank you
    >>>
    >>>
    >>>In the SDRAM days, they came up with the term low-density vs.
    >>>high-density to describe the difference between DIMMs with memory
    >>>modules on only one side of the module vs. those that have them on both
    >>>sides of the module. Now, you could have 256MB of RAM entirely on one
    >>>side of the module, or 256MB split half between the front and back of
    >>>the module.
    >>>
    >>>The single-sided was considered "low-density", despite the fact that it
    >>>has packed 256MB into half the number of chips as the dual-sided module.
    >>>Most of us would normally call a chip with a higher number of circuits
    >>>to be higher-density, but in this case they aren't referring to the
    >>>internal electronic density, but the just the density of the number of
    >>>chips. The reason this is important at all is because those
    >>>"high-density" modules, having more chips on them, drew a lot more
    >>>power, many motherboards couldn't supply the required amount of power to
    >>>those types of modules. Or if they could, they could only supply them to
    >>>one module, but not more modules.
    >>
    >>I'm sorry but that is not right.
    >>
    >>Chips on one vs two sides was called single and double sided, which is also
    >>confusing because, while it was physically true for the early chips (and
    >>'common'), it really refers to having two 'groups' of chips with each
    >>'group' being 64 bit wide. I.E. a double 'sided' module could still have
    >>all chips on one physical side of the module. What made it 'double' was
    >>having two 64 bit wide 'groups', as in 2 groups of, say, 8 meg x 64, which
    >>would give 128 Mbytes total. The two groups are addressed as if they are
    >>two (single sided) sticks even though in the one socket, which is why the
    >>board must support double sided sticks. The actual physical layout is
    >>irrelevant, other than the practicality of assembly, since the electronics
    >>has no way of even knowing where they're located, much less care.
    >>
    >>The term 'density', as it is used in this context, relates to the chip
    >>organization.
    >
    >
    > It's been used interchangably for both.

    That's why I clarified what context I was using.

    >>As noted above, the data bus is 64 bits wide. If the chips are organized
    >>8x8 (64 Mbit) then 8 of them make up what I called a 'group' (I'm avoiding
    >>"bank" because that's used internally for a completely different thing).
    >>I.E. 8x8 chips, times 8 of them, is 8x64 for 64 Mbytes. Put two groups on
    >>the stick and you get 128 Mbytes, as in the above example for "double sided."
    >
    >
    > What you are calling "group" has been called "row" by Intel in their docs
    > in the past. The term "rank" seems to be accepted if not preferred now.

    Yeah. And it's also commonly called bank, consistent with the previous SIMM
    'bank' terminology.

    I just thought 'group' would be intuitive enough for the average reader.


    >>Now, one way to think of 'high density' (and the one they mean) is to be
    >>able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    >>to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    >>chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    >>64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    >>on the 'single side' (twice as much. e.g. 'high density'). Note again that
    >>physical location is irrelevant and if you can figure out how to get 32 on
    >>the stick you could have a 256 Mbyte 'double sided' module.
    >
    >
    > Some DIMM mfrs used this nomenclature but they were bottom-feeders, notably
    > SyncMAX. In fact they were using (surplus ?) chips normally used for high
    > capacity 32-chip registered DIMMs in unbuffered 16-chip single-rank DIMMs.

    I've seen the term used quite a bit, not just SyncMAX (whoever they are).
    For example, do a search on pricewatch for "single sided" and you'll see
    Spectek, Corsair, Infineon, and Mushkin listed. (I picked single sided
    because that's the 'rare' case)

    My MS-5169, BH6, P2B-VM, BP6, and D6VAA user manuals all say they support
    single and double sided DIMMS.

    Same terminology with 72 pin SIMMS. As my ancient AN4 Green 486 motherboard
    manual states for the code key: "(S) Single Sided 72pin SIMM, (D) Double
    Sided 72 pin SIMM".


    >>The problem is it takes an additional address line to address a x4 chip vs
    >>the same size x8 so if the motherboard expects 8x8 chips, and has only that
    >>many address lines, then it will only see half of a 16x4 chip. It needs the
    >>'low density' 8x8.
    >
    >
    > IIRC the addressing was never a problem;

    That's why the most common symptom with a 'high density' module in a
    motherboard that doesn't support it is seeing only half the memory.

    > VIA chipsets generally "supported"
    > the x4 chip configurations

    Yes, the VIA chipsets do (133a and 694). Asus motherboards using those
    chipsets are an exception.

    > and in practice, often only with one or two
    > DIMMs inserted in the mbrd - a third DIMM would not work due to signal
    > loading and SyncMAX had tables of "motherboard compatibility" demonstrating
    > this.

    I really don't know anything about 'SyncMAX' or what funky things they were
    doing.

    > Intel's 440BX and other desktop chipset docs specifically ruled such
    > configurations out as placing too much load on the signals - Intel never
    > approved x4 memory chips for any of their desktop chipsets which used
    > unbuffered DIMMs and had spec updates which covered this for some... IIRC
    > the 430VX was one.

    Using x4 chips doesn't put any more chips on the individual signal lines
    than using x8 does. Are you saying an x4 chip is just inherently a 'heavy
    load' for some reason?


    >>Now, the thing we all are familiar with because it's 'in the spec' for the
    >>motherboard is a statement like "supports 256 Meg RAM modules" or "supports
    >>768 Meg (total)". That presumes the chip organization that was 'standard'
    >>at the time since that would be all they knew about. So putting a 512 meg
    >>module in a 256 meg socket won't work because it can't address a module
    >>that large and putting in a 256 Meg 'high density' module will result in
    >>only half being seen, or not work, for a similar reason: it can't fully
    >>address the x4 chips being used. (there can be other differences, such as
    >>chip banking and refresh rate, but this is enough for the gist of it)
    >>
    >>
    >>
    >>>I'll echo what others have told you about going to Crucial.com, but also
    >>>add you might want to check out Kingston.com, they both have online
    >>>forms which allow you to choose precisely what type of modules are
    >>>certified for their particular motherboards.
    >>
    >>I imagine it'll be fine for his board but I've noted they're not always
    >>right. I have an original issue BH6 rev 1.0, BX440 chipset, and most
    >>'selection guides' claim it supports 256 Meg per socket and 768 meg total,
    >>but it doesn't. It only supports 128 meg per socket, 384 total. It's the
    >>BH6 rev 1.1 that supports 256 meg sticks, but they don't distinguish
    >>between them.
    >
    >
    > Hmmm, according to Intel docs, the 3-DIMM mbrds didn't have to have FET
    > switches and some early 440BX mbrds didn't have them - could be the reason.

    It isn't a loading issue as far as I can tell. If I plug a 256 meg stick in
    I get 128 meg, which works just fine but isn't particularly helpful.

    The mobo specs are unambiguous. BH6 v1.0 max memory size is 384 Meg. BP6,
    with the same BX chipset, is max mem size 768 Meg. BH6 supports max 128 Meg
    DIMMS and 3 x 128 is 384. The BP6 supports 256 Meg DIMMS and 3 x 256 is
    768. If I take the exact same three 256 Meg sticks out of the BP6 and plug
    them into the BH6 I end up with a perfectly fine and functional 384 Meg.

    One can address twice as much as the other, in the socket and in total.

    Btw, since you brought up the BX data sheets, it states "Supports up to 4
    double-sided DIMMs."

    > Rgds, George Macdonald
    >
    > "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
  9. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    In comp.sys.ibm.pc.hardware.chips David Maynard <dNOTmayn@ev1.net> wrote:
    > George Macdonald wrote:

    > > What you are calling "group" has been called "row" by Intel in their docs
    > > in the past. The term "rank" seems to be accepted if not preferred now.

    > Yeah. And it's also commonly called bank, consistent with the previous SIMM
    > 'bank' terminology.

    > I just thought 'group' would be intuitive enough for the average reader.

    Please. Call it a rank.

    Not "group", not "bank".

    The last time I looked to buy a Dell box, the DRAM config asked me if I
    wanted 1 GB of memory with 2 ranks or 4 ranks. The costs are different.
    Using a lower number of ranks preserves some upward expandability.

    > > Intel's 440BX and other desktop chipset docs specifically ruled such
    > > configurations out as placing too much load on the signals - Intel never
    > > approved x4 memory chips for any of their desktop chipsets which used
    > > unbuffered DIMMs and had spec updates which covered this for some... IIRC
    > > the 430VX was one.

    > Using x4 chips doesn't put any more chips on the individual signal lines
    > than using x8 does. Are you saying an x4 chip is just inherently a 'heavy
    > load' for some reason?

    You need twice the number of DRAM chips on the module if you use x4
    devices as opposed to x8 devices. Since the address is sent to every
    single DRAM device on the memory module, more DRAM devices mean
    heavier loads on the address lines, and that load is 8 or 16 devices
    per line if not more.

    This is one reason why DDRx SDRAM devices still need a "full cycle" on
    the command and address busses while the data bus can be cranked up to
    2x the data rate. The loading there is limited to ~4 loads. It will drop
    to 2 loads as data rates continue to climb.
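
    A back-of-the-envelope count of the address-line loads per unbuffered
    DIMM, as a sketch (it counts chips only; connector and trace loading are
    ignored):

        def address_loads(chip_width_bits, ranks):
            chips_per_rank = 64 // chip_width_bits   # chips needed to fill the 64-bit bus
            return chips_per_rank * ranks            # every chip on the DIMM sees the address lines

        print(address_loads(8, 1))   # x8, one rank   ->  8 loads
        print(address_loads(8, 2))   # x8, two ranks  -> 16 loads
        print(address_loads(4, 1))   # x4, one rank   -> 16 loads (the 16-chip "high density" DIMM)
        print(address_loads(4, 2))   # x4, two ranks  -> 32 loads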


    --
    davewang202(at)yahoo(dot)com
  10. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    David Wang wrote:

    > In comp.sys.ibm.pc.hardware.chips David Maynard <dNOTmayn@ev1.net> wrote:
    >
    >>George Macdonald wrote:
    >
    >
    >>>What you are calling "group" has been called "row" by Intel in their docs
    >>>in the past. The term "rank" seems to be accepted if not preferred now.
    >
    >
    >>Yeah. And it's also commonly called bank, consistent with the previous SIMM
    >>'bank' terminology.
    >
    >
    >>I just thought 'group' would be intuitive enough for the average reader.
    >
    >
    > Please. Call it a rank.
    >
    > Not "group", not "bank".
    >
    > The last time I looked to buy a Dell box, the DRAM config asked me if I
    > wanted 1 GB of memory with 2 ranks or 4 ranks. The costs are different.
    > Using lower number of ranks preserves some upward expandability.
    >
    >
    >>> Intel's 440BX and other desktop chipset docs specifically ruled such
    >>>configurations out as placing too much load on the signals - Intel never
    >>>approved x4 memory chips for any of their desktop chipsets which used
    >>>unbuffered DIMMs and had spec updates which covered this for some... IIRC
    >>>the 430VX was one.
    >
    >
    >>Using x4 chips doesn't put any more chips on the individual signal lines
    >>than using x8 does. Are you saying an x4 chip is just inherently a 'heavy
    >>load' for some reason?
    >
    >
    > You need twice the number of DRAM chips on the module if you use x4
    > devices as opposed to x8 devices. Since the address is sent to every
    > single DRAM device on the memory module, more DRAM devices means
    > heavier loads on the address lines, and that load is 8 or 16 devices
    > per line if not more.

    Ah, right. Of course. I was myopically thinking of the data lines.

    >
    > This is one reason why DDRx SDRAM devices still needs a "full cycle" on
    > the command and address busses while the data bus can be cranked up to
    > 2x the data rate. The loading there is limited to ~4 loads. It will drop
    > to 2 loads as data rates continue to climb.
    >
    >
    >
  11. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    On Mon, 06 Dec 2004 20:19:12 -0600, David Maynard <dNOTmayn@ev1.net> wrote:

    >George Macdonald wrote:
    >
    >>>Now, one way to think of 'high density' (and the one they mean) is to be
    >>>able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    >>>to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    >>>chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    >>>64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    >>>on the 'single side' (twice as much. e.g. 'high density'). Note again that
    >>>physical location is irrelevant and if you can figure out how to get 32 on
    >>>the stick you could have a 256 Mbyte 'double sided' module.
    >>
    >>
    >> Some DIMM mfrs used this nomenclature but they were bottom-feeders, notably
    >> SyncMAX. In fact they were using (surplus ?) chips normally used for high
    >> capacity 32-chip registered DIMMs in unbuffered 16-chip single-rank DIMMs.
    >
    >I've seen the term used quite a bit, not just SyncMAX (whoever they are).
    >For example, do a search on pricewatch for "single sided" and you'll see
    >Spectek, Corsair, Infineon, and Muskin listed. (I picked single sided
    >because that's the 'rare' case)

    Not sure what you mean here - single sided means (to me and I thought
    everybody else) 8/9 devices per module on one side... just like the Crucial
    512MB DIMMs I bought 3 weeks ago. Those 16-device thingies are non-standard
    configs which do not conform to JEDEC rules.

    >My MS-5169, BH6, P2B-VM, BP6, and D6VAA user manuals all say they support
    >single and double sided DIMMS.

    I would not advise trying to use x4 device DIMMs on any Intel chipset mbrd
    - just one *might* work but Intel has "outlawed" them - read the docs.

    <<snip>>

    >> IIRC the addressing was never a problem;
    >
    >That's why the most common symptom with a 'high density' module in a
    >motherboard that doesn't support it is seeing only half the memory.

    No experience there but there were sufficient address lines on all the
    Intel chipsets I recall reading the docs for - Intel comments on this just
    before ruling them out on bus loading grounds.

    >> and in practice, often only with one or two
    >> DIMMs inserted in the mbrd - a third DIMM would not work due to signal
    >> loading and SyncMAX had tables of "motherboard compatibility" demonstrating
    >> this.
    >
    >I really don't know anything about 'SyncMAX' or what funky things they were
    >doing.

    Their Web site is not hard to find. :-)

    >> Intel's 440BX and other desktop chipset docs specifically ruled such
    >> configurations out as placing too much load on the signals - Intel never
    >> approved x4 memory chips for any of their desktop chipsets which used
    >> unbuffered DIMMs and had spec updates which covered this for some... IIRC
    >> the 430VX was one.
    >
    >Using x4 chips doesn't put any more chips on the individual signal lines
    >than using x8 does. Are you saying an x4 chip is just inherently a 'heavy
    >load' for some reason?

    Yep - as soon as the overloaded Chip Select - overloaded because it's
    driving double the chips it should - is "latched", and assuming it can
    be, the address lines are also driving 16 chips.

    >> Hmmm, according to Intel docs, the 3-DIMM mbrds didn't have to have FET
    >> switches and some early 440BX mbrds didn't have them - could be the reason.
    >
    >It isn't a loading issue as far as I can tell. If I plug a 256 meg stick in
    >I get 128 meg, which works just fine but isn't particularly helpful.

    Yes.

    >The mobo specs are unambiguous. BH6 v1.0 max memory size is 384 Meg. BP6,
    >with the same BX chipset, is max mem size 768 Meg. BH6 supports max 128 Meg
    >DIMMS and 3 x 128 is 384. The BP6 supports 256 Meg DIMMS and 3 x 256 is
    >768. If I take the exact same three 256 Meg sticks out of the BP6 and plug
    >them into the BH6 I end up with a perfectly fine and functional 384 Meg.
    >
    >One can address twice as much as the other, in the socket and in total.
    >
    >Btw, since you brought up the BX data sheets, it states "Supports up to 4
    >double-sided
    >DIMMs."

    Yes but *WITH* FET switches to buffer the signals. For the 3-DIMM setup
    they're not required, according to the original data sheet, but I suspect
    that they were used on later 3-DIMM boards.

    Rgds, George Macdonald

    "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
  12. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    George Macdonald wrote:

    > On Mon, 06 Dec 2004 20:19:12 -0600, David Maynard <dNOTmayn@ev1.net> wrote:
    >
    >
    >>George Macdonald wrote:
    >>
    >>
    >>>>Now, one way to think of 'high density' (and the one they mean) is to be
    >>>>able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    >>>>to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    >>>>chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    >>>>64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    >>>>on the 'single side' (twice as much. e.g. 'high density'). Note again that
    >>>>physical location is irrelevant and if you can figure out how to get 32 on
    >>>>the stick you could have a 256 Mbyte 'double sided' module.
    >>>
    >>>
    >>>Some DIMM mfrs used this nomenclature but they were bottom-feeders, notably
    >>>SyncMAX. In fact they were using (surplus ?) chips normally used for high
    >>>capacity 32-chip registered DIMMs in unbuffered 16-chip single-rank DIMMs.
    >>
    >>I've seen the term used quite a bit, not just SyncMAX (whoever they are).
    >>For example, do a search on pricewatch for "single sided" and you'll see
    >>Spectek, Corsair, Infineon, and Muskin listed. (I picked single sided
    >>because that's the 'rare' case)
    >
    >
    > Not sure what you mean here

    What I mean here, and am demonstrating with the examples, is that the
    terminology 'single and double sided' is not some oddball thing from a few
    'bottom feeders' but, rather, a widely used industry terminology.

    > - single sided means (to me and I thought
    > everbody else) 8/9 devices per module on oe side... just like the Crucial
    > 512MB DIMMs I bought 3weeks ago. Those 16-device thingies are non-standard
    > configs which do not conform to Jedec rules.

    That *is* what most people think because it is usually coincidental, and is
    what inspired the usage, but the motherboard can't tell, much less care,
    where the chips are mounted and there is no need for a motherboard to
    'support' where someone places the chips.

    What single/double sided really means is, in the modern parlance, whether
    the module has 1 or 2 ranks or, in another common parlance, 1 or 2 rows or,
    in another common parlance, 1 or 2 banks.


    >>My MS-5169, BH6, P2B-VM, BP6, and D6VAA user manuals all say they support
    >>single and double sided DIMMS.
    >
    >
    > I would not advise trying to use x4 device DIMMs on any Intel chipset mbrd
    > - just one *might* work but Intel has "outlawed" them - read the docs.

    I didn't say anything about x4 devices. Those are simply more examples,
    this time from the motherboard manufacturer side of the equation, of the
    common single/double sided terminology.

    >
    > <<snip>>
    >
    >>>IIRC the addressing was never a problem;
    >>
    >>That's why the most common symptom with a 'high density' module in a
    >>motherboard that doesn't support it is seeing only half the memory.
    >
    >
    > No experience there but there were sufficieent address lines on all the
    > Intel chipsets I recall reading the docs for - Intel comments on this just
    > before ruling them out on bus loading grounds.

    Well, it isn't just a matter of 'address lines'; the chipset has to support
    the chip organization in its multiplexing.


    >>>and in practice, often only with one or two
    >>>DIMMs inserted in the mbrd - a third DIMM would not work due to signal
    >>>loading and SyncMAX had tables of "motherboard compatibility" demonstrating
    >>>this.
    >>
    >>I really don't know anything about 'SyncMAX' or what funky things they were
    >>doing.
    >
    >
    > Their Web site is not hard to find.:-)

    No doubt. But since I don't have any of their modules it didn't seem important.


    >>> Intel's 440BX and other desktop chipset docs specifically ruled such
    >>>configurations out as placing too much load on the signals - Intel never
    >>>approved x4 memory chips for any of their desktop chipsets which used
    >>>unbuffered DIMMs and had spec updates which covered this for some... IIRC
    >>>the 430VX was one.
    >>
    >>Using x4 chips doesn't put any more chips on the individual signal lines
    >>than using x8 does. Are you saying an x4 chip is just inherently a 'heavy
    >>load' for some reason?
    >
    >
    > Yep - as soon as the overloaded, because it's driving double the chips it
    > should, Chip Select is "latched", and assuming it can be, the address lines
    > are also driving 16 chips.

    Yes, I was thinking of the data lines and not the address. My boo-boo.


    >>>Hmmm, according to Intel docs, the 3-DIMM mbrds didn't have to have FET
    >>>switches and some early 440BX mbrds didn't have them - could be the reason.
    >>
    >>It isn't a loading issue as far as I can tell. If I plug a 256 meg stick in
    >>I get 128 meg, which works just fine but isn't particularly helpful.
    >
    >
    > Yes.

    Yes, what?

    The 128 meg vs 256 meg stick issue is with equal numbers of 8x8 vs 16x8
    chips. The loading is the same but 8x8 works and 16x8 don't.
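
    Spelled out as a small sketch (same 16-chip count in both cases, per the
    point above):

        for mwords in (8, 16):                  # 8M x 8 vs 16M x 8 chips, 16 of each
            chip_mbits = mwords * 8
            print(mwords, "Mx8:", chip_mbits * 16 // 8, "MB")   # 128 MB vs 256 MB
        # Same chip count and loading; the 16Mx8 chip just needs one more address bit,
        # which is the addressing the BH6 rev 1.0 can't do per the posts above.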


    >>The mobo specs are unambiguous. BH6 v1.0 max memory size is 384 Meg. BP6,
    >>with the same BX chipset, is max mem size 768 Meg. BH6 supports max 128 Meg
    >>DIMMS and 3 x 128 is 384. The BP6 supports 256 Meg DIMMS and 3 x 256 is
    >>768. If I take the exact same three 256 Meg sticks out of the BP6 and plug
    >>them into the BH6 I end up with a perfectly fine and functional 384 Meg.
    >>
    >>One can address twice as much as the other, in the socket and in total.
    >>
    >>Btw, since you brought up the BX data sheets, it states "Supports up to 4
    >>double-sided
    >>DIMMs."
    >
    >
    > Yes but *WITH* FET switches to buffer the signals. For the 3-DIMM setup
    > they're not required, according to the original data sheet,

    Right. But '4 sockets' has nothing to do with the point I was making. It's
    simply another example of the 'single/double sided' DIMM as common industry
    usage, this time from the Intel BX data sheets. They're saying it supports
    up to 8 ranks as 4 - 2 rank (I.E. double sided) modules.


    > but I suspect
    > that they were used on later 3-DIMM boards.

    Perhaps, but that doesn't alter whether it supports 16x8 chips or not.

    >
    > Rgds, George Macdonald
    >
    > "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
  13. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    On Tue, 07 Dec 2004 21:24:25 -0600, David Maynard <dNOTmayn@ev1.net> wrote:

    >George Macdonald wrote:
    >
    >> On Mon, 06 Dec 2004 20:19:12 -0600, David Maynard <dNOTmayn@ev1.net> wrote:
    >>
    >>
    >>>George Macdonald wrote:
    >>>
    >>>
    >>>>>Now, one way to think of 'high density' (and the one they mean) is to be
    >>>>>able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    >>>>>to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    >>>>>chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    >>>>>64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    >>>>>on the 'single side' (twice as much. e.g. 'high density'). Note again that
    >>>>>physical location is irrelevant and if you can figure out how to get 32 on
    >>>>>the stick you could have a 256 Mbyte 'double sided' module.
    >>>>
    >>>>
    >>>>Some DIMM mfrs used this nomenclature but they were bottom-feeders, notably
    >>>>SyncMAX. In fact they were using (surplus ?) chips normally used for high
    >>>>capacity 32-chip registered DIMMs in unbuffered 16-chip single-rank DIMMs.
    >>>
    >>>I've seen the term used quite a bit, not just SyncMAX (whoever they are).
    >>>For example, do a search on pricewatch for "single sided" and you'll see
    >>>Spectek, Corsair, Infineon, and Muskin listed. (I picked single sided
    >>>because that's the 'rare' case)
    >>
    >>
    >> Not sure what you mean here
    >
    >What I mean here, and am demonstrating with the examples, is that the
    >terminology 'single and double sided' is not some oddball thing from a few
    >'bottom feeders' but, rather, a widely used industry terminology.

    <sigh> Single sided means 8/9 chips on one side of a module, with the other
    side unpopulated - nothing oddball. What *is* oddball is the 16-chip
    unbuffered DIMM made with x4 chips, populated on both sides as a single
    rank - what you, and some unscrupulous DIMM "mfrs" call a "high density"
    module.

    >> - single sided means (to me and I thought
    >> everbody else) 8/9 devices per module on oe side... just like the Crucial
    >> 512MB DIMMs I bought 3weeks ago. Those 16-device thingies are non-standard
    >> configs which do not conform to Jedec rules.
    >
    >That *is* what most people think because it is usually coincidental, and is
    >what inspired the usage, but the motherboard can't tell, much less care,
    >where the chips are mounted and there is no need for a motherboard to
    >'support' where someone places the chips.
    >
    >What single/double sided really means is, in the modern parlance, whether
    >the module has 1 or 2 ranks or, in another common parlance, 1 or 2 rows or,
    >in another common parlance, 1 or 2 banks.

    What I've been saying all along. Let's be quite clear here: the 16-chip,
    single rank, unbuffered DIMM does *not* conform to industry standards -
    it's a bastard child.

    >>>My MS-5169, BH6, P2B-VM, BP6, and D6VAA user manuals all say they support
    >>>single and double sided DIMMS.
    >>
    >>
    >> I would not advise trying to use x4 device DIMMs on any Intel chipset mbrd
    >> - just one *might* work but Intel has "outlawed" them - read the docs.
    >
    >I didn't say anything about x4 devices. Those are simply more examples,
    >this time from the motherboard manufacturer side of the equation, of the
    >common single/double sided terminology.

    Yes you *DID* mention x4 devices and it's still up above in the 1st quoted
    para where you talk of chips which are "16x4" and "only 4 bits wide"...
    exact quotes!!

    >> <<snip>>
    >>
    >>>>IIRC the addressing was never a problem;
    >>>
    >>>That's why the most common symptom with a 'high density' module in a
    >>>motherboard that doesn't support it is seeing only half the memory.
    >>
    >>
    >> No experience there but there were sufficieent address lines on all the
    >> Intel chipsets I recall reading the docs for - Intel comments on this just
    >> before ruling them out on bus loading grounds.
    >
    >Well, it isn't just a matter of 'address lines', the chipset has to support
    >the chip organization in it's multiplexing.

    Read the Intel chipset docs and maybe it'll be clearer to you.

    >>>>Hmmm, according to Intel docs, the 3-DIMM mbrds didn't have to have FET
    >>>>switches and some early 440BX mbrds didn't have them - could be the reason.
    >>>
    >>>It isn't a loading issue as far as I can tell. If I plug a 256 meg stick in
    >>>I get 128 meg, which works just fine but isn't particularly helpful.
    >>
    >>
    >> Yes.
    >
    >Yes, what?

    Yes it's a loading issue - read the docs.

    >The 128 meg vs 256 meg stick issue is with equal numbers of 8x8 vs 16x8
    >chips. The loading is the same but 8x8 works and 16x8 don't.
    >
    >
    >>>The mobo specs are unambiguous. BH6 v1.0 max memory size is 384 Meg. BP6,
    >>>with the same BX chipset, is max mem size 768 Meg. BH6 supports max 128 Meg
    >>>DIMMS and 3 x 128 is 384. The BP6 supports 256 Meg DIMMS and 3 x 256 is
    >>>768. If I take the exact same three 256 Meg sticks out of the BP6 and plug
    >>>them into the BH6 I end up with a perfectly fine and functional 384 Meg.
    >>>
    >>>One can address twice as much as the other, in the socket and in total.
    >>>
    >>>Btw, since you brought up the BX data sheets, it states "Supports up to 4
    >>>double-sided
    >>>DIMMs."
    >>
    >>
    >> Yes but *WITH* FET switches to buffer the signals. For the 3-DIMM setup
    >> they're not required, according to the original data sheet,
    >
    >Right. But '4 sockets' has nothing to do with the point I was making. It's
    >simply another example of the 'single/double sided' DIMM as common industry
    >usage, this time from the Intel BX data sheets. They're saying it supports
    >up to 8 ranks as 4 - 2 rank (I.E. double sided) modules.

    No, just read the Intel 440BX data sheet. The 4-DIMM/8-rank config *must*
    have FET switches because of loading considerations.

    >> but I suspect
    >> that they were used on later 3-DIMM boards.
    >
    >Perhaps, but that doesn't alter whether it supports 16x8 chips or not.

    It matters as to how many 64-bit wide, 8-chip memory ranks you can put on a
    mbrd reliably.

    Rgds, George Macdonald

    "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
  14. Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.chips (More info?)

    George Macdonald wrote:

    > On Tue, 07 Dec 2004 21:24:25 -0600, David Maynard <dNOTmayn@ev1.net> wrote:
    >
    >
    >>George Macdonald wrote:
    >>
    >>
    >>>On Mon, 06 Dec 2004 20:19:12 -0600, David Maynard <dNOTmayn@ev1.net> wrote:
    >>>
    >>>
    >>>
    >>>>George Macdonald wrote:
    >>>>
    >>>>
    >>>>
    >>>>>>Now, one way to think of 'high density' (and the one they mean) is to be
    >>>>>>able to get more Mbytes per 64 bit wide 'group'. And one way to do that is
    >>>>>>to organize the chip as 16x4 instead of 8x8. Same number of bits in the
    >>>>>>chip (64 M, so *IT* is the same 'density') but it now takes 16 to fill the
    >>>>>>64 bit wide 'group', because they are each only 4 bits wide, for 128 Mbyte
    >>>>>>on the 'single side' (twice as much. e.g. 'high density'). Note again that
    >>>>>>physical location is irrelevant and if you can figure out how to get 32 on
    >>>>>>the stick you could have a 256 Mbyte 'double sided' module.
    >>>>>
    >>>>>
    >>>>>Some DIMM mfrs used this nomenclature but they were bottom-feeders, notably
    >>>>>SyncMAX. In fact they were using (surplus ?) chips normally used for high
    >>>>>capacity 32-chip registered DIMMs in unbuffered 16-chip single-rank DIMMs.
    >>>>
    >>>>I've seen the term used quite a bit, not just SyncMAX (whoever they are).
    >>>>For example, do a search on pricewatch for "single sided" and you'll see
    >>>>Spectek, Corsair, Infineon, and Muskin listed. (I picked single sided
    >>>>because that's the 'rare' case)
    >>>
    >>>
    >>>Not sure what you mean here
    >>
    >>What I mean here, and am demonstrating with the examples, is that the
    >>terminology 'single and double sided' is not some oddball thing from a few
    >>'bottom feeders' but, rather, a widely used industry terminology.
    >
    >
    > <sigh> Single sided means 8/9 chips on one side of a module, with the other
    > side unpopulated - nothing oddball.

    I've already explained this. The physical location is a common coincidence
    but is not what matters to the electronics. 'Single sided' means 1 rank on
    the module and 'Double sided' means 2 ranks on the module.

    > What *is* oddball is the 16-chip
    > unbuffered DIMM made with x4 chips, populated on both sides as a single
    > rank - what you, and some unscrupulous DIMM "mfrs" call a "high density"
    > module.

    Has nothing to do with 'me'. I don't make them and I don't dream up the
    terms. I'm just explaining what's out there.


    >>>- single sided means (to me and I thought
    >>>everbody else) 8/9 devices per module on oe side... just like the Crucial
    >>>512MB DIMMs I bought 3weeks ago. Those 16-device thingies are non-standard
    >>>configs which do not conform to Jedec rules.
    >>
    >>That *is* what most people think because it is usually coincidental, and is
    >>what inspired the usage, but the motherboard can't tell, much less care,
    >>where the chips are mounted and there is no need for a motherboard to
    >>'support' where someone places the chips.
    >>
    >>What single/double sided really means is, in the modern parlance, whether
    >>the module has 1 or 2 ranks or, in another common parlance, 1 or 2 rows or,
    >>in another common parlance, 1 or 2 banks.
    >
    >
    > What I've been saying all along.

    No, what you just said was it meant the physical location of the chips.
    That's the common coincidence, and the origins of the unfortunate
    terminology, but it isn't what matters to the electronics.

    > Lets' be quite clear here: the 16-chip,
    > single rank, unbuffered DIMM does *not* conform to industry standards -
    > it's a bastard child.

    That is a different matter than single vs double sided. You keep mixing the
    two topics together, but they're separate.


    >>>>My MS-5169, BH6, P2B-VM, BP6, and D6VAA user manuals all say they support
    >>>>single and double sided DIMMS.
    >>>
    >>>
    >>>I would not advise trying to use x4 device DIMMs on any Intel chipset mbrd
    >>>- just one *might* work but Intel has "outlawed" them - read the docs.
    >>
    >>I didn't say anything about x4 devices. Those are simply more examples,
    >>this time from the motherboard manufacturer side of the equation, of the
    >>common single/double sided terminology.
    >
    >
    > Yes you *DID* mention x4 devices
    > and it's still up above in the 1st quoted
    > para where you talk of chips which are "16x4" and "only 4 bits wide"...
    > exact quotes!!

    Not in the context of what double/single sided means and it being common
    industry usage.

    Whether a board supports the x4 chips is a completely different topic than
    talking about single vs double sided and THAT was the topic when I listed
    the memory manufacturers and motherboards mentioning single/double sided
    support.


    >>><<snip>>
    >>>
    >>>>>IIRC the addressing was never a problem;
    >>>>
    >>>>That's why the most common symptom with a 'high density' module in a
    >>>>motherboard that doesn't support it is seeing only half the memory.
    >>>
    >>>
    >>>No experience there but there were sufficieent address lines on all the
    >>>Intel chipsets I recall reading the docs for - Intel comments on this just
    >>>before ruling them out on bus loading grounds.
    >>
    >>Well, it isn't just a matter of 'address lines', the chipset has to support
    >>the chip organization in it's multiplexing.
    >
    >
    > Read the Intel chipset docs and maybe it'll be clearer to you.

    Maybe you should read it and then the chip organization will be clearer to you.


    >>>>>Hmmm, according to Intel docs, the 3-DIMM mbrds didn't have to have FET
    >>>>>switches and some early 440BX mbrds didn't have them - could be the reason.
    >>>>
    >>>>It isn't a loading issue as far as I can tell. If I plug a 256 meg stick in
    >>>>I get 128 meg, which works just fine but isn't particularly helpful.
    >>>
    >>>
    >>>Yes.
    >>
    >>Yes, what?
    >
    >
    > Yes it's a loading issue - read the docs.

    If you ever read what the hell was written you'd know it's not. x4 has
    nothing to do with the BH6 128 meg vs 256 meg issue.


    >>The 128 meg vs 256 meg stick issue is with equal numbers of 8x8 vs 16x8
    >>chips. The loading is the same but 8x8 works and 16x8 don't.
    >>
    >>
    >>
    >>>>The mobo specs are unambiguous. BH6 v1.0 max memory size is 384 Meg. BP6,
    >>>>with the same BX chipset, is max mem size 768 Meg. BH6 supports max 128 Meg
    >>>>DIMMS and 3 x 128 is 384. The BP6 supports 256 Meg DIMMS and 3 x 256 is
    >>>>768. If I take the exact same three 256 Meg sticks out of the BP6 and plug
    >>>>them into the BH6 I end up with a perfectly fine and functional 384 Meg.
    >>>>
    >>>>One can address twice as much as the other, in the socket and in total.
    >>>>
    >>>>Btw, since you brought up the BX data sheets, it states "Supports up to 4
    >>>>double-sided
    >>>>DIMMs."
    >>>
    >>>
    >>>Yes but *WITH* FET switches to buffer the signals. For the 3-DIMM setup
    >>>they're not required, according to the original data sheet,
    >>
    >>Right. But '4 sockets' has nothing to do with the point I was making. It's
    >>simply another example of the 'single/double sided' DIMM as common industry
    >>usage, this time from the Intel BX data sheets. They're saying it supports
    >>up to 8 ranks as 4 - 2 rank (I.E. double sided) modules.
    >
    >
    > No, just read the Intel 440BX data sheet. The 4-DIMM/8-rank config *must*
    > have FET switches because of loading considerations.

    Good grief. Read the above paragraph again. "4 sockets has nothing to do
    with the point I was making" and here you are screaming about 4-DIMMs again.


    >>>but I suspect
    >>>that they were used on later 3-DIMM boards.
    >>
    >>Perhaps, but that doesn't alter whether it supports 16x8 chips or not.
    >
    > It matters as to how many 64-bit wide, 8-chip memory ranks you can put on a
    > mbrd reliably.

    Which might be relevant if ANYWHERE I had mentioned trying to shove 8 ranks
    into 3 sockets.

    The 128 meg vs 256 meg stick BH6 issue is with equal numbers of 8x8 vs 16x8
    chips. The loading is the same but 8x8 works and 16x8 don't. Just ONE stick
    of 2 ranks.

    FET switches make NO difference for ONE stupid socket and 16x8 is NOT a 4x
    loading issue.


    > Rgds, George Macdonald
    >
    > "Just because they're paranoid doesn't mean you're not psychotic" - Who, me??