100Mb network and 100Mb backbone: good, bad, or indifferent?

Archived from groups: comp.dcom.lans.ethernet

I have always heard that a 10mb network should be on a hundred mb backbone.
is there any drawbacks to having everything on 100mb.

we are talking about 5 servers, 4 switches and about 60 workstations.

any comments ?
  1. Archived from groups: comp.dcom.lans.ethernet

    On Sun, 30 Jan 2005, Sonco wrote:

    > I have always heard that a 10mb network should be on a hundred mb
    > backbone.

    It's M (Mega) not m (milli), by the way.

    > is there any drawbacks to having everything on 100mb.

    Could be. It depends on the actual traffic flows.

    Most of our departmental network was like that until quite recently,
    and I can't say that we've really noticed a major improvement to the
    existing nodes by upgrading the uplinks to Gbit. (For the new servers
    which have Gbit interfaces it's a different story, of course, but that
    doesn't seem to be relevant to your question.)

    > we are talking about 5 servers, 4 switches and about 60 workstations.

    But, more to the point, what sort (levels and pattern) of traffic are
    they carrying?

    > any comments ?

    Boxes with Gbit-capable uplinks are very affordable nowadays.
  2. Archived from groups: comp.dcom.lans.ethernet

    Alan J. Flavell wrote:

    >> I have always heard that a 10mb network should be on a hundred mb
    >> backbone.
    >
    > It's M (Mega) not m (milli), by the way.
    >

    Could be a really *SLOW* network. ;-)
  3. Archived from groups: comp.dcom.lans.ethernet

    In article <364hcuF4th9aqU1@individual.net>, Sonco <sonbo@comcast.net> wrote:
    :I have always heard that a 10mb network should be on a hundred mb backbone.

    Not really. The key factor is that your backbone needs to be fast
    enough to handle the traffic being put over it without undue buffering
    and delays. That's really only something that you can determine through
    traffic measurements.

    :we are talking about 5 servers, 4 switches and about 60 workstations.

    That doesn't really give us much feel for the extent to which the
    server-to-server or server-to-workstation traffic is taxing the
    existing equipment. For example, if those are streaming media servers
    and you are doing video production, you should probably be going right
    to gigabit, but if you are just doing a little light email then
    you might not stress even a 10 Mb/s backbone.

    In our network, which is roughly 4 times as large as yours,
    our measurements show that most of the users would barely notice
    if we were to drop them and their switches down to 10 Mb/s.
    There is, though, one user on one port of one of the switches who
    is producing more data than the rest of the users combined: that one
    user should have a gigabit port and gigabit backbone to the server.

    Other than that, our traffic is fairly localized, with the greatest
    portion of the user traffic occurring within one [server] room. User
    traffic tends to be rather "bursty" and it can be difficult to
    find the right balance of cost and necessity. If one user
    saturates their local link 1% of the time, do they need an upgrade?
    10%? 30%? 80%? You can't put hard numbers on it, because it depends
    on how the user is using the network: if that user can start a
    transfer and then work productively while the transfer is occurring,
    as long as the transfer finishes within a few hours, then a slow
    connection might be good enough -- but if you have a researcher
    who is interacting strongly with a computer model, then the
    difference between 5 minutes and 30 seconds can be very important.


    Our experience is that the activity which most greatly taxes our
    network is the [automated] backups. If you have enough data stored
    then even if only a fraction of it changes each day, doing
    a full backup can easily keep your network links saturated
    for 6 or more hours. After a point of growth, those backups
    aren't going to finish overnight or even over the weekend,
    and the backup traffic is going to start interfering with user
    traffic: by the time you reach a terabyte or so, you might find
    that you are designing your network around the backups
    rather than around the user traffic.
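
    To put rough numbers on that, here is a back-of-envelope sketch in
    Python (the 1 TB backup set and the 60% usable-throughput figure are
    assumptions for illustration, not measurements from anyone's network):

        # How long does a full backup take to cross a single network link?
        def backup_hours(data_bytes, link_bits_per_s, efficiency=0.6):
            """Hours to move data_bytes over a link at a given usable efficiency."""
            effective = link_bits_per_s * efficiency     # usable throughput, bits/s
            return data_bytes * 8 / effective / 3600     # bytes -> bits -> hours

        one_tb = 1e12   # assumed backup set: 1 TB
        for name, speed in [("10 Mb/s", 10e6), ("100 Mb/s", 100e6), ("1 Gb/s", 1e9)]:
            print(f"{name:>8}: {backup_hours(one_tb, speed):6.1f} h")

        # Prints roughly 370 h, 37 h and 3.7 h. At 100 Mb/s a full terabyte no
        # longer fits in an overnight window, which is the point at which the
        # backups start to dictate the network design.
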
    --
    *We* are now the times. -- Wim Wenders (WoD)
  4. Archived from groups: comp.dcom.lans.ethernet

    On Sun, 30 Jan 2005, Walter Roberson wrote:

    > network is the [automated] backups. If you have enough data stored
    > then even if only a fraction of it changes each day, doing
    > a full backup can easily keep your network links saturated
    > for 6 or more hours. After a point of growth, those backups
    > aren't going to finish overnight or even over the weekend,
    > and the backup traffic is going to start interfering with user
    > traffic: by the time you reach a terabyte or so, you might find
    > that you are designing your network around the backups
    > rather than around the user traffic.

    You make an excellent point. On the one hand it's better if backups
    are off-site, just in case the building burns down (don't laugh -
    we've seen it happen). On the other hand, as you say, it would need a
    sizeable network pipe to achieve it, for disk systems of quite an
    affordable size.

    Even taking full backups "in house" would be an intolerable strain on
    our network infrastructure. A dedicated link is probably to be
    recommended for many situations.
  5. Archived from groups: comp.dcom.lans.ethernet

    If that's all there is to it then you might ask yourself a few questions:

    Is the 10Mb network there so that it will intentionally limit the data rate
    from any client to the backbone? Why?

    Compared to 10Mb, if everything is 100Mb, will that necessarily increase the
    volume of data? Probably not. It would only increase the peak data rate -
    and reduce the time accordingly. Thus reduce the possibility of collisions.
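
    A tiny sketch of that distinction (the 50 MB-per-hour workload below is an
    assumed example, not a figure from this thread): moving the same volume at
    100Mb rather than 10Mb adds no data, it just leaves the link idle for more
    of each hour.

        # Same offered volume, different link speed: only the busy time changes.
        volume_bits = 50e6 * 8     # assumed: one client moves 50 MB per hour
        for name, rate in [("10 Mb/s", 10e6), ("100 Mb/s", 100e6)]:
            busy_s = volume_bits / rate
            print(f"{name:>8}: busy {busy_s:5.1f} s per hour "
                  f"({100 * busy_s / 3600:.2f}% occupancy)")

        # Roughly 40 s (1.11%) at 10 Mb/s versus 4 s (0.11%) at 100 Mb/s, so the
        # chance of two stations wanting the wire at the same instant drops too.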

    Depending on the topology of the servers and switches there should be less
    opportunity for collisions anyway - until multiple workstations vie for
    service on the same server.

    Ask this: why would it matter if the network elements limit data rate to
    10Mb on the clients or if network loading limits data rate to <100Mb on
    occasion?

    Hints:
    If the traffic is such that the 10Mb clients are limited to less than 10Mb
    on occasion then having them faster won't help on those occasions.
    Otherwise, there is the opportunity for the clients to get faster service if
    they are configured at 100Mb.

    If there are clients that will immediately query the servers after receiving
    each prior server response then they would burn up bandwidth. That would be
    an unfortunate design but may well exist. In this case having slower
    clients could be helpful.

    I imagine that the "conventional wisdom" goes like this:

    "If you have 60 clients that will all be served by a backbone then it's a
    good idea for the backbone to be faster than the individual clients"
    This is all well and good IF:
    - The clients already exist
    - The slower speed at the clients is acceptable
    - The clients don't already exist and having higher speed at the clients is
    more expensive
    - The clients will use more bandwidth if it's available (as the pathological
    case above).
    However, IF:
    - The network doesn't already exist.
    - Cost is pretty much independent of speed
    - There is no pathological behavior
    - Faster at the clients is better
    Then having everything as fast as possible could be a good idea.

    I'm not suggesting anything but that you ask yourself some questions about
    these things. I'm sure others will provide you with better guidance.

    Fred


    "Sonco" <sonbo@comcast.net> wrote in message
    news:364hcuF4th9aqU1@individual.net...
    >I have always heard that a 10mb network should be on a hundred mb backbone.
    > is there any drawbacks to having everything on 100mb.
    >
    > we are talking about 5 servers, 4 switches and about 60 workstations.
    >
    > any comments ?
    >
    >
  6. Archived from groups: comp.dcom.lans.ethernet

    Begin <ctj55g$9tj$1@canopus.cc.umanitoba.ca>
    On 2005-01-30, Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    > There is, though, one user on one port of one of the switches who
    > is producing more data than the rest of the users combined: that one
    > user should have a gigabit port and gigabit backbone to the server.

    I'd start with putting him on netnanny or something suitably annoying
    to see if that doesn't reduce the traffic to something reasonable. :-)


    --
    j p d (at) d s b (dot) t u d e l f t (dot) n l .
    Sure hope it's not a ``plank'', aka w4r3z s1t3.
  7. Archived from groups: comp.dcom.lans.ethernet

    In article <3670g5F4uacnkU1@individual.net>,
    jpd <read_the_sig@do.not.spam.it.invalid> wrote:
    >Begin <ctj55g$9tj$1@canopus.cc.umanitoba.ca>
    >On 2005-01-30, Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    >> There is, though, one user on one port of one of the switches who
    >> is producing more data than the rest of the users combined: that one
    >> user should have a gigabit port and gigabit backbone to the server.
    >
    >I'd start with putting him on netnanny or something suitably annoying
    >to see if that doesn't reduce the traffic to something reasonable. :-)
    >
    >
    >--
    > j p d (at) d s b (dot) t u d e l f t (dot) n l .
    > Sure hope it's not a ``plank'', aka w4r3z s1t3.


    How are you measuring your network ?

    You've got to give us more information before I'll say that your
    requirement exceeds a plain 100Mb full duplex switched network. It's
    a no-brainer to make the server-switch connection a Gbe interface,
    but beyond that you need to do some work to understand what your
    bottleneck is. The physical plant has lots to do with the design. If
    you're in one building with a few floors then pulling a run from the
    main equipment room to a switch on each floor gives you a collapsed
    backbone for not a lot of money. Pulling fibre is a no-brainer and
    100Mb to each floor may be enough and save the price of an expensive
    Gbe switch in the center, but a couple years from now when you need it
    it will be an easy upgrade path, and cheaper.

    Don't overengineer without understanding your requirements.

    "more traffic than the rest of the network, combined" is meaingless
    for the purposes of this discussion unless you give us numbers. It
    also suggests that you have a small # of machines since the
    larger the base the harder it is for one machine to exceed the
    aggregate unless he's very different, such as the only diskless
    workstation, or does daily full backups to the server.

    It would be worth the time to see if he's got a virus or spyware.
    Then you should try to understand his business and what makes him
    special. I would, before I asked for the money for Gbe. If the
    network connection isn't his bottleneck then there is no reason to
    spend money on him.

    A desktop machine generating enough data for a 100 Mb/s net
    connection to be a bottleneck is a rare thing in business.
    --

    a d y k e s @ p a n i x . c o m

    Don't blame me. I voted for Gore.
  8. Archived from groups: comp.dcom.lans.ethernet

    Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    > There is, though, one user on one port of one of the switches who
    > is producing more data than the rest of the users combined:

    Sniffer time! What is all that traffic?

    Is the user running some non-standard software that syncs a
    lot (MS FastIndex?), or is s/he infected with a virus?

    -- Robert
  9. Archived from groups: comp.dcom.lans.ethernet

    In article <ctllao$jqq$1@panix5.panix.com>, Al Dykes <adykes@panix.com> wrote:
    :In article <3670g5F4uacnkU1@individual.net>,
    :jpd <read_the_sig@do.not.spam.it.invalid> wrote:
    :>Begin <ctj55g$9tj$1@canopus.cc.umanitoba.ca>
    :>On 2005-01-30, Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    :>> There is, though, one user on one port of one of the switches who
    :>> is producing more data than the rest of the users combined: that one
    :>> user should have a gigabit port and gigabit backbone to the server.

    :>I'd start with putting him on netnanny or something suitably annoying
    :>to see if that doesn't reduce the traffic to something reasonable. :-)

    As the OP of that statement, I can say that it would not be an
    appropriate solution to the situation.


    :How are you measuring your network ?

    Well, since you ask, I have Fluke's Network Inspector monitoring
    all of the switches and producing 5-minute trend graphs. I also
    sometimes turn on MRTG and watch the graphs for a while. More often
    though, I record the packet counters and examine the packet volumes
    along the various links, cross-correlating from each end of the
    link to avoid making mistakes.
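
    For anyone wanting to reproduce that kind of trending, the core of it is
    just differencing interface counters. A minimal sketch (the counter values
    and the 5-minute interval are made-up examples; real tools such as MRTG
    handle the polling and counter wraps for you):

        # Turn two readings of an interface octet counter (e.g. IF-MIB's
        # ifHCInOctets) into an average rate over the polling interval.
        def avg_mbps(octets_t0, octets_t1, interval_s, wrap=2**64):
            delta = (octets_t1 - octets_t0) % wrap   # tolerate one counter wrap
            return delta * 8 / interval_s / 1e6      # octets -> bits -> Mb/s

        # Assumed sample values, taken 300 s (5 minutes) apart:
        rate = avg_mbps(1_000_000_000, 3_625_000_000, 300)
        print(f"{rate:.1f} Mb/s average on the sampled port")

        # Prints 70.0 Mb/s -- a 5-minute average that high on a 100 Mb/s uplink
        # is a strong hint that the peaks within the interval are saturating it.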

    :You've got to give us more information before I'll say that your
    :requirement exceeds a plain 100Mb full duplex switched network. It's
    :a no-brainer to make the server-switch connection a Gbe interface,
    :but beyond that you need to do some work to understand what your
    :bottleneck is. The physical plant has lots to do with the design. If
    :you're in one building with a few floors then pulling a run from the
    :main equipment room to a switch on each floor gives you a collapsed
    :backbone for not a lot of money.

    We have a building with 4 floors and two "wings", with a traditional
    star topology to a basement LAN router. One of the two wings is
    within the 100 meter copper limit, but due to the way the cross-connect
    between the wings runs, the switches in the other wing are at or
    beyond the 100 m limit, so those have fibre to the core router.
    Fibre was also installed on the other side for future expansion, with
    all the fibre terminating in that basement room.

    :Pulling fibre is a no-brainer and
    :100Mb to each floor may be enough and save the price of an expensive
    :Gbe switch in the center, but a couple years from now when you need it
    :it will be an easy upgrade path, and cheaper.

    Our measurements show we need the "expensive Gbe switch" anyhow, in
    order to keep up with our backups -- we've just gone from ~ 1 TB
    of storage to ~12 TB of storage capacity [not all used yet!!]
    The question was whether it would be best to install a managed gigabit
    switch in the wing where only one user is producing a great amount
    of data. Our conclusion was that it would be cheaper to pull
    copper [and fibre too since most of the cost is in the labour
    of putting the cables into the trays] over to our mini-NOC where
    we intend the new core router to live.

    :"more traffic than the rest of the network, combined" is meaingless
    :for the purposes of this discussion

    I didn't say "than the rest of the network combined", I said
    "than the rest of the users combined". There's a difference.

    :unless you give us numbers.


    The one user produces ~50 gigabytes per day, usually 6 days a week,
    250-350 GB per week total. Assuming 50% transfer efficiency
    [allowing for overheads and architectural limitations as you
    get towards gigabit], that is half a working day of continuous
    data transfer at 100 Mb/s. By way of comparison, all of our other
    servers combined [other than the one the above user data is stored on]
    backed up this morning into 437 GB of tape. If you need more exact
    numbers, such as the number of packets and bytes transferred per port, then
    I can supply several months' worth of that information in ~5 minute
    increments, but it would be a bit of a nuisance to extract it in
    detail out of the database it is in.

    Personally though, I don't think it'd be productive to dig up the
    details. I monitored the system carefully before making decisions
    about which portions needed upgrading and which did not. You should not,
    though, neglect the influence of power politics: if my measurements show
    that an entire subdepartment could easily fit into 10 Mb/s whilst
    a different subdepartment is overflowing 100 Mb/s, the first
    subdepartment will tend to feel that it is owed a network upgrade
    when the second gets one...


    :also suggests that you have a small # of machines since the
    :larger the base the harder it is for one machine to exceed the
    :aggregate unless he's very different, such as the only diskless
    :workstation, or does daily full backups to the server.

    Look@Lan tells me I have 324 different devices intermittently on the
    network. Over 600 devices are assigned IP addresses, but they
    don't all necessarily get used in the same month. We probably
    average close to 4 networked devices per person.


    :It would be worth the time to see if he's got a virus or spyware.
    :Then you should try to understand his business and what makes him
    :special.

    You ass-u-me'd that I don't understand his business and what makes him
    special. I have a fairly good idea of what makes him special.

    :I would, before I asked for the money for Gbe. If the
    :network connection isn't his bottleneck then there is no reason to
    :spend money on him.

    :A desktop machine generating enough data for a 100 Mb/s net
    :connection to be a bottleneck is a rare thing in business.

    Again you have ass-u-me'd. You failed to look at my email address
    and take a step such as visiting our web site. We aren't -in-
    business: we are public sector biomedical research.

    For what it's worth, the user is involved in Proteomics and is,
    if I recall correctly, doing automated DNA sequence analysis.
    The rate of data production swamps our previous high-point
    of projects having to do with Functional Imaging of the Brain
    in MRI machines, a typical run of which was only 1/2 GB.
    --
    I don't know if there's destiny,
    but there's a decision! -- Wim Wenders (WoD)
  10. Archived from groups: comp.dcom.lans.ethernet

    In article <86vLd.25867$iC4.24061@newssvr30.news.prodigy.com>,
    Robert Redelmeier <redelm@ev1.net.invalid> wrote:
    :Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    :> There is, though, one user on one port of one of the switches who
    :> is producing more data than the rest of the users combined:

    :Sniffer time! What is all that traffic?

    :Is the user running some non-standard software that syncs a
    :lot (MS FastIndex?), or is s/he infected with a virus?

    Non-standard perhaps, but not that syncs a lot, and no virus is
    involved. The user is running a high-speed scientific instrument.
    The other scientific instruments in the building do not produce
    data nearly as quickly.
    --
    Those were borogoves and the mome raths outgrabe, completely mimsy.
  11. Archived from groups: comp.dcom.lans.ethernet

    Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    > Non-standard perhaps, but not that syncs a lot, and
    > no virus is involved. The user is running a high-speed
    > scientific instrument. The other scientific instruments
    > in the building do not produce data nearly as quickly.

    Ah, so you understand the traffic. How about using local
    storage, and compressing for network backup? It sounds like
    this guy should be on his own 100 Mb port. Nothing wrong
    with elevating a "superuser" to the backbone.

    -- Robert
  12. Archived from groups: comp.dcom.lans.ethernet

    Sonco wrote:

    > I have always heard that a 10mb network should be on a hundred mb
    > backbone. is there any drawbacks to having everything on 100mb.
    > we are talking about 5 servers, 4 switches and about 60 workstations.
    > any comments ?

    These are all academic discussions at this point. The only drawback
    might be that you will not see a 10X increase in speed. But at
    today's switchport pricing, why bother with anything less than 100Mbps?
    For all I know, the time spent EVALUATING the network will cost more
    than just upgrading it.

    --

    hsb


    "Somehow I imagined this experience would be more rewarding" Calvin
    **************************ROT13 MY ADDRESS*************************
    Due to the volume of email that I receive, I may not be able to
    reply to emails sent to my account. Please post a followup instead.
    ********************************************************************
  13. Archived from groups: comp.dcom.lans.ethernet

    Fred Marshall wrote:

    (snip)

    > Compared to 10Mb, if everything is 100Mb, will that necessarily increase the
    > volume of data? Probably not. It would only increase the peak data rate -
    > and reduce the time accordingly. Thus reduce the possibility of collisions.

    > Depending on the topology of the servers and switches there should be less
    > opportunity for collisions anyway - until multiple workstations vie for
    > service on the same server.

    > Ask this: why would it matter if the network elements limit data rate to
    > 10Mb on the clients or if network loading limits data rate to <100Mb on
    > occasion?

    There is a possibility that a 100Mb network could be slower, but
    not so likely.

    If a switched 100Mb network with all links full duplex and
    no flow control saturates the uplink, it can be slower than
    10Mb links, full or half duplex, to a 100Mb uplink.

    That should be relatively unlikely, but it is possible.

    -- glen
  14. Archived from groups: comp.dcom.lans.ethernet

    Fred Marshall wrote:

    > If that's all there is to it then you might ask yourself a few questions:
    >
    > Is the 10Mb network there so that it will intentionally limit the data
    > rate
    > from any client to the backbone? Why?
    >
    > Compared to 10Mb, if everything is 100Mb, will that necessarily increase
    > the
    > volume of data? Probably not. It would only increase the peak data rate
    > -
    > and reduce the time accordingly. Thus reduce the possibility of
    > collisions.

    ?????? If it's a 100 Mb/s backbone feeding into 10 at the desktop then
    it's necessarily a switched architecture of some sort and collisions should
    not be an issue.

    > Depending on the topology of the servers and switches there should be less
    > opportunity for collisions anyway - until multiple workstations vie for
    > service on the same server.

    In a switched architecture the only time collisions occur is when something
    is misconfigured.

    > Ask this: why would it matter if the network elements limit data rate to
    > 10Mb on the clients or if network loading limits data rate to <100Mb on
    > occasion?
    >
    > Hints:
    > If the traffic is such that the 10Mb clients are limited to less than 10Mb
    > on occasion then having them faster won't help on those occasions.
    > Otherwise, there is the opportunity for the clients to get faster service
    > if they are configured at 100Mb.
    >
    > If there are clients that will immediately query the servers after
    > receiving
    > each prior server response then they would burn up bandwidth. That would
    > be
    > an unfortunate design but may well exist. In this case having slower
    > clients could be helpful.
    >
    > I imagine that the "conventional wisdom" goes like this:
    >
    > "If you have 60 clients that will all be served by a backbone then it's a
    > good idea for the backbone to be faster than the individual clients"
    > This is all well and good IF:
    > - The clients already exist
    > - The slower speed at the clients is acceptable
    > - The clients don't already exist and having higher speed at the clients
    > is more expensive
    > - The clients will use more bandwidth if it's available (as the
    > pathological case above).
    > However, IF:
    > - The network doesn't already exist.
    > - Cost is pretty much independent of speed
    > - There is no pathological behavior
    > - Faster at the clients is better
    > Then having everything as fast as possible could be a good idea.
    >
    > I'm not suggesting anything but that you ask yourself some questions about
    > these things. I'm sure others will provide you with better guidance.
    >
    > Fred
    >
    >
    >
    >
    > "Sonco" <sonbo@comcast.net> wrote in message
    > news:364hcuF4th9aqU1@individual.net...
    >>I have always heard that a 10mb network should be on a hundred mb
    >>backbone.
    >> is there any drawbacks to having everything on 100mb.
    >>
    >> we are talking about 5 servers, 4 switches and about 60 workstations.
    >>
    >> any comments ?
    >>
    >>

    --
    --John
    Reply to jclarke at ae tee tee global dot net
    (was jclarke at eye bee em dot net)
  15. Archived from groups: comp.dcom.lans.ethernet

    In article <ctojk7027n6@news4.newsguy.com>,
    J. Clarke <jclarke@nospam.invalid> wrote:
    :In a switched architecture the only time collisions occur is when something
    :is misconfigured.

    Switched != "full duplex".

    If you have a host connected to a switch port with nothing
    inbetween (no hubs or whatever) and both ports are running full
    duplex, then Yes, no collisions. However, if the ports are
    running half-duplex then transmissions from the host to the switch
    can collide with transmissions from the switch to the host.

    :?????? If it's a 100 Mb/s backbone feeding into 10 at the desktop then
    :it's necessarily a switched architecture of some sort and collisions should
    :not be an issue.

    As best I recall, it was only fairly recently that there has
    been anything approaching a "standard" for 10 Mb full duplex.
    Thus the existence of 10 Mb devices makes it quite likely that
    some (many) of the links are running half duplex, for which see above.
    --
    *We* are now the times. -- Wim Wenders (WoD)
  16. Archived from groups: comp.dcom.lans.ethernet

    In article <ctol8f$cvm$1@canopus.cc.umanitoba.ca>,
    roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:

    >
    > As best I recall, it was only fairly recently that there has
    > been anything approaching a "standard" for 10 Mb full duplex.
    > Thus the existence of 10 Mb devices makes it quite likely that
    > some (many) of the links are running half duplex, for which see above.

    The standard for 10 Mb/s Full Duplex operation was approved/published in
    1997 (going on 8 years, now). Even before that, there were products
    (non-standard or "pre-standard"). That said, there are lots of legacy
    10 Mb/s devices that may not be able to (or ever need to) operate in
    full-duplex mode.


    --
    Rich Seifert Networks and Communications Consulting
    21885 Bear Creek Way
    (408) 395-5700 Los Gatos, CA 95033
    (408) 228-0803 FAX

    Send replies to: usenet at richseifert dot com
  17. Archived from groups: comp.dcom.lans.ethernet

    Begin <ctm00e$st4$1@canopus.cc.umanitoba.ca>
    On 2005-01-31, Walter Roberson <roberson@ibd.nrc-cnrc.gc.ca> wrote:
    [snip!]
    >:>I'd start with putting him on netnanny or something suitably annoying
    >:>to see if that doesn't reduce the traffic to something reasonable. :-)
    >
    > As the OP of that statement, I can say that it would not be an
    > appropriate solution to the situation.

    If you're sure. You didn't say so, so the possibility was left open.
    (See other people making similar suggestions about similar possibilities.)


    >:It would be worth the time to see if he's got a virus or spyware.
    >:Then you should try to understand his business and what makes him
    >:special.
    >
    > You ass-u-me'd that I don't understand his business and what makes him
    ^^^^^^^^
    > special. I have a fairly good idea of what makes him special.

    Que? _You_ didn't specify, or even give a hint that you knew what
    makes this one anomaly so special. And given that as often as not
    people don't even know an ISDN cable from an ethernet cable, let alone
    understand why they're not interchangeable, it was, and still is, basic
    prudence to not disallow the possibility.


    [snip: sensible advice]
    >
    > Again you have ass-u-me'd. You failed to look at my email address
    > and take a step such as visiting our web site. We aren't -in-
    > business: we are public sector biomedical research.

    Excuse me? Why should anyone want to go out of their way to give you
    free advice? Just because you're in academentia? Furrfu.

    If it's true what you claim (and no, I haven't checked, why should I),
    you've been among them doktores too long. Time to sniff some fresh air,
    and see the colour of your sky fade to blue again.


    --
    j p d (at) d s b (dot) t u d e l f t (dot) n l .
  18. Archived from groups: comp.dcom.lans.ethernet

    In article <36al78F50lcojU1@individual.net>,
    jpd <read_the_sig@do.not.spam.it.invalid> wrote:
    :Que? _You_ didn't specify, or even give a hint that you knew what
    :makes this one anomaly so special.

    I certainly didn't give any hint that I -didn't- understand why that
    one user was producing masses of data. I did, though, give useful
    hints to the OP based upon our experiences -- hints that would
    tend to lead people to understand that I have done non-trivial
    network traffic analysis.

    :And given that as often as not
    :people don't even know an ISDN cable from an ethernet cable, let alone
    :understand why they're not interchangeable, it was, and still is, basic
    :prudence to not disallow the possibility.

    I am a relative newcomer to comp.dcom.lans.ethernet, having posted
    only 142 messages here during the last year, about 100 of which were
    in the last 6 months. That's only about 2 a week during that time, so
    I can understand why you might not have recognized my name. These days
    I'm mostly three newsgroups further over, in comp.dcom.sys.cisco,
    answering about 35 questions a week.


    :Excuse me? Why should anyone want to go out of their way to give you
    :free advice? Just because you're in academentia? Furrfu.

    If you re-examine my posting that started this subtree, you will see
    that I wasn't asking for advice, I was giving it.


    :If it's true what you claim (and no, I haven't checked, why should I),
    :you've been among them doktores too long. Time to sniff some fresh air,
    :and see the colour of your sky fade to blue again.

    The weather's been quite strange here this winter. Normally at this
    time of year it is clear and cold (-42 overnight is pretty common here
    for the first week of February); instead we've had a mix of deep cold
    and abnormal highs... and very little sunshine. It's a choice between
    grey skies and greyer skies.
    --
    Beware of bugs in the above code; I have only proved it correct,
    not tried it. -- Donald Knuth
  19. Archived from groups: comp.dcom.lans.ethernet

    "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
    news:utKdnRxYd6hAz2LcRVn-qw@comcast.com...
    > Fred Marshall wrote:
    >
    > (snip)
    >
    >> Compared to 10Mb, if everything is 100Mb, will that necessarily increase
    >> the volume of data? Probably not. It would only increase the peak data
    >> rate - and reduce the time accordingly. Thus reduce the possibility of
    >> collisions.
    >
    >> Depending on the topology of the servers and switches there should be
    >> less opportunity for collisions anyway - until multiple workstations vie
    >> for service on the same server.
    >
    >> Ask this: why would it matter if the network elements limit data rate to
    >> 10Mb on the clients or if network loading limits data rate to <100Mb on
    >> occasion?
    >
    > There is a possibility that a 100Mb network could be slower, but
    > not so likely.
    >
    > If a switched 100Mb network with all links full duplex and
    > no flow control saturates the uplink, it can be slower than
    > 10Mb links, full or half duplex, to a 100Mb uplink.
    >
    > That should be relatively unlikely, but it is possible.

    Glen,

    I'm not sure I understand...
    What is the basis of comparison?
    You said:
    "If a switched 100Mb network with all links full duplex and
    no flow control saturates the uplink, it can be slower than
    10Mb links, full or half duplex, to a 100Mb uplink."
    Let me paraphrase:
    "Comparing:
    a LAN of N clients using 100Mb links each, full duplex, no flow control
    with:
    a Lan of N clients using 10Mb, full or half duplex
    with a 100Mb uplink for both then:
    The 100Mb links can demonstrate lower throughput compared to the 10Mb links,
    given the same traffic demand."

    So, the traffic demand can be no greater than what will be presented with
    the 10Mb links - although the peaks will be higher with the 100Mb links. If
    that's the case how can the 100Mb links cause lower throughput? There must
    be some system overhead issue eh?

    On the other hand if somehow one allows the 100Mb linked clients to demand
    more throughput than their 10Mb cousins then are you saying that THEN the
    100Mb linked clients will actually provide lower throughput than the 10Mb
    clients *or* lower throughput than some other idealized situation that's not
    under discussion (yet)?

    Fred
  20. Archived from groups: comp.dcom.lans.ethernet

    Fred Marshall wrote:

    > "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote
    (snip)

    >>If a switched 100Mb network with all links full duplex and
    >>no flow control saturates the uplink, it can be slower than
    >>10Mb links, full or half duplex, to a 100Mb uplink.

    >>That should be relatively unlikely, but it is possible.


    > I'm not sure I understand...
    > What is the basis of comparison?
    > You said:
    > "If a switched 100Mb network with all links full duplex and
    > no flow control saturates the uplink, it can be slower than
    > 10Mb links, full or half duplex, to a 100Mb uplink."
    > Let me paraphrase:
    > "Comparing:
    > a LAN of N clients using 100Mb links each, full duplex, no flow control
    > with:
    > a Lan of N clients using 10Mb, full or half duplex
    > with a 100Mb uplink for both then:
    > The 100Mb links can demonstrate lower throughput compared to the 10Mb links,
    > given the same traffic demand."

    > So, the traffic demand can be no greater than what will be presented with
    > the 10Mb links - although the peaks will be higher with the 100Mb links. If
    > that's the case how can the 100Mb links cause lower throughput? There must
    > be some system overhead issue eh?

    > On the other hand if somehow one allows the 100Mb linked clients to demand
    > more throughput than their 10Mb cousins then are you saying that THEN the
    > 100Mb linked clients will actually provide lower throughput than the 10Mb
    > clients *or* lower throughput than some other idealized situation that's not
    > under discussion (yet)?

    Well, traffic demand and traffic allowed are different. If many
    hosts are able to send somewhere close to 100Mb, the switch will
    be forced to discard packets. Without flow control there is no
    way for the switch to slow down the traffic coming in.

    On half duplex links, some switches will do flow control by forcing
    collisions on incoming packets, so at least the sending host knows
    that the packets aren't getting through.

    This problem will only occur when there is enough traffic to more
    than fill the 100Mb uplink, which should be relatively rare.
    Maybe a group of machines all trying to back up data to a server
    at the same time would do it.

    Somewhat similar to traffic meters on freeway onramps that limit
    the rate traffic can enter the freeway to allow the freeway to
    run faster. People aren't good at doing flow control without
    an external signal and the threat of a ticket.
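
    A toy model of that uplink-saturation case (every number below is an
    assumption chosen only to illustrate the mechanism glen describes):

        # Three hosts burst at 100 Mb/s into one 100 Mb/s uplink through a
        # switch with a small output queue and no flow control.
        UPLINK = 100.0                    # uplink capacity, Mb per second
        BUFFER = 1.0                      # output queue depth, Mb
        offered = [100.0, 100.0, 100.0]   # per-host offered load, Mb per second

        queue = carried = dropped = 0.0
        for second in range(10):          # a 10-second burst
            queue += sum(offered)         # Mb arriving this second
            sent = min(queue, UPLINK)     # the uplink forwards at most 100 Mb
            queue -= sent
            carried += sent
            if queue > BUFFER:            # nothing tells the senders to slow
                dropped += queue - BUFFER # down, so the excess is discarded
                queue = BUFFER

        print(f"carried {carried:.0f} Mb, dropped {dropped:.0f} Mb")
        # Roughly 1000 Mb carried and 2000 Mb dropped: two thirds of the offered
        # traffic is lost, and retransmissions only make it worse. Had the access
        # links been 10 Mb/s, the same three hosts could offer at most 30 Mb/s,
        # which the 100 Mb/s uplink absorbs without dropping anything.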

    -- glen
  21. Archived from groups: comp.dcom.lans.ethernet

    "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
    news:2MKdnZlw7uASTpzfRVn-sA@comcast.com...
    > Fred Marshall wrote:
    >
    >> "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote
    > (snip)
    >
    >>>If a switched 100Mb network with all links full duplex and
    >>>no flow control saturates the uplink, it can be slower than
    >>>10Mb links, full or half duplex, to a 100Mb uplink.
    >
    >>>That should be relatively unlikely, but it is possible.
    >
    >
    >> I'm not sure I understand...
    >> What is the basis of comparison?
    >> You said:
    >> "If a switched 100Mb network with all links full duplex and
    >> no flow control saturates the uplink, it can be slower than
    >> 10Mb links, full or half duplex, to a 100Mb uplink."
    >> Let me paraphrase:
    >> "Comparing:
    >> a LAN of N clients using 100Mb links each, full duplex, no flow control
    >> with:
    >> a Lan of N clients using 10Mb, full or half duplex
    >> with a 100Mb uplink for both then:
    >> The 100Mb links can demonstrate lower throughput compared to the 10Mb
    >> links, given the same traffic demand."
    >
    >> So, the traffic demand can be no greater than what will be presented with
    >> the 10Mb links - although the peaks will be higher with the 100Mb links.
    >> If that's the case how can the 100Mb links cause lower throughput? There
    >> must be some system overhead issue eh?
    >
    >> On the other hand if somehow one allows the 100Mb linked clients to
    >> demand more throughput than their 10Mb cousins then are you saying that
    >> THEN the 100Mb linked clients will actually provide lower throughput than
    >> the 10Mb clients *or* lower throughput than some other idealized
    >> situation that's not under discussion (yet)?
    >
    > Well, traffic demand and traffic allowed are different. If many
    > hosts are able to send somewhere close to 100Mb, the switch will
    > be forced to discard packets. Without flow control there is no
    > way for the switch to slow down the traffic coming in.
    >
    > On half duplex links, some switches will do flow control by forcing
    > collisions on incoming packets, so at least the sending host knows
    > that the packets aren't getting through.
    >
    > This problem will only occur when there is enough traffic to more
    > than fill the 100Mb uplink, which should be relatively rare.
    > Maybe a group of machines all trying to back up data to a server
    > at the same time would do it.
    >
    > Somewhat similar to traffic meters on freeway onramps that limit
    > the rate traffic can enter the freeway to allow the freeway to
    > run faster. People aren't good at doing flow control without
    > an external signal and the threat of a ticket.
    >
    > -- glen

    Ah, OK. The links act as a throttle - which is not the same thing as saying
    that the clients work "OK" with those slower links. Even if the clients
    work "OK", they will demand more given the opportunity. The former is a
    perspective of the user and the latter is the perspective of the
    applications.

    Fred
  22. Archived from groups: comp.dcom.lans.ethernet

    Fred Marshall wrote:
    (snip)

    > Ah, OK. The links act as a throttle - which is not the same thing as saying
    > that the clients work "OK" with those slower links. Even if the clients
    > work "OK", they will demand more given the opportunity. The former is a
    > perspective of the user and the latter is the perspective of the
    > applications.

    The important point being that with 10Mb half duplex links, the
    sending hosts either can't send faster than the uplink can take
    the data, or get collisions as flow control.

    The data from 10 links of 10Mb/s each should fit into a 100Mb/s
    uplink, so even full duplex it should be fine.

    -- glen