Company network slowdown

Question about typical company network. We are looking at going gigabit mainly
because of a perceived network slowdown in the past 6 months or so. But... some
of us are not sure that the current 100 Mb / T1 network is really at fault.
Question is: We have some really speedy computers on the network and some not so
speedy. Can slow clock speed computers drag down the entire network? We have B /
G Wi-Fi on both sides of the firewall. Can they drag down overall speed of the
network? We have hubs / switches that feed other hubs / switches. How bad a
practice is that?
There are about 50 wired drops around the building and around 8 wi-fi hot spots.
Previous IT guy set the wi-fi up with all different SSIDs. We don't care about
laptop roaming so maybe that's not a big deal. Or not?
Any suggestions?
  1.

    Have you run a sniffer over the network to determine where the
    consumption and waste is?
  2.

    You must run a network traffic analysis prog to see where the bottlenecks
    are and how the bandwidth is being used/shared.
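
    If you don't have a commercial analyser handy, even a rough per-host byte
    count will point at the hogs. A throwaway Python sketch using the scapy
    module (untested, illustrative only - run it somewhere that can see the
    traffic, such as a mirror/span port or the old hub segment):

    from collections import Counter
    from scapy.all import sniff, IP

    bytes_by_src = Counter()

    def tally(pkt):
        # count every IP packet's length against its source address
        if IP in pkt:
            bytes_by_src[pkt[IP].src] += len(pkt)

    sniff(prn=tally, store=False, timeout=60)   # watch for one minute

    for src, nbytes in bytes_by_src.most_common(10):
        print(src, nbytes, "bytes in the last minute")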

    Consider putting high bandwidth 'power' users on their own network if
    possible... give them a fibre spine if required.

    Someone should be managing your network - reliability, usability and security
    will be compromised if you let benign (?) anarchy rule ;-)

    Have fun

    Guy


    "DanR" <dhr22@sorrynospm.com> wrote in message
    news:QVoUe.3268$6e1.1632@newssvr14.news.prodigy.com...
    > Question about typical company network. We are looking at going gigabit
    > mainly
    > because of a perceived network slowdown in the past 6 months or so. But...
    > some
    > of use are not sure that the 100 Mb T1 current network is really the
    > fault.
    > Question is: We have some really speedy computers on the network and some
    > not so
    > speedy. Can slow clock speed computers drag down the entire network? We
    > have B /
    > G Wi-Fi on both sides of the firewall. Can they drag down overall speed of
    > the
    > network? We have hubs / switches that feed other hubs / switches. How bad
    > a
    > practice is that?
    > There are about 50 wired drops around the building and around 8 wi-fi hot
    > spots.
    > Previous IT guy set the wi-fi up with all different SSIDs. We don't care
    > about
    > lap top roaming so maybe that's not a big deal. Or not?
    > Any suggestions?
    >
    >
  3.

    On Sat, 10 Sep 2005 23:08:32 GMT, "DanR" <dhr22@sorrynospm.com> wrote:

    >Yes, I should have provided more information about our network hardware. Problem
    >is I don't really know.

    Fine. However you should have some clue who's got performance
    problems.

    >We are a production company with 6 Avid sweets, 2 audio
    >sweets, one online editing room and an interactive department.

    That's suites, not sweets.

    >We don't have any
    >IT people per se... but have designated one of our coders to be responsible for
    >the network.

    I can't tell for sure but if you have 50 boxes, you really should get
    someone qualified to do the troubleshooting. It's easy enough to plan
    and set up a new network. It requires experience to troubleshoot an
    existing network.

    >He's a sharp guy and seems to know his network jargon. And he is
    >new on the job having taken over the network from someone who left. Because I'm
    >fairly handy with computers in general

    Well, ok.

    >I'm helping the boss think through our
    >move to giga-bit and the coincidental network / Internet slowdown we have been
    >experiencing.

    Ok, so it's an *INTERNET* slowdown, not a server to client or render
    farm slowdown. That's not going to change at all by going to gigabit.
    You're bottlenecked at 1.5Mbits/sec at the T1 and that's your limit.
    Do the traffic monitoring to see what and how much is moving in and
    out of the T1. Don't be surprised if you see worms, file sharing, and
    garbage.

    >The main reason to go giga-bit is to move very large files around
    >on the network. (video files in the giga-Bytes) And because of the Internet
    >slowdown of late we are talking and wondering if that will improve Internet
    >throughput.

    That's very different from an *INTERNET* slowdown. Most render farms
    are interconnected with gigabit ethernet. The big boxes have multiple
    gigabit cards to distribute the load. I got to play with one RAID
    server with 4 cards and a load balancer. Yeah, for in house traffic,
    gigabit is great.

    However, you still have to know if you're making an improvement. For
    that you need numbers, measurements, calculations, and pretty graphs
    to impress the boss. I suggest MRTG for traffic monitoring.
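
    A quick-and-dirty stand-in while MRTG is being set up might look like the
    sketch below (assumes the Python psutil module on the box being watched;
    numbers go to the screen, paste them wherever you make the pretty graphs):

    import time
    import psutil

    INTERVAL = 300   # seconds, MRTG's usual polling period

    old = psutil.net_io_counters(pernic=True)
    while True:
        time.sleep(INTERVAL)
        new = psutil.net_io_counters(pernic=True)
        for nic, c in new.items():
            if nic not in old:
                continue
            rx = (c.bytes_recv - old[nic].bytes_recv) * 8 / INTERVAL
            tx = (c.bytes_sent - old[nic].bytes_sent) * 8 / INTERVAL
            print(time.strftime("%H:%M"), nic,
                  "in %.0f bit/s" % rx, "out %.0f bit/s" % tx)
        old = new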

    >Obviously it will be a fairly expensive endeavor to run all new
    >cable throughout the building and get new NICs.

    Baloney. CAT5e will do gigabit just fine. You don't really need
    CAT6. Keep the cable lengths down to less than 300ft. Avoid long
    flexible ethernet CAT5 jumpers. Borrow a cable certifier and test
    your wiring. New gigabit NIC's are cheap. Netgear GA311 is about
    $20. I recently upgraded a law office with gigabit everything. It
    was a barely noticeable improvement. You only notice an improvement
    if your existing 100baseTX system is saturated. Do the measurements
    and you'll know for sure. If lazy, use Windoze XP Perfmon to check
    client network utilization.

    >So we're also thinking about
    >only doing new giga-drops at some work stations and not the entire network.

    Fine. Draw the topology map as I suggested and see how many boxes in
    between the gigabit NIC's need to be upgraded.

    >All
    >new drops will be home runs and if we do the entire building that means all home
    >runs.

    Home runs to what? I smell a big building with cable lengths more
    than 300ft which will require some intermediate boxes. Home runs
    aren't always best.

    >But there's a but and that is that we are considering fiber to the upper
    >floor because of long runs.

    How long? If you don't know, guess.

    >So that is a bit of background and I'm just trying to learn what I can so I can
    >ask intelligent questions and better understand what the heck is going on.

    Well, ok. I think I've given you a good start on the buzzwords. So
    far, you've made the decision to spend some money, considerable time,
    and a bit of guesswork, in order to upgrade a network that you don't
    have a clue where it's running slow, why it's running slow, or whether
    you have a traffic problem. Also, this has nothing to do with
    wireless so you're asking in the wrong newsgroup. To ensure that
    you'll get no useful answers, you've supplied not one single name,
    number, model number, distance, or accurate description.

    >I'm
    >basically a home network guy and that is the extent of my network hardware
    >knowledge.

    Well, you're learning. Business LAN's are very similar except that
    reliability is a much bigger issue than performance or features. Your
    real task will be to fix whatever problem you can't seem to describe
    accurately, and do it without breaking anything else or having 50
    irate graphic artists screaming at you. That's quite different from
    home networking.

    >I appreciate the help so far provided. Thank you all.
    >Jeff... when you say "A T1 (DS1) is 1.544Mbits/sec. You'll get about
    >1.3Mbits/sec thruput in both directions." Does that mean that just one
    >workstation at a time will see that throughput?

    No. The bandwidth is distributed roughly equally among the
    workstations.

    >If 10 computers / workstations
    >are at the same time doing a Microsoft update for example... are they sharing
    >that 1.3Mbit bandwidth?

    Yes. In theory, each workstation will get 1/10th the incoming
    bandwidth. MS Update is a bad example because of the way they do
    bandwidth limiting, but that's a diversion and not part of this
    discussion.

    >Are they each then downloading at 130Kb. Does it work
    >that way?

    Yes.

    >Also curious about one of our people who constantly listens to
    >Internet radio streams. Any harm there?

    No. I do that in the office. Screaming audio is from 24Kbits/sec to
    about 128Kbits/sec. Compared to your 1500Kbit/sec, the screaming
    audio listener only eats about 8% of your incoming bandwidth.
    However, if you're saturating the T1 with other traffic (do the
    sniffing), then that last 8% might be fatal.
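
    The arithmetic, spelled out with the numbers above (rough figures only):

    t1_usable = 1.3e6     # ~1.3 Mbit/s usable thruput on the T1
    stations = 10         # stations downloading at the same time
    print(t1_usable / stations / 1e3, "kbit/s per busy station")   # 130.0

    stream = 128e3        # one 128 kbit/s radio stream
    print(round(100 * stream / 1.5e6, 1), "% of the raw T1")       # 8.5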


    --
    Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    150 Felker St #D http://www.LearnByDestroying.com
    Santa Cruz CA 95060 http://802.11junk.com
    Skype: JeffLiebermann AE6KS 831-336-2558
  4.

    "DanR" <dhr22@sorrynospm.com> wrote in message
    news:QVoUe.3268$6e1.1632@newssvr14.news.prodigy.com...
    > Question about typical company network. We are looking at going gigabit
    mainly
    > because of a perceived network slowdown in the past 6 months or so. But...
    some
    > of use are not sure that the 100 Mb T1 current network is really the
    fault.
    > Question is: We have some really speedy computers on the network and some
    not so
    > speedy. Can slow clock speed computers drag down the entire network? We
    have B /
    > G Wi-Fi on both sides of the firewall. Can they drag down overall speed of
    the
    > network? We have hubs / switches that feed other hubs / switches. How bad
    a
    > practice is that?
    > There are about 50 wired drops around the building and around 8 wi-fi hot
    spots.
    > Previous IT guy set the wi-fi up with all different SSIDs. We don't care
    about
    > lap top roaming so maybe that's not a big deal. Or not?
    > Any suggestions?
    >
    If you are running from the server through one switch and using one output
    to feed another switch at 100 Mb, then taking the outputs of the second
    switch to feed a number of workstations, then all those workstations must
    share the single 100Mb feed from the first switch. Not good practice for
    maintaining good throughput and response.
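
    To put rough numbers on that (assuming a fully populated 16 port second
    switch and everyone busy at once):

    uplink = 95e6    # rough usable thruput of the single 100 Mb inter-switch link
    clients = 16     # workstations hanging off the second switch
    print(round(uplink / clients / 1e6, 1), "Mbit/s per client")   # about 5.9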

    Just watching the "blinking lights" on the switches can give you some idea
    of the loading and which direction the load is coming from.

    Either you need to redistribute the workstation load more evenly or better,
    take the network to gigabit so that the data moves a bit faster. Also be on
    the lookout for a bad or "garbaging" NIC. Some varieties can soft fail
    slowly and really start dragging a network down. Using managed switches
    rather than unmanaged and setting them up properly usually makes a
    significant difference.

    You may also wish to look at adding a second (and third or fourth) ethernet
    port on your server and feeding a switch directly rather than using a port
    on an existing earlier switch. Four ethernet ports on the server, each
    feeding a single 16 port switch and then directly to the clients, will spread
    out the load significantly. Be absolutely sure you use good NICs such as
    the genuine Intel Pro series rather than many of the cheap aftermarket types
    that generally cannot stand very high consistent traffic error free.

    Remember also the cascading guidelines for switches, 10Mb - 3 cascaded,
    100Mb - 2 cascaded, gigabit - no cascading.

    Peter
  5.

    Pierre wrote:
    > "DanR" <dhr22@sorrynospm.com> wrote in message
    > news:QVoUe.3268$6e1.1632@newssvr14.news.prodigy.com...
    >> Question about typical company network. We are looking at going gigabit
    >> mainly because of a perceived network slowdown in the past 6 months or so.
    >> But... some of use are not sure that the 100 Mb T1 current network is really
    >> the fault. Question is: We have some really speedy computers on the network
    >> and some not so speedy. Can slow clock speed computers drag down the entire
    >> network? We have B / G Wi-Fi on both sides of the firewall. Can they drag
    >> down overall speed of the network? We have hubs / switches that feed other
    >> hubs / switches. How bad a practice is that?
    >> There are about 50 wired drops around the building and around 8 wi-fi hot
    >> spots. Previous IT guy set the wi-fi up with all different SSIDs. We don't
    >> care about lap top roaming so maybe that's not a big deal. Or not?
    >> Any suggestions?
    >>
    > If you are running from the server through one switch and using one output
    > to feed another switch at 100 Mb, then taking the outputs of the second
    > switch to feed a number of workstations, then all those workstations must
    > share the single 100Mb feed from the first switch. Not good practice for
    > maintaining good throughput and response.
    >
    > Just watching the "blinking lights" on the switches can give you some idea
    > of loading and in what directions the load is coming from.
    >
    > Either you need to redistribute the workstation load more evenly or better,
    > take the network to gigabit so that the data moves a bit faster. Also be on
    > the lookout for a bad or "garbaging" NIC. Some varieties can soft fail

    What are the symptoms of a bad or "garbaging" NIC? Would it be constant traffic
    even when the user is not doing anything network related? Would watching the
    "blinking lights" help find one of these NICs? Would a managed switch make a
    "garbaging" NIC a non-issue?

    > slowly and really start dragging a network down. Using managed switches
    > rather than unmanaged and setting them up properly usually makes a
    > significant difference.
  6.

    Hi Dan,

    A garbaging NIC can often be found by watching the lights. Network software
    analysis tools very rarely find it as the data it is sending is invariably a
    load of rubbish and may not even be valid bytes. All it seems to do is use
    bandwidth. The user may even be otherwise totally inactive but the NIC keeps
    chattering. A final usual proof is to unplug the ethernet cable at the
    suspect machine and see if there is an improvement.

    Putting in a managed switch is not the way to fix that problem. You have to
    find the bad NIC and replace it. It is a bit like using a bucket to drain a
    flooded area when in fact the drain should be unblocked!

    As others have said, a good audit and mapping of the complete network is
    mandatory if you are going to approach the issues in any sort of logical
    manner. The scatter gun approach generally leads to more confusion.

    With a good map of your network, you can isolate sections logically and see
    if the isolated section was the one hogging the network, and then break that
    section into smaller sections until the culprit is found. There could well
    be other issues which have affected the network loading and performance too
    such as a new application installed, the server databases not responding
    quickly enough because of server performance issues and so on. Again, draw
    up in detail what the network has and step through it first.

    As an example, a client of mine runs some 50-60 workstations to two separate
    servers on a single network. The primary server is also running a moderately
    heavy SQL database and file storage of some 2 terabytes of image files
    averaging 1.5 Mb each. In any one minute period, it is usual to have some 20
    workstations up and down some 10-15 image files each, apart from referencing
    the SQL database and a medium accounting job. It used to run at 100 Mb with
    unmanaged switches on two segments and was a bit slow. Once a garbaging NIC
    dropped performance by some 25% overall. The same system is now upgraded
    with two managed switches and gigabit, and it flies!
    Peter
    "DanR" <dhr22@sorrynospm.com> wrote in message
    news:RHJUe.2760$7D1.1746@newssvr12.news.prodigy.com...
    >
    >
    > Pierre wrote:
    > > "DanR" <dhr22@sorrynospm.com> wrote in message
    > > news:QVoUe.3268$6e1.1632@newssvr14.news.prodigy.com...
    > >> Question about typical company network. We are looking at going gigabit
    > >> mainly because of a perceived network slowdown in the past 6 months or
    so.
    > >> But... some of use are not sure that the 100 Mb T1 current network is
    really
    > >> the fault. Question is: We have some really speedy computers on the
    network
    > >> and some not so speedy. Can slow clock speed computers drag down the
    entire
    > >> network? We have B / G Wi-Fi on both sides of the firewall. Can they
    drag
    > >> down overall speed of the network? We have hubs / switches that feed
    other
    > >> hubs / switches. How bad a practice is that?
    > >> There are about 50 wired drops around the building and around 8 wi-fi
    hot
    > >> spots. Previous IT guy set the wi-fi up with all different SSIDs. We
    don't
    > >> care about lap top roaming so maybe that's not a big deal. Or not?
    > >> Any suggestions?
    > >>
    > > If you are running from the server through one switch and using one
    output
    > > to feed another switch at 100 Mb, then taking the outputs of the second
    > > switch to feed a number of workstations, then all those workstations
    must
    > > share the single 100Mb feed from the first switch. Not good practice for
    > > maintaining good throughput and response.
    > >
    > > Just watching the "blinking lights" on the switches can give you some
    idea
    > > of loading and in what directions the load is coming from.
    > >
    > > Either you need to redistribute the workstation load more evenly or
    better,
    > > take the network to gigabit so that the data moves a bit faster. Also be
    on
    > > the lookout for a bad or "garbaging" NIC. Some varieties can soft fail
    >
    > What are the symptoms of a bad or "garbaging" NIC? Would it be constant
    traffic
    > even when the user is not doing anything network related? Would "watching
    the
    > "blinking lights" help find one of these NICs? Would a managed switch make
    a
    > "garbaging" NIC a non issue?
    >
    > > slowly and really start dragging a network down. Using managed switches
    > > rather than unmanaged and setting them up properly usually makes a
    > > significant difference.
    > >
    > > You may also wish to look at adding a second (and third or fourth)
    ethernet
    > > port on your server and feeding a switch directly rather than using a
    point
    > > of an existing earlier switch. Four ethernet ports on the server, each
    > > feeding a single 16 port switch and then directly to the clients will
    share
    > > out the load significantly but be absolutely sure you use good NICs such
    as
    > > the genuine Intel Pro series rather than many of the cheap aftermarket
    types
    > > that generally cannot stand very high consistent traffic error free.
    > >
    > > Remember also the cascading guidelines for switches, 10Mb - 3 cascaded,
    > > 100Mb - 2 cascaded, gigabit - no cascading.
    > >
    > > Peter
    >
    >
  7.

    Jeff has it right again except for one part. Gigabit NICs are cheap and you
    get what you pay for. having been intimately associated with a similar type
    of installation, we ended up throwing out 23 Netgear GA311 NICs and a
    variety of other breeds. The majority of them just cannot reliably stand
    intense high volume traffic as occasioned by hundred megabyte file transfers
    running 24/7. They randomly and intermittently buckle, resulting in a few
    more retries, which takes precious bandwidth. Commercial installations
    usually run at sub 5 or 10% network utilisation. Graphics and imaging sites
    often run at 80%+ utilisation for minutes on end.

    After a lot of experimentation and testing of various NICs, we replaced all
    the NICs on the network with genuine Intel Pro series NICs which were a bit
    dearer, and we have never had a problem in the three years since. It flies.
    And no, I am an independent contractor with no interest or shares in Intel!

    Peter
    "Jeff Liebermann" <jeffl@comix.santa-cruz.ca.us> wrote in message
    news:0r07i1t243545j0jur8971jroso4usvkl3@4ax.com...
    > On Sat, 10 Sep 2005 23:08:32 GMT, "DanR" <dhr22@sorrynospm.com> wrote:
    >
    > >Yes, I should have provided more information about our network hardware.
    Problem
    > >is I don't really know.
    >
    > Fine. However you should have some clue who's got performance
    > problems.
    >
    > >We are a production company with 6 Avid sweets, 2 audio
    > >sweets, one online editing room and an interactive department.
    >
    > That's Suite's, not sweets.
    >
    > >We don't have any
    > >IT people per se... but have designated one of our coders to be
    responsible for
    > >the network.
    >
    > I can't tell for sure but if you have 50 boxes, you really should get
    > someone qualified to do the troubleshooting. It's easy enough to plan
    > and setup a new network. It's requires experience to troubleshoot an
    > existing network.
    >
    > >He's a sharp guy and seems to know his network jargon. And he is
    > >new on the job having taken over the network from someone who left.
    Because I'm
    > >fairly handy with computers in general
    >
    > Well, ok.
    >
    > >I'm helping the boss think through our
    > >move to giga-bit and the coincidental network / Internet slowdown we have
    been
    > >experiencing.
    >
    > Ok, so it's an *INTERNET* slowdown, not a server to client or render
    > farm slowdown. That's not going to change at all by going to gigabit.
    > You're bottlenecked at 1.5Mbits/sec at the T1 and that's your limit.
    > Do the traffic monitoring to see what and how much is moving in and
    > out of the T1. Don't be surprised if you see worms, file sharing, and
    > garbage.
    >
    > >The main reason to go giga-bit is to move very large files around
    > >on the network. (video files in the giga-Bytes) And because of the
    Internet
    > >slowdown of late we are talking and wondering if that will improve
    Internet
    > >throughput.
    >
    > That's very different from an *INTERNET* slowdown. Most render farms
    > are interconnected with gigabit ethernet. The big boxes have multiple
    > gigabit cards to distribute the load. I got to play with one RAID
    > server with 4 cards and a load balancer. Yeah, for in house traffic,
    > gigabit is great.
    >
    > However, you still have to know if you're making an improvement. For
    > that you need numbers, measurements, calculations, and pretty graphs
    > to impress the boss. I suggest MRTG for traffic monitoring.
    >
    > >Obviously it will be a fairly expensive endeavor to run all new
    > >cable throughout the building and get new NICs.
    >
    > Baloney. CAT5e will do gigabit just fine. You don't really need
    > CAT6. Keep the cable lengths down to less than 300ft. Avoid long
    > flexible ethernet CAT5 jumpers. Borrow a cable certifier and test
    > your wiring. New gigabit NIC's are cheap. Netgear GA311 is about
    > $20. I recently upgraded a law office with gigabit everything. It
    > was a barely noticeable improvement. You only notice an improvement
    > if your existing 100baseTX system is saturated. Do the measurements
    > and you'll know for sure. If lazy, use Windoze XP Perfmon to check
    > client network utilization.
    >
    > >So we're also thinking about
    > >only doing new giga-drops at some work stations and not the entire
    network.
    >
    > Fine. Draw the topology map as I suggested and see how many boxes in
    > between the gigabit NIC's need to be upgraded.
    >
    > >All
    > >new drops will be home runs and if we do the entire building that means
    all home
    > >runs.
    >
    > Home runs to what? I smell a big building with cable lengths more
    > than 300ft which will require some intermediate boxes. Home runs
    > aren't always best.
    >
    > >But there's a but and that is that we are considering fiber to the upper
    > >floor because of long runs.
    >
    > How long? If you don't know, guess.
    >
    > >So that is a bit of background and I'm just trying to learn what I can so
    I can
    > >ask intelligent questions and better understand what the heck is going
    on.
    >
    > Well, ok. I think I've given you a good start on the buzzwords. So
    > far, you've made the decision to spend some money, considerable time,
    > and a bit of guesswork, in order to upgrade a network that you don't
    > have a clue where it's running slow, why it's running slow, or whether
    > you have a traffic problem. Also, this has nothing to do with
    > wireless so you're asking in the wrong newsgroup. To insure that
    > you'll get no useful answers, you've supplied not one single name,
    > number, model number, distance, or accurate description.
    >
    > >I'm
    > >basically a home network guy and that is the extent of my network
    hardware
    > >knowledge.
    >
    > Well, you're learning. Business LAN's are very similar except that
    > reliability is a much bigger issue than performance or features. Your
    > real task will be to fix whatever problem you can't seem to describe
    > accurately, and do it without breaking anything else or having 50
    > irate graphic artists screaming at you. That's quite different from
    > home networking.
    >
    > >I appreciate the help so far provided. Thank you all.
    > >Jeff... when you say "A T1 (DS1) is 1.544Mbits/sec. You'll get about
    > >1.3Mbits/sec thruput in both directions." Does that mean that just one
    > >workstation at a time will see that throughput?
    >
    > No. The bandwidth is distributed roughly equally among the
    > workstations.
    >
    > >If 10 computers / workstations
    > >are at the same time doing a Microsoft update for example... are they
    sharing
    > >that 1.3Mbit bandwidth?
    >
    > Yes. In theory, each workstation will get 1/10th the incoming
    > bandwidth. MS Update is a bad example because of the way they do
    > bandwidth limiting, but that's a diversion and not part of this
    > discussion.
    >
    > >Are they each then downloading at 130Kb. Does it work
    > >that way?
    >
    > Yes.
    >
    > >Also curious about one of our people who constantly listens to
    > >Internet radio streams. Any harm there?
    >
    > No. I do that in the office. Screaming audio is from 24Kbits/sec to
    > about 128Kbits/sec. Compared to your 1500Kbit/sec, the screaming
    > audio listener only eats about 8% of your incoming bandwidth.
    > However, if you're saturating the T1 with other traffic (do the
    > sniffing), then that last 8% might be fatal.
    >
    >
    > --
    > Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    > 150 Felker St #D http://www.LearnByDestroying.com
    > Santa Cruz CA 95060 http://802.11junk.com
    > Skype: JeffLiebermann AE6KS 831-336-2558
  8.

    On Sun, 11 Sep 2005 14:09:00 +1000, "Pierre" <rainsford@ihug.com.au>
    wrote:

    Oops. I just meant the GA311 as an example of a cheap gigabit NIC.
    I have to confess that I don't have experience with the GA311 NIC
    under heavy continuous load. I guess I'll avoid the GA311 as the
    Intel card is only about $30 each.
    | http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=1275962&CatId=0
    My only point was that a gigabit conversion is no longer very
    expensive at the client end.

    Looking at Gigabit switches, the prices seem to hover around $10-$20
    per port for unmanaged and $25 to $40 per port for managed switches.
    I would go with the managed switch as I'm a big fan of SNMP monitoring
    and management. Knowing what's happening and being able to turn
    things on and off remotely is worth the extra dollars.
    | http://www.tigerdirect.com/applications/category/category_slc.asp?Nav=|c:201|c:596|
    94 gigabit switches to choose from, some of which are fairly cheap.

    Incidentally, you're largely proving my point, that gigabit is only
    effective when the network segment is heavily loaded. With light
    loads, I can do quite well with 100baseTX-FDX.
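
    For the original poster's multi-gigabyte video files the difference is
    easy to put a number on. The effective rates below are guesses (disk and
    protocol overhead included), not measurements:

    file_bits = 10e9 * 8                            # a 10 GB video file
    rates = {"100baseTX": 90e6, "gigabit": 600e6}   # guessed effective rates
    for name, rate in rates.items():
        print(name, round(file_bits / rate / 60, 1), "minutes")   # ~14.8 vs ~2.2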


    --
    Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    150 Felker St #D http://www.LearnByDestroying.com
    Santa Cruz CA 95060 http://802.11junk.com
    Skype: JeffLiebermann AE6KS 831-336-2558
  9.

    First of all I suggest updating the drivers on all of your network cards.
    Then I suggest removing hubs and replacing them with switches.
    Then run a traffic analyzer on the hosts (PCs) where you see more traffic.
  10.

    Jeff, I want to make sure I understand your comments.
    >> Jeff... when you say "A T1 (DS1) is 1.544Mbits/sec. You'll get about
    >> 1.3Mbits/sec thruput in both directions." Does that mean that just one
    >> workstation at a time will see that throughput?
    >
    > No. The bandwidth is distributed roughly equally among the
    > workstations.
    >
    Could the above sentence read "No. The bandwidth is distributed roughly equally
    among the workstations that are at that moment sending / receiving on the
    Internet"?
    In other words... the active workstations share the bandwidth. True? I think
    that is what you said below.

    >> If 10 computers / workstations
    >> are at the same time doing a Microsoft update for example... are they sharing
    >> that 1.3Mbit bandwidth?
    >
    > Yes. In theory, each workstation will get 1/10th the incoming
    > bandwidth. MS Update is a bad example because of the way they do
    > bandwidth limiting, but that's a diversion and not part of this
    > discussion.
    >
    >> Are they each then downloading at 130Kb. Does it work
    >> that way?
    >
    > Yes.
    >
    I'm really surprised to learn that a T1 Internet connection has these
    limitations. Seems then that (except for upload) it's like having 50 or so
    computers on a home DSL Internet connection. I would have thought that this
    would have been unacceptable. My "thought" is not based on technical knowledge
    but I always assumed that a T1 was the ultimate way to go.
    One more thing. At any given time during the work day we have about 20
    computers using instant messaging. Most of the time there is no traffic but the
    apps are always listening. Is that much of a load?
    I am extremely grateful for the time you've spent providing all this good
    information. If we don't have to run all new cable your tip will save our
    company a lot of money and labor.
  11.

    On Mon, 12 Sep 2005 13:07:11 GMT, "DanR" <dhr22@sorrynospm.com> wrote:

    >Jeff, I want to make sure I understand your comments.
    >>> Jeff... when you say "A T1 (DS1) is 1.544Mbits/sec. You'll get about
    >>> 1.3Mbits/sec thruput in both directions." Does that mean that just one
    >>> workstation at a time will see that throughput?
    >>
    >> No. The bandwidth is distributed roughly equally among the
    >> workstations.

    >Could the above sentence read "No. The bandwidth is distributed roughly equally
    >among the workstations" that are at that moment sending / receiving on the
    >Internet.
    >In other words... the active workstations share the bandwidth. True? I think
    >that is what you said below.

    Yes, the active workstations share the bandwidth roughly equally.
    Note that this is NOT true with wireless where the distribution varies
    with the connection speed.

    No, this does NOT mean that only one workstation at a time will get that
    thruput as you previously stated.

    >I'm really surprised to learn that a T1 Internet connection has these
    >limitations.

    You get what you pay for. In the past, it was assumed that a T1(DS1)
    came with a superior level of support from the telcos. I still
    remember one hour service from Pacific Bell. Now daze, T1 is just
    another service and may just be a muxed channel off some telco fiber.
    I actually get better service from my DSL lines than I do from the
    T1's. The only real benefit of a T1 is the 1.5Mbits/sec outgoing
    bandwidth, which cannot be easily supplied via DSL.

    >Seems then that (except for upload) it's like having 50 or so
    >computers on a home DSL Internet connection.

    The conventional rule of thumb for loading is:
    100 users doing light web browsing and email.
    10 business users doing whatever business users do.
    1 file sharing user.
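
    Plugging a made-up mix of users into that rule of thumb shows how fast the
    demand adds up:

    # hypothetical office: 30 light browsers, 15 business users, 1 file sharer
    load = 30 * (1 / 100) + 15 * (1 / 10) + 1 * 1.0
    print(round(load, 2), "T1s worth of demand")   # 2.8 - the pipe is oversubscribed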

    >I would have thought that this
    >would have been un-acceptable.

    What is unacceptable? Only having 50 computers on a single T1?
    Again, it depends on what those users are doing. By today's standards
    of bloated and bandwidth hungry applications, a T1 is a small pipe.
    If you would kindly dig out the sniffer and see what's moving on your
    T1, you might have a better idea of whether you're dealing with a
    capacity problem or an abuse problem.

    For example, a customer calls me on Sunday morning (yawn) to ask why
    their T1 is moving large amounts of traffic when there's nobody in the
    office. This is a good question. I expected to find a virus, worm,
    or hacker. Instead, I found that a clever user had found a program
    that "synchronized" his files between his home computer and his office
    machine. He had set it up incorrectly and it was "synchronizing" much
    of the corporate server farm as well as gigabytes of junk on his
    desktop. Eventually, it would have killed his home computer, but I
    didn't want to wait. So, I dived into the managed ethernet switch,
    pulled the virtual plug on his machine, and left a nasty voicemail
    message. This type of nonsense happens all the time.

    Another example. A while back, I noticed that the MRTG traffic graphs
    showed that someone was downloading about 25Mbytes of something every
    5 minutes. It was causing problems with VoIP traffic and streaming
    content. It turned out to be Symantec Live Update trying to update
    Norton Antivirus. One problem. Norton AntiVirus had been removed
    from that machine, but not Live Update. It would merrily try to
    update NAV, fail, and then try again in 5 minutes by downloading
    everything over and over and over, etc.

    Moral: You need to know what's moving on your network or you can't do
    anything useful in the way of troubleshooting and capacity planning.

    >My "thought" is not based on technical knowledge

    Got it. Your thinking is based on emotion. I have a ladyfriend that
    sometimes operates that way. The scary part is that it often works.
    There are books and classes to optimize intuition, crystal ball
    gazing, Ouija boards, and pseudo science that may help with this
    non-technical approach to troubleshooting. I've often suspected that the
    government also uses this method in their technical ventures.

    >but I always assumed that a T1 was the ultimate way to go.

    You can't afford the ultimate. At this time, an OC-192 at
    9.6Gbits/sec symmetrical is about as fast as commonly available.
    Korea has 10Mbit/sec consumer service. Most cable modems and some DSL
    vendors will do 6Mbits/sec download and 512Kbits/sec upload. Desktops
    will soon have 10Gigabit ethernet cards. Some crude numbers:
    http://www.infobahn.com/research-information.htm

    Incidentally, if *ONLY* incoming bandwidth is an issue, you might
    wanna consider distributing the load. Get several DSL connections and
    use one of these to manage the load:
    http://www.edimax.com/html/english/products/list-PRIrouter.htm
    The DSL lines are MUCH cheaper than the T1. However, if your problem
    is outgoing bandwidth, a load balancing router will do nothing.

    >One more thing. At any given time during the work day we have about 20
    >computers using instant messaging. Most of the time there is not traffic but the
    >apps are always listening. Is that much of a load?

    No load at all. Some IM clients (i.e. AIM) deliver advertising and
    stupid videos which grab a small amount of bandwidth, but nothing
    disgusting and nothing that's running all the time. However, if
    people are using IM for file transfers, the bandwidth use might be
    momentarily quite high.

    >I am extremely grateful for the time you've spent providing all this good
    >information. If we don't have to run all new cable your tip will save our
    >company a lot of money and labor.

    I still think you need someone with network troubleshooting experience
    to implement monitoring and traffic analysis. Render farms use LOTS
    of bandwidth. My guess(tm) is that your speed problem may be in an
    unexpected area.


    --
    Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    150 Felker St #D http://www.LearnByDestroying.com
    Santa Cruz CA 95060 http://802.11junk.com
    Skype: JeffLiebermann AE6KS 831-336-2558
  12.

    Good advice, Pierre. Also, Dan, do not overlook the network printers. I've
    had them start chattering several times (all HP's) and bring the network to
    its knees. Drove us crazy trying to find the culprit.

    Cheers, Wizzzer
    --
    Nuke 'em 'til they glow,
    shoot 'em in the dark.
  13.

    On Mon, 12 Sep 2005 21:53:26 -0500, "Wizzzer" <notme@noway.com> wrote:

    >Good advice, Pierre. Also, Dan, do not overlook the network printers. I've
    >had them start chattering several times (all HP's) and bring the network to
    >it's knees. Drove us crazy trying to find the culprit.

    Been there. HP LaserJet 4 with a Jetdirect J2552 card. If it ran out of
    paper, it flooded the network with garbage that was impossible to
    decode with Ethereal. That took me 6 months to find. It was fixed
    with a firmware update to the Jetdirect card.


    --
    # Jeff Liebermann 150 Felker St #D Santa Cruz CA 95060
    # 831.336.2558 voice Skype: JeffLiebermann
    # http://www.LearnByDestroying.com AE6KS
    # http://802.11junk.com
    # jeffl@comix.santa-cruz.ca.us
    # jeffl@cruzio.com
  14.

    In the Usenet newsgroup alt.internet.wireless, in article
    <gjnci15ei80kbrlanvk0c5lc78974etjt9@4ax.com>, Jeff Liebermann wrote:

    >Been there. HP LaserJet 4 with a Jetdirect J2552 card. If it ran out of
    >paper, it flooded the network with garbage that was impossible to
    >decode with Ethereal. That took me 6 months to find.

    Don't know what your network looks like, but HP only has a handful of
    OUI blocks:

    [compton ~]$ zgrep -i Hewlett MACaddresses.gz | grep base | cut -d' ' -f1 | column
    0001E6 000883 000E7F 00110A 001321 001560 0060B0
    0001E7 000A57 000F20 001185 001438 00306E 0080A0
    0004EA 000D9D 001083 001279 0014C2 0030C1 080009
    [compton ~]$

    That's straight out of the IEEE file. I'm at an R&D facility, and we're
    super paranoid, so every host is 'registered' meaning we know MAC, IP,
    user, location, which drop from which switch, serial and decal numbers,
    and the date of last tetanus shot for everything that connects to our net.
    If something starts squittering, I can ID the box in seconds. If the box
    is unknown, I can ID the drop, and it's 50/50 if the security goons get
    there before me or not.
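
    The "registration" lookup itself is nothing fancy - conceptually something
    like this sketch (every name, MAC, and drop below is invented):

    HP_OUIS = {"0001E6", "0001E7", "0004EA", "080009"}   # a few of the blocks above

    hosts = {
        "00:01:E6:12:34:56": ("j.smith", "bldg 2, drop 2-17"),
    }

    def whois(mac):
        # map a MAC to its registered owner/drop and flag HP OUIs
        mac = mac.upper()
        oui = mac.replace(":", "")[:6]
        owner, drop = hosts.get(mac, ("unknown", "unknown drop"))
        return owner, drop, ("HP OUI" if oui in HP_OUIS else "not an HP OUI")

    print(whois("00:01:e6:12:34:56"))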

    Old guy
  15.

    Thanks to all who responded. The detailed replies were very helpful and enabled
    some of us non-experts to ask the right questions of the person who will do the
    hands-on work. We are on our way to the upgrade and sniffing around for Internet
    issues.
    Dan
  16.

    In the Usenet newsgroup alt.internet.wireless, in article
    <gr6fi1tofbn8oii6uggb7sjd3hnb7eulmj@4ax.com>, Jeff Liebermann wrote:

    >Well, if the 802.3 Ethernet packets were well formed and contained MAC
    >addresses, tracing the problem back to the source would have been
    >trivial. Instead, what I was seeing was bursts of garbage that I
    >couldn't decode. I tried Ethereal, a Network General Sniffer, NT
    >Netmon, and a bunch of demo sniffers I downloaded just to see if they
    >could make sense of the traffic.

    Oh, _that_ kind of garbage. Yeah, had that with old 10Base5 transceivers
    with later model Sun SS5 and SS20. Drove us absolutely bananas till
    we caught one in the lab. We were using a NetGen sniffer, and I
    forget what it was that we were finally able to spot - vaguely it was
    a fraction of the Ethernet header, but that was years ago.

    >I could see the garbage very lightly flashing the lights on the hubs,
    >but could not decode anything. I spent two days with a logic analyzer
    >trying to capture useful data and decode the contents manually, but
    >even that didn't produce anything useful.

    We had a Tektronix 535 scope on a platform, with another guy with the
    probe in the overhead ceiling. Total waste of time. We did see there
    was an occasional fractional packet (wasn't long enough to be a
    collision), and actually had people log into each box on the subnet
    and look at the ifconfig -a stuff. No joy.

    >Just to make it interesting, I made a rather stupid series of
    >mistakes. This was in the days when hubs were in fashion and switches
    >were expensive and scarce (approx 1997)

    1997 - we had just completed installing Kalpana Etherswitches to break
    our 750 foot lengths of 10Base5 into smaller segments, and to get the
    routers and busy servers onto their own ports. I didn't ask how
    expensive the Etherswitches were, but they made a significant
    improvement - and they had (some) smarts!!!

    >Nobody ever deduced that the network running slow was caused by running
    >out of paper because there was always someone around to replace the paper
    >that was not directly involved in using the computahs. Running out of
    >paper was a very uncommon experience, so the times of slowdowns were not
    >easy to predict.

    Yeah, our users are "trained" to reload paper bins. They'd manage to
    screw something else up, but paper usually got loaded as soon as
    someone came to pick up their printouts, and found nada. Some of
    the "smarter" ones learned how to cancel and re-run print jobs on
    alternative printers. Why they wouldn't reload the paper? Who knows.

    >I had wrongly decided that the various 16 port Linksys 10baseT hubs
    >were the likely culprits and convinced management to go for an HP
    >Procurve 4000 switch, mostly on the basis of speeding things up to
    >100baseTX-FDX.

    We had two buildings with twisted pair - I swear it was Cat 1/2 - and
    one section of the main building with Cat 5. Everything else was coax.

    >The nice thing about switches is that garbaged and trashed packets do
    >not go through a store and forward switch. Everything that was plugged
    >into the Procurve switch worked without a slowdown. Everything that was
    >still on the hub slowed to a crawl whenever the HP LJ4 ran out of paper.

    Similar with the old transceivers, except we had them on three of the
    16 ports. That narrows it down, but doesn't get the exact answer.

    >Of course, I didn't bother labeling the cables so I didn't have an
    >immediate clue as to where the junk was coming from.

    Boy does that sound familiar ;-)

    >If there had been anything decoded by a sniffer, I would have found
    >the source almost immediately. Instead, it was a painful 6 month
    >ordeal, with lots of bad guesswork, and a substantial amount of luck
    >in finding the problem. What I consider the most important lesson
    >from the aformentioned exercise was that I could not have figured it
    >out without the statistics and diagnostics from the managed switch.

    Coax is just as bad if not worse - the blinky lights are on the
    transceiver up in the ceiling (and under floor in the server rooms).
    Until we broke things up with the Etherswitches, our coax runs were
    up to 750 feet long, and had up to 400 systems on that one wire.
    Slightly out of spec, but it worked.

    Old guy
  17.

    On Wed, 14 Sep 2005 22:04:18 -0500, ibuprofin@painkiller.example.tld
    (Moe Trin) wrote:

    >forget what it was that we were finally able to spot - vaguely it was
    >a fraction of the Ethernet header, but that was years ago.

    My guess(tm) is that I was seeing the intentional trashing part of the
    ethernet collision detection mechanism going continuously. If the PAD
    (packet assembler/disassembler) detects a partial collision, it is
    supposed to intentionally trash the packet to prevent propagating
    garbage. The waveforms I saw looked like small pulses of this
    intentional trashing algorithm. The packet disassembler is programmed
    to detect this trashing and temporarily increase the collision backoff
    timers resulting in fairly long delays where nobody is transmitting.
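
    For reference, the backoff in question is the standard truncated binary
    exponential backoff - roughly this sketch, using 10 Mbit/s numbers:

    import random

    SLOT = 51.2e-6   # seconds: 512 bit times at 10 Mbit/s

    def backoff_delay(attempt):
        # after the Nth collision, wait a random number of slot times drawn
        # from 0 .. 2**min(N, 10) - 1; give up after 16 attempts
        if attempt > 16:
            raise RuntimeError("excessive collisions - frame dropped")
        return random.randint(0, 2 ** min(attempt, 10) - 1) * SLOT

    print([round(backoff_delay(n) * 1e6, 1) for n in range(1, 6)], "microseconds")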

    >We had a Tektronix 535 scope on a platform, with another guy with the
    >probe in the overhead ceiling.

    Ah, nostalgia. Much of my HF ham radio station is thrown together with
    bright yellow coax cable. I always knew it was good for something. I
    also have a few 3C500 cards and transceivers which I'm sure will end
    up in a museum somewhere. About half the problems I found with the
    stuff were mechanical. Poor probe contact due to loose clamping and
    lousy terminations due to flakey N-connector assembly were most
    common. Also water incursion via unused probe holes and the usual
    mouse chewing. I built my own TDR (time domain reflectometer) which
    was a big help because I could "see" the probes and terminations.

    >Total waste of time.

    Great exercise.

    >We did see there
    >was an occasional fractional packet (wasn't long enough to be a
    >collision), and actually had people log into each box on the subnet
    >and look at the ifconfig -a stuff. No joy.

    Most managed boxes will log "runt" packets, framing errors, and such.
    | http://support.3com.com/infodeli/tools/netmgt/tncsunix/product/091500/c11ploss.htm
    However, that often doesn't happen when the PAD is implemented in a
    hardware decoder designed to take the load off the main processor.
    Early hardware PAD chips just discarded malformed packets and didn't
    report them. At best, it lumps everything into "discards" without any
    detail. That was the problem with using PC's to do the
    troubleshooting. The runts never showed up on any of the diagnostics.

    >1997 - we had just completed installing Kalpana Etherswitches to break
    >our 750 foot lengths of 10Base5 into smaller segments, and to get the
    >routers and busy servers onto their own ports. I didn't ask how
    >expensive the Etherswitches were, but they made a significant
    >improvement - and they had (some) smarts!!!

    More nostalgia. Kalpana was the first "LAN switch" vendor. They
    basically invented ethernet switching (followed by immediate copying
    by almost everyone in the biz). I never got to play with one.

    >Yeah, our users are "trained" to reload paper bins. They'd manage to
    >screw something else up, but paper usually got loaded as soon as
    >someone came to pick up their printouts, and found nada.

    There were quite a few people in the offices that didn't do anything
    with computers. Everyone knew how to load paper and were told to keep
    the printers full. The reason was not very subtle. This company had
    previously experienced a computer meltdown and would print out every
    customer order so as not to lose anything. Between the dozen or so
    printers, that was about 2-3 reams per day plus a huge pile at the end
    of each accounting period. About once a year, they would rent a huge
    paper shredder and recycle. I worked on their machines since the S100
    bus days and found that the paper trail was rarely used, but when it
    was, it was invaluable.

    >We had two buildings with twisted pair - I swear it was Cat 1/2 - and
    >one section of the main building with Cat 5. Everything else was coax.

    If you search Google, you'll find a few articles by me on using RG-6/u
    75 ohm coax for 10base2 (Cheapernet). Works fine if you don't have
    any BNC T connector taps and only transceivers at the ends of the
    coax. I learned networking the hard way with Arcnet (over both coax
    and flat telco wire) and Starlan with CAT3 and 25 pair telco bundles.
    I still have a few 10baseT runs over 25 pair telco cables.

    >>Of course, I didn't bother labeling the cables so I didn't have an
    >>immediate clue as to where the junk was coming from.
    >
    >Boy does that sound familiar ;-)

    I was in a hurry. Nobody wanted me around during working hours. I
    was very disruptive. No matter how hard I tried, I could never
    schedule any downtime during working hours. There was always someone
    who just had to get their report done at the very last moment. So, I
    would show up at about 7PM putter around until about 9PM, and then
    bring everything down. As I recall, I was done with the rewiring
    exercise at about 2AM and spent the next 3 hours fixing my crimping
    and punch down mistakes. I now have various cable testers, but at the
    time, it was with an ohms-guesser and clip leads. Somehow, I thought
    that I would remember where everything went and label things later.
    Later never arrived.

    >Coax is just as bad if not worse - the blinky lights are on the
    >transceiver up in the ceiling (and under floor in the server rooms).
    >Until we broke things up with the Etherswitches, our coax runs were
    >up to 750 feet long, and had up to 400 systems on that one wire.
    >Slightly out of spec, but it worked.

    The segment length can be up to 500 meters for 10base5 so that's fine.
    However, you're only supposed to have 100 nodes per segment, so you're
    more than slightly out of spec. I'm surprised it worked at all. Even
    repeaters wouldn't have made that conform as there were only supposed
    to be 3 populated segments max per system.

    If you want some really fun nightmares, try the original Sytek IBM RF
    broadband modem network implementation. IBM was selling the technology
    for office networking on the original IBM PC's in the early 1980's.
    It's basically a CATV system, complete with 6MHz channels, with all
    the nightmares of analog systems. Much of the network hardware was
    right out of the cable TV business. The RF data networks were
    amazingly common and I made good money recrimping connectors, dealing
    with reflections, and playing RF troubleshooter. I just noticed I
    still have some of the Sytek cards in my pile. Ah, nostalgia.

    5 phone calls while writing this message. Maybe it's time to go to
    work?


    --
    Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    150 Felker St #D http://www.LearnByDestroying.com
    Santa Cruz CA 95060 http://802.11junk.com
    Skype: JeffLiebermann AE6KS 831-336-2558
  18.

    In the Usenet newsgroup alt.internet.wireless, in article
    <87aji15brj5flcejtplo4b6hibidfv41fd@4ax.com>, Jeff Liebermann wrote:

    >My guess(tm) is that I was seeing the intentional trashing part of the
    >ethernet collision detection mechanism going continuously. If the PAD
    >(packet assembler/disassembler) detects a partial collision, it is
    >supposed to intentionally trash the packet to prevent propagating
    >garbage. The waveforms I saw looked like small pulses of this
    >intentional trashing algorithm.

    "The Specification" says this is supposed to be a 32 to 48 bit time
    jam - content not specified except that it must not make a valid CRC
    if by some bizarre reason the collision is detected that late. The
    detection to jam delay is supposed to be no more than two bit periods.

    >The packet disassembler is programmed to detect this trashing and
    >temporarily increase the collision backoff timers resulting in fairly
    >long delays where nobody is transmitting.

    Collision backoff should only affect the two (or more) systems that
    collided. Others on the wire shouldn't be affected, and they can
    slip in 9.6 usec after the wires stop vibrating.

    >Ah, nostalgia. Much of my HF ham radio station is thrown together with
    >bright yellow coax cable. I always knew it was good for something.

    It's a good HF cable, but isn't rated for voltage (given that the
    normal signals are about 2 Vpp), and of course has the different
    jacket, solid center conductor, and double shielding.

    >I also have a few 3C500 cards and transceivers which I'm sure will end
    >up in a museum somewhere. About half the problems I found with the
    >stuff were mechanical. Poor probe contact due to loose clamping and
    >lousy terminations due to flakey N-connector assembly were most
    >common.

    We also had problems with the people not cleaning the outer braid out
    of the tap holes. As far as the N connectors, there were only two per
    run, and we tended to take special care when putting them together
    (though not the kind of care you used with RG-9 or 214 at 3+ GHz.).

    >Also water incursion via unused probe holes and the usual mouse chewing.

    Didn't have much of that, but then we left the transceivers in place once
    they got installed.

    >I built my own TDR (time domain reflectometer) which was a big help
    >because I could "see" the probes and terminations.

    The rise/fall spec was 20-30 ns, though the spec also required the
    harmonics down (second/third down 20 dB, fourth/fifth down 30 dB,
    sixth/seventh down 40 dB, higher down 50 dB) which really limits
    the resolution of TDR. Funny thing is, nowhere in the spec do they
    give a required VSWR/return loss, whatever. I guess it's implied
    with the transceiver loading.

    >Early hardware PAD chips just discarded malformed packets and didn't
    >report them. At best, it lumps everything into "discards" without any
    >detail. That was the problem with using PC's to do the
    >troubleshooting. The runts never showed up on any of the diagnostics.

    Even late model hardware didn't report it. That's why we needed the
    NetGen sniffer. (Well, that was one reason.)

    >Everyone knew how to load paper and was told to keep the printers full.
    >The reason was not very subtle. This company had previously experienced
    >a computer meltdown and would print out every customer order so as not
    >to lose anything. Between the dozen or so printers, that was about 2-3
    >reams per day plus a huge pile at the end of each accounting period.

    Wowser! We just had a pretty good tape backup (yeah, I know) program.
    We knew the tapes were good, because they were verified during the day
    following the backup, and we had enough user mistooks that we were
    restoring files on a regular basis. There was off-site storage as well.

    >If you search Google, you'll find a few articles by me on using RG-6/u
    >75 ohm coax for 10base2 (Cheapernet). Works fine if you don't have
    >any BNC T connector taps and only transceivers at the ends of the
    >coax.

    Huh? What kind of transceivers had built in terminations?

    >I learned networking the hard way with Arcnet (over both coax
    >and flat telco wire) and Starlan with CAT3 and 25 pair telco bundles.
    >I still have a few 10baseT runs over 25 pair telco cables.

    <shudder>

    >So, I would show up at about 7PM putter around until about 9PM, and
    >then bring everything down. As I recall, I was done with the rewiring
    >exercise at about 2AM and spent the next 3 hours fixing my crimping
    >and punch down mistakes. I now have various cable testers, but at the
    >time, it was with an ohms-guesser and clip leads.

    We had very little twisted pair, and once the orange cable was in (and
    working), the rest of it was stringing four pair, and poking pins into
    DB15s, which we often fobbed off on the interns.

    >Somehow, I thought that I would remember where everything went and
    >label things later. Later never arrived.

    Naturally. We tried to pre-mark the drop cables so that there was both
    a plastic marker, and a felt tip marking at the transceiver end. The
    drop end had a felt tip marking and a Dymo label that went on the
    face plate. Early on, we also marked the transceivers with felt tips on
    masking tape, but the heat killed that. When we started installing
    fiber, this was outsourced, but one of the contract requirements was
    that the fibers had to be marked with serial numbers every ten feet
    and within a foot of each end, and the serial numbers had to be entered
    into "the book" with full location information not later than noon of
    the next day. We randomly tested this as part of the acceptance
    inspection. This seems to have worked, though I have no idea how much
    it added to the tab.

    >The segment length can be up to 500 meters for 10base5 so that's fine.
    >However, you're only supposed to have 100 nodes per segment, so you're
    >more than slightly out of spec. I'm surprised it worked at all.

    The spec was ALSO 2.5 meters between transceivers - we violated that
    to heck and gone. ;-)

    >Even repeaters wouldn't have made that conform as there were only
    >supposed to be 3 segments max per system.

    Not so. There could be a maximum of two repeaters (or four half
    repeaters or whatever) between any two "stations". If you wanted
    to have a 250 meter cable with repeaters every 2.5 meters feeding
    100 OTHER segments, that was OK. Well, it was permitted - not
    that having 100^2 hosts in one collision domain would work worth
    a darn. ;-)

    Old guy
  19. Archived from groups: (More info?)

    On Sat, 17 Sep 2005 20:17:29 -0500, ibuprofin@painkiller.example.tld
    (Moe Trin) wrote:

    >>Nope. I wasn't looking at the ethernet waveforms although those were
    >>interesting to look at. I was sending a pulse down the line and
    >>looking for discontinuities.
    >
    >The rationale is that the DIX specification isn't really covering the
    >higher frequencies because what happens up "there" doesn't have the
    >same effect as the 10/20 MHz frequencies.

    Well, that's correct. The 802.3 MPE (Manchester Phase Encoding)
    waveform concentrates most of its power around 10 MHz. The
    10 Mbit/sec data rate only requires about 30 MHz of bandwidth to
    operate. I don't see any reason to characterize the cable beyond its
    highest operating frequency.
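
    A toy sketch of the encoding in Python, just to show where those
    numbers come from (assuming I remember the 802.3 polarity correctly):

        def manchester(bits):
            """IEEE 802.3 Manchester: two half-bit symbols per data bit,
            1 -> low then high (rising edge mid-bit), 0 -> high then low."""
            out = []
            for b in bits:
                out.extend((0, 1) if b else (1, 0))
            return out

        # two line symbols per bit means 20 Mbaud on the wire for 10 Mbit/s
        # data, which is why the energy is concentrated around 10 MHz
        print(manchester([1, 0, 1, 1, 0]))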

    >Some of what you see on a 5 nano
    >rise/fall time (200 MHz = about 2 foot resolution) isn't there as far
    >as the Ethernet signal is concerned. None the less, having the sharp
    >rise/fall really does help in multiple mismatch conditions. A long
    >time ago, I had something similar (a NE555 driving half of a 74S74
    >driving an LH0002) when I was looking at a number of coax runs in a
    >data acquisition trailer, and it certainly was useful in locating
    >the fault, but we used VSWR meters to go/no-go the lines.

    Mine was a bit fancier. 555 driving some ECL gates to a fast
    switching xsistor with a clamp diode to prevent saturation (which
    would trash the risetime).

    This one seems to be too crude:
    http://www.tkk.fi/Misc/Electronics/circuits/tdr.html
    but does explain the principles involved.
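
    The arithmetic for reading the trace is trivial - a sketch in Python
    (the 0.77 velocity factor is from memory, so check your cable):

        C_M_PER_US = 300.0        # free-space propagation, metres per microsecond
        VELOCITY_FACTOR = 0.77    # roughly right for the yellow 10base5 coax

        def fault_distance_m(round_trip_us):
            """Distance to whatever caused a blip on the TDR trace."""
            return round_trip_us * C_M_PER_US * VELOCITY_FACTOR / 2.0

        print(fault_distance_m(1.0))   # a reflection 1 us after the edge is ~115 m out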

    >Major problems, and breaks in the cable itself.

    I never had to deal with breaks in the cable. However, damage to the
    cable was a major cause of extended troubleshooting exercises. In one
    warehouse, the yellow cable was snaked through the overhead. No
    problem getting to the cable except that it was 20 ft off the floor
    which was about 3 ft too high for my tallest ladder.

    >>kinks that cause the dielectric to migrate
    >
    >That would have to be a pretty wicked bend - certainly down below the
    >MIL-C-17 bend radius requirements. For the orange stuff, we never had
    >bends shorter than about 2 foot radius.

    The yellow coax was so stiff that it wouldn't bend with less than a
    1ft radius anyway. However, that didn't stop anyone from trying to
    bend it across a sharp corner. The problem was always the same.
    Someone's workstation was a bit too far from the probe transceiver.
    DB15 extensions were impossible to find. The culprit certainly was
    not going to re-arrange their office layout for the convenience of the
    cabling. So, they give a good hard tug on the cable and try to brute-
    force it into giving them a few extra inches of cable length. If the
    cable bend radius started out at 1ft, it was now about 1 inch. Some
    of the buildings had metal studs in the walls that were capable of
    cutting the cable in half. I recall one 300ft run that had 3 or 4
    cable splices installed. Cheapernet installs had exactly the same
    problem except that the cable was even more fragile. The problem
    didn't go away until we went to 10baseT.

    >>The transceiver loading is fairly light. A few PF at most.
    >
    >Two puff max - shunt resistance over 100K

    As I vaguely recall, 4 pF maximum. Very lightly loaded considering
    that the coax cable was about 25 pF/ft.

    >Terminators speced as 49.9 Ohms +/- 1 percent, at 0 to 20 MHz with the
    >phase angle of the impedance not to exceed 5 degrees, which is relatively
    >good.

    Major overkill. I don't think any of my junk VSWR test equipment or
    directional couplers are accurate enough to measure that at low
    VSWR's. I would have to use a bridge to get the accuracy over the
    frequency range.

    The reason for the tight specs had nothing to do with VSWR. 10base5
    and 10base2 both use the DC levels on the coax for collision
    detection. The transceiver has a current source and uses the two
    terminators as a load to get the exact voltage required. Variations
    in typical production and installation, plus coax copper losses,
    meant that the accumulated tolerances could push this voltage out of
    spec. Rather than transfer the cost of a high tolerance current
    source to the transceivers, it was cheaper and easier to demand that
    the terminators were close to perfect. Eventually, everyone figured
    out how to make cheap precision current sources, so the terminator
    tolerances didn't need to be that critical. However, once written,
    such specs tend to be cast in stone.
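
    A back-of-the-envelope sketch in Python (the 41 mA drive current is
    from memory, so treat the exact numbers as approximate):

        DRIVE_MA = 41.0    # nominal average DC drive current, from memory

        def dc_level_v(term1_ohms, term2_ohms):
            """Average DC level one transmitter puts on the coax; the current
            source sees the two end terminators in parallel (copper loss ignored)."""
            load = (term1_ohms * term2_ohms) / (term1_ohms + term2_ohms)
            return -(DRIVE_MA / 1000.0) * load

        print(dc_level_v(49.9, 49.9))   # about -1.02 V with 1% terminators
        print(dc_level_v(55.0, 55.0))   # about -1.13 V if both ends are 10% high

    Two stations transmitting at once roughly doubles that level, which is
    what the collision comparator is looking for, so every bit of terminator
    slop eats into the detection margin.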

    The 10base2 Cheapernet terminator specs are something like 51 ohms,
    +/- 5 percent. 10% will work. Unfortunately, most of them were built
    with carbon composition resistors, which are slightly hygroscopic and
    will therefore tend to drift over time. I recently tested my pile of
    50, 75, and 93 ohm terminators and found many of them were way out of
    spec. A quick bake in the oven solved that problem.

    >The coax is allowed to have sinusoidal +/- 3 Ohm ripple at spacing
    >of two meters or less (on top of the 50 +/- 2 Ohms of the cable), and
    >that's probably dominant, I've seen people hand select MIL-R-11 carbon
    >composite resistors for "50" ohms, and then wonder why the VSWR is so
    >gross at higher frequencies. They might as well have used a wire wound.

    Sigh. You must work in a research or government environment. Nobody
    else I know could afford or has any interest in such details. Carbon
    comp resistors are terrible at higher frequencies but are probably
    just fine up to 30MHz. I vaguely recall tearing apart a 10base5
    terminator and finding a single 51 ohm carbon comp resistor in
    parallel with something to get it down to 49 ohms. Resistor lead
    length inductance is probably the real killer.

    >(...) He whips out his Simpson 260, center-center, shield-shield,
    >nothing (open) from center to shield. "Yup - it's OK." We had a nice
    >half hour lesson as I showed him how to use a slotted line and VSWR
    >meter, and how to measure insertion loss with a shorting plug, and how
    >it differed from the results using an open.

    I still have some slotted coax and waveguide lines floating around. I
    use them for skool demonstrations. These days, I use a network
    analyzer. It's nice to have everything displayed on a single Smith
    chart.

    I'm not going to try and justify my use of crude tests. I was looking
    for continuity, not compliance with specifications. I don't have a
    test lab available at the customers. I also can't do much if I find
    something wrong with the terminator except replace it with another
    one. Also, knowing exactly why the specs are so tight on the
    terminator was a big help in knowing what I could get away with. I
    did find a few failures with the ohms-guesser method. Shorts were
    common. Defective transceivers were a problem. 117VAC on the coax
    shield (long story here) was found with a volts-guesser.

    Incidentally, I still have about 4 Simpson 260 voltmeters in various
    levels of functionality. However, these days, I use a DVM.

    >Oh, so you don't like my Exabyte 8205s and 8505s? ;-)

    I have an 8205 somewhere. I never got into the 8mm drives. Too
    expensive at the time and they had already developed a rather bad
    reputation. I didn't see it as much of an improvement over DC-600 size
    drives except for capacity. Instead, I went directly to 4mm drives
    and jukeboxes. Big mistake as it took a few years to demonstrate
    their shortcomings. I later went to AIT which seems to have fixed
    most of my complaints.

    >I dunno - we've been using it for years, and it hasn't killed us yet.
    >(I know, I know - it will).

    Have you ever had to do a massive restore from tape (while under
    pressure)? I have and I can assure you that reliability is not one of
    the better features of tape. I have a small collection of recovery
    tools that I use in case I get the all too common read errors. I've
    also had to use a tape recovery service to deal with tape errors,
    where the tape head had worn enough to be unable to read an old tape
    or where a new replacement drive would not read an old tape. For a
    time, one customer would put the old DDS-2 drive in the safe along
    with the tapes just to be sure they had the hardware to read the
    tapes. To add a challenge, HP was constantly screwing around with the
    tape format so that a random version of their firmware would not
    necessarily read tapes made with a different version. Sony was doing
    a heroic job of trying to stay HP compatible but eventually gave up.

    This brings back nightmares of baby sitting tape restores that took
    all night and had to be watched constantly. I'm glad those days are
    over.

    >Actually, I shouldn't complain - my "RF" tool box has more adapters
    >than you can shake a stick at, though I am missing my WR-112 to clip
    >leads. ;-)

    Oh, be serious. I have about 20 lbs of adapters and connectors. I
    would visit the local hamfests and retired hams and buy up all the
    adapters I could find. It didn't matter what type or flavor. Best
    investment I ever made. Every Field Day, about a dozen adapters
    evaporate, but I have spares. In this case, BNC to F adapters are
    very common in CATV work to interface to the test equipment. I carry
    a pile of them. There's almost no loss but admittedly, they are a
    rather bad 75 ohm to 50 ohm match. At 10 MHz, it's tolerable.
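
    The 75-into-50 mismatch is easy to put a number on - a quick sketch
    in Python:

        import math

        def step_mismatch(z_load, z0=50.0):
            """Reflection coefficient, VSWR and mismatch loss for a simple
            impedance step, e.g. 50 ohm test gear looking into 75 ohm cable."""
            gamma = abs(z_load - z0) / (z_load + z0)
            vswr = (1.0 + gamma) / (1.0 - gamma)
            loss_db = -10.0 * math.log10(1.0 - gamma ** 2)
            return gamma, vswr, loss_db

        print(step_mismatch(75.0))   # gamma 0.2, 1.5:1, ~0.18 dB - tolerable at 10 MHz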

    Incidentally, a demonstration I like to give at radio club meetings is
    grabbing one of my adapter boxes and stringing as many adapters in
    series as possible. I usually have a string about 6ft long. I attach
    a VSWR meter at both ends, dummy load, and 450 MHz transmitter. I then
    ask the assembled horde what they would predict for the loss.
    Conventional wisdom says that adapters are an evil abomination and should
    be avoided at all cost. It turns out that the adapter chain has about
    the same loss as an equivalent length of RG-8/u. So much for the
    lossy adapter theory.

    >When I moved to this house, I initially set up using 10Base2, because
    >it was quick. The following winter, I spent one weekend in the attic
    >pulling CAT 5 to each room (why not), including eight drops in the
    >den. Only when I was finished did I think it might have been a good
    >idea to replace the US$2/mile cable that the klowns had for the phones.

    I use CAT5 for everything including video. About 5-7 cents per ft
    which is cheaper than coax. I did a remodel in about 1995 and was
    able to run conduit to various places. The size varies from 1/2" to
    1" schedule 40. If I need to run something, I just add it to the
    tangled mess. Of course, I ran the conduit from where I needed it
    least, to where I thought it might be useful. That resulted in more
    cables under the desk and along the floor. I also have about 500ft of
    fiber in the pipes, which has yet to be useful. If I had to do it
    today, I would use the blue flex plastic wiring conduit instead of
    schedule 40.


    --
    Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    150 Felker St #D http://www.LearnByDestroying.com
    Santa Cruz CA 95060 http://802.11junk.com
    Skype: JeffLiebermann AE6KS 831-336-2558
  20. Archived from groups: (More info?)

    In the Usenet newsgroup alt.internet.wireless, in article
    <7r8ri1543dbbi4g4nmg38p7jv6nfaog370@4ax.com>, Jeff Liebermann wrote:

    >I don't see any reason to characterize the cable beyond its highest
    >operating frequency.

    Neither did the guys at DIX. There were tradeoffs - the solid center
    conductor is to provide a constant depth target for the stinger in a
    vampire clamp. That was one of the changes between the original 3Base5
    (which did spec RG-8/U) and version 2 (10Base5). In theory, skin depth
    would prefer stranded, or if solid, a silver plating, much as the better
    hard lines use.

    >Mine was a bit fancier. 555 driving some ECL gates to a fast
    >switching xsistor with a clamp diode to prevent saturation (which
    >would trash the risetime).

    The LH0002 has the drive capability - it's meant as a 50 ohm cable driver,
    but it's only characterized to 50 MHz (though it's capable of 150 MHz with
    a little tweaking).

    >I never had to deal with breaks in the cable. However, damage to the
    >cable was a major cause of extended troubleshooting exercises. In one
    >warehouse, the yellow cable was snaked through the overhead. No
    >problem getting to the cable except that it was 20 ft off the floor
    >which was about 3 ft too high for my tallest ladder.

    Well, that should certainly keep it out of the way of the janitors. In
    most of our installations, it's tie-wrapped to the suspension for the
    false ceiling. The electricians and HVAC guys are all aware of the
    cable, and avoid it. Wish they'd do the same with the fiber.

    >Someone's workstation was a bit too far from the probe transceiver.
    >DB15 extensions were impossible to find.

    We have extension cables out the whazoo - it's how we train the interns
    every quarter. Two-foot steps from 2 to 16 feet. Rarely need longer.

    >The culprit certainly was not going to re-arrange their office layout
    >for the convenience of the cabling.

    Absolutely

    >So, they give a good hard tug on the cable and try to brute force into
    >giving them a few extra inches of cable length.

    Don't have the problem. All of our overhead drops go to wall plates. We only
    have one room with under-floor cables, and nearly all of those also go to
    wall plates. The few that were not were knotted around the false floor
    supports.

    >Cheapernet installs had exactly the same problem except that the cable
    >was even more fragile.

    That was the main reason we never went to thinnet. That, and the vanity
    type who didn't like all those cables running around. Still have a
    secretary in mahogany row like that. We had cables tied to the underside
    of tables and desks so she wouldn't see them and complain.

    >>Terminators speced as 49.9 Ohms +/- 1 percent, at 0 to 20 MHz with the
    >>phase angle of the impedance not to exceed 5 degrees, which is relatively
    >>good.

    >Major overkill. I don't think any of my junk VSWR test equipment or
    >directional couplers are accurate enough to measure that at low
    >VSWR's. I would have to use a bridge to get the accuracy over the
    >frequency range.

    That's about a 1.01:1, or a 46 dB return loss. Directivity of less than
    40 dB would make it undetectable. Accurate measurements at that level are
    extremely difficult.
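
    For anyone following along at home, the conversion is only a couple of
    lines of Python:

        import math

        def return_loss_db(vswr):
            """Return loss corresponding to a given VSWR."""
            gamma = (vswr - 1.0) / (vswr + 1.0)
            return -20.0 * math.log10(gamma)

        print(return_loss_db(1.01))   # ~46 dB - the terminator spec above
        print(return_loss_db(1.28))   # ~18 dB - about what a mediocre coupler hides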

    >Sigh. You must work in a research or government environment.

    Two out of three

    >Nobody else I know could afford or has any interest in such details.
    >Carbon comp resistors are terrible at higher frequencies but are probably
    >just fine up to 30MHz.

    And number three - I had my 1st class phone with radar endorsement at 18.
    Of the RF type projects I've worked on, 1 was VLF, a large handful were
    VHF, but the vast majority were above 5 GHz. And yes, I did moonlight
    several times as a transmitter engineer at broadcast stations. Even a
    1.1:1 gets warm when you have 10 KW forward. ;-)

    >I vaguely recall tearing apart a 10base5 terminator and finding a single
    >51 ohm carbon comp resistor in parallel with something to get it down to
    >49 ohms. Resistor lead length inductance is probably the real killer.

    As you mention, 10 MHz isn't that critical. If you have ever looked at the
    applicable MIL specs, you'd know that plenty of other things are allowed
    and will significantly affect the observed resistance and reactance of
    resistors. The one-time thermal shock when soldering the resistor is
    just one example.

    >I still have some slotted coax and waveguide lines floating around. I
    >use them for skool demonstrations. These days, I use a network
    >analyzer. It's nice to have everything displayed on a single Smith
    >chart.

    When I was in this racket for keeps, I had three lines for 100 MHz to
    18 GHz. Below 100 MHz (which I didn't work at much), we had to make do
    with couplers.

    >I'm not going to try and justify my use of crude tests. I was looking
    >for continuity, not compliance with specifications. I don't have a
    >test lab available at the customers.

    ACK that - the best coupler I had access to for 10 MHz had a directivity
    of around 18 dB - meaning a 1.28:1 could show as perfect (or as a 1.66:1)
    depending on phase angles. And finding a sliding load that is accurate
    at that frequency...
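
    The uncertainty works out about like this - a small Python sketch,
    treating the directivity leakage as an error term that adds or cancels
    depending on phase:

        def vswr_window(directivity_db, true_gamma=0.0):
            """Best/worst indicated VSWR when the coupler's directivity leakage
            adds to or cancels the real reflection."""
            err = 10.0 ** (-directivity_db / 20.0)
            to_vswr = lambda g: (1.0 + g) / (1.0 - g)
            return to_vswr(abs(true_gamma - err)), to_vswr(true_gamma + err)

        print(vswr_window(18.0))          # a perfect load reads ~1.29:1 regardless of phase
        print(vswr_window(18.0, 0.126))   # a real ~1.29:1 can read anywhere from 1:1 to ~1.67:1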

    >I also can't do much if I find something wrong with the terminator except
    >replace it with another one. Also, knowing exactly why the specs are so
    >tight on the terminator was a big help in knowing what I could get away
    >with. I did find a few failures with the ohms-guesser method. Shorts
    >were common.

    We didn't do that much "new work", meaning installing a completely new
    coax. When we did, we were testing at every step, so it was a simple
    "what did you just change". Adding drops were much the same. You knew
    the link was satisfactory when you started, if it was borked when you
    were finished - it was probably what you did.

    >Defective transceivers were a problem.

    We had/have a reasonable failure rate.

    >117VAC on the coax shield (long story here) was found with a volts-guesser.

    That's what interns were for. ;-)

    >Incidentally, I still have about 4 Simpson 260 voltmeters in various
    >levels of functionality. However, these days, I use a DVM.

    I broke my last VOM about 8 years ago - been using DMMs ever since.

    >Have you ever had to do a massive restore from tape (while under
    >pressure)? I have and I can assure you that reliability is not one of
    >the better features of tape.

    How 'bout the time the registrar was deleting expired accounts, and
    fumble-fingered it. The speed at which a mistyped command executes is directly
    proportional to the amount of damage done. She didn't mess around just
    wiping an entire partition - NO!!! She got the whole d4mn drive. It only
    affected about 200 users, but everything they had done between the nightly
    backups and the little typing error was gone. Took nearly three hours to
    restore, and was only that quick because the drive she erased was one that
    had a full backup overnight, rather than something earlier in the week, and
    nightly incrementals since then. There were a bunch of severely unhappy
    users. The registrar quit that week, and went back to hustling tables
    at the TGIFridays.

    >HP was constantly screwing around with the tape format so that a random
    >version of their firmware would not necessarily read tapes made with a
    >different version.

    Good old UnSureStores. We used them for a while, but I think we're
    using Seagates now. "Not my problem"(tm)

    >I have about 20 lbs of adapters and connectors.

    No longer in the RF business, so my selection is much more limited. But
    I had every 50 Ohm connector known to man from HN down to SMC, including
    APC-7s. Most of my work was in N or SMA, and neither is rated very high
    on insertion cycles. We used the adapters, because when they got old,
    we could just toss 'em. It was a lot cheaper than replacing the connector
    on the test equipment.

    >In this case, BNC to F adapters are very common in CATV work to interface
    >to the test equipment. I carry a pile of them. There's almost no loss
    >but admittedly, they are a rather bad 75 ohm to 50 ohm match. At 10Mhz,
    >it's tolerable.

    There actually is a 75 Ohm BNC as well as a type N - Amphenol builds 'em.
    I did virtually zero work at 75 Ohm, except as matching sections. Even
    that was rare.

    >Conventional wisdom says that adapters are evil abomination and should
    >be avoided at all cost. It turns out that the adapter chain has about
    >the same loss as an equivalent length of RG-8/u. So much for the
    >lossy adapter theory.

    RG-8 at 450 MHz is within spec, but I'm starting to prefer RG-214 at that
    frequency for more consistent VSWRs. Some people actually tolerate RG-8 up
    to one or even three GHz. Not me. The double braid makes a substantial
    difference. Same for RG-142 vs. RG-58. The usual problem of multiple
    adapters is VSWR, rather than insertion loss. If they're quality adapters,
    neither the loss nor the VSWR should be that bad.

    >I use CAT5 for everything including video. About 5-7 cents per ft
    >which is cheaper than coax. I did a remodel in about 1995 and was
    >able to run conduit to various places. The size varies from 1/2" to
    >1" schedule 40.

    I wanted to do that. The boss here said no. I was actually planning
    on one inch everywhere. She didn't like the concept of me bashing holes
    in the wall to get it in.

    >I also have about 500ft of fiber in the pipes, which has yet to be
    >useful. If I had to do it today, I would use the blue flex plastic
    >wiring conduit instead of schedule 40.

    When I have to go to fiber, that's probably how I'm going to go. Right
    now, I've run tests with Gigabit copper, but I see a lot (10%) of packet
    errors.

    Old guy
  21. Archived from groups: (More info?)

    On Mon, 19 Sep 2005 21:43:09 -0500, ibuprofin@painkiller.example.tld
    (Moe Trin) wrote:

    >Don't have the problem. All of our overhead drops go to wall plates. We only
    >have one room with under-floor cables, and nearly all of those also go to
    >wall plates. The few that were not were knotted around the false floor
    >supports.

    How lavish. All of my early 10base5 and 10base2 installations were in
    "industrial" environments. That's where nobody cares about what it
    looks like as long as it works. Cables hanging from the overhead were
    standard. I don't recall ever running ethernet through the walls to
    wall plates. Everything was exposed, ugly, and messy. Works better
    that way.

    >We had cables ties to the underside
    >of tables and desks so she wouldn't see them and complain.

    Those are still around. I stock 4 boxes of CAT5, where the only
    difference is the jacket color. White, beige, grey, and blue. Well,
    actually there are two other boxes with plenum wire and a roll of red
    stranded CAT5 for making jumpers. I give the secretaries the choice
    of colors and invariably, they want some color I don't stock. Last
    week, I had to buy 1000 ft of green CAT5 just to satisfy one inferior
    decorator. Also, if they're into Feng Shui, run away. I've had to
    negotiate the exact location, direction, color, and termination on the
    basis of how it affects the flow of chi.

    >And number three - I had my 1st class phone with radar endorsement at 18.

    2nd phone at 18. 1st phone with radar at 19. (Took me a while to
    find the old licenses).

    >Of the RF type projects I've worked on, 1 was VLF, a large handfull were
    >VHF, but the vast majority were above 5 GHz.

    Mostly VHF/UHF land mobile for me. Some HF design. 9 years working
    for Intech Inc on marine radios. 2 years for Granger Assoc on
    microwave. Everything else was various small-business adventures and
    consulting. Got tired of RF and dived into computers, where I
    successfully repeated all my previous mistakes.

    >And yes, I did moonlight
    >several times as a transmitter engineer at broadcast stations. Even a
    >1.1:1 gets warm when you have 10 KW forward. ;-)

    At the first FM transmitter I baby-sat, there was a warning that if
    the VSWR meter (Bird) ever moved from the peg, the final would
    probably blow. At a different nightmare, I had to risk my life
    grabbing radios out of the building as blocks of ice the size of desks
    came down from the TV tower because the automagic VSWR meter relay had
    failed to start the de-icers. Wheeee...

    >ACK that - the best coupler I had access to for 10 MHz had a directivity
    >of around 18 dB - meaning a 1.28:1 could show as perfect (or as a 1.66:1)
    >depending on phase angles. And finding a sliding load that is accurate
    >at that frequency...

    I also have some "line stretchers". However, I'm a big fan of using
    bridges for precision measurements. While not exactly precision,
    that's how the Bird and MFJ antenna analyzers work. It's difficult to
    build a really broadband HF directional coupler that is flat from 1.6
    to 30 MHz. I did that for a VSWR sensor and power guesser in one of
    the HF radios I designed and had to use two different types of ferrite
    materials to get the bandwidth flat. No fun.

    >We didn't do that much "new work", meaning installing a completely new
    >coax. When we did, we were testing at every step, so it was a simple
    >"what did you just change".

    Gaak. I charge by the hour so everything was a rush job. Throw
    everything together and hope for the best. Test only when done. I
    don't like doing it like that, but the labour content would have been
    double if I had stopped to test every connection as I went along.

    >Adding drops was much the same. You knew
    >the link was satisfactory when you started; if it was borked when you
    >were finished, it was probably what you did.

    That's what I really liked about 10base2. If you made a mistake in
    the cabling, everyone crashed simultaneously. I never had a problem
    with nobody noticing a problem. The entire company would be up in
    arms screaming at me. I kinda missed that with 10baseT and switches,
    where I could totally screw up a segment, and nobody would notice.

    >>117VAC on the coax shield (long story here) was found with a volts-guesser.
    >That's what interns were for. ;-)

    They're not disposable. They would have died finding that one.
    10base2 run between two buildings. Coax grounded at both ends to AC
    power ground. We lost the ground (or neutral) in one of the buildings
    making the 10base2 coax the AC power ground return for the entire
    building.

    >How 'bout the time the registrar was deleting expired accounts, and
    >fumble fingered. The speed at which a mistyped command executes is directly
    >proportional to the amount of damage done. She didn't mess around just
    (...)

    Medical office database server was trashed when the vendor installed a
    program update while everyone was still logged in. Only option was to
    restore all the data. DDS-2 drive took about 6 hours (i.e. most of
    the day) to restore from backups. I had to sit there and accept
    verbal abuse from the entire medical staff because it was taking so
    long. I didn't have time to do a proper verify or CRC on the data
    files, so we just went live with what was restored. I was lucky that
    time, but not every time.

    I forgot to mention DLT. That worked fairly well with few errors.
    However, it was rather expensive and the drives needed constant
    cleaning and occasional rebuilds. In general, the tapes were
    transportable between different drives and error rate was very low.
    Recommended.

    >There actually is a 75 Ohm BNC as well as a type N - Amphenol builds 'em.
    >I did virtually zero work at 75 Ohm, except as matching sections. Even
    >that was rare.

    I did some work with CATV. Actually, I set up and built a bootleg CATV
    system in about 1975 around the neighborhood. At its peak, we had
    about 15 houses on the system. It got shut down because the local cable
    company got irate and claimed I was breaking numerous rules and
    ordinances. Long ago, I also worked for Subscription TV in Smog
    Angeles. The 75ohm BNC's are exactly the same dimensions as the 50ohm
    BNC's in the area around the center pin, PTFE sleeve, and shield
    fingers. However, the cable crimp diameters are different. Not
    exactly 75 ohms, but close enough for video.

    >The usual problem of multiple
    >adapters is VSWR, rather than insertion loss.

    My string of adapters showed almost no VSWR at 450 MHz. However, I
    cheated a bit. I avoided UHF connectors and right angle adapters. I
    also didn't use any non-characterized adapters such as phono. Another
    fun test was a long string of BNC "T" connectors in series. The short
    stubs cause a small amount of VSWR and possibly some leakage, but the
    effects were rather small. I think I had about 50 connectors in
    series.

    >If they're quality adapter,
    >neither the loss or VSWR should be that bad.

    They're not. The point was that the various pundits that proclaim
    that ALL adapters are lossy or evil are totally wrong. For most
    applications, adapters are fine.

    >I wanted to do that. The boss here said no. I was actually planning
    >on one inch everywhere. She didn't like the concept of me bashing holes
    >in the wall to get it in.

    So far, I haven't had to do drywall rework. The main vertical pipes
    and splice boxes are hidden behind removable wood panels. Another is
    under the stairs. Nail plates over all the fire breaks and some
    studs. However, FNT flex non-metallic tube (smurf tube) would have
    been much easier. However, the stuff is not up to code for many
    applications which might cause problems with the building inspectors.

    >I've run tests with Gigabit copper, but I see a lot (10%) of packet
    >errors.

    That's awful. I don't see anything that bad. Almost all my gigabit
    stuff is monitored with SNMP based managed switches. I just logged
    into one of my busier installations and noted less than 0.5% error
    rate for about 300 GBytes of traffic per day. Gigabit has FEC (forward
    error correction). If you're getting 10% uncorrected errors, then
    you've got a wiring problem. My guess is split pairs or rotten
    connections. I've also had problems where I ran the CAT5e (before
    overpriced CAT6) next to large metal objects (rack rails) or near
    magnetic interference sources (fluorescent ballasts, motors,
    ferroresonant xformers, etc). I also had problems with stranded wire
    patch cables and gigabit. Borrow a cable certifier and run some cable
    tests.
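
    For what it's worth, the arithmetic on the switch counters is nothing
    fancy - something like this, with values pulled from the usual IF-MIB
    counters (the numbers below are made up):

        def port_error_pct(if_in_errors, if_in_ucast_pkts, if_in_nucast_pkts=0):
            """Errored inbound frames as a percentage of everything the port saw.
            Values would come from IF-MIB::ifInErrors / ifInUcastPkts on the switch."""
            total = if_in_ucast_pkts + if_in_nucast_pkts
            return 100.0 * if_in_errors / total if total else 0.0

        # one day's worth of (made-up) counters off a busy gigabit port
        print(port_error_pct(if_in_errors=1200, if_in_ucast_pkts=250000000))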


    --
    Jeff Liebermann jeffl@comix.santa-cruz.ca.us
    150 Felker St #D http://www.LearnByDestroying.com
    Santa Cruz CA 95060 http://802.11junk.com
    Skype: JeffLiebermann AE6KS 831-336-2558
  22. Archived from groups: (More info?)

    In the Usenet newsgroup alt.internet.wireless, in article
    <49g0j1t135i54rvuthdm7e009525a1jhno@4ax.com>, Jeff Liebermann wrote:

    >All of my early 10base5 and 10base2 installations were in "industrial"
    >environments. That's where nobody cares about what it looks like as
    >long as it works. Cables hanging from the overhead were standard.
    >I don't recall ever running ethernet through the walls to wall plates.
    >Everything was exposed, ugly, and messy. Works better that way.

    Our industrial areas were limited, but even there the stuff was
    mainly out of sight. Tends to reduce accidental damage.

    >I stock 4 boxes of CAT5, where the only difference is the jacket color.
    >White, beige, grey, and blue.

    We didn't have that much CAT5, and it was almost always a gray, with
    some older stuff beige. We also had a limited supply of green, but they
    were cross-over cables only.

    >I give the secretaries the choice of colors and invariably, they want
    >some color I don't stock.

    We follow the Ford Mantra - any color you want as long as it's black^Wgray.

    >Also, if they're into Feng Shui, run away. I've had to negotiate the exact
    >location, direction, color, and termination on the basis of how it affects
    >the flow of chi.

    I don't know if it's hiring policy or what, but we don't seem to have any
    of those.

    >2nd phone at 18. 1st phone with radar at 19. (Took me a while to
    >find the old licenses).

    Got the 2nd + radar, then the 1st - about a month apart. Was in the
    service in Denver, and had to fit the tests in between official duties.

    >Got tired of RF and dived into computers, where I successfully repeated
    >all my previous mistakes.

    I was doing computers in the 1960s as part of the system I was tech-repping
    in Japan.

    >At the first FM transmitter I baby-sat, there was a warning that if
    >the VSWR meter (Bird) every moved from the peg, the final would
    >probably blow.

    I mainly did AM and UHF TV repeaters. I lucked out in not having many
    hardware problems, so it was mainly just keeping the logs up, and making
    sure the air filters got changed regularly. Chief engineer would peak
    the tuning quarterly, whether it needed it or not ;-)

    >At a different nightmare, I had to risk my life grabbing radios out of
    >the building as blocks of ice the size of desks came down from the TV
    >tower because the automagic VSWR meter relay had failed to start the
    >de-icers. Wheeee...

    I was raised in the North East, but starting in the mid-1960s, spent most
    of the time in warmer climes. Yes, I did have one occasion when I was
    driving in snow (Sunol pass on I-680 between Fremont and Livermore - a
    whole 980 feet above sea level), and there were occasions when I saw
    snow on the hills above San Jose, but I've mainly forgotten what snow
    and freezing drizzle is. Not missing it one bit, either.

    >However, I'm a big fan of using bridges for precision measurements. While
    >not exactly precision, that's how the Bird and MFJ antenna analyzers work.
    >It's difficult to build a really broadband HF directional coupler that is
    >flat from 1.6 to 30 MHz.

    More than an octave is always going to be difficult. Nearly all of the
    stuff I worked on was narrow band - 10% was wide band. Still stuff like
    the slotted lines could cover an appreciable range. The last FAA project
    I worked on was MLS, and we had a 1 percent bandwidth. Waveguide everywhere
    and dual directional couplers (loop) that had been hand tweaked to provide
    46+ dB directionality. I'd hate to be the poor sod who was assembling those.

    >Gaak. I charge by the hour so everything was a rush job. Throw
    >everything together and hope for the best. Test only when done. I
    >don't like doing it like that, but the labour content would have been
    >double if I had stopped to test every connection as I went along.

    I dunno - the initial install and termination was one step, and as soon
    as we had the first drop working, we had a system on the wire to ping.
    Our "standard" was 'ping -s 8192 -c 25' which results in 150 packets. If
    we saw a single drop (and don't forget that with a -s 8192, that results
    in six packets that need to be faultless), we'd try again. A second
    "failure" or more than a single drop in the initial ping was rework time.

    >That's what I really liked about 10base2. If you made a mistake in
    >the cabling, everyone crashed simultaneously. I never had a problem
    >with nobody noticing a problem. The entire company would be up in
    >arms screaming at me.

    A bad vampire install on 10Base5 was the same. Had that happen a couple
    of times - no thank you.

    >I kinda missed that with 10baseT and switches, where I could totally
    >screw up a segment, and nobody would notice.

    That's why we wanted that first drop running something to ping. As
    soon as the drop was wired, and the far end connected to the central
    point - we'd throw a test set on and watch the lights. This detected
    the common wiring errors. Then a lap doggy, and a ping test. If that
    worked, on to the next mess. If not - find and fix.

    >>That's what interns were for. ;-)

    >They're not disposable.

    <innocent look> They're not??? </innocent look>

    >They would have died finding that one. 10base2 run between two
    >buildings. Coax grounded at both ends to AC power ground. We lost
    >the ground (or neutral) in one of the buildings making the 10base2
    >coax the AC power ground return for the entire building.

    Wowser! The facility wire bender was trained by Edison or Tesla, or
    somebody, and he was rather dogmatic about wiring safety. Every
    training course I've ever encountered has demanded one and only one
    ground. Actually, between buildings, we ran twisted pair to
    isolation transformers on both ends, and used half repeaters. About
    1992, that got replaced with fiber.

    >Medical office database server was trashed when the vendor installed a
    >program update while everyone was still logged in.

    news://comp.risks/ - that's commonplace, and people NEVER seem to learn.

    >I had to sit there and accept verbal abuse from the entire medical staff
    >because it was taking so long.

    That's normal. The network can be down for ten minutes, and we hear about
    it from Corporate on the East Coast, even if it's midnight here.

    >I forgot to mention DLT. That worked fairly well with few errors.
    >However, it was rather expensive and the drives needed constant
    >cleaning and occasional rebuilds. In general, the tapes were
    >transportable between different drives and error rate was very low.

    Tape drive cleaning is a given here. I'm no longer involved, but with
    8mm drives, we were running about 7 hours a night per drive, so every
    Tuesday was cleaning day. Every drive in the place. Tuesday is also the
    day that tapes go/come off site, so the guy who collects tapes is the
    one who does the cleaning.

    >Recommended.

    At home, I've recently switched to redundant drive backups. I have two
    cheap systems, each with two 120 Gig hard drives that have multiple
    partitions - each one mirroring one of the drives on the servers.
    Hard drives are getting reasonably cheap - I paid US$50 each for these.
    The important stuff still gets burnt to CD, and lives off site.
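
    The mirroring itself doesn't need anything clever - something along
    these lines out of cron would do it (an rsync sketch with made-up
    hosts and paths, not necessarily what's actually running here):

        import subprocess

        # one job per server drive; destinations are partitions on the backup boxes
        # (hosts and paths are made up for illustration)
        JOBS = [
            ("server1:/export/home/", "/backup/server1_home/"),
            ("server2:/var/spool/mail/", "/backup/server2_mail/"),
        ]

        for src, dst in JOBS:
            # -a preserves ownership/times, --delete keeps the mirror exact
            subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)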

    >The 75ohm BNC's are exactly the same dimensions as the 50ohm
    >BNC's in the area around the center pin, PTFE sleeve, and shield
    >fingers. However, the cable crimp diameters are different.

    I don't use them, but my understanding was that the dielectrics were
    different in order to bring the impedance in line, while maintaining
    the same intermatability.

    >My string of adapters showed almost no VSWR at 450 MHz. However, I
    >cheated a bit. I avoided UHF connectors and right angle adapters.

    UHFs were never spec'ed for impedance, although they've been used
    in RF since before WW2. Right Angles? Yeah, they've never been able
    to get a decent VSWR. There was a company (might have been Omni Spectra)
    that used to advertise pre-built cables of 141 semi-rigid with
    impossibly short radius turns that fit the same space as a 90 degree
    connector, but had VSWRs down around (1.02 + 0.007 x f[GHz]):1, which was
    little worse than the guarantee of an SMA straight plug. I've also seen
    it done with quarter inch hard line in a type N, but vaguely remember
    that the cost was astronomical.

    >Another fun test was a long string of BNC "T" connectors in series. The
    >short stubs cause a small amount of VSWR and possibly some leakage, but
    >the effects were rather small. I think I had about 50 connectors in
    >series.

    As long as you're low enough in frequency, that's true. Things start to
    go south on you above 3.0 GHz, when that 16 mm stub becomes a significant
    part of a wavelength.
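
    Putting numbers on it (free-space wavelength, ignoring the dielectric
    inside the tee):

        def stub_degrees(stub_mm, freq_ghz):
            """Electrical length of the tee's stub as a fraction of a wavelength."""
            wavelength_mm = 300.0 / freq_ghz    # free-space wavelength in mm
            return 360.0 * stub_mm / wavelength_mm

        print(stub_degrees(16, 0.45))   # ~9 degrees at 450 MHz - a lumped blob
        print(stub_degrees(16, 3.0))    # ~58 degrees at 3 GHz - no longer negligible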

    >So far, I haven't had to do drywall rework. The main vertical pipes
    >and splice boxes are hidden behind removeable wood panels. Another is
    >under the stairs.

    Single floor slab, and relatively low attic space (most rooms have
    cathedral ceilings). The boss isn't interested in extraneous wood trim
    strips - yeah, I thought of that too. I did hide some of the runs to
    points on the outside walls behind the base molding.

    >However, FNT flex non-metallic tube (smurf tube) would have been much
    >easier. However, the stuff is not up to code for many applications
    >which might cause problems with the building inspectors.

    If we ever build a new house, there will be some added features. If
    you look back, it wasn't that long ago that phone wires went to one
    (at most two) places in the house, and the TV lead came through the
    window. My neighbor, who is a retired school principal, still can't
    figure out why we need six computers in the den (neither can I). The
    other neighbor is a CIO with a major bank, and she can't understand
    why there are three networks in the house. (1 is company, 1 is the
    normal one, and the third is for play and backups.) One of the guys
    I work with just built a new house. Conduit in every room with two
    fibers - the "den" has six drops. The patch room is an air conditioned
    closet in the garage.

    >>I've run tests with Gigabit copper, but I see a lot (10%) of packet
    >>errors.

    >That's awful. I don't see anything that bad.

    Only tried once - I had to borrow Gigabit gear, as I haven't seen the
    need to buy it yet. Our network isn't that busy.

    >If you're getting 10% uncorrected errors, then you've got a wiring
    >problem. My guess is split pairs or rotten connections. I've also
    >had problems where I ran the CAT5e (before overpriced CAT6) next to
    >large metal objects (rack rails) or near magnetic interference sources
    >(fluorescent ballasts, motors, ferroresonant xformers, etc).

    Magnetic - no, but the air ducts are steel, and bonded. They're also
    huge.

    >I also had problems with stranded wire patch cables and gigabit.

    I suspect some of the problems are the patch panel, which is in a
    closet along with the firewalls and routers. I'm _reasonably_ sure
    I've got the pairing correct, and did take care to keep the pairs
    twisted as close to the terminations as possible (less than an inch),
    but I'm not sure that punchdown blocks go that well with Gigabit.
    The 10BaseT performance is OK - virtually nil errors.

    Old guy