Gigabit Ethernet: Dude, Where's My Bandwidth?

Network Speed Limiting Factors

While it might be relatively simple to collect the components to create a gigabit network, getting it to work at maximum speed is more complex. The factors that can cause network slowdowns are numerous, but as we'll discover, it primarily comes down to how fast the hard drives can get data to the network controller.

The first limitation we should consider is the gigabit network controller’s interface with the system. If you're using a controller that resides on the older PCI bus, the amount of data throughput theoretically available is 133 MB/s. While this seems enough for Gigabit Ethernet's 125 MB/s requirement, remember that the PCI bus bandwidth is shared throughout the whole system. Every add-in PCI card and many system resources will be sharing that bandwidth, limiting the amount available to the network card. On newer PCI Express (PCIe) systems, this should be a non-issue, as every PCIe lane has at least 250 MB/s exclusively available to it.
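As a quick sanity check on those numbers, here is the arithmetic as a short Python sketch. The figures are the theoretical ratings quoted above, not measurements:

    # Theoretical bandwidth budget for a gigabit network controller (MB/s).
    GBE_NEED = 1000 / 8    # 1 Gb/s = 125 MB/s of payload capacity
    PCI_BUS = 133          # classic 32-bit/33 MHz PCI, shared by all PCI devices
    PCIE_LANE = 250        # one PCIe 1.x lane, dedicated, per direction

    print(f"Gigabit Ethernet can demand up to {GBE_NEED:.0f} MB/s")

    # On PCI, the NIC only gets whatever the other devices leave behind.
    for other_traffic in (0, 30, 60):
        available = PCI_BUS - other_traffic
        verdict = "OK" if available >= GBE_NEED else "bottleneck"
        print(f"PCI with {other_traffic} MB/s of other traffic: {available} MB/s left -> {verdict}")

    # A PCIe NIC owns its lane outright.
    print(f"PCIe x1: {PCIE_LANE} MB/s dedicated -> {'OK' if PCIE_LANE >= GBE_NEED else 'bottleneck'}")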

Another aspect of the network that's often suspect is the cabling. It is often claimed that speeds will suffer if the network cable is run close to interference sources such as power cables and cords. Longer cable runs can also be problematic, as copper Cat 5e cables are only certified for 100 meter (about 328 ft.) lengths.

There are those who advocate using the newer Cat 6 standard instead of Cat 5e-class cable. While some of these claims are a little hard to quantify, we can certainly test how they might affect a small home gigabit network.

Let's not forget the operating system, either. While they're probably not being used much for gigabit networking these days, Windows 98 SE and older operating systems won't see a benefit from gigabit Ethernet, as their TCP/IP stacks can barely use a 100 megabit connection to its full potential. Windows 2000 and more recent versions of Windows are fair game, although you might have more tweaking to do in older operating systems to coax them to perform at their maximum levels. We'll be using Windows Vista 32-bit for our tests; as much as Vista has a bad reputation for some things, it offers well-optimized gigabit networking out of the box.
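A good chunk of that tweaking is TCP window and buffer sizing. As a small illustration, and strictly a generic sketch using Python's standard socket module rather than a Vista-specific recipe, an application can request larger socket buffers itself; whether the full amount is granted is up to the operating system's stack:

    import socket

    # Ask the OS for 1 MB socket buffers; defaults tuned for the 100 Mb/s era
    # can cap the effective TCP window well below what gigabit can carry.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)

    # The stack reports what it actually granted (some systems round or double it).
    print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))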

Next, let's consider the hard drives. Even the older IDE interface sporting the ATA/133 specification should be able to support a theoretical 133 MB/s of data transfer, and the newer SATA specification should breeze past the requirement, providing at least 1.5 Gb/s (150 MB/s) of bandwidth. But while the cables and controllers might be able to handle the data, the hard drives themselves might not.

Consider that a typical, modern 500 GB drive will likely sustain data transfer rates in the neighborhood of 65 MB/s. While it might read faster at the start of the drive, on the outer tracks, throughput drops as the transfer progresses toward the inner tracks; data located at the end of the drive might read closer to 45 MB/s.
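Because a network copy can only move as fast as its slowest stage, those drive numbers matter more than the link speed. Here is a minimal model in Python, using the rough drive figures above as assumptions rather than benchmarks:

    # A network copy is capped by its slowest stage:
    # source disk read -> network -> destination disk write.
    def effective_rate(read_mb_s, network_mb_s, write_mb_s):
        return min(read_mb_s, network_mb_s, write_mb_s)

    NETWORK = 125  # theoretical gigabit Ethernet payload limit, MB/s

    # Assumed sustained rates for a typical 500 GB drive (outer vs. inner tracks).
    for label, disk in (("start of drive", 65), ("end of drive", 45)):
        rate = effective_rate(disk, NETWORK, disk)
        seconds = 4096 / rate  # time to move a 4 GB (4,096 MB) file
        print(f"{label}: ~{rate} MB/s, so a 4 GB file takes ~{seconds:.0f} s")

In both cases the gigabit link sits mostly idle; the drives set the pace.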

At this point, we're getting an idea of where our bottleneck probably comes from. What can we do about it? Let's run some tests and see how close we can get to our network's theoretical 125 MB/s limit.
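For a disk-free baseline, a crude sender/receiver pair along these lines will measure raw TCP throughput between two machines. This is an illustrative sketch, not the exact tool used in our tests; the payload comes straight from memory, so it approximates a RAM-to-RAM transfer:

    # throughput_test.py -- run "receive" on one machine, then
    # "send <receiver-ip>" on the other. Data never touches a disk.
    import socket, sys, time

    PORT = 5001
    CHUNK = b"\x00" * 65536          # 64 KB per send/recv call
    TOTAL = 1024 * 1024 * 1024       # move 1 GB

    def receive():
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        got, start = 0, time.time()
        while got < TOTAL:
            data = conn.recv(len(CHUNK))
            if not data:
                break
            got += len(data)
        secs = time.time() - start
        print(f"{got / 1e6:.1f} MB in {secs:.1f} s = {got / 1e6 / secs:.1f} MB/s")

    def send(host):
        cli = socket.create_connection((host, PORT))
        sent = 0
        while sent < TOTAL:
            cli.sendall(CHUNK)
            sent += len(CHUNK)
        cli.close()

    if __name__ == "__main__":
        send(sys.argv[2]) if sys.argv[1] == "send" else receive()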

Comments
  • gwiz1987
    why is the RAM-to-RAM network max speed on the graph 111.2 when u state 111.4? typo?
    -5
  • drtebi
    Interesting article, thank you. I wonder how a hardware based RAID 5 would perform on a gigabit network compared to a RAID 1?
    5
  • Anonymous
    Hello

    Thanks for the article. But I would like to ask how the transfer speed is measured. If it is just (size of the file)/(time needed for the transfer), you are probably consuming all the bandwidth, because you have to count all the control parts of the data packet (Ethernet header, IP header, TCP header...)

    Blake
    5
  • jankee
    The article does not make any sense and was created by a rookie. Remember, you will not see a big difference when transferring a small amount of data, due to transfer negotiation on the network. Try to transfer an 8 GB file or folder across; then you will see the difference. It's the same concept as racing a Honda Civic against a Ferrari over a distance of just 20 feet.

    Hope this is cleared out.
    -18
  • spectrewind
    Don Woligroski has some incorrect information, which invalidates this whole article. He should be writing about hard drives and mainboard bus information transfers. This article is entirely misleading.

    For example: "Cat 5e cables are only certified for 100 ft. lengths"
    This is incorrect. 100 meters (or 328 feet) maximum official segment length.

    Did I miss the section on MTU and data frame sizes? Segment? Jumbo frames? 1500 vs. 9000 for consumer devices? Fragmentation? TIA/EIA? These words and terms should have occurred in this article, but were omitted.

    Worthless writing. THG *used* to be better than this.
    12
  • IronRyan21
    Quote:
    There is a common misconception out there that gigabit networks require Category 5e class cable, but actually, even the older Cat 5 cable is gigabit-capable.


    Really? I thought Cat 5 wasn't gigabit-capable? In fact, I thought Cat 6 was the only way to go gigabit.
    -15
  • cg0def
    why didn't you test SSD performance? It's quite a hot topic and I'm sure a lot of people would like to know if it will in fact improve network performance. I can venture a guess but it'll be entirely theoretical.
    12
  • MartenKL
    A Gbit is actually 10^9 bits per second, i.e. about 119 MiB/s.
    8
  • flinxsl
    do you have any engineers on your staff that understand how this stuff works?? when you transfer some bits of data over a network, you don't just shoot the bits directly, they are sent in something called packets. Each packet contains control bits as overhead, which count toward the 125 Mbps limit, but don't count as data bits.

    11% loss due to negotiation and overhead on a network link is about ballpark for a home test.
    19
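flinxsl's overhead argument is easy to put a rough number on. With standard 1500-byte frames, TCP/IP headers plus Ethernet framing cost about 5% of the line rate; the rest of the gap seen in real-world copies comes from ACK traffic, software overhead, and the disks. A quick sketch of the arithmetic, assuming standard header sizes and no TCP options:

    # Rough goodput ceiling for gigabit Ethernet with 1500-byte frames.
    LINE_RATE = 125_000_000      # 1 Gb/s in bytes per second

    MTU = 1500                   # IP packet carried by each frame
    IP_TCP = 20 + 20             # IPv4 + TCP headers, no options
    ETHERNET = 14 + 4            # Ethernet header + frame check sequence
    PREAMBLE = 8                 # preamble + start-of-frame delimiter
    GAP = 12                     # mandatory inter-frame gap

    payload = MTU - IP_TCP                      # application bytes per frame
    on_wire = MTU + ETHERNET + PREAMBLE + GAP   # line time consumed per frame

    efficiency = payload / on_wire
    print(f"efficiency: {efficiency:.1%}")                              # ~94.9%
    print(f"goodput ceiling: {LINE_RATE * efficiency / 1e6:.1f} MB/s")  # ~118.7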
  • jankee
    After carefully reading the article, I believe this is not a tech review, just a concern from a newbie, because he does not understand all the external factors of data transfer. His simple thought is that 1000 is ten times 100 Mb/s, so he expects it to be ten times faster.

    Anyway, many different factors will affect the transfer speed. The most accurate test needs to use a RAM drive and powerful machines to illuminate the machine bottleneck factor.
    6
  • jankee
    Correction: "eliminate" (sorry)
    0
  • MartenKL
    Cat 5e is actually a newer standard than the aging Cat 6 standard. Cat 6a, however, is a relatively new standard that I would recommend; it supports 10 Gb/s networks as well.
    8
  • Anonymous
    First of all, gigabit Ethernet uses entirely different addressing and encoding from 100-meg, and the overhead is one heck of a lot greater than that.
    To start, there's the 8b/10b encoding, so an 8-bit byte is encoded as a 10-bit unit. Then there's the concept of fixed frame sizes, where it might be possible that a TCP/IP packet spans two frames, filling 100% of the first and 1% of the second, which means 50.5% efficiency. Finally, every frame is payload only in part; the rest is taken up by header information, footer, and CRC. It's not much, perhaps about 5% of the frame, but it can get noticeable.
    First, you have to divide by 10, not by 8, to get the speed in bytes/second (i.e. 100 MB/s, not 125 MB/s).
    Second, if you transmit a lot of inefficient frames (networking programs aren't exactly frugal about bandwidth when they have gigabit Ethernet, and next to none are actually optimized for it in any way), you might lose up to half of the bandwidth.
    Third, when you factor in the frame-level overhead, you might end up with maybe 40-45 MB/s of the promised 100 MB/s...

    Fortunately, a lot of these issues can be resolved by optimizing software and firmware to take advantage of the available bandwidth and idiosyncrasies of gigabit Ethernet.
    0
  • MartenKL
    OK, my bad: this article is not for Tom's Hardware; it is not meant for people who understand networking, or maybe even computers. Pass this article on to another site with more "normal" visitors.

    Testing with a different file for RAM-to-RAM than was used in the other tests really shows the errors in these tests.
    1
  • cyberkuberiah
    this is why i am a regular reader here at tom's.
    0
  • apache_lives
    What I want to see is the effect of jumbo frame packets and HDD allocation unit size (or stripe size) on network transfers, since the packets transfer differently across the network cable - benchmarks?
    -1
  • SpadeM
    For all tech people out there, the title of the article should have been a dead give away
    Quote:
    Gigabit Ethernet: Dude, Where's My Bandwidth?
    about the technical aspect of this piece. Sure they could have used a server platform with a server os, SSD's and ram disks, and why not some tech language what most people don't understand. But, as the titles states in a very suggestive way, this article is for people that have simple questions and seek simple answers.

    I'm ok with this piece, it isn't and injustice or it isn't wrong in any way IF you look at who it is addressed to. Remember the KISS rule guys.
    19
  • Psycomo
    Technically you could see a 10-fold increase in throughput. A 100 Mb/s network is capable of 12.5 MB/s of throughput; a 1 Gb/s network has a throughput of 125 MB/s. So 100 Mb/s (12.5 MB/s) x 10 = 1 Gb/s (125 MB/s). Of course, this assumes an optimal or perfect network environment, one with a single user and a SATA-based SSD that can push that amount of data on the read/write. Your typical magnetic HDD running at 7,200 RPM will be hard-pressed to push more than 100 MB/s on the write, which is going to be your data bottleneck, as you can only push data as fast as it can be written. Of course, even at that speed you are around an 8x improvement in throughput. You will also lose bandwidth to overhead alone, roughly 10% (on this I could be mistaken). Again, this all assumes a perfect test environment.
    -3
  • thexder1
    First of all, I would like to point out to IronRyan21 that the actual gigabit Ethernet standard is for Cat 5. Second, I would have preferred to see more thorough tests done, as the "50 ft" of cable used is a very short run for Ethernet, since the maximum length is actually 100 meters (328 ft), as spectrewind pointed out. In a home you probably will not see runs much over 50 ft, but the setup used in the article was actually 2 x 25 ft runs. If the tests were done closer to the maximum length, you would see a much bigger difference when changes were made to how the cable was run and to the cable itself. I would want the article to be redone with runs of 50, 100, 200, and 300 ft to see if the conclusions were correct, or if they only apply to very short runs. I would also have liked to see testing on the throughput difference using jumbo frames, as well as different file sizes.

    I think the RAM disk was a good idea for a maximum real-world throughput test, but that was the only good thing about the article that I can see.
    6
  • zetone
    @all complaining about the technical aspects of this article: I think the target audience is NOT network administrators.

    On the other hand, it is worth mentioning that transfer speed over a gigabit network from disk to disk depends on the size of the files transferred and the number of files transferred.

    It's one thing to copy a 4 GB file over the network. It is totally different to copy 40k+ files totaling 4 GB. In the latter scenario, performance takes another hit due to increased I/O overhead at the disk level.
    3
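zetone's point about file counts can be made concrete with a toy model. The 10 ms per-file cost below is purely illustrative; real per-file overhead depends on the OS, the protocol, and the disks, but the shape of the result holds:

    # Toy model: per-file overhead (open/close, metadata, seeks) dominates
    # when a fixed amount of data is split across many small files.
    def copy_time(total_mb, n_files, rate_mb_s=100, per_file_s=0.010):
        return total_mb / rate_mb_s + n_files * per_file_s

    print(f"one 4 GB file:      {copy_time(4096, 1):6.1f} s")
    print(f"40,000 small files: {copy_time(4096, 40000):6.1f} s")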