Gigabit Ethernet: Dude, Where's My Bandwidth?

Network Tests: Are We Getting Gigabit Performance?

Let’s proceed with our first test, where we send a file from the client’s C: drive to the server’s C: drive:

We’re seeing something that mirrors our expectations. The gigabit network, which is capable of a theoretical 125 MB/s, is sending data from the client’s C: drive as fast as that drive can read it, probably somewhere in the neighborhood of 65 MB/s. However, as we demonstrated previously, the server’s C: drive can only write at about 40 MB/s.
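In other words, the effective rate of a network copy is simply the minimum of the source drive's read speed, the link speed, and the destination drive's write speed. A minimal sketch of that reasoning, using the article's rough figures (these are approximations, not exact measurements):

```python
# Effective throughput of a network file copy is bounded by the
# slowest stage in the pipeline: source read, network link, dest write.
# Figures (MB/s) are the article's rough measurements.

def effective_rate(read_mb_s, link_mb_s, write_mb_s):
    """Return the bottleneck rate for a copy over the network."""
    return min(read_mb_s, link_mb_s, write_mb_s)

# Client C: drive reads ~65 MB/s, gigabit link ~125 MB/s,
# server C: drive writes ~40 MB/s -> the server drive limits the copy.
print(effective_rate(65, 125, 40))  # -> 40
```

Whichever stage is slowest sets the pace; upgrading any other stage changes nothing.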

Now let's copy a file from the speedy server RAID array to the client computer’s C: drive:

Once again, this is just like we called it. We know from our tests that the client computer’s C: drive can write this file at about 70 MB/s under ideal conditions, while the gigabit network is delivering performance very close to this speed.
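These per-copy figures are nothing more than file size divided by wall-clock time, so they are easy to reproduce yourself. A hedged sketch (the paths in the commented example are placeholders, not the article's actual setup):

```python
import os
import shutil
import time

def copy_throughput(src, dst):
    """Copy src to dst and return throughput in MB/s (decimal)."""
    size_bytes = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_bytes / elapsed / 1_000_000

# Example (placeholder paths - point these at your own drives/shares):
# rate = copy_throughput(r"C:\testfile.bin", r"\\server\share\testfile.bin")
# print(f"{rate:.1f} MB/s")
```

Note that a single short copy is noisy; averaging several runs with a large file gives steadier numbers.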

Unfortunately, none of these numbers have come close to gigabit's theoretical maximum throughput of 125 MB/s. Is there a way we can test the network’s maximum speed? Of course there is, but not in a real-world situation. What we’re going to have to do is make a direct memory-to-memory transfer over the network so that we bypass any hard-drive bandwidth limitations.
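The 125 MB/s figure is just unit conversion: gigabit Ethernet signals 10^9 bits per second, and dividing by 8 bits per byte gives the byte rate. A quick sanity check (note the decimal-vs-binary megabyte distinction, which explains why some sources quote 119 instead of 125):

```python
# Gigabit Ethernet: 10**9 bits per second on the wire.
bits_per_second = 10**9
bytes_per_second = bits_per_second / 8          # 8 bits per byte
mb_per_second = bytes_per_second / 1_000_000    # decimal megabytes (MB)
mib_per_second = bytes_per_second / 2**20       # binary mebibytes (MiB)

print(mb_per_second)             # -> 125.0 (MB/s, decimal)
print(round(mib_per_second, 1))  # -> 119.2 (MiB/s, binary)
```

Both numbers describe the same wire speed; they only differ in which "megabyte" is used.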

To do this, we’re going to make a 1 GB RAM drive on both the client and server PCs, and then transfer a 1 GB file between these RAM drives over the network. Since even the slowest DDR2 RAM should be able to handle over 3,000 MB/s of data, the only limiting factor should be how fast our network can run:

Lovely! We’re seeing a 111.4 MB/s maximum speed over our gigabit network, which is very close to a gigabit network’s theoretical 125 MB/s. This is a great result and is nothing to complain about, as real-world bandwidth will likely never hit an ideal maximum speed due to network overhead.
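Part of the gap between 111.4 MB/s and 125 MB/s is protocol overhead that never shows up as file data. A rough best-case accounting for standard 1,500-byte Ethernet frames carrying TCP (header sizes per the Ethernet, IPv4, and TCP specifications; this sketch ignores ACK traffic, TCP options, and retransmissions, so real-world results land a bit lower):

```python
# Per-frame overhead on the wire for a full-size TCP segment:
PREAMBLE = 8         # preamble + start-of-frame delimiter
ETH_HEADER = 14      # dest MAC, src MAC, EtherType
FCS = 4              # frame check sequence
INTERFRAME_GAP = 12  # mandatory idle time, counts against the wire rate
IP_HEADER = 20       # IPv4, no options
TCP_HEADER = 20      # TCP, no options

MTU = 1500                               # standard Ethernet payload size
payload = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of actual file data
on_wire = MTU + PREAMBLE + ETH_HEADER + FCS + INTERFRAME_GAP  # 1538 bytes

efficiency = payload / on_wire
print(round(efficiency * 125, 1))  # best-case goodput in MB/s -> 118.7
```

So even a perfect TCP transfer over standard frames tops out near 118.7 MB/s, which makes the measured 111.4 MB/s look quite reasonable.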

So now we've proven it conclusively: hard drives are the bottleneck in file transfers over a gigabit network, limiting transfer rates to the speed of the slowest drive involved. With this big question answered, we wanted to do a few network-cabling tests, just to satisfy our curiosity. Is network cabling a factor that might keep us from network speeds closer to the theoretical limit?

  • gwiz1987
    why is the RAM-to-RAM network max speed on the graph 111.2 when u state 111.4? typo?
    Reply
  • drtebi
    Interesting article, thank you. I wonder how a hardware based RAID 5 would perform on a gigabit network compared to a RAID 1?
    Reply
  • Hello

Thanks for the article. But I would like to ask how the transfer speed is measured. If it is just (size of the file)/(time needed for the transfer), you are probably consuming all the bandwidth, because you have to count all the control parts of the data packet (Ethernet header, IP header, TCP header...)

    Blake
    Reply
  • jankee
The article does not make any sense and was created by a rookie. Remember, you will not see a big difference when transferring a small amount of data, due to transfer negotiation on the network. Try to transfer an 8 GB file or folder across; then you'll see the difference. It's the same as racing a Honda Civic against a Ferrari over a distance of just 20 feet.

    Hope this clears it up.
    Reply
  • spectrewind
    Don Woligroski has some incorrect information, which invalidates this whole article. He should be writing about hard drives and mainboard bus information transfers. This article is entirely misleading.

    For example: "Cat 5e cables are only certified for 100 ft. lengths"
    This is incorrect. 100 meters (or 328 feet) maximum official segment length.

Did I miss the section on MTU and data frame sizes? Segments? Jumbo frames? 1500 vs. 9000 for consumer devices? Fragmentation? TIA/EIA? These words and terms should have occurred in this article, but were omitted.

    Worthless writing. THG *used* to be better than this.
    Reply
  • IronRyan21
    There is a common misconception out there that gigabit networks require Category 5e class cable, but actually, even the older Cat 5 cable is gigabit-capable.

    Really? I thought Cat 5 wasn't gigabit capable? In fact cat 6 was the only way to go gigabit.
    Reply
  • cg0def
    why didn't you test SSD performance? It's quite a hot topic and I'm sure a lot of people would like to know if it will in fact improve network performance. I can venture a guess but it'll be entirely theoretical.
    Reply
  • MartenKL
Gbit is actually 10^9 bits per second, i.e., 125 MB/s decimal, or about 119 MiB/s.
    Reply
  • flinxsl
do you have any engineers on your staff who understand how this stuff works?? When you transfer data over a network, you don't just shoot the bits directly; they are sent in something called packets. Each packet contains control bits as overhead, which count toward the 125 MB/s limit but don't count as data bits.

    11% loss due to negotiation and overhead on a network link is in the ballpark for a home test.
    Reply
  • jankee
After carefully reading the article, I believe this is not a tech review, just a concern from a newbie, because he does not understand the external factors of data transfer. His simple thought is that 1000 is ten times 100 Mb/s, so he expects it to be ten times faster.

    Anyway, many different factors will affect the transfer speed. The most accurate test needs to use a RAM drive and powerful machines, to eliminate the machine-bottleneck factor.

    Reply