Gigabit Ethernet: Dude, Where's My Bandwidth?
Page 1: Introduction
Page 2: What Makes A Gigabit Network? Cards, Cables, And Hubs
Page 3: First Test: How Fast Is Gigabit Supposed To Be, Anyway?
Page 4: Network Speed Limiting Factors
Page 5: Test Systems
Page 6: Network Tests: Setting Our Expectations
Page 7: Network Tests: Are We Getting Gigabit Performance?
Page 8: Testing Cabling Factors
Page 9: Conclusion
First Test: How Fast Is Gigabit Supposed To Be, Anyway?
How fast is a gigabit? If you hear the prefix "giga" and assume it means 1,000 megabytes, you might figure that a gigabit network should deliver 1,000 megabytes per second. If this sounds like a reasonable assumption to you, you're not alone. Unfortunately, you're going to be fairly disappointed.
So what is a gigabit? It is 1,000 megabits, not 1,000 megabytes. There are eight bits in a single byte, so let's do the math: 1,000,000,000 bits divided by 8 bits per byte = 125,000,000 bytes. Since there are about a million bytes in a megabyte, a gigabit network should be capable of a theoretical maximum transfer rate of about 125 MB/s.
While 125 MB/s might not sound as impressive as the word gigabit, think about it: a network running at this speed should be able to theoretically transfer a gigabyte of data in a mere eight seconds. A 10 GB archive could be transferred in only a minute and 20 seconds. This speed is incredible, and if you need a reference point, just recall how long it took the last time you moved a gigabyte of data back before USB keys were as fast as they are today.
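To make the arithmetic concrete, here is a minimal sketch of the calculation in plain Python, using the same decimal units as above:

    # Theoretical ceiling of gigabit Ethernet, using decimal (SI) units.
    bits_per_second = 1_000_000_000          # 1 gigabit per second
    bytes_per_second = bits_per_second / 8   # 125,000,000 bytes/s = 125 MB/s

    print(f"Theoretical maximum: {bytes_per_second / 1_000_000:.0f} MB/s")

    # Best-case time to move some familiar payloads at that rate.
    for size_gb in (1, 10):
        seconds = size_gb * 1_000_000_000 / bytes_per_second
        print(f"{size_gb} GB in about {seconds:.0f} seconds")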
Armed with this expectation, I’ll move a file over my gigabit network and check the speed to see how close it comes to 125 MB/s. We’re not using a network of wonder machines here, but we have a real-world home network with some older but decent technology.

Copying a 4.3 GB file from one of these PCs to another five different times resulted in a 35.8 MB/s average. This is only about 30% as fast as a gigabit network’s theoretical ceiling of 125 MB/s.
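As a quick sanity check on those figures, here is a rough sketch using the 4.3 GB file size and 35.8 MB/s average reported above (decimal units assumed throughout):

    # Comparing the measured average against the theoretical ceiling.
    file_size_mb = 4.3 * 1000       # 4.3 GB expressed in decimal megabytes
    measured_mb_s = 35.8            # average of the five copies above
    theoretical_mb_s = 125          # gigabit ceiling from the math above

    print(f"Fraction of the ceiling achieved: {measured_mb_s / theoretical_mb_s:.0%}")  # ~29%
    print(f"Actual copy time:    ~{file_size_mb / measured_mb_s:.0f} s")                # ~120 s
    print(f"Best-case copy time: ~{file_size_mb / theoretical_mb_s:.0f} s")             # ~34 s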
What’s the problem?
Thanks for the article. But I would like to ask how the transfer speed was measured. If it is just (size of the file) / (time needed for the transfer), you are probably consuming more bandwidth than that number suggests, because you also have to count the control portion of every data packet (Ethernet header, IP header, TCP header...).
Blake
Hope this gets cleared up.
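To put rough numbers on Blake's point, here is a minimal sketch (assuming a standard 1500-byte MTU and IPv4/TCP headers without options) of how much of the wire rate is left for actual file data once per-frame overhead is counted:

    # Per-frame overhead of a full-size TCP segment on gigabit Ethernet
    # (assumed: 1500-byte MTU, IPv4/TCP with no options, no jumbo frames).
    MTU = 1500                      # Ethernet payload per frame, in bytes
    IP_HEADER = 20                  # IPv4 header
    TCP_HEADER = 20                 # TCP header
    ETHERNET_OVERHEAD = 14 + 4      # Ethernet header + frame check sequence
    PREAMBLE_AND_GAP = 8 + 12       # preamble/SFD + minimum inter-frame gap

    file_bytes_per_frame = MTU - IP_HEADER - TCP_HEADER                  # 1,460
    wire_bytes_per_frame = MTU + ETHERNET_OVERHEAD + PREAMBLE_AND_GAP    # 1,538

    efficiency = file_bytes_per_frame / wire_bytes_per_frame
    print(f"Share of the wire carrying file data: {efficiency:.1%}")     # ~94.9%
    print(f"Best-case file transfer rate: {125 * efficiency:.1f} MB/s")  # ~118.7 MB/s

In other words, under these assumptions protocol overhead costs only a few MB/s, so it cannot by itself explain a drop from 125 MB/s to roughly 36 MB/s.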
For example: "Cat 5e cables are only certified for 100 ft. lengths"
This is incorrect. The official maximum segment length is 100 meters (or 328 feet).
Did I miss the section on MTU and data frame sizes? Segments? Jumbo frames? 1500 vs. 9000 bytes for consumer devices? Fragmentation? TIA/EIA? These terms should have appeared in this article, but they were omitted.
Worthless writing. THG *used* to be better than this.
Really? I thought Cat 5 wasn't gigabit capable? In fact, I thought Cat 6 was the only way to go gigabit.
An 11% loss due to negotiation and overhead on a network link is in the ballpark for a home test.
Anyway, many different factors will affect the transfer speed. The most accurate test needs to use a RAM drive and powerful machines to eliminate the machines themselves as a bottleneck.
First of all, there's the 8b/10b encoding, so an 8-bit byte is encoded as a 10-bit unit on the wire. Then there's the matter of frame sizes: a TCP/IP packet can span two frames, filling 100% of the first and only 1% of the second, which works out to 50.5% efficiency. Third, every frame is only partly payload; the rest is taken up by header information, footer, and CRC. It's not much, perhaps about 5% of the frame, but it can become noticeable.
First, you would have to divide by 10, not by 8, to get the speed in bytes per second (i.e., 100 MB/s, not 125 MB/s).
Second, if you transmit a lot of inefficient frames (networking programs aren't exactly frugal with bandwidth when they have Gigabit Ethernet, and next to none are actually optimized for it in any way), you might lose up to half of the bandwidth.
Third, when you factor in the frame-level overhead, you might end up with maybe 40-45 MB/s of the promised 100 MB/s...
Fortunately, a lot of these issues can be resolved by optimizing software and firmware to take advantage of the available bandwidth and the idiosyncrasies of Gigabit Ethernet.
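One optimization that several comments mention is jumbo frames. Building on the frame-overhead sketch above, here is a rough comparison (same assumptions: IPv4 and TCP headers without options; Ethernet header, CRC, preamble, and inter-frame gap counted per frame) of the standard 1500-byte MTU versus a 9000-byte jumbo MTU:

    # Frame-level efficiency at a standard vs. jumbo MTU
    # (assumed: IPv4 and TCP headers without options).
    IP_TCP_HEADERS = 20 + 20            # stripped from each frame's payload before file data
    ETHERNET_EXTRAS = 14 + 4 + 8 + 12   # Ethernet header, CRC, preamble/SFD, inter-frame gap

    def efficiency(mtu):
        """Fraction of the bytes on the wire that carry actual file data."""
        file_data = mtu - IP_TCP_HEADERS
        wire_bytes = mtu + ETHERNET_EXTRAS
        return file_data / wire_bytes

    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {efficiency(mtu):.1%} of the wire is file data, "
              f"~{125 * efficiency(mtu):.1f} MB/s best case")

Note that jumbo frames only pay off if every device in the path supports them, which is far from guaranteed on consumer gear.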
Testing with a different file for the RAM-to-RAM test than the one used in the other tests really shows the errors in these tests.
I'm OK with this piece; it isn't an injustice or wrong in any way IF you look at who it is addressed to. Remember the KISS rule, guys.
I think the RAM disk was a good idea for doing a maximum-throughput test using real-world data copies, but that was the only good thing about the article that I can see.
On the other hand, it is worth mentioning that disk-to-disk transfer speed over a gigabit network depends on the size and number of the files transferred.
It's one thing to copy a 4 GB file over the network, and something totally different to copy 40,000+ files totaling 4 GB. In the latter scenario, performance takes another hit due to increased I/O overhead at the disk level.
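A back-of-the-envelope sketch of why many small files transfer more slowly (the 10 ms per-file cost is purely an assumed, illustrative figure for metadata and seek overhead; real values depend on the drives and filesystem):

    # Toy model: per-file overhead dominates when copying many small files.
    PER_FILE_OVERHEAD_S = 0.010      # open/close, metadata, seeks per file (assumed for illustration)
    DISK_TO_DISK_MB_S = 35.8         # sustained rate measured above for one big file

    def copy_time(total_mb, file_count):
        """Rough estimate: streaming time plus a fixed cost per file."""
        return total_mb / DISK_TO_DISK_MB_S + file_count * PER_FILE_OVERHEAD_S

    print(f"One 4,300 MB file:      ~{copy_time(4300, 1):.0f} s")        # ~120 s
    print(f"40,000 files, 4,300 MB: ~{copy_time(4300, 40_000):.0f} s")   # ~520 s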