
Gigabit Ethernet: Dude, Where's My Bandwidth?

Network Tests: Setting Our Expectations

Before we do anything else, we should test our hard drives without using the network to see what kind of bandwidth we can expect from them in an ideal scenario.

There are two PCs taking part in our real-world gigabit network. The first, which we’ll call the server, holds two volumes: its primary drive is a 320 GB Seagate Barracuda ST3320620AS that’s a couple of years old, and it also acts as a network-attached storage (NAS) device serving a RAID 1 array of two 1 TB Hitachi Deskstar 0A-38016 drives, mirrored for redundancy.

The second PC on the network, which we’ll call the client, has only one drive: a 500 GB Western Digital Caviar 00AAJS-00YFA that is about six months old.

We first test the speed of the server's and the client's C: drives to see what kind of read performance we can expect from them, using SiSoftware Sandra 2009’s hard disk benchmark:

Right out of the gate, our hopes for achieving gigabit-speed file transfers are dashed. Both of our single hard disks top out at about 75 MB/s of read speed in ideal situations. Since this is a real-world test and the drives are about 60% full, we can expect read speeds closer to the 65 MB/s index that both of these drives share.

But have a look at the RAID 1 array's performance. The great thing about this array is that the hardware RAID controller can increase read performance by pulling data from both drives simultaneously, much as a striped RAID 0 array does; note that this benefit typically comes from hardware RAID controllers and may not appear with software RAID solutions. In our tests, the RAID array delivers much faster read performance than a single drive can, so our best chance for a quick file transfer over the network looks like it’s with our RAID 1 array. While the array peaks at an impressive 108 MB/s, we should expect real-world performance closer to its 88 MB/s index because the array is about 55% full.

So, we should be able to demonstrate about 88 MB/s over our gigabit network, right? That’s not quite up to a gigabit network’s 125 MB/s ceiling, but it’s far faster than a 100-megabit network’s 12.5 MB/s ceiling, so 88 MB/s would be great in practice.
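
To make the arithmetic behind those ceilings explicit, here is a minimal Python sketch. It only converts nominal line rates into bytes per second and deliberately ignores Ethernet, IP, and TCP overhead, so the outputs are upper bounds rather than what any real transfer will hit.

# Convert nominal link rates to theoretical payload ceilings.
# These are raw line rates; real transfers lose some of this to protocol
# overhead, so treat the results as upper bounds.

def ceiling_mb_per_s(link_mbps: float) -> float:
    """Convert a link rate in megabits per second to megabytes per second."""
    return link_mbps / 8  # 8 bits per byte

for name, rate_mbps in [("100 megabit Ethernet", 100), ("gigabit Ethernet", 1000)]:
    print(f"{name}: ~{ceiling_mb_per_s(rate_mbps):.1f} MB/s ceiling")

# 100 megabit Ethernet: ~12.5 MB/s ceiling
# gigabit Ethernet: ~125.0 MB/s ceiling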

Not so fast. Just because our drives can read this quickly doesn’t mean they can write that fast in a real-world situation. Let’s try a few disk write tests before even using the network to see what happens. We’ll start with our server and copy a 4.3 GB image file from the speedy RAID array to the 320 GB system drive, and then back again. Then we'll try copying a file from the client PC's D: drive to its C: drive.
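
If you'd like to reproduce this kind of copy test yourself, timing a single large file copy is all it takes. The Python sketch below is one rough way to do it; the file paths are hypothetical placeholders rather than the ones we used, and note that for files small enough to fit in RAM the operating system's write cache can inflate the result.

import shutil
import time
from pathlib import Path

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the average throughput in MB/s."""
    size_mb = Path(src).stat().st_size / 1_000_000
    start = time.perf_counter()
    shutil.copyfile(src, dst)  # plain local file copy, no network involved
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Hypothetical example: copy a large test file from the RAID array (D:)
# to the system drive (C:) and report the average speed.
speed = timed_copy(r"D:\test\image.iso", r"C:\test\image.iso")
print(f"Average transfer speed: {speed:.1f} MB/s")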

Yuck! Copying from the quick RAID array to the C: drive results in a mere 41 MB/s average transfer speed. And when copying from the C: drive to our RAID 1 array, the transfer speed drops even more to about 25 MB/s. What is going on here?

Well, this is what happens in the real world: the C: drive is only a little over a year old, but it’s about 60% full, probably a bit fragmented, and no speed demon when it comes to writes. Other factors come into play as well, such as the overall speed of the system and its memory. As for the RAID 1 array, it’s made up of relatively new hardware, but because it’s a redundant array, it has to write every piece of data to both drives simultaneously, so it takes a big write-performance hit. While RAID 1 can offer fast read performance, write performance is sacrificed. Alternatively, we could use a RAID 0 array with striped drives, which delivers fast write and read performance, but if one of the drives dies, all of the data is compromised. Realistically, RAID 1 is the better bet for folks who value their data enough to set up a NAS.
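
To put that trade-off in numbers, here is a deliberately simplified model of a two-drive array. It ignores the controller, caching, and fragmentation, and the single-drive figures plugged in at the end are round placeholder values rather than measurements, so treat it as an illustration, not a prediction.

def raid_throughput(level: str, single_read: float, single_write: float,
                    drives: int = 2) -> tuple[float, float]:
    """Return idealized (read, write) throughput in MB/s for an array."""
    if level == "RAID 0":
        # Striping spreads both reads and writes across all drives.
        return single_read * drives, single_write * drives
    if level == "RAID 1":
        # Mirroring can read from both drives at once,
        # but every write has to go to every drive.
        return single_read * drives, single_write
    raise ValueError(f"unsupported RAID level: {level}")

# Placeholder single-drive figures (MB/s), not measured values:
print("RAID 1:", raid_throughput("RAID 1", single_read=65, single_write=55))
print("RAID 0:", raid_throughput("RAID 0", single_read=65, single_write=55))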

However, all is not lost; there’s a little light at the end of the tunnel. The newer 500 GB Western Digital Caviar is capable of writing this file at an average of 70.3 MB/s over five runs, and it even recorded a top run of 73.2 MB/s.

With this in mind, we expect that our real-world gigabit LAN might demonstrate a maximum of about 73 MB/s on transfers from the NAS RAID 1 array to the client’s C: drive. We’ll also test transferring files from the client’s C: drive to the server’s C: drive to see if we can realistically expect about 40 MB/s in that direction.
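
Put another way, our expectation is just a chain of bottlenecks: a network copy can't go faster than the source's read speed, the link's ceiling, or the destination's write speed, whichever is lowest. Here is a tiny Python sketch of that reasoning, using the figures measured above:

def expected_throughput(source_read: float, link_ceiling: float,
                        dest_write: float) -> float:
    """A network file copy is bounded by its slowest stage (all in MB/s)."""
    return min(source_read, link_ceiling, dest_write)

# NAS RAID 1 array -> client C: drive (88 MB/s read index, 73 MB/s top write run)
print(expected_throughput(source_read=88, link_ceiling=125, dest_write=73))  # 73

# Client C: drive -> server C: drive (the server C: wrote at roughly 41 MB/s locally)
print(expected_throughput(source_read=65, link_ceiling=125, dest_write=41))  # 41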

Comments
  • -5 Hide
    gwiz1987 , June 22, 2009 6:28 AM
    why is the RAM-to-RAM network max speed on the graph 111.2 when u state 111.4? typo?
  • 5 Hide
    drtebi , June 22, 2009 6:29 AM
    Interesting article, thank you. I wonder how a hardware based RAID 5 would perform on a gigabit network compared to a RAID 1?
  • 5 Hide
    Anonymous , June 22, 2009 6:55 AM
    Hello

    Thanks for the article. But I would like to ask how the transfer speed is measured. If it is just (size of the file)/(time needed for the transfer), you are probably consuming all the bandwidth, because you have to count in all the control parts of the data packet (Ethernet header, IP header, TCP header...)

    Blake
  • 12 Hide
    spectrewind , June 22, 2009 7:14 AM
    Don Woligroski has some incorrect information, which invalidates this whole article. He should be writing about hard drives and mainboard bus information transfers. This article is entirely misleading.

    For example: "Cat 5e cables are only certified for 100 ft. lengths"
    This is incorrect. 100 meters (or 328 feet) maximum official segment length.

    Did I miss the section on MTU and data frame sizes? Segment? Jumbo frames? 1500 vs. 9000 for consumer devices? Fragmentation? TIA/EIA? These words and terms should have occurred in this article, but were omitted.

    Worthless writing. THG *used* to be better than this.
  • 12 Hide
    cg0def , June 22, 2009 7:25 AM
    why didn't you test SSD performance? It's quite a hot topic and I'm sure a lot of people would like to know if it will in fact improve network performance. I can venture a guess but it'll be entirely theoretical.
  • 8 Hide
    MartenKL , June 22, 2009 7:34 AM
    A Gbit is actually 10^9 bits per second, i.e. about 119 MiB/s.
  • 19 Hide
    flinxsl , June 22, 2009 7:37 AM
    do you have any engineers on your staff that understand how this stuff works?? When you transfer data over a network, you don't just shoot the bits directly; they are sent in something called packets. Each packet contains control bits as overhead, which count toward the 125 MB/s limit but don't count as data bits.

    11% loss due to negotiation and overhead on a network link is about the right ballpark for a home test.
  • 6 Hide
    jankee , June 22, 2009 7:41 AM
    After carefully reading the article, I believe this is not a tech review, just a concern from a newbie because he does not understand much about all the external factors of data transfer. His simple thought is that 1000 is ten times 100 Mb/s, so he expects it to be ten times faster.

    Anyway, many different factors will affect the transfer speed. The most accurate test needs to use a RAM drive and powerful machines to illuminate the machine bottleneck factor out.

  • 0 Hide
    jankee , June 22, 2009 7:43 AM
    Correction: "eliminate" (sorry)
  • 8 Hide
    MartenKL , June 22, 2009 7:46 AM
    Cat 5e is actually a newer standard than the aging Cat 6 standard. Cat 6a, however, is a relatively new standard that I would recommend; it supports 10 Gb/s networks as well.
  • 0 Hide
    Anonymous , June 22, 2009 8:04 AM
    First of all, gigabit Ethernet uses entirely different addressing and encoding from 100-meg, and the overhead is one heck of a lot greater than that.
    First, there's the 8b/10b encoding, so an 8-bit byte is encoded as a 10-bit unit. Then there's the concept of invariable frame sizes, where it might be possible that a TCP/IP packet spans two frames, filling 100% of the first and 1% of the second, which means 50.5% efficiency. Third, every frame is only partly payload; the rest is taken up by header information, footer, and CRC. It's not much, perhaps about 5% of the frame, but it can get noticeable.
    First, you have to divide by 10, not by 8, to get the speed in bytes/second (i.e. 100 MB/s, not 125 MB/s).
    Second, if you transmit a lot of inefficient frames (networking programs aren't exactly frugal about bandwidth when they have gigabit Ethernet, and next to none are actually optimized in any way for it), you might lose up to half of the bandwidth.
    Third, when you factor in the frame-level overhead, you might end up with maybe 40-45 MB/s of the promised 100 MB/s...

    Fortunately, a lot of these issues can be resolved by optimizing software and firmware to take advantage of the available bandwidth and idiosyncrasies of gigabit Ethernet.
  • 1 Hide
    MartenKL , June 22, 2009 8:14 AM
    OK, my bad, this article is not for Tom's Hardware; it is not meant for people who understand networking, or maybe even computers. Pass this article on to another site with more "normal" visitors.

    Testing with a different file for the RAM-to-RAM test than was used in the other tests really shows the errors in these tests.
  • 0 Hide
    cyberkuberiah , June 22, 2009 8:29 AM
    this is why i am a regular reader here at tom's.
  • -1 Hide
    apache_lives , June 22, 2009 8:50 AM
    What I want to see is the effect of jumbo-frame packets and HDD allocation unit size (or stripe size) and their effects on network transfers, since the packets etc. transfer differently across the network cable - benchmarks?
  • 19 Hide
    SpadeM , June 22, 2009 9:00 AM
    For all tech people out there, the title of the article should have been a dead give away
    Quote:
    Gigabit Ethernet: Dude, Where's My Bandwidth?
    about the technical aspect of this piece. Sure they could have used a server platform with a server os, SSD's and ram disks, and why not some tech language what most people don't understand. But, as the titles states in a very suggestive way, this article is for people that have simple questions and seek simple answers.

    I'm ok with this piece, it isn't and injustice or it isn't wrong in any way IF you look at who it is addressed to. Remember the KISS rule guys.
  • -3 Hide
    Psycomo , June 22, 2009 9:31 AM
    Technically you could see a 10-fold increase in throughput. A 100 Mb/s network is capable of 12.5 MB/s of throughput. A 1 Gb/s network has a throughput of 125 MB/s. So 100 Mb/s (12.5 MB/s) x 10 = 1 Gb/s (125 MB/s). Of course this assumes an optimal or perfect network environment, one with a single user and a SATA-based SSD that can push that amount of data on the read/write. Your typical magnetic HDD running at 7,200 RPM will be hard pressed to push more than 100 MB/s on the write, which is going to be your data bottleneck, as you can only push data as fast as it can be written. Of course, even at that speed you are at around an 8x improvement in throughput. You will also lose bandwidth to overhead alone, roughly 10% (on this I could be mistaken). Again, this all assumes a perfect test environment.
  • 6 Hide
    thexder1 , June 22, 2009 9:33 AM
    First of all, I would like to point out to IronRyan21 that the actual gigabit Ethernet standard is for Cat 5. Second, I would have preferred to see more thorough tests done, as the "50 ft" of cable used is a very short run for Ethernet, since the maximum length is actually 100 meters, or 328 ft, as spectrewind pointed out. In a home you probably will not see runs much over 50 ft, but the setup used in the article was actually 2 x 25 ft runs. If the tests were done closer to the maximum length, you would see a much bigger difference when changes were made to how the cable was run and to the cable itself. I would want the article to be redone with runs of 50, 100, 200, and 300 ft to see if the conclusions were correct or if they only apply to very short runs. I would also have liked to see testing on the throughput difference using jumbo frames as well as different file sizes.

    I think the RAM disk was a good idea for a maximum-throughput test using real-world data copies, but that was the only good thing about the article that I can see.
  • 3 Hide
    zetone , June 22, 2009 9:51 AM
    @all complaining about the technical aspects of this article: I think the target audience is NOT network administrators.

    On the other hand, it is worth mentioning that transfer speed over a gigabit network from disk to disk depends on the size of the files transferred and the number of files transferred.

    It's one thing to copy a 4 gig file over the network. It is totally different to copy 40k+ files totaling 4 gigs. In the latter scenario, performance will take another hit due to increased I/O overhead at the disk level.