GigaLAN speed results -- if you have one please comment

I have some older hardware (Linux is running on a 1900+ Athlon XP nForce2-based board and my win box is a 3200+ Athlon XP nForce2-based board), which may explain the giant boost from the larger frame sizes. Here are the results of my tests moving a large file (about 1 gig) from the win box to Linux and from Linux to the win box. I've concluded that my network would HUGELY benefit from a 4k MTU size. Here are the data:

[code:1:9c4d28d1b6]Test: large file, 1,048,522 KB transferred via Samba
Both NICs running at full duplex, 1 gig mode

mtu=1500        time (sec)   MB/sec   Mbps
linux to win    59           21.2     170
win to linux    94           13.3     107

mtu=4000
linux to win    51           24.6     197
win to linux    46           27.2     218

mtu=9000
linux to win    57           22.0     176
win to linux    49           25.6     205[/code:1:9c4d28d1b6]
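For anyone repeating the test, the MB/sec and Mbps columns can be derived from the transfer size and the stopwatch time. A minimal Python sketch (the size and time below are made-up placeholders, not a re-run of the numbers above; note the tables use a flat 8x factor for MB/sec to Mbps, ignoring the 1000-vs-1024 distinction):

```python
def throughput(size_kb: float, seconds: float) -> tuple[float, float]:
    """Convert a transfer of size_kb kilobytes over `seconds` seconds
    into (MB/sec, Mbps), using 1 MB = 1024 KB and 1 byte = 8 bits."""
    mb_per_sec = size_kb / 1024 / seconds
    mbps = mb_per_sec * 8  # same 8x factor used in the tables above
    return round(mb_per_sec, 1), round(mbps, 0)

# Placeholder example: a 1,048,576 KB (1 GiB) file in 60 seconds
mb_s, mbps = throughput(1_048_576, 60)
print(f"{mb_s} MB/sec, {mbps:.0f} Mbps")  # 17.1 MB/sec, 137 Mbps
```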

[b]My conclusions[/b]
[code:1:9c4d28d1b6]4k vs. standard   % change
linux to win      16%
win to linux      104%

9k vs. standard
linux to win      4%
win to linux      92%
[/code:1:9c4d28d1b6]
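These percentages are just ((new / old) - 1) x 100 against the mtu=1500 MB/sec baseline. A quick Python check against the rows in the first table (one figure lands at 105% here; the 104% above just reflects truncating instead of rounding):

```python
def pct_change(baseline: float, new: float) -> int:
    """Percent improvement of `new` over `baseline`, rounded to a whole percent."""
    return round((new / baseline - 1) * 100)

# MB/sec values from the mtu=1500 / 4000 / 9000 tables above
print(pct_change(21.2, 24.6))  # 16  -> linux to win, 4k vs. standard
print(pct_change(13.3, 27.2))  # 105 -> win to linux, 4k vs. standard (104% truncated)
print(pct_change(21.2, 22.0))  # 4   -> linux to win, 9k vs. standard
print(pct_change(13.3, 25.6))  # 92  -> win to linux, 9k vs. standard
```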

What's striking to me is that my network is not even close to the theoretical limit of gigabit LAN (1000 Mbps). Are my numbers typical, or is my setup slow?

Can others with gigalan switches or crossover cable setups do a test like this? Get yourself about a 1 gig file and transfer it. If you're using Windows, use a stopwatch or robocopy to time your results and post them here.
  1. One thing I would point out... the data for that large file did not instantly materialize at the NIC. You have to consider the performance of your hard drives in trying to measure your actual network performance.

    ATA/133 (probably the fastest hard drive controller in your older PC?) has a theoretical maximum transfer rate of 133 MB/sec, or roughly 1 Gbps, and it is pretty certain your actual drives are not capable of even that as a sustained throughput; they're probably in the vicinity of 20-40 MB/sec for sequential reads (roughly 160-320 Mbps). And this assumes you defragged your drives before the test.
  2. I actually measured the drive-to-drive speed as well (albeit with a different set of files):
    [code:1:d6813e6332]time (sec)   MB/sec   Mbps
    15           47.7     382[/code:1:d6813e6332]
  3. Hard drive performance varies dramatically depending on the size and arrangement of the file.

    If you got 300+Mbps sustained data transfer rate (large sequential file), that is pretty good. The measurement may be different for each machine, and even each drive on each machine.

    But that does establish the upper bound of your network performance for transferring large files. I noticed that in a couple of cases you were approaching the limit of your hard drives.
  4. I've hit around 50-60 MB/s on single drive transfers (at least a single drive on one side); > 90 MB/s on RAID to RAID using Windows on both ends, and > 100 MB/s using ftp under Windows. I've used 10 GB files for these tests, because a 1 GB file can fit in RAM in some cases.

    I recommend going to FTP to factor out the Samba (etc.) inefficiencies, and to provide a more useful upper-limit goal for Samba and friends.

    I recommend using iperf or something similar for checking the networking performance in isolation.
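    In the same spirit as iperf (though nowhere near as capable), a raw-socket throughput check can be sketched in a few lines of Python. This is a loopback toy for illustration only; real tests should run the sink on the remote machine and use much larger transfers:

```python
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KB send buffer
TOTAL_BYTES = 50 * 1024 * 1024  # 50 MB; real tests want far more

def sink(server: socket.socket, counter: list) -> None:
    """Accept one connection and count every byte received."""
    conn, _ = server.accept()
    with conn:
        while chunk := conn.recv(65536):
            counter[0] += len(chunk)

# "Server" side: listen on an ephemeral loopback port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
received = [0]
t = threading.Thread(target=sink, args=(server, received))
t.start()

# "Client" side: blast TOTAL_BYTES at the sink and time it.
start = time.perf_counter()
client = socket.create_connection(server.getsockname())
sent = 0
while sent < TOTAL_BYTES:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
t.join()
elapsed = time.perf_counter() - start
server.close()

print(f"{received[0] / 1024 / 1024 / elapsed:.1f} MB/sec on loopback")
```

    Loopback numbers never touch the NIC, so they say nothing about the wire; the point is only to show the measurement shape (send a known byte count, time it, divide).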

    IME whether or not jumbo frames help performance depends a lot on the hardware and software environment. PCI NICs and older hardware in general gain more from jumbo frames than on-board / PCIe NICs and newer faster hardware.

    (But there are still exceptions, blowing the nice theory out of the water in some cases, e.g. with Vista. Vista changes things; it might be nice to see Vista together with a Linux SMB 2.0 implementation as well in the future.)
  5. Thanks for the suggestion. Iperf is pretty slick. For other interested users, you can download a win32 binary and the source for Linux/UNIX from here. For Windows, just do a search for "Iperf 2.0.2 installer for Windows" on the page and you'll get the link. Usage is simple: run it in server mode on one machine, and client mode on the other.
    [code:1:dfbe605a47]$ iperf -s                      # on the server machine
    $ iperf -c IP_OF_SERVER -t 30   # on the client machine[/code:1:dfbe605a47]

    ANYWAY, here's my output (4k jumbo frames enabled)
