Is 30 MB/s good throughput for a gigabit LAN switch?

graysky

I have a GS108 (not using jumbo frames) and have been measuring transfer speeds via different methods from my Linux box to a Windows box (a D-Link 530T in the Linux box and the Marvell built-in gigabit NIC in the WinXP machine, both at 1000/full duplex). I can provide hardware details if needed. I kinda figured I'd see higher numbers. Here are the results; are they typical?

1.19 GB in 7 files via FTP: 30.4 MB/s
1.19 GB in 7 files via NFS: 22.6 MB/s
1.19 GB in 7 files via Samba: 25.4 MB/s
 

graysky

Those should be megabytes... the math was (1.19 GB / x seconds) x 1024 = MB/s, I believe. If those are slow numbers, what should I expect? Can you point me to a good guide for tweaking the TCP/IP stack, as you put it? One box is WinXP and the other is a Debian box.
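
For a concrete sanity check, here is that formula as a minimal Python sketch (the 40-second elapsed time is a hypothetical figure, back-calculated from the 30.4 MB/s FTP result, since the actual timings weren't posted):

```
# Sanity check of the formula above: GB * 1024 / elapsed seconds = MB/s.
size_gb = 1.19
seconds = 40.0                    # hypothetical elapsed time for the FTP run
print("%.1f MB/s" % (size_gb * 1024 / seconds))   # -> about 30.5 MB/s
```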

Thanks!
 

El0him

If those numbers are in megabytes, then you are doing pretty well. Vendors rate their products in bits, so if you are achieving:

1.19 GB in 7 files via FTP: 30.4 MB/s
1.19 GB in 7 files via NFS: 22.6 MB/s
1.19 GB in 7 files via Samba: 25.4 MB/s

(30.4 megabytes/second) * (8 bits/byte) = 243.2 megabits per second
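
The same conversion for all three results, as a quick sketch (the 1000 Mb/s denominator is just the nominal link rate, ignoring framing overhead):

```
# Convert each measured rate from megabytes/s to megabits/s and compare it
# with the nominal 1000 Mb/s link rate (Ethernet framing overhead ignored).
results_mb_s = {"FTP": 30.4, "NFS": 22.6, "Samba": 25.4}
for proto, mb_s in results_mb_s.items():
    mbit_s = mb_s * 8
    print("%-6s %5.1f MB/s = %5.1f Mb/s (%4.1f%% of 1000 Mb/s)"
          % (proto, mb_s, mbit_s, 100 * mbit_s / 1000.0))
```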

You should be able to squeeze out some more by tuning your TCP stack, and good network cards also help. Just because something has gigabit written on the box doesn't mean you are going to get gigabit wire speed.
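
On the Debian side, a first thing to look at (a hedged sketch only; these are the standard Linux sysctl paths, but the right values depend on the kernel and NIC, so treat any changes as experiments) is the kernel's TCP buffer limits:

```
# Print the Linux socket/TCP buffer limits that TCP-tuning guides usually
# start with; raising the max values via sysctl lets TCP use larger windows.
SYSCTLS = [
    "/proc/sys/net/core/rmem_max",   # max receive socket buffer (bytes)
    "/proc/sys/net/core/wmem_max",   # max send socket buffer (bytes)
    "/proc/sys/net/ipv4/tcp_rmem",   # min / default / max TCP receive buffer
    "/proc/sys/net/ipv4/tcp_wmem",   # min / default / max TCP send buffer
]
for path in SYSCTLS:
    try:
        with open(path) as f:
            print(path, "=", f.read().strip())
    except IOError:
        print(path, "not available on this kernel")
```

Larger buffers mostly pay off on higher-latency paths; on a flat LAN the gain may be modest.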
 

leexgx

You'll find that's about the max you'll get.

Try jumbo frames set to 9k (if the cards let you set it; both ends need to use the same size, and check that your gigabit switch supports it). You might get more speed, but if you are using a PCI gigabit card you're limited by the PCI bus.

To go faster you really want two computers with hardware-assisted gigabit NICs (like nForce4 boards, which have an onboard NVIDIA network controller). Using plain PCI cards generates a lot of kernel load (80% CPU on my last PC versus 5% on my nForce4 gigabit one).
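
For context, here is a back-of-the-envelope sketch (assuming plain TCP over IPv4 with no header options) of what jumbo frames buy on the wire. The raw payload efficiency only improves by a few percent; the bigger win is the roughly 6x drop in packets per second, which is what cuts the per-packet kernel/interrupt load mentioned above:

```
# Per-frame payload efficiency of standard vs jumbo frames.
# Assumes TCP over IPv4 with no options: 20 B IP + 20 B TCP headers inside
# the frame, plus 14 B Ethernet header, 4 B FCS, 8 B preamble and a 12 B
# inter-frame gap on the wire.
def wire_efficiency(mtu):
    payload = mtu - 20 - 20
    on_wire = mtu + 14 + 4 + 8 + 12
    return payload / float(on_wire)

for mtu in (1500, 9000):
    print("%4d-byte MTU: %.1f%% payload efficiency"
          % (mtu, 100 * wire_efficiency(mtu)))
```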
 

graysky

The problem with my setup is that for the switch to talk to the router, I need to use the standard 1500-byte MTU, since my router can't handle jumbo frames.
 

Madwand

30 MB/s seems roughly right to me; I've seen this figure reported by others (typically 30-35 MB/s) and see it myself when transferring from single IDE to IDE (Windows to Windows). The HDs are the apparent bottleneck.

Another test you could try is connecting a second HD to the same machine, and measuring the transfer speed from one drive to another. I see around 35 MB/s in this case, which is pretty clearly a limiting factor here.

When the source is cached, or faster (RAID), I've seen 50-60 MB/s transfer rates reported (pushing from the source). Write caching on the receiving side might also be responsible for higher reported speeds on transfers that aren't very large.

Going RAID to RAID, I've been able to reach 60-70 MB/s sustained on the same network setup, which again shows that a consumer gigabit network is not the bottleneck in those cases; IDE to IDE, the drives are.
 

Pimp

What type of cable are your network cables? Are they Cat 5e or Cat 7? I thought you needed Category 7 wire for gigabit.
 

bman212121

Actually, there has to be some limiting factor that I can't find. I can consistently get about 30-35 MB/s, and that was running the transfer over a crossover cable. Both computers have 3-drive arrays capable of sustaining 100 MB/s, and CPU usage wasn't more than 25% on either PC. Using 9k jumbo frames only made a marginal difference, maybe 5 MB/s, bringing it up to 35 MB/s.

If anyone has any other ideas on how to get over that limit, I would like to know.
 

Madwand

Actually, there has to be some limiting factor that I can't find. I can consistently get about 30-35 MB/s, and that was running the transfer over a crossover cable. Both computers have 3-drive arrays capable of sustaining 100 MB/s, and CPU usage wasn't more than 25% on either PC. Using 9k jumbo frames only made a marginal difference, maybe 5 MB/s, bringing it up to 35 MB/s.

If anyone has any other ideas on how to get over that limit, I would like to know.

This case seems different from the OP's and the typical case, and from my case as noted above. Can you provide details on the hardware, software, and the test? Are you perhaps saturating the PCI bus? Are you using add-on NICs? On which bus? Which motherboard? Are you using add-on drive controllers? On which bus? What size files are you testing with? Which file transfer protocol? Is read/write caching enabled for the arrays? Which RAID configuration is used? What stripe size? How did you measure that the arrays can sustain 100 MB/s? Read and write?

As part of the analysis, a network performance test independent of drive performance would be helpful. I suggest using TTCP for this, e.g. http://www.pcausa.com/Utilities/pcattcp.htm

Options -l 250000 -n 3000 were said to have been recommended by VIA. I used something like -r -c -R -f M on the receiving side, and -l 250000 -n 3000 -t -f M <target name/IP> on the sending side.
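
If TTCP isn't convenient, a throwaway memory-to-memory test can be improvised with a short script. This is only a sketch (the port number, 1 GiB payload, and 64 KiB buffer are arbitrary choices); it measures raw TCP throughput with no disks involved:

```
# Minimal memory-to-memory TCP throughput test (no disk involved).
# Usage:  receiver:  python nettest.py recv
#         sender:    python nettest.py send <receiver-ip>
import socket
import sys
import time

PORT = 5001                # arbitrary port
CHUNK = 64 * 1024          # send/receive buffer size
TOTAL = 1 * 1024 ** 3      # push 1 GiB of zeros through the socket

def recv():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    got, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    secs = time.time() - start
    print("received %.1f MB in %.1f s = %.1f MB/s" % (got / 1e6, secs, got / 1e6 / secs))

def send(host):
    sock = socket.create_connection((host, PORT))
    buf = b"\x00" * CHUNK
    sent, start = 0, time.time()
    while sent < TOTAL:
        sock.sendall(buf)
        sent += len(buf)
    sock.close()
    secs = time.time() - start
    print("sent %.1f MB in %.1f s = %.1f MB/s" % (sent / 1e6, secs, sent / 1e6 / secs))

if __name__ == "__main__":
    if sys.argv[1] == "recv":
        recv()
    else:
        send(sys.argv[2])
```

Both ends print MB/s, so a drive-independent baseline falls out directly.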
 

cisco

I usually get about 30-35% average network utilization. It's hard to get really close to 1000 Mbps for any sustained period.
 

Madwand

Most of the time there's nothing coming across the line, so I get around 0% average utilization, but can sustain > 50% easily for RAID to RAID transfers, and have seen > 90% utilization when using the synthetic test tool mentioned above. (Consumer gear, no jumbo frames.)

Single IDE to single IDE, around 30% is right, because that's what the drives do.
 

lcdguy

As far as speeds go, I usually get around 22-30 MB/s in an ad hoc scenario (PC to PC) with the normal 1500-byte frame size.

When I set both computers' NICs to 9000-byte jumbo frames, the speed jumped to around 56 MB/s (I think my drives were maxing out).

But I discovered a little quirk: with jumbo frames enabled, I tried to transfer a file that was around 465 KB and it would end up corrupt, but when I reset the frame size to 1500 bytes it was fine.

Personally, I love Gigabit Ethernet.
 

graysky

With jumbo frames enabled, I tried to transfer a file that was around 465 KB and it would end up corrupt, but when I reset the frame size to 1500 bytes it was fine.

Interesting... is this reproducible?
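
One way to check (a sketch; the filename is a placeholder) would be to hash the file on both machines after each copy and compare the digests, once with jumbo frames and once without:

```
# Compare MD5 digests of the source and the transferred copy.
import hashlib

def md5sum(path, chunk=1024 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

print(md5sum("transferred_file.zip"))   # run on both ends and compare
```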
 

Madwand

When I set both computers' NICs to 9000-byte jumbo frames, the speed jumped to around 56 MB/s (I think my drives were maxing out).

What were the file sizes that you tested? A rule of thumb is that you should use files that are >=4X as large as your available RAM, so that the file cache can't significantly affect the results.

Are you using RAID or single drives? Which drives and which OS? If Linux, how are the drives set up?
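
As an aside, a quick way to produce a test file that satisfies the >= 4x-RAM rule of thumb above (a sketch only; the path and the 8 GiB size are placeholders, sized for a 2 GB machine, and zeros are fine as long as nothing in the transfer path compresses them):

```
# Write an 8 GiB file of zeros in 8 MiB pieces, big enough to defeat the
# OS file cache on a 2 GB machine.
CHUNK = 8 * 1024 * 1024
SIZE = 8 * 1024 ** 3

with open("testfile.bin", "wb") as f:
    block = b"\x00" * CHUNK
    written = 0
    while written < SIZE:
        f.write(block)
        written += CHUNK
print("wrote %.1f GiB" % (written / float(1024 ** 3)))
```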
 

lcdguy

The first chunk of data was around 96 GB of DVD images; the second file was a 465 KB zip file. It was going from a single drive to a Dynamic Disk volume.
 

Madwand

Previously, I'd seen figures on the order of 30 MB/s single drive to single drive over a number of tests, and had accepted this as a general constant of sorts. Based on the above report, I decided to test some more -- I broke up one of my RAID arrays into individual drives, and shuffled files around.

Now I see some variability. Some of it seems to come from the tools used -- xxcopy seems to give more variability than xcopy, for example (so I'll stick with xcopy for now). I've also seen results that I just can't explain at this time (particularly bad performance). I'll work on this some more; I think it might be related to SATA drivers and other configuration parameters.

But the purpose of my note is to report that I also see high network transfer rates for single drives that are consistent with drive performance, without using jumbo frames.

I've seen 50-60 MB/s transfer rates, single drive to single drive, when transferring an 8.4 GB file (with RAM = 2 GB) near the beginning of the drive. This is roughly consistent with transfer rates between drives on the same machine without networking, which indicates there is not a great deal to be gained in my case from jumbo frames or other network tweaks. These drives bench around 65 MB/s STR at the beginning and 35 MB/s at the end, so that would be the expected range of file transfer performance for files that aren't significantly fragmented, where the drives are free to do sequential transfers without much seeking. Previously, in at least some tests, I'd used older, more crowded drives, and perhaps got the 30 MB/s figures because of that. 30 MB/s is still a valid figure for the end of my newer drives, but I now see that even single drives can do much better in some cases.

Jumbo frames become important when some part of the system -- the networking hardware, or even the CPU -- can't keep up with a large volume of packets. So I don't dispute that they might be a very important factor in some cases, just that they aren't a single magic bullet and aren't necessary in all cases.

Thanks for the push to re-test. There's more to learn, and perhaps I'll be able to use jumbo frames eventually and see benefits from them in some cases.