Networking in Debian, Asus RT-N56U, Ethernet (different transfer speeds between subnets vs. within the same LAN)

whoyawitt

Honorable
May 22, 2013
I am having an issue when transferring files between two computers which I tried to detail below.

When I transfer files over a single scp session I can only max out at ~50MB/s.
I've mostly worked around it for my purposes by running multiple scp streams to fill up the bandwidth; combined they reach ~114MB/s. I had just moved the computers onto the same subnet before starting this transfer. When the computers were on different subnets (as shown below), a single scp session would transfer files from the Linux Media box to the Hackintosh at ~95-100MB/s, and if I added a second scp session it would crawl along at 2 or 3 MB/s until the first transfer finished.
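To be concrete, the multi-stream workaround is just a couple of scp sessions run in parallel, something like this (hostname and paths are placeholders, not my exact commands):

    # run from Linux Media: pull two different folders from the Hackintosh at the same time
    scp -r user@hackintosh.local:/media/movies/SomeMovie /mnt/pool/movies/ &
    scp -r user@hackintosh.local:/media/TV/SomeShow /mnt/pool/TV/ &
    wait    # block until both background transfers finish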

For my purposes the workaround is fine but it would be nice to know what is going on.

The setup without problems is as follows:

WAN -> Apple Airport Extreme (192.168.1.1) -> Asus RT-N56U (192.168.1.2) -> Cisco 8-port gigabit switch

The Hackintosh was connected to the AirPort Extreme, and the Linux Media box was connected to the 8-port Cisco switch, which in turn was connected to the RT-N56U.

From Linux Media I used scp to transfer the files at 90+ MB/s in a single scp session.

Now the setup is:

WAN -> Airport Extreme -> Asus RT-N56U -> Cisco Switch -> Hackintosh & Linux Media


Now, when doing an scp in the reverse direction (Hackintosh to Linux Media), I peak at ~50MB/s, but running multiple scp sessions can combine to ~110MB/s.

The Asus RT-N56U is running custom firmware posted on Google, RT-N56U_3.0.3.5-058.

The Hackintosh is exactly the same in both situations.

The things that are different:
- Both computers are on the same subnet in the second situation.
- I installed Debian Wheezy (haven't really started to configure it).
- Linux Media now has the hard drives set up as a pool with mhddfs (they used to be a single volume group via LVM). The pool is mounted at /mnt/pool; I then made /mnt/pool/movies and /mnt/pool/TV, and the separate scp streams are going into those two folders, hoping the files get spread across the various drives. This is my first run with mhddfs (rough sketch of the mount below).
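The mount itself is something along these lines (the underlying drive mount points here are placeholders, not my actual layout):

    # individual drives are already mounted at /mnt/disk1, /mnt/disk2, /mnt/disk3
    mhddfs /mnt/disk1,/mnt/disk2,/mnt/disk3 /mnt/pool -o mlimit=20G,allow_other
    mkdir -p /mnt/pool/movies /mnt/pool/TV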

My guess....
QoS on the Asus or the Cisco switch? (I don't see any QoS settings on the RT-N56U with the firmware I am using.)


Any ideas on where the problem is or how to figure it out? Any input or general networking tips are appreciated.

Side note: I also have dual NICs in each computer. I want to set them up to be the best for serving media throughout my house. I think setting them up as a bond would be the best route? If so, which type of bonding is used for this? Just basic round robin, or something like XOR? (A rough sketch of what I'm imagining is below.)
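For context, here is roughly the kind of Debian bonding config I have in mind; interface names, addresses and the mode are placeholders and I haven't actually tried this yet:

    # /etc/network/interfaces (needs the ifenslave package installed)
    auto bond0
    iface bond0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-slaves eth0 eth1
        bond-mode balance-rr    # round robin; 802.3ad (LACP) is an alternative but needs switch support
        bond-miimon 100         # link-monitoring interval in ms

From what I understand, 802.3ad only helps when the switch supports LACP, so a basic unmanaged switch might rule that mode out.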

Thanks!
-W
 

blarg_12

Distinguished
Aug 19, 2009
You've removed two components from the networking equation and it performs worse? (that's backwards! lol)

The only way I could see QoS causing issues is if the switch treats the network traffic differently when going between subnets - which would be weird indeed. It depends on how dumb your switch is - I try to make all my switches flat, dumb, layer 2 devices to avoid any sort of "calculation" or "traffic prioritisation" as my experience is that it always prioritises the stuff you don't want and adds a bunch of overhead.

The fact that you can fill the pipe with multiple sessions points more to a limitation on an individual scp session - i.e. a software/local-machine issue rather than the network itself. You could confirm that by directly connecting the devices, or by testing raw throughput with scp out of the picture (rough example below)...
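Something like iperf takes the disks and the ssh encryption out of the equation entirely; the hostname here is a placeholder:

    # on the Linux Media box (server/receiver)
    iperf -s

    # on the Hackintosh: one stream, then four parallel streams
    iperf -c linux-media.local
    iperf -c linux-media.local -P 4

If a single iperf stream runs at roughly gigabit line rate (~940Mbit/s) but a single scp doesn't, the network is fine and the limit is in scp or the disks.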

Haven't used mhddfs, but the docs state that new files go to one drive at a time (the one with the most space remaining, per the quote below), so your concurrent scp streams should all be hammering the same (?) drive anyway. In other words, the multi-stream speedup probably isn't coming from parallel disks, which suggests the single-session limit isn't a drive bottleneck either.

http://romanrm.ru/en/mhddfs
"If each drive individually has less than mlimit free space, the drive with the most free space will be chosen for new files."

Unless all your drives are the same size and mhddfs ends up picking a separate drive for each stream.... which would potentially remove a disk bottleneck *if* one existed. Something to look into.
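Easy enough to check while a transfer is running - watch the per-disk activity and see whether one drive or several are doing the work (device paths are placeholders, and iostat comes from the sysstat package):

    # per-device throughput, refreshed every second, while the scp streams are running
    iostat -x 1

    # afterwards, see which underlying drive a given file actually landed on
    ls -l /mnt/disk1/movies/ /mnt/disk2/movies/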

The other question is whether there is some processing bottleneck on the network card side that is mitigated by having multiple streams. Not likely, but I have seen it before.
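If you want to chase that last angle, a quick check is per-core CPU usage while a single scp runs (the ssh encryption can pin one core) plus the NIC's offload settings - eth0 here is a placeholder:

    # per-core CPU usage during a single scp session (press 1 inside top for the per-core view)
    top

    # checksum/segmentation offload settings on the NIC
    ethtool -k eth0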