
Data Transfer speed concern

Tags:
  • Cable
  • Servers
  • Networking
Last response: in Networking
October 2, 2006 11:28:08 PM

Hello,

Recently I planned to upgrade my 42-user, 8-server LAN from 10/100 to gigabit. All servers have dual Intel NICs with load balancing. Desktops are now running D-Link 530T gigabit NICs. My machine and the servers are all connected via CAT6 to my Netgear GS748T. Most of the cable already run is CAT5, which will be rerun with CAT6 over an upcoming weekend. 12 users are connected to my Netgear GS724T.

Now my question:

While waiting for my switches/cable to arrive, I purchased a D-Link 8-port gigabit switch and connected my backup server and my machine to it. Both machines at this point had CAT6 cable and gigabit cards. I did a test transfer of a 1.43 GB file, and it transferred from the server to my desktop in 43-45 seconds. Bear in mind there were no other connections; just my machine and the backup server on the gigabit switch. Now most of my ports are filled up, some at gigabit speed and some still at 100 Mbps (due to CAT5 cable). I did another test with a similar-size file, and it took around 2 minutes and 10-15 seconds.

What is the factor here?

1) A few 10/100 cards still in use on the LAN?
2) Some users not at gigabit connections?
3) Not the best choice of gigabit switch?
4) Normal with nearly all ports in use? (48 10/100/1000 Mbps ports; bandwidth: 40 Gbps)
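A quick back-of-the-envelope check on those two transfers (a sketch; the file size and times are the approximate figures above):

```python
# Effective throughput of the two test transfers described above
# (file size and times are approximate values from the post).

FILE_BYTES = 1.43e9  # ~1.43 GB file

def mbps(size_bytes, seconds):
    """Effective throughput in megabits per second (Mb = 10^6 bits)."""
    return size_bytes * 8 / seconds / 1e6

fast = mbps(FILE_BYTES, 44)    # empty switch: 43-45 s
slow = mbps(FILE_BYTES, 135)   # loaded switch: ~2 min 15 s

print(f"empty switch:  {fast:.0f} Mb/s")  # well under gigabit line rate
print(f"loaded switch: {slow:.0f} Mb/s")  # close to 100 Mb/s territory
```

The second number landing near 100 Mb/s is consistent with part of the path negotiating only Fast Ethernet.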

Thank You


October 3, 2006 4:11:31 PM

1.25 GB should take 10.74 seconds over gigabit. I have also found that HD speed plays a factor: one of my servers takes 30 seconds to transfer the above file, whereas another takes 1 minute 30 seconds. Is CAT5 "OK" to run, and effective, on runs up to 100 feet? My business is split into two different "groups": half the company going to the 24-port gigabit switch and the other "side" going to my 48-port, with a CAT5e cable connecting the two switches. Distance for that run is about 130-ish feet. I know CAT5e and/or CAT6 is recommended for gigabit, but at what length does CAT5 no longer properly carry gigabit speeds?

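As a sanity check on the 10.74 s figure (a sketch; the number comes out of treating a "gig" as 2^30 bytes rather than 10^9 bytes):

```python
# Theoretical (wire-speed) transfer time for a 1.25 "gig" file over
# Gigabit Ethernet, showing where the 10.74 s figure comes from.

WIRE_BPS = 1e9  # Gigabit Ethernet line rate, bits per second

gib = 1.25 * 2**30  # 1.25 GiB (binary "gig") in bytes
gb = 1.25 * 1e9     # 1.25 GB (decimal "gig") in bytes

print(f"1.25 GiB: {gib * 8 / WIRE_BPS:.2f} s")  # 10.74 s
print(f"1.25 GB:  {gb * 8 / WIRE_BPS:.2f} s")   # 10.00 s
```

Either way, this is an idealized floor that ignores all protocol and drive overhead.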
October 3, 2006 6:47:27 PM

I think this thread was started with the false assumption that gigabit implies file transfers at full wire speed, a false assumption that is maintained by the linked calculator (where the confusion is further compounded by using the outdated GB = 2^30 B instead of GB = 10^9 B; see GiB on Wikipedia for elaboration).

File transfers will never run at full wire speed because of (a) link overhead -- wire speed does not account for protocol overhead, and even TCP/IP has overhead; (b) file management overhead; (c) security overhead; (d) file transfer protocol overhead; (e) file system performance; and (f) the network itself rarely running at full wire speed.

In GbE, (e) is normally the dominant factor ("bottleneck"), whereas at 100 Mb/s, the network speed is normally the dominant factor. Even when (e) is minimized using high-end RAID arrays and such, factors (a) to (d) reduce the effective transfer rates well under the idealized wire speed, even when that's nearly achieved by the network subsystem.
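To put a number on factor (a) alone, here is a rough per-frame overhead budget for TCP over Gigabit Ethernet with a standard 1500-byte MTU (a sketch assuming no IP/TCP options and no jumbo frames):

```python
# Per-frame overhead of TCP over Gigabit Ethernet, 1500-byte MTU.
# Even before file-system and application overheads, wire speed is
# not fully available to payload.

MTU = 1500
ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap
IP_HDR, TCP_HDR = 20, 20        # assumes no IP/TCP options

payload = MTU - IP_HDR - TCP_HDR  # bytes of file data per frame
on_wire = MTU + ETH_OVERHEAD      # bytes consumed on the wire per frame

efficiency = payload / on_wire
print(f"TCP payload efficiency: {efficiency:.1%}")
print(f"Best-case payload rate: {efficiency * 125:.1f} MB/s of 125 MB/s wire speed")
```

So even a perfect GbE link tops out around 118-119 MB/s of actual file data; factors (b) through (e) only pull the real number lower.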

It is tempting for the beginner to assume that the network's the problem here, whereas in practice it generally isn't. A way to tell if the network's a problem is to use a pure networking benchmarking tool, such as iperf.

E.g.

Server: iperf -s
Client: iperf -c server -l 60000 -t 30 -i 3 -r

1.25 GB / 30 s ~ 41.7 MB/s, which is "normal". Note also that transfer speed will be affected by the client drive speed, for which 30-40 MB/s average transfer rates are typical. Higher speeds can be achieved with fast drives in outer sectors, etc., together with good networking.
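The point about the drive can be captured in a toy bottleneck model (a sketch; the stage rates below are illustrative assumptions, not measurements from this thread):

```python
# A file transfer runs at the speed of its slowest stage: network,
# source drive, or destination drive. Stage rates here are
# hypothetical illustrations (MB = 10^6 bytes).

def transfer_seconds(size_bytes, *stage_rates_mb_s):
    """Time for a transfer limited by its slowest stage."""
    bottleneck = min(stage_rates_mb_s)
    return size_bytes / (bottleneck * 1e6)

size = 1.25e9  # the 1.25 GB file from above
# assumed rates: network ~118 MB/s, server array ~80 MB/s, client drive ~42 MB/s
t = transfer_seconds(size, 118, 80, 42)
print(f"{t:.1f} s")  # the client drive dominates, matching the ~30 s observation
```

Upgrading the network moves the first number; it does nothing for the other two.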

Cat 5e is not intended to give better range than Cat 5 for GbE; GbE was designed with Cat 5 in mind. However, Cat 5e tightened up the specs and is generally recommended. You should be able to get around 100 feet easily with Cat 5 in theory; however, this is practice, not theory, and actual tests and measurements > theory.

Use something like iperf to check this out, and don't worry about tweaking the network endlessly when you have bottlenecks at the drive level further compounded by concurrent user load, etc.