How do larger companies manage network speeds?

October 19, 2012 5:34:41 PM

At my company we only have about 10 people working on the network at any given time, running batch files, converting files, or just looking at images over the internal network. The difference in speed between 2, 6, and 10 people is so drastic that it has us constantly looking for solutions.

Since this is my day-to-day experience, how do larger companies with 100-1,000 people handle network loads on their LAN and WAN?

Is it just better servers, switches, cables, and NICs?

Because ours are kinda second rate. I did just put new NICs into one server, so instead of its integrated ports it's running off PCI cards, and now it's going really quick.

Is it just the hardware? Or are there some settings in the switch, router, or the PCs themselves that I'm missing?
October 19, 2012 5:48:41 PM

Network traffic is optimized as a whole, and bandwidth is increased.

Most workstations have gigabit wired NICs, the switches are enterprise class, and the backbone architecture is no slower than 1 Gbps at any location.
October 19, 2012 6:24:16 PM

A lot of times when people are making network speed comparisons, they're measuring against outside internet resources, such as browsing to a webpage or streaming a video online. In almost all situations, access to outside internet resources is your biggest bottleneck: a 6 Mbps DSL line is going to feel quite fast for a single user, but once you get ten people on that line it gets pretty slow handling everyone's requests.
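
Just to put rough numbers on that (the 6 Mbps line is only the example above, and real traffic is burstier than an even split), a quick back-of-the-envelope sketch:

# Per-user share of an internet line when it's divided evenly.
line_mbps = 6.0  # the example DSL line from above

for users in (1, 2, 6, 10):
    per_user = line_mbps / users
    print(f"{users:2d} users -> ~{per_user:.2f} Mbps each (~{per_user / 8 * 1000:.0f} KB/s)")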

If instead you're talking about network traffic just within your office, such as transferring files from Computer A to Computer B, or even a remote desktop session to the server, then this is a little different. It depends greatly upon your hardware and configuration: for instance, whether you are using 10/100 switches, cabling, and NICs on your computers, or whether you are gigabit end to end. I've been to places where they had all the right equipment, but things were plugged in the wrong way and devices were communicating with each other incorrectly. It was causing a ton of broadcast traffic, which slowed the entire network to a crawl.
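
One quick sanity check before blaming the wiring: see what link speed each PC actually negotiated. A minimal sketch, assuming the third-party psutil package is available on the machines (that's my assumption, not something from this thread):

import psutil  # third-party: pip install psutil

# Print the link speed each active interface actually negotiated.
# A gigabit NIC behind a 10/100 switch port or a marginal cable will
# report 100 (or even 10) here instead of 1000.
for name, stats in psutil.net_if_stats().items():
    if stats.isup:
        print(f"{name}: {stats.speed} Mbit/s, duplex={stats.duplex}")

Anything showing 100 on a card you believe is gigabit usually points at the switch port, the cable, or the driver settings rather than the card itself.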
October 22, 2012 6:46:53 PM

Our network is running on this switch:
http://reviews.cnet.com/switches/3com-baseline-switch-2...

It's about 5 years old. We use Cat5 and Cat5e UTP cable, and the NICs in the PCs may or may not be gigabit cards.

We just ordered a spool of Cat6 cable and more gigabit NICs, plus more RAM for the PCs themselves, since a lot of them are running Win7 64-bit with 3 GB of RAM.

Should we upgrade our switch, though? It's a layer 2 switch; would a layer 3 switch make a huge difference?

What is the difference between the 3com and, say, this one:

http://www.newegg.com/Product/Product.aspx?Item=N82E168...
October 22, 2012 7:33:01 PM

sdweim85 said:
Our network is running on this switch:
http://reviews.cnet.com/switches/3com-baseline-switch-2...

It's about 5 years old. We use Cat5 and Cat5e UTP cable, and the NICs in the PCs may or may not be gigabit cards.

We just ordered a spool of Cat6 cable and more gigabit NICs, plus more RAM for the PCs themselves, since a lot of them are running Win7 64-bit with 3 GB of RAM.

Should we upgrade our switch, though? It's a layer 2 switch; would a layer 3 switch make a huge difference?

What is the difference between the 3com and, say, this one:

http://www.newegg.com/Product/Product.aspx?Item=N82E168...


I'm also seeing now that our servers, which run Server 2003, can only support up to 4 GB of RAM, but the Enterprise editions go up to a terabyte. So I'm assuming all that RAM is there to support all the users of an enterprise.
October 22, 2012 8:07:48 PM

I don't really see your switch being the limiting factor for performance, and I do not believe that going to a layer 3 switch would give you any benefit at all. I do have a question, though, regarding your file access.

When you are all pulling files, are you in a workgroup environment where you are sharing files from every computer, or is everything being read from and saved to a central server? It could be that, as a greater number of employees make requests to a server, the server itself becomes the bottleneck for file performance. Here's what I mean:

Say you have a single file server hosting all of your files, and let's say they're mostly larger data files. If just a couple of people are sending requests back and forth to the server, it can handle those requests pretty easily without a huge demand being placed on its memory and hard disks. However, once several people start pulling data from the server at the same time, it takes more and more resources, and faster resources, to handle the demand efficiently.

It could be that your network infrastructure itself has plenty of headroom for the demand you are placing on it, which sounds like the case, and that it is possibly your individual workstations and most likely your server that is actually causing the decreased responsiveness during peak loads.
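
To see that effect for yourself, you could time the same file being pulled by one, three, and six "readers" at once and watch the per-reader speed drop. A rough sketch; the share path is only a placeholder:

import threading
import time

# Placeholder: a reasonably large file sitting on the shared server.
TEST_FILE = r"\\SERVER\share\big_test_file.dat"
CHUNK = 1024 * 1024  # read 1 MB at a time

def reader(results, idx):
    read = 0
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            read += len(data)
    results[idx] = read / (1024 * 1024) / (time.time() - start)  # MB/s

# Note: running every "reader" from one PC also loads that PC's own NIC;
# kicking the script off from a few PCs at the same time is closer to reality.
for count in (1, 3, 6):
    results = [0.0] * count
    threads = [threading.Thread(target=reader, args=(results, i)) for i in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{count} concurrent readers -> {sum(results) / count:.1f} MB/s each on average")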
October 23, 2012 2:16:59 PM

choucove said:
I don't really see your switch being the limiting factor for performance, and I do not believe that going to a layer 3 switch would give you any benefit at all. I do have a question, though, regarding your file access.

When you are all pulling files, are you in a workgroup environment where you are sharing files from every computer, or is everything being read from and saved to a central server? It could be that, as a greater number of employees make requests to a server, the server itself becomes the bottleneck for file performance. Here's what I mean:

Say you have a single file server hosting all of your files, and let's say they're mostly larger data files. If just a couple of people are sending requests back and forth to the server, it can handle those requests pretty easily without a huge demand being placed on its memory and hard disks. However, once several people start pulling data from the server at the same time, it takes more and more resources, and faster resources, to handle the demand efficiently.

It could be that your network infrastructure itself has plenty of headroom for the demand you are placing on it, which sounds like the case, and that it is possibly your individual workstations and most likely your server that is actually causing the decreased responsiveness during peak loads.


They all work directly off the server, reading and writing files to and from it.

That's the theory I'm going with now, because when 1-2 users are accessing files on the server it runs fine, but when 3-6 are using it, it gets extremely slow. The server only has 4 GB of RAM and is about 10 years old. We've done all the necessary network upgrades to gigabit thinking that was the issue, but I think you're right: we are definitely getting slowed down by our old servers now.
October 23, 2012 2:40:17 PM

Well, a 10-year-old server would cause issues.

Upgrading would not hurt.

A SAN or NAS sounds like the thing you need. You can get really small ones that are cheap and very effective for file storage.

One question to ask is whether you are using your file server as an application server as well; if so, replacing it would help too.
October 23, 2012 2:47:10 PM

spookyman said:
Well, a 10-year-old server would cause issues.

Upgrading would not hurt.

A SAN or NAS sounds like the thing you need. You can get really small ones that are cheap and very effective for file storage.

One question to ask is whether you are using your file server as an application server as well; if so, replacing it would help too.


We probably will end up replacing it. Since they are Server 2003 32-bit, they only support a max of 4 GB of RAM. I suppose we could get 2008 64-bit and go up to 32 GB of RAM.
October 23, 2012 2:47:39 PM

This could easily be a problem with the disk drives in the server. Are they in a RAID array or just stand-alone disks? What type of disks are you using in the server, and what RPM are they? How big are the files your company works with?
October 23, 2012 2:47:56 PM

If what you are looking for is the best efficiency when multiple end devices pull large files from your networked shared storage at once, then you're going to have to spend on something that offers better performance than a basic NAS device, so keep in mind it's not going to be cheap. A high-end NAS device can accommodate a large collection of enterprise-class hard drives, and for the greatest performance I'd recommend RAID 10 to get the benefit of redundancy along with speed.
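
As a quick illustration of the RAID 10 trade-off (mirrored pairs striped together, so you give up half the raw space in exchange for redundancy and speed), here's a tiny sketch; the drive count and size are made-up examples:

def raid10_usable_tb(drive_count, drive_size_tb):
    """Usable capacity of a RAID 10 array: half the raw space, since every
    drive has a mirror partner. Needs an even number of drives, four or more."""
    if drive_count < 4 or drive_count % 2:
        raise ValueError("RAID 10 needs an even number of drives, 4 or more")
    return drive_count * drive_size_tb / 2

# Example: eight 1 TB enterprise drives -> 8 TB raw, 4 TB usable,
# and the array survives one drive failure per mirrored pair.
print(raid10_usable_tb(8, 1.0))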

Stepping up to a full-fledged server system offers more flexibility: more standardized hardware for better upgradability and compatibility, more flexibility in software and server tasks, and in many cases better overall performance. A server system is going to cost you more than a NAS device, but when you start comparing the features of a high-end NAS to a file server with similar capabilities, the cost really isn't that much different. I normally recommend a file server over a NAS device for most customers as, in the end, it's easier to work on and upgrade if needed (throw another 8 GB of RAM into that machine to give it more responsiveness a couple of years down the road!) and you can do more with it, such as remote desktop, domain services, application hosting, and more.
October 23, 2012 2:48:14 PM

File servers don't need a lot of RAM.
October 23, 2012 4:04:25 PM

Jim_L9 said:
File servers don't need a lot of RAM.

Indeed. And you still don't know 100% whether all the computers have gigabit Ethernet or not. This reminds me of another thread here (http://www.tomshardware.com/forum/forum2.php?config=tom...) posted just a few days ago with an almost identical situation. You need to fix the network topology.

Unless you can monitor the usage activity on the server, you have no way of knowing whether it's even maxing out its memory. My guess is that it's not, and that what's limiting it is a single network card that might not even be gigabit. There are a lot of ways to optimize file sharing, for example by installing a $100 5-port gigabit card and allocating different communal stores to their own NICs (assuming those files are stored on separate high-speed drives), which increases overall network capacity, creates some load balancing, and raises the server's theoretical throughput.
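
Performance Monitor on the server will show this natively; if you would rather script it, here is a rough sketch using the third-party psutil package (an assumption on my part), run on the server during a busy stretch:

import time
import psutil  # third-party: pip install psutil

# Sample memory, disk, and network counters once per second for a minute.
prev_disk = psutil.disk_io_counters()
prev_net = psutil.net_io_counters()

for _ in range(60):
    time.sleep(1)
    mem = psutil.virtual_memory()
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()

    disk_mb = (disk.read_bytes + disk.write_bytes
               - prev_disk.read_bytes - prev_disk.write_bytes) / 1024 / 1024
    net_mbit = (net.bytes_sent + net.bytes_recv
                - prev_net.bytes_sent - prev_net.bytes_recv) * 8 / 1000000

    print(f"RAM {mem.percent:4.1f}%  disk {disk_mb:6.1f} MB/s  network {net_mbit:6.1f} Mbit/s")

    prev_disk, prev_net = disk, net

If memory sits well below 100% while the NIC or disks are pegged during the slow periods, more RAM is not the fix.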

None of these "fixes" requires a new server. You need to find the real cause and not just throw random hardware at the problem. Make a checklist: create a batch file that does dummy file transfers to your client computers, see which ones are slower than the others, and fix those. Monitor the hardware activity of the server, watching network and disk utilization specifically. If the server has not been upgraded since 2003, the drives are old and should have been replaced regardless; drives almost 10 years old should not be used to hold mission-critical files!
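
A rough sketch of that dummy-transfer check, in Python rather than a batch file; the client share paths are placeholders you would swap for your own machines (writing to the c$ admin shares needs admin rights, or use any share the clients already expose):

import os
import time

# Placeholder list of client shares to push a test file to.
CLIENTS = [r"\\PC01\c$\temp", r"\\PC02\c$\temp", r"\\PC03\c$\temp"]
SIZE_MB = 100
payload = os.urandom(1024 * 1024)  # 1 MB of random data per write

for share in CLIENTS:
    dest = os.path.join(share, "dummy_transfer.bin")
    start = time.time()
    with open(dest, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(payload)
    elapsed = time.time() - start
    os.remove(dest)
    print(f"{share}: ~{SIZE_MB * 8 / elapsed:.0f} Mbit/s")

# Clients that come in far below the rest are the ones to look at first
# (NIC, cable, duplex mismatch, or switch port).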

Lastly, are the systems up to date? Patches!

My 2 cents: if you're gonna fix things, do it right. Don't go guessing at the problem. Analyze, evaluate, solve.