
Optimum Small Business File Server?

February 5, 2007 7:19:05 PM

Hello all! I need some advice about our network setup. Here's my situation:

Small business, only 5 currently networked computers, each running SolidWorks locally (P4 3 GHz, 2 GB memory). SolidWorks files are quite large; assemblies can exceed 500 MB.

All files are stored on a "server" (basically just a leftover computer). Every time someone opens a SolidWorks file, it pulls the files from the server.

10/100 Ethernet (low-end NICs in each computer, really old switches); not sure what cable type.

Stock pitiful hard drives, etc.

My question to all of you: what would be a good setup for a new server? Currently the time it takes to open files is painful. Upgrading the cabling and NICs is easy. I've got about $1,600 to work with.

What operating system? (Everyone likes XP Pro; I really don't want to hear them complain about moving to a Windows Server edition.)

Memory?

Hard drives? (SATA? What RAID config?)

NICs? (Any chipsets to stay away from? Is gigabit worth it?)

Cabling? (I'd like some future headroom.)

Switches?

I'd be grateful for anyone's input. Thanks.
February 5, 2007 10:42:01 PM

You only need an SMB NAS server for file storage.

I'm afraid your bottleneck is your network.

Ideally I would use an 8-port gigabit switch like a D-Link DGS-108T. Install a gigabit NIC in each PC; you won't need anything high-end, so a $30 card will work fine and should deliver ~30 MB/s speeds. If you have Cat 5e cabling, it will work without rewiring your current system.

Depending on age, most desktop hard drives can deliver around 50-70 MB/s. Connect all the PCs and your NAS server to the switch so every PC can transfer at full gigabit speed. All of the NICs plus the switch should come to less than $300.

For a high-speed NAS, look for something like a SnapAppliance (now Adaptec) 4200, 4500, or newer 500-series NAS server. New, these will overrun your budget. The 4500 has dual gigabit ports, tops out around 280 MB/s, and maintains around 250 MB/s with 12 users. The average speed for a gigabit NIC is ~25-30 MB/s, while a typical SMB file transfer on a Windows PC over a 100BASE-T network is ~6-8 MB/s, so the gigabit network will boost your transfer speed from about 8 to 30 MB/s (roughly 4x). Some of the higher-end NICs can deliver ~50-80 MB/s, but those require a lot of tuning.

I hope this explains what and where your bottleneck is. Rough FTP speed ceilings:
100BASE-T: 12.5 MB/s raw, minus overhead ≈ 10.5 MB/s
1000BASE-T: 125 MB/s raw, minus overhead ≈ 80 MB/s
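
To put those figures in perspective for the ~500 MB assemblies in the question, here is a quick back-of-the-envelope sketch (plain Python; the speeds are just the numbers quoted above, not fresh measurements):

assembly_mb = 500  # size of a large SolidWorks assembly, per the question

scenarios = {
    "100 Mb net, typical SMB copy": 8.0,   # ~6-8 MB/s, quoted above
    "Gigabit net, cheap NIC, SMB": 30.0,   # ~25-30 MB/s, quoted above
    "100BASE-T FTP ceiling": 10.5,         # 12.5 MB/s raw minus overhead
    "1000BASE-T FTP ceiling": 80.0,        # 125 MB/s raw minus overhead
}

for label, mb_per_s in scenarios.items():
    print(f"{label:30s} ~{assembly_mb / mb_per_s:4.0f} s to open a {assembly_mb} MB file")

So the same assembly that takes over a minute to open today should open in under 20 seconds on gigabit.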
February 6, 2007 11:41:56 AM

I agree with Blue that your network is your current bottleneck. You could upgrade the network without changing the actual server and see where you stand. You'll want to do this anyway, so the only potentially wasted money would be a new NIC for the server PC, if you choose not to keep using it. That is not much money at risk.

Then, if you are comfortable with Linux, you can convert your old computer into a pretty good high-performance server. Your users won't know or care that it is running Linux. I don't know if your office can tolerate the disruption while you make this transition.

If you have the bucks, a business-oriented server like Blue recommended is the way to go. Easier on you, too.

You can try an in-between solution, such as a less expensive small-office NAS device. On the low end of this range would be the Buffalo LinkStation Pro, in the range of $250 or so (depending on hard drive size). There are a number of multi-drive RAID boxes in the $600-900 price range. Check out T. Higgins' NAS charts for more information. If you go this way, pay attention to the actual performance of the NAS on a gigabit network; some of these devices claim to support gigabit but show little or no performance improvement on the faster network.
February 6, 2007 8:51:25 PM

Be sure to have the server designed properly for I/O. In particular, don't put everything on a 32-bit/33 MHz PCI bus and expect awesome performance. Allocate enough RAM for a nice file cache to help things out. Gigabit workloads can also put strain on the CPU, so don't be tempted to lowball that too far. You don't have to get a supercomputer, but know that a server with an embedded low-power CPU, as on typical consumer NAS boxes, will take a performance hit.
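
To see why the shared PCI bus matters, here is the rough arithmetic (a Python sketch; the disk figure is the ballpark 7200 RPM number used elsewhere in this thread, not a spec):

# All devices on a 32-bit / 33 MHz PCI bus share one pool of bandwidth,
# and a served file crosses the bus twice if both the disk controller and
# the NIC sit on it (disk -> RAM, then RAM -> NIC).
pci_bus_budget = 32 * 33e6 / 8 / 1e6   # ~132 MB/s, shared
gige_wire = 125                        # 1000 Mb/s expressed in MB/s
disk_read = 60                         # rough single-drive sequential read

print(f"Shared PCI budget: ~{pci_bus_budget:.0f} MB/s")
print(f"NIC + disk demand: ~{gige_wire + disk_read} MB/s (the bus saturates before the network does)")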

I recommend Intel NICs, especially if you must use PCI. If the board already has a non-PCI on-board or PCIe NIC, it's generally not worth dropping in an add-on NIC, particularly a PCI one, as that can introduce a new bottleneck instead of removing one. Gigabit, absolutely. Pre-made Cat 5e or better cabling is recommended.

An HP ProCurve 1800-8G might be a good switch for your needs. A Dell 2716 is another affordable alternative. You might be able to get by with a lower-end desktop switch such as a Netgear GS108.

A good RAID 5 implementation would probably be best, but it can eat up a good part of your budget if you're not careful. Be sure to have a good backup practice in place as well (perhaps your existing server can help with this); no RAID implementation is a sufficient replacement for a backup. Areca and 3ware are well-regarded controller brands. Note that modern hard drives are rapidly gaining capacity and performance while holding $/GB steady, so you might save money with a smaller set of large hard drives and a matching smaller RAID controller.

Looking back, most of the higher-end storage controllers have been PCI/PCI-X based, which requires a server motherboard. Looking forward, storage controllers are showing up in PCIe form. You can go PCI in a pinch, especially if your NIC is not also on the same PCI bus, but it's best to design for bandwidth and the future, staying off the PCI bus if possible.
February 7, 2007 2:00:19 PM

Thanks guys. I'm going to look into the NAS solution as I think this might be the easier option.

But I was wondering: if I do go with a standalone server, what is the advantage of an operating system like Windows Server 2003? That type of OS is only for those needing to "regulate" a network, right? There wouldn't be any speed increase over XP Pro, would there?
February 7, 2007 4:21:04 PM

There are several advantages to using software like Windows Server, and they mostly have to do with managing an enterprise network.

With Windows Server, your users are set up as members of a domain, rather than merely users on an individual computer. You set user privilege at the domain level, rather than individually on each computer. Managing account policies and network security is easier. When a user logs on, he is actually logging onto the domain server in addition to his local computer. In fact, he can log onto the domain from any computer, not just his own computer. Handling security on shared folders is easier since a single username / password applies across the domain for each user.

It will not be faster as a file server, though.
February 7, 2007 9:45:41 PM

Unless you make some network changes, you may not see an increase in speed. A simple gigabit switch feeding the server, even with 10/100 clients, will fix your bottleneck.
February 8, 2007 4:08:34 PM

Thanks for all the advice guys.

Could you guys take a look at what I'm thinking of doing?...

Decided to go with a workstation/server with the following specs (just a customized Dell PowerEdge):

Processor: dual-core Intel Xeon 550, 3.0 GHz, 667 MHz FSB
Memory: 2 GB
Hard drive config: RAID 1, two 7,200 RPM 160 GB SATA II drives. I'm actually torn between RAID 0 and 1 because of our redundant backup requirements; see below.
HD controller: SAS 5iR internal RAID controller
Network adapter: on-board single gigabit (there is an option for an Intel PRO/1000 PT PCI Express single-port server adapter; is it worth the extra $139?)
OS: Windows Vista Ultimate OEM (this should communicate fine with the other computers running XP)

Switches/NICs:

Dell PowerConnect 2716 16-port 10/100/1000 or the Netgear ProSafe JGS516 (need to find out if both use RJ45 ports). Thanks for the suggestions.
PCI Express x1 Ethernet cards for the client computers (they have PCIe x1 slots free).
Wiring: still need to find out what we have; the visible portion is easy, but the stuff behind the wall face plates...?

Hot-swappable hard drive backup option (the president of the company really wants this):
Currently we have an automated backup hard drive connected to the computer.
We also have the interns burn DVD backups once a week.
Now I'm going to add a SATA II hot-swappable hard drive backup to the computer (put a drive in, back it up, take it offsite, use a different drive the next day, etc.; a rough sketch of the rotation follows below).
So with all these backups, I'm really starting to lean towards running the server drives in RAID 0 (I wouldn't be all that concerned about losing a drive; losing a day's worth of work could be acceptable). Do you think this would yield a speed increase?
If not, I'm more than happy to leave it RAID 1. (I'd prefer RAID 5, but I've been pushing my limits on price; I'm already over budget, though I'm pretty sure I can get them to bend.)
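
Here's a minimal Python sketch of that daily rotation idea, assuming hypothetical paths (D:\SolidWorks for the data and E:\ for whatever letter the hot-swap bay mounts as); a real setup would probably use whatever backup tool ships with the OS:

import datetime
import pathlib
import shutil

SOURCE = pathlib.Path(r"D:\SolidWorks")          # served project files (assumed path)
DEST_ROOT = pathlib.Path(r"E:\offsite-backup")   # today's hot-swap drive (assumed letter)

def run_backup() -> None:
    stamp = datetime.date.today().isoformat()    # e.g. 2007-02-08
    dest = DEST_ROOT / stamp
    # A full copy per swap: whichever drive goes offsite holds a complete,
    # browsable snapshot of that day's files.
    shutil.copytree(SOURCE, dest)
    print(f"Backed up {SOURCE} -> {dest}")

if __name__ == "__main__":
    run_backup()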

Comments? Suggestions?

Thanks.
February 8, 2007 4:31:00 PM

Good job coming up to speed on this stuff.

One concern I didn't mention earlier is drive cooling. Drives don't need anything like what CPUs do, but they should have a bit of airflow. Hopefully the PowerEdge box has been designed with this in mind and either passively feeds intake air across the drives or, better yet, has active fans. This is especially important with SCSI/SAS drives, so I'd expect a good server box to have it built into the design.

You might consider sourcing your drives elsewhere. OEM drives generally have good warranties and are often priced better than what Dell's web site charges when they're configured into a machine.

RAID 0 for the OS is not recommended. You should keep the OS as simple and clean as possible, and off RAID arrays if you can. RAID 1 is close enough to a plain drive to get by, and it also gives you some uptime guarantee, so it's OK from this perspective.

RAID 5 should be fine for performance with a decent build, and in this case the added benefit of redundancy probably far outweighs any marginal performance gains that might be possible. The gigabit network is going to be the bottleneck here, so there's a hard limit on how far the storage array can take you. SAS or even Raptor SATA drives can give you some benefit on random access, which does matter for multi-user loads, although at high cost.

Also consider your downtime costs. What do the users cost per hour or per day? How much of their time would be wasted during an outage? This is a standard concern when building servers and networks, and it's what justifies uptime-preservation features such as RAID and other redundant hardware.
February 8, 2007 4:37:12 PM

As you've noted, RAID 0 offers no redundancy. In fact, it is worse than that: the array's MTBF is half that of a single drive (assuming a 2-drive RAID 0 configuration), since if either drive fails, you've lost access to all the data on both drives.

Whether or not performance improves depends on the configuration and use of the array. Basically, you'll want the "stripe" size in the RAID 0 configuration to be at least as large as the largest file being copied down to a workstation, so that each file copy touches only one of the RAID 0 drives, leaving the other drive free to serve the next user's file. Ideally, the throughput of a 2-drive RAID 0 configuration is twice that of a single drive, but the ideal can be tough to achieve, depending on drive usage, file sizes, and stripe sizes.

Your hot-swappable backup drives will need to be twice the size of each drive in your RAID 0 array in order to hold the complete backup on one drive.
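
Here's the sizing and failure arithmetic for the two-drive choices being discussed, as a quick Python sketch (the 5% annual failure rate is purely illustrative, not a drive spec):

drive_gb = 160
n = 2
p = 0.05                        # assumed chance one drive dies in a year

raid0_loss = 1 - (1 - p) ** n   # either drive failing kills the array (~9.8%)
raid1_loss = p ** n             # both drives must fail, ignoring rebuild windows (~0.25%)

print(f"RAID 0: {n * drive_gb} GB usable, ~{raid0_loss:.1%}/yr chance of data loss")
print(f"RAID 1: {drive_gb} GB usable, ~{raid1_loss:.2%}/yr chance of data loss")
print(f"Offsite drive must hold {n * drive_gb} GB for RAID 0, only {drive_gb} GB for RAID 1")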
February 8, 2007 4:43:09 PM

A couple of other points to consider. With RAID 0, your Dell would have 320 GB of storage; with RAID 1, 160 GB.

RAID 1 can actually outperform RAID 0 on reads, without the fussiness of making sure everything is sized "just so," but to do this your RAID controller needs to be able to access each mirror drive independently during read operations. I don't know whether the Dell controller in your proposed system allows this.
February 8, 2007 4:49:58 PM

OK, another complex feature to consider: multiple teamed NICs on the server. The Dell 2716 supports a limited form of NIC teaming. The idea is that the server has multiple GbE NICs/ports, managed by NIC teaming software (usually provided by the NIC vendor; Intel, Broadcom, and SysKonnect/Marvell all offer it, for example). You then configure the managed switch so that it knows a certain bunch of ports belongs to the same team (a Link Aggregation Group). Then, when different clients request data at the same time, one request can be served from one NIC and another from a second. Wow, cool stuff! If only it worked perfectly every time...

Note that this is typically useless for single-user workloads: no single user can get more than a single NIC's worth of bandwidth. Additional bottlenecks also come into play.
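
To make that single-user limitation concrete, here is a toy Python sketch of how a link aggregation group assigns traffic (real switches hash on MAC/IP pairs per 802.3ad; the addresses below are made up):

import zlib

def pick_link(client_ip: str, server_ip: str, n_links: int) -> int:
    """Deterministic flow-to-link mapping, like a switch's LAG hash."""
    return zlib.crc32(f"{client_ip}-{server_ip}".encode()) % n_links

server = "192.168.1.2"  # hypothetical server address
for client in ("192.168.1.11", "192.168.1.12", "192.168.1.13", "192.168.1.14"):
    print(f"{client} -> server NIC {pick_link(client, server, n_links=2)}")
# Different clients spread across the team, but any one client always
# lands on the same single NIC, capped at that NIC's speed.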

This may be overkill for you and not worth the hassle of getting right (the Dell switch's implementation could be less than great, you need driver support, this stuff is pretty hard to test, and it might stress your already-stressed budget), but I thought I'd mention it FYI.

Edit: A more typical justification for multiple NICs on servers is redundancy/failover. That comes "for free" along with the bandwidth improvement and helps in the rare cases where a single NIC, cable, or switch port fails.
February 8, 2007 8:42:54 PM

Also, if the server is more critical than the machines doing the work (the clients), I would hold off on installing Vista (if they give you the license, keep it, but install XP anyway, using the license from your old server) and settle for XP for the moment, at least until Vista SP1 comes out.

Regarding RAID 0/1: performance-wise I would go for RAID 1 (reads can be served as if from two drives), unless your office autosaves at annoyingly frequent intervals, in which case you may see better performance from RAID 0.

Also consider a single 320 GB HDD instead of a RAID 1 pair of 160 GB HDDs, if the price of the single drive is low enough.

Also, if you go for RAID 0, then when a drive fails you lose more than one day of work, because it takes time to redeploy the system to its initial state (unless you back up the entire drive image including the OS, which adds several GB to each backup).
February 9, 2007 11:29:09 AM

OK, so I have some stuff to look at.

Multiple teamed NICs sound interesting; I'm going to have to look into that. They make dual GbE port cards, right? I'll dig into it when I have some free time (this is a side project on top of my regular work).

I'm also going to try to find out more about the Dell SAS/SATA controller.

Good point about looking for drives elsewhere. That's actually what I was planning to do after I got the computer but before I installed the OS. I have no idea what make the Dell drives will be or how good they are, and I have had really good luck with Newegg OEM drives. Maybe a couple of Raptors...

What if I install the operating system on a single drive, then run RAID 1 on the secondary drive(s) that hold all our SolidWorks files? Would separating the OS and data drives help performance at all?
February 9, 2007 5:30:59 PM

Quote:
What if I install the operating system on a single drive, then run RAID 1 on the secondary drive(s) that hold all our SolidWorks files? Would separating the OS and data drives help performance at all?
Sounds like a very good idea. It should improve performance of the served drives as well as make backups simpler.