Reader's Voice: Building Your Own File Server


wuzy

Distinguished
Jun 1, 2009
900
0
19,010
Yet again, why is this article written so unprofessionally (and by an author I've never heard of)? Any facts or numbers given are just so vague! It's vague because the author has no real technical knowledge behind this article and is relying mainly on experience instead. That is not good journalism for a tech site.

By experience I don't mean self-learning; I mean developing your own ideas without doing extensive research into every technical aspect for the specific purpose.
 

wuzy

Distinguished
Jun 1, 2009
900
0
19,010
And even if this is just a "Reader's Voice," I'd expect a minimum standard to be set by BoM for the articles they publish on their website.
Most IT professionals I have come to recognise in the Storage forum (including myself) could write a far higher-caliber article than this.
 

motionridr8

Distinguished
Jul 24, 2009
1
0
18,510
FreeNAS? Runs FreeBSD. Supports RAID. It includes tons of other features that, yes, you can get working in a Linux build, but here they all work with just the click of a box in a sleek web interface: an iTunes DAAP server, SMB shares, AFP shares, FTP, SSH, a UPnP server, rsync, and a power daemon, just to name some. It installs on a 64MB USB stick. Mine has been running 24/7 for over a year without a single problem, and it's designed to work with legacy or new hardware. I can't recommend anything else. www.freenas.org
 

bravesirrobin

Distinguished
May 1, 2008
12
0
18,510
I've been thinking on and off about building my own NAS for around a year now. While this article is a decent overview of how Jeff builds his NASes, I also find it dancing with vagueness as I try to narrow my parts search. Are you really suggesting we use PCI-X server motherboards? Why (besides the fact that their bandwidth is separate from normal PCI lanes)? PCI Express has that same upside and is much more available on a common motherboard.

You explain the basic difference between fakeRAID and "real RAID" adequately, but why should I purchase a controller card at all? Motherboards have about six SATA ports, which is enough for your rig on page five. Since your builds are dual-CPU server machines to handle parity and RAID building, am I to assume you're not using a "real RAID" card that does the XOR calculations sans CPU? (HBA = Host Bus Adapter?)

Also, why must your RAID cards support JBOD? You seem to prefer a RAID 5/6 setup. You lost me COMPLETELY there, unless you want to JBOD your OS disk and have the rest in a RAID? In that case, can't you just plug your OS disk into a motherboard SATA port and the rest of the drives into the controller?

And about the CPU: do I really need two of them? You advise "a slow, cheap Phenom II", yet the entire story praises a board hosting two CPUs. Do I need one or two of these Phenoms -- isn't a nice quad core better than two separate dual core chips in terms of price and heat? What if I used a real RAID card to offload the calculations? Then I could use just one dual core chip, right? Or even a nice Conroe-L or Athlon single core?

Finally, no mention of the FreeNAS operating system? I've heard about installing that on a CF reader so I wouldn't need an extra hard drive to store the OS. Is that better/worse than using "any recent Linux" distro? I'm no Linux genius so I was hoping an OS that's tailored to hosting a NAS would help me out instead of learning how to bend a full blown Linux OS to serve my NAS needs. This article didn't really answer any of my first-build NAS questions. :(

Thanks for the tip about ECC memory, though. I'll do some price comparisons with those modules.
 

ionoxx

Distinguished
Jul 24, 2009
2
0
18,510
I find that there is really no need for dual-core processors in a file server. As long as you have a RAID card capable of doing its own XOR calculations for parity, all you need is the most energy-efficient processor available. My file server at home runs a single-core Intel Celeron 420, and I have five WD7500AAKS drives plugged into a HighPoint RocketRAID 2320. I copy over my gigabit network at speeds of up to 65MB/s. Idle, my power consumption is 105W, and I can't imagine load being much higher. Though I have to say, my Celeron barely makes the cut: CPU usage goes up to 70% during network transfers, and my switch doesn't support jumbo frames.
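
Here's rough math on why jumbo frames would take pressure off the CPU; a sketch with illustrative numbers, not measurements:

[code]
# Back-of-the-envelope frame-rate math. More bytes per frame means
# fewer frames (and fewer per-packet interrupts) for the same
# throughput -- which is why jumbo frames ease CPU load.
throughput_bytes_s = 65 * 10**6     # the observed 65MB/s transfer

for mtu in (1500, 9000):            # standard vs. common jumbo payload
    frames = throughput_bytes_s / mtu
    print(f"MTU {mtu}: ~{frames:,.0f} frames/s")

# MTU 1500: ~43,333 frames/s
# MTU 9000: ~7,222 frames/s
[/code]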
 

raptor550

Distinguished
Mar 6, 2009
34
0
18,530
Ummm... I appreciate the article but it might be more useful if it were written by someone with more practical and technical knowledge, no offense. I agree with wuzy and brave.

Seriously, what is this talk about PCI-X and ECC? PCI-X is rare and outdated, and ECC is useless and expensive. And dual CPUs are not an option; remember, electricity gets expensive when you're talking 24x7. Get a cheap low-power CPU with a full-featured board and 6 HDDs, and you're good to go for much cheaper.

Also your servers are embarrassing.
 

icepick314

Distinguished
Jul 24, 2002
705
0
18,990
Has anyone tried NAS software such as FreeNAS?

And I'm worried about RAID 5/6 becoming obsolete because hard drives are getting so large that error recovery is almost impossible to complete when one of the drives dies, especially the 1TB-sized ones...

I've heard RAID 10 is a must now that 1-2TB hard drives are becoming more common...

Also, can you write up the pros and cons of the slower 1.5-2TB eco-friendly hard drives that are becoming popular due to their low power consumption and heat generation?

Thanks for the great beginner's guide to building your own file server...
 

icepick314

Distinguished
Jul 24, 2002
705
0
18,990
Also, what are the pros and cons of using the motherboard's own RAID controller versus a dedicated RAID controller card, whether with a single-core or multi-core processor or even multiple CPUs?

Most decent motherboards have RAID support built in, but I think most support just RAID 5, 6, or JBOD....
 

Lans

Distinguished
Oct 22, 2007
46
0
18,530
I like the fact the topic is being brought up and discussed but I seriously think the article needs to be expanded and cover a lot more details/alternative setup.

For a long time I had hardware RAID-5 with 4 disks (PCI-X) on dual Athlon MP 1.2GHz CPUs with 2GB of ECC RAM (a Tyan board; I forget the exact model). With hardware RAID-5, I don't think you need such powerful CPUs. If I recall, the RAID controller cost about as much as the four fairly cheap drives (smaller drives, since I was doing RAID and didn't need THAT much space; the array was at most 50% full for the life of the server, and I also wanted to limit cost a bit).

Then I decided all I really needed was a Pentium 3 with just one large disk (less reliable, but good enough for what I needed).

For the past year or so I have not had a fileserver up, but I'm planning to rebuild a very low-powered one. I was eyeing the SheevaPlug kind of thing, or maybe even a wireless router with USB storage support (Asus has a few models like that).

Just to show how wide this topic is... :)
 

werfu

Distinguished
Sep 27, 2008
54
0
18,630
Actually, file serving doesn't require much CPU power unless you're using software RAID-5. A dual P3 900MHz is more than enough CPU power. However, on this kind of hardware you're right to worry about southbridge interconnect speed, which can limit you greatly. Using a PCI-X SATA or SAS adapter should guarantee you good performance. Just leave plain PCI alone, even in a modern system.

I think you should compare software RAID vs. hardware RAID a bit more; this can greatly change the way someone builds his server. Myself, I would not use hardware RAID, as cards with multiple (4+) SATA connectors sporting RAID-5 are way overpriced for the performance I would want to achieve. A home file server is not expected to match an enterprise-class NAS. And even with an enterprise-class server, you need an enterprise-class network switch and good CAT6 cable to link everything together.

I think a home server should be a bit more than a file server. Don't aim for the most powerful build; you're better off with more reasonable performance, which will be way cheaper, leaving you more cash to buy greater hard drive capacity. One could buy a cheap AM2+ motherboard with 6+ SATA connectors, get a low-power Phenom (the 905e), invest in 2GB of DDR2 ECC (667MHz would be OK), and set up a nice home server capable of being both a file server and a nice back-end for MythTV.

I'm building my home server from an old dual-P3 1.23GHz IBM server with 2.5GB of ECC SDRAM. I'm looking into replacing its two current SCSI-160 adapters with two 4-port SATA PCI-X cards for a reasonable price (less than a 6+ SATA connector mobo and CPU). I'll be running three VMs: my router, my Linux server, and my Windows server. That should be plenty for what I want. The only thing I haven't figured out yet is how to get my wireless card directly accessible from my router VM.
 

snarfies

Distinguished
Jan 15, 2009
56
0
18,630
I decided against any hardware RAID controller in my NAS setup. If the controller goes south, it renders the array useless unless you can reacquire the EXACT same controller, which may be impossible and/or expensive a few years after the fact. I just use FreeNAS software RAID.

And yeah, I agree with many of the above: PCI-X strikes me as a weird choice even if you were to still go with a hardware RAID controller.
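
That's the real win of software RAID: the array metadata lives on the disks themselves, so any machine that can see the drives can bring the array back. FreeNAS uses its own BSD tooling, but as a sketch of the same idea on a Linux box (assuming a standard mdadm array):

[code]
# Reassembling a Linux software RAID array after moving the disks to a
# new machine or controller. mdadm reads the RAID superblock stored on
# each drive, so no particular controller is required. Run as root.
import subprocess

subprocess.run(["mdadm", "--assemble", "--scan"], check=True)  # find and assemble arrays
subprocess.run(["cat", "/proc/mdstat"], check=True)            # show array status
[/code]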
 

bravesirrobin

Distinguished
May 1, 2008
12
0
18,510
How much RAM is ideal for a NAS? Surely not 4GB, as you won't be gaming or anything. But with Linux/BSD, can you get by with 1GB? 512MB?
 

jeffunit

Distinguished
May 19, 2008
117
0
18,680
I am the author.

> I also find it dancing with vagueness as I try to narrow my parts search. Are you really suggesting we use PCI-X server motherboards? Why (besides the fact that their bandwidth is separate from normal PCI lanes)? PCI Express has that same upside and is much more available on a common motherboard.

There are many possible choices when building a fileserver. I chose to use motherboards that I already had. Sure, PCI-E is faster, and if I were buying all new hardware I would use it. However, for those on a budget, $40 will get you a used motherboard with 2 CPUs and lots of PCI-X slots at
http://www.surpluscomputers.com/348725/accelertech-tsunami64-dual-amd-opteron.html

> You explain the basic difference between fakeRAID and "real RAID" adequately, but why should I purchase a controller card at all? Motherboards have about six SATA ports, which is enough for your rig on page five.

True, the rig on page 5 only has 6 hard drives, so a motherboard with 6 SATA ports would work. However, that motherboard has no usable SATA ports (its onboard controller doesn't support JBOD; see below). Additionally, I have since added a 7th drive, and the racks allow me to have up to 10 drives. Clearly, if the motherboard has enough ports you don't need a controller card, but I mentioned controller cards for those situations where the motherboard doesn't.

> Since your builds are dual-CPU server machines to handle parity and RAID building, am I to assume you're not using a "real RAID" card that does the XOR calculations sans CPU? (HBA = Host Bus Adapter?)

Correct. Real RAID cards start around $500 and can easily cost $1,000.

> Also, why must your RAID cards support JBOD? You seem to prefer a RAID 5/6 setup. You lost me COMPLETELY there, unless you want to JBOD your OS disk and have the rest in a RAID? In that case, can't you just plug your OS disk into a motherboard SATA port and the rest of the drives into the controller?

I said the controller card you buy has to support JBOD. The Rosewill card calls itself a RAID card (RAID 0 and 1), but it supports JBOD, so you can use it with software RAID. If a card doesn't support JBOD, you won't be able to use it with software RAID (like the SATA controller on the NCCH-DL). I have all the disks except the OS disk in software RAID.
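
To make that concrete, setting up software RAID-5 over JBOD disks on Linux looks roughly like this. A sketch only: the device names are placeholders, and these commands wipe the listed drives.

[code]
# Building a software RAID-5 array from disks a JBOD-capable controller
# exposes individually. Device names are assumptions; run as root.
import subprocess

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholder names

# Create the md device; the kernel computes the XOR parity on the CPU.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=5",
     f"--raid-devices={len(disks)}"] + disks,
    check=True,
)

subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)  # filesystem on the array
[/code]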

> And about the CPU: do I really need two of them? You advise "a slow, cheap Phenom II", yet the entire story praises a board hosting two CPUs. Do I need one or two of these Phenoms -- isn't a nice quad core better than two separate dual core chips in terms of price and heat?

When I said CPU, I meant processor; I didn't say two sockets. You want more than one processor so that one can do the RAID XOR calculations. For software RAID it doesn't matter whether the processors are in one socket or several. If you are buying new hardware, it is easy to get more than one processor in a single socket, which is why I recommended the Phenom II, with its 4 cores.

> What if I used a real RAID card to offload the calculations? Then I could use just one dual core chip, right? Or even a nice Conroe-L or Athlon single core?

Sure. The only problem is cost. A real RAID card will cost more than virtually any CPU. If my motherboard fries, I can get another one and plug in all of my drives; a new motherboard and memory will run me $100 to $200. If a RAID controller breaks, you will need to buy another one, which costs $500 - $1,000. There is much more flexibility with software RAID, and it is cheaper.

> Finally, no mention of the FreeNAS operating system? I've heard about installing that on a CF reader so I wouldn't need an extra hard drive to store the OS. Is that better/worse than using "any recent Linux" distro? I'm no Linux genius so I was hoping an OS that's tailored to hosting a NAS would help me out instead of learning how to bend a full blown Linux OS to serve my NAS needs.

It is certainly a viable option. It is a bit less flexible than a full Linux system, but much easier to set up, as you point out.

> Thanks for the tip about ECC memory, though. I'll do some price comparisons with those modules.

It isn't much more expensive.
 

Darkk

Distinguished
Oct 6, 2003
615
0
18,980
I have a custom-built fileserver running an 8-channel HighPoint RocketRAID 2320 SATA RAID controller with 8 disks, off a 380-watt power supply with an 80% efficiency rating: 3 x 1.5TB as one array and 3 x 320GB as another array. I did that for fault tolerance, so I don't lose everything all at once. I'm running Windows Server 2008 R2 on it since I have a TechNet subscription; yes, I could use Linux on it as most people would, but since I have TechNet I figured I might as well use it.

The remaining two 320GB drives, plugged directly into the motherboard's SATA ports, are software-mirrored on a 60GB partition (future space for Windows updates), and the remaining space is striped for junk. I'm using several different technologies in my fileserver and they all work very well.

I also have an external enclosure that holds two 1TB hard drives for backups of the RAID array. I'm surprised backups weren't even mentioned in the article, as if RAID 5 or 6 is all you need. Screwups do happen, which is why it's very important to have backups on a different set of media. I use Robocopy to make exact copies of my data, scheduled to run nightly, and I also use the Volume Shadow Copy Service (VSS), which lets me roll back any screwups of my data if I need to.
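
The nightly job itself is nothing exotic; roughly like this, kicked off by Task Scheduler (a sketch with placeholder paths):

[code]
# Nightly mirror of the array to the external backup enclosure.
# robocopy /MIR keeps the destination an exact copy of the source.
import subprocess

result = subprocess.run([
    "robocopy", r"D:\data", r"E:\backup\data",   # placeholder paths
    "/MIR",                   # mirror: copy changes, delete removed files
    "/R:2", "/W:5",           # retry failed files twice, 5 seconds apart
    r"/LOG:C:\logs\nightly.log",
])
# robocopy exit codes 0-7 mean success (1 = files copied); 8+ mean
# failure, so don't treat every nonzero code as an error.
if result.returncode >= 8:
    raise RuntimeError(f"backup failed, robocopy code {result.returncode}")
[/code]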

 

jeffunit

Distinguished
May 19, 2008
117
0
18,680
[citation][nom]icepick314[/nom]And I'm worried about RAID 5/6 becoming obsolete because hard drives are getting so large that error recovery is almost impossible to complete when one of the drives dies, especially the 1TB-sized ones... I've heard RAID 10 is a must now that 1-2TB hard drives are becoming more common... Also, can you write up the pros and cons of the slower 1.5-2TB eco-friendly hard drives that are becoming popular due to their low power consumption and heat generation? Thanks for the great beginner's guide to building your own file server...[/citation]

RAID 5 and RAID 6 aren't going to become obsolete even with big hard drives. Recovering from a bad hard drive isn't a big deal. RAID 10 uses almost twice as many drives as RAID 5 for the same capacity, and still only guarantees surviving 1 disk failure.

Sure, the new drives are bigger than the old ones, but they are also faster, so recovery doesn't take much longer than it did with old drives.
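
Rough numbers, with ballpark sequential speeds assumed for the drives:

[code]
# A rebuild is roughly one full sequential pass over the replacement
# drive, so estimate time as capacity / sequential speed. The speeds
# below are ballpark assumptions, not measurements.
def rebuild_hours(capacity_gb, seq_mb_s):
    return capacity_gb * 1000 / seq_mb_s / 3600

print(f"250GB @ 50MB/s : {rebuild_hours(250, 50):.1f} h")    # ~1.4 h, older drive
print(f"1TB @ 100MB/s  : {rebuild_hours(1000, 100):.1f} h")  # ~2.8 h, newer drive

# And the drive-count point, using 1TB disks:
def raid5_usable_tb(n):   # one disk's worth of parity
    return n - 1
def raid10_usable_tb(n):  # everything mirrored once
    return n // 2

print(raid5_usable_tb(5), "TB usable from 5 disks (RAID 5)")    # 4 TB
print(raid10_usable_tb(8), "TB usable from 8 disks (RAID 10)")  # 4 TB, ~twice the drives
[/code]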

I haven't used the eco-friendly low-power drives, but I have heard they don't have time-limited error recovery, so they can cause problems with RAID controllers or software. Generally, the newer drives use a bit less power than the old ones due to increased storage density.

 

jeffunit

Distinguished
May 19, 2008
117
0
18,680
[citation][nom]icepick314[/nom]Also, what are the pros and cons of using the motherboard's own RAID controller versus a dedicated RAID controller card, whether with a single-core or multi-core processor or even multiple CPUs? Most decent motherboards have RAID support built in, but I think most support just RAID 5, 6, or JBOD....[/citation]

I don't know of any motherboard that does RAID 5 or 6 and has an XOR engine. Without the XOR engine, the CPU does all of the work, and you might as well do it all in software.
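
For anyone wondering what that XOR work actually is, here it is in miniature (a toy sketch; real arrays do this per stripe across many disks):

[code]
# RAID-5 parity in miniature: parity is the XOR of the data blocks,
# and XOR-ing the survivors regenerates any one lost block.
a = bytes([0x12, 0x34, 0x56])                # block on disk 1
b = bytes([0xAB, 0xCD, 0xEF])                # block on disk 2
parity = bytes(x ^ y for x, y in zip(a, b))  # parity block on disk 3

# Disk 1 dies: rebuild its block from disk 2 and the parity.
rebuilt = bytes(x ^ y for x, y in zip(b, parity))
assert rebuilt == a                          # identical data recovered

# Doing this XOR on every write is the work a hardware XOR engine
# offloads; without one, a CPU core does it -- same as software RAID.
[/code]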
 

jeffunit

Distinguished
May 19, 2008
117
0
18,680
[citation][nom]Lans[/nom]I like the fact the topic is being brought up and discussed, but I seriously think the article needs to be expanded and cover a lot more details/alternative setups. ... I was eyeing the SheevaPlug kind of thing, or maybe even a wireless router with USB storage support (Asus has a few models like that). Just to show how wide this topic is... :)[/citation]

I agree. There are many options. I showed two examples of what I have done. The article was longer, but it was cut down a lot.

I have looked at the SheevaPlug. It is very cute, but it has only one gigabit Ethernet port and one USB 2.0 port. It could make a nice single-disk NAS, but it doesn't have the horsepower or bandwidth to do RAID 5 or 6. I plan on buying one to build a low-power BitTorrent server...
 

que3jxp

Distinguished
Sep 9, 2006
7
0
18,510
The only real choices for an OS on a home server/DIY NAS are Windows Home Server and FreeNAS; everything else is too convoluted to use. And to the comment about Windows Server being the "least secure and reliable": thanks for the FUD. All modern operating systems are plagued with bugs, and they have all been averaging about the same number of bugs per month over the last few years.

On the stability side, Windows can EASILY run for months without a restart if it were not for the odd required reboot from security patches. Calling Windows unstable is no better than saying that Mac OS X NEVER crashes.

Otherwise, the article is amateurish at best. I see the point of trying to explain why certain vintages of motherboard are less desirable, but really, we are talking about a server for home use, where I/O bottlenecks like this are not that big an issue for most people. And yes, hardware RAID is all nice in the business world, but it is a real pain in the home world.
 

jeffunit

Distinguished
May 19, 2008
117
0
18,680
[citation][nom]werfu[/nom]Actually, file serving doesn't require much CPU power unless you're using software RAID-5. A dual P3 900MHz is more than enough CPU power. However, on this kind of hardware you're right to worry about southbridge interconnect speed, which can limit you greatly. Using a PCI-X SATA or SAS adapter should guarantee you good performance. Just leave plain PCI alone, even in a modern system. ... And even with an enterprise-class server, you need an enterprise-class network switch and good CAT6 cable to link everything together. ... I'm building my home server from an old dual-P3 1.23GHz IBM server with 2.5GB of ECC SDRAM. I'm looking into replacing its two current SCSI-160 adapters with two 4-port SATA PCI-X cards for a reasonable price (less than a 6+ SATA connector mobo and CPU). I'll be running three VMs: my router, my Linux server, and my Windows server. That should be plenty for what I want.[/citation]

Simply using PCI-X doesn't automatically solve all your bandwidth problems. My Asus NCCH-DL with PCI-X has a 266MB/sec link between the PCI-X bus and the CPU, and I would bet your PIII setup has a similar limitation.
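
To put numbers on it (the per-drive speed is an assumption):

[code]
# Why the host link, not the PCI-X slot, sets the ceiling.
link_mb_s = 266       # PCI-X-to-chipset link on the NCCH-DL
drive_mb_s = 70       # assumed sequential speed of one drive
drives = 6

aggregate = drives * drive_mb_s       # 420 MB/s the disks could supply
ceiling = min(aggregate, link_mb_s)   # 266 MB/s actually deliverable
print(f"~{ceiling / drives:.0f} MB/s per drive when all six stream")  # ~44
[/code]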

You can use plain CAT-5 for gigabit Ethernet. I just bought a roll of CAT-5E, and it is so cheap there is no excuse to use anything less. Gigabit switches are also dirt cheap. You don't need 'enterprise' hardware any more.

Personally, I don't like putting lots of functionality on my fileserver, such as running VMs; I prefer using separate computers, as they are so cheap. But if I left my fileserver on 24x7, I might load it up with more functionality...
 

sub mesa

Distinguished
[citation][nom]jeffunit[/nom]Next are the various versions of BSD Linux: OpenBSD, FreeBSD, and others. They don't cost anything and are reasonably secure and reliable. The biggest limitation is they aren't as modern as Linux with respect to RAID support.[/citation]

This is rather curious, since BSD has the best-performing RAID-5 drivers I've seen so far. FreeBSD 8.0 also supports the latest ZFS version (13), just like OpenSolaris, and you don't have to use FUSE or any other userland wrapper, since it's a kernel implementation. The only thing I see BSD doesn't offer is traditional RAID-6 support. But it does offer ZFS with RAID-Z2, which is comparable or superior to RAID-6.

So you lose RAID-6 support, but you gain a lot, because FreeBSD has a very sleek storage framework known as GEOM. It's a framework that lets you play Lego with your disks: you can take GEOM modules like RAID 0 or journaling and connect them to each other in any combination you want. So you can have a chain that goes Disks -> RAID 0+1 -> Encryption -> Journaling -> Filesystem. Check out this Wikipedia page for a list of available GEOM modules:
http://en.wikipedia.org/wiki/GEOM

So I would disagree with the statement that FreeBSD is lacking in technology relevant to storage; it's more the other way around. Linux is more universal and widespread, with a lot of users and information on the web, while BSD offers access to the latest technologies, in part thanks to its license, which allows incorporation of Sun's DTrace and ZFS technologies. Due to the GPL license GNU/Linux uses, these technologies may not be directly incorporated into the kernel, and a separate kernel-userland interface has to be maintained so they run outside the kernel, as the FUSE (Filesystem in USErspace) project did.

It's also awkward that FreeNAS is not mentioned. It is based on FreeBSD but has a simple web GUI that any Windows user should be comfortable with; a simple way to try something other than Windows. You can even run it in a VM solution like the free VirtualBox to test its usefulness. FreeNAS 0.7 will also allow use of ZFS, albeit an older version (version 6). That brings ZFS technology within very close reach of casual computer users with no experience beyond Windows.

Also, a 64-bit CPU is pretty much required for ZFS, as well as at least 2GB of RAM, but preferably 4GB+. Multicore is great if you want to use live compression/encryption. Note that Linux/BSD do multiprocessing a lot better than Windows, so you'll want at least a dual-core. AMD CPUs do very well in NAS systems because of their low idle power consumption, low price, and good multicore and FPU performance, and the available chipsets are low-power and provide 6 full-speed SATA ports; the AMD 740G/760G/780G and nVidia GeForce 8200/8300 chipsets are the ones to look for. Motherboards with these chipsets often come in Micro-ATX format, which still gives you two PCI Express slots (the x16 and one x1) for expansion with PCI Express SATA controllers. So a self-built NAS shouldn't cost you more than 200 dollars for the bare system (CPU, memory, motherboard, case, cooling).
 

talys

Distinguished
Apr 2, 2009
42
0
18,530
jeffunit,

I do understand where you are coming from, and I do appreciate that your spec builds a very decent 5TB fileserver at a great price. I believe many enthusiasts build fileservers with spare parts, simply because on a small network a Quad Core Extreme is not really any different a file-sharing device than a repurposed P3.

Respectfully, though, I must disagree with your build.

I believe there are three primary destinations of file servers: home, small office, and corporate (server room). I disagree with your build for any of those markets.

My main concerns for a home server are noise, size, and power. I don't think most people will want a 6-drive full tower just to provide storage; the noise level, once you factor in the fans, is horrible. I would prefer a small ATX or SFF chassis with 1-2 very large hard drives, plus eSATA to add more storage if necessary.

A small office without a proper server room suffers from similar problems: someone who tucks a server into a (literal) closet can't generate too much heat. Cost is less of an issue, and other factors such as serviceability matter more. In this case, I would suggest a smaller chassis, fewer hard drives, and at least one spare hard drive. The likelihood of a hard disk failure in a multi-disk array, during the lifespan of the array, is high; have you ever tried to find a replacement Spinpoint F1 or RE2 after it's been discontinued?

I have rarely seen a small office that requires as much storage as you suggest. Usually, the purpose of a file server in a small office is centralized file storage (rather than space) and centralized backup.

An office with a properly air-conditioned and powered server room would almost certainly want a file server in a rackmount chassis. Here, noise is not a factor, but then, usually, neither is price. Assuming "value" is still important, I would suggest something like the Asus P5Q Premium motherboard with 4 onboard 1000TX ports, paired with an Adaptec SATA card.

Hard-drive-wise, I'm a big fan of the WD Green series. As I mentioned above, I can't overemphasize the importance of a spare hard disk drive safely tucked away; it will let you swap in a drive and send the old one in for repair.

In terms of operating system software, make sure you have all the file-sharing protocols you need (an office, for example, might need IPX or NetBEUI) and that there's remote desktop access. Make sure you know how to deal with replacing a failed RAID drive, and be aware of what you will do if the motherboard fails.

 