Advice on building a ZFS-based file server

Cheers everyone,

I've been pressed for storage lately, and after looking at the options I decided to build one of those home-built file servers everyone seems to be talking about. I knew I wanted ZFS for its sheer reliability and other nice features (snapshots...), and, being a beginner at such things, I'll probably be using FreeNAS (haven't looked at OpenIndiana yet, to be honest).

It's going to be a 24/7 NAS standing about 5 meters from my bed, so a loud and power-hungry system is out of the question. It's going to serve media, i.e. mostly stream video via CIFS/NFS to 1-2 hosts at once, and act as a backup target for about 200-500 GB of important files. I'm also considering using it as a router, if the operating system can do that. All of that via GigE.

Anyway, I'd like some input on the hardware list and some other design considerations, so here we are.

Mainboard: Intel S1200KP. As ITX boards go, it's definitely on the pricey end, but dual Gigabit and ECC RAM are worth the price tag, imho.

CPU: Intel Pentium G620T or Core i3-2120T. The big advantage I see with the i3 is SMT, the question is: how hardware-hungry is ZFS really? Will the two virtual cores justify the higher price?

RAM: I'm torn here. I want 16 GB, again because of ZFS; however, 8 GB ECC sticks are pricey as hell. The cheapest I can find would be Kingston ValueRAM for around €138/stick (yes, I'm from Europe ;) ), which is hefty.

Case: Lian Li PC-Q25. Was considering the Fractal Array R2 when I came across this one. ITX case with 5 hot-swap drive bays? Do want.

PSU: Corsair CX500. Not because I love Corsair, but because it's one of the few that'll fit in there (I have yet to research the availability of Silverstone PSUs, some of them fit too, apparently).


For the OS, a 64 GB Crucial m4. 1€/GB for a SATA-6G SSD is low enough to consider (and I want to test them, because I'm thinking of putting one in my desktop).
For the storage, four 3TB Seagate Barracuda 7200.14. The equivalent WD Greens seem to have reliability issues after the Thailand floods, and server drives, which I'd love to use, are just too expensive. I'll order one more and put it in the cupboard, should one of them fail or be DOA.
I'm also considering throwing in a 20 GB Intel 313 (SLC) SSD for the ZIL. Question is if that's worth it, performance-wise.
Because of the lack of SATA ports on ITX boards, I'll have to get a card (either a controller or a port multiplier). I've read nice stuff about the Rosewill RC-211, but haven't been able to find it in my preferred online stores. Going to keep looking, though. I'd be happy for recommendations here, because I'd love to have a 4-port card. Another option would be the Intel SASUC8i, an 8-port SAS controller for 150€. (I'm guessing it'd have no problems with el cheapo SATA drives.)

As for the actual RAID setup, I'd favor RAID-10. It'll yield 6TB with four drives, and that should be enough for the moment. If it isn't at some point in the future, I can either expand or switch to RAID-Z. Alternatively Z2; the question here is performance versus safety.
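For my own sanity, the raw-capacity math for the layouts I'm weighing (marketing terabytes, four 3 TB drives, ignoring ZFS metadata/slop overhead; purely illustrative):

```shell
# Usable capacity of four 3 TB drives under the candidate ZFS layouts.
# Marketing TB, before ZFS metadata overhead.
n=4; size_tb=3
echo "striped mirrors (RAID-10): $(( n / 2 * size_tb )) TB"
echo "raidz  (single parity):    $(( (n - 1) * size_tb )) TB"
echo "raidz2 (double parity):    $(( (n - 2) * size_tb )) TB"
```

So with four drives, mirrors and Z2 come out the same; only single-parity Z buys extra space.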
  1. I would go with the Core i3 personally, and 8GB should be more than enough for ZFS. I see nothing wrong with your selection of drives; the Seagates aren't the fastest in the world (nor the slowest), but they should suffice. I wouldn't get the SLC drive personally, mostly due to cost and the mSATA requirement.

    As far as a controller goes, it's not something I would skimp on. If you have plans to do hardware RAID then you want a quality controller, and having the capability to handle the amount of data from all the various drives simultaneously is a plus as well. I would use an Areca ARC-1320-8I PCIe if you can afford it (alternate: 3ware 9650SE-4LPML). As far as RAID levels go, RAID-Z (RAID-5) and RAID-Z2 (RAID-6) have their strengths and weaknesses. RAID 5 has faster write performance than RAID 6, since 6 has higher parity overhead, but RAID 6 has greater fault tolerance. RAID 10 is better than RAID-Z in terms of speed, and the same in terms of drive failure tolerance, but as far as RAID 10 vs RAID 6 goes, it's speed vs safety. Additionally, RAID 5 and 6 have some issues that have people worried.

    PS: The board you are looking at has a different designation in the US and Europe, "Intel DBS1200KP".
  2. Well, I'm not planning on doing hardware RAID. The way I see it, it's best to let ZFS take full control of everything, and not have a controller doing fancy things with the drives as well. The reason I included one is because I need the additional ports, since ITX boards only have a maximum of four SATA ports on them. I'd guess a simple port multiplier would do the trick, though.
    Oh, and the 313 SSDs come in 2.5 inch too. But you're right, if I need it, I can always add it in later.
  3. Port multipliers for SATA are much cheaper, however if you have high drive I/O, you will be VERY bandwidth limited to 1 SATA lane for all attached drives.
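    To put rough numbers on it (assuming a SATA II uplink, ~300 MB/s usable after 8b/10b encoding; your lane speed may differ):

```shell
# All drives behind a SATA port multiplier share a single host lane.
# Assumed uplink: SATA II (3 Gbit/s), ~300 MB/s usable after encoding.
uplink_mbs=300
for drives in 2 4 5; do
    echo "$drives drives: ~$(( uplink_mbs / drives )) MB/s each under concurrent load"
done
```

Fine for streaming a movie or two; painful for a scrub or resilver that hammers all drives at once.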
  4. Yeah, that's true. :|

    Okay, looking at controller cards: I'm thinking about the Supermicro AOC-SASLP-MV8 or the Intel SASUC8I. The only real difference between the two appears to be a PCIe x8 interface on the Intel card. Compared to the ARC-1320-8i, the Intel card's a good 50€ cheaper. Thoughts?

    Edit: Also, any recommendations for a UPS? It should support that server and perhaps a switch and router (nothing fancy), and obviously give the server time enough to safely shut down.
  5. Go with the Intel over the Areca if price is a significant consideration, and/or you don't need an external SAS connector for an external enclosure (future expansion). I would avoid the Supermicro because I've heard of people having issues with the Marvell chipset.

    As far as a UPS goes, I don't trust any brand other than APC personally. I would go with at least a 1250VA unit, preferably the SmartUPS line if you can afford it. Otherwise the BackUPS line would work nicely as well.
  6. Hm, the external SAS would be a good argument to spend the extra 50. Going to sleep on that.

    Tbh, APC was the only UPS manufacturer on my radar anyway, so I'll go with them. The SmartUPS is way out of budget; I'll see if I can squeeze in a 1200VA BackUPS Pro. What's the difference between them, anyway? APC's website is remarkably unhelpful.

    And I just noticed I'm going to need a new Gigabit switch. This is going to get expensive. ._.
  7. Something to think about.

    There are a few differences:

    The SmartUPS is line-interactive, meaning it can tolerate continuous undervoltage brownouts and overvoltage surges without consuming the limited reserve battery power; it compensates by automatically selecting different power taps on the autotransformer. The second difference is that the SmartUPS line has environmental monitoring, out-of-band management, email alerts, and remote IP control. It also has a pure sine-wave DC/AC inverter, which is important if your power supply uses active PFC.

    The BackUPS and BackUPS Pro are cheaper because they use a cheaper "square wave" inverter which some power supplies and sensitive electronics don't care for.

    I suggest this one. If you plan on teaming NICs later on, or are looking for a managed switch, then I suggest this one or this one. All three support Jumbo Frames up to 9000, and have lifetime warranties.
  8. Sounds good. Alas, I can't shell out 1'000-2'000 € on a UPS just now, so I guess I'll be stuck with a BackUPS. :/

    I was looking at a Cisco 300 series switch (L3 capability, gigabit, fully managed). Alternatively, I might buy a used/refurbished data center switch off ebay or something as that would benefit my CCNA as well.
  9. Nothing wrong with the BackUPS, it's what I use.

    The 300 will not help you on the CCNA, it has no CLI and does not run Cisco's IOS. The lowest-end switches that are currently manufactured and covered in the CCNA are the Catalyst 2960, Catalyst 2960G, and Catalyst 2960S. The latter two are generally 24-port L2 gigabit switches; an 8-port version runs about $900. Additionally, the 300 supports L3 only as static routes, not routing protocols. It's fine to use as a switch, just don't expect it to help you with the CCNA.

    Depending on the number of ports you need, take a look on Ebay for a Foundry EdgeIron 24G-A.
  10. Yeah, I know it's somewhat crippled. A recent update gave it a CLI though (if a somewhat simplified one). That's why I said I might buy a "real" Cisco on ebay. ;)

    Hm, refurbished, 24x GigE, fully managed L2, I like it. I'm assuming it runs some IOS-alike OS and CLI, right?
    Which reminds me, I desperately need to get some console cables and expansion cards...
  11. Ah. Actually, if you're building a lab, it would be cheaper to buy older Fast Ethernet switches, since you need a couple of them for the CCNA.

    Very similar, not exactly the same of course:

    EdgeIron:

    Console(config)#interface ethernet 1/5
    Console(config-if)#speed-duplex 100half
    Console(config-if)#no negotiation

    Cisco IOS:

    Switch(config)#interface fastEthernet 0/5
    Switch(config-if)#speed 100
    Switch(config-if)#duplex full
  12. pineconez said:
    Yeah, that's true. :|

    Okay, looking at controller cards: I'm thinking about the Supermicro AOC-SASLP-MV8 or the Intel SASUC8I. The only real difference between the two appears to be a PCIe x8 interface on the Intel card. Compared to the ARC-1320-8i, the Intel card's a good 50€ cheaper. Thoughts?

    Edit: Also, any recommendations for a UPS? It should support that server and perhaps a switch and router (nothing fancy), and obviously give the server time enough to safely shut down.

    Don't bother with Areca. We had one of their controllers at work and it failed. It took them a month to replace it as they sent it back to China to get repaired. If you value being able to access your data in a reasonable amount of time, I'd go elsewhere.
  13. I'll take that under advisement.
    Any recommendations for a CPU cooler? I'm not sure a fully passive cooler will do under load, and I definitely know I don't want the Intel Boxed cooler/fan combo. Have yet to come across one that wasn't crap.
  14. Please take note that some of the good controller cards cannot present the drives as individual disks for ZFS to take over and do software RAID; they tend to need to run in a RAID mode, whether that be JBOD/1/5/10 etc.
    So if you go that route, you're limited to hardware RAID only.

    Which, if your controller card fails, will leave you without access to your data; with basic SATA controller cards + ZFS software RAID, or a port multiplier, you can be back online quite quickly with another cheap replacement.

    Usually for ZFS you're better off getting a cheaper standard SATA controller card and, if need be, a port multiplier.

    The Pentium G620T will be plenty of power for your needs; I run a very similar system on an Intel Atom CPU with no problems.
    Plus, you need more than 4 GB of RAM to take advantage of ZFS's caching (at least on NAS software like FreeNAS you do).
  15. So I'd need to research the individual cards and their compatibility with ZFS? :/
    What kind of cards would you recommend, on the cheap end of the spectrum?

    I was considering a Pentium, but I might not use this server solely for fileserving (playing around with web programming, routing etc) so I'd like the additional horsepower. Also, dedup. (Does FreeBSD support dedup by now? Else I'll sooner or later have to migrate to OI, once I'm comfortable with the system.)
    Of course, I could buy the Pentium now and then later upgrade to the i3 if necessary...we'll see.

    I'm already planning on getting at least 8 GB. Possibly maxing the board out with 16 from the get-go.
  16. Yes, since I had forgotten about that. Some controllers are hardware only and do not play nice with ZFS, and other software RAID implementations. I think the Intel wouldn't pose an issue, but the Areca might.

    For a "dumb" SATA controller, take a look a the HighPoint RocketRAID 640, HighPoint RocketRAID 2300, and if your feeling ambitious, 3ware 9650SE-4LPML.

    For a CPU cooler, look at the Corsair H60.

    I wouldn't do that if I were you, it is more cost effective to spend the difference now and get the i3 than buy the Pentium now and the i3 later. Oh, and before you buy the RAM, note that ECC will work only if you use a Xeon E3, the Pentium and i3 do not support ECC.
  17. Okay, I'll look at those, thanks.

    I forgot the memory management happens in the CPU nowadays, so it follows it must support ECC as well. I'd very much like ECC on a server. According to the Intel site, this one should be released fairly soon. 20W (!) TDP, 2 cores with SMT; apart from the slightly lower clock frequency it sounds like a Xeonized version of the i3-2120T.
  18. The E3-1220L is already out, but hardly anyone uses it since it is generally around the same price as the E3-1220 at $220 USD, and is the cheapest Xeon for the 1155 platform.
  19. So, it's 8MB L3/ECC/4 cores/no SMT at 80W vs. 3 MB L3/ECC/2 cores/SMT at 20W.
    Since this was supposed to be a fairly power-saving design, I might have to drop the ECC RAM. Not cool.
  20. Makes no sense, does it?

    ECC RAM is nice, but hardly a requirement, especially if it's unbuffered.
  21. Yeah, you convinced me...different question, with 8/16 GB of RAM, how much sense does a swap space really make? And, on a different note, how lethal is it to put the swap on the SSD?
  22. Not much if you have enough RAM. Not all that lethal considering it normally resides on the HDD. Oh, and with that case, be careful with the backplane, I don't trust backplanes.
  23. I meant lethal to the SSD, not to performance in general. And apparently the backplane of the PC-Q25 does seem to have issues; I'll make sure to have cables ready.
  24. Depends on whether you have TRIM support or not. Also, MLC SSDs have lower write endurance than SLC SSDs.
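    For a rough sense of scale, using ballpark P/E-cycle figures for 2012-era flash (~3,000 for MLC, ~100,000 for SLC; the exact cycle counts are assumptions, not vendor specs, and write amplification is ignored):

```shell
# Total write endurance ~ capacity x P/E cycles (very rough).
mlc_gb=64;  mlc_cycles=3000     # e.g. a 64 GB MLC drive like the m4
slc_gb=20;  slc_cycles=100000   # e.g. a 20 GB SLC drive like the Intel 313
echo "MLC: ~$(( mlc_gb * mlc_cycles / 1000 )) TB of total writes"
echo "SLC: ~$(( slc_gb * slc_cycles / 1000 )) TB of total writes"
```

Either way, a swap partition that's rarely touched (because you have plenty of RAM) won't make a dent in that.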
  25. Okay, great.

    I saw your post in nolegrl's thread and decided to go with 2TB disks, for the reasons you mentioned and because I don't immediately need 6TB of storage. For the same money I'd be spending on four 3TB or six 2TB disks, I can get four WD RE4s, i.e. enterprise storage. Any experience with those? Else I'm just going to call this a build and go with Barracudas.
  26. If you go with the Green Barracudas, realise that they are 4-platter disks as well. If you're going with Seagate, look for the ST2000DM001, which has two 1 TB platters.

    The RE4s aren't really enterprise drives in the truest sense; rather, they are standard WD drives with enhanced testing, enhanced firmware, and TLER support (time-limited error recovery), which is extraordinarily important if you use hardware RAID so the drives don't fall out of the array due to timeouts. It's not so important with ZFS or software RAID.
  27. Good, I'll probably go with the ST2000DM001s then.

    Sorry for bothering, but there are three more questions I'd like to have answered before I commit to this:

    First, is there any such thing as an 8i2e card? I've been flexing my Google skills, but so far, nothing has come up. There are 12i4e cards (from LSI, for example), but those are way over budget. I'd like to have the option of adding a simple external box when the case isn't enough. Otherwise, I'll go with either the Intel SASUC8i, the LSI 9211-8i, or the Supermicro AOC-USAS2-L8i. Those are all LSI-based and appear to work well with ZFS.

    The second question is ZFS-related. I'm wondering whether the RAID-5/RAID-6 problems (UER and write hole) transfer to -Z and -Z2, respectively. If they do, I'll most definitely go for a pool of mirror vdevs. If not, I won't, because I have serious doubts I can max out even RAID-Z performance. (Sure, in theory it's possible, but the most I/O-heavy stuff will be copying files from my desktop or to my desktop...and that one doesn't have a RAID. And I'm living alone, so I'm not in the "have to supply BR-quality movies to 5 persons simultaneously" spot. ;))
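    My back-of-envelope worry about the UER, for what it's worth (assuming the commonly quoted consumer-drive spec of one unrecoverable error per 1e14 bits read; real-world rates vary a lot):

```shell
# Chance of hitting at least one URE while reading ~6 TB to rebuild a
# degraded array, assuming 1 error per 1e14 bits (Poisson approximation).
awk 'BEGIN {
    bits     = 6 * 1e12 * 8        # 6 TB -> bits
    expected = bits / 1e14         # expected UREs over the rebuild read
    printf "P(at least one URE) ~ %.0f%%\n", (1 - exp(-expected)) * 100
}'
```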

    Third. I know that with ZFS I can easily change controllers should mine undergo spontaneous combustion or whatever, but what happens if I change the OS drive, or lose it? Essentially, what I'm asking is this: can I pull the OS drive, do a clean install and then access the storage disks without any trouble?
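    From what I've read so far, the pool configuration lives on the data disks themselves, so in theory a fresh install should be able to rediscover the pool with something like this (untested by me; commands per the zpool man page, and "tank" stands in for whatever the pool is actually named):

```shell
# On the old install, before pulling the OS drive (optional, but clean):
zpool export tank

# On the fresh install:
zpool import        # scan attached disks and list importable pools
zpool import tank   # bring the pool back online (add -f if it wasn't exported)
```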

    Thanks in advance :)

    Edit: In favor of the Intel controller, it automatically disables its write cache if no battery backup is installed (that would be a major criterion; otherwise the thing's useless for ZFS, as I understand it).
  28. Well, you could try this thing, and connect it to the SASUC8I, which would give you the external ports you're looking for.

    As to your second question, ZFS does not have the write hole issue that RAID5 does, and as far as UER goes, this article explains it quite nicely:

    As a result I would run RAID-Z2/RAID-6 just for redundancy.

    To your third question, I really don't know since the OS drive is usually redundant in at least a RAID 1 configuration. I've never had an OS drive as a single, standalone drive before.
  29. Yep, it'd give me external ports, but then I'm down to 4 lanes on the inside. Ah well, I'll decide that when it comes to it.
    Btw. - can you connect more than one drive to one of these lanes? Say, plug in one of those port multipliers, then run 2 or 4 disks off of one of the lanes? I'm aware this would screw with drive performance, I'm just curious whether the controller supports it.

    Ok, so Z2 is an option. That's good to know. I'll have to do some extensive performance testing myself, and see if I'm comfortable with it. Else it's likely going to be mirrored vdevs, because with 6x 2TB disks there's a breakeven in capacity anyway (and a pool of mirrored vdevs has the virtue of being incredibly easy to expand).

    As for the third question, I'm going to ask around in the FreeBSD fora and see what they have to say on it. Anyway, going to catch some sleep now, thanks for your time :)
  30. Just use a SAS expander instead. The controller should support it; on the SAS side, an expander plays the same role a port multiplier does for plain SATA, without being limited to one drive per command.

    SAS Expander vs. Port Multiplier

    That is true.

    No problem.