Homebuild NAS.

Hi All,

I'm close to putting together a NAS for home use. It will sit under a desk upstairs and be always on, so low noise and low power are primary considerations - aesthetics aren't! It won't be heavily used, and performance is not a primary requirement: throughput should be acceptable once a transfer is under way, but I'm willing to put up with a fair bit of startup latency.

I think the basic design will look like this:
Case - undecided, but probably a generic mini-tower;
Mini-ITX motherboard running a VIA Nano (chosen primarily for low power consumption);
OS on SSD;
6-8 hotswap bays - initially 4 filled with SATA drives (size yet TBD);
ZFS running on OpenSolaris or FreeBSD;
Gigabit ethernet card with wake-on-LAN;
Passively cooled where possible, but this is an area I've not researched heavily yet;

The biggest question is which controller to use. Any other comments or suggestions gratefully accepted too.

Cheers - Adam...
  1. Adam,

    Just a quick comment - if you are running some NAS OSs, they can be run from a flash drive - you wouldn't even need the SSD - just a thought.

    What is the budget? Any parts you already have? NAS or file servers are not typically demanding, so you can get away with pretty low specs.
  2. Hi huron,

    Good point about the OS - I'll bear that one in mind. I won't need much space at all for the OS on this box, although I'm toying with having it also serve as a web server.

    Budget - whilst it's not unlimited (I don't really want, nor can I afford, an enterprise-ready solution) this is more about having fun putting something together. Exclusive of the SATA HDDs themselves (I'm thinking, initially, 4 Samsung EcoGreen 1TB drives), I guess my budget would be about £500, which should give me loads of headroom. The highest-priority requirement is low power consumption. No parts already in the bin, although, as I mentioned, I'm fairly fixed on a VIA Nano processor, probably on a Mini-ITX mainboard (although I'll listen to good arguments against it). Otherwise, it's a blank slate.

    Cheers - Adam...
  3. Part of the OS can even live on the ZFS array if you like.
    You can also add SSDs to a ZFS array as cache devices, meaning they store the most-accessed data. So the stuff you use every day will be read from SSD instead; might be nice if you like performance, but not a real must if performance isn't key.
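    For reference, attaching an SSD as a read cache is a one-liner; the pool name "tank" and the device name da4 below are placeholders (FreeBSD-style naming), so substitute your own:

    ```shell
    # Assumes an existing pool called "tank" and an SSD at da4 -
    # both names are placeholders; check yours with `zpool status`.
    zpool add tank cache da4
    zpool status tank    # the SSD now shows up under a "cache" section
    ```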

    I guess you should choose the OS first, then the hardware (those two choices are related), and then your exact ZFS setup. What experience do you have with OpenSolaris/FreeBSD? I don't know Solaris very well, but I do know BSD.

    As for your question about which SATA controller: the best option is the chipset's own SATA ports, and you can extend the number of ports with PCI Express add-on cards - but be careful here. Also, never use PCI for anything. For the NIC (network card), I think you should use the motherboard's onboard NIC, as it's directly tied to the chipset as well and should work fine, with even slightly lower latency than a 'real' PCI Express NIC.

    Also, did you know a VIA Nano setup consumes more power than a stronger AMD setup? VIA's CPU may be nice, but the chipset and other components are not low-power. An AMD setup would be your best bet. You can make everything passively cooled if you like.

    If you want 100% passive:
    Power supply: PicoPSU 120W
    CPU: a 35W-TDP AMD chip (does 1-2W at idle)
    RAM: DDR2 at standard voltage
    Motherboard: standard Micro-ATX or Mini-ITX board with AMD chipset and solid caps
    Video: onboard
    HDD: 2.5" drives, so they can run off the PicoPSU power supply
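    To sanity-check that build against the PicoPSU's 120W limit, here's a rough idle power budget; all the wattages are ballpark assumptions, not measured figures:

    ```shell
    # Rough idle budget for the passive build above (assumed figures).
    CPU_W=2       # 35W-TDP AMD chip at idle
    BOARD_W=15    # board, chipset, RAM, onboard NIC combined
    DISK_W=1      # one idle 2.5" disk, ~0.7-1W
    DISKS=4

    TOTAL=$((CPU_W + BOARD_W + DISK_W * DISKS))
    echo "Estimated idle draw: ${TOTAL}W against the PicoPSU's 120W"
    ```

    Spin-up is the real constraint, not idle draw - which is why the list above specifies 2.5" disks.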

    That would work. Also remember that ZFS wants a 64-bit CPU and a minimum of 2GB RAM, with 4GB+ recommended. ZFS on my system uses 1.5GB RAM with low activity and 3.5GB RAM at maximum I/O.
  4. Wow, lots of useful info - thanks sub mesa. I'll go through it all once I get home from work!

    One question you've reminded me to ask is this: what are the practical differences between 2.5" and 3.5" disks for this application? Are there any (other than size, obviously)?
  5. 3.5" desktop-class disks are cheap; they have the best capacity-per-$.

    2.5" notebook-class disks are more expensive, but have some advantages:
    they need only a few watts to spin up, while each 3.5" HDD takes around 30W to spin up, which becomes a problem with a power supply like the PicoPSU;
    they have lower idle power consumption, around 0.7W versus 4-8W for 3.5" disks;
    because they generate less heat they run cooler, and need no cooling or can be packed together with only modest cooling;
    their physical dimensions let you fit four 2.5" disks in the single drive bay where your CD-ROM normally goes, so you can get many more disks into your server and expand in the future.

    The downside is a higher cost per GB; they may be 2-3 times as expensive. Currently, 500GB 2.5" disks are your best bet. On the other hand, 1.5TB 3.5" HDDs are very inexpensive right now, so it's a tradeoff. With 3.5" disks it would also be harder to build a fully passive system.
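    As a back-of-envelope illustration of the cost-per-GB gap (the prices below are made-up round numbers for the sake of the arithmetic, not quotes):

    ```shell
    # Illustrative prices only - substitute current street prices.
    COST_25=$(awk 'BEGIN { printf "%.3f", 60 / 500 }')    # e.g. £60 for a 500GB 2.5"
    COST_35=$(awk 'BEGIN { printf "%.3f", 75 / 1500 }')   # e.g. £75 for a 1.5TB 3.5"
    echo "2.5\": £${COST_25}/GB   3.5\": £${COST_35}/GB"
    ```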
  6. adam-the-kiwi said:
    Hi All,

    OS on SSD;

    ZFS running on OpenSolaris or FreeBSD;


    With the setup you listed, I'd go with FreeNAS installed to a CF card. It is FreeBSD-based, with the FreeBSD port of ZFS built in. Very easy to set up, and its web-based interface gives you a GUI for all configuration. It also lets you spin down idle drives (with a setting for how long to wait before spindown occurs).

    If you want to use OpenSolaris, the EON build will install to a CF card (and then run from RAM). It's CLI-only, but you can install the Samba web interface for Samba configuration. See http://eonstorage.blogspot.com/

    I've built an OpenSolaris home NAS and have enjoyed ZFS for over a year. ZFS is memory-intensive: you should have at least 2GB RAM, and you should also go with a 64-bit processor. Enable compression (I recommend lzjb). Make sure any controller card is OpenSolaris-compatible; OpenSolaris has only a few built-in HBA drivers.
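    Enabling compression is a single property set; "tank" below is a placeholder pool name:

    ```shell
    # Assumes a pool named "tank"; lzjb is the lightweight algorithm
    # mentioned above.
    zfs set compression=lzjb tank
    zfs get compression tank    # confirm the property took effect
    ```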

    ZFS is an excellent file system. You can export the drives, put them in a new system (or build a new one), import the pool, and all the data is there. With multiple drives, I'd set up a raidz pool, which is functionally similar to RAID5: if any one drive goes bad, just swap in a new one. If you want more redundancy, go with raidz2 - two drives in the pool can go bad and your data is still intact.
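    The raidz and export/import workflow looks roughly like this; the pool name "tank" and device names (da0-da3) are placeholders:

    ```shell
    # Create a single-parity raidz pool (functionally similar to RAID5).
    zpool create tank raidz da0 da1 da2 da3
    # Later, to move the disks to a new machine:
    zpool export tank    # on the old system, before pulling the drives
    zpool import tank    # on the new system - all the data is there
    ```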

    I've used both FreeNAS and OpenSolaris on the same hardware. Throughput on FreeNAS tends to be limited to about 35MB/s (gigabit with jumbo frames); as you will see on the FreeNAS forums, speed is a known issue. With the same hardware and OpenSolaris, throughput is 90-95MB/s.

    Good luck and enjoy. If you have older hardware lying around, you might want to test on that first. I built my NAS with two dual-core Opteron 270s, 6GB RAM and a SUPERMICRO AOC-SAT2-MV8 HBA (8 SATA II ports).
  7. Rayik,

    What all are you running on your server that needs such a powerful processor? I too am in the planning phase of what I hope will be a power-efficient ITX NAS for media storage and backup purposes. Your system specs make me worry that I'm really underpowering my design.
  8. It's overkill, but hardware was selected because of price alone and not capability. (I only built that because I was able to get the MB / CPUs combo for $30. Already had the ram, power supply and hdds. )

    I've found that ZFS works well on a home NAS with a dual-core CPU and at least 2GB RAM. My prior NAS was built with an Athlon64 X2 BE-2400 on a Foxconn M61PMV motherboard with 4GB RAM, using onboard SATA. It worked great with both FreeNAS and OpenSolaris running ZFS.

    Only "upgraded" because I could get the hardware cheap and I like to tinker.