
Looking to build a storage server at home

November 10, 2009 1:53:16 AM

Hi All,

I hope this is the correct forum to post in.

Well, I need to back up quite a bit of data at home. I am looking to have around 8TB online all the time. Disk access speed is not really a high priority, as this will mainly be for backing up all of my work data; capacity is the main concern.

So I have been having a look around, and here are the specs I am looking to build to:

- RAID Controller. For use in RAID 5 or 6
- PC (I have a fairly recent PC to use; it has a motherboard, CPU, DVD drive and 4GB of RAM.) So no upgrade needed there.
- New Case
- Power Supply

If anyone could suggest a few cases or power supplies, along with a RAID controller that will do the job, that would be great.

Should I install a different NIC, one that is not on board, or not worry about it?

With the RAID controller, once I set up say 8TB of storage in one container, can I expand the RAID array to include more disks in the future? Is this a feature a RAID card would support? I am looking for a system that can be upgraded over time (mainly the disk space, with new HDDs as they become available).

I have had a look at this RAID card: Adaptec RAID 31605
- Does buying a slightly more expensive RAID controller help to minimise the risk of the array crashing, or of errors destroying the data it contains?

Hope you guys can help me out.

cheers
November 10, 2009 2:53:24 AM

Just an update: I have been having another look around today, and I think this RAID card is better suited to my setup: Adaptec RAID 5805Z.

This card has a power-saving option which will spin down the disks when they are not in use.
November 10, 2009 8:38:54 AM

Is this machine going to be a dedicated NAS? Or also partly a workstation?

If it's going to be a NAS, have you considered a non-Windows OS? It would save you the cost of a RAID controller and allow the use of the ZFS filesystem, for example.

If you don't have any experience beyond Windows, the FreeNAS project is worth taking a look at. The 0.7 version with ZFS support was just released as stable/final.
November 10, 2009 11:11:25 AM

churchi - I've been thinking of a very similar setup and am looking forward to how this thread goes.

I've been looking at a dedicated RAID controller as well, and various cases that have hot-swap bays.

Sub mesa's suggestion of FreeNAS is good, and so is ZFS. For a different approach to RAID that uses ZFS, you may want to look at OpenSolaris.

November 10, 2009 11:36:50 AM

sub mesa said:
Is this machine going to be a dedicated NAS? Or also partly a workstation?

If it's going to be a NAS, have you considered a non-Windows OS? It would save you the cost of a RAID controller and allow the use of the ZFS filesystem, for example.

If you don't have any experience beyond Windows, the FreeNAS project is worth taking a look at. The 0.7 version with ZFS support was just released as stable/final.


Hi Sub Mesa, thanks for the reply. I will most likely be using some form of Linux as the base OS. I want to run some VM images on this machine as well, so the storage can just sit alongside. It will have enough grunt to run the VMs I want; however, I wanted this box to double as a storage NAS for the home.

I have not looked too closely into software RAID; I just feel I would like to stay away from it. I am prepared to pay a bit more for hardware RAID if there is enough justification for it. I am not a Linux guru, and I don't want to spend weeks setting up software RAID, so I thought a hardware controller was the best approach.

I have looked into FreeNAS; however, I would like to install a bit more on this machine, for VMware/VirtualBox to run at home.

Thank you for your suggestions.


I have also been doing a bit more research today, and with the Adaptec RAID 5805Z I can plug up to 8 SATA/SAS drives directly into the RAID card. Do people find this is enough? I would like another 4; however, in this series of cards (no need for battery backup for the cache, since it has NAND flash on board) there is not another card with 3 slots on it.

Has anyone had any experience with SAS expanders? If so, can you let me know what you purchased and how it went? I am thinking of installing one of these off one of the ports on the RAID card (should I go this path).

Thanks.
November 10, 2009 11:59:03 AM

A hardware RAID configuration would lose the benefits of advanced filesystems: self-healing and generally maintenance-free operation. ZFS has many benefits and maybe you should spend a little time getting to know the differences, as it may make your life easier.

Advanced software RAID configurations can be superior to hardware RAID. With hardware RAID, the filesystem sees just one storage block and cannot correct any errors where corruption or loss of data is concerned. The combination of hardware RAID and a traditional journaling filesystem may also be fatal: a journal replay before a parity correction by the hardware RAID will corrupt the filesystem.

If you want ZFS, I recommend you run FreeBSD 8 instead; it should allow VirtualBox and is generally very easy to set up. FreeBSD 8 is not released at this point, but it is very close - currently it is at RC2 status. I've been using FreeBSD 8 in combination with ZFS for a long time and it's absolutely great.

For ZFS, you do need a 64-bit CPU and a healthy dose of RAM; 4GB for ZFS and 4GB for your VMs would be preferable. If your applications are not very I/O intensive you can get away with less.
November 10, 2009 12:18:31 PM

Sub mesa - thanks for the insight, as I'm watching this thread with similar ideas in mind for my future.

You don't think there are issues with using software RAID, such as the controllers on motherboards? I've heard conflicting things and am trying to figure it out as well.
November 10, 2009 5:46:29 PM

Software RAID on Windows platforms is flaky at best - not something to be 'proud' of. But Windows doesn't offer any advanced filesystems to start with, and a lot of its design assumes it's on a plain single disk, with little to tune or diagnose.

ZFS is truly different from any other filesystem, as it's a filesystem and RAID engine in one. I can only recommend people read more about what ZFS is and how it can benefit them. To me, the benefits are mainly identifying and preventing file corruption, self-healing in case of corruption, and backups using instant snapshots.

The self-healing requires some explanation. ZFS keeps a checksum for each block, so it can tell whether the data is free from corruption - that's neat. But what if there IS corruption; how can ZFS repair it? Simple: assuming the corruption affects only one drive, and you are using a redundant pool like RAID1, RAID5 or RAID6, ZFS uses the redundant data on the other drives to work out which version is uncorrupted, and repairs the corruption from that redundant data. This will not work if ZFS sits on top of hardware RAID, as ZFS then cannot access the redundant data.

So in this case, software RAID is superior to hardware RAID, as it allows features that would otherwise not be possible. The other cool thing about ZFS is that you should never need to 'check' the system for errors, like fsck on Linux or chkdsk on Windows. ZFS is self-healing; whenever it discovers a fault in operation it will fix it there and then - unlike other filesystems, which need to be unmounted etc. So basically, this thing runs itself - zero maintenance.

You also have full freedom to use whatever SATA or PATA ports you want; you can mix disks on the chipset controller, on some add-on controller, and even on PATA. As long as your operating system can access the physical disk, so can ZFS. I do recommend you use controllers that can work in non-RAID mode. For example, I have an Areca controller, but it's pretty much useless to me: I don't want to use the card's RAID functions, I just want to give each disk to the operating system, so I use it as a plain SATA controller right now. Otherwise the self-healing wouldn't work as I described above.

FreeNAS is a quick way to test ZFS. But if you are confident using Linux, you should be able to set up a FreeBSD 8 + ZFS system fairly quickly. If you want guidance, I would be happy to provide it - either email or direct chat via IRC, for example. One of the reasons I joined this board was to see whether people were using ZFS already, and to inspire them to use it. So far there haven't been many people interested in ZFS; I guess that's because ZFS seems out of reach to many 'casual' computer users, even power users. But it shouldn't have to be; it's free, it's incorporated in multiple open source systems, and it's fairly easy to work with.

Again I repeat: you need 64-bit and a healthy dose of RAM. Without that you might run into problems. For additional SATA ports, this is a great product to use with ZFS, as it also supports staggered spinup:

http://www.supermicro.com/products/accessories/addon/AO...

Though it uses Mini-SAS connectors, it's just a tidy way of offering 8 SATA ports using only two cables.
November 10, 2009 10:49:51 PM

I'll hit you up with a PM soon to talk ZFS and building a file server for home. I'm pretty excited about it and would love the help.
November 11, 2009 1:27:22 AM

Hi Sub Mesa,

Thank you for all the explanation of the ZFS that you have had experience with. I really appreciate you taking the time to go over it with us here.

It seems I am just about to go down the ZFS path now. It will require less hardware to be purchased, will be cheaper, and has other benefits.

Can I ask a few questions?
- Since my knowledge of Linux is only 'OK', I am really not sure whether learning BSD or OpenSolaris is going to be a quick thing for me. Even if I get it up and running, I am afraid troubleshooting will take forever if something happens. Did you find it easy to learn BSD/OpenSolaris?
- If, say, I lose my main system drive, is it possible/easy to rebuild the system drive and reconnect the ZFS filesystem to the new OS?
- Will connecting the SATA drives to both the motherboard and an expansion card limit transfer speeds between the drives? Will this create a bottleneck by using the on-board SATA controller? Should I just buy expansion cards and keep the array on them?
- Have you seen ZFS running on Linux? I have read that it is possible, or would you go with an OS that supports it natively, like BSD?
- Connecting the disks to a plain SATA card as you have suggested, will bandwidth across all the drives be limited, since I would not be using a RAID controller? Or should there be no difference? Would a RAID controller give me better transfer speeds over the whole volume?
- Does ZFS support spinning down the disk drives when they are not in use, or would that be an OS-level task?
- Does ZFS support expanding the array, so that if I wanted to add another drive to increase the space, I can? Or is this a future feature?
- Is it possible, once the array has been built, to swap out the smaller drives and replace them with bigger ones, without having to destroy the array, re-create it and copy back the data?

Thanks heaps, Sub Mesa. I am really interested in trying ZFS if it is a cheaper and more reliable way to go.

cheers.
November 11, 2009 8:01:50 PM

1) I found BSD easier to learn than Linux. It's just that FreeBSD is less well known, it sounds more geeky, and especially for desktop usage it has some rough edges. But if you want to set up a server, I can't say FreeBSD would be more difficult than Linux. For me it was the other way around, and I still find Linux harder than BSD. That said, I did have some people who 'taught' me how to work with BSD, so that may be of influence.

2) You do not really need the system drive; you can reinstall a fresh BSD on another system drive and have direct access to your ZFS again (you have to use the "zpool import" command, but that's it). You can also access the ZFS filesystem by creating a VM in VirtualBox and giving it direct raw disk access to all ZFS member disks; this works. I tested both procedures myself, with a real array.
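To make the recovery scenario concrete, this is roughly what it looks like on the fresh install (the pool name "tank" is just an example):

```shell
# After installing a fresh FreeBSD on a new system drive, scan the
# attached disks for importable pools, then attach the one you want.
zpool import          # lists pools found on the attached disks
zpool import tank     # imports the pool; its datasets mount automatically
zpool status tank     # verify all members are present and ONLINE
```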

3) The chipset SATA ports are the fastest, as they have full bandwidth and the lowest latency. PCIe add-on controllers are a good second, and anything on PCI is slowest and should not be used.

4) ZFS on Linux only works via the FUSE layer; this is a complex design, it uses an older version of ZFS, its performance is a lot lower, and generally it is far from bug-free. I would not recommend ZFS-on-FUSE. There is ongoing development of a real kernel-level module to get ZFS supported on Linux without the FUSE layer.

5) As long as the disks are on a full-bandwidth, full-duplex bus (so no PCI), you should have no slowdowns or bottlenecks at all.

6) Spindown is OS-related; FreeBSD can easily be configured to spin down disks with the "atacontrol spindown" command. You can set it to spin down automatically after x seconds, which can be anything from 1 second to several hours. I suggest no shorter than 10 minutes.
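For example, setting the per-disk timer on FreeBSD looks like this (the device name ad4 is an example; substitute your own disks):

```shell
# Spin ad4 down after 600 seconds (10 minutes) of inactivity;
# repeat for each data disk in the pool.
atacontrol spindown ad4 600

# A timeout of 0 disables spindown for that disk again:
atacontrol spindown ad4 0
```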

7) Expanding existing arrays is possible, but with limitations. You can always add new storage arrays to an existing pool: if you have a RAID5, you can add a bare disk, a mirror or another RAID5 array to the same storage pool. But you cannot expand a 4-disk RAID5 into a 5-disk RAID5. You can, however, add a second RAID5 array to the same pool, meaning two 4-disk RAID5 arrays of 1.5TB each would give you one virtual 3TB pool. The fact that it is actually two arrays is hidden from actual usage. It is also possible to combine RAID levels; you can add a 2-disk mirror to a 4-disk RAID5 later, for example.

This may be difficult to understand, but a storage pool can consist of any combination of storage. You can have all sorts of complex combinations of mirrors, bare disks, RAID0s and RAID5/6s in the same storage pool, and it will behave as if it were one big disk. ZFS is very flexible here. But the one big missing feature is expanding existing RAID5s and RAID6s, as this would require stripe remapping, which ZFS doesn't support and which is extremely complex due to ZFS's variable stripe-size design. Don't count on this being supported in the near future, or even at all.
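As a sketch of what that looks like in practice (the pool name "tank" and the disk names are hypothetical; note that ZFS calls its RAID5-style level "raidz"):

```shell
# Create a pool from one 4-disk raidz (RAID5-like) vdev:
zpool create tank raidz da0 da1 da2 da3

# The raidz itself cannot grow to 5 disks, but a second vdev can be
# added to the same pool, and ZFS stripes across both:
zpool add tank raidz da4 da5 da6 da7

# Mixing levels is fine too, e.g. appending a 2-disk mirror later:
zpool add tank mirror da8 da9
```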

8) Probably; this works for all software RAID levels in FreeBSD (geom_raid), but I haven't tested it with ZFS itself. It would probably work, though. It can be tested in a VM, I guess, if it's important to you.
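For what it's worth, the ZFS side of that procedure would look something like the following - a sketch only, with hypothetical pool and device names, and untested here as noted:

```shell
# Swap each member for a larger drive, one at a time, letting the
# resilver finish before touching the next disk:
zpool replace tank da0 da10
zpool status tank        # wait until the resilver reports completion
# ...repeat for each remaining member...

# Once every member is larger, the extra capacity becomes usable
# (on ZFS of this era, typically after an export/import of the pool).
```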

This week is kind of busy for me, as I have a lot of work to do. But please feel free to ask questions, either here or via PM. I'll respond when I'm able to. :) 

Kind regards,
sub
November 11, 2009 8:27:25 PM

Thanks again, Sub, for such a long and detailed reply.

I will send you a PM about what I am thinking of setting up.

A few quick last questions:
- Do you think this is the best SATA controller for OpenSolaris/BSD? I am leaning more towards the BSD way; I can run VirtualBox and a few other things on there, or even start out with FreeNAS.
- Just to confirm: if I start out with FreeNAS, for example, and decide I have outgrown that OS, I could then reformat my system drive, leave the ZFS intact, rebuild with BSD/OpenSolaris, and then reconnect the zpool? Would that be correct?
- Is it easy to set up alerting to email me if the array is degraded, so I know a disk has failed? Or is this a default part of a ZFS setup?
- Would a card like this work just as well: http://www.supermicro.com/products/accessories/addon/AO...


Thanks mate.
November 12, 2009 9:45:00 AM

Yes, you can start with FreeNAS (ZFS version 6) and later convert to BSD, which has ZFS version 13. With the "zpool upgrade" and "zfs upgrade" commands you can upgrade the pools and filesystems to version 13.
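Assuming the pool is named "tank" (a placeholder), the migration boils down to:

```shell
# On the new FreeBSD 8 install, attach the pool created under FreeNAS,
# then bump the on-disk versions (note: upgrading is a one-way step).
zpool import tank
zpool upgrade tank       # pool format: version 6 -> 13
zfs upgrade -r tank      # upgrade every filesystem in the pool
```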

FreeNAS may not use the recommended way of partitioning, though, which uses geom labels. I would have to check. With a geom_label, any disk gets a name you designate, for example "disk4", and it will be called that regardless of how it is connected to the controller or of the disk order. You can apply geom_labels afterwards, but this requires at least 512 bytes of free space, and it is kind of tricky to do.
Without these labels the system may be less robust if you change the disk order, though I heard they fixed that in recent commits to FreeBSD 8.0; this was one of the issues I had back in the FreeBSD 7.x ZFS days (which also used ZFS version 6).
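As an illustration, labelling the disks up front looks something like this (device and label names are examples):

```shell
# Give each physical disk a permanent name, then build the pool on
# the labels so cabling order no longer matters:
glabel label disk1 /dev/ad4
glabel label disk2 /dev/ad6
zpool create tank mirror /dev/label/disk1 /dev/label/disk2
```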

Setting up an alert is pretty easy with a crontab; this works just like on Linux. You can create a simple script that checks the status of your ZFS array every x minutes and mails you if it has changed, so you get notified.

The card you posted is a PCI card (PCI-X is a 'boosted' version of PCI, with the same fundamental weaknesses - it's a half-duplex, shared-access bus; only the bandwidth has been increased). You should really avoid this old controller and go for its newer model with a PCI Express interface:

http://www.supermicro.com/products/accessories/addon/AO...

This one also supports staggered spinup, and because it uses two Mini-SAS cables it has less cable clutter; each splits into 4 SATA connectors, for a total of 8 ports, just like the older controller you posted. I highly recommend you avoid PCI: in my testing the performance was terrible, and the interrupt CPU usage on a quad-core was totally devastating.
November 12, 2009 1:16:21 PM

OK, thanks. I think I will just start with BSD 8.x (the RC for the moment) and go from there. I would like to have the latest stable version of ZFS running on my system. You have convinced me to start with BSD first. :) 
I hope there is not a really steep learning curve coming from Linux.

I didn't notice that about the controllers. I will go with the one you suggested; I found a local supplier for them today.

The next step would be to find a case and PSU to power all of this. Would a Core 2 Duo 66xx be fine to run this storage server? It will have 4GB of RAM as well.

Does FreeBSD 8.x have the latest version of ZFS that Solaris has, or is it slightly behind where Solaris is?

I will search around for some scripts to monitor the array. I am guessing all you need to monitor is the status? So when it says something like "degraded" you get an email alert? Would I be on the right track there?

If I take the disks out and set them up on the card you suggested, do I have to plug them back in in the same order? Or does ZFS know exactly which disk belongs where, adjust to the new location (connection-wise) and keep working correctly? Or is it particular about disks going back where they were originally set up?

With the SAS card you suggested, is the staggered spin-up controlled by the OS or by the card?

thanks.
November 12, 2009 2:41:21 PM

A Core 2 Duo should be fine, or any AMD-based dual-core. AMD CPUs tend to be a lot faster at (AES) encryption, if that's something you want to use. If not, the newer Intels are excellent choices. Unless you already have an E6600, I would suggest picking one of the newer 45nm models (an E8400, for example) - it would reduce idle power consumption without sacrificing performance.

FreeBSD 8.0 has ZFS version 13, the same version OpenSolaris has today, so this shouldn't be a reason to pick OpenSolaris instead. I don't have much experience with OpenSolaris, though - but many of its good features (ZFS, DTrace) have been ported to FreeBSD. FreeBSD has a very relaxed license which allows it to 'borrow' good technologies from other operating systems, like the OpenBSD pf packet filter, which I think is the best firewall.

So technically FreeBSD has the best papers; the only thing it lacks is filesystems. It only has full support for UFS and ZFS, and read-only support for ext3/XFS is not that sexy; I would have liked some additional filesystems like XFS, JFS, ReiserFS, ext4 etc. But I guess with UFS and ZFS you have the two most important ones.

The script would just look at the output of "zpool status" and check whether the pool is in any state other than "ONLINE"; if it is not online, the script would send you an email with the output of the zpool status command. That should be reasonable, I guess.
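A minimal sketch of such a script; the pool name "tank" and the address are placeholders, and it assumes a working mail setup on the box:

```shell
#!/bin/sh
# Crontab-driven health check: mail the admin if the pool's health
# is anything other than ONLINE.
POOL=tank
MAILTO=admin@example.com

# Helper: any health string other than ONLINE warrants an alert.
needs_alert() {
    [ "$1" != "ONLINE" ]
}

# Only perform the live check where the ZFS tools actually exist.
if command -v zpool >/dev/null 2>&1; then
    state=$(zpool list -H -o health "$POOL")
    if needs_alert "$state"; then
        zpool status "$POOL" | mail -s "ZFS pool $POOL is $state" "$MAILTO"
    fi
fi
```

Run it from cron, e.g. a crontab line of `*/10 * * * * /root/zfs_check.sh` checks every 10 minutes.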

If you use a geom_label on each disk, it won't matter how you connect them, so the disk order is irrelevant. This probably works without labels in FreeBSD 8, but it was still an issue in FreeBSD 7 + ZFS. It's not an issue with any conventional RAID (geom_raid) in FreeBSD, as FreeBSD has always supported this flexibility. You can even connect RAID disk members which are in a different system on the network, or even in a VM. ;) 

Staggered spin-up is controlled by the card; you may need to tune FreeBSD to set timeouts higher than the default 5 seconds, though. When FreeBSD spins down a drive itself, it uses a 60-second timeout - but if the device or hardware RAID does the spindown, FreeBSD doesn't know about it and still uses the 5-second default, which is probably too low.

Before you actually commit live data to the system, please allow some time to test it thoroughly, so don't put your precious data on it right away. This goes for any storage solution, I guess - always test, and especially simulate what happens if a disk fails: while writing to the array, pull one of the cables and work out what needs to be done to get it back up and running. Testing is very important. :) 

I would like to write a tutorial on how to set up FreeBSD as a NAS for this purpose - maybe even for Tom's Hardware's front page. But I still have to make a formal request, I guess, and it will take some time, of course. If you ever want me to help you hands-on, even with SSH access to the machine, I'm willing to do that. With a program called "watch" you can see exactly what I'm doing. ;-)

But it should also be a learning experience - though I have found it helps and motivates people if they have some sort of guidance. I had some good help when I started my FreeBSD and Linux adventures a long time ago. Without it, maybe I would still be running Windows these days (what a horrible thought!). :D 
November 20, 2009 7:25:35 PM

Stick with the Adaptec series cards - the 5000 series is great - but DON'T use the power-saving feature on the drives. There have been instances where the drives spin down and don't spin back up, causing an issue with the array. Or a drive will go "orphan" and then spin back up after a rebuild has started, causing two arrays with the same name to show up; one is totally inaccessible and must be deleted (it being the drive that spun down and back up - you then have to make that drive the hot spare). The other thing about drives spinning down is that they "slow down", and that will cause a misread of some SMART functionality. UGH, I FREAKIN HATE the POWER FUNCTIONS! Whew, that's off my chest. Go with a case that can handle at least 8 drives, and run RAID 5 or 6 depending on the drive size you have and the security you will need.
December 22, 2009 10:34:00 PM

Hi Sub,

I am nearly ready to have my system built. I have all the HDDs and a brand new Lian Li case to put them in. Just working on the SAS card now.

I have been reading on the net that the AOC-SASLP-MV8 is not supported on Solaris. Can you confirm whether you have had this card set up on OpenSolaris? I would not like to purchase this card if it is not going to be supported on OpenSolaris/BSD.

I have also been reading that an LSI-chipset card may be a better way to go. However, it seems that all the LSI cards have RAID functionality on them. I am not going to need this, as ZFS will be taking care of the software RAID.

Anyway, if you can let me know what you think about the AOC-SASLP-MV8 card, that would be great.

cheers.
December 23, 2009 12:04:08 AM

Hi Sub,

I have been looking around again, and I have found this card that is supported by OpenSolaris and contains the LSI chipset that seems to be supported everywhere.

AOC-USAS-L8i | http://www.supermicro.com/products/accessories/addon/AO...

Do you think this card will work in a JBOD setup for ZFS?

Thanks.
October 20, 2012 1:12:25 AM

sub mesa said:
If you want guidance i would be happy to provide this for you. Either email or direct-chat via IRC for example.

-> Would you be my guru? I'm interested in ZFS + FreeBSD. Email me: joffreyjoffrey77@yahoo.com