After reading through many posts I am convinced that the best course of action for me is to set up a FreeBSD server with ZFS to store my data. I am average with Linux, but I am willing to spend the time to set this up right. I am looking to store pictures (which need to be backed up) and then a myriad of music, games, videos, etc. These files are not critical, so I can afford to lose them. I also want to be able to stream them to my PS3 and Xbox 360, and possibly an iPod touch or other devices...
System will be a
Phenom II X4 955 (apparently better for AES encryption)
Asus M4A785TD-M EVO
Kingston 6GB DDR3 1333MHZ
WD Caviar Green 2TB 32MB (x8)
Case (Haven't decided)
Supermicro AOC-SASLP-MV8 Marvell 6480 8-channel SAS/SATA RAID PCI-E card (so that I can run 8 hard drives)
1. Basically I would like to run 8 drives for the NAS and then 1 drive for the OS. I would like to run 4 drives (each 2TB) in a RAID 0+1 array so that I have a total of 4TB mirrored. If I do this, am I then able to take my 4 other 2TB drives and set them up as RAID 0 to make 8TB?
2. If I decide later on that I would like to add 2 more drives, would I then be able to add to my RAID 0 array, or would I have to create another array, providing that can be done?
3. Also, in ZFS using RAID 0, if one drive in the array dies, does that mean I lose the data stored across the array?
4. And lastly, I just want to confirm that I can set up a portion to stream video content from this NAS to my PS3 and Xbox 360 units?
5. Does anyone have a link to a tutorial on how to set up FreeBSD, install ZFS and install the streaming server?
6. And as a wild question, does anyone know if I would be able to stream live TV content from this box (like a Slingbox) to other computers or devices?
1. If you want both a RAID0+1 of 4 disks and a RAID0 of 4 disks, you will end up with two pools, where one is redundant and the other is not.
2. You can expand RAID0s without problem. If you have a pool of mirrors striped together, you can expand that too. What you can't do is expand a RAID5/raidz by one disk; you have to add whole redundant arrays (i.e. add another mirror vdev alongside an existing mirror, or another raidz vdev alongside an existing raidz).
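To make point 2 concrete, here is a sketch of what expanding a pool looks like; the pool name (tank) and device names (da4 and up) are just examples, not from this thread:

```shell
# Add a second mirror vdev to an existing pool of mirrors:
zpool add tank mirror da4 da5

# Adding a whole extra raidz vdev to a pool that already has one
# works the same way:
zpool add tank raidz da6 da7 da8 da9

# What you CANNOT do is grow an existing raidz vdev by one disk.
# A plain 'zpool add tank da10' would not extend the raidz;
# it would add da10 as a separate, non-redundant single-disk vdev.
```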
3. A RAID0 with one or more disks failed will be inaccessible. However, in theory the design of ZFS allows a 'degraded RAID0', since both metadata and data can have multiple copies. So a 2-disk RAID0 array with one disk failed, but with copies=2 set, would theoretically still have all data on the surviving disk. The fact that ZFS currently cannot open a RAID0 with a disk missing is an implementation issue, not a design issue.
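The copies=2 property mentioned in point 3 is set per filesystem; a quick sketch, with a hypothetical dataset name:

```shell
# Create a dataset and tell ZFS to store two copies of every block;
# copies are spread across different devices where possible.
zfs create tank/photos
zfs set copies=2 tank/photos

# Verify the setting:
zfs get copies tank/photos
```

Note this doubles the space used by that dataset, and only affects data written after the property is set.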
4. Depends on how your Xbox works: if you're talking about CIFS access, yes, that is just the normal NAS protocol; but anything which uses a 'proprietary' protocol will likely not work on FreeBSD.
I highly recommend you follow my tutorials before you set up the RAID. Two more things:
- Why not consider a RAID-Z (RAID5-like) instead of mirrors? The only real drawback would be random I/O performance, since a raidz vdev delivers roughly the IOPS of a single disk.
- You may want to consider a non-PCI controller instead. If you put a lot of disks on a plain PCI controller it will be very slow, since all the disks share the bus's ~133MB/s and ZFS really wants to do parallel I/O. You may want to opt for a controller like this one, supported by FreeBSD:
For RAID-Z, basically I can use all drives but 1, which goes to parity, so that if 1 HDD dies I can replace it without data loss. But if I lose 2 hard drives, then I would lose all the data... correct?
Yes. Though there is also raidz2, which is like RAID6: double parity, so 2 disks can fail without loss of data.
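For reference, creating a raidz2 out of your 8 drives would look something like this (device names da0..da7 are examples; yours may differ):

```shell
# 8-disk raidz2: capacity of 6 disks, survives any 2 disk failures.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Check the resulting layout and health:
zpool status tank
```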
The controller I suggested is pretty cheap; just a little more expensive than that (popular, but inferior now) PCI-X controller. You would see much higher performance with this controller! Highly recommended.
Be warned: there is also a 6Gbps version of this controller, using the LSI SAS2008 chipset. Those are not yet supported, so pick the 3Gbps one; it is fine for HDDs both now and in the near future.
In a raidz array, all disks should be the same size. However, you can create multiple arrays and combine them all in one pool, so you get:
raidz (disk1 disk2 disk3 disk4)
raidz (disk5 disk6 disk7 disk8)
Now disks 1-4 could be 1TB disks and 5-8 could be 2TB disks. These two arrays are RAID0'ed together for increased performance, and you can use the capacity of array1 + array2; it is like a RAID50. So this allows you to mix different sizes of drives, but you add them in batches, and each batch has to be a redundant array. The batches can vary in size; that is no problem.
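The two-raidz layout above can be created in one command; device names are again just examples:

```shell
# One pool, two raidz vdevs; ZFS stripes writes across both,
# similar to a RAID50.
zpool create tank \
    raidz da0 da1 da2 da3 \
    raidz da4 da5 da6 da7
```

Within each raidz vdev the smallest member disk sets the usable size, which is why each batch should use same-size drives even though the batches themselves can differ.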
Sequential I/O on large files would be very fast. If you also do a lot of small random reads, you may want to consider an SSD as a cache device; ZFS uses it like a big extension of RAM, which really speeds up the portions of your array that are most used and randomly accessed.
I'm assuming it's not that hard to add a solid state drive as a cache device, so I will probably invest in one. What size would you suggest? 30GB-ish? Is it just a setting? And does it have to be done before the array is set up, or can it be done at any time?
Also, that PCI-E controller card will work in any PCI-E slot, right? I was looking on the website and it has a rather limited list of motherboards that it supports...
The Supermicro controller mentioned is a UIO card, meaning that the components are on the 'wrong' side (facing up instead of down). It is still a normal PCI-express x8 card, though you may have to remove the bracket to make it fit. Since it has no external ports, this shouldn't be a problem. It is still PCI-express gen1, meaning 8 lanes correspond to 2GB/s.
Cache devices can be added and removed at any time, so yes, you can do this later. How much you need depends on your usage. I would suggest an Intel X25-V 40GB for this; you can RAID0 them if desired (2 make 80GB). I do suggest setting up the SSD so that only part of it can be used, by creating a BSD label that covers 30GB of the available 40GB; that means you leave 25% free. This is necessary because TRIM does not work with ZFS yet.
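Putting that together, a rough sketch of the cache setup; the device name (ad10) and partition letter are illustrative, and the label editing step is only outlined:

```shell
# Write a standard BSD label to the 40GB SSD, then edit it so one
# partition covers only ~30GB, leaving ~25% of the flash untouched
# (this is the workaround for ZFS lacking TRIM support):
bsdlabel -w ad10
bsdlabel -e ad10    # shrink a partition (e.g. 'b') to ~30GB in the editor

# Add that partition as a cache (L2ARC) device to the existing pool:
zpool add tank cache ad10b

# Cache devices can also be removed again at any time:
zpool remove tank ad10b
```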