Home Server OS and Setup

October 29, 2009 3:18:41 PM

This is what I want:
A home server for storing files. The files would be primarily picture, video, and audio files. I also want the server to act as a print server (but this seems trivial). I want the data to be fairly safe from hard drive failure. While the files won't be important business documents or top secret government files, it would be nice not to lose thousands of photographs to a HD failure. I want read and write speeds to be as fast as possible. The computers connecting to the server will most likely be running Vista/7. I don't expect more than 3-4 clients to be running on the network at once. I would also like the ability to stream files to my home theater via a network media player. It would also be cool to be able to access files from the server over the internet when I am not on the home network.

I currently have two old computers: one is a P4 system and the other an Athlon XP 3800+ (or similar). I would like to recycle these old computers and use them for my server.

I have been considering the following options and would appreciate any feedback you can give:

1. Use Windows Home Server as the OS on the P4, with folder duplication as the protection method. This option would be easy to set up and use. Also, the WHS drive pool makes access to files easy and offers expandability. The downsides would be that WHS costs money, and folder duplication would cut my storage space in half. Also, how would duplication affect read and write speeds?

2. Use Windows Home Server as the OS on the P4. Instead of folder duplication, set up a second server using the AMD machine to back up all files on the WHS machine. This would make a nice "front-end" server that Windows 7 clients could easily connect to, and avoid any possible slowdown from folder duplication. The AMD machine could run FreeNAS to save money. However, it would still cut storage space in half.

3. Use FreeNAS with ZFS on the P4 machine. ZFS seems to be an amazing technology for a NAS device. I would get data redundancy without cutting storage capacity in half. The trade-off would be similar to a RAID5 setup, I believe. However, do you need more powerful hardware to run ZFS without massive slowdowns?

4. Use FreeNAS on both machines, with one machine backing up the other. This would avoid any possible slowdowns from a ZFS system, but storage capacity would be back to half of the installed drive capacity.

I am leaning toward these solutions because of the idea of having a pool of hard drives that acts like one. This seems very intuitive and the right way to set up a home server.

Also, running two servers could be tough on the electricity bill, and I want to stay somewhat environmentally friendly. Therefore I am wary of having two servers with one as a backup for the other.

Basically, what I am asking is: which solution do you think is the best, and why? Do you have any suggestions for a new option? I know that the WHS vs. FreeNAS question comes up a lot, but I really can't decide. Any personal experiences with any of the above options would be appreciated.

Thanks,
Armethius

October 29, 2009 8:05:57 PM

I would go the FreeNAS way. Consider, though, that ZFS is still experimental, so I would set up software RAID1 instead (or rsync to another machine if you like).

If you go with ZFS anyway, it won't slow down enough for you to feel it. However, ZFS does require more memory (I would put in 2GB or more), but since memory is cheap today, you should be fine.

P.S.: you can boot FreeNAS from a CompactFlash card for higher reliability.
October 30, 2009 4:31:22 AM

ZFS is not labeled experimental anymore; FreeBSD 8.0 ships ZFS version 13. FreeNAS is still based on FreeBSD 7 and has ZFS version 6; it still has some issues but should be usable. I do recommend investing in a backup instead of just focusing on redundancy.

Also, ZFS doesn't like 32-bit CPUs or operating systems; I strongly recommend going 64-bit. This is architectural and has to do with memory management. Know that ZFS is extremely sensitive to memory; a serious ZFS setup should have at least 4GB, although with tuning you can do with less.
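
For what it's worth, that tuning is usually done in /boot/loader.conf; a minimal sketch for a low-RAM box (the values here are illustrative only, adapt them to your hardware):

    # /boot/loader.conf - illustrative ZFS memory tuning
    vm.kmem_size="1024M"          # size of the kernel memory map
    vm.kmem_size_max="1024M"
    vfs.zfs.arc_max="512M"        # cap the ZFS ARC (read cache)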

Though you didn't mention it as an option, how would you feel about running the 'real thing'? FreeBSD 8.0 would offer the most advanced storage technology available today, though it requires time to study, implement and - most importantly - test your solution.

Having two NAS fileservers, where one is a backup of the other, would be a very secure setup with virtually every advantage, but it would also be the most expensive option. Still, using advanced software allows you to save on the cost of a hardware controller, while providing benefits no hardware RAID controller can ever provide. ZFS's self-healing and checksumming, copy-on-write model and instant snapshots are a delight to anyone serious about storage.

If you like, I can guide people who want to take the ZFS route, from the planning stage to the implementation and configuration phase. Since there isn't that much info available today to help first-timers and people new to BSD, I'll gladly guide you through this process.

I'd like to go into this in more detail, but I have a business trip this coming weekend, so I have to prepare. I'll be back on Monday.
October 30, 2009 2:27:19 PM

sub mesa said:
ZFS is not labeled experimental anymore; FreeBSD 8.0 ships ZFS version 13. [...] I'll gladly guide you through this process.

I actually want a similar outcome to "Armethius", but I am just curious why you have options where there are two servers and one backs up the other. Wouldn't it be sufficient to have one server with the hard drives running in RAID as protection against hard drive failure?
November 1, 2009 2:59:16 AM

If you have RAID-Z1 and two hard drives fail, then you will lose all your data; likewise if you have RAID-Z2 and more than two fail. But that is unlikely. I was concerned about the effect ZFS might have on read/write times, which is why I was considering two servers using UFS.
November 2, 2009 12:15:46 AM

ZFS is fast enough for me; I had gigabit link aggregation running, which lets you create a 2Gbps or faster network link from multiple gigabit network interfaces, but I found that I don't actually need that much performance, so I re-distributed the hardware to other systems.
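
On FreeBSD, that kind of aggregation is handled by the lagg(4) driver. A minimal sketch, assuming two hypothetical em(4) interfaces and an LACP-capable switch:

    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport em0 laggport em1
    ifconfig lagg0 inet 10.0.0.10 netmask 255.255.255.0 up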

Why two fileservers if you can have one? Well, if something goes wrong with number 1, I have direct read-only access to number 2. So I don't have any 'downtime' if something bad happens. Especially when I used ZFS in its infant stages this was a good choice, as it gave me more protection against bugs in ZFS itself; in reality it would crash every now and then, and with one command I could re-mount my data on the second array in read-only mode.

While RAID is a great technology, it cannot protect against dangers that lie outside the domain of RAID:

  • Filesystem corruption
  • RAID-engine problems (for example broken RAID arrays, which happen frequently with onboard/Windows arrays)
  • Accidental deletions
  • Viruses / unauthorized persons deleting or corrupting my data
  • Multiple disk failures caused by external factors like fire, physical shock, or a failed power supply that applies wrong voltages, causing multiple disks to fail

While ZFS is somewhat different and 'more' than just a RAID engine in itself (it has a 'history' and thus is partly a backup too), it's simply a safe and convenient setup to have two fileservers instead of just one. It also gives me room to update ZFS more regularly to 'unstable' versions like I did in the past, which usually fixes bugs, without jeopardizing my entire dataset. It comes at a cost of course, but since I don't need RAID controllers or expensive motherboards, the costs are well contained.

I am using RAID-Z1, which is comparable to RAID5. I do not think RAID-Z2 ("RAID6") adds that much data security - it's nice to have in mission-critical servers though.

UFS is fast, but it doesn't have advanced protection against corruption. With ZFS you know if there is data corruption: ZFS tells you. ZFS can do this because it keeps checksums of your data. If you have redundancy (mirror or RAID-Z1/2) it will fix the corruption automatically, informing you about this in the "zpool status" output. This is a killer feature to me.
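
For example, checking pool health and forcing a full verification pass looks roughly like this (the pool name "tank" is just an example):

    zpool status tank    # shows pool health, including checksum errors found and repaired
    zpool scrub tank     # reads all data and verifies it against the stored checksums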

Another killer feature is instant snapshots. Each time the backup server does a nightly backup, I have configured it to make a snapshot before it synchronises the files. So if the main server has all its files deleted (or the array is simply unmounted), it won't destroy all files on the backup server as well. Well, it will, but with one simple command I can 'rewind' to a date in history and see the state of my files on that particular date. So it has protection against most foreseeable events that could cause me to lose access - both temporary and permanent - to my data.
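
As a sketch of what that looks like (dataset and snapshot names are made up):

    zfs snapshot tank/storage@2009-11-01    # instant snapshot before the nightly sync
    zfs list -t snapshot                    # list the snapshots that exist
    zfs rollback tank/storage@2009-11-01    # rewind the dataset to that date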

I love ZFS and encourage everyone interested in storage to try it. Do make sure you invest enough time to get to know this advanced piece of technology, to make yourself familiar with it and to understand its concepts. I'd be happy to answer any other questions one might have. :)

November 3, 2009 9:19:04 AM

Thanks for the reply!

Do you think a home user like me, who is a newbie at RAID/backup, can or should bother to learn ZFS? Where can I get basic resources before I start disturbing you with questions?

I am mainly backing up GBs of videos/photos/music. Someone said to just buy auto-backup software or use a standalone NAS like this http://www.netgear.com/Products/Storage/ReadyNASNVPlus.....

I'll be shopping for new hardware in December. Initially I was going to save cost and just buy 4 additional hard drives, put them in the same case, and operate them as a RAID1 mirror; for protection against fire, use online backup or put some super important files on an external hard disk and leave them at my parents' place. If I were to use ZFS, would I need a completely new rig, i.e., RAM, motherboard and the like?

By the way, I apologise, Armethius, for asking so much in your thread.

November 3, 2009 12:42:05 PM

Your questions are similar to mine, so keep asking them!

November 3, 2009 1:49:10 PM

I don't mind explaining ZFS; it's one of the reasons I joined this board, actually... to see if people knew about it and used it already. I've fallen in love with it; it's a great piece of technology. It's still a filesystem in development, though, meaning that it might not be the safest choice for conservative users. For me that's no real problem, as I have two fileservers; even if the unthinkable happens, I still have my totally independent backup in another machine.

If you want to start with ZFS, FreeNAS is a really easy and accessible way, even if you have zero experience beyond Windows. You can try FreeNAS in a VirtualBox VM; VirtualBox is free and runs FreeNAS great. All configuration is done via the web, so any Windows PC could run FreeNAS with ZFS in a VM to test its usefulness. :)

To use ZFS with all its bells and whistles, you need:

  • a modern system with 64-bit CPU (amd64)
  • a supported operating system in 64-bit flavor (FreeBSD, OpenSolaris, FreeNAS)
  • at least 2GB RAM, 4GB recommended. For serious usage you might even want more, as ZFS can put memory to very good use; there is no real limit.
  • SATA drives connected to chipset controller or PCI-express add-on controller

Do not use any PCI controller! It's okay to mix a PCI-express add-on controller and onboard SATA, so you can expand the way you like; the drives don't all have to be on one controller. This is an advantage software RAID is able to provide.

As for hardware, I would go for at least a dual-core; ZFS is threaded, so the more cores the better. Especially if you want to use on-the-fly compression and/or encryption, this is very beneficial. Note that ZFS's own encryption is not yet finished (beta) and not yet integrated into ZFS. But it's possible to use encryption in FreeBSD with the geom_eli or geom_bde encryption modules, which work with everything. AMD CPUs perform a lot better with AES encryption than Intel CPUs; I don't know exactly why, but the difference is huge.
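
As an illustration of the geom_eli route (the device name is an example, and geli init erases whatever is on the target):

    geli init -s 4096 /dev/ad4s1    # initialize encryption; asks for a passphrase
    geli attach /dev/ad4s1          # attach; the plaintext device appears as /dev/ad4s1.eli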

A nice hardware list would be:

  • AMD Phenom X2/X3/X4 (~$100; try not to exceed 65W TDP)
  • AMD 785G chipset motherboard; Micro-ATX should be fine, with 6 onboard SATA ports and PCIe
  • 2 x 2GB DDR2 or DDR3. Quantity matters, not memory speed.
  • Energy-efficient power supply (reserve 150W for the base system + 30W per disk for spin-up)
  • Hard disks: I prefer WD Green 1.0 / 1.5 / 2.0 TB disks because they use little power, generate little heat, require no cooling and should be reliable.

For the base system (excluding disks) it would be about $100 CPU + $75 mobo + $50 memory + $50 PSU + $50 case plus cables = $325. You should be able to use the onboard gigabit ethernet and onboard graphics, so it's a pretty basic system really.

$325 + $110 per 1.5TB disk = $765 with 4 x 1.5TB disks (6.0TB raw capacity, or 4.5TB in RAID-Z/RAID5)

Disks I found on Newegg:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
(please note I live in the euro-zone, so dollar prices may be off)

November 4, 2009 4:04:33 AM

Thanks for taking the time to post this great information, sub mesa. I've just spun up a FreeNAS box and am currently evaluating how best to take advantage of this emerging technology. ZFS sounds very interesting, and I'll be playing around with it for starters, I think. I have a pair of 500GB drives in the box currently, so I will start running some basic performance tests with those. I'd like my NAS to not only be a file server, but also stream media to my 3x HTPCs (all over full gigabit HP switches). We'll see if FreeNAS and ZFS pass the test!

Personally, I'm still evaluating which hard disks to go with for my FreeNAS build. It's interesting that you recommended the WD Green drives. It seems like lots of people are having problems with them in RAID situations - lots of failures, and I even remember reading that WD recommends not using them in RAID (I hate how you never know if they just want you to buy the more expensive drives, or if there is a legitimate hardware-based reason). I don't have any links to back that up, though; I just ran across it a few times in personal comments and reviews, so no idea if it's really true.

I assume, from your recommendation, that you haven't experienced any problems in that regard? I own 750GB and 1TB WD Green drives and have never had any problems (in single-drive configurations), but people seem to be really warning against these in RAID setups. Thoughts, opinions?

Thanks again for the ZFS info...

To the OP: I'm also building my NAS on an older P4 system, though I may switch to a spare Pentium D 820 CPU and board I have lying around if I'm going to implement ZFS. It sounds like the P4 may be a bit low-end for a ZFS solution.

In any case, I'd recommend using a single NAS solution if you can help it. If nothing else, you'll save $50-60 a year in power alone by not running that second system. If you really want to use the second set of hardware, consider an offline backup system (power it up once a month to run a backup, then shut it back down again); that's one option I'm considering (besides just having an external USB enclosure as an offline backup).

I looked at WHS as well, but the additional licensing costs just don't make sense (I just blew a ton of $$ on 5 Windows 7 licenses!). Besides, all of the "additional" things you get with WHS you can do yourself with your offline backup, and have more protection than a single WHS.

November 4, 2009 6:17:24 AM

I've been using WD Green drives (the 1TB ones) for more than a year now (I think) without any issues. None of them has failed or dropped out of the array yet.

WD Green drives come with TLER (Time-Limited Error Recovery) disabled by default, though you can enable it with a WD utility. The RE4-GP drives from WD, which are the same physical drives but with a longer warranty, come with TLER enabled. I'm not 100% sure, but I think it's just a way for the HDD manufacturers to sell the same products for a little extra money.

Windows-based RAID setups appear to be very fragile and offer the owner virtually no way to inspect what happened when an array has broken or a disk has been disconnected. So you might boot your system in the morning and suddenly it says the array failed, while it was just a HDD that took longer than 5 seconds to recover a single bad sector; all this trouble because the RAID engine did not wait long enough to allow the drive to perform error recovery, which is a common thing...

As for power: my second fileserver doesn't have to be on 24/7. It is at the moment, as I use it for some other stuff too, but in the past I had it boot every night (a setting in the BIOS) and shut down after synchronisation had completed, all done with scripts. So fully automated. However, depending on your personal situation, not all of the data on the "main NAS" needs to be backed up. If you can separate your data into "important" and "less important", only the important stuff needs a backup; the rest can be re-created/re-downloaded in case of data loss.

I do want to warn, though, that ZFS likes to be on a 64-bit operating system; Pentium 4 and Pentium D do not allow 64-bit, so you'll be limited to 32-bit operation. FreeBSD 8 disables some things in ZFS when it detects you are running a 32-bit OS/CPU. But FreeNAS is still based on FreeBSD 7, so you may encounter instability; ZFS on 32-bit is not 'recommended', though it may be very stable in your case. These older CPUs also consume a lot of power when idling, which is what systems do 95-99% of the time even when you're using them. A modern CPU might use only 1-5W when idling, while older ones use about 25W. The TDP value of CPUs is misleading, as it tells you nothing about actual power usage or power usage when idling. Idle power usage is the most important; except for render farms. :)
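
If in doubt about what your FreeBSD/FreeNAS install detected, a quick check from the shell is:

    sysctl hw.machine_arch    # prints amd64 on a 64-bit install, i386 on 32-bit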

Good luck! I'd love to hear about your experience.

November 4, 2009 4:52:14 PM

I thought a P4 could run in 64-bit mode. I guess not.

Quote:
It is at the moment, as I use it for some other stuff too, but in the past I had it boot every night (a setting in the BIOS) and shut down after synchronisation had completed, all done with scripts.


This sounds appealing to me now, as I could have two UFS systems since I do have the two machines. How exactly does it boot and synchronize, all with scripts?

November 4, 2009 5:41:02 PM

Synchronisation is done using "rsync" (which stands for remote synchronisation, I think). It consists of a client and a server (rsyncd; everything with "d" at the end is a daemon or server process). The server process should always be running.

Then the script would simply launch something like:

    rsync -chirtvv --delete --stats --exclude-from=EXCLUDE --progress --partial root@10.0.0.26::mesa/storage/ /destination/on/client/

This connects to the server with IP 10.0.0.26, logs in as root (some authentication stuff is hidden) and connects to the storage pool "mesa" with the directory storage, which is mounted as /mesa/storage on the server. The client then transfers each file in that directory on the server to the local directory, /destination/on/client/ in my example. The two directories end up exactly the same. Any "extraneous" files that exist in the client directory but not in the server directory are deleted if the --delete parameter is used; this is kind of dangerous, so use it with care. Without this parameter it will only change/add files, but not remove them if they were removed on the server. The various options at the beginning enable stuff like checksumming, which takes a lot of time, but guarantees that the files are byte-for-byte the same on both client and server, which is a safe feeling. :)
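
Before trusting the --delete flag, it's worth doing a dry run first; adding -n (--dry-run) makes rsync only report what it would transfer or delete:

    rsync -chirtvvn --delete --stats root@10.0.0.26::mesa/storage/ /destination/on/client/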

After synchronisation has finished, the script just issues a "shutdown -p now", which powers down the system. The BIOS timer boots the system up again, and the script gets executed right after booting has finished. So it's a self-repeating cycle.

Of course this works with any filesystem, not just ZFS. But ZFS allows you to create snapshots, which my script does before it syncs. This prevents the backup from being destroyed by the script if the server has an empty directory at the place where it should have my files. This can happen if the array is unmounted, or if for some other reason all files are gone (virus/hacker?). The same deletion would still happen, but with ZFS I could issue one command to 'roll back' to an earlier snapshot.
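
Put together, a minimal version of such a nightly script might look like this (the share name and IP follow the example above; the pool name and paths are illustrative):

    #!/bin/sh
    # hypothetical nightly backup script, run automatically after boot
    zfs snapshot tank/backup@`date +%Y-%m-%d`    # instant snapshot before syncing
    rsync -a --delete root@10.0.0.26::mesa/storage/ /tank/backup/
    shutdown -p now    # power off; the BIOS timer boots the machine again tomorrow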

Snapshots are like backups; they work kind of like the System Restore functionality in Windows. Snapshots in ZFS are instant, meaning it doesn't take any time to create them; the copy-on-write design of ZFS makes this possible.

Either way, you should test this thoroughly and understand how you set things up. By the way, FreeNAS supports rsync, so you can try it out with FreeNAS first if you like, as it's easy to set up, and convince yourself of the proper (final) setup.

If you use UFS, you may wish to use journaling too; FreeBSD 7.x has geom_journal, which can add reliability to UFS filesystems in case of a crash or power outage.
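
A rough sketch of enabling it (the device name is an example; newfs destroys existing data on that partition):

    gjournal load                 # load the journaling GEOM class
    gjournal label ad4s1          # the journaled device appears as /dev/ad4s1.journal
    newfs -J /dev/ad4s1.journal   # create a UFS filesystem with the journaling flag set
    mount /dev/ad4s1.journal /storage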

November 4, 2009 6:26:39 PM

I'll have to test some things out when I get my hands on my old hardware; I'm currently away at school, but I plan on setting up the server over winter break back home. I was hoping to use the old hardware, but it's looking like it might just be easier to get a whole new setup.

If I do go with two servers, one backing up the other, I'll definitely need to consult you to set it up like you have, because that sounds pretty slick.

One thing I am worried about (among other things) is how many SATA ports my old computers' motherboards have. Can you recommend a good SATA controller with a good number of ports in case I need extras? I have been able to find ones that handle RAID, but I don't really need RAID capability and the expense that comes with it...

November 4, 2009 11:10:30 PM

I believe that the Pentium D and some of the later P4s (some Prescotts and thereafter) supported 64-bit, with BSD specifically named as supporting it (http://en.wikipedia.org/wiki/X86-64#Intel_64). I'd be surprised if it didn't work. It's worth a try for sure.

As for power consumption and needing a whole new system to support a FreeNAS setup, I'd really recommend looking at the numbers. As stated, the difference between an idle P4 and an idle "modern" CPU may be 20 watts, which equates to about $35/year running 24x7x365 (20W x 8,760 hours is roughly 175kWh; at $0.20/kWh that's about $35). It would take you YEARS to recover the cost of new components if you base your decision on power consumption alone. If you have existing hardware, be sure you REALLY need the new gear. Something like needing more SATA ports can be handled by an add-in card; something like your CPU not supporting 64-bit when your desired technology needs it would be a reason to upgrade. Just food for thought.

As for the WD drives, sub mesa mentioned a utility, and a bit of searching has surfaced one called WDIDLE, which sounds like it extends the timeout period before the Green drives park their heads. I guess this keeps them more active and less subject to failures in RAIDs, though it also removes some of the "Green" appeal due to the increase in power consumption. Anyway, I just thought I'd mention what I found in case anyone is looking for this information.

I played with my NAS a bit last night... ZFS seems complex to set up! Is there a good tutorial or walkthrough on setting it up for the first time? Not straightforward, that's for sure. Gonna play with it some more tonight...

November 5, 2009 4:30:14 AM

@Gamemaster000: it appears you are right, if I believe Wikipedia. I did know the Prescotts had 64-bit support (AMD64; Intel also has IA-64, which is the Itanium architecture and not directly compatible with x86). But I thought Intel had disabled this functionality and later decided to abandon the NetBurst architecture altogether, never having promoted the 64-bit capabilities of these CPUs. Anyway, if you can use them in 64-bit, do so!

Regarding power consumption: power prices will rise, and here 1W running 24/7/365 costs about 2 euro or 3 dollars. So if you invest $200 in a new base system and save 50W total, it would be a "free" investment after a little over a year. By that time, you literally have the new hardware for free, assuming you no longer use the older hardware. Of course these savings are less pronounced with lower energy prices (though they are rising!) and more modest energy savings. But even if it takes longer, you do have the benefit of newer hardware - modern motherboards come with 6 SATA ports and plenty of PCI-express slots for expansion - while the costs continue to diminish over time thanks to the energy savings. If you don't do it for the environment, you can do it for your own personal gain. Personally I care about both and think "green" is simply the logical solution for both the environment and myself.

As for TLER and timeouts: FreeBSD lets you see if a drive times out on its requests, and lets you set the time a drive is allowed before it is considered a timeout (after which a retry will kick it out of the RAID). On Windows this process is hidden from the user, with little configuration or tuning possible.

Getting started with ZFS?

First create an array:

    zpool create tank /dev/ad4
    (single disk without RAID)

    zpool create tank /dev/ad4 /dev/ad6 /dev/ad8 /dev/ad10
    (creates a 4x RAID0 array)

    zpool create tank mirror /dev/ad4 /dev/ad6
    (creates a 2x RAID1 array)

    zpool create tank raidz /dev/ad4 /dev/ad6 /dev/ad8 /dev/ad10
    (creates a 4x RAID-Z or RAID5 array)

The name of the array ("tank" in my example) is used as the mountpoint, so if you do a "df -h" it will list your ZFS array mounted as /tank, with the free space it has.

After creating a pool (with RAID functionality), one filesystem is created automatically; you can create additional filesystems within the same pool:

    zfs create tank/personal
    zfs create tank/work
    zfs create tank/work/textdocuments

Set compression on filesystems with lots of text files:
    zfs set compression=gzip-9 tank/work/textdocuments
(this only works for newly created files)
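
Afterwards you can check how effective it is via the compressratio property:

    zfs get compressratio tank/work/textdocuments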

I could go on with many more examples, but ZFS is something you should explore one step at a time, I think. The manual pages (man zfs and man zpool) serve as a reference, or this link can serve as a quick reference:
http://lists.freebsd.org/pipermail/freebsd-current/2007...

Good luck playing with ZFS. :)

November 5, 2009 4:40:15 AM

As for expanding SATA ports, this PCI-express controller is my favorite:
http://www.supermicro.com/products/accessories/addon/AO...

It has two mini-SAS connectors; these give you two cables which each split into 4 SATA ports, for a total of 8 SATA connectors. They support staggered spin-up, which is very useful as it spreads the drive spin-ups more slowly/evenly to give the power supply a break. Each drive can consume 28-35W when spinning up, which adds significantly to the wattage your power supply must deliver. With this feature you can have many disks and still use a modest 400W power supply, which is a lot more efficient than 500W+ units at low idle loads of 50W and less. It also lets you use spin-down functionality to save power at night, without taxing the power supply so hard that it trips the overload protection and switches the entire system off. It's also a plain SATA controller with no RAID support, which is nice.

By the way, in my examples above I used raw devices (/dev/ad4) without partitions. This works alright, but can create problems with fake-RAID controllers (PCI cards with Silicon Image/JMicron/Promise/etc.) and is not very flexible. The recommended way of using disks in ZFS is probably:

1) Create a partition that leaves some space at the end of each drive; so if a drive has 5,000,000 sectors, leave 2,000 or so unused at the end. This prevents problems with fake-RAID cards.
2) Now you have partitions, called /dev/ad4s1 for example; create a label for each:
    glabel label disk1 /dev/ad4s1
    glabel label disk2 /dev/ad6s1
(etc. - one label for each disk, on its partition)

Now you have label devices located in:
    /dev/label/disk1

Which you use when creating the zpool:
    zpool create tank raidz /dev/label/disk1 /dev/label/disk2 /dev/label/disk3 /dev/label/disk4

This allows you to move the disks around any way you want - connect them to different ports, controllers, even remote computers - and ZFS will only look at the label, not the physical disk name. I've had some issues in the past with 'hardcoded' device names, though I've heard FreeBSD 8 deals more gracefully with device name changes.

December 20, 2009 6:16:18 AM

I've discovered something about the computer I was planning to convert to my server. The motherboard supports SATA 1.5Gb/s but not SATA 3Gb/s. Is this going to bottleneck my read/write speeds, and can I connect a SATA 3Gb/s hard drive to it?

Thanks