Buses, Controllers, And Disks
Most older motherboards sport 32-bit PCI slots, all of which share bandwidth. If you look at the chipset diagram for one of those boards, the Ethernet controller, IDE controller, and SATA controller all connect to the PCI bus. Combined, disk and Ethernet traffic are limited to a theoretical 133 MB/s. This will work, but you will end up with a slower file server.
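That 133 MB/s figure comes straight from the bus width and clock rate: a parallel bus moves at best (width in bits ÷ 8) × clock in MHz megabytes per second. Here is a quick sketch of the arithmetic, which also gives the PCI-X figures quoted later in this section; real-world throughput is lower because of arbitration and protocol overhead.

```python
# Theoretical peak bandwidth of the parallel buses discussed in this section.
# Peak MB/s = (bus width in bits / 8) * clock in MHz; actual throughput is
# lower because of bus arbitration and protocol overhead.

def parallel_bus_mb_per_s(width_bits: int, clock_mhz: float) -> float:
    """Theoretical peak transfer rate of a parallel bus, in MB/s."""
    return width_bits / 8 * clock_mhz

buses = [
    ("32-bit PCI, 33 MHz",     32,  33.3),   # shared by every device on the bus
    ("PCI-X, 64-bit/33 MHz",   64,  33.3),
    ("PCI-X, 64-bit/66 MHz",   64,  66.6),
    ("PCI-X, 64-bit/133 MHz",  64, 133.3),
]

for name, width, clock in buses:
    print(f"{name}: ~{parallel_bus_mb_per_s(width, clock):.0f} MB/s")
# Prints roughly 133, 266, 533, and 1066 MB/s respectively.
```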
Many older server-class motherboards have some PCI-X slots. These are often a better choice, because the PCI-X bus is separate from the 32-bit PCI bus, so you can put your disk controllers in the PCI-X slots and nothing else will interfere with their I/O.
My first file server uses an Asus CUR-DLS motherboard with 64-bit, 33 MHz (266 MB/s) PCI-X slots. My second file server uses an Asus NCCH-DL motherboard with 64-bit, 66 MHz PCI-X slots, good for 533 MB/s, which is more than my six SATA drives can deliver. The controller card itself works at up to 133 MHz, which would be 1,066 MB/s if I had a newer motherboard.
If you have a PCI Express-based platform, even a single-lane slot should provide plenty of bandwidth for a home file server, as a first-generation lane is good for roughly 250 MB/s of throughput in each direction.
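For reference, here is where that per-lane figure comes from, assuming first-generation (PCIe 1.x) signalling: each lane runs at 2.5 Gb/s and uses 8b/10b encoding, so only eight of every ten bits on the wire carry data.

```python
# Per-lane throughput of first-generation PCI Express (the assumption here).
# Each lane signals at 2.5 Gb/s with 8b/10b encoding and is full duplex,
# so the result applies per direction.

line_rate_mbps = 2500          # PCIe 1.x line rate per lane, in Mb/s
encoding_efficiency = 8 / 10   # 8b/10b encoding: 8 data bits per 10 line bits

lane_mb_per_s = line_rate_mbps / 8 * encoding_efficiency
print(f"PCIe 1.x, one lane: ~{lane_mb_per_s:.0f} MB/s per direction")
# Prints ~250 MB/s per direction.
```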
There is another bus speed to worry about: the connection between the northbridge and southbridge on your motherboard. Even though the Asus NCCH-DL has 64-bit, 66 MHz PCI-X slots, the link from the northbridge to the southbridge is only 266 MB/s. In theory, this link caps total I/O. Fortunately, this isn't a big problem in practice, and newer chipsets usually have faster interconnects.
Disk Controller Card
Modern motherboards have up to six SATA 3 Gb/s connectors. Older ones have fewer available ports, and they might use the slower SATA 1.5 Gb/s standard. It is likely that you will need to add a controller card to your system.
Controller cards are available with several different interfaces. For newer systems, PCI Express is the most popular choice and offers plenty of bandwidth; for older systems, PCI-X still provides enough. For less-expensive builds, 32-bit PCI can be used, although it will limit performance.
There are plain disk controller cards (host bus adapters, or HBAs) and RAID controllers. Using Linux terminology, the RAID cards break down into two groups: FakeRAID and real RAID. If a card performs the XOR (parity) calculations itself, it is considered real RAID; otherwise it relies on the host CPU and software drivers to do the hard work.
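To make the distinction concrete, here is a minimal sketch of the kind of XOR parity work involved; a real RAID card computes this in dedicated hardware, while FakeRAID and Linux software RAID (md) leave it to the host CPU. The stripe data below is invented purely for illustration.

```python
# RAID 5 stores one parity block per stripe, computed as the byte-wise XOR
# of the stripe's data blocks. Any single lost block can be rebuilt by
# XORing the parity block with the surviving data blocks.

def xor_parity(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-sized blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three data blocks in one stripe (made-up values).
stripe = [b"\x12\x34\x56\x78", b"\xab\xcd\xef\x01", b"\x00\xff\x10\x20"]
parity = xor_parity(stripe)

# Simulate losing the first block and rebuilding it from parity + survivors.
rebuilt = xor_parity([parity, stripe[1], stripe[2]])
assert rebuilt == stripe[0]
```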
My current file server uses the Supermicro SAT2-MV8 eight-port SATA 3 Gb/s controller card. It is a PCI-X controller and will work with a bus speed up to 133 MHz. It is a very nice card with good software support. I chose it because my existing motherboard didn't have any SATA 3 Gb/s ports, but did have PCI-X slots.
I also bought a Rosewill four-port SATA 1.5 Gb/s HBA. It is a 32-bit PCI card, though it will work at 33 and 66 MHz. It supports JBOD configurations, which is what you need in order to let software-based RAID do its job. My Asus NCCH-DL has an onboard Promise PDC20319 controller, another HBA, but it does not support JBOD and is therefore useless here.
It is a good idea to check that Linux supports the controller card (assuming that this is the operating environment you'll be running). To do this, you will need to find out which storage controller chip the card actually uses and research Linux support for that chip. Of course, if the card maker offers a Linux driver, there's a good chance that you're in luck.
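One quick way to do that research on a running Linux system is to look up the controller's PCI vendor and device IDs and see which kernel driver, if any, has claimed it. The sketch below walks sysfs to do this (lspci -nnk reports the same information more readably); the paths and class codes are standard, but treat it as a rough diagnostic rather than a definitive compatibility test.

```python
# List PCI mass storage controllers (PCI class 0x01xxxx) and the kernel
# driver bound to each, by reading sysfs. The vendor and device IDs can
# then be researched for Linux support.

from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    pci_class = int((dev / "class").read_text(), 16)
    if (pci_class >> 16) != 0x01:      # keep only mass storage controllers
        continue
    vendor = (dev / "vendor").read_text().strip()
    device = (dev / "device").read_text().strip()
    driver = dev / "driver"            # symlink exists only if a driver bound
    bound = driver.resolve().name if driver.exists() else "no driver bound"
    print(f"{dev.name}  {vendor}:{device}  {bound}")
```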
Disks
I recommend SATA disks. They offer the largest capacities and are inexpensive, and they employ a point-to-point architecture that doesn't share bandwidth between drives. I built my first file server with parallel ATA (PATA) disks and put two disks on each channel. If one disk failed, the controller would likely act as if both disks on that channel had failed, and I would be stuck. A good PATA RAID card avoids this problem by supporting only one drive per channel. Of course, you then end up with a snake's nest of cables. This is one of the reasons why the industry moved to SATA.