I was in a similar situation to yours, only for personal storage. I've had great success with a direct-attached storage setup of SATA II drives and Linux software RAID.
You don't need to spend big $$ on traditional server gear to get a flexible, scalable array (up to around 20TB raw) - if you do your reading. All you need is commodity disks (all with at least a 3yr warranty these days), enough ports to attach them & your favourite Linux distro!
Some history: after an LVM array failure, it was clear I needed redundancy. I also needed to be able to add/remove disks in the array over time, preferably while online. I started with 500GB disks but recently migrated to 1TB disks, and at present have a 7-disk RAID6 array (5TB usable). My filesystem is reiserfs 3.6, so I can also resize the filesystem while it's mounted.
With Linux kernels >2.6.16 (IIRC), udev supports hotplug (no need for 3.3V SATA power), so attaching a new member disk to your system is limited only by your case & PSU plugs (I use several molex-to-SATA Y splitters).
Once the disk is visible to the system (check with 'lsscsi'), you simply add it to the array with 'mdadm /dev/md0 -a /dev/sdX' and then grow the array onto the new device with 'mdadm -G /dev/md0 -n X'. The reshape can take a long time - around 20 hours to grow from six to seven 1TB disks in my case. Finally, grow your filesystem on the array & you're done.
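For illustration, the sequence looks roughly like this (device names such as /dev/sdg and /dev/md0 are just examples - substitute your own, and double-check the mdadm manpage for your version first):

    # confirm the new disk is visible (say it shows up as /dev/sdg)
    lsscsi

    # add the disk to the array
    mdadm /dev/md0 --add /dev/sdg

    # grow the array onto it, e.g. from 6 to 7 active devices
    mdadm --grow /dev/md0 --raid-devices=7

    # watch the reshape progress
    cat /proc/mdstat

    # once the reshape finishes, grow the filesystem;
    # resize_reiserfs can grow a reiserfs volume while it's mounted
    resize_reiserfs /dev/md0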
With Linux software RAID you can even increase the member disk sizes, so a future migration to (in my case) 2TB disks and above is possible. You can also migrate from one RAID level to another, online. Monitoring support is built in too (e.g. emails on events), and arrays can be members of other arrays (e.g. RAID10, RAID50, RAID60 etc.).
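As a rough sketch of what I mean (the email address and backup file path are placeholders, and online level migration needs a reasonably recent mdadm and kernel - check the manpage for yours):

    # monitor all arrays and email on events (address is an example)
    mdadm --monitor --scan --daemonise --mail=you@example.com

    # after every member has been replaced with a larger disk,
    # expand the array to use the new capacity
    mdadm --grow /dev/md0 --size=max

    # online migration from RAID5 to RAID6, adding a device for the extra parity
    mdadm --grow /dev/md0 --level=6 --raid-devices=7 --backup-file=/root/md0-grow.backup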
Before choosing Linux software RAID, the best thing to do is read the mdadm manpage a few times & experiment with some loopback devices (check out 'losetup') in a dummy array. The Linux md driver is very mature & software RAID performance is better than most low-end to mid-range hardware RAID cards. For me, it was the clear choice for my storage needs both now & for the foreseeable future.
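Something like this is what I mean by a dummy array - a few small files pretending to be disks, so you can practise failing and re-adding members without touching real data (file paths, loop device numbers and /dev/md9 are arbitrary examples):

    # create four small files to act as fake disks and attach them to loop devices
    for i in 0 1 2 3; do
        dd if=/dev/zero of=/tmp/disk$i.img bs=1M count=128
        losetup /dev/loop$i /tmp/disk$i.img
    done

    # build a throwaway RAID6 array from the loop devices
    mdadm --create /dev/md9 --level=6 --raid-devices=4 \
        /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

    # practise failing, removing and re-adding a member, then tear it down
    mdadm /dev/md9 --fail /dev/loop3
    mdadm /dev/md9 --remove /dev/loop3
    mdadm /dev/md9 --add /dev/loop3
    mdadm --stop /dev/md9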
I strongly recommend watching your disks' health via SMART (I also graph my drive temps via MRTG). I also strongly recommend RAID6: although it's initially expensive in terms of usable space with a small number of disks, later on you'll feel much safer knowing that any 2 disks can drop before you're at risk of losing data. Remember that with RAID5 your array is vulnerable as soon as you lose just 1 disk, and the rebuild after replacing a disk is the riskiest time!
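By way of example (the device name and email address are placeholders):

    # one-off health check and full attribute dump for a member disk
    smartctl -H /dev/sdb
    smartctl -A /dev/sdb

    # or let smartd watch it continuously - a line like this in /etc/smartd.conf
    # monitors all attributes and mails you when something goes bad:
    # /dev/sdb -a -m you@example.com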
One thing to be wary of: when you run out of onboard SATA ports & add a controller card, the card's ports may take precedence over the onboard ports for disk numbering, so the first disk cabled to the controller becomes /dev/sda instead of the first mainboard port you were used to. Linux software RAID deals with this by reading the superblock information stored at the end of each member disk, so it can assemble the array correctly even when the device names don't match the saved config.
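You can see this for yourself - each member carries its own superblock, and the array can be assembled by UUID rather than by device name (the /etc/mdadm.conf path is the common default, but check where your distro keeps it):

    # inspect the RAID superblock on any member, whatever its current name is
    mdadm --examine /dev/sda

    # record the array by UUID so assembly doesn't depend on device ordering
    mdadm --detail --scan >> /etc/mdadm.conf

    # assemble from the superblocks, even if the /dev/sd* names have shuffled
    mdadm --assemble --scan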
Another piece of advice is to spread your disk purchases across multiple manufacturers and dates. There's no point buying lots of disks from the same batch from the same factory, as the odds are they'll all fail in close proximity to one another too. Linux software RAID checks for discrepancies in member disk size & allows (IIRC) a 1% variance. I use whole physical disks in my array, not partitions. In my reading I found some people also recommended ECC memory for large direct-attached arrays, although I've had no issues with a decent brand of regular DDR2 unbuffered, non-ECC RAM.
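If you want to sanity-check sizes before adding a disk of a different make, something simple like this does it (device names are examples):

    # raw size in bytes of each candidate member disk
    blockdev --getsize64 /dev/sdc /dev/sdd

    # or see the size mdadm actually records on an existing member
    mdadm --examine /dev/sdc | grep -i size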
I'm also looking now for some high-density SATA-II RAID backplanes to allow easy attachment & removal of disks in my array; I am pretty sure I will go for the Supermicro 5-in-3 modules (same positive experiences with Supermicro as others have stated).
Edit: Oh also I should mention I run my O/S (openSUSE 10.3) from a 4GB USB flash disk; it's a little slower than a regular hard disk but has great benefits such as complete separation from your array & no moving parts.