But I posted the following on that thread; I don't know if you saw it or not.
It all depends on the level of redundancy and capacity you need. If you don't need much storage space and want average redundancy, then a pair of drives in a RAID 1 would be sufficient. If you need lots of space (more than a pair of mirrored drives could provide), then you need to move to RAID 5, or, if the array will be really large (5 or more disks) and the data is critical, RAID 6 would be even better.

A RAID 5 array requires a minimum of 3 disks, and your usable capacity is the total minus 1 drive. A RAID 5 array can lose 1 disk and continue to function. A RAID 6 array requires a minimum of 4 disks, and your usable capacity is the total minus 2 disks. A RAID 6 array can tolerate losing any 2 disks at any given time. RAID 6 would be a good idea for, say, an array of 6 or more drives of several hundred GB to a few TB each, where good fault tolerance is required in a medium-throughput application.
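To put rough numbers on those capacity rules, here's a quick sketch. The `usable_capacity` helper is purely illustrative (not from any RAID tool), and it assumes equal-sized disks, since an array is limited by its smallest member anyway:

```python
# Rough sketch: usable capacity for common RAID levels, assuming
# equal-sized disks. Values are in whatever unit you pass in (e.g. TB).
def usable_capacity(level, disks, size_per_disk):
    if level == 1:
        if disks < 2:
            raise ValueError("RAID 1 needs at least 2 disks")
        return size_per_disk              # mirrored: capacity of one disk
    if level == 5:
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * size_per_disk  # one disk's worth of parity
    if level == 6:
        if disks < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (disks - 2) * size_per_disk  # two disks' worth of parity
    raise ValueError("unsupported RAID level")

print(usable_capacity(5, 4, 1.0))  # 4x 1 TB in RAID 5 -> 3.0 TB usable
print(usable_capacity(6, 6, 2.0))  # 6x 2 TB in RAID 6 -> 8.0 TB usable
```

So the parity overhead of RAID 6 only really starts to pay off on larger arrays, where losing two disks' worth of capacity is a small fraction of the total.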
As for the card you asked about: in a RAID 1 scenario, the card you linked won't provide much speed benefit over the onboard ICH9R, maybe a very minimal gain, if anything. However, the main reason people go with a dedicated RAID card rather than onboard RAID is that a lot of onboard controllers don't support things like hot swap, hot spares, and online array rebuilding, which makes them unsuitable in a business scenario that requires high availability of data. What's the point of having a hosted service if you have to take the server down to rebuild an array after a failed drive, which might take hours for a large array?
In a RAID 5 scenario, the ICH9R would be blown away by a caching RAID card (not the one you linked, though, since it doesn't even do RAID 5). The reason is that RAID 5/6 require a lot of processing power for the parity calculations involved, and hardware RAID cards have a dedicated RISC or ASIC engine to handle this. I hope I was able to shed a little bit of light on this.
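For a feel of what those parity calculations are, here's a minimal illustration of RAID 5's single parity: the parity block is just the XOR of the data blocks in a stripe, so any one missing block can be rebuilt by XOR-ing the survivors with the parity. (This is a toy sketch of the principle; real controllers do this per stripe, in hardware, at full disk speed, and RAID 6's second parity uses more involved math.)

```python
# Toy RAID 5 parity demo: parity = XOR of all data blocks in a stripe.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]  # three data blocks (one stripe)
parity = xor_blocks(data)                        # what the parity disk stores

# Simulate losing the second disk, then rebuild its block from the
# remaining data blocks plus the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True: the lost block is recovered
```

Doing this XOR (and the heavier RAID 6 math) for every write is exactly the work the card's dedicated processor offloads from the host CPU.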