The JMicron controller only supports RAID levels 0, 1, and JBOD. I have no interest in using JBOD (what's the point?) and I'm more worried about capacity and performance than reliability, so I'm inclined towards RAID 0. I am, however, slightly worried about a massive failure of the array...
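To put that worry in rough numbers, here's a back-of-the-envelope sketch (assuming independent drive failures and a purely hypothetical 3% annual failure rate per drive; the real rate for any given model will differ):

```python
# RAID 0 risk sketch: a striped array is lost if EITHER drive fails.
# The 3% annual failure rate below is a made-up illustrative figure.
p_drive = 0.03  # hypothetical per-drive annual failure probability

p_raid0 = 1 - (1 - p_drive) ** 2  # stripe survives only if both drives survive
p_raid1 = p_drive ** 2            # mirror is lost only if both drives fail

print(f"Single drive: {p_drive:.1%}")       # 3.0%
print(f"RAID 0 (2 drives): {p_raid0:.2%}")  # 5.91%
print(f"RAID 1 (2 drives): {p_raid1:.2%}")  # 0.09%
```

So striping two drives roughly doubles the chance of losing everything in a given year, which is why the backup drive mentioned below matters.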
I'm thinking of using two Samsung T166 500GB 7200RPM SATA (HD501LJ) drives in the array (the cheapest 500GB drives I can find), or alternatively two Samsung Spinpoint F1 640GB SATA (HD642JJ) drives (the cheapest 640GB drives).
I intend to use my existing 320GB Samsung T166 for backup and secondary storage, but both my OSes (Ubuntu and Vista) and the bulk of my data (music, videos, pictures, programming stuff, etc.) would reside in the array.
Any thoughts on the setup, hardware, reliability, performance or on anything else would be most welcome.
UPDATE: I've just been looking at a couple of other HDDs I could get (at about the same cost):
Western Digital Caviar SE16 640GB SATA (WD6400AAKS)
Samsung Spinpoint F1 500GB SATA (HD502IJ)
Western Digital Caviar SE16 500GB SATA (WD5000AAKS)
Typically a RAID array fails due to I/O errors from the RAID controller or the interface connection, not the drives themselves. You may not get much warning either, especially if you are using the onboard RAID chip on a retail motherboard. Most people looking for better-quality arrays tend to go with PCI, PCIe, or PCI-X controller cards. I don't currently have a RAID array, but I did some initial research when I considered running two Raptors.
What's the difference between PCIe and PCI-X? Always wondered...
PCI-X uses a 64-bit-wide bus at up to 133MHz, giving you a bandwidth of about 1GB/s. PCI Express uses a different approach called lanes; each lane carries 250MB/s. To compete with 64-bit PCI-X at 133MHz you would need an x4 card.
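A quick sanity check on those figures (a rough sketch; decimal units, PCIe 1.x lane rates, ignoring protocol and encoding overhead):

```python
# Rough bandwidth comparison, one direction, decimal (1 GB = 1e9 bytes).
pcix_bw = 64 / 8 * 133e6   # 64-bit bus * 133 MHz = 8 bytes * 133e6 ≈ 1.06 GB/s
pcie_lane = 250e6          # PCIe 1.x: ~250 MB/s per lane per direction
lanes_needed = pcix_bw / pcie_lane

print(f"PCI-X peak: {pcix_bw / 1e9:.2f} GB/s")      # ~1.06 GB/s
print(f"PCIe lanes to match: {lanes_needed:.1f}")   # ~4.3
```

Strictly speaking it comes out a bit over four lanes, so an x4 slot is in the right ballpark rather than an exact match.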
PCI-X slots are typically found in servers, for cards like gigabit Ethernet, Fibre Channel, and Ultra3 SCSI, none of which you will see on a typical desktop system.