PCIe RAID Controller versus Intel Rapid Storage
I want to create a very fast RAID 0 setup by buying several (perhaps as many as six) SSDs. The only performance characteristic I care about is sequential write speed; latency is not an issue. What are the price and speed considerations in choosing between a PCIe RAID controller (and, if so, which one?) versus using all the SATA ports provided by the Intel Z68 chipset with the RAID 0 provided by Intel Rapid Storage Technology?
Maximum PC's June issue evaluated the latest SSDs, including their random and sequential read and write speeds, using CrystalMark and AS SSD benchmarks. If you don't have access to the magazine or the review, I can list the data.
Use the Z68 Intel chipset's own SATA ports for your RAID 0, since those are driven directly by the PCH rather than going through the PCH and then on to a separate Marvell controller.
When you set up a RAID 0 with multiple disks, you use just one controller and add the disks to the volume; you don't spread them across several controllers.
As a general rule, RAID 0 speed scales with the number of disks: with 3 disks in RAID 0, you get roughly 3 times the read and write speed of a single disk.
A six-SSD RAID 0 is roughly six times as likely to fail as a single drive, since losing any one drive loses the entire array. Please consider how badly you would be hurt if you lost all the data since your last backup.
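To put numbers on that risk: if each drive fails independently with probability p over some period, the array fails unless every drive survives. A quick sketch (the 5% per-year single-drive failure rate is just an illustrative assumption):

```python
# Probability that an n-drive RAID 0 loses data, assuming each drive
# fails independently with probability p over the same period.
def raid0_failure_probability(n: int, p: float) -> float:
    # The array survives only if every single drive survives.
    return 1 - (1 - p) ** n

# Illustrative assumption: 5% chance of a single drive failing per year.
p = 0.05
for n in (1, 2, 6):
    print(f"{n} drive(s): {raid0_failure_probability(n, p):.3f}")
# 1 drive(s): 0.050
# 2 drive(s): 0.097
# 6 drive(s): 0.265
```

Note the six-drive figure is slightly under 6 x 5%, because the failures overlap; for small p the "n times as likely" rule of thumb is a good approximation.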
Did you read Tom's 3 GB/sec project article? That's not a typo, it's three gigaBYTES per second. Lots of SSDs in RAID0. http://www.tomshardware.com/reviews/raid-array-ssd,2915.html
The data-loss risk isn't a problem for external-memory algorithms, which is what I really want to gear up for (a research workstation). But for actual long-term storage, yes, that is quite a big deal! Is there a way to partition the drives into both a RAID 0 partition and a RAID 5 partition, so as to have a fast partition for external-memory algorithms and a safe partition for long-term storage?
That 3 GB/s article is pretty cool. So they managed the 3 GB/s over PCIe with LSI cards. I'm wondering about the alternative: using the six SATA ports provided by the Z68 chipset itself (not the Marvell ones). One naively calculates 6 + 6 + 3 + 3 + 3 + 3 = 24 Gb/s = 3 GB/s. But I guess the DMI 2.0 link on the Z68 maxes out at 20 Gb/s = 2.5 GB/s. Is there some other dramatic bottleneck I am failing to take into account? Or could I conceivably get more than 2 GB/s using these six SATA ports without having to use a PCIe card?
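For what it's worth, those naive numbers overstate things a bit: SATA uses 8b/10b line encoding, so a 6 Gb/s port carries at most about 600 MB/s of payload and a 3 Gb/s port about 300 MB/s. A rough sketch of the sum versus the DMI ceiling (the ~2 GB/s effective DMI 2.0 figure is my assumption from its 4 x PCIe 2.0 lane design, which also pays the 8b/10b tax):

```python
# Naive sum of the Z68's SATA ports vs. the DMI 2.0 uplink.
# SATA uses 8b/10b encoding, so payload bandwidth is 8/10 of line rate.
SATA_PORTS_GBPS = [6, 6, 3, 3, 3, 3]   # two 6 Gb/s + four 3 Gb/s ports

payload_gbps = sum(rate * 8 / 10 for rate in SATA_PORTS_GBPS)
payload_gb_per_s = payload_gbps / 8    # bits -> bytes

# Assumption: DMI 2.0 is 4 PCIe 2.0 lanes at 5 GT/s with 8b/10b encoding.
dmi_gb_per_s = 5 * 4 * 8 / 10 / 8

print(f"SATA ports combined: {payload_gb_per_s:.1f} GB/s")  # 2.4 GB/s
print(f"DMI 2.0 ceiling:     {dmi_gb_per_s:.1f} GB/s")      # 2.0 GB/s
```

So even before controller overhead, the DMI link looks like the binding constraint for an all-chipset array.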
The best way to increase computational speed is to use multicore processors or parallel processing, and for storage to use SSDs in RAID 0 or the LSI cards.
You can use separate RAID protocols on separate controllers.
In general, for an array of N disks:
RAID 0: read and write speeds are roughly N times a single disk.
RAID 1: read scales with the number of disks (both disks can be read at the same time); write stays at single-disk speed.
RAID 5: read is roughly 3/4 times the number of disks, and write is poor until you get a much larger number of disks in the array.
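Those rules of thumb can be written down as a quick calculator. This is the ideal-case model only, and the RAID 5 write figure of N/4 is my simplification based on the classic four-I/O small-write penalty; real controllers will come in lower across the board:

```python
# Ideal-case throughput multipliers relative to a single disk, per the
# rules of thumb above. Real-world numbers will be lower.
def raid_multipliers(level: int, n: int) -> tuple[float, float]:
    """Return (read_multiplier, write_multiplier) for n disks."""
    if level == 0:
        return (n, n)          # striping scales reads and writes
    if level == 1:
        return (n, 1)          # mirrors read in parallel, write once
    if level == 5:
        # Reads stripe at ~3/4 of the disk count; writes pay a parity
        # penalty (the n/4 figure assumes the classic four-I/O
        # small-write cost -- my simplification).
        return (0.75 * n, n / 4)
    raise ValueError(f"unsupported RAID level: {level}")

print(raid_multipliers(0, 3))   # (3, 3)
print(raid_multipliers(5, 4))   # (3.0, 1.0)
```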
The fastest setup, IMO, would be RAID 0 on one controller for computations (or use Watson!) and RAID 1 on another for data redundancy.
It appears that there is something called "Matrix RAID," which Intel offers in IRST, where you can actually do RAID 0 and RAID 5 on different parts of the same set of drives. That would be nice: I could get the blinding RAID 0 speed for the paging algorithms and yet not have the geometrically increased chance of failure for critical data. At the same time, I don't have to give up a card slot to do it (important, since I want to cram those slots with GPUs for CUDA programs). Unfortunately, I've read reports that some people are getting absolutely horrible RAID 5 write speeds on IRST. I guess I should just build the darned thing and see how well I can get it to work.
Matrix RAID gets pretty complicated, and you'd have to try it to see how effective it is.
Write speeds are much slower on RAID 5 until you get a much larger number of disks in the array. There are calculators on the web you can use to estimate the speed advantage or disadvantage of the different RAID combinations depending on how many disks are in the array.
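The reason RAID 5 writes lag is the small-write penalty: updating one block means reading the old data and the old parity, then writing the new data and the new parity, i.e. four I/Os per logical write. A sketch of the effective write IOPS under that ideal model (my simplification; controller caches can hide some of this):

```python
# Classic RAID-5 small-write penalty: each logical write costs
# 2 reads (old data, old parity) + 2 writes (new data, new parity),
# so the array's aggregate IOPS is divided by 4.
def raid5_write_iops(n_disks: int, iops_per_disk: float) -> float:
    return n_disks * iops_per_disk / 4

# More disks mean more aggregate IOPS to divide, which is why larger
# arrays claw back write performance.
for n in (3, 6, 12):
    print(n, raid5_write_iops(n, 100))
# 3 75.0
# 6 150.0
# 12 300.0
```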
Logically, if you combine RAID levels on the same array of disks (1+0 = RAID 10, or other permutations), you are still using the same controller and, on spinning disks, the same heads, so the heads would have to seek to the other section of the disk to write data, slowing things down (SSDs avoid the seek penalty but still share the controller). And if you lost one drive for whatever reason, you would completely lose the RAID 0 computational advantage until you rebuilt both arrays.