I am about to buy four identical drives in the hope of setting up RAID 10 (1+0). I was told this is better than 0+1. I have read the tutorial on here and have a basic understanding, but I want a more specific answer as to why RAID 1+0 is better than 0+1. I picture 1+0 as one striped array (two drives) with the other two drives acting as mirrors, e.g. the third drive mirrors the striped array and then the fourth drive mirrors the mirror?
And finally, I'm starting this thread to have a post set up for when I have more questions, since the tutorial is heavily used, so please keep checking this post if you'd like to help a newbie find his way in the world of RAID.
I'm not sure there's any way to predict performance without benchmarking it yourself. I've seen array types that "should" be fast, on controllers that "should" be fast, completely fail to beat the speed of a single drive. (Google around for RAID 1E, which "should" perform the same as RAID-10 when the number of drives is even. I just tried it with six TB drives on an LSI SAS controller, and it was almost always 50% or more slower than a single drive.)
If you only have four drives, the reliability should be almost the same. Label the drives A, B, C, D and consider which two-drive failures each layout survives:

RAID-10 (stripe of mirrors): mirror pair (A,B) striped with mirror pair (C,D).
Maximum failure tolerance: 2 of 4 failed, survivable in 4 ways out of six (A+C, A+D, B+C, B+D, i.e. any combination that leaves one live drive in each mirror).

RAID-01 (mirror of stripes): stripe (A,B) mirrored by stripe (C,D).
Maximum failure tolerance: 2 of 4 failed, survivable in 2 ways out of six (A+B or C+D, i.e. both failures must land in the same stripe so the other stripe stays intact).
Breaking it down: either array can withstand any single-drive failure. Statistically there are 6 ways that two drives out of four can fail (4 choose 2). If you run RAID-10, you are protected in four of those six scenarios. If you run RAID-01, you are protected in two of those six scenarios.
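As a sanity check on that counting, here's a small Python sketch (the drive labels A through D and the pair/stripe assignments are just the conventional layout, not anything specific to your controller) that enumerates all six two-drive failures:

```python
from itertools import combinations

drives = ["A", "B", "C", "D"]

# RAID-10: mirror pairs (A,B) and (C,D), striped together.
# The array survives as long as every mirror pair keeps at least one live drive.
def raid10_survives(failed):
    return not ({"A", "B"} <= failed or {"C", "D"} <= failed)

# RAID-01: stripes (A,B) and (C,D), mirrored.
# The array survives as long as at least one stripe is fully intact.
def raid01_survives(failed):
    return {"A", "B"}.isdisjoint(failed) or {"C", "D"}.isdisjoint(failed)

two_drive_failures = [set(c) for c in combinations(drives, 2)]

raid10_ok = sum(raid10_survives(f) for f in two_drive_failures)
raid01_ok = sum(raid01_survives(f) for f in two_drive_failures)

print(f"RAID-10 survives {raid10_ok} of {len(two_drive_failures)} two-drive failures")
print(f"RAID-01 survives {raid01_ok} of {len(two_drive_failures)} two-drive failures")
```

It prints 4 of 6 for RAID-10 and 2 of 6 for RAID-01, matching the breakdown above.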
So there is a reliability difference, but not a huge one.
I especially say the reliability difference is not huge, because this is most likely a machine that you are going to be sitting close to most of the time. If one drive goes, you'll probably notice immediately. In a datacenter, if drives start failing, the staff have to A: notice and B: find someone with nothing better to do at the moment than go to the server (wherever it is, could be another state) and pop a working drive in for the failed one. Keep in mind the machine is still working just fine, just probably beeping off in a sealed room somewhere. So the "having something better to do" problem could legitimately push that fix back for a few weeks under the safety net of "oh it's not very likely that exactly the wrong drive will fail RIGHT NOW." (Naturally they -should- go fix it of course...)
If you use a larger number of drives, the reliability of RAID-01 goes way down compared to RAID-10, because once one drive has failed, RAID-01 can only tolerate additional failures within the RAID-0 stripe that already contains the failed drive.
Performance-wise I'd trust RAID-10, but only because I expect controllers supporting RAID-10 to be better designed than one made by someone who thought RAID-01 was a good idea.
(Edit: yay math. I should be able to do that more good.)
(Edit again: okay... I think I got it right this time.)