Hello All,
Don't know if this has been done already in a benchmark test. I am looking at speed and data recovery. Which is faster: RAID 0+1 or RAID 5?
If I set up RAID 0+1 and one of the striped drives fails, I can just copy the data from its surviving mirror to a replacement drive and get back to a redundant state.
If I am using RAID 5 and one of the four drives crashes, would it take longer to replace the drive and rebuild the data that was on it? The goal would be to get back to a four-drive RAID 5 configuration as if the failure never happened.
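To make my question concrete, here is my mental model of the two rebuild paths as a rough Python sketch (drive contents modeled as plain byte strings, not real RAID code): a mirror rebuild is a straight copy from the surviving drive, while a RAID 5 rebuild has to read every surviving drive and XOR them together to reconstruct the lost one, which is why I suspect it takes longer.

# Toy model of the two rebuild paths; each "drive" is a bytes object.

def rebuild_mirror(surviving_mirror):
    # RAID 0+1: the lost drive is recovered by copying its mirror.
    return bytes(surviving_mirror)

def rebuild_parity(surviving_drives):
    # RAID 5: the lost drive is the byte-wise XOR of all surviving
    # drives (data plus parity), so every remaining drive must be read.
    rebuilt = bytearray(len(surviving_drives[0]))
    for drive in surviving_drives:
        for i, byte in enumerate(drive):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

# Tiny demo: three data "drives" and their parity block.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0a\x0b"
parity = rebuild_parity([d0, d1, d2])            # parity = d0 ^ d1 ^ d2
assert rebuild_parity([d1, d2, parity]) == d0    # lose d0: XOR the rest

If that picture of how the rebuilds work is wrong, please correct me.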
I am looking to build a new system, and I am aware that servers usually use a RAID 5 configuration. If RAID 0+1 is faster for reads/writes and recovery, then I am there. If not, I am going with RAID 5.
Those 10K RPM drives would be nice, but I may just go with four 200 GB SATA drives.
If I were running the test, I would do some very intensive read/write operations on the drives. I would also time the recovery of a large amount of data after pulling one drive and replacing it with an empty one to simulate a failure.
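For the read/write part, something like this rough Python sketch is what I have in mind (the mount point and sizes are made up, and the read number would be inflated by the OS cache unless the file is much bigger than RAM):

import os, time

PATH = "/mnt/raidtest/testfile.bin"   # hypothetical mount point of the array
CHUNK = 1024 * 1024                   # 1 MiB per write
CHUNKS = 4096                         # 4 GiB total

# Sequential write test.
start = time.time()
with open(PATH, "wb") as f:
    block = os.urandom(CHUNK)
    for _ in range(CHUNKS):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())              # make sure it actually hit the array
write_secs = time.time() - start

# Sequential read test.
start = time.time()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_secs = time.time() - start

total_mib = CHUNK * CHUNKS / (1024 * 1024)
print(f"write: {total_mib / write_secs:.1f} MiB/s")
print(f"read:  {total_mib / read_secs:.1f} MiB/s")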
Has anyone tried this, or seen whether Tomshardware did such a test? I didn't see anything in the archives. Thanks.