Is software RAID really much better than it ever was?

Brawndo

I've been looking all over for info about this topic, and while I've seen discussions relating to it, nothing that really answered the question. Some people seem to be claiming that nowadays, with our more powerful CPUs and wider, faster buses, software RAID has become a more viable option. I even see discussions about doing software RAID 0, something that was previously considered by most to be completely pointless because any performance benefit gained from striping would be strongly negated by the performance impact on the rest of the system, particularly the CPU and bus. The way it was explained to me years ago is that anything written to the disk would actually have to get passed through the CPU and bus (twice, really, since the CPU first had to process the data), and under high load this could severely affect the system's performance rather than boosting it as intended.

Software RAID 1 I can definitely see a benefit in using, as long as disk I/O is not a major concern for the RAID volume, like in the case of a storage drive, or one where reads are common but writes are not (like a webserver drive containing lots of JPGs and HTML files). In that sort of scenario the cost of a true IOP-based RAID card might not really be justified when adequate performance is already built into the motherboard.

I'd really be interested to hear some comments about these performance issues, particularly with respect to whether things have changed as CPUs and motherboards have improved.
 

groberts101

No, it's not. The only useful purpose is if you need extended storage volume capacity, and it wouldn't be bootable. Here's two Vertex 30GB SSDs in a dynamic RAID 0. As you can see, it's not worth the time compared to single-drive speeds.



It should be noted, though, that Intel's caching is much more aggressive in RAID mode, although I couldn't see any difference in actual usage.
 
Software RAID 0 has never really been a performance issue. All RAID 0 does is alternate groups of blocks from one drive to the next. There's no extra I/O and no parity to compute. The only extra thing the CPU needs to do is decide which drive to direct the I/O to, which is just a handful of instructions on machines that can execute millions of them every second.
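
To put some rough numbers on "a handful of instructions", here's approximately what the striping math looks like. This is just an illustrative Python sketch with made-up stripe-unit and disk-count values, not code from any actual driver:

```python
# Rough sketch of RAID 0 address math; stripe size and disk count are example values.
STRIPE_BLOCKS = 128   # blocks per stripe unit (e.g. 64 KiB with 512-byte blocks)
NUM_DISKS = 2

def map_raid0(logical_block):
    """Map a logical block number to (disk index, physical block)."""
    stripe_unit = logical_block // STRIPE_BLOCKS      # which stripe unit
    offset      = logical_block %  STRIPE_BLOCKS      # offset inside it
    disk        = stripe_unit % NUM_DISKS             # round-robin across disks
    physical    = (stripe_unit // NUM_DISKS) * STRIPE_BLOCKS + offset
    return disk, physical

# e.g. map_raid0(300) -> (0, 172): a couple of divisions and a modulo per request, nothing more.
```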

RAID 1 and RAID 0+1 are similar. All the CPU needs to do is initiate two I/O operations instead of one when writing; after that it's free to run instructions from a different program while the I/O completes. Reading is virtually the same as reading from a single disk.
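
The mirroring side is even simpler. Here's a toy sketch of the idea, using a hypothetical in-memory Disk class purely for illustration (no real driver interface is being modeled):

```python
# Toy sketch of what RAID 1 asks of the CPU; Disk is a made-up in-memory stand-in.
class Disk:
    def __init__(self):
        self.blocks = {}
    def write_block(self, block_no, data):
        self.blocks[block_no] = data
    def read_block(self, block_no):
        return self.blocks.get(block_no)

class Raid1:
    def __init__(self, disk_a, disk_b):
        self.disks = (disk_a, disk_b)
        self.next_read = 0                  # trivial round-robin for reads

    def write_block(self, block_no, data):
        # A mirrored write is just the same request issued twice; the data
        # itself is never transformed, so there's no extra CPU work per byte.
        for disk in self.disks:
            disk.write_block(block_no, data)

    def read_block(self, block_no):
        # A read is satisfied by one member, same as a single disk.
        disk = self.disks[self.next_read]
        self.next_read ^= 1
        return disk.read_block(block_no)
```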

For these RAID organizations there's a bit of extra overhead in passing requests through a RAID-aware driver layer, but in terms of CPU load it's really no different from, say, using a USB- or Firewire-connected drive.

It's only RAID-5 where CPU horsepower becomes more of an issue because of the need to compute parity - and that's only for writes. Modern CPUs handle this easily - the figures I've seen suggest about 3% overhead, trivial for today's systems that are rarely CPU-bound.
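
For anyone curious where that overhead comes from, RAID-5 parity is just XOR across the data chunks in a stripe. The sketch below is illustrative Python only, not any implementation's actual code:

```python
# Illustrative only: the parity arithmetic behind RAID 5.
def full_stripe_parity(data_chunks):
    """Parity for a full-stripe write is just the XOR of the data chunks."""
    parity = bytearray(len(data_chunks[0]))
    for chunk in data_chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def small_write_parity(old_data, new_data, old_parity):
    """Partial-stripe write: new_parity = old_parity XOR old_data XOR new_data.
    The extra CPU work is a few XOR passes over one chunk's worth of bytes."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# e.g. full_stripe_parity([b"\x01\x02", b"\x0c\x30"]) -> b"\x0d\x32"
```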