"Firmware RAID", what is it really?

ricno

Distinguished
Apr 5, 2010
582
0
19,010
Many modern motherboards offer some RAID functionality, but it seems to differ a lot from "real" hardware RAID controller cards, where the RAID system is invisible to the operating system.

Yet it is said not to be "software RAID", which used to mean a RAID setup totally controlled by the operating system: everything is done in software by a special driver, using CPU time for the RAID operations.

From what I understand, firmware RAID needs a driver in the OS, so in what way is it different from the classic software RAID that has been in Windows (NT) for ages?
 
Solution
The difference between "firmware" RAID and "software" RAID is that the RAID configuration is stored in NVRAM on the motherboard and the chipset has enough smarts to boot from a RAID volume. That eliminates the major drawback of purely software RAID: the inability to use it for the boot drive.
 

ricno


Thanks a lot sminlal for a good answer as always!

So for an operating system to actually use the firmware RAID, it has to load a RAID driver early and then do all the RAID logic in software during normal operation?
 
Yes, that's correct. But people often have an inflated idea of the amount of work required to service a RAID array. For RAID-0, RAID-1 and RAID 0+1 the extra work is tiny: it's just a trivial calculation of which drive and sectors the I/O has to go to, and in the case of writes to a RAID-1 array, issuing one extra I/O request.
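To make that concrete, here's a rough sketch of the address arithmetic such a driver performs per request. The stripe size, drive count, and function names are illustrative assumptions, not details from any particular implementation:

```python
# Sketch of per-I/O RAID-0/RAID-1 address math. STRIPE_SECTORS and
# NUM_DRIVES are assumed example values.

STRIPE_SECTORS = 128   # 64 KiB stripes of 512-byte sectors
NUM_DRIVES = 2

def raid0_map(lba):
    """Map a logical sector number to (drive index, physical sector)."""
    stripe = lba // STRIPE_SECTORS           # which stripe the LBA falls in
    offset = lba % STRIPE_SECTORS            # offset within that stripe
    drive = stripe % NUM_DRIVES              # stripes rotate across drives
    physical = (stripe // NUM_DRIVES) * STRIPE_SECTORS + offset
    return drive, physical

def raid1_write_targets(lba):
    """A RAID-1 write is just the same I/O issued to every mirror."""
    return [(d, lba) for d in range(NUM_DRIVES)]
```

A handful of integer divisions and one extra queued request per mirrored write: that's the whole per-I/O cost for these RAID levels.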

Writing to RAID-5 is more work, but it's essentially some block XOR operations for each sector, and that's a very fast operation for modern CPUs. Similar work is needed when you read from a degraded array, but you wouldn't normally have to do that.
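The parity math above can be sketched in a few lines. The block contents are made-up example data; the point is just that parity and rebuild are the same cheap XOR:

```python
# Sketch of RAID-5 parity: the parity block is the byte-wise XOR of the
# data blocks in a stripe, and a lost block is rebuilt the same way.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized blocks byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # example data blocks
parity = xor_blocks(stripe)                         # written to the parity drive

# Degraded read: recover a missing block from the survivors plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```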

In my mind the bigger issue with motherboard RAID is the dependency on the host CPU to keep doing the right thing in the face of potential problems that crop up in the OS. With hardware RAID, the dedicated processor can't be affected by OS crashes or viruses, and that means a (small) reduction in risk, IMHO. That's not really relevant for RAID-0, but for redundant RAID organizations it could make a difference.
 

ricno


That is also why I do not really like software RAID, firmware-based or not: the dependency on the operating system to do the logic, even if it is not hard work in terms of CPU cycles. When working with servers it just "feels good" to have a dedicated SCSI/RAID controller, say an HP Smart Array, which does all the RAID work and logic invisibly to the OS.

But just one thing about firmware-based RAID: how does the operating system get the RAID driver during installation? Does it have to be added early (like the "press F6 to add storage adapters" step), or do modern operating systems like Win7 have them included?
 

ricno


But the CPU does not "understand" anything; it just sits passively waiting for a series of instructions to execute. Those must come from something, and from the moment the computer has started, it is the OS that is in charge of everything that runs on the processor.
 
The processing done by "firmware RAID" that uses RAID-capable motherboard chipsets isn't done in the OS per se, it's done at the driver level. You can think of the work that a disk I/O request goes through in much the same way that a network request is handled - there's a "stack" of software that handles different protocol levels all the way from the initial request by the application down through lower and lower-level layers until it hits the actual hardware. In the case of disk I/O, it goes something like this:

Application -> OS -> File System (NTFS) -> Storage Class driver (generic disk) -> Port Driver (SCSI/SATA/IDE/etc) -> Device

This is a little oversimplified; you can find a better description here: http://www.petri.co.il/windows-storage-disk-architecture.htm
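As a toy illustration of that layering (the layer functions and two-disk mirror here are assumptions for the sketch, not a real driver interface), the RAID fan-out would live in the port-driver layer:

```python
# Toy model of the I/O stack above. Each layer passes the request down;
# the port driver is where firmware-RAID logic would sit.

SECTOR_SIZE = 512

def filesystem(request):
    # File system: translate a file offset into a volume LBA.
    request["lba"] = request.pop("file_offset") // SECTOR_SIZE
    return class_driver(request)

def class_driver(request):
    # Storage class driver: generic disk handling, passes the request down.
    return port_driver(request)

def port_driver(request):
    # Port driver: a write to a two-way mirror fans out into one I/O
    # per member disk.
    return [(f"disk{d}", request["lba"]) for d in (0, 1)]

# A write at file offset 4096 lands on volume LBA 8, sent to both mirrors.
targets = filesystem({"file_offset": 4096})
```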

"Firmware RAID" would be done at the "Port Driver" level. However, it's still vulnerable to OS bugs - the OS is responsible for keeping the system running even if application or (hopefully) driver programs blow up, and if it fails at that task then the CPU hangs or resets and you lose whatever RAID operation was going on at the time. That's not generally a problem unless you happen to be writing to a redundant RAID set at the time and the redundant write only gets halfway done. In that case you're left with a RAID set in an inconsistent state, and that can spell trouble.
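That half-finished redundant write can be modeled in a few lines (all names and data here are illustrative): if the system dies between the two writes of a mirrored update, the copies disagree.

```python
# Toy model of an interrupted RAID-1 write leaving the mirrors inconsistent.

mirrors = [bytearray(b"old!"), bytearray(b"old!")]

def raid1_write(data, crash_after_first=False):
    mirrors[0][:] = data
    if crash_after_first:
        raise RuntimeError("simulated OS crash mid-write")
    mirrors[1][:] = data

try:
    raid1_write(b"new!", crash_after_first=True)
except RuntimeError:
    pass

# The mirrors now disagree; until the array is resynchronized, a read may
# return either value depending on which drive services it.
inconsistent = mirrors[0] != mirrors[1]
```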

Hardware RAID is also vulnerable to interruptions, but because it's completely independent of the CPU it can keep on working properly even if the OS crashes. The most serious interruption for hardware RAID is loss of power. Of course there could be bugs in the firmware that runs a hardware RAID card too, but RAID is not all that terribly complicated and the chances of a bug in the RAID card are almost certainly smaller than in a full-blown Windows system loaded up with lots of device drivers, application software, and potentially viruses.
 