RAIDCore Unleashes SATA to Take Out SCSI

Fulcrum: Features Beat Adaptec And LSI

The functions described above are considered standard for enterprise applications. RAIDCore was not content to stop there, because the resources of a RAID controller generously stocked with hard drives can be put to better use than has been possible until now. Below we go into more detail about the possibilities the Fulcrum Architecture offers.

Automatic Performance Tuning

Interestingly enough, RAIDCore gives the user only one choice: whether caching is enabled or not. All other performance-related parameters, such as block size, are set automatically.

Moreover, the Fulcrum Architecture can cache RAID 5 parity data to a limited degree, which gives it quite an edge in performance for random write operations. Normally, every small write to a RAID 5 array requires the old data block to be read, followed by the corresponding parity block; only then can the new data be written, followed by the newly calculated parity. If the parity block is already in the cache, one of those reads is eliminated.
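The read-modify-write cycle can be sketched in a few lines. This is an illustrative simulation, not RAIDCore's implementation: the function name and I/O counting are our own, and the parity update uses the standard RAID 5 XOR relation (new parity = old parity XOR old data XOR new data).

```python
# Minimal sketch of the RAID 5 small-write penalty described above.
# Drive contents live in memory here; names are illustrative only.

def raid5_small_write(old_data: bytes, new_data: bytes,
                      old_parity: bytes, parity_cached: bool):
    """Return the new parity block and the number of disk I/Os required."""
    ios = 1                      # read the old data block
    if not parity_cached:
        ios += 1                 # read the old parity block from disk
    # XOR the old data out of the parity and the new data in.
    new_parity = bytes(p ^ o ^ n
                       for p, o, n in zip(old_parity, old_data, new_data))
    ios += 2                     # write new data, then write new parity
    return new_parity, ios

old = bytes([0b1010])
new = bytes([0b0110])
par = bytes([0b1100])            # parity over the rest of the stripe

_, ios_uncached = raid5_small_write(old, new, par, parity_cached=False)
_, ios_cached = raid5_small_write(old, new, par, parity_cached=True)
print(ios_uncached, ios_cached)  # 4 I/Os without cached parity, 3 with it
```

Four physical I/Os per logical write is why RAID 5 random-write performance is poor to begin with; shaving one read off that cycle is where the cached parity pays off.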

Distributed Spare

A hot spare should be present for the smooth operation of a RAID array. After all, a hard drive can give up the ghost at any time, and that is exactly when a replacement drive must be available as quickly as possible, so that the data protected by parity can be restored before the worst imaginable thing happens: the failure of yet another hard drive.

But there is a question that not every administrator will necessarily think about: what happens if the hot spare itself fails? It usually sits idle, but a hot spare can break down during operation just like any other drive. To cover this scenario, the Fulcrum Architecture allows a so-called distributed spare to be set up. In that case, no drive is designated as a dedicated hot spare and left idle; instead, a small area of every drive is reserved and left empty. In the event of a failure, the array is restriped onto these reserved areas, making the restore more like a RAID level migration to one fewer drive than like a classic rebuild.

If an array configured with a distributed spare loses a drive (in this case a RAID 50), the array is migrated into a new RAID 50 using the storage capacity reserved beforehand (lower branch).

Access times and minimum transfer rates benefit from this as well, because the read/write heads of the hard drives have shorter distances to travel. The slow inner zones of the platters, where a drive's minimum transfer rate occurs, only come into use after a failure.
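How large must the reserved area on each drive be? A back-of-the-envelope sketch, assuming the simplest possible scheme (single-parity RAID 5 over equal-sized drives; RAIDCore's actual layout may differ): the array before the failure must fit into a RAID 5 spanning one fewer drive afterwards.

```python
# Illustrative capacity math for a distributed spare, not RAIDCore's
# actual algorithm. Single-parity RAID 5, n equal drives of size c.

def reserve_per_drive(n_drives: int, capacity: float) -> float:
    """Area each drive must leave empty so the array can be restriped
    onto n_drives - 1 drives after a single failure.

    Before failure: usable = (n - 1) * (capacity - r)   # RAID 5 over n drives
    After failure:  usable = (n - 2) * capacity         # RAID 5 over n - 1 drives
    Setting the two equal and solving for r gives capacity / (n - 1).
    """
    return capacity / (n_drives - 1)

n, cap = 8, 250.0                          # eight hypothetical 250 GB drives
r = reserve_per_drive(n, cap)
usable_before = (n - 1) * (cap - r)
usable_after = (n - 2) * cap
print(f"reserve {r:.1f} GB per drive; "
      f"usable {usable_before:.0f} GB before, {usable_after:.0f} GB after")
```

With eight drives, each gives up only about a seventh of one drive's capacity, and the usable space before and after the migration matches exactly, which is what makes the restore look like a planned migration rather than an emergency rebuild.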