Results: RAID 5 Performance
RAID 5 can sustain the failure of one drive; RAID 6 (which Intel's integrated controller does not support) keeps an array up even after two failures. Of course, RAID 5 costs you one drive's worth of capacity. If you build a volume from a trio of 480 GB SSD DC S3500s, losing a third of the configuration hurts; giving up one drive out of six is less painful. As you add drives to the array, the percentage of capacity lost to parity keeps shrinking.
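A minimal sketch of that parity arithmetic (the function name is my own; the drive counts and capacities mirror the configurations above):

```python
def raid5_usable_gb(n_drives: int, drive_gb: int) -> int:
    """RAID 5 spreads one drive's worth of parity across the array,
    so usable capacity is (n - 1)/n of the raw total."""
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (n_drives - 1) * drive_gb

# Three 480 GB SSDs: a third of the raw 1440 GB goes to parity.
print(raid5_usable_gb(3, 480))  # 960
# Six drives: only a sixth of the raw 2880 GB is lost.
print(raid5_usable_gb(6, 480))  # 2400
```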
Typically, writing the extra parity data also means that performance drops below single-drive levels, particularly without DRAM-based caching. Intel's Rapid Storage Technology relies on host processing power rather than a discrete RAID controller, but it can speed up writes substantially (particularly sequential ones), depending on how caching is configured. With that said, enabling caching is far more helpful on arrays of mechanical disks. Why? Random writes are literally hit or miss. If data is in the array's cache, it's serviced at DRAM speeds; if not, latency shoots up as the I/O is located elsewhere. That miss penalty is lost in the noise of a hard drive's seek time, but it's far more pronounced on fast SSDs.
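To put rough numbers on that hit-or-miss behavior, here's a back-of-the-envelope model. The latency figures are illustrative assumptions, not measurements: a miss adds a fixed lookup-and-redirect overhead on top of the device's native latency, and that overhead is noise next to a hard drive's seek but substantial relative to an SSD's access time.

```python
def miss_overhead_pct(overhead_us: float, media_us: float) -> float:
    """Fixed cache-miss overhead expressed as a percentage of the
    device's native access latency."""
    return 100.0 * overhead_us / media_us

# Assumed figures: 50 us of miss handling, ~10 ms HDD seek, ~100 us SSD read.
print(miss_overhead_pct(50, 10_000))  # 0.5  -> lost in the seek on a HDD
print(miss_overhead_pct(50, 100))     # 50.0 -> a dramatic slowdown on an SSD
```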
It's somewhat of an issue, then, that read and write caching cannot be fully disengaged with Intel's RST in RAID 5. For extra speed, hard drives have long read data near a requested sector (on the way to the needed data) and tossed that information into a buffer, in the hope that, should it be called upon later, it'd be served up more quickly. That's plausible in a sequential operation, but a lot less so when it comes to random accesses. The same principle applies to RAID arrays. Write caching can be disabled for data security reasons, but RST always keeps some form of read-ahead enabled, passing data along to a RAM buffer on the host for later use. A standalone hard drive holds this information in its own buffer; a RAID setup passes the data along to the controller instead.
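The read-ahead idea can be sketched in a few lines. This toy buffer (the class name and four-block window are my own invention, not how RST is actually implemented) fetches adjacent blocks alongside each miss, which pays off on sequential scans and rarely on random ones:

```python
class ReadAheadBuffer:
    """Toy read-ahead: every miss fetches the requested block plus the
    next few adjacent ones into a RAM buffer. Purely illustrative."""

    def __init__(self, backing: dict, window: int = 4):
        self.backing = backing          # block number -> data
        self.window = window            # blocks fetched per miss
        self.buffer = {}                # the host-side RAM buffer
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.buffer:
            self.hits += 1
            return self.buffer.pop(block)
        self.misses += 1
        # Grab the requested block and its neighbors on the way.
        for b in range(block, block + self.window):
            if b in self.backing:
                self.buffer[b] = self.backing[b]
        return self.buffer.pop(block)

disk = {i: f"block-{i}" for i in range(16)}
cache = ReadAheadBuffer(disk)
for i in range(8):          # a sequential scan mostly hits the buffer
    cache.read(i)
print(cache.hits, cache.misses)  # 6 2
```

A random workload over the same blocks would mostly miss the buffer, which is exactly why read-ahead shines on sequential transfers and does little for random ones.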
What does that end up looking like? Over the last two pages, I've tried to drive home that you have up to about 1.6 GB/s of usable throughput with Intel's PCH. Now, consider this chart:
With the previously discussed read-ahead behavior passing un-requested, adjacent data along to the host system's memory, sequential reads enjoy a significant boost, but only while the DRAM cache lasts. As you can see, read speeds from a three-drive RAID 5 array reach a stratospheric 3 GB/s. That data isn't coming from the SSDs, but rather from our DDR3. The caveat is that such a notable boost requires keeping the drives heavily utilized with queued commands. That's easy to do on a hard drive, but a lot more challenging in the real world on an SSD, since SSDs service requests so much faster.
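One way to convince yourself the cache is doing the work: the observed read speed exceeds what the PCH link could possibly deliver from the drives, no matter how many SSDs are attached (figures from the preceding pages and the chart above):

```python
pch_ceiling_gbs = 1.6   # usable PCH throughput, per the earlier pages
observed_gbs = 3.0      # sequential reads from the three-drive RAID 5

# Anything above the ceiling cannot have traveled over the PCH link,
# so it must have been served from host DRAM.
print(observed_gbs > pch_ceiling_gbs)  # True
```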