
Results: RAID 5 Performance

Six SSD DC S3500 Drives And Intel's RST: Performance In RAID, Tested

RAID 5 can survive the failure of one drive. RAID 6 (which Intel's integrated controller does not support) keeps an array up even after two failures. Of course, if you build a volume from SSDs, RAID 5 costs you one drive's worth of capacity. With a trio of 480 GB SSD DC S3500s, losing a third of the configuration hurts. Giving up one drive out of six is less painful. And as you add drives to the array, the percentage of capacity lost to parity keeps shrinking.
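The capacity math is easy to sketch. Here's a minimal illustration (the function name is mine; it assumes equal-sized drives):

```python
def raid5_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    """Usable capacity of a RAID 5 array: one drive's worth goes to parity."""
    assert drive_count >= 3, "RAID 5 needs at least three drives"
    return (drive_count - 1) * drive_size_gb

# Three 480 GB SSD DC S3500s: a third of the raw space goes to parity.
print(raid5_usable_gb(3, 480))   # 960 of 1440 GB raw
# Six drives: only a sixth is lost to parity.
print(raid5_usable_gb(6, 480))   # 2400 of 2880 GB raw
```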

Typically, writing the extra parity data also drags performance below single-drive levels, particularly without DRAM-based caching. Intel's Rapid Storage Technology relies on host processing power rather than a discrete RAID controller, but it can speed up writes substantially (sequential writes in particular), depending on how caching is configured. With that said, enabling caching is far more helpful on arrays of mechanical disks. Why? Random writes are literally hit or miss. If data is in the array's cache, it's serviced at DRAM speed. If not, latency shoots up while the I/O is located elsewhere. On a hard drive array, that miss penalty is lost in the noise of mechanical latency; against an SSD's already-low access times, it stands out far more dramatically.
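The parity overhead on writes comes from RAID 5's read-modify-write cycle: a small update means reading the old data and old parity, then writing new data and new parity. A minimal sketch using XOR parity (illustrative only; not Intel's implementation):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    # Classic RAID 5 small-write path: two reads (old data, old parity)
    # and two writes (new data, new parity) per logical write.
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)

# Reconstruction after a single-drive failure: XOR the survivors.
d0, d1 = b"\x0f" * 4, b"\xf0" * 4
parity = xor_blocks(d0, d1)
assert xor_blocks(parity, d1) == d0   # d0 recovered from d1 and parity
```

The four I/Os per logical write are exactly why parity RAID without write caching tends to fall below single-drive performance.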

It's somewhat problematic, then, that read and write caching cannot be fully disengaged with Intel's RST in RAID 5. For a potential speed-up, hard drives have long read data near a requested sector (on the way to the needed data) and tossed it into a buffer, in the hope that, should it be called upon later, it would be ready more quickly. That's plausible for a sequential operation, but a lot less so for random accesses. The same principle applies to RAID arrays. Write caching can be disabled for data-security reasons, but RST always keeps some form of read-ahead enabled, passing data along to a RAM buffer on the host for later use. On a standalone hard drive, the drive's own buffer holds this information; in a RAID setup, the data is passed along to the controller.
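A toy model of the read-ahead behavior described above (class, method names, and window size are my assumptions, not RST internals): each cache miss prefetches a window of adjacent blocks into host DRAM, so a sequential scan mostly hits while random access mostly misses.

```python
class ReadAheadCache:
    """Toy read-ahead model: a miss prefetches the next `window` blocks."""
    def __init__(self, window: int = 4):
        self.window = window
        self.buffer: set[int] = set()   # block numbers held in host DRAM
        self.hits = self.misses = 0

    def read(self, block: int) -> None:
        if block in self.buffer:
            self.hits += 1              # served at DRAM speed
        else:
            self.misses += 1            # full trip to the array
            # Prefetch the adjacent blocks on the way back.
            self.buffer.update(range(block, block + self.window))

cache = ReadAheadCache()
for b in range(16):                     # sequential scan: mostly hits
    cache.read(b)
print(cache.hits, cache.misses)         # 12 4
```

Feed the same cache 16 random, widely scattered blocks instead, and nearly every access misses: the prefetched neighbors are never asked for.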

What does that end up looking like? Over the last two pages, I've tried to drive home that you have up to about 1.6 GB/s of usable throughput with Intel's PCH. Now, consider this chart:

With the previously discussed read-ahead behavior passing unrequested, adjacent data along to the host system's memory, sequential reads enjoy a significant boost, but only for as long as the DRAM cache lasts. As you can see, read speeds from a three-drive RAID 5 array reach a stratospheric 3 GB/s. That data isn't coming from the SSDs, but rather from our DDR3. The caveat is that such a notable boost requires significant drive utilization. It's easy to stack up commands on a hard drive, but much harder to do in the real world on SSDs, since they service requests so much faster.
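A back-of-the-envelope way to see how cache hits inflate such figures is a time-weighted (harmonic) blend of cache and array throughput. The 12 GB/s DRAM number below is an illustrative assumption; only the roughly 1.6 GB/s PCH ceiling comes from the article:

```python
def effective_bandwidth(hit_ratio: float, cache_gbps: float, array_gbps: float) -> float:
    """Time-weighted blend: each GB spends hit_ratio of itself at cache
    speed and the remainder at array speed."""
    time_per_gb = hit_ratio / cache_gbps + (1 - hit_ratio) / array_gbps
    return 1 / time_per_gb

# Illustrative numbers only: a 1.6 GB/s array behind a fast DRAM cache.
print(round(effective_bandwidth(0.0, 12.0, 1.6), 2))   # 1.6  (no hits)
print(round(effective_bandwidth(0.9, 12.0, 1.6), 2))   # 7.27
```

Even a modest hit ratio lifts the blended number well past what the drives alone can deliver, which is how a benchmark can report speeds the SATA links could never sustain.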

This thread is closed for comments
  • 1 Hide
    apache_lives , October 9, 2013 1:19 AM
    page 2 - 8GB cache?
  • 0 Hide
    SteelCity1981 , October 9, 2013 2:19 AM
    "we settled on Windows 7 though. As of right now, I/O performance doesn't look as good in the latest builds of Windows."

    Ha. Good ol Windows 7...
  • 3 Hide
    colinstu , October 9, 2013 4:14 AM
    These gotta be the most difficult-to-read graphs ever.
  • 2 Hide
    vertexx , October 9, 2013 5:08 AM
    In your follow-up, it would really be interesting to see Linux Software RAID vs. On-Board vs. RAID controller.
  • 1 Hide
    tripleX , October 9, 2013 5:19 PM
    Wow, some of those graphs are unintelligible. Did anyone even read this article? Surely more would complain if they did.
  • 0 Hide
    utomo , October 9, 2013 5:26 PM
There is a huge market for SSDs in tablets in the near future, but SSDs must get cheap to capture it.
  • 1 Hide
    klimax , October 10, 2013 2:08 AM
    "You also have more efficient I/O schedulers (and more options for configuring them)." Unproven assertion. (BTW: Comparison should have been against Server edition - different configuration for schedulers and some other parameters are different too)
    As for 8.1, you should have by now full release. (Or you don't have TechNet or other access?)
  • 1 Hide
    rwinches , October 10, 2013 4:08 AM
    " The RAID 5 option facilitates data protection as well, but makes more efficient use of capacity by reserving one drive for parity information."

    RAID 5 has distributed parity across all member drives. Doh!
  • 0 Hide
    Andy Chow , October 14, 2013 3:45 PM
    Love this article. I'd like to see the same test done on the AMD 990FX. It's had six SATA 3 ports for a long time. I suspect it's a lot slower than Intel's and plateaus more quickly, obviously being an older southbridge.

    "The larger block sizes generate less bandwidth" Really? Seems to me the opposite is happening. I'd guess the high IOPS of smaller blocks also uses more, not less cpu resources. But what do I know?
  • 0 Hide
    Taracta , October 16, 2013 9:10 PM
    I can't believe that nobody mentioned the big write holes errors in the sequential write for RAID 5, 4 drive and 6 drive. This is because your RAID 5 array is not properly configured for your 4 drive and 6 drive configuration at least.