SSDs In RAID: A Performance Scaling Analysis

RAID arrays with dozens of hard drives are still a common way to reach certain performance levels. We demonstrate how well SSD RAID arrays can scale. There may come a time when a few flash-based drives replace entire farms of hard disks.

Barely a week goes by without a new product being introduced to the growing SSD market. Meanwhile, the storage landscape is already packed with MLC and SLC NAND-based solid state drives claiming superlative data throughput rates of more than 250 MB/s (on SATA 3 Gb/s ports) and I/O rates in the five-figure range. In contrast, veteran hard drives seem like relics from a bygone era: cheap, much slower, and eventually doomed.

It is not quite that simple, of course: when the underlying technology does not fit, SSD performance figures that look so impressive on paper can quickly go up in smoke, even falling behind those of notebook hard drives. A flash drive can only reach its full potential with the right combination of hardware resources, controller, cache, and software features.

But that is only a basic requirement, and you have to consider other factors, including the latest Serial ATA drivers and SSD firmware, AHCI support in the BIOS, and the TRIM command offered in Windows 7, Windows Server 2008 R2, and Linux distributions with kernel 2.6.33 or newer (the kernel gained basic discard support in 2.6.28; full ATA TRIM support arrived in 2.6.33). TRIM keeps the SSD informed of deleted blocks so that the available storage space is managed better, thereby preventing performance degradation.

Flash Drives For Corporate Use

Because of their technology, SSDs are interesting not only for PC enthusiasts and performance aficionados, but for corporate use as well. Regardless of the usage scenario, many technical advantages favor SSDs: while only a minority of enterprise users will benefit greatly from the high throughput rates, the lack of moving parts means superior access times as well as lower operating temperatures. Most importantly, especially in servers dealing with huge numbers of individual read and write operations, the I/O performance is far beyond that of traditional hard drives.

The few drawbacks of SSDs are easy to list: the price per gigabyte is still much higher than for traditional hard drives. Also, the lifetime of flash memory is technically limited to a certain number of write cycles. This is not usually a significant disadvantage, especially since this problem is shared with traditional hard drives due to mechanical wear and tear, and the latest high-end flash products have a life expectancy purported to be on par with enterprise hard drives.

How Do Enterprise SSDs Scale In RAID?

Under what conditions is the use of SSDs worth the investment for a company? We address that question in this article, and answer it from two angles. First, we will investigate the scenarios where the use of SSDs is worth the investment over traditional enterprise hard drives. And because the RAID topic inevitably surfaces in this context, we also take a look at SSD RAID scalability.

The fact that an SSD RAID array almost always dominates a comparable hard drive RAID array in terms of performance has been extensively studied by us and others, and is therefore not the main focus of this article. Instead, the question here is whether the 'Online Capacity Expansion' feature of RAID arrays now effectively doubles as 'Online I/O Capability Expansion': does I/O performance increase significantly with every drive added, and does it remain higher than with traditional hard disks?
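The scaling question can be framed with a simple back-of-the-envelope model: ideal RAID 0 throughput grows linearly with drive count until the controller or bus becomes the bottleneck. A minimal sketch in Python, using assumed round numbers rather than measured values:

```python
# Illustrative RAID 0 read-throughput model (assumed numbers, not measurements).
PER_DRIVE_MBPS = 250        # hypothetical sequential read of one SSD
CONTROLLER_CAP_MBPS = 2000  # assumed ceiling of the RAID controller/bus

def raid0_throughput(drives: int) -> float:
    """Ideal striped throughput, capped by the controller/bus ceiling."""
    return min(drives * PER_DRIVE_MBPS, CONTROLLER_CAP_MBPS)

for n in range(1, 11):
    print(n, raid0_throughput(n))  # linear up to 8 drives, then flat at the cap
```

With these assumptions, scaling is linear through eight drives and flat afterwards; the interesting empirical question is where that knee actually sits for real controllers.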

Comments

This thread is closed for comments.
  • campb292
    Ask GR what he thinks.
  • mrbongal007
    Hi, please help me understand how you are getting 1,000 MB/s on a SATA 3 port/lane, which gives a max of 600 MB/s. If the answer is RAID striping across five lanes, then potentially we could get this performance on a SATA 2 port as well, since each lane is only being taxed to approximately 200 MB/s. I'd appreciate your help in understanding this. Thanks.
  • oxxfatelostxxo
    Output is through a PCIe x8 slot, max transfer of 6 Gb/s I think. The SATA 2 max is per channel for each drive, not a combined max.
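The arithmetic behind this exchange can be checked directly. In a striped array each drive sits on its own SATA lane, so the per-lane limit applies per drive, not to the array as a whole. A quick sketch using the commenter's figures:

```python
# Per-lane vs. aggregate bandwidth in RAID 0 (figures from the comment above).
SATA2_LANE_MBPS = 300  # roughly 300 MB/s usable per SATA 3 Gb/s lane
PER_DRIVE_MBPS = 200   # commenter's estimate of load per drive

drives = 5
per_lane_load = min(PER_DRIVE_MBPS, SATA2_LANE_MBPS)  # each drive taxes only its own lane
aggregate = drives * per_lane_load                    # lanes add up across the stripe
print(per_lane_load, aggregate)  # 200 per lane, ~1000 MB/s for the array
```

So five drives at ~200 MB/s each do reach ~1,000 MB/s even on SATA 2 lanes; the combined stream only hits a shared ceiling at the controller's host link (e.g. its PCIe slot).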
  • chefboyeb
    I guess I would be better off adding two more OCZ Vertex SSDs to my existing 3-SSD RAID 0 setup after all... I was concerned about the limitations of the motherboard, but not anymore... Thanks.
  • oxxfatelostxxo
    ... The motherboard will max out. You need a RAID card to see those speeds.
  • saymi
    Maybe it is just me, but three reasons hold me back from moving from HDD to SSD:
    1st: money, i.e. the price per GB.
    2nd: the technology is not mature enough to sustain those rated speeds as stable, real-world performance.
    3rd: RAID support for SSDs is still in wonderland.
    Conclusion: the read/write speeds in the benchmarks are full of BS. Unless you keep the drive for reading only and never erase or delete old data and rewrite new files onto it, and if you are a heavy download user, you will lose the read/write speed advantage of an SSD over a traditional HDD. An SSD is really fast only on a fresh Windows install; it loses performance over time, and you have to do another fresh reinstall again and again.
  • nebun
    Quoting oxxfatelostxxo: "... The motherboard will max out. You need a RAID card to see those speeds."

    Or just use a PCIe SSD like the RevoDrive X2 :) No limit.
  • parpanghel
    Yeah right, SAMSUNG drives are the best :-} Too much sauce last night, my friend?
  • oxxfatelostxxo
    To ssdlkje
    1: Money vs. GB: they aren't really that expensive anymore. After rebate I spent $180 for two 60 GB SSDs and put them in RAID 0 for my OS. Runs perfectly and not too expensive.

    2: My SSDs constantly get files written to them, and I have yet to see any loss in performance. Six months so far on the same Windows install.

    3: RAID support in wonderland? Not sure what you mean. You can put them in a RAID just like an HDD; it works exactly the same.

    @Nebun: Yeah, you can use a RevoDrive, but the reviews show it is very glitchy with lots of issues. Not to mention you would get better performance from a RAID card and SSDs for about the same price.
  • hixbot
    I would really love to see an article evaluating degraded performance in RAID vs. degraded performance on a single drive: TRIM vs. garbage collection, etc., and all our options for keeping performance at its best in RAID.
    How do different consumer SSD models handle degraded performance in RAID operation?

    It would also be nice to see how many RAID 0 SSDs a typical onboard RAID controller can handle before the linear performance model breaks down.
  • Interesting posts and article. I wonder if using a VHD (Windows 7) file for the OS volume would be the answer to future performance degradation. There is a performance hit because it is a VHD, but it should be negligible given the performance gains from the SSDs and RAID 0.
  • re: oxxfatelostxxo
    Good for you, but I suggest you run a benchmark for yourself. I'm not sure; maybe it only happens to me. I spent $1,000 on four SSDs from Newegg six months ago and tried to set them up as RAID 0 in my workstation for faster file caching for purposes such as rendering or fluid simulation. It drove me crazy just trying to set things up right: four SSDs in RAID 0 lose TRIM functionality, and garbage collection is never a good alternative either. So I sold them on eBay. When Intel or anyone else comes up with a comparable RAID card for SSDs with full TRIM support in any RAID mode, I will give it another try. For now, six traditional HDDs in RAID 0 average 250 MB/s read/write: good, not the best, but fewer problems.
  • JohnnyLucky
    I read a similar report about a company over on the mainstream side. They created a massive array using Intel enterprise SSDs. It was amazing what they were able to accomplish.
  • emperornicon
    If I understand things right, the folks dealing with these types of SSDs say they tend to degrade over their life span. I am one of the lucky individuals who lives on a RAID 0 system 365 days a year. I own six Western Digital 250 GB WD2500AAKS-00UU3A0 drives; each has a 16 MB buffer, runs at SATA II 3.0 Gb/s, and cost $47.99. Together they boast a 631.3 MB/s average read and a 231.6 MB/s average write on a nominal 1,500 GB (really 1,396.9 GB) volume. Each drive by itself reads at an average of 130.2 MB/s and writes at an average of 76.4 MB/s, so my RAID 0 array should be reading at 781.2 MB/s and writing at 458.4 MB/s; the shortfall must come from some kind of overhead.
    My personal computer consists of these parts and configurations: Micron 8 GB ECC RAM (2 GB × 4 sticks) at 1600 MHz 7-7-7-27-1T (8 × 200 MHz, 1.5 V), overclocked to 1666 MHz 8-8-8-30-1T (8 × 250 MHz, 1.65 V); an AMD Phenom II X6 1090T (3.2 GHz, 1.3 V) running at 4125 MHz (16.5 × 250, 1.45 V) on a 64-bit OS and at 4375 MHz (17.5 × 250, 1.45 V) on an x86 OS; a Gigabyte GA-MA790XT-U4DP motherboard with the F8G BIOS; a Corsair H50 water cooler with custom heat spreaders for my RAM; and an ATI Radeon HD 5870 1 GB GDDR5 graphics card at stock clocks.
    On the other hand, should I be content that I am able to install Windows XP, 2k3, or 2k8 in 5 minutes and Fedora 13, Ubuntu 10.10, or CentOS 5 in 3 minutes? Of course, my installs are unattended, not from a DVD.

    1. Should I really replace my disks with SSDs?
    2. Is the expense worth the cost, with electrical efficiency rather than performance in mind?
    3. How do SSDs score in reliability in terms of being trusted with mission-critical data such as patient health care records?
    4. Is it possible for SSDs to outlive mechanical hard disks in terms of MTBF?
    5. Is this storage technology still in its infancy?
    6. Will SSD prices ever drop low enough to compete with traditional storage?
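The throughput figures in the comment above make it possible to estimate the array's scaling efficiency, i.e. how much of the ideal six-drive read speed is actually delivered. A quick calculation using the numbers as quoted:

```python
# Scaling efficiency of a 6-drive RAID 0, using the commenter's figures.
drives = 6
single_read = 130.2  # MB/s, one drive alone (from the comment)
array_read = 631.3   # MB/s, measured 6-drive RAID 0 read (from the comment)

ideal = drives * single_read        # perfect linear scaling
efficiency = array_read / ideal     # fraction of ideal actually achieved
print(round(ideal, 1), round(efficiency * 100, 1))  # ~781.2 MB/s ideal, ~80.8% efficiency
```

Roughly 80% efficiency is a plausible result for six drives behind a chipset RAID controller; the "overhead" the commenter suspects is the gap between the ideal linear model and the controller's real limits.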
  • marraco
    I want a faster RAID 0, but I also want to buy new drives, and add them to my old RAID.

    The problem is that new drives have different speeds (and are also larger), so I need "Asymmetric RAID 0", a technology that does not exist yet but would be easy to implement. An asymmetric RAID 0 controller should distribute data in proportion to the speed of each drive: small chunks of data to slow drives, and larger chunks to faster drives. Otherwise, the slower drives would bottleneck the faster ones. (It implies different partition sizes, each proportional to its drive's speed.)

    I also want to know scaling on integrated RAID controllers like the ones included on motherboards.
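The "Asymmetric RAID 0" idea above could be sketched roughly as follows. This is purely hypothetical illustration code for proportional striping, not the behavior of any shipping controller:

```python
# Hypothetical asymmetric striping: split each stripe across drives in
# proportion to their throughput, so all drives finish at about the same time.
def split_stripe(stripe_bytes: int, drive_speeds: list) -> list:
    """Return per-drive byte shares proportional to each drive's speed."""
    total = sum(drive_speeds)
    shares = [int(stripe_bytes * s / total) for s in drive_speeds]
    shares[0] += stripe_bytes - sum(shares)  # give the rounding remainder to drive 0
    return shares

# e.g. a 1 MiB stripe over drives rated 100, 200, and 500 MB/s
print(split_stripe(1_048_576, [100, 200, 500]))  # [131072, 262144, 655360]
```

Because each drive's share matches its speed, no drive idles waiting for a slower peer, which is exactly the bottleneck the commenter describes with equal-size chunks.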
  • emperornicon
    Take a look at my motherboard: a Gigabyte GA-MA790XT-U4DP, even though it has the SB600 south bridge, my 1,500 GB RAID 0 array handles well at a 631.3 MB/s average read and a 231.6 MB/s average write. I found the balance between performance, capacity, and price. I also considered 4 × 1 TB Seagate SATA II drives with 64 MB buffers, which read at 112 MB/s each: 353 MB/s in a 4-drive RAID 0, and theoretically 559 MB/s with 6 drives. To me that seems slower, and when a drive fails it is more expensive and you lose more data.
    The larger the drives, the longer a job takes: duplicating a 500 GB folder to an identical array takes 3h 30m on a RAID 0 of 4 × 1 TB drives versus 1h 50m on 6 × 250 GB drives.
  • netsql
    That would be a good article: which motherboard has the best RAID speed (with the new 500 MB/s SSDs)?
  • MRFS
    Is this reviewer setting up a future comparison of these results with the same number of SATA/6G SSDs?

    The bandwidth of each LSI 9280-24i4e RAID controller port is 6 Gb/s, but "each drive has a capacity of 100 GB, is based on SLC NAND flash, has a 3 Gb/s SATA interface."

    I didn't see this discrepancy mentioned (yet) in any of the comments above; please correct me if I am wrong.

    p.s. If no more than 8 SSDs are needed, then a less expensive 6G RAID controller is a viable option, e.g. the Highpoint RocketRAID 2720. I would enjoy seeing the results of the same tests using the latter RAID controller and scaling 1-8 SandForce SF-2000 series 6G SSDs.

  • larkspur
    Quoting emperornicon: "3. How do SSDs score in reliability in terms of being trusted with mission-critical data such as patient health care records?"

    A TON better than six spinning discs in a RAID 0!!! Hopefully you're not really keeping mission-critical data on a RAID 0 with six mechanical drives... If so, my god man, at least get a real RAID card (with a battery backup) and at least do RAID 5... SSDs have been used for mission-critical apps for a long time. RAID 0 is not viable for mission-critical anything. SandForce's SSD controllers were originally developed for enterprise mission-critical apps. Their wear-leveling and garbage collection techniques are superb.
  • emperornicon
    no mission critical data on my own pc
  • MRFS
    > at least do RAID-5 ... RAID-0 is not viable for mission-critical anything

    I think the authors mentioned that their goal was to measure scaling and the maximum throughput that could be obtained from this combination of hardware, with the expectation that other, more resilient RAID modes would come in lower on those same measures.

    Another comparison I would like to see, done right, is between a "real RAID card" with a dedicated IOP on the one hand, and a "cheap RAID card" that relies on unused cores of a quad-core CPU on the other.

    Remember, the Sandy Bridge CPUs now have 4 cores + hyperthreading! Surely, some of that 8-thread goodness can be harnessed to handle the work that a more expensive RAID controller would normally do.

    How many enthusiasts and builders can keep "CPU Usage" of all 8 threads above 90% in normal production environments? (Cf. Windows Task Manager)

    Your thoughts?

  • emperornicon
    I dunno, I've never noticed much usage with my dump truck of a Ferrari's array, and that's with six cores.