
Test Setup And Components

Almost 20 TB (Or $50,000) Of SSD DC S3700 Drives, Benchmarked

HBAs and Hardware RAID

If you've looked at any motherboard based on an Intel or AMD chipset lately, you probably noticed that it doesn't have anywhere close to 24 SATA ports. Clearly, we need some help in that department to facilitate communication with our SSDs.

Intel's HBA/Integrated RAID Cards

Intel markets its RMS25KB080 and RMS25JB080 (shown above) as entry-level RAID cards. It's true that they're hardware-based RAID controllers, but really, these cards are just HBAs in disguise. The KB and JB are identical, feature-wise; the JB simply slots into the proprietary mezzanine connector we mentioned on the previous page.

Our controllers center on LSI's PowerPC-based SAS2308 silicon, so it helps to think of them as mostly rebadged LSI 9207-8i HBAs. Whereas the 9207-8i ships without RAID functionality by default (Initiator-Target, or IT, mode), the RMS25KB080s do ship with firmware that enables it (known as Integrated RAID, or IR, mode). We're not really interested in using them for their hardware RAID capabilities, though, but rather for their ability to pass drives through directly to the host. Our server can then handle all of the RAID calculations and overhead in software.
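
For readers curious how to tell which personality an LSI-based card is running, LSI's sas2flash utility can report it from the operating system. A quick sketch (the controller index is illustrative, and whether Intel's branded modules answer to the stock utility is another question):

```shell
# List every SAS2-generation controller the utility can see,
# along with firmware versions.
sas2flash -listall

# Show details for the first controller; the firmware product ID
# reported here indicates whether it is running IT or IR firmware.
sas2flash -c 0 -list
```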

We do have one Intel RMS25CB080 adapter on-hand to try a little hardware RAID action, if the need arises. But with just one card, it's hard to harness the performance of 24 drives in an appropriately speedy fashion. Based on LSI's Gen3 PCIe SAS2208 RAID offerings with 1 GB of DDR3 cache and a beefier PowerPC processor, the CB handles the computationally-intense parity RAID levels (5/6) that the lighter KB cards cannot. RAID 0 and 1 calculations aren't very taxing, but the parity calculations involved in RAID 5 and 6 necessitate more serious muscle.

It's worth pointing out that these three Intel storage products only work in the company's Xeon E5-compatible motherboards. You have to be using an LGA 2011-equipped platform, and it has to be Intel-branded, or the cards don't even power up. The mezzanine add-ins employ a proprietary form factor anyway, so that's less of an issue for them. The RMS25KB080 can be found for a third of the price of LSI's 9207-8i, but as far as we can tell, there's no way to cross-flash it for broader compatibility, and flashing the firmware from IR to IT mode doesn't appear to be supported either. Intel does sell products intended for more general compatibility; the models we have here, however, are essentially upgrades for this platform specifically.

Software RAID: Not Evil After All

Armed with three HBAs, we'll be using the server's operating system to create RAID volumes. Windows has long supported striping, mirroring, and even RAID 5, but its software RAID performance is generally poor, and it offers almost no control over settings. Windows 8 introduces some interesting new concepts through Storage Spaces, but they aren't useful for this exercise.

Linux is a different beast. Modern Linux distributions include a number of RAID options. Somewhat analogous to Windows' Disk Management RAID modes, logical volume management (LVM) can provide RAID at the block level. But Linux's true ace is mdadm, which facilitates the creation of RAID 0/1/5/6 volumes (plus compound modes like RAID 50/60). We can define the stripe (chunk) size and even allocate system memory for cache, the same way a hardware-based RAID adapter would. This is a far more alluring prospect, essentially turning our Xeon server into one big RAID controller.
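
As a sketch of what that looks like in practice (the device names, chunk size, and cache value here are illustrative placeholders, not the arrays built for this article), an mdadm RAID 6 volume might be created and tuned like this:

```shell
# Create a RAID 6 array from four drives with a 64 KiB chunk (stripe unit).
# /dev/md0 and /dev/sd[b-e] are placeholder names; substitute your own devices.
mdadm --create /dev/md0 --level=6 --raid-devices=4 --chunk=64 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Enlarge the stripe cache (counted in pages, per device) so parity work is
# buffered in system RAM, much like a RAID card's onboard DRAM cache.
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Watch the initial build progress and verify the array state.
cat /proc/mdstat
```

These commands need root privileges and real block devices, so treat them as a template rather than something to paste verbatim.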

RAID 5 and 6 require some genuinely sophisticated math to build and rebuild arrays. Hardware RAID cards lean on processors, like their PowerPC cores, designed with that workload in mind. Fortunately, x86 CPUs can accelerate the same calculations through SIMD instruction extensions when the software is written to exploit them. Linux's md code has been worked over to take advantage of these extensions wherever possible, and the open source community can continue to improve it when necessary.
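
To see the idea at the heart of that math, here's a toy sketch of RAID 5's XOR parity (the byte values are made up for illustration): any single lost data block can be rebuilt by XORing the survivors with the parity block.

```shell
# Three hypothetical data bytes from one stripe, plus their XOR parity.
d1=$((0xA5)); d2=$((0x3C)); d3=$((0x0F))
parity=$(( d1 ^ d2 ^ d3 ))

# Simulate losing d2, then rebuild it from the surviving data plus parity.
rebuilt=$(( d1 ^ parity ^ d3 ))
echo "$rebuilt"   # prints 60, i.e. 0x3C, matching the lost byte
```

RAID 6 layers a second, Galois-field-based syndrome on top of this so it can survive two failures, and that heavier arithmetic is exactly where SIMD acceleration earns its keep.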


Test Configuration

Server: Intel R2224IP4LHPCBPPP
Mainboard: Intel S2600IP4 "Iron Pass", Dual Socket R/LGA 2011
Processors: 2 x Intel Xeon E5-2665 (Sandy Bridge-EP): 2.4 GHz Base Clock Rate, 3.1 GHz Max. Turbo Boost, 32 nm, 8C/16T, 115 W TDP, LGA 2011, 20 MB Shared L3 Cache
Memory: 8 x Kingston KVR13LR9D4/8HC, 1.35 V, 1,333 MT/s ECC LRDIMM
Chassis: Intel Server System R2200GZ Family, 24-Drive Bay Backplane, 2U Rack Chassis
PSU: 2 x Intel Redundant 750 W, 80 PLUS Platinum, FS750HS1-00
Expander: Intel RES2CV360 36-Port SAS2 Expander
Storage Controllers: 2 x Intel RMS25KB080 Integrated RAID Modules; 1 x Intel RMS25JB080 Integrated RAID Module, Mezzanine; 1 x Intel RMS25CB080 RAID Controller, Mezzanine; Intel C600 AHCI SATA 6Gb/s
Boot Drive: Kingston 200 GB E100, SATA 6Gb/s, FW: 5.15
Test Drives: 24 x 800 GB Intel SSD DC S3700, SATA 6Gb/s, FW: 5DVA0138
Operating Systems: CentOS 6.4 x86_64; Windows Server 2012
Management: Intel RMM4 BMC Remote Management System
Comments
  • ASHISH65, April 14, 2013 9:49 PM
    very good review and also helpful!
  • mayankleoboy1, April 14, 2013 9:52 PM
    IIRC, Intel has enabled TRIM for RAID 0 setups. Doesn't that work here too?
  • Novulux, April 14, 2013 10:13 PM
    You have graphs labeled as MB/s when it should be IOPS?
  • DarkSable, April 14, 2013 10:34 PM
    Idbuaha.

    I want.
  • techcurious, April 14, 2013 11:10 PM
    I like the 3D graphs..
  • cangelini, April 14, 2013 11:26 PM
    Novulux: You have graphs labeled as MB/s when it should be IOPS?

    Fixing now!
  • sodaant, April 14, 2013 11:29 PM
    Those graphs should be labeled IOPS, there's no way you are getting a terabyte per second of throughput.
  • cryan, April 15, 2013 12:11 AM
    mayankleoboy1: IIRC, Intel has enabled TRIM for RAID 0 setups. Doesn't that work here too?


    Intel has implemented TRIM in RAID, but you need to be using TRIM-enabled SSDs attached to their 7 series motherboards. Then, you have to be using Intel's latest 11.x RST drivers. If you're feeling frisky, you can update most recent motherboards with UEFI ROMs injected with the proper OROMs for some black market TRIM. Works like a charm.

    In this case, we used host bus adapters, not Intel onboard PHYs, so Intel's TRIM in RAID doesn't really apply here.


    Regards,
    Christopher Ryan
  • cryan, April 15, 2013 12:16 AM
    DarkSable: Idbuaha. I want.


    And I want it back! Intel needed the drives back, so off they went. I can't say I blame them since 24 800GB S3700s is basically the entire GDP of Canada.

    techcurious: I like the 3D graphs.


    Thanks! I think they complement the line charts and bar charts well. That, and they look pretty bitchin'.


    Regards,
    Christopher Ryan

  • utroz, April 15, 2013 12:33 AM
    That sucks about your backplanes holding you back. And yes, trying to do it with regular breakout cables and power cables would have been a total nightmare, possible only if you made special holding racks for the drives and had multiple power supply units to get enough SATA power connectors (unless you used the dreaded Y-connectors, which are known to be iffy and are not commercial grade). I still would have been interested in seeing that if someone were crazy enough to do it just for testing purposes, to see how much the backplanes are holding performance back... But thanks for all the hard work; this type of benching is by no means easy. I remember doing my first RAID with an Iwill 2-port ATA-66 controller and 4 x 30 GB 7,200 RPM drives, and it hit the limits of PCI at 133 MB/s. I tried RAID 0, 1, and 0+1. You had to have all the same exact drives or it would be slower than single drives. The thing took forever to build the arrays, and if you shut the computer off wrong, it would cause huge issues in RAID 0... Fun times...
  • hansrotec, April 15, 2013 12:35 AM
    With the Crucial M500 960 GB ($599.99 USD) out, you could drop the cost by a pretty penny, putting it in range of more buyers.
  • PadaV4, April 15, 2013 3:25 AM
    The 3d graphs look sexy :D 
  • Aegean BM, April 15, 2013 3:58 AM
    Nice to see "sky is the limit" once in a while, because we're curious and because yesteryear's sky is today's budget rack. (Although by my humble prediction, I won't be able to afford this setup for 10 years.)

    That said, I would dearly like to see the follow up "Fastest Windows Storage for $1000". (I assume it would be RAID 0 of two 500GB SSD.) I picked a grand because it's a common anchor point, affordable today, and anything less is probably just "Get yourself the biggest SSD you can afford on our monthly SSD comparison chart."
  • Aegean BM, April 15, 2013 4:23 AM
    SSD RAID 0 is sexy. With HDD being so massive and cheap, I wonder how close HDD can come to SSD in RAID 0. (As if you don't already have an overwhelming stack of requests and ideas of your own for new articles.)
  • ojas, April 15, 2013 5:59 AM
    Where's Andrew Ku? Isn't this usually his stuff?
  • ojas, April 15, 2013 6:14 AM
    Aegean BM: SSD RAID 0 is sexy. With HDD being so massive and cheap, I wonder how close HDD can come to SSD in RAID 0. (As if you don't already have an overwhelming stack of requests and ideas of your own for new articles.)

    They did compare 8 (WD?) HDDs to some Samsung SSDs (830 series, I think).
    Let me see...
    No, 470 series vs Fujitsu HDDs:
    http://www.tomshardware.com/reviews/ssd-raid-array-hard-drive,2775.html
  • cryan, April 15, 2013 6:36 AM
    BigMack70: lol 32 threads of QD 32. That setup is ridiculous... this article was a fun read


    That's equivalent to a total outstanding I/O count of 1,024. The only reason we didn't go up to 128 threads at QD 128 is because (1) it really muddies up the charts, and (2) performance mostly maxes out at TC32/QD32.

    Aegean BM: SSD RAID 0 is sexy. With HDD being so massive and cheap, I wonder how close HDD can come to SSD in RAID 0. (As if you don't already have an overwhelming stack of requests and ideas of your own for new articles.)


    The truth is, even with the fastest 15,000 RPM SAS hard drives, you still can't overcome the fundamental issues. When you RAID some HDDs together, you do get much better performance and responsiveness. It's just not anything like the jolt a single SSD can provide.

    Regards,
    Christopher Ryan

  • yialanliu, April 15, 2013 6:40 AM
    Very cool to see the performance, but I would love to see a test of RAID 5/6 as a much more practical use of multiple SSDs.
  • veroxious, April 15, 2013 6:48 AM
    What I would like to know is what the performance difference would be if you stuck those 24 Intel SSDs in a SAN scenario, i.e. swapping out 24 300 GB 15K SAS drives in an entry-level Dell MD3220 chassis with a dual-socket, sixteen-core Intel-powered host and 128 GB of RAM...
  • veroxious, April 15, 2013 6:50 AM
    Sorry, forgot to add... in a RAID 10/50 config.
