
Almost 20 TB (Or $50,000) Of SSD DC S3700 Drives, Benchmarked

We've already reviewed Intel's SSD DC S3700 and determined it to be a fast, consistent performer. But what happens when we take two dozen of them (about $50,000 worth) and create a massive RAID 0 array? Come along as we play around in storage heaven.

For as long as SSDs have been around, power users and enterprise professionals have been configuring them in RAID arrays. Connect a few low-capacity solid-state drives, and you get one spacious and lightning-fast volume. There are a number of great reasons to build such a potent arrangement, and some compelling reasons not to. But perhaps conventional wisdom is up for review now.

You could argue that there are actually fewer reasons to team up a set of solid-state drives nowadays. Price per gigabyte continues to fall as capacity creeps higher. And folks looking for the ultimate in performance have a number of PCI Express-based options available to them. But we don't share that opinion, particularly after Intel sent us 24 of its high-end SSD DC S3700 drives to toy around with (check out our review: Intel SSD DC S3700 Review: Benchmarking Consistency).

The SSD DC S3700 family boasts impressive specs. At its peak, the largest model is capable of sequential reads of up to 500 MB/s and writes as high as 460 MB/s. Random 4 KB reads clock in at up to 76,000 IOPS, while writes plateau at 36,000. Of course, the real reasons to want one of these drives are their bolstered endurance, end-to-end data protection, resilience against power loss, and a price tag just north of $2/GB.

As we know, the SSD DC S3700 ships in capacities as low as 100 GB. Two dozen of those smaller drives could do some real damage in the right hands. After all, you'd be looking at 2.4 TB in RAID 0. But we got the 800 GB version for our little exhibition. At about $2,000 each, that's roughly 50 grand worth of flash-based storage.

That comes out to a mind-boggling 24,576 GiB of raw flash, by the way. Each flagship 800 GB SSD DC S3700 features a full terabyte of NAND on-board. Even after you factor in over-provisioning, we still end up with 745 GiB of usable space on each drive, giving us an astounding 18 TiB, all told. Considering these things are designed to withstand up to 10 full drive writes per day for five years, the possibilities seem endless.
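The figures above are easy to sanity-check. A quick sketch of the arithmetic (the 745 GiB usable figure is just the advertised decimal 800 GB expressed in binary GiB):

```python
# Capacity math for the 24-drive array described above.
drives = 24

# Advertised 800 GB (decimal bytes) expressed in binary GiB.
usable_gib = 800 * 10**9 / 2**30
print(int(usable_gib))            # 745 GiB usable per drive

# Each drive carries a full terabyte of raw NAND on-board.
raw_gib_per_drive = 1024
print(drives * raw_gib_per_drive)  # 24576 GiB of raw flash in total

# Usable space across the whole striped set.
total_usable_gib = drives * 745
print(total_usable_gib)            # 17880 GiB, roughly 17.5 TiB
```

The "18 TiB, all told" in the text is this 17,880 GiB total, rounded up.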

If your life happens to revolve around solid-state storage, then two-dozen 800 GB SSD DC S3700s in one place are like having a bespoke Rolls Royce trimmed in fragrant stegosaurus hide. It seems too opulent to even exist. Fortunately, a conversation with the right folks at Intel made it possible for us to line this up. Now, what to do with all of our high-end hardware?

The mandate seemed clear: let's stripe these bad boys together and see what sort of performance is really possible.

Intel and LSI Hardware RAID Controllers

We're presented with a few challenges, though. If we only had eight drives to deal with, our situation would be simple. Many hardware RAID controllers offer eight ports of connectivity. An octet of SSDs would give each drive its own port and we'd be off to the races. But 24 drives force us to consider alternative configurations. We could use three RAID cards, but then we wouldn't be able to create a single volume. We could also run dozens of drives from one controller using an expander, but that only makes sense for mechanical disks that don't saturate a 6 Gb/s link. We'll tackle this conundrum shortly.
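Some back-of-the-envelope math shows why the expander route falls apart with SSDs. The link-speed figures below are common rule-of-thumb approximations (6 Gb/s SAS nets roughly 600 MB/s after 8b/10b encoding; PCIe 3.0 moves roughly 985 MB/s per lane), not measured values:

```python
# Rough bandwidth budget for 24 SSDs behind a single controller.
drives = 24
seq_read_mb_s = 500            # per-drive sequential read spec, from above

aggregate = drives * seq_read_mb_s
print(aggregate)               # 12000 MB/s of potential reads

sas_link = 600                 # one 6 Gb/s port, after 8b/10b encoding overhead
wide_port = 4 * sas_link       # a typical x4 expander uplink: 2400 MB/s
print(aggregate / wide_port)   # 5.0x oversubscribed behind one expander

pcie3_x8 = 8 * 985             # one controller's host link: ~7880 MB/s
print(aggregate > pcie3_x8)    # True: even the card's own slot is a ceiling
```

A single SSD nearly saturates its 6 Gb/s link on its own, so funneling two dozen of them through one expander uplink (or even one controller's PCIe slot) throws most of their throughput away.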

Then there's the sad fact that so many drives and their associated connections are physically difficult to manage. For every SSD, you're looking at one power and one data cable. So, we need a backplane to provide both in one convenient package. And because we also need a lot of host resources to tax this gratuitous storage subsystem, we can address the setup side by using a server equipped with a 24-port backplane. Intel heard our request on that end, too, and followed up our package of SSDs with a dual Xeon E5 machine exposing 80 lanes of third-gen PCI Express and a number of storage-centric features.

And with that, the hardware is ready for action. Pair 24 SSD DC S3700s with a dual-processor 2U server and let 'er rip. But we're still missing one piece of the puzzle. As a result of the way these drives are set up, we must rely on oft-maligned software RAID. Depending on whose office you happen to be standing in, those two words together can get you slapped across the face. But that's alright by us. Software-based RAID functionality has come a long way over the past 15 years, and although it saps host resources, our 16-core server has plenty of horsepower in reserve.

At least for this first round of experimentation, we're skipping the most responsible, performance-robbing RAID levels (like 5 and 6) in favor of the far more exciting (and dangerous) RAID 0, which should let us get to all of the performance and capacity these drives can manage.
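For reference, RAID 0 simply rotates fixed-size stripe units across the member drives, which is why both capacity and throughput scale with drive count. A minimal sketch, assuming a hypothetical 128 KiB stripe unit (the real chunk size is a tunable of whatever RAID layer you use):

```python
# Minimal RAID 0 address mapping: logical byte -> (member drive, drive offset).
STRIPE_UNIT = 128 * 1024  # hypothetical 128 KiB chunk size

def locate(logical_offset: int, n_drives: int):
    """Return (drive index, byte offset on that drive) for a logical byte."""
    stripe_no = logical_offset // STRIPE_UNIT
    drive = stripe_no % n_drives                    # stripes rotate round-robin
    drive_offset = (stripe_no // n_drives) * STRIPE_UNIT \
                   + logical_offset % STRIPE_UNIT
    return drive, drive_offset

# Sequential I/O fans out across all 24 members, which is where the speed
# (and the total lack of redundancy) comes from.
print(locate(0, 24))                   # (0, 0)
print(locate(STRIPE_UNIT, 24))         # (1, 0)
print(locate(24 * STRIPE_UNIT, 24))    # (0, 131072) -- wrapped back to drive 0
```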

Member Drives | Total Capacity
1 x 800 GB DC S3700 | 745 GiB
4 x 800 GB DC S3700 | 2,980 GiB
8 x 800 GB DC S3700 | 5,960 GiB
16 x 800 GB DC S3700 | 11,920 GiB
24 x 800 GB DC S3700 | 17,880 GiB


Just one of our SSD DC S3700s is larger than 12 of Intel's original 64 GB X25-E enterprise drives. To match the capacity of our striped 24-drive array built from 800 GB models, you'd need more than 300 of those X25-Es. Yeah, we're pretty excited about having so much flash at our disposal.

Comments (46)
This thread is closed for comments.
  • 0
    ASHISH65 , April 14, 2013 9:49 PM
    Very good review, and also helpful!
  • 0
    mayankleoboy1 , April 14, 2013 9:52 PM
    IIRC, Intel has enabled TRIM for RAID 0 setups. Doesn't that work here too?
  • 3
    Novulux , April 14, 2013 10:13 PM
    You have graphs labeled as MB/s when it should be IOPS?
  • -1
    DarkSable , April 14, 2013 10:34 PM
    Idbuaha.

    I want.
  • 3
    techcurious , April 14, 2013 11:10 PM
    I like the 3D graphs..
  • 0
    cangelini , April 14, 2013 11:26 PM
    Novulux: You have graphs labeled as MB/s when it should be IOPS?

    Fixing now!
  • -1
    sodaant , April 14, 2013 11:29 PM
    Those graphs should be labeled IOPS, there's no way you are getting a terabyte per second of throughput.
  • 0
    cryan , April 15, 2013 12:11 AM
    mayankleoboy1: IIRC, Intel has enabled TRIM for RAID 0 setups. Doesn't that work here too?


    Intel has implemented TRIM in RAID, but you need to be using TRIM-enabled SSDs attached to their 7 series motherboards. Then, you have to be using Intel's latest 11.x RST drivers. If you're feeling frisky, you can update most recent motherboards with UEFI ROMs injected with the proper OROMs for some black market TRIM. Works like a charm.

    In this case, we used host bus adapters, not Intel onboard PHYs, so Intel's TRIM in RAID doesn't really apply here.


    Regards,
    Christopher Ryan
  • 5
    cryan , April 15, 2013 12:16 AM
    DarkSable: Idbuaha. I want.


    And I want it back! Intel needed the drives back, so off they went. I can't say I blame them since 24 800GB S3700s is basically the entire GDP of Canada.

    techcurious: I like the 3D graphs..


    Thanks! I think they complement the line charts and bar charts well. That, and they look pretty bitchin'.


    Regards,
    Christopher Ryan

  • 0
    utroz , April 15, 2013 12:33 AM
    That sucks about your backplanes holding you back. And yes, trying to do it with regular breakout cables and power cables would have been a total nightmare, possible only if you made special holding racks for the drives and had multiple power supply units to have enough SATA power connectors (unless you used the dreaded Y-connectors that are known to be iffy and are not commercial grade). I still would be interested if someone were crazy enough to do it just for testing purposes, to see how much the backplanes are holding performance back... But thanks for all the hard work; this type of benching is by no means easy.

    I remember doing my first RAID with an Iwill 2-port ATA-66 RAID controller and 4 30GB 7200RPM drives, and it hit the limits of PCI at 133MB/s. I tried RAID 0, 1, and 0+1. You had to have all the exact same drives or it would be slower than single drives. The thing took forever to build the arrays, and if you shut off the computer wrong it would cause huge issues in RAID 0... Fun times...
  • -1
    hansrotec , April 15, 2013 12:35 AM
    With the Crucial M500 960 GB ($599.99 USD) out, you could drop the cost by a pretty penny, putting it in range of more groups.
  • 5
    PadaV4 , April 15, 2013 3:25 AM
    The 3D graphs look sexy :D
  • 0
    Aegean BM , April 15, 2013 3:58 AM
    Nice to see "sky is the limit" once in a while, because we're curious and because yesteryear's sky is today's budget rack. (Although by my humble prediction, I won't be able to afford this setup for 10 years.)

    That said, I would dearly like to see the follow up "Fastest Windows Storage for $1000". (I assume it would be RAID 0 of two 500GB SSD.) I picked a grand because it's a common anchor point, affordable today, and anything less is probably just "Get yourself the biggest SSD you can afford on our monthly SSD comparison chart."
  • 0
    Aegean BM , April 15, 2013 4:23 AM
    SSD RAID 0 is sexy. With HDD being so massive and cheap, I wonder how close HDD can come to SSD in RAID 0. (As if you don't already have an overwhelming stack of requests and ideas of your own for new articles.)
  • -1
    ojas , April 15, 2013 5:59 AM
    Where's Andrew Ku? Isn't this usually his stuff?
  • 0
    ojas , April 15, 2013 6:14 AM
    Aegean BM: SSD RAID 0 is sexy. With HDD being so massive and cheap, I wonder how close HDD can come to SSD in RAID 0. (As if you don't already have an overwhelming stack of requests and ideas of your own for new articles.)

    They did compare 8 (WD?) HDDs to some Samsung SSDs (830 series, I think).
    Let me see...
    No, 470 series vs Fujitsu HDDs:
    http://www.tomshardware.com/reviews/ssd-raid-array-hard-drive,2775.html
  • 1
    cryan , April 15, 2013 6:36 AM
    BigMack70: lol, 32 threads of QD 32. That setup is ridiculous... this article was a fun read


    That's equivalent to a total outstanding IO count of 1024. The only reason it didn't go up to 128 threads of 128 QD is because (1) it really muddies up the charts and (2) performance mostly maxes out at TC32/QD32.

    Aegean BM: SSD RAID 0 is sexy. With HDD being so massive and cheap, I wonder how close HDD can come to SSD in RAID 0. (As if you don't already have an overwhelming stack of requests and ideas of your own for new articles.)


    The truth is, even with the fastest 15,000 RPM SAS hard drives, you still can't overcome the fundamental issues. When you RAID some HDDs together, you do get much better performance and responsiveness. It's just not anything like the jolt a single SSD can provide.

    Regards,
    Christopher Ryan

  • 0
    yialanliu , April 15, 2013 6:40 AM
    Very cool to see the performance, but I would love to see a test of RAID 5/6 as a much more practical usage of multiple SSDs.
  • 0
    veroxious , April 15, 2013 6:48 AM
    What I would like to know is what the performance difference would be if you stuck those 24 Intel SSDs in a SAN scenario, i.e. swapping out 24 300GB 15K SAS drives in an entry-level Dell MD3220 chassis with a dual-socket, sixteen-core Intel-powered host and 128GB of RAM...
  • 0
    veroxious , April 15, 2013 6:50 AM
    Sorry, forgot to add: in a RAID 10/50 config.
