Almost 20 TB (Or $50,000) Of SSD DC S3700 Drives, Benchmarked

Toying Around With 18 TB Of Solid-State Storage

For as long as SSDs have been around, power users and enterprise professionals have been configuring them in RAID arrays. Connect a few low-capacity solid-state drives, and you get one spacious and lightning-fast volume. There are a number of great reasons to build such a potent arrangement, and some compelling reasons not to. But perhaps conventional wisdom is up for review now.

You could argue that there are actually fewer reasons to team up a set of solid-state drives nowadays. Price per gigabyte continues to fall as capacity creeps higher. And folks looking for the ultimate in performance have a number of PCI Express-based options available to them. But we don't share that opinion, particularly after Intel sent us 24 of its high-end SSD DC S3700 drives to toy around with (check out our review: Intel SSD DC S3700 Review: Benchmarking Consistency).

The SSD DC S3700 family boasts impressive specs. At its peak, the largest model is capable of sequential reads of up to 500 MB/s and writes as high as 460 MB/s. Random 4 KB reads clock in at up to 75,000 IOPS, while writes plateau at 36,000. Of course, the real reasons to want one of these drives are their bolstered endurance, end-to-end data protection, resilience against power loss, and a price tag just north of $2/GB. 

As we know, the SSD DC S3700 ships in capacities as low as 100 GB. Two dozen of those smaller drives could do some real damage in the right hands. After all, you'd be looking at 2.4 TB in RAID 0. But we got the 800 GB version for our little exhibition. At about $2,000 each, that's roughly 50 grand worth of flash-based storage.

That adds up to a mind-boggling 24,576 GiB of raw flash, by the way. Each flagship 800 GB SSD DC S3700 carries a full tebibyte (1,024 GiB) of NAND on-board. Even after you factor in over-provisioning, we still end up with 745 GiB of usable space per drive, giving us nearly 17.5 TiB all told. Considering these things are designed to withstand up to 10 full drive writes per day for five years, the possibilities seem endless.
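
For anyone who wants to check the math, here's a quick back-of-the-envelope sketch in Python using only the figures quoted above; the lifetime-writes total is just an extrapolation of the rated endurance, not a measured number.

```python
# Back-of-the-envelope check on the capacity and endurance figures above.
# All inputs come straight from the article; nothing here is measured.

DRIVES = 24
RAW_FLASH_PER_DRIVE_GIB = 1024      # each 800 GB S3700 carries a full TiB of NAND
USABLE_PER_DRIVE_GIB = 745          # what's left after over-provisioning
DRIVE_WRITES_PER_DAY = 10           # rated endurance
WARRANTY_YEARS = 5

raw_flash_gib = DRIVES * RAW_FLASH_PER_DRIVE_GIB             # 24,576 GiB of NAND
usable_gib = DRIVES * USABLE_PER_DRIVE_GIB                   # 17,880 GiB usable
usable_tib = usable_gib / 1024                                # ~17.5 TiB

# Extrapolating the rated endurance (10 drive writes per day, 5 years) across the array.
lifetime_writes_pib = (usable_gib * DRIVE_WRITES_PER_DAY * 365 * WARRANTY_YEARS) / (1024 ** 2)

print(f"Raw flash:     {raw_flash_gib:,} GiB")
print(f"Usable space:  {usable_gib:,} GiB (~{usable_tib:.1f} TiB)")
print(f"Rated writes:  ~{lifetime_writes_pib:,.0f} PiB over the warranty period")
```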

If your life happens to revolve around solid-state storage, then two-dozen 800 GB SSD DC S3700s in one place are like having a bespoke Rolls Royce trimmed in fragrant stegosaurus hide. It seems too opulent to even exist. Fortunately, a conversation with the right folks at Intel made it possible for us to line this up. Now, what to do with all of our high-end hardware?

The mandate seemed clear: let's stripe these bad boys together and see what sort of performance is really possible.

Intel and LSI Hardware RAID Controllers

We're presented with a few challenges, though. If we only had eight drives to deal with, our situation would be simple. Many hardware RAID controllers offer eight ports of connectivity. An octet of SSDs would give each drive its own port, and we'd be off to the races. But 24 drives force us to consider alternative configurations. We could use three RAID cards, but then we wouldn't be able to create a single volume. We could also run dozens of drives from one controller using an expander, but that only makes sense for mechanical disks that don't saturate a 6 Gb/s link. We'll tackle this conundrum shortly.
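
To put numbers on the expander argument, here's a rough Python sketch. The nominal link rates are standard SATA/SAS figures, but the four-lane uplink and the 150 MB/s mechanical-disk estimate are assumptions for illustration, not details of our actual test rig.

```python
# Rough illustration of why hanging SSDs off a SAS expander doesn't make sense here.
# The x4 uplink is an assumption about a typical expander wide port, and the HDD
# figure is a ballpark estimate; neither describes our actual wiring.

SATA_LINK_GBPS = 6.0
ENCODING_EFFICIENCY = 8 / 10        # 6 Gb/s SATA/SAS uses 8b/10b encoding
SSD_SEQ_READ_MBPS = 500             # one SSD DC S3700
HDD_SEQ_READ_MBPS = 150             # ballpark for a fast mechanical disk

link_mbps = SATA_LINK_GBPS * ENCODING_EFFICIENCY * 1000 / 8   # ~600 MB/s per 6 Gb/s link
uplink_mbps = 4 * link_mbps                                    # assumed x4 expander uplink, ~2.4 GB/s

print(f"One 6 Gb/s link:            ~{link_mbps:.0f} MB/s usable")
print(f"SSDs an x4 uplink can feed:  {uplink_mbps / SSD_SEQ_READ_MBPS:.1f}")   # ~4.8 drives
print(f"HDDs an x4 uplink can feed:  {uplink_mbps / HDD_SEQ_READ_MBPS:.1f}")   # ~16 drives
```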

Then there's the sad fact that so many drives and their associated connections are physically difficult to manage. For every SSD, you're looking at one power and one data cable. So, we need a backplane that provides both in one convenient package. And because we also need a lot of host resources to tax this gratuitous storage subsystem, a server equipped with a 24-port backplane addresses both problems at once. Intel heard our request on that end, too, and followed up its package of SSDs with a dual Xeon E5 machine exposing 80 lanes of third-gen PCI Express and a number of storage-centric features.
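
A similar napkin calculation suggests the host side isn't the bottleneck. The per-lane figure is the nominal PCIe 3.0 rate, and the x8 slot is an assumed (typical) host bus adapter width rather than a spec of this particular server.

```python
# Quick sanity check that the host has the PCIe bandwidth to keep 24 SSDs busy.
# Per-lane throughput is the nominal PCIe 3.0 figure; the x8 HBA slot width is an
# assumption about a typical layout, not a statement about this exact machine.

PCIE3_LANE_MBPS = 985               # ~8 GT/s with 128b/130b encoding
HOST_LANES = 80                     # dual Xeon E5, as described above
DRIVES = 24
SSD_SEQ_READ_MBPS = 500

aggregate_ssd_mbps = DRIVES * SSD_SEQ_READ_MBPS              # ~12 GB/s from the drives
host_pcie_mbps = HOST_LANES * PCIE3_LANE_MBPS                # ~78.8 GB/s into the CPUs
hba_x8_mbps = 8 * PCIE3_LANE_MBPS                            # ~7.9 GB/s per assumed x8 adapter

print(f"Drives, flat out:  ~{aggregate_ssd_mbps / 1000:.1f} GB/s")
print(f"Host PCIe 3.0:     ~{host_pcie_mbps / 1000:.1f} GB/s across {HOST_LANES} lanes")
print(f"Per x8 HBA slot:   ~{hba_x8_mbps / 1000:.1f} GB/s")
```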

And with that, the hardware is ready for action. Pair 24 SSD DC S3700s with a dual-processor 2U server and let 'er rip. But we're still missing one piece of the puzzle. Because the drives hang off host bus adapters rather than a single hardware RAID controller, we have to rely on oft-maligned software RAID. Depending on whose office you happen to be standing in, those two words together can get you slapped across the face. But that's alright by us. Software-based RAID functionality has come a long way over the past 15 years, and although it saps host resources, our 16-core server has plenty of horsepower in reserve.

At least for this first round of experimentation, we're skipping the most responsible, performance-robbing RAID levels (like 5 and 6) in favor of the far more exciting (and dangerous) RAID 0, which should let us get to all of the performance and capacity these drives can manage.

Member Drives            Total Capacity
1 x 800 GB DC S3700      745 GiB
4 x 800 GB DC S3700      2,980 GiB
8 x 800 GB DC S3700      5,960 GiB
16 x 800 GB DC S3700     11,920 GiB
24 x 800 GB DC S3700     17,880 GiB

Just one of our SSD DC S3700s is larger than 12 of Intel's original 64 GB X25-E enterprise drives. To match the capacity of our striped 24-drive array built from 800 GB drives, you'd need more than 300 of those X25-Es. Yeah, we're pretty excited about having so much flash at our disposal.
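
As a closing illustration of what striping actually does, here's a minimal Python sketch of RAID 0's address mapping; the 128 KiB stripe unit is an arbitrary example value, not the setting used in our benchmarks. Each consecutive stripe-sized chunk of the logical volume lands on the next drive in round-robin order, which is why capacity and throughput scale with member count while a single drive failure takes out the whole array.

```python
# Minimal sketch of how RAID 0 spreads data across member drives. The stripe
# unit below is an illustrative default, not the setting used in our testing.

STRIPE_UNIT = 128 * 1024            # bytes written to one drive before moving to the next
MEMBERS = 24                        # drives in the array

def raid0_locate(logical_offset: int) -> tuple[int, int]:
    """Map a logical byte offset in the array to (member index, offset on that member)."""
    stripe_index = logical_offset // STRIPE_UNIT     # which stripe unit, counting from zero
    within_unit = logical_offset % STRIPE_UNIT
    member = stripe_index % MEMBERS                  # round-robin across the 24 drives
    member_offset = (stripe_index // MEMBERS) * STRIPE_UNIT + within_unit
    return member, member_offset

# A large sequential access touches every member in turn, which is where the
# near-linear scaling (and the total lack of redundancy) comes from.
for offset in range(0, 6 * STRIPE_UNIT, STRIPE_UNIT):
    print(offset, raid0_locate(offset))
```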

  • ASHISH65
    very good review and also helpful!
    Reply
  • mayankleoboy1
    IIRC, Intel has enabled TRIM for RAID 0 setups. Doesn't that work here too?
    Reply
  • Novulux
    You have graphs labeled as MB/s when it should be IOPS?
    Reply
  • DarkSable
    Idbuaha.

    I want.
    Reply
  • techcurious
    I like the 3D graphs..
    Reply
  • cangelini
    Novulux: You have graphs labeled as MB/s when it should be IOPS?
    Fixing now!
    Reply
  • sodaant
    Those graphs should be labeled IOPS, there's no way you are getting a terabyte per second of throughput.
    Reply
  • cryan
    mayankleoboy1: IIRC, Intel has enabled TRIM for RAID 0 setups. Doesn't that work here too?
    Intel has implemented TRIM in RAID, but you need to be using TRIM-enabled SSDs attached to their 7 series motherboards. Then, you have to be using Intel's latest 11.x RST drivers. If you're feeling frisky, you can update most recent motherboards with UEFI ROMs injected with the proper OROMs for some black market TRIM. Works like a charm.

    In this case, we used host bus adapters, not Intel onboard PHYs, so Intel's TRIM in RAID doesn't really apply here.


    Regards,
    Christopher Ryan
    Reply
  • cryan
    DarkSable: Idbuaha. I want.
    And I want it back! Intel needed the drives back, so off they went. I can't say I blame them since 24 800GB S3700s is basically the entire GDP of Canada.

    techcurious: I like the 3D graphs..
    Thanks! I think they complement the line charts and bar charts well. That, and they look pretty bitchin'.


    Regards,
    Christopher Ryan

    Reply
  • utroz
    That sucks about your backplanes holding you back, and yes, trying to do it with regular breakout cables and power cables would have been a total nightmare, possible only if you made special holding racks for the drives and had multiple power supply units to have enough SATA power connectors (unless you used the dreaded Y-connectors that are known to be iffy and are not commercial grade). I still would have been interested in seeing someone crazy enough to do it just for testing purposes, to see how much the backplanes are holding performance back... But thanks for all the hard work; this type of benching is by no means easy. I remember doing my first RAID with an Iwill 2-port ATA-66 RAID controller and four 30 GB 7,200 RPM drives, and it hit the limits of PCI at 133 MB/s. I tried RAID 0, 1, and 0+1. You had to have all the same exact drives or it would be slower than single drives. The thing took forever to build the arrays, and if you shut off the computer wrong it would cause huge issues in RAID 0... Fun times...
    Reply