
Samsung Crams 24-SSD RAID Experiment

By now, the Internet is abuzz with a new marketing video from Samsung. In it, a group of intrepid--well--Samsung marketers takes the company's new 256 GB solid-state drive and hooks it up to 23 of its closest friends.

That's right. Samsung created a 24-drive RAID array of SSDs and used it to accomplish a number of simple tasks.  We're not sure which RAID level the Samsung folks are using--we're going to assume RAID 0, since the video seems geared to showing off what a ton of solid-state drives can collectively do.

Or does it?

For some reason, the marketers also don't mention which RAID controller they're using to attach the drives to their system. We can only surmise, based on a cursory search of some of the industry's bigger RAID controller companies, that they're using a PCI Express x8-based controller.  We weren't able to find any quasi-consumer SATA controller with 24 or more ports running on anything faster than a PCI Express x8 link. Why is this important? Because as cool as the notion of 24 drives in a RAID array might be, it's completely frivolous from a technological perspective.

Just looking at the tale of the tape, Samsung boasts 220 MB/sec. sequential reads for its 256 GB SSD. Actual performance usually differs from what a manufacturer quotes, but in this case, just assume that this is the maximum output of each drive. Twenty-four of these drives in a giant RAID 0 array could, in theory, produce a maximum sequential read speed of 5,280 MB/sec. This will obviously be different in a real-world setting, as adding drives to a RAID 0 array doesn't scale bandwidth linearly.  But push those thoughts aside for now and just cement that number in the back of your mind for a moment.

Now consider just how much bandwidth a PCI Express x8 connector can tolerate.  Each of the eight lanes in the connector produces a bidirectional 250 MB/sec. transfer rate.  Since the Samsung crew is only reading from the drives for its benchmarks, that leaves a total bandwidth maximum of 2,000 MB/sec.--nearly three gigabytes per second lower than the theoretical maximum output of a 24-drive SSD RAID array.  And what happens when the Samsung group measures the performance of this onslaught of drives? It finds a sequential transfer speed of 2,019 MB/sec.
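As a back-of-the-envelope check, the whole argument fits in a few lines of Python. The perfect-scaling assumption is ours for illustration; the per-drive and per-lane figures are the ones quoted above:

```python
# Bottleneck arithmetic for the 24-SSD setup. The 220 MB/s per-drive figure
# is Samsung's quoted sequential read spec; 250 MB/s per lane is PCIe 1.x.
DRIVE_READ_MBPS = 220   # per-SSD sequential read (Samsung's spec)
DRIVES = 24
PCIE1_LANE_MBPS = 250   # PCIe 1.x bandwidth per lane, per direction
LANES = 8

array_max = DRIVE_READ_MBPS * DRIVES   # 5,280 MB/s, assuming perfect scaling
bus_max = PCIE1_LANE_MBPS * LANES      # 2,000 MB/s over an x8 link
effective = min(array_max, bus_max)    # the slower side sets the ceiling

print(array_max, bus_max, effective)   # 5280 2000 2000
```

The `min()` on the last step is the whole story: the measured 2,019 MB/sec. sits almost exactly at the bus ceiling, not the drives'.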

What was that PCI Express x8 maximum bandwidth again? You guessed it. Right around 2,000 MB/sec.

We're not discounting the "cool factor" that comes with using a large chunk of solid-state drives in a single array.  Or, for that matter, grabbing said array and jumping up and down on a trampoline while your computer's running.  But it does look a little misleading to use so many of these drives (at roughly $900 a pop) to deliver this kind of performance when a similar number could have been achieved with, say, half as many drives.  We're only surmising this last point, as it's unclear how much of a performance benefit each new SSD brings to a RAID 0 array.

Still, it's an important lesson for aspiring performance enthusiasts.  Maxing out your storage bandwidth can win you the love and admiration of YouTube geeks worldwide, but know that all the parts of your machine--the tubes, if you will--have to be the correct size to avoid the kind of bottlenecking the Samsung crew sees in its 24-SSD experiment.  We can only imagine the kind of results Samsung might have been able to show off were it running a PCI Express x16 RAID controller (or, for that matter, a PCI Express 2.0 x8 RAID controller).
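For the curious, those hypothetical faster buses would roughly double the ceiling. A quick sketch, using the standard per-lane rates (250 MB/s for PCIe 1.x, 500 MB/s for PCIe 2.0):

```python
# Headroom comparison for the faster buses mentioned above.
array_max = 220 * 24    # 5,280 MB/s theoretical 24-drive RAID 0 output

pcie1_x16 = 250 * 16    # 4,000 MB/s: twice the lanes
pcie2_x8 = 500 * 8      # 4,000 MB/s: same lanes, twice the rate per lane

# Either bus would still bottleneck the full array, but at twice the
# throughput of the x8 link we suspect the video is using.
print(pcie1_x16, pcie2_x8, array_max)   # 4000 4000 5280
```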

Do you have SSDs installed on your system? If so, what brand and how many do you have installed? Was it worth the purchase?

Update: Props to Tom's Hardware user Spazoid, who noted that Samsung's full RAID configuration details appear in a quick series of frames at the tail end of the video.  Here's the setup: Samsung attaches ten SSDs to an Areca 1680ix-24 RAID card, eight SSDs to an Adaptec 5 Series RAID card, and the final six SSDs directly to the motherboard's own SATA connectors.  It ran two RAID 0 arrays built from the drives connected to each RAID card, with the remaining drives operating in standalone mode. And the 2,000 MB/sec number?  That's a cumulative total of the connected drives' performances, not a reflection of a single array's performance.
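To keep the revised configuration straight, here's the drive split as described in the video's closing frames (the grouping labels are ours; per-group throughput isn't published, so none is modeled):

```python
# Drive allocation from the video's closing frames, per the update above.
groups = {
    "Areca 1680ix-24 RAID card": 10,   # first RAID 0 array
    "Adaptec 5 Series RAID card": 8,   # second RAID 0 array
    "Onboard motherboard SATA": 6,     # standalone drives
}

total = sum(groups.values())
print(total)   # 24 drives, split across three controllers
```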

  • LATTEH
    the way the picture looks it looks like a mother pig or dog feeding her pups LOL
    Reply
  • PhoenixBR
    We only need 2 Microns SSD to get equal performance and 3 to surpass it.

    "TG Daily - 26/11/2008
    Chicago (IL) – Chip manufacturer has demonstrated what is, at least to our knowledge, the fastest solid state disk drive (SSD) demonstrated so far. A demo unit shown in a blurry YouTube video was hitting data transfer rates of 800 MB/s and can expand to apparently about 1 GB/s. The IO performance is about twice of the best performance we have seen to date."
    Reply
  • Aragorn
    Where can you buy that micron drive?
    Reply
  • spazoid
All the info the article states as lacking is at the end of the video. Excessive use of the pause button will reveal to you that they use an Areca, an Adaptec and the onboard controller(s) to achieve a total bandwidth of 2000+ mbyte/second.

    All other info you might want about the setup is also there.
    Reply
  • Themurph
    @spazoid Good catch, Spazoid! I didn't even see this bit after the video's little celebration.

    They're still running quite a strange RAID setup though: using two controllers and onboard motherboard connections to, what, create one giant RAID of drives? Surely there has to be some performance loss from splitting the drive connections up as they do.

    Also, -15 points for the "pause to see how we did it" deal. Ugh.
    Reply
  • dlapham
    Maybe they were using raid 10 to achieve both redundancy and speed?!?!
    Reply
  • nihility
    Watch the video to the end, they tell you exactly which RAID cards they used.

    The say they had 10 drives hooked up to a 24 port card, another 8 hooked up to an 8 sata port card and another 6 plugged into the motherboard.

    They also state that with the 24 SSDs all hooked up to one card they were getting a serious bottleneck so they instead used the aforementioned setup.

    The video is pretty awesome IMHO. When they opened up 54 programs in a bit over 10 seconds it blew my mind.
    Reply
  • MasonStorm
    How does one set up a RAID array spanning three different controllers?
    Reply
  • hellwig
@MasonStorm: "How does one set up a RAID array spanning three different controllers?" The right software will RAID any hard drives connected to the system, regardless of controllers or even interface. I agree with the article that it was probably RAID 0. Any sort of calculation dependent on the CPU would have greatly reduced their throughput.
    Reply
  • mapesdhs
    I'll be more impressed when they break past 40GB/sec, speeds SGI
    achieved 10 years ago with simple FC.

    Ian.

    Reply