Samsung Crams 24 SSDs Into RAID Experiment

Source: Tom's Hardware US

By now, the Internet is abuzz with a new marketing video from Samsung. In it, a group of intrepid--well--Samsung marketers take the company's new 256 GB solid-state drive and hook it up to 23 of its closest friends.

That's right: Samsung created a 24-drive RAID array of SSDs and used it to accomplish a number of simple tasks. We're not sure what RAID level the Samsung folks are using--we're going to assume RAID 0, since the video seems geared to show off what a ton of solid-state drives can collectively do.

Or does it?

For some reason, the marketers also don't mention the RAID controller they're using to attach the drives to their system. We can only surmise, based on a cursory search of some of the industry's bigger RAID controller companies, that they're using a PCI Express x8-based controller. We weren't able to find any quasi-consumer, 24-port (or larger) SATA controllers running on anything faster than a PCI Express x8 link. Why is this important? Because as cool as the notion of 24 drives in a RAID array might be, it's largely pointless from a performance perspective.

Just looking at the tale of the tape, Samsung boasts 220 MB/sec. sequential reads for its 256 GB SSDs. Actual performance usually differs from what a manufacturer claims, but in this case, just assume that this is the theoretical maximum output of these SSDs. Twenty-four of these drives in a giant RAID 0 array could, in theory, produce a maximum sequential read speed of 5,280 MB/sec. This will obviously be lower in a real-world setting, as adding drives to a RAID 0 array doesn't scale bandwidth perfectly linearly. But push those thoughts aside for now and just cement that number in the back of your mind for a moment.

Now consider just how much bandwidth a PCI Express x8 slot can handle. Each of the eight lanes in the slot provides 250 MB/sec. of bandwidth in each direction. Since the Samsung crew is just reading from the drives for its benchmarks, that leaves a total bandwidth maximum of 2,000 MB/sec.--more than three gigabytes per second short of the theoretical maximum output of a 24-drive SSD RAID array. And what happens when the Samsung group measures the performance of the onslaught of drives? They find a sequential transfer speed of 2,019 MB/sec.

What was that PCI Express x8 maximum bandwidth again? You guessed it. Right around 2,000 MB/sec.
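
For those keeping score at home, the back-of-the-envelope math boils down to a few multiplications. The short Python snippet below is just a quick sketch using the per-drive and per-lane figures quoted above (plus the well-known 500 MB/sec. per-lane rate of PCI Express 2.0); it ignores real-world RAID 0 scaling losses and controller overhead, so treat the outputs as ceilings, not predictions.

    # Rough bandwidth ceilings, using only the figures quoted in the article.
    DRIVE_SEQ_READ_MBPS = 220   # Samsung's claimed sequential read per 256 GB SSD
    PCIE1_LANE_MBPS = 250       # PCI Express 1.x, per lane, per direction
    PCIE2_LANE_MBPS = 500       # PCI Express 2.0 doubles the per-lane rate

    drives = 24
    array_ceiling = drives * DRIVE_SEQ_READ_MBPS   # 5,280 MB/sec. in a perfect RAID 0
    x8_slot_ceiling = 8 * PCIE1_LANE_MBPS          # 2,000 MB/sec. on the controller's link
    x16_slot_ceiling = 16 * PCIE1_LANE_MBPS        # 4,000 MB/sec.
    gen2_x8_ceiling = 8 * PCIE2_LANE_MBPS          # 4,000 MB/sec.

    print(f"24-drive RAID 0 ceiling: {array_ceiling:,} MB/sec.")
    print(f"PCIe x8 slot ceiling:    {x8_slot_ceiling:,} MB/sec.")
    print(f"PCIe x16 or Gen2 x8:     {x16_slot_ceiling:,} / {gen2_x8_ceiling:,} MB/sec.")

Run those numbers and the punchline is hard to miss: the measured 2,019 MB/sec. sits almost exactly on the x8 ceiling, nowhere near the drives' combined potential.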

We're not discounting the "cool factor" that comes with using a big stack of solid-state drives in a single array. Or, for that matter, grabbing said array and jumping up and down on a trampoline while your computer's running. But it does look a little misleading to use so many of these drives (at roughly $900 a pop) to deliver this kind of performance when a similar result could have been achieved with, say, half the number of drives. We're only surmising this last point, as it's unclear how much of a performance benefit each new SSD brings to a RAID 0 array.

Still, it's an important lesson for aspiring performance enthusiasts. Maxing out your storage bandwidth can win you the love and admiration of YouTube geeks worldwide, but know that all the parts of your machine--the tubes, if you will--have to be the correct size to avoid the kind of bottlenecking the Samsung crew ran into with its 24-SSD experiment. We can only imagine the kind of results Samsung might have been able to show off were it running a PCI Express x16 RAID controller (or, for that matter, a PCI Express 2.0 x8 RAID controller).

Do you have SSDs installed on your system? If so, what brand and how many do you have installed? Was it worth the purchase?

Update: Props to Tom's Hardware user Spazoid, who noted that Samsung's full RAID configuration details appear in a quick series of frames at the tail end of the video. Here's the setup: Samsung slaps ten SSDs onto an Areca 1680ix-24 RAID card, eight onto an Adaptec 5 Series RAID card, and plugs the final six directly into the motherboard's SATA ports. It ran two RAID 0 arrays, one built from the drives connected to each controller, with the remaining drives operating in standalone mode. And the 2,000 MB/sec. number? That's a cumulative total of the connected drives' performances, not a reflection of a single array's performance.

This thread is closed for comments
  • LATTEH, March 10, 2009 3:50 PM
    the way the picture looks it looks like a mother pig or dog feeding her pups LOL
  • PhoenixBR, March 10, 2009 3:55 PM
    We only need 2 Micron SSDs to get equal performance and 3 to surpass it.

    "TG Daily - 26/11/2008
    Chicago (IL) – Chip manufacturer [Micron] has demonstrated what is, at least to our knowledge, the fastest solid state disk drive (SSD) demonstrated so far. A demo unit shown in a blurry YouTube video was hitting data transfer rates of 800 MB/s and can expand to apparently about 1 GB/s. The IO performance is about twice of the best performance we have seen to date."
  • Aragorn, March 10, 2009 4:44 PM
    Where can you buy that micron drive?
  • spazoid, March 10, 2009 4:48 PM
    All the info the article says is lacking is at the end of the video. Excessive use of the pause button will reveal that they use an Areca, an Adaptec and the onboard controller(s) to achieve a total bandwidth of 2,000+ MByte/second.

    All other info you might want about the setup is also there.
  • Themurph, March 10, 2009 5:19 PM
    @spazoid Good catch, Spazoid! I didn't even see this bit after the video's little celebration.

    They're still running quite a strange RAID setup though: using two controllers and onboard motherboard connections to, what, create one giant RAID of drives? Surely there has to be some performance loss from splitting the drive connections up as they do.

    Also, -15 points for the "pause to see how we did it" deal. Ugh.
  • dlapham, March 10, 2009 5:20 PM
    Maybe they were using raid 10 to achieve both redundancy and speed?!?!
  • nihility, March 10, 2009 5:44 PM
    Watch the video to the end, they tell you exactly which RAID cards they used.

    They say they had 10 drives hooked up to a 24 port card, another 8 hooked up to an 8 sata port card and another 6 plugged into the motherboard.

    They also state that with the 24 SSDs all hooked up to one card they were getting a serious bottleneck so they instead used the aforementioned setup.

    The video is pretty awesome IMHO. When they opened up 54 programs in a bit over 10 seconds it blew my mind.
  • MasonStorm, March 10, 2009 6:25 PM
    How does one set up a RAID array spanning three different controllers?
  • hellwig, March 10, 2009 6:31 PM
    MasonStorm: How does one set up a RAID array spanning three different controllers?

    The right software will RAID any hard drives connected to the system, regardless of controllers or even interface. I agree with the article that it was probably RAID 0. Any sort of calculation dependent on the CPU would have greatly reduced their throughput.
  • mapesdhs, March 10, 2009 6:41 PM

    I'll be more impressed when they break past 40GB/sec, speeds SGI
    achieved 10 years ago with simple FC.

    Ian.

  • MasonStorm, March 10, 2009 6:42 PM
    Hi hellwig,

    What would be some examples of such software, and are there any that would allow such a created array to be used as a boot drive?
  • MRFS, March 10, 2009 7:02 PM
    > They say they had 10 drives hooked up to a 24 port card, another 8 hooked up to an 8 sata port card and another 6 plugged into the motherboard.

    So, the ceiling was not dictated by a single x8 slot (2GB/sec),
    but by the PCI-E lane assignments made by the BIOS and the chipset.

    What do we get if we go RAID-SLI-or-CrossFire with 2 x RAID controllers,
    each using x8 PCI-E lanes, or preferably 2 x8 PCI-E 2.0 slots?

    Highpoint's RAID controllers can be "teamed" in that fashion.

    Do we then run into the same ceiling, or not?

    Inquiring minds would now like to know.


    MRFS
  • hellwig, March 10, 2009 7:29 PM
    MasonStorm: Hi hellwig, What would be some examples of such software, and are there any that would allow such a created array to be used as a boot drive?

    I doubt you could boot off such an array; it's a purely software implementation, meaning something has to be running the software.

    That said, I don't have specific examples (never done it myself). Many OSes can implement RAID on their own: http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#Implementations

    This article here on Tom's tells how to set up RAID 0 or 1 in Windows XP: http://www.tomshardware.com/reviews/raid-additional-hardware,363.html

    This guy claims to be able to hack Windows XP into doing RAID 5: http://www.jonfleck.com/2009/02/24/low-cost-and-reliable-network-attatched-software-jbod-raid-0-1-or-5/#more-934

    I'm sure there are third-party apps out there that implement this as well, but I wouldn't know where to look.
  • mapesdhs, March 10, 2009 8:02 PM
    MasonStorm writes:
    > How does one set up a RAID array spanning three different controllers?

    For hw RAID I guess it depends on the cards and management sw.

    For RAID0, it's easy to do this on certain systems, eg. under
    IRIX, using 3 x QLA12160 (6 disks per channel, SCSI controller
    IDs 2/3, 8/9, 10/11), optimised for uncompressed HD, it would be:

    1. diskalign -n video -r8294400 -a16k '/dev/dsk/dks[p0,2,8,10,3,9,11]d[8-13]s7' | tee xlv.script
    2. xlv_make < xlv.script
    3. mkfs -b size=16384 /dev/xlv/video
    4. mkdir /video
    5. mount /dev/xlv/video /video


    (I hope the text formatting works for the above)

    That gets me 511MB/sec sequential read using a bunch of old/slow
    Seagate 10K 73s, on an Octane system more than a decade old. With
    modern SCSI disks, I get the same speed with just a couple of
    drives per channel.

    I should imagine Linux and other UNIX variants have similar
    sw tools, but I don't think Windows offers the same degree of
    control.

    Ian.

  • fiskfisk33, March 10, 2009 8:04 PM
    why are you guessing what they used?
    if you watch the vid in hd and pause at the end you can read it perfectly :p 

    they had
    10 drives connected to an 'areca 1680ix-24'
    8 to an 'adaptec 5 series'
    and 4 directly to the mobo.
  • falchard, March 10, 2009 9:57 PM
    Why did they use a Mid-Tower?
  • graviongr, March 10, 2009 11:43 PM
    I also read all the pause screens, it also says they disabled all optical drives. So you can have a super fast system but you can't watch a DVD lol.

    Pointless.
  • MRFS, March 11, 2009 1:59 AM
    > they disabled all optical drives

    Maybe they ran out of SATA ports? :) 


    MRFS
  • MRFS, March 11, 2009 2:06 AM
    > they disabled all optical drives

    I use that chassis: They let the SSDs "all hang out";
    as such there was no need to install them in 24 x 2.5" drive bays.

    It could be done, however, with 4-in-1 enclosures
    like the QuadraPack Q14, and this Athena unit I
    recently purchased from Newegg:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16816119006

    6 x 5.25" bays @ 4 x SSDs each = 24 SSDs total

    That Thermaltake Armor chassis has 11 x 5.25" drive bays:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16811133021
    (see photos)


    MRFS

  • ossie, March 11, 2009 3:19 PM
    Kind of counterproductive to use 2 expensive HW RAID controllers and SB-SATA for a big array.
    A better solution for higher performance and lower cost would have been 3 PCIe-x8 8 port SAS HBAs with SW RAID.