A Sexy Storage Spree: The 3 GB/s Project, Revisited

Benchmark Results: Throughput

With two controllers, the RAID array delivers strong throughput. On average, though, the results are somewhat sobering: 2245 MB/s reads and 2094 MB/s writes in the fastest configuration. Those are good numbers, but still well short of the magical 3 GB/s threshold. Broken down by queue depth, however, the data tells a very different story.

Due to bandwidth limitations of the LSI cards, it makes little difference whether the 9280-24i4e drives eight or 16 SSDs. The highest read and write rates come from the dual-controller solution with two 9260-8i cards, each connected to a separate PCI Express 2.0 slot.

At low queue depths, the test systems are fairly close to each other. But at a queue depth of four outstanding commands, the dual-controller system gets going, and at QD=8 it really shows its advantage, hovering just below the 3 GB/s mark when writing and well above it when reading.
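A queue-depth sweep like the one described above can be reproduced on Linux with a tool such as fio (this is not the article's own benchmark tooling; the device path, block size, and run time below are assumptions you would adjust for your own array):

```shell
# Hypothetical fio run: sequential reads at queue depth 8 against a RAID
# block device. /dev/md0, 128k blocks, and 60 s runtime are assumptions.
fio --name=seq-read-qd8 --filename=/dev/md0 --ioengine=libaio --direct=1 \
    --rw=read --bs=128k --iodepth=8 --runtime=60 --time_based
```

Repeating the run with `--iodepth=1`, `2`, and `4` (and `--rw=write` for the write side) yields the same kind of per-queue-depth breakdown the charts are built from.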

With only one controller connected to our test system, the performance ceiling is reached much sooner: the array doesn't exceed roughly 1500 MB/s in writes and around 1600 MB/s in reads.
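As a back-of-the-envelope check on why deeper queues help, Little's law ties outstanding I/Os, completion latency, and throughput together. A minimal sketch, with purely illustrative latency and block-size values (not figures measured in the article):

```python
# Little's law sketch: aggregate throughput grows with the number of
# outstanding I/Os until the controllers saturate. The latency and
# block-size values below are illustrative assumptions, not measurements.

def throughput_mb_s(queue_depth, latency_ms, block_kb=128):
    """Approximate throughput = (outstanding I/Os / completion latency) * block size."""
    iops = queue_depth * 1000.0 / latency_ms   # I/Os completed per second
    return iops * block_kb / 1024.0            # KB/s -> MB/s

for qd in (1, 2, 4, 8):
    print(f"QD={qd}: ~{throughput_mb_s(qd, latency_ms=0.4):.0f} MB/s")
```

The model scales linearly with queue depth, which is why the dual-controller setup only pulls away at QD=4 and above; in reality the curve flattens once the cards or the PCIe links saturate.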

40 comments
  • jeff77789
    I want one of those...
  • user 18
    jeff77789: I want one of those...

    Why stop at one? I want two!
  • burnley14
    Wow, throughput in GB/s. Makes my paltry single SSD look shameful. How fast did Windows boot up out of curiosity?
  • the associate
    Overkill benches like this are awesome, I can't wait to see the crazy shit we're gonna have 10 years from now.

    burnley14: How fast did Windows boot up out of curiosity?

    I'd also like to know =D
  • abhinav_mall
    How many organs will I have to sell to get such a setup?
    My 3-year-old Vista takes 40 painful seconds to boot.
  • knowom
    You can use SuperCache/SuperVolume on SSDs or even USB thumb drives to dramatically improve I/O and bandwidth, at the expense of a bit of your system RAM. The results are still impressive, and it works on hard drives as well, though those suffer from access times no matter what.

    I don't even think I'd bother getting an SSD anymore. After using SuperVolume on a USB thumb drive and an SSD, the results are nearly identical regardless of which is used, and thumb drives are portable and cheaper for the density you get, for some messed-up reason.
  • knowom
    I'd be really interested to see SuperCache/SuperVolume used on this RAID array. It could probably boost it further, or should be able to in theory.
  • x3style
    abhinav_mall: How many organs I will have to sell to get such a setup? My 3 year old Vista takes 40 painful seconds to boot.

    Wow, people still use Vista? Was that even an OS? It felt like some beta test thing.
  • nitrium
    I suspect you'll all be VERY disappointed at how long Windows takes to boot (but I'd also like to know). Unfortunately, most operations in Windows (such as loading apps, games, booting, etc) occur at QD 1 (average is about QD 1.04, QD > 4 are rare). As you can see on Page 7, at QD1 it only gets about 19 MB/sec - the SAME speed as basically any decent single SSD manufactured in the last 3 years.
  • kkiddu
    mayankleoboy1: Holy shit, that's fast. How about giving them as a contest prize?

    I WANT 16 OF THOSE !

    For God's sake, that's $7,000 worth of hardware, not including the PC. DAMN DAMN DAMN !! 3 gigabytes per second. And to think that, on dial-up 4 years back, I downloaded at 3 kilobytes per second (actually, it was more like 2.5 KB/s).
  • Anonymous
    "Albeit risky": what made them risky?
  • user 18
    terasddd: "Albeit Risky" what made them risky?

    It's in RAID 0. If any one drive goes, you lose all the data on the array.
  • compton
    Thanks for taking it "2 x-treeeemms"

    Hmmm. I wonder just how reliable 16 MLC SSDs are. I know that wasn't part of the test, but with sixteen of them working around the clock, how long would it take for one of them to start acting up?
  • Anonymous
    This was apparent years ago. Keep it up; somebody has to push the envelope.
  • Leaps-from-Shadows
    Some pretty nice performance numbers there.

    If you end up giving away those drives, I'll take one. Not the whole array, just one drive. I'm not greedy!
  • Marco925
    mayankleoboy1: holy shit! thats fast. how about giving them as a contest prize?

    As if it would be available outside of the USA.
  • balister
    This would be something a large corporation might want to build and use, but instead of RAID 0, they'd probably run RAID 10 so that they have redundancy.

    I could also see this being used mainly for read-heavy situations where you have data that doesn't change much, maybe just grows, but you need to be able to get to it quickly. The best example I can think of would be something along the lines of patient information for electronic medical records.
  • Anonymous
    I would like to know where the RAID card becomes the bottleneck: how many SSDs can saturate that card, especially with the even faster SSDs?
    And how about some RAID 10 results? And while you're at it, RAID 5, 6, 50, and 60 results? If you have the kit, why not?
  • cadder
    At what data transfer level can a single user detect the difference? Meaning for something like a professional workstation. A single SSD seems to be an improvement over a rotating hard drive, and some people use dual SSDs in a RAID configuration. Would it be worthwhile for a professional workstation to have a RAID with more than two SSDs?
  • truchonic
    An SSD is on my upgrade list: fast booting, not only for Windows but for each program too.