
Another Record Broken: 6 Gb SAS, 16 SSDs, 3.4 GB/s!

In late July, we published the article Breaking Records With SSDs: 16 Intel X25-Es Do 2.2 GB/s, which we created after being encouraged by this YouTube video. The video is a fun bit of documentation on the initial project tackled by Paul Curry, who, like us, went to all that effort for the sake of promoting decent SSDs. The goal was to illustrate unprecedented storage throughput.

Although we were lucky enough to deliver even better performance numbers on our SSD RAID array (16 Intel X25-E SSDs versus 24 Samsung PB22-J drives), we weren’t really satisfied, and decided to do some more testing using other HBAs and RAID controllers. Intel and LSI came to the rescue, supplying the latest MegaRAID 9210-8i (Intel RS2BL080) and 9260-8i cards. Man the battle stations!

More Bandwidth, Please

The 2.2 GB/s result we achieved already sounds pretty impressive, but some simple math reveals that this number could be even higher: each of the 16 Intel X25-E flash SSDs can realistically deliver more than 220 MB/s of throughput. That puts the theoretical maximum for our array at roughly 3.5 GB/s, about 60% more than we had reached so far. Clearly, we were looking at some sort of bottleneck.
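For readers who want to see the arithmetic, here is a quick sketch of that estimate in Python, using only the round figures quoted above (roughly 220 MB/s per drive and the 2.2 GB/s we measured earlier); it is an illustration of the claim, not a new measurement:

```python
# Rough estimate of the array's ceiling, using the figures quoted in the text.
drives = 16
per_drive_mbs = 220      # MB/s each X25-E can realistically sustain
measured_gbs = 2.2       # GB/s the original setup delivered

theoretical_gbs = drives * per_drive_mbs / 1000
headroom_pct = (theoretical_gbs / measured_gbs - 1) * 100

print(f"Theoretical array throughput: {theoretical_gbs:.1f} GB/s")  # ~3.5 GB/s
print(f"Headroom over 2.2 GB/s: {headroom_pct:.0f}%")               # ~60%
```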

Platform? Check.

Our platform, a Supermicro X8SAX X58 motherboard with an Intel Core i7-920 2.66 GHz quad-core processor and 3 GB of DDR3-1333 memory, is definitely fast enough to support higher bandwidth numbers. After all, we used two x16 PCI Express 2.0 slots for the controllers. On first-generation PCI Express, each of the 16 lanes can carry 250 MB/s in each direction, providing up to 4 GB/s each way per slot. On the X58 platform with PCIe 2.0, this doubles to 8 GB/s. Clearly, the platform wasn't the issue.
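The same sort of quick math covers the slots. The sketch below simply multiplies the per-lane figures cited above (250 MB/s per lane, per direction, on PCIe 1.x, doubled on PCIe 2.0) by the 16 lanes of each slot:

```python
# Per-direction bandwidth of an x16 slot, from the per-lane figures in the text.
LANES_X16 = 16
PCIE1_LANE_MBS = 250                  # PCIe 1.x: 250 MB/s per lane, per direction
PCIE2_LANE_MBS = 2 * PCIE1_LANE_MBS   # PCIe 2.0 doubles the per-lane rate

for gen, lane_mbs in (("PCIe 1.x", PCIE1_LANE_MBS), ("PCIe 2.0", PCIE2_LANE_MBS)):
    slot_gbs = LANES_X16 * lane_mbs / 1000
    print(f"{gen} x16 slot: {slot_gbs:.0f} GB/s per direction")
# Prints 4 GB/s and 8 GB/s -- well beyond what 16 SSDs can push, so the
# slots themselves are not the limit.
```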

Controllers? Replaced!

We initially used Adaptec's 5805 cards, which are full-featured RAID controllers offering balanced performance and a plethora of software features for managing the RAID array. One of our very first checks was the potential bandwidth of these cards, since each employs eight PCI Express 1.1 lanes to interface with the system. Eight lanes at 250 MB/s works out to 2 GB/s per card, or 4 GB/s total, so between the two cards we should have had enough available bandwidth. But it turns out that we didn't.
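To put numbers on that shortfall, here is the per-card arithmetic, again using the article's round figures; the suggestion that protocol and controller overhead eats the thin remaining margin is our reading of the result, not something we measured directly:

```python
# Old configuration: each Adaptec card sits on an x8 PCIe 1.1 link
# and hosts half of the 16 SSDs.
lanes_per_card = 8
pcie1_lane_mbs = 250        # MB/s per lane, per direction (PCIe 1.1)
drives_per_card = 8
per_drive_mbs = 220         # the per-drive estimate used earlier

link_gbs = lanes_per_card * pcie1_lane_mbs / 1000    # 2.0 GB/s raw per card
demand_gbs = drives_per_card * per_drive_mbs / 1000  # ~1.76 GB/s of SSD throughput

print(f"x8 PCIe 1.1 link per card: {link_gbs:.1f} GB/s")
print(f"SSD throughput behind each card: {demand_gbs:.2f} GB/s")
# On paper the link wins by only ~0.24 GB/s per card, so any protocol or
# controller overhead can plausibly erase the margin.
```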

We replaced Adaptec's popular 5-series RAID 5805 cards with two LSI controllers.

We decided to use LSI's latest HBA and RAID products instead. LSI sent us its latest MegaRAID 9260-8i cards, which are 6 Gb/s SAS boards. In addition, Intel became intrigued by the possible performance gains and provided two more cards, namely LSI's 9210-8i, which Intel also sells under its own brand as the RS2BL080. These aren't yet available, and they don't come with a powerful XOR engine, cache, or kick-butt enterprise features. But both cards are among the first HBAs to utilize PCI Express 2.0, which doubles the available interface bandwidth over the same eight PCIe lanes. With this hardware, we were pretty darned confident that we'd be able to break our earlier performance numbers. And we did!
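For comparison, the same per-card sketch with the new boards' x8 PCIe 2.0 links shows why we expected the host interface to stop being the limiting factor:

```python
# New configuration: each LSI card gets an x8 PCIe 2.0 link
# (500 MB/s per lane, per direction), while the drive load is unchanged.
lanes_per_card = 8
pcie2_lane_mbs = 500
drives_per_card = 8
per_drive_mbs = 220

link_gbs = lanes_per_card * pcie2_lane_mbs / 1000    # 4.0 GB/s raw per card
demand_gbs = drives_per_card * per_drive_mbs / 1000  # ~1.76 GB/s of SSD throughput

print(f"x8 PCIe 2.0 link per card: {link_gbs:.1f} GB/s")
print(f"SSD throughput behind each card: {demand_gbs:.2f} GB/s")
# Roughly twice the headroom the drives need, so the host interface should
# no longer be the bottleneck.
```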

Comments
  • Anonymous, August 26, 2009 6:54 AM (score: 0)
    Holy cow, how much for the total damage?
  • Anonymous, August 26, 2009 8:31 AM (score: 5)
    It would be good to get a benchmark with Windows XP/Vista/7 showing how long it takes to boot the OS, load various games, copy files, etc. Fair enough, these give fast throughput, but where are the real-world results?
  • Anonymous, August 26, 2009 8:32 AM (score: 2)
    What about some photos of the RAID itself?
  • amdfangirl, August 26, 2009 9:51 AM (score: -6)
    That's... such overkill...
  • climber, August 26, 2009 11:28 AM (score: 6)
    Personally, I would like to see a "Part II" to this article showing RAID 5, 6, and 10 setups with the same tests. No database admin, graphic designer, animator, or CAD/CAM/GIS professional is going to use RAID 0 with its inherent vulnerability, or at least they shouldn't.
  • amnotanoobie, August 26, 2009 11:29 AM (score: 1)
    Quoting facehole: "It would be good to get a benchmark with Windows XP/Vista/7 showing how long it takes to boot the OS, load various games, copy files, etc. Fair enough, these give fast throughput, but where are the real-world results?"


    I think with the cost of such a setup these would be ideal for a web or application server, or maybe a small data center. Booting Win 7 would be the least of your problems.
  • megahunter, August 26, 2009 11:56 AM (score: -5)
    What VGA card was used?
  • cah027, August 26, 2009 12:46 PM (score: 0)
    I wonder if this is the type of storage used in supercomputers or render farms?
  • GullLars, August 26, 2009 2:03 PM (score: -1)
    "None of the SSDs currently available support Serial Attached SCSI (SAS) or 600 MB/s transfer speeds"
    False: STEC's Zeus IOPS and BitMicro's E-Disk Altima support SAS (the Zeus supports 6 Gb/s SAS), though these cost about 3-5x more per GB.
  • Anonymous, August 26, 2009 2:30 PM (score: -1)
    Could it be that the computer's integrated graphics card is also connected to that bus and uses some of the bandwidth?

    Another question I had: would you really notice a difference running a given program at 2.2 GB/s versus 3.4 GB/s? Even slow Vista should fly there.
  • Anonymous, August 26, 2009 2:40 PM (score: -1)
    Excellent article, thanks
  • meatwad53186, August 26, 2009 3:01 PM (score: 6)
    I don't know about anyone else, but I would like to see Tom's include more pictures of the hardware actually in the Tom's office, set up and being used, in some of the articles that get posted.
  • bounty, August 26, 2009 3:28 PM (score: 0)
    So what you need now is for Intel to hook you up with another six drives so you can load up the onboard SATA controller and RAID 0 that with the others. Or switch platforms to something designed for quad SLI, then really load up on the drives (plus onboard SATA, of course). I say dial up the ridiculous, then see how long it takes to boot and load games. New hobby for the super overclockers: make the fastest RAID 0 setup.
  • sseyler, August 26, 2009 4:35 PM (score: -1)
    I'd like to see the actual setup myself, as well.
  • viometrix, August 26, 2009 5:08 PM (score: 0)
    I'd love to see this myself as well...
  • tixarn1, August 26, 2009 6:53 PM (score: 0)
    Quoting facehole: "It would be good to get a benchmark with Windows XP/Vista/7 showing how long it takes to boot the OS, load various games, copy files, etc. Fair enough, these give fast throughput, but where are the real-world results?"


    As I've said before, it's not all hardware RAID, and thus it isn't bootable.
  • Major7up, August 26, 2009 7:39 PM (score: -1)
    I would love to see some high-end mobos that incorporate these new controllers to leave your PCIe slots free. I bet it would not be cheap, but it certainly would be awesome!
  • Anonymous, August 26, 2009 10:17 PM (score: 0)
    16x Intel X25-E will set you back approximately €10,000.