The 2011 Extreme RAID Project
Our most recent extreme RAID array, built with 16 SSDs, delivered a total throughput of over 3.4 GB/s. That was more than a year and a half ago, and we haven't revisited such an ambitious project until now. The relaunch should be especially interesting after this long gap, because a lot has happened in the SSD market over the last few months. So, we decided to reboot our extreme RAID project for one important purpose: to illustrate the potential SSDs have when paired with professional controller hardware, and to provide an outlook on what SSDs could hold for us in the future.
We again used an array of 16 SSDs for our benchmark runs, simply because we weren’t able to gather more of them. More specifically, we received 16 Samsung 470-series drives, even though other drives like Crucial's RealSSD C300, OCZ’s Vertex 3, and Intel's new models deliver higher throughput.
We expect the extreme RAID to easily pass the 3 GB/s mark, and hopefully go a good bit higher. Those projections are justified by benchmark results for the individual drives, which show the Samsung devices achieving sequential read speeds of around 261 MB/s and sequential write speeds of 224 MB/s. In our previous RAID system, we used 16 Intel X25-E drives, which reached an average throughput of around 220 MB/s each. So, we expect to improve on our results this time around (and for a lot less money, too).
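As a back-of-the-envelope check, the projection above is just per-drive sequential speed times drive count. The drive counts and per-drive rates come from the article; perfect linear scaling is an idealized assumption that real controllers never quite reach:

```python
# Idealized RAID 0 scaling estimate (assumes perfect linear scaling,
# which real controller hardware never quite achieves).
DRIVES = 16

# Per-drive sequential rates from the article, in MB/s
SAMSUNG_470_READ = 261
SAMSUNG_470_WRITE = 224
INTEL_X25E_AVG = 220  # drives used in the previous project

def ideal_throughput(per_drive_mbps: float, drives: int = DRIVES) -> float:
    """Aggregate MB/s if throughput scaled perfectly with drive count."""
    return per_drive_mbps * drives

print(f"Ideal read:  {ideal_throughput(SAMSUNG_470_READ) / 1000:.2f} GB/s")   # ~4.18 GB/s
print(f"Ideal write: {ideal_throughput(SAMSUNG_470_WRITE) / 1000:.2f} GB/s")  # ~3.58 GB/s
print(f"X25-E array: {ideal_throughput(INTEL_X25E_AVG) / 1000:.2f} GB/s")     # ~3.52 GB/s
```

Even with real-world losses, the roughly 4.2 GB/s ideal read ceiling makes a 3 GB/s target look comfortably within reach.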
The Hunt For Performance Bottlenecks
Aside from the SSDs, we made few changes to our tried-and-true test system. Our platform, based on a Supermicro X8SAX X58 Express motherboard with Intel’s 2.66 GHz first-generation Core i7-920 quad-core processor and 3 GB of DDR3-1333 memory, is already fast enough to support higher throughput, especially since we used two x16 PCI Express 2.0 ports for the RAID controllers. An X58 platform employing PCIe 2.0 can theoretically push up to 8 GB/s, so there isn’t any risk of running into an interface bottleneck.
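The 8 GB/s figure follows from PCIe 2.0's per-lane signaling rate. A minimal sketch of the arithmetic (5 GT/s per lane, with 8b/10b encoding leaving 80% of the raw bit rate as usable bandwidth per direction):

```python
# PCIe 2.0 bandwidth arithmetic.
GT_PER_LANE = 5.0          # 5 GT/s signaling rate per PCIe 2.0 lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding: 8 data bits per 10 line bits

def pcie2_bandwidth_gbs(lanes: int) -> float:
    """Usable bandwidth in GB/s, per direction, for a PCIe 2.0 link."""
    # 5 GT/s * 0.8 = 4 Gb/s = 0.5 GB/s usable per lane, per direction
    return lanes * GT_PER_LANE * ENCODING_EFFICIENCY / 8

print(pcie2_bandwidth_gbs(16))  # x16 slot: 8.0 GB/s per direction
print(pcie2_bandwidth_gbs(8))   # x8 link:  4.0 GB/s per direction
```

At 8 GB/s per x16 slot, even a single slot leaves plenty of headroom above the array's roughly 3.4 GB/s target.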
We also relied on tested and proven RAID controllers: LSI put two of its PCI Express 2.0 MegaRAID 9260-8i cards at our disposal, each of which allows the connection of eight SSDs. We used both controllers to distribute the SSDs across two PCI Express ports in an effort to avoid bottlenecks. We had an additional LSI card in the mix as well, the MegaRAID 9280-24i4e, which can accommodate up to 24 SATA/SAS drives. We selected this controller to try driving all 16 SSDs from a single card. The implication, of course, is that two controllers should enable more throughput than one, and we're ready to test that hypothesis.
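The reason sequential throughput scales with drive count is the striped address layout: consecutive stripes land on consecutive member drives, so a large sequential transfer keeps all drives busy at once. The article doesn't state the controllers' stripe geometry, so the stripe size below is a hypothetical 64 KB used purely for illustration:

```python
# Generic RAID 0 address mapping (illustrative only; the actual stripe
# size and geometry of the MegaRAID arrays are not given in the article).
STRIPE_SIZE = 64 * 1024   # hypothetical 64 KB stripe
DRIVES = 16

def locate(logical_byte: int) -> tuple[int, int]:
    """Map a logical array offset to (drive index, offset on that drive)."""
    stripe = logical_byte // STRIPE_SIZE
    drive = stripe % DRIVES
    drive_offset = (stripe // DRIVES) * STRIPE_SIZE + logical_byte % STRIPE_SIZE
    return drive, drive_offset

# A 1 MB sequential read touches every member drive, which is why
# sequential throughput scales with drive count.
drives_touched = {locate(b)[0] for b in range(0, 1024 * 1024, STRIPE_SIZE)}
print(len(drives_touched))  # 16
```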
Did we successfully eliminate all bottlenecks? Yes and no. The system easily allows our combination of RAID controllers and SSDs to reach a throughput of over 3 GB/s. However, the RAID controller reached some limits; a controller with more bandwidth would have enabled even higher performance.
Comments

jeff77789: I want one of those...

Why stop at one? I want two!

burnley14: Wow, throughput in GB/s. Makes my paltry single SSD look shameful. How fast did Windows boot up, out of curiosity?

Overkill benches like this are awesome. I can't wait to see the crazy stuff we're gonna have 10 years from now.

> burnley14: How fast did Windows boot up, out of curiosity?
I'd also like to know =D

abhinav_mall: How many organs will I have to sell to get such a setup? My 3-year-old Vista install takes 40 painful seconds to boot.

You can use super cache/super volume on SSDs, or even USB thumb drives, to dramatically improve I/O and bandwidth at the expense of a bit of your system RAM. The results are impressive, and it works on hard drives as well, though those still suffer from access times no matter what. I don't even think I'd bother getting an SSD anymore: after using super volume on a USB thumb drive and an SSD, the results are nearly identical regardless of which is used, and thumb drives are portable and cheaper per gigabyte for some messed-up reason. I'd be really interested to see super cache/super volume used on this RAID array; in theory it should be able to boost it further.

> abhinav_mall: How many organs will I have to sell to get such a setup? My 3-year-old Vista install takes 40 painful seconds to boot.
Wow, people still use Vista? Was that even an OS? It felt like some beta test thing.

I suspect you'll all be VERY disappointed at how long Windows takes to boot (but I'd also like to know). Unfortunately, most operations in Windows (loading apps, games, booting, etc.) occur at QD 1 (the average is about QD 1.04; queue depths above 4 are rare). As you can see on page 7, at QD 1 it only gets about 19 MB/s, the same speed as basically any decent single SSD manufactured in the last three years.

mayankleoboy1: Holy shit, that's fast! How about giving them away as a contest prize?

I WANT 16 OF THOSE! For God's sake, that's $7,000 worth of hardware, not including the PC. DAMN DAMN DAMN!! 3 gigabytes per second. And to think that on dial-up four years back, I downloaded at 3 kilobytes per second (actually, it was more like 2.5 KB/s).