The 2011 Extreme RAID Project
The SSDs: 16 x Samsung 470 (256 GB)
The Controllers: 1 x LSI MegaRAID 9280-24i4e And 2 x LSI MegaRAID 9260-8i
RAID Creation In Windows
Benchmark Results: Throughput
Benchmark Results: I/O Performance
Benchmark Results: 4 KB Random Reads/Writes
Conclusion: Second-Generation 6 Gb/s Systems Needed
We repeat our extreme SSD RAID project for the third time and arrange 16 Samsung 470-series SSDs based on MLC NAND in a RAID 0 array to reach new levels of performance. We weren't as fortunate this time, but not for the reasons you might suspect.
Our most recent extreme RAID array, built with 16 SSDs, achieved a total throughput of over 3.4 GB/s. That was more than a year and a half ago, and we haven't repeated our ambitious project until today. This should be especially interesting after the large time gap, because a lot has happened on the SSD market in the last few months. So, we decided to relaunch our extreme RAID project for one important purpose: we want to illustrate the potential SSDs have when paired with professional controller hardware, and to provide an outlook into what SSDs could hold for us in the future.
We again used an array of 16 SSDs for our benchmark runs, simply because we weren’t able to gather more of them. More specifically, we received 16 Samsung 470-series drives, even though other drives like Crucial's RealSSD C300, OCZ’s Vertex 3, and Intel's new models deliver higher throughput.
We expect the extreme RAID array to easily pass the 3 GB/s mark, and hopefully go a good bit higher. We're justified in those projections given the benchmark results of the individual drives, which show the Samsung devices delivering a sequential read speed of around 261 MB/s and a sequential write speed of 224 MB/s. In our previous RAID system, we used 16 Intel X25-E drives, which reached an average throughput of around 220 MB/s. So, we expect to improve on those results this time around (and for a lot less money, too).
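The projection above is a simple linear-scaling estimate. As a back-of-envelope sketch (assuming ideal RAID 0 scaling, which real controllers rarely achieve), the per-drive figures multiply out like this:

```python
# Ideal RAID 0 throughput estimate: per-drive speed times drive count.
# Real-world results fall short of this due to controller overhead.
drives = 16
seq_read_mbs = 261   # per-drive sequential read, MB/s (Samsung 470)
seq_write_mbs = 224  # per-drive sequential write, MB/s (Samsung 470)

ideal_read = drives * seq_read_mbs    # 4176 MB/s, ~4.2 GB/s
ideal_write = drives * seq_write_mbs  # 3584 MB/s, ~3.6 GB/s
print(f"Ideal read:  {ideal_read} MB/s")
print(f"Ideal write: {ideal_write} MB/s")
```

Even with significant controller overhead, those ceilings leave plenty of headroom above the 3 GB/s target.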
The Hunt For Performance Bottlenecks
Aside from the SSDs, we made few changes to our tried-and-true test system. Our platform, based on a Supermicro X8SAX X58 Express motherboard with Intel’s 2.66 GHz first-generation Core i7-920 quad-core processor and 3 GB of DDR3-1333 memory, is already fast enough to support higher throughput, especially since we used two x16 PCI Express 2.0 ports for the RAID controllers. An X58 platform employing PCIe 2.0 can theoretically push up to 8 GB/s, so there isn’t any risk of running into an interface bottleneck.
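The 8 GB/s figure follows from the PCIe 2.0 specification: each lane runs at 5 GT/s with 8b/10b encoding, leaving 500 MB/s of usable bandwidth per lane per direction. A quick sanity check of that arithmetic:

```python
# PCIe 2.0 per-slot bandwidth: 5 GT/s per lane, 8b/10b encoding
# (10 bits on the wire carry 8 bits of data), 16 lanes per slot.
raw_gbps_per_lane = 5.0          # raw signaling rate, Gb/s per lane
encoding_efficiency = 8 / 10     # 8b/10b line code
lanes = 16

usable_gbs_per_lane = raw_gbps_per_lane * encoding_efficiency / 8  # 0.5 GB/s
slot_gbs = usable_gbs_per_lane * lanes                             # 8.0 GB/s
print(f"Usable bandwidth per x16 slot: {slot_gbs:.1f} GB/s per direction")
```

With two x16 slots available, the interface budget comfortably exceeds anything 16 SATA SSDs can deliver.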
We also relied on tested and proven RAID controllers: LSI put two of its PCI Express 2.0 MegaRAID 9260-8i cards at our disposal, each of which allows the connection of eight SSDs. We used both controllers to distribute the SSDs across two PCI Express ports in an effort to avoid bottlenecks. We had an additional LSI card in the mix as well, the MegaRAID 9280-24i4e, which can accommodate up to 24 SATA/SAS drives. We selected this controller to try driving all 16 SSDs from a single card. The implication there, of course, is that two controllers should enable more throughput than a single card, and we're ready to test that hypothesis.
Did we successfully eliminate all bottlenecks? Yes and no. The system easily allows our combination of RAID controllers and SSDs to reach a throughput of over 3 GB/s. However, the RAID controllers hit their own limits; a controller with greater bandwidth would have enabled even higher performance.