By now, the Internet has been abuzz with a new marketing video from Samsung. In it, a group of intrepid--well--Samsung marketers take the company's new 256 GB solid-state drive and hook it up to 23 of its closest friends.
That's right. Samsung created a 24-drive RAID array of SSDs and used it to accomplish a number of simple tasks. We're not sure what RAID level the Samsung folks are using--we're going to assume RAID 0, since the video seems geared to show off what a ton of solid-state drives can collectively do.
Or does it?
For some reason, the marketers also don't mention the RAID controller they're using to attach the drives to their system. We can only surmise, based on a cursory search of some of the industry's bigger RAID controller companies, that they're using a PCI Express x8-based controller. We weren't able to find any quasi-consumer SATA controllers with 24 or more ports running on anything faster than a PCI Express x8 link. Why is this important? Because as cool as the notion of 24 drives in a RAID array might be, it's completely frivolous from a technological perspective.
Just looking at the tale of the tape, Samsung boasts 220 MB/sec. sequential reads for its 256 GB SSDs. Actual performance usually differs from what a manufacturer quotes, but in this case, just assume that this is the theoretical maximum output of these SSDs. Twenty-four of these drives in a giant RAID 0 array could, in theory, produce a maximum sequential read speed of 5,280 MB/sec. This will obviously be different in a real-world setting, as adding drives to a RAID 0 array doesn't scale bandwidth in a perfectly linear fashion. But push those thoughts aside for now and just cement that number in the back of your mind for a moment.
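The back-of-the-envelope math here is simple enough to sketch, assuming Samsung's quoted 220 MB/sec. figure as the per-drive ceiling and perfectly linear RAID 0 scaling (which real arrays never quite achieve):

```python
# Theoretical best-case sequential read of a RAID 0 array,
# assuming ideal linear scaling across all member drives.
PER_DRIVE_MB_S = 220  # Samsung's quoted sequential read spec
DRIVES = 24

array_max = PER_DRIVE_MB_S * DRIVES
print(f"Theoretical max: {array_max:,} MB/sec.")  # prints Theoretical max: 5,280 MB/sec.
```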
Now consider just how much bandwidth a PCI Express x8 connector can tolerate. Each of the eight lanes in the connector produces a 250 MB/sec. transfer rate in each direction. Since the Samsung crew is just reading from the drives for its benchmarks, that leaves a total bandwidth maximum of 2,000 MB/sec.--roughly 3.3 gigabytes per second short of the theoretical maximum output of a 24-drive SSD RAID array. And what happens when the Samsung group measures the performance of the onslaught of drives? They find a sequential transfer speed of 2,019 MB/sec.
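Putting the two ceilings side by side makes the bottleneck obvious. This sketch assumes first-generation PCI Express at 250 MB/sec. per lane, per direction:

```python
# PCIe 1.x: 250 MB/sec. per lane in each direction.
LANE_MB_S = 250
LANES = 8

link_max = LANE_MB_S * LANES   # what the x8 slot can carry one way: 2,000 MB/sec.
array_max = 220 * 24           # what 24 SSDs could theoretically push: 5,280 MB/sec.
shortfall = array_max - link_max

print(f"Link ceiling: {link_max:,} MB/sec.")
print(f"Bandwidth the link can't carry: {shortfall:,} MB/sec.")
```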
What was that PCI Express x8 maximum bandwidth again? You guessed it. Right around 2,000 MB/sec.
We're not discounting the "cool factor" that comes with using a large chunk of solid-state drives in a single array. Or, for that matter, grabbing said array and jumping up and down on a trampoline while your computer's running. But it does look a little misleading to use so many of these drives (at roughly $900 a pop) to deliver this kind of performance when a similar metric could have been achieved with, say, one-half the number of drives. We're only surmising this last point, as it's unclear how much of a performance benefit each new SSD brings to a RAID 0 array.
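Our surmise about the drive count is easy to sanity-check, at least under the (generous) assumption of ideal linear scaling: how many 220 MB/sec. drives would it take to fill a 2,000 MB/sec. link?

```python
import math

LINK_MB_S = 2000   # PCIe x8 one-way ceiling
DRIVE_MB_S = 220   # Samsung's quoted per-drive sequential read

# Smallest whole number of drives whose combined output meets the link ceiling.
drives_to_saturate = math.ceil(LINK_MB_S / DRIVE_MB_S)
print(drives_to_saturate)  # prints 10 -- well under half of the 24 drives used
```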
Still, it's an important lesson to remember for aspiring performance enthusiasts. Maxing out your storage bandwidth can win you the love and admiration of YouTube geeks worldwide, but know that all the parts of your machine--the tubes, if you will--have to be the correct size to avoid the kind of bottlenecking that the Samsung crew sees in its 24-SSD experiment. We can only imagine the kind of results Samsung might have been able to show off were it running a PCI Express x16 RAID controller (or, for that matter, a PCI Express 2.0 x8 RAID controller).
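For the curious, here's a rough comparison of those link ceilings, assuming 250 MB/sec. per lane for first-generation PCI Express and double that per lane for PCI Express 2.0:

```python
# One-way link bandwidth by PCIe generation and lane count.
PCIE1_LANE = 250  # MB/sec. per lane, PCIe 1.x
PCIE2_LANE = 500  # MB/sec. per lane, PCIe 2.0 doubles the rate

links = {
    "PCIe 1.x x8": PCIE1_LANE * 8,    # 2,000 -- what Samsung apparently used
    "PCIe 1.x x16": PCIE1_LANE * 16,  # 4,000
    "PCIe 2.0 x8": PCIE2_LANE * 8,    # 4,000
}
for name, mb_s in links.items():
    print(f"{name}: {mb_s:,} MB/sec.")
```

Either upgrade would double the available ceiling, though still short of the 5,280 MB/sec. theoretical drive output.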
Do you have SSDs installed on your system? If so, what brand and how many do you have installed? Was it worth the purchase?
Update: Props to Tom's Hardware user Spazoid, who noted that Samsung's full RAID configuration details appear in a quick series of frames at the tail end of the video. Here's the setup: Samsung slaps ten SSDs onto an Areca 1680ix-24 RAID card, eight SSDs onto an Adaptec 5 Series RAID card, and the final six SSDs directly into the motherboard's SATA connectors. It ran two RAID 0 arrays built from the drives connected to each controller, with the remaining drives operating in standalone mode. And the 2,000 MB/sec. number? That's a cumulative total of the connected drives' performances, not a reflection of a single array's performance.