Again, the bar charts represent average performance from a queue depth of one through 32 in three benchmark profiles: database, Web server, and workstation.


The two RAID 0 arrays manage to clearly draw ahead of the single drives across all queue depths in Iometer’s Web server benchmark profile. Can anyone guess the access pattern in play? That's right, 100% reads. Could have seen that one coming, right?


However, the database and workstation profiles show less scaling at queue depths up to eight. Up until that point, the Samsung 840 Pro 512 GB performs about the same as the two 256 GB SSDs in RAID 0. The same is true for the single 256 GB SSD and two striped 128 GB SSDs.


Summary
- Are Two SSDs Any Better Than One?
- Benchmark System And Software
- Results: Sequential Read And Write Performance
- Results: 4 KB Random Read And Write Performance (AS-SSD)
- Results: 4 KB Random Read And Write Performance (Iometer)
- Results: Access Time
- Results: I/O Benchmark Profiles (Iometer)
- Results: PCMark 7 And PCMark Vantage
- Results: AS-SSD Copy Benchmark And Overall Performance
- Real-World Benchmarks: Booting Up And Shutting Down Windows 8
- Real-World Benchmarks: Booting Up Windows 8 And Adobe Photoshop
- Real-World Benchmarks: Five Applications
- RAID 0: Great For Benchmarks, Not So Much In The Real World
Pulls about 100,000 IOPS on the Iometer workstation benchmark pattern at high queue depths, so performance does scale as expected.
Look at the data on return rates for Samsung SSDs:
http://www.behardware.com/articles/881-7/components-returns-rates-7.html
At 0.48%, the odds of at least one drive failing in even a six drive array are around 2.9%. Now compare that to HDD failure rates here:
http://www.behardware.com/articles/881-6/components-returns-rates-7.html
and it's in the same ballpark. Let's also not forget how differently HDD and SSD failure rates change with time:
http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-9.html
Samsung lists consumer AFR for the 840 Pro at 0.16%, but the real number is likely in between the two.
IMO, the increase in failure rate is not really significant. I'm not saying you shouldn't keep backups of crucial data; I have an HDD RAID 1 array just for that purpose. But it's certainly not a deal breaker, especially with the performance it gives.
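The failure-odds arithmetic in the comment above can be sketched quickly. This assumes drive failures are independent events, which is an approximation; the 0.48% rate and six-drive array come from the comment, everything else is illustrative:

```python
# Probability that at least one drive in an n-drive striped array fails
# within the period covered by the per-drive rate, assuming independent
# failures. With RAID 0, any single failure loses the whole array.

def array_failure_probability(per_drive_rate: float, drives: int) -> float:
    """Chance that at least one of `drives` fails: 1 - P(all survive)."""
    return 1.0 - (1.0 - per_drive_rate) ** drives

ssd_rate = 0.0048  # 0.48% return rate per drive, from the figures above
print(f"{array_failure_probability(ssd_rate, 6):.2%}")  # ~2.85% for six drives
print(f"{array_failure_probability(ssd_rate, 2):.2%}")  # ~0.96% for two drives
```

For the two-drive arrays tested in this article, the combined rate is just under double the single-drive rate, which is why the commenter calls it "the same ballpark" as HDDs.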
This could only hold true for drives that use on-the-fly compression, like Sandforce SSDs. Samsung's in-house controller doesn't utilize on-the-fly compression. Moreover, given the same load, writes will be distributed across all drives in RAID 0 compared to being written to a single drive. So even if maximum wear scales linearly with drive capacity, you won't see any appreciable change in lifespan.
Not to mention that for 256 GB MLC drives with 3,000 P/E cycles, you have to write at least 700 TB before you get close to the limit. Given how cautious most people have been told to be about writing to SSDs, it's more likely you'll buy new drives before your old ones wear out.
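The endurance figure above is just capacity times P/E cycles. A minimal sketch of that arithmetic, assuming an optimistic write amplification of 1 (real controllers write somewhat more flash than host data, so treat this as an upper bound):

```python
# Rough flash endurance: total host writes possible before the NAND
# reaches its rated program/erase limit. Write amplification of 1.0
# is an assumption; real-world values are higher.

def endurance_tb(capacity_gb: int, pe_cycles: int,
                 write_amplification: float = 1.0) -> float:
    """Total writable host data in TB before hitting the P/E limit."""
    return capacity_gb * pe_cycles / write_amplification / 1000.0

print(endurance_tb(256, 3000))  # 768.0 TB, i.e. "at least 700 TB"
```

At 20 GB of writes per day, 768 TB works out to over a century, which is why wear is rarely the limiting factor for consumer drives.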
Garbage collection algorithms are a lot better than they were in the past. I'd agree with you if you were running a server under constant load, but if you give the drives some idle time, they'll recover. Just look at the review for the Samsung 840:
http://www.anandtech.com/show/6337/samsung-ssd-840-250gb-review/11
Some idle time restored write performance completely after torture tests.
Unless you need large and sustained I/O, garbage collection will suffice.
2nd kind of cool, right?
If you don't care about the speed boost of RAID 0, I would suggest not striping them and just using them separately as two 512 GB drives. That way you have less risk of losing all of your data, because it won't be striped across both drives.
Which was already stated in the article's benchmarks. Real-world differences are too small, and RAID 0 is maybe even worse in half of the tests. One positive is raw video capture, like at the end of the article.
http://en.wikipedia.org/wiki/RAID_0#RAID_0
RAID 0 is useful for setups such as large read-only NFS servers where mounting many disks is time-consuming or impossible and redundancy is irrelevant.
RAID 0 is also used in some gaming systems where performance is desired and data integrity is not very important. However, real-world tests with games have shown that RAID-0 performance gains are minimal, although some desktop applications will benefit.[1][2]
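The striping described in the quote above maps logical blocks round-robin across the member disks, which is why purely sequential, read-heavy workloads scale while single small requests don't. A minimal sketch; the two-disk array and one-block stripe unit are illustrative assumptions, not details from the article:

```python
# Round-robin block mapping for a RAID 0 stripe: logical block `lba`
# lands on disk (lba mod disks) at offset (lba div disks).

def stripe_map(lba: int, disks: int = 2):
    """Return (disk_index, block_offset_on_that_disk) for a logical block."""
    return lba % disks, lba // disks

# Consecutive blocks alternate between drives, so a long sequential
# read keeps both disks busy in parallel; a single small read still
# touches only one disk, which is why random 4 KB QD1 barely improves.
layout = [stripe_map(b) for b in range(6)]
print(layout)  # [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```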
http://www.anandtech.com/printarticle.aspx?i=2101
"We were hoping to see some sort of performance increase in the game loading tests, but the RAID array didn't give us that. While the scores put the RAID-0 array slightly slower than the single drive Raptor II, you should also remember that these scores are timed by hand and thus, we're dealing within normal variations in the "benchmark".
Our Unreal Tournament 2004 test uses the full version of the game and leaves all settings on defaults. After launching the game, we select Instant Action from the menu, choose Assault mode and select the Robot Factory level. The stop watch timer is started right after the Play button is clicked, and stopped when the loading screen disappears. The test is repeated three times with the final score reported being an average of the three. In order to avoid the effects of caching, we reboot between runs. All times are reported in seconds; lower scores, obviously, being better. In Unreal Tournament, we're left with exactly no performance improvement, thanks to RAID-0
If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop.
Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance. That's just the cold hard truth."
http://www.techwarelabs.com/articles/hardware/raid-and-gaming/index_6.shtml
".....we did not see an increase in FPS through its use. Load times for levels and games was significantly reduced utilizing the Raid controller and array. As we stated we do not expect that the majority of gamers are willing to purchase greater than 4 drives and a controller for this kind of setup. While onboard Raid is an option available to many users you should be aware that using onboard Raid will mean the consumption of CPU time for this task and thus a reduction in performance that may actually lead to worse FPS. An add-on controller will always be the best option until they integrate discreet Raid controllers with their own memory into consumer level motherboards."
http://www.hardforum.com/showthread.php?t=1001325
"However, many have tried to justify/overlook those shortcomings by simply saying "It's faster." Anyone who does this is wrong, wasting their money, and buying into hype. Nothing more."
http://jeff-sue.suite101.com/how-raid-storage-improves-performance-a101975
"The real-world performance benefits possible in a single-user PC situation is not a given for most people, because the benefits rely on multiple independent, simultaneous requests. One person running most desktop applications may not see a big payback in performance because they are not written to do asynchronous I/O to disks. Understanding this can help avoid disappointment."
http://www.scs-myung.com/v2/index. [...] om_content
"What about performance? This, we suspect, is the primary reason why so many users doggedly pursue the RAID 0 "holy grail." This inevitably leads to disappointment by those that notice little or no performance gain.....As stated above, first person shooters rarely benefit from RAID 0. Frame rates will almost certainly not improve, as they are determined by your video card and processor above all else. In fact, theoretically your frame rate may decrease, since many low-cost RAID controllers (anything made by Highpoint at the time of this writing, and most cards from Promise) implement RAID in software, so the process of splitting and combining data across your drives is done by your CPU, which could better be utilized by your game. That said, the CPU overhead of RAID 0 is minimal on high-performance processors."
And as far as data loss in case of failure, don't use an SSD to store your data; use a separate HDD to store any important data (I have a 2 TB drive).
However, it all comes down to opinion; some users don't want to worry about RAID or take the time to set it up (I don't blame them, either).
For your workstation, buy a single SSD and regularly back up important data. RAID 0 is fast, but not only will the failure of a single drive make all data irrecoverable, RAID 0 also increases the wear and tear on the drives, so they will fail earlier. If you've got money to burn, buy a quality SSD. For standard use, it's more than enough.
For servers, RAID makes far more sense since time is money, and for every minute of downtime, money is lost.
Yes, but the actual MTBF of SSDs is so high that I personally consider it to be a moot point. Not that it isn't valid for someone running their business, but my use means failure is more a matter of convenience than lost income.
Seems to me the answer is, as always, application dependent. Most of us gamer types aren't going to benefit from a RAID 0 configuration, but if you build with one, the downsides are very low, and the potential upside, if you happen to expand beyond gaming into an app that takes advantage of it, is very high. That being said, price is a big factor for me, and there just isn't any reason to pay $300 instead of $240, to use the example in the article, when I'm not likely to ever run apps that take advantage of the extra bandwidth two SATA connections allow for.
Boy, that would be costly.