SSD RAID 10 very poor write speed

Status
Not open for further replies.

Campbell Wild

Honorable
Aug 7, 2013
I have an ASRock X79 Extreme 11 motherboard, which has an onboard LSI 2308 RAID controller. Into this I have plugged four 512GB Samsung 840 SSDs and created a RAID 10 array, giving me a 929GB volume.

The array is created at boot time, and onto it I have installed Windows 7 x64. I did some testing today and found that the write speed seems ridiculously slow.

I'm getting a sequential read speed of ~900MB/s, but my write speed is ~30MB/s. Yes, I didn't forget a zero!

I have checked all the settings of the drive and they are as LSI recommend, i.e.
Stripe Size: 64KB (actually they recommend 256KB but I don't think I had the option to change this)
Access Policy: Read Write
Disk Cache Policy: Disable
Read Policy: No Read Ahead
IO Policy: Direct IO
Write Policy: Write Through

If I enable Disk Cache Policy then I get a write speed of almost 300MB/s. However, there's a big warning that any power loss or crash could destroy the whole volume, and I have no UPS.
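
If anyone wants to reproduce the test, a rough Python sketch along these lines (the path is just a placeholder for wherever the array is mounted) measures sustained sequential writes without the OS cache inflating the number:

[code]
import os
import time

# Rough sequential-write benchmark (the path below is a placeholder --
# point it at the RAID volume). Writes 1 GiB in 1 MiB chunks and fsyncs
# so the OS page cache doesn't inflate the result, mirroring the
# Write Through policy above.
PATH = r"D:\bench.tmp"   # assumption: D: is the RAID 10 volume
CHUNK = 1024 * 1024      # 1 MiB per write
TOTAL = 1024 * CHUNK     # 1 GiB overall

buf = os.urandom(CHUNK)  # incompressible data is fairer for SSDs
start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # force everything to the array before stopping the clock
elapsed = time.perf_counter() - start
os.remove(PATH)
print(f"sequential write: {TOTAL / elapsed / 1e6:.0f} MB/s")
[/code]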

I have tried updating the Windows driver (from 2.0.57 to 2.0.63) and the RAID controller firmware (from 13.00.57 to 16.00.00), but neither has brought any improvement.

Can anyone suggest what the problem could be, and if there is anything I can do to improve this?

Thanks,
Campbell
 
Solution

The Kasafist

Honorable
Mar 20, 2013


That's because SSDs are known to actually lose speed in RAID, other than in setups that back up, like RAID 1; RAID for performance, like RAID 0, can actually make SSDs slower. SSDs are best left individually separate, or with one used as a backup. Honestly, with all those 512GB SSDs I would only do RAID 0+1: combine two in RAID 0 (if you really feel the need to combine them) and then put the other two in RAID 1 to back up the first two. RAID 0+1 might help. It might also not help, since two pairs will still be in a RAID 0 configuration. I would simply keep all four separate and use RAID 1 pairs: SSD 1 to SSD 2, and SSD 3 to SSD 4!

You'll still have a terabyte; they'll just simply be separate drives, is all. But yes, SSDs tend to perform worse when combined; it has something to do with the number of NAND controllers, or the number of cells, I think. It's something to do with the larger SSDs in particular: when a drive has more capacity it has more to search through to find what you need, and with about 2TB total you probably don't have enough controllers to pull it off, so it comes off as slow. You can check out an example of this here:
http://www.tomshardware.com/reviews/crucial-m500-1tb-ssd,3551.html
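
For the capacity side of the suggestion above, here is a quick back-of-the-envelope sketch (nominal drive sizes; formatted volumes come out smaller, hence the OP's 929GB):

[code]
# Back-of-the-envelope usable capacity for four 512GB drives
# (nominal sizes; formatted capacity is smaller, hence the OP's 929GB).
DRIVE_GB = 512
N = 4

raid0 = N * DRIVE_GB          # stripe everything, no redundancy
raid10 = (N // 2) * DRIVE_GB  # stripe of mirrors: half the raw space
raid01 = (N // 2) * DRIVE_GB  # mirror of stripes: same usable space
raid1_pairs = 2 * DRIVE_GB    # two separate mirrored pairs

print(f"RAID 0:      {raid0} GB, no failures tolerated")
print(f"RAID 10/0+1: {raid10} GB, survives a drive failure")
print(f"2x RAID 1:   {raid1_pairs} GB across two separate volumes")
[/code]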
 

Campbell Wild

Honorable
Aug 7, 2013

Sorry, that was a typo. Those are the sequential read and write speeds.
 

tony1024

Reputable
Jan 27, 2015
TRIM support is not there for RAID 10.
I am using two SSDs in RAID 0: one pretty new, a Samsung 850 Pro, and one old, a Crucial m4. The RAID 0 setup outperforms the single-disk setups in my case. Of course, you have to make sure TRIM is supported for the Intel chipset and that the right Intel RST software is installed.
You may need an Option ROM flashed for old mobos; check out win-raid.com.
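
A quick way to check the Windows side of this is the stock fsutil tool; a small Python sketch like the one below just shells out to it. Note it only tells you whether the OS is issuing TRIM at all; whether the RAID driver forwards it to the drives is a separate question that depends on the chipset and RST version:

[code]
import subprocess

# Ask Windows whether it is issuing TRIM at all (needs an admin prompt).
# "DisableDeleteNotify = 0" means TRIM is enabled at the OS level;
# whether the RAID driver forwards it to the drives depends on the
# chipset and the RST version, as noted above.
out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())
[/code]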
 

tony1024

Reputable
Jan 27, 2015
Try to use the Intel ports via the Intel chipset for RAID. Most other onboard chipsets can't outperform the Intel one. For example, the Marvell 6Gb/s ports on my board are much slower than the Intel 3Gb/s ports when it comes to RAID. The other chipsets don't support TRIM anyway.
 

shadycuz

Distinguished
Oct 6, 2009


This array of SSDs in RAID 0 doesn't look slower... [video="https://www.youtube.com/watch?v=eULFf6F5Ri8"][/video]

Oh, and this too: [video="https://www.youtube.com/watch?v=JPuywNBctvg"][/video]
 

The Kasafist

Honorable
Mar 20, 2013


RAID is still obsolete in the real world anyhow. Once it's been dumped by the commercial world, it's pretty much pointless other than as a way of dropping tons of cash on more storage devices. Whether firmware has improved to allow proper RAID support over the past few years is irrelevant; it was already an irrational choice for backup purposes. Its only real purpose today is redundancy, and that can already be done without it, even more reliably. Besides, this thread was relevant when it first started, when these issues with SSDs and RAID were still common in certain special-case situations. His word is not carved in stone, and I follow LinusTechTips as well. A RAID of SSDs did not always guarantee that the spending on extra storage devices was justified. BTW, I have seen these videos, but to humor you I will watch them again.

Just note that I said what I said based on facts that were relevant over six months ago. You are replying too little, too late, with information that is irrelevant now, and with no justifiable reason to even consider RAID for SSD setups. Unless the performance is more than double, why bother with RAID while spending twice the money? For example, if a drive's read speed is 450MB/s and you RAID 0 two completely identical drives, it's only really worth it if you get 900MB/s; otherwise it's a waste. You might get lucky in some special cases (and maybe won't have the trouble the OP has just trying to set the thing up) and clock 675MB/s, just as an example. Why would you sacrifice 50% of the speed of one SSD rather than simply have them both at max speed? On another note, ask any real IT professional, not just me, in the REAL commercial world, and they're all slowly dumping their RAID for better options. The only places that don't are data centers (farms), for the sake of redundancy, where losing information is detrimental to the business. In any other backup situation, cloning is far more logical, far less complicated, and more reliable, mainly because the storage device doesn't have to be kept inside the system but can be stored in another location.
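
To put that example in plain arithmetic (all numbers hypothetical, taken from the scenario above):

[code]
# The RAID 0 scaling example above, as arithmetic (hypothetical numbers).
single = 450.0       # MB/s for one drive on its own
measured = 675.0     # MB/s supposedly measured from the two-drive stripe
ideal = 2 * single   # perfect scaling would double the throughput

print(f"ideal RAID 0:       {ideal:.0f} MB/s")
print(f"scaling efficiency: {measured / ideal:.0%}")  # 75% of perfect
[/code]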

Perform a clone regularly and voila, you are good to go. Unlike with a RAID, if your PC gets hit with a power surge, just as one example (many more things can happen: natural disaster, theft, etc.), the user doesn't lose every single bit of data they intended to keep. Besides, technology changes so much and so fast that most people really shouldn't lock themselves into a RAID for when it comes time to swap all that jazz over to, say, a new set of drives in the new M.2 ports. Some advice: take anything you hear on YouTube with a grain of salt; it's all just for reference at the end of the day. The facts are right here: the OP is having significant trouble with his RAID, and that is exactly why the IT world is straying away from an unnecessarily difficult way of backing up information. I have seen RAIDs deliver less than a 10% improvement in speed, and that doesn't justify a 100% spend on extra storage devices. How about this, though: go help in a thread where a user actually needs it, instead of one that is already nearly nine months old and solved.

Your first video, from 2011, used 24 SSDs only to hit 2GB/s. Ask yourself this: does everyone on Earth think they're rich, or just you? Oh, and the second video just further proved what I have been explaining to the OP. I will thank Linus for his information later! Anyhow, moving on: in the SSD world, use RAID for redundancy, and that only starts at RAID 5 with parity. Otherwise, if you don't have the money to swing a third SSD into the configuration, save yourself the trouble of losing a drive in a RAID 0 and the hassle of even setting up RAID 1, and simply clone the darn thing. Oh, and on a $/GB basis, buying smaller-capacity drives nowadays just costs more money than buying the larger-capacity drive in the first place. Just thought that might be helpful marketing information for the world.
:bounce:
 

shadycuz

Distinguished
Oct 6, 2009


They hit 1GB/s with only 9 drives, meaning with all 24 they were saturating the bus completely.

If you want the best speed you can get, you RAID 0 two SSDs. Period. And I'm sure there are other specialty cases out there where people need SSDs in even more exotic RAID configurations. You said SSDs lose speed. You were wrong, period. Though you might have been right two years ago. I just happened to come by this post and dug it up by accident.
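
The arithmetic behind that saturation claim, as a quick sketch (speeds as quoted from the video):

[code]
# Bus-saturation arithmetic (speeds as quoted from the video).
drives_small, gbps_small = 9, 1.0   # 9 drives already hit 1 GB/s
drives_full, gbps_full = 24, 2.0    # all 24 drives top out at 2 GB/s

expected = gbps_small / drives_small * drives_full  # linear scaling
print(f"linear scaling predicts {expected:.1f} GB/s from 24 drives")
print(f"observed {gbps_full:.1f} GB/s -> the bus, not the SSDs, is the ceiling")
[/code]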
 

nctritech

Reputable
Oct 21, 2014
Holy cow, there are some really bad "facts" in this thread. For one thing, the verbose "solved" poster couldn't be more wrong about pretty much everything they've said. RAID is never going away and has significant benefits. SSDs can be RAIDed together and achieve insane performance well in excess of 2 GB/sec with the correct combination of hardware.

DO NOT listen to The Kasafist. The Kasafist evidently is blatantly ignorant about this topic.

Your issue is that you're using a hardware RAID controller that is known to exhibit highly variable performance with SSDs and is clearly geared towards traditional hard drives. This article and its comments will illustrate both the tuning required to achieve higher write speeds and the probability that you'll never get consistent behavior on an LSI 2308 controller: https://www.servethehome.com/lsi-sas-2308-hba-ssd-windows-server-2012-turn-write-cache/

You either need a different controller that plays nicely with SSDs or you need to use a software RAID setup instead. You need to use more than one PCI Express SATA controller as well; this allows you to distribute the drive connectivity across multiple PCIe lanes whereas four drives on one controller will always be limited by the single controller's PCIe bandwidth. One PCIe 2.0 lane carries less GB/sec than a modern SATA SSD can sequentially read, plus the SATA controller chips themselves may have a maximum internal bandwidth limitation that four SSDs can exceed.
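
To put rough numbers on that bottleneck, here is a sketch with nominal figures (the lane count is an assumed x1 link; real controllers and overheads vary):

[code]
# Nominal link-bandwidth arithmetic for four SATA SSDs on one controller.
PCIE2_LANE_GBPS = 0.5   # GB/s per PCIe 2.0 lane, before protocol overhead
SSD_SEQ_GBPS = 0.55     # sequential read of a modern SATA SSD

lanes = 1               # assumption: a x1 controller card
drives = 4

demand = drives * SSD_SEQ_GBPS
ceiling = lanes * PCIE2_LANE_GBPS
print(f"drives can supply {demand:.1f} GB/s")
print(f"x{lanes} PCIe 2.0 link carries {ceiling:.1f} GB/s")
if demand > ceiling:
    print("-> the link, not the SSDs, caps the array")
[/code]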

Why should you trust my answer? Because I actually build and rebuild large RAID arrays in a professional capacity and understand the hardware from top to bottom.
 