Which RAID and controller?

Nhyrum

Reputable
I'm new to RAID, but have read a fair amount.

I'm looking for a setup that will allow the fastest read and write speeds along with drive-failure protection. From what I have read, I have narrowed it down to RAID 5/50 or 6/60.

What I like about RAID 6 is that I can have two drives fail at once and still recover (a simultaneous two-drive failure is pretty rare), at the cost of some usable space due to the two "backup" portions, I'll call them, on each disk. I like the idea of nesting either level with RAID 10 for the added security and a little speed boost.

I have an ASUS H97-Plus mobo, so I assume I'll need a RAID controller. Are there special ones that allow RAID 6 or the multi-level RAID setups?

I will most likely be using 240/520GB SSDs, or maybe Western Digital Caviar Black 1TB or 2TB HDDs; preferably SSDs. I haven't bought the drives other than one 240GB SSD.

It will be used for home office/personal/gaming use, so it will mostly have data of sentimental value stored on it.
 

Nhyrum

Reputable


Thanks, that was helpful.

It seemed the RAID array was smoking the single drive except in a few areas where the single drive had a small edge, but in the real-world tests the single drive consistently came out ahead.

I would still prefer a RAID array that maximizes speed with some mirroring/redundancy, which is why I picked either single-level or multi-level RAID 5 or 6.
 

USAFRet

Titan
Moderator
In benchmarks and some rare use cases, SSD + RAID 0 can be hugely fast.
Personally, I don't chase benchmarks.

There are multiple other ways to protect your data, and they're probably better and easier.

There are really only two reasons to consider a RAID array:
1. If you absolutely cannot sustain any downtime. A webstore, for instance, where downtime = lost sales.
And you will have an actual backup anyway.

2. Hobby and learning purposes.
 

Nhyrum

Reputable
This is entirely for hobby and learning purposes. I don't really care if my Steam games get lost; I can always redownload them. As it will also be a gaming PC (nothing too crazy), I would prefer the fastest and most protected setup, which is why I narrowed it down to those four (more like two, really).

So I've decided to go with a RAID 60 on an LSI 9300-8i controller. It should reduce the bottlenecking of the array and isn't too expensive. I know it's half the space I'd get with a RAID 0 or JBOD, but the redundancy and the ability to survive two drive failures with hot-swapping are worth it to me.
 

marko55

Honorable
The LSI 9361 will scream for RAID-ing even a LOT of SSDs, but I can tell you (from personal experience) that a 9266-8i (or 4i) will not restrict the bandwidth of SSDs in RAID sets at all. Funny thing is, the 9361s are about the same price these days as the 9266s if bought brand new from normal retail channels (Newegg, etc.). The difference is the 9361 gives you SAS3 (12Gb/s), a faster on-card CPU and a PCIe 3.0 connector vs the 9266's SAS2 (6Gb/s) and PCIe 2.0. That being said, you can pick up 9266-8i's, as I have 4-5 times, off eBay (sometimes straight from China); they're 100% legit, and you can get them for $225-ish, which is a steal.

The LSI cards have included "FastPath" by default since a few firmware versions ago, which boosts SSD logical volumes' performance. A couple years ago you had to buy that separately. To enable FastPath you just need to configure your logical volume with certain settings (64k strip, write-through, etc.). Note that if you want to do write-back on a RAID-5 of SSDs, you will see faster write performance as the card's cache is used, but that will disable FastPath. To me, it's worth it for parity RAID sets though, since FastPath only adds about a 10% performance boost while SSDs already scream, so.... For RAID-0 or RAID-10, I'd stick with write-through.

I've got four 120GB Samsung 850s on my 9266-8i and the speed scales as you'd expect for all RAID sets. I've got them in a RAID-0 that I'm running a bunch of VMs off of (VMware Workstation), which are all backed up to a large spin disk, so losing the array to a dead drive wouldn't hurt me. I've set those 4 drives up in RAID-5 and 10 before to test, and again, they perform as expected for each RAID set.

One important thing to consider when RAID-ing SSDs: overprovisioning. You lose TRIM when putting them in RAID, so the OS can't run TRIM on the logical volume to clean up previously used NAND so it's ready for new writes. This is sort of a hot debate around the web, as many people say that the native "garbage collection" on the SSDs themselves does a good enough job, but that's not as efficient as TRIM. Granted, this isn't as important unless you've completely filled your logical volume and are continually writing new data to the array. Basically, to ensure good performance when the volume is filled up, or close to filled up, you provision the logical volume to about 80-90% (again, you'll hear 50 different opinions on how much) of the total disk space available. So if you put 4 x 120GB drives into a RAID-0, when you specify how big to make the logical volume in your RAID config, make it 400GB instead of the complete 480GB available to the volume. Your OS will see a 400GB drive and your RAID card's config utility will see an additional 80GB unused, which you won't touch and just leave there.
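To make that overprovisioning math concrete, here's a quick Python sketch; the ~83% figure just matches the 400GB-of-480GB example above, and you'll see anywhere from 80-90% recommended:

```python
# Rough overprovisioning math for a RAID-0 of SSDs (illustrative only).
def overprovisioned_volume_gb(drive_gb, drive_count, usable_fraction=0.83):
    """Return (raw capacity, suggested logical volume size) in GB."""
    raw = drive_gb * drive_count
    return raw, raw * usable_fraction

raw, volume = overprovisioned_volume_gb(drive_gb=120, drive_count=4)
print(f"Raw RAID-0 capacity:   {raw} GB")        # 480 GB
print(f"Suggested volume size: {volume:.0f} GB") # ~400 GB; leave the rest untouched
```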

If using SSDs I wouldn't sweat RAID-5 vs RAID-6. The biggest concern with parity RAID is losing drives DURING a rebuild, after a drive has died in the array. The reason is that rebuilding an array once a dead drive has been replaced puts a lot of load on the other drives in the array, and if those drives are large (over 1TB) the rebuild can take a long time (days in many cases); if a 2nd drive dies during that window (in a RAID-5), you're screwed. RAID-6 adds one more bit of protection against this, but you pay with even slower write speeds under normal conditions. Write-back and RAID cache may keep you from ever noticing how "slow" your write speed is with RAID-6 though, especially if you don't do a lot of writing, and even more so if using SSDs, which can get the data to disk much faster anyway. The thing about SSDs is that rebuilds are SUPER fast, like a couple hours, even with 1TB SSDs in your array, so for me I'd stick to RAID-5 with SSDs. One way or the other, you always want the data from your RAID set backed up to some other drive or media (in a critical data environment, anyway), which takes most of the fear out of this scenario.
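As a rough back-of-the-envelope on why SSD rebuilds feel so much quicker, here's a sketch; the sustained rebuild rates are assumptions for illustration, and real rebuilds are often much slower because the array is still serving normal I/O:

```python
# Very rough rebuild-time estimate: capacity to reconstruct / sustained rebuild rate.
# The MB/s rates below are assumed ballpark figures, not measured values.
def rebuild_hours(drive_tb, rebuild_mb_per_s):
    capacity_mb = drive_tb * 1_000_000  # 1 TB ~= 1,000,000 MB (decimal units)
    return capacity_mb / rebuild_mb_per_s / 3600

print(f"1TB HDD at ~30 MB/s sustained:  {rebuild_hours(1, 30):.1f} h")   # ~9 h of raw copy time, often far longer under load
print(f"1TB SSD at ~400 MB/s sustained: {rebuild_hours(1, 400):.1f} h")  # well under an hour of raw copy time
```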

If you buy enough drives then you can play with 50 and 60, but unless you have a LOT of disks I wouldn't bother. They're typically used once you hit more than, say, 12 drives, and the choice of utilizing these RAID types and how many drives go in each is highly dependent on a LOT of factors, including the total number of drives, size of the drives, speed of the drives, reliability of the drives (URE ratings & MTBF) and the use of the server. Again, not really what you're messing with here. For instance, I'm working on a file server build that will have over 50 hard drives, and we're looking at building multiple RAID-60 sets on multiple cards using enterprise hard drives. A LOT of design considerations went into this.

Note that the LSI cards don't do true JBOD. If you want a single drive to be presented to your OS, you actually have to configure that drive as its own single-drive RAID-0 array, and that logical volume is then presented to the OS. The main caveat to doing it this way is that there's RAID metadata on the drive, so if you were to take that drive out later and connect it to an external SATA connector or put it in another computer, that computer's OS wouldn't be able to access the data you had put on it. Other cards (Areca, Adaptec) have native JBOD support on most of their models. Note that Areca cards comparably priced to the LSIs you've looked at will have 2GB of on-card RAM and are also very nice. I wouldn't hesitate to give them a try.

So, if you're going to stick to SSDs, I like RAID-5 with write-back. If you want that 2nd bit of protection, you could do RAID-6. Or, if you can afford it, RAID-10 is always great to have. If you're going to use spinning disks, the choice can change, and of course it will take 4-5 HDDs in a RAID-5 (for instance) to equal the sequential performance of just one SSD, and the IOPS still won't even be close for random reads/writes. If you're using a newer version of Windows that has Storage Spaces, you can get even more creative with SSD caching at the OS level before writing back to your hardware RAID set, if you want to really mess around (which I have yet to try).

All this being said, if you just want some redundancy, and this is just a gaming drive, grab a couple of 500GB (or larger) SSDs, put them in a RAID-1 and call it a day! haha! It will probably feel about the same for load times as 4 x SSDs in a RAID-0. Or, scrap the RAID idea and grab a PCIe SSD (950 Pro), put it on a PCIe adapter card and put that in your PCIe slot instead. Now you've got an SSD that's faster than 4 x SATA SSDs in a RAID-0 anyway, with 1/4 the failure potential.

Ok I'm done...
 
Solution

Nhyrum

Reputable
Thanks for the input!

I wouldn't do a multi-level RAID until I get all my drives (8+ SanDisk 256GB), which may take a while. I'll just wait until the drives I want go on sale and pick up a few at a time. I'll start with a RAID 5 or 6. If I add a spinning drive as a backup (a WD Caviar Black 2TB, let's say), how would I have that mirror the RAID? A RAID 51? If I did an 8-drive RAID 50, could I have the spinning drive mirror the whole RAID 50?

I looked on eBay and there is a 9361-8i for $371 and a 9266-8i for $214. The only difference is the 9266 is a little slower? 6GB/s vs 12? 6GB/sec is still ridiculous in my mind; I've never owned anything with an SSD, so that's a ton faster than the old PATA HDDs I'm used to. How does the 9300-8i compare to the 9361-8i? They both seem to be 12GB/s cards.

While on eBay I found a "cache vault kit" for the 9361. Will that just aid in faster transfer speeds?

Would I see faster read/write speeds if I used two RAID cards, each with a RAID 5/6 array, and then used software RAID to pair them together to get the RAID 50/60? How would that work with an HDD as a mirror?
 

marko55

Honorable
First, you won't ever put your HDDs and SSDs in the same RAID set. You want to use all identical drives in a single RAID set. So you can create a RAID-5 with all your SSDs, then a separate RAID-5 with your hard drives. Your OS will see two separate drives and the way to get data replicated between them would be to use some sort of backup software in your OS.
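At its crudest, that "backup software" could even be a small scheduled copy script. A minimal Python sketch, assuming hypothetical drive letters for the SSD-array volume and the HDD volume (a real backup tool adds versioning, verification and scheduling that this doesn't):

```python
# Minimal mirror-style copy from the SSD array's volume to the spinning-disk volume.
# Paths are hypothetical; dirs_exist_ok needs Python 3.8+.
import shutil

SOURCE = r"D:\data"        # logical volume on the SSD RAID set
DESTINATION = r"E:\backup" # volume on the HDD (or HDD RAID set)

shutil.copytree(SOURCE, DESTINATION, dirs_exist_ok=True)
print("Copy complete.")
```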

Everything's only as fast as the slowest point in the path. With 8 SSDs in a RAID-5 you're generally talking about the combined speed of 7 SSDs. Best case that's 3850MB/s, or 3.85GB/s (that's gigaBYTES per second, not gigabits per second, which is how the RAID "ports" are measured, e.g. 6Gb/s).

Regarding the "ports" on your RAID cards: each port on a SAS2 "8i" RAID card, like the 9266-8i, has 6Gb/s of bandwidth, which equates to 750MB/s; that's more than each SSD can even put on the port, since they max out around 550MB/s. So the 9266 has more than enough bandwidth to support each SSD on its 8 ports. The 9300 is SAS3 (12Gb/s per port), which means 1500MB/s on each port.... WAY faster than each SSD you're connecting, so it can be looked at as overkill.

Second consideration is the PCIe connection to your mobo. A PCIe 2.0 x8 connection is good for about 4GB/s to your system. So if you're FULLY utilizing a RAID-5 of 8 SSDs at their max potential of 3.85GB/s (drive limited), then you're good, and your PCIe connection to your system bus has enough bandwidth.

At that point your only concern is the bus speed of your motherboard and its bandwidth to actually get the data to your CPU to process it. You'll have to look that up. There's a good chance you're overrunning it with this.... So synthetic benchmarks might look cool but as for real world, well...
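To put rough numbers on that whole chain, here's a quick Python sketch; the ~550MB/s per-SSD figure is an assumed ballpark, and the conversions ignore protocol/encoding overhead:

```python
# Back-of-the-envelope bottleneck check: 8 SSDs in RAID-5 on a SAS2 (6Gb/s) card.
SSD_MBPS = 550       # assumed per-SSD sequential throughput, MB/s
N_DRIVES = 8

def gbit_to_mbyte(gbit_per_s):
    """Convert Gb/s (gigabits) to MB/s (megabytes), ignoring encoding overhead."""
    return gbit_per_s * 1000 / 8

array_read = (N_DRIVES - 1) * SSD_MBPS         # RAID-5 reads ~ (n-1) drives => 3850 MB/s
all_sas2_ports = N_DRIVES * gbit_to_mbyte(6)   # 8 x 750 MB/s = 6000 MB/s
pcie2_x8 = 4000                                # PCIe 2.0 x8, roughly 4 GB/s usable

print(f"Drive-limited array read: {array_read} MB/s")
print(f"8 x SAS2 ports:           {all_sas2_ports:.0f} MB/s")
print(f"PCIe 2.0 x8 link:         {pcie2_x8} MB/s")
print(f"Effective ceiling:        {min(array_read, all_sas2_ports, pcie2_x8)} MB/s")
```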

So you might be wondering: what the hell would I need 8 x 12Gb/s ports for if the fastest drives on the market are only capable of less than 6Gb/s? The reason is that you can use a single SAS cable that goes from one of those 2 physical connectors on the RAID card to a SAS expander card or backplane. On the other side of that backplane might be 24 x HDDs or SSDs, and the RAID card uses the bandwidth of that single physical connection to control those drives. Those backplanes can even be daisy chained, which is how you can reach the 128-drive limit that the specs of the RAID cards say they support. So that 4-port SAS3 (12Gb/s) single physical connection to the SAS backplane/expander is good for a total of 6GB/s (6000MB/s). If your server is insane enough to have 24 SSDs connected to that backplane, with say 3 sets of 8 drives each in RAID-5s, those three RAID-5s are capable of 3850MB/s EACH, totaling 11.5GB/s of throughput at max utilization on all three, and now they're limited to moving only 6GB/s of that data to the system to serve clients, due to the single cable between the backplane and the RAID card.
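Same arithmetic as before, just showing where the uplink becomes the limit (per-SSD speed is still an assumed 550MB/s):

```python
# Three 8-drive SSD RAID-5s behind a SAS expander vs. a single SAS3 x4 uplink.
SSD_MBPS = 550
raid5_read = 7 * SSD_MBPS              # ~3850 MB/s per 8-drive RAID-5
three_arrays = 3 * raid5_read          # ~11550 MB/s combined
sas3_x4_uplink = 4 * 12 * 1000 / 8     # 4 lanes x 12 Gb/s = 6000 MB/s

print(f"Three RAID-5s combined: {three_arrays} MB/s")
print(f"Single SAS3 x4 uplink:  {sas3_x4_uplink:.0f} MB/s  <- the limit here")
```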

The cache vault kit is there to back up the RAM cache on the RAID card. When you put a RAID-5 into "write-back" mode, any time you write data to the logical volume in your OS, the data first goes into the volatile RAM on the RAID card, which reports to the OS that it has been written. The RAID card keeps the data cached until the disks can actually get it written. If your computer loses power while a write is being performed, the data that was waiting in the RAID card's cache will be lost, the write will be incomplete on the actual disks, and the file(s) will be corrupt. With the cache vault kit, the data in cache is protected until the machine regains power, and when the disks become available again the data continues to be written out of cache to disk. Many people will put their computer on a UPS to circumvent this, but at that point if your PSU dies you still pay the price. This may or may not be important to you.

You don't need more than one RAID card, and you will not see any performance increase from a second one, especially when RAID-ing SSDs. Remember, you can create multiple RAID sets using one RAID card. If you've got 8 SSDs, you could create two RAID-5 sets with 4 drives each; your OS would just see two separate drives at that point. It's all about what you want or need in regards to protection, performance, and storage capacity in the OS. All considerations.
 

Nhyrum

Reputable
OK, yeah, that gigabit vs. gigabyte difference is huge, and that one is on me. I never really got the difference.

How would I set up a spinning drive to back up my RAID?

3.85GB/s is INSANE. Is that read or write speed? I'll be happy if I can get 1GB/s.

As this is mainly for fun/learning purposes on a personal desktop, I don't need the insane read/write speeds approaching 4 gigabytes per second that a server would.

So now that I have my hardware options figured out, I just need to decide if I want to use an HDD as a backup or use a multi-level RAID. If I did my homework correctly, a RAID 50/60 is a little faster than a RAID 5/6?

Thank you for your wealth of knowledge!
 

marko55

Honorable
That 3.85GB/s speed would be read speed, and potentially write speed for a certain amount of data written, depending on how quickly the data gets out of the controller's cache and onto disk. If you synthetically benchmark it with less than 1GB of data it will show a few GB/s.

Keep in mind also, even if you have an array that reads & writes at that speed, and all those other factors (PCIe interface bandwidth, bus, even network) can support it, it's still not going to get utilized unless whatever is reading or writing that data to/from the array can come close to that speed itself.

RAID 50 will double the write performance of RAID 5, and the same goes for RAID 60 vs RAID 6. However, if you're using write-back on a RAID-5 or 6 array and your cache can keep up with the number of writes, it won't really matter. Again, 50 and 60 are more commonly used when you have a lot of disks that need to be put into RAID sets. You don't want to create a single RAID set with more than 16 disks, really, and that's more for a RAID 6. RAID-5, personally, I'd never go bigger than 6-8, maybe.

Again, the only way to back up data off your SSD RAID is by using a software utility in the OS to copy/back it up to another drive.
 

Nhyrum

Reputable
OK, that makes more sense.

I do apologize for my noobishness; I am new to RAID, but the all-powerful Google has been helpful, other than telling me what to pick and how to decide. It's good to be able to talk to someone who has dealt with RAID arrays and can give a professional opinion.

So a RAID 5 with write-back will essentially be as fast as a RAID 50? That may be more economical, as I could use five 256GB drives and have the same capacity as an 8-drive RAID 60 with similar speed, and with a full backup on a spinning drive, the two-disk failure tolerance of a RAID 6 is unnecessary.
 

marko55

Honorable
Well, the thing about RAID 5 (and 6) is that if you don't use write-back (i.e., use the controller's RAM cache for writes), then you'll be writing directly to the disks and it will be MISERABLE. It will generally be slower than the write speed of a single disk in the array; if they're SSDs, it won't be as bad. This is because every write is striped across all disks, plus the parity has to be calculated and written.

This is where "write-back" comes into play. When you write to the array, the data is written to the controller's 1GB (or more, depending on the card) of cache, and the card reports back to the OS that it's complete, which makes the write look super fast to the OS. At the same time, the controller is writing the data out of its cache to the disks. Depending on how many disks are in the array, how fast their write performance is, and how much data you're writing, it may always look like you're getting RAID-0-like write performance on a RAID-5 array when using write-back.

Writing to a RAID 50 or 60 doubles the write performance to the array because it is striping the write across 2 RAID-5s (or 6s), if you have 2 sets in your 50 array. You could have 4 x RAID-5s of 4 disks each, and create a RAID-50 out of those four, hence quadrupling your write performance (without write-back enabled). Regardless, having write-back enabled is still going to make things faster than without it.
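A very simplified way to picture that span scaling in Python (ignoring cache effects and parity overhead, and with a made-up per-span write speed):

```python
# Idealized view of nested parity RAID: writes are striped across the RAID-5
# spans, so k spans give roughly k times the write throughput of a single span.
def nested_write_mbps(spans, span_write_mbps):
    return spans * span_write_mbps

SPAN_WRITE = 1000  # assumed write speed of one RAID-5 span, MB/s (illustrative)
for spans in (1, 2, 4):
    print(f"{spans} span(s): ~{nested_write_mbps(spans, SPAN_WRITE)} MB/s")
```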

Honestly, the best thing to do is grab a bunch of disks and play around with various RAID types and settings like write-back and write-through. Run synthetic benchmarks and real-world benchmarks like large file transfers. Sometimes the toughest thing with testing is that if you build a super-fast array, you need another array or disk in the same machine that's fast enough to test it against! This is where having something like a PCIe SSD in your machine can be super handy.
 

marko55

Honorable
"Write-back" is a setting you configure on each RAID array you configure in your RAID card's config utility. When you configure a logical drive in your RAID config utility, you'll have a few things to config (for starters):

1) Which physical disks to use
2) What strip size
3) Cache mode (write-back, write-through, etc)
4) Size of logical volume
and more
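Purely to illustrate the kind of choices involved, here's how the settings discussed in this thread might be summarized; the key names are made up for the example, not the LSI utility's actual field names:

```python
# Hypothetical summary of one logical drive's settings (names are illustrative only).
logical_drive = {
    "physical_disks": ["slot0", "slot1", "slot2", "slot3"],  # which SSDs to include
    "raid_level": 5,
    "strip_size_kb": 64,          # strip size mentioned earlier for FastPath
    "cache_mode": "write-back",   # or "write-through" for RAID-0/10 with FastPath
    "volume_size_gb": 400,        # overprovisioned below the raw capacity (e.g. 480GB raw)
}
print(logical_drive)
```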