"For OS RAID, you should use a driver-less RAID controller like this:
http://www.datoptic.com/ec/sataii-to-dual-sata3-raid-controller.html
or
http://www.amazon.com/StarTech-com-Internal-Connector-Controller-S322SAT3R/
and use a PCIe RAID card for the other RAID (data).
This won't have compatibility issues if you ever move the OS drive to a new system, either as a RAID set or as a single drive pulled from the set.
There is NOTHING wrong with your plan of two RAID arrays on one controller; it's just a bit tricky to load or move the OS drive.
As a rule of thumb for servers, keep the OS and DATA on different paths or buses, so that if something goes wrong it won't take out both sets of volumes"
---
I realize this question is three years old, but it still shows up in Google searches and others still read it and gather information from it, so it remains relevant (in my eyes).
While I agree that creating multiple RAID arrays on a single PERC controller isn't an issue (assuming enough disk bays for all of the required drives), I have to disagree with your controller recommendation. The H730 is a relatively expensive enterprise RAID controller with 1 GB of onboard non-volatile flash cache, 12 Gb/s SAS, and eight internal ports. It is a nice hardware RAID controller for server use; I don't see any advantage to replacing it with low-cost non-enterprise controllers, certainly not just to gain the ability to import the RAID arrays or drives into another system. Most enterprise RAID controllers create arrays that won't be transportable anyway; that's how they typically work. You wouldn't want to give up all of the features of such a card just so you could maybe one day move the drives to another system without having to reinstall. That's not typically how things are handled on servers anyway.

If you were to build a new system and wanted to reuse the same drives, all you would need to do is back up the data, build the new system, and restore the data, which shouldn't be a huge deal. And by that time, newer, bigger/faster drives will have come down in price enough that you'd likely want to purchase new drives anyway.
We use this controller in many of our servers and it is an excellent model. I've had good experience with the PERC controllers over the years. You can build as many arrays as you want, as long as you have enough drive bays (and drive ports). But nowadays, most people recommend building one large RAID10 over as many drives as you can get, split into multiple Virtual Disks for OS, data, etc. Though there is debate over whether it is worth using RAID10 on SSD, given the penalty on usable space.
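To make the space trade-off behind that debate concrete, here is a small illustrative sketch (my own simplification; the drive counts and sizes are assumptions, and it ignores controller overhead and base-10 vs. base-2 marketing sizes):

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Approximate usable capacity for common RAID levels."""
    if level == "RAID0":            # striping, no redundancy
        return drives * size_tb
    if level == "RAID1":            # simple mirror: half the raw space
        return (drives // 2) * size_tb
    if level == "RAID5":            # one drive's worth of parity
        return (drives - 1) * size_tb
    if level == "RAID6":            # two drives' worth of parity
        return (drives - 2) * size_tb
    if level == "RAID10":           # striped mirrors: 50% penalty
        return (drives // 2) * size_tb
    raise ValueError(f"unknown RAID level: {level}")

# Example: six 4 TB drives
print(usable_tb("RAID10", 6, 4.0))  # 12.0 TB usable
print(usable_tb("RAID5", 6, 4.0))   # 20.0 TB usable, but slower writes
```

The 50% penalty of RAID10 is the price of its better write performance and rebuild behavior compared to parity RAID.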
At the very least, you want redundancy on any disks used in a server (unless it's for temp/cache storage), especially the OS - never use a single drive. If you had the SSD pair in RAID1 for the OS, you'd be better off putting the rest of the drives in RAID10 given the slow 7200 RPM rotation speed, if you can spare the 50% capacity penalty. RAID5 with 7200 RPM drives would be rather slow unless it's just for bulk file storage (like a file server) without heavy usage. Given that the controller has eight ports, if you used two for the SSD pair, that leaves six that you could put in RAID10 for 12 TB usable (assuming 4 TB drives). Just be sure the drives are enterprise SATA so they will work properly in RAID without dropping out due to error-recovery timeouts, a problem that happens with desktop SATA drives because they lack time-limited error recovery (TLER/ERC).

Also be sure to have proper, frequent, working backups, since RAID only protects against disk failure; it won't help if files get deleted, corrupted, or hit with ransomware. I can't tell you how often I come across broken backups that the customer doesn't realize are broken until they need them, because they were never tested or kept up to date.
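Since untested backups come up so often, here is a minimal sketch of an automated spot-check that compares file hashes between a source tree and a backup copy (the paths are hypothetical, and this complements rather than replaces a real restore test):

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list[str]:
    """Return relative paths of files missing from or differing in the backup."""
    bad = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = backup / rel
        if not dst.is_file() or sha256(src) != sha256(dst):
            bad.append(str(rel))
    return bad
```

Run something like this on a schedule and alert on a non-empty result; an empty list only means the copies match, so periodically doing an actual restore is still the gold standard.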
Again, the original poster's problem is likely long since resolved, but as this thread is indexed in Google, it's worth replying for others who might come across it. A good, solid enterprise hardware RAID controller is a must, and the H730 will definitely do the trick, even if it is now a bit outdated. How you use it NOW trumps how you might possibly use it down the road at some hypothetical point.