Finding a cheap but good SAS/SATA controller for OpenSolaris

g00ey

Distinguished
Aug 15, 2009
I'm looking for a hard disk controller to use with an AMD Phenom II X4 based computer running OpenSolaris and ZFS. I want it to be able to run at least 8 drives, and hardware RAID features are not of interest. The major problem is the lack of driver support in OpenSolaris, which is strange; if you want to get serious with storage and get value for the money, OpenSolaris is the only reasonable option.

I found that LSI based controllers are the best ones, as they are the most frequently used in Solaris servers according to my sources. A common recommendation is a controller based on the LSI SAS1068E chipset. Now I have found a controller card at a reasonable price based on the LSI SAS1078 chipset, which is an enhancement over the 1068/1068E chipsets (the E suffix means PCIe support, whereas no suffix means PCI-X support). The card is a so-called SAS riser card (product code: AFCSASRISER) which is intended for Intel's S7000FC4UR servers and uses a PCIe x8 slot. The questions I have about this card are the following:

* Will it run on any motherboard with this slot, or will it only run on that server board?
* Can I flash it with the firmware from the LSI website, which is more up to date than the firmware provided on Intel's website, or do I have to stick with Intel's patches?
* Will it run using LSI's drivers, or do I have to stick to the drivers provided by Intel?
* What does "riser" really mean? The term doesn't make sense for this card, which seems to be a regular SAS/SATA controller.

I know that the Intel SASUC8I is flashable with LSI's firmware updates, and probably the Intel SASMF8I is as well. But this card is a little fishy; for example, some features exist on the card but require a hardware key provided by LSI with a unique license number in order to unlock them. In spite of the "impressive" chipset, the card only supports RAID 0, 1, and 1E (whatever 1E means), but with the key inserted it also supports all the commonly used RAID modes. A firmware flash may conflict with this lock system, or with luck it may bypass it.

Anyone having experiences with this card?
 

goobaah

Distinguished
Dec 7, 2009
I have no experience with this card, but I have a concern about ZFS and hardware RAID. Are you planning to use all of these drives as a single volume? I have not looked into ZFS in depth, but I believe its design works best if you leave the drives separate to a degree and let it manage some of the data integrity itself. It's more of a software RAID solution plus file system. Now, if you were using four mirror pairs or two RAID 5 or 10 arrays, you would be playing into the idea of ZFS, except you would not need to tell the system to keep copies of your data; the redundancy of your RAID volumes would take care of that.
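
For example, assuming the eight drives showed up as c1t0d0 through c1t7d0 (made-up device names, the real ones depend on your controller), a ZFS pool of four mirror pairs would be created roughly like this:

    # Four two-way mirrors in one pool; ZFS handles the redundancy itself,
    # so the controller just needs to pass the disks through.
    zpool create tank mirror c1t0d0 c1t1d0 \
                      mirror c1t2d0 c1t3d0 \
                      mirror c1t4d0 c1t5d0 \
                      mirror c1t6d0 c1t7d0

    # Verify that all four mirror vdevs show up as ONLINE.
    zpool status tank

That way the redundancy lives in ZFS instead of in the card.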

As far as your card goes, if it fits in the PCIe slot then it should work.
I don't know the answers to the rest of the questions, but here is another angle.

Dell servers come with PERC SAS RAID cards, and you can configure Dell servers with Solaris. PERC RAID cards can be found on eBay for cheap. I won't do the research for you, but maybe this is the way around these questions.

 

g00ey

Distinguished
Aug 15, 2009
I intend to use the hard drives in JBOD mode, i.e. no RAID. LSI calls this IT mode, whatever that means. I intend to use seven 1.5 TB hard drives with ZFS using RAIDZ2, which is tantamount to hardware RAID 6 but implemented in the software layer.
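
Roughly, assuming the seven drives show up as c2t0d0 through c2t6d0 (made-up device names; the real ones depend on how the controller exposes them), the pool would be created along these lines:

    # One raidz2 vdev across all seven drives: any two drives can fail
    # without data loss, similar to hardware RAID 6.
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

    # File systems are then carved out of the pool as needed.
    zfs create tank/data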

The SASRISER card has a more powerful chip than the SASMF8I, and yet it is even slightly cheaper than the cheaper SASUC8I card plus two SATA multilane breakout cables. When you buy a SASRISER card, such cables are included, which makes it a bargain if it works on a regular PC with a PCIe slot.

I also found an LSI 3081E-R on a Chinese website for about $100 with cables, which seems like a pretty sweet deal if I can trust the quality of their products.

I haven't looked at the PERC SAS RAID cards from Dell, but I'll look into them.
 

g00ey

Distinguished
Aug 15, 2009
I think I have misunderstood JBOD. I used to believe that JBOD meant the hard drives are presented as they are, as individual disks. But it rather seems that JBOD is a mode which concatenates the storage space of all drives into one big virtual drive, so the first sector of the next drive begins right after the last sector of the first drive.

To run the drives individually, it seems that I will have to create a drive group for each drive and set the group to use RAID 0. So a drive group using one drive with RAID 0 is the same as using a single drive without any RAID at all.
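
If that works, OpenSolaris should see each disk as its own device; a quick sanity check (not specific to any particular controller) would be something like:

    # Lists every disk the OS can see, one cXtYdZ entry per drive;
    # quit at the "Specify disk" prompt without selecting anything.
    format

    # Alternative: per-device error and identity summary, one block per disk.
    iostat -En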

I think pretty much all SAS controllers support running the drives individually without RAID. It wouldn't make any sense at all if they didn't.
 

goobaah

Distinguished
Dec 7, 2009
I think you are on the right track. The Solaris Volume Manager (metadb) documentation says that a single disk is called a RAID 0 disk, and I think that some RAID cards do the same, so you may be right about this RAID 0 = single drive thing.

As far as the Chinese stuff goes, be careful; you are putting your data on this controller. I think the PERC cards are not much more than the Chinese stuff. The controller can fail too, and hopefully your data will still be intact after the failure. You might be doing a good thing with the ZFS stuff, because if the controller dies you might be able to just put those drives on a different controller and still get the data. The device files may need to change to reflect the new hardware, but the file system will still be intact.
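
For what it's worth, ZFS keeps its pool metadata in labels on the disks themselves, so moving a pool to a different controller is normally just an export and import; roughly like this (the pool name "tank" is only an example):

    # On the old system, if it is still healthy enough:
    zpool export tank

    # After moving the drives to the new controller:
    zpool import          # scans the attached disks and lists any pools it finds
    zpool import tank     # import by name; add -f if the pool was never cleanly exported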