I am trying to build a high-performance RAID array. SATA drives have an average write throughput of about 100 MB/s, and selecting specific drives can get you better or worse performance. Accounting for some loss to overhead, one might think that all I'd have to do to reach 600 MB/s is set up RAID 0 with 6-8 drives. Looking at RAID benchmarks, however, it appears that you max out at around 270 MB/s no matter how many drives you attach. This leads me to believe the bottleneck is in the controller.
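A quick back-of-the-envelope sketch of the reasoning above (the per-drive figure, overhead factor, and the 270 MB/s ceiling are the numbers from this post; the 15% overhead is an assumption for illustration):

```python
# Naive RAID 0 throughput estimate vs. an observed controller ceiling.
per_drive_mb_s = 100   # average sequential write for a SATA drive (from the post)
overhead = 0.85        # assume ~15% loss to striping/controller overhead

def raid0_estimate(n_drives, ceiling_mb_s=None):
    """Ideal striped throughput, optionally capped by a controller ceiling."""
    ideal = n_drives * per_drive_mb_s * overhead
    return min(ideal, ceiling_mb_s) if ceiling_mb_s is not None else ideal

for n in (4, 6, 8):
    # Without the cap the numbers scale with drive count; with the 270 MB/s
    # ceiling they flatline, which is exactly the benchmark behavior described.
    print(n, raid0_estimate(n), raid0_estimate(n, ceiling_mb_s=270))
```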
This would also lead me to believe that to get the throughput, I need to move to Fibre Channel drives. The individual drives do offer some improvement in throughput, but it isn't huge. The controllers, however, are rated at much higher throughput (up to 8 Gb/s).
Honestly, for that purpose I'd go with SAS rather than Fibre Channel, plus a good hardware RAID controller. Seagate Cheetah 15K.7 drives are currently the highest-throughput SAS drives on the market, though they are expensive. They are also very reliable - they will last longer under heavy writing (which is what I assume you are doing) than an SSD would.
I appreciate your answer on the drives, but my question was about the RAID controllers. Are there fundamental limits to different types of RAID controllers? What are they, and are there ways of getting around them?
Throughput is also not the only measure you probably need to look at, unless the load is purely sequential. If it's non-sequential, IOPS (IO operations per second) matters too, and the drives matter more to that than the controller does.
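To see why IOPS and throughput are different measures, here is a minimal sketch; the 180 IOPS figure is an assumed ballpark for a single 15k spindle, not from this thread:

```python
# Throughput follows from IOPS times IO size, so the same drive looks very
# different under small random IO versus large sequential IO.
def throughput_mb_s(iops, io_size_kb):
    return iops * io_size_kb / 1024

# Assume ~180 random IOPS for a single 15k drive (illustrative number):
print(throughput_mb_s(180, 4))      # 4 KB random IO: well under 1 MB/s
print(throughput_mb_s(180, 1024))   # 1 MB sequential-ish IO: 180 MB/s
```

Same drive, same IOPS, wildly different MB/s - which is why a purely sequential benchmark tells you little about a random-IO workload.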
Also keep in mind that a single PCIe 1.0 lane provides about 250 MB/s to the controller. So to get your speeds you are looking at a PCIe x4 card at minimum. If the card is just PCIe 1.0 x1, move on. (Of course, the motherboard needs to support this too; you might be able to plug your card into a PCIe x1 slot, but you'll be limiting the card by the bus.) The PCIe 2.0 spec doubles the bandwidth a single lane can handle, to about 500 MB/s.
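The lane math above can be sketched quickly (per-lane numbers are the approximate figures from this post):

```python
# Minimum PCIe lanes needed for a target array throughput, by PCIe generation.
import math

LANE_MB_S = {"1.0": 250, "2.0": 500}   # approximate usable bandwidth per lane

def lanes_needed(target_mb_s, gen="1.0"):
    return math.ceil(target_mb_s / LANE_MB_S[gen])

print(lanes_needed(600, "1.0"))  # 3 lanes, so an x4 card in practice
print(lanes_needed(600, "2.0"))  # 2 lanes
```

Since cards come in x1/x4/x8/x16 widths, a 600 MB/s target on PCIe 1.0 means an x4 card at minimum, matching the advice above.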
There really aren't that many good independent benchmarks of controllers out there, but Tom's Hardware does them occasionally:
The controller does matter some, but the first question you need to ask is how many drives you want. That will drive how many ports you need; then you can pick the controller. Controllers generally perform in line with their port count.
SATA also isn't always bad - combine some drives together and you can get great throughput and IO response times.
Depending on how much storage you are looking for, low end SAN solutions may also be an option.
FC drives are generally overkill these days for anything other than OLTP or other high-end loads, and extreme loads are increasingly served by SSD solutions. SAS is good for mid- to high-performance workloads (e-mail, typical database loads, etc.), while SATA serves the low-to-mid range (smaller systems, file serving, etc.).
Keep in mind that with the right motherboard, controller, and number of drives, a SATA array can outperform SSDs (I have seen a SATA setup that hits 100,000 IOPS). So you may not have to rule SATA out of your solution (8 decent SATA drives and a decent controller might meet your needs). That will come down to budget.
If the question is the controller, then go with Adaptec, specifically the 5000 series cards: good performance and quality. Make sure the firmware is up to date. If you are putting this on a backplane, that is another place to look for speed loss. Supermicro backplanes are pretty decent - stick with them and use the expander chips when you can; if not, go with the discrete connectors for the speed. Once that has been figured out, the next thing is the drives. On Seagate drives you will NEED to pull the jumper on the back of the drive that limits the interface to 1.5 Gb/s, or you won't get full speed (a dumb way to ship enterprise-quality drives, but Seagate has not been accused of better judgment any time lately). I like the WD enterprise drives: they are long-lasting, with a pretty high MTBF and good transfer rates. Next is the motherboard - all the power in the world is not going to make a difference if you choke here. A good processor and PLENTY of RAM, coupled with a good bus speed, and you will do fine.
Thanks: The computer platform I am using has a number of x8 and x16 PCIe 2.0 slots, and the system is running Linux. I am using an enterprise RAID unit rather than building my own, so I have limited options. The articles you pointed to helped me better understand what the bottlenecks were.