Although performance in RAID 10 is lower than in RAID 0, AMCC again delivers significantly better RAID 10 performance than the competition.
How does this compare to a DIY Linux software RAID? Price? Performance? Reliability? With a hardware solution, if the controller card dies you can forget about getting your data back, since there is no standard for on-disk RAID formats between vendors. With Linux software RAID you could just move the drives into another PC, as the md metadata format doesn't change across different versions of Linux.
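For what it's worth, that recovery path is a couple of commands on any Linux box — a rough sketch, assuming the array was originally built with mdadm and standard md metadata (device names below are examples):

```shell
# Attach the surviving member drives to any Linux machine, then let
# mdadm scan all block devices for md superblocks and reassemble:
sudo mdadm --assemble --scan

# Or name the member partitions explicitly (example device names):
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Inspect the on-disk md metadata that makes this portable:
sudo mdadm --examine /dev/sda1

# Then mount the filesystem as usual:
sudo mount /dev/md0 /mnt
```

Try that with a dead proprietary hardware controller and your only option is hunting down the exact same card model (and often the same firmware revision) on eBay.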
Thanks for the article - you have convinced me not to even consider either of these. RAID 10 should be faster than any individual drive for reads and writes, and it should also be faster than RAID 5. Something is wrong here - either with the hardware or the tests.
Actually, performance isn't capped at one cable. There are a number of solutions that offer multiple connections using iSCSI; some even route between the connections dynamically on the server side, and you can bond the ethernet connections on the client side to max out however many connections the client machine has. Of the ones we tested (day job), only a few met our performance needs. All the arrays max out the cable(s) with straight reads/writes, but performance on a number of arrays drops drastically once you start hitting them with more clients (20+) in mixed read/write scenarios. Of course, these solutions are only really useful if you have, say, $100K (or more - a lot more in some cases) lying around.
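The client-side bonding part is pretty painless on Linux - a hypothetical sketch using the kernel bonding driver and iproute2 (interface names and addresses are made up; 802.3ad/LACP mode also needs matching switch config):

```shell
# Create a bond interface in 802.3ad (LACP) mode, link-monitored
# every 100 ms (requires the "bonding" kernel module):
sudo ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave two physical NICs (must be down before enslaving):
sudo ip link set eth0 down
sudo ip link set eth0 master bond0
sudo ip link set eth1 down
sudo ip link set eth1 master bond0

# Bring up the bond and give it the address the iSCSI initiator uses:
sudo ip link set bond0 up
sudo ip addr add 192.168.10.5/24 dev bond0
```

Note that a single iSCSI TCP session still rides one link at a time with LACP hashing; to really aggregate bandwidth to one target you want multiple sessions/portals with MPIO on top.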
It's a crying shame that storage "solution" providers (and Tom's Hardware) don't look at the needs of the laptop market segment. This would be just what I need, but the controller cards are deal-breakers.