RAID 5 card that maximizes performance to 4 or 5 GB/s of throughput?

Daniel925

Distinguished
May 25, 2011
Hello,

I am trying to build a small cluster. I'm planning on using InfiniBand QDR, which has 5 GB/s (gigabytes) of throughput. The PCI-Express 2.0 standard maxes out at 4 GB/s of throughput for an x8 bus, which seems to be the standard for the RAID cards I have been able to find.
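For reference, here's a quick back-of-the-envelope check of those two links. The per-lane rates below are from the published PCIe 2.0 and InfiniBand QDR specs; applying 8b/10b encoding overhead to both is my assumption of how the quoted figures reconcile:

```python
# Rough bandwidth math for PCIe 2.0 x8 vs. InfiniBand QDR 4x.
# Assumes both links use 8b/10b encoding (10 raw bits per data byte).

PCIE2_GTPS_PER_LANE = 5.0   # PCIe 2.0: 5 GT/s per lane, raw
QDR_GBPS_PER_LANE = 10.0    # IB QDR: 10 Gb/s per lane, 4 lanes per link
ENCODING = 8.0 / 10.0       # 8b/10b: 80% of raw bits carry payload

pcie_x8 = 8 * PCIE2_GTPS_PER_LANE * ENCODING / 8   # GB/s usable
qdr_raw = 4 * QDR_GBPS_PER_LANE / 8                # GB/s raw signaling
qdr_data = qdr_raw * ENCODING                      # GB/s payload

print(f"PCIe 2.0 x8: {pcie_x8:.1f} GB/s usable")   # 4.0
print(f"QDR 4x raw:  {qdr_raw:.1f} GB/s")          # 5.0
print(f"QDR 4x data: {qdr_data:.1f} GB/s")         # 4.0
```

In other words, the 5 GB/s QDR figure is the raw signaling rate; after encoding overhead, both links land at roughly 4 GB/s, so a PCIe 2.0 x8 card is a closer match than the headline numbers suggest.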

Are there any uber RAID 5 cards out there that will actually pump I/O at 4 GB/s given enough disks?

The best I have been able to find so far in my search is the LSI MegaRAID 9285, around $999, with two external SFF-8088 ports. I was thinking of connecting this card to two Norco Technologies DS-24E enclosures, around $1,400 each. I think the two DS-24Es should pump 2,400 MB/s each, 4,800 MB/s total, if I stripe across all 48 disks, but I doubt the LSI MegaRAID 9285 can handle that much information sustained, and I am afraid RAID performance will decrease. Another obvious concern is that if I stripe across the two DS-24Es, any momentary disconnect of a cable would destroy the RAID 5, which is far too risky to move forward with.
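To make the bottleneck concrete, here's a sketch of where a 48-disk stripe would top out. The ~100 MB/s per spindle and ~2,400 MB/s per SFF-8088 wide port (4 lanes of 6 Gb/s SAS) are my assumptions; the 9285 itself sits in a PCIe 2.0 x8 slot:

```python
# Sketch: aggregate stripe throughput is the minimum of three limits:
# total disk bandwidth, total SAS link bandwidth, and the host PCIe slot.

PER_DISK_MBPS = 100    # assumption: sustained sequential rate per spindle
SAS_PORT_MBPS = 2400   # assumption: 4 x 6 Gb/s lanes per SFF-8088 port
PCIE2_X8_MBPS = 4000   # usable PCIe 2.0 x8 bandwidth

def stripe_throughput(disks_per_enclosure, enclosures):
    disk_limit = disks_per_enclosure * enclosures * PER_DISK_MBPS
    link_limit = enclosures * SAS_PORT_MBPS
    return min(disk_limit, link_limit, PCIE2_X8_MBPS)

print(stripe_throughput(24, 2))  # -> 4000: the PCIe slot caps it first
```

On those assumptions, the two enclosures could feed about 4,800 MB/s, but the card's PCIe 2.0 x8 slot caps the whole stack at roughly 4,000 MB/s before the RAID ASIC's own processing limits even come into play.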

Ideally, I am looking for 5 GB/s of throughput in and out of the RAID 5 card, with one common volume that will then be shared out over the InfiniBand QDR network as cheaply as possible.

Anyway, if any of you guys have a better idea, please let me know in the next couple of months; it's 5/25/2011 as of this post.
 

NaranKPatel

Distinguished
Jul 12, 2012


If it's performance you're seeking, you're probably better off with high-performance drives (WD Blacks, or Ultrastars on the relatively commodity side) in RAID 60, using an LSI 9285CV-8E with Gen3 or Gen4 SSDs in the array as a massive read/write cache via the LSI CacheCade feature, and leveraging CacheVault instead of a BBU on this CV model. I would also use smaller RAID stripe groups: a group that massive will fail, since with that many drives the odds of a drive failure are very high indeed, and when one does fail, the rebuild job will take some time to complete, which is crazy in a mission-critical workload. If you've used large array groups before, you'll know you'll get a lot of failures.
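To put a number on that failure risk, here's a minimal sketch, assuming a 3% annual failure rate per drive (swap in your vendor's figure):

```python
# Chance that at least one drive in a group fails within a year.
# AFR is an assumed 3% annual failure rate per drive, not a vendor spec.

AFR = 0.03

def p_any_failure(n_drives, afr=AFR):
    """Probability of at least one drive failure in a year."""
    return 1 - (1 - afr) ** n_drives

for n in (8, 24, 48):
    print(f"{n} drives: {p_any_failure(n):.0%} chance of a failure per year")
# 8 drives: 22%, 24 drives: 52%, 48 drives: 77%
```

That's why smaller stripe groups (and RAID 60 over RAID 5) pay off: each failure hits a smaller group, and the rebuild touches fewer drives.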
 

FireWire2

Distinguished



If I read the InfiniBand QDR spec correctly, it's 5 Gb/s (gigabits), not 5 GB/s (gigabytes); that is about 500 MB/s.
This can easily be achieved with a PCIe Gen2 hardware RAID card (16-port models from ATTO, Areca, HPT...).
The HW RAID cards mentioned above can do over 1 GB/s sustained.

PCIe x8 has a bandwidth of 5 Gb/s x 8 = 40 Gb/s, which is about eight times InfiniBand QDR.
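Taking those figures at face value (this is just a unit check using the numbers quoted above, not the published spec):

```python
# Unit check with the figures quoted above.
qdr_gbit = 5.0                   # quoted QDR figure, in Gb/s
qdr_mbyte = qdr_gbit * 1000 / 8  # -> 625 MB/s raw (~500 MB/s usable)
raid_mbyte = 1000                # the >1 GB/s sustained HW RAID figure
print(raid_mbyte >= qdr_mbyte)   # True: one card would saturate that link
```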