10x SSD RAID 0 w/ LSI Controller - Configuration HELP!

toneekay

Distinguished
Jun 28, 2011
564
0
19,010
Hey guys, it's been a while since I've been on here but I'm stuck in a rut. I've searched far and wide over the web and can't seem to find the correct answers I'm looking for. I've tried multiple configurations and nothing seems to work in my case.

System:
Windows Server 2012 R2
i7-5930K @ 3.5GHz
64GB DDR4-2133MHz
(10) Samsung 850 EVO SSD
LSI MegaRAID 9361-4i (12Gb/s SAS)
Intel RES2SV240 SAS Expander (6Gb/s per port)

The issue: I'm trying to configure a RAID 0 setup mainly for READ speeds. From what I've found online, I believe my read, write, and I/O policies are set for the best performance. However, no matter which strip size I test, I can't seem to get over 1,600 MB/s reads: 64 KB (1,664 MB/s), 128 KB (1,679 MB/s), 256 KB (1,576 MB/s). I've also read that you want smaller blocks if you'll mostly be reading smaller files, which is what this system will be used for (most files will range from 1 MB to 1 GB).

Does anyone have experience dealing with these LSI cards and/or have any insight as to why my SSD's aren't scaling like how they should?



EDIT: I forgot to add my RAID configuration.

- RAID Type: 0
- Strip Size: 64 KB, 128 KB, 256 KB (512 KB, 1 MB available)
- Read Policy: Normal
- Write Policy: Write through
- I/O Policy: Direct
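As a minimal illustration (not the actual benchmark tool used in this thread), here is a Python sketch that times sequential reads at the same block sizes being compared above. The path and file are placeholders, and OS file caching will inflate the numbers unless the test file is much larger than RAM:

import os
import time

# Placeholder path to a large test file on the RAID 0 volume (hypothetical).
TEST_FILE = r"D:\bench\testfile.bin"  # ideally far larger than installed RAM
BLOCK_SIZES = [64 * 1024, 128 * 1024, 256 * 1024]  # mirror the tested strip sizes

def time_sequential_read(path, block_size):
    """Read the whole file sequentially in block_size chunks and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered at the Python level; the OS may still cache
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

if __name__ == "__main__":
    size_gib = os.path.getsize(TEST_FILE) / 1024**3
    print(f"Test file size: {size_gib:.1f} GiB")
    for bs in BLOCK_SIZES:
        print(f"{bs // 1024:>4} KB blocks: {time_sequential_read(TEST_FILE, bs):,.0f} MB/s")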



 
Solution


I think that's right. The card can probably handle up to 64 drives through the expander, but they all have to share the bandwidth of its 4 ports.
The 9361-8i card gets around 4,600 MB/s read speeds in RAID 0 using SAS3 SSDs. Your card only has half as many native ports (4), so expect roughly half of that, and you are probably losing a little more to the overhead of SATA and the SAS expander. Your numbers are at least close to the range you can expect.
Another way to look at it: the cable from the controller to the SAS expander carries 4 links, and those links run at 6Gb/s because the expander and your SATA drives top out at 6Gb/s. That gives a raw ceiling of 24Gb/s, or roughly 2,400 MB/s usable after 8b/10b encoding overhead, so real-world performance of around two-thirds of that is not too surprising.

You may be able to achieve better performance by using only four drives.
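To put rough numbers on that reasoning, here is a back-of-the-envelope sketch; the ~500 MB/s per-drive sequential read figure is an assumption for a typical SATA SSD, not a measurement from this system:

# Rough ceiling for a 4-lane 6 Gb/s uplink feeding 10 SATA SSDs.
LANES = 4                     # x4 cable from the 9361-4i to the expander
LANE_GBPS = 6                 # links negotiate at 6 Gb/s (SAS2 expander / SATA drives)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding at 6 Gb/s
DRIVES = 10
PER_DRIVE_MBPS = 500          # assumed sequential read per SATA SSD

uplink_mbps = LANES * LANE_GBPS * ENCODING_EFFICIENCY * 1000 / 8  # Gb/s -> MB/s
drive_aggregate_mbps = DRIVES * PER_DRIVE_MBPS

print(f"Uplink ceiling:      ~{uplink_mbps:,.0f} MB/s")                              # ~2,400 MB/s
print(f"Drives could supply: ~{drive_aggregate_mbps:,} MB/s")                        # ~5,000 MB/s
print(f"Bottleneck:          ~{min(uplink_mbps, drive_aggregate_mbps):,.0f} MB/s")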
 

toneekay

Distinguished
Jun 28, 2011
564
0
19,010
Help me understand this a little better.

My enclosure has 5 backplanes (4 of which are being used). Each backplane holds 4 SATA drives (links) and connects to the SAS expander via a SAS cable, and the expander then connects to the controller via SAS. Are you saying the SATA-to-SAS transition at the backplane is where I'm losing performance?

Wouldn't the drives on each backplane (4x 500 MB/s drives) translate into 2,000 MB/s, and then combine with the other backplanes on the way to the controller?

I'm a bit noobish to these high multi-drive RAID systems, so excuse my many questions.


EDIT: I think I just answered my own question... The card only has 4 native ports, so throughput will only scale up to that limit (4x). If I had a card with more native ports (8x), my drives should scale up to that amount instead. Correct?
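As a rough illustration of how the uplink ceiling scales with native lane count (same 6 Gb/s and 8b/10b assumptions as the earlier sketch; the lane counts are just for comparison):

# Usable throughput per 6 Gb/s lane after 8b/10b encoding is roughly 600 MB/s,
# so the uplink ceiling scales with the number of native lanes on the card.
USABLE_MBPS_PER_LANE = 6 * 1000 / 8 * (8 / 10)  # ~600 MB/s

for lanes in (4, 8):  # 4-port vs. 8-port cards, purely illustrative
    print(f"{lanes} lanes: ~{lanes * USABLE_MBPS_PER_LANE:,.0f} MB/s ceiling")
# 4 lanes: ~2,400 MB/s; 8 lanes: ~4,800 MB/s (close to the ~4,600 MB/s quoted above)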
 


I think that's right. The card can probably handle up to 64 drives through the expander, but they all have to share the bandwidth of its 4 ports.
 
Solution

toneekay

Distinguished
Jun 28, 2011
564
0
19,010
I've actually found out what my issue was... I screwed up thinking "Normal" was the quickest Read Policy in the RAID configuration, when "Read Ahead" is actually way faster. My ~1,500 MB/s read speeds increased to ~2,500 MB/s. I've also found that in the benchmarking tool, the ideal data size to test with is 100 MB; earlier I was testing at 4,000 MB.

Right now, with a 128 KB strip size and Read Ahead enabled, I'm getting around 4,100 MB/s read and 4,400 MB/s write speeds across the 10 SSDs.