I am running a dual Xeon (2x X5650 Westmere CPUs) configuration with 12GB DDR3 ECC 1333MHz RAM and a 450GB SAS Seagate Cheetah 15K.7 (15,000rpm) HDD on an LSI SAS 9260-8i RAID controller.
The HDD sits in a RAID 0 configuration (there is no second drive, so it's kind of like JBOD) with the following settings enabled in the controller:
read-ahead enabled, disk cache enabled, I/O access: direct, write-back (no BBU; I don't have one yet), stripe size 64K.
My problem is that when I copy a large file (e.g. 2GB), the progress jumps to the middle in about a second, then there's a sudden drop in transfer speed (below 50MBps) that keeps getting worse, making large file transfers a pain.
I have run an HD Tune test and I get the following: 119MBps minimum, 196MBps maximum, 167MBps average (this is with a 256K stripe, not 64K).
Also worth mentioning: the HDD connects to a 3Gbps backplane, while the SAS controller and HDD are both SAS2 (6Gbps). I know the backplane is a bottleneck, but shouldn't I at least get steady transfer rates instead of drops below 50MBps and complete stalls?
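For what it's worth, a quick back-of-the-envelope check (my own numbers, assuming SAS 8b/10b line encoding and ignoring protocol overhead) suggests the 3Gbps backplane still leaves roughly 300MBps of usable bandwidth, well above the drive's 196MBps peak, so the backplane should not explain drops below 50MBps:

```python
# Rough sanity check: is a 3Gbps SAS-1 backplane actually the bottleneck
# for a single 15K HDD that peaks below 200MB/s? (Illustrative math only.)

def sas_effective_mbps(line_rate_gbps: float) -> float:
    """Approximate usable bandwidth of a SAS link in MB/s.

    SAS uses 8b/10b encoding, so only 8 of every 10 bits on the wire
    carry data; higher-level protocol overhead is ignored here.
    """
    bits_per_second = line_rate_gbps * 1e9
    data_bits = bits_per_second * 8 / 10   # strip 8b/10b overhead
    return data_bits / 8 / 1e6             # bits -> bytes -> MB

print(sas_effective_mbps(3.0))  # ~300 MB/s for the 3Gbps backplane
print(sas_effective_mbps(6.0))  # ~600 MB/s for a SAS2 link
```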
And another question: is this because I have a single drive in the RAID 0 configuration? Could it be because the controller is writing the same data twice onto the same drive? (That's my theory.)
No double writes, but the virtual drive (as LSI calls it) could be seen as degraded. What you are probably seeing is the write cache filling up, then the disk sputtering as it tries to commit the data.
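The cache-filling behavior can be sketched with a toy model (my own illustration with made-up numbers, not LSI internals): data arrives faster than the disk can commit it, the write-back cache absorbs the initial burst at full speed, and then everything is gated by the disk's real sustained rate.

```python
# Toy model of a write-back cache filling up during a large copy:
# the copy starts at the source's speed until the cache is full,
# then crawls at the disk's sustained commit speed.

def copy_throughput(file_mb, cache_mb, src_mbps, disk_mbps):
    """Return (burst_seconds, total_seconds, average_mbps)."""
    burst_mb = min(file_mb, cache_mb)      # absorbed at source speed
    burst_s = burst_mb / src_mbps
    remaining = file_mb - burst_mb         # gated by the disk from here on
    drain_s = remaining / disk_mbps
    total_s = burst_s + drain_s
    return burst_s, total_s, file_mb / total_s

# 2GB file, hypothetical 512MB cache, source pushing 1000MB/s,
# disk committing a sustained 150MB/s (all illustrative figures)
burst, total, avg = copy_throughput(2048, 512, 1000, 150)
print(f"burst lasts {burst:.1f}s, whole copy takes {total:.1f}s, "
      f"average {avg:.0f}MB/s")
```

This matches the symptom described above: the progress bar leaps ahead in the first second, then the remaining majority of the file trickles out at disk speed.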
Not sure how you can have a striped volume on a single drive. Something isn't right with the config.
You really need to leave the drive as JBOD or "Unconfigured" if it is a standalone. I have an LSI 9212 and that is how I leave single drives.
Also, when you get a second drive to make a real RAID 0 volume, use a 64K stripe size since you seem to be concerned with throughput.
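If you end up building that two-drive RAID 0 from the command line instead of the WebBIOS, the MegaCli invocation would look roughly like this (a sketch only; the enclosure:slot IDs `252:4` and `252:5` and adapter `-a0` are assumptions you'd need to confirm with `MegaCli -PDList -a0`):

```shell
# Create a RAID 0 virtual drive from two physical disks with a 64K stripe,
# write-back cache, read-ahead, and direct I/O on adapter 0.
# [enclosure:slot] pairs below are placeholders - check yours first.
MegaCli -CfgLdAdd -r0 [252:4,252:5] WB RA Direct -strpsz64 -a0
```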
JBOD = just a bunch of disks. I'm not RAID savvy; all I've ever done is use JBOD: just slap each hard drive in as its own drive, maybe partition it, but each one stays on its own. Just plugged in, formatted, and used.
This controller is a piece of crap. After more digging I found that you can enable JBOD from the MegaCli command line ("MegaCli -AdpSetProp -EnableJBOD -1 -a0"), so I did, and I could then see the Make JBOD option in the WebBIOS, but it doesn't work; I get an error that the operation could not be completed.
I also tried manually from MegaCli and got an error:
"MegaCli -PDMakeJBOD -PhysDrv[252:4] -a0"
"Adapter: 0: Failed to change PD state at EnclId-252 SlotID-4
Exit Code: 0x01"
It is clearly stated on the manufacturer's page that the controller supports JBOD. Now I have to waste another $400 on an identical HDD for RAID.
Since JBOD does not consume disk space for storing parity data, a JBOD configuration makes maximum use of storage capacity. For the same reason, the reliability of JBOD configurations is poor: a single disk failure inevitably results in data loss.
When a hardware RAID controller is used, it normally defaults to JBOD configuration for attached disks. Many people I know never use this type of configuration, including RAID 0 (not a true RAID); they opt for RAID 5 and 10.
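The reliability penalty compounds with every disk added: with no redundancy, the array loses data if any member fails. A quick calculation (my own illustration, using a made-up 3% annual per-disk failure rate) shows how fast the odds grow:

```python
# Why JBOD/RAID 0 reliability gets worse per disk: with no redundancy,
# data is lost when ANY of the n independent disks fails.

def array_failure_prob(per_disk_prob: float, disks: int) -> float:
    """P(at least one of n independent disks fails)."""
    return 1 - (1 - per_disk_prob) ** disks

# Assuming a 3% annual failure rate per disk (illustrative number only)
for n in (1, 2, 4, 8):
    print(f"{n} disk(s): {array_failure_prob(0.03, n):.4f}")
```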
What about my backplane's 3Gbps limitation? I don't want to risk spending another $400 on a second drive that will experience the same slowness...
Wouldn't I be better off if I bought a 3Gbps HDD instead?
I have tried with a SATA Samsung SpinPoint drive, and while the speeds are not terribly fast, at least they are constant...
I bought a mini-SAS to SAS cable (not the fanout cable) to test my HDD connected directly to the controller without going through the backplane.
I have also read on forums that Seagate drives don't work well with LSI controllers (sense errors and other issues). I might look into replacing it with a Hitachi Ultrastar...
I have not received the mini-SAS to SAS cable yet, so I have not tested the HDD connected directly to the controller.
In the meantime, I have changed the backplane configuration to match my SAS controller's sideband signalling. I had the backplane configured for I2C while my controller was "talking" to the backplane via SGPIO (via SGPIO cables, not I2C).
After changing my backplane settings to SGPIO (via jumper settings) I get better throughput: still not well above 200MBps, but close, and with fewer transfer rate spikes.
I do not understand how this could affect throughput, because I had disconnected the sideband cables from the backplane and was getting the same poor throughput.
An interesting thing I have discovered after running some benchmarks:
- Ran Sandra, and throughput rates skyrocketed and beat the reference numbers (number 1 in the list)
- Ran HD Tune: better throughput than before changing my backplane settings
- Copied a 2GB file: same poor transfer rates, 200MBps for a few seconds, then a gradual drop to 1MBps that never recovered
I don't understand how I can beat the benchmarks yet fail to reliably copy a file (same HDD, same partition, etc.).
They have about 10 motherboards that they have tested to work with this card, and they are all server boards. If you have a problem with this RAID card, you will get the standard message:
"We do not support that combination of hardware, please do not use our card with this board, have a nice day."
They make no attempt to help the end user at all.
I have this card and my system will not boot with a hard disk connected to the card. I can hot plug a drive into the card once the PC is booted but that is it.
Also JBOD does not work. Period.
Flashing to earlier firmware does not work; a corrupt image error occurs. Earlier firmware versions that might work are not even listed. The only way I found one was to dig around their knowledge base site for a download link, and that image also generates a corrupt image message.
Oh, and yes, I am pissed off and definitely want to spread the word about this half-baked product.
I have had it with this P.O.S. card and on top of all that, NewEgg wants to charge me $70 restocking fee for this garbage.