raid 1e slow

j4jes

Honorable
I created a RAID 1E array last week on an Intel SR1500AL server using an LSI 2008 card, and it finally just finished syncing. The purpose of the array was to set up VMware again, but on a redundant array instead of a single drive, and I thought the RAID array would be faster! It's taking forever to boot up and to delete / create a vmdk-flat file. It's just not reading / writing properly, I guess. It's a three-drive 7200rpm SATA array and I'm told that I can add another drive and make it RAID 10. What did I do wrong? This is an "mpt2sas" card from an IBM server, but I flashed it with the LSI firmware to get RAID mode instead of target mode.

Update: I'm not only measuring the boot time but mainly the read / write speed when creating a test virtual machine within VMware. This is very slow, and since RAID 1E is a mirror / stripe hybrid, my understanding is that with three drives it should be a bit faster than when I had VMware ESXi on the single hard drive.
 
Solution


I bet that no matter what RAID type you build, you will have poor speed with VMware ESXi on this controller, while a single drive will show much better performance.
Your RAID card doesn't have an internal cache, and ESXi doesn't do caching operations the way Hyper-V does, for example.
That's going to drag down the final performance.

RAID 1 is mirroring - two identical drives become "one". If one fails, the second keeps the array going until the failed drive is replaced.

RAID 10 is striping + mirroring, so reads/writes are spread over multiple drives, and the stripes are then mirrored.

Mirroring drives slows down performance - you have to write everything twice and read it once - but it gives you the reliability of the mirrored copy.

Striping drives increases performance, because each write is spread across multiple drives.

I use high-speed drive arrays for database applications, typically running RAID 10 with 10+ hard drives. The more drives you have, the better the performance. If high speeds are really needed, consider 15K SAS drives, and use smaller capacities (e.g. 160GB or 320GB) for better performance.

For your setup, I would suggest a single OS drive (around 100GB) - maybe even an SSD for the boot drive.

Then you need to look at the "data" portion. There are three things to balance: total volume size (striping), speed (striping), or 100% uptime (mirroring).

With any drive setup, RAID is NOT a replacement for backup - keep a separate backup drive as well.
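
To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch (my own illustration, assuming ~150 MB/s sequential per 7200rpm SATA drive and ignoring controller, cache, and filesystem overhead) of how the different levels should scale. The point is that even RAID 1E on three drives shouldn't be slower than a single drive for reads, so if it is, the bottleneck is somewhere other than the RAID geometry.

# Rough sequential-throughput scaling for the RAID levels discussed above.
# The ~150 MB/s per-drive figure is an assumption, not a measurement.

def raid_throughput(level, drives, per_drive=150.0):
    # Return a rough (read, write) MB/s estimate for a RAID level.
    if level == "single":                 # one plain drive, no RAID
        return per_drive, per_drive
    if level == "raid0":                  # pure striping: all drives in parallel
        return drives * per_drive, drives * per_drive
    if level == "raid1":                  # two-drive mirror: read both, write both
        return 2 * per_drive, per_drive
    if level in ("raid10", "raid1e"):     # striped mirrors / interleaved mirror:
        return drives * per_drive, drives * per_drive / 2   # every write lands twice
    raise ValueError(f"unknown RAID level: {level}")

for level, n in [("single", 1), ("raid1", 2), ("raid1e", 3), ("raid10", 4), ("raid0", 4)]:
    read, write = raid_throughput(level, n)
    print(f"{level:7} x{n}: ~{read:.0f} MB/s read, ~{write:.0f} MB/s write")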
 

popatim

Titan
Moderator
Are you timing the boot up from power on or from the system actually beginning to load the OS?
You shouldn't really count the time it takes the RAID card to initialize; some are awfully long and drawn out, and 10 minutes is not uncommon at all.

I would check your error logs, especially on the LSI, to make sure everything is clean there.
 

j4jes

Honorable
Here is a performance comparison between my older ESXi 5.0 server with a single 7200rpm drive and my newer ESXi 5.5 server with 3 x 4TB 7200rpm drives in RAID 1E (VMware installed on the LSI 2008 RAID 1E volume); I'm just using the performance tab to compare the read and write KBps. The guest OSes simply respond faster on the single-drive server.
I would add more drives / spindles to the array, but the server is a 1U Intel with room for only three drives. The LSI 9212-4i4e RAID card does have some kind of external SAS interface on the back, so I guess I could add more drives in an external housing with its own power supply.

[Screenshots: vSphere performance tab read/write rates (KBps) from both servers]



The peculiar thing is these messages that keep popping up in the events tab. It will say latency decreased, and then latency increased, only minutes apart.

Device naa.600508e00000000045cf9bd469315c05 performance has deteriorated. I/O latency increased from average value of 3532 microseconds to 291216 microseconds.
warning - 12/31/2013 12:05:22 PM - reynolds
 

popatim

Titan
Moderator
Do you have an IBM controller card with a passive SAS backplane for the drives, or the active backplane that uses the built-in LSI 1064?

I'm not sure what you have, as what I know as the mpt2sas doesn't have 1 internal and 1 external mini-SAS port on it. What's the FRU# of the card?
 

j4jes

Honorable
Well, the card is an LSI SAS 9212-4i4e that shipped with some IBM System x systems, so it had IBM firmware on it that I updated with LSI firmware to make Integrated RAID mode available. Since the SR1500AL chassis I'm using has a SATA/SAS backplane behind the three hot-swappable drive bays, I just ran three SATA cables from that backplane to the LSI card and then set up the 1E RAID volume from the LSI RAID BIOS. The initialization process took about a week or more! Maybe due to the Intel SATA backplane, or to using 3 x 4TB Seagate desktop drives? Anyway, it's working, but I hoped it would be a bit snappier, because in VMware (or in any of the guest VMs) just clicking around has delays that weren't present on the single-drive ESXi install (which used the same Seagate 4TB desktop 7200rpm drive). The events window in ESXi vSphere is constantly complaining about degraded performance and latency. Isn't RAID supposed to take advantage of "inexpensive disks"?

Can I add four more drives to the array with the SFF-8088 breakout cable on the back? I suppose 9212-4i4e means 4 internal / 4 external using the breakout cable internally or externally, but I've seen this card used with a SAS expander unit holding more than eight disk drives (IBM firmware, maybe), so I think that with a straight-through SFF-8088 to SFF-8088 cable from the back of the card to the right expander DAS unit it can handle more than 4 external drives.
 

Dr-Kiev

Distinguished


RAID 1E is the same idea as RAID 10, but built on 3, 5, 7, 9... drives.
Both striping and mirroring are present.
 

j4jes

Honorable
So even though I'm using RAID 1E (striped mirroring), which is really just RAID 10 with an odd number of drives, it will be slow? I think I just need more spindles / drives to get some performance out of this volume?
 

Dr-Kiev

Distinguished


I bet that no matter what RAID type you build, you will have poor speed with VMware ESXi on this controller, while a single drive will show much better performance.
Your RAID card doesn't have an internal cache, and ESXi doesn't do caching operations the way Hyper-V does, for example.
That's going to drag down the final performance.

 
Solution

j4jes

Honorable
Good point. I recall reading through another forum about this LSI card where someone had to install the LSI RAID manager software (maybe on a Windows guest VM) to enable some caching and speed it up. I'll do some more googling and try that. Rather disappointed that a RAID card at this price behaves this poorly in ESXi. My other machines have integrated Intel ESRT2 ICH10 RAID and LSI 1068 RAID.

I'm thinking the best solution at this point might be just to use the single drive and then set up a nightly rsync to a secondary drive for redundancy.

I've tried Hyper-V recently but had a rough time, since we have what is mainly a Linux network without Active Directory. Hyper-V without AD is something of a nightmare.

 

j4jes

Honorable
I have a hunch that my problem is related to using these cheap desktop-grade drives (ST4000DM000), but you know what, I just put four of these Seagate drives into a RAID 0 on the LSI 9212-4i4e card and VMware ESXi 5.5 still reports no improvement in write speed; it is always capped at about 150 MBps. So basically four ST4000DM000 drives going through the LSI card in RAID 0 are SLOWER than one ST4000DM000 hard drive going through the Intel SR1500AL's native SATA controller. This is really p**sing me off, because I know that a four-drive RAID 0 should do something to improve performance, even with cheap drives. Something is acting as a bottleneck, maybe the SATA backplane / midplane in the SR1500AL server... this makes no sense to me at all. I even bypassed the server's SATA backplane by powering the drives from external power supplies, and performance on the LSI volume, with the drives connected directly to the LSI card and no SATA backplane, STILL shows in ESXi as 150000 KBps.
 

popatim

Titan
Moderator
RAID 1E is not a stripe of mirrors nor a mirror of stripes; rather, it's a hybrid of both.
Data is stored striped like RAID 0, but with copies of the stripes.

RAID 0 would look like this, where a and b are the two halves of each data block:

1a 1b
2a 2b
3a 3b

RAID 1E adds a duplicate of each stripe across 3 or more drives. Every other row is a mirror of the previous row, just rotated one drive over. Writing is obviously much slower, but reading should be at RAID 0 speeds.

1a 1b 2a
2a 1a 1b < mirror of row 1
2b 3a 3b
3b 2b 3a < mirror of row 3
4a 4b 5a
5a 4a 4b < mirror of row 5
5b 6a 6b
6b 5b 6a < mirror of row 7


At least if I remember my IBM training right. LoL
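
If it helps to see that pattern generated rather than typed out by hand, here is a tiny Python sketch (my own illustration of the layout described above, not anything from LSI or IBM) that prints the same interleaved-mirror arrangement for any drive count. Each chunk ends up on two different drives, so reads can be spread RAID 0 style while every write has to land twice.

def raid1e_layout(num_drives=3, data_rows=4):
    # Print an interleaved-mirror (RAID 1E) chunk layout.
    def label(k):                       # chunk 0 -> "1a", 1 -> "1b", 2 -> "2a", ...
        return f"{k // 2 + 1}{'ab'[k % 2]}"

    for row in range(data_rows):
        data = [label(row * num_drives + d) for d in range(num_drives)]
        mirror = data[-1:] + data[:-1]  # same chunks, rotated one drive to the right
        print(" ".join(data))
        print(" ".join(mirror), "< mirror of the row above")

raid1e_layout()   # with the defaults this reproduces the 3-drive table above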
 

j4jes

Honorable
Trying a RAID 0 with two Seagate 7200rpm drives now to test the disk speed in ESXi, and it still seems capped. These Intel SR1500AL servers have a midplane between the motherboard and the SATA backplane; I wonder if that interferes with performance. I know the LSI SAS2008-based cards don't have cache, but I would still expect a bit more performance than this, at least 15000 KBps or better.

I just read about someone with an LSI-based card experiencing poor performance who removed a VIB within ESXi, and that seemed to fix his problems.

http://en.community.dell.com/support-forums/servers/f/906/t/19487404.aspx