Dimension XPS T800 (i440BX, Pentium 3 @ 800MHz)
768MB SDRAM @ 100MHz FSB
3Com 3C905 10/100 NIC
ATI Rage 128 PCI video
LSI MegaRAID i4 (4-channel ATA/100 RAID controller)
nothing on IDE0
Generic DVD-ROM as secondary master (IDE1)
4 x 250GB Western Digital WD2500 ATA drives
The i4 firmware is the latest available from LSI.com, version N661. Each WD2500 drive was jumpered CS and attached to a separate port on the LSI controller via 80-wire Ultra/ATA ribbon cable. I configured the drives as a single RAID5 logical device of 750GB size and initialized it, then booted the system from the Ubuntu Feisty Desktop CD.
The installation ran fine...I believe I had two partitions set up. The first (sda1) took practically all of the volume's space, formatted ext3 and mounted as the root filesystem. I held back 4GB on sda2 as a swap partition. I never had to think about stride or other ext3 parameters, since the standard Ubuntu installer does everything on its own. The total size of sda1 as reported by "df -h" was 686GB.
This entire setup ran flawlessly from the day I built it in Feb 2007...until I ran out of space. Oops. I never did any benchmarking on it, since it never gave me a reason to. Any time I moved files onto it over my home LAN, the 100Mbit link was the limiting factor: FTP transfers routinely topped out around 11MB/sec for both reads and writes.
So, my problems begin about a week ago, when Fry's ran a sale on 500GB Maxtor ATA/100 drives @ $99 each. I picked up four of them with the plan of rebuilding my fileserver with double the capacity.
After pulling all the data off the old RAID5 volume, I replaced all the WD2500 drives with the new Maxtor STM305004N1AAA-RK units and built a new configuration...this time using two logical devices:
*Logical Drive 1 (sda)
16GB RAID0, 64KB stripe, intended for use as the root filesystem
*Logical Drive 2 (sdb)
1.5TB RAID5, 128KB stripe, intended for storage of large media files and ISO images
I booted up the new Ubuntu Gutsy and did some manual partitioning in LiveCD mode prior to installation...again holding back 4GB on sda2 for swap, and using the rest as the root filesystem on sda1, formatted ext3. Only this time I manually formatted it with the right stride value for 4K blocks on 64K stripes...and found out I had to use the "alternate" Ubuntu CD to install manually without reformatting the partition. Ugh...I digress. I also formatted the entire RAID5 device as a 1.5TB ext2 partition (sdb1) and mounted it as /opt/media.

My first inkling of a problem appeared during the format: the progress counter for block group creation rapidly spun up to about 9800 of 13300, then I heard the drives start chugging and the counter paused...after that it kept climbing in increments of a few dozen, with pauses of a couple seconds between bursts.
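For anyone following along, the stride value is just the RAID stripe size divided by the filesystem block size. A quick sketch of the math and the format command I mean (device name per my layout above; double-check it before running, since mke2fs is destructive, so I've left it commented out):

```shell
# ext3 stride = RAID stripe size / filesystem block size
stripe_kb=64   # per-disk stripe from the MegaRAID config
block_kb=4     # ext3 block size (-b 4096)
stride=$((stripe_kb / block_kb))
echo "stride=$stride"   # prints stride=16
# the actual format (DESTRUCTIVE -- run only on the right device):
# mke2fs -j -b 4096 -E stride=$stride /dev/sda1
```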
The *real* rude awakening came when I started copying data onto this volume. I had used several spare ATA drives as temp storage, and planned on plugging them into the IDE0 socket on the mainboard and copying the data directly over to the array. This was, after all, how I'd gotten the data off the old one. I noticed that 700MB ISO files were taking around 3-4 minutes each to transfer! I used the "dd" command to copy a few of them, and it clocked the speed at around 4MB/sec.
So, I started doing block device tests, writing data directly to /dev/sdb using dd. No joy. I could get speeds around 20MB/sec...but only in transfers of 48-64MB or less. Once I crossed that size threshold, it bogged down to 4MB/sec. The RAID0 device (sda1) was clocking sustained writes of ~25MB/sec.
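For the curious, here's the shape of the raw-write test, rigged to hit a scratch file by default so nobody nukes a disk by accident. Point TARGET at /dev/sdb (as root, and kiss anything on it goodbye) to reproduce my numbers; the sizes are just examples:

```shell
#!/bin/sh
# Sequential-write probe: write MB megabytes and report rough throughput.
# TARGET defaults to a scratch file; set TARGET=/dev/sdb to hit the array
# directly (DESTRUCTIVE to anything on that device).
TARGET=${TARGET:-/tmp/dd_probe.img}
MB=${MB:-64}
start=$(date +%s)
dd if=/dev/zero of="$TARGET" bs=1M count="$MB" conv=fsync 2>/dev/null
end=$(date +%s)
secs=$(( end - start ))
[ "$secs" -eq 0 ] && secs=1   # avoid divide-by-zero on fast runs
echo "wrote ${MB}MB in ~${secs}s (~$(( MB / secs )) MB/sec)"
```

One run below the ~48MB mark and one well above it is what exposed the cliff for me.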
So, here's what I've tried:
*Jumper all hard disks to MASTER (they were set to CS out of the box) - no change
*Reconfigure RAID5 logical device to use 64K stripes - no change
*Reconfigure array as a single RAID5 device - no change
*Install Windows 2003 Server onto a separate disk and perform tests against the logical array devices - no change. It writes just as poorly under Windows as under Linux. For me, that eliminated the OS driver as the source of the problem.
*I currently have a support request open with LSI, but it's not going anywhere fast. After emailing me about checking for drive errors and jumper settings, the rep seems ready to throw up his hands. I've asked to escalate to L2/L3, but I'm not holding my breath on getting support for a product that's officially obsolete.
So I now appeal to you guys...I'm all ears. My data is safely backed up, so the array is presently "disposable." I am ready and willing to reconfigure/rebuild it in any way helpful. What I need are ideas. Anyone have experience using this controller with drives and logical volumes of this size? Am I overlooking something basic? I'm an IT guy, so I'm pretty sure I know what I'm doing...but will be the first to admit that I've made geeky mistakes in the past, so don't worry about bruising my ego.
I have pretty much the same setup and the same problem. I have the MegaRAID i4 by LSI on my system hooked up to 4 drives, each drive on its own channel. When I configure each drive as standalone (map each physical drive to a logical drive, no aggregates) I get pretty good performance: about 33MB/s for each drive. They are Maxtor 160GB 5400rpm drives, so 33MB/s is about all I would expect. This is also about what I get when I hook them directly to my Mobo.
When I configure them as a RAID0 with the MegaRaid, I get about 40MB/s. This is slow, because if I hook all 4 to my Mobo, even sharing channels, and configure them as RAID0 in XP (with the RAID hack), I get about 90MB/s.
When I configure them as RAID5 with the MegaRAID, I get about 10MB/s using write-back, and about 3MB/s with write-through. This is terrible performance!! Using XP to form the RAID and hooking them up to my Mobo, I get about 50MB/s in RAID5.
So the MegaRAID card is doing worse than software in XP! And in XP mode I share channels between drives, whereas with the MegaRAID, I give each drive its own channel.
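If anyone wants to sanity-check per-drive vs. array throughput the same way, something like this works. With no arguments it reads a scratch file (safe to try anywhere, though a just-written file reads from cache and reports silly-fast numbers); pass the member devices as root for meaningful results. Device names here are examples:

```shell
#!/bin/sh
# Read-throughput check for each device (or file) given as an argument,
# e.g.: ./probe.sh /dev/sda /dev/sdb /dev/sdc /dev/sdd
# With no arguments it builds and reads a scratch file instead.
set -e
if [ $# -eq 0 ]; then
  dd if=/dev/zero of=/tmp/read_probe.img bs=1M count=32 2>/dev/null
  set -- /tmp/read_probe.img
fi
for dev in "$@"; do
  echo "== $dev =="
  # GNU dd prints the transfer rate on its last stderr line
  dd if="$dev" of=/dev/null bs=1M count=32 2>&1 | tail -n 1
done
```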
I have tried and failed with the MegaRAID 8088 and 8808 cards... The support staff over at LSI were not well versed in deep support or hard questions. It seemed like they all have a script of answers to some very basic questions, and when they have to go off of it they seem lost. Second-level and engineering staff did not reply to questions. Better to go with another company.