I got a 2810SA card a while back to run RAID 5 because I was fed up with having drives die on me. It is running with 4 WD5000YS drives right now. Firmware and drivers are the latest versions.
It was clear from the start that it was very slow: when I dumped a 300GB drive's data onto the new RAID array, it took several times longer than I had estimated. So I ran HD Tach, and it confirmed my fears. The result is 100% consistent; I get it no matter when I run the benchmark:
I thought it might be the PCI bus at first - too much competition for the limited bandwidth - but the results are far too consistent for that to be the case. Besides, the burst speed shows that more bandwidth is available at least some of the time, so if the bus were the bottleneck the graph should have peaks of 70 MB/s+ here and there.
Does anyone have any ideas on what's going on? Frankly I don't mind it all that much - I got the 2810SA cheap instead of the 2820SA, which was current at the time, not because I wanted speed but because I wanted data redundancy, and in that respect I got what I paid for. Still, it would be nice to see at least mediocre performance. Remember, the HD Tach benchmark shows read speeds, which don't even require parity calculations.
The Adaptec 2810SA is a RAID card based on the IOP302/303 processor, the same one on the LSI MegaRAID SATA150-4. It's pretty well known that the IOP302/303 is kind of an underpowered processor for hardware RAID 5.
It's apparent that (a) the Adaptec design isn't as good as the LSI design (Reference), and (b) the Adaptec card does better when large block transfers (2MB+) are used, while small block transfer sizes (64K) cause performance to take a dramatic hit (Reference).
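For anyone wondering what work that IOP302/303 is actually doing: RAID 5 parity is just a byte-wise XOR across the data chunks in a stripe, so every byte written costs the controller a corresponding amount of XOR work, and a weak processor shows. Here's a toy sketch (my own illustration, obviously not the card's firmware):

```python
# Toy RAID 5 parity sketch: parity is the byte-wise XOR of the data chunks.
def xor_parity(chunks):
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_chunks, parity):
    # XOR-ing the parity with the surviving chunks rebuilds the lost chunk
    return xor_parity(surviving_chunks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data drives of a 4-drive array
parity = xor_parity(data)
# Simulate losing the second drive and rebuilding it from the other two + parity:
rebuilt = recover([data[0], data[2]], parity)
```

Reads from a healthy array don't need any of this, which is why the read numbers being bad too is the more puzzling part.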
I would recommend a 3Ware 9500S or 9550SX to replace it.
Thanks, I appreciate you coming around and checking out my problem. I have actually come across both of the sites you linked. When I read that the processor is insufficient for RAID 5 performance to scale well, I took it to mean that performance hits a ceiling at some point as more drives are added. However, I had read that after the other site, which specifically said: "The 2810SA topped out at 134-MB-per-second read and 102-MB-per-second write performance when doing large 2-MB block transfers.".
This is way off from my performance, as you can see. Further, transferring even one large file (over 1GB) still always results in ~35 MB/s (I used a standalone, empty WD5000YS for the test - a drive whose minimum speed is well over 35 MB/s). This is for reads AND writes. Shouldn't read performance at the very least be better?
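In case anyone wants to reproduce my numbers without HD Tach, this is roughly the kind of throwaway sequential-read check I've been doing (my own quick Python sketch; the file path is whatever large file you have on the array):

```python
# Throwaway sequential-read benchmark (a rough sketch, not HD Tach).
# Reads an existing large file in 2MB blocks and returns MB/s. Note that
# OS caching can inflate the number if the file was recently written.
import time

def read_throughput_mb_s(path, block_size=2 * 1024 * 1024):
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed
```

On the array this consistently lands around 35 MB/s for me, same as the copy tests.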
You would think it should be, but I can't think of any other reason why your performance would be that bad.
Unless one of the drives is messed up. You can check them individually with something like SpinRite or the manufacturer's disk utility.
You may also be running into some sort of motherboard incompatibility problem. Some PCI-X controllers don't play as well as they should when used in a PCI slot with certain PCI controllers. It's possible this is the situation you've run into, but there's no way to tell unless you talk to Adaptec support.
Actually I'm glad you brought that up - the possibility that the card isn't quite so friendly with dropping down to a regular PCI slot. I had that on my list of possible causes, but it was rather low on the list.
Before I head over to Adaptec (I'll have to phone them; the website doesn't work for me), I wanted to ask about stripe size. I've read some conflicting info here on the effects different stripe sizes have. Mine's set to 256K. Say a 10MB file is being written - does the controller have less work because fewer stripes are needed, or does it effectively not matter because it's still 10MB worth of stripes, so to speak? I saw you saying that the XOR calculations are not actually the hard part of a controller's duties, but then how is it that stripe sizes affect performance?
On this controller, 256K stripes could definitely hurt performance with small files. Reduce the stripe size to 32K.
You're correct: for a 10MB file there's 10MB of XOR computation to do, regardless of stripe size. But with a stripe size of 256K, if the OS updates a file smaller than that (even as small as one 4K cluster), the controller still has to update 256K worth of XOR. I think you could conceivably gain performance on this controller by reducing the stripe size to 32K.
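To put rough numbers on that, here's a back-of-the-envelope model. It assumes (my assumption, not measured firmware behavior) that the controller recomputes parity over every stripe unit a write touches:

```python
import math

# Back-of-the-envelope model: the controller recomputes parity over every
# stripe unit the write touches, so a tiny write still costs a full stripe
# unit of XOR. (Assumption for illustration, not measured firmware behavior.)
def xor_kb_per_write(write_kb, stripe_kb):
    units_touched = math.ceil(write_kb / stripe_kb)
    return units_touched * stripe_kb

small_write_256k = xor_kb_per_write(4, 256)   # 4K cluster update, 256K stripes
small_write_32k = xor_kb_per_write(4, 32)     # same update, 32K stripes
big_write_256k = xor_kb_per_write(10240, 256) # 10MB file, 256K stripes
big_write_32k = xor_kb_per_write(10240, 32)   # 10MB file, 32K stripes
```

The large-file cost comes out the same either way, but the 4K update costs 256K of XOR at a 256K stripe versus 32K at a 32K stripe - which is why the stripe size matters for small writes but not for your big sequential copies.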
I see what you're saying about small files and the stripe size business. However, in conclusion you would agree that copying a large file - a gig in size or greater - from the RAID 5 array (thus reading) to somewhere else should not be bottlenecked at 35 MB/s, right?
So at the very least I've confirmed that it is indeed a problem.