I have just put 2 x 320GB Western Digital Caviar Blue 16MB Cache HD's into RAID 0. I can see an obvious improvement in data transfer rate, however, the seek time sucks! Is this typical of all RAID 0 configurations, or did I do something wrong? Please help!
It looks about right to me, pretty much what I would expect.
If your OS is not on the array and it's purely a storage array, that access time may be a little slow; but if the OS is on it, that figure is actually pretty good.
RAID 0 does not improve access (seek) times. What RAID 0 does is split the activity between two drives so that you can (in theory) do twice as many I/Os per second. But it still takes the same amount of time to do each individual I/O.
It's like going from a road with one northbound lane to two northbound lanes. The new road can handle twice as many cars, but the speed limit is still the same.
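To put some toy numbers on that analogy, here's a rough Python sketch (my own illustration, not a benchmark; the 12 ms seek figure is invented): each request still costs the same seek, but two drives service the queue in parallel, so IOPS doubles while per-request latency doesn't budge.

```python
# Toy model: seek-bound random I/O on one drive vs. a two-drive RAID 0 set.
# Assumes every request costs a fixed 12 ms seek (a made-up figure).

SEEK_MS = 12.0
NUM_REQUESTS = 100

# Single drive: requests are serviced one after another.
single_drive_total = NUM_REQUESTS * SEEK_MS

# RAID 0: requests alternate between two drives working in parallel,
# so each drive handles half the queue.
raid0_total = (NUM_REQUESTS / 2) * SEEK_MS

print(f"Per-request latency: {SEEK_MS} ms in both setups")
print(f"Single drive, {NUM_REQUESTS} requests: {single_drive_total:.0f} ms "
      f"({NUM_REQUESTS / (single_drive_total / 1000):.0f} IOPS)")
print(f"RAID 0, {NUM_REQUESTS} requests:       {raid0_total:.0f} ms "
      f"({NUM_REQUESTS / (raid0_total / 1000):.0f} IOPS)")
```

The queue drains twice as fast (more lanes), but any one car still drives the same distance at the same speed.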
The truth is that no RAID organization really improves access times. The best you can do is with mirrored RAID levels such as RAID 1 or RAID 1+0 - in those organizations there are two copies of the data so if the controller is smart enough it can direct an I/O request to the drive whose head is closest to the required block. But even that optimization doesn't improve access times all that much.
The only way to significantly improve access times is to use a drive that is inherently faster. High spin-rate drives like a 10Krpm Velociraptor are faster. SSDs (Solid State Disks) are way faster - they have access times that are about 100X faster than a hard drive because they have no mechanical heads to move around. That's why people pay the big bucks for them.
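The RAID 1 read trick mentioned above can be sketched in a few lines of Python (track numbers and head positions here are invented for illustration; real controller firmware is far more involved): the controller sends the read to whichever mirror's head has the shortest travel to the target.

```python
# Sketch of the RAID 1 "closest head" read optimization: both mirrors
# hold the same data, so dispatch the read to the drive whose head is
# nearest the target track. All numbers below are hypothetical.

def pick_mirror(head_positions, target_track):
    """Return the index of the mirror with the shortest head travel."""
    distances = [abs(pos - target_track) for pos in head_positions]
    return distances.index(min(distances))

# Drive 0's head sits at track 100, drive 1's at track 9000.
heads = [100, 9000]
print(pick_mirror(heads, 250))   # target near drive 0's head -> 0
print(pick_mirror(heads, 8500))  # target near drive 1's head -> 1
```

Even so, this only shortens the seek portion of the access; rotational latency is untouched, which is why the win is modest.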
But I'm pretty sure my seek times got slower; I do remember single hard disks having seek times of less than 10ms. Could it be that having 2 hard drives doubles my seek time?
It is actually normal. Your 2 hard disks are not mechanically synced, so an access that spans both drives has to wait for each disk to be at exactly the correct sector. So your average access time is normally around 50% higher than with a single disk.
lafontma is right - but it depends on the stripe size of the RAID set and the size of the I/O you're doing. If an I/O request can be completed using only one drive, then the access time is basically the same. But if the I/O request requires reading (or writing) both drives, then it won't complete until the slowest drive completes.
So for small I/Os the access time will typically be similar; for large I/Os it will typically be slower. Fortunately, for large I/Os the longer access time may be offset by the faster transfer rate, since you can be reading from both disks in parallel.
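That "slowest drive wins" effect is easy to see with a quick Monte Carlo (uniform random seek times in [0, 16] ms are an assumption for illustration, not a measurement): an I/O that touches both stripes finishes at the max of two independent seeks, and that max averages noticeably higher than a single seek.

```python
# Monte Carlo: a small I/O waits for one drive's seek; a large I/O that
# spans both stripes waits for the slower of two independent seeks.
import random

random.seed(42)

TRIALS = 100_000
MAX_SEEK_MS = 16.0  # assume seeks uniformly distributed in [0, 16] ms

single_sum = 0.0
striped_sum = 0.0
for _ in range(TRIALS):
    a = random.uniform(0, MAX_SEEK_MS)
    b = random.uniform(0, MAX_SEEK_MS)
    single_sum += a            # small I/O: one drive's seek
    striped_sum += max(a, b)   # large I/O: wait for the slower drive

avg_single = single_sum / TRIALS
avg_striped = striped_sum / TRIALS
print(f"avg single-drive seek: {avg_single:.2f} ms")  # about 8 ms
print(f"avg both-drive seek:   {avg_striped:.2f} ms")  # about a third higher
```

Under this uniform model the max of two seeks averages about a third more than one seek; with rotational latency and queuing on top, figures in the neighborhood of the 50% quoted above are plausible.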