I have a pretty noob question. I have experienced some slowdown when reading from multiple SATA drives at the same time. I have two SATA II internal drives each connected to a different SATA port on my motherboard. They are separate, no RAID here.
If I do a long read on both drives at the same time, I get significantly slower performance than if I only read from one of them. I've confirmed this in HD Tune. I was wondering if anyone could help me understand why this is happening. I was reading that SATA has no bus arbitration, so I assumed each drive would be able to read at full speed regardless of what the other was doing.
This would be strange. Normally, if you use the chipset's SATA connectors, each port should get full bandwidth even when all ports are used at the same time. I confirmed this in my own testing, where I used software RAID and each drive drew full bandwidth.
What kind of hardware/chipset do you have, how did you test and what were the exact results? Have any screenshots/benchmark results for us to look at?
Thanks very much for the reply. I'm not too sure about the hardware, it is an old Dell Dimension 4700, I'm using the SATA connectors on the motherboard. After posting I realized that this system has SATA I and not SATA II like I originally thought. Could that be the problem? Did SATA I share throughput among drives?
I tested using HD Tune, running the benchmark read test on each drive at the same time. One drive alone averaged 52.9 MB/s, and that dropped to 47 MB/s while the second drive was being tested. Access time also increased from 19 ms to 35 ms when the second drive was being tested.
I retested this on a more current system running two drives, again non-RAID, on a motherboard supporting SATA II. Using HD Tune I could see that both drives achieved similar performance regardless of whether I tested them separately or simultaneously. So it seems like a limitation of the older system but I'm not sure how to explain it.
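For anyone who wants to reproduce this without HD Tune, here's a minimal Python sketch of the same test: time one sequential read stream alone, then two at once, and compare the MB/s figures. For safety this version reads two scratch files it creates itself; the raw device paths mentioned in the comments are just examples, and a real disk test would need reads large enough to defeat the OS page cache.

```python
# Sketch of the "one stream vs. two streams at once" read test.
# To test real disks, point the paths at raw devices instead of scratch
# files (e.g. /dev/sda on Linux, r"\\.\PhysicalDrive0" on Windows; both
# need admin rights) and read far more than your RAM size.
import os, tempfile, threading, time

def timed_read(path, block=1024 * 1024, limit=64 * 1024 * 1024, out=None, key=None):
    """Sequentially read up to `limit` bytes from `path`; store MB/s in out[key]."""
    start = time.perf_counter()
    done = 0
    with open(path, "rb") as f:
        while done < limit:
            chunk = f.read(block)
            if not chunk:          # hit end of file/device
                break
            done += len(chunk)
    out[key] = done / (time.perf_counter() - start) / 1e6  # MB/s

# Two scratch files standing in for the two drives (8 MiB each for the demo).
paths = []
for name in ("disk_a", "disk_b"):
    fd, p = tempfile.mkstemp(prefix=name)
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(8 * 1024 * 1024))
    paths.append(p)

solo, together = {}, {}
timed_read(paths[0], out=solo, key="a")    # first: one stream alone
threads = [                                 # then: both streams at once
    threading.Thread(target=timed_read, kwargs=dict(path=p, out=together, key=k))
    for p, k in zip(paths, ("a", "b"))
]
for t in threads: t.start()
for t in threads: t.join()
print({"solo": solo, "together": together})

for p in paths:
    os.remove(p)
```

If the two drives really are independent, the "together" numbers should roughly match the "solo" number; if they share a bus, the combined total will hit a ceiling.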
The only explanation I have would be that your motherboard's chipset does not offer Serial ATA at all, but instead has a separate onboard SATA chip (Silicon Image, JMicron or Promise) that supplies Serial ATA over a PCI bridge; this is not a "native" solution and would yield the performance levels you described.
To know if this is correct, I would need to know the chipset type. If you run Windows, you can download CPU-Z (a free utility) and look at the Motherboard tab, which should display your system's chipset. If that chipset has no Serial ATA support of its own, the SATA ports must come from an add-on solution like I described.
If you do have PCI SATA, using RAID for additional performance is futile, as RAID over a PCI bridge adds significantly to CPU utilization and I/O latency; in my testing, even one disk on PCI with the others on native SATA slowed things down. PCI is best avoided whenever performance is required.
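The HD Tune figures quoted above are at least consistent with a shared-PCI theory. A quick back-of-envelope check, using the classic 32-bit/33 MHz PCI spec (the ~100 MB/s practical ceiling after protocol overhead is my own rough assumption):

```python
# Can a shared 32-bit/33 MHz PCI bus feed two drives at once?
PCI_THEORETICAL_MBPS = 133   # 32 bits * 33 MHz = 133 MB/s, shared by all devices on the bus
PCI_PRACTICAL_MBPS = 100     # rough real-world ceiling after overhead (assumption)
SINGLE_DRIVE_MBPS = 52.9     # one drive benchmarked alone (from the thread)
DUAL_DRIVE_MBPS = 47.0       # each drive when both run simultaneously (from the thread)

demand_two_drives = 2 * SINGLE_DRIVE_MBPS   # what two unthrottled drives would want
observed_total = 2 * DUAL_DRIVE_MBPS        # what was actually delivered

print(f"Two drives want {demand_two_drives:.1f} MB/s combined,")
print(f"but a shared PCI bus realistically delivers ~{PCI_PRACTICAL_MBPS} MB/s total.")
print(f"Observed combined throughput: {observed_total:.1f} MB/s")
```

Two drives asking for ~106 MB/s combined would bump into a shared PCI bus, while native chipset SATA gives each port its own path, which matches the SATA II system showing no slowdown.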
Just thinking out loud here, but you did say it's an older system. There is still some CPU power needed to run all this. If you're reading from both drives on an older system, I would imagine the older, slower CPU might be holding things back a bit.
Usually the CPU overhead of basic RAID levels (0, 1) is very low; even slow CPUs can handle it.
If the transfers go over PCI, though, the CPU overhead is bigger because of the many interrupts PCI generates. Interrupts are demanding because each one literally interrupts whatever the CPU was doing; the CPU has to set its current work aside, service the interrupt first, and resume later. With many interrupts this leads to choppy behaviour, and in some (rather extreme) cases even mouse movement can become sluggish.