Now, I know a lot has been covered on this subject, but after paging through the TH search results on transfer rates, I still couldn't find a specific answer to something that has bothered me for years.
Only recently, though, have I seen this problem really metastasize: reduced transfer rates during multiple simultaneous HDD-to-HDD file transfers.
First of all, I understand that sending files "single-stream" from one HDD to another in a single transfer gives the most direct read/write path, and thus the fastest rates, and that queuing multiple transfers slows things down because the drives have to do a lot more platter skating. What I have a hard time understanding is where ALL that bandwidth overhead is going.
For instance, I may be running an HDD-to-HDD transfer and hitting 130 MB/s, but when I queue another transfer, they may both settle to less than 40 MB/s each, which leaves more than 50 MB/s of throughput just -gone-. I can't seriously be losing all of that just to seek times across the platters... can I?
Even on some "single-stream" transfers where the data is spread across multiple platters, I don't see any degradation in speed.
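Just to show where my head is at with the seek-time theory, here's a rough back-of-the-envelope model. The chunk size and per-switch penalty in it are pure guesses on my part (not measured), so treat it as a sketch of how interleaving *could* eat throughput, not as an explanation:

```python
# Rough model: what happens to throughput when one drive has to interleave
# two sequential streams. All constants are guesses for illustration only.

SEQUENTIAL_RATE_MB_S = 130.0  # what a single transfer hits on my setup
CHUNK_MB = 2.0                # guess: MB moved before the heads switch streams
SWITCH_COST_S = 0.015         # guess: seek + rotational latency per switch (~15 ms)

# Each chunk now costs its transfer time plus one head switch.
time_per_chunk = CHUNK_MB / SEQUENTIAL_RATE_MB_S + SWITCH_COST_S

combined_rate = CHUNK_MB / time_per_chunk   # total MB/s the drive still delivers
per_stream_rate = combined_rate / 2         # share each of the two transfers sees

print(f"Combined: {combined_rate:.1f} MB/s, per transfer: {per_stream_rate:.1f} MB/s")
```

With those made-up numbers it lands in the same neighborhood as what I actually see, but I have no idea whether a 15 ms switch cost or a 2 MB chunk is anywhere near realistic, which is really the heart of my question.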
Another example is when running transfers between four completely unrelated drives.
I may be running a transfer from drive A to drive B, with one drive on channel 1 and the other on channel 2. When I then start a transfer from drive C to drive D (say on channels 3 and 1, respectively), the -entire- list of transfers bottoms out, with a large chunk of bandwidth missing once they all settle down into their anemic rates.
I thought this could be a case of buffer under-run, but across the entire system?
All of my drives are SATA II or better, all have 32 MB of cache or more, and all spin at 7,200 RPM.
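If anyone wants to reproduce this outside of a file manager, something like this rough sketch should do it; the file paths are just placeholders for whichever pair of drives is being tested, and the timing is crude:

```python
#!/usr/bin/env python3
# Crude concurrent-copy throughput check. The paths below are placeholders --
# point each pair at a large file on the source drive and a path on the destination.
import os
import shutil
import threading
import time

TRANSFERS = [
    ("/mnt/driveA/big_file_1.bin", "/mnt/driveB/big_file_1.bin"),
    ("/mnt/driveC/big_file_2.bin", "/mnt/driveD/big_file_2.bin"),
]

def timed_copy(src, dst):
    """Copy one file and report its average rate in MB/s."""
    start = time.time()
    shutil.copyfile(src, dst)
    elapsed = time.time() - start
    size_mb = os.path.getsize(dst) / (1024 * 1024)
    print(f"{src} -> {dst}: {size_mb / elapsed:.1f} MB/s over {elapsed:.1f} s")

# Start every copy at once so the transfers compete, the same way queuing
# a second job in the file manager does. Comment out one pair to get the
# single-transfer baseline.
threads = [threading.Thread(target=timed_copy, args=pair) for pair in TRANSFERS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```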
Does anyone know what's going on here? Am I missing something?
Thanks everyone,
T.