AHCI - CPU usage among hard drives

January 30, 2012 5:47:14 PM

More of a trivia question, but it's something that has been bothering me, and I want to know why.

Back before AHCI, there were many different drivers and implementations, so CPU usage could vary quite a bit. But now that AHCI is used for nearly everything, why does CPU usage vary among hard drives for a given throughput and block size?

Assuming two drives are using the exact same generic built-in drivers, what would cause one drive to incur more kernel time than another for a given I/O load?

Thanks :-)
January 30, 2012 11:13:49 PM

Chances are that one drive is receiving more I/O requests than the other. Two drives can be transferring data at the same rate yet consume different amounts of CPU time if one drive is, for example, processing 100 reads of 1MB each per second while the other is processing 1000 reads of 100KB per second. The CPU time required to manage the I/O is related to the number of requests per second, not the number of bytes per second.
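The arithmetic above can be sketched as a toy model. The 20 µs per-request kernel overhead is a made-up number, just to show the scaling:

```python
# Toy model: assume a fixed kernel overhead per I/O request
# (the 20 microseconds is hypothetical, for illustration only).
PER_REQUEST_OVERHEAD_S = 20e-6

def cpu_fraction(requests_per_sec):
    """Fraction of one CPU core spent servicing I/O requests."""
    return requests_per_sec * PER_REQUEST_OVERHEAD_S

# Both drives move 100 MB/s, but with very different request rates:
drive_a = cpu_fraction(100)   # 100 reads/s of 1 MB each  -> 0.2% CPU
drive_b = cpu_fraction(1000)  # 1000 reads/s of 100 KB each -> 2% CPU
```

Same throughput, ten times the CPU time, because the per-request cost dominates.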
January 31, 2012 5:01:39 PM

While I agree with your post, what I don't understand is how hard drive benchmarks can produce completely different results on different hard drives.

If the OS requests data block 0xABC123, then the hard drive should return that and nothing else. Unless a given HD has different prefetch behavior and returns extra data that the OS blindly accepts.

For example, some HD benchmarks transfer a fixed amount of data. If you take the total MB/s and divide it by kernel CPU time, you get MB/s per CPU%.

Assuming the exact same amount of data is transferred and both drives use the same block sizes, the number of I/O requests should be nearly, if not exactly, identical.

This means the kernel time per I/O varies among AHCI drives.
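The metric described above is just throughput divided by kernel time; a quick sketch, with made-up benchmark numbers for illustration:

```python
def mb_per_s_per_cpu(throughput_mb_s, kernel_time_pct):
    """Throughput delivered per percent of kernel CPU time."""
    return throughput_mb_s / kernel_time_pct

# Two drives with identical throughput but different kernel time
# (numbers are hypothetical):
drive_a = mb_per_s_per_cpu(120.0, 4.0)  # 30.0 MB/s per CPU%
drive_b = mb_per_s_per_cpu(120.0, 6.0)  # 20.0 MB/s per CPU%
```

If the request counts really are identical, a gap like this points at a per-request cost difference somewhere in the stack.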
January 31, 2012 5:19:10 PM

My guess is that the different cache sizes on the drives, and the firmware's algorithms for fetching data off the platters and caching the results, make the difference.
January 31, 2012 7:49:19 PM

Kewlx25 said:
While I agree with your post, what I don't understand is how hard drive benchmarks can produce completely different results on different hard drives.
OK, I didn't understand you to be referring to identical benchmarks running on the two drives. If you run the same benchmark on both drives, and both are using the same transfer sizes, then I agree that the CPU time should normally be the same. Some of the reasons I can think of that would cause a difference are:

- if the drives are attached to different controllers then different drivers may be involved.
- if the drives are configured through the motherboard BIOS, then different RAID organizations imply different amounts of CPU workload.
- if one of the drives is in PIO mode, then transferring each byte read or written would cause a lot of extra CPU usage.

Hawkeye22 said:
My guess is that the different cache sizes on the drives, and the firmware's algorithms for fetching data off the platters and caching the results, make the difference.
I don't really agree with this, because caching is internal to the drive. Effective caching could improve the transfer rate, and a faster transfer rate would imply higher CPU utilization to service the I/O requests (other factors such as transfer size being equal), but it wouldn't cause higher CPU utilization per transfer.
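To put the distinction in code: under a toy model with a fixed (hypothetical) kernel cost per I/O, caching that raises the completed request rate raises total CPU time, but the CPU cost per transfer stays constant:

```python
PER_REQUEST_OVERHEAD_S = 20e-6  # hypothetical fixed kernel cost per I/O

def cpu_stats(requests_per_sec):
    """Return (total CPU fraction, CPU seconds per request)."""
    total = requests_per_sec * PER_REQUEST_OVERHEAD_S
    per_request = total / requests_per_sec  # constant regardless of rate
    return total, per_request

slow_total, slow_per = cpu_stats(500)  # little caching: fewer I/Os complete
fast_total, fast_per = cpu_stats(800)  # effective caching: more I/Os complete
# fast_total > slow_total, yet slow_per == fast_per
```

So a drive-side cache can change total CPU usage only by changing the request rate, not the per-transfer cost, which is why it can't explain a gap in kernel time per I/O.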