Known HDD Tweaks... Best approach?

JMLow54

Reputable
Mar 18, 2015
I recently completed an analysis of my HDDs, in particular the file sizes currently on them and those still to be added.

One factor is that the sector size is set at the factory and cannot be changed. Where the potential benefit comes in, though, is cluster size.

I've seen improvements in read/write times by tweaking the cluster size allocated when the HDD is formatted. So far, because of the time it takes to save and restore each drive (and other time impacts), I've only increased the cluster size to 8,192 bytes.

However, it would really save time if I knew where the break-over point was. I know there is a point of diminishing returns; I just don't know where it is.

Of course, FORMAT permits a maximum cluster size of 64K. I know the point where the cost/performance ratio starts to decline is somewhere between 8,192 bytes and 64K.
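As a rough illustration of that tradeoff, here is a minimal sketch (mine, not from any tool mentioned in this thread) that estimates wasted slack space and the number of allocation units per file for the standard candidate cluster sizes, assuming the ~8.7GB average file size reported further down:

```python
# Back-of-the-envelope look at the cluster-size tradeoff for large files.
# Assumptions: NTFS-style clusters and an average slack of half a cluster
# per file; the ~8.7 GB average file size comes from the drive analysis below.

AVG_FILE_SIZE = int(8.7 * 1024**3)                      # ~8.7 GB
CANDIDATE_CLUSTERS = [4096, 8192, 16384, 32768, 65536]  # bytes

for cluster in CANDIDATE_CLUSTERS:
    clusters_per_file = -(-AVG_FILE_SIZE // cluster)    # ceiling division
    expected_slack = cluster / 2                        # average wasted bytes per file
    slack_pct = expected_slack / AVG_FILE_SIZE * 100
    print(f"{cluster:>6} B clusters: {clusters_per_file:>10,} clusters/file, "
          f"~{expected_slack / 1024:.0f} KB slack (~{slack_pct:.6f}% of the file)")
```

Under those assumptions, wasted space is negligible at every candidate size for files this large; the practical differences lie in allocation overhead and fragmentation behavior rather than raw transfer speed.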

What I don't know is whether the following factors also have to be examined and adjusted to get the best price/performance ratio from the format of the drive:

1) Memory capacity?
2) CPU power/speed?
3) BUS speed?
4) Drivers?
5) A combination of two or more listed?
6) Something not listed?

I do know that the age of the HDDs also plays a part, along with the version of SATA in use (which the drives must support).

My rig has the following features:
A) AMD 4GHz 8-core processor.
B) 16GB DDR3 2133
C) SATA III (though the HDDs have not all been updated yet; only 2 of the bank support SATA III speeds.)
D) ASUS SABERTOOTH 990FX GEN3 R2.0 mobo (the board had to be hardened due to the environment it operates in)

As to capacity, the five internal HDDs total 8.5TB (gross).

As to file sizes, two of the drives (4TB and 2TB) are at capacity now, and those were the ones analyzed. The average file size is 8.7GB, with a maximum of 48GB and a minimum of 2.7GB.
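(For reference, that kind of survey is easy to script; here is a minimal sketch, with a placeholder root path, that reports the count, average, minimum, and maximum file size under a directory tree.)

```python
import os

def file_size_stats(root):
    """Walk a directory tree and return (count, average, min, max) file sizes in bytes."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                sizes.append(os.path.getsize(os.path.join(dirpath, name)))
            except OSError:
                pass  # skip files that vanish or are inaccessible mid-scan
    if not sizes:
        return 0, 0, 0, 0
    return len(sizes), sum(sizes) / len(sizes), min(sizes), max(sizes)

count, avg, smallest, largest = file_size_stats(r"D:\capture")  # placeholder path
GB = 1024**3
print(f"{count} files: avg {avg / GB:.1f} GB, min {smallest / GB:.1f} GB, max {largest / GB:.1f} GB")
```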

As data is captured, it is written and then read back later, so retrieval speed is what matters. I don't wish to tweak a lot of different settings in the BIOS or Windows 7 at this point; I want to start at the storage hardware level and find a good balance between capacity utilization and I/O.

Any guidance is greatly appreciated.
 
kanewolf

Titan
Moderator
You will only get truly improved performance if you have software that is optimized for a particular I/O size. Let's say you did create 64K clusters. IF you had software that ONLY read or wrote in 64KB chunks, you could improve performance. But desktop software, and even most server software, is not written with an emphasis on efficient I/O. Databases and a few other types of commercial software can be tuned; custom software can be tuned. If you want performance, SSDs or RAM disks are what you need to think about. Mechanical HDDs won't perform.
 
Solution
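To make that concrete, here is a minimal sketch (not from the thread) of what "software that only reads or writes in 64KB chunks" looks like in practice; the paths are placeholders and the chunk size matches the 64K cluster size discussed above:

```python
CHUNK = 64 * 1024  # 64 KB, matching the largest FORMAT allocation unit

def copy_in_chunks(src_path, dst_path, chunk=CHUNK):
    """Copy a file using fixed-size reads and writes of the chosen chunk size."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:            # empty read means end of file
                break
            dst.write(block)

# Usage (placeholder paths):
# copy_in_chunks(r"D:\capture\sample.dat", r"E:\archive\sample.dat")
```

Typical applications instead issue reads and writes of whatever size happens to be convenient, which is why cluster-size tuning on its own rarely changes their throughput.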

JMLow54

Reputable
Mar 18, 2015
This issue is endemic within the software world. I'd hoped an OS-level solution had been implemented in the microcomputer world, and I find it has not. My 40 years in IT (mid-range and mainframe) taught me that inefficient business software was the single biggest drag on speed and reliability. My background in software design and deployment does not a hardware technologist make... :( Such is the reason I reach out to forums like this one: to find an answer that is accurate and appropriate for the issue at hand.