
Optimum cluster size

June 23, 2003 3:50:07 PM

What are some optimum cluster sizes for WinXP/NTFS on various hard drive configurations (single, RAID, SATA, etc)?

Can anyone point me to any good articles/resources on this subject?

Thanks!
-DOOM
__________

<i>GeneticWeapon says: "Your days posting here have about come to an end... I promise you that by the time I'm done making my point to you, you wont want to be here anymore...you fuucking bitch."</i>


June 24, 2003 12:43:57 PM

That will depend on your applications and how you want your system to function, but basically there are two choices:

Small clusters make better use of your hard disk space but have slower access times.
Large clusters waste more of your hard disk space but offer faster access times.

These two points are affected by the size of the files on your drive (large or small). Example: if you have a 32 KB cluster size, then a 33 KB file will take up two whole clusters, so a hard disk full of small files with a large cluster size will really cut down your usable space.
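The cluster arithmetic works out like this (a quick Python sketch; the helper names are just for illustration):

```python
import math

def clusters_needed(file_size, cluster_size):
    """Files always occupy whole clusters, so round up."""
    return math.ceil(file_size / cluster_size)

def slack_bytes(file_size, cluster_size):
    """Space allocated to the file but not actually used by it."""
    return clusters_needed(file_size, cluster_size) * cluster_size - file_size

# The 33 KB file on 32 KB clusters from the example above:
print(clusters_needed(33 * 1024, 32 * 1024))  # 2 clusters
print(slack_bytes(33 * 1024, 32 * 1024))      # 31744 bytes (31 KB) wasted
```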

In general, large files do better with large cluster sizes and vice versa. That is why many people have their OS on one drive and their games etc. on another (e.g. many small .dlls in Windows versus many large texture/sound files in games).

4.77MHz to 4.0GHz in 10 years. Imagine the space year 2020 :) 
June 24, 2003 3:43:15 PM

Cool, mang, thanks a lot!!

June 29, 2003 4:40:00 AM

Assuming you have SP1, you can run Analyze in the XP defragmenter and look at the average file size for the volume. Choose a cluster size that suits that value for optimum performance. There are also tools for analysing a volume for the best cluster size; try a web search.
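If you'd rather script it than eyeball the defragmenter report, here's a rough Python sketch (not a Microsoft tool; the function name is made up) that walks a volume and computes the average file size:

```python
import os

def average_file_size(root):
    """Walk a directory tree; return (file count, average size in bytes)."""
    total = count = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # skip files we can't stat (permissions, etc.)
    return count, (total / count if count else 0.0)

# e.g.  count, avg = average_file_size(r"C:\Program Files")
```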

It depends on the usage, but speed, space efficiency, and reliability should be considered as a whole where possible.

Also consider that fault tolerance can be a deciding factor when choosing cluster sizes, even though it's not commonly associated with them. In general, larger cluster sizes make for more fault-tolerant NTFS volumes.

:eek:
June 29, 2003 1:55:46 PM

I ran tests on a pair of WD Raptors in a RAID 0 configuration using 4 KB and 64 KB chunk (stripe) and cluster sizes.

I found sequential write performance in Sandra was 9 MB/s with 4 KB clusters and a whopping 94 MB/s with 64 KB clusters.

All other results including sequential read were the same.

In my opinion, using a large cluster size decreases the amount of fragmentation, since fewer locations are needed to write the data, which in turn also increases write performance and eventually read times.
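To put rough numbers on that claim: a file can fragment into at most one extent per cluster it occupies, so larger clusters cap the possible fragmentation. A quick sketch (hypothetical helper, assuming whole-cluster allocation):

```python
def max_extents(file_size, cluster_size):
    """Upper bound on fragments: one extent per occupied cluster."""
    return -(-file_size // cluster_size)  # ceiling division

# A 1 MB file can scatter across up to 256 spots at 4 KB clusters,
# but only 16 at 64 KB clusters:
print(max_extents(1024 * 1024, 4 * 1024))   # 256
print(max_extents(1024 * 1024, 64 * 1024))  # 16
```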

Storage is cheap these days so I don't give a second thought to wasting it.

<b>Vorsprung durch Dontwerk</b>.....<i>as they say at VIA</i>