On many arrays, the disk chunk size is 64K by default. A storage expert on Clariion arrays told me a long time ago that it is better to use a 256K or 512K stripe size, meaning, for example, RAID5 with 4+1 or 8+1 disks, or RAID6 with 4+2 or 8+2 respectively.
This makes sense as long as the array actually takes advantage of it, but I don't know whether this is standard behavior for any array or whether it is manufacturer-specific.
If any expert around here has a theory for or against this, I would be grateful.
There are two settings you need to be concerned with: the stripe size for RAID 0/5/6, and the allocation size within the OS.

The stripe size is the size of each block of data written out to the member disks of the array, i.e. 64k gets written to disk 1, 64k to disk 2, and then 64k back to disk 1 in a 2-disk RAID 0. The allocation size is the minimum amount of data allocated in one unit on the file system; you can think of it as the minimum space one file will take up.

Ideally you want these values to match, and you want your disk offset to be evenly divisible by your allocation size. That way, one read from the OS corresponds to one block being read on disk.
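A rough sketch of the two ideas above, with hypothetical sizes: which member disk a given logical offset lands on in a round-robin RAID 0 layout, and how many on-disk blocks a single read touches depending on whether it is aligned:

```python
def raid0_disk(offset, stripe_size, num_disks):
    # Stripes are dealt out round-robin: stripe 0 -> disk 0, stripe 1 -> disk 1, ...
    return (offset // stripe_size) % num_disks

def blocks_touched(offset, length, block_size):
    # Count the on-disk blocks a single read spans.
    first = offset // block_size
    last = (offset + length - 1) // block_size
    return last - first + 1

# 64k stripes on a 2-disk RAID 0: successive 64k chunks alternate disks.
print(raid0_disk(0, 64 * 1024, 2))           # 0 (disk 1 in the text above)
print(raid0_disk(64 * 1024, 64 * 1024, 2))   # 1
print(raid0_disk(128 * 1024, 64 * 1024, 2))  # 0 again

# An aligned 64k read touches one 64k block; shift it by 4k and it straddles two.
print(blocks_touched(0, 64 * 1024, 64 * 1024))         # 1
print(blocks_touched(4 * 1024, 64 * 1024, 64 * 1024))  # 2
```

The misalignment case is why the partition offset matters: every read pays for an extra block even when the sizes match.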
If your allocation size is larger than your block size, say an allocation size of 128k with a block size of 64k, you will always read 2 blocks on disk for every 1 read request from the OS.
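The same arithmetic in a short sketch, using the 128k-over-64k numbers above:

```python
import math

def blocks_per_request(allocation_size, block_size):
    # Each OS-level allocation unit must be assembled from whole on-disk blocks.
    return math.ceil(allocation_size / block_size)

print(blocks_per_request(128 * 1024, 64 * 1024))  # 2 blocks per OS read
print(blocks_per_request(64 * 1024, 64 * 1024))   # 1 when the sizes match
```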
As the posts above mentioned, a smaller block size is preferred for smaller files because you're reading less data for a given file. For large files, larger blocks decrease the number of requests and allow a larger possible disk in Windows (Windows has a max number of blocks per partition), at the expense of wasting space on smaller files. If you have 1,000 1k files and use a 256k block size, you'll be using 256MB of disk space to store those files even though they really only contain 1MB of data.
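The small-file waste is easy to compute: each file occupies at least one whole allocation unit. A sketch using the 1,000 1k files and 256k allocation size from above:

```python
def space_used(num_files, file_size, allocation_size):
    # Each file rounds up to a whole number of allocation units.
    units_per_file = -(-file_size // allocation_size)  # ceiling division
    return num_files * units_per_file * allocation_size

consumed = space_used(1000, 1024, 256 * 1024)
actual = 1000 * 1024
print(consumed // 1024, "KB on disk")  # 256000 KB on disk
print(actual // 1024, "KB of data")    # 1000 KB of actual data
```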