RAID 0 Cluster size change

mep916

Distinguished
Sep 7, 2007
Currently, I'm running two Raptors in a RAID 0 configuration. I'd like to experiment with the cluster sizes to see if I gain better performance with a certain size. The current cluster size is 64k. Vista is currently installed on this array.

If I change the cluster size, will I lose my data? Will Vista become non-bootable?

Thanks,
Michael
 

cyberjock

Distinguished
Aug 1, 2004
Changing your cluster size will wipe your data unless you have one of the few RAID controllers that supports changing it on the fly. I've done some serious research on cluster sizes for RAID arrays (RAID0 and RAID5 setups).

Cluster size is good to play around with if you are dealing with very few, very large files. A larger cluster size means the MFT has to keep track of fewer clusters per file, which is good if you are trying to minimize MFT overhead. When I say large I'm talking 1GB+ files accounting for a significant number of files (not total space used) on your system.

I personally handle very large files that are usually contiguous on my file server, so technically speaking I should have been the poster child for a large cluster size: over 95% of my files are over 1GB. I did try changing the cluster size of my RAID array when I first set it up to measure performance. Did it matter? Not really. The performance gain going from 32k all the way to 1024k was less than 2%. I have read reviews that somewhat contradict this 2%, so your results may differ. I opted to leave it at the default since it didn't seem to matter.

Will Vista become non-bootable? I'm not sure what the requirements are for Vista, but if your RAID is hardware, Vista shouldn't care what your cluster size is.

Honestly, I'm inclined to think that the people who change the cluster size and actually see a performance increase are thoroughly familiar with cluster sizes and know even more than I do. With that in mind I left mine at the default, and I recommend you do the same. I'm not saying you're too incompetent to make it work; I'm just saying that since you're asking the question, you're probably one of the people who wouldn't see the benefit anyway. I don't think you should change it, for 2 reasons:

1. If you change it you will most likely have to wipe the drive and start over. That's a lot of hours wasted for what is most likely little performance benefit.
2. You mentioned that Vista is installed on the array. Like I said above, cluster size matters when dealing with large files. If you are running Vista on it, you are effectively saying that it WON'T be storing a large number of large files. If you have 2x74GB Raptors, that's 148GB. Assuming Vista takes up 5GB, that leaves 143GB for files. If every file is exactly 1GB (maximizing the number of large files), that's only 143 files. I'm sure Vista installs a few thousand files, if not ten thousand or more. Even if it's only 1000 files, 143 out of 1000 is about 14%. That's not much IMO. (The arithmetic is sketched just below.)
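
A quick back-of-envelope version of that estimate in Python. The 5GB Vista footprint and the ~1000-file count are assumptions from the post, not measurements:

```python
# Back-of-envelope: how many "large" (1GB) files can the array even hold?
# The 5GB Vista footprint and ~1000 OS files are assumptions, not measured.
array_gb = 2 * 74                        # two 74GB Raptors striped: 148GB
usable_gb = array_gb - 5                 # minus assumed Vista footprint: 143GB
file_gb = 1                              # "large file" threshold used above
max_large_files = usable_gb // file_gb   # at most 143 such files
print(f"{max_large_files} large files vs ~1000 OS files "
      f"-> {max_large_files / 1000:.0%} at best")
```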

If you are doing video editing or something of that nature, you'd probably be better off using a separate drive as your boot device and editing your video files on the RAID0 Raptor array. Then you might be able to justify a large cluster size. But why would you want to change your configuration so significantly JUST to use a larger cluster size? Just not worth it IMO.
 

mep916

Distinguished
Sep 7, 2007
Thanks. Excellent information. Besides Windows, I have some games and applications installed on the Vista partition. Your information about cluster size is very useful. I think I understand now. :sweat: I'm not too concerned about losing data - I have backups. Basically, I wanted to know what to expect when (if) I start making changes.

BTW, I think the RAID is software based. It's an nVidia RAID controller. I'm running the EVGA nForce 680i mobo. I don't know if that helps you determine the difference.

Thanks again for the lengthy, detailed explanation. :)

Michael
 

SomeJoe7777

Distinguished
Apr 14, 2006
There are two parameters we're talking about here:

1. Cluster size. This is the allocation unit used by the NTFS file system. This has nothing to do with the RAID controller. RAID controllers don't know anything about the file system.

Changing the cluster size requires a 3rd party program like Partition Magic, or creating the partition with the desired cluster size before Windows is installed.

2. Stripe size. This is a parameter of the RAID controller, and can be changed only by the RAID controller. Some RAID controllers can change the stripe size without losing any data (this is called a migration), others require that you delete the array and re-create it. The stripe size is independent of the file system.

64K is a typical stripe size that many RAID controllers default to. The default cluster size for NTFS is 4K.
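
If you want to confirm what an existing volume uses, `fsutil fsinfo ntfsinfo C:` reports "Bytes Per Cluster". Here's the same check as a minimal Python sketch using the Win32 GetDiskFreeSpaceW call (Windows-only; the C:\ drive letter is just an example):

```python
import ctypes

def cluster_size(root="C:\\"):
    # GetDiskFreeSpaceW reports sectors-per-cluster and bytes-per-sector;
    # their product is the volume's allocation unit ("cluster") size.
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root),
        ctypes.byref(sectors_per_cluster), ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters), ctypes.byref(total_clusters))
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

print(cluster_size())  # 4096 on a default-formatted NTFS volume
```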
 

mep916

Distinguished
Sep 7, 2007
16
0
18,510


Thanks for the info. Do you think I will see any significant performance increases if I start changing the cluster sizes? This is how I planned to test the array:

Basically, I installed Windows XP on a Hitachi hard drive. I'm hoping XP will recognize the array once I install the RAID controller driver. In XP, I want to benchmark the array using HD Tach with different cluster sizes until I find the size that provides the best performance. Obviously, this is time consuming. I'll be very disappointed if I spend hours messing with this array and don't see any speed increase, not to mention the likely data loss and reinstall of Windows Vista. What do you think?
 

cyberjock

Distinguished
Aug 1, 2004
@SomeJoe7777

Doh! You're right! I mixed the 2 together into a complete mess! Thanks for catching that one. I do know the difference, and I feel nooby for making that mistake. I was more tired than I thought.

Stripe size operates at a 'lower' level than cluster size. Stripe size, again, won't matter much for you, and for the same reason as cluster size, except that with VERY large stripe sizes (4MB+) you might see performance start to drop: the stripes are so big you are practically reading from the 2 drives at different times, which starts to negate the performance-enhancing design of RAID0.

There are advantages to keeping the stripe size a multiple of the NTFS cluster size, but in practice you'd have to choose very wacky settings not to meet that rule of thumb.

To confirm: what I changed in my own testing was the cluster size, not the stripe size, as I wasn't using RAID0. I wouldn't expect your results to be any different even though you are using RAID0.

If you run benchmarks with various cluster sizes, the results may not reflect real-world performance. Many factors come into play in benchmark numbers, which makes benchmarks almost unreliable for comparing cluster sizes alone.

If you want to understand how cluster size works, read below:

Let's pretend you have a hard drive formatted NTFS with 2 files: one is 1024k and one is 1,048,576k (1MB and 1GB respectively). Cluster size is 4k for this scenario.

File #1 uses 256 clusters (1024k/4k). File #2 uses 262,144 clusters (1,048,576k/4k). When you need to read file #1 in its entirety, your OS reads the MFT (Master File Table) to find the clusters the file occupies, then begins collecting them up. File #1 has only 256 clusters listed in your MFT; file #2 has 262,144. What happens if we change the cluster size to 8k? Now file #1 uses 128 clusters and file #2 uses 131,072 clusters, so your files use half the number of cluster entries in your MFT. That means less overhead for the OS, since it has fewer clusters to read, process, and request. Going from 4k to 8k saved 131,072 entries for file #2, but a mere 128 entries for file #1. This is also good for fragmentation, because 131,072 clusters are more likely to be contiguous than 262,144.

So now you might be thinking, "Well, why not just go as big as possible? It seems like the fewest cluster entries possible is best." The downside is that the cluster size is your smallest possible allocation. If you have a 1-byte file, it consumes 1 full cluster: if a cluster is 4k, that's 4k; if it's 128k, you'll lose 128k. This is known as slack space.

The other drawback is that the OS writes data in whole clusters. If you change 1 byte in a file, the OS can't change just that 1 byte; it rewrites the whole cluster with the new information. That can mean reading all of the data in that cluster, then writing it all back with the 1-byte difference. A 4k cluster size is clearly the better option if you have to write a 1-byte change. Using large cluster sizes for your page file is VERY bad for this reason. There are also some optimizations when your page file is stored on a partition with a 4k cluster size, because paged memory happens to also be in 4k blocks (if I remember correctly).
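
Here's a minimal Python sketch of that arithmetic (the file sizes are just the examples from above):

```python
# Clusters an NTFS file occupies, and the slack wasted, at various
# cluster sizes. Mirrors the 1MB / 1GB / 1-byte examples above.
import math

def clusters_used(file_bytes, cluster_bytes):
    # NTFS allocates whole clusters, so round up.
    return math.ceil(file_bytes / cluster_bytes)

def slack_bytes(file_bytes, cluster_bytes):
    # Space allocated beyond the file's actual size.
    return clusters_used(file_bytes, cluster_bytes) * cluster_bytes - file_bytes

KB = 1024
for cluster in (4 * KB, 8 * KB, 128 * KB):
    one_mb = clusters_used(1024 * KB, cluster)         # file #1: 1MB
    one_gb = clusters_used(1024 * 1024 * KB, cluster)  # file #2: 1GB
    tiny = slack_bytes(1, cluster)                     # a 1-byte file
    print(f"{cluster // KB:>3}k clusters: 1MB file -> {one_mb} clusters, "
          f"1GB file -> {one_gb} clusters, 1-byte file wastes {tiny} bytes")
```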

This space is lost for every file on your system that doesn't consume a full cluster. This is why FAT32 aged so fast: hard drive sizes grew rapidly (exceeding Moore's Law, in a hard drive sense), and FAT32 had a finite number of clusters it could assign, so bigger hard drives meant larger cluster sizes, and larger cluster sizes meant more 'slack space' lost. NTFS has limitations too, but they don't start to become worrisome until you hit 16 exabytes (1 exabyte is 1 billion GB). Remember that NTFS was designed for business use and planned well into the future; that limit was set when NTFS was designed in the 90s, and we still can't fathom even 1% of it.
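
To make the FAT32 point concrete, here's a small sketch of the default cluster sizes Windows' format tool picks for FAT32 by volume size (table quoted from memory, so treat the exact breakpoints as approximate):

```python
# Rough sketch: default FAT32 cluster size by volume size, as Windows'
# format tool picks them (table from memory -- approximate, not authoritative).
FAT32_DEFAULTS = [
    (8,    4),   # up to 8 GB   -> 4 KB clusters
    (16,   8),   # 8-16 GB      -> 8 KB clusters
    (32,  16),   # 16-32 GB     -> 16 KB clusters
    (2048, 32),  # 32 GB-2 TB   -> 32 KB clusters
]

def fat32_cluster_kb(volume_gb):
    for limit_gb, cluster_kb in FAT32_DEFAULTS:
        if volume_gb <= limit_gb:
            return cluster_kb
    raise ValueError("volume too large for FAT32")

# Bigger drives force bigger clusters, and every file wastes about half
# a cluster on average -- that's the growing slack space described above.
for gb in (4, 12, 30, 120):
    print(f"{gb:>4} GB volume -> {fat32_cluster_kb(gb)} KB clusters")
```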

Now, NTFS does have the ability to store files inside the MFT itself. In my case a 1-byte file would be stored in the MFT instead of allocating a cluster for it. Files that are so small that the allocation information would exceed the size of the file are stored inside the MFT. I don't know exactly what that limit is, but it seems to be somewhere between 3-5k from my experience.

--------------------------------------------

RAID Stripe Size:

RAID stripe size determines how large a 'unit' of data is on each drive. A stripe size of 512k means the array is written in 512k chunks, with each successive chunk stored on the next drive.

In a 3 drive RAID0 setup:

0kb - 512kb - Drive 0
512kb - 1024kb - Drive 1
1024kb - 1536kb - Drive 2
1536kb - 2048kb - Drive 0


Remember this is below the OS and the partition; this is what the RAID controller does. Bigger stripes mean fewer stripes, but also make it more likely that small files will sit on only 1 drive. There's A LOT that goes into stripe size performance, and I don't think I should go into that tonight as it wasn't your question. I left this at the default on my RAID setup because I don't want a 'nonstandard' number that recovery tools might not work with.
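
For illustration, here's a minimal Python sketch of that striping layout (just the address math for the 3-drive example above, not any controller's actual firmware logic):

```python
# RAID0 striping: map a logical byte offset to (drive, offset on that drive).
def raid0_location(offset_bytes, stripe_bytes=512 * 1024, n_drives=3):
    stripe_index = offset_bytes // stripe_bytes
    drive = stripe_index % n_drives     # stripes rotate across the drives
    offset_on_drive = ((stripe_index // n_drives) * stripe_bytes
                       + offset_bytes % stripe_bytes)
    return drive, offset_on_drive

KB = 1024
for off in (0, 512 * KB, 1024 * KB, 1536 * KB):
    drive, local = raid0_location(off)
    print(f"logical {off // KB:>4}kb -> drive {drive}, "
          f"local offset {local // KB}kb")
# Reproduces the table above: drives 0, 1, 2, then back to drive 0.
```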

-------------------------------------------

Now, you said you use a software RAID on your machine. I hate to disappoint, but a software RAID can't be bootable :(. The reason is that you'd have to load a significant portion of the OS before the software RAID drivers load, and with RAID0 the computer will start booting from the first hard drive while your OS is actually scattered across multiple drives; it has no way to realize this, and no idea what 'striping' is. If you have a RAID controller, the controller knows what striping is, because it's actually the one doing all of that work. Your computer will see 1 giant hard drive for the RAID0, not multiple separate drives. Some diagnostic utilities will be able to recognize that multiple drives exist, but the OS will only see 1 giant "C" drive. There is a way to do a software RAID1 and have it be bootable, but I have not done it, and AFAIK you have to do some basic Windows registry editing and other tricks to get it to work.

If you have booted from the RAID before, then you definitely aren't using a software RAID. If you set the RAID up in a BIOS-type environment it's hardware; otherwise it's software. I can't seem to get nvidia.com to come up from where I am right now, so I can't check your motherboard info.

As for cluster size, you can use a program like Partition Magic to change your cluster size without losing data. I haven't tried changing the cluster size of my boot partition this way, but I would assume you can; it would just reboot and then make the change. Do back up your data before performing this operation, though.
 

SomeJoe7777

Distinguished
Apr 14, 2006


I did some testing with cluster sizes and stripe sizes on my Promise M500i iSCSI array when I first installed it. I tried stripe sizes from 32K to 256K and cluster sizes from 4K to 64K, then tested each configuration with IOMeter to measure throughput in MB/s and transaction rate in IOPS.

I found less than 5% difference across all configurations. At certain queue depths, the array seemed to perform better with larger stripe sizes; at other queue depths, smaller was better. I saw a similar pattern with cluster sizes.

I ended up choosing a 64K stripe size and 4K cluster size to avoid complications, since nothing else that I found was making any kind of meaningful performance impact.
 

mep916

Distinguished
Sep 7, 2007
Wow! Even after everything I've read, I'm still way off base and providing incorrect information. My stripe size is 64K (default). The cluster size is 4KB (default). It's obvious I really don't know WTF I'm doing. I'm going to read, then re-read the previous posts. Then maybe, after more research, I'll start experimenting with different configurations. I really appreciate all the information, guys. It's very helpful. If I decide to goof with everything in the future, I'll post my benchmarks and let you know if I see any improvements.

Thanks again
Michael
 
