Best cluster size for an NTFS partition

Anonymous
August 13, 2005 1:49:59 AM

Archived from groups: microsoft.public.windowsxp.perform_maintain,microsoft.public.windowsxp.hardware,comp.sys.ibm.pc.hardware.storage

By default WinXP formats NTFS with 4K clusters, but what is the
best cluster size for my situation?

I have a 60 GB NTFS partition which I use mainly for storing
downloads (software and audio). It will be used by WinXP.

What would the best NTFS cluster size be if this were a 160 GB
partition filled mainly with 200 KB JPEGs and some 10 MB movie clips?

-------

I suspect that 4K might be the best for my 60 GB and 160 GB partitions
because it saves space. But I don't know if there are overheads in
the MFT and other metadata when the NTFS partition gets to 160 GB.

I also read that third-party defrag utilities (like Diskeeper and
PerfectDisk) will not work on NTFS clusters above a certain size. Is
this true? What is the biggest cluster size I can have if I still want
to defrag an NTFS partition?
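
(For reference: to confirm what cluster size a volume is currently formatted
with, a minimal Python sketch like the one below works. It assumes Python
running on Windows and uses the Win32 GetDiskFreeSpaceW call; the D:\ root
path is only an example.)

import ctypes
from ctypes import wintypes

# Ask Windows for the cluster ("allocation unit") size of a volume.
drive = "D:\\"                      # example root path - change as needed

sectors_per_cluster = wintypes.DWORD()
bytes_per_sector = wintypes.DWORD()
free_clusters = wintypes.DWORD()
total_clusters = wintypes.DWORD()

ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
    drive,
    ctypes.byref(sectors_per_cluster),
    ctypes.byref(bytes_per_sector),
    ctypes.byref(free_clusters),
    ctypes.byref(total_clusters),
)
if ok:
    print(drive, "cluster size:",
          sectors_per_cluster.value * bytes_per_sector.value, "bytes")
else:
    print("GetDiskFreeSpaceW failed for", drive)
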
Anonymous
August 13, 2005 1:50:00 AM

For disk info in XP, try reading article #814954 at Microsoft.

"Alex Coleman" wrote:

> [original question snipped]
Anonymous
August 13, 2005 2:38:25 AM

4K is optimal....

--
Carey Frisch
Microsoft MVP
Windows XP - Shell/User
Microsoft Newsgroups

-------------------------------------------------------------------------------------------

"Alex Coleman" wrote:

| [original question snipped]
Anonymous
August 13, 2005 3:55:52 PM

Just curious. Why?


--


Regards.

Gerry

~~~~~~~~~~~~~~~~~~~~~~~~
FCA

Stourport, Worcs, England
Enquire, plan and execute.
~~~~~~~~~~~~~~~~~~~~~~~~

"Carey Frisch [MVP]" <cnfrisch@nospamgmail.com> wrote in message
news:%23Fgz7z7nFHA.2224@TK2MSFTNGP10.phx.gbl...
> 4K is optimal....
>
> [rest of quoted message snipped]
Anonymous
August 13, 2005 3:55:53 PM

Because 4K is the page size the system uses when it is paging. It just seems
to make the operating system a bit more "snappy" [in my estimation]. I would
guess that it eliminates some extra overhead when the cluster size differs
from the page size and the system is making use of the pagefile.
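
(As a quick sanity check on that 4K figure, here is a small Python sketch,
assuming Python on Windows, that asks the kernel for the memory page size via
GetSystemInfo; on 32-bit x86 systems it reports 4096 bytes.)

import ctypes
from ctypes import wintypes

class SYSTEM_INFO(ctypes.Structure):
    # Win32 SYSTEM_INFO layout, with the leading union flattened to two WORDs.
    _fields_ = [
        ("wProcessorArchitecture", wintypes.WORD),
        ("wReserved", wintypes.WORD),
        ("dwPageSize", wintypes.DWORD),
        ("lpMinimumApplicationAddress", ctypes.c_void_p),
        ("lpMaximumApplicationAddress", ctypes.c_void_p),
        ("dwActiveProcessorMask", ctypes.c_void_p),
        ("dwNumberOfProcessors", wintypes.DWORD),
        ("dwProcessorType", wintypes.DWORD),
        ("dwAllocationGranularity", wintypes.DWORD),
        ("wProcessorLevel", wintypes.WORD),
        ("wProcessorRevision", wintypes.WORD),
    ]

info = SYSTEM_INFO()
ctypes.windll.kernel32.GetSystemInfo(ctypes.byref(info))
print("Memory page size:", info.dwPageSize, "bytes")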

--
Regards,

Richard Urban
Microsoft MVP Windows Shell/User

Quote from: George Ankner
"If you knew as much as you think you know,
You would realize that you don't know what you thought you knew!"

"Gerry Cornell" <gcjc@tenretnitb.com> wrote in message
news:%2301zRW$nFHA.2916@TK2MSFTNGP14.phx.gbl...
> Just curious. Why?
> [rest of quoted message snipped]
Anonymous
August 13, 2005 6:09:38 PM

In article <eiMl2OAoFHA.2920@TK2MSFTNGP14.phx.gbl>,
richardurbanREMOVETHIS@hotmail.com says...
> Because 4k is the data size used when the system is "paging". It just seems
> to make the operating system a bit more "snappy" [in my estimation]. I would
> guess that it may eliminate extra overhead involved when using
> larger/smaller cluster sizes, and the system is making use of the pagefile.

I have a drive that is used to store small images, many of them under 30 KB.
I have worked with the drive set at 512 bytes, at the default 4K, and even
larger - the 512-byte clusters waste the least slack space, and you
can really see this with 50,000+ files.

For database servers I move their data drive/array to larger cluster
sizes; 4K is way too small in my opinion.

Paging means little if you are not paging a lot.

What you have to do, to find the optimal size, is determine the size of
70% of your files, work out the amount of slack space they would waste,
and set the cluster size accordingly. Sure, tracking small
clusters is a performance hit, but wasted disk space is often more
of a problem for users.
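
(A rough Python sketch of that approach: it walks a directory tree and totals
the slack each candidate cluster size would waste. The scan root is a
placeholder path, the cluster sizes are the usual NTFS choices, and the
figures are only estimates since MFT-resident tiny files and compression are
ignored.)

import os

SCAN_ROOT = r"D:\downloads"        # placeholder path - point it at your data
CLUSTER_SIZES = [512, 1024, 2048, 4096, 8192, 16384, 32768, 65536]

def slack(file_size, cluster):
    """Bytes left unused at the tail of the last cluster of one file."""
    if file_size == 0:
        return 0
    return (-file_size) % cluster  # distance up to the next cluster boundary

totals = dict((c, 0) for c in CLUSTER_SIZES)
files = 0
for root, dirs, names in os.walk(SCAN_ROOT):
    for name in names:
        try:
            size = os.path.getsize(os.path.join(root, name))
        except OSError:
            continue               # unreadable file: skip it
        files += 1
        for c in CLUSTER_SIZES:
            totals[c] += slack(size, c)

print(files, "files scanned under", SCAN_ROOT)
for c in CLUSTER_SIZES:
    print("%6d-byte clusters: ~%.1f MB of slack" % (c, totals[c] / 2.0 ** 20))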

--

spam999free@rrohio.com
remove 999 in order to email me
Anonymous
August 13, 2005 6:09:39 PM

Also remember that if you go larger than 4K clusters, the built-in
defrag utility does not function on that drive/partition.

--
Regards,

Richard Urban
Microsoft MVP Windows Shell/User

Quote from: George Ankner
"If you knew as much as you think you know,
You would realize that you don't know what you thought you knew!"

"Leythos" <void@nowhere.lan> wrote in message
news:MPG.1d67c92ff48963ba989b9d@news-server.columbus.rr.com...
> [quoted message snipped]
Anonymous
August 13, 2005 11:05:29 PM

In article <ux282UBoFHA.3312@tk2msftngp13.phx.gbl>,
richardurbanREMOVETHIS@hotmail.com says...
> Also remember that if you go larger than 4k size clusters, the built in
> defrag utility does not function on that drive/partition.

I never use MS Defrag; I run its big brother, Diskeeper, and find
no problems with it.

--

spam999free@rrohio.com
remove 999 in order to email me
Anonymous
August 14, 2005 3:16:44 AM

On Sat, 13 Aug 2005 00:22:25 +0100, Alex Coleman wrote
(in article <96B13CD0592D31E75@67.98.68.12>):

> Can't find your reference 814954 at Microsoft. Is the number
> miskeyed?

Welcome, Alex, I see you have met our village idiot. Pay no attention to
anything posted by Andrew the Eejit - his sole aim is to cause damage and
disruption to as many computers as possible. He used to post with a valid
address, but I reckon people started complaining to him personally, so he now
posts via the CDO; he probably reckons he can't be traced that way... ;o)
<eg>
Anonymous
August 14, 2005 3:16:45 AM

"Evadne Cake" <magrat_garlick@hotmail.com> wrote in message
news:0001HW.BF242FDC0019D3C8F0407550@news.ngroups.net...
> On Sat, 13 Aug 2005 00:22:25 +0100, Alex Coleman wrote
> (in article <96B13CD0592D31E75@67.98.68.12>):
>
>> Can't find your reference 814954 at Microsoft. Is the number
>> miskeyed?
>
> Welcome, Alex, I see you have met our village idiot. Pay no attention to
> anything posted by Andrew the Eejit - his sole aim is to cause damage and
> disruption to as many computers as possible. He used to post with a valid
> address, but I reckon people started complaining to him personally, so he
> now
> posts via the CDO; he probably reckons he can't be traced that way... ;o)
> <eg>
>

Are you trying to win a Bulwer-Lytton award? What's wrong with using the odd
period here and there to organise things?

Kerry
Anonymous
August 15, 2005 8:45:34 AM

"Leythos" <void@nowhere.lan> wrote in message
news:MPG.1d67c92ff48963ba989b9d@news-server.columbus.rr.com...
>
> I have a drive that is used to store small images, under 30k many times,
> I have worked with the drive set at 512b and at the default 4k and even
> larger - the 512b provides the best in unwasted slack space - and you
> can really see this with 50,000+ files.
>

Yeah, you've gained a whole 90 MB by doing that!
Anonymous
August 15, 2005 9:03:25 PM

Agreed. For overall file system performance, a 4K cluster size is
best. You only really need to consider going larger if the drive is used
for large files (e.g. databases, large multimedia files) and absolute
speed is the primary concern.

- Greg/Raxco Software
Microsoft MVP - Windows File System

Want to email me? Delete ntloader.


"Carey Frisch [MVP]" <cnfrisch@nospamgmail.com> wrote in message
news:%23Fgz7z7nFHA.2224@TK2MSFTNGP10.phx.gbl...
> 4K is optimal....
> [rest of quoted message snipped]
Anonymous
August 16, 2005 10:29:04 PM

One of our servers uses a 64 KB block size. 700 KB worth of cookie data
can easily take 120 MB in users' roaming profile directories. The same data
copied to a 4 KB block size partition takes around 7 MB instead of 120 MB.
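
(A back-of-the-envelope Python sketch of how those numbers come about; the
~1,900 files of ~375 bytes each are made-up figures chosen only to land near
the totals quoted above.)

# Hypothetical figures: ~1,900 cookie files averaging ~375 bytes apiece,
# which together hold roughly 700 KB of actual data.
files = 1900
avg_size = 375                      # bytes of real data per cookie file

def on_disk(count, size, cluster):
    clusters_per_file = -(-size // cluster)        # ceiling division
    return count * clusters_per_file * cluster     # bytes actually allocated

for cluster in (4096, 65536):
    mb = on_disk(files, avg_size, cluster) / 2.0 ** 20
    print("%2d KB clusters: ~%.0f MB on disk" % (cluster // 1024, mb))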

SQL Server (MSDE) benefits from a 64 KB block size.

The benefit is that if you have a bunch of large files (on a second drive -
don't do this on your Windows system drive), you get better performance
when loading/saving the files. If you set up a second drive just to
store a bunch of multi-gigabyte MPG files, the 64 KB block size makes more
sense.

This usually isn't worth it, though. If you want to increase your
performance, set up a RAID 0 array across 2 or 3 drives. If you have two
drives that can each sustain 50 MB/s and you put them in RAID 0, you can
realize 90-100 MB/s sustained.

Some of this is my opinion; there are enough variables in systems today
that others may have different opinions based on those variables.