Does the number of files on a volume affect performance?

I don't know if this is an issue at all, so I thought I'd ask, since I don't know how to measure such performance changes.

I have two very fast drives (the primary is a VelociRaptor and the secondary is a 640 AAKS, both WD), and I'm very happy with the speed of my system as a whole, seeks in particular.

That said, I recently decided to install some stuff on my secondary, non-OS drive, which comes to about 5,000 directories and almost 70,000 files.

Although this lives on my secondary drive, I figure the OS drive has to store the location of all these files, but I only remember the days of the FAT table, etc.

Does the fact that the OS has to manage this extra list of 70k files and 5k directories affect the seek time of any file on the primary drive, since it now has a 'directory', however optimized, of files to look through to find any particular file?

I can delete potentially half of these files if I sort through them, but if there is no tangible effect on seek performance on the primary drive, I'd rather not.

Thoughts? Thanks!

BTW, in case it matters, I'm running Vista 32-bit.
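Since part of the question is how to measure this kind of thing at all, here is a minimal sketch: time metadata lookups (`os.stat`) on a random sample of files in one tree before and after populating a second tree. The file counts are made up, and temporary directories stand in for the two drives; real cross-volume behavior depends on the filesystem and on caching, so treat the numbers as illustrative only.

```python
import os
import random
import tempfile
import time

def make_tree(root, n_files):
    """Create n_files tiny files under root and return their paths."""
    paths = []
    for i in range(n_files):
        p = os.path.join(root, f"file_{i:05d}.dat")
        with open(p, "wb") as f:
            f.write(b"x")
        paths.append(p)
    return paths

def mean_stat_latency(paths, sample=200):
    """Average time to os.stat() a random sample of paths, in microseconds."""
    picks = random.sample(paths, min(sample, len(paths)))
    start = time.perf_counter()
    for p in picks:
        os.stat(p)
    return (time.perf_counter() - start) / len(picks) * 1e6

with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    primary = make_tree(a, 500)        # stand-in for the OS drive
    before = mean_stat_latency(primary)
    make_tree(b, 2000)                 # populate the "secondary" tree
    after = mean_stat_latency(primary)
    print(f"primary lookup before: {before:.1f} us, after: {after:.1f} us")
```

On a single machine the two timings typically come out close, though OS caching dominates at this small scale.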
  1. Defrag to regain performance. I don't find that installing lots of files slows the HDD down; fragmentation does.
  2. Yes, more files slow down the HDD and increase seek time. A large number of small files has more impact than a small number of large files.
  3. Crap, that's exactly what I have: a TON of small files.

    So yes, I know there has to be some difference, even if it's 0.0000000001%, but what I'm trying to find out is whether it's ever noticeable.

    I always defrag once a week, so that is not an issue, but I'm trying to decide whether I should spend the time removing all of the stuff I don't really need.

  4. You could reformat the secondary drive with a smaller block size, but I doubt you'd notice the difference. As long as you have used a good defrag program (not the Windows default), the lookup for any file should resolve to something like 'file000: start block 327777, end block 327778'. And smaller files actually fragment less often than large files anyway.

    Basically, I'd not worry about it.
  5. A large number of files on one drive should not have any effect on another; the index (the MFT on NTFS) is stored on the drive/partition where the files reside.

    You may notice some performance loss in Vista due to the file-indexing service searching through the files, and due to Vista running an automatic defrag.
  6. In general, small files themselves do not compromise file system performance. (At least with NTFS. FAT/FAT32 can start to have problems with large numbers of files).

    However, NTFS will store files smaller than a certain size directly in the MFT rather than allocating a block to them. Thus, with a lot of small files, it becomes important to keep the MFT itself defragmented, not just the files on the drive as a whole.

    The built-in Windows defrag tool will not defrag the MFT. You need a 3rd-party defrag tool to do that, such as Diskeeper, O&O Defrag, Raxco PerfectDisk, etc.
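A quick way to gauge how much of a file set the answer above applies to is to count how many files fall under the in-MFT residency size. This is only a sketch: the ~900-byte cutoff is an assumption (the real limit depends on the 1 KB MFT record size minus header and attribute overhead), and the path is hypothetical.

```python
import os

RESIDENT_THRESHOLD = 900  # bytes; assumed cutoff for MFT-resident files

def count_resident_candidates(root):
    """Count files at or under the threshold vs. all files under root."""
    small = total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or unreadable; skip it
            total += 1
            if size <= RESIDENT_THRESHOLD:
                small += 1
    return small, total

small, total = count_resident_candidates(r"D:\stuff")  # hypothetical path
print(f"{small} of {total} files are small enough to be MFT-resident")
```

If most of the 70k files land under the cutoff, it's the MFT (not the data area) that grows, which is why keeping the MFT defragmented matters here.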
  7. I'm not an expert in HDDs, but I can say that periodically defragging the HDD really helps a lot. I use Vopt, a tiny program which is really cool and much faster than Diskeeper, which in my opinion is a bloated app. Vopt does what it says. Whenever I defrag, I also delete all system restore points to save a lot of space. Vopt does all that.
    BTW, don't use the built-in Windows defrag tool, which is good for nothing.
  8. You have a big problem here. Adding many files increases the mean access latency: when the operating system tries to find a file, the head of the hard disk has to travel larger distances, so latency goes up. If you rarely use these files, you can do the following trick:
    Create a new partition, put your files there, and defrag the OS partition.
    If you frequently use these files, then you have a serious problem. The best solution in that case is to buy a new drive and put your files there.

    P.S. Defragmenting in this case only helps access to fragmented files; it will not increase overall performance.
  9. When the number of files on the disk increases (regardless of size), the number of records in the Master File Table (MFT) also increases. This may cause the MFT to grow beyond its preallocated zone, possibly causing it to fragment. A fragmented MFT reduces performance. Another point to note is that very small files (< 2 KB, IIRC) are stored within the MFT zone itself in order to reduce seek times, but unfortunately this may also lead to 'premature' fragmentation of the MFT, AFAIK.

    If the space occupied by files on the drive exceeds ~85-88%, then the newer files will encroach into the MFT pre-allocated zone.

    I think if the MFT grows very large due to the large number of files on the drive, Windows takes longer to query it to find the record for a particular file. This may be what causes the slowdown (relatively) compared to a nearly empty disk.

    As for the argument that tracks towards the outside of the platter are faster than those closer to the spindle, it's only true to an extent. IIRC, the actual physical placement of a file on the platter is entirely up to the drive controller (hardware), and NTFS has nothing to do with it. However, on an increasingly full drive, the probability of files ending up on the inner tracks is much higher, thereby slowing down access a bit.

    Regarding defragging, automatic defragmenters tackle fragmentation effectively in the background without bothering the user. Unlike with the older scheduled/manual defraggers, the user does not have to waste time defragging by hand. In fact, the more advanced automatic defraggers even resize the MFT automatically to proactively prevent its fragmentation, apart from defragging it (mostly) 'online' without the need for a boot-time defrag.
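The MFT growth described above is worth putting in scale. A back-of-envelope calculation for the OP's 70k files and 5k directories on a 640 GB drive, assuming the common 1 KB NTFS MFT record size and the 12.5% MFT zone mentioned in this thread (both assumptions, not measured values):

```python
MFT_RECORD_BYTES = 1024            # assumed default NTFS record size
files, dirs = 70_000, 5_000

mft_bytes = (files + dirs) * MFT_RECORD_BYTES
print(f"MFT grows by roughly {mft_bytes / 2**20:.0f} MiB")  # ~73 MiB

volume_bytes = 640 * 10**9         # WD 640 AAKS, decimal gigabytes
zone_bytes = volume_bytes * 0.125  # assumed 12.5% MFT zone
print(f"MFT zone on a 640 GB volume: ~{zone_bytes / 2**30:.0f} GiB")
```

So the record load from 70k files comes to tens of megabytes against a reserved zone of tens of gigabytes; under these assumptions the concern is the MFT fragmenting, not running out of room.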
  10. As usual, thanks for the great feedback.

    Well, since I posted this, and for reasons other than your feedback (there was none at the time), I removed all 70k files and 5k directories and am instead moving files over as needed; right now I'm at only 1% of where I was.

    So that said, if the MFT did become fragmented (at the time), once I permanently deleted all of this on my second hard drive, would the MFT on that drive become 'unfragmented' because it starts filling in the spots vacated by the files I deleted?

    Also, and I don't want to hijack my own thread, but I realize there is some real hatred towards the built-in Vista defragger. However, that is all I use right now, mostly because a few months back Maximum PC (or was it CPU?) ran an article about defraggers and which one is really better.

    Surprisingly, none performed much differently overall. Each had certain advantages over the others, but the weird thing is that the Windows defragger did better than most, especially when it came to OS boot times, etc.

    So which defrag tool (or links to technical reviews) really makes a difference? And as someone mentioned, I think Diskeeper performed the worst of them all, so I feel bad for people walking into Fry's, paying money, and getting WORSE performance than if they hadn't defragged at all.
  11. Actually, I found the article online...

    Maybe I misinterpreted the results...or maybe their review is faulty...but:
  12. arrpeegeer said:

    So that said, if the MFT did become fragmented (at the time), once I permanently deleted all of this on my second hard drive, would the MFT on that drive become 'unfragmented' because it starts filling in the spots vacated by the files I deleted?

    Good question, to which I don't have a clear answer. You'll probably have to ask someone familiar with NTFS internals to get a reliable answer. I am merely a dilettante in these arcane subjects. :D

    It appears that the MFT zone is indeed emptied when you delete those files, but if the 'excessively grown' MFT itself was fragmented, I don't think it becomes 'unfragmented', i.e. shrunk, because MS says the MFT cannot shrink.

    From the MS website,
    "As files are added to an NTFS volume, more entries are added to the MFT and so the MFT increases in size. When files are deleted from an NTFS volume, their MFT entries are marked as free and may be reused, but the MFT does not shrink. Thus, space used by these entries is not reclaimed from the disk.

    Because of the importance of the MFT to NTFS and the possible impact on performance if this file becomes highly fragmented, NTFS makes a special effort to keep this file contiguous. NTFS reserves 12.5 percent of the volume for exclusive use of the MFT until and unless the remainder of the volume is completely used up. Thus, space for files and directories is not allocated from this MFT zone until all other space is allocated first."

    Also, some more info here

    Hope this helps. :)

    As for the defragmentation part, I use Diskeeper 2008 Pro on both Vista and XP systems, and I find its performance excellent. I haven't benchmarked anything and don't plan to (it's just a waste of time), but Diskeeper keeps even nearly full drives with thousands of files nicely defragmented, and access is always fast. I'd rather go by my own experience than Max PC's tests (they recommended the useless Auslogics defragger at one time, IIRC, which gives me little confidence in their tests :D ).