More SSD Capacity Through NTFS Compression

- NTFS Is 19 Years Old
- Test Setup And Benchmarks
- NTFS Compression In Practice
- Benchmark Results: Sequential Read And Write (CrystalDiskMark)
- Benchmark Results: 4 KB Random Reads/Writes (CrystalDiskMark)
- Benchmark Results: 512 KB Random Reads/Writes (CrystalDiskMark)
- Benchmark Results: Launching Applications, Windows Startup And Shutdown
- Benchmark Results: PCMark 7
- Benchmark Results: SYSmark 2012
- Should You Compress Data On Your SSD?
NTFS Is 19 Years Old
Windows NT 3.1, released by Microsoft in 1993, ushered in a new era. Instead of the File Allocation Table (FAT) file system used previously, Microsoft introduced the NT File System (NTFS), which had a couple of notable advantages. For example, it lifted the 8.3 file name length limit carried over from the days of DOS: whereas FAT allows only short names drawn from a limited character set, NTFS supports file names up to 255 characters long and uses the Unicode character set. Long file names were also supported by FAT32, which succeeded FAT and was introduced with Windows 95b. But that update had a hard time competing with NTFS, too.
After all, NTFS gives users other benefits, like journaling: pending file system changes are first recorded in a journal on reserved space before being committed to disk. This allows quick recovery of an NTFS partition if write operations are interrupted by a system crash or power outage. NTFS also facilitates file and folder permissions, encryptable disk areas, user quotas, and the data compression capability we'll be testing today. Before you activate compression, though, we want you to be aware of how it works and what effect it will have on your system.
NTFS uses the LZNT1 algorithm (a variant of LZ77) for lossless data compression and, by default, 4096-byte clusters for data storage. The file system compresses data in units of 16 clusters, that is, in 64 KB increments. If a 16-cluster unit can't be compressed to fewer than 16 clusters, NTFS leaves it unchanged. If, however, LZNT1 can compress the 64 KB block to 60 KB or less, saving at least one cluster, the freed clusters are treated like a sparse file: NTFS simply does not allocate the parts of the file that contain no information (zero-byte sequences). A compressed file can therefore consist of uncompressed clusters, compressed clusters, and clusters declared sparse.
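The per-block decision described above can be sketched in a few lines of Python. This is a simplified model, not Microsoft's implementation: zlib stands in for LZNT1 (both are LZ77-family compressors), and the cluster and unit sizes are the NTFS defaults mentioned above.

```python
import os
import zlib  # stand-in for LZNT1; both are LZ77-family compressors

CLUSTER = 4096        # default NTFS cluster size in bytes
BLOCK = 16 * CLUSTER  # compression unit: 16 clusters = 64 KB

def clusters_needed(n_bytes: int) -> int:
    """Round a byte count up to whole clusters."""
    return -(-n_bytes // CLUSTER)

def store_block(block: bytes):
    """Decide, per 16-cluster unit, whether to store the block
    compressed or raw. Returns (mode, clusters_used)."""
    compressed = zlib.compress(block)
    if clusters_needed(len(compressed)) < clusters_needed(len(block)):
        # At least one cluster saved: store compressed; NTFS records
        # the freed clusters as sparse.
        return "compressed", clusters_needed(len(compressed))
    # No cluster saved: the unit stays uncompressed.
    return "raw", clusters_needed(len(block))

# A block of zero bytes compresses into a single cluster...
print(store_block(b"\x00" * BLOCK))
# ...while pseudo-random bytes can't be compressed and stay raw.
print(store_block(os.urandom(BLOCK)))
```

Note that the save-at-least-one-cluster rule means a block compressed to, say, 63 KB is still stored compressed (16 clusters would shrink to 16 only if no full cluster were saved), while a block that shrinks by less than 4 KB is left alone.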
No file types are excluded from the compression scheme, but just like any other kind of data compression, the LZNT1 algorithm is inefficient for files that are already compressed, such as JPG, AVI, and ZIP files. The compression takes place at the file system level, making it invisible at the application level. As far as Windows and its applications go, there is no difference between a compressed and an uncompressed file.
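The claim about already-compressed files can be verified with any LZ-family compressor. The sketch below uses Python's zlib as a stand-in for LZNT1 (which the standard library does not expose): repetitive data shrinks dramatically, while random bytes, which statistically resemble a JPG or ZIP payload, do not shrink at all.

```python
import os
import zlib

# Highly repetitive data: the best case for LZ77-style compression.
text = b"NTFS compresses repetitive data well. " * 1000
ratio_text = len(zlib.compress(text)) / len(text)

# Random bytes approximate an already-compressed payload (JPG, AVI, ZIP).
payload = os.urandom(len(text))
ratio_payload = len(zlib.compress(payload)) / len(payload)

print(f"repetitive text ratio: {ratio_text:.3f}")  # far below 1.0
print(f"random payload ratio:  {ratio_payload:.3f}")  # about 1.0, no saving
```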
Advantages: The greatest advantage of NTFS compression is, obviously, an increase in capacity. Owners of small SSDs especially should be happy about every additional megabyte of drive space reclaimed. Compressing data and reducing file sizes could also translate into faster read and write speeds, at least theoretically, since less data is transferred to and from the drive.
Disadvantages: According to Microsoft, NTFS compression is very CPU-intensive and not recommended for servers that handle large volumes of reads and writes. Even for home use, there are restrictions. You should only enable compression on folders with relatively few read and write accesses. More plainly, don't compress the Windows system folder. Also, copy operations are theoretically going to be slower, since the file system has to decompress the corresponding files first, copy or move them, and then compress them again. And if you send compressed files over a network, they're decompressed first, so no bandwidth is saved.
Another factor to consider: because NTFS compresses in 64 KB segments, it leaves data highly fragmented, especially easily compressible files, since they end up peppered with sparse clusters. A simple example shows this clearly: according to Microsoft, NTFS compression of a 64 KB data block generates one sparse cluster on average. Dividing a 20 GB file system into 64 KB segments yields 327,680 blocks, and thus roughly 327,680 sparse clusters by that calculation. This is particularly relevant to hard drives; SSDs aren't as affected, because their access times are so low that fragmentation is less of an issue.
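Microsoft's figure checks out arithmetically. A quick calculation, using binary units (1 GB = 1024³ bytes) as the article does:

```python
KB = 1024
GB = 1024 ** 3

volume = 20 * GB     # a 20 GB file system
segment = 64 * KB    # the NTFS compression unit
blocks = volume // segment

# At an average of one sparse cluster per 64 KB block,
# this is also the expected number of sparse clusters.
print(blocks)  # 327680
```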