New Windows-native NVMe driver benchmarks reveal transformative performance gains, up to 64.89% — lightning-fast random reads and breakthrough CPU efficiency
Microsoft's native NVMe driver will make the best SSDs even faster. Originally made available on Windows Server 2025, the driver's performance gains also carry over to consumer Windows 11 via simple registry hacks. News outlet StorageReview has put the new NVMe driver through its paces in its native habitat, yielding some eye-popping results that will make any storage enthusiast's mouth water.
The native NVMe driver brings improvements in three key areas of storage performance. Firstly, it substantially improves 4K and 64K random read bandwidth and IOPS, which translates to faster data access when the system is under heavy load or executing multiple tasks simultaneously.
Secondly, the driver dramatically reduces 4K and 64K random read latency, enabling faster response times under demanding workloads. By improving both bandwidth and latency, it delivers visible gains in latency-sensitive workloads.
Thirdly, and equally important, the NVMe driver reduces processor usage during sequential read and write operations, regardless of block size. Thanks to data transfer optimizations, processor overhead is lower, freeing up resources for other demanding workloads or background tasks. One potential knock-on benefit is lower power consumption, which matters for mainstream consumers and enterprises alike.
StorageReview's test bench consisted of two 128-core AMD EPYC 9754 (codenamed Bergamo) processors, 768GB of DDR5-4800 memory, and 16 Solidigm P5316 30.72TB PCIe 4.0 SSDs in a JBOD configuration. The publication ran FIO benchmarks on Windows Server 2025 (OS Build 26100.32370).
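StorageReview did not publish its exact FIO job files, but a test like its 4K random read run can be approximated with a job file along these lines (a sketch, not the publication's actual configuration — the engine, queue depths, job counts, and target device are assumptions and should be tuned to the system under test):

```ini
; Hypothetical fio job approximating a 4K random read test on Windows.
; Parameters are illustrative, not StorageReview's actual settings.
[global]
ioengine=windowsaio   ; Windows asynchronous I/O engine
direct=1              ; bypass the OS cache to measure the device/driver
time_based=1
runtime=60
group_reporting=1

[randread-4k]
rw=randread
bs=4k
iodepth=32
numjobs=8
filename=\\.\PhysicalDrive1   ; raw physical drive; adjust to the target disk
```

Running the same job file against the legacy and native driver configurations, with a reboot in between, is what makes the before/after comparison in the tables below meaningful.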
Microsoft Native NVMe Driver Performance
| Test | Non-Native Driver | Native Driver | Improvement |
|---|---|---|---|
| 4K Random Read (GiB/s) | 6.1 | 10.058 | +64.89% |
| 64K Random Read (GiB/s) | 74.291 | 91.165 | +22.71% |
| 64K Sequential Read (GiB/s) | 35.596 | 35.623 | +0.08% |
| 128K Sequential Read (GiB/s) | 86.791 | 92.562 | +6.65% |
| 64K Sequential Write (GiB/s) | 44.67 | 50.087 | +12.13% |
| 128K Sequential Write (GiB/s) | 50.477 | 50.079 | -0.79% |
According to StorageReview’s benchmarks, random read performance saw the most significant gains, with 4K and 64K read speeds increasing by 64.89% and 22.71%, respectively. Sequential reads at 64K remained within the margin of error, but at a 128K block size the native driver pulled ahead by 6.65%.
In terms of sequential write performance, using a 64K block size delivered a notable 12.13% increase. However, raising the block size to 128K provided no additional benefit, as results remained virtually unchanged.
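The "Improvement" column in the throughput table is a simple relative delta, and a quick Python check reproduces the published percentages from the raw GiB/s figures:

```python
# Relative improvement as used in the throughput table:
# (native - legacy) / legacy * 100
def improvement(legacy: float, native: float) -> float:
    return (native - legacy) / legacy * 100

# Throughput figures (GiB/s) from StorageReview's results
results = {
    "4K random read": (6.1, 10.058),
    "64K random read": (74.291, 91.165),
    "128K sequential read": (86.791, 92.562),
    "64K sequential write": (44.67, 50.087),
}

for name, (legacy, native) in results.items():
    print(f"{name}: {improvement(legacy, native):+.2f}%")
```

Rounded to two decimal places, the output matches the table: +64.89%, +22.71%, +6.65%, and +12.13%.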
| Test | Non-Native Driver | Native Driver | Improvement |
|---|---|---|---|
| 4K Random Read Latency (ms) | 0.169 | 0.104 | -38.46% |
| 64K Random Read Latency (ms) | 0.239 | 0.207 | -13.39% |
| 64K Sequential Write Latency (ms) | 0.399 | 0.558 | +39.85% |
| 128K Sequential Write Latency (ms) | 1.022 | 1.149 | +12.43% |
Latency testing yielded mixed results. Random read latency improved significantly, with 4K and 64K read times dropping by 38.46% and 13.39%, respectively.
In contrast, sequential write latency worsened. The 64K write latency increased sharply by 39.85%. However, it seems you can mitigate the penalty by switching to a 128K block size, where latency rose by only 12.43% — about one-third of the increase seen at 64K.
| Test | Non-Native Driver | Native Driver | Improvement |
|---|---|---|---|
| 64K Sequential Read CPU Usage | 44.89% | 37.11% | -7.78% |
| 128K Sequential Read CPU Usage | 61.56% | 49.56% | -12.00% |
| 64K Sequential Write CPU Usage | 70.44% | 57.78% | -12.66% |
| 128K Sequential Write CPU Usage | 58.44% | 47.33% | -11.11% |
One area where the NVMe driver delivered consistent gains was processor usage, regardless of whether the operation was a sequential read or write.
For sequential reads, 64K and 128K operations reduced processor activity by 7.78 and 12 percentage points, respectively. Sequential writes reflected similar gains, with 64K and 128K writes requiring 12.66 and 11.11 percentage points fewer processor resources.
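Note that, unlike the throughput table, the CPU "Improvement" column is an absolute difference in percentage points (e.g. 44.89% − 37.11% = 7.78), so the relative reduction in CPU work is actually larger. A small Python sketch makes the distinction explicit:

```python
# CPU utilisation figures (percent) from StorageReview's results
legacy_cpu = {"64K seq read": 44.89, "128K seq read": 61.56,
              "64K seq write": 70.44, "128K seq write": 58.44}
native_cpu = {"64K seq read": 37.11, "128K seq read": 49.56,
              "64K seq write": 57.78, "128K seq write": 47.33}

for test in legacy_cpu:
    pp = legacy_cpu[test] - native_cpu[test]   # absolute drop, in points
    rel = pp / legacy_cpu[test] * 100          # relative reduction
    print(f"{test}: -{pp:.2f} pp ({rel:.1f}% relative reduction)")
```

For the 64K sequential read case, the 7.78-point drop works out to roughly a 17% relative reduction in CPU time spent on I/O.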
Microsoft’s highly awaited NVMe driver is a crucial update that the company arguably should have launched years ago. For almost a decade and a half, Windows users have been limited by Microsoft’s outdated storage stack, which has visibly struggled to keep pace with advances in SSD technology. With PCIe 5.0 SSDs delivering unprecedented performance and PCIe 6.0 drives on the horizon, the demand for a modern storage stack has never been greater.
The native NVMe driver (nvmedisk.sys) ships in both Windows Server 2025 and Windows 11 25H2. Despite its presence, Microsoft doesn't enable it by default. Instead, it operates as an opt-in feature that Windows users must enable via specific registry changes. The need for broader compatibility and third-party vendor support is likely what keeps the native NVMe driver opt-in for now.
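The article does not reproduce the registry values themselves (StorageReview documents them), but mechanically the opt-in boils down to writing a few DWORD values and rebooting. A generic sketch using Windows' built-in reg.exe is shown below — the key path and value name here are placeholders, not the real entries, so take the actual names from StorageReview's guide:

```bat
:: HYPOTHETICAL sketch -- "NativeNvme"/"Enabled" are placeholder names,
:: not the real registry entries. Consult StorageReview's guide for those.
:: Back up the key before editing, so the change is easy to revert.
reg export "HKLM\SYSTEM\CurrentControlSet\Control" control-backup.reg

:: Write a DWORD feature flag (placeholder path and value name)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\NativeNvme" /v Enabled /t REG_DWORD /d 1 /f

:: Reboot so the storage stack picks up the change
shutdown /r /t 0
```

Because the switch is just registry data, it can be reverted the same way, which is worth keeping in mind given the backup-software compatibility issues readers report in the comments below.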

Zhiye Liu is a news editor, memory reviewer, and SSD tester at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.
-
ktosspl: Welcome to 2014, Windows. Linux has had a clean, native I/O stack since version 3.3, without any legacy SCSI translation layer... -
palladin9479:
ktosspl said: Welcome to 2014, windows. Linux has native clean io stack since version 3.3 without any legacy SCSI translation layer...
Windows doesn't have a SCSI translation layer; that's just the author explaining it the best they can.
The NT storage API only supported a single queue per disk device. NVMe supports multiple queues and acts more like RAM than disk storage. For a long time, the Linux kernel also only supported a single queue per disk device. Not long ago, I was having to balance virtual workloads across multiple LUNs for that precise reason. -
Darkbreeze: In computing, "native" generally means "by default" or "without needing extra steps". If a registry hack is still required for this to work, then it is not native. -
wakuwaku:
https://en.wikipedia.org/wiki/Native_(computing)
Darkbreeze said: In computing native generally means "by default" or "without needing extra steps". If a registry hack is still required for this to work, then it is not native.
No it doesn't, only YOU think of it that way. I can guarantee it. Go ahead and make a poll. The general public and nerds will all agree with the above wiki, as we always have since we learned about computing.
palladin9479 said: Windows doesn't have a SCSI translation layer, just the author explaining it the best they can
Erm, yes it does? The Windows devs themselves explain it when posting about the new native NVMe support on their official blog. Are you telling me you know something that the devs themselves don't? Maybe you know that there is actually real MAGIC working underneath? Or are those devs lying about a nonexistent translation layer?
Here's a quote from a Windows dev at Microsoft:
This improvement comes from a redesigned Windows storage stack that no longer treats all storage devices as SCSI (Small Computer System Interface) devices—a method traditionally used for older, slower drives. By eliminating the need to convert NVMe commands into SCSI commands, Windows Server reduces processing overhead and latency.
If you convert commands from A to B, that is a translation layer.
Further down the same post:
With Native NVMe in Windows Server 2025, the storage stack is purpose-built for modern hardware—eliminating translation layers and legacy constraints
The dev clearly calls it a translation layer. What more do you want?
And I know you're too lazy to search for the old Tom's article to get the link, just like you were too lazy to read the article before commenting, so here you go:
https://techcommunity.microsoft.com/blog/windowsservernewsandbestpractices/announcing-native-nvme-in-windows-server-2025-ushering-in-a-new-era-of-storage-p/4477353 -
CrazyCarrot911: It works well on my machines that I tested it on for a few months, but the BIG drawback is that most if not all backup software won't see your drives in Windows, so no backup the usual way, and most software working with drives will also not see your drives. If you can live with that, go for it; it's 3 reg keys that you can enable/disable as you like. I have them disabled for now as I want a backup every day with my Acronis Cyber Protect Advanced suite, which doesn't see the drives if in NVMe mode. -
palladin9479:
wakuwaku said: The dev clearly calls it a translation layer. What more do you want?
There is no translation layer... I think some aren't realizing how SATA / SAS works.
Serial Attached SCSI, aka SAS, uses a protocol very similar to parallel SCSI and was the preeminent enterprise disk protocol until very recently. SATA uses a version of that protocol, more akin to the ATA protocol of the IDE days, but still similar enough that SATA disks can work on a SAS HBA. The big difference is that the SCSI bus has multiple devices attached to a single bus, can only address one at a time, and has a single queue for them all; we called this the SCSI chain. SATA/SAS both have direct connections from the HBA to the device itself, so you can address each device individually and each device gets its own queue. NVMe allows for multiple queues per device, which is what this driver is doing.
As for the confusion, in Microsoft speech "SCSI" tends to refer to any disk that isn't ATA or uses a separate HBA; I've installed SATA adapters and seen MS label them as "SCSI Disks".
https://www.seagate.com/files/staticfiles/support/docs/manual/Interface manuals/100293068j.pdf
That is the SCSI Command Protocol, which is what I think you are referring to. It's the binary commands that are sent to storage devices for them to read or write data.
And found the translation guide for getting subsystems written for the SCSI command set (SAS/FC) to speak to a NVME storage device.
https://www.nvmexpress.org/wp-content/uploads/NVM-Express-SCSI-Translation-Reference-1_1-Gold.pdf
I suspect this is what the MS people were speaking about, the same guide that the Linux kernel developers used. It's a simple mapping of binary commands. -
DS426:
Darkbreeze said: In computing native generally means "by default" or "without needing extra steps". If a registry hack is still required for this to work, then it is not native.
We're talking about storage drivers and protocols, not Windows settings. The specific context is important. -
DS426:
palladin9479 said: There is no translation layer... I think some aren't realizing how SATA / SAS works. ...
The physical busses (SATA, SAS, NVMe over PCIe) still communicate the same; Windows performs the SCSI translation in its storage stack before the bus cares. It's not just a matter of queuing but also driver overhead.
Microsoft has all of the SCSI to NVMe translations documented: https://learn.microsoft.com/en-us/windows-hardware/drivers/storage/stornvme-scsi-translation-support -
Darkbreeze:
wakuwaku said: No it doesn't, only YOU think of it that way. I can guarantee it. Go ahead and make a poll. The general public and nerds will all agree with the above wiki (https://en.wikipedia.org/wiki/Native_(computing)), as we always had since we learned about computing.
Anything EVER that anybody said was "natively" supported in Windows NEVER, EVER EVER EVER meant "but you'll have to go modify the registry". Ever. So you know, eh, it's not even worth saying, so I'll skip it, but in reality, you and maybe five of your friends are the only ones that don't get that "natively" means Windows already has it built in and will do it automatically. And believe me, I know WAY the hell more computer nerds than you've ever known, in all probability. Like 46 years' worth. And that doesn't count the ones from the first ten years of my life, or the plethora of them I've met here over the last 12 years as a member and moderator. Much less the 25-30 years of my life I've been building custom systems and meeting people through that.
And as far as the general public goes, pfffft, the general public gives a thing on Amazon a five-star review if it shows up on their porch undamaged, regardless that it ends up being a sock with seven toes. So what the general public agrees with doesn't hold much water compared to what actually IS. But still, obviously I know what you mean, and in a sense you're right. But it's only right to those who are more technically astute. The layman sees "natively" and assumes Windows is just going to make it work, which in this case would not be truthful. -
Darkbreeze: And why is it that, in both of those articles related to the NVMe registry hack, there is no actual link to how to do it? Let's tell people about this thing we think is great, but not tell them how to do it or where to go to find that information. Not that we can't find it, but jebus, if you're going to open the door, put the goodies in the bag.
Edit: Never mind, I found it. They could have made it clearer than just a hidden link to StorageReview, like "visit StorageReview here for more information". But honestly, it's the least of the problems with the articles here these days.