A Deeper Look At NVM Express
Before we jump into testing Intel's newest storage hardware, we need to look back to 2011. Although that was only three years ago, the SSD landscape was notably different. Intel and other vendors were pushing SSDs as drop-in hard drive replacements. They occupied the same form factors (2.5" with 7 and 9.5 mm z-heights) and utilized the same physical interface (SATA 6Gb/s) and the same software stack (AHCI). Performance and reliability improved at regular intervals. Metrics like performance consistency and write endurance wouldn't be universally recognized for more than a year. And while some SSDs saturated the SATA interface in sequential workloads, the majority were bottlenecked inside the drive itself. Flash controllers, firmware, and NAND hadn't evolved to the point where the host interface presented a performance challenge.
Then, in March of 2011, the industry took an incredibly forward-looking stance and released the NVMe 1.0 specification.
And by industry, I mean almost every major player in the flash storage market. The 13-company Promoter Group, backed by more than 80 members, included Intel, Micron, Samsung, Dell, EMC, NetApp, IDT, and Marvell. Their goal was to free future storage products from the limitations of SATA and AHCI. NVMe (Non-Volatile Memory Express) is a from-the-ground-up specification that replaces AHCI for PCIe-attached SSDs, focusing on efficiency, scalability, and performance. AHCI was developed at a time when words like sectors and cylinders described storage, and stack overhead was a tiny fraction of the media access time.
What may come as a shock is that even though NVMe does a lot to cut controller and software latency, NAND latency remains the dominant contributor, as illustrated in the slide above. Although this is the reality of flash today, NVMe was designed with the future of non-volatile memory in mind. Resistive memory technologies like Phase Change Memory and Magnetic Tunnel Junction could offer a 1000x speed-up over current NAND. At that point, the bottleneck would shift back to the device stack.
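A rough back-of-the-envelope sketch makes the latency argument concrete. The figures below are illustrative assumptions chosen for scale, not measurements from Intel's slide: NAND reads on the order of tens of microseconds dwarf a few microseconds of stack overhead, so trimming the stack barely moves total latency today, while a memory 1000x faster than NAND flips that balance entirely.

```python
# Illustrative latency budget (microseconds). These figures are assumptions
# chosen for order of magnitude, not measurements from Intel's slide.
NAND_READ_US = 85.0   # assumed MLC NAND page read time
AHCI_STACK_US = 6.0   # assumed AHCI controller + software overhead
NVME_STACK_US = 3.0   # assumed NVMe overhead after trimming the stack

def total_latency(media_us, stack_us):
    """Total access latency = media access time + interface/stack overhead."""
    return media_us + stack_us

ahci = total_latency(NAND_READ_US, AHCI_STACK_US)
nvme = total_latency(NAND_READ_US, NVME_STACK_US)
print(f"NAND + AHCI: {ahci:.1f} us, NAND + NVMe: {nvme:.1f} us "
      f"({100 * (ahci - nvme) / ahci:.1f}% improvement)")

# With a resistive memory ~1000x faster than NAND, the stack dominates again.
fast_media = NAND_READ_US / 1000
fast_total = total_latency(fast_media, NVME_STACK_US)
print(f"Fast media + NVMe stack: {fast_total:.3f} us "
      f"(the stack is now {100 * NVME_STACK_US / fast_total:.0f}% of the total)")
```

Halving the stack overhead buys only a few percent with NAND behind it, but with next-generation media the stack becomes nearly the whole budget, which is exactly the scenario NVMe anticipates.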
But NVMe's role isn't limited to latency reduction. With AHCI, the idea of parallelism wasn't fully integrated into the standard. Features like Native Command Queuing helped optimize transfers, but the interface never allowed SSDs to truly maximize their inherent parallelism.
If you read SSD reviews, you typically see IOPS measured across a range of queue depths, normally up to 32 outstanding commands. That is the point where most SATA-based SSDs achieve their peak performance. It is also the limit of AHCI. Many flash controllers can handle deeper queues, though. You can see this in PCIe-based SSDs that ship with their own proprietary drivers. Micron's P320h, for example, didn't achieve its peak performance until the queue depth hit 256. With NVMe, not only do the commands per queue increase from 32 to 64,000, but the number of queues grows from one to 64,000 as well. Now that's what we call planning for the future.
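The scaling gap is easy to quantify. Using the figures above (32 commands in AHCI's single queue versus 64,000 commands in each of 64,000 NVMe queues), a quick Python sketch:

```python
# Maximum outstanding commands each host interface can expose.
AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32
NVME_QUEUES, NVME_QUEUE_DEPTH = 64_000, 64_000

ahci_slots = AHCI_QUEUES * AHCI_QUEUE_DEPTH   # 32 commands in flight
nvme_slots = NVME_QUEUES * NVME_QUEUE_DEPTH   # 4,096,000,000 commands in flight

print(f"AHCI: {ahci_slots:,} outstanding commands")
print(f"NVMe: {nvme_slots:,} outstanding commands "
      f"({nvme_slots // ahci_slots:,}x more)")
```

Over four billion command slots versus 32 is why a controller like the P320h, which only peaks at a queue depth of 256, never comes close to stressing the interface.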
Driver compatibility was the one major issue shared by all PCIe-based SSDs: every product shipped with its own proprietary software. Some vendors did a great job; others didn't. And unless your manufacturer of choice included its own option ROM, you couldn't boot from the drive. Even Intel's SSD 910 wasn't bootable. While that practice is generally accepted in enterprise environments, consumers need something a little more foolproof.
With NVMe, there is a standard driver supported across multiple platforms, along with UEFI support for booting. Windows 8.1, Windows Server 2012 R2, and Linux are a few of the operating systems already equipped to accommodate NVMe-based SSDs. Intel offers a standalone driver, too. It remains to be seen whether the company's competitors rely on native support or augment the platform with proprietary software.
Booting Up From Intel's SSD DC P3700
Yes, the SSD DC P3700 will boot, albeit with a whole list of caveats. First, we needed a system that supported UEFI 2.3.1. Check. Then, we needed an operating system with native driver support. Windows Server 2012 R2, check. Finally, we needed to install the software. That proved easier said than done.
My first attempt left me at the Windows installation prompt. Setup could see the P3700, but complained that it wasn't bootable. At that point, I entered the BIOS to see if the P3700 showed up. It was nowhere to be found. On a hunch, I went into the boot screen to review my options. There were two entries for the DVD-ROM: Legacy and UEFI. Of course, booting from the UEFI entry for the optical drive solved the issue. At that point, Windows not only recognized the P3700, but also allowed me to use it as a boot option. Interestingly, once the installation completed, the P3700 showed up in the BIOS as a UEFI boot option (not as an Intel SSD DC P3700, but as a Windows Boot Manager device).
The last step was to compare our server's boot time going from an 800 GB Intel SSD DC S3700 to an 800 GB Intel SSD DC P3700. Keep in mind that this is a legitimate server, and server boot processes are almost never described as fast.
- Intel SSD DC S3700 Boot Time: 64.8 seconds
- Intel SSD DC P3700 Boot Time: 44.5 seconds
We recorded a solid 20-second drop in boot time, and nearly 20 seconds of each measurement is spent just getting through POST.
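For the record, the arithmetic on those two measurements:

```python
# Boot times measured above, in seconds.
s3700_boot_s = 64.8  # SATA-attached Intel SSD DC S3700
p3700_boot_s = 44.5  # NVMe-attached Intel SSD DC P3700

saved = s3700_boot_s - p3700_boot_s
print(f"Time saved: {saved:.1f} s "
      f"({100 * saved / s3700_boot_s:.1f}% faster boot)")
```

A 20.3-second saving works out to a boot that is roughly 31% quicker, and the real OS-load improvement is larger still once the fixed POST time is subtracted from both runs.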
As you can see from my diskpart screenshot, Windows recognizes the SSD DC P3700 as a boot device. Interesting, though not altogether surprising, is that the operating system also knows it's an NVMe-based device and where it resides in the PCIe root complex.