Evolving Beyond SATA
Serial ATA technology arrived in 2003, bringing with it smaller cables, hot-swap capability, I/O queuing, and a jump in throughput from PATA’s 133 MB/s to SATA’s initial line rate of 1.5 Gbit/s (roughly 150 MB/s effective after encoding overhead). Over three generations of revision, SATA ultimately climbed to 600 MB/s, where it looks to stay for the foreseeable future. However, SATA SSDs quickly improved to the point that they regularly exceeded 500 MB/s in sustained read and write rates, saturating the SATA connection. Today, ambitious users can reach sequential read throughput of over 1300 MB/s (and writes approaching 1000 MB/s) by joining four SATA SSDs in RAID 0, with each drive occupying its own SATA port on the motherboard.
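The throughput figures above follow directly from SATA's line rates and its 8b/10b encoding, under which every 10 transmitted bits carry 8 bits of data. A quick sketch of the arithmetic:

```python
# Effective throughput of each SATA generation. SATA's 8b/10b line encoding
# means 10 transmitted bits carry 8 data bits, so usable bandwidth is 80%
# of the raw line rate.
ENCODING_EFFICIENCY = 8 / 10

for name, line_rate_gbps in [("SATA 1.0", 1.5), ("SATA 2.0", 3.0), ("SATA 3.0", 6.0)]:
    effective_mb_s = line_rate_gbps * 1e9 * ENCODING_EFFICIENCY / 8 / 1e6
    print(f"{name}: {line_rate_gbps} Gbit/s line rate -> {effective_mb_s:.0f} MB/s effective")
# SATA 1.0 -> 150 MB/s, SATA 2.0 -> 300 MB/s, SATA 3.0 -> 600 MB/s
```

This is why a single SATA SSD pushing 500+ MB/s sits so close to the 600 MB/s ceiling of SATA 3.0.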
Are SSD RAIDs worth their speed? Consider the requirements:
- At least three SSDs each maintaining performance exceeding 500 MB/s
- Correct BIOS configuration
- A suitable RAID controller if on-motherboard resources don’t suffice
- Time, effort, and possibly troubleshooting for setup
- Power consumption that scales with the number of SSDs used, plus that of any discrete RAID controller
Perhaps most of all, the cost of implementation multiplies by the number of SSDs installed. On a cost-per-MB/s basis, the numbers become rather large and uncomfortable. Many end users lack the technical knowledge to implement a RAID, and even IT staff, who should possess that knowledge, must still devote costly hours to performing, testing, and perhaps troubleshooting those RAID upgrades.
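To make the cost-per-MB/s point concrete, here is a hypothetical worked example. Every price and throughput figure below is an illustrative assumption, not a quote for any specific product:

```python
# Hypothetical cost-per-throughput comparison. All prices and speeds are
# illustrative assumptions chosen to mirror the article's scenario.
def cost_per_mbs(total_price_usd, throughput_mb_s):
    return total_price_usd / throughput_mb_s

raid_price = 4 * 150.00   # four SATA SSDs at an assumed $150 each
raid_price += 120.00      # plus an assumed add-in RAID controller
raid_speed = 1300         # sequential read figure cited for a 4-drive RAID 0

single_price = 150.00     # one of the same SSDs, no RAID hardware
single_speed = 500        # sustained throughput of a single SATA SSD

print(f"RAID 0:     ${cost_per_mbs(raid_price, raid_speed):.2f} per MB/s")
print(f"Single SSD: ${cost_per_mbs(single_price, single_speed):.2f} per MB/s")
```

Under these assumptions the RAID delivers 2.6x the throughput of a single drive but costs nearly twice as much per MB/s delivered.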
None of this comes as a surprise to anyone whose entertainment or work depends on fast storage. RAID technology has been widely used since the early 1990s. For many years, though, most users have asked a simple question based on the adequately performing systems before them: “Isn’t SATA good enough? Because it seems to be meeting all of my needs on this box right here.”
Sometimes, yes, SATA is good enough — but sometimes, and increasingly often, it’s not. By the same logic, isn’t a PC configured with a single-core, 3.4 GHz processor and a 250GB SATA hard drive, both of which were also recommended PC configurations in 2007, good enough? Anyone reading Tom’s Hardware can likely come up with a dozen reasons off the top of her or his head as to why these components are no longer sufficient. Most answers ultimately boil down to one rationale: Applications keep demanding more hardware horsepower. That said, if your applications span the likes of social media, email, and a word processor, by all means, keep rocking that single-core processor and 250GB hard drive.
A better question would be: What applications become constrained by conventional SATA architecture? We’ll return to this question soon.
Under PCI Express 3.0, each PCIe lane offers a theoretical bandwidth of 985 MB/s. Using four lanes concurrently, as in a PCIe x4 slot, allows for nearly 3940 MB/s. No SATA or even SAS (Serial-Attached SCSI) drive comes remotely close to this. Even the four-drive RAID 0 discussed earlier tops out at roughly a third of this bandwidth.
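That per-lane figure falls out of PCIe 3.0's 8 GT/s signaling rate and its 128b/130b encoding, which wastes only 2 of every 130 bits on the line code. A quick sketch:

```python
# PCIe 3.0 per-lane bandwidth: 8 GT/s signaling with 128b/130b encoding.
TRANSFERS_PER_S = 8e9     # 8 GT/s, one bit per transfer per lane
ENCODING = 128 / 130      # 128b/130b encoding efficiency

lane_mb_s = TRANSFERS_PER_S * ENCODING / 8 / 1e6  # bits -> bytes -> MB/s
print(f"per lane: {lane_mb_s:.0f} MB/s")          # ~985 MB/s
print(f"x4 slot:  {4 * lane_mb_s:.0f} MB/s")      # ~3938 MB/s
```

Compare this with SATA's 8b/10b scheme, which gives up a full 20% of the line rate to encoding.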
One of the core factors at the heart of this SATA/SAS bottleneck is the Advanced Host Controller Interface (AHCI), a technical standard from 2004 that defines how a system’s host memory swaps data with attached storage. While AHCI is compatible with SSDs, the standard was designed for hard drives. For example, AHCI supports Native Command Queuing, a method for optimizing the order of data reads and writes across rotating media. Clearly, such algorithms are not required with SSDs.
A data exchange system designed for high-latency disk media will not take proper advantage of the capabilities of solid-state media. A hard drive can only service one request at a time from a single head position, while an SSD can access data on many NAND dies simultaneously thanks to its parallelized architecture. SSDs in general, and PCI Express memory drives in particular, could perform far better if not restricted by AHCI’s limitations.
The Non-Volatile Memory Host Controller Interface (NVMHCI, a.k.a. NVM Express or NVMe) specification, spearheaded by the NVM Express Work Group (comprised of Intel, Cisco, Samsung, Western Digital, Seagate, and other leading technology manufacturers), sought to revise the functionality of AHCI for a solid-state world. When NVMe debuted in 2011, the differences between the standards became glaring.
These technical differences translate into several real-world delays and costs on AHCI’s part:
- NVMe’s streamlined command path takes roughly 3 microseconds of protocol latency per IO. AHCI’s takes roughly 30 microseconds.
- AHCI routes transfers through an intermediate SATA host bus adapter, which adds latency. NVMe drives communicate with the host directly over PCIe, with no such intermediary.
- NVMe dispenses with the latency-inducing SCSI/SATA translation used in AHCI.
- AHCI will use three times as many CPU cycles to reach a given number of IOPS compared to NVMe. Thus, to meet a time-sensitive IOPS goal, AHCI may require three times the number of CPU cores.
- Because NVMe operates over the flexible PCI Express bus, NVMe SSDs can take on any of several forms, including PCIe add-in cards, M.2, and U.2. Early NVMe SSDs gravitated to x4 PCIe add-in cards, but 2.5" form factor NVMe drives, such as SanDisk’s Skyhawk line, are now available as an enterprise solution, while M.2 solutions, like the WD Black PCIe SSD, are designed with DIYers, gaming enthusiasts, and content creators in mind.
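The CPU-cycle bullet above lends itself to a quick back-of-envelope calculation. The sketch below is illustrative only: the per-IO cycle counts and the 3 GHz clock are assumed round numbers, not measured values; only the 3x AHCI-versus-NVMe ratio comes from the comparison above.

```python
# Back-of-envelope: CPU cores needed to sustain an IOPS target when each
# IO costs a fixed number of CPU cycles. All absolute numbers here are
# hypothetical; only the 3x AHCI-vs-NVMe ratio comes from the article.
def cores_needed(target_iops, cycles_per_io, cpu_hz=3.0e9):
    """Cores required if one core retires cpu_hz cycles per second."""
    return target_iops * cycles_per_io / cpu_hz

nvme_cycles = 10_000            # assumed per-IO CPU cost under NVMe
ahci_cycles = 3 * nvme_cycles   # ~3x the cycles under AHCI, per the text

target = 1_000_000  # a time-sensitive 1M IOPS goal
print(f"NVMe: {cores_needed(target, nvme_cycles):.1f} cores")  # 3.3
print(f"AHCI: {cores_needed(target, ahci_cycles):.1f} cores")  # 10.0
```

Whatever the absolute cycle counts turn out to be on real hardware, the 3x ratio means AHCI needs triple the cores to hit the same deadline.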
NVMe: Western Digital Example – WD Black PCIe NVMe SSD
Little bigger than a stick of chewing gum, the WD Black PCIe SSD comes in 256GB and 512GB models, both of which boast speeds of up to 2050 MB/s on sequential reads and, on the 512GB model, up to 800 MB/s on sequential writes. The 256GB model reaches sequential writes of up to 700 MB/s. On IOPS, both NVMe Black models specify random reads of up to 170,000, while random writes reach up to 130,000 (256GB) and 134,000 (512GB). Compare these numbers to the SATA SSD RAID read speeds of 1300 MB/s we mentioned earlier. One little WD Black PCIe SSD can outperform a larger, more power-hungry, and far more costly collection of SATA SSDs. While the SATA SSD RAID clearly offers higher total gigabytes for storage, many applications, including gaming and high-volume transaction processing, will prioritize speed over capacity. On this basis, NVMe PCIe SSD technology marks a significant breakthrough in storage performance and resource savings.
Of course, there are several types of storage products for different needs. Consider Western Digital’s SATA M.2 500GB WD Blue SSD, which peaks at 545 MB/s on sequential reads and 525 MB/s on sequential writes. The product comes in both M.2 and 2.5” formats, ranges from 500 GB to 2 TB in capacity, and uses 6 Gb/s SATA connectivity. Why discuss a SATA example in an NVMe article? Because some users are going to want benefits other than raw performance. SATA still offers incremental price-per-gigabyte savings, and the M.2 format can deliver low-power and low-footprint advantages regardless of NVMe or SATA protocols.
Also, keep in mind that 3D NAND, still most commonly found on SATA drives, delivers markedly higher capacities. In time, 3D NAND should scale to NVMe performance levels, but today the technology emphasizes size over speed — to a point. 3D NAND, when properly implemented, is no throughput slouch. With maximum read performance of 560 MB/s, the WD Blue 3D NAND SSD sits near the head of the SATA class and may be a strong complement to an NVMe primary drive (see more on this below).
The SATA 3.0 bus has a maximum throughput of 6 gigabits per second (Gb/s), or a little under 600 MB/s after overheads. Current PCIe NVMe SSDs utilize x4 PCIe 3.0, which has a maximum throughput of about 3900 MB/s. This provides not only ample bandwidth for SSDs such as the WD Black PCIe but also room to keep growing. This also applies to write speeds. The WD Black PCIe SSD delivers sequential writes of up to 800 MB/s while the WD Blue SATA SSD tops out at 525 MB/s. The SATA drive’s maximum write speeds still fit within the SATA bus’s boundaries, but getting the most from that NVMe PCIe SSD requires that wider PCIe x4 pipeline.
Where NVMe May Meet the Masses
When SSDs first reached consumers (and even still today), a common strategy was to configure the SSD as the boot drive and place high-capacity HDDs alongside it as main file storage. This scenario reflects a general lack of affordable high-capacity flash storage. If money were no object, people would buy all of the top-speed storage they could find and turn it into a handful of logical RAID volumes. Of course, reality stings. As of this writing in August 2017, one e-tailer we checked sells a 2TB NVMe M.2 SSD for $1,109.99. If conventional SATA SSDs are cost-prohibitive for bulk storage, NVMe is even less suitable. In contrast, at the same e-tailer, even a strong performer like the WD Black 2TB hard drive costs $129.99. For the near term, at least, the market seems stuck with putting the bulk of its non-critical data on disk. Fortunately, the market has found ways to make this arrangement productive through a hybrid storage approach (SSD + HDD).
For general purpose systems, hybrid storage makes sense…today. What remains unclear is how much local storage will be needed in coming years. Cloud-based applications continue to supplant local applications, and data tends to be stored with the host rather than with the user. The Internet of Things continues its explosive rise into the tens of billions of devices, and most of that data will reside in the cloud. (One report anticipates that cloud traffic will quadruple between 2014 and 2019.) While many top-tier software providers offer both cloud and local versions of their offerings, the trend clearly points toward cloud for the bulk of data storage — if broadband speeds make such storage convenient and seamless. Right now, the U.S. doesn’t even rank in the global top 10 for average connection speeds, but the trend around the world is for speeds to keep increasing. As certain data shows, the penetration percentage for 25 Mb/s broadband in North America at the end of 2012 was under 3 percent. By the end of 2016, it had nearly reached 15 percent. While the market has yet to bear this out on a long-term basis, it seems probable that the faster the pipeline to the cloud, the more inclined people will be to offload their non-critical data to cheap, remote storage.
Yet that leaves the issue of critical data and the other trend we discussed earlier: the constant propensity for software to demand more powerful hardware over time. Consumers expect richer Web experiences with less wait time after a click. Players want their increasingly photorealistic games to load and refresh faster than ever. The press loves to cover how online transaction processing (OLTP), data analytics, and other massive-scale enterprise applications will benefit from faster storage, but the reality is that small/medium businesses and consumers also stand to gain. If there’s any doubt, simply take a user who has grown accustomed to SSDs over the last two or three years and strand him or her on a system saddled with a 5400 RPM hard drive. It’s like swapping out a sports car on the Autobahn for a family van on side roads to the store.
NVMe plays into these trends by being able to satisfy local high-performance computing needs even as bulk storage gets squeezed from PCs out into the cloud. In the short-term, NVMe already provides a way for power users and businesses to instantly accelerate their critical storage applications. Beyond this, looking to the end of the decade, expect NVMe to provide an affordable way for users to obtain outstanding performance from applications too demanding and/or time-sensitive to offload their real-time loads to off-site data centers.
Key Applications for NVMe
Again, if we ignore the vast potential for NVMe in enterprises and data centers, the market still contains several segments where faster storage will make a critical difference in user experiences. According to Eyal Bek, senior director of client SSD, Devices Business Unit, Western Digital, the top opportunities include gaming, entertainment production, and autonomous driving.
In gaming, no one expects storage to have much if any impact on frame rates. However, storage definitely affects loading times — and this concern will only grow more pronounced as maps and levels grow increasingly large and involved. Originally, loading cost players only about the time required to reload and respawn, but that began to change recently, when games started to load more content while in-level.
“Gaming is a market where a long loading time is equivalent to murder,” says Bek. “NVMe does assist with minimizing the loading times of games and their levels.”
In consumer entertainment, as of July 2017, one popular video streaming website has conditioned 1.3 billion users to embrace all manner of video content, from Hollywood movies to your neighbor’s sister’s kid’s clips of her cat snoring, all available instantly from anywhere on practically any device. The streaming nature of online video precludes the need for fast local storage, but that’s definitely not true of producing video. As video scales from 1080p to 4K and beyond, simply transferring files between local drives becomes time-intensive. Prosumers shuffling entire projects can consume several hours simply by moving files from one drive to another. NVMe has the potential to significantly improve video editing workflows, and the larger the project, the bigger the benefit. This will become an order of magnitude more important as VR filters down to the mainstream. Consider that modern VR cameras contain up to 24 image sensors, each contributing to a resolution of 8K x 4K per eye, and suddenly the data sets that must be moved and edited in real time become unbelievably massive — and a perfect fit for the capabilities of NVMe.
Not least of all, next-generation automobiles, particularly of the self-driving variety, could become a huge market for NVMe. Certain companies have already made (mixed) headlines with their autopilot technologies, and several others are pouring millions into making self-driving transportation a ubiquitous reality. With autonomous driving, there’s no time to consult the cloud on split-second decisions. Decisions derived in part from storage need to be near-instantaneous.
“With the dawn of self-driving cars, low latency and response times makes the difference between success and failure for this new era,” says Bek. “High-resolution cameras are vital aspects of this technology that feed huge amounts of environmental data per second back to the car for immediate processing. This requires reliable, high-performing storage to record and analyze the captured images so that the system can issue the right command for an appropriate reaction to the situation, be it a stop sign, busy cross-walk, parking, or anything else.”
Finally, look for NVMe to play a role in the Internet of Things. On the client side, IoT devices are, in a sense, a form of embedded PC. Just as NVMe has already impacted server computing and is now pervading the mainstream PC space, look for embedded and IoT markets to adopt this technology as well, especially where storage processing speed plays a role in total solution value.
NVMe Takes Over
To get a sense of how transitions happen in storage, look at this chart of how SATA SSD and, more recently, embedded NAND (e.MMC) have displaced hard disk in the notebook market.
According to the above data, this year will be the first in which flash storage outships HDD in this segment. Western Digital’s data clearly shows the market’s shifting preference for storage speed over high capacity. Certainly, small form factors and the rise of notebook/tablet hybrids have assisted embedded flash’s rising success, but that doesn’t explain the displacement of one 2.5” drive technology with another. Performance does.
Storage technology transitions accelerate as the price gap between old and new narrows. As of this writing, a 250GB WD Blue SATA SSD can be found for $89.99. The 256GB WD Black M.2 2280 PCIe NVMe drive costs only $20 more at $109.99. If we look at the 500GB capacity point, the prices change to $149.99 and $199.99 — a wider gap, but many SSD adopters would happily pay an extra $50 for a significant jump in performance.
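Dividing those retail prices by capacity makes the premium easy to see. A quick sketch using only the prices quoted above:

```python
# Price-per-gigabyte comparison using the retail prices quoted in the text.
drives = {
    "WD Blue SATA 250GB":  (89.99, 250),
    "WD Black NVMe 256GB": (109.99, 256),
    "WD Blue SATA 500GB":  (149.99, 500),
    "WD Black NVMe 500GB": (199.99, 500),
}

for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```

At the smaller capacity the NVMe premium works out to only a few cents per gigabyte, which is exactly the kind of narrowing gap that tends to accelerate a transition.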
As with all computing components, NVMe will follow a progression from premium product to mainstream offering, along with the spectrum of quality and use case segmentation that entails. Some vendors will specialize in performance and quality while others will gravitate to the bottom of the price spectrum. Don’t be surprised to see top-end implementations involve a pair of PCIe Gen3 slots while low-end solutions gravitate to BGA board-mounted designs. Review sites such as Tom’s Hardware will do the hard work of testing and educating power users on which NVMe products offer the best value for different user types. Also look to the manufacturers that have earned your trust over the years to provide the highest quality and support as you progress from the slower storage of yesterday to the NVMe experiences of tomorrow.