3D XPoint: A Guide To The Future Of Storage-Class Memory
Memory and storage collide with Intel and Micron's new, much-anticipated 3D XPoint technology, but the road has been long and winding. This is a comprehensive guide to its history, its performance, its promise and hype, its future, and its competition.
Where To Go From Here
Word of 3D XPoint first filtered out nearly two years ago, and though we have yet to see end products, it is clear that the effort to bring the technology to market is well underway. According to Intel and Micron, we could see products in the very near future.
From a performance standpoint, 3D XPoint appears to be unmatched among storage devices. Its high performance and low-QD scaling are extraordinary, not to mention its low-latency characteristics. Our enterprise test regimen focuses on exposing the weaknesses of storage devices in mixed-workload performance, variability, QoS, and low-QD scaling, and from the initial data provided by Intel and Micron, it appears that 3D XPoint improves on all of these fronts. One interesting caveat is how easily 3D XPoint saturates existing interfaces; even more performance is on the table, and all it needs is a faster interface (such as PCIe 4.0 or the memory bus) to serve it up.
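To put the latency and low-queue-depth points in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The latency and link figures are assumptions chosen for illustration, not measured 3D XPoint data:

```python
# Back-of-the-envelope storage math. All latency and bandwidth figures are
# assumptions chosen for illustration, not measured 3D XPoint numbers.

def qd1_iops(latency_us):
    """At queue depth 1, each I/O must finish before the next is issued,
    so IOPS is simply the reciprocal of the per-I/O latency."""
    return 1e6 / latency_us

def throughput_mb_s(iops, io_size_kb, link_limit_mb_s):
    """Random-read throughput in MB/s, capped by the host interface."""
    return min(iops * io_size_kb / 1000, link_limit_mb_s)

PCIE3_X4_MB_S = 3940.0  # practical ceiling of a PCIe 3.0 x4 link

# Assumed round-trip latencies for a single 4KB read, in microseconds
drives = {"NAND NVMe SSD": 90.0, "3D XPoint SSD": 10.0}

for name, lat_us in drives.items():
    iops = qd1_iops(lat_us)
    mb_s = throughput_mb_s(iops, 4, PCIE3_X4_MB_S)
    print(f"{name}: ~{iops:,.0f} IOPS at QD1, ~{mb_s:,.0f} MB/s for 4KB reads")

# At higher queue depths and larger transfers, the media can push past the
# ~3.9 GB/s PCIe 3.0 x4 ceiling, which is why a faster interface
# (PCIe 4.0 or the memory bus) leaves more performance on the table.
```

The takeaway from this rough model is that at QD1 the device's own latency, not the interface, sets the ceiling; the interface only becomes the bottleneck once queue depths or transfer sizes grow.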
While there is an impressive amount of work underway to enable the new memory, we probably won't see it fully utilized within the next five years. Much like NAND, which we are just now starting to use as a true memory alternative, there will be a slow and steady cadence of improvements over many years.
On the enthusiast side, we will likely see Optane SSDs positioned as the ultimate in gaming performance, and Intel will likely use the technology as a DRAM supplement for notebooks and Ultrabooks. However, price will be a big deterrent. It's notable that even cheap SSDs have penetrated only about 10 percent of the desktop PC segment, so we shouldn't expect Optane SSDs to proliferate outside of the upgrade market and the mobility segment for some time.
There will be considerable hardware, firmware, and controller challenges, but these are all surmountable with clever engineering. The software ecosystem, always the laggard, is more vexing. Optane SSDs provide incredibly fast QD1 performance, which matters for nearly every application, but operating systems have only recently begun to utilize NAND-based SSDs effectively, and games and applications still lag behind. That poor optimization doesn't bode well for unlocking 3D XPoint's potential.
Entry-level pricing for Optane SSDs will have to be competitive to gain much of a foothold in the client SSD market, especially if the drives don't deliver seemingly magical performance gains. The memory angle is more tantalizing. Intel's addition of 3D XPoint support to Kaby Lake is telling; it eliminates any doubt that 3D XPoint will find use as a DRAM supplement very soon. Storage use cases merely ride behind existing standards, such as NVMe, that work almost anywhere, so CPU support isn't required. The best use of 3D XPoint will come on the memory side of the equation, and we can't wait to see how that pans out.
For Intel, 3D XPoint represents a strategic piece on the larger industry chessboard. The company is steadily bringing more technologies on-package, such as networking (Omni-Path, silicon photonics) and memory, and placing them behind proprietary interfaces (for 3D XPoint, widely thought to be a proprietary form of its QPI interconnect, alongside the PCIe link). This tactic allows Intel to leverage its roughly 99-percent share of the data center CPU market to attack other segments, such as memory, storage, and networking.
However, the industry is aware of these developments, and three groups with competing open interfaces emerged recently. The Gen-Z, OpenCAPI, and CCIX consortia include industry heavyweights such as AMD, Dell, EMC, Google, Mellanox, Micron, ARM, Broadcom, IBM, Seagate, Samsung, and Xilinx, all working to create open alternatives that promote innovation and lower price structures. Unfortunately, some of these new interconnects aren't slated to appear in systems until 2018, which may be too little, too late.
Economics will be the true test, especially as other memory manufacturers retaliate with alternatives. Some may take Samsung's approach and simply lean on improvements to existing technology in an attempt to price 3D XPoint out of the market.
Intel and Micron also have to be careful of the perception that they are trying to be a two-member standards body. NAND's main advantage is its broad availability; SanDisk licenses the technology to almost anyone, which is how it became an industry standard. From an economic standpoint, it's understandable that IMFT doesn't want to license 3D XPoint to others, but that will certainly hamper its chances of becoming a widely adopted technology, particularly if Intel locks it behind proprietary interfaces.
We progressed from milliseconds of latency with HDDs to microseconds of latency with SSDs, and that ignited a wildfire of innovation in processor and networking hardware to accommodate the radical change. Moving from microseconds to nanoseconds is sure to touch off yet another round of badly needed innovation, and it will certainly expose latency at all levels of the data pipeline.
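To make those orders of magnitude concrete, here is a small sketch with rounded, assumed latency figures and an assumed 3 GHz core, showing roughly how many CPU cycles a single access costs at each tier:

```python
# Rounded, assumed access latencies; the point is the orders of magnitude,
# not exact figures for any particular product.
ACCESS_LATENCY_NS = {
    "DRAM": 100,                     # ~0.1 microsecond
    "Storage-class memory": 1_000,   # ~1 microsecond
    "NAND SSD": 100_000,             # ~100 microseconds
    "Hard drive": 10_000_000,        # ~10 milliseconds
}

CPU_GHZ = 3.0  # assumed clock: one nanosecond is ~3 cycles at 3 GHz

for tier, ns in ACCESS_LATENCY_NS.items():
    cycles = ns * CPU_GHZ
    print(f"{tier:>22}: {ns:>12,} ns (~{cycles:,.0f} CPU cycles per access)")
```

Each step in that table spans two to three orders of magnitude, which is why every transition forces the rest of the data pipeline to be rethought.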
The ultimate goal of the next generation of memory is for the worlds of storage and memory to collide permanently, with a single medium emerging for both purposes. 3D XPoint isn't suitable for that role, at least not yet, but it could help lay the groundwork. Intel's Optane products are coming to market soon, and Micron's QuantX isn't far behind. These leading-edge products will answer only our performance and pricing questions and, of course, the pressing question of just what 3D XPoint is and how it works.
The remaining questions, such as its long-term viability and how well the industry will use it, will be answered over the next decade as IMFT continues its quest into largely uncharted 3D XPoint territory.
Paul Alcorn is the Managing Editor: News and Emerging Tech for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.
Comments
coolitic: The 3 months for data center and 1 year for consumer NAND is an old statistic, and even then it's supposed to apply to drives that have surpassed their endurance rating.
PaulAlcorn (replying to coolitic):
Yes, that is data retention after the endurance rating is expired, and it is also contingent upon the temperature at which the SSD was used and the temperature during the power-off storage window (40 C enterprise, 30 C client). These are the basic rules by which retention is measured (the definition of SSD data retention, as it were), but admittedly, most readers won't know the nitty-gritty details.
However, I was unaware that the JEDEC specification for data retention has changed; do you have a source for the new specification?
stairmand: Replacing RAM with permanent storage would simply revolutionise computing. No more loading an OS, no more booting, no loading data, instant searches of your entire PC for any type of data, no paging. It could easily be the biggest advance in 30 years.
InvalidError (replying to stairmand): You don't need X-point to do that: since Windows 95 and ATX, you can simply put your PC in standby. I haven't had to reboot my PC more often than every couple of months for updates in ~20 years.
Kewlx25 (replying to InvalidError): Remove your hard drive and let me know how that goes. The notion of "loading" means reading from your hard drive into memory and initializing a program, so goodbye to all forms of "loading".
hannibal: The main thing with this technology is that we won't be able to afford it until many years have passed from the time it comes to market. But, yes, it's an interesting product that can change many things.
TerryLaze (replying to "10 years later... still unavailable/costs 10k"): Sure, you won't be able to afford a 3TB+ drive even in 10 years, but a 128/256GB one just for Windows and a few games will be affordable, if expensive, even in a couple of years.
zodiacfml: I don't understand the need to make it work as a DRAM replacement. It doesn't have to. A system might only need a small amount of RAM and then a large 3D XPoint pool.
The bottleneck is the interface. There is no faster interface available except DIMM. We use the DIMM interface but make it appear as storage to the OS. Simple.
It will require a new chipset and board, though, which Intel controls. We should see two DIMM groups next to each other; they differ mechanically but have the same pin count.