Pushing Intel Optane Potential: RAID Performance Explored

Conclusion

I don't expect many people to run out and buy two or three Optane Memory SSDs to build a RAID array. For many of us, it's just a waiting game. In time, Intel will bring a high-performance Optane-based SSD to market that doesn't cost $1,500. Several sites have reported that the product may be called the 900P. A 900-series SSD would fall in line with the new Core i9 processor series, and it could use a configuration similar to the DC P4800X's. The Intel SSD 750 series used the same hardware as the DC P3x00 series, so Intel wouldn't be breaking any new ground by bringing enterprise/datacenter hardware to the prosumer market. If the reports are true, the upcoming Optane 900P consumer SSD will ship in a wider range of capacities than the SSD 750, giving more users access to affordable 3D XPoint technology.

If you are not willing to wait for an unconfirmed product with an unknown release date, then following us down the RAID 0 path is a viable option if you already have a motherboard with the right features. Our ASRock Z170 Extreme7+ with three M.2 slots allowed us to run the Optane Memory SSDs in RAID. Most motherboards support only two M.2 devices in native slots, but many can also route a third drive in the bottom PCIe slot through the PCH for use with Rapid Storage Technology. You will need to check your motherboard specifications and BIOS configuration to verify your options.
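
If you do build the array, a quick sequential-read check will tell you whether you are anywhere near the roughly 3.5GB/s of usable DMI 3.0 bandwidth that every PCH-attached device shares. The Python sketch below is only a rough sanity check, not a substitute for a dedicated benchmark utility; the E:\optane_raid_test.bin path and the 4GiB file size are assumptions you would adjust for your own volume, and because Windows caches recently written files, use a test file larger than your installed RAM (or reboot before the read pass) to get an honest number.

import os
import time

# Assumed path on the Optane RAID 0 volume -- adjust the drive letter for your system.
TEST_FILE = r"E:\optane_raid_test.bin"
FILE_SIZE = 4 * 1024**3       # 4GiB test file; use more than your RAM to dodge the OS cache
BLOCK_SIZE = 1024 * 1024      # 1MiB sequential blocks

def create_test_file(path: str, size: int, block: int) -> None:
    """Fill a throwaway file on the array by repeating a 1MiB random buffer."""
    buf = os.urandom(block)
    with open(path, "wb") as f:
        for _ in range(size // block):
            f.write(buf)

def sequential_read_gbps(path: str, block: int) -> float:
    """Read the file front to back and return throughput in GB/s."""
    total = 0
    start = time.perf_counter()
    # buffering=0 bypasses Python's own buffer, not the operating system's file cache.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9

if __name__ == "__main__":
    if not os.path.exists(TEST_FILE):
        create_test_file(TEST_FILE, FILE_SIZE, BLOCK_SIZE)
    print(f"Sequential read: {sequential_read_gbps(TEST_FILE, BLOCK_SIZE):.2f} GB/s")

If the cold-cache result lands well below the combined rated sequential reads of the individual modules but close to the DMI ceiling, the chipset link, not the Optane media, is the limiter.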

Our Optane Memory array is very fast. With an operating system installed, the difference between a single NVMe SSD and the array is difficult to put into words. It wasn't quite the leap of moving from a hard disk drive to an SSD, but the system felt more responsive than it did with a high-performance NVMe SSD built on the latest NAND technology. Unlike the move from disks to SSDs, I can still live with regular SSDs after experiencing Optane as a boot device. There is clearly a difference, but not enough that I would loathe going back to a traditional SSD. If you ever spend time on a hard disk drive system after using an SSD for more than a few weeks, you understand what I mean. Disks increase your level of anxiety almost instantly.

If anything, our tests give us a clearer picture of what Optane can do for end users. Cache SSDs carry a stigma that dates back to several years of failed product releases, and our readers made that very clear in the Optane Memory review comments thread. Those failed devices of the past will hurt Optane Memory even though it is a very good product and the best caching approach to date. The argument about Intel's steep system requirements is valid, though. We really wish Optane Memory worked with 6th-generation processors and 100-series chipsets. I wouldn't buy a new motherboard and processor to use Optane Memory with a hard drive when I already have enough performance and features on an older platform paired with an SSD.

[Image: Intel Optane Memory (32GB)]


Chris Ramseyer
Chris Ramseyer is a Contributing Editor for Tom's Hardware US. He tests and reviews consumer storage.
  • hannibal
    Promising, very promising indeed!
  • shrapnel_indie
    Expensive is a pro?
  • gasaraki
    Impressive but... you should have tested the 960Pro in RAID also as a direct performance comparison.
  • InvalidError
Other PCH devices sharing DMI bandwidth with the M.2 slots isn't really an issue, since the bandwidth is symmetrical, and if you are pulling 3GB/s from your M.2 devices, I doubt you are loading much additional data from USB3 and other PCH ports. It is more likely that you are writing to those other devices.

    As for "experiencing the boot time", you wouldn't need to do that if you simply put your PC in standby instead of turning it off. If standby increases your annual power bill by $3, it'll take ~50 years to recover your 2x32GB Optane purchase cost from standby power savings. Standby is quicker than reboot and also spares you the trouble of spending many minutes re-opening all the stuff you usually have open all the time.
  • takeshi7
    It would have been even better if the reads weren't bottlenecked by the chipset.
  • takeshi7
    Next, get 4 Optane SSDs, put them in this card, and put them in a PCIe x16 slot hooked up directly to the CPU.

    http://www.seagate.com/files/www-content/product-content/ssd-fam/nvme-ssd/nytro-xp7200/_shared/_images/nytro-xp7200-add-in-card-row-2-img-341x305.png
  • CRamseyer
    I have a similar card but it, like the Seagate, is not bootable.
  • gasaraki
    "Next, get 4 Optane SSDs, put them in this card, and put them in a PCIe x16 slot hooked up directly to the CPU.

    http://www.seagate.com/files/www-content/product-conten..."


    Still just 4X bus to each M.2 so no different than on board M.2 slots.
  • takeshi7
    19731006 said:
    "Next, get 4 Optane SSDs, put them in this card, and put them in a PCIe x16 slot hooked up directly to the CPU.

    http://www.seagate.com/files/www-content/product-conten..."


    Still just 4X bus to each M.2 so no different than on board M.2 slots.

    But it is different because those M.2 slots are bottlenecked by the DMI connection on the PCH. The CPU slots don't have that issue.

  • InvalidError
    19731050 said:
    But it is different because those M.2 slots are bottlenecked by the DMI connection on the PCH. The CPU slots don't have that issue.
The PCH bandwidth is of little importance here. Once you set synthetic benchmarks aside and step into more practical matters such as application launch and task completion times, you are left with very little real-world performance benefit despite the triple Optane setup being four times as fast as the other SSDs, which means negligible net benefit from going even further overboard.

    The only time where PCH bandwidth might be a significant bottleneck is when copying files from the RAID array to RAMdisk or a null device. The rest of the time, application processing between accesses is the primary bottleneck.