
Aplicata Quad M.2 NVMe SSD PCIe x8 Adapter Review

Conclusion

The Aplicata Quad M.2 NVMe SSD PCIe x8 Adapter has obvious advantages for software-defined servers in data centers, but that's not our target audience. The adapter does have some nice features for mainstream users, but not for the obvious reasons.

The adapter allows you to fit four M.2 SSDs in your system. With most motherboards, you can only mount two M.2 SSDs before you have to use the PCIe slots. Unfortunately, the onboard M.2 slots usually route through the PCH that's shared with many other devices. The DMI link between the PCH and the CPU is only PCIe 3.0 x4 (the same as one NVMe SSD). That means your two NVMe SSDs share the same bus with nearly every other device connected to your system.
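To put that shared-link bottleneck in rough numbers, here is a back-of-the-envelope sketch (the per-lane figure accounts for PCIe 3.0's 128b/130b encoding; real-world throughput is lower still due to protocol overhead):

```python
# Rough PCIe 3.0 bandwidth math for the DMI bottleneck described above.
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding,
# which works out to roughly 985 MB/s of usable bandwidth per lane.
PCIE3_LANE_MBPS = 8_000 * 128 / 130 / 8  # ~985 MB/s per lane

dmi_mbps = 4 * PCIE3_LANE_MBPS       # DMI 3.0 is effectively a PCIe 3.0 x4 link
one_ssd_mbps = 4 * PCIE3_LANE_MBPS   # a single NVMe SSD also uses x4

print(f"DMI ceiling:     {dmi_mbps:,.0f} MB/s")
print(f"One x4 NVMe SSD: {one_ssd_mbps:,.0f} MB/s")
# Two PCH-attached SSDs split that same ~3,900 MB/s ceiling with
# SATA, USB, LAN, and everything else hanging off the chipset.
```

One x4 SSD can saturate the entire DMI link by itself, which is why moving the drives to CPU-attached PCIe lanes matters.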

The Aplicata Quad M.2 NVMe SSD PCIe x8 Adapter moves your high-bandwidth storage to the PCI Express bus and doesn't route through the PCH. There are significant performance advantages that don't always show up in our canned testing, particularly if your applications crave extreme storage bandwidth.

We shouldn't always put performance first, though. The adapter allows you to build a high-capacity NVMe volume out of smaller, less expensive drives. You can cram up to 8TB of high-speed flash into the adapter if you use 2TB SSDs, and you can get there one step at a time by buying drives as your needs grow.

Thermal throttling is not an issue for most users, but you are moving a lot of data if you need a product that can write at up to 7,000 MB/s. Extended heavy sequential write workloads are the most common catalyst for thermal throttling conditions, but the adapter's full-height half-length design leaves room for large heat sinks to cool the M.2 SSDs. With moderate airflow, the adapter will reduce the chance of throttling much more than a similar product without heat sinks. You can still heat soak the coolers over time, but you move the condition out to hours of use rather than minutes or even seconds with bare drives.

There are a number of low-cost M.2-to-PCIe slot "dummy" adapters on the market, but you shouldn't lump this product into the same category. This adapter has a PLX bridge and, more importantly, provides additional features. Most NVMe SSDs cache some user data in volatile memory before it reaches the flash, so an unexpected power loss can cost you data. The capacitors on the Aplicata adapter provide another layer of protection during an unexpected power loss.

The adapter also brings more connectivity to consumer-focused chipsets. On Z97 through Z270 platforms, the CPU can't feed sixteen lanes to the second primary PCIe slot once a video card is installed; the slots split to x8/x8. With only a x8 connection available, most other quad M.2 adapters can address just two of their four M.2 slots, while the PLX switch lets this card drive all four.
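A hypothetical lane-accounting sketch shows why the switch matters in a x8 slot (assuming the same ~985 MB/s-per-lane PCIe 3.0 figure): a slot bifurcated into x4 + x4 can feed only two drives directly, while a packet switch fans the x8 uplink out to four x4 drives and lets any one or two of them burst at full speed.

```python
# Why a PCIe switch helps in a x8 slot: simple lane accounting.
PCIE3_LANE_MBPS = 8_000 * 128 / 130 / 8   # ~985 MB/s per PCIe 3.0 lane

uplink_lanes = 8                          # x8 slot to the CPU
drives = 4                                # four x4 M.2 SSDs on the card

# Without a switch, a x8 slot can only bifurcate into x4 + x4:
switchless_slots_usable = uplink_lanes // 4           # 2 of the 4 slots work

# With a switch, all four drives connect and share the x8 uplink:
uplink_mbps = uplink_lanes * PCIE3_LANE_MBPS          # ~7,900 MB/s ceiling
aggregate_drive_mbps = drives * 4 * PCIE3_LANE_MBPS   # ~15,800 MB/s demand
oversubscription = aggregate_drive_mbps / uplink_mbps # 2:1 when all four run

print(switchless_slots_usable, f"{uplink_mbps:,.0f} MB/s",
      f"{oversubscription:.0f}:1 oversubscribed")
```

The 2:1 oversubscription only bites when all four drives transfer at once; for most mixed workloads, the ~7,900 MB/s uplink is the practical ceiling, which lines up with the adapter's quoted ~7,000 MB/s writes.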

A majority of users will not see a performance benefit from a product like this. Most of us would just be happy to own a single NVMe SSD, much less four. The Aplicata Quad M.2 NVMe SSD PCIe x8 Adapter costs $449 at the time of writing, so it's expensive and overbuilt for home or even some workstation users.

Most of the BOM cost stems from the PLX chip. Aplicata also sells a less expensive version without the PLX chip, but we expect less widespread compatibility. We have a similar bridgeless HighPoint 4x M.2 to PCIe x16 adapter, and it has compatibility issues in some of our older systems. We hope to test the newer Aplicata design with full x16 bandwidth soon.

Right now, all these products are limited by the platform. A bootable array is far more interesting than a secondary storage device, so we'll take another look at the Aplicata x8 adapter when Intel releases the dongle keys for VROC.


MORE: Best SSDs


MORE: How We Test HDDs And SSDs


MORE: All SSD Content

  • dudmont
    Ram, when the dongle shows, will you be doing a test with this and similar products, with 32gb Optanes?
    Reply
  • daglesj
    I hope the 3 or so of you that can actually exploit this performance have fun using it.
    Reply
  • dudmont
    20196606 said:
    I hope the 3 or so of you that can actually exploit this performance have fun using it.

    While I wholeheartedly agree with you, there's a certain kid in a candy store kind of thing about articles like this.
    Reply
  • takeshi7
    I would have loved to see this with 4 of the Intel Optane 32GB drives installed. That would be the fastest 128GB SSD ever.
    Reply
  • AnimeMania
    I was too stupid to understand anything the article said, but not too stupid to have questions. Are you allowed to mix and match the four M.2 SSDs with different brands and capacities? Do the four M.2 SSDs appear as 4 different drives (with different drive letters) or does that depend on if they are RAIDed?
    Reply
  • PancakePuppy
    20197050 said:
    I was too stupid to understand anything the article said, but not too stupid to have questions. Are you allowed to mix and match the four M.2 SSDs with different brands and capacities? Do the four M.2 SSDs appear as 4 different drives (with different drive letters) or does that depend on if they are RAIDed?

    Functionally, the card is just a carrier for the PCIe packet switch, associated support components, and M.2 connectors. It should be completely unaware of NVMe, so you could plug in 4 of the same SSDs, or 4 completely different ones, or 2 SSDs and 2 M.2 to PCIe edge connector adapters, all fair game.
    Reply
  • DerekA_C
Curious as to why this isn't added to the backside of EATX or ATX boards, or even mATX boards, with some kind of heatsink plate, particularly the X299 and X399 boards that support enough PCIe lanes.
    Reply
  • bit_user
    Running a RAID-0 of 4 drives mostly makes sense if you're using it for caching or scratch space. I wouldn't use this to hold the primary copy of any data I really care about.

    Now, if they included a RAID-5 controller that could keep up with these drives, that would be very interesting.
    Reply
  • bit_user
    20197941 said:
Curious as to why this isn't added to the backside of EATX or ATX boards, or even mATX boards, with some kind of heatsink plate, particularly the X299 and X399 boards that support enough PCIe lanes.
    Hmmm...
    ■ Cost - high end motherboards are already quite expensive. They couldn't add something like this without driving away nearly all the customers who didn't want this specific feature.
    ■ Cooling - most cases don't direct much airflow to the underside of mobos.
    ■ Accessibility - most cases require motherboard removal to access the bottom, except for a cutout under the CPU.
    ■ Small market - it's not uncommon to find 2x M.2 NVMe slots on higher-end motherboards. What % of the market for a given motherboard really wants > 2?

    Need we go on?

    IMO, this is the best option: easily accessible, likely to have good airflow, and can be paired with many different motherboards. You could even install multiple, if you're doing something particularly crazy. Like trying to host big files over 100 Gbps Ethernet.

    BTW, if a motherboard did add something like this, then it would make more sense to place the M.2 boards perpendicular to the motherboard and add a bracket to hold the other ends. This could take the place of one of the expansion card slots, so you'd have some airflow moving across them.
    Reply
  • alan.campbell99
    I'm interested in trying this but it seems it won't ship to New Zealand, bugger.
    Reply