Micron shows off world's fastest PCIe 6.0 SSD, hitting 27 GB/s speeds — Astera Labs PCIe 6.0 switch enables impressive sequential reads

Astera Labs testbench holding Micron's PCIe 6.0 SSDs.
(Image credit: Astera Labs)

Micron and Astera Labs teamed up at DesignCon 2025 to show off the world's first PCIe 6 SSDs in the wild, paired with Astera's Scorpio PCIe 6.0 network fabric switch. When connected to two Micron PCIe 6.0 SSDs and an Nvidia H100 GPU, the switch handled sequential SSD read speeds of over 27 GB/s on each drive — doubling the speeds of today's fastest PCIe 5.0 drives.

The new Micron SSD was first shown off at Astera Labs' booth at DesignCon 2025, a high-end chip design conference held in late January in Santa Clara, California. The companies waited until this week to share details of the test bench publicly, in a post on the Astera Labs blog.

The Micron PCIe 6 SSD was confirmed to be the same model Micron teased in August of last year as the industry's first PCIe 6.0 storage drive, which at the time was rated for 26 GB/s read speeds.

The Astera Labs test bench used at DesignCon 2025 let Micron's PCIe 6 drive surpass expectations, hitting measured read speeds of 27.14 GB/s. For context, the fastest PCIe 5.0 SSD that we've tested, the Crucial T705, tops out at 14.5 GB/s reads, maxing out its PCIe 5.0 x4 connection and still sitting at roughly half the speed of Micron's latest and greatest.

The drive was pushed to this point thanks to Astera's Scorpio P-Series Fabric Switch, an industry-first PCIe 6 fabric switch that connects up to 64 PCIe 6.0 lanes. The switch, built to enable quick communication between CPU, GPU, and storage nodes in HPC and AI clusters, was paired with a helping software hand from Nvidia's Magnum IO GPUDirect Storage (GDS). GDS provides a direct memory access path between storage devices and GPU memory, bypassing the CPU bounce buffer and the latency it adds.
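
For a rough idea of what this path looks like to software, here is a minimal sketch of a GPUDirect Storage read from Python, using Nvidia's kvikio bindings; the file path and buffer size are illustrative assumptions, and kvikio quietly falls back to a bounce buffer in host memory when GDS isn't available.

import cupy
import kvikio

# Hypothetical file on a GDS-capable filesystem (the path is an assumption).
PATH = "/mnt/nvme/sample.bin"
NBYTES = 1 << 30  # read 1 GiB for illustration

# The destination buffer lives directly in GPU memory.
buf = cupy.empty(NBYTES, dtype=cupy.uint8)

# With GPUDirect Storage active, this read DMAs straight from the NVMe drive
# into GPU memory instead of staging through a buffer in host RAM.
f = kvikio.CuFile(PATH, "r")
nbytes_read = f.read(buf)
f.close()

print(f"Read {nbytes_read} bytes into GPU memory")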

Astera Labs has also been teasing its PCIe 6 products for some time. Last March, Astera's booth at GTC 2024 showed off its PCIe 6.0 Aries retimers, devices that act as signal repeaters and boost PCIe signal integrity when tools such as PCIe bridges and extenders are in use. The Aries retimers were among the first PCIe 6.x devices seen in the real world, and now Astera Labs and Micron make a strong display, showing that PCIe 6 is ready to rumble.

PCIe 6.x, so-called due to constant revisions and errata to the PCIe 6 spec (we are currently on PCIe 6.3), promises to be the next horizon for enterprise users and, eventually, consumer products. Today's PCIe 5.0 maxes out at 128 GB/s of bidirectional speeds on an x16 bus. PCIe 6.x doubles this, reaching 256 GB/s over an x16 connection. The HPC and AI enterprise industry is crying out for faster speeds, and PCIe 6.0 promises to enable them as it enters the market.
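
As a quick sanity check on those figures, here is the raw link-rate arithmetic; this is a rough sketch that ignores encoding and FLIT overhead, so deliverable throughput lands slightly lower.

# Raw PCIe x16 link bandwidth, ignoring encoding and protocol overhead.
def x16_gbps(transfer_rate_gt_s, lanes=16):
    # Each transfer moves 1 bit per lane; divide by 8 to convert bits to bytes.
    return transfer_rate_gt_s * lanes / 8

for gen, gt in (("PCIe 5.0", 32), ("PCIe 6.0", 64)):
    one_way = x16_gbps(gt)
    print(f"{gen} x16: {one_way:.0f} GB/s per direction, "
          f"{2 * one_way:.0f} GB/s bidirectional")

# PCIe 5.0 x16:  64 GB/s per direction, 128 GB/s bidirectional
# PCIe 6.0 x16: 128 GB/s per direction, 256 GB/s bidirectional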

Dallin Grimm
Contributing Writer

Dallin Grimm is a contributing writer for Tom's Hardware. He has been building and breaking computers since 2017, serving as the resident youngster at Tom's. From APUs to RGB, Dallin has a handle on all the latest tech news. 

  • usertests
    I like numbers that double.

    PCIe 6.x, so-called due to constant revisions and errata to the PCIe 6 spec (we are currently on PCIe 6.3), promises to be the next horizon for enterprise users and, eventually, consumer products.
    Do we know if PCIe 6.0 is likely to be relevant to consumers in the near term?

    Not whether you need that fast of an SSD or PCIe 6.0 x16 graphics (you don't), just if implementation costs will be high. Some of these PCIe revisions are less painful than others because trace lengths don't have to be reduced. Other than that, we'll see the heat issues all over again.
  • rluker5
    I wonder if the random read is much faster.
  • FunSurfer
    Where is all the interesting stuff: What is the power consumption? What is the operating temperature? Does it need an external cooler (active or passive)?
    After PCIe 5 SSDs' high temperatures, what is happening with PCIe 6 SSDs?
  • joartrak
    All for the newer and faster drives BUT we are only now starting to see consumer 8TB models that are PCIe4 and they are pretty expensive. It does feel like we are skipping a few steps. Will we be seeing PCIe8 before we start seeing 16TB at $200-$300?
  • bit_user
    The article said:
    Today's PCIe 5.0 maxes out at 128 GB/s of bidirectional speeds on an x16 bus. PCIe 6.x doubles this, reaching 256 GB/s over an x16 connection.
    Please stop adding together the bandwidth in each direction. It's confusing and really not useful. Most workloads are heavily asymmetric, meaning the bottleneck will be on the uni-directional speed.

    The uni-directional speed is what PCIe and frankly most of the industry traditionally quoted. It's really a more recent development that people like Nvidia have begun to spike the numbers by adding together the bandwidth in each direction.
  • bit_user
    usertests said:
    Do we know if PCIe 6.0 is likely to be relevant to consumers in the near term?
    I've been wrong before, but I think not. It would further increase motherboard prices, at a time when those are clearly an issue and consumers really don't need more PCIe bandwidth.

    I'd bet we'll at least see CXL on consumer platforms before PCIe 6.0. If mainstream CPUs migrate towards using on-package DRAM and CXL becomes the primary memory expansion method, then I could definitely see a use case for PCIe 6.0 / CXL 3.0 on mainstream PC motherboards.

    usertests said:
    Some of these PCIe revisions are less painful than others because trace lengths don't have to be reduced. Other than that, we'll see the heat issues all over again.
    While PCIe 6.0 doesn't increase the clock speed, its PAM4 encoding does require better signal-to-noise ratio. That roughly corresponds to more PCB layers (or maybe more retimers?), which adds cost.

    The thing is that I can't say for sure how much margin consumer boards currently have in their S/N ratio, however you can be pretty sure the cheaper boards are close to the limit for PCIe 5.0. And that's where the greatest cost-sensitivity is. I guess they could just limit PCIe 6.0 to premium boards, but then the overall value proposition for even doing it would be less, especially since ThreadRipper and Xeon-W will almost certainly have it.

    Another way it adds cost is in the additional die area needed for the PCIe 6.0 controller. PCIe 6.0 adds some new protocol features, so it's not just about PAM4 encoding/decoding they have to accommodate.
  • bit_user
    rluker5 said:
    I wonder if the random read is much faster.
    QD1 @ 4kB? Probably not much. The amount of time to send 4kB over PCIe 5.0 x4 is already just 0.25 usec, which should mean an upper limit of 4M IOPS (a quick sanity check of that math is at the end of this post).

    Sadly, I'm not having much luck finding reviews of high-end enterprise SSDs that actually test QD1, because that's just not an important use case for them. However, I think you can probably look at high-end consumer SSD benchmarks to get a sense of just how far off we are from being bandwidth-limited on 4k random read @ QD1.

    In fact, this high-end PCIe 5.0 SSD from last year quotes 4k random read (probably at something like QD256) at just 2.8M.
    https://www.storagereview.com/review/advancing-high-performance-storage-dapustor-haishen5-h5100-e3-s-ssd
    Since I'm reduced to quoting manufacturer specs, I guess I should also mention that Micron's 9550 (from about the same time) quotes 2.8M to 3.3M 4k random read IOPS. Okay, at 3.3M, we're getting close enough to link saturation that I'd definitely expect to see a benefit from higher speeds.
    https://www.storagereview.com/news/micron-9550-ssds-feature-speed-and-power-efficiency
    But, I maintain that shaving off like 0.13 microseconds from each read isn't going to do much for your QD1 case. Random 1MB reads would be a different story, though.
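
    A quick back-of-the-envelope on that 4M IOPS ceiling, assuming roughly 16 GB/s of usable PCIe 5.0 x4 bandwidth and ignoring protocol overhead:

    # Bandwidth-limited ceiling for 4 kB random reads over PCIe 5.0 x4.
    link_bytes_per_s = 16e9      # ~16 GB/s assumed for PCIe 5.0 x4
    io_size = 4 * 1024           # 4 kB per read

    transfer_time = io_size / link_bytes_per_s   # ~0.26 microseconds
    max_iops = 1 / transfer_time                 # ~3.9 million

    print(f"per-read transfer time: {transfer_time * 1e6:.2f} us")
    print(f"bandwidth-limited ceiling: {max_iops / 1e6:.1f}M IOPS")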
  • bit_user
    FunSurfer said:
    Where is all the interesting stuff: What is the power consumption?
    Lots.

    FunSurfer said:
    What is the operating temperature?
    Hot.

    So, they won't yet have detailed specs on these drives (not that I've seen, anyway), but you can get a rough idea of what we're talking about by looking at their previous-gen perf-optimized enterprise SSDs. The datasheet for the 9550 says it averages 18W for peak sequential read and 16W for peak sequential write.
    https://www.micron.com/content/dam/micron/global/public/products/product-flyer/9550-nvme-ssd-product-brief.pdf
    Supported operating temperatures are up to 70C.

    FunSurfer said:
    Does it need an external cooler (active or passive)?
    It's for servers, which have optimized airflow that will be routed through the storage array. The drives themselves don't contain an active cooling element, but only because they can offload this requirement onto the server chassis.

    Not sure if you've ever been inside a modern server, but they even have hot-swap fans!

    FunSurfer said:
    After PCIe 5 SSDs' high temperatures, what is happening with PCIe 6 SSDs?
    Intel added PCIe throttling to the list of things its thermal driver can do to limit CPU overheat conditions.

    SSDs have had built-in thermal throttling for a long time, but I don't know if that includes dropping back to a lower PCIe rate. They will do this to save power when you have ASPM enabled. Consumer ones, at least.
  • thestryker
    rluker5 said:
    I wonder if the random read is much faster.
    Nope, and it won't be unless you're using an SCM-type drive (even then, PCIe revision will have no real impact on this performance). With Intel shutting down Optane, more SCM drives have cropped up from the major vendors for enterprise, but they all seem pretty focused on specific workloads rather than being universally better. Nothing I've seen indicates anything of the sort will be headed to consumer-grade drives, and this is likely due to limited cooling and board space.

    edit: AFAIK this is the best NAND SSD with regards to low latency/random performance and they haven't made a new version (though this may be due to Kioxia and how they've handled XL-Flash): https://www.storagereview.com/review/dapustor-x2900p-scm-ssd-review
    edit2: these are the SCM class drives out today:
    https://www.micron.com/products/storage/ssd/data-center-ssd/xtr-ssd
    https://www.solidigm.com/products/data-center/d7/p5810.html
  • Tanquen
    Cool, I'm guessing it still slows down to a few hundred KB/s when copying large numbers of small files? I'm sure it'll be fun copying 30 GB virtual disks for my VMs, but that's about it.