Other PCIe 5.0 SSDs Are Also Crashing Instead of Throttling

Seagate FireCuda 540 (Image credit: Seagate)

Some of the best SSDs, specifically the PCIe 5.0 drives based on the Phison PS5026-E26 controller, have been crashing instead of thermally throttling when operated without a cooler. However, it is worth noting that all of the affected drives are designed and marketed specifically for use with a heatsink, so the conditions that trigger the thermal shutdown won't arise if the drives are used correctly (in accordance with manufacturer specifications).

Initially, only the Corsair MP700 exhibited this behavior, but it has since become apparent that the issue is more widespread and affects other Phison E26-based SSDs designed for heatsinks when they are run without their heatsinks attached.

German news outlet Computerbase discovered that the Seagate FireCuda 540, Gigabyte Aorus Gen5 10000, and Adata Legend 970 also suffer from the shutdown issue. This was to be expected, since these PCIe 5.0 SSDs use the same Phison E26 controller. The FireCuda 540, Aorus Gen5 10000, and Legend 970 are still on firmware 22. Seagate hasn't told Computerbase when the new firmware will be available, whereas Gigabyte promised that it'll arrive "soon."
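If you want to confirm which firmware revision your drive is currently running, vendor utilities and SMART tools will report it; on Linux, the kernel's NVMe driver also exposes it through sysfs. The snippet below is a minimal sketch of reading that attribute in C; the nvme0 device name is an assumption and will vary by system.

```c
/* Minimal sketch: read an NVMe drive's firmware revision from sysfs on Linux.
 * Assumes the drive is enumerated as nvme0; adjust the path for your system. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/nvme/nvme0/firmware_rev";
    char rev[64] = {0};

    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fgets(rev, sizeof rev, f))
        printf("Firmware revision: %s", rev);  /* sysfs value ends with a newline */
    fclose(f);
    return 0;
}
```

Tools such as smartctl show the same information, so the program is purely illustrative.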

To recap: the issue only occurs when you use a PCIe 5.0 SSD without cooling. When the drive gets too hot, it shuts down to protect the SSD controller, NAND, and data. This shouldn't be an issue if the PCIe 5.0 drive is adequately cooled by the included heatsink or the M.2 heatsink from the motherboard. Regardless, Phison has released a new firmware (version 22.1) that ensures the PCIe 5.0 SSDs throttle — instead of just crashing, which can lead to data loss. 

Firmware 22.1 introduces link-state thermal throttling that essentially reduces the PCIe interface speed — for example, dropping from PCIe 5.0 to PCIe 4.0 or even PCIe 3.0 to lower the temperature of the physical layer (PHY) without throttling the processor clock. This will obviously impact the PCIe 5.0 SSD's performance, but it should also keep it from engaging a shutdown to protect the integrity of the SSD controller. According to Computerbase, the thermal threshold on the new firmware 22.1 is 85 degrees Celsius.
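To make the distinction concrete, here is a rough, illustrative sketch in C of how a link-state throttling policy differs from a hard thermal shutdown. This is not Phison's actual firmware; the function names, state handling, and control loop are assumptions made for illustration, and only the 85 degrees Celsius threshold comes from Computerbase's report.

```c
/* Illustrative sketch of link-state thermal throttling vs. a hard shutdown.
 * Not Phison's firmware; names and structure are assumptions for illustration.
 * The 85 C threshold is the figure Computerbase reported for firmware 22.1. */
#include <stdio.h>

enum link_speed { GEN5, GEN4, GEN3 };

#define THROTTLE_THRESHOLD_C 85

/* Old behavior (firmware 22): exceed the limit and the drive simply shuts down. */
static int old_thermal_policy(int temp_c)
{
    return temp_c >= THROTTLE_THRESHOLD_C;  /* 1 = shut down, data in flight at risk */
}

/* New behavior (firmware 22.1): step the PCIe link down one generation at a time
 * to cool the PHY, leaving the controller clock (and the drive) running. */
static enum link_speed new_thermal_policy(int temp_c, enum link_speed current)
{
    if (temp_c >= THROTTLE_THRESHOLD_C && current != GEN3)
        return current == GEN5 ? GEN4 : GEN3;  /* downshift: 5.0 -> 4.0 -> 3.0 */
    return current;                            /* cool enough: keep current speed */
}

int main(void)
{
    enum link_speed link = GEN5;
    int samples[] = { 70, 84, 86, 88, 80 };    /* example temperature readings in C */

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        link = new_thermal_policy(samples[i], link);
        printf("temp=%dC -> link=Gen%d, old policy would %s\n",
               samples[i], link == GEN5 ? 5 : link == GEN4 ? 4 : 3,
               old_thermal_policy(samples[i]) ? "shut down" : "keep running");
    }
    return 0;
}
```

A real implementation would presumably also step the link speed back up once the drive cools down, but the key difference is the same: the drive slows down rather than dropping off the bus mid-transfer.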

Initially, the Crucial T700 didn't appear to have the issue: the PCIe 5.0 drive would throttle until it operated at hard drive speeds, but it didn't shut down thermally. However, further testing by Computerbase revealed similar failures, so the drive may also need the firmware 22.1 update.

Under the new firmware 22.1, Phison E26-powered SSDs should still provide an acceptable level of performance at high temperatures. Without a cooler, the Corsair MP700 delivered sequential read and write speeds of over 10 GB/s and 2 GB/s, respectively. Remember that the throttling is a safety measure in case temperatures get out of hand; you should always use a cooler with your PCIe 5.0 SSD.

Edit: 7/21/2023 3:45pm PT: Clarified that these SSDs are specifically designed to be used with a heatsink. 

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • TechieTwo
    Who would be so technically challenged as to run a PCIe 5 SSD without a heatsink? It's technically ignorant.
  • bit_user
    PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.

    What we need is further iteration and refinement on PCIe 3.0 and 4.0 controllers.

    The only persuasive argument I can see for PCIe 5.0 is if you wanted to use it at just x2, so you could pack in more drives. But, that's not how I think most people are using them.
  • InvalidError
    bit_user said:
    PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.
    The first generation or two of anything going into a new standard in the consumer space is almost always plagued with teething issues on top of having little to no benefit over cheaper, more mature stuff from the previous generation in most everyday uses. For normal people, it is usually much better to skip them.

    PCIe 4.0 SSDs were a hot mess too for the first two years. Now they are the mature low-power budget-friendly option that obsolesced 3.0 drives.
  • InvalidError
    Tom Sunday said:
    People always want the latest and the fastest and will go to any lengths to get it.
    I'm fine buying into mature, proven stuff. My last attempt at buying something new and somewhat exciting was an Intel A750, and all I got was random crashing under all circumstances for two days. Now I have an RX6600.
  • atomicWAR
    bit_user said:
    PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.

    What we need is further iteration and refinement on PCIe 3.0 and 4.0 controllers.

    The only persuasive argument I can see for PCIe 5.0 is if you wanted to use it at just x2, so you could pack in more drives. But, that's not how I think most people are using them.
    We finally had an adequate speed update with PCIe 3.0 drives coming from SATA 6G. Then they released PCIe 4.0 with drives that can hit a ludicrous 7,300 MB/s, which get toasty but are easily manageable with passive heatsinks. With all the heat PCIe 5.0 brings to the table, I don't feel like it's a wise direction, yet, for the consumer industry. Kind of going off what you're saying, I would have loved to see x2-linked NVMe drives for PCIe 5.0 and been able to cram more drives into the same space, even if that meant using risers, PCIe AIBs, or even having the slots vertical to the board instead of horizontal to allow for more drives. Me personally, I can never have enough storage space. I just upgraded my HDD storage pools to 80TB each (currently at 50% capacity but gobbling up more every day) for redundancy.

    Ultimately, though, once they get heat under control (i.e. not needing active cooling as a rule rather than the exception), I think PCIe 5.0 drives will be ready for the mainstream. And honestly, had they moved consumer boards to x2 links for 5.0 drives this generation, then by the time we hit PCIe 6.0 (which at x2 would in theory produce the level of heat we're getting now with 5.0 x4 links), manufacturers would have had time in the server space to learn to cool these things. I mean, I love more speed and new things, but the direction we're going with heat on these SSDs has me a little hot under the collar, to say the least. But at the end of the day, progress is progress. Happy to see it, even if I think there would have been a better way.
  • PlaneInTheSky
    TechieTwo said:
    Who would be so technically challenged as to run a PCIe 5 SSD without a heatsink? It's technically ignorant.
    Don't blame users.

    M.2 is a ridiculously flawed design for desktop PC.

    Who the hell decided this hillbilly M.2 design with tiny screws, pasted-on heatsinks, sitting in between a hot CPU and GPU, was a good idea on PC? You have to literally take off the CPU cooler and heatsink on some PCs to access the M.2 slot.

    I don't even want to know how many people dropped that tiny M.2 screw into their PSU. The fact that M.2 needs tiny standoffs is icing on the cake. Awful design.

    M.2 was designed for hyper thin notebooks where people don't touch the thing. It's not a desktop design.

    U.2, like servers have, should be standard on desktop PCs.

    M.2 sucks.
  • InvalidError
    PlaneInTheSky said:
    U.2, like servers have, should be standard on desktop PCs.

    M.2 sucks.
    U.2 sucks for consumers: you need $25-30 cables to connect each drive to the motherboard, you need to run power cables to each drive, and the motherboard needs the added cost of PCIe retimers for signals to survive the trip from the CPU to the U.2 board connector through that cable. With M.2, you just slap a connector next to the CPU or chipset and call it a day, saving ~$30 per port. The drives themselves are also $5-10 cheaper from not requiring a separate housing and from using a simple card-edge connector built from the PCB instead of an additional part.

    If you don't like where M.2 slots are on typical motherboards, then petition motherboard manufacturers to give you extra PCIe slots instead and slap M.2 SSDs on those.

    The vast majority of PC users likely don't even open their PCs for cleaning and won't care where the SSDs are located. They would care that a PC with slightly more serviceable SSD slots costs $50-100 more than an otherwise same-spec system with motherboard-mounted M.2 slots.
  • bit_user
    InvalidError said:
    PCIe 4.0 SSDs were a hot mess too for the first two years. Now they are the mature low-power budget-friendly option that obsolesced 3.0 drives.
    Not sure about the "low-power" part, though. I think they're still higher-power than most PCIe 3.0 drives.

    I just bought an SK Hynix P31 Gold, and the professional reviews I read all basically gushed about how it never encountered thermal throttling. And it's just a PCIe 3.0 drive. Compared with some of the more power-efficient PCIe 4.0 drives, I see that active power is comparable (folder-copy use case, so mostly sequential at QD=1), but their idle power is still about twice as high as the P31 Gold's.
  • bit_user
    InvalidError said:
    I'm fine buying into mature, proven stuff. My last attempt at buying something new and somewhat exciting was Intel A750 and all I got was random crashing under all circumstances for two days.
    You also bought an "open box" unit. Don't leave that part out. It's unknowable whether any of the problems you experienced were due to actual hardware problems with that unit.

    For my money, "open box" items usually aren't marked down enough to be worth the potential for problems.
  • Zerk2012
    I guess they should bundle them with this.
    https://www.newegg.com/ineo-m3/p/13C-00RW-00001