SSD Endurance: Data Integrity is a Marathon, Not a Sprint
Endurance Workloads
While client applications, such as media playback and large file writes, are often sequential, enterprise data tends to be highly random and bursty, often with many users accessing the drive simultaneously. Operations typically involve small files, but other transfer sizes also come into play. As such, JEDEC arrived at the following composition for a standardized enterprise endurance workload:
· 512 bytes (0.5k) — 4%
· 1024 bytes (1k) — 1%
· 1536 bytes (1.5k) — 1%
· 2048 bytes (2k) — 1%
· 2560 bytes (2.5k) — 1%
· 3072 bytes (3k) — 1%
· 3584 bytes (3.5k) — 1%
· 4096 bytes (4k) — 67%
· 8192 bytes (8k) — 10%
· 16,384 bytes (16k) — 7%
· 32,768 bytes (32k) — 3%
· 65,536 bytes (64k) — 3%
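The mixture above can be read as a weighted distribution over transfer sizes. As a minimal sketch (not JEDEC's actual generator, which the standard defines in full), here is how a workload tool might draw transfer sizes matching those percentages:

```python
import random

# Transfer sizes (bytes) and their JEDEC enterprise workload weights (percent)
SIZES = [512, 1024, 1536, 2048, 2560, 3072, 3584,
         4096, 8192, 16384, 32768, 65536]
WEIGHTS = [4, 1, 1, 1, 1, 1, 1, 67, 10, 7, 3, 3]  # sums to 100

def next_transfer_size():
    """Draw one transfer size according to the workload mix."""
    return random.choices(SIZES, weights=WEIGHTS, k=1)[0]
```

Note that 4 KB dominates at 67%, which matches the small-file, metadata-heavy character of enterprise I/O described above.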
For those who read our prior article on why reliable storage benchmarks are necessary, know that the load here is similar to that employed in the SPC-1C profile. Writes are spread across 100% of the user logical block address (LBA) area to prevent any drive region from being disproportionately stressed. However, this doesn't mean that the LBA space is written evenly. Rather, 50% of accesses fall in the first 5% of the user LBA space, the next 15% of the LBA space receives 30% of the accesses, and the remaining 20% of accesses fall across the rest of the drive. According to JEDEC, this is because multiple studies have found that less than 5% of enterprise data receives more than half of all accesses, and about 20% of data accounts for more than 80% of drive accesses. The LBA space usage is designed to reflect these patterns.
Test workloads differ by use case. Unlike the JEDEC enterprise workload, the JEDEC client workload consists of replaying real trace commands rather than a synthetically generated data set. The client load doesn't cover the entire LBA range, so the performance of the entire drive isn't represented; this is not true of enterprise testing. Note that both the enterprise and client workloads require a fully random data pattern. There is also an optional non-random pattern that manufacturers can run in addition to the random testing.
Drives are tested for various numbers of hours at various temperatures according to this chart:
[Chart: test durations and temperatures; image not available]
Yes, that's 3000 hours of testing, and after this, retention must be assessed. JEDEC specifies a period of at least 500 hours and encourages manufacturers to test even longer for greater accuracy. There are approved "accelerated" testing methodologies under certain circumstances, but even so, JEDEC requires a number of drives for testing that depends on the number of allowed drive failures. For example, the requirement for zero failures is 31 drives; one allowable failure requires 68 drives. Keep in mind that these enterprise-caliber drives generally cost four figures each. Given the commitment needed in time and resources, it's not surprising that published results for these new tests are slow in arriving.
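Those sample sizes (31 drives for zero failures, 68 for one) are consistent with a standard chi-squared reliability demonstration of a 3% functional failure requirement at 60% confidence. That interpretation of the underlying statistics is our assumption, not something stated above, but the sketch below reproduces both published figures. For even degrees of freedom, the chi-squared CDF has a closed form, so no external stats library is needed:

```python
import math

def chi2_cdf_even_df(x, df):
    """Chi-squared CDF for even df, via the closed-form Poisson sum."""
    k = df // 2
    s = sum((x / 2) ** i / math.factorial(i) for i in range(k))
    return 1.0 - math.exp(-x / 2) * s

def chi2_ppf_even_df(q, df):
    """Invert the CDF by bisection (quantile function)."""
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if chi2_cdf_even_df(mid, df) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sample_size(allowed_failures, ffr=0.03, confidence=0.60):
    """Minimum drives to demonstrate a failure rate <= ffr at the
    given confidence, tolerating allowed_failures failures.
    (ffr=3% and confidence=60% are our assumed parameters.)"""
    df = 2 * (allowed_failures + 1)
    return math.ceil(chi2_ppf_even_df(confidence, df) / (2 * ffr))
```

Under these assumed parameters, `sample_size(0)` works out to 31 drives and `sample_size(1)` to 68, matching the figures quoted above; each additional allowed failure buys statistical power only at the cost of dozens more four-figure drives.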