SSD Workload Performance Analysis

Performance Hazards For Flash SSDs

The performance qualities of flash-based solid state drives (flash SSDs) have been disputed all over the Web. While the peak performance numbers of the latest products are typically more than impressive (throughput close to 250 MB/s and up to several thousand I/O operations per second), real-life performance can be very different. In fact, over time it can even drop to levels at which conventional hard drives, with their steady performance, are actually faster.

Today's SSDs simultaneously benefit and suffer from flash technology. But if you leave the poor performers aside, the latest-generation offerings from Intel and Samsung definitely show more potential than shortcomings.

What is Performance?

Compared to flash SSDs, evaluating hard drive performance has been rather simple: you want to know throughput in megabytes per second and access time in milliseconds for desktop and notebook drives, and you may add I/O performance analysis for server and workstation products before making a decision. Although power efficiency is becoming more and more important, these remain the performance metrics that matter most.
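
These metrics are directly related: a drive's access time puts an upper bound on its random I/O rate, and the block size ties that rate back to throughput. A minimal sketch of the arithmetic (the latency and block-size figures below are illustrative assumptions, not measurements of any particular drive):

```python
def max_iops(access_time_ms: float) -> float:
    """Upper bound on random IOPS with one outstanding request:
    one operation completes per access time."""
    return 1000.0 / access_time_ms

def random_throughput_mbps(access_time_ms: float, block_kib: int) -> float:
    """Throughput achieved at that IOPS rate for a given block size."""
    return max_iops(access_time_ms) * block_kib / 1024.0

# Illustrative numbers: ~12 ms per random access for a desktop
# hard drive vs. ~0.1 ms for a flash SSD, with 4 KiB blocks.
hdd_iops = max_iops(12.0)   # roughly 83 IOPS
ssd_iops = max_iops(0.1)    # roughly 10,000 IOPS
```

The same throughput number can therefore hide wildly different I/O behavior, which is why sequential MB/s alone says little about random workloads.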

Application benchmarks help assess real-life performance by simulating representative operational sequences. The hard drive form factor (3.5”/2.5”), recording technology, data density, and spindle speed have historically been the parameters with the greatest influence on performance; other factors, such as interface bandwidth or cache size, are secondary.

Flash SSDs are Different

Essentially, hard drives are best at reading or writing data sequentially—the more they have to reposition their heads to tackle random operations, the more they slow down in terms of both throughput and I/O operations per second.

This is where flash SSDs kick in: they offer extremely quick access times, since they only have to address the right position within the memory array rather than move physical components. In addition, the latest products can deliver roughly twice the maximum throughput of a conventional hard drive by lining up flash memory in multiple channels, similar to dual-/triple-channel RAM configurations or RAID. Finally, the analysis of I/O performance reveals the intelligence of a flash SSD's controller, which has to maximize performance while also providing wear leveling for the flash cells.
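
The multi-channel idea can be sketched as simple round-robin striping of a data stream across flash channels; the channel count and chunk size here are illustrative, and real controllers use far more elaborate scheduling:

```python
def stripe(data: bytes, channels: int, chunk: int) -> list[list[bytes]]:
    """Distribute a sequential stream across flash channels, round-robin,
    so all channels can be programmed in parallel (like RAID 0)."""
    queues: list[list[bytes]] = [[] for _ in range(channels)]
    for i in range(0, len(data), chunk):
        queues[(i // chunk) % channels].append(data[i:i + chunk])
    return queues

# With 4 channels, a sequential write keeps all four busy at once,
# which is how an SSD can exceed the speed of a single flash die.
queues = stripe(b"A" * 16, channels=4, chunk=2)
```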

The Black Box

As flash SSDs have become more complex, they have also become veritable “black boxes.” The physical location and strategy for storing data isn’t as simple as it is on hard drives, where it is rather easy to imagine how data is laid out. By looking at the type of NAND flash memory, you can estimate whether a flash SSD will only be good at sequential reads, or whether it can deliver high write and I/O performance as well. Single-level cell (SLC) flash is the faster type; it stores one bit per cell, which keeps programming quick. But SLC is expensive, often too expensive even for mainstream devices. Multi-level cell (MLC) flash is the increasingly popular alternative; it stores multiple bits per cell using several voltage levels, providing higher capacities.

However, the combination of smart controllers and multiple flash channels leads to scattered use of the available resources: a sequential stream of data is never actually written sequentially. The fact that files can be anywhere between a few bytes and many gigabytes, and that data is typically written, read, erased, and written again, adds a layer of complexity that can have a substantial impact on flash SSD performance. The effect becomes even more pronounced once you fill the entire SSD, leaving the flash controller fewer options to optimize performance. Luckily, there are precautions you can take, and firmware updates are available as well; these updates continually improve flash controllers to reduce performance fluctuations, while future operating systems and file systems are being designed with the characteristics of flash storage in mind.
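
The "never actually sequential" behavior follows from the logical-to-physical mapping every controller maintains: a rewritten logical block lands wherever wear leveling prefers, not where it lived before. The toy sketch below illustrates only that idea; real controllers also do garbage collection, error correction, and per-page rather than per-block mapping:

```python
class ToyFTL:
    """Toy flash translation layer: each logical write goes to the
    least-worn free physical block, not to a fixed location."""

    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks
        self.mapping: dict[int, int] = {}   # logical -> physical
        self.free = set(range(num_blocks))

    def write(self, logical: int) -> int:
        # The old copy becomes stale; its block is erased and freed.
        if logical in self.mapping:
            old = self.mapping[logical]
            self.erase_counts[old] += 1
            self.free.add(old)
        # Wear leveling: pick the least-worn free block.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.mapping[logical] = target
        return target
```

Rewriting the same logical block repeatedly visits different physical blocks, spreading erase cycles evenly, and as the free set shrinks on a full drive the controller has ever fewer placement choices, which is why a nearly full SSD fluctuates more.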