What Do We Know About Storage?
SSDs are a relatively new technology (at least compared to hard drives, which are almost 60 years old). It’s understandable that we would compare the new kid on the block against the tried and true. But what do we really know about hard drives? Two important studies shed some light. Back in 2007, Google published a study on the reliability of 100 000 consumer PATA and SATA drives used in its data center. Similarly, Dr. Bianca Schroeder and her adviser, Dr. Garth Gibson, calculated the replacement rates of over 100 000 drives used at some of the largest national labs. The difference is that their data also covers enterprise SCSI, SATA, and Fibre Channel drives.
If you haven’t read either paper, we highly recommend at least reading the second study. It won best paper at the File and Storage Technologies (FAST ’07) conference. For those not interested in poring over academic papers, we’ll also summarize.
MTBF Rating
You remember what MTBF means (here's a hint: we covered it on page four of OCZ's Vertex 3: Second-Generation SandForce For The Masses), right? Let’s use the Seagate Barracuda 7200.7 as an example. It carries a 600 000-hour MTBF rating. In any large population, we'd expect roughly half of these drives to fail within the first 600 000 hours of operation. Assuming failures are evenly distributed, a population of 600 000 drives would see about one failure per hour. That works out to an annualized failure rate (AFR) of 1.44%.
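If you want to check the math yourself, here's a quick back-of-the-envelope sketch. The exact percentage shifts slightly depending on how many powered-on hours per year you assume and which formula you use:

```python
# Illustrative sketch: converting an MTBF rating into an annualized
# failure rate (AFR), using the Barracuda 7200.7 figure from the text.
import math

MTBF_HOURS = 600_000          # datasheet MTBF rating
HOURS_PER_YEAR = 24 * 365     # assumes 8,760 powered-on hours per year

# Simple approximation: expected failures per drive-year
afr_simple = HOURS_PER_YEAR / MTBF_HOURS

# Exponential-lifetime form, which the simple ratio approximates
afr_exponential = 1 - math.exp(-HOURS_PER_YEAR / MTBF_HOURS)

print(f"AFR (simple ratio): {afr_simple:.2%}")        # ~1.46%
print(f"AFR (exponential):  {afr_exponential:.2%}")   # ~1.45%

# Population view: with 600,000 of these drives spinning, you'd expect
# roughly one failure per hour on average.
drives = 600_000
print(f"Expected failures per hour: {drives / MTBF_HOURS:.1f}")
```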
But that’s not what Google or Dr. Schroeder found, because failures do not necessarily equal disk replacements. That is why Dr. Schroeder measured the annualized replacement rate (ARR). This is based on the number of actual disks replaced, according to service logs.
While datasheet AFRs are between 0.58% and 0.88%, the observed ARRs range from 0.5% to as high as 13.5%. In other words, depending on the data set and drive type, the observed replacement rates are up to a factor of 15 higher than the datasheet AFRs.
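To make the distinction concrete, here's a small sketch of how an ARR falls out of service logs. The fleet numbers below are hypothetical, not taken from either study:

```python
# Illustrative sketch: an annualized replacement rate (ARR) is simply
# the number of drives actually swapped out, per drive-year of operation.
def annualized_replacement_rate(replacements: int, drives: int, years_in_service: float) -> float:
    """ARR = drive replacements per drive-year of operation."""
    drive_years = drives * years_in_service
    return replacements / drive_years

# Hypothetical fleet: 10,000 drives in service for 3 years, 900 swapped out.
arr = annualized_replacement_rate(replacements=900, drives=10_000, years_in_service=3)
print(f"Observed ARR: {arr:.2%}")   # 3.00%, several times the ~0.6-0.9% datasheet AFR
```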
Drive makers define failures differently than we do, and it’s no surprise that their definition overstates drive reliability. Typically, an MTBF rating is based on accelerated life testing, data from returned units, or a pool of tested drives. Vendor return data is highly suspect, though. As Google states, “we have observed… situations where a drive tester consistently ‘green lights’ a unit that invariably fails in the field.”
Drive Failure Over Time
Most people assume that the failure rate of a hard drive follows a bathtub curve: many drives fail early on due to a phenomenon referred to as infant mortality, followed by a long period of low failure rates, and finally a steady rise as drives wear out. Neither study found that assumption to be true. Instead, both found that drive failures steadily increase with age.
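For a feel of the difference, here's a purely illustrative sketch of the two shapes. The hazard-rate functions are made up to show the contrast, not fitted to either study's data:

```python
# Illustrative only: the "bathtub" failure-rate shape most people assume
# versus a failure rate that simply keeps climbing with age, which is
# closer to what both studies observed in the field.
def bathtub_hazard(age_years: float) -> float:
    infant_mortality = 0.04 * max(0.0, 1.0 - age_years)   # high early, falls off
    random_failures = 0.01                                 # flat middle of the tub
    wear_out = 0.005 * max(0.0, age_years - 4.0) ** 2      # climbs after ~4 years
    return infant_mortality + random_failures + wear_out

def observed_hazard(age_years: float) -> float:
    # Replacement rates that rise steadily with age, no flat plateau.
    return 0.02 + 0.015 * age_years

for age in range(6):
    print(f"year {age}: bathtub={bathtub_hazard(age):.3f}  observed={observed_hazard(age):.3f}")
```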
Enterprise Drive Reliability
When you compare the two studies, you realize that the Cheetah drive rated for 1 000 000 hours MTBF behaves in the field more like a drive with a datasheet MTBF of 300 000 hours. This means that “enterprise” and “consumer” drives have pretty much the same annualized failure rate, especially when you are comparing similar capacities. According to Val Bercovici, director of technical strategy at NetApp, "…how storage arrays handle the respective drive type failures is what continues to perpetuate the customer perception that more expensive drives should be more reliable. One of the storage industry’s dirty secrets is that most enterprise and consumer drives are made up of largely the same components. However, their external interfaces (FC, SCSI, SAS, or SATA), and most importantly their respective firmware design priorities/resulting goals play a huge role in determining enterprise versus consumer drive behavior in the real world."
Data Safety and RAID
Dr. Schroeder’s study covers enterprise drives used in large RAID systems at some of the biggest high-performance computing labs. Typically, we assume that data is safer in properly-chosen RAID modes, but the study found something quite surprising:
The distribution of time between disk replacements exhibits decreasing hazard rates, that is, the expected remaining time until the next disk was replaced grows with the time it has been since the last disk replacement.
This means that the failure of one drive in an array increases the likelihood of another drive failure, and the more time that passes since the last failure, the more time is expected to pass until the next one. Of course, this has implications for the RAID reconstruction process. After the first failure, you are four times more likely to see another drive fail within the same hour. Within 10 hours, you are only two times more likely to experience a subsequent failure.
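If you want an intuition for how a decreasing hazard rate behaves, here's a small simulation using a Weibull distribution with a shape parameter below one. The parameters are illustrative only, not fitted to the study's data:

```python
# Illustrative sketch of a decreasing hazard rate: with a Weibull shape
# parameter below 1, the expected wait until the next replacement grows
# the longer you've already gone without one.
import random

random.seed(42)
SHAPE, SCALE = 0.7, 1000.0   # shape < 1 => decreasing hazard rate (hours)

samples = [random.weibullvariate(SCALE, SHAPE) for _ in range(200_000)]

def mean_remaining_time(times, elapsed_hours):
    """Average additional wait, given no replacement in the first `elapsed_hours`."""
    survivors = [t - elapsed_hours for t in times if t > elapsed_hours]
    return sum(survivors) / len(survivors)

for elapsed in (0, 10, 100, 1000):
    print(f"{elapsed:>5} h since last replacement -> "
          f"expected wait ~{mean_remaining_time(samples, elapsed):.0f} h more")
```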
Temperature
One of the stranger conclusions comes from Google’s paper. The researchers took temperature readings from SMART—the self-monitoring, analysis, and reporting technology built into most hard drives—and they found that a higher operating temperature did not correlate with a higher failure rate. Temperature does seem to affect older drives, but the effect is minor.
Is SMART Really Smart?
The short answer is no. SMART was designed to catch disk errors early enough that you can back up your data. But according to Google, more than one-third of all failed drives did not trigger a SMART alert. This isn't a huge surprise, as many industry insiders have suspected as much for years. It turns out that SMART is really optimized to catch mechanical failures, but much of a disk is still electronic. That's why behavioral and situational problems, like a power failure, go unnoticed, while data integrity issues are caught. If you're relying on SMART to warn you of an impending failure, you need to plan for an additional layer of redundancy if you want to ensure the safety of your data.
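If you still want to use SMART as one early-warning signal among several, a minimal sketch using smartmontools' smartctl might look like the following. It assumes smartctl is installed and that the script has the privileges to query the drive; the output wording varies by drive and platform:

```python
# Minimal sketch: poll a drive's SMART self-assessment via smartctl.
# As the text notes, a passing result is no guarantee, so treat this as
# one signal among several, not a substitute for backups or redundancy.
import subprocess

def smart_health(device: str = "/dev/sda") -> str:
    """Return the drive's overall SMART health line, if it reports one."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True,
    )
    for line in result.stdout.splitlines():
        # ATA drives report "SMART overall-health self-assessment test result: ...",
        # SCSI/SAS drives report "SMART Health Status: ...".
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "SMART health status not reported"

if __name__ == "__main__":
    print(smart_health("/dev/sda"))
```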
Now let's see how SSDs stack up against hard drives.