Platter Size Reliability?
Bolbi
I'm looking to buy a HDD for my new system build. I know that higher data densities give better speed, but how much should I be concerned about the reliability of different platter sizes? E.g., are drives with 250 GB platters more reliable than those with 320 GB platters? I'm not too concerned about speed as long as I stay with the latest generation drives.

It's all about size: while capacity and data density keep increasing, reliability is staying the same. Reliability is expressed as the number of unrecoverable read errors per 10^n bits read. If this number stays the same while capacities keep growing, you will have HDDs that have a lot of capacity but won't be able to remember the data in all cases; sometimes it will be lost.
This is already happening with current drives, which need to use ECC error correction in order to read data; the raw data read from the platters is already too unreliable! So it needs redundancy and correction to make a decently reliable drive. That's the beginning of the end, of course; HDDs don't have that much time left before they become useless.
For now there is no alternative for raw data storage, and given the fact that you need a backup anyway, this is no major concern to consumers. Focus on the backups and go for large disks.
The 2TB disks are said to be less reliable than the 1.0 or 1.5TB models, though I've not personally seen any data to validate this claim.
cjl said:Since modern drives are specified around 1 unrecoverable error per 10^14 bits read (and they are improving), I honestly don't see any truth to that claim, sub mesa. A terabyte is only 8x10^12 bits, so we have quite a way to go before there's a significant impact due to unrecoverable read errors.
Still, 8x10^12 is within spitting distance of 10^13, so with a 1 in 10^14 bit error rate it means that of the 1TB consumer drives being marketed today almost 10% of them are unable to read all of their platters without errors. If the error rate is the same for 2TB drives, it would mean that almost 20% of them couldn't successfully read all of their data.
That's not terribly encouraging... 
sminlal
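As a rough sanity check of the percentages above, the probability that a full read of a drive hits at least one unrecoverable error can be sketched in a few lines of Python. This assumes independent bit errors at exactly the quoted rate, which is a simplification of how real drives behave:

```python
# Sketch: probability of >= 1 unrecoverable error when reading a whole drive,
# assuming independent bit errors at the quoted rate (a simplification).
import math

def p_full_read_error(capacity_bytes: float, errors_per_bit: float) -> float:
    bits = capacity_bytes * 8
    # P(at least one error) = 1 - (1 - p)^n, computed stably via log1p
    return 1.0 - math.exp(bits * math.log1p(-errors_per_bit))

TB = 1e12
for size_tb in (1, 2):
    p = p_full_read_error(size_tb * TB, 1e-14)
    print(f"{size_tb} TB drive @ 1 error per 10^14 bits: {p:.1%}")
```

This gives roughly 8% for a 1TB drive and 15% for a 2TB drive, in the same ballpark as the "almost 10% / almost 20%" figures being debated.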
I do not think the math works that way. E.g., picking the 5 of spades from a deck is a 1 in 52 chance. That does not mean that if you go through 52 decks you will pull a 5 of spades; the odds are the same for each deck. Anyway, that's my take. But as you indicated, the odds are getting close to the point of getting an error, e.g., putting two 2TB drives in RAID 0.
I'm not terribly concerned about error rates. However the math works (and I don't want to figure that out now), I don't think the manufacturers would release a line of drives in which 20% or even 10% couldn't successfully read all of their data.
My original question was actually concerning whether higher-density platters (and the drives they go into) tend to break faster, i.e., are they more likely to die after 3 years of usage as opposed to drives with lower-density platters? It seems that at least the earlier posters would answer "no". I think I'll probably just go with whatever drive goes on sale first amongst: WD Black, Seagate 7200.12, or Samsung F3.
sminlal said:Hard drive error rates haven't really changed a whole lot over the last decade, although enterprise-class drives are usually quoted at one unrecoverable error per 10^15 bits, so there's obviously some room for improvement.
Still, 8x10^12 is within spitting distance of 10^13, so with a 1 in 10^14 bit error rate it means that of the 1TB consumer drives being marketed today almost 10% of them are unable to read all of their platters without errors. If the error rate is the same for 2TB drives, it would mean that almost 20% of them couldn't successfully read all of their data.
That's not terribly encouraging...
What you're forgetting is that misreading a single bit is basically never disastrous. In almost every case, your computer is essentially unaffected by a single uncorrectable bit. It takes significantly more than that for serious consequences to occur (unless you are supremely unlucky about the location of that bit). In addition, hard drive manufacturers are working on ways to further improve drive reliability. I would not be too concerned, honestly.
RetiredChief said:picking the 5 of spades from a deck is 1 in 52.
Yes, but if you look at every card in the deck then you're guaranteed to see the 5 of spades. In your card analogy, there's a 1 in 52 chance of seeing that card when you do a random pick from the deck, but a 100% chance if you look at all 52 cards.
That's my point about reading all of the data from the drive. If the predicted reliability is 1 error in a million reads, then it means you should expect to get one error if you do a million reads. What else could it mean?
Similarly, if the predicted error rate is one bad bit in 10^14 bits, then if you have 10^14 bits of storage and read all of it you can expect to get one error. It's just a statistical prediction (unlike the card example), so some people may not encounter any errors, but if the reliability estimate is roughly correct then for every lucky guy who gets no errors there's some other poor schmuck who gets two of them.
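To put numbers on the "lucky guy / poor schmuck" split: under the usual independence assumption, the count of errors seen when reading 10^14 bits at a 1-in-10^14 rate is approximately Poisson with mean 1. A hedged sketch:

```python
# Sketch: distribution of error counts over a full read of 10^14 bits at a
# 1-in-10^14 rate, modeled as Poisson(1) (assumes independent bit errors).
import math

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 1.0  # expected errors: 10^14 bits read * 1 error per 10^14 bits
for k in range(4):
    print(f"P({k} errors) = {poisson_pmf(k, lam):.3f}")
```

This shows about 37% of such reads hitting zero errors, 37% hitting exactly one, and 18% hitting two or more, which is exactly the asymmetry described above.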
cjl said:What you're forgetting is that misreading a single bit is basically never disastrous.
The good news is that regular backups minimize the risks by (a) creating another copy you can use if the original goes bad, and (b) actually reading all the data on the drive to see if there are any problems with it. This is one of the reasons I never use "smart" backup software that only ever backs up a file once unless it changes. It's also why I checksum all the files on my archive drives and verify them on a regular basis.
sub mesa is right in that the larger drives get, the more these error rates become worrisome. Something will have to improve in terms of error detection/correction if capacities continue to grow at the rate they have been. 
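For anyone wanting to copy the checksum-and-verify habit described above, here is a minimal sketch. The manifest layout and helper names are my own illustration, not any particular backup tool's API:

```python
# Sketch: record a SHA-256 digest per archive file, then re-read and compare
# on each verification pass to catch silent read errors or bit rot.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str]) -> list[str]:
    """Return files whose current digest no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(Path(name)) != digest]
```

A verification pass also forces the drive to actually read every sector of the archived data, which is the point sminlal makes about not trusting backups that are never re-read.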
sminlal said:Yes, but if you look at every card in the deck then you're guaranteed to see the 5 of spades. In your card analogy, there's a 1 in 52 chance of seeing that card when you do a random pick from the deck, but a 100% chance if you look at all 52 cards.
That's my point about reading all of the data from the drive. If the predicted reliability is 1 error in a million reads, then it means you should expect to get one error if you do a million reads. What else could it mean?
Similarly, if the predicted error rate is one bad bit in 10^14 bits, then if you have 10^14 bits of storage and read all of it you can expect to get one error. It's just a statistical prediction (unlike the card example), so some people may not encounter any errors, but if the reliability estimate is roughly correct then for every lucky guy who gets no errors there's some other poor soul who gets two of them.
Statistical analysis doesn't quite work the way that you think it does; it's rather more like RetiredChief's analogy. If a given drive has 10^14 chance of an error in one bit, and you have 10^14 bits of data stored, then there is STILL a 10^14 chance of a single-bit error, per drive. Or, to rephrase it slightly, one in 10^14 drives will have a single-bit error. I have no idea how the various manufacturers come up with these stats; they don't seem to me to be relevant. All I can do is quote a phrase attributed to Mark Twain: "There are lies, damn lies, and statistics."
Google did a report based on real usage a few years ago, (2007) link follows:
http://labs.google.com/papers/disk_failures.pdf 
croc said:Or, to rephrase it slightly, one in 10^14 drives will have a single-bit error.
That's a totally wrong interpretation! It's like saying that if one person in 1000 is a criminal, only one out of every 1000 cities will have a criminal.
In reality, if one person in 1000 is a criminal, then if you talk to 1000 people there is a strong likelihood that you will have met a criminal. Again, it may not happen to me, but if the rate is correct then if 10 people each talk to 1000 people, the likelihood is that 10 criminals will have been talked to.
Edit: The Google study focused on drive failures, not unrecoverable data from a functioning drive. Those are two very different things. 
sminlal said:That's a totally wrong interpretation! It's like saying that if one person in 1000 is a criminal, only one out of every 1000 cities will have a criminal.
In reality, if one person in 1000 is a criminal, then if you talk to 1000 people there is a strong likelihood that you will have met a criminal. Again, it may not happen to me, but if the rate is correct then if 10 people each talk to 1000 people, the likelihood is that 10 criminals will have been talked to.
Edit: The Google study focused on drive failures, not unrecoverable data from a functioning drive. Those are two very different things.
If you are going to 'cherry pick' bits out of my original post, then you have a 100% chance of misunderstanding what I was getting at. Statistical analysis is NOT a science; it is an art form at best. Give me a bunch of stats, and I can make them dance to whatever tune I choose to choreograph them to.
As to the Google stats, they are based on long-term analysis of drive failures; in this you are correct. But Google is only concerned about drive failures, not recoverable errors. As you should be as well. So to make my original point a bit more clear: at some point in time EVERY drive will have a 100% chance of an unrecoverable error. Protect your precious data accordingly.
croc said:If you are going to 'cherry pick' bits out of my original post, then you have a 100% chance of misunderstanding what I was getting at.
I think where you went astray was at the start, when you said: "If a given drive has 10^14 chance of an error in one bit..."
I assume you actually meant that a drive has a "1 / 10^14 chance" of an error in one bit, but even that's not how drive manufacturers quote unrecoverable error rates. Those rates are quoted as (and this is taken directly from a WD spec sheet):
Non-recoverable read errors per bits read: < 1 in 10^15.
In other words, if you read 10^15 bits, WD expects you to encounter "no more than 1" read error. That's a big difference from what you started with. And it's actually good news, because it's 10X better than the 1 in 10^14 error rate we've been discussing.
Edit: To clarify the severity of a read error, remember that even though rates are quoted as "per x bits read", a read error doesn't just mean one bit is bad; it means the entire sector (512 bytes) is toast.
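Putting the spec-sheet rate and the sector-level severity together, here is a small sketch of the expected damage from one full-drive read. The 512-byte-per-error figure and the two rates come from the posts above; the independence assumption is mine:

```python
# Sketch: expected unrecoverable read errors per full read of a 2 TB drive,
# at the two quoted spec rates, with each error costing one 512-byte sector.
TB = 1e12
SECTOR = 512  # bytes lost per unrecoverable read error (one whole sector)

def expected_errors(capacity_bytes: float, errors_per_bit: float) -> float:
    return capacity_bytes * 8 * errors_per_bit

for label, rate in (("1 in 10^14", 1e-14), ("1 in 10^15", 1e-15)):
    e = expected_errors(2 * TB, rate)
    print(f"2 TB full read @ {label}: {e:.3f} expected errors, "
          f"~{e * SECTOR:.0f} bytes expected loss")
```

At 1 in 10^14 a 2 TB full read expects 0.16 errors; at the WD-quoted 1 in 10^15 that drops to 0.016, which is the 10X improvement discussed.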
sminlal said:I think where you went astray was at the start, when you said: "If a given drive has 10^14 chance of an error in one bit..."
I assume you actually meant that a drive has a "1 / 10^14 chance" of an error in one bit, but even that's not how drive manufacturers quote unrecoverable error rates. Those rates are quoted as (and this is taken directly from a WD spec sheet):
Non-recoverable read errors per bits read: < 1 in 10^15.
In other words, if you read 10^15 bits, WD expects you to encounter "no more than 1" read error. That's a big difference from what you started with. And it's actually good news, because it's 10X better than the 1 in 10^14 error rate we've been discussing.
We can continue to quibble over quabbles if you wish, or we can go into a discussion of semantics and how to properly do an ASCII expression of a mathematical formula.
But the bottom line is this: (and I repeat) "at some point in time EVERY drive will have a 100% chance of an unrecoverable error. Protect your precious data accordingly."
Fortunately, with the advent of onboard drive electronics (and more recently SMART diagnostics), the reliability of modern drives is much better than what we had twenty years ago. We now have the capability of predicting drive failure, as reported by the drive itself. Unfortunately, **** still happens.