A story appearing online forecasts the doom of RAID 5 in 2009. With the storage capacity of modern SATA hard drives now reaching 2 terabytes, the odds of hitting a read error during a RAID 5 disk reconstruction are becoming difficult to avoid.
According to ZDNet, SATA drives are often rated with an unrecoverable read error (URE) rate of one per 10^14 bits, meaning a drive will, on average, fail to read a sector once every 100,000,000,000,000 bits read. With hard drive capacities expected to reach two terabytes in 2009, a read error becomes practically unavoidable when recovering from a disk failure in a seven-drive RAID 5 array. When such a read error is encountered during reconstruction, it is claimed that the array volume will be declared unreadable and the recovery process halted. All 12 terabytes of data stored on the drives would apparently be lost... or at least would require some extra effort and knowledge to recover.
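To make the arithmetic concrete, here is a rough back-of-envelope model (my own sketch, not taken from the ZDNet piece): treat every bit read as an independent trial with a 1-in-10^14 chance of an unrecoverable error, and compute the chance of at least one error while reading the six surviving drives of a seven-drive array.

```python
import math

# Naive URE model: each bit read fails independently with probability
# 1/1e14. Real error behavior is burstier than this, so treat the
# numbers as illustrative only.
def rebuild_failure_probability(drive_tb: float, drives_read: int,
                                ure_bits: float = 1e14) -> float:
    """Chance of at least one unrecoverable read error while reading
    `drives_read` full drives of `drive_tb` terabytes each."""
    bits_read = drives_read * drive_tb * 1e12 * 8  # terabytes -> bits
    # log1p avoids the precision loss of computing (1 - 1e-14) directly
    return 1 - math.exp(bits_read * math.log1p(-1 / ure_bits))

# A seven-drive RAID 5 rebuild must read the six surviving 2 TB drives:
print(f"{rebuild_failure_probability(2.0, 6):.0%}")  # roughly 62%
```

Under this simple model the rebuild fails well over half the time, which is close enough to "practically unavoidable" for most administrators.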
RAID 5 is described as a striped set with distributed parity, which protects against a single disk failure. When a drive in a RAID 5 set fails, it can be replaced, the missing data can be rebuilt from the distributed parity, and the array can eventually be restored. If more than one drive fails, however, the array suffers data loss. For some, this makes the reconstruction after a single drive failure a stressful event, since the array remains vulnerable to further drive failures for the duration.
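The parity math itself is just XOR. As a minimal illustration (hypothetical block values, and ignoring how real RAID 5 rotates parity across drives and works at the sector level), any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity block:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a sequence of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks on three drives
parity = xor_blocks(stripe)           # parity block on a fourth drive

# Drive holding stripe[1] dies: rebuild its block from the survivors.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```

This is also why an unreadable sector during a rebuild is fatal: with one drive already gone, XOR has nothing left to reconstruct from.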
While switching to RAID 6, which tolerates two drive failures instead of just one, may seem like a solution, the added redundancy may not be cost-effective. And as hard drive capacities continue to grow exponentially year after year, even RAID 6 may soon run into the same problem. Once single disk drives reach 12 terabytes, even a direct drive-to-drive copy may routinely encounter these read errors. Using drives with smaller capacities and better unrecoverable read rates could be one way to avoid these headaches.
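For perspective, the same naive model sketched above applies here too: an end-to-end copy of a single hypothetical 12 TB drive reads about as many bits as the seven-drive rebuild did, so it runs a comparable risk.

```python
# Reusing rebuild_failure_probability() from the sketch above.
# One 12 TB drive read end-to-end is ~0.96e14 bits -- the same
# exposure as reading six 2 TB drives during a rebuild.
print(f"{rebuild_failure_probability(12.0, 1):.0%}")  # roughly 62%
```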
The problem stems from the increasingly tight data density packed onto drive platters. With traditional recording methods, the magnetic field of one bit can leak into adjacent bits, flipping an otherwise healthy bit. Manufacturers have switched to perpendicular recording to avoid such problems and increase density, but even this method has its physical limits. They will have to find more creative solutions down the road if drives are going to exceed 2TB in size.