

Nearline vs. Desktop: Deploying Value in the Data Center

While the spec sheets may seem to give a small advantage to the desktop drive, what vendors and reviewers often call “sustained transfer” may only be a few minutes or even seconds of benchmarking time, and the disk area under test is likely to be the platters’ fastest, outermost rings. In short, these tests don’t reflect real-world conditions, especially in a data center environment.

In a single-user setting, disk access is likely to be sporadic and, in the case of random data loads, brief. Even when working with large, sequential data, such as video playback, the drive only has to deal with one user pulling one data stream. This is not how computing happens in the data center. Enterprises inevitably have multiple users hammering on a drive at once, forcing the disk mechanics to work faster and harder to keep those users supplied with data in real time, with minimal latency. In other words, sequential data rate is only one performance element in an enterprise drive – and arguably the least significant.
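The effect described above can be sketched with a back-of-the-envelope model. The figures here are illustrative assumptions, not vendor data: a 200MB/s spec-sheet sequential rate, roughly 8ms of combined seek and rotational latency, and 64KB transfers per random request. Even under these generous assumptions, a workload that forces a seek before each transfer delivers only a small fraction of the headline number.

```python
# Back-of-the-envelope model (illustrative assumptions, not vendor data)
# of why multi-user random access erodes the "sustained transfer" figure.

SEQ_RATE_MBS = 200.0   # assumed spec-sheet sequential rate
SEEK_MS = 8.0          # assumed average seek + rotational latency
IO_SIZE_MB = 0.064     # 64 KB per request

def effective_throughput(random_fraction: float) -> float:
    """Effective MB/s when some fraction of requests require a seek."""
    transfer_ms = IO_SIZE_MB / SEQ_RATE_MBS * 1000.0  # time moving data
    avg_ms = transfer_ms + random_fraction * SEEK_MS  # plus seek overhead
    return IO_SIZE_MB / (avg_ms / 1000.0)

for frac in (0.0, 0.5, 1.0):
    print(f"{int(frac * 100):3d}% random: "
          f"{effective_throughput(frac):6.1f} MB/s")
```

Under this toy model, a purely sequential stream hits the full 200MB/s, while a fully random mix collapses to under 10MB/s. The real curve depends on queue depth, caching, and firmware, but the direction of the effect is the same.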

Moreover, enterprise systems are more likely to field a much higher volume of data transactions. The amount of workload on a drive has a distinct impact on its performance — another factor totally overlooked by spec sheet numbers. While the chart below is a bit long in the tooth (late 2008), it illustrates the sort of performance data that enterprise buyers should be investigating. The Storage Performance Council’s SPC-1C test applies an exhaustive read/write benchmark analysis across an entire drive for hours.

Source: http://www.storageperformance.org/spc1c_results/Seagate/C00001_Seagate-Barracuda-ES.2-ST31000640SS/c00001_Seagate_Barracuda-ES2-ST31000640SS_SPC1C_executive-summary.pdf

The chart shows that disk performance under scaling load levels is not linear. It would be doubly incorrect to infer that a drive specifying a 200MB/s sustained transfer rate on its spec sheet would offer that same performance in production. First, most spec sheet benchmarks are based on an optimized, sequential-only data stream. In the real world, nearly all workloads are some mixture of random and sequential data. Second, the increasingly standard way to view the amount of work a drive can perform is in terms of terabytes written per year. A desktop drive might be rated for 55TB/year while a similar-capacity enterprise drive would accommodate 550TB/year. This isn’t to say that a desktop drive couldn’t write more than 500TB in 12 months, but consumer-class products are not built to last under such workloads and would be much more prone to premature failure. Imagine a sedentary cubicle worker accustomed to walking one mile on any given day getting up and suddenly running a ten-mile race. He could conceivably do it, but he might drop dead after the finish line.
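The TB/year arithmetic is worth making concrete. The 55TB/year and 550TB/year ratings below are the article's own examples; the daily write rate is an assumed input chosen for illustration.

```python
# Projecting a steady daily write load against workload-rating examples.
# The 55 and 550 TB/yr ratings come from the text; the daily rate is
# an assumed figure for a modest multi-user server.

DESKTOP_RATING_TB_YR = 55.0
ENTERPRISE_RATING_TB_YR = 550.0

def annual_writes_tb(daily_writes_gb: float) -> float:
    """Project a steady daily write load to terabytes per year."""
    return daily_writes_gb * 365 / 1000.0

daily_gb = 1500.0  # assumed: ~1.5 TB written per day
yearly = annual_writes_tb(daily_gb)
print(f"{yearly:.1f} TB/year")
print("within desktop rating:   ", yearly <= DESKTOP_RATING_TB_YR)
print("within enterprise rating:", yearly <= ENTERPRISE_RATING_TB_YR)
```

At 1.5TB written per day, the drive sees about 547TB in a year: comfortably inside the enterprise rating, but roughly ten times what the desktop drive was built to endure.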