
Analysis: Which Enterprise Capacity Point Performs Best?

SSDs are best for performance. SATA hard drives provide maximum capacity. Enterprise-class SAS disks are the workhorses positioned between them. But which enterprise hard drive capacity makes sense when there are several from which to choose?

Different capacity points are based on different internal hardware configurations, and, as a result, drives within the same family typically deliver different performance and efficiency from one capacity point to the next. We wanted to know how the three popular capacities in the 2.5” enterprise hard drive space differ, and we used three of the latest Toshiba drives (300, 450, and 600 gigabyte capacities, each with a different head and platter configuration) to find answers.

People tend to think that flash SSDs, which are undoubtedly much faster and physically more robust than hard drives, have started to displace hard drives in enterprise storage applications. In reality, though, high-reliability environments such as banking or scientific computing cannot easily switch from magnetic to flash storage. Such a switch requires long-term reliability testing, meticulously predictable performance, and component validation tailored to application requirements. Validating a new hard drive for deployment into an existing system environment is much simpler than validating a new technology, which is why flash SSDs can't simply take over in critical servers yet. The good old hard drive will be around for years to come.

Hard drives are based on rotating magnetic platters. Think of an HDD as a complex turntable able to play a stack of LPs. Moving arms position the read/write heads like the pick-ups on old record players, with one head on the top and one on the bottom of each platter to take advantage of both disk surfaces. Platter rotation speed strongly influences access performance: the faster the platter spins, the less time a head waits for the desired sector to rotate underneath it. Throughput also rises with spindle speed, since more bits pass under the heads per second. And finally, stacking multiple platters increases total capacity.
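
To put rough numbers on the rotation-speed argument, here is a minimal back-of-the-envelope sketch in Python. It is not drawn from the test data; the spindle speeds are just common examples, and the calculation uses the textbook definition of average rotational latency as half a revolution.

    # Average rotational latency: on average, the desired sector is half a
    # revolution away from the head when a request arrives.
    def avg_rotational_latency_ms(rpm: int) -> float:
        seconds_per_revolution = 60.0 / rpm
        return seconds_per_revolution / 2 * 1000.0

    for rpm in (7_200, 10_000, 15_000):
        print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")

At 10,000 RPM this works out to 3.00 ms per access on average, versus 4.17 ms at 7,200 RPM; seek time and command overhead then add to that latency component.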

Modern hard drives typically consist of two or more rotating platters, but sometimes only part of the platter surfaces needs to be utilized to reach a desired capacity point. In a simple example, a 600GB 2.5” SAS hard drive provides 200GB capacity per platter. Creating a similar drive with 300GB requires two platters that aren't entirely used, and a 450GB drive has to be based on three platters, like the 600GB drive. Now it's interesting to look at performance. Does access time decrease on such a 450GB drive because there is less ground for the heads to travel? And what's the impact on throughput? In theory, it should be optimal to have the drive operate mainly on the outer sectors of a platter, where absolute linear speed is highest.
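
The following sketch spells out the arithmetic behind those configurations. The 200GB-per-platter figure comes from the example above; the inner and outer track radii are rough assumptions for a 2.5” form factor, not manufacturer specifications.

    import math

    PER_PLATTER_GB = 200   # from the 600GB / three-platter example above
    RPM = 10_000

    # Platters needed for each capacity point, and how much surface is used.
    for capacity_gb in (300, 450, 600):
        platters = math.ceil(capacity_gb / PER_PLATTER_GB)
        heads = 2 * platters          # one head per platter surface
        used = capacity_gb / (platters * PER_PLATTER_GB)
        print(f"{capacity_gb} GB: {platters} platters, {heads} heads, "
              f"{used:.0%} of the surface in use")

    # Linear speed under the head: v = 2 * pi * r * (RPM / 60).
    # Radii are assumed values for a 2.5" platter, not Toshiba specs.
    for label, radius_m in (("inner track", 0.015), ("outer track", 0.032)):
        print(f"{label}: {2 * math.pi * radius_m * (RPM / 60):.1f} m/s")

The 300GB and 450GB models end up using about 75 percent of their available surface, so if that capacity is mapped to the outer tracks, where the sketch shows roughly twice the linear speed of the inner tracks, both access time and sequential throughput could benefit. That is the question the comparison below is meant to answer.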

Decision makers might wonder how these capacity points and configurations differ in practice. We looked at the latest 2.5” 10,000 RPM enterprise hard drives from Toshiba and compared the three capacity points at 300, 450, and 600 gigabytes.

Comments (3)
This thread is closed for comments.
wuzy, June 23, 2010 1:22 AM
It's pretty obvious that more platters/heads within the same product family does not affect performance, apart from perhaps a small increase in access time. Overall, the differences are negligible.

But it's good to have additional solid data to back up the above statement, I guess, although this test proved to be somewhat meaningless.

Anonymous, June 23, 2010 6:52 AM
Weak article. How about some real test cases?

blarg_12, August 4, 2010 8:11 PM
The differences between the drives are so negligible that you are left with price and physical data density in your data center as the only two considerations.