Need more capacity? Want more hard drive performance? Knowing that hard drive prices are about to drop below $80 for a 1 TB drive, we decided to create the ultimate RAID array, one that should be able to store all of your data for years to come while providing much faster performance than any individual drive could. Twelve Samsung 1 TB hard drives helped us reach speed records and an impressive 10 TB net capacity.
Some of you may want to argue over this performance statement. After all, doesn’t everyone know that hard drives don’t stand a chance against solid state drives (SSDs)? It’s true: more and more high-end SSDs can now exceed 200 MB/s read and 100 MB/s write throughput with virtually zero access time. However, lofty SSD costs remain an issue, which is where good old hard drives come in.
While hard drives can’t match an SSD’s quick access times, higher throughput can be achieved by using more than one drive in a striping RAID mode—and throughput is still the top characteristic people care about on their desktop systems. In addition, hard drive capacities exceed SSD capacities by many times over and also beat SSDs in terms of cost per gigabyte. For example, $1,000 won’t buy you more than 1 TB in SSD capacity, and even to get close requires taking a step or two down in performance. Meanwhile, with hard drives, we had 12 x 1 TB at our disposal. The only reason we didn’t use larger hard drives was constrained availability in quantities of ten or more.
The Idea: Massive Hard Drive Storage Within a $1,000 Budget
The prospect of using up to 12 3.5” hard drives in RAID certainly isn’t very practical for desktop PCs. Twelve drives require a lot of space and a suitable SATA RAID controller, and they produce a noticeable amount of heat, noise, and vibration as well. Still…it’s cool, and we’ll soon see what a massive RAID array using conventional hard drives can actually do.
We used twelve of Samsung’s first-generation terabyte hard drives, the Spinpoint F1 HD103UJ. Although the product is more than a year old, it still holds its own against some of its newer competition, including the Hitachi Deskstar 7K1000.B, Seagate Barracuda 7200.12, and WD’s Caviar Black. The F1’s 115 MB/s maximum read throughput continues to impress, and Samsung’s data density is so high that it can cram a full terabyte into only three platters. The drives spin at 7,200 RPM, use a SATA/300 interface, and come with 32 MB of buffer memory. Part of our decision to use the Samsung F1 drives was based on availability. Some of our units were spares from our Overdrive Overclocking Championship. Finding ten or more new drives from scratch would have been more difficult.
Samsung is about to release the high-performance Spinpoint F2. While F2 EcoGreen drives have been available at up to 1.5 TB for some months, the new F2 will spin at 7,200 RPM and reach up to 2 TB in the second half of the year. Hitachi and Seagate will likely follow as soon as it makes sense, as the top capacities aren’t sold in large quantities and hence represent only a small fraction of the market.
Other Drive Options?
The 1.0 TB capacity point isn’t particularly exciting anymore, but it is close to providing the highest capacity per dollar. In addition, high-performance 7,200 RPM drives still deliver higher throughput than the lower-power 1.5 TB hard drives by Samsung, Seagate, or WD. Using 2.0 TB hard drives would double the gross capacity of our array from 12 to an amazing 24 TB, but it would also more than double the cost of the drives. You can get a 1.0 TB drive starting at approximately $85, while a 2.0 TB drive still costs almost three times as much.
We wanted to build an array with at least a 10 TB capacity, and with 12 drives, we were able to reach a total gross capacity of 12 TB. We decided to run both RAID 0 for maximum performance and RAID 5 to balance performance with data protection. While a RAID 0 configuration distributes data evenly across all drives using so-called stripe sets, RAID 5 adds parity information on one of the drives. Parity is also distributed across the drives to avoid one drive becoming a parity bottleneck.
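To make the difference concrete, here is a minimal sketch of how stripe units map onto drives in each mode. This is an illustration of the general technique, not the Areca firmware’s actual algorithm; the rotation convention shown is just one common RAID 5 layout.

```python
# Illustrative sketch of block placement in a 12-drive array.
# Not the controller's actual firmware logic.

DRIVES = 12

def raid0_drive(block: int) -> int:
    """RAID 0: stripe units simply round-robin across all drives."""
    return block % DRIVES

def raid5_layout(stripe: int):
    """RAID 5: each stripe has one parity unit, and the drive holding
    parity rotates from stripe to stripe so that no single disk
    becomes a parity bottleneck. Returns (parity_drive, data_drives)."""
    parity = (DRIVES - 1) - (stripe % DRIVES)
    data = [d for d in range(DRIVES) if d != parity]
    return parity, data

# The parity drive rotates stripe by stripe:
for s in range(3):
    p, _ = raid5_layout(s)
    print(f"stripe {s}: parity on drive {p}")
```

Because parity rotates, every drive in a RAID 5 array serves both data and parity over time, which is why the capacity cost is exactly one drive regardless of array size.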
RAID 0

In RAID 0, the total capacity equals the capacity of one Spinpoint F1 drive times the number of drives. Each drive has a net capacity of 1,000 GB if one gigabyte equals one billion bytes, or 931.32 GB if a gigabyte is defined as 1,024³ bytes, which is how Windows reports storage capacity. Twelve times the latter figure results in 11,175.87 GB.
RAID 5

RAID 5 requires at least three hard drives, and it provides the total capacity of all array member drives minus one drive. This type of array will maintain data integrity in case one drive fails. If you want an array to remain operational with two failed drives, then you need to run RAID 6. For our test array, the total RAID 5 capacity was 10,244.54 GB.
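Both capacity figures above follow directly from the decimal-versus-binary unit conversion. A quick back-of-the-envelope check:

```python
# Reproducing the array capacity figures: drives are sold in decimal
# units (1 TB = 10**12 bytes), while Windows counts in binary gigabytes
# (1 GB = 2**30 bytes).

DECIMAL_TB = 10**12          # one "1 TB" drive as sold
BINARY_GB = 2**30            # one gigabyte as Windows reports it

per_drive_gb = DECIMAL_TB / BINARY_GB   # ≈ 931.32 GB per drive
raid0_gb = 12 * per_drive_gb            # RAID 0: all 12 drives usable
raid5_gb = 11 * per_drive_gb            # RAID 5: one drive's worth lost to parity

print(f"per drive: {per_drive_gb:,.2f} GB")   # 931.32 GB
print(f"RAID 0:    {raid0_gb:,.2f} GB")       # 11,175.87 GB
print(f"RAID 5:    {raid5_gb:,.2f} GB")       # ≈ 10,244.55 GB
```

The tiny discrepancy between the computed RAID 5 figure and the 10,244.54 GB we quote comes down to rounding in the reported value.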
Controller: Areca ARC-1680iX-20
We chose a 20-port combined SAS/SATA controller from Areca, the ARC-1680iX-20. The full-sized card is based on a x8 PCI Express interface and includes an Intel IOP348 processor at 1,200 MHz, which provides a good basis for serious XOR acceleration and RAID 5 performance. The card comes with a DDR2 DIMM socket and can hold anything between 512 MB and 4 GB; we used the default 512 MB module. Be sure to purchase ECC memory if you decide to install a larger cache capacity. Areca is among the few controller vendors to support RAID 6.
This card comes with a network port, which serves exclusively as an out-of-band management interface. Hence it’s possible to configure the card via the built-in Web server independently of the host PC’s operating system. The 20 SAS ports are available through multi-lane connectors (4 internal, 1 external), which is why we used 4-to-1 SAS fanout cables to attach the Samsung drives. As you might recall, SAS is fully SATA-compatible thanks to STP, the SATA Tunneling Protocol.
| System Hardware | |
|---|---|
| Hardware | Details |
| CPU | Intel Core i7-920 (45 nm, 2.66 GHz, 8 MB L3 Cache) |
| Motherboard (Socket 1366) | Supermicro X8SAX (Revision 1.0), Intel X58 + ICH10R chipset, BIOS 1.0B |
| RAM | 2 GB DDR3-1333 Corsair CM3X1024-1333C9DHX |
| System HDD | Seagate NL35 400 GB ST3400832NS 7,200 RPM, SATA/150, 8 MB |
| Power Supply | OCZ EliteXstream 800W OCZ800EXS-EU |
| Benchmarks | |
| Performance Measurements | h2benchw 3.12, PCMark Vantage 1.0 |
| I/O Performance | IOMeter 2006.07.27: File Server, Web Server, Workstation, and Database benchmarks; Streaming Reads; Streaming Writes |
| System Software and Drivers | |
| Drivers | Details |
| Operating System | Windows Vista Ultimate SP1 |
| Intel Chipset | 9.1.0.1007 |
| AMD Graphics | Radeon 8.12 |
| Intel Storage Drivers | Matrix Storage Drivers 8.7.0.1007 |
Access Time

Both RAID arrays shorten the average access time by a significant amount. While an individual Samsung Spinpoint F1 1 TB drive averages a 13.8 ms access time, the RAID 0 and RAID 5 arrays drop access times to 10.1 ms and 10.4 ms, respectively.
Since none of the standard benchmark tools allow serious benchmarking on partitions larger than 2 TB, we had to move to HD Tach and IOMeter to get decent results. HD Tach could only be used as long as the array carried no partition table (GPT or MBR).

As mentioned above, 115 MB/s is the maximum read throughput for an individual F1 drive. With 12 drives tethered to the Areca ARC-1680iX card, our RAID 0 config returned almost 1 GB/s on average. Even in RAID 5, we saw 910 MB/s average throughput!

Write throughput is a bit slower, but we still observed 800 MB/s or more. Keep in mind that these are the average results. Peak numbers are higher, minimum transfer rates somewhat slower.
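A rough scaling check puts these averages in perspective. The figures below are round numbers assumed from our own results (115 MB/s is the single-drive peak, so scaling efficiency against average single-drive throughput would look even better):

```python
# Back-of-the-envelope RAID 0 scaling check, using approximate
# round figures from our measurements.

single_drive = 115      # MB/s, maximum read of one Spinpoint F1
drives = 12
observed_raid0 = 1000   # MB/s, roughly our measured average read

ideal = single_drive * drives          # perfect linear scaling
efficiency = observed_raid0 / ideal    # fraction of the ideal achieved

print(f"ideal: {ideal} MB/s")          # 1380 MB/s
print(f"efficiency: {efficiency:.0%}")
```

Roughly 70% of ideal scaling across twelve spindles is a respectable result for a single-card setup.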
Transfer Diagram RAID 0
In RAID 0, the 12-drive array sustains read performance of almost 900 MB/s across most of its capacity, with a few sharp dips. Write throughput is more consistent at 600 to 700 MB/s.
Transfer Diagram RAID 5
RAID 5 performance is slightly lower, as it drops below 600 MB/s once most of the 11 TB capacity is filled.

Since XOR calculation for data parity imposes a significant performance penalty, I/O performance is far superior in RAID 0, which has no parity. Still, any decent SSD, such as Intel’s X25-E (enterprise) or X25-M (consumer), can beat even our 12-drive RAID’s results on IOPS.

File server performance involves larger blocks, making the difference between the individual drive and the arrays less significant but still large.

The Web server profile is entirely based on read operations, and it only requests the kind of small files commonly found on HTML pages. Using 12 drives, we achieved greater than four times the performance of a single drive. However, a fast SSD still scores up to ten times faster.

Most people probably don’t want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don’t consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you’d get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble 12 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca’s ARC-1680iX-20.
These are our findings: The 12 hard drives…
- still cannot reach the I/O performance and access time of a single Intel X25-E flash SSD (thousands of I/O operations per second)
- require careful system configuration (staggered spin-up)
- require a powerful RAID controller with sufficient ports
- aren’t convenient for desktop users
- are still subject to issues when using 2+ TB partitions
- deliver 6 to 8 times more throughput than an individual drive: almost 1,000 MB/s
- deliver 3 to 7 times better I/O performance than an individual drive
- result in 11 TB net capacity in RAID 0 or 10 TB in RAID 5
- deliver excellent cost per gigabyte, especially with the 1 TB Samsung drives we used (2 TB drives are still too expensive)
- still beat a flash SSD array in terms of throughput even if you keep two or three hard disks as spares