
Fierce competition dominates the market for professional-grade Serial ATA RAID controllers. Shortly after manufacturers HighPoint and Promise became the first to launch their PCI-based products on the market, better-known names like Adaptec and LSI Logic followed suit. A year ago, RAIDCore and NetCell also debuted their products, and made a good impression from the word go.
All of these manufacturers concentrated primarily on tailoring their products to the professional market, focusing particularly on devices with a PCI-X interface. Now, Taiwanese manufacturer Areca hopes to go one better by supporting RAID 6.
RAID controllers are used most often in business settings, particularly for servers. The point of RAID is to increase the performance of the storage subsystem by using numerous hard drives simultaneously, and also to protect against data loss due to hard drive crashes. Even with regular backups, constant availability of storage systems is invaluable for business workflows, and this is what RAID provides.
A RAID Level 5 array is a common type, used in most normal business situations. In this arrangement, when data is written to the array, it is distributed to all drives but one. The controller generates a checksum (parity information) for the data set written, and writes the checksum to the final hard drive. This can be used to reconstruct the data if any one drive is lost. At the same time, performance is improved because data is being written to (or read from) many drives in parallel.
In RAID5, the drive chosen for the checksum changes for each data block written. Thus, it is an enhancement of RAID3, where a single dedicated drive is used for all checksums. RAID5 improves performance because in RAID3 the dedicated parity drive can create a bottleneck.
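The XOR parity principle behind RAID 3 and RAID 5 can be sketched in a few lines of Python. This is an illustrative model, not Areca's implementation; the block contents are arbitrary placeholders.

```python
from functools import reduce

def xor_parity(blocks):
    """XOR corresponding bytes of all blocks to produce the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# One stripe across a hypothetical 4-drive RAID 5 set: 3 data blocks + 1 parity.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# If any one drive is lost, XOR-ing the surviving blocks with the
# parity block reproduces the missing block exactly.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```

Because XOR is its own inverse, the same operation serves for both generating parity and reconstructing a lost block, which is why a single-parity array survives exactly one drive failure.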
But there are also cases in which higher reliability is needed than can be met by RAID 5. Areca addresses this by offering the option of setting up a RAID 6 array. RAID 6 is like RAID 5 but uses two drives for parity data, which means two drives can fail without data loss. Naturally, this requires another hard drive to be put in the array. We took a close look at how well this RAID level functions, and how well it performs.
Today, storage systems based on RAID Level 5 arrays are indispensable. But they are not perfect - they can only handle the failure of a single hard drive. Although that doesn't seem problematic at first glance, in a worst-case scenario it can mean data loss. If a hard drive fails, it must be replaced as quickly as possible. Ideally, a reserve drive - a hot spare - should be ready. If one is not available, it is up to the administrators to react quickly. If no one is available on the weekend to swap the defective drive and restore the RAID 5 array, the array remains as vulnerable as a RAID 0 for a potentially significant length of time. Any further drive failure will invariably lead to total data loss - and restoring data at a data rescue company like CBL or Ontrack is very expensive.
Even when the defective drive is replaced, there is still a risk, because the RAID controller must reconstruct the missing data blocks from the parity information and write them to the replacement drive. This process is called rebuilding, and it again puts load on all of the array's drives. In practice, the problem with RAID 5 is that after an array has been in use for a certain length of time, e.g. a couple of years, several hard drives will often fail almost simultaneously or one after another in quick succession.
RAID Level 6 Array In Detail

To prevent the nightmare of catastrophic data loss, a second set of parity information is recommended. And this is exactly what a RAID 6 array provides: in addition to the stripes of parity information already created, the controller generates a second, independent parity set. This can be done with Reed-Solomon codes, which are commonly used for forward error correction in digital data transmission. However, this requires additional hardware. Areca takes a simpler route and creates the second parity set using XOR calculation, even though this requires a module of its own design.
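Areca's exact second-parity method is not documented here, so as an illustration, here is a minimal sketch of the textbook dual-parity (P+Q) scheme the article alludes to: P is plain XOR, Q is a Reed-Solomon syndrome over GF(2^8). All names and the choice of field polynomial are ours, not the controller's firmware.

```python
# GF(2^8) arithmetic with the polynomial 0x11B; any primitive
# polynomial would do for this sketch.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^(2^8 - 2) = a^-1 in GF(2^8)

def pq_parity(blocks):
    """P = XOR of all data blocks; Q = Reed-Solomon syndrome sum(g^i * D_i)."""
    n = len(blocks[0])
    p, q = bytearray(n), bytearray(n)
    for i, blk in enumerate(blocks):
        coeff = gf_pow(2, i)            # generator g = 2
        for j, byte in enumerate(blk):
            p[j] ^= byte
            q[j] ^= gf_mul(coeff, byte)
    return bytes(p), bytes(q)

def recover_two(blocks, p, q, x, y):
    """Recover data blocks x and y (both lost) from the survivors plus P and Q."""
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    dx, dy = bytearray(len(p)), bytearray(len(p))
    for j in range(len(p)):
        # Fold the surviving data blocks out of P and Q.
        pxy, qxy = p[j], q[j]
        for i, blk in enumerate(blocks):
            if i in (x, y):
                continue
            pxy ^= blk[j]
            qxy ^= gf_mul(gf_pow(2, i), blk[j])
        # Solve the 2x2 system:  Dx ^ Dy = pxy,  g^x*Dx ^ g^y*Dy = qxy
        dx[j] = gf_mul(gf_inv(gx ^ gy), gf_mul(gy, pxy) ^ qxy)
        dy[j] = pxy ^ dx[j]
    return bytes(dx), bytes(dy)
```

Because P and Q are linearly independent equations over the data blocks, any two unknowns can be solved for, which is exactly why a dual-parity array tolerates two simultaneous drive failures.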

The controller provided to us is based on a 133 MHz PCI-X interface, but is now also available in an x8 PCI Express version. Our test card offers a total of eight Serial ATA II ports. For an XOR unit, Areca relies on Intel's i80332 chip.
The ARC-1120 is a low-profile card with eight ports. Areca also offers 12 and 16 channel versions, the ARC-1130 and ARC-1160. These models have a higher profile, but offer an SO DIMM socket that can be used to expand the integrated 128 MB DDR333 cache almost as much as you like. Both models offer a port for the battery backup module.
With the Areca, a RAID array always consists of a RAID set and a volume set. The first is created when the desired drives are incorporated into a physical setup. On top of this the volume set is created, where the desired RAID level is chosen and the array is generated. However, one volume need not make use of all of the available storage; this allows several volume sets to be created per RAID set.
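The two-level RAID-set/volume-set scheme can be modeled in a short sketch. The class and method names here are our invention for illustration, not Areca's management interface; the point is simply that one physical RAID set can host several independent volume sets.

```python
# Hypothetical model of Areca's two-level scheme: a RAID set groups
# physical drives; volume sets carve logical arrays out of its capacity.
class RaidSet:
    def __init__(self, drives_gb):
        self.capacity = sum(drives_gb)   # raw capacity of the physical set
        self.allocated = 0
        self.volumes = []

    def create_volume(self, name, size_gb, level):
        """Create a volume set with its own RAID level on the shared drives."""
        if self.allocated + size_gb > self.capacity:
            raise ValueError("not enough free capacity in the RAID set")
        self.allocated += size_gb
        self.volumes.append((name, size_gb, level))

rs = RaidSet([74] * 8)                   # eight 74 GB Raptors -> 592 GB raw
rs.create_volume("vol0", 300, "RAID6")
rs.create_volume("vol1", 200, "RAID5")   # several volume sets per RAID set
```

A volume need not consume the whole set, so the remaining capacity stays available for further volume sets, as described above.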




| Areca ARC-1120 | |
|---|---|
| Bus | PCI-X 133 MHz |
| XOR Engine | Intel 80332 |
| SATA Ports | 8x SATA II |
| Cache | 128 MB DDR333, ECC |
| RAID Levels | 0, 1, 0+1, 3, 5, 6, JBOD |
| Features | Online RAID Roaming, Online RAID Level Migration, Online Stripe Size Migration, 64-bit LBA, Redundant Flash Image, Instant Availability / Background Init., Browser-based RAID Management Software, Multi-Adapter Support, optional Battery Backup Module, Email Notification |

The SATA II ports already offer locks.
The ARC-1120 In Practice
Rebuild
A RAID 6 rebuild process for two failed Western Digital WD740 Raptor-type hard drives took 25 minutes. Naturally, for the purpose of comparison, we also created a RAID 5 array, removed one drive and started the restoration process here too: at 21 minutes, the time needed was barely shorter. This puts the Areca controller on a par with the 9500-12S from 3Ware, which, given the same starting situation, also needed 21 minutes.
RAID Level Migration
RAID 6 dedicates the capacity of two drives in the array to checksums, so it costs more storage than RAID 5. If space on the array runs out, a short-term solution is to convert the RAID 6 array to RAID 5. This adds back the capacity of one drive, at the cost of reducing failure tolerance from two drives to just one. To switch from RAID 6 to RAID 5, the ARC-1120 needed just under 54 minutes - not bad, considering the size of the RAID array (8 fast hard drives at 74 GB each).
If you start out by choosing the fast but zero-security RAID 0, you will be left out in the cold if you want to migrate. The reason is that introducing a RAID mode that works with parity data reduces the usable storage capacity of the array. If the RAID array were filled up with data, no more space would be available for the parity information. Switching from RAID 5 to RAID 6 is not easily done, for the same reason.
In contrast, it is easier to migrate from an existing RAID 5 array to RAID 3 or even to RAID 0, because the available capacity either stays the same or increases. If a migration frees up additional capacity, it can be added to the RAID set after the migration process is finished.
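The capacity arithmetic behind these migration rules is simple: each parity set costs the capacity of one drive. A small sketch, using a simplified model (it ignores metadata overhead and applies to the test array of eight 74 GB drives):

```python
def usable_capacity(n_drives, drive_gb, level):
    """Usable capacity in GB for common parity-based RAID levels."""
    parity_drives = {"RAID0": 0, "RAID3": 1, "RAID5": 1, "RAID6": 2}[level]
    return (n_drives - parity_drives) * drive_gb

# The test array: 8x 74 GB Raptors.
assert usable_capacity(8, 74, "RAID0") == 592   # no parity
assert usable_capacity(8, 74, "RAID5") == 518   # one drive for parity
assert usable_capacity(8, 74, "RAID6") == 444   # two drives for parity
```

This is why a full RAID 0 array cannot be migrated upward: converting 592 GB of data to RAID 5 would leave only 518 GB of usable space, so there is simply no room for the parity information.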

The 9000 series from 3Ware stands out mostly because of its consistently good performance; it came in just below the even more flexible BC4000 from Broadcom/RAIDCore in our test comparison. 3Ware relies on its own XOR unit, 128 MB of cache memory, an optional battery backup unit and numerous software features. The overall result is a product that comes in only slightly behind more expensive SCSI models, if at all.

We received the first RAIDCore controller about a year ago. Even then, it made a splash with functionality as yet unseen. Today, RAIDCore belongs to Broadcom and offers a number of PCI-X devices that are all software-based. They do not have their own caches; however, up to four 8-channel controllers can be used simultaneously, and arrays can be generated as desired.
Comparison Table
| | 3Ware 9500 | Areca ARC-1120 | Broadcom/Raidcore BC4000 |
|---|---|---|---|
| Interface | 64 Bit PCI 2.2 | PCI-X 133 MHz | PCI-X 133 MHz |
| Architecture | Hardware-based | Hardware-based | Software-based |
| XOR-Unit | 3Ware | i80332 | From System Processor |
| Cache | 128 MB ECC | 128 MB ECC | None |
| Ports | 4, 8, 12x SATA | 8, 12, 16x SATA II | 4, 8x SATA |
| Format | Full height | Low profile (8 ports), full height (12, 16 ports) | Half height |
| RAID-Modes | 0, 1, 1+0, 5, 50, JBOD | 0, 1, 0+1, 3, 5, 6, JBOD | 0, 1, 1+N, 10, 10+N, 5, 50, JBOD |
| RAID Level Migration | Downgrading only | Downgrading only | Yes |
| Battery Backup | Optional | Optional | Optional |
| Background Init | Yes | Yes | Yes |
| Multi-Adapter | Yes | Yes | Yes, including adapter spanning |
| Online Capacity Expansion | Yes | Yes | Yes |
| Multiple RAID Arrays | Yes | Yes | Yes |
| Drive Roaming | No | No | Yes |
| Spare Drive | Dedicated | Dedicated | Dedicated/Global/Distributed |
| Time Delayed drive starts | Yes | No | Yes |
| Website | www.3ware.com | www.areca.com.tw | www.raidcore.com |
| Processor(s) | |
|---|---|
| Socket 604 | Dual Intel Pentium 4 Xeon, 2.8 GHz, 512 kB Cache, FSB533 |
| System Components | |
| DDR-SDRAM | 2x 512 MB PC3200 Samsung, ECC, Registered |
| Motherboard | Asus PP-DLW, Rev. 1.03, Intel E7505 Chipset |
| Graphics Card | Matrox Millennium G450 AGP, 32 MB |
| Hard Drives | System Drive: Western Digital WD800JB; Test Drives: RAID 5 array consisting of 8x Western Digital WD740 Raptor, 74 GB, 10,000 rpm, 8 MB Cache |
| Controller I | Areca ARC-1120, 8-Port, 128 MB ECC Cache |
| Controller II | 3Ware 9500-12S, 12-Port, 128 MB ECC Cache |
| Software | |
| Intel Chipset | Intel Chipset Installation Utility 5.1.1.1002, Intel Application Accelerator RAID Edition Ver. 3.53 |
| DirectX | 9.0b |
| OS | Windows XP Professional Build 2600, Service Pack 1 |
| Benchmarks & Settings | |
| Transfer-Performance Benchmark | c't h2benchw Ver. 3.6 |
| Transfer Diagram | WinBench 99 2.0, Disk Inspection Test |
| I/O Performance | IOMeter 2003.05.10: Fileserver, Webserver, Database, Workstation and Throughput Benchmark Patterns |
| Application Performance | WinBench 99 2.0, Disk WinMarks, Disk Inspection |
Test Drives: Western Digital WD740 Raptor










In the I/O-sensitive benchmarks, Areca came up short in direct comparison with 3Ware. We believe this is largely because 3Ware's StorSwitch architecture works more efficiently than the Intel RISC chips used by many other controller makers.
While the transfer rates of the ARC-1120 are high and beat those of the 3Ware at nearly all block sizes, our IOMeter suite reveals serious differences that count against the Taiwanese manufacturer. It is not surprising that high I/O performance cannot be counted on while simulating one or even two defective drives. But in places the 9500 from 3Ware delivers up to double the performance in these situations - not a good showing for Areca when high-availability applications are a priority.
We must stress that this controller series was designed particularly for maximum data security, and in our experience this is not feasible without cutting performance.
The Areca controller is certainly a good choice when the priority is securing large amounts of data against the possibility of up to two hard drives crashing, while keeping access times low. This is especially true since its functions and features are competitive with other controllers that do not offer RAID 6.