Four SAS 6 Gb/s RAID Controllers, Benchmarked And Reviewed

Adaptec RAID 6805

Chip manufacturer PMC-Sierra introduced its "Adaptec by PMC" Series 6 RAID controller family in late 2010. The Series 6 controller cards are based on the dual-core SRC 8x6G RAID-on-Chip (ROC) controller, which supports 512 MB of cache and up to 6 Gb/s per SAS port. There are three low-profile models available: the Adaptec RAID 6405 (four internal ports, about $320), the Adaptec RAID 6445 (four internal and four external ports, about $475), and our test subject, the $460 Adaptec RAID 6805, which features eight internal ports.

All models support JBOD as well as RAID levels 0, 1, 1E, 5, 5EE, 6, 10, 50, and 60.
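
For a quick sense of what each of those levels costs in usable space, the arithmetic for the most common ones is straightforward. The helper below is our own illustration (it is not part of Adaptec's management tools) and covers only the levels whose capacity math is unambiguous:

# Usable capacity for the most common RAID levels listed above.
# This is our own back-of-the-envelope helper, not anything Adaptec ships.

def usable_capacity_tb(level, drives, drive_size_tb):
    """Usable capacity in TB for an array of identical drives."""
    if level == "0":                    # striping, no redundancy
        return drives * drive_size_tb
    if level == "1":                    # two-drive mirror
        return drive_size_tb
    if level in ("1E", "10"):           # mirrored striping: half the raw space
        return drives * drive_size_tb / 2
    if level == "5":                    # one drive's worth of parity
        return (drives - 1) * drive_size_tb
    if level == "6":                    # two drives' worth of parity
        return (drives - 2) * drive_size_tb
    raise ValueError(f"level not covered here: RAID {level}")

for lvl in ("0", "5", "6", "10"):
    print(f"RAID {lvl}: {usable_capacity_tb(lvl, 8, 1.0):.0f} TB usable from eight 1 TB drives")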

Connected to the host system with its x8 PCI Express 2.0 interface, the Adaptec RAID 6805 supports up to 256 devices via SAS expanders. According to manufacturer specs, the sustained data transfer rate to the host computer can reach up to 2 GB/s, while the peak performance can reach 4.8 GB/s on the aggregated SAS ports and 4.0 GB/s on the PCI Express interface – the latter is the maximum theoretical transfer rate of a PCI Express 2.0 x8 bus.
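
Those two ceilings are easy to sanity-check: SAS 6 Gb/s and PCI Express 2.0 both use 8b/10b encoding, so every byte of payload takes ten bits on the wire. A quick back-of-the-envelope check, using nothing beyond the port counts and line rates above:

# Sanity check of the interface limits quoted above. Both SAS 6 Gb/s and
# PCIe 2.0 use 8b/10b encoding, i.e., ten line bits per payload byte.

def usable_gb_per_s(links, line_rate_gbps):
    """Aggregate usable bandwidth in GB/s for 8b/10b-encoded links."""
    return links * line_rate_gbps / 10  # ten bits on the wire per byte

sas_total = usable_gb_per_s(8, 6.0)    # eight SAS ports at 6 Gb/s each
pcie_total = usable_gb_per_s(8, 5.0)   # eight PCIe 2.0 lanes at 5 GT/s each

print(f"Aggregated SAS ports: {sas_total:.1f} GB/s")   # 4.8 GB/s
print(f"PCIe 2.0 x8 link:     {pcie_total:.1f} GB/s")  # 4.0 GB/s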

Maintenance-Free ZMCP

Our test sample came with Adaptec's Flash Module 600, which implements Zero Maintenance Cache Protection (ZMCP) and renders obsolete the traditional Battery Backup Unit (BBU). The ZMCP module is a circuit board with a 4 GB NAND flash chip, used to back up the contents of the controller's cache in the event of a power failure.

Because the copy operation from cache to flash is very fast, Adaptec is able to use a capacitor to keep power flowing, rather than a battery. The capacitor's advantage is that it should last as long as the card, whereas a backup battery has to be replaced every couple of years. Additionally, once cached in flash memory, saved data lasts for years if need be. In comparison, you generally get about three days of data retention from a battery backup unit before the cached information is lost, forcing you to move fast on recovery. As the ZMCP's name suggests, it's a zero-maintenance solution, able to withstand extended power loss.
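
To see why a capacitor is sufficient, consider how long the flush actually has to run. The numbers below are our own illustrative assumptions (Adaptec doesn't publish the flash module's write speed or the capacitor's hold-up time); only the 512 MB cache size comes from the card's specs:

# Rough illustration of why a capacitor can replace a battery for ZMCP.
# Only the cache size is from the spec sheet; the NAND write speed and
# capacitor hold-up time are assumed round numbers, not Adaptec figures.

cache_mb = 512                  # controller cache that must be protected
assumed_nand_write_mbs = 100.0  # assumed sustained write speed to the flash module
assumed_holdup_s = 60.0         # assumed time the capacitor can power the card

flush_time_s = cache_mb / assumed_nand_write_mbs
print(f"Copying the cache to flash takes roughly {flush_time_s:.0f} seconds")
print("Capacitor outlasts the copy:", assumed_holdup_s > flush_time_s)

# A battery backup unit, by contrast, has to keep volatile DRAM powered until
# the server comes back, which is why its retention is measured in days.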

Performance

The Adaptec RAID 6805 in RAID 0 mode falls short of its competitors in our streaming read/write tests. Then again, RAID 0 probably isn't a typical use case for a business looking at data protection (though it could be for a workstation user rendering video). Sequential reads clock in at 640 MB/s, and sequential writes come in slightly higher at 680 MB/s. In those two metrics, LSI's MegaRAID 9265-8i tops the charts. Adaptec's RAID 6805 performs better in the RAID 5, 6, and 10 tests, but isn't really a first-place finisher. In an SSD-only setup, the Adaptec controller reaches up to 530 MB/s, but is outperformed by the Areca and LSI controllers.

Adaptec's card automatically recognizes what the company calls a Hybrid RAID configuration, an array that mixes hard drives and SSDs. It offers RAID levels 1 and 10 in that configuration and claims to outperform the competition through special read/write algorithms: read operations are directed exclusively to the SSDs, while write operations naturally have to be passed on to both the hard drives and the SSDs. Thus, read performance should come close to an SSD-only setup, while write performance should be no worse than an all-disk-based setup.
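
Adaptec doesn't disclose the firmware internals, but the routing it describes boils down to something like the following sketch of a two-member Hybrid RAID 1 mirror (the class and method names here are ours, purely for illustration):

# Minimal sketch of the read/write routing Adaptec describes for a
# Hybrid RAID 1 mirror (one SSD plus one hard drive). The names are
# illustrative; they are not Adaptec's firmware interfaces.

class Device:
    """Stand-in block device: a dict mapping LBA -> data block."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.blocks[lba] = data


class HybridMirror:
    def __init__(self, ssd, hdd):
        self.ssd = ssd   # fast mirror member
        self.hdd = hdd   # capacity mirror member

    def read(self, lba):
        # Reads are served exclusively by the SSD, so read performance
        # should approach that of an SSD-only array.
        return self.ssd.read(lba)

    def write(self, lba, data):
        # Writes must land on both members to keep the mirror intact,
        # so write throughput is bounded by the slower hard drive.
        self.ssd.write(lba, data)
        self.hdd.write(lba, data)


array = HybridMirror(Device("ssd"), Device("hdd"))
array.write(0, b"payload")
assert array.read(0) == b"payload"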

Our test results don't quite reflect that theoretical picture, though. With the notable exception of the Web server benchmark, where the hybrid setup's data rates do, in fact, approach what we'd expect from the pure SSD configuration, the mix of SSDs and hard drives doesn't get anywhere close to an SSD-only arrangement.

The Adaptec controller fares much better in the hard disk I/O performance tests. Regardless of benchmark type (database, file server, Web server, or workstation), the RAID 6805 goes head-to-head with Areca's ARC-1880i and LSI's MegaRAID 9265-8i, consistently finishing first or second. Only the HighPoint RocketRAID 2720SGL trails in the I/O performance benchmark. Once the hard disks are replaced with SSDs, however, LSI's MegaRAID 9265-8i shines and leaves the other three controllers in its dust.

46 comments
  • Great review! Though I would have liked to see some RAID 1 and RAID 10 benchmarks. You don't usually see RAID 0 with expensive SAS RAID controllers, and RAID 10 configurations are more common than RAID 5.
    1
  • I just sold my HighPoint RocketRAID 2720 because of the terrible drivers. Not only do the drivers add about 60 seconds to the Windows boot, they also cause random BSODs. The support was a joke, and the driver that came on the disc caused the Windows 7 x64 setup to instantly BSOD even though the box had a Windows 7 compatible logo on it. I even RMAed the card and the new one was exactly the same.
    1
  • Very cool, but fast and expensive means this isn't home server stuff. For that, try the IBM BR10i, an eight-port PCIe SAS/SATA RAID controller that is generally available on eBay for $40 with no bracket (I live for danger). You are stuck with 3 Gb/s per port, but if you add $34 for a pair of forward breakout cables, you have eight SATA ports at a cost of under $10 per port. The card requires a PCIe x8 slot, but if you only give it four lanes (the number of lanes offered by our Atom's NM10), it will give each port 1.5 Gb/s. Cheap SAS makes software RAID 6 prudent in a home storage server.
    0
  • I have pretty much no use for anything other than RAID 0, but it was still an interesting read. I think I prefer this type of article over the longer type with actual benchmarks thrown in (not for GPU or CPU reviews, though).
    0
  • Great read! Way better than rumors and junk. Stick with this kind of stuff, Tom's!
    2
  • Only wish this review had come out earlier!

    I had a hard time deciding between the 9265-8i, the 1880, and the 6805 a month ago. I bought the 6805 and always wondered why RAID 10 was not as fast as I thought it should be. This review confirmed my worries.

    I eventually went to RAID 6 with six Constellation ES 1 TB disks. Here's where the Adaptec really shines. This is for a photo/video storage/editing disk array.

    Admittedly, if I had the choice again, I would have picked the Areca after seeing the numbers. Adaptec was the cheapest of them all, so it's not too much of a regret.
    0
  • Great review! As I am in the process of building a new home file server and always have a habit of going overboard in such situations, I will be referring back to this article many more times before purchasing.

    That said, can you please talk more about the performance differences between SATA and SAS? I understand the reliability argument; however, I wonder whether, for my purposes, I wouldn't be better served by cheaper SATA disks instead of SAS disks.

    I would also love some direction with regard to a good enclosure and power supply for a hard-drive-only box. I realize I am quickly priced out of an enterprise solution in this arena, but I have seen at least a couple of cheaper options online, such as the Sans Digital TR8M+B. (This enclosure is normally bundled with a RocketRAID controller, which I would probably discard in favor of either the Adaptec or LSI solution.)
    0
  • You are missing a huge competitor in this space. ATTO RAID adapters are on par and, I think, the only other option out there. Why are they not compared in this review?
    0
  • I bought the HighPoint; for the money, it was incredible value at a little under $120.
    0
  • I evaluated all but the HighPoint for work. What isn't shown, and would be unrealistic for a home user, is that the LSI destroys the competition when you throw on a SAS expander. With 24 15,000 RPM SAS drives, the LSI card tops out at 3,500 MB/s in RAID 0 sequential writes, while the Areca stays under 2,500 MB/s and the Adaptec under 1,800 MB/s. The Areca also has a lot of issues with stuttering during writes; your average may be fine, but the throughput has some significant dropouts.
    0
  • How do these cards compare with using the six SATA 2 connections on my motherboard, a couple of cheap $30 two-port SATA cards (e.g., the StarTech PEXSAT32 2-Port PCI Express SATA 6 Gbps), and software RAID 6?

    I have more CPU than I can use (Core i5) and want to use cheap 2 or 3 TB 7,200 RPM SATA drives because I want lots of storage rather than maximum speed.
    0
  • When will we get asymmetric RAID 0? That is, a RAID controller capable of splitting data into differently sized parts, so the larger parts go to the faster drives and the smaller ones to the slower drives.
    0
  • I'd also love to see a comparison between these controllers and software RAID on the Intel Sandy Bridge-E platform, to see if we can believe Intel's marketing:

    http://intelstudios.edgesuite.net/idf/2010/sf/aep/STOS002/STOS002.html
    0
  • Great review; however, I would have liked to see more details about the RAID configuration options for each card. Things such as:
    1. Supported RAID features
    2. RAID rebuild rates, notification features, etc.
    3. Gotchas with each card, i.e., are JBOD disks interchangeable between different RAID cards?

    I am not surprised to see HighPoint's card at the bottom of the list. You really get what you pay for with these cards: poor performance and even poorer support. I have a RocketRAID 2320, which has horrible drivers and sucks in every category. I will never use another HighPoint card due to the mounting issues I have encountered.
    0
  • A slight correction to the article about FC controllers. You don't use FC for raw speed; you use it for redundancy and multi-pathing. A single FC drive will connect to two different channels; each channel can go back to the same HBA on two different channels or to two completely separate HBAs. This way, each drive has at least two channels to reach the host system. Also, FC comes in 2, 4, 8, and 10 Gb/s flavors, which kinda crushes SAS 6 Gb/s in raw bandwidth. Although, honestly, you won't see faster than 4 or 8 inside a system; 10 is usually reserved for links between SAN drive arrays and SAN fabric switches. With multi-pathing, not only are you getting redundant connections, you can also mux the two paths to combine their bandwidth. A system sporting two dual 8 Gb/s HBAs would be communicating with the SAN at 32 Gb/s across four connections to two different switches.

    Which brings up the last point: FC's expandability is beyond SAS and FIS PM/PEs. PM/PE was designed for BBC connections, where you have a single channel to a backplane with four to eight hot-swap SAS connectors. And while they left room for you to implement 255 IDs per channel, there isn't a single vendor who provides that solution. FC, on the other hand, is as expandable as Ethernet: you can just keep adding more drive arrays, as many as you want. Each storage processor has its own limit, usually around 255 disks, but you can just add more storage processors.

    That all being said, FC is for enterprise-class storage networks. It's the absolute best protocol for that, due to its expandability and scalability (disks plus bandwidth). SAS is for local system disks on small to medium business servers. Any enterprise worth its salt will be using VM technology, with the VMs stored on the SAN for availability/redundancy purposes.
    0
  • "Aside from their performance characteristics, they stand apart by offering handy features like mixed-environment SAS and SATA support, along with scalability via SAS expanders."
    Can't you test those statements in an upcoming article?
    My personal experience says that the HP SAS expander works flawlessly with the LSI and Areca cards you tested, with both single and dual linking.
    However, the Adaptec only seems to understand single linking, while the HighPoint doesn't work with it at all.
    0
  • With my experience of losing a lot of data to failing hard drives, one motive for building a storage cluster on a dedicated controller is reliability.

    I myself have built a storage pool using ZFS, operating the SAS controller in IT mode (Initiator Target mode, which means all RAID functionality is turned off, as it should be when using ZFS). So you don't buy an expensive hardware RAID card for that; instead, you buy a cheaper card with less RAID functionality. The RAID is instead handled by the software, which has proven to be a lot more reliable than hardware RAID solutions. The IT people at CERN, who process petabytes of data every day, can testify to that from operating a huge storage cluster built on Areca cards; in short, the hardware RAID wasn't as reliable as promised, whereas the ZFS software RAID solution was.

    When using an operating system such as Solaris or OpenIndiana, one really important property of the controller is platform compatibility. There are currently only two brands that hold up in terms of compatibility, and those are LSI and Intel. LSI is known to be especially reliable and most thoroughly tested, as most operating system vendors provide native drivers for LSI hardware in server environments, and it has been used in such environments for years now.

    Brands such as 3Ware and OEMs such as Dell, IBM, Intel, HP, Fujitsu-Siemens, Cisco, et al. build SAS cards that are mostly based on LSI chips (look for MegaRAID 1068e/1078e or 2008e/2108e chips in the specs).

    From a compatibility standpoint, HighPoint cards are the last brand I would recommend, and from a reliability standpoint I would certainly recommend that people stay away from anything that comes from JMicron.
    0
  • Interesting review! What I find really disturbing is that results obtained by others using the HighPoint 2720 are much better while being consistent with each other, such as here:
    http://www.youtube.com/user/TheBenzoEnzo#p/a/u/0/yUYZx1zj9UA
    and here:
    http://www.tweaktown.com/reviews/4306/highpoint_rocketraid_2720sgl_sata_6g_raid_controller_review/index7.html
    Are the others lying, have they done it all wrong, or was there something wrong with Tom's Hardware's setup or drivers?
    0