
Roundup: Three 16-Port Enterprise SAS Controllers

Hardware Comparison Table and Test Setup
| Manufacturer | Adaptec | Areca | Promise |
|---|---|---|---|
| Model | RAID 51645 | ARC-1680ix-16 | SuperTrak EX16650 |
| Internal Connectors | 4x SFF-8087 | 4x SFF-8087 | 4x SFF-8087 |
| External Connectors | 1x SFF-8088 | 1x SFF-8088, 1x LAN, COM | N/A |
| Cache | 512 MB DDR2-400 ECC, on board | 512 MB DDR2-533, DIMM | 512 MB DDR2 ECC, on board |
| Profile | Full height, half length | Full height | Full height |
| Interface | PCI Express x8 | PCI Express x8 | PCI Express x8 |
| XOR Engine | 1.2 GHz dual-core RAID on Chip (ROC) | Intel IOP348, 1,200 MHz | Intel IOP348, 1,200 MHz |
| RAID Level Migration | Yes | Yes | Yes |
| Online Capacity Expansion | Yes | Yes | Yes |
| 2+ TB Volumes (64-bit LBA) | Yes | Yes | Yes |
| Multiple RAID Arrays | Yes | Yes | Yes |
| Command Line Interface | Yes | Yes | No |
| Hot Spare Support | Yes | Yes | Yes |
| Battery Back-up Unit | Optional | Optional | Optional |
| RAID 5 Init | 45 min | 25 min | 1 h 16 min |
| RAID 5 Rebuild | 33 min | 50 min | 55 min |
| RAID 6 Init | 55 min | 25 min | 1 h 30 min |
| RAID 6 Rebuild, Drive 1 | 40 min | 55 min | 1 h 4 min |
| RAID 6 Rebuild, Drive 2 | 32 min | (rebuilt simultaneously) | 55 min |
| RAID 6 Rebuild, Total | 1 h 12 min | 57 min | 1 h 59 min |
| Spin Down Idle Drives | Yes | Yes | No |
| Power Consumption (Power Saving) | 298 W | 296 W | N/A |
| Power Consumption (Idle) | 368 W | 365 W | 364 W |
| Power Consumption (Peak) | 412 W | 409 W | 402 W |
| Supported RAID Modes | 0, 1, 1E, 5, 5EE, 6, 10, 50, 60, JBOD | 0, 1, 10(1E), 3, 5, 6, 30, 50, 60, single disk or JBOD | 0, 1, 1E, 5, 6, 10, 50, 60 |
| Fan | No | Yes | No |
| Supported OS | Windows XP, Server 2003/2008, Vista; Red Hat Enterprise Linux (RHEL); SUSE Linux Enterprise Server (SLES); SCO OpenServer; UnixWare; Sun Solaris 10 x86; FreeBSD | Windows 2000/XP/Server 2003/Vista; Linux; FreeBSD; Novell NetWare 6.5; Solaris 10 x86/x86_64; SCO UnixWare 7.x.x; Mac OS X 10.x (EFI BIOS support) | Windows Vista/2000/XP/Server 2003/Server 2008; Red Hat Linux; SuSE Linux; Miracle Linux; Fedora Core; Linux open-source driver (32/64-bit); FreeBSD; VMware 3.02, 3.5 |
| Other Features | Copy-back hot spare | Integrated Web server | N/A |
| Warranty | 3 years | 3 years | 3 years |
| Price | $999 | $999 | $800 |
System Hardware
Processor(s)
2x Intel Xeon Processor (Nocona core); 3.6 GHz, FSB800, 1 MB L2 Cache
Platform
Asus NCL-DS (Socket 604)
Intel E7520 Chipset, BIOS 1005
RAM
Corsair CM72DD512AR-400 (DDR2-400 ECC, reg.)
2x 512 MB, CL3-3-3-10 Timings
System Hard Drive
Western Digital Caviar WD1200JB
120 GB, 7,200 RPM, 8 MB Cache, Ultra ATA/100
Test Drives
16x Fujitsu MBA3147RC
147 GB, 15,000 RPM, 16 MB Cache, SAS
Mass Storage Controller(s)
Adaptec RAID 51645
Areca ARC-1680ix-16
Promise SuperTrak EX16650
Networking
Broadcom BCM5721 On-Board Gigabit Ethernet NIC
Graphics Subsystem
On-Board Graphics
ATI RageXL, 8 MB
I/O Performance
IOMeter 2003.05.10
File Server Benchmark
Web Server Benchmark
Database Benchmark
Workstation Benchmark
Streaming Reads
Streaming Writes
System Software & Drivers
OS
Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1
Platform Driver
Intel Chipset Installation Utility 7.0.0.1025
Graphics Driver
Default Windows Graphics Driver
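
The IOMeter workloads listed above are defined as weighted access specifications. The exact specs used for this test are not published here, so as an illustration only, here is a sketch of the widely circulated file-server pattern often attributed to Intel; every weight below is that public pattern, not a confirmed copy of the test's settings.

```python
# Sketch of an IOMeter-style access specification, using the widely
# circulated "file server" pattern (80% read, 100% random, mixed sizes).
# These weights are the public Intel/StorageReview pattern, NOT a
# confirmed reproduction of the specs used in this article's tests.

# (transfer size in bytes, fraction of all accesses)
FILE_SERVER_PATTERN = [
    (512, 0.10), (1024, 0.05), (2048, 0.05), (4096, 0.60),
    (8192, 0.02), (16384, 0.04), (32768, 0.04), (65536, 0.10),
]
READ_FRACTION = 0.80    # 80% reads, 20% writes
RANDOM_FRACTION = 1.00  # fully random access

def average_transfer_size(pattern):
    """Weighted average transfer size implied by the access spec."""
    return sum(size * weight for size, weight in pattern)

if __name__ == "__main__":
    avg = average_transfer_size(FILE_SERVER_PATTERN)
    print(f"Average transfer size: {avg / 1024:.1f} KiB")  # ~11.1 KiB
    print(f"Mix: {READ_FRACTION:.0%} read, 100% random")
```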


Test Drives: Fujitsu MBA3147RC (15,000 RPM)

We used 16 Fujitsu MBA3147RC 15,000 RPM SAS drives to make sure that the controllers could be saturated during our tests. These Fujitsu drives are state-of-the-art server models with 16 MB cache and throughput of over 150 MB/s.        
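
A quick back-of-the-envelope check shows why this drive count is enough. The sketch below assumes the ~150 MB/s per-drive figure quoted above and the PCIe 1.x rate of 250 MB/s per lane, per direction, for the x8 slots these cards use.

```python
# Rough saturation check: can 16 drives out-run the host interface?
# Assumes the ~150 MB/s per-drive sequential figure quoted above and
# the PCIe 1.x rate of 250 MB/s per lane, per direction.

DRIVES = 16
MB_S_PER_DRIVE = 150    # Fujitsu MBA3147RC, sequential (article's figure)
PCIE_LANES = 8
MB_S_PER_LANE = 250     # PCIe 1.x, per direction

aggregate_drives = DRIVES * MB_S_PER_DRIVE   # 2,400 MB/s from the disks
pcie_ceiling = PCIE_LANES * MB_S_PER_LANE    # 2,000 MB/s on the x8 link

print(f"Drives can deliver: {aggregate_drives} MB/s")
print(f"PCIe x8 can move:   {pcie_ceiling} MB/s")
# The disks out-supply the link, so any plateau in the results
# reflects the controller or interface, not the drive array.
```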

Comments (26)
This thread is closed for comments.
  • scimanal, April 24, 2009 7:38 AM
    How about a RAID 10 or 50 speed comparison? I generally run in those modes, and this article would have been well suited to include that information.
  • spazoid, April 24, 2009 12:03 PM
    It would have been nice to see (although not terribly useful) how much power the base system draws without the controllers, so one could know how much extra heat to expect when adding one of these cards. I doubt a lot of people are going to be switching from one of these controllers to another, so knowing their power usage only relative to each other isn't very valuable. :)

    Thanks for the review though. Now, here's hoping that this ~800 MB/s bottleneck disappears in the near future.
  • Jerky_san, April 24, 2009 12:22 PM
    Wish this article had been written about four months ago, since we built a system then and used a Promise card. If I'd known the performance was this bad compared to the other two, I would have gone with something a little more expensive.
  • kschoche, April 24, 2009 12:33 PM
    I love that I have to click on an image TWICE before I can get it to a readable size. Once again, Tom's has failed to fix the zoom buttons/features. I *was* very interested in the article until I got to the results and got so frustrated that I just gave up.
  • gwolfman, April 24, 2009 3:11 PM
    Quote:
    It allows a speed of 3 Gb/s in today’s implementation, with 6 Mb/s coming up this year...
    Wow, 6 Mb/s is really fast. I sure hope USB 2.0 doesn't take over since it can do 480 Mb/s. :p 
  • Anonymous, April 24, 2009 3:14 PM
    What about LSI Logic controllers? These three HBAs tested all use the same Intel IOP348 processor, while LSI uses its own. Another reason to test them is that HP's and Dell's "own" RAID controllers are all rebranded LSI cards (dunno about IBM), so chances are you'll end up with one of those.

    Also, are you testing with battery backup? If you don't, almost any array controller will forbid write cache, thus killing performance. Please do a review with an HP P411 controller with 512 MB BBWC, or with an LSI MegaRAID 8888 with 512 MB BBWC.
  • gwolfman, April 24, 2009 3:25 PM
    By the way, where can I find the test scripts for the benchmark patterns (web server, file server, workstation, and database)?
  • tucci999, April 24, 2009 4:20 PM
    Samsung used two cards like this to create a RAID from 24 of their 250 GB SSDs in a Skulltrail setup. They got 2 GB/s read and write speeds from a 6 TB setup. It was amazing.
  • thearm, April 24, 2009 5:19 PM
    I love how I have to be very careful once I click the drop-down list or it will go away. Then I have to try to use the drop-down again. The scroll sitting right on the edge of losing the drop-down doesn't help.

    I liked the older interface.
  • co1, April 24, 2009 5:20 PM
    SSDs have a much greater IOPS capability than typical SAS drives. In testing we have seen them overwhelm even high-end SAS controllers. 4,000 IOPS on RAID 0 is reaching the limitation of the SAS drives (experience shows ~300 IOPS per drive x 16 drives; see the sketch after this thread for the arithmetic). Would love to see a test with even a smaller number of SSDs in the future (about three seems to be the max that a SAS controller can handle).
  • scimanal, April 24, 2009 5:36 PM
    This is a bit wild, but has anyone ever tried a RAID 10 array putting SSDs in pairs with hard disks (a mirrored pair with one SSD and one disk)? The reason I ask is that the RAID card could write to the hard disk and sync up to the SSD when available, while the SSD would show as the faster node and serve the reads.

    This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.
  • Jerky_san, April 24, 2009 7:01 PM
    I know the Promise card won't allow an SSD to be put with normal hard drives. At least their interface doesn't allow it.
  • obarthelemy, April 24, 2009 10:51 PM
    It would have been interesting to have at least one SATA and one SSD product included for comparison.
  • michaelahess, April 24, 2009 10:55 PM
    Yeah, a lot of comparisons lately don't include rival technologies. We need baselines as well as similar tech to give an idea of how much (or little) difference a bit (or a lot) of money can make over another solution.

    Also, RAID 10 and 50 would be good, as stated above; I use both very heavily.
  • Anonymous, April 25, 2009 7:33 AM
    What is the difference between RAID 5EE and RAID 6? They seem almost identical to me; both provide two spares.
    It seems like this could be it, but nothing I could find made the comparison:
    RAID 5EE uses the spare for faster reading and faster rebuilds.
    RAID 6 can support two simultaneous failures.
    Can someone confirm?
  • ossie, April 25, 2009 9:36 AM
    As Mast pointed out, a glaring miss is LSI and its OEMs (yes, IBM also uses LSI, as Intel and its OEMs do, even if LSI doesn't use Intel IOPs anymore).
    Another missing manufacturer is 3ware (AMCC).
    Missing BBUs have a huge impact on write performance, as the WB cache is usually disabled without one (at least on LSI's).

    @Hargak:
    "We used 16 Fujitsu MBA3147RC 15,000 RPM SAS drives to make sure that the controllers could be saturated during our tests. These Fujitsu drives are state-of-the-art server models with 16 MB cache and throughput of over 150 MB/s."

    The Fujitsu MBA drives' throughput is nowhere near 150 MB/s; they are a little slower than Seagate's Cheetah 15K.5 at ~120 MB/s. The only faster ones are Seagate's 15K.6 (~170 MB/s) and Hitachi's Ultrastar 15K450 (~155 MB/s). Ironically, it's the same duo that also reviewed those:
    http://www.tomshardware.com/reviews/ultrastar-cheetah-sas,2004-6.html
  • ShadowFlash, April 25, 2009 10:11 PM
    Quote:
    This is a bit wild, but has anyone ever tried a RAID 10 array putting SSDs in pairs with hard disks (a mirrored pair with one SSD and one disk)? The reason I ask is that the RAID card could write to the hard disk and sync up to the SSD when available, while the SSD would show as the faster node and serve the reads. This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.

    A much better kooky idea would be to use a pair of SSDs in a mobile RAID 0 enclosure (to increase I/O and capacity, not sequential throughput) as the parity drive in a RAID 3/4, and put it up against a standard RAID 5. I've put a lot of thought into this one, and I'm convinced it would be superior to RAID 5 in almost every way. Most RAID 3s are really just RAID 4s anyhow, which is preferred in this case. I'm willing to bet that the speed of the SSDs would more than offset the performance loss of a dedicated parity drive, leaving XOR calculation as the sole bottleneck on the way to pure RAID 0 speeds. Being a RAID 4, random writes (a problem with RAID 5) could be significantly faster. This solution could be quite cost effective for those unwilling to take the full plunge into SSDs while still taking advantage of the lower price points of traditional mass storage. I really wish "someone", nudge, nudge, could try this setup out and report the results.
  • ShadowFlash, April 25, 2009 10:15 PM
    Not to mention far less loss in performance when degraded, and higher sequential numbers, which are standard benefits of RAID 3.
  • industrial_zman, April 25, 2009 11:39 PM
    I know everyone is asking where LSI is, but I'm curious where 3ware is in this shoot-out. Did both companies miss the entry deadline?

    I've actually been looking closely at the Areca models for a while now; the upgradeable RAM module is very tempting for a tweaker like me. But I might just go back to my old stand-by of Adaptec based on this review.

    There is one more article I would like to see written first: "Does more cache help RAID controller performance?" Now that there is a baseline for the Areca card with the stock 512 MB RAM on board, let's see how it performs with 1 GB, 2 GB, and dare I even mention 4 GB on the controller. The same tests would be appreciated, for comparison.
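
As a closing aside on co1's IOPS figures above, here is a quick sketch of the arithmetic, using the commenter's ~300 IOPS per drive rule of thumb; that figure is an estimate from the comment, not a measured value.

```python
# Checking the arithmetic in co1's comment: ~300 random IOPS per
# 15,000 RPM drive is a rule of thumb, not a measurement.

IOPS_PER_DRIVE = 300        # commenter's estimate for a 15k RPM SAS drive
DRIVES = 16
CITED_RAID0_IOPS = 4000     # figure cited in the comment

ceiling = IOPS_PER_DRIVE * DRIVES   # 4,800 IOPS theoretical aggregate
print(f"Theoretical ceiling: {ceiling} IOPS")
print(f"Cited RAID 0 result: {CITED_RAID0_IOPS} IOPS "
      f"({CITED_RAID0_IOPS / ceiling:.0%} of ceiling)")
```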