Roundup: Three 16-Port Enterprise SAS Controllers

Results: Sequential Throughput

RAID 0

Areca’s card is clearly the fastest product for simple sequential reads without pending commands: it starts at 680 MB/s and maxes out at 820 MB/s as soon as deeper command queues are involved. Adaptec’s maximum performance is very much the same. Promise starts at 390 MB/s and reaches 800 MB/s, but only with long command queues.
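The article does not publish its benchmark scripts (a commenter below asks for them), so the exact methodology is unknown; still, the shape of a queue-depth sweep like this is simple. Below is a rough Python sketch of one; the device path /dev/sdb, the 64 KiB request size, and the thread-based approximation of outstanding commands are all assumptions, not the article's setup:

    # Rough sketch of a sequential-read sweep over increasing numbers of
    # outstanding requests, loosely mimicking the queue-depth scaling
    # discussed above. Assumptions: Linux, array exposed as /dev/sdb.
    # Note: reads here go through the page cache; a rigorous benchmark
    # would use O_DIRECT with aligned buffers.
    import os
    import threading
    import time

    DEV = "/dev/sdb"       # assumed device path: point it at your array
    BLOCK = 64 * 1024      # 64 KiB per request
    DURATION = 10          # seconds per queue depth

    def worker(fd, offset, stride, stop_at, counter, lock):
        # Each worker reads every qd-th block, so qd workers together
        # cover the device sequentially with qd requests in flight.
        while time.time() < stop_at:
            data = os.pread(fd, BLOCK, offset)
            if len(data) < BLOCK:      # hit end of device: wrap around
                offset %= stride
                continue
            offset += stride
            with lock:
                counter[0] += len(data)

    def sweep(depths=(1, 2, 4, 8, 16, 32, 64)):
        fd = os.open(DEV, os.O_RDONLY)
        for qd in depths:
            counter, lock = [0], threading.Lock()
            stop_at = time.time() + DURATION
            threads = [threading.Thread(target=worker,
                                        args=(fd, i * BLOCK, qd * BLOCK,
                                              stop_at, counter, lock))
                       for i in range(qd)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            print(f"queue depth {qd:2d}: {counter[0] / DURATION / 1e6:7.1f} MB/s")
        os.close(fd)

    if __name__ == "__main__":
        sweep()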

There is a clear hierarchy in sequential writes for RAID 0: Areca is fastest, Adaptec is second and Promise finishes third.

RAID 5

With all RAID 5 member drives available, RAID 5 throughput is very much like the numbers we saw in RAID 0. However, once the controllers must rebuild array data on the fly, performance drops. Areca manages to maintain its performance level the best, while Adaptec and Promise are impacted by the missing drive.

Areca and Adaptec manage to maintain the same write performance for sequential operation on degraded arrays, while the Promise card shows a noticeable performance drop once one RAID 5 drive is missing.
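The rebuild-on-the-fly at work here is plain XOR parity reconstruction: in RAID 5 the parity block is the XOR of the data blocks in a stripe, so a missing block can be recomputed from the survivors on every read. A minimal Python sketch of the idea (a hypothetical 4-drive stripe, not any vendor's firmware):

    # RAID 5 degraded-read sketch: parity is the XOR of the data blocks,
    # so any single missing block is rebuilt by XOR-ing the survivors.

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    # One stripe on a 4-drive array: three data blocks plus parity.
    d0, d1, d2 = b"\x11" * 8, b"\x22" * 8, b"\x44" * 8
    parity = xor_blocks([d0, d1, d2])

    # Drive holding d1 fails: rebuild it from the surviving blocks.
    rebuilt = xor_blocks([d0, d2, parity])
    assert rebuilt == d1   # the degraded array still returns correct data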

RAID 6

RAID 6 with double redundancy is important for mission-critical systems. Again, read throughput is similar to the excellent results seen in RAID 0. Removing one drive (to simulate a failure) has only a small impact on Adaptec’s performance, but makes a larger difference for Areca and Promise. Once two drives fail, Areca maintains the same performance level as with only one failed drive, while Adaptec and Promise lose even more performance.

For Adaptec and Areca, sequential write performance in RAID 6 is the same whether the array is healthy, single-degraded, or double-degraded. Unfortunately, the Promise card’s performance drops by almost 50% once one or two drives of a RAID 6 array are missing.
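The reason RAID 6 survives two failures is that it keeps two independent syndromes per stripe: P, the plain XOR used by RAID 5, and Q, a Reed-Solomon style sum over GF(2^8). A compact Python sketch of the math, using the common generator-2 construction (this illustrates the principle, not any of the three controllers' firmware):

    # RAID 6 dual-parity sketch over GF(2^8) with the reduction
    # polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d). P is the XOR of the
    # data bytes; Q weights the byte from drive i by 2**i in GF(2^8).

    def gf_mul2(x):
        """Multiply by 2 (the generator) in GF(2^8)."""
        x <<= 1
        return (x ^ 0x11D) & 0xFF if x & 0x100 else x

    def pq_syndromes(data_bytes):
        """P and Q parity for one byte column of a stripe."""
        p = q = 0
        for d in reversed(data_bytes):   # Horner's rule builds sum(2**i * d_i)
            p ^= d
            q = gf_mul2(q) ^ d
        return p, q

    # One byte per data drive in a hypothetical 4+2 stripe:
    p, q = pq_syndromes([0x11, 0x22, 0x44, 0x88])
    # Two unknowns (two failed drives) against two independent equations
    # (P and Q) is what lets the controller solve for both missing blocks.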

Comments
  • scimanal, April 24, 2009 7:38 AM
    How about a RAID 10 or 50 speed comparison? Generally I run in those modes, and this article would be well suited to include that information.
  • spazoid, April 24, 2009 12:03 PM
    Would've been nice to see (although not terribly useful) how much power the base system (without the controllers) consumes, so one could know how much extra heat to expect when adding one of these cards. I doubt a lot of people are going to be switching from one of these controllers to another, so knowing their power usage only compared to each other isn't very valuable :)

    Thanks for the review though. Now, here's to hoping that this ~800 MB/s bottleneck is going to disappear in the near future.
  • Jerky_san, April 24, 2009 12:22 PM
    Wish this article had been written about 4 months ago, since we built a system and used a Promise card. If I'd known the performance was this bad compared to the other two, I would have gone with something a little more expensive.
  • kschoche, April 24, 2009 12:33 PM
    I love that I have to click on an image TWICE before I can get it to a readable size. Once again, Toms has failed to fix the zoom buttons/features. I *was* very interested in the article until I got to the results and got so frustrated that I just gave up.
  • gwolfman, April 24, 2009 3:11 PM
    Quote:
    It allows a speed of 3 Gb/s in today’s implementation, with 6 Mb/s coming up this year...
    Wow, 6 Mb/s is really fast. I sure hope USB 2.0 doesn't take over since it can do 480 Mb/s. :p 
  • Anonymous, April 24, 2009 3:14 PM
    What about LSI Logic controllers? These 3 HBAs tested all use the same Intel IOP348 processor, while LSI uses its own!
    Another reason to test them is that HP's and Dell's "own" RAID controllers are all rebranded LSI cards (dunno about IBM), so chances are you'll end up with one of those.

    Also, are you testing with battery backup? Because if you don't, almost any array controller will forbid write caching, killing performance.
    Please do a review of an HP P411 controller with 512MB BBWC or an LSI MegaRAID 8888 with 512MB BBWC.
  • gwolfman, April 24, 2009 3:25 PM
    Btw, where can I find the test scripts for the benchmark patterns (web server, file server, workstation, & database)?
  • tucci999, April 24, 2009 4:20 PM
    Samsung used two cards like this to create a RAID array from 24 of their 250GB SSDs in a Skulltrail setup. They had 2 GB/s read and write speeds in a 6TB setup. It was amazing.
  • co1, April 24, 2009 5:12 PM
    SSDs have a much greater IOPS capability than typical SAS drives. In testing we have seen them overwhelm even high-end SAS controllers. 4,000 IOPS in RAID 0 is reaching the limitations of the SAS drives (experience shows ~300 IOPS per drive across 16 drives). Would love to see the test repeated with even a small number of SSDs in the future (about 3 seems to be the max that systems can handle).
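The arithmetic behind that ceiling is worth spelling out; both numbers below are the commenter's rough figures, not measured values:

    # Spindle ceiling: ~300 random IOPS per 15,000 RPM SAS drive, 16 drives.
    print(300 * 16)   # -> 4800, so ~4000 measured IOPS is near the disks'
                      #    limit, not necessarily the controller's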
  • thearm, April 24, 2009 5:19 PM
    I love how I have to be very careful once I click the drop-down list or it will go away. Then I have to try to use the drop-down again. Having the scroll bar right on the edge of losing the drop-down doesn't help.

    I liked the older interface.
  • scimanal, April 24, 2009 5:36 PM
    This is a bit wild, but has anyone ever tried a RAID 10 array pairing SSDs with hard disks (each mirrored pair with one SSD and one disk)? The reason I ask is that the RAID card could write to the hard disk, sync up to the SSD when available, and serve reads from the SSD as the faster node.

    This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.
  • Jerky_san, April 24, 2009 7:01 PM
    I know the Promise card won't allow an SSD to be mixed with normal hard drives. At least their interface doesn't allow it.
  • obarthelemy, April 24, 2009 10:51 PM
    It would have been interesting to have at least one SATA and one SSD product included for comparison.
  • michaelahess, April 24, 2009 10:55 PM
    Yeah, a lot of comparisons lately don't have rival technologies to compare against. We need baselines as well as similar tech to give an idea of how much (or little) difference a bit (or a lot) of money can make over another solution.

    Also, RAID 10 and 50 would be good, as stated above; I use both very heavily.
  • Anonymous, April 25, 2009 7:33 AM
    What is the difference between RAID 5EE and RAID 6? They seem almost identical to me; both provide two spares.
    It seems like this could be it, but nothing I could find made the comparison:
    RAID 5EE uses the spare for faster reading and a faster rebuild.
    RAID 6 can survive 2 simultaneous failures.
    Can someone confirm?
  • ossie, April 25, 2009 9:36 AM
    As Mast pointed out, a glaring miss is LSI and its OEMs (yes, IBM also uses LSI, as Intel and its OEMs do, even if LSI doesn't use Intel IOPs anymore).
    Another missing manufacturer is 3ware (AMCC).
    Missing BBUs have a huge impact on write performance, as the WB cache is usually disabled (at least on LSI's).

    @Hargak:
    "We used 16 Fujitsu MBA3147RC 15,000 RPM SAS drives to make sure that the controllers could be saturated during our tests. These Fujitsu drives are state-of-the-art server models with 16 MB cache and throughput of over 150 MB/s."

    Fujitsu's MBA drives' throughput is nowhere near 150 MB/s; they are a little bit slower than Seagate's Cheetah 15K.5 at ~120 MB/s. The only faster ones are Seagate's 15K.6 (~170 MB/s) and Hitachi's Ultrastar 15K450 (~155 MB/s). Ironically, it's the same duo that also reviewed these:
    http://www.tomshardware.com/reviews/ultrastar-cheetah-sas,2004-6.html
  • ShadowFlash, April 25, 2009 10:11 PM
    Quote:
    This is a bit wild, but has anyone ever tried a RAID 10 array pairing SSDs with hard disks (each mirrored pair with one SSD and one disk)? The reason I ask is that the RAID card could write to the hard disk, sync up to the SSD when available, and serve reads from the SSD as the faster node. This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.

    A much better kooky idea would be to use a pair of SSDs in a mobile RAID 0 enclosure (to increase I/O and capacity, not sequential throughput) as the parity drive in a RAID 3/4, and put it up against a standard RAID 5. I've put a lot of thought into this one, and I'm convinced it would be superior to RAID 5 in almost every way. Most RAID 3s are really just RAID 4s anyhow, which is preferred in this case. I'm willing to bet the speed of the SSDs would more than offset the performance loss of a dedicated parity drive, leaving the XOR calculation as the sole bottleneck on the way to pure RAID 0 speeds. And being a RAID 4, random writes (a problem with RAID 5) could be significantly improved. This solution could be quite cost-effective for those unwilling to take the full plunge into SSDs while still taking advantage of the lower price points of traditional mass storage. I really wish "someone", nudge nudge, could try this setup out and report the results.
  • ShadowFlash, April 25, 2009 10:15 PM
    Not to mention far less performance loss when degraded, and higher sequential numbers, which are standard benefits of RAID 3.
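As a rough sanity check on this proposal, here is a toy model of the RAID 4 small-write bottleneck: every small write costs a read-modify-write on one data drive plus a read-modify-write on the dedicated parity drive, so the parity drive caps the whole array. All IOPS figures are assumed round numbers, not measurements:

    # Toy RAID 4 random-write model: the dedicated parity drive sees a
    # read+write for every array write, so it limits the whole array.

    def raid4_write_iops(data_drives, data_iops, parity_iops):
        per_data = data_drives * data_iops / 2   # read-modify-write on data
        per_parity = parity_iops / 2             # read-modify-write on parity
        return min(per_data, per_parity)

    HDD = 300    # assumed 15K SAS drive, random IOPS
    SSD = 4000   # assumed SSD-pair parity device, random IOPS

    print(raid4_write_iops(15, HDD, HDD))  # ~150: HDD parity chokes the array
    print(raid4_write_iops(15, HDD, SSD))  # ~2000: SSD parity moves the wall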
  • industrial_zman, April 25, 2009 11:39 PM
    I know everyone is asking where LSI is, but I'm curious where 3ware is in this shootout. Did both companies miss the entry deadline?

    I've actually been looking closely at the Areca models for a while now. The upgradeable RAM module is very tempting for a tweaker like me, but I might just go back to my old stand-by of Adaptec based on this review.

    There is one more article I would like to see written first: "Does more cache help RAID controller performance?" Now that there is a baseline for the Areca card with its stock 512MB of onboard RAM, let's see how it performs with 1GB, 2GB, and dare I even mention 4GB on the controller. The same tests would be appreciated for comparison.