
Roundup: Three 16-Port Enterprise SAS Controllers

Even entry-level servers come with dual- or quad-core processors and many gigabytes of RAM. But a proper storage subsystem still depends on powerful and flexible host adapters, usually with RAID capabilities. We have three 16-port high-end SAS cards from Adaptec, Areca, and Promise in-house and are ready to run them through their paces in search of a winner.

What SAS is All About

While server storage used to center on adapters and drives employing the parallel Small Computer System Interface (SCSI), today’s interface of choice for Direct Attached Storage (DAS) applications is SAS: Serial Attached SCSI. The parallel SCSI bus had insurmountable issues at increased speeds, such as varying signal run times across its many wires, which is why serial transmission was adopted. SAS is a serial point-to-point connection protocol that doesn’t require signal termination. It works with an 8b/10b encoding scheme and allows a speed of 3 Gb/s in today’s implementation, with 6 Gb/s coming up this year (representing 300 MB/s and 600 MB/s net throughput per port, respectively). On the surface, that 300 MB/s may not appear faster than the 320 MB/s of Ultra320 SCSI, but the throughput is available per connected device, rather than being shared across a bus.
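The arithmetic behind those numbers is simple: 8b/10b encoding carries 8 payload bits in every 10 bits on the wire. A minimal illustrative sketch (the function name is ours, not from any SAS library):

```python
# Net SAS payload throughput from the line rate, assuming 8b/10b encoding:
# every 10 bits transmitted carry 8 bits of actual data.

def net_throughput_mb_s(line_rate_gb_s: float) -> float:
    """Payload bandwidth in MB/s for a given SAS line rate in Gb/s."""
    payload_bits_per_s = line_rate_gb_s * 1e9 * 8 / 10  # strip 8b/10b overhead
    return payload_bits_per_s / 8 / 1e6                 # bits -> bytes -> MB

print(net_throughput_mb_s(3.0))  # 3 Gb/s SAS -> 300.0 MB/s per port
print(net_throughput_mb_s(6.0))  # 6 Gb/s SAS -> 600.0 MB/s per port
```

This is why a 3 Gb/s link nets 300 MB/s rather than 375 MB/s: the encoding overhead conveniently cancels the bits-to-bytes division.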

SAS controllers (also known as initiators) use the Serial SCSI Protocol (SSP) to talk to client devices (known as targets). The SATA Tunneling Protocol (STP) also lets them utilize Serial ATA drives, and the SAS Management Protocol (SMP) is used to manage expanders. SAS uses both fanout expanders and edge expanders, which can be compared to switches in the networking world. One SAS controller can work with up to two edge expanders, each of which can address up to 128 devices. Fanout expanders allow hooking up even more edge expanders.
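As a rough illustration of that topology, here is a hypothetical sketch in Python. The class names and limits mirror the description above; this is not a real SAS management API:

```python
# Illustrative model of a simple SAS domain: one initiator fanning out to
# edge expanders, which act like network switches in front of the drives.

class EdgeExpander:
    MAX_DEVICES = 128  # addressable end devices per edge expander

    def __init__(self):
        self.drives = []

    def attach_drive(self, drive: str) -> None:
        if len(self.drives) >= self.MAX_DEVICES:
            raise RuntimeError("edge expander is full")
        self.drives.append(drive)

class Initiator:
    MAX_EDGE_EXPANDERS = 2  # one controller talks to up to two edge expanders

    def __init__(self):
        self.expanders = []

    def attach_expander(self, exp: EdgeExpander) -> None:
        if len(self.expanders) >= self.MAX_EDGE_EXPANDERS:
            raise RuntimeError("use a fanout expander to scale further")
        self.expanders.append(exp)

ctrl = Initiator()
for _ in range(2):
    ctrl.attach_expander(EdgeExpander())
print(sum(e.MAX_DEVICES for e in ctrl.expanders))  # up to 256 devices
```

Scaling beyond that ceiling is exactly the job of fanout expanders, which aggregate additional edge expanders behind a single controller port.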

The beauty of SAS is that it is extremely flexible and scalable. You can use a variety of configurations within a SAS domain, consisting of SAS and SATA drives set up to provide solutions for various performance and capacity requirements. The cards we looked at are all in the $1,000 range, and provide excellent features and performance for enterprise storage solutions.

  • scimanal, April 24, 2009 7:38 AM
    How about a RAID 10 or 50 speed comparison? Generally I run in those modes, and this article would be well suited to include that information.
  • spazoid, April 24, 2009 12:03 PM
    Would've been nice to see (although not terribly useful) how much power the base system (without the controllers) consumes, so one could know how much extra heat to expect when adding one of the cards. I doubt a lot of people are going to be switching from one of these controllers to another, so knowing their power usage only compared to each other isn't very valuable :) 

    Thanks for the review though. Now, here's to hoping that this ~800 MB/s bottleneck is going to disappear in the near future.
  • Jerky_san, April 24, 2009 12:22 PM
    Wish this article had been written about 4 months ago, since we built a system but used a Promise card. If I'd known the performance was as bad as this compared to the other two, I would have gone with one of the slightly more expensive cards.
  • kschoche, April 24, 2009 12:33 PM
    I love that I have to click on an image TWICE before I can get it to a readable size. Once again, Tom's has failed to fix the zoom buttons/features. I *was* very interested in the article until I got to the results and got so frustrated that I just gave up.
  • gwolfman, April 24, 2009 3:11 PM
    Quote:
    It allows a speed of 3 Gb/s in today’s implementation, with 6 Mb/s coming up this year...
    Wow, 6 Mb/s is really fast. I sure hope USB 2.0 doesn't take over since it can do 480 Mb/s. :p 
  • Anonymous, April 24, 2009 3:14 PM
    What about LSI Logic controllers? These 3 HBAs tested all use the same Intel IOP348 processor, while LSI uses its own.
    Another reason to test them is that HP's and Dell's "own" RAID controllers are all rebranded LSI cards (dunno about IBM) and use LSI controllers, so chances are you'll end up with one of those.

    Also, are you testing with battery backup? Because if you don't, almost any array controller will forbid write caching, killing performance.
    Please do a review with an HP P411 controller with 512MB BBWC or with an LSI MegaRAID 8888 with 512MB BBWC.
  • gwolfman, April 24, 2009 3:25 PM
    Btw, where can I find the test scripts for the benchmark patterns (web server, file server, workstation, and database)?
  • tucci999, April 24, 2009 4:20 PM
    Samsung used two cards like this to create a RAID using 24 of their 250GB SSDs in a Skulltrail setup. They got 2 GB/s read and write speeds in a 6 TB setup. It was amazing.
  • co1, April 24, 2009 5:12 PM
    SSDs have a much greater IOPS capability than typical SAS drives. In testing we have seen them overwhelm even high-end SAS controllers. 4,000 IOPS on RAID 0 is reaching the limitations of the SAS drives (experience shows ~300 IOPS per drive × 16 drives). Would love to see even a smaller number of SSDs in future tests (about 3 seems to be the max that systems can handle).
  • thearm, April 24, 2009 5:19 PM
    I love how I have to be very careful once I click the drop-down list or it will go away. Then I have to try to use the drop-down again. The scrollbar being on the edge of losing the drop-down doesn't help.

    I liked the older interface.
  • co1, April 24, 2009 5:20 PM
    SSDs have a much greater IOPS capability than typical SAS drives. In testing we have seen them overwhelm even high-end SAS controllers. 4,000 IOPS on RAID 0 is reaching the limitations of the SAS drives (experience shows ~300 IOPS per drive × 16 drives). Would love to see even a smaller number of SSDs in future tests (about 3 seems to be the max that a SAS controller can handle).
  • scimanal, April 24, 2009 5:36 PM
    This is a bit wild, but has anyone ever tried a RAID 10 array putting SSDs in pairs with hard disks (a mirrored pair with one SSD, one disk)? The reason I ask is that the RAID card could write to the hard disk and sync up to the SSD when available, while the SSD would show as the faster node and serve the reads.

    This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.
  • Jerky_san, April 24, 2009 7:01 PM
    I know the Promise card won't allow an SSD to be put with normal hard drives. At least their interface doesn't.
  • obarthelemy, April 24, 2009 10:51 PM
    It would have been interesting to have at least one SATA and one SSD product included for comparison.
  • michaelahess, April 24, 2009 10:55 PM
    Yeah, a lot of comparisons lately don't have rival technologies to compare to. We need baselines as well as similar tech to give an idea of how much (or little) difference a bit (or a lot) of money can make over another solution.

    Also, RAID 10 and 50 would be good, as stated above; I use both very heavily.
  • Anonymous, April 25, 2009 7:33 AM
    What is the difference between RAID 5EE and RAID 6? They seem almost identical to me. Both provide 2 spares.
    It seems like this could be it, but nothing I could find made the comparison:
    RAID 5EE uses the spare for faster reading and faster rebuilds.
    RAID 6 can survive 2 simultaneous failures.
    Can someone confirm?
  • ossie, April 25, 2009 9:36 AM
    As Mast pointed out, a glaring miss is LSI and its OEMs (yes, IBM also uses LSI, as Intel and its OEMs do, even if LSI doesn't use Intel IOPs anymore).
    Another missing manufacturer is 3ware (AMCC).
    Missing BBUs have a huge impact on write performance, as the WB cache is usually disabled (at least on LSI's).

    @Hargak:
    "We used 16 Fujitsu MBA3147RC 15,000 RPM SAS drives to make sure that the controllers could be saturated during our tests. These Fujitsu drives are state-of-the-art server models with 16 MB cache and throughput of over 150 MB/s."

    Fujitsu's MBA drives' throughput is nowhere near 150 MB/s; they are a little slower than Seagate's Cheetah 15K.5 at ~120 MB/s. The only faster ones are Seagate's 15K.6 (~170 MB/s) and Hitachi's Ultrastar 15K450 (~155 MB/s). Ironically, it's the same duo that also reviewed those:
    http://www.tomshardware.com/reviews/ultrastar-cheetah-sas,2004-6.html
  • ShadowFlash, April 25, 2009 10:11 PM
    This is a bit wild, but has anyone ever tried a RAID 10 array putting SSDs in pairs with hard disks (a mirrored pair with one SSD, one disk)? The reason I ask is that the RAID card could write to the hard disk and sync up to the SSD when available, while the SSD would show as the faster node and serve the reads.

    This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.

    A much better kooky idea would be to use a pair of SSDs in a mobile RAID 0 enclosure (to increase I/O and capacity, not sequential throughput) as the parity drive in a RAID 3/4, and put it up against a standard RAID 5. I've put a lot of thought into this one, and I'm convinced that it would be superior to RAID 5 in almost every way. Most RAID 3s are really just RAID 4s anyhow, which is preferred in this case. I'm willing to bet that the speed of the SSDs will more than offset the performance loss of a dedicated parity drive, leaving XOR calculation as the sole bottleneck on the way to pure RAID 0 speeds. Being a RAID 4, random writes (a problem with RAID 5) could be significantly improved. This solution could be quite cost effective for those unwilling to take the full plunge into SSDs while taking advantage of the lower price points of traditional mass storage. I really wish "someone", nudge, nudge, could try this setup out and report the results.
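For readers unfamiliar with the parity scheme this comment relies on: RAID 4 stores the byte-wise XOR of the data blocks on a dedicated drive, so any single lost block can be rebuilt from the survivors. A minimal illustrative sketch (not any vendor's implementation):

```python
# RAID 4-style dedicated parity in miniature: the parity block is the
# byte-wise XOR of all data blocks, and XOR-ing the survivors with the
# parity reconstructs a single missing block.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "drives"
parity = xor_blocks(data)            # what the dedicated parity drive stores

# Simulate losing drive 1, then rebuilding it from the rest plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

The catch the comment addresses: every write must also update the parity drive, which is why a dedicated parity disk is normally the bottleneck — and why backing it with faster SSDs is an interesting thought.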
  • ShadowFlash, April 25, 2009 10:15 PM
    Not to mention far less loss in performance when degraded, and higher sequential numbers, which are standard benefits of RAID 3.
  • industrial_zman, April 25, 2009 11:39 PM
    I know everyone is asking where LSI is, but I'm curious where 3ware is in this shootout. Did both companies miss the entry deadline?

    I've actually been looking closely at the Areca models for a while now. The upgradeable RAM module is very tempting for a tweaker like me. I might just go back to my old stand-by of Adaptec based on this review.

    There is one more article I would like to see written first: "Does more cache help a RAID controller's performance?" Now that there is a baseline for the Areca card with its stock 512MB of onboard RAM, let's see how it performs with 1GB, 2GB, and shall I even mention 4GB on the controller? The same tests would be appreciated for comparison as well.