
Roundup: Three 16-Port Enterprise SAS Controllers

4x4 Ports: Multi-Lane Technology, Enclosures, Connections

SAS supports a variety of cables and connectors. SFF-8482 is the internal connector for individual drives. SFF-8087 is used internally on host adapter cards, and SFF-8088 is its external counterpart; both are also known as mini-SAS, and each merges four individual ports into one connector.
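
For quick reference, the connector types above can be collected into a small lookup table. A minimal Python sketch (the structure and field names are our own illustration, not an official nomenclature):

    # Quick-reference table of the SAS connector types discussed above.
    SAS_CONNECTORS = {
        "SFF-8482": {"placement": "internal", "lanes": 1,
                     "role": "connector for individual drives"},
        "SFF-8484": {"placement": "internal", "lanes": 4,
                     "role": "condenses four ports into one connection"},
        "SFF-8087": {"placement": "internal", "lanes": 4,
                     "role": "mini-SAS, used on host adapter cards"},
        "SFF-8088": {"placement": "external", "lanes": 4,
                     "role": "mini-SAS, external version of SFF-8087"},
    }
    for name, info in SAS_CONNECTORS.items():
        print(f'{name}: {info["placement"]}, {info["lanes"]} lane(s), {info["role"]}')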

The cards we're reviewing in this article all offer four internal SFF-8087 ports, for up to 16 drives. Adaptec and Areca also offer an additional external port. SFF-8484 is another internal connector that condenses four ports into one connection.

SAS Enclosures

Multi-lane cables and connectors are used to connect SAS enclosures and appliances to host adapters. This is often done regardless of the actual number of drives used, meaning that a single multi-lane connection may be used to operate anything from a single drive to 16, 24, or even more drives, depending on the enclosure configuration. The four 300 MB/s ports available per multi-lane connection add up to 1,200 MB/s of bandwidth, which is sufficient to operate high-performance drive arrays.
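
To make the arithmetic explicit, here is a minimal Python sketch of the bandwidth math; the helper names are our own illustration:

    # Bandwidth math for a multi-lane SAS connection (illustrative helpers).
    LANE_MBPS = 300  # SAS 1.x: 3 Gb/s per lane, ~300 MB/s after 8b/10b encoding

    def multilane_bandwidth(lanes=4):
        # Total bandwidth of a multi-lane connector such as SFF-8087/8088.
        return lanes * LANE_MBPS

    def per_drive_share(drives, lanes=4):
        # Bandwidth available per drive when several drives share one connection.
        return multilane_bandwidth(lanes) / drives

    print(multilane_bandwidth())   # 1200 MB/s for a four-lane connection
    print(per_drive_share(16))     # 75.0 MB/s per drive with 16 drives attached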

SAS storage enclosures are used for a simple reason: they’re easier to manage than storage that is installed directly into servers. Since enterprise systems are typically mounted in 19” racks, it makes a lot of sense to add SAS enclosures as you need them. But you don’t have to go for enclosures: it is still possible to hook up individual drives to SAS adapters directly. All you need are the appropriate cables, which usually must be purchased separately.

SAS Connections

SAS devices are often dual-ported, allowing controllers to establish two physical connections for the sake of redundancy, or to double the interface bandwidth. A single SAS connection runs at 300 MB/s today, with 600 MB/s available later this year. Keep in mind that this bandwidth is not necessarily relevant when connecting individual drives, but it certainly matters when attaching multiple drive arrays to host adapters via multi-lane cables.
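
As a rough sketch of how these line rates translate into usable throughput (assuming 8b/10b encoding, which spends ten bits on the wire per data byte; the helper is our own illustration):

    # SAS link-rate arithmetic (illustrative). With 8b/10b encoding, each data
    # byte costs 10 bits on the wire, so 3 Gb/s -> 300 MB/s and 6 Gb/s -> 600 MB/s.
    def link_mbps(gbit_per_s, ports=1):
        # Dual-ported devices (ports=2) can double the usable bandwidth.
        return gbit_per_s * 1000 / 10 * ports

    print(link_mbps(3))     # 300.0 MB/s: today's single-ported SAS link
    print(link_mbps(3, 2))  # 600.0 MB/s: the same device, dual-ported
    print(link_mbps(6))     # 600.0 MB/s: 6 Gb/s SAS, due later this year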

Comments
  • scimanal, April 24, 2009 7:38 AM
    How about a RAID 10 or 50 speed comparison? I generally run in those modes, and this article would be well suited to include such information.
  • spazoid, April 24, 2009 12:03 PM
    Would've been nice to see (although not terribly useful) how much power the base system (without the controllers) consumes, so one could know how much extra heat to expect when adding one of the cards to their system. I doubt a lot of people are going to be switching from one of these controllers to another, so knowing their power usage only compared to each other isn't very valuable :)

    Thanks for the review though. Now, here's to hoping that this ~800 MB/s bottleneck is going to disappear in the near future.
  • Jerky_san, April 24, 2009 12:22 PM
    Wish this article had been written about four months ago, since we built a system then but used a Promise card. If I'd known the performance was this bad compared to the other two, I would have gone with something a little more expensive.
  • kschoche, April 24, 2009 12:33 PM
    I love that I have to click on an image TWICE before I can get it to a readable size. Once again, Tom's has failed to fix the zoom buttons/features. I *was* very interested in the article until I got to the results and got so frustrated that I just gave up.
  • gwolfman, April 24, 2009 3:11 PM
    Quote:
    It allows a speed of 3 Gb/s in today’s implementation, with 6 Mb/s coming up this year...
    Wow, 6 Mb/s is really fast. I sure hope USB 2.0 doesn't take over since it can do 480 Mb/s. :p 
  • Anonymous, April 24, 2009 3:14 PM
    What about LSI Logic controllers? These three HBAs all use the same Intel IOP348 processor, while LSI uses its own!
    Another reason to test them is that HP's and Dell's "own" RAID controllers are all rebranded LSI cards (dunno about IBM) and use LSI controllers, so chances are you'll end up with one of those.

    Also, are you testing with battery backup? If you aren't, almost any array controller will forbid write caching, killing performance.
    Please do a review of an HP P411 controller with 512 MB BBWC or an LSI MegaRAID 8888 with 512 MB BBWC.
  • gwolfman, April 24, 2009 3:25 PM
    Btw, where can I find the test scripts for the benchmark patterns (web server, file server, workstation, and database)?
  • tucci999, April 24, 2009 4:20 PM
    Samsung used two cards like this to create a RAID array from 24 of their 250 GB SSDs in a Skulltrail setup. They had 2 GB/s read and write speeds in a 6 TB setup. It was amazing.
  • co1, April 24, 2009 5:12 PM
    SSDs have much greater IOPS capability than typical SAS drives. In testing we have seen them overwhelm even high-end SAS controllers. 4,000 IOPS on RAID 0 is reaching the limitations of the SAS drives (experience shows ~300 IOPS per drive × 16 drives). Would love to see the test repeated with even a small number of SSDs in future tests (about three seems to be the max that a SAS controller can handle).
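
A quick sanity check of the arithmetic in the comment above, as a sketch (the per-drive figure is the commenter's estimate, not a measured value):

    # Back-of-the-envelope ceiling for random IOPS of the 16-drive test array.
    drives = 16
    iops_per_drive = 300           # commenter's estimate for a 15,000 RPM SAS drive
    print(drives * iops_per_drive) # 4800; the ~4000 IOPS observed is near this cap
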
  • thearm, April 24, 2009 5:19 PM
    I love how I have to be very careful once I click the drop-down list or it will go away, and then I have to try to use the drop-down again. The scrollbar being on the edge of losing the drop-down doesn't help.

    I liked the older interface.
  • scimanal, April 24, 2009 5:36 PM
    This is a bit wild, but has anyone ever tried a RAID 10 array putting SSDs in pairs with hard disks (a mirrored pair with one SSD, one disk)? The reason I ask is that the RAID card could write to the hard disk and sync up to the SSD when available, while the SSD would show as the faster node and reads would be pulled from it.

    This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.
  • Jerky_san, April 24, 2009 7:01 PM
    I know the Promise card won't allow an SSD to be put in an array with normal hard drives; at least their interface doesn't allow it.
  • obarthelemy, April 24, 2009 10:51 PM
    It would have been interesting to have at least one SATA and one SSD product included for comparison.
  • michaelahess, April 24, 2009 10:55 PM
    Yeah, a lot of comparisons lately don't have rival technologies to compare against. We need baselines as well as similar tech to give an idea of how much (or little) difference a bit (or a lot) of money can make over another solution.

    Also, RAID 10 and 50 would be good, as stated above; I use both very heavily.
  • Anonymous, April 25, 2009 7:33 AM
    What is the difference between RAID 5EE and RAID 6? They seem almost identical to me; both provide two spares.
    It seems like this could be it, but nothing I could find made the comparison:
    RAID 5EE uses the spare for faster reading and faster rebuilds.
    RAID 6 can support two simultaneous failures.
    Can someone confirm?
  • ossie, April 25, 2009 9:36 AM
    As Mast pointed out, a glaring miss is LSI and its OEMs (yes, IBM also uses LSI, as do Intel and its OEMs, even if LSI doesn't use Intel IOPs anymore).
    Another missing manufacturer is 3ware (AMCC).
    Missing BBUs have a huge impact on write performance, as the write-back cache is usually disabled without one (at least on LSI's cards).

    @Hargak:
    "We used 16 Fujitsu MBA3147RC 15,000 RPM SAS drives to make sure that the controllers could be saturated during our tests. These Fujitsu drives are state-of-the-art server models with 16 MB cache and throughput of over 150 MB/s."

    The throughput of Fujitsu's MBA drives is nowhere near 150 MB/s; they are a little slower than Seagate's Cheetah 15K.5 at ~120 MB/s. The only faster drives are Seagate's 15K.6 (~170 MB/s) and Hitachi's Ultrastar 15K450 (~155 MB/s). Ironically, it's the same duo that also reviewed those:
    http://www.tomshardware.com/reviews/ultrastar-cheetah-sas,2004-6.html
  • ShadowFlash, April 25, 2009 10:11 PM
    Quote:
    This is a bit wild, but has anyone ever tried a RAID 10 array putting SSDs in pairs with hard disks (a mirrored pair with one SSD, one disk)? The reason I ask is that the RAID card could write to the hard disk and sync up to the SSD when available, while the SSD would show as the faster node and reads would be pulled from it. This is more of a curiosity than anything else. Would that work? No idea if it is a good idea.

    A much better kooky idea would be to use a pair of SSDs in a mobile RAID 0 enclosure (to increase I/O and capacity, not sequential throughput) as the parity drive in a RAID 3/4, and put it up against a standard RAID 5. I've put a lot of thought into this one, and I'm convinced it would be superior to RAID 5 in almost every way. Most RAID 3 implementations are really just RAID 4 anyhow, which is preferred in this case. I'm willing to bet that the speed of the SSDs would more than offset the performance loss of a dedicated parity drive, leaving XOR calculation as the sole bottleneck on the way to pure RAID 0 speeds. Being RAID 4, random writes (a problem with RAID 5) could be significantly improved. This solution could be quite cost-effective for those unwilling to take the full plunge into SSDs while taking advantage of the lower price points of traditional mass storage. I really wish "someone" nudge, nudge, could try this setup out and report the results.
  • ShadowFlash, April 25, 2009 10:15 PM
    Not to mention far less performance loss when degraded, and higher sequential numbers, which are standard benefits of RAID 3.
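
The back-of-the-envelope reasoning in the two comments above can be sketched numerically. A rough model (our own simplification: every small random write incurs a read-modify-write on one data drive and on the dedicated parity drive, and caching is ignored):

    # Rough model of RAID 4 random-write throughput (illustrative only).
    def raid4_random_write_iops(data_iops, parity_iops, n_data):
        data_limit = n_data * data_iops / 2  # read-modify-write, spread over data drives
        parity_limit = parity_iops / 2       # every write also hits the parity drive
        return min(data_limit, parity_limit)

    print(raid4_random_write_iops(300, 300, 7))    # HDD parity: ~150 writes/s cap
    print(raid4_random_write_iops(300, 10000, 7))  # SSD parity: bottleneck moves to the data drives
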
  • industrial_zman, April 25, 2009 11:39 PM
    I know everyone is asking where LSI is, but I'm curious where 3ware is in this shootout. Did both companies miss the entry deadline?

    I've actually been looking closely at the Areca models for a while now. The upgradeable RAM module is very tempting for a tweaker like me, but I might just go back to my old stand-by of Adaptec based on this review.

    There is one more article I would like to see written first: "Does more cache help RAID controller performance?" Now that there is a baseline for the Areca card with its stock 512 MB of onboard RAM, let's see how it performs with 1 GB, 2 GB, and dare I even mention 4 GB on the controller. The same tests would be appreciated for comparison.