LSI SAS 9300-8e & HGST Ultrastar SSD800MM: 12 Gb/s SAS, Tested

With the announcement of LSI's SAS 9300-8e and HGST's Ultrastar SSD800MM earlier this year, the world was officially introduced to 12 Gb/s SAS. Today we get our first look at how two times the interface bandwidth translates to real-world performance.

Early in 2009, we were introduced to the first 6 Gb/s SAS products. Four years later, we're ready for a look at 12 Gb/s-capable SAS devices from LSI and HGST. For a true appreciation of this technology, we need to take a deeper look into how it was conceived.

When it comes to evaluating an interface, it's really easy to overlook all of the details. From the highest levels, most folks assume a theoretical doubling of speed. But speed is relative. Are clock rates increasing? How about the number of available channels? Do latencies change? And what about line code? As we've learned from the evolution of PCI Express, it's not easy to simply jump from 2.5 to 5 to 10 GT/s. At some point in there, physics becomes an increasingly large obstacle.

In fact, PCIe shares some of the same characteristics as SAS and SATA. From the first to second generations of PCI Express, data rate increased from 2.5 to 5 GT/s. Both standards employed 8b/10b encoding with simple, fixed transmitter equalization. Third-gen PCIe is limited to 8 GT/s. However, overhead is significantly reduced through a transition to 128b/130b. At the end of the day, peak theoretical throughput doubled, though the path to get there was different.
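The arithmetic behind that comparison is worth making explicit. A quick sketch (the per-generation rates and encodings below are from the paragraph above; the function name is our own):

```python
# Effective per-lane throughput for three PCI Express generations,
# showing how PCIe 3.0 doubles usable bandwidth with only a 60% clock
# increase by swapping 8b/10b encoding for 128b/130b.
GENS = {
    "PCIe 1.0": (2.5, 8 / 10),     # GT/s, line-code efficiency (8b/10b)
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b cuts overhead to ~1.5%
}

def effective_gbps(rate_gt, efficiency):
    """Usable bits per second per lane after line-code overhead."""
    return rate_gt * efficiency

for gen, (rate, eff) in GENS.items():
    print(f"{gen}: {effective_gbps(rate, eff):.2f} Gb/s per lane")
```

Running the numbers, PCIe 2.0 delivers 4 Gb/s of payload per lane and PCIe 3.0 roughly 7.88 Gb/s, so peak throughput does indeed about double per generation despite the different paths taken.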

We could have guessed, then, that the creation of a 12 Gb/s SAS standard wasn't going to be easy for the T10 technical committee.

SATA-IO board member Paul Wassenberg gave us some great insight when he answered why the SATA protocol would not be going to 12 Gb/s:

"Twelve gigabit per second SATA would seem to be the logical next step, and the T10 (SAS) committee has done a lot of the work on 12 Gb/s already. From this work, we know that the transition from 6 Gb/s to 12 Gb/s is not simple. SAS 3.0 (12 Gb/s) requires transmitter equalization, which adds a great deal of complexity to the interface controller and the PHY. In silicon, complexity equates to more die area, which means higher cost. Also, the protocol needs to change to support transmitter training, and that turns out to be fairly significant. Additionally, many of the backplanes and cables that worked fine at 6 Gb/s won't reliably carry data at 12 Gb/s."

Low-cost client systems don't readily embrace technologies that cost more. But the enterprise does, particularly when performance is of the utmost importance. For many customers, the transition to 12 Gb/s SAS will be evolutionary, and they'll integrate it one piece at a time over the next few years. With this in mind, we're missing the one puzzle piece that probably means the most in the SAS ecosystem: 12 Gb/s expanders.

Particularly when you're talking about mechanical disks, it takes a lot of drives to saturate an eight-port 6 Gb/s HBA. But IT professionals carefully balance the controllers, expanders (which adapt a certain number of SAS ports to a larger number of storage devices), and drives themselves to optimize for their specific application. If you have an eight-port HBA or RAID card today, it's limited to 48 Gb/s (8 x 6 Gb/s). Hooking up 128 disks via an expander severely limits the throughput of each one.

On the other hand, adopting 12 Gb/s SAS increases that ceiling to 96 Gb/s (at least in theory). By replacing HBAs and expanders, it's possible to alleviate the interface-imposed bottleneck without even changing out your drives.
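To see what that ceiling means per drive, consider the 128-disk example from above. A minimal sketch (the helper function is our own; figures assume bandwidth is shared evenly and ignore protocol overhead):

```python
def per_drive_gbps(ports, link_gbps, drives):
    """Aggregate HBA bandwidth split evenly across expander-attached drives."""
    return ports * link_gbps / drives

# 128 drives hanging off an eight-port HBA via expanders:
print(per_drive_gbps(8, 6, 128))   # 6 Gb/s SAS:  0.375 Gb/s per drive
print(per_drive_gbps(8, 12, 128))  # 12 Gb/s SAS: 0.75 Gb/s per drive
```

Doubling the link rate doubles the per-drive share, which is exactly why swapping HBAs and expanders alone can relieve the bottleneck.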

Of course, removing one bottleneck creates another one somewhere else. So, where would the weak link be in a 12 Gb/s-capable SAS topology? It turns out to be the PCI Express bus.

That's right. An eight-port 12 Gb/s HBA or RAID card saturates an eight-lane PCI Express 3.0 link. Why not use a 16-lane connector? Most 2U servers still ship with up to x8 upgrade slots, so 16-lane cards aren't on the table right now.
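A back-of-the-envelope check shows why the x8 slot becomes the weak link. This sketch assumes SAS retains 8b/10b encoding at 12 Gb/s while PCIe 3.0 uses 128b/130b:

```python
# Usable bandwidth: eight 12 Gb/s SAS ports vs. a x8 PCIe 3.0 link.
sas_payload = 8 * 12 * (8 / 10)      # SAS keeps 8b/10b -> ~76.8 Gb/s usable
pcie_payload = 8 * 8 * (128 / 130)   # PCIe 3.0 128b/130b -> ~63.0 Gb/s usable

print(f"SAS:  {sas_payload:.1f} Gb/s")
print(f"PCIe: {pcie_payload:.1f} Gb/s")
print(sas_payload > pcie_payload)    # the PCIe link is the bottleneck
```

Even after SAS encoding overhead, eight 12 Gb/s ports can source more data than a x8 PCIe 3.0 link can carry.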

Naturally, testing 12 Gb/s SAS requires that we use compatible adapters and drives. LSI supplied us with its SAS 9300-8e controller, and HGST provided the already-public Ultrastar SSD800MM.

13 Comments (this thread is closed for comments)
  • slomo4sho, June 20, 2013 12:56 AM
    Now if only this technology were viable for home builds. :(  Maybe in a couple of years?
  • major-error, June 20, 2013 6:50 AM
    The performance and relative maturity of this prototype drive certainly is impressive, but this is what the enterprise space demands.
    At the consumer level though, the article takes on a completely different tone--I would be very surprised if we don't start seeing mention of PCIe4 at/before the top of the next CPU cycle (so, in 24-36 months at most.)
  • raidtarded, June 20, 2013 7:26 AM
    Actually, Adaptec already saturated PCIe 3.0 with 6GB/s. The chart is incorrect, it doesn't take 12Gb/s to saturate the PCIe bus. Well, not for Adaptec.
  • falcompsx, June 20, 2013 11:50 AM
    Remember when mechanical hard drives struggled to saturate their interfaces? Times sure have changed with SSD tech.
  • CaedenV, June 20, 2013 12:01 PM
    Quote:
    The performance and relative maturity of this prototype drive certainly is impressive, but this is what the enterprise space demands.
    At the consumer level though, the article takes on a completely different tone--I would be very surprised if we don't start seeing mention of PCIe4 at/before the top of the next CPU cycle (so, in 24-36 months at most.)


    Ya, my bet is that we will not start to see SATA4 or PCIe4 until Skymont at the earliest. Considering it is looking like Broadwell may be pushed back due to 14nm die shrink issues I would bet that Skymont will have similar issues when moving to 10nm. But at least for home users you can cram 2 SSDs in RAID0 with a proper RAID card and get a little performance boost until then. I guess the only problem is that most people are going to use the onboard Intel RAID for RAID0, which will get you a killer synthetic benchmark, but in practical reality it is really just expanding your volume with very little speed benefit.
  • kj3639, June 20, 2013 1:27 PM
    Go HGST! WOO!!!!
  • bit_user, June 21, 2013 12:08 AM
    * wipes drool off floor *

    That's a quality review of some quality products. I like the insights shared, throughout. I especially appreciated the link to the SATA-Express paper. Thanks!

    MORE REVIEWS LIKE THIS!!
    :) 
  • bit_user, June 21, 2013 12:13 AM
    Quote:
    Actually, Adaptec already saturated PCIe 3.0 with 6GB/s. The chart is incorrect, it doesn't take 12Gb/s to saturate the PCIe bus. Well, not for Adaptec.
    How many ports and how many lanes, though? If it's just an 8-port card, the math doesn't support that, as 6 x 8 = 48 Gbps, which is less than the 8 x 8 = 64 Gbps that a x8 PCIe 3.0 slot should carry.
  • raidtarded, June 21, 2013 12:25 AM
    It is the equivalent of a nuke bomb compared to the LSI products. It has 24 Native ports.
    Quote:
    Quote:
    Actually, Adaptec already saturated PCIe 3.0 with 6GB/s. The chart is incorrect, it doesn't take 12Gb/s to saturate the PCIe bus. Well, not for Adaptec.
    How many ports and how many lanes, though? If it's just an 8-port card, the math doesn't support that, as 6 x 8 = 48 Gbps, which is less than the 8 x 8 = 64 Gbps that a x8 PCIe 3.0 slot should carry.


  • raidtarded, June 21, 2013 12:26 AM
    It is a 24 port native raid controller. smokes the 4 ports.
  • drewriley, June 21, 2013 11:18 AM
    Quote:
    Quote:
    Actually, Adaptec already saturated PCIe 3.0 with 6GB/s. The chart is incorrect, it doesn't take 12Gb/s to saturate the PCIe bus. Well, not for Adaptec.
    How many ports and how many lanes, though? If it's just an 8-port card, the math doesn't support that, as 6 x 8 = 48 Gbps, which is less than the 8 x 8 = 64 Gbps that a x8 PCIe 3.0 slot should carry.


    The graph is slightly misleading because it includes some assumptions. I mentioned the x8 assumption, and you found the other major one, which limits it to 8-port cards. Also, they list the SAS throughput with 8b/10b taken into account.

  • drewriley, June 21, 2013 11:31 AM
    Quote:
    It is a 24 port native raid controller. smokes the 4 ports.


    I personally love the Adaptec 72405, it is amazing that they can provide 24 native ports and absolutely amazing sequential performance. But, when you look at external connectivity, there isn't a ton of difference. Adaptec has a version with 16 external ports, or 16x6Gbps, which is 96Gbps. LSI has an 8 port version, which gives you 8x12Gbps, or 96Gbps. While Adaptec allows you to connect more drives without the use of expanders, LSI allows you to get better performance per drive. I really like the fact that we have two companies catering to high-end RAID that offer different solutions, which gives us, the customer, the most flexibility.

  • drewriley, June 21, 2013 11:34 AM
    Quote:
    * wipes drool off floor *

    That's a quality review of some quality products. I like the insights shared, throughout. I especially appreciated the link to the SATA-Express paper. Thanks!

    MORE REVIEWS LIKE THIS!!
    :) 


    Thank you, I appreciate the feedback!