
Supporting Hardware Grows Up

Don’t Get Stuck Riding A Dinosaur: Next-Gen Interfaces And Protocols Explained

As we’ve mentioned, current solid state drives are giving their interfaces all they can handle from a throughput standpoint, and companies know this. SATA and SAS interfaces with a theoretical limit of 6 Gb/s will soon bottleneck SSDs as the drives continue to evolve. SAS is expected to graduate to 12 Gb/s in the near future, which will provide some breathing room, but even this is rather weak compared to what’s available over PCIe. A single PCIe 3.0 lane has 8 Gb/s of maximum bandwidth at its disposal. You don’t even need napkin math to see that a PCIe x4 connection supplies up to 32 Gb/s, nearly three times the throughput of 12 Gb/s SAS.
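That napkin math can be sketched in a few lines of Python, using the per-lane and per-interface figures quoted above:

```python
# Bandwidth comparison from the figures in the article.
PCIE3_LANE_GBPS = 8   # maximum bandwidth of a single PCIe 3.0 lane
SAS_NEXT_GBPS = 12    # upcoming 12 Gb/s SAS

pcie_x4_gbps = PCIE3_LANE_GBPS * 4        # four lanes: 32 Gb/s
advantage = pcie_x4_gbps / SAS_NEXT_GBPS  # roughly 2.7x over 12 Gb/s SAS

print(pcie_x4_gbps)          # 32
print(round(advantage, 2))   # 2.67
```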

Throughput is an easy one, though, because we’re constantly being beaten over the head with drives’ speed ratings. A slightly more complicated issue is the subject of power envelopes for storage drives. It’s something that data centers with storage servers full of front-loading SATA/SAS drive bays are especially concerned about. Such bays can usually handle power draws of 10W, which also happens to be the active power consumption for the majority of enterprise hard drives. Given this existing environment, enterprise SSD manufacturers like to design SSDs that fall within this envelope.

Limiting SSDs to 10W can hamper their performance. Hard drives spend the majority of their power budget on moving their read/write heads and rotating their platters. A hard drive has a maximum spindle speed (7,200 or 15,000 RPM, for example), and supplying extra power won’t make it spin faster than that rated speed. The matter is different for SSDs, which draw power primarily to feed their NAND memory chips. People familiar with how SSDs operate know that the drives can access multiple memory channels simultaneously, like a striped RAID array. Therefore, as the number of active die at an SSD’s disposal increases, so does its performance. So, when you see an SSD with eight or 16 memory channels, each communicating with four to eight die, you might think that such an SSD could have as many as 128 active die when running at full tilt. And you’d be wrong.

The problem is that 10W (or, frequently, 9W) power ceiling, which limits an SSD to keeping approximately 32 to 40 die active at any one time, well below the number of die an SSD can potentially have. Unlike a hard drive, an SSD could actually put extra power to work, activating more die and channels at once to increase performance.

As with increasing throughput, once again it’s PCIe to the rescue. The PCIe bus can supply attached devices with up to 25W of power, which allows for more active die and greater performance. Given these facts, it makes good sense to see PCIe as the future of SSD performance scaling.
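The power-budget arithmetic behind these paragraphs can be sketched as follows. The 2W controller overhead and 0.25W per active die are illustrative assumptions chosen to match the article’s 32-to-40-die figure, not numbers from the article itself:

```python
# Rough sketch of an SSD's power budget vs. its active die count.
# The controller overhead (2 W) and per-die draw (0.25 W) are
# illustrative assumptions, not figures from the article.
def max_active_die(power_budget_w, controller_w=2.0, die_active_w=0.25):
    """Estimate how many NAND die fit within a drive's power envelope."""
    return int((power_budget_w - controller_w) / die_active_w)

theoretical = 16 * 8               # 16 channels x 8 die each = 128 die
sata_limited = max_active_die(10)  # ~32 die under a 10 W bay
pcie_limited = max_active_die(25)  # ~92 die under PCIe's 25 W

print(theoretical, sata_limited, pcie_limited)
```

Under these assumptions, the 10W envelope keeps only about a quarter of the drive’s 128 potential die busy, while PCIe’s 25W roughly triples the number of die that can work in parallel.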

If enterprises do make a wholesale move to PCIe-based storage, expect some growing pains. For example, front-mounted bays for swapping SATA and SAS drives make adding and upgrading enterprise storage a much less painful process, so a similar concept is under development to simplify PCIe drive swapping. The multi-function SFF-8639 connector performs a role similar to that of the SFF-8482, which accepts both SATA and SAS connections. The SFF-8639 connector will accept single- and dual-port SATA Express (just like SCSI Express, SATA will get the PCIe treatment), dual-port SAS, MultiLink SAS, or up to four-port PCIe device configurations.

To accommodate the SFF-8639 and the devices it can connect, enterprises will naturally need a new type of hot-swappable drive bay. On that front, we have Express Bay, as it’s currently (and informally) called. Express Bay ports deliver 25W to connected drives and enable the same hot-plug backplane technology that benefits SAS drives. An early implementation of Express Bay technology has already been showcased at HP Discover, according to Nigel Poulton’s blog.