Question regarding PCI Express Bandwidth Limit for SATA 6.0Gbps RAID

August 2, 2011 6:02:30 AM

Hi Guys,

I have a quick question regarding what I'm seeing as an obstacle in getting the proper performance out of a RAID card...

I'm interested in setting up two RAID arrays (a RAID0 of 2 disks, and a RAID1 of another 2 disks), and am looking at a hardware RAID solution. I am interested in the LSI 9260-4i in particular.

Now my question is this: this RAID card is advertised as having 6Gbps SATA support [1]. What I don't get though is that as a PCI-E 2.0 x8 card, it should theoretically top-off at 8 Gbps - so I don't get how it can support 4 (or, depending on the model, 8) channels of 6Gbps drives.

Basically, I want to create a 2x RAID0 array of 6Gbps drives (OCZ Vertex 3 240GB), and a 2x RAID1 array of 6Gbps drives (WD Velociraptor 600GB). I realize that neither of these disks takes full advantage of the 6.0Gbps SATA III interface, but they do exceed the 3.0Gbps SATA II interface specifications by a nice amount even in standalone configurations.

So I don't understand how this card can support ~8Gbps (2x SSD in RAID0) + 4Gbps (2x WD in RAID1) without reaching the limits of the underlying interface.

Are there any PCI 2.0 x16 RAID cards available, because I couldn't find any... And I'm really not looking forward to buying two of these cards to pull this off, but if that's the only choice...

Thanks!

[1]: http://www.newegg.com/Product/Product.aspx?Item=N82E168...
August 2, 2011 5:44:11 PM

A PCI-E 2.0 lane carries 500 MB/s (4 Gbit/s), according to good old Wikipedia. So 8 lanes is about 32 Gbit/s. Let's look at your specs and PCI-E specs and see if bits and bytes got swapped somewhere.

Assuming 4 Gbit/sec for each drive, 8 Gbit/sec for the RAID 0 and 4 Gbit/sec for RAID1, I count 12 Gbit/sec, or 1.2 Gbyte/sec (using 8/10 encoding). While the interface claims 32 Gbyte/sec. No problem.

I'm quite sure that I've compared apples to oranges somewhere getting the raw bit rate and the data bit rate mixed up, but the factor of 0.8 isn't enough to throw this off.
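The arithmetic above can be written out as a quick back-of-the-envelope check (a sketch in Python; the per-array burst rates are this thread's assumptions, not measurements):

```python
# PCIe 2.0 signaling: 5 GT/s per lane; 8b/10b encoding leaves
# 4 Gbit/s of data per lane, per direction.
PCIE2_LANE_GBIT = 5.0 * 8 / 10          # 4.0 Gbit/s per lane
lanes = 8
link_gbit = PCIE2_LANE_GBIT * lanes     # 32 Gbit/s, i.e. ~4 GByte/s

# Assumed (not measured) burst rates from this thread:
raid0_gbit = 2 * 4.0    # two SSDs striped, ~4 Gbit/s each
raid1_gbit = 4.0        # mirrored pair: only one copy crosses PCIe
total_gbit = raid0_gbit + raid1_gbit

print(f"link {link_gbit} Gbit/s vs workload {total_gbit} Gbit/s")
```

The workload (12 Gbit/s) sits well under the 32 Gbit/s the x8 link provides.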
-----------------------------------------------------------------

Have you considered bypassing PCI-E limitations by using the chipset's RAID ability? RAID 1 and "RAID" 0 don't chew up processor time calculating parity, so the performance off the chipset can be reasonable.

August 2, 2011 5:47:12 PM

I was going by the PCI spec website:

http://www.pcisig.com/news_room/faqs/pcie2.0_faq/

Quote:
For example, a PCI Express 1.1 x8 link (8 lanes) yields a total aggregate bandwidth of 4Gbps, which is the same bandwidth obtained from a PCI Express 2.0 x4 link (4 lanes) that adopts the 5GT/s signaling technology.


Which would tell me that a PCI-E 2.0 x8 would have a bandwidth of 8Gbps...

No motherboards on the market at the moment have 4 SATA III ports with RAID support, only 2 at the most.
August 2, 2011 5:50:41 PM

Interesting. If I didn't misread the Wikipedia article, then there is a discrepancy between the two sources. Oh well, Wikipedia isn't the final authority, after all, but I will go back and check my reading. Odds are that I made a silly mistake.

The SIG text is clear. An x8 link will give an aggregate rate of 8 GBytes / second.

The Wikipedia text is clear: "The PCIe 2.0 standard doubles the per-lane throughput from the PCIe 1.0 standard's 250 MB/s to 500 MB/s. This means a 32-lane PCI connector (x32) can support throughput up to 16 GB/s aggregate." So one-fourth of an x32 (eight is a fourth of thirty-two, isn't it?) is 4 GBytes/second. They don't agree, and neither matches my first attempt at calculating a number. Maybe 500 MB/s is each direction, and "aggregate" is twice that, counting simultaneous read and write?
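That last guess is easy to check arithmetically (a quick sketch; the "both directions" reading is the speculation from the paragraph above, not an established fact here):

```python
per_lane_mb = 500                   # Wikipedia's PCIe 2.0 per-lane figure, MB/s
x32_gb = per_lane_mb * 32 / 1000    # 16.0 GB/s, matches the quoted text
x8_gb = per_lane_mb * 8 / 1000      # 4.0 GB/s in one direction
x8_both_dirs_gb = x8_gb * 2         # 8.0 GB/s if "aggregate" sums both directions

print(x32_gb, x8_gb, x8_both_dirs_gb)
```

Under that reading, an x8 slot carries 4 GB/s each way, which would reconcile the two sources.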

And I did so well at math in school.

Edit: Got my bits and bytes mixed up the first time, I did. "So 8 lanes is about 32 Gbit/s" and then "...the interface claims 32 Gbyte/sec." I made the mistake I thought you had!
August 2, 2011 7:11:27 PM

That's OK :) 

And actually, the SIG text says 8 gigabits per second (1 gigabyte).

Best solution

August 2, 2011 10:20:14 PM

Computer Guru said:
So I don't understand how this card can support ~8Gbps (2x SSD in RAID0) + 4Gbps (2x WD in RAID1) without reaching the limits of the underlying interface.
There are two important points to keep in mind:

a) the PCIe interface is only involved in transfers of logical sectors, not physical sectors. For example, when you write a sector to a RAID-1 volume, that one sector is transferred once over the PCIe interface, and the RAID controller then retransmits two copies of it over separate SATA ports to the two drives in the array.

b) Hard drives are much, much slower than 6Gbit/sec. In fact even a FAST hard drive can only sustain about 1/4 of that data rate, and then only for the outermost sectors (the innermost sectors are typically only half as fast). This means that an 8-lane card running at 4GByte/sec can easily handle the aggregate sustained throughput of over 25 hard drives running at 150MByte/sec, even if all sectors were transferred over the PCIe bus.
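Point (a) can be illustrated with a toy byte count (a sketch only, not the card's firmware; the sector size and counts are arbitrary):

```python
def raid1_write_traffic(sector_bytes: int, sectors: int, mirrors: int = 2):
    """Bytes crossing each interface for a RAID-1 write.

    The host sends each logical sector once over PCIe; the controller
    then writes one physical copy per mirror over the SATA ports.
    """
    pcie_bytes = sector_bytes * sectors            # one logical copy
    sata_bytes = sector_bytes * sectors * mirrors  # one copy per drive
    return pcie_bytes, sata_bytes

pcie, sata = raid1_write_traffic(512, 1000)
print(pcie, sata)  # 512000 over PCIe, 1024000 across the two SATA ports
```

The write amplification happens entirely on the SATA side of the controller, so the mirror never costs extra PCIe bandwidth.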
August 3, 2011 5:18:45 AM

sminlal, thanks for your reply; spot on.

I'd completely forgotten that the 2x bandwidth for RAID0 would be happening past the PCI-E interface completely, and until then it would be the normal logical sectors being carried along that would count.

I realize that the hard disks will never reach the 6Gbps mark in such a configuration. As I mention in my first post, I'm assuming a maximum, unsustainable burst speed of 4Gbps for each array.

This means we're talking about a maximum, unsustainable burst speed of around 8Gbps on an... 8Gbps interface. Makes perfect sense and sounds great!

Thanks for catching that oversight on my behalf.
August 3, 2011 5:18:57 AM

Best answer selected by Computer Guru.
August 3, 2011 8:04:14 PM

Computer Guru said:
I'd completely forgotten that the 2x bandwidth for RAID0 would be happening past the PCI-E interface completely, and until then it would be the normal logical sectors being carried along that would count.
There's no redundancy in a RAID-0 volume, so every sector read or written from the disk is also transferred over the PCIe bus. Only redundant RAID organizations such as RAID-1 have the RAID controller doing more I/O to the disks than is transferred over the PCIe interface.

Also, just to be clear, according to this page the LSI 9260-4i card is an x8 PCIe card, and that means it can transfer 4GByte/sec (32Gbit/sec) over the PCIe interface. The raw PCIe bit transfer rate is actually higher than that, but the 8b/10b encoding means that the effective transfer rate for data is about 500MByte/sec per lane per direction. See: http://www.pcisig.com/news_room/faqs/pcie3.0_faq/#EQ3
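The per-lane figure in that paragraph follows directly from the encoding overhead (a quick check using the PCIe 2.0 signaling rate):

```python
GT_PER_SEC = 5.0e9             # PCIe 2.0 raw signaling: 5 GT/s per lane
DATA_BITS_PER_SYMBOL = 8 / 10  # 8b/10b: 10 line bits carry 8 data bits

lane_bits = GT_PER_SEC * DATA_BITS_PER_SYMBOL  # 4e9 bit/s per lane, per direction
lane_bytes = lane_bits / 8                     # 500 MByte/s per lane
x8_bytes = lane_bytes * 8                      # 4 GByte/s for an x8 link

print(lane_bytes, x8_bytes)
```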