PCIe 3.0 recommendation for a controller

erpsaa

Hi all,

I'm currently running the Intel (LSI) RT3WB080 HW RAID controller and am looking to move to a PCIe 3.0 card without taking out a second mortgage. I can't even find any published pricing on these; one example is this LSI HBA:

http://thessdreview.com/our-reviews/sata-3/lsi-sas-9207-8i-pcie-3-0-host-bus-adapter-quick-preview/

My RT3WB080 is 2.0 x8. I do not believe any 2.0 x16 cards exist, and even if they did, there is still the roughly 3 GB/s limitation, which would not make the move worthwhile, since I'm getting 2.3 GB/s R/W now.
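For reference, here's a rough back-of-the-envelope sketch of the x8 link ceilings (my own assumed encoding overheads, not vendor figures), which is roughly where that ~3 GB/s wall comes from:

# Rough PCIe x8 ceiling estimate; encoding overhead assumed, protocol overhead ignored.
def usable_gb_s(lanes, gt_per_s, payload_bits, encoded_bits):
    """Usable GB/s for a link after line-encoding losses."""
    return lanes * gt_per_s * payload_bits / encoded_bits / 8

pcie2_x8 = usable_gb_s(8, 5.0, 8, 10)      # 8b/10b encoding -> ~4.0 GB/s on paper
pcie3_x8 = usable_gb_s(8, 8.0, 128, 130)   # 128b/130b encoding -> ~7.9 GB/s on paper
print(f"PCIe 2.0 x8: ~{pcie2_x8:.1f} GB/s, PCIe 3.0 x8: ~{pcie3_x8:.1f} GB/s")
print("Measured today on 2.0 x8: ~2.3 GB/s (protocol overhead eats the rest)")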

Can anyone recommend a PCIe 3.0 card with 8 SATA 6Gb/s ports that supports the eight 240GB Vertex 3s I am now using? They used Crucial M4s in the test above, 8 x 256GB, and they scaled perfectly.

One more question, since these all require "call for pricing" and a friend whose business has deployed these cards has no idea: does anyone know how much they actually cost? I'm assuming a ridiculous amount.

Thanks in advance for any help.

Dean
 
Solution
Here are some LSI PCIe 3.0 RAID controllers, and their cost on Newegg.com:
All of the cards support connecting 8 SATA 6Gb/s SSDs.

MegaRAID SAS 9270-8i $639.99
MegaRAID SAS 9271-8i $689.99
MegaRAID SAS 9271-8iCC $899.99
MegaRAID SAS 9271-4i $459.99
MegaRAID SAS 9286-8e $839.99
MegaRAID SAS 9286CV-8e $1,039.99
MegaRAID SAS 9286CV-8eCC $1,269.99

You'll have to do additional research to see which card is best for your situation.
Here's the link to the spec sheet of the cards: http://www.lsi.com/products/storagecomponents/Pages/6GBSATA_SASRAIDCards.aspx

erpsaa

Derek, thank you! I found those cards and many more; however, I was thinking HBA because I'm running a Sandy Bridge-E CPU on a workstation board (the P9X79 WS). The board has 40 total lanes and not one legacy port, and since I only use three slots, with three PCIe 3.0 slots open, I have more than enough 3.0 x8 slots. My concern is the Sandy Bridge-E Core i7-3960X. The CPU, overclocked, remains stable and very cool 24/7, with the PC3-17000 (2,133) memory running at 2,300, so most work other than an overnight data migration gets done, and the PC runs so smooth (I think you know what I mean) that I do not want to give up the configuration unless I have to. For work, storage speed is key to a large part of my business. If I do a data migration, it must be done on the VM running XP Pro SP3, and I need to pull the RAM, replace it with 1,333 ECC 8GB (4 x 2GB) chips, and load the Turbo OC profile at 3,900. It's all about storage for me.

I always assumed the 3.0 limitation applied only to GPUs, which are of little concern to me as long as the resolution does not need tweaking for a simple extended desktop.

I had both presales and tech support at LSI on the phone, and it seemed as if they had no idea what I was asking. I was simply trying to find out whether their cards (the ones you listed) would work and scale 1:1, or whether I needed their (I was kidding about cost) cheap HBA card and would have to flash its firmware to allow RAID 0, 5, 6, 10 and 60 (that's all I need), since I use Intel's RAID Web Console 2 for monitoring. They had no idea what I was asking, except they kept repeating, "Your mainboard will support 3.0 x8 cards, so it will support our 3.0 x8 card." Yeah, it will support them, probably at 2.0, and that gives me no reason to upgrade, since scaling begins to even out around 5 drives on 2.0 unless I go with even more expensive NAND, which can cause problems with the GPU.

When I would bring up the 3960X CPU to LSI, all I got was "why should this matter?"

Is it me? I was told a long while back that the only Sandy Bridge that supports PCIe 3.0 is the quad-core i7-3820.

I can deploy my existing WS to another employee along with my existing RT3WB080, with 4 x 90GB SSDs on one 4-port channel of the RT3WB080 (PCIe 2.0 x8) and a 2TB file drive.

I can then build a new WS using the ASUS Z9PE-D8 WS (dual LGA2011), which takes the Xeon E5s. Although some of the E5-1600s and a few others are unlocked, I do not see a need to OC at that point. However, if I did, my concern is that putting in, say, two Corsair H80i's would be useless; these E5 Xeons normally run very hot, at about 75 degrees C. If I ever needed to OC such a configuration, I would need to build a "real" liquid cooling system, not use a self-contained unit. (But that is simply my opinion, I could be wrong.)

Out of the blue the LSI rep said, "With our new card you can add 2 more GPUs, SLI'd and OC'd," followed by "and your games will be blasting."

This guy was an idiot. I'm not a gamer, but I do not even believe what he said was true. Most gamers do not need blazing-fast storage, just a fast SSD for boot, and I think most games take advantage of maybe 2 cores, so OC'ing and good multiple GPUs is a gamer configuration, not 8 SSDs on a RAID card. Again, feel free to correct me; these are assumptions I'm making.

It also seems I will need to trade in a new GeForce GTX 680 single card. A very nice card with tweaking, but I suffer from the same extended-monitor issues and EVGA's fixes are not working. Hooking up two SLI'd ASUS 660 Ti cards gave me exactly what I wanted, with two HDMI cables: clean 1920 x 1080 on two 27-inch Samsung digital monitors.

The resolution becomes important when running a data migration using the virtual machine with Windows XP SP3 and using Adobe Creative Suite (especially Photoshop and Dreamweaver), as well as the Autodesk converter that runs with QuickTime Pro.

The biggest effect on productivity for me would be my storage array moving at 1:1 scaling.

What would you do in my case? A new build with the two 660s SLI'd, since this EVGA GeForce GTX 680 4GB card can't give me two monitors at 1,920 x 1080 on an extended desktop without a screen tweak. It oversamples and must be manually brought down.

BIG QUESTION: Am I wrong about the 3960X, and can I simply upgrade the RAID card without the trouble of an HBA, flashing firmware, and going to dual Xeons I cannot even OC (assuming I would need to) without true liquid cooling?

For obvious reasons, I trust the opinions here more than LSI's presales; most gamers know more than most IT managers I know! :)

Thanks again Derek.

Dean
 


The motherboard and the CPU have to support PCIe 3.0 in order for a PCIe 3.0 Graphics Card or RAID card to work at that bandwidth.

Since your motherboard has PCIe 3.0 slots it obviously supports it.
The Intel i7-3960X Extreme Edition is not a 3rd Generation Intel Core CPU and does not support PCIe 3.0.
The LSI RAID cards will still work, but only at PCIe 2.0 bandwidth.



Both of you are correct.
GPUs that are SLI'd enable gamers to have more FPS. For gamers, having the highest FPS and being able to play any game at its max settings is the holy grail.
You are correct in that SSDs do not increase FPS. They only decrease game-load and level-load times, which equates to slightly better performance.



You don't need an HBA. I'm far from an expert on RAID controllers, but I don't believe you even need a PCIe 3.0 RAID card to get maximum performance from 8 SATA 6Gb/s SSDs. You just need an x8 (minimum) card that has a ROC (RAID-on-Chip) and a good memory cache (512MB or better).

The MegaRAID SAS 9270-8i is an x8 card with a dual-core ROC and 1GB of cache.

That card plus your two 660s in SLI and you should be good to go. :)

Also read your motherboard's user manual regarding the PCIe slots. If the slot your RAID card will be connected to shares resources with your SLI setup, make sure you will still be getting x8 speeds.
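As a rough sanity check (assuming ~500 MB/s sequential per SATA 6Gb/s SSD; that's my own ballpark, real drives vary), this shows why a shared slot dropping to x4 would choke the array while a full x8 still has room:

# Rough check: 8-SSD aggregate vs. the usable bandwidth of the slot it sits in.
PER_SSD_MB_S = 500                      # assumed sequential rate per SATA 6Gb/s SSD
ARRAY_MB_S = 8 * PER_SSD_MB_S           # ~4000 MB/s aggregate for 8 drives

slots = {
    "PCIe 2.0 x4": 4 * 500,             # ~500 MB/s usable per 2.0 lane
    "PCIe 2.0 x8": 8 * 500,
    "PCIe 3.0 x8": 8 * 985,             # ~985 MB/s usable per 3.0 lane
}
for name, slot_mb_s in slots.items():
    verdict = "bottleneck" if slot_mb_s < ARRAY_MB_S else "enough on paper"
    print(f"{name}: {slot_mb_s} MB/s vs array {ARRAY_MB_S} MB/s -> {verdict}")

In practice, protocol overhead knocks a 2.0 x8 slot well below its paper number, which is why the current card plateaus around 2.3 GB/s.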

EDIT: After reading that SSD Review article in your 1st post it seems you would get even better performance with PCIe 3.0. :)
But in that case you would have to buy a new motherboard and a new CPU.
 

erpsaa

Thanks Derek. I thought you might come to that conclusion as have I.

You're correct, I do get max performance out of my current card (256MB cache, 8 ports), but it's PCIe 2.0.

I appreciate your time.

Dean
 

popatim

By the way, the only i7 to support pcie3 is the 3770 (so far)

Edit: Also, having just read the review, I suspect two things. The first is that the WS-class motherboard must not need the CPU to support PCIe 3.0, and the other, probably more relevant to you, is that they used an 8-drive RAID 0 array. RAID 5/6 does not scale linearly.
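A toy model of that non-linearity (my own simplification; it ignores controller cache, stripe size and CPU cost):

# Toy write-scaling model relative to a single drive; illustrative only.
for n in (4, 8):
    raid0 = n            # RAID 0 stripes every write across all n drives
    raid5_seq = n - 1    # RAID 5 full-stripe writes: one drive's worth goes to parity
    raid5_rand = n / 4   # RAID 5 small random writes: classic ~4-I/O read-modify-write penalty
    print(f"{n} drives -> RAID 0 ~{raid0}x, RAID 5 sequential ~{raid5_seq}x, "
          f"RAID 5 small-write ~{raid5_rand:.1f}x")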
 

erpsaa



Popatim,

You may be right; in fact, logic demands you are correct, or there is a different CPU being used, given the 3770 works with an 1155 socket. The mainboard used in the test is the exact mainboard I use: the ASUS LGA2011 P9X79 WS (workstation, SSI CEB). They never did mention the processor, so I'll have to post a message and get an answer on that.

You are correct on the RAID-5 scaling, but the test happened to match what I have been running for almost two years (RAID-0), except they used 8 x M4 SSDs and I use 8 x Vertex 3 240GB SSDs. My configuration has always been RAID-0 on the PCIe 2.0 Intel (LSI) RT3WB080 HW RAID card with 256MB DDR2 (as Derek pointed out, it uses RAID-on-Chip, ROC). My scaling begins to curve off after about 5 disks and tops out at about 2.3 GB/s average R/W (more the ~3 GB/s limitation of PCIe 2.0 than the drives).
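That flattening around five drives lines up with a quick estimate (assuming ~500 MB/s per Vertex 3 and roughly 2.5 GB/s of real-world PCIe 2.0 x8 throughput; both are my own ballpark figures):

import math

# Where a RAID 0 array of SATA SSDs saturates its host link (rough estimate).
PER_SSD_MB_S = 500        # assumed per-drive sequential throughput
LINK_REAL_MB_S = 2500     # assumed real-world PCIe 2.0 x8 throughput after overhead

print("Link saturates around", math.ceil(LINK_REAL_MB_S / PER_SSD_MB_S), "drives;")
print("beyond that, extra SSDs add almost no sequential throughput.")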

Here is where the logic comes in. There is no LGA2011 Sandy Bridge-E CPU that supports PCIe 3.0, yet these guys got the HBA to work on an LGA2011 workstation mainboard. It makes no sense to me that JBOD, or flashing the HBA, would deliver different results than the LSI models Derek pointed out.

So, what were they using for a CPU? Either they were using a Sandy Bridge-E and you are correct that the mainboard allows PCIe 3.0 support without the CPU, or, my guess, it was an E5 Xeon. My board also supports Xeon CPUs, and there are dozens of LGA2011 Xeons that would support PCIe 3.0.

I'll try to find out, I should have considered this before. Thank you.
 

popatim

They said they used a 3820 CPU, which is PCIe 2.0, on page 2 (about halfway down, right under the ASUS mSSD pic). You made a good point that there are no 2011 CPUs supporting PCIe 3.0; it did not dawn on me then that the 3770 is a socket 1155 processor. I thought something funny was going on in that review, and now I'm thinking they just made up results based on what they expected to see.
 

erpsaa

I must have missed that. And while the i7-3820QM is an Ivy Bridge that does support PCIe 3.0, there's one problem: I forget exactly, but I think the laptop socket it uses is 1124 (something like that). ;-)

OTOH, you may be correct about the mainboard. They were a bit too specific about how to raise the threshold on PCIe 2.0 to be making up the scaling, and they did not flash with IR firmware and use the BIOS RAID.

You may be correct in what you first said about the mainboard. I got that board specifically because ASUS guarantees 2 years of stock and 24/7 operation, and it also guarantees compatibility with server components.

I had an issue registering, but the guy's email is there; I'll simply ask him and also ask our Intel support rep. Worst case, I can get a single Xeon, but I'd rather not change a stable configuration that runs very smooth. Storage changes are one thing, with no need to reinstall the OS and apps, etc., but I like the 3960. Very stable. We've all been there at some time.

 

erpsaa

Sorry about the accidental posts, I had to run out. I really need to know what you guys think about this exchange, especially the reply concerning Intel. To me, that is what leans me toward thinking the Sandy Bridge-Es all support PCIe 3.0; however, it could be the mainboard. You cannot modify what Deepak suggested on the mainboard, unless he knows something I don't.

Also, Derek and popatim, I'm way too busy to take down my workstation yet and test it for certain. Either way, I do not lose anything by going with Derek's suggestion, since I am getting the LSI MegaRAID 9271-8i. LSI has it listed at $700.

http://store.lsi.com/store.cfm/MegaRAID_Controllers/6Gb_SAS_9270_RAID_Controllers/LSI00327.

Derek, the cards also dropped in price. The LSI MegaRAID LSI00331 (9271-8i) PCI-Express 3.0 x8 is $689 on Newegg and includes 2 x SFF-8087 mini-SAS breakout cables, each with 4 SATA 6Gb/s connectors. I do not like the quality of LSI cables, so I'll be using the set I have. I called my Ingram Micro rep and he's checking, but he said if I give it one to two months I should be able to pick that card up for less than $500; it's at $562 now (but from Ingram only).

That said, what do you guys think of the reply below? Again, the attitude of the Intel rep is VERY telling to me.

I'll paste it bottom-up for clarity and have removed most of my questions that did not relate to the HBA and 1:1 scaling, but I left in some additions as insight into the overall content. Sorry for being so late in getting this posted:

"On Feb 28, 2013, at 9:06 PM, Deepak Sharma wrote:
Greetings Dean,

Les forwarded me your email. Thank you for the write-up, and no need to apologize. We appreciate your response and are fulfilled in knowing that we help people like you. It really is the best feeling to get such a response, and to know the work we do really reaches out to people.

I personally thank you for commending my server build. While many people have different ways of building an application server, this method was the one I chose; you have a different one, but of course, you had to build a web server. Also, if building an app or web server were easy, there would be no need for a guide. The key isn't only to explain, nor is that the main highlight; the primary concerns are the inevitable bumps along the path, and how to surpass them when they happen.

Anywho, I wanted to say thanks, but I also noticed that you had a few questions. I'm not going to lie - I had a tough time following the email haha! I wasn't sure if you were asking questions in some places or using them as discussion-openers.

If you don't mind, could you please list your questions? I can assist you with them, but as far as Chris' article goes, I'm not sure how much I can go into that. I wasn't part of that project, and Chris isn't with us anymore, but I will certainly assist you as much as I can.

Thanks, and take care!

Deepak Sharma |
Technology X | Managing Editor
The SSD Review | Product Editor

My reply:

On Fri, Mar 1, 2013 at 7:36 AM, Dean Poulos <dpoulos@erpsaa.com> wrote:
Hi Deepak,

Thanks for the kind reply. Sorry for all those words to ask a simple question.

The HBA test was done with the same mainboard I use for my personal workstation, the ASUS P9X79 WS (SSI CEB). The processor used in your test was an i7-3820 Sandy Bridge-E, which is PCIe 2.0.

Yet the setup scaled perfectly at PCIe 3.0 x8 in RAID-0, using the HBA in a PCIe 3.0 slot and getting the full benefit of PCIe 3.0 with a CPU that only supports PCIe 2.0.

My question is, since GPUs do not achieve the benefit of PCIe3 without an Ivy Bridge Intel, is it the Mainboard that makes the difference in this case?

I would rather not change my CPU (i7-3960X) for a Xeon (the ASUS P9X79 WS also takes Xeons); they run a bit hot for what I do.

Is the reason your test worked that a 3.0 x8 RAID card does not interact with the CPU the way a GPU does?

I can definitely benefit from upgrading my LSI MegaRAID RT3WB080 PCIe2 card to one of LSI's PCIe3.

LSI tells me this will work but they could not explain why a PCIe2 CPU does not matter. I was hoping you guys could explain why. Is it simply that a storage array on a PCIe lane does not need 3.0 CPU support and a GPU does?

Thank you and again sorry for the long first question.

Your board caters to a wide audience, and this is appreciated by people like me who are not gamers, but who knows, gamers are smarter than most IT managers I deal with! At the same time, my needs as a business are 24/7 operation with my i7-3960X at a steady 4.8 GHz, not going past 42 degrees C. (By that I mean, the Corsair H100 I use is not a true water cooler IMHO; if I were forced to a Xeon, I would either need a real cooler or I'd need to clock it down.) Clock speed is important, but not as important as storage speed. The data crunching we do, especially using virtual machines, demands the fastest possible storage solution. Thank you!

Dean

ERP Solutions & Auditing, LLC
Dean Poulos—Managing Director
C: +1 617-510-0443
F: +1 781-750-3500
O: +1 781-221-2168
http://www.erpsaa.com/

DEEPAK'S REPLY:

"Hey Dean,

No worries. I just wanted to know about your questions for clarity's sake :)

I looked into this, and while the 3820 is a 2011 socket CPU, it may or may not support PCI-E 3.0.

This Intel post here will sum it up better:

http://forum-en.msi.com/index.php?topic=159466.0

I understand why you're confused now. A SB-E 3820 running in a PCI-E 3.0 capable ASUS board...firstly that is confusing in itself. We would think that the 3820 is Ivy Bridge, but instead it's a Sandy Bridge E - a hybrid if you will.

What happens then is "unofficial" support for PCI-E 3.0, which essentially means it "maybe" has it. That would leave it entirely up to the motherboard whether it wants to support PCI-E 3.0 or keep it at the confirmed PCI-E 2.0.

Unfortunately I cannot provide any insight as to what triggers the ASUS (or any 2011 board) to classify the 3820 as an Ivy Bridge and give it PCI-E 3.0 capability. Is it the motherboard, or do we need some sort of hack? Is it a certain batch or particular models of the 3820? We just can't say.

As far as we can tell, the 3820 is capable, but it isn't confirmed or researched. Why Intel is keeping it a secret is beyond me, but clearly it isn't true, natively-supported PCI-E 3.0.

So the discrepancies that you are seeing are most likely due to this. The scaling probably happened either by Chris modifying the ASUS to run the 3820 with PCI-E 3.0, or it happened without Chris knowing anything of what we have discussed here as he could've assumed the 3820 to be PCI-E 3.0, which could have been somehow triggered by the P9X79.

Hope this helps!"

MY REPLY

From: Dean Poulos <dpoulos@erpsaa.com>
Date: March 1, 2013, 10:36:52 AM EST
To: Deepak Sharma
Subject: Re: [The SSD Review]

ROTFL, yeah, that's definitely "Intel speak" in that link you included. They crack me up; you should sit in on one of the "mandatory webinars" we resellers have to deal with.

This boils down to using simple Logic.

1. I am using the ASUS P9X79 WS (SSI CEB) with a Sandybridge-E i7 3960x.

2. Sandy Bridge-E (not only the 3820) may support PCIe 3.0, just not officially; that's what I'm gleaning from the tone of that Intel rep. If the i7-3820 works, so would the 3930, 3960 and 3970.

Conclusion: Grab one of the LSI RAID cards that will accept my 8 Vertex 3 240GB SSD's. Do a benchmark.

If I come out flying at 6-8GB/s I know it works and I'll send some chocolates to my ASUS Channel rep.

If I'm still stuck under the 3 GB/s limit, I'll have to hand over my 3960 to one of my sales reps and buy a Xeon E5, and pray that a self-contained H110 keeps the darn Xeon cool enough so I can stay at 4.8 without having to buy a real liquid cooling system or crank it down. I do not want to give up that motherboard for the ASUS dual-CPU LGA2011 board, despite the fact it makes me drool!

I will let you know how it goes.

This will definitely be a yes or no on the Sandybridge-E, at least with this ASUS WS SSI CEB Mainboard.

I will also have a lot more confidence in LSI's support and presales if this works; they are the ones who said, "It will work in 3.0 x8 on your motherboard, ignore the CPU."

Thank you,

Dean

I'll do the test when time permits and then know for certain. The thing is, who knows; I would only know it works with the RAID card if it does. I'm getting rid of the annoying GeForce GTX 680 4GB and replacing it with two GeForce GTX 670 2GB cards.

I assume (and yes, this is a stupid question) that I can plug a cable into each card, one to each 27" monitor I use, and get no oversampling at 1920 x 1080, which I do not get now.

I do not NEED to force them into SLI and will still be able to use the NVIDIA software to extend the desktop, correct?

Thanks and I'd love to hear what you guys think.

Dean
 

erpsaa

Guys,

New information, and I am quoting an official Intel rep. They will not answer my question as to why, but in no uncertain terms, the second-gen Sandy Bridge-Es, the 3820, 3930, 3960 and 3970, all support 3.0. It took me way too much time and digging, and I wrote about three letters to Intel. If they want their Gold partners to be loyal, they need to be more forthcoming.

It is NOT the ASUS P9X79 WS mainboard specifically. It is ANY LGA2011 mainboard which supports PCIe 3.0.

One easy way to notice this is to go through Newegg and some of the retail sites. Two things did it for me:

They have removed any reference to PCIe 2.0 from the specs they list for all the SB-E CPUs. Yet Intel won't announce it.

A private email from my Intel channel rep, saying the best he could do is tell me, and I quote: "since 2011 all Sandybridge-E's have supported PCIe 3.0 and that goes for GPUs as well."

Since GPUs are of little concern to my business, I tracked down a number of benchmarks, and every SB-E runs at 5 GT/s per lane in PCIe 2.0. I finally get why PCIe 2.0 suffers the 20-30% overhead of its 8b/10b encoding, whereas PCIe 3.0's 128b/130b virtually eliminates encoding overhead and delivers 8 GT/s per lane. At x8, that would crush the bandwidth issue.
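Putting per-lane numbers on that encoding difference (my own arithmetic, before protocol overhead):

# Per-lane usable rate from transfer rate and line-encoding efficiency.
gen2_eff = 8 / 10        # 8b/10b: 20% of the raw bit rate lost to encoding
gen3_eff = 128 / 130     # 128b/130b: about 1.5% lost
gen2_lane = 5.0 * gen2_eff / 8   # GT/s -> GB/s
gen3_lane = 8.0 * gen3_eff / 8
print(f"PCIe 2.0: {gen2_eff:.0%} efficient, ~{gen2_lane:.2f} GB/s per lane (~{8 * gen2_lane:.1f} GB/s at x8)")
print(f"PCIe 3.0: {gen3_eff:.1%} efficient, ~{gen3_lane:.2f} GB/s per lane (~{8 * gen3_lane:.1f} GB/s at x8)")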

I'm pissed I can't get an answer as to why, except my cynical nature tells me it would dilute the Xeon business.

 

baddad

It's the X79 chipset that gives you the PCIe 3.0 support with i7 LGA2011 processors.
PCI Express: 40 lanes, 2.0 [official Intel information sheet] (and depending on the motherboard brand and model, it can offer a PCIe 3.0* connection supported by the third-party motherboard manufacturer, e.g. the ASUS SABERTOOTH X79).
 

popatim

Thanks for the update, Dean. I'm glad we finally got to the bottom of that mystery, and kudos to you for your effort in getting to it.

As for your display question: yes, you can have two non-SLI'd 680s in your system, each running a separate display, and still extend your desktop across them. This is what I do too <grin>. You can also game in full screen on one and have something else going on the second screen with no loss of gaming FPS performance, as long as you have enough CPU. I often game while I catch up on some of my shows. It can be a bit tricky, as most full-screen games like to blank out the 2nd display.
 

Neckemoff

Hi,

I just want to give you a heads-up, from my personal experience, about considering the LSI 9271 for an array of SSDs. It will not perform as well as you think (blame how the technology works, or LSI), and it is worse for realistic performance, i.e. 4K random reads. In fact, you can create a software RAID that will perform better. The LSI RAID cards are designed for the business need of large aggregate IO throughput, where lots of small volumes are used by different applications at the same time. They do not excel at IO performance for a single operation, but they still put out respectable numbers.

For max performance, you are best off using Intel Z77 or X79 (they are limited to RAID 0 with only two SATA 6Gb/s ports) with two quality SSDs in RAID 0. A single larger/better-quality SSD will better serve real-world IO demands (4K random read) on a gaming rig; a 500GB SSD will be better than two 250GB SSDs in RAID 0. In fact, RAID will decrease random 4K read performance, which is what you want for gaming. Unless you are trying to set sequential read/write benchmarks or have a business requirement for this, I recommend you not spend money on large SSD arrays.
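A quick illustration of why (toy numbers of my own, not benchmark results): at low queue depth a 4K random read is bound by per-request latency, and each individual 4K read still lands on exactly one drive, so striping adds nothing and the controller hop can even subtract a little:

# Toy QD1 4K random-read model; latencies are assumed, measure your own hardware.
BLOCK_KB = 4
SSD_LATENCY_US = 100          # ~0.1 ms per 4K random read on a typical SATA SSD (assumed)
CONTROLLER_HOP_US = 20        # extra latency through a hardware RAID controller (assumed)

def qd1_mb_s(latency_us):
    iops = 1_000_000 / latency_us
    return iops * BLOCK_KB / 1024

print(f"Single SSD, QD1 4K:     ~{qd1_mb_s(SSD_LATENCY_US):.0f} MB/s")
print(f"8-drive RAID 0, QD1 4K: ~{qd1_mb_s(SSD_LATENCY_US + CONTROLLER_HOP_US):.0f} MB/s")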

Good luck.