Will all these parts work together? 3x NVMe 960 Pro in RAID?

HybridWolf

Notable
Apr 18, 2017
I know a fair bit about computers, but one thing I've never really understood is PCIe and how it works, especially PCIe lanes on CPUs.

Anyway, I need someone to verify whether all of these parts will work together.

This is a build for one of my friends. He has a budget of $16,000 and wants the most overkill setup possible for very heavy editing of 4K+ footage and video recording, hence the 128 GB of RAM for Adobe Premiere and other RAM-hungry programs.

Here is the link to the current build: Build

This is the motherboard that has 3x M.2 2280 slots: Motherboard

These are the NVMe SSDs he plans to use: 960 Pro 2TB

I've heard that some motherboard chipsets let an M.2 NVMe SSD use four PCIe lanes supplied by the chipset rather than the CPU, so the other components aren't affected. What I want to know is: does this specific motherboard's chipset support running all three SSDs without using any CPU lanes? And if the other two SSDs do need CPU lanes, how many lanes will be left over for the rest of the system?

He plans to run the two 1080 Tis in SLI, both in x16 slots, plus an Elgato 4K60 Pro capture card. The capture card uses a PCIe x4 connection, but he plans to put it in an x8 slot (since, as far as I understand, the bandwidth taken up by the two 1080 Tis already limits the x16 slots to x8).

So my main question is: with all of the hardware in the system (including the 4K60 capture card), will everything work together, with enough CPU PCIe lanes to support it all?
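
To make the lane question concrete, here is my rough tally of how many lanes everything would want if it all hung off the CPU (just a sketch; the per-device lane counts are what I believe each part uses, and the CPU lane counts are guesses since I'm not sure exactly what his CPU provides):

```python
# Rough PCIe lane budget - a sketch only. Per-device lane counts are what I
# believe each part uses; the CPU lane counts (16 / 28 / 44) are assumptions
# covering mainstream vs. HEDT parts, so check the actual CPU spec sheet.
devices = {
    "GTX 1080 Ti #1 (x16 slot, x8 in SLI)": 8,
    "GTX 1080 Ti #2 (x16 slot, x8 in SLI)": 8,
    "Elgato 4K60 Pro capture card": 4,
    "960 Pro #1": 4,
    "960 Pro #2": 4,
    "960 Pro #3": 4,
}

demand = sum(devices.values())
print(f"Total lanes wanted if everything runs off the CPU: {demand}")

for cpu_lanes in (16, 28, 44):  # assumed CPU lane counts
    spare = cpu_lanes - demand
    verdict = "fits" if spare >= 0 else f"short by {-spare} lanes"
    print(f"CPU with {cpu_lanes} lanes: {verdict}")
```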

Thanks.
 
If all of the NVMe drives are run through the motherboard chipset (not the CPU), your maximum speed will be ~4 GB/s for all of the NVMe drives combined, because the chipset is connected to the CPU by a single PCI-Express 3.0 x4 link. RAID would be pointless and would end up reducing performance in real workloads.
For the best performance you will need to attach the NVMe drives to the CPU's PCI-E lanes, and you will not have enough of those unless you run each GPU in x8 mode.
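
As a back-of-the-envelope sketch of why the chipset link is the bottleneck (assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane and the 960 Pro's rated ~3,500 MB/s sequential read; real-world numbers will vary):

```python
# Chipset uplink (PCIe 3.0 x4 / DMI 3.0) vs. three NVMe drives - rough numbers.
# ~985 MB/s usable per PCIe 3.0 lane and ~3500 MB/s rated read per 960 Pro
# are approximations, not measured figures.
pcie3_lane_mb_s = 985
chipset_link_mb_s = 4 * pcie3_lane_mb_s   # single x4 uplink shared by everything on the chipset
one_960_pro_read_mb_s = 3500
three_drives_mb_s = 3 * one_960_pro_read_mb_s

print(f"Chipset uplink:           ~{chipset_link_mb_s / 1000:.1f} GB/s")
print(f"One 960 Pro (seq. read):  ~{one_960_pro_read_mb_s / 1000:.1f} GB/s")
print(f"Three 960 Pros combined:  ~{three_drives_mb_s / 1000:.1f} GB/s")
# => behind the chipset, three drives can never exceed one x4 link,
#    and that link is also shared with SATA, USB, LAN, etc.
```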
 

HybridWolf

Notable
Apr 18, 2017
Well, he's planning on using RAID 0 for optimum performance in synthetic workloads and other very taxing cases, but he did say he'd consider any other RAID level that stores the data safely enough that if one drive fails, the data on the others isn't lost.
(I think that's RAID 5/10? I'm also not aware of all the RAID levels.)

@Snipergod87 You mention running SLI in x8 mode; will that hinder the FPS he gets in games at all, or is there enough bandwidth for both cards? Also, how can I set it up so that all of the NVMe SSDs run through the CPU instead of the chipset's x4 PCIe link?
 
Depends on how that motherboard is laid out, but you might need to use a PCI-Express -> M.2 card (or a multi-M.2 card).
x8/x8 will reduce performance, but only very minimally.

The problem with RAID on NVMe arrays is that the whole point of NVMe was to reduce protocol overhead to allow for faster storage; by putting the drives in an array you add a large amount of overhead back and reduce performance. That's why when people RAID NVMe devices it tends to be in some sort of redundant configuration, RAID 1/5/6/10, not RAID 0.

The motherboard manual will have detailed information on which M.2 slots are chipset-controlled vs CPU-controlled, and which PCI-Express slots are chipset vs CPU.
 

USAFRet

Titan
Moderator


RAID 0 is the antithesis of "safe". If one drive, or the RAID controller, goes bad, all data across all drives is lost at that moment.
And performance will likely be slower than a single drive, because of the PCIe lane situation mentioned above.

With a budget like that, you're trying to do too much in a single box.
This use case screams for a NAS box or other secondary storage.
Do all of your work on the system, with individual drives. Then move it to the NAS box with redundant drives, etc.
And actual backups.
 

HybridWolf

Notable
Apr 18, 2017
So, to clarify a few things: PCIe slots aren't always wired to the CPU's lanes?
And what is a good PCIe -> M.2 carrier card? The only one I'm currently aware of that can hold all three SSDs is the Acer DIMM (I think that's what it's called); the only problem is that it doesn't support Samsung NVMe drives.

Also, can you maybe go over the common types of RAID used for multiple NVMe SSDs, such as RAID 5, 6, and 10?
 
Some PCI-E slots can be routed to the PCH (the chipset on the motherboard); the motherboard manual will have this info.
I know multi-M.2-to-PCI-E cards exist, but I'm unfamiliar with them as I don't need them.

RAID 1 is two disks with the same data on them; great for mission-critical applications, because if one drive fails you can keep going with no need to reboot. There is no read or write performance increase, and you get the capacity of one disk.
RAID 5 and 6 are similar. RAID 5 requires a minimum of three drives; you get the capacity of two disks (one drive's worth goes to parity), you lose write performance, and you gain some read performance. RAID 6 requires a minimum of four drives; you get the capacity of two (two drives' worth of parity), lose even more write performance, and gain less read performance.

RAID 10 requires a minimum of four drives; it's a combination of RAID 1 and RAID 0. You are protected against disk failures and get a speed increase (not as big as four disks in RAID 0, only equivalent to two disks in RAID 0), and you get the capacity of two drives.
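
To put rough numbers on those capacity trade-offs with the 2 TB 960 Pros, here's a quick sketch (these are just the textbook usable-capacity formulas for each level; real arrays lose a little more to metadata):

```python
# Usable capacity for common RAID levels with n identical drives.
# Textbook formulas only; real arrays lose a bit more to metadata.
def usable_capacity(level: str, n_drives: int, drive_tb: float) -> float:
    if level == "RAID 0":                      # striping, no redundancy
        return n_drives * drive_tb
    if level == "RAID 1":                      # mirroring
        return drive_tb
    if level == "RAID 5" and n_drives >= 3:    # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level == "RAID 6" and n_drives >= 4:    # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    if level == "RAID 10" and n_drives >= 4 and n_drives % 2 == 0:  # mirrored stripes
        return (n_drives // 2) * drive_tb
    raise ValueError(f"{level} needs more drives than {n_drives}")

drive_tb = 2.0  # 960 Pro 2TB
for level, n in [("RAID 0", 3), ("RAID 1", 2), ("RAID 5", 3), ("RAID 6", 4), ("RAID 10", 4)]:
    print(f"{level} with {n} x {drive_tb:.0f} TB drives -> {usable_capacity(level, n, drive_tb):.0f} TB usable")
```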

Now, those speed increases are mainly noticeable with mechanical HDDs: because of their high latency, slower transfer speeds, and low IOPS, adding a little more latency from the RAID layer doesn't make much of a difference to mechanical drives.

In the enterprise space, NVMe disks are either used in RAID 1, RAID 6, or RAID 10, or in JBOD with a much more intelligent "RAID" method (for example, an array with a few hundred GBs of RAM that caches reads and writes, then flushes them out to persistent storage, so performance stays very high because the transactions land in RAM first), or in object storage.
RAID 5 isn't used much anymore due to the high likelihood of UREs (unrecoverable read errors) when rebuilding large arrays (over 1-2 TB), which will render your array dead.
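
As a rough illustration of the URE problem (this assumes the common consumer-HDD spec of one unrecoverable read error per 1e14 bits read; actual drive specs vary, and most SSDs are rated considerably better):

```python
# Probability of hitting at least one URE while reading an entire array
# back during a RAID 5 rebuild. Assumes a consumer-HDD-class spec of
# 1 URE per 1e14 bits read; many SSDs are rated 1e16 or better.
def p_ure_during_rebuild(data_read_tb: float, ure_rate_per_bit: float = 1e-14) -> float:
    bits_read = data_read_tb * 1e12 * 8          # decimal TB -> bits
    p_no_error = (1 - ure_rate_per_bit) ** bits_read
    return 1 - p_no_error

for tb in (2, 4, 8, 12):
    print(f"Rebuild that reads {tb:>2} TB: ~{p_ure_during_rebuild(tb) * 100:.0f}% chance of at least one URE")
```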
 
Solution

USAFRet

Titan
Moderator


You don't marry a RAID level to a particular type of drive; you look at what you want it to do, whatever the drive type.
Mostly.

https://en.wikipedia.org/wiki/Standard_RAID_levels
https://www.pcmag.com/article2/0,2817,2370235,00.asp
https://www.prepressure.com/library/technology/raid

However, with SSDs, and even more so with NVMe drives, the idea that RAID 0 (or any combination that includes a RAID 0 portion) is automagically faster is not necessarily true. Often it's just the opposite, and you lose performance.
SSD's - http://www.tomshardware.com/reviews/ssd-raid-benchmark,3485.html
NVMe (950 Pro) - http://www.tomshardware.com/reviews/samsung-950-pro-256gb-raid-report,4449.html

You need to research exactly what you're trying to do or protect against, and build the system to do that.
You can't just throw a bunch of drives together and sprinkle some RAID dust over them. You'll come back wondering why it is not "faster" or "better".
Research.