X99A: relation between M.2 slot, PCIe, and SATA ports

droidling

Distinguished
Nov 25, 2007
49
0
18,530
I'm building a new desktop for heavy photo processing for photogrammetry, 3D scanning, 3D CAD, and VR gaming. I'm trying to build with future upgrades in mind, with a budget closer to $1500 than $2000. These are the components I have so far.

CPU: Intel i7-5820k
Motherboard: MSI X99A SLI Plus
GPU: Gigabyte GeForce GTX 1070 8 GB GDDR5 256 bit PCI-E 3.0 x 16 Windforce OC (GV-N1070WF2OC-8GD) (I'd like to have the option for a second GTX 1070 in SLI mode later)
Memory: G.Skill Ripjaws V 32GB (2x16) DDR4 3333 F4-3333C16D-32GVR
Optical Drive: LG Electronics 14x SATA Blu-ray WH14NS40 - OEM
Power Supply: Corsair CMPSU-1000HX

It's been a good 10 years since I built a gaming/graphics-level PC. I feel pretty good about most of my parts choices, but I am a little lost when it comes to the relationship between the processor's PCIe lanes, the PCIe bus, and modern storage. In particular, I wonder how it affects adding a second video card later.

According to the X99A manual, the motherboard's M.2 slot can run in either SATA mode (6 Gb/s) or PCIe mode (32 Gb/s). Other than speed, I'm not too sure what the difference is.

If I use a faster PCIe 3.0 x4 M.2 drive, will that take lanes away from the PCIe 3.0 x16 slots? Will it also take up 2 of my RAID-compatible SATA 3.0 ports?

Is there really any point in having a 6 Gb/s M.2 drive? They seem to be the same speed as a standard SATA III SSD, are more expensive, and take up 2 SATA 3.0 ports. At least they do on my motherboard.

Could I get similar transfer rates by putting 6 x $60-$70 240GB SATA III 6 Gb/s SSDs in a RAID 5 array, and not use any of the PCIe bus bandwidth? This may seem crazy, but it's about the same price as a 1TB PCIe 3.0 x4 drive, and it adds fault tolerance.
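
Here's the back-of-the-envelope math I'm working from (the per-drive and per-lane numbers are assumptions, not measurements, so treat this as a rough sketch):

```python
# Rough throughput comparison: 6x SATA SSD RAID 5 vs. one PCIe 3.0 x4 M.2 drive.
# All figures are assumed/nominal, not measured.

sata_ssd_read = 500          # MB/s, assumed sequential read per SATA III SSD
drives = 6
raid5_read = (drives - 1) * sata_ssd_read   # RAID 5 stripes data across n-1 drives per stripe

pcie3_lane = 985             # MB/s per PCIe 3.0 lane after encoding overhead
m2_x4_link = 4 * pcie3_lane  # link ceiling for a PCIe 3.0 x4 M.2 drive

print(f"RAID 5 (6 drives) theoretical read: ~{raid5_read} MB/s")   # ~2500 MB/s
print(f"PCIe 3.0 x4 link ceiling:           ~{m2_x4_link} MB/s")   # ~3940 MB/s
```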

Edit:
Hopefully this diagram will help explain my confusion.



I can't tell if the M.2 slot gets its bandwidth from the PCIe slots, the PCI Express bus, or some combination of the two. I also don't understand why the DMI 2.0 link isn't a huge bottleneck between the CPU and all of those SATA 3.0 and USB ports.

I haven't kept up with hardware developments for quite a few years. I apologize if any of this seems dreadfully obvious.
 

scuzzycard

Honorable
The M.2 slot in PCIe mode takes 4 lanes from your CPU, leaving 24 lanes available for up to two GPUs. The GPUs will run at 3.0 x16 and 3.0 x8 with that CPU regardless of whether or not you install a PCIe SSD in the M.2 slot, so there's no drawback to using it. I think the PCIe SSD would generally outperform the SATA RAID 5, but it's up to you whether the fault tolerance is worth it.

I saw an article on this site in which such a RAID was tested, in case you're interested. http://www.tomshardware.com/reviews/ssd-dc-s3500-raid-performance,3613.html
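
If it helps to see the lane budget spelled out, here's a quick sketch (28 lanes is the 5820K's total; the per-lane rate is the nominal PCIe 3.0 figure, so the bandwidth numbers are ballpark):

```python
# PCIe lane budget on a 28-lane CPU (i7-5820K) with two GPUs and a PCIe M.2 SSD.
cpu_lanes = 28

allocation = {
    "GPU 1 (x16)": 16,
    "GPU 2 (x8)": 8,
    "M.2 SSD (x4)": 4,
}

used = sum(allocation.values())
print(f"Lanes used: {used} of {cpu_lanes}")   # 28 of 28 -- it just fits
for slot, lanes in allocation.items():
    # ~0.985 GB/s per PCIe 3.0 lane (nominal, after encoding overhead)
    print(f"  {slot}: ~{lanes * 0.985:.1f} GB/s")
```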
 
Solution

droidling

Distinguished
Nov 25, 2007
49
0
18,530


So I would need to step up to a 40-lane CPU if I want to add a second video card and get full speed out of it. If I go for the i7-6850K, it starts looking like I'd be in the $2000 system range. I'll have to think about that.

I can't say I understood all the particulars of the RAID article you linked, but it does seem obvious that the roughly 2 GB/s DMI 2.0 link is going to limit the speed of any RAID that doesn't have a dedicated controller with its own PCIe lanes. I'm getting quite a bit out of the article, and what I don't understand can be questions for another forum.
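
Working through the numbers as I understand them (assuming DMI 2.0 behaves like four PCIe 2.0 lanes at roughly 500 MB/s each, and a decent SATA III SSD reads at about 550 MB/s):

```python
# Why a chipset-attached SATA RAID runs into the DMI 2.0 ceiling,
# while a CPU-attached PCIe 3.0 x4 M.2 drive doesn't go through DMI at all.
dmi2_ceiling = 4 * 500        # MB/s: four PCIe 2.0 lanes, shared by SATA, USB, NIC, ...
ssd_read = 550                # MB/s, assumed sequential read per SATA III SSD
drives = 6

raid_aggregate = (drives - 1) * ssd_read          # RAID 5 read estimate
effective = min(raid_aggregate, dmi2_ceiling)     # capped by the shared DMI link

print(f"RAID aggregate: ~{raid_aggregate} MB/s, DMI 2.0 cap: ~{dmi2_ceiling} MB/s")
print(f"Effective ceiling through the chipset: ~{effective} MB/s")
```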

Either way, it looks like the M.2 PCIe 3.0 x4 drive is my best choice.

Thank you for your help.