How many graphics cards can my CPU handle?

Dec 6, 2017
Friends,...
How many graphics cards can my CPU handle?

I'm building a new water-cooled tower with an Intel i7-8700K (6-core Coffee Lake, Z370) CPU.
I intend to install two GTX 1080 Ti graphics cards, but would potentially want to increase this to 4 in the near future (which means having to spend a fortune on a godlike mobo to get 4 PCIe x16 slots).

My specific question is: "Will the Intel 8700K CPU handle this?"

Before you ask: I'm not a gamer. I'm a motion graphics designer and my render software is GPU based, so the more CUDA cores I have, the faster it gets.

thanks
 
YoAndy

Jan 27, 2017
For that I'd rather go with the X299 series of motherboards. The i7-8700K is strong enough and will run just fine, but to fully utilize your cards' potential you should get an X299-series motherboard, and the i7-7800X will do just fine.

i7-8700K
Max # of PCI Express lanes: 16 (up to 1x16, or 2x8, or 1x8+2x4)

i7-7800X
Max # of PCI Express lanes: 28
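
Just to make the lane math concrete, here's a rough sketch (my own illustration, not something any motherboard literally runs) of the per-card link width you can expect when a CPU's lanes are split evenly across the GPUs; real boards depend on their bifurcation options and any PCIe switch chips:

```python
# Rough sketch: estimate the per-card PCIe link width when a CPU's lanes are
# split evenly across N GPUs. Real boards depend on bifurcation options and
# any PLX/switch chips, so treat this purely as an illustration.

def per_card_link(total_cpu_lanes: int, num_cards: int) -> int:
    """Largest standard link width (x1/x2/x4/x8/x16) each card can get."""
    lanes_each = total_cpu_lanes // num_cards
    for width in (16, 8, 4, 2, 1):
        if lanes_each >= width:
            return width
    return 0  # not enough lanes for even an x1 link per card

for cpu, lanes in [("i7-8700K", 16), ("i7-7800X", 28), ("i9-7900X", 44)]:
    for cards in (2, 4):
        print(f"{cpu} ({lanes} lanes), {cards} GPUs -> x{per_card_link(lanes, cards)} each")
```

With 16 CPU lanes, two cards land at x8 each and four cards at x4 each, which is the bottleneck being discussed; a 44-lane chip keeps four cards at x8.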
 

huntlong

Aug 17, 2017
The 8700K has only 16 PCIe lanes connected directly to the CPU. That's fine for two-way SLI because both cards will run at x8 speeds and shouldn't be a problem. But 4 cards would put each card at x4 speeds and that would bottleneck the GPUs. You need to move up to the X-series CPUs for real 4-way SLI utilization.
 
Solution

Th_Redman

Jan 5, 2011
What about the Nvidia Quadro workstation graphics cards, especially in your field? If you don't need a GPU for gaming, I'd personally go that route for what you do. The AMD Radeon WX series is also worth checking out.
 

YoAndy

Jan 27, 2017
Geforce vs Quadro – What’s the Difference?

GEFORCE PROS:
Faster clock speeds – Dollar for dollar, Geforce cards generally offer faster GPU clock speeds in the 10-20% range. For example, the Geforce GTX 1070 features a boost clock speed of 1683 MHz while the more expensive Quadro P2000 maxes out at 1470 MHz. This speed equates to better overall general performance, which brings us to our next point…

Versatility and value – Looking to do a little gaming, a little 3D rendering, and some video? Faster clock speeds along with more CUDA cores and VRAM dollar for dollar make the Geforce cards the go-to for all-purpose systems. That power for the money, especially at the lower/mid tiers, makes Geforce the better value for most users.

Multi-monitor support – For day traders, enthusiast gamers, or extreme multi-taskers looking to use 3, 4, or even 8 monitors, Geforce cards provide the best path forward. 10-Series cards from the GTX 1060 and up all support 4 monitors each natively and can easily be paired with a second card to double the monitor support. Most Quadro cards (with the exception of the NVS line and those on the very high end) will max out at two displays, requiring adapters and splitters to accommodate more.

Best for: Gaming, all around computing, day trading (multi monitor support), budget CAD, amateur video


QUADRO PROS:
Specific Render tasks – Quadro cards are designed for very specific render tasks like CAD design and professional video rendering. For example, the wire frame, double sided polygon rendering common with many CAD programs like AutoCAD makes Quadro the clear choice for this type of work, outperforming Geforce by a significant margin.

Extreme Power – Geforce does have beefy options like the GTX 1080Ti, but for the most extreme performance, a Quadro is simply without equal. For example, the Quadro P6000 features a stunning 24GB of GDDR5X VRAM and 3840 CUDA cores to provide 12 TFlops of power – and that’s on a single card. No Geforce card comes close. That type of power does come at a cost, but if the budget is open, Quadro is king in this department. Additionally, Quadro cards can also be paired with NVIDIA Tesla cards (a system formerly called NVIDIA Maximus) which allows for simultaneous visualization and rendering, exponentially improving performance.

Double precision computations – For complex double precision computations like those found in scientific and arithmetic calculations, Quadro significantly outperforms the Geforce equivalent. This is a very specific use case, but if it’s yours, you’ll understand the importance.

Durability/Warranty – Similar to Xeon processors, Quadro cards are generally designed for maximum durability and longevity and stand up to the rigors of daily strenuous use better than the consumer oriented Geforce. As a result, Quadro cards offer a longer, more robust warranty on average.

Best for: Certain Scientific and data calculations, CAD rendering, Professional-grade video production, 3D creation

So at the end of the day, which is better?

Ultimately, this really depends on your specific use case. For a lower to mid-range budget, I almost always recommend Geforce simply because of the value and versatility. But if all-out rendering performance is what you're after, for CAD and video specifically, Quadro is likely the way to go.

http://www.velocitymicro.com/blog/geforce-vs-quadro-whats-the-difference/
 

TJ Hooker


SLI is for gaming; you don't need the cards in SLI for rendering.
And Nvidia only supports two-way SLI for 10-series cards.
And an x4 connection to each GPU in SLI wouldn't just bottleneck it, it wouldn't even be possible to enable SLI (it requires an x8 connection to each GPU). Although you can get around that with certain mobos that include a PCIe switch chip.

@toby.hallam.personal unfortunately I don't know what sort of PCIe bandwidth is required for good rendering performance (for gaming you typically want at least x8). But if x4 is good, then you should be ok with an 8700K.
 
Dec 6, 2017


Really appreciate the reply, Andy. I hadn't even considered the Quadro range. I'm also now reconsidering the X299 chipset for the CPU instead of just going for the latest Z370. I suspect X299 is coming down in price, and a good i9 will give me 44 PCIe lanes.
Thank you brother
 
Dec 6, 2017


Thank you my friend. I think I fell into the trap of "newest is best", hence going for the Z370 chipset and not realizing that this will actually create a bottleneck for my GPU plans. Having looked at the Core X line of products, I think I will get more choice from the X299 chipset, like the i9-7900X which has 44 lanes and 10 cores. This feels like it will maximize the GPU speed (or at least prevent a bottleneck). For the GPU I'm also now considering the Quadro range (as mentioned by the other guys below). ----EDIT... not any more, these are over £5000 each!!
For rendering (with Octane Render by OTOY) it's all about the CUDA cores, but lots of VRAM too. The GPU loads the entire 3D model scene before it renders out.
I need to find the solution that offers the best "price per CUDA core", along with a CPU that doesn't bottleneck all that expensive hardware.
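
As a quick back-of-the-envelope on that "price per CUDA core" idea (the core counts below are the published specs for these two cards; the prices are placeholders I've made up purely for illustration, so swap in whatever you're actually quoted):

```python
# Back-of-the-envelope "price per CUDA core" comparison.
# Core counts are published specs; prices are illustrative placeholders only.

cards = {
    "GTX 1080 Ti": {"cuda_cores": 3584, "price_gbp": 700},   # assumed price
    "Quadro P6000": {"cuda_cores": 3840, "price_gbp": 5000}, # assumed price
}

for name, spec in cards.items():
    per_core = spec["price_gbp"] / spec["cuda_cores"]
    print(f"{name}: ~£{per_core:.2f} per CUDA core")
```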
Thanks Again
 

TJ Hooker

After a bit of googling, it looks like PCIe bandwidth has no effect on rendering itself, only on the amount of time it takes to transfer the scene to the GPU. So if you're doing a bunch of simpler renders such that rendering time is comparable to transfer time, then you'll start to be bandwidth limited. But if your render time is much longer than the transfer time, bandwidth limitations become less relevant.

Example (made up numbers):
You need to transfer 11GB to the card (worst-case scenario for a 1080 Ti). This would take 1.375 seconds over an x8 link, 2.75 seconds over an x4 link. Does an extra ~1.4 seconds matter? If your typical render jobs take around, say, 30 seconds, then the entire process only takes ~4% longer than it otherwise would. If your typical render jobs only take around 5 seconds, for example, then it will take ~22% longer due to the reduced bandwidth.
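
If it helps, here are the same made-up numbers as a tiny script (assuming roughly 8 GB/s of usable bandwidth for a PCIe 3.0 x8 link and 4 GB/s for x4, which is where the 1.375 s and 2.75 s figures come from):

```python
# Reproduce the made-up example above: how much longer does the whole
# job (transfer + render) take on an x4 link versus an x8 link?
# Assumes roughly 8 GB/s usable bandwidth for PCIe 3.0 x8, 4 GB/s for x4.

SCENE_GB = 11        # worst case for an 11GB 1080 Ti
BW_X8_GBPS = 8.0     # approximate usable PCIe 3.0 x8 bandwidth
BW_X4_GBPS = 4.0     # approximate usable PCIe 3.0 x4 bandwidth

for render_s in (30, 5):
    t_x8 = SCENE_GB / BW_X8_GBPS + render_s
    t_x4 = SCENE_GB / BW_X4_GBPS + render_s
    slowdown = (t_x4 - t_x8) / t_x8 * 100
    print(f"{render_s}s render: x8 total {t_x8:.2f}s, x4 total {t_x4:.2f}s "
          f"(~{slowdown:.0f}% longer)")
```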

I found the following link that shows no difference in render performance based on PCIe bandwidth, but unfortunately they only look at x8 vs x16.
https://www.pugetsystems.com/labs/articles/Core-i7-7820X-vs-Core-i9-7900X-Do-PCI-E-Lanes-Matter-For-GPU-Rendering-1030/
 
Dec 6, 2017


Really appreciate your reply & simple example @TJHooker.
You're absolutely right, an extra second or two is not a big deal just for loading the scene data. However, my thinking is... if I'm spending the kind of money it takes to buy a 4-card system, it just doesn't feel like money well spent if I allow the CPU to create any kind of bottleneck (just for the sake of a few hundred dollars). So I'm going to go for a CPU with at least twice the PCIe lanes.
I have learned a few things from this.
Thank you friend.