Hi,
I'm thinking of building a 2- or 4-GPU system to run deep learning neural networks on. Deep learning requires a great amount of device-to-device PCI Express bandwidth. I've read that device-to-device bandwidth is largely determined by the PCIe switch on the motherboard. If you have a motherboard with four x16 slots and one PCIe switch connecting them all, then each device should theoretically get a full 16 lanes to every other device.
How do I determine which motherboards have the best PCIe switches for a deep learning machine that requires maximum device-to-device bandwidth?
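For context on the numbers involved, here is a small sketch of the theoretical per-direction PCIe bandwidth by generation and lane count, using the standard transfer rates and line-code overheads (8b/10b for Gen1/Gen2, 128b/130b for Gen3); the function name is just for illustration:

```python
# Theoretical one-direction PCIe bandwidth per link.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0}        # giga-transfers/s per lane
ENCODING = {1: 8/10, 2: 8/10, 3: 128/130}  # line-code efficiency

def pcie_bandwidth_gb_s(gen, lanes):
    """Theoretical one-direction bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return GT_PER_S[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

print(round(pcie_bandwidth_gb_s(3, 16), 2))  # Gen3 x16: ~15.75 GB/s
print(round(pcie_bandwidth_gb_s(3, 8), 2))   # Gen3 x8:  ~7.88 GB/s
```

So a slot that electrically drops from x16 to x8 (as many consumer boards do when multiple GPUs are installed) halves the theoretical peer-to-peer ceiling, which is why the switch/lane topology matters.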