Memory And Bandwidth
DMI And Bandwidth
Another potential issue is the interface between the CPU and PCH. DMI 3.0 is essentially equivalent to a four-lane PCIe 3.0 link, offering roughly 4 GB/s of bandwidth in each direction. All I/O from your USB-attached thumb drive, SATA-based SSD, and gigabit Ethernet adapter goes through the PCH and across that interface before landing in system memory and, eventually, reaching the CPU or GPU.
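That "roughly 4 GB/s" figure falls out of the PCIe 3.0 link parameters. Here's a back-of-the-envelope sketch (it ignores packet and protocol overhead, so real-world throughput lands a bit lower):

```python
# Estimate DMI 3.0 per-direction bandwidth from PCIe 3.0 link parameters.
# DMI 3.0 is electrically equivalent to a x4 PCIe 3.0 link:
# 8 GT/s per lane with 128b/130b line encoding.
LANES = 4
TRANSFERS_PER_SEC = 8e9   # 8 GT/s per lane
ENCODING = 128 / 130      # 128b/130b encoding efficiency

bytes_per_sec = LANES * TRANSFERS_PER_SEC * ENCODING / 8
print(f"{bytes_per_sec / 1e9:.2f} GB/s per direction")  # ~3.94 GB/s
```

Multiplying that out gives about 3.94 GB/s each way, which marketing rounds to 4 GB/s.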
Multiple devices connected to the PCH and used simultaneously have to compete for that bandwidth. Intel claims contention shouldn't be as severe now that third-gen DMI doubles the previous generation's peak throughput, but it remains a plausible concern. That's one reason you probably wouldn't want a multi-GPU configuration on a chipset like H170, which can't divide the CPU's PCIe lanes between multiple graphics cards. It's also one of the reasons Nvidia doesn't allow SLI across four-lane links.
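To get a feel for the contention problem, it's enough to add up typical peak rates for PCH-attached devices and compare them to the DMI ceiling. The figures below are rough, commonly cited peaks, not measurements from any specific system, and the hypothetical second GPU on a x4 PCH link illustrates the SLI point above:

```python
# Illustrative contention check: aggregate peak demand from PCH-attached
# devices vs. the ~3.94 GB/s DMI 3.0 ceiling. Device figures are rough,
# typical peaks (assumptions, not benchmarks).
DMI_CEILING_GBPS = 3.94

devices_gbps = {
    "SATA 6Gb/s SSD": 0.55,        # ~550 MB/s practical peak
    "USB 3.0 thumb drive": 0.45,   # ~450 MB/s practical peak
    "Gigabit Ethernet": 0.125,     # 1 Gb/s line rate = 125 MB/s
    "GPU on PCH x4 link": 3.94,    # hypothetical second graphics card
}

total = sum(devices_gbps.values())
print(f"Aggregate peak demand: {total:.2f} GB/s")
print(f"Exceeds DMI ceiling: {total > DMI_CEILING_GBPS}")
```

Of course, devices rarely all hit peak throughput at once, which is Intel's argument; the sketch just shows how quickly worst-case demand can outrun the link.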
Starting with Skylake, Intel added DDR4 support to its memory controller. But because the technology was still fairly new, the company retained DDR3 support as well, easing adoption of its most modern platform.
Don't take that to mean any DDR3 module will work with Skylake, though. Only DDR3 operating at or below 1.35V is officially supported, and using DDR3 at higher voltage levels could damage the CPU's integrated memory controller.
Several board vendors list support for RAM operating at higher voltage levels, and you may not run into problems using modules rated for 1.5 or 1.65V, but Intel doesn't recommend it. Much of the damage from overvolting materializes slowly over time through electromigration. As such, you'll want to carefully weigh the risks of dropping older modules into a Skylake-based system, provided you have a motherboard with DDR3 slots at all.