Fine, but Intel are just playing air guitar here, acting as a spoiler.
We have already seen the sudden appearance of similar "me too" white papers from Nvidia on GPU MCMs.
What is the objective here? It's to boost high-bandwidth (HB) interlink performance where PCIe 3 struggles, i.e. linking storage/memory/GPU/CPU/NVMe.
Clearly, linking the GPU is the greatest mainstream challenge for PCIe. Intel do not make a decent GPU, so they are back to square one: they have to get Nvidia to play ball and supply a chiplet for Intel's MCM.
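To put rough numbers on that bandwidth gap, here is a back-of-envelope sketch. The figures are just the published PCIe 3.0 x16 rate and Vega 64's HBM2 bandwidth, used purely for illustration; they are not taken from anyone's MCM white paper.

```python
# Back-of-envelope comparison: a PCIe 3.0 x16 slot vs a Vega GPU's local HBM2.
# Published figures only; illustrative, not tied to any vendor's MCM plans.

GT_PER_S = 8.0                      # PCIe 3.0 signalling rate per lane
ENCODING = 128 / 130                # 128b/130b line-encoding overhead
LANES = 16

pcie3_x16_gb = GT_PER_S * ENCODING * LANES / 8   # ~15.8 GB/s each direction
hbm2_vega64_gb = 483.8                           # Vega 64 local memory bandwidth

print(f"PCIe 3.0 x16 link : ~{pcie3_x16_gb:.1f} GB/s")
print(f"Vega 64 HBM2      : ~{hbm2_vega64_gb:.1f} GB/s")
print(f"local memory is   : ~{hbm2_vega64_gb / pcie3_x16_gb:.0f}x faster than the slot")
```

That roughly 30x gap between the slot and the GPU's local memory is the "HB" problem a fabric-style interconnect is chasing.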
With the imminent release of the Raven Ridge APU, AMD will have, IN PRODUCTION, examples of these resources interlinked separately and independently of the PCIe bus, using the Infinity Fabric bus.
Maybe not all in one product, but working examples of the problem areas being covered well and economically, either now or soon.
There is not a lot of other call for chiplets. PCIe handles the low-bandwidth housekeeping pretty well as is. The real (HB) problems will have been sorted for millions of happy AMD customers long before Intel's MCM is a reality, given the usual five years a fresh product takes.
For example, the Radeon Pro SSG Vega GPU card has 2TB of RAID 0 NVMe ~storage/cache extender and 16GB of GPU RAM, all linked on the discrete Vega's fabric bus.
Similarly, on AMD's imminent APU, we see the Vega GPU and Zen CPU interlinked using Infinity Fabric.
The age-old problem of teaming multiple processors largely reduces to one of maintaining coherency between them.
There is little doubt the fabric works excellently at teaming CPUs, judging from Ryzen, and the fabric has surely always been planned to encompass both AMD's CPUs and their GPUs.
AMD have all the ingredients and skills, and are not many steps away from the killer HEDT+ product: an APU MCM with a multi-core CPU, multiple GPUs, HBM2 cache and NVMe ~storage/cache.
If high-end products require the space of two MCMs, a two-socket mobo could accommodate that.