I may have the wrong idea about what VT-d offers. I was hoping it would let me set up a couple of VMs under VMware or VirtualBox and, as long as my motherboard's chipset and CPU supported VT-d and it was enabled in the BIOS, give all of my VMs direct access to resources like video cards, with that direct access transparently enabled whenever a VM became the active VM.
But this Intel document says that one of the downsides of using VT-d is that the direct access is limited to one VM.
...two drawbacks to using DDA (with today's state of S/W support)
Unavailability of the DDA device for use by other VMs
Limited migration support for VMs with DDA
Do I understand correctly that VT-d would have to be disabled in VM1 before it could be enabled in VM2, so that one could not simply switch between VMs?
Or does VM1 also have to be stopped?
Keep in mind, if needed (and I assume it will be), you MUST verify that both the CPU and the motherboard offer it. In particular, motherboard manufacturers often both disable it AND don't offer the setting in their BIOS.
You will have to decide on your exact setup beforehand and carefully research the requirements. What you posted is too vague to draw a conclusion.
Thanks for the reply. I am hoping that Intel doesn't disable VT-d in their own motherboards. :-) But my central question is how to understand the caveat that Intel's doc offers: "With today's state of S/W support ... Unavailability of the DDA device for use by other VMs". Can onboard accelerated graphics be a DDA device? In other words, does VT-d give a VM direct access to onboard accelerated graphics? If you have two VMs, can only one of them be given direct access to the accelerated graphics? I suppose these are questions for VirtualBox or VMware, since the Intel doc points the finger at the S/W.
When you use an IOMMU (which is what Intel's VT-d is all about) you can have several virtual machines running. The IOMMU makes it possible to pass PCI/PCIe hardware directly through to the VMs. That means if you decide to pass a graphics card through to one of your VMs, it becomes unavailable to the other VMs and to the host. You can, of course, hot-plug/unplug the graphics card while all systems are running and switch which system it is attached to, although the operating system of the VM it was originally attached to is not likely to appreciate the GPU suddenly disappearing.
So, technically you can have more complex IOMMU configurations where you pass through several hardware components to different VMs. The only caveat is that once the hardware is passed through it becomes unavailable to all systems but the VM to which you have passed it through.
Not all hardware can be passed through. When you pass a piece of hardware through, it must be reset somehow. Normally, when a driver initializes, say, a hard drive controller, the controller goes through a test sequence at boot. Once that sequence is completed, the controller may get "confused" if another driver (in the VM) suddenly reinitializes it.
To alleviate this confusion it is imperative that the hardware component can be reset by software. This feature is called function level reset (FLR), where the hardware is reset at the function level. Another way to reset the hardware is by cutting the Vcc voltage to the PCIe slot, which can be done via ACPI; this reset type is called D3-D0. Again, not all hardware supports it, particularly hardware with an auxiliary power connector that doesn't rely on the slot's Vcc voltage. VMware has an HCL for VM DirectPath listing reset support for different hardware components.
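You can check whether a PCIe device advertises FLR by looking at the `DevCap:` line of `lspci -vv` output, where the flag appears as `FLReset+` (supported) or `FLReset-` (not supported). A small sketch that scans such a dump; the function name is mine:

```python
import re

def supports_flr(lspci_vv_output):
    """Return True if an `lspci -vv` dump advertises Function Level Reset.

    PCIe Device Capabilities are printed on a 'DevCap:' line, with FLR
    shown as 'FLReset+' when supported and 'FLReset-' when not.
    """
    return bool(re.search(r"\bFLReset\+", lspci_vv_output))
```

In practice you would feed it the output of `lspci -vv -s <slot>` for the device you want to pass through; a `FLReset-` device may still be resettable via the D3-D0 power transition mentioned above.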