Re: Full 3D gaming in virtual machine

Djhg2000

Distinguished
May 16, 2009
This was originally intended as a reply to this thread http://www.tomshardware.co.uk/forum/336186-15-full-gaming-virtual-machine but when I tried to submit my comment I found that the thread had been closed (I found it strange that I could still get to the point of writing the comment, though :ouch: ). I had so much new info to provide that I simply couldn't let it go just because of a closed thread, and I'm sure at least someone will find this very useful.

TL;DR: Using Xen to virtualize Windows, you can pass a real graphics card through to the VM and get close to native gaming performance. The original thread was closed, so I started a new one.

--- The comment I couldn't post -------
Sorry for bumping, I had completely forgotten about this thread. I thought I should give everyone an update now that I've become more familiar with Xen, including some tasty performance improvements.


First off, I'm still using the Debian Sid repo version of Xen, so everything should apply even to those who don't want to compile anything.


Second, I bought an ASUS Radeon HD 7950 a few months ago (yet another coincidence, great minds think alike) and I can verify that it does indeed work with VGA passthrough. I have experienced a few graphics driver crashes (Catalyst 12.10), but never while gaming, so I suspect something related to power management. It doesn't happen often enough to debug and is nothing but a minor annoyance anyway.


Then I figured I should repost my config. It has changed a little due to the upgraded graphics card and the switch to Windows 7, but if you look closely you'll also find something else.
# Xen configuration file
# Written for Xen 4.1
#
# Currently boots Windows 7 from /dev/sdb
##################################################
## Reconfigured for parallel boot with WinVista ##
##################################################


# Name of machine
name = "Win7"

# RAM (MB)
memory = 3000

# CPU
vcpus = 6
cpus = "2-7"

# Disk drive(s)
disk = [ 'phy:/dev/sdb,ioemu:hda,w', 'phy:/dev/sr0,ioemu:hdc:cdrom,r' ]
#cdrom='/dev/sr0'

# Boot device (c = HDD, d = CDROM)
boot = 'dc'

# Chainloader and emulation layers
kernel = '/usr/lib/xen-default/boot/hvmloader'
device_model = '/usr/lib/xen-default/bin/qemu-dm'
builder = 'hvm'
vnc = 1
sdl = 0
acpi = 1
apic = 1
stdvga = 0
serial = 'pty'

# Networking
vif = [ 'mac=00:16:3e:09:cb:15, type=ioemu, bridge=xenbr0' ]

# PCI (the fun part)
#gfx_passthru=1 # Enable when Xen 4.2 hits the repo
pci = [ '01:00.*', '05:00.0', '08:00.0']

# Temporary solution until I've figured out mouse forwarding
usbdevice='tablet'
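
A quick note on the pci = [...] line: those addresses are specific to my motherboard, so yours will almost certainly differ. Below is a rough sketch of how to look them up on the host (the grep patterns are only examples, and the last command only lists devices that have already been handed over to pciback):

#!/bin/bash
# Rough sketch: find the PCI addresses (bus:device.function) for the
# pci = [...] list above. The grep patterns are only examples.

# The graphics card; the GPU and its HDMI audio function normally sit on
# the same bus, which is why the '01:00.*' wildcard works above
lspci | grep -iE 'vga|audio'

# USB controllers, in case you want to pass a whole controller to the guest
lspci | grep -i usb

# Devices already hidden behind pciback show up here as assignable
xm pci-list-assignable-devices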

As you can see above, I can now boot up my old Vista install for my friends to use when they're over, totally eliminating the need to borrow another computer! I tested this for around a week this past summer and it worked wonderfully, except for my still pretty buggy USB3 controller, which can take a few tries before it works. This could be a driver issue, and it doesn't happen very often for the first VM. I took the last controller (the two bottom USB3 ports on my motherboard) away and gave it to the Vista VM. I will probably get another USB controller eventually, but I wanted to try this out to see if it worked.


Then, let's move on to launching the VM. I'm using the following script to launch my Windows 7 VM.
#!/bin/bash

VM_NAME=Win7

# Make sure it's not already running
if xm list | grep "$VM_NAME" ; then
    echo "$VM_NAME is running, aborting"
    exit 1
else
    echo "$VM_NAME is not running, proceeding"
fi

echo "Setting up PCI devices"
pci_hide_setup_Win7

echo "Booting $VM_NAME"
if xm create "$VM_NAME" ; then
    echo "$VM_NAME booted"
    # Scroll lock LED to indicate machine has booted
    #xset led 3
else
    echo "An error occurred, exit status $?"
fi

if xm list | grep "$VM_NAME" ; then
    echo "Setting up VCPU pinning for $VM_NAME"
    xm vcpu-pin "$VM_NAME" 0 2
    xm vcpu-pin "$VM_NAME" 1 3
    xm vcpu-pin "$VM_NAME" 2 4
    xm vcpu-pin "$VM_NAME" 3 5
    xm vcpu-pin "$VM_NAME" 4 6
    xm vcpu-pin "$VM_NAME" 5 7
    exit 0
else
    echo "$VM_NAME is not running, aborting"
    exit 1
fi

This script is accompanied by a couple of other scripts that set up the PCI hiding; here the one being called is "pci_hide_setup_Win7" (a sketch of what such a script can look like follows after the list below). Let's take it from the top:

1. This script can be used for any VM by changing the VM_NAME variable specified at line 3 (but don't forget the PCI hide script if you need one)
2. You can uncomment the line "xset led 3" if you want an indication to remind you that you have started the VM
3. This is the performance-enhancing bit; by pinning the VCPUs to physical CPU cores, I saw a massive gain in 3DMark Vantage graphics performance (up by just over 8000 points)
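
I haven't posted pci_hide_setup_Win7 itself, but a minimal sketch of what such a script can look like is below. It assumes the xen-pciback module is loaded and must be run as root, and the device list matches my pci = [...] line above (the GPU's two functions plus the two extra devices), so adjust it for your own hardware:

#!/bin/bash
# Minimal sketch of a PCI hide script, in the spirit of pci_hide_setup_Win7.
# Assumes the xen-pciback module is loaded; run as root.
# The device list matches my pci = [...] line above; adjust it for your hardware.

DEVICES="0000:01:00.0 0000:01:00.1 0000:05:00.0 0000:08:00.0"

for DEV in $DEVICES ; do
    # Detach the device from whatever host driver currently owns it
    if [ -e /sys/bus/pci/devices/$DEV/driver ] ; then
        echo -n "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
    fi
    # Hand the device over to pciback so Xen can pass it to the guest
    echo -n "$DEV" > /sys/bus/pci/drivers/pciback/new_slot
    echo -n "$DEV" > /sys/bus/pci/drivers/pciback/bind
done

# Sanity check: the devices should now be listed as assignable
xm pci-list-assignable-devices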

Before VCPU pinning:
3DMark Vantage http://www.3dmark.com/3dmv/4370912
3DMark 11 http://www.3dmark.com/3dm11/4787321

After VCPU pinning:
3DMark Vantage http://www.3dmark.com/3dmv/4384454
3DMark 11 http://www.3dmark.com/3dm11/4858857

As you can see, the performance gain isn't just within the margin of error. In both cases my dom0 was sitting idle at the desktop. The strange part is how it has affected the games; some perform noticeably better while others seem the same. I haven't been able to definitively figure out why, but my guess at the moment is that it comes down to how well they're threaded. Without pinning, the Xen hypervisor could move the VCPUs around to balance out the load; now that it can't, every VCPU stays where it is, which in turn results in fewer context switches for the physical CPU cores.

You will need to adjust the pinning according to your setup, but the lines you are looking for look like this: "xm vcpu-pin $VM_NAME 0 2", where 0 is the VCPU core and 2 is the physical CPU core. It's also a good idea to pin the cores for your dom0; in my case that's "dom0_max_vcpus=8 dom0_vcpus_pin", added to the GRUB_CMDLINE_XEN line in /etc/default/grub.
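
To make that concrete, here's roughly what the change could look like on a Debian dom0 (just an illustration; your GRUB_CMDLINE_XEN line may already contain other options):

# /etc/default/grub (excerpt): give dom0 8 VCPUs and pin them at boot
GRUB_CMDLINE_XEN="dom0_max_vcpus=8 dom0_vcpus_pin"

# Regenerate the boot configuration and reboot for it to take effect:
#   update-grub

# Once a VM is up, verify that both dom0 and the guest VCPUs are pinned:
#   xm vcpu-list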


As a final word: I'll make an attempt to reinstall my Creative X-Fi sound card soon to see if it works better now that Xen has had a few updates and I've switched my main VM to Windows 7. It is possible that this issue could be resolved or reduced through VCPU pinning, although I doubt it.


(Again, sorry for bumping :whistle: )
--- End of the comment I couldn't post -------

If anyone has any questions about virtualizing with Xen like this I'd be happy to answer them.

To mods:
Categorizing this thread wasn't easy, by the way; it could probably fit in "Homebuilt Systems", "Linux/Free BSD", "Windows 7", "Gaming General", "Graphic & Displays", etc., but I figured those who are really interested in this hang out in "Graphic & Displays". Also, the original thread was here, so I guess it should be fine?
 

Matt Cannon

Honorable
Apr 26, 2013
This is all very good when using one or two virtual machines.

What I am trying to do is run a gaming cafe; instead of using 16 machines, I want to use one mean server and then just have 16 virtual machines booting to 16 thin clients.

Do you guys think this is possible? The server I am thinking about is the EVGA board with dual CPU sockets, 192GB RAM, three 690s, and two i7 Extremes.

Any thoughts on this?
 

Djhg2000

Distinguished
May 16, 2009


I don't think it's possible for 16 of them with reasonable performance, but maybe about 3 or 4 per physical machine. The limiting factor would be PCIe lanes; you won't get enough bandwidth to host 16 graphics cards and USB controllers (the USB controllers aren't needed for thin clients since they run over the network, but then network bandwidth would become an issue as well).

I found that 2 gaming VMs work pretty well; in my case it was one Windows 7 VM and a leftover Vista VM for demonstration purposes. We played some co-op Left 4 Dead 2, watched movies, browsed the web, etc. It all worked just like on a physical machine. I don't have a powerful enough 3rd graphics card to test with, but even with my setup the bandwidth was limited to 8x PCIe 2.0 lanes per card; adding one more card would make that 4x.

With the EVGA board I guess you could make that 4 machines, as you would get twice the PCIe lanes to play with. However, the board I think you're talking about doesn't support onboard graphics, which means you'd need to SSH into the host machine to boot your VMs. If you're going the directly connected route you'll want to add 2 USB controllers as well and stick with 3 VMs (or 3 USB controllers and 4 single-slot graphics cards).

Also, 32GB RAM per host would be sufficient with 4 VMs at 7GB each (you'll want a bit of a margin until you're done tweaking the system). This would also cut down the budget a bit, since 4x32GB=128GB is way cheaper than 192GB. On the other hand you'd need 4 separate machines to host all 16 VMs, but fear not: you could simply clone the first host and change the hostname on the other three.


Reading your post again, I don't think 3 690s would cut it for 16 VMs, especially since each VM needs a card of its own.
 

walkeith25

Honorable
Jun 16, 2013
I would appreciate any help
(attached screenshots: 5.jpg, 06.jpg, 07.jpg)