Building an HPC for virtualized gaming

I have no experience building servers.

I need some help finding the best value for my budget of under $1,500 (excluding GPUs).
I can start off with just 1 CPU or less RAM to keep costs under budget.
The system does not need to be overkill, or underpowered either; just enough for the job, with a little bit of room to expand its capabilities.

Tasks that this HPC will handle:

Windows Server 2008 R2 w/ Hyper-V as the OS
-Runs at least 50 virtual machines fluidly; any freezing or hang-ups cannot be tolerated. (50 is just an arbitrary minimum target; the more the better.)
-For testing purposes; the game is not GPU-intensive but may require decent CPU power
-Passing the GPU straight through to the VM (no virtualized GPU)
-Each virtual machine requires a minimum of 1,500 MB of RAM
-Unless there is a way to make linked clones in Windows Server 2008, I will need around 15-20 GB of hard drive space per VM, and will somehow need to overcome the IOPS bottleneck with a low-cost solution
-Needs to be 100% stable. I would like to avoid any troubleshooting nightmares.
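As a rough sanity check on the list above, the aggregate requirements can be worked out directly from the per-VM figures given (a back-of-the-envelope sketch; the 20 GB figure is just the upper end of the 15-20 GB estimate):

```python
# Rough aggregate sizing for the 50-VM target, using the per-VM
# figures from the list above (1500 MB RAM, up to 20 GB disk each).
vm_count = 50
ram_per_vm_mb = 1500
disk_per_vm_gb = 20  # upper end of the 15-20 GB estimate

total_ram_gb = vm_count * ram_per_vm_mb / 1024
total_disk_gb = vm_count * disk_per_vm_gb

print(f"RAM:  {total_ram_gb:.1f} GB, plus host/hypervisor overhead")
print(f"Disk: {total_disk_gb} GB, before any linked-clone savings")
```

That is roughly 73 GB of RAM and a full terabyte of disk at the high end, which is what pushes this build toward a dual-socket board with plenty of DIMM slots.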

This will require a dual-socket board,
and someone recommended that I use this CPU:

Intel Xeon E5-2620 Sandy Bridge-EP 2.0GHz (2.5GHz Turbo Boost) 15MB L3 Cache LGA 2011 95W Six-Core Server Processor

and a barebones server such as this:

I am having trouble figuring out the best place to purchase a barebones server; I don't know if Newegg is the best option for server parts. I also read somewhere that the motherboard in this server does not support six-core CPUs, but I am having trouble verifying this because the Supermicro site is down or something. Not to mention that they have so many models, which makes it even more confusing. Also, these barebones servers don't come with many fans, which makes me want to use my own chassis.

Is it cheaper to put together each part myself?
If so, I have no idea where to look, or what to look for. I'm already having trouble finding a motherboard that has room to fit 2 CPU heatsinks.
I also have no experience with where to purchase ECC RAM, or what specs to look for.

Preferably, I would like to keep costs under $1,000 if at all possible, so I have enough left over for the 2nd CPU and/or more RAM.

Thanks for taking the time to read this, your help is greatly appreciated!
  1. Someone recommended Hyper-V to me, as VMware is horrible for this task. I know it definitely engages the GPU more than VMware.

    Oh yes, RemoteFX will be enabled

    Is there a better solution?
  2. I don't know that there is a solution for what you're wanting to do. Most virtualization workloads aren't GPU-related tasks. Maybe someone from the community will have a better idea for you, but I'm not sure that it can be done.

    I may be completely wrong, and RemoteFX might be able to do this. You also might have better luck once Server 2012 is available, since they are expanding what RemoteFX can do.
  3. RemoteFX will handle this; I forgot to mention that. Yeah, I will definitely look into Server 2012.
  4. declensions said:
    RemoteFX will handle this; I forgot to mention that

    RemoteFX also lists a 12-virtual-machines-per-GPU limitation.

    I see now it requires workstation cards. I will have to defer to someone else on that; I have no experience with those cards.
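Taking that tested 12-VMs-per-GPU figure at face value, the GPU count needed for the original 50-VM target follows directly (a sketch; actual limits may differ, e.g. with workstation cards):

```python
import math

vm_target = 50     # minimum target from the original post
vms_per_gpu = 12   # RemoteFX tested limit mentioned above

gpus_needed = math.ceil(vm_target / vms_per_gpu)
print(gpus_needed)  # 5 GPUs at the tested limit
```

So at the tested limit, hitting 50 VMs would take five cards, which constrains the motherboard and case choice even further.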
  5. Yes, at least 3 GPUs will be used. I'm having trouble finding a barebones system that can fit more than 2 cards. Perhaps a workstation card is better.

    Which is why I am wondering whether it is cheaper/better to build the system from scratch.
  6. declensions said:
    Yes, at least 3 GPUs will be used. I'm having trouble finding a barebones system that can fit more than 2 cards. Which is why I am wondering whether it is cheaper/better to build the system from scratch.

    Might want to look into EVGA's Classified series of motherboards.

    EVGA Classified SR-X

    EVGA Classified SR-2

    I would build from scratch. This would allow you to choose where and what your compromises are. Also, as you have found, most barebones systems aren't designed to hold multiple GPUs. You can then choose a case that offers the cooling required for both those processors and the graphics cards.
  7. I am hoping for a little bit more information than just individual components.
    For example, I will find a CPU that I like, but the motherboard will be incompatible with the chassis, or the motherboard will not support six-core CPUs (I do not understand why there are dual-socket LGA 2011 motherboards that do not support six cores), and I have to search multiple stores for compatible parts.
    Is there a one-stop-shop location where I can source all of the server parts I need?

    I would like a little more advice from others with experience. Thank you!
  8. Or please direct me to a forum that is more suitable for my questions.

    But I do appreciate all the help you've given thus far.

    I require at least 50 GB of ECC RAM and am having trouble sourcing it. I'm unsure what DIMM capacity is the cheapest/best.

    With my budget in consideration, I would like to know if this is feasible, as I do not want to end up with a useless piece of metal furniture,
    or overspend on any parts. I have looked at the SR-X before, but I do not require that many PCIe slots; I have seen some dual-socket LGA 2011 motherboards for half the cost with higher maximum RAM support, but cannot find one that supports six-core CPUs.
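On the 50 GB ECC RAM question: since server DIMMs come in power-of-two capacities and Sandy Bridge-EP has four memory channels per socket, a 50 GB minimum effectively rounds up. A sketch of the arithmetic, assuming common 8 GB registered ECC DIMMs (the DIMM size here is an assumption for illustration, not a recommendation):

```python
import math

ram_target_gb = 50   # minimum from the post above
dimm_size_gb = 8     # assumed: a common ECC RDIMM capacity for LGA 2011
channels = 4         # Sandy Bridge-EP memory channels per socket

dimms = math.ceil(ram_target_gb / dimm_size_gb)   # 7 DIMMs would hit 56 GB
# Keep the DIMM count a multiple of the channel count so channels stay balanced.
dimms = math.ceil(dimms / channels) * channels
print(dimms, "x", dimm_size_gb, "GB =", dimms * dimm_size_gb, "GB")
```

In other words, with 8 GB DIMMs the practical configuration is 8 sticks for 64 GB, which conveniently also covers the ~73 GB needed if the full 50-VM target is ever reached on a second pass of upgrades.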
  9. RemoteFX got me very interested, but I have yet to find any actual reviews using it in a practical situation with benchmarks. I have the GPU lying next to the server for whenever I find a half-meaningful situation to test it; we don't really need GPU "virtualization". You should really test this on a machine you already have and get a benchmark of your usage with whatever number of VMs it can handle under their needed load. I don't see how passing the GPU to the VMs is not virtualizing it, however it may be used in the end.

    You would stay inside your budget if you start with 1 CPU and less RAM, and I don't think you can go wrong with Supermicro, though they list E5-2600 as the maximum. On the IOPS need, again, you will have to figure out whether you end up needing SSDs for this.

    We can go on endlessly with hardware recommendations vs. budget.
  10. If you cannot use a GPU in Hyper-V VMs and are limited to 2x PCIe in VMware at 12 VMs each, this has a quick ending.
  11. simon2600 said:
    If you cannot use a GPU in Hyper-V VMs and are limited to 2x PCIe in VMware at 12 VMs each, this has a quick ending.

    "Windows Server 2008 R2 with SP1 has been tested for up to 12 virtual machines per GPU, for a total of 24 virtual machines on two physical GPUs"

    Did you mean RemoteFX and not VMware?
    It has been tested at 12, but I think you can run more. Either that, or I hear a workstation card can run as many, with VRAM as the limiting factor.

    I have this barebones server in mind; it is cheaper than putting together the individual parts myself, and has mostly what I am looking for, except that there is only room to fit 2 GPUs in the case:

    If just these parts can be done cheaper, let me know.