Best $5-6k custom build for VMs

vmwarerig
May 3, 2017
I want to build a workstation that will run up to 5 virtual machines using VMware Workstation (as opposed to acting as an ESXi hypervisor). The host OS and most of the VMs will be running Windows 10, although there will be at least one Linux VM running dozens of Cisco network device emulators (sort of like GNS3). I won't be doing any gaming or video/photo editing. Other than the Cisco stuff, the host and the other VMs will mainly be used for web browsing, MS Office applications, and playback of video files. I currently need 6 displays at 1920x1080 resolution, although the number of displays and/or the resolution could increase in the future.
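One wrinkle I've already run into in my research: if the Cisco emulators inside the Linux VM are themselves QEMU/KVM-based (as the IOSv/CSR-style images in GNS3 are), the guest needs VT-x exposed to it. VMware Workstation has a "Virtualize Intel VT-x/EPT" checkbox for this, which maps to the vhv.enable setting in the VM's .vmx file. A minimal sketch of flipping it from Python, using a hypothetical path (the VM must be powered off first):

```python
# Enable nested VT-x ("Virtualize Intel VT-x/EPT") for a VMware Workstation
# guest by setting vhv.enable in its .vmx file. Edit only while the VM is off.
from pathlib import Path

vmx = Path("C:/VMs/cisco-lab/cisco-lab.vmx")  # hypothetical path

lines = [ln for ln in vmx.read_text().splitlines() if not ln.startswith("vhv.enable")]
lines.append('vhv.enable = "TRUE"')
vmx.write_text("\n".join(lines) + "\n")
print("vhv.enable set; the setting takes effect on the next power-on.")
```

The same option is reachable in the GUI under the VM's processor settings, so the script is only a convenience.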

This is what I've come up with so far, but I would like to hear thoughts/suggestions on my selections. The big question mark right now is the memory, which I'll get to below. Here is my current list:


CPU: Intel Core i7-6950X 3.0GHz 10-Core Processor ($1649.99 @ Newegg)
CPU Cooler: Noctua NH-D15 82.5 CFM CPU Cooler ($89.95 @ Newegg)
Motherboard: Asus X99-DELUXE II ATX LGA2011-3 Motherboard ($399.99 @ Newegg)
Memory: G.Skill Aegis 64GB (4 x 16GB) DDR4-2400 Memory ($421.99 @ Newegg)
Storage: Samsung 960 Pro 1.0TB M.2-2280 Solid State Drive ($629.99 @ Newegg)
Storage: Crucial MX300 1.1TB 2.5" Solid State Drive ($289.99 @ Newegg)
Two Video Cards: EVGA GeForce GTX 1060 3GB FTW+ GAMING Video Card ($214.98 @ Newegg)
Case: Corsair 750D Airflow Edition ATX Full Tower Case ($162.98 @ Newegg)
Power Supply: EVGA SuperNOVA G3 850W 80+ Gold Certified Fully-Modular ATX Power Supply ($137.98 @ Newegg)
Operating System: Microsoft Windows 10 Pro OEM 64-bit ($142.98 @ Newegg)
UPS: CyberPower CP1500PFCLCD UPS ($199.86 @ Newegg)

Total: $4555.66



My main question is over the memory. I don't think I want to overclock either the CPU or memory because it doesn't seem like the performance benefits for my needs would be worth the time spent tinkering around. It's much more important to me that the machine operates smoothly (no blue/black screens, errors, compatibility issues, etc.) and that I don't have to spend a lot of time troubleshooting once it's built. I know Xeon processors with ECC memory would be very stable, but I don't know if I can justify the extra cost since this won't technically be a server.

So the question is, which memory is best for my setup? If I understand Intel's website correctly (http://ark.intel.com/products/94456/Intel-Core-i7-6950X-Processor-Extreme-Edition-25M-Cache-up-to-3_50-GHz), I should be using DDR4 2400/2133 if I'm not trying to overclock. I looked through the G.Skill and Corsair QVL lists, and the only 64GB 2400MHz kit I could find that specifically mentioned the Asus X99-DELUXE II was the G.Skill Aegis (https://www.gskill.com/en/product/f4-2400c15q-64gis). I haven't seen it mentioned in the forums or on sites like Falcon Northwest or Origin PC, so I'm worried there is something negative about it that I'm not aware of. I'm more interested in stability than peak performance, but am I missing out on a huge performance increase by choosing a dual channel kit when my other components are theoretically capable of quad channel? Or is complex overclocking required in order to get quad channel? Will 2400MHz make a big difference compared to 3000MHz or more? Are there any better memory options for my setup? Some additional cost isn't an issue if it's reasonable.
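To put rough numbers on my own question: theoretical peak bandwidth is just channels x 8 bytes per transfer x the transfer rate, so quad channel doubles the ceiling over dual channel at the same speed. My back-of-the-envelope sketch (real workloads see far less than these peaks):

```python
# Theoretical peak DDR4 bandwidth: channels * 8 bytes/transfer * MT/s.
# Real-world throughput is far lower; this only shows relative scaling.
def peak_gbs(channels: int, mts: int) -> float:
    return channels * 8 * mts / 1000  # GB/s

for channels in (2, 4):
    for mts in (2133, 2400, 3000):
        print(f"{channels}-channel DDR4-{mts}: {peak_gbs(channels, mts):5.1f} GB/s")
```

From what I've read, a four-module kit installed in the correct slots runs in quad channel without any overclocking, so the "dual channel kit" label may matter less than I feared.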

I am also willing to use a different motherboard or CPU if recommended.
 
Solution
Looks good, but wait for the LGA 2066 X299 platform coming this June. A 12-core Skylake-X is coming and should be priced lower to compete with AMD. AMD, by the way, is also bringing its 16-core Ryzen parts soon, so a little wait may get you much better performance for the price.
 
Running more VMs means you want multiple drives to store the individual VMs, GPUs where you need them, and a lot of cores to distribute. There's no point in running an overclockable Intel CPU, as VMs are meant for stable terminal access.

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Xeon E5-2630 V4 2.2GHz 10-Core Processor ($649.99 @ SuperBiiz)
CPU: Intel Xeon E5-2630 V4 2.2GHz 10-Core Processor ($649.99 @ SuperBiiz)
CPU Cooler: Noctua NH-U12S 55.0 CFM CPU Cooler ($57.89 @ OutletPC)
CPU Cooler: Noctua NH-U12S 55.0 CFM CPU Cooler ($57.89 @ OutletPC)
Motherboard: Supermicro MBD-X10DAX EATX Dual-CPU LGA2011-3 Motherboard ($439.72 @ Amazon)
Memory: Kingston 64GB (4 x 16GB) Registered DDR4-2133 Memory ($615.99 @ Amazon)
Storage: SK hynix SL308 250GB 2.5" Solid State Drive ($88.99 @ SuperBiiz)
Storage: SK hynix SL308 250GB 2.5" Solid State Drive ($88.99 @ SuperBiiz)
Storage: SK hynix SL308 250GB 2.5" Solid State Drive ($88.99 @ SuperBiiz)
Storage: SK hynix SL308 250GB 2.5" Solid State Drive ($88.99 @ SuperBiiz)
Storage: SK hynix SL308 250GB 2.5" Solid State Drive ($88.99 @ SuperBiiz)
Storage: SK hynix SL308 250GB 2.5" Solid State Drive ($88.99 @ SuperBiiz)
Video Card: AMD Radeon Pro WX 5100 8GB Video Card (2-Way CrossFire) ($389.99 @ SuperBiiz)
Video Card: AMD Radeon Pro WX 5100 8GB Video Card (2-Way CrossFire) ($389.99 @ SuperBiiz)
Case: Lian-Li PC-A76 ATX Full Tower Case ($209.99 @ B&H)
Power Supply: SeaSonic 1050W 80+ Platinum Certified Fully-Modular ATX Power Supply ($168.99 @ SuperBiiz)
Operating System: Microsoft Windows 10 Pro OEM 64-bit ($133.49 @ OutletPC)
Total: $4297.86
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-05-03 14:04 EDT-0400
 

vmwarerig
May 3, 2017
Thanks for this suggestion. I assumed this type of build would be cost prohibitive, but clearly it can be done for the same price. It would be great if you could elaborate on why this is a better setup than the i7. I see that the main benefit would be twice as many cores for the VMs, and that alone might be justification enough. Is that your reasoning? I also see that dual CPUs would give me 80 PCIe lanes instead of 40, but why would I need more than 40?

Xeon processors would also allow ECC memory, which I understand is more stable. Is the only benefit that my host and guest VMs would require reboots less often? I realize that's very important for servers, but for my needs I could live with a reboot every once in a while. As for allowing 1TB+ of memory with Xeon vs. 128GB with Core, I don't know if I'll ever need more than 128GB.

One concern I have is that there are only a few motherboards available, and based on what I've read about them, it sounds like they're trickier to set up and geared more toward professionals. I assume I can get through most issues via forums, but there seems to be less information available online for these motherboards. It seems like I'd be more on my own and could end up spending huge amounts of time, possibly without even finding a solution. So I'm wondering how much of a factor ease of use would be in going with the Xeons.

Lastly, as elbert pointed out, the X99 has been around a long time and will soon be replaced by the X299. Will something similar be happening soon with the C612 chipset used for this type of dual Xeon E5-2630 setup? If so, what would be the benefit of waiting for the next-generation chipset? On the other hand, if the C612 is actually future proof rather than soon-to-be obsolete, that would be a big plus.

Generally speaking, I'm left guessing as to which approach to take. It feels a bit like comparing apples to oranges, and I only have the information I've been able to find online, not much hands-on experience. If there are any factors to consider that aren't easily apparent from comparing spec lists, please share them.
 
1) Yep. Double the cores for the VMs, "double" the speed! (Not actually going to double the speed, lol.)
2) The benefit of ECC RAM is so small for most users that it doesn't matter. The useful part of ECC is single-bit error detection and correction; most people will never notice the occasional flipped bit in non-ECC RAM.
3) The POST time is definitely longer with 2 CPUs and that much RAM, and it's definitely trickier to debug. But hey, any questions and you're on Tom's Hardware to ask, anyway.
4) If you're looking for something to use for VMs, just get this: (a) IPC isn't improving by 20% per generation anymore, and (b) this platform is more stable and better documented, so problems are easier to fix.
 

vmwarerig
May 3, 2017
I researched all your suggestions and the dual Xeon 2630s definitely look like the best option, as does Supermicro (as opposed to Asus) for the motherboard. They have a lot of different models and the one you chose also looks like my best bet, so thanks for that. However, I also noticed Supermicro has these "Barebones" Super Workstations that include the chassis, motherboard, heat sinks, and storage backplane, all pre-assembled (except the heat sinks, of course). The price difference compared to buying the individual components is insignificant, and it would put me at ease knowing most of the assembly was already completed by Supermicro. I've seen a bunch of lengthy posts from BambiBoom on this website recommending these barebones workstations as well, so I'm pretty much sold. My pick at this point is the 7048A-T (https://www.supermicro.com/products/system/4U/7048/SYS-7048A-T.cfm). The only issue is that none of them come with the X10DAX motherboard. There are some barebones workstations with very similar motherboards (X10DAi and X10DRi, for example), but the major feature missing from those models is SLI: the X10DAX "Supports 3-way GeForce SLI (4-way SLI support for dual GPU graphics cards)," although not CrossFire for the AMD GPUs that were recommended.

So, my question is, do I need SLI for what I'm going to be doing? Again, I won't be gaming or doing any image/video editing. My heaviest video requirement is video playback on at least 6 1920x1080 displays, but that may increase in quantity or resolution in the future. My understanding is that even web browsers and other common apps utilize GPUs now, but I could use some explanation. Also, since I'll be running Windows as the host OS, there is currently no way for me to do GPU hardware passthrough to my virtual machines (perhaps my Linux VM for the Cisco devices would've benefitted from it? I'm not sure). I suppose there's always the possibility I'd try virtual reality one day on my host OS if it was ready to support it, in which case I assume I would want SLI, but as it stands my second video card will only serve to provide more outputs for extra monitors. Can someone confirm whether my understanding of how SLI works is correct?

If there's no need for SLI, I would go with the 7048A-T barebones for $1000. But if I do need SLI, for $2000 instead of $1000 I could go with the 7048GR-TR barebones, which has a motherboard that also supports SLI (https://www.supermicro.com/products/system/4u/7048/SYS-7048GR-TR.cfm), although the downside is that this barebones workstation is more server-like and loses the "whisper quiet" rating of <27dB that the 7048A-T has. Or I could buy one of the Supermicro "Superchassis" models (745BTQ-R1K28B-SQ) and install the X10DAX motherboard myself. I like the idea of buying the barebones workstation rather than just the Superchassis because the barebones comes pre-assembled with the motherboard, but I'd be willing to forego that benefit and install the motherboard myself if SLI is really going to be worth it.

Regardless of the chassis/motherboard, I'm still having trouble figuring out how to decide on the video card(s). Given my requirements in the previous paragraph, it seems that any card(s) capable of supporting 6 displays would be suitable, without needing the latest and greatest. I have no problem with the price of the two recommended Radeon Pro WX 5100s, but I'd just like to better understand what makes them more suitable for my needs. Is it just the 4 DisplayPorts per card? There seem to be many cards that could support 6 displays (http://multimonitorcomputer.com/top-4-best-videocard-for-multiple-monitor-computers.php). Also, it seems that some cards in the same price range have higher performance benchmarks than the Radeon Pro (https://www.techpowerup.com/gpudb/2873/radeon-pro-wx-5100), so I'm just wondering what makes these the best option. I did notice that they only take up a single slot each, which I believe would leave the other PCIe slots open, whereas double-wide GPUs would not. On the other hand, if I did decide to go with the X10DAX motherboard, I believe it would be better to choose NVIDIA GPU(s) in order to take advantage of SLI, but correct me if I'm wrong.
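For my own sanity, here is the output math as I understand it, assuming 4 usable outputs per card (per the WX 5100's spec):

```python
import math

# Cards needed to drive a given number of displays, assuming each card
# can run 4 independent outputs (as the WX 5100's 4x DisplayPort can).
def cards_needed(displays: int, outputs_per_card: int = 4) -> int:
    return math.ceil(displays / outputs_per_card)

print(cards_needed(6))  # 2 cards for my six 1080p displays
print(cards_needed(9))  # 3 cards if the display count grows
```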
 
You don't need to CrossFire; the two WX 5100s were not meant to be CrossFired in this case. Those cards are there for VGA passthrough. And why did I choose those cards, you ask? Mostly for the single-slot profile and the 4 DisplayPort outputs. Unfortunately, the motherboard layout only leaves room for single-slot GPUs.

Now that I think about it, only NVIDIA supports passthrough for Linux. Maybe I should change the build to reflect that.

And yes, 4 DP per card.
 

vmwarerig
May 3, 2017
Yes, it would be great if you could suggest NVIDIA cards, if you think that would be the better option. Regarding VGA passthrough, I don't know how likely it is that I'll use Linux as the host OS, but I suppose it doesn't hurt to at least have that option down the road.

So if there was no intention to choose components that would allow SLI/CrossFire, should I assume you don't think I have any need for it? I'm curious what your thoughts are on my dilemma over the motherboard/chassis.
 
There's absolutely no need for SLI. In gaming, SLI may bring a small performance increase, no increase at all, or outright glitches. In video editing or rendering there's no SLI option either; with more than one card, the cards are simply treated as separate compute devices. And when running multiple VMs, the cards are never used in SLI: the host sees them as individual cards, one by one, and each VM client is given a single card.

Maybe these?

https://www.newegg.com/Product/Product.aspx?Item=N82E16814133624
 

vmwarerig,

RAM: The dual Xeon E5-2630 v4 and abandoning the idea of overclocking are good decisions. However, the RAM should be configured to symmetrically populate one full quad-channel bank for each CPU. That is, each CPU should have four modules: 64GB = 4X 8GB + 4X 8GB. This importantly leaves the remaining DIMM slots open to accept another 64GB, which in my view may be advisable from the start. Using 128GB will ensure that the host OS/applications and all VMs can load entirely into RAM, avoiding disk swaps under full load. The rule-of-thumb equation is 30GB for the host OS plus 20GB for each VM. So: 30GB + (5 X 20GB) = 130GB.
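Expressed as a quick calculation (the 30GB / 20GB figures are only my rule of thumb above, not measured values):

```python
# Rule of thumb: 30GB for the host OS plus 20GB per VM.
def ram_needed_gb(vms: int, host_gb: int = 30, per_vm_gb: int = 20) -> int:
    return host_gb + vms * per_vm_gb

print(ram_needed_gb(5))  # 130GB, which the 128GB configuration approximates
```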

__ Tested RAM: In a dual-processor system it is strongly advisable to stick to the motherboard maker's tested RAM list. Registered ECC memory adds a buffer between the memory controller and the DRAM and performs single-bit error correction, which makes it important that the modules' timings match what the board expects.

I recently bought ECC reg. RAM that was supplied for HP dual-Xeon servers using the same processors (E5-2690) and nearly identical motherboard specifications to the HP z620 workstation. The RAM was GP-labeled Samsung and had the same timings, but a different version number. Whenever this system booted there was a memory training error. When this happened, the system would still boot and run applications, but multiples of 8GB were subtracted from the 64GB total RAM. In Passmark Performance Test the memory score was in the 2200s.

After installing the HP workstation version, labelled with the exact HP part number and certified for the HP z620 and z820, the memory training errors disappeared and the Passmark memory scores went into the 2500s.

GPU / Multiple Monitor Support: As regards the GPU, SLI is not a way to extend multiple-monitor support, as it combines multiple GPUs and their video RAM behind a single set of outputs. If there is to be no 3D visualization, consider the NVIDIA NVS 810, which has 8 mini-DisplayPort connections that will drive 4K displays at 60Hz.

__ Dual Dissimilar GPUs: There is another, more flexible solution in which a second GPU runs an independent video driver: "SOLVED – Quadro P2000 + GTX 1080TI, CAD AND GAMING on a HP Z620 ! ! !" I would have used only two exclamation marks, but it is encouraging that it's possible to run two GPUs with different characteristics on a single system simultaneously. There is then the possibility of selecting GPUs with the optimal attributes for each task. Additionally, the GPUs may work together in GPU computing, and there may be a good argument for considering a Tesla GPU coprocessor, as it acts as an extension of the processing core count and system memory.

Supermicro Superworkstation: This is, in my view, a very good solution that not only provides very high performance and flexibility in features (e.g. the ability to fit three double-height GPUs), but is importantly an integrated system that redirects the time and effort of researching, ordering, assembling, and wiring a full component build toward setup, configuration, and use. As the decisions regarding the case/chassis, motherboard, CPU cooling, and power supply have already been engineered, the user only needs to mount the CPUs and coolers, plug in the RAM, GPU(s), and drives, then load the OS(s) and applications. The specification is generous: for example, the CPU coolers can dissipate 160W thermally, and there is extensive accommodation for drives, optionally including hot-swap bays. They are also rated for quiet running, which is a prime factor in a workstation. Supermicro are server specialists, and the Superworkstations, motherboards, CPU coolers, and power supplies are of server quality and reliability. Speed has its place, but is highly counter-productive if the application ever crashes.

An interesting project!

Cheers,

BambiBoom

CAD / 3D Modeling / Graphic Design:

HP z620_2 (2017) > Xeon E5-1680 v2 (8-core@ 4.1GHz) / 64GB DDR3-1866 ECC Reg / Quadro P2000 5GB / HP Z Turbo Drive M.2 256GB + Intel 730 480GB + Seagate Constellation ES.3 1TB / ASUS Essence STX PCIe sound card / 825W PSU / Windows 7 Prof.’l 64-bit > 2X Dell Ultrasharp U2715H (2560 X 1440) / Logitech z2300 2.1 Sound

[Passmark Rating = 6166 / CPU rating = 16934 / 2D = 820 / 3D= 8849 / Mem = 2991 / Disk = 13794] 4.24.17 Single Thread Mark = 2252

Assembled last week using new: z620 case / chassis / power supply, E5-1680 v2, Quadro P2000, and used: motherboard, RAM, and drives (from the previous system). Total cost was about $1,900. The Quadro P2000 performs above the level of a GTX 1060.

Analysis / Simulation / Rendering:

HP z620_1 (2012) (Rev 3) 2X Xeon E5-2690 (8-core @ 2.9 / 3.8GHz) / 64GB DDR3-1600 ECC reg / Quadro K2200 (4GB) + Tesla M2090 (6GB) / HP Z Turbo Drive (256GB) + Samsung 850 Evo 250GB + Seagate Constellation ES.3 (1TB) / Creative Sound Blaster X-Fi Titanium PCIe sound card + Logitech z313 2.1 speakers / 800W / Windows 7 Professional 64-bit > HP 2711x (27" 1920 X 1080)

[ Passmark System Rating= 5675 / CPU= 22625 / 2D= 815 / 3D = 3580 / Mem = 2522 / Disk = 12640 ] 9.25.16 Single Thread Mark = 1903

Assembled in 8/16 with used and reused parts. Total cost was about $1,400.



 
Solution

vmwarerig
May 3, 2017
The next generation of Xeons is now available, and Supermicro has a new version of the workstation I was looking at previously (the old one was the 7048A-T; the new one is the 7049A-T). Supermicro said the motherboard will be ready for production in a couple of weeks. What do you guys think?

https://www.supermicro.com.tw/products/system/4U/7049/S...


I'm thinking of putting in the following components:

2 x Xeon Silver 4114
12 x 8GB memory (or perhaps 12 x 16GB)
NVIDIA Quadro NVS 810

The 4114 looks like the best value in terms of cores per dollar, but compared to the 2630, should I be concerned about the drop in cache size? The 4114 has "13.75 MB L3" compared to the 2630's "25 MB SmartCache". I also noticed that none of the new Xeons mention VT-d or VT-x on Intel's site, whereas the 2630's page specifically lists these features. I'm just going to assume that all the new Xeons have the latest virtualization technology, right?
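For what it's worth, once the machine is running I can confirm hardware virtualization support directly from a Linux environment by looking for the vmx flag in /proc/cpuinfo. A small sketch (Linux-only; on the Windows host, Task Manager's CPU page shows a "Virtualization" field instead):

```python
# Linux-only check: Intel VT-x shows up as the "vmx" CPU flag
# (AMD-V would be "svm") in /proc/cpuinfo.
def has_hw_virt() -> bool:
    with open("/proc/cpuinfo") as f:
        flags = f.read()
    return " vmx" in flags or " svm" in flags

print("hardware virtualization present" if has_hw_virt() else "vmx/svm flag not found")
```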

Am I correct that 12 is the optimal number of DIMMs, given that there are 6 memory channels per CPU?
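My arithmetic, for the record (6 memory channels per socket is from Intel's Skylake-SP spec pages):

```python
# One DIMM per channel across both sockets keeps the memory balanced.
cpus, channels_per_cpu, dimms_per_channel = 2, 6, 1
dimms = cpus * channels_per_cpu * dimms_per_channel
print(f"{dimms} DIMMs -> {dimms * 8}GB with 8GB modules, {dimms * 16}GB with 16GB modules")
```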

I'm told that the Quadro has almost no 3D capability, but as hard as I try, I can't think of any tasks I will be performing that will require 3D. Are there some basic tasks which benefit from 3D capability that I may not be considering?

I was told that even though the 7049A-T product page mentions M.2, that type of drive hasn't been fully tested with their motherboard, so Supermicro is recommending an AIC (add-in card) drive instead. I'm debating between the following for the boot drive, on which I'll install Windows 10:

- Intel 750 1.2TB HHHL Form Factor
- Intel P3520 1.2TB HHHL Form Factor
- Intel P3500 1.2TB HHHL Form Factor

Whichever model I end up choosing, is there any reason to step up to the 2TB model, given that I'm only using it for the OS and installed applications? I'm going to put each VM on a separate SSD.