Which CPU is best for virtualization?

burdene

Honorable
Oct 29, 2013
2
0
10,510
I'm looking to build or purchase a system capable of running up to 5 virtual machines at once. A local computer store recommended a Xeon processor, whereas a guy who's been tinkering with computers for over 20 years says the i7 is WAY better for what I'm trying to do. I'm not a gamer; my purpose for running VMs is studying for MS Server certifications. I figure I need at least 16 GB of RAM, maybe even 32, but which processor (that doesn't cost $1,000) is best suited for my needs? Thanks!
 

Gaidax

Distinguished
One of these babies will do nicely: http://ark.intel.com/products/family/78581/Intel-Xeon-Processor-E3-v3-Family?q=xeon

These are the budget Haswell Xeons that match i7 performance but come with a better price tag (at the expense of the internal GPU, which you don't need anyway).


This one in particular has the best price/performance ratio: http://ark.intel.com/products/75055/Intel-Xeon-Processor-E3-1240-v3-8M-Cache-3_40-GHz

In case you need a bit more muscle, this one looks very nice and is "relatively" cheap: http://ark.intel.com/products/75780/Intel-Xeon-Processor-E5-1650-v2-12M-Cache-3_50-GHz
 

cklaubur

Distinguished
Other than a few features, the Xeon processors are identical to the Core i7s. Personally, I'd go with the i7. It will handle virtualization just fine for what you need, and it won't cost you $1,000.

Casey

*EDIT* Well, I didn't realize the Xeon E3s were that cheap. It's probably a toss-up at this point.
 

Gaidax

Distinguished


Xeons are very flexibly priced nowadays: some Xeons are cheaper than an i7 yet provide the same performance, like my example above, which is about $50 cheaper than the i7-4770 but is pretty much the same chip minus the integrated GPU.

As a matter of fact, if you want an i7 and don't plan to overclock it, it's actually cheaper to just get the Xeon and enjoy the same performance :)
 
Solution

Joe Domos

Reputable
Apr 21, 2015
1
0
4,510


Thanks Gaidax. Here's an update, circa April 2015.

Disclaimer: I'm in the market for exactly the same needs as the OP (burdene), and here's what I found. I'm no specialist; I've just spent the last few weeks reading a lot on the matter from various serious sources (VMware, Citrix, Hyper-V/TechNet, etc.) and I'm basically rehashing it here, crossed with Intel's current ARK listings. Also, all of this applies solely to a home lab / small production environment, with no scaling considerations whatsoever (when you need *more*, years from now, you'll likely buy a whole new server core with up-to-date tech, specs, and price points).

E3 versus E5: I say go E3, lowest specs; and if you need "more", buy several E3 servers.

  ■ In the Haswell E5 family, the v3 of the 1650 is available for ~$590. It comes with 15M cache (up from 12M) and Turbo Boost at 3.8 GHz (down from 3.9):
    http://ark.intel.com/products/82765/Intel-Xeon-Processor-E5-1650-v3-15M-Cache-3_50-GHz
  ■ I found that the major differences between the E3 and E5 families are:
    - DDR3 vs. DDR4 support, and thus:
      -- memory bandwidth increased by a huge factor (2-3x);
      -- max capacity increased from 32 GB to 768 GB (yes, you read those figures right).
    I'm not sure how many labs need that, though. It seems vastly overpowered if it comes at the expense of having several physical machines. For a lab or small environment (emphasis: not applicable to scalable businesses), DDR4 is nice but certainly not mandatory as of 2015. Memory-wise, the real virtualization benefit of E5 is the increased max capacity (768 vs. 32 isn't just huge, it's a chasm).

    - Dual-socket (multi-CPU) support (E5-2xxx series). Be prepared to fork out well north of $1,000 for such a 2x CPU + MB + RAM server heart. My opinion: just don't.

    Again, though the E5/E3 difference doesn't necessarily come at a huge cost FLOPS-wise (in the lower end anyway), you should consider the overall price range of an E3- vs. E5-based server. Don't just look at CPU specs/price.
    For instance, at 4 pCores (8 logical threads), the E5-1620 v3 and E3-1241 v3 both come in under $300; but the overall profiles of machines built around these two allow the E5 machine to cost much more, both initially and while running (E5 v3 sports DDR4 up to 768 GB against E3 v3's DDR3 up to 32 GB; also think of cooling a 140W TDP versus 80W, etc.). See the rough running-cost sketch right after this list.
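
To make the "don't just look at CPU price" point concrete, here's a minimal back-of-the-envelope sketch in Python. Every figure below (prices, using CPU TDP as a power-draw proxy, $0.15/kWh) is my own rough assumption for illustration, not a quote:

```python
# Back-of-the-envelope cost comparison of an E3- vs. E5-based box.
# All figures are illustrative assumptions, not quotes: check current
# prices and your local electricity rate yourself.

KWH_PRICE = 0.15        # $/kWh, assumed
HOURS_PER_YEAR = 8760   # host running 24/7

builds = {
    # name: (cpu_price_usd, ram_32gb_price_usd, cpu_tdp_watts)
    "E3-1241 v3 + 32 GB DDR3": (280, 250, 80),
    "E5-1620 v3 + 32 GB DDR4": (300, 400, 140),
}

for name, (cpu, ram, tdp) in builds.items():
    # CPU TDP stands in as a crude proxy for the extra power draw;
    # real whole-system consumption will differ.
    yearly_power_cost = tdp / 1000 * HOURS_PER_YEAR * KWH_PRICE
    print(f"{name}: ${cpu + ram} up front, ~${yearly_power_cost:.0f}/yr in CPU power")
```

Even with these made-up numbers, the DDR4 premium plus the extra ~60W running 24/7 adds up to real money per year before you've bought a bigger PSU or cooler.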
If you're looking to virtualize, the emphasis should be on "more" (as in "several") rather than "big" (as in "powerful" or "fast").

So "more cores" beats "big/fast cores"; likewise, "more RAM" is preferable to "better/faster RAM".

The main reason is that when dealing with VMs you'll be assigning cores and GBs of RAM (and you never quite have enough of either once you start running enterprise-grade products, even in test labs), whereas you're far less likely to be bottlenecked by raw speed/power at the core level.
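
To make that concrete, here's a minimal budgeting sketch in Python. Every VM size below is a made-up placeholder for a five-VM cert-study lab like the OP's, not a recommendation, and the host figures assume the kind of E3-class build discussed further down (8 threads, 32 GB):

```python
# Hypothetical five-VM plan checked against a single E3-class host.
# All sizes are made-up placeholders; adjust to your own lab.

HOST_THREADS = 8       # 4 physical cores / 8 logical threads
HOST_RAM_GB = 32       # platform max for an E3 v3 build
HYPERVISOR_RAM_GB = 4  # rough allowance for the host OS/hypervisor

vms = {
    "DC1 (AD DS / DNS)":  {"vcpus": 2, "ram_gb": 4},
    "DC2 (AD DS / DHCP)": {"vcpus": 2, "ram_gb": 4},
    "File/Print server":  {"vcpus": 2, "ram_gb": 4},
    "App server lab":     {"vcpus": 4, "ram_gb": 8},
    "Client (Windows)":   {"vcpus": 2, "ram_gb": 4},
}

total_vcpus = sum(vm["vcpus"] for vm in vms.values())
total_ram = sum(vm["ram_gb"] for vm in vms.values())
usable_ram = HOST_RAM_GB - HYPERVISOR_RAM_GB

print(f"vCPUs assigned: {total_vcpus} on {HOST_THREADS} threads "
      f"({total_vcpus / HOST_THREADS:.1f}x overcommit)")
print(f"RAM assigned:   {total_ram} GB of {usable_ram} GB usable")

# Moderate vCPU overcommit (2-4x) is usually tolerable in a lab;
# overcommitting RAM is what actually hurts.
```

Notice how a perfectly modest plan already eats 24 of the 28 usable GB while the 8 threads shrug off a 1.5x vCPU overcommit: that's the "RAM runs out first" point in numbers.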

Perhaps unintuitively, though, I've also heard VM/lab experts advise home/studying/lab users to get an E3 with graphics: for a negligible increase of 4W TDP and $10 in cost, you save a GPU card and, perhaps more importantly, a PCI-E slot. A business wouldn't do that (those 4W/$10 are NOT negligible in a pro environment), but at home it can apparently save you trouble (typically, PCI-E slots are useful for multi-NIC cards, SAS/HBA cards, etc.; and it's always nice to be able to plug a screen/KVM directly into your barebones server). Consider the E3 family and compare, for instance, the 1241 with the 1246 (or 1271/76, or 1281/86). It's a moot point with IPMI, or generally with most type-1 hypervisors, but being able to just plug in a screen... well... it makes sense in a lab where you break things every other day. Remember: all for a small $10 increase in cost.

Until you have 3-4 physical machines in your lab, it is pointless to stack CPUs in a single machine.
Wild guess: in a lab/small office, you will probably never find any benefit in multi-CPU over several actual physical machines. Don't get me wrong: benchmarks may prove otherwise, and so may price points, but in the end you need several machines if you're going to experiment with clusters, High Availability, vMotion, and so on. You'll always find yourself in a situation where you need to keep some server instance running (Windows Domain Controllers / DNS / DHCP / routing / firewall come to mind) while completely tweaking another machine. You can do all this and more fully virtualized on one monster server, but it's much more realistic (read: worthy of real experience) to deal with several physical machines (a vSphere/vCenter Server install comes to mind: you may not want to take it down while rebooting/maintaining your cluster of type-1 hosts, for instance).

Therefore, 1 CPU = 1 physical machine. There will be an increased cost (redundant MBs, HDDs, etc.), but in the long run a single machine can't make a viable lab.

With all this in mind, the E3-1231 v3 seems like a great budget choice:
- 4 pCores / 8 threads (max 8 vCPUs as seen from a VM);
- only ~$250;
- 3.4-3.8 GHz (the best E3 in its class, the 1281 v3, caps at 3.7-4.1 GHz, which is a negligible difference and never a requirement);
- cheap DDR3, which can be maxed out RIGHT NOW at 32 GB for a mere ~$250 (go ECC; for your own sake, always go ECC with servers, especially when dealing with VMs, demanding filesystems such as ZFS, or databases). The "right now" part is important, because paying today for the *potential* of 768 GB max RAM, a crazy amount you're unlikely to ever fill before you buy a whole new server, is *not* sound use of your financial resources.

If you don't mind spending ~$37 more, and it's the only server-grade computer in your lab, then do yourself a favor and grab the 1246 v3 (integrated graphics, +100 MHz).

Since you should always favor server-grade components in at least one machine (because do you really want to spend hours troubleshooting crappy VIA drivers or faulty Ethernet management?), good candidates for these Xeon CPUs are Supermicro motherboards with Intel GbE NICs, IPMI (headless management), and the kind of components that don't cause driver issues in your beloved non-gamer lab. Well, I mean, it's a different kind of game, certainly not lacking in fun... ;-)

Overall, an E3-1231 + 32 GB + Supermicro MB will net you approximately $700-750 worth of wonder.

Rather than going E5, let alone dual-socketing, for a lab or SOHO I'd buy that E3 machine twice: that gets me a lab complete with VM clustering, compliant with a Windows AD DS dual-DC setup on separate physical hosts, separate power sources, etc. To ease the cost a little, instead of two E3s I might buy only one and then build an i5 consumer-grade computer with only 16 GB (a $400-500 machine), though I'd prefer two identical E3s for such a cluster; it makes much more sense. Then any crappy laptop can make the case for more physical machines, as long as it has, say, 4 GB / 2 logical cores (you really don't want to go below that for type-1 hypervisors, or you're probably better off booting straight into an OS).

The best part is, when you're done working, you can always dual-boot that beast and enjoy its power (it's just a big workstation: it works for gaming/video/whatnot, and has PCI-E slots for GPUs should you need them). In SOHO/lab environments, I'd advise a typical tower case rather than U-racks, because the sound alone of cooling those inefficient heat-fests will drive you mad before you even reach a login screen. Use your gamer experience: go modular PSU and 12-14 cm fans (intake on the HDDs, outtake at the top near the CPU), and your server will run cold and silent as it should, minus the dBs, air conditioning, and power cost of a server room...

Notes:
- You probably want to investigate some form of active/online UPS, because burning or breaking your lab just isn't an option if the power grid is an issue where you live. (I also use a fully solid-state Mac Mini to test brutal power-offs; there I'm sure there's no risk to the hardware itself.)
- Your next best friend seems to be a layer-3 managed switch (maybe a refurbished Cisco, maybe lower grade, maybe a pfSense server with a bunch of NICs, whatever fits you), but don't make all networking virtual, especially if you're going to play with WAN/VPN, VLANs, routing... and need to foolproof your implementation of such strategies (and emulate distant/offsite replication, AD DS management/integration, etc.).
- Automation and QoL: it may be nothing, but a Logitech G-keyboard on your client/admin machine will save you hours of typing (macros for login_name/tab/password/enter, your FQDNs, one-key shortcuts instead of several mouse clicks, scripted commands... it's just QoL to keep you focused on learning). Only one rule of thumb: never automate what you don't know (if you need to look something up, do it manually, again and again, until you don't need to look anything up; at that point, automate it if it's a big time sink, and move on). Windows PowerShell cmdlets come to mind, plus ssh logins, pings, nslookups, stuff like that; see the sketch below.
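
In that spirit, here's a minimal sketch of the kind of check worth scripting, in plain Python (the FQDNs are hypothetical placeholders; swap in your own). It resolves and pings each lab host before a study session, so a dead DC doesn't masquerade as some mysterious AD problem:

```python
# Tiny "is my lab up?" helper: resolve and ping each host before a session.
# Hostnames below are hypothetical examples; substitute your own FQDNs.
import platform
import socket
import subprocess

LAB_HOSTS = ["dc1.lab.local", "dc2.lab.local", "hv1.lab.local"]

# Windows ping counts with -n, Unix-likes with -c
ping_flag = "-n" if platform.system() == "Windows" else "-c"

for host in LAB_HOSTS:
    try:
        ip = socket.gethostbyname(host)  # DNS check (what nslookup would do)
    except socket.gaierror:
        print(f"{host}: DNS lookup failed")
        continue
    # One ping, output suppressed; return code 0 means a reply came back.
    result = subprocess.run(["ping", ping_flag, "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    status = "up" if result.returncode == 0 else "no reply"
    print(f"{host} ({ip}): {status}")
```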

Please feel free to comment and criticize; this is just my opinion, based on others' and my own experience with virtualization. I need confirmation of all this as much as I want to spread the info I've gathered.