choucove said:
There are several key issues with the above configuration. The type of memory chosen is not server memory, but desktop memory, and will not work with your Xeon-class processors. Your CPU cooler is also a liquid cooling unit that will not fit into a 2U server chassis. And given the number of hard drives AND SSDs you are looking at, you shouldn't even be considering onboard RAID controllers at all.
Remember that with a custom-built solution it is 100% up to you to support it and to ensure that every single component you select is fully compatible with the OS and the workload you intend to run. For example, if you're planning to run ESX as your host OS for your 30 VMs, you won't be able to use your onboard RAID controller anyway; you will have to get a dedicated hardware RAID controller that is certified compatible with ESX.
My recommendation for this type of thing would be to forget custom built. There are too many variables, and unless you know EXACTLY the right components to give you unquestionable stability, you are going to be better off with a pre-configured system from one of the big names like HP or Dell. It may cost you a little more, but it will be done right. If you aren't worried about taking on the responsibility of supporting it yourself, you can often get more "bang for your buck" with a custom-built system, but there are several things you need to consider.
Let's look at your intended usage. You want to run 30+ VMs on a single server. While this is possible, you're probably far better off load balancing across several cheaper servers, which also opens the door to a clustered environment. Especially if you want to allocate up to 6 GB of vRAM to each virtual machine, you will get better efficiency spreading that many VMs across several servers.
Unless you have a specific need for very high clock speed, don't worry about going for the fastest Xeon on the market. Stick with something in the 2.0 to 2.5 GHz range that gives you many threads to spread across your multiple VMs. For instance, even the Xeon E5-2620 offers six cores (twelve threads) and can Turbo Boost to 2.5 GHz, which should offer great performance for most standard VM workloads. You can also get two of these for the price of a single Xeon E5-2643, which is only a quad-core processor.

Next, look at your RAM. You are going to need ECC Registered DDR3-1333 or DDR3-1600 memory for these servers, which will run you about double the cost of standard desktop memory. You're going to be spending a TON of money on RAM if you need to allocate 6 GB to each VM and plan to run upwards of 30 simultaneously. One way to help with this expense is to use dynamic memory. Most VMs will run just fine with 4 GB of RAM (or even less, depending upon demand and usage), so I'd suggest allocating a minimum of 4 GB to each VM, with room to grow to 6 GB if the VM needs the extra space. You will have to configure priorities so your host OS knows which VMs to give extra memory to as it is needed. This way you only have to purchase 4 GB of RAM per VM plus a little extra for high-demand situations.
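To put rough numbers on the dynamic-memory idea, here's a back-of-the-envelope budget comparison. The VM count and per-VM figures come from the discussion above; the hypervisor reserve and the number of VMs assumed to burst at once are illustrative assumptions, not measurements:

```python
# Hypothetical memory budget: static 6 GB per VM vs. dynamic memory with a
# 4 GB guaranteed minimum. BURST_VMS and HOST_RESERVE_GB are assumed values.

VM_COUNT = 30
MIN_PER_VM_GB = 4      # guaranteed allocation per VM
MAX_PER_VM_GB = 6      # ceiling a VM may grow to under load
HOST_RESERVE_GB = 8    # rough reserve for the hypervisor itself (assumption)
BURST_VMS = 5          # assume only a few VMs peak at the same time

# Static plan: every VM gets its full 6 GB up front.
static_plan = VM_COUNT * MAX_PER_VM_GB + HOST_RESERVE_GB

# Dynamic plan: every VM gets 4 GB, plus headroom for a few bursting VMs.
dynamic_plan = (VM_COUNT * MIN_PER_VM_GB
                + BURST_VMS * (MAX_PER_VM_GB - MIN_PER_VM_GB)
                + HOST_RESERVE_GB)

print(f"Static 6 GB/VM plan: {static_plan} GB")   # 188 GB
print(f"Dynamic memory plan: {dynamic_plan} GB")  # 138 GB
```

Even with these rough assumptions, dynamic memory shaves roughly 50 GB of registered ECC DIMMs off the bill; the real savings depend on how many VMs actually peak simultaneously.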
On to hard drives. You're going to need enterprise-class hard drives for reliability in RAID environments; standard "mainstream" desktop drives aren't going to offer the reliability or performance you will need. WD RE4 drives are built specifically for this, and while they are expensive, they're probably the best on the market for this environment. Be careful in your evaluation of SSDs in a server environment as well: many models are not really designed for use in RAID, and while it is more expensive, it is much better to find SSDs built for RAID use. Again, a dedicated hardware RAID controller is going to be a must for proper performance and reliability; your onboard controller just isn't capable of that in the kind of workload you are describing.
I don't know how much storage space you need, or how much throughput your storage requires, but this is often the main bottleneck when running multiple VMs on a single server. It might be wise to consider a cluster environment where you run your VMs on two node servers (where your key resources are lots of processing cores and lots of RAM) and store the actual VMs on a shared storage device or SAN (where your key resources are lots of fast hard drives in a very well protected RAID array). This is expensive to do, no doubt about it, but it's the true enterprise solution. There are lots of different SAN solutions using iSCSI, Fibre Channel, or direct-attached SAS. If this is something you are interested in, I'd highly recommend contacting the storage specialists at HP or Dell to talk about your specific project and needs; they can be incredibly helpful in finding the best fit.
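To show why storage is usually the bottleneck, here's a rough aggregate-IOPS sketch. The per-VM load and per-drive ratings are illustrative assumptions (measure your actual workload before sizing anything), and RAID write penalties are ignored for simplicity:

```python
# Rough storage sizing: how many spindles does a 30-VM random-I/O workload
# need? All figures below are assumed ballpark values, not measurements.
import math

VM_COUNT = 30
IOPS_PER_VM = 50       # assumed average random-I/O load per VM
HDD_IOPS = 150         # ballpark for a 7200 RPM enterprise SATA drive

required_iops = VM_COUNT * IOPS_PER_VM
hdds_needed = math.ceil(required_iops / HDD_IOPS)

print(f"Aggregate demand: {required_iops} IOPS")      # 1500 IOPS
print(f"Spindles needed:  ~{hdds_needed} HDDs")       # ~10 drives
```

Ten or more spindles just to keep up with random I/O is exactly the kind of demand a proper SAN or a well-built hardware RAID array is designed for, and why a couple of desktop drives on an onboard controller won't cut it.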
I want to thank you for putting in your time to answer this post. It has really helped me out.
I am looking at some HP and Dell servers now.