
Server Build questions.

Tags:
  • Virtual Machine
  • CPUs
  • Servers
  • NAS / RAID
  • Xeon
  • Build
  • Windows Server 2008
  • SSD
  • Business Computing
July 29, 2013 6:40:33 AM

Looking for a server that will allow me to have 30+ VMs, hosting Windows Server 2008. Each VM will have about 6GB of memory on average and should have 2 CPUs.

$1,065.00 SuperChassis 836E1-R800V with 920W redundant Platinum power supply
http://www.amazon.com/Supermicro-Rackmount-Server-Chass...

$1,088.00 Intel Xeon E5-2643 quad-core 3.3GHz
http://www.amazon.com/Cm8062107185605-Intel-Processors-...

$1,056.00 = 8x G.Skill 16GB DDR3 1600MHz
http://www.amazon.com/G-Skill-PC3-12800-1600MHz-Desktop...

$445.00 Supermicro X9DAI-O motherboard (LGA2011)
http://www.amazon.com/Supermicro-X9DAI-O-LGA2011-USB3-0...

$80.00 Noctua NH-U12DXi4 CPU cooler - 120mm
http://www.amazon.com/noctua-NH-U12DXi4-CPU-K%C3%BChler...

$384.00 = 4x Seagate Barracuda 3.5-inch 2TB 7200 RPM
http://www.amazon.com/Seagate-Barracuda-Inch-Internal-D...

$728.00 = 4x SanDisk Extreme SSD 240GB SATA 6.0Gb/s
http://www.amazon.com/SanDisk-Extreme-2-5-Inch-Solid-SD...

$165.00 Intel AXXRSBBU9 RAID smart battery
http://www.amazon.com/Intel-AXXRSBBU9-Raid-Smart-Batter...


This is what I have so far; the total is about $5,000. I know I am missing some items, but I am still working on this.

I need the price to stay under $7,000, but I can go up if needed.

Please give me some feedback; I'd like to see what other people have in mind.
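A quick tally of the quoted prices (a simple sketch to sanity-check the "about $5,000" estimate against the $7,000 budget):

```python
# Tally of the components listed above (prices exactly as quoted).
parts = {
    "SuperChassis 836E1-R800V": 1065.00,
    "Intel Xeon E5-2643": 1088.00,
    "8x G.Skill 16GB DDR3": 1056.00,
    "Supermicro X9DAI-O board": 445.00,
    "Noctua NH-U12DXi4 cooler": 80.00,
    "4x Seagate Barracuda 2TB": 384.00,
    "4x SanDisk Extreme 240GB": 728.00,
    "Intel AXXRSBBU9 battery": 165.00,
}
total = sum(parts.values())
budget = 7000.00

print(f"Total: ${total:,.2f}")                       # Total: $5,011.00
print(f"Remaining budget: ${budget - total:,.2f}")   # Remaining budget: $1,989.00
```

That leaves roughly $2,000 of headroom for the items still missing from the list (a RAID controller, cables, a second CPU, etc.).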



Thanks for all of your help.


July 29, 2013 7:15:37 AM

There are several key issues with the above configuration. The memory you have chosen is desktop memory, not server memory, and it will not work with your Xeon-class processors. Your CPU cooler is also too tall to fit into a 2U server chassis. And given the number of hard drives and SSDs you are looking at, you shouldn't be considering an onboard RAID controller at all.

Remember that with a custom-built solution it is 100% up to you to support it and to ensure that every single component you select is fully compatible with the OS and the intended usage. For example, if you're planning to run ESX as your host OS for your 30 VMs, then you won't be able to use the onboard RAID controller anyway; you will have to get a dedicated hardware RAID controller that is certified compatible with ESX.

My recommendation for this type of project would be to forget custom-built. There are too many variables, and unless you know exactly the right components to give you unquestionable stability, you are going to be better off with a pre-configured system from one of the big names like HP or Dell. It may cost you a little more, but it will be done right. If you aren't worried about taking on the responsibility of supporting it yourself, a custom-built system can often give you more "bang for your buck," but there are several things you need to consider.

Let's look at your intended usage. You want to run 30+ VMs on a single server. While this is possible, you're probably far better off load balancing across several cheaper servers, and you could even move to a cluster environment that way. Especially if you want to allocate up to 6 GB of vRAM to each virtual machine, you will get better efficiency across several servers for that number of VMs.

Unless you have a specific need for a very high clock speed, don't worry about going for the fastest Xeon on the market. Stick with something in the 2.0 to 2.5 GHz range that gives you many threads to spread across your VMs. For instance, even the Xeon E5-2620 offers six cores (twelve threads) and can turbo boost to 2.5 GHz, which should offer great performance for most standard VM workloads. You can also get two of these for the price of a single Xeon E5-2643, which is only a quad-core processor.

Next, look at your RAM. You are going to need ECC registered DDR3 1333 or 1600 memory for these servers, which will run you about double the cost of standard desktop memory. You're going to spend a TON of money on RAM if you need to allocate 6 GB to each VM and plan to run upwards of 30 simultaneously. One way to help with this expense is to use dynamic memory. Most VMs will run just fine with 4 GB of RAM (or even less, depending on demand and usage), so I'd suggest allocating a minimum of 4 GB to each VM, with room to grow to 6 GB if a VM needs the extra space. You will have to configure priorities so your host OS knows which VMs to give extra memory over others as it is needed. This way you only have to purchase 4 GB of RAM per VM plus a little extra for high-demand situations.
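To put rough numbers on the CPU and RAM advice above, here is a back-of-the-envelope sketch. The per-VM figures (2 vCPUs, 4-6 GB) come from the thread itself; the 16 GB host headroom is an assumption for illustration:

```python
# Capacity math for 30 VMs at 2 vCPUs and 4-6 GB each,
# as discussed in this thread.
vms = 30
static_ram_gb = vms * 6           # every VM pinned at 6 GB
dynamic_ram_gb = vms * 4          # 4 GB minimum with dynamic memory
headroom_gb = 16                  # host OS + burst headroom (assumed)

print(f"Static allocation: {static_ram_gb} GB")                   # 180 GB
print(f"Dynamic minimum:   {dynamic_ram_gb + headroom_gb} GB")    # 136 GB

# vCPU oversubscription with two Xeon E5-2620s (6 cores / 12 threads each)
vcpus = vms * 2
host_threads = 2 * 12
print(f"vCPU:thread ratio: {vcpus / host_threads:.1f}:1")         # 2.5:1
```

Note that even the dynamic-memory minimum exceeds the 128 GB (8x 16GB) in the original parts list, which is exactly why spreading the load across several hosts tends to work out better.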

On to hard drives. You're going to need enterprise-class hard drives for reliability in RAID environments; standard "mainstream" desktop drives won't offer the reliability or performance you need. WD RE4 drives are built specifically for this, and while they are expensive, they're probably the best on the market for this environment. Also be careful in your evaluation of SSDs in a server environment: many models are not really designed for use in RAID, and while it is more expensive, it is much better to find SSDs built for RAID use. Again, a dedicated hardware RAID controller is a must for proper performance and reliability, as your onboard controller just isn't capable of handling the type of workload you are describing.

I don't know how much storage space or throughput you need, but this is often the main bottleneck when running multiple VMs on a single server. It might be wise to consider a cluster environment where you run your VMs on two node servers (whose key resources are lots of processing cores and lots of RAM) and store the actual VMs on a shared storage device or SAN (whose key resources are lots of fast hard drives in a very well protected RAID array). This is expensive to do, no doubt about it, but it's the true enterprise solution. There are lots of different SAN solutions using iSCSI, Fibre Channel, or direct-attached SAS. If this is something you are interested in, I'd highly recommend contacting the storage specialists at HP or Dell to talk about your specific project and needs; they can be incredibly helpful in discussing the best fit.
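As a rough illustration of why storage is usually the bottleneck, here is a sketch of an IOPS budget for the four-disk array in the original parts list. The ~80 IOPS per 7200 RPM disk and the per-VM demand figure are rule-of-thumb assumptions, not numbers from this thread:

```python
# Rough IOPS budget for a 4-disk 7200 RPM array, using
# rule-of-thumb figures (assumptions, not measured values).
disks = 4
iops_per_disk = 80                # typical 7200 RPM SATA drive
raw_iops = disks * iops_per_disk  # 320

# RAID 10: reads can hit every spindle; each write costs two disk writes.
read_iops = raw_iops              # 320
write_iops = raw_iops // 2        # 160

vms = 30
iops_per_vm = 25                  # light Windows Server workload (assumed)
demand = vms * iops_per_vm        # 750

print(f"Array read/write IOPS: {read_iops}/{write_iops}")
print(f"Estimated VM demand:   {demand} IOPS")
```

Even with generous assumptions, 30 VMs swamp a four-spindle array, which is why a dedicated RAID controller, more spindles, or a SAN comes up so quickly in these discussions.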
July 29, 2013 9:16:14 PM

Can you explain what you are trying to do with 30+ instances of Server 2008? What kind of loads will these have? What will they be running? Why only 6GB per server?

In addition to everything stated above, a little background on what you are trying to accomplish will help us greatly in telling you what hardware you should get.
July 30, 2013 1:23:22 PM

The money you save by building your own will be short-lived compared with the time and effort you will spend ironing out problems.
July 30, 2013 8:57:34 PM

choucove said:
There are several key issues with the above configuration. […]


I want to thank you for taking the time to answer this post. It has really helped me out.

I am looking at some HP and Dell servers now.