
Quad core x 2 - SQL servers, VMware, Virtual Server

February 9, 2007 8:16:02 PM

I see an awful lot of discussion about Vista and whether my latest gaming rig will run best with a quad core, but what about business applications? *sigh*

Anyone here running a dual-socket quad-core machine with 4 or 5 VMware servers on it? Virtual Server? Got several SQL servers humming along? I'm curious about THAT performance. I'm toying with the idea of setting up an 8-core server that does nothing but host virtual machines running SQL Server and Windows 2003 Server sessions. I'd like to see how well 8 cores handle VMware and Virtual Server type applications.
February 9, 2007 8:24:39 PM

Quote:
I see an awful lot of discussion about Vista and whether my latest gaming rig will run best with a quad core, but what about business applications? *sigh*

Anyone here running a dual-socket quad-core machine with 4 or 5 VMware servers on it? Virtual Server? Got several SQL servers humming along? I'm curious about THAT performance. I'm toying with the idea of setting up an 8-core server that does nothing but host virtual machines running SQL Server and Windows 2003 Server sessions. I'd like to see how well 8 cores handle VMware and Virtual Server type applications.



The only problem I could foresee is an I/O bottleneck when the servers are being hit hard.

The biggest issue will be RAM. You need at least 2GB for each VM. They can run with less, but that would be an even worse bottleneck, with several VMs swapping to the same HDD.
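
As a rough sanity check - and the per-VM and host-overhead figures here are illustrative assumptions, not vendor guidance - the arithmetic looks like this in Python:

# How many 2GB guests fit on a host before RAM is overcommitted and
# several VMs start swapping against the same disk.
GUEST_RAM_GB = 2.0      # suggested minimum per VM, per the post above
HOST_OS_GB = 1.0        # assumed host OS / hypervisor overhead
PHYSICAL_RAM_GB = 16.0  # the kind of box discussed in this thread

def max_vms(physical_gb, guest_gb, host_gb):
    """VMs that fit without overcommitting physical RAM."""
    return int((physical_gb - host_gb) // guest_gb)

print(max_vms(PHYSICAL_RAM_GB, GUEST_RAM_GB, HOST_OS_GB))  # -> 7

Anything past that count and the guests start paging against the same spindles.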
February 9, 2007 8:26:48 PM

At least 16GB of RAM is planned. I'd also stick with a SAN for storage rather than deal with disks on the server itself. Anyone have a similar setup?
February 9, 2007 9:14:36 PM

RAM can be an issue depending on how you have the machines configured. In terms of page file use, though, we never run into I/O contention due to paging, even on VMs with small amounts of memory (less than 1GB). As this is an enterprise application, we *never* use virtual disks; rather, we dedicate a SAN-based LUN to each VM.

Our core VMware server is an HP DL380 G5 with 2x Xeon 5160 CPUs, 16GB RAM, and dual FC HBAs jacked into our SAN. We allocate chunks of high-performance FC disk from our SAN.

Of the 12 VMs running on this box, only one has more than 512MB of allocated memory, that being a SQL box with 2GB. Because of the dedicated disks, disk I/O never causes contention between the machines.

The advantage of doing things this way is that it makes moving to beefier underlying server hardware later on a breeze: all we do is build a new box, move the HBAs (SAN disk allocation is based on HBA WWN, so there's no need to re-present any storage on the SAN side, just ensure the disks come up in 2003 identically to the old box), install VMware, and move our VM definition files across to the new box. Our original box was a DL360 G4 with two 3.2GHz single-core Xeons and 8GB of RAM, only running 5 VMs at the time.

We're getting ready to deploy a much larger machine with dual Xeon 5355s (8 cores total) and 32GB of RAM in the near future, and I expect the move to that box will be just as easy as the move to the current machine.
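
For what it's worth, here's that memory layout as a quick Python sketch - the 512MB figure for the other 11 VMs is an assumption on my part, since all I said above is that only one box has more than 512MB:

# Hypothetical breakdown of the 12 VMs on the 16GB DL380 described above.
allocations_mb = [2048] + [512] * 11   # one 2GB SQL box, rest assumed 512MB
host_ram_mb = 16 * 1024

guest_total_mb = sum(allocations_mb)
print(guest_total_mb)                  # 7680 MB committed to guests
print(host_ram_mb - guest_total_mb)    # 8704 MB of headroom for the host

Plenty of headroom, which is part of why paging never bites us.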
February 9, 2007 11:33:34 PM

Quote:
RAM can be an issue depending on how you have the machines configured... we *never* use virtual disks; rather, we dedicate a SAN-based LUN to each VM. Because of the dedicated disks, disk I/O never causes contention between the machines.

Ooooh, Fibre Channel. That will definitely avoid I/O bottlenecks. Still, for large-footprint apps like SQL I would recommend at least 2GB. I've run Exchange as a VM with 1GB and it performed well, though I only had Outlook clients at 512MB and never more than 2 VMs.

For a production environment - depending on the app - I would definitely go with at least 4x the app footprint plus 512MB for Windows (not Vista). That usually comes out to at least 1GB, often 2GB.
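
Spelled out as a sketch (Python; the footprint figures are examples of mine, not measurements from any particular app):

# Rule of thumb above: 4x the app's footprint plus ~512MB for the
# Windows guest OS itself.
def vm_ram_mb(app_footprint_mb, os_overhead_mb=512):
    return int(4 * app_footprint_mb + os_overhead_mb)

print(vm_ram_mb(128))   # small app   -> 1024 MB, about 1GB
print(vm_ram_mb(384))   # heavier app -> 2048 MB, about 2GB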
February 10, 2007 12:01:51 AM

I wouldn't try to run Vista as the main OS. I'd use 2003 Server as the base OS for the VMs.
February 10, 2007 2:05:40 AM

Quote:
Ooooh, Fibre Channel. That will definitely avoid I/O bottlenecks. Still, for large-footprint apps like SQL I would recommend at least 2GB...

I wouldn't back-end a VM box with anything else, short of direct-attached SAS storage, and long term the SAN is more efficient and cheaper. The SQL VM is a lightweight SQL box (our heavyweight boxes are dedicated machines, again backed by our trusty SAN). The VMs with <1GB are either internal web servers or "tool" servers for our daily admin work. Exchange will never sit in a VM in our environment. Can't have it.

I like VMware, I like the concept, and as VM tech matures on the hardware side we'll move more and more machines into that type of environment. Of course this is coming from the storage guy (yours truly), but I do get a say... ;)
February 10, 2007 2:59:09 AM

Actually, Linux would be the OS of choice by far if you're setting up a VM host.

Unless, of course, you were going high end with ESX, which requires no host OS.

Vista should only be used for small test cases. It really trashes VM performance, at least in all of the beta tests. XP would work better if you really needed to use a desktop OS as the host.
February 10, 2007 4:17:32 AM

Quote:
Actually, Linux would be the OS of choice by far if you're setting up a VM host.

Unless, of course, you were going high end with ESX, which requires no host OS.

Vista should only be used for small test cases. It really trashes VM performance, at least in all of the beta tests. XP would work better if you really needed to use a desktop OS as the host.


We host ours on 2003 Server. I test guest OSes on my desktop (XP Pro), but it's limited.
February 10, 2007 2:30:23 PM

Quote:
Actually, Linux would be the OS of choice by far if you're setting up a VM host...

We host ours on 2003 Server. I test guest OSes on my desktop (XP Pro), but it's limited.


Well, I hope no one thought I meant putting VMware or Virtual Server on XP or Vista. Those are only clients. Believe me, Virtual Server or GSX/ESX will be excellent for Exchange with a Server 2003 host.

Soon MS's hypervisor will debut with Longhorn. Hypervisors are designed to remove the need for a host OS (Xen, ESX). With a SAN or NAS, things like Exchange and SQL can run in VMs. Virtual Server has backup mechanisms and imaging, which help protect the app.

I like Virtual Server because it can be accessed through the Internet.
February 10, 2007 3:22:55 PM

Quote:
Well, I hope no one thought I meant putting VMware or Virtual Server on XP or Vista. Those are only clients...

I like Virtual Server because it can be accessed through the Internet.

Nah, not at all - I use it on XP on my desktop (yes, my sig says Vista - I need to change it back to XP...). I have a mini environment set up for VBScript testing on my machine here at home (E6600, 2GB RAM, etc.), and when I'm not absorbed in World of Warcrack, I spend a lot of time in the 4 machines I have set up (3 servers and 1 workstation) writing, modifying, running, etc.
February 10, 2007 3:59:51 PM

Quote:
Nah, not at all - I use it on XP on my desktop... I have a mini environment set up for VBScript testing on my machine here at home (E6600, 2GB RAM, etc.)...


Of course. I have a 2003 domain set up with Exchange and two clients for client-server app tests. I just wouldn't use it for production. XP only supports about 3.25GB of RAM.
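
That 3.25GB figure is just 32-bit address-space arithmetic: PCI devices reserve a chunk of the 4GB space, and Windows can't use RAM hidden behind it. A sketch, with the reserved amount being an assumption (it varies by chipset and installed devices):

# 32-bit Windows client: usable RAM = 4GB address space minus MMIO holes.
ADDRESS_SPACE_MB = 4 * 1024   # 32-bit addressing limit
MMIO_RESERVED_MB = 768        # assumed PCI/MMIO reservation; board-specific

print(ADDRESS_SPACE_MB - MMIO_RESERVED_MB)  # -> 3328 MB, i.e. ~3.25GB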
February 12, 2007 11:43:23 AM

VMware has had all of that for quite a long time.

ESX has no host OS, among other features. I think it's really cool that you can move a running VM from one physical host to another without even interrupting the VM in the process.

ESX is where it's at in the data centers, and has been for years. ESX management is all web based, so yes, it can be managed over that thing called the Internet too.
February 12, 2007 3:27:06 PM

Quote:
VMware has had all of that for quite a long time.

ESX has no host OS, among other features. I think it's really cool that you can move a running VM from one physical host to another without even interrupting the VM in the process.

ESX is where it's at in the data centers, and has been for years. ESX management is all web based, so yes, it can be managed over that thing called the Internet too.



Yes, and it's also about $10,000.
February 12, 2007 3:50:33 PM

$10,000? Oh, we spend lots more than that :>>

I have no clue what a single license costs, but even with six-figure budgets for VMware, the server budget was lower with ESX than without it.

Imagine the cost of building a whole new data center :>
ESX lets us fit more servers into less space at a lower cost than anything else.

Many companies I work with are growing at a rate of 100+ servers a year in their data centers. That means more space, more power, and more cooling, and data centers built just a few years ago are bursting at the seams. Voila! They can cut their server count way down and get many more years of service out of the existing facility.

And imagine a company that can't afford any downtime and would lose more than $10,000 a minute. They can seamlessly transfer a running system from one server to the next.
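
To put rough numbers on it (everything below is either a figure from this thread or a placeholder assumption of mine, not real pricing):

# Back-of-the-envelope ESX economics. Assumed figures are marked.
ESX_LICENSE = 10000          # per-server figure quoted in this thread
DOWNTIME_PER_MIN = 10000     # the "$10,000 a minute" example above
CONSOLIDATION_RATIO = 10     # assumed: 10 physical boxes onto 1 VM host
SERVER_COST = 5000           # assumed commodity server price

# Minutes of avoided downtime that pay for one license:
print(ESX_LICENSE / DOWNTIME_PER_MIN)                         # -> 1.0

# Hardware saved consolidating 10 servers onto 1 host, net of the license:
print((CONSOLIDATION_RATIO - 1) * SERVER_COST - ESX_LICENSE)  # -> 35000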
February 12, 2007 4:56:02 PM

Quote:
$10,000? Oh, we spend lots more than that :>> ... even with six-figure budgets for VMware, the server budget was lower with ESX than without it.


That's for a single server license. That's why my old company went with Virtual Server. It's not a hypervisor, but not everyone needs a hypervisor.

But as I said, for those who may care, Longhorn Server has a hypervisor, and I'm sure they will attempt to undercut VMware again.