Solved

Processor for Virtualization Lab (with compatible board around $100, 4 DIMMs + onboard graphics)

Here's some food for thought for my fellow rig designers.
Which processor do I choose among these for a virtualization lab?
I will be running a minimum of 8 VMs: Windows Server 2008, Windows Server 2012, Windows 7, and Ubuntu.
I also need to know the best-suited board with onboard graphics, and which AMD chipset is best.
AMD FX-8320 (FD8320FRHKBOX) Processor

AMD FX-8150 Zambezi 3.6GHz Socket AM3+ 125W Processor FD8150FRGUBOX

AMD 4.4 GHz AM3+ FX 6-Core Edition FX-6300 (FD6300WMHKBOX) Processor

AMD 3.1 GHz AM3+ FX 8120 Processor

AMD A10-6800K Richland 4.1GHz (4.4GHz Turbo) Socket FM2 100W Quad-Core Desktop Processor - Black Edition AMD Radeon HD 8670D

Appreciate a quick reply.
Kindly give the pros and cons of each processor as to why it is or isn't suited.
I can shell out a few extra dollars for a GPU card if required.

I also tried searching for dual-processor boards. Does anyone have an idea which boards support dual-CPU Piledriver or Bulldozer?
  1. Best answer
    Any of those FX 8-cores will be fine. I would go with the cheapest 8-core (each single thread can be assigned to a VM) and 16 or 32 GB of RAM. Windows Server will use the most, depending on what services you are running (2 cores). You may also need to run pfSense in a VM in order to create a network. So that's 3 cores, minus at least 1 for the host.

    Four cores will not be enough: with one Windows VM running, that's already five cores used.

    A video card will be useless; it won't make any difference.

    A dual-processor board will use Opteron processors, but for a cheap lab I wouldn't bother going with server equipment. If you are using Hyper-V servers to host the VMs, you can cluster across two computers.
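
    To put rough numbers on that core budgeting, here is a minimal sketch (the per-VM core counts are just the figures from this post, not fixed rules):

    ```python
    # Rough core budget for the one-thread-per-VM approach described above.
    # The per-guest counts are illustrative figures taken from this post.
    host_cores = 8  # e.g., an FX-8320/8350

    reserved = {
        "hypervisor host": 1,
        "pfSense VM": 1,
        "Windows Server VM": 2,  # "will use the most, depending on services"
    }

    used = sum(reserved.values())
    print(f"Reserved: {used} of {host_cores} cores")
    print(f"Left for additional single-core VMs: {host_cores - used}")
    # On an 8-core FX that leaves 4 cores; on a 4-core chip you are already
    # at the limit before the first extra guest boots, hence "4 cores will
    # not be enough".
    ```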
  2. Alec Mowat said:
    Any of those FX 8-cores will be fine. […]


    Thanks.
    I would appreciate it if you could suggest a motherboard as well.
    Can you elaborate a bit more on the use of pfSense?
  3. Alec Mowat said:
    Any of those FX 8-cores will be fine. […]

    You absolutely don't need one core per VM. A dual-core CPU can run as many VMs as will fit in memory, and the bottleneck is often the storage, not the CPU. I can run 8-10 VMs on my quad core with 16 GB of memory, and a lot more on my 8-core with 32 GB of memory. Why pfSense for a virtual lab? A firewall should already be protecting the site.
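
    As a back-of-the-envelope illustration of "as many VMs as will fit in memory" (a sketch; the guest sizes below are assumptions, not measurements):

    ```python
    # VM density is bounded by RAM, not cores: estimate how many guests fit.
    # All figures below are illustrative assumptions.
    host_ram_gb = 16
    hypervisor_overhead_gb = 2   # host OS / hypervisor reserve
    guest_ram_gb = 1.5           # a lightweight lab VM

    fit = int((host_ram_gb - hypervisor_overhead_gb) // guest_ram_gb)
    print(f"~{fit} VMs of {guest_ram_gb} GB fit on a {host_ram_gb} GB host")
    # ~9 guests: the same ballpark as the 8-10 VMs quoted above for a quad
    # core with 16 GB. In practice disk I/O usually saturates first.
    ```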
  4. DIONODELL, what virtualization solution will you use? That could affect your hardware selection, particularly if you want to use ESXi.
  5. GhislainG said:
    Alec Mowat said:
    Any of those FX 8-cores will be fine. […]

    You absolutely don't need one core per VM. […]

    GhislainG said:
    DIONODELL, what virtualization solution will you use? […]


    I am going to try both: vSphere ESXi 5.1 for 15 to 20 days of extensive testing,
    and Hyper-V for some time until I get a private cloud set up on a bigger rig.
    Hardware selection is a headache if you have a tight budget. Poor me.
    Bitcoins accepted.
  6. GhislainG said:
    Alec Mowat said:
    Any of those FX 8-cores will be fine. […]

    You absolutely don't need one core per VM. […]


    But why a firewall if I am working on an isolated network?
    Kindly enlighten me on pfSense deployment in a lab, as I have not considered it in my design.
    How much resource will it take?
    My lab will use nested virtualization as well if need be.
    As of now, my first host should run ESXi with Windows Server 2012 (2 domain controllers, 1 Server Core box, Exchange 2007, Windows 8).
    The second host will use a 6-core CPU, after which I want to test vMotion and Storage vMotion.
    It will run XP and VMware Workstation to host other clients.
    Which SATA HDD (non-SSD, no RAID config) is the best?
    I just lost two 1 TB Barracudas (64 MB cache) within 6 months.
    I also plan to use the spare 6 GB of 1066 RAM that is lying around for my second host.
  7. Hyper-V should be relatively easy once you find the required drivers for the selected motherboard. ESXi is more challenging; you should read the following or google "esxi amd iommu whitebox":

    http://thehomeserverblog.com/esxi/esxi-5-0-amd-whitebox-server-for-500-with-passthrough-iommu-build-2/ - I'd consider an FX 8320 and a better PSU for this build. Unlike Asus, ASRock and Gigabyte often support IOMMU (you didn't say if you'll need/want it or not) on their desktop motherboards.

    http://thehomeserverblog.com/esxi/esxi-5-0-amd-whitebox-server-for-500-with-passthrough-iommu/ - same comments as the other build.

    http://www.reddit.com/r/homelab/comments/19wopx/critique_my_whitebox_build/

    http://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware

    Make sure the selected motherboard includes a supported LAN controller or buy one if need be.
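
    Before buying, you can also sanity-check AMD-V and IOMMU from any Linux live USB. A minimal sketch (assumes a Linux kernel with sysfs; enable the IOMMU option in the BIOS first):

    ```python
    # Quick pre-flight check for virtualization extensions on an AMD box.
    # Run from a Linux live environment; requires no extra packages.
    from pathlib import Path

    cpuinfo = Path("/proc/cpuinfo").read_text()
    print("AMD-V (svm flag):", "yes" if " svm" in cpuinfo else "no")

    # /sys/class/iommu is only populated when the kernel actually brought an
    # IOMMU up (BIOS option enabled *and* board/CPU support present).
    iommu_dir = Path("/sys/class/iommu")
    units = sorted(p.name for p in iommu_dir.iterdir()) if iommu_dir.is_dir() else []
    print("Active IOMMU units:", units if units else "none (check BIOS/board)")
    ```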
  8. I'd suggest using WD Black or WD Red hard disks. I use WD Black drives on my VM servers.
  9. GhislainG said:
    Hyper-V should be relatively easy once you find the required drivers for the selected motherboard. ESXi is more challenging […]


    Thanks.
    I considered going for the AMD FX-8150 because of its IOMMU support, but I read it has voltage-drop issues and cores not reaching 3.1 GHz.
    I read a lot of abusive Intel vs. AMD threads, but my question still remains unanswered:
    should it be the 6300 or the 8320, as the 8150 and 6300 have similar benchmark scores?
    On the other hand, some people have advised going for the FX-8150 at 3.6 GHz and claimed to push it to 5 GHz on the stock cooler.
    My config as of now:
    AMD 8350
    MB ASRock 970 Extreme3
    Seasonic 600 W PSU
    Sapphire 6670 GDDR5
    RAM: do I go for Ripjaws, or can I make do with a cheap kit like Corsair 1333?
    I use Flipkart.com to purchase my items.

    Also, can you explain the passthrough setup mentioned on the $500 rig page?

    Slot Setup for the ESXi AMD Whitebox

    PCI-e x16: Radeon HD6670 (Passthrough to VM)

    PCI-e x4 : LSI SAS3041E 4-Port SAS/SATA PCI-e x4 (Passthrough to VM)

    PCI-e x1 : 5 Port PCI-E USB Port (for Passthrough)

    PCI-e x1 : GB NIC (RealTek 8168, used by ESXi host)

    PCI : Intel Pro/1000 MT Dual Gigabit PCI-X NIC

    PCI : ATI Rage XL Pro 8 MB PCI Video Card (Console Video)

    Optional SATA Controller Card: LSI SAS3041E 4-Port SAS/SATA PCI-e x4 — $25

    I am confused.
  10. All AM3+ processors support IOMMU. Go with the 6300, the 8320, or the 8350 if it fits in your budget; don't buy an older processor like the 8150 unless you get a great deal on it. I'd go for the 8320, as it only costs a bit more than the 6300 and the performance difference with the 8350 is not important on a server (unless its purpose is to run benchmarks).

    Overclocking a server is asking for stability problems, particularly with 32 GB of memory. I'd select a 1600 memory kit that's known to work reliably when installed on the selected motherboard, or buy a 1600 kit and run it at 1333 if it isn't stable at 1600.

    You need passthrough to assign a physical disk to a VM; same for the video controller, etc. Read what the builder said about vMotion and passthrough. My server supports passthrough, but I have yet to find a reason to use it (unless I create a VM that requires direct access to the hard disk and/or video card).
  11. GhislainG said:
    Alec Mowat said:
    Any of those FX 8-cores will be fine. […]

    You absolutely don't need one core per VM. […]



    pfSense is a router/firewall, so you can create a virtual network with its own DNS and DHCP if you are running a virtual lab. You don't need protection if it's virtual; I imagine it's locked in a secure network already.

    If you are using VMware, your primary system will be running the VMware OS, and you'll have to manage it remotely. It's not as easy as Hyper-V, which runs on top of a Windows Server OS. If you are using Hyper-V or VMware, you can use the host as the primary server.

    If you are using VirtualBox, you'll want to run pfSense. It emulates a router, a switch, and a firewall.

    You can have 8-10 VMs on a quad core, but that's horrible for a production environment. I recommend one core per service. If you are running Exchange and IIS or SQL on one box, I would dedicate two cores, one for each service.
  12. DIONODELL said:
    GhislainG said:
    You absolutely don't need one core per VM. […]

    But why a firewall if I am working on an isolated network? […]


    Step back.

    Is this a project computer that you plan on tearing down, or is this a production system that you plan on maintaining for a while?

    That's a pretty hefty load.

    If you are not actually running Exchange beyond one or two mailboxes, I would just get the 8320 for the additional cores and grab 16 GB of any type of RAM. It's not going to be processing enough information for the RAM to really be an issue. You just want enough to spread around the VMs.

    I would take cores over clock speed either way. Most server Xeon cores are only clocked around 2.2 GHz.
  13. GhislainG said:
    All AM3+ processors support IOMMU. […]

    Alec Mowat said:
    Step back. […]


    I can't say as of now; I'll just be playing with it as a lab, running every 64-bit OS on most hypervisors, and even Exchange.
    Six months down the line it can become a multiseat system, should I go on with my plan of teaching what I know so far (MS server tech and web servers) to new kids so they can practise for free. I like to share my knowledge with college kids who can't afford big systems or server software. It's a little contribution I make; I wish I could make more.
    I want to really exploit the power of the system to the max.
    Right now I am stuck on RAM: for 32 GB of Ripjaws I am paying almost $400. My system plan is at a standstill; RAM and HDD prices are going through the roof in India.
    Everything else is done, and by the way I managed to get the ASRock 970 Extreme3 R2.0 for the same price as the old one. They are very difficult to source here.
  14. Honestly, I have yet to see a lab server that's very busy. Even production servers are often way underutilized, but faster systems are more responsive (not a serious issue in a lab environment). If it's much less expensive than the FX 8320, the FX 6300 would be a viable option. You could also start with 16 GB of memory and increase it to 32 GB later on, but you risk not finding the exact same kit and running into stability issues.
  15. GhislainG said:
    Honestly, I have yet to see a lab server that's very busy. […]


    I am kind of confused over this line about the RAM I need to use:
    "Note 7: AMD FX series CPUs on this motherboard support up to DDR3 1866 MHz as their standard memory frequency."
    Can you decode this line for me?
    Does it mean I can use lower frequencies with FX as well?
    http://www.asrock.com/mb/AMD/970%20Extreme3%20R2.0/?cat=Memory

    I bought the 8350, finally. It was a good deal; very little price difference with the 8320.
    This build is way, way above my budget. I started with a budget, but the lust for a little more performance got the better of me.

    You are right about the resource utilization part; even in a lab I will have a lot of idle time. That's why I am extra cautious over the RAM purchase.
  16. My bad, I missed this:
    Supported memory: DDR3-1866 and above for the 8350.
  17. You can use cheaper DDR3-1333, as you won't notice the difference unless you run benchmarks. On my Hyper-V server (i7-3770), CPU utilization is 1-15% unless I get several VMs really busy. I knew from the start that it exceeded my requirements, but it's fast.
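
    The theoretical gap is easy to put a number on (standard DDR3 arithmetic: transfer rate times 8 bytes per 64-bit channel):

    ```python
    # Peak theoretical bandwidth per DDR3 channel = MT/s x 8 bytes.
    for mts in (1333, 1600, 1866):
        per_channel = mts * 8 / 1000  # GB/s
        print(f"DDR3-{mts}: {per_channel:.1f} GB/s/channel, "
              f"{2 * per_channel:.1f} GB/s dual-channel")
    # 1333 -> 10.7 GB/s vs 1866 -> 14.9 GB/s per channel: visible in
    # benchmarks, rarely in mostly-idle lab VMs.
    ```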
  18. DIONODELL said:
    GhislainG said:
    Honestly, I have yet to see a lab server that's very busy. […]

    I am kind of confused over this line about the RAM I need to use […]




    Unlike workstation environments, server environments will use all available resources. SQL and Exchange are particularly bad for overloading systems.
    In an experimental environment, it won't really matter. You won't run enough at once to really impact anything. Better-performing hardware will make better-performing services.

    You can run numerous vCores on each core (I believe up to 4 per core), but I still recommend leaving a dedicated core per server. You don't want VMs timing out while you are loading 100 GB SQL DBs.

    Keep in mind, if you are planning to run this server 24/7, I would take warranty over performance. Things break more often when they are used more often. Hard drives are especially important.
  19. I have to disagree on hard drive reliability. Systems that run 24x7 often have fewer hard drive issues than systems that are powered off several times a week. I can't figure out why you recommend one core per server, particularly for a lab. If a VM is busy while others are not, all available resources will be used (up to the maximum number of vCores allocated to the VM). When loading a 100 GB DB, I'm more concerned about disk I/O than CPU load.
  20. GhislainG said:
    I have to disagree on hard drive reliability. […]


    Hard drives die all the time. Most production servers run RAID 5 or RAID 6, so it's not a big issue. But if you are only running one drive, it could die. You want to run a daily backup on a production server if possible. That puts more stress on the drive.

    If you are running something BIG, like SQL databases or large Exchange directories, having more cores will greatly benefit you. Having more RAM on your Exchange server will be a big help too. Not all applications will take advantage of more than one core, so running an AV scan overnight can drain one core.

    If you are not actually using your server and just want to open a bunch of unused VMs, then the quality of your hardware will not matter.

    In a production environment, you need good-quality server drives. It's a lot of work when people take shortcuts and things break.

    https://communities.vmware.com/servlet/JiveServlet/previewBody/21181-102-1-28328/vsphere-oversubscription-best-practices%5B1%5D.pdf

    "In a virtual machine, processors are referred to as virtual CPUs (vCPUs). When an administrator adds vCPUs to a virtual machine, each of those vCPUs is assigned to a pCPU, although the actual pCPU may not always be the same."

    In this Dell white paper, the following vCPU:pCPU guidelines are established (see the sketch after this post):
    • 1:1 to 3:1 is no problem
    • 3:1 to 5:1 may begin to cause performance degradation
    • 6:1 or greater is often going to cause a problem

    If you are just experimenting, you may as well pick up the extra cores and not have to worry about data loss or performance. You'll thank me when you are actually running a service, and not just hosting a few blank screens.
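
    The quoted guideline turns into a one-line ratio check (a sketch; the example VM mix is hypothetical):

    ```python
    # vCPU:pCPU oversubscription check, per the Dell/VMware guideline above.
    def verdict(total_vcpus: int, physical_cores: int) -> str:
        ratio = total_vcpus / physical_cores
        if ratio <= 3:
            return f"{ratio:.1f}:1 - no problem"
        if ratio <= 5:
            return f"{ratio:.1f}:1 - may begin to degrade performance"
        return f"{ratio:.1f}:1 - often going to cause a problem"

    # Hypothetical lab mix on an FX-8320 (8 cores): eight 1-vCPU guests
    # plus one 4-vCPU SQL VM.
    print(verdict(8 * 1 + 4, 8))  # -> 1.5:1 - no problem
    ```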
  21. Hi guys, I just saw this system for sale on a local portal.
    CPU: i7-950 (8M Cache, 3.06 GHz, 4.80 GT/s Intel® QPI)
    Motherboard: ASUS P6X58D-E
    RAM: G.SKILL PI Series 12 GB
    Graphic Card: AMD Radeon HD 5670
    Cooler: Corsair H100 series Cooler
    Keyboard: Logitech G15, Gaming Keyboard
    Speaker: Altec Lansing VS2621
    Monitor: DELL 24 inch LCD monitor
    Hard disk: OCZ-AGILITY2 SSD
    Cabinet: Cooler Master CM690 II
    Power Supply Unit: Corsair 650 Watt
    Will it satisfy the whitebox requirements at full load, considering one VM per core?
    I have very little info about this processor's virtualization capabilities.
  22. Alec Mowat said:
    GhislainG said:
    I have to disagree on hard drive reliability. […]

    Hard drives die all the time. […]


    This is interesting info.
    So now that I have 8 cores at 4 GHz at my disposal, what will I see on the ESXi host as the CPU count?
    And when I allocate resources, will it be a 1-core 4 GHz vCPU?
    For example, this is a screenshot of a 6-core AMD 6300:
    http://thehomeserverblog.com/wp-content/uploads/2013/02/esxi-home-server-9-overview.png

    Kindly enlighten me.
  23. DIONODELL said:
    Hi guys, I just saw this system for sale on a local portal. […]

    The issue is not the number of cores, but the amount of memory. You'll need to add several hard disks or SSDs to maximize performance.
  24. DIONODELL said:
    Alec Mowat said:
    Hard drives die all the time. […]

    This is interesting info. So now that I have 8 cores at 4 GHz at my disposal, what will I see on the ESXi host as the CPU count? […]

    8 CPUs at 4 GHz (one CPU per core, e.g., the server I'm connected to shows 48 CPUs). You decide on the frequency assigned to a vCPU, i.e., it doesn't have to be 4 GHz.
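
    In other words, ESXi treats the host as a MHz pool. A minimal sketch with this thread's FX-8350 numbers (the reservation value is just an example):

    ```python
    # ESXi advertises one CPU per physical core and schedules vCPUs out of a
    # MHz pool (cores x clock). FX-8350 figures from this thread.
    cores, mhz_per_core = 8, 4000
    pool_mhz = cores * mhz_per_core
    print(f"Host shows {cores} CPUs, capacity ~{pool_mhz} MHz")

    # A vCPU is not welded to "one 4 GHz core": you can reserve or cap MHz
    # per VM, e.g., guarantee 1 GHz to a small guest.
    reservation_mhz = 1000
    print(f"{pool_mhz - reservation_mhz} MHz left unreserved after that VM")
    ```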
  25. GhislainG said:
    DIONODELL said:
    So now that I have 8 cores at 4 GHz at my disposal, what will I see on the ESXi host as the CPU count? […]

    8 CPUs at 4 GHz (one CPU per core, e.g., the server I'm connected to shows 48 CPUs). […]


    What is the config of your host?
  26. That host is a ProLiant DL585 G7 with 256 GB of memory; storage is on a SAN with a bunch of hard disks. My personal servers are less powerful, but they also are quieter and more energy efficient.
  27. So you have the Opteron server.
    I know about this beast.
  28. GhislainG said:
    That host is a ProLiant DL585 G7 with 256 GB of memory […]


    An actual brand-name server is far better: more reliable, with much better warranty coverage than anything DIY.

    If you are just playing around and shutting this system off at night, the FX will be fine.

    If you are planning on running this 24/7, going with an HP, Dell, or even IBM is a much better option.
  29. DIONODELL said:
    Alec Mowat said:
    Hard drives die all the time. […]

    This is interesting info. […] For example, this is a screenshot of a 6-core AMD 6300:
    http://thehomeserverblog.com/wp-content/uploads/2013/02/esxi-home-server-9-overview.png


    The server shown in the screenshot has 2 vCPUs and 4 GB of RAM. The RAM overhead is the amount of free RAM required for the VM to boot. This is because VMware allows overallocation of resources: you can give four VMs 4 GB of RAM each with only 8 GB total in the server.

    However, I don't recommend allocating more resources than you actually have. This can cause issues in the future.
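
    The overallocation example above, as arithmetic (a sketch using this post's own numbers):

    ```python
    # Memory overcommit from the example above: four 4 GB guests on an
    # 8 GB host. VMware will boot them, relying on ballooning/swapping.
    host_ram_gb = 8
    guests_gb = [4, 4, 4, 4]

    allocated = sum(guests_gb)
    print(f"Allocated {allocated} GB on {host_ram_gb} GB physical "
          f"(overcommit {allocated / host_ram_gb:.0f}:1)")
    # 16 GB promised on 8 GB physical = 2:1; workable for idle lab guests,
    # but as noted above, best avoided.
    ```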
  30. GhislainG said:
    All AM3+ processors support IOMMU. […]


    Hi, how are you?
    Can I PM you or email you if possible?
    Let me know.
  31. You can PM me.