I'm just not sure the 2x 256GB SSD RAID 0 will handle that many VMs. I could instead split it into two separate 2x 128GB SSD RAID 0 arrays.
Maybe my expectations are wrong; I don't really know how CPU clock speed trades off against core count. If the bigger setup hits double the VMs, it's worth the higher price; if it doesn't, maybe I should go with the cheaper one.
Does the extra CPU clock speed help compensate for having fewer cores available?
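One rough way I've been framing the cores-vs-clock question is aggregate GHz (cores x clock). This is only a back-of-envelope sketch: it ignores IPC differences, hyper-threading efficiency and scheduler overhead, and the i7 clock here is an assumed overclock, not a spec:

```python
# Crude capacity estimate: aggregate GHz = sockets * cores * clock.
# Ignores IPC, turbo, hyper-threading and scheduling overhead, so
# treat the results as rough relative throughput only.

def aggregate_ghz(sockets, cores_per_socket, clock_ghz):
    """Total 'raw' GHz across all physical cores."""
    return sockets * cores_per_socket * clock_ghz

# Example figures (the 4.5 GHz i7 overclock is an assumption):
dual_xeon = aggregate_ghz(sockets=2, cores_per_socket=6, clock_ghz=2.0)  # 2x E5-2620
single_i7 = aggregate_ghz(sockets=1, cores_per_socket=4, clock_ghz=4.5)  # overclocked i7

print(f"dual Xeon     : {dual_xeon:.1f} GHz aggregate")  # 24.0
print(f"i7 @ 4.5 GHz  : {single_i7:.1f} GHz aggregate")  # 18.0
```

By this crude measure the dual Xeon wins on total throughput for many VMs, while the higher-clocked i7 wins on per-thread speed for any single VM.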
I'll start out by saying you are somewhat crippling your build with the Radeon HD graphics cards - I would recommend looking at AMD FirePro cards instead (or NVIDIA Quadro), which are designed more for your needs than gaming cards are. I think that change alone would help. As far as comparing what the two machines can do: natively the dual Xeon outclasses the Core i7, but overclocking can make up quite a bit of that.
I'll see what I can find for numbers to show how an overclocked i7 could work for you running a card like the FirePro S7000 (there likely won't be savings involved, though - good professional cards aren't cheap).
My recommendation would be to scale out to multiple physical systems instead of trying to cram a ton of VMs onto a single one. First off, you might be able to purchase two mid-performance computers or servers for the same price as, or perhaps even cheaper than, one completely maxed-out system doing the same work. Then just load-balance your VMs across the two systems.
This also gives you failover. If you have only one physical system running all those VMs and it goes down, everything is down. If you have two systems and one goes down, the other is still up and, for a short time, might be able to absorb the extra VMs as needed.
If you're doing work with multiple VMs, I'd really recommend going with a Xeon-based system over an overclocked i7. First off, the Xeon processors offer more virtualization features and benefits - after all, that's what they are designed for. Don't worry about insanely expensive fast RAM on the Xeon system. Go with DDR3 1333 or 1600 ECC registered RAM for better stability if uptime is important to you; otherwise there is no really noticeable performance difference between DDR3 1333 and DDR3 2000. Save the money and put it towards the AMD FirePro or NVIDIA Quadro graphics card. These professional-series cards, again, are designed for this type of work and will handle a 3D computation load spread across multiple VMs more effectively than a gaming card.
I'm not sure buying multiple machines would achieve the same VMs per euro. I'm already running 5 desktop PCs with a setup similar to the one above; each ended up costing around 1000-1200e. Of course I could go a bit cheaper with AMD instead of Intel chips, but each machine still runs only the 20-25 VMs I want from it.
The second setup I showed costs about 2500e, and if it runs 60 VMs, that's 3x what one of my normal desktops handles. So I'd still save money buying all-in-one, if I hit that performance.
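The arithmetic behind that, as a quick sketch. The desktop figures are midpoints of the ranges above, and the 60-VM count for the big box is a target, not a measurement:

```python
# Cost-per-VM comparison using the figures from this thread.
# Desktop price/VM count are midpoints (1000-1200e, 20-25 VMs);
# the big box's 60 VMs is a hoped-for target, not measured.

def cost_per_vm(price_eur, vms):
    return price_eur / vms

desktop = cost_per_vm(1100, 22)  # one existing desktop PC
big_box = cost_per_vm(2500, 60)  # dual-Xeon build, IF it hits 60 VMs

print(f"desktop: {desktop:.1f} e/VM")
print(f"big box: {big_box:.1f} e/VM")
```

So the single big box only comes out cheaper per VM if it actually reaches that VM count; at around 53 VMs the two options break even.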
Space, noise and power consumption would also be reduced. But you make a good point: if the machine goes down, it would have been better to have the load divided.
RAM prices are basically the same, but are you saying DDR3 1600MHz ECC will beat DDR3 2133MHz, or that performance will be basically the same and the difference is just that ECC memory is more stable?
As for the GPU, I looked at a Quadro 4000. But would using such a GPU actually reduce CPU utilization, or just improve the VMs' rendering performance? If it doesn't offload the CPU, there's no gain for me.
I think the real problem is not the CPU or RAM utilization scaled to 3x on a single box; it's how that load will impact the things that don't scale so easily, such as network and hard drive throughput. Running 60 VMs on a set of RAID 0 SSDs is most likely still going to push the limits of what those drives can offer, and you're going to wear them out incredibly fast. Don't be surprised when your RAID 0 arrays begin to fail and you lose all of your data.
What kind of workload is being done on these VMs, and how business-critical is it? That also factors into choosing server-class versus desktop-class hardware.
Yes, when putting everything in one machine, like you said, I'm pushing the limits in all directions, not only the CPU but the other components too.
That's why I was thinking of 4x 128GB SSDs, arranged as two 2-drive RAID 0 arrays, to divide the disk load.
Yes, RAID 0 is risky, but there is no data to preserve; the machines are only "processing". Sure, I'd lose time reinstalling the system, but I've been running RAID 0 on my machines for a long time with no problems, so I'm OK with it.
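A quick sketch of what splitting into two arrays actually buys. The 500 MB/s per-SSD sequential figure is an assumption for a typical SATA SSD:

```python
# Splitting one 4-drive RAID 0 into two 2-drive arrays does NOT add
# total bandwidth; the win is isolation (one array failing takes down
# only half the VMs). Per-drive throughput below is an assumption.

DRIVE_MBPS = 500   # assumed sequential throughput per SATA SSD
TOTAL_VMS = 60

def per_vm_mbps(drives_per_array, arrays, vms):
    total_mbps = drives_per_array * arrays * DRIVE_MBPS
    return total_mbps / vms

one_big = per_vm_mbps(4, 1, TOTAL_VMS)    # single 4-drive stripe
two_small = per_vm_mbps(2, 2, TOTAL_VMS)  # two 2-drive stripes, 30 VMs each

print(f"one 4-drive array : {one_big:.1f} MB/s per VM")
print(f"two 2-drive arrays: {two_small:.1f} MB/s per VM")
```

Same per-VM bandwidth either way; what changes is that losing one array loses only half the VMs instead of all of them.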
However, you're telling me I could buy 3 desktop machines for a little more money than this and get the same performance.
Also, today I got the chance to use 2x E5-2650 instead of E5-2620 for the same price (used chips).
That means more CPU power: 32 logical CPUs. Yes, I'm still pushing a lot into one PC, but it's tempting to do so...