Generally speaking, there are more docs out there for clusters running Linux than Windows, so my gut reaction is that it's going to be easier to do the Linux cluster, Windows VM approach...
...but one has to ask: why? What do you specifically need from Windows on this number-crunching set of machines? Nerd-cred alone is not a valid reason (especially since this exercise actually works against that goal in most circles)
Thanks for the reply. Actually, I am considering building a render farm, one that, among other things, requires V-Ray and 3ds Max (and therefore Windows), and I thought if I had Windows under Beowulf I would save money on hard drives, etc., and I thought it would be easier to manage once set up. What do you think?
Are you looking to leverage the cluster itself to assist with the Windows-based rendering or simply to churn out some base files that can be passed off to other cluster-ready tools? (forgive my ignorance, I have a passing awareness of rendering from a college project where I used Blender).
I ask since, unless the VM host software you choose is cluster-aware (and I don't know of any that are), you'd be better off trying to make the cluster in Windows (which means that the rendering software itself has to be cluster-aware).
It would be best, of course, if I could run a Beowulf-like cluster in Windows, in a way that makes all the CPUs in the cluster available for rendering, transparently to the rendering software, so it would be as easy to manage the rendering queue as if it were a single PC.
Please excuse my noobness, but what exactly is the difference between a cluster-aware and a non-cluster-aware app? Is it purely a performance issue? Maybe I can assign different tasks to certain CPUs (from Task Manager or something) to mitigate performance loss from network overhead?
Have you considered just making one high performance computer?
These days you can have workstations with insane core counts (20+).
Would beat spending your money on a farm, I'd say.
I'd agree with your statement, since the application needs to be designed/written in a way that really takes advantage of a cluster (granted, a setup to render a movie is a perfect example, with nodes being passed frames to render and the master recombining them), and it's usually fairly custom work to get a cluster-ready application.
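To make that frame-farming idea concrete, here's a minimal sketch in Python. This is purely illustrative: the "workers" are local threads standing in for cluster nodes, and `render_frame` is a hypothetical placeholder for whatever the real renderer would do per frame. The point is the shape of the work: the master hands out independent frames, collects the results, and reassembles them in order.

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number):
    # Stand-in for the real per-frame render job (hypothetical placeholder);
    # on a real farm this would run on a remote node.
    return (frame_number, f"frame_{frame_number:04d}.png")

def render_scene(frame_count, workers=4):
    # "Master" side: farm each frame out to a worker, gather the results,
    # then sort by frame number so the final sequence is in order even if
    # frames finish out of order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(render_frame, range(frame_count)))
    return [name for _, name in sorted(results)]

print(render_scene(6))
# e.g. ['frame_0000.png', 'frame_0001.png', ..., 'frame_0005.png']
```

Because each frame is independent, this pattern scales out almost trivially, which is exactly why rendering is the textbook case for clusters while most other apps need custom work to split their workload this cleanly.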
You'd be hard-pressed to do better than a beefy single machine with gobs-o-cores (maybe even a dual-socket system) and gobs-o-memory. An SSD array wouldn't be bad, but you're not really streaming tons to/from disk compared to the compute time, so it's not as necessary.
The easiest (and probably cheapest) solution is just one ridiculous rig. Find a dual-socket board, throw a pair of quad cores (or if you're an AMD fan six/eight cores) in there, and as much RAM as you can squeeze in. An SSD probably isn't necessary, but could possibly be beneficial if used as a cache.
Your replies have been most helpful, thanks guys!
Then I guess I will just build a Sandy Bridge render farm, single CPU each (overclocked 2600Ks). It costs about $800 per node, I think, which should be cheaper than dual-CPU Xeon setups, right? Suggestions are welcome.
As for SSDs, I found that having more RAM yields much better performance than any reasonably priced SSD I could find.