
Windows VM in Beowulf?

August 30, 2011 2:07:56 AM

I was wondering if I could set up a Windows Beowulf cluster or, if that's not possible, a Windows virtual machine in a Linux-based Beowulf cluster.

August 30, 2011 5:59:28 AM

Generally speaking, there are more docs out there for clusters running Linux than Windows, so my gut reaction is that it's going to be easier to do the Linux cluster with a Windows VM approach...

...but one has to ask: why? What do you specifically need from Windows on this number-crunching set of machines? Nerd-cred alone is not a valid reason (especially since this exercise actually works against that goal in most circles)
August 30, 2011 12:11:56 PM

Thanks for the reply. Actually, I am considering building a render farm, one that, among other things, requires V-Ray and 3ds Max (and therefore Windows), and I thought that with Windows under Beowulf I would save money on hard drives, etc., and that it would be easier to manage once set up. What do you think?
August 30, 2011 6:09:22 PM

As far as I know Autodesk Backburner will work on certain Linux distributions.
August 30, 2011 8:58:22 PM

Are you looking to leverage the cluster itself to assist with the Windows-based rendering or simply to churn out some base files that can be passed off to other cluster-ready tools? (forgive my ignorance, I have a passing awareness of rendering from a college project where I used Blender).

I ask since, unless the VM host software you choose is cluster-aware (and I don't know of any that are), you'd be better off trying to make the cluster in Windows (which still means the software has to be cluster-aware).
August 31, 2011 4:20:33 AM

It would be best, of course, if I could run a Beowulf-like cluster in Windows, in a way that makes all the CPUs in the cluster available for rendering transparently to the rendering software, so the rendering queue would be as easy to manage as on a single PC.

Please excuse my noobness, but what exactly is the difference between a cluster-aware and a non-cluster-aware app? Is it purely a performance issue? Maybe I could assign different tasks to certain CPUs (from Task Manager or something) to mitigate the performance loss from network overhead?
August 31, 2011 5:18:03 AM

For example, it means creating a many-threaded/many-process application that uses MPI calls to communicate between nodes and recompiling it with an appropriate compiler.

Standard applications meant to run on a single machine won't see any performance boost running on a cluster, simply because they don't actually use the cluster, just a single node in it.
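
To make that concrete, here's a rough sketch (my own toy code, not anything from a real render farm) of what a cluster-aware program looks like in C with MPI: each process asks how many processes exist and which one it is, then claims its own slice of the frames. The frame count and the printf are just placeholders for real work.

/* Toy "cluster-aware" program: each MPI rank (roughly, one process per
 * node) figures out its own slice of the work.
 * Build with: mpicc -o frames frames.c
 * Run with:   mpirun -np 8 ./frames                                     */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int total_frames = 100;           /* hypothetical job size      */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes total?  */

    /* rank 0 takes frames 0, size, 2*size, ...; rank 1 takes 1, size+1, ... */
    for (int f = rank; f < total_frames; f += size)
        printf("rank %d rendering frame %d\n", rank, f);

    MPI_Barrier(MPI_COMM_WORLD);            /* wait for everyone to finish */
    if (rank == 0)
        printf("all %d frames handled by %d ranks\n", total_frames, size);

    MPI_Finalize();
    return 0;
}

The point is that the work-splitting logic lives inside the program itself; an ordinary single-machine app has no equivalent of MPI_Comm_size/MPI_Comm_rank, which is why just dropping it onto a cluster buys you nothing.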
August 31, 2011 5:58:28 AM

Have you considered just making one high performance computer?

These days you can have workstations with insane core counts (20+).

Would beat spending your money on a farm, I'd say.
August 31, 2011 8:06:06 PM

amdfangirl said:
Have you considered just making one high performance computer?

These days you can have workstations with insane core counts (20+).

Would beat spending your money on a farm, I'd say.


I'd agree with your statement, since the application needs to be designed/written in a way that actually takes advantage of a cluster (granted, a setup to render a movie is a perfect example, with nodes being passed frames to render and the master recombining them; roughly sketched below), and it's usually fairly custom work to get a cluster-ready application.

You'd be hard-pressed to do better than a beefy single machine with gobs-o-cores (maybe even a dual-socket system) and gobs-o-memory (an SSD array wouldn't be bad, but you're not really streaming tons to/from disk compared to the compute time, so it's not as necessary).
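
For the curious, here's roughly what that frame-passing pattern looks like as another toy MPI sketch in C (my own illustration, not how Backburner or V-Ray actually work): rank 0 plays the master and hands frame numbers to whichever worker asks next, then tells everyone to stop.

/* Toy master/worker frame dispatcher with MPI.
 * Run with at least 2 processes, e.g. mpirun -np 4 ./dispatch            */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int total_frames = 20;            /* hypothetical job size */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                        /* master: dispatch and collect */
        MPI_Status st;
        int last_result;

        /* hand each frame to whichever worker asks for work next */
        for (int f = 0; f < total_frames; f++) {
            MPI_Recv(&last_result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            MPI_Send(&f, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
        }

        /* no frames left: answer every remaining request with -1 (= stop) */
        int stop = -1;
        for (int w = 1; w < size; w++) {
            MPI_Recv(&last_result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
        }
        printf("master: all %d frames dispatched\n", total_frames);
    } else {                                /* worker: ask, render, repeat */
        int frame, result = 0;              /* result doubles as "I'm ready" */
        for (;;) {
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            MPI_Recv(&frame, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (frame < 0)
                break;                      /* master says we're done */
            printf("worker %d rendering frame %d\n", rank, frame);
            result = frame;                 /* stand-in for rendered output */
        }
    }

    MPI_Finalize();
    return 0;
}

That hand-rolled dispatch logic is exactly the "fairly custom work" mentioned above, which is why an off-the-shelf Windows app won't spread itself across a cluster on its own.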
August 31, 2011 8:25:52 PM

The easiest (and probably cheapest) solution is just one ridiculous rig. Find a dual-socket board, throw a pair of quad cores (or, if you're an AMD fan, six/eight cores) in there, and as much RAM as you can squeeze in. An SSD probably isn't necessary, but could possibly be beneficial if used as a cache.
September 1, 2011 10:01:44 AM

Your replies have been most helpful, thanks guys!
Then I guess I will just build a Sandy Bridge render farm, single CPU per node (overclocked 2600Ks). It costs about $800 per node I think, which should be cheaper than dual-CPU Xeon setups, right? Suggestions are welcome.
As for SSDs, I found that having more RAM yields much better performance than any reasonably priced SSD I could find.