Building a render farm

April 12, 2012 4:16:00 AM

I have embarked on a project in which I will be building at least ten 1U render nodes running V-Ray. This project will go on for at least a few weeks, and I will post updates on the build as well as look for suggestions regarding the parts. So far I have looked at BOXX's prebuilt render farm, and the 10-node renderBOXX array is quite expensive, totaling $38,000, or $3,800 per node. From what I can tell, the perks of buying these are: BOXX will maintain them, 10 nodes fit in a 4U space, and there is no hassle since the software is installed and the nodes are built. None of these matter to me, as I don't mind maintaining the nodes, I have enough space, and software is no issue. I also noticed that the renderBOXXes come with the Westmere Xeon E5645, while by building the nodes myself I could use the newer E5-2630. Wouldn't that be better?

My main goal is to build an array similar to the BOXX array but much cheaper, with each node costing around $2,000. This means I will need a good dual-socket motherboard that can handle two hexacore Xeons. I will also need at least 12 GB of registered ECC memory. I am still looking for a good case/PSU. Also, how do graphics fit into this? Should I get a card similar to the one included in the BOXX array, which runs on an x1 PCI slot or something weird like that? (I'm not sure about the graphics card included in the renderBOXX.)

Thanks, any suggestions are much appreciated.


April 12, 2012 4:41:15 AM

HPC, here we go. You're looking for straight cluster core count. If space and software are no issue, I'll pull some low-cost server specs.

Server barebones $649.99
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

Same CPU listed - $557.99
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

RAM - $33.99 x 3 = $101.97
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

Crucial makes some well-priced SSDs. You did not specify what you need for drives.

Also note you would need a full gigabit switch. Since you're stuck using a network to build this, speed will be an issue. There are fiber options, but even a 10-port, 10 Gbit switch will run $4,000, not to mention the $1,000 interface card for each server.
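
To put the gigabit limit in numbers, here is a rough back-of-envelope sketch in Python (the 100 MB frame size is just an assumption for illustration, not a V-Ray figure):

```python
# Back-of-envelope: time to move one rendered frame from a slave back
# to the master over gigabit Ethernet. The 100 MB frame size is an
# assumed placeholder, not a V-Ray figure.
FRAME_MB = 100                  # assumed size of one rendered frame
GIGABIT_MB_PER_S = 1000 / 8     # 1 Gbit/s is ~125 MB/s in theory
REAL_WORLD_FACTOR = 0.8         # protocol overhead eats roughly 20%

transfer_s = FRAME_MB / (GIGABIT_MB_PER_S * REAL_WORLD_FACTOR)
print(f"~{transfer_s:.1f} s per {FRAME_MB} MB frame")  # prints ~1.0 s
```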

Video: you will only need basic video for your master node.

April 12, 2012 4:47:30 AM

I don't know servers, but isn't the E5-2630 a better CPU?
April 12, 2012 5:19:09 AM

killermoats said:
HPC, here we go. You're looking for straight cluster core count. If space and software are no issue, I'll pull some low-cost server specs.

Server barebones $649.99
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

Same CPU listed - $557.99
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

RAM - $33.99 x 3 = $101.97
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

Crucial makes some well-priced SSDs. You did not specify what you need for drives.

Also note you would need a full gigabit switch. Since you're stuck using a network to build this, speed will be an issue. There are fiber options, but even a 10-port, 10 Gbit switch will run $4,000, not to mention the $1,000 interface card for each server.

Video: you will only need basic video for your master node.

Looks great! And sorry for not mentioning it before, but SSDs are exactly what I had in mind; I'll check out the Crucial ones. I'm kind of new at this (obviously), but could you elaborate a bit on the "straight cluster core count" as well as how the gigabit switch and interface cards work? Thanks! This is turning out to be much cheaper than the renderBOXXes.

EDIT: What do you all think about going for the socket 2011 Xeons instead of the Westmere ones? Is it really worth deviating from the great (and cheap) server that killermoats outlined earlier?
Anonymous
April 12, 2012 5:36:04 AM

I am sure there are many, many people who know more than I do about configuring a server, but LGA 1366 is an old socket with no upgrade path, whereas LGA 2011 is fairly fresh and is the platform Intel will be using for at least the next year.
April 12, 2012 5:46:07 AM

The server has 2 built-in gigabit ports. Usually we use these for redundant networking. Often the servers I build will have upwards of 6 gigabit Ethernet connections and 4-6 fiber channels, but that's an enterprise environment pushing 8 Gbit fiber.

Part A: you're building a cluster, which can scale from 2 nodes (servers) to hundreds. The cluster will usually consist of a master node and slaves. You run your application on the master, which in turn uses the processing power of the slaves. The processing power will bottleneck at the network. Typical node/blade systems are attached via a backplane with tons of bandwidth; the gigabit network is your cost-effective way to connect the devices.
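
To make the master/slave idea concrete, here is a toy Python sketch; the multiprocessing pool stands in for real nodes on the network, and render_frame is a made-up placeholder for the actual render work (a real farm would use V-Ray's own distributed rendering):

```python
# Toy sketch of the master/slave pattern described above. Python's
# multiprocessing stands in for real render nodes on the network.
from multiprocessing import Pool

def render_frame(frame):
    # On a real slave node this would be the actual V-Ray render call.
    return f"frame {frame:04d} done"

if __name__ == "__main__":
    frames = range(1, 101)             # the job: frames 1-100
    with Pool(processes=10) as pool:   # 10 "nodes"
        # The master hands frames to whichever slave is free,
        # like a render queue distributing work across the cluster.
        for result in pool.imap_unordered(render_frame, frames):
            print(result)
```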

For a true HPC environment you would run InfiniBand (up to 40 Gbit) or some other beefy fiber. There you're talking $10k-90k for a switch, $500 per patch cable, and $1,000 per client server.

What software do you have for building a cluster?

looniam is correct that there is better hardware out there, obviously. The problem is whether your software will support the bleeding-edge hardware.

I run redundant XenServer (Citrix) virtualization servers, and the newest release does not support AMD's 6200 procs (only 6100s). It would be awesome if it did, or when it does; that would allow going from an 8-core proc to a 16-core proc.

The problem still remains: does your software support the CPU architecture?
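
One cheap way to check is to see what CPU an existing test box actually reports and compare it against the software's supported list; a minimal Linux-only sketch:

```python
# Quick sanity check on an existing Linux box: what CPU does the OS
# report? Compare the model string against whatever your render
# software's docs list as supported.
def cpu_model():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

print(cpu_model())  # e.g. "Intel(R) Xeon(R) CPU E5645 @ 2.40GHz"
```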

Start small, then scale it as you can.
April 12, 2012 5:49:18 AM

Anonymous said:
I am sure there are many, many people who know more than I do about configuring a server, but LGA 1366 is an old socket with no upgrade path, whereas LGA 2011 is fairly fresh and is the platform Intel will be using for at least the next year.


Servers = build 'em, forget 'em. Add/replace RAM if needed, add/replace hard drives if needed, retire and replace.

April 13, 2012 6:31:29 PM

Sorry for the late reply, guys.

Thanks for explaining the gigabit Ethernet restrictions. Just to be sure I read killermoats's explanation correctly: am I right in saying that the render nodes may process data faster than it can be transferred back to the master node over a standard gigabit Ethernet connection, and that is where the bottleneck occurs?

Also, I am quite sure that the software supports the CPU architecture.

One other thing: how do optical drives fit into this? Would there be one attached solely to the master node?

Are there any other things anyone would recommend before I consider moving on and purchasing some parts?

Thanks again
Anonymous
April 15, 2012 1:07:41 AM

Really, triple channel is just 3 matched sticks, so either will work.