How To: Building Your Own Render Farm

Render Node Considerations

If you're looking at installing a large number of render nodes in your home, you need to consider both power and cooling. We’re talking about multiple systems sitting in an enclosed space, which will consume a lot of power and generate significant heat in a very small area. You should consequently think about how many nodes will fit in the space allotted.

For a freelancer using a home studio, you may be tempted to build 10 identical boxes, but keep power consumption in mind. The electrical standard in U.S. homes is 110 V at 15 amps, which means 1,650 W is the maximum for a typical circuit. Some houses may have 20 amp breakers, which gives you a little more leeway, but putting 10 nodes on one circuit means you'd better build extremely efficient systems. If someone turns on a hair dryer on the same circuit, you'll hear the breaker trip pretty quickly.

If you really need to put 10 nodes in your home, you may want to split them into two groups of five. Those five may still consume most of the power available on their circuit. Keep in mind, though, that with a low thermal design power (TDP) processor, each system should only draw about 140 W at 100% utilization, depending on the actual processor, motherboard, chipset, and hard drive used. Across 10 systems, that's 1,400 W, which is still very close to the maximum capacity of an average household circuit.
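
As a quick sanity check before plugging anything in, you can run that arithmetic in a few lines. The sketch below is only illustrative: the 140 W per-node figure mirrors the estimate above, and the 80% continuous-load derating is a common electrical rule of thumb, not a measurement of your hardware.

```python
# Rough circuit-budget check for a planned render farm.
# NODE_WATTS mirrors the ~140 W estimate above; adjust for your actual
# CPU, motherboard, chipset, and drives. The 80% derating is a common
# rule of thumb for continuous loads, not a substitute for an electrician.

CIRCUIT_VOLTS = 110
CIRCUIT_AMPS = 15            # use 20 for a 20 amp breaker
DERATING = 0.80              # keep continuous loads at ~80% of the rating

NODE_WATTS = 140             # estimated draw per node at 100% utilization
NODES_PER_CIRCUIT = 5

circuit_capacity = CIRCUIT_VOLTS * CIRCUIT_AMPS   # 1,650 W on a 15 amp circuit
safe_budget = circuit_capacity * DERATING         # ~1,320 W continuous
planned_load = NODE_WATTS * NODES_PER_CIRCUIT     # 700 W for five nodes

print(f"Circuit capacity: {circuit_capacity:.0f} W, safe budget: {safe_budget:.0f} W")
print(f"Planned load: {planned_load:.0f} W ({planned_load / safe_budget:.0%} of the safe budget)")
```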

After power, your next concern should be cooling. Several 1U systems packed into a tight space will generate plenty of warm air behind the boxes. To boost airflow efficiency, most IT departments maintain a hot aisle/cold aisle layout: systems draw cool air in from one side (the cold aisle) and exhaust hot air out the other (the hot aisle). To a lesser degree, you can apply this data center concept at home to handle the airflow for several nodes. Make sure, for example, that there is cool airflow at the front of the systems and a way to evacuate the warm air behind them (don't put the back of your rack against the wall).

You also need to think about redundancy. If one node goes down, you lose that portion of your render farm's capacity. If you can spare the expense, you could build a spare node to swap in as needed, but then you have to resist the urge to press it into service as a regular node, which would defeat the purpose of having it as a spare.
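
Whether or not you keep a cold spare on the shelf, it helps to notice quickly when a node drops out so jobs aren't quietly piling up on dead hardware. Something as simple as the sketch below, run from your workstation or file server, will do; the hostnames and the SSH port are placeholders for whatever your nodes actually use.

```python
# Quick reachability check so a dead render node doesn't go unnoticed.
# Hostnames and the port (22, assuming the nodes run an SSH server) are
# placeholders for your own setup.
import socket

RENDER_NODES = ["node01", "node02", "node03", "node04", "node05"]
SSH_PORT = 22

for host in RENDER_NODES:
    try:
        with socket.create_connection((host, SSH_PORT), timeout=2):
            print(f"{host}: up")
    except OSError:
        print(f"{host}: DOWN - check power and network, or swap in the spare")
```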

Serving Files

With multiple render nodes, it is important to host the files for your software somewhere other than on your production workstation, especially if you're trying to use the workstation while the other systems render. It is thus a good idea to either buy a network attached storage (NAS) box or build a small Linux server to handle the file-hosting chores, keeping your workstation from being taxed by serving files to other systems.

Depending on personal preference, you can either "publish" the files to the server before starting a render or work with the files from the server all the time. The first option gives your workstation fast local access when interactivity is important, while the second option helps you avoid missing files and broken internal links when moving things to the server. Troubleshooting these kinds of render problems can get very tedious, and if you're not careful, you can spend hours rendering an entire scene only to discover afterward that a texture in it was missing or was the wrong version.
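
One way to catch that problem before burning render time is a quick pre-flight pass over the assets a scene references. The sketch below assumes you can export that list of paths from your 3D package (most have a scripting interface or an asset manifest); the specific file names are purely illustrative.

```python
# Pre-flight check: verify every asset a scene references actually exists
# on the file server before the render is submitted. The paths below are
# illustrative; in practice you would pull this list from your 3D package
# or from an exported asset manifest.
from pathlib import Path

scene_assets = [
    "/mnt/projects/shot_010/textures/brick_diffuse.png",
    "/mnt/projects/shot_010/textures/brick_normal.png",
    "/mnt/projects/shot_010/caches/smoke_sim.vdb",
]

missing = [p for p in scene_assets if not Path(p).is_file()]

if missing:
    print("Do NOT submit this render; missing assets:")
    for p in missing:
        print(f"  {p}")
else:
    print("All referenced assets are present on the server.")
```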

If you're not already working with your 3D files on a remote system or file server, you will have to move those files to the server and fix these potential problems as you go. After doing that, it's a good idea to get into the habit of working on all of your scenes remotely so that the content is automatically on the remote file system, and nothing has to be copied over to the server when it's time to render.
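
When you do that initial move, much of the breakage tends to come from absolute paths that still point at your workstation's local drives. A one-time remap like the hedged sketch below can handle the bulk of it; both path prefixes are assumptions, and the references inside your scene files themselves still have to be updated through your 3D package or its scripting interface.

```python
# One-time remap of local workstation paths to their new server locations.
# Both prefixes are assumptions; substitute the drive and share layout you use.
LOCAL_PREFIX = "C:/work/projects"
SERVER_PREFIX = "//fileserver/projects"   # or /mnt/projects on Linux nodes

old_paths = [
    "C:/work/projects/shot_010/textures/brick_diffuse.png",
    "C:/work/projects/shot_010/caches/smoke_sim.vdb",
]

remapped = [p.replace(LOCAL_PREFIX, SERVER_PREFIX, 1) for p in old_paths]

for old, new in zip(old_paths, remapped):
    print(f"{old} -> {new}")
```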

  • borandi
    And soon they'll all move to graphics cards rendering. Simple. This article for now: worthless.
  • Draven35
    People have been saying that for several years now, and Nvidia has killed Gelato. Every time that there has been an effort to move to GPU-based rendering, there has been a change to how things are rendered that has made it ineffective to do so.
  • borandi
    With the advent of OpenCL at the tail end of the year, and given that a server farm is a centre for multiparallel processes, GPGPU rendering should be around the corner. You can't ignore the power of 1.2TFlops per PCI-E slot (if you can render efficiently enough), or 2.4TFlops per kilowatt, as opposed to 10 old Pentium Dual Cores in a rack.
  • Draven35
    Yes, but it still won't render in real time. You'll still need render time, and that means separate systems. I did not ignore that in the article, and in fact discussed GPU-based rendering and ways to prepare your nodes for it. Just because you may start rendering on a GPU does not mean it will be in real time. TV rendering is now in high definition (finished in 1080p, usually), and rendering for film is done in at least that resolution, or 2K-4K. If you think you're going to use GPU-based rendering, get boards with an x16 slot and riser cards, then put GPUs in the units when you start using it. Considering software development cycles, it will likely be at least a year before a GPGPU-based renderer written in OpenCL is available from any 3D software vendor (i.e., SIGGRAPH 2010). Most 3D animators do not and will not develop their own renderers.
  • ytoledano
    While I've never rendered any 3D scenes, I did learn a lot about building a home server rack. I'm working on a project which involves combinatorial optimization and genetic algorithms - both need a lot of processing power and can easily be split across many processing units. I was surprised to see how cheap one quad-core node can be.
  • Draven35
    Great, thanks - it's very cool to hear someone cite another use for this type of setup. Hope you found some useful data.
  • MonsterCookie
    Due to my job, I work on parallel computers every day.
    I have to say: building a cheapo C2D might be OK, but nowadays it is better to buy a cheap C2Q instead, because the price/performance ratio of the machine is considerably better.
    However, please DO NOT spend more than 30% of your money on useless M$ products.
    Be serious, keep cheap things cheap, and spend your hard-earned money on a better machine or on your wife/kids/beer instead.
    Use Linux, Solaris, whatever...
    Better performance, better memory management, higher stability.
    IN FACT, most real design/3D applications run under unixoid operating systems.
  • ricstorms
    Actually, I think if you look at a value analysis, AMD could give decent value for the money. Get an old Phenom 9600 for $89 and build some ridiculously cheap workstations and nodes. The only thing that would kill you is power consumption; I don't think the 1st-gen Phenoms were good at undervolting (of course, they weren't good at a whole lot of things). Of course the Q8200 would trounce it, but Intel won't put their quads south of $150 (not that they really need to).
  • eaclou
    Thanks for doing an article on workstations -- sometimes it feels like all of the articles are only concerned with gaming.

    I'm not to the point yet where I really need a render farm, but this information might come in handy in a year or two. (and I severely doubt GPU rendering will make CPU rendering a thing of the past in 2 years)

    I look forward to future articles on workstations.
    - Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?
  • cah027
    I wish these software companies would get on the ball. There are consumer-level software packages that will use multiple CPU cores as well as the GPU all at the same time. Then someone could build a 4-socket, 6-GPU box all in one that would do work equal to several cheap nodes!