How To: Building Your Own Render Farm

Rolling Your Own Render Node

A popular option for freelance artists is building your own nodes. The advantages are similar to the benefits of building your own PC rather than buying an off-the-shelf system: direct control over the components that go into the build and lower per-unit costs. The disadvantages are also the same: when individual components fail, you either fix them yourself or pay someone else to come in and work on them.

These days it really makes sense to use rack-mounted enclosures. You'll spend a little more, but the space and power savings are well worth it. A 1U chassis like the Supermicro CSE-512L-260 retails for around $100 and includes a 260 W power supply. More than likely, a node will use its onboard graphics rather than a discrete graphics card, so there's a significant power savings right off the bat; most 3D animation and compositing renderers rely on the CPU rather than the GPU (the possibility of GPU-based rendering is discussed below). If your nodes are going to be mission-critical, you may want to look for units with redundant power supplies, but this significantly increases the per-unit cost.
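To put the power question in perspective, here is a minimal back-of-the-envelope sketch in Python. The wattage and circuit figures are illustrative assumptions, not measurements, so adjust them for your own hardware and wiring:

# Rough power budget for a rack of 1U render nodes.
# All figures below are illustrative assumptions, not measurements.
node_load_watts = 180        # assumed typical draw under full CPU load (260 W PSU rating)
circuit_volts = 120          # standard North American household circuit
circuit_amps = 15
derate = 0.8                 # keep continuous load at ~80% of the breaker rating

usable_watts = circuit_volts * circuit_amps * derate       # 1,440 W
nodes_per_circuit = int(usable_watts // node_load_watts)   # 8 nodes
print(f"Usable capacity: {usable_watts:.0f} W -> {nodes_per_circuit} nodes per circuit")

Under these assumptions, a single 15 A circuit comfortably feeds about eight such nodes; redundant power supplies or discrete graphics cards would shrink that number.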

For the rack itself, you can either spend the money and buy a professional unit (Ed.: I have one of these in my garage) or you can instead convert pieces of furniture with the correct dimensions to house your nodes. The RAST or EINA bedside tables from IKEA, for example, plus a pair of Raxxess rack rails (which you can find, oddly enough, at music stores) do the job at a budget price.

Instead of rackmount enclosures, it is also entirely possible to use traditional cases with microATX motherboards, such as the Antec NSK-1380, or a barebones cube like one of Shuttle's XPC chassis. A cube chassis is small, can be purchased with a low-wattage, high-efficiency power supply, and in some cases is stackable. You can't get quite the processing density of rackmount units, but you can use less-specialized cooling components, and no riser card is needed to add a discrete graphics card. Plus, the system can pull double duty as a secondary workstation, home-theater PC (HTPC), and so on.

Picking a motherboard for the system is an easier prospect (you can even choose from the sub-$100 motherboards that we reviewed here). Note, however, that only one of those boards has onboard graphics, which you should consider a requirement for these nodes (think G41/G43/G45 rather than the versions of those chipsets without integrated graphics). In fact, if you never intend to put a graphics card in any of these nodes, you can get an even cheaper motherboard without a PCI Express (PCIe) x16 slot (instances in which you might put a graphics card in a render node are discussed below). You will likely want a board with four memory slots instead of two, though.

For memory, 4 GB is a good start; with the availability of inexpensive 4 GB kits (reviewed here), there's no reason not to. If you are using a dual-core processor and your renderer is a 32-bit application, 4 GB gives each core just short of the maximum RAM a 32-bit process can address (handy if your renderer doesn't multi-thread properly and you run one render process per core). If you're using a 64-bit renderer, more memory will likely be better. We are, of course, talking about DDR2: in this type of configuration there is no real advantage to DDR3, and its price premium adds to the cost of your nodes without a significant performance benefit.
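To make the per-core memory arithmetic concrete, here is a minimal sketch with assumed figures; the practical address-space ceiling of a single 32-bit render process varies by operating system, typically somewhere around 2 to 3 GB:

# Back-of-the-envelope RAM-per-core check for a render node.
# Assumed figures for illustration; adjust for your own hardware and renderer.
total_ram_gb = 4
cpu_cores = 2                  # dual-core CPU
limit_32bit_gb = 2             # assumed practical ceiling for one 32-bit process

ram_per_core_gb = total_ram_gb / cpu_cores
print(f"RAM per core: {ram_per_core_gb:.1f} GB")
if ram_per_core_gb <= limit_32bit_gb:
    print("One single-threaded 32-bit render process per core fits comfortably.")
else:
    print("A 32-bit renderer cannot address all of this; consider a 64-bit build.")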

  • borandi
    And soon they'll all move to graphics cards rendering. Simple. This article for now: worthless.
  • Draven35
    People have been saying that for several years now, and Nvidia has killed Gelato. Every time that there has been an effort to move to GPU-based rendering, there has been a change to how things are rendered that has made it ineffective to do so.
  • borandi
    With the advent of OpenCL at the tail end of the year, and given that a server farm is a centre for multiparallel processes, GPGPU rendering should be around the corner. You can't ignore the power of 1.2TFlops per PCI-E slot (if you can render efficiently enough), or 2.4TFlops per kilowatt, as opposed to 10 old Pentium Dual Cores in a rack.
  • Draven35
    Yes, but it still won't render in real time. You'll still need render time, and that means separate systems. I did not ignore that in the article; in fact, I discussed GPU-based rendering and ways to prepare your nodes for it. Just because you may start rendering on a GPU does not mean it will be in real time. TV rendering is now in high definition (finished in 1080p, usually), and rendering for film is done at that resolution or higher, 2K-4K. If you think you're going to use GPU-based rendering, get boards with an x16 slot and riser cards, then put GPUs in the units when you start using it. Considering software development cycles, it will likely be at least a year (i.e., SIGGRAPH 2010) before a GPGPU-based renderer written in OpenCL is available from any 3D software vendor. Most 3D animators do not and will not develop their own renderers.
  • ytoledano
    While I've never rendered any 3D scenes, I did learn a lot about building a home server rack. I'm working on a project that involves combinatorial optimization and genetic algorithms - both need a lot of processing power and can easily be split across many processing units. I was surprised to see how cheap one quad-core node can be.
  • Draven35
    Great, thanks - it's very cool to hear someone cite another use for this type of setup. Hope you found some useful data.
  • MonsterCookie
    Due to my job I work on parallel computers every day.
    I've got to say: building a cheapo C2D might be OK, but nowadays it is better to buy a cheap C2Q instead, because the price/performance ratio of the machine is considerably better.
    However, please DO NOT spend more than 30% of your money on useless M$ products.
    Be serious, keep cheap things cheap, and spend your hard-earned money on a better machine or on your wife/kids/beer instead.
    Use Linux, Solaris, whatever ...
    Better performance, better memory management, higher stability.
    IN FACT, most real design/3D applications run under unixoid operating systems.
  • ricstorms
    Actually, I think if you look at a value analysis, AMD could give decent value for the money. Get an old Phenom 9600 for $89 and build some ridiculously cheap workstations and nodes. The only thing that would kill you is power consumption; I don't think the first-gen Phenoms were good at undervolting (of course, they weren't good at a whole lot of things). The Q8200 would trounce it, but Intel won't put its quads south of $150 (not that it really needs to).
  • eaclou
    Thanks for doing an article on workstations -- sometimes it feels like all of the articles are only concerned with gaming.

    I'm not to the point yet where I really need a render farm, but this information might come in handy in a year or two. (and I severely doubt GPU rendering will make CPU rendering a thing of the past in 2 years)

    I look forward to future articles on workstations.
    Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?
  • cah027
    I wish these software companies would get on the ball. There are consumer-level software packages that will use multiple CPU cores as well as the GPU all at the same time. Then someone could build a four-socket, six-GPU box all in one that would do the work of several cheap nodes!