How To: Building Your Own Render Farm

Introduction

Everyone reads articles about the immense number of processor hours required to create visual effects and animations for the latest films and TV shows. For example, render times totaled 40 million hours for Monsters vs. Aliens, 30 million hours for Madagascar: Escape 2 Africa, and 6.6 million hours for Revenge of the Sith.

A good render time for television visual effects is anywhere from 30 minutes to one hour per frame, while multiple hours per frame is common for feature films. Some of the IMAX-resolution frames of Devastator, a character in Transformers 2: Revenge of the Fallen, took up to 72 hours each. How do studios cope with this? They use render farms: banks of machines dedicated solely to rendering finished frames. Alongside the systems the animators work on, a render farm puts many dedicated processors to work on rendering simultaneously. For instance, Industrial Light and Magic had a render farm with 5,700 processor cores (plus another 2,000 cores in its artists' machines) when Transformers 2 was produced. Even a small facility with only a dozen animators is likely to have more than a hundred processor cores at its disposal.
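
To put those per-frame times in perspective, here is a back-of-the-envelope estimate. It's a minimal sketch with hypothetical inputs (a 30-second spot at one hour per frame, a 100-core farm), not figures from any real production:

    # Back-of-the-envelope render-time estimate. All inputs are illustrative
    # assumptions, not figures from any real production.
    FRAME_RATE = 24        # frames per second
    SPOT_LENGTH_SEC = 30   # a hypothetical 30-second TV spot
    HOURS_PER_FRAME = 1.0  # upper end of the typical TV range quoted above
    CORES = 100            # a small facility's render farm

    frames = FRAME_RATE * SPOT_LENGTH_SEC    # 720 frames
    core_hours = frames * HOURS_PER_FRAME    # 720 core-hours of rendering
    farm_hours = core_hours / CORES          # assumes ideal scaling

    print(f"{frames} frames, {core_hours:.0f} core-hours of rendering")
    print(f"~{core_hours / 24:.0f} days on one core vs. ~{farm_hours:.1f} hours on {CORES} cores")

Real farms never scale perfectly, since frames vary in complexity and nodes sit idle between jobs, but the orders of magnitude are what push studios toward dedicated render hardware.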

Do You Need A Render Farm?

Render farms aren't restricted to large studios and 3D artists, nor should they be. Smaller studios build their own, and many freelance artists have them as well; compositors and freelance motion-graphics artists can also put them to use. Some editing systems support additional machines, called render nodes, to accelerate rendering, and the same kind of setup extends to architectural visualization and even digital audio workstations.

If you work as a freelance artist in any of these fields, are toying with the idea, or pursue them as a hobbyist, building even a small farm will greatly increase your productivity compared to working on a single workstation. Studios can also use this piece as a reference for building new render farms, as we're going to address scaling, power, and cooling issues.

Home render farm for a freelance artist (image courtesy of Jeremy Massey)

If you're looking at buying a new machine and are thinking of spending big bucks to get a bleeding-edge system, you might want to step back and consider whether it would be more effective to buy the latest and greatest workstation or to spend less by investing in a few additional systems to be used as dedicated render nodes.
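
One rough way to frame that decision is to compare aggregate rendering throughput per dollar. The sketch below is purely illustrative; the prices and relative-performance figures are hypothetical placeholders that you would replace with real benchmark numbers for the hardware you're actually considering:

    # Hypothetical cost/throughput comparison between a single bleeding-edge
    # workstation and a cheaper workstation plus dedicated render nodes.
    # Every number here is a made-up placeholder for illustration only.
    options = {
        "bleeding-edge workstation":       {"cost_usd": 4000, "relative_throughput": 1.5},
        "mid-range workstation + 2 nodes": {"cost_usd": 3200, "relative_throughput": 3.0},
    }

    for name, opt in options.items():
        per_thousand = opt["relative_throughput"] / opt["cost_usd"] * 1000
        print(f"{name}: {opt['relative_throughput']:.1f}x throughput, "
              f"{per_thousand:.2f}x per $1,000")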

Most 3D software and compositing applications include network rendering capabilities, and many also have some form of network rendering controller, so the additional nodes can be managed from your workstation. That makes it possible to run them as headless systems with no mouse, keyboard, or monitor. Adding a Virtual Network Computing (VNC) server to each node lets you manage the nodes remotely without the additional expense of a multi-channel keyboard, video, and mouse (KVM) switch providing separate access to each.
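
As an example, a quick port check from your workstation will tell you which headless nodes are up and answering on the VNC port. This is only a minimal sketch: the hostnames are placeholders, and it assumes each node runs a VNC server on the default port, 5900:

    import socket

    # Placeholder hostnames for the render nodes on your network.
    RENDER_NODES = ["rendernode01", "rendernode02", "rendernode03"]
    VNC_PORT = 5900  # default VNC port (display :0)

    def node_listening(host, port=VNC_PORT, timeout=2.0):
        """Return True if something accepts a TCP connection on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for node in RENDER_NODES:
        print(f"{node}: {'up' if node_listening(node) else 'unreachable'}")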

Buying The Farm

There are three ways to approach acquiring systems for a render farm: building your own, having a builder make them for you, or buying pre-built boxes. Each approach has its own set of advantages and disadvantages, which we discuss below, and each sits in a progressively higher price tier, ranging from cheap to insane.

A useful tip is to make sure the processors in your render farm are the same as the processors in your workstation, as there may be differences in rendering between processor architectures, which could mean small differences in your final rendered frames. These potential inconsistencies are the exception rather than the rule today, but they are still something to watch for. For the purposes of this article, assume that we're talking about Intel-based render nodes, although they could just as easily center on AMD CPUs.
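
A simple way to sanity-check this is to compare the CPU model string each machine reports. The sketch below reads it from /proc/cpuinfo, so it assumes Linux nodes; you would run it on every node and on your workstation (for example, over SSH) and compare the output:

    def cpu_model(path="/proc/cpuinfo"):
        """Return the CPU model string reported by a Linux machine."""
        with open(path) as f:
            for line in f:
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
        return "unknown"

    # Run this on each render node and on your workstation, then compare.
    # Any mismatch is worth noting before you count on frame-identical output.
    print(cpu_model())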

  • borandi
    And soon they'll all move to graphics-card rendering. Simple. This article for now: worthless.
  • Draven35
    People have been saying that for several years now, and Nvidia has killed Gelato. Every time there has been an effort to move to GPU-based rendering, a change to how things are rendered has made it ineffective to do so.
  • borandi
    With the advent of OpenCL at the tail end of the year, and given that a server farm is a centre for multiparallel processes, GPGPU rendering should be around the corner. You can't ignore the power of 1.2TFlops per PCI-E slot (if you can render efficiently enough), or 2.4TFlops per kilowatt, as opposed to 10 old Pentium Dual Cores in a rack.
  • Draven35
    Yes, but it still won't render in real time. You'll still need render time, and that means separate systems. I did not ignore that in the article; in fact, I discussed GPU-based rendering and ways to prepare your nodes for it. Just because you may start rendering on a GPU does not mean it will be in real time. TV rendering is now in high definition (usually finished in 1080p), and rendering for film is done in at least that resolution, or 2K-4K. If you think you're going to use GPU-based rendering, get boards with an x16 slot and riser cards, then put GPUs in the units when you start using them. Considering software development cycles, it will likely be at least a year before a GPGPU-based renderer written in OpenCL is available from any 3D software vendor (i.e., SIGGRAPH 2010). Most 3D animators do not and will not develop their own renderers.
  • ytoledano
    While I never rendered any 3D scenes, I did learn a lot about building a home server rack. I'm working on a project that involves combinatorial optimization and genetic algorithms - both need a lot of processing power and can easily be split across many processing units. I was surprised to see how cheap one quad-core node can be.
  • Draven35
    Great, thanks - it's very cool to hear someone cite another use of this type of setup. Hope you found some useful data.
  • MonsterCookie
    Due to my job I work on parallel computers every day.
    I've got to say: building a cheapo C2D might be OK, but nowadays it is better to buy a cheap C2Q instead, because the machine's price/performance ratio is considerably better.
    However, please DO NOT spend more than 30% of your money on useless M$ products.
    Be serious, keep cheap things cheap, and spend your hard-earned money on a better machine or on your wife/kids/beer instead.
    Use Linux, Solaris, whatever...
    Better performance, better memory management, higher stability.
    IN FACT, most real design/3D applications run under unixoid operating systems.
  • ricstorms
    Actually, I think if you look at a value analysis, AMD could give decent value for the money. Get an old Phenom 9600 for $89 and build some ridiculously cheap workstations and nodes. The only thing that would kill you is power consumption; I don't think the first-gen Phenoms were good at undervolting (of course, they weren't good at a whole lot of things). Of course, the Q8200 would trounce it, but Intel won't put their quads south of $150 (not that they really need to).
  • eaclou
    Thanks for doing an article on workstations -- sometimes it feels like all of the articles are only concerned with gaming.

    I'm not at the point yet where I really need a render farm, but this information might come in handy in a year or two. (And I severely doubt GPU rendering will make CPU rendering a thing of the past in two years.)

    I look forward to future articles on workstations.
    - Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?
  • cah027
    I wish these software companies would get on the ball. There are consumer-level software packages that will use multiple CPU cores as well as the GPU, all at the same time. Then someone could build a four-socket, six-GPU, all-in-one box that would do the work of several cheap nodes!