How To: Building Your Own Render Farm

Buying From a Small VAR

You can take a different direction by having a small value-added reseller (VAR) build the nodes for you. The obvious advantage is that the VAR has to support a node when it breaks. But if you buy your boxes online (as opposed to locally), you'll likely have to ship defective units back to the VAR for repairs. You're also limited to whatever hardware the VAR offers, and specifying your own configuration may drive up the price if you start asking for components the VAR doesn't stock.

Many shops will also look at you strangely when you ask for these kinds of configurations because they have no experience with people building their own render farms. When VARs do build systems like this, they usually expect to put some type of server operating system on the machine; if you don't specify otherwise, they may assume you're after a low-spec file or transaction server. Once you explain what you need, though, most places should be able to build the nodes to roughly the same specs as if you were building them yourself, at a slightly higher price. Then, once you receive the nodes, you have to install all of your applications on each system separately or invest in management tools that let you do bulk installs.
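If you end up doing those installs by hand over the network, even a small script can stand in for a management tool. The sketch below is only an illustration, assuming your nodes already accept key-based SSH logins; the hostnames, installer filename, and its --silent flag are hypothetical placeholders, not anything vendor-specific.

```python
# Minimal sketch of a bulk install across render nodes over SSH.
# Assumes key-based SSH logins are already set up; hostnames, the
# installer file, and the "--silent" flag are placeholders.
import subprocess

NODES = ["rnode01", "rnode02", "rnode03"]   # hypothetical node hostnames
INSTALLER = "renderer-setup.run"            # hypothetical installer package

for node in NODES:
    # Copy the installer to the node, then run it non-interactively.
    subprocess.run(["scp", INSTALLER, f"{node}:/tmp/"], check=True)
    subprocess.run(["ssh", node, f"sh /tmp/{INSTALLER} --silent"], check=True)
    print(f"{node}: install finished")
```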

If you have a VAR build your nodes, you open up several new possibilities. One "put-all-your-eggs-in-one-basket" option is a single 1U enclosure like the Supermicro SuperServer 6015T, which packs two dual-socket LGA 771 Xeon nodes into one chassis, meaning you can put 16 3.2 GHz processor cores in a single 1U enclosure. Of course, this unit also has a 980 W power supply, and at its peak power load its power requirement is similar to that of 10 1U nodes.
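For perspective, here is the back-of-envelope math behind that density claim, using only the figures quoted above; the watts-per-core number is PSU headroom, not a measured draw.

```python
# Density math for the twin 1U chassis, using the figures quoted above.
nodes_per_chassis = 2            # two dual-socket boards share the 1U enclosure
sockets_per_node = 2
cores_per_socket = 4             # quad-core 3.2 GHz Xeons
total_cores = nodes_per_chassis * sockets_per_node * cores_per_socket
print(f"cores per 1U: {total_cores}")                               # 16

psu_watts = 980                  # PSU rating, an upper bound rather than actual draw
print(f"PSU headroom per core: {psu_watts / total_cores:.0f} W")    # ~61 W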

If you need a lot of processor power in a very small space, a few Supermicro systems might be a good idea, especially if you are running a virtual studio where you have access to better power circuitry and ventilation designed to handle this kind of setup. These Supermicro high-density systems are going to get very warm.

Another option is investing in ATXBlade units. ATXBlades are like the blade servers you may have seen, but they use commodity ATX motherboards and can be configured like normal systems. They also let you fit 10 nodes into 8U worth of rack space. However, ATXBlades accommodate only a limited range of motherboards and other components. Still, you can buy the ATXBlade chassis and blade units with motherboards and no other components, and then build out the nodes yourself. At this point we've drifted from a home-freelancer setup into a small-studio discussion, because the setups we're describing are getting progressively more expensive while consuming more power and generating more heat. An ATXBlade unit draws 2,000 W at 100% CPU and I/O utilization, which is more power than a typical single household circuit can deliver, making this an option for a small boutique studio.
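To see why 2,000 W rules out a home installation, here is the quick circuit math, assuming a common North American 15 A, 120 V branch circuit and the usual 80% derating for continuous loads.

```python
# Why a 2,000 W blade chassis won't run on a typical household circuit
# (assumes a North American 15 A / 120 V branch circuit).
breaker_amps = 15
line_volts = 120
circuit_watts = breaker_amps * line_volts      # 1,800 W absolute ceiling
continuous_watts = 0.8 * circuit_watts         # ~1,440 W for sustained loads
blade_watts = 2000                             # ATXBlade figure from the text

print(f"circuit limit: {circuit_watts} W, continuous: {continuous_watts:.0f} W")
print(f"over budget by: {blade_watts - circuit_watts} W")
```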

Buying From a Tier-One Vendor

"Buying commercial" involves going to a so-called big-box vendor for your render nodes. The big advantage to this option is that a large vendor is going to have a well-trained support staff if you're a business customer. Both Dell and HP have departments that are experienced in supporting 3D animation, editing, and compositing software if you are a business (and not a home) customer. Business support also means you can get someone on the phone 24/7, and in most cases if you spend the money, you can get next business-day (or even next-day) on-site repairs.

There are also specialist vendors like Boxx Technologies, which has been building workstations and dedicated render nodes for the industry since 1998. Boxx's advantage is that its machines are designed from the ground up for this usage model. Its renderBOXX module puts two dual-processor, quad-core systems in a single chassis designed to be racked with other modules, and Boxx supports pre-installing applications and render controllers on the modules. Five of these units (80 processor cores) fit in a 4U rack space, but each module has two 520 W power supplies, translating into a maximum of 1,040 W per module. Specified power consumption at a 100% duty cycle is 414 W per unit with two 130 W Xeon 5580 processors, though most of Boxx's systems ship with less power-hungry CPUs. The caveat is that these systems start at around $5,000 per module, or about $25,000 for a full 80-core, 4U setup.
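Using only the figures quoted above, the per-core math works out roughly as follows; treat it as a ballpark estimate, since actual Boxx pricing and configurations vary.

```python
# Rough per-core math for the renderBOXX option, using the figures above.
systems_per_module = 2
cpus_per_system = 2
cores_per_cpu = 4
cores_per_module = systems_per_module * cpus_per_system * cores_per_cpu   # 16

modules = 5
total_cores = modules * cores_per_module                                  # 80
total_price = modules * 5000                                              # ~$25,000

print(f"total cores in 4U: {total_cores}")
print(f"approx. cost per core: ${total_price / total_cores:.0f}")         # ~$312
print(f"max PSU capacity: {modules * 1040} W")                            # 5,200 W
```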

Most business-oriented vendors will essentially sell you low-end 1U servers in a custom configuration tailored to your render-farm needs. Examples include the HP ProLiant DL120 and DL320 series or the Dell PowerEdge R200, once configured with appropriate memory and operating systems. The serious high-density computing setups from these vendors become outrageously expensive and are completely over-spec'd compared to what anyone but the largest studios would need for a render farm. They are also designed for server-room installation and are really not meant to be deployed in a home. While a monolithic rack of high-end blades in the corner of your home office might look impressive, the $45,000+ price tag is a lot harder to swallow.

  • borandi
    And soon they'll all move to graphics cards rendering. Simple. This article for now: worthless.
  • Draven35
    People have been saying that for several years now, and Nvidia has killed Gelato. Every time that there has been an effort to move to GPU-based rendering, there has been a change to how things are rendered that has made it ineffective to do so.
  • borandi
    With the advent of OpenCL at the tail end of the year, and given that a server farm is a centre for multiparallel processes, GPGPU rendering should be around the corner. You can't ignore the power of 1.2TFlops per PCI-E slot (if you can render efficiently enough), or 2.4TFlops per kilowatt, as opposed to 10 old Pentium Dual Cores in a rack.
  • Draven35
    Yes, but it still won't render in real time. You'll still need render time, and that means separate systems. I did not ignore that in the article; in fact, I discussed GPU-based rendering and ways to prepare your nodes for it. Just because you may start rendering on a GPU does not mean it will be in real time. TV rendering is now in high definition (usually finished at 1080p), and rendering for film is done at least at that resolution, or 2K-4K. If you think you're going to use GPU-based rendering, get boards with an x16 slot and riser cards, then put GPUs in the units when you start using it. Considering software development cycles, it will likely be at least a year (i.e., SIGGRAPH 2010) before a GPGPU-based renderer written in OpenCL is available from any 3D software vendor. Most 3D animators do not and will not develop their own renderers.
  • ytoledano
    While I've never rendered any 3D scenes, I did learn a lot about building a home server rack. I'm working on a project that involves combinatorial optimization and genetic algorithms; both need a lot of processing power and can easily be split across many processing units. I was surprised to see how cheap one quad-core node can be.
  • Draven35
    Great, thanks! It's very cool to hear someone cite another use for this type of setup. Hope you found some useful data.
  • MonsterCookie
    Due to my job, I work on parallel computers every day.
    I've got to say: building a cheapo C2D box might be OK, but nowadays it is better to buy a cheap C2Q instead, because the machine's price/performance ratio is considerably better.
    However, please DO NOT spend more than 30% of your money on useless M$ products.
    Be serious, keep cheap things cheap, and spend your hard-earned money on a better machine or on your wife/kids/beer instead.
    Use Linux, Solaris, whatsoever...
    Better performance, better memory management, higher stability.
    IN FACT, most real design/3D applications run under unixoid operating systems.
  • ricstorms
    Actually, I think if you look at a value analysis, AMD could give decent value for the money. Get an old Phenom 9600 for $89 and build some ridiculously cheap workstations and nodes. The only thing that would kill you is power consumption; I don't think the 1st-gen Phenoms were good at undervolting (of course, they weren't good at a whole lot of things). Of course the Q8200 would trounce it, but Intel won't put its quads south of $150 (not that it really needs to).
  • eaclou
    Thanks for doing an article on workstations -- sometimes it feels like all of the articles are only concerned with gaming.

    I'm not to the point yet where I really need a render farm, but this information might come in handy in a year or two. (and I severely doubt GPU rendering will make CPU rendering a thing of the past in 2 years)

    I look forward to future articles on workstations.
    Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?
  • cah027
    I wish these software companies would get on the ball. There are consumer-level software packages that will use multiple CPU cores as well as the GPU, all at the same time. Then someone could build a 4-socket, 6-GPU box, all in one, that would do work equal to several cheap nodes!