Everyone reads articles about the immense number of processor hours required to create visual effects and animations for the latest films and TV shows. For example, render times totaled 40 million hours for Monsters vs. Aliens, 30 million hours for Madagascar: Escape 2 Africa, and 6.6 million hours for Revenge of the Sith.
A good render time for television visual effects is anywhere from 30 minutes to one hour per frame, while multiple hours per frame is common for feature films. Some of the IMAX-resolution frames required for Devastator, a character in Transformers 2: Revenge of the Fallen, took up to 72 hours each. How do studios get around this? They use render farms: banks of machines dedicated to rendering finished frames. In addition to the systems the animators themselves use, render farms bring many dedicated processors to bear on rendering simultaneously. For instance, Industrial Light & Magic had a render farm with 5,700 processor cores (plus 2,000 cores in its artists' machines) when Transformers 2 was produced. Even a small facility with only a dozen animators is likely to have more than a hundred processor cores at its disposal.
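Those totals are easier to grasp with a quick back-of-the-envelope calculation. The sketch below uses the figures cited above (40 million core-hours for Monsters vs. Aliens, ILM's 5,700-core farm) and assumes perfect parallel scaling, which real farms never quite achieve:

```python
# Back-of-the-envelope render farm math (illustrative numbers only).
HOURS_PER_YEAR = 24 * 365  # 8,760

def wall_clock_years(total_core_hours, cores):
    """Idealized wall-clock time assuming perfect parallel scaling."""
    return total_core_hours / cores / HOURS_PER_YEAR

# 40 million core-hours (Monsters vs. Aliens) on a single core:
print(f"1 core:      {wall_clock_years(40_000_000, 1):,.0f} years")

# The same job on a 5,700-core farm:
print(f"5,700 cores: {wall_clock_years(40_000_000, 5_700) * 365:,.0f} days")
```

On one core the job would take roughly 4,500 years; spread across the farm it drops to under a year of wall-clock time.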
Do You Need A Render Farm?
Render farms aren't restricted to large studios and 3D artists, and they shouldn't be. Smaller studios have their own render farms, and many freelance artists do as well. Compositors and freelance motion-graphics artists can also make use of them. Some editing systems support additional machines called render nodes to accelerate rendering, and this type of setup extends to architectural visualization and even digital audio workstations.
If you work as a freelance artist in any of these fields, are toying with the idea, or pursue them as a hobbyist, then building even a small farm will greatly increase your productivity compared to working on a single workstation. Studios can even use this piece as a reference for building new render farms, as we're going to address scaling, power, and cooling issues.
Home render farm for a freelance artist, courtesy of Jeremy Massey
If you're looking at buying a new machine and are thinking of spending big bucks to get a bleeding-edge system, you might want to step back and consider whether it would be more effective to buy the latest and greatest workstation or to spend less by investing in a few additional systems to be used as dedicated render nodes.
Most 3D software and compositing applications include network rendering capabilities, and many also have some form of network rendering controller. The additional nodes can therefore be managed from your workstation, making it possible to run them as headless systems with no mouse, keyboard, or monitor. Running a Virtual Network Computing (VNC) server on each node lets you manage the nodes remotely without the additional expense of a multi-channel keyboard, video, and mouse (KVM) switch for separate access to each.
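Conceptually, the frame-distribution step at the heart of a network rendering controller is simple: split the frame range across the available nodes and collect the results. Here is a minimal sketch of that step (the node names are hypothetical; real controllers bundled with 3D packages also handle failures, priorities, and load balancing):

```python
def split_frames(start, end, nodes):
    """Divide an inclusive frame range as evenly as possible across nodes.

    Returns a dict mapping each node name to its (first, last) frame pair.
    """
    total = end - start + 1
    base, extra = divmod(total, len(nodes))
    assignments, frame = {}, start
    for i, node in enumerate(nodes):
        # The first `extra` nodes each take one leftover frame.
        count = base + (1 if i < extra else 0)
        assignments[node] = (frame, frame + count - 1)
        frame += count
    return assignments

# Four hypothetical headless render nodes, one 100-frame shot:
print(split_frames(1, 100, ["node01", "node02", "node03", "node04"]))
# {'node01': (1, 25), 'node02': (26, 50), 'node03': (51, 75), 'node04': (76, 100)}
```

Each node then renders only its slice, and the controller gathers the finished frames back to shared storage.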
Buying The Farm
There are three ways to approach acquiring systems for a render farm: building your own, having a builder make them for you, or buying pre-built boxes. Each approach has its own set of advantages and disadvantages, which we discuss below. Each approach also involves progressively higher price tiers, which range from cheap to insane.
A useful tip: make sure the processors in your render farm match the processors in your workstation, as rendering can differ subtly between processor architectures, which could mean small differences in your final rendered frames. These days, such inconsistencies are the exception rather than the rule, but they are still worth guarding against. For the purposes of this article, assume that we're talking about Intel-based render nodes, although they could just as easily be built around AMD CPUs.

I've got to say: building a cheap C2D might be OK, but nowadays it's still better to buy a cheap C2Q instead, because the machine's price/performance ratio is considerably better.
However, please DO NOT spend more than 30% of your money on useless M$ products.
Be serious, keep cheap things cheap, and spend your hard-earned money on a better machine or on your wife/kids/beer instead.
Use Linux, Solaris, whatever...
Better performance, better memory management, higher stability.
IN FACT, most real design/3D applications run under Unix-like operating systems.
I'm not to the point yet where I really need a render farm, but this information might come in handy in a year or two. (and I severely doubt GPU rendering will make CPU rendering a thing of the past in 2 years)
I look forward to future articles on workstations
-Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?
Please, someone clarify this. How could they render a movie for 3,000 years? Did they have these render farms hidden in Egypt??
1) Cases capable of taking a two-slot graphics card would future-proof a node set up at this time, in case GPU rendering does become applicable over the lifetime of the node. So (m)ATX cases, not rack mounts.
2) Reselling a (m)ATX "regular"-looking desktop a few years down the road to home users is easier than reselling a rack-mount server, so factor that into the value.
3) With 500 GB-1 TB being the sweet spot for GB/$, I would go with those drives and also use the render node as a distributed (redundant) backup solution. This addresses where you are going to store all your work over the years.
What do you think the meaning of parallel processing is? Doing a lot of that work at once, right? If we have a huge render farm of 5,000+ processors, we cut that time down to less than a year, wouldn't we?
Of course, a lot of that depends on how fast each processor in the render farm is, but the general public won't care about that; just give 'em the huge numbers and don't tell them you were using 1.6 GHz Celerons in your render farm.
As MonsterCookie already pointed out, use a well-scaling multi-processor/multi-node OS for good distributed performance (m$ doesn't apply).
Finally a decent article on TH... almost without the usual vi$hta or $even (aka vi$hta sp2+) m$ pu$hers behind.
What? xpire x64 is working for TH? almost unbelievable...
Also, none of the usual m$ fankiddie and gamer comments, (at least) till now...
If a task took 100 man-hours, that means it took two guys 50 hours each to do it. If you did it with 10 guys, it would take each man 10 hours of work. There is a point of diminishing efficiency, which is mentioned in the article: at the extreme, it would take 100 men one hour each to complete the same task, and the efficiency has been drastically reduced.
That's what's being done in these render farms. A bunch of processors are put together, tasked with a job, and they belt out the results. If you did that with just one processor, it would take the 3,000 years in Egypt to come up with a result.
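The diminishing-returns effect described in the comment above is usually formalized as Amdahl's law: if some fraction of a job can't be parallelized, adding workers eventually stops helping. A quick sketch (the 5% serial fraction is an arbitrary illustrative value, not a measured one):

```python
def amdahl_speedup(workers, serial_fraction):
    """Maximum speedup when a fraction of the job cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# With 5% of the work inherently serial, speedup flattens out fast:
for n in (1, 10, 100, 1000):
    print(f"{n:>5} workers: {amdahl_speedup(n, 0.05):6.1f}x speedup")
```

With a 5% serial fraction, even 1,000 workers can't get past a roughly 20x speedup, which is why render farms favor coarse-grained parallelism (one frame per core) where the serial fraction stays tiny.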
I do agree about graphics-card rendering, but I don't think this article is worthless!
When I read about Xeons, I also read about AMD making similar low-power processors (45 nm or lower?, and a TDP of around 65 W, which is 30 W lower than their previous processor line).
It might not be beneficial to buy Xeons, but perhaps it is when going with AMD.
Like buying a few XO netbooks for developing countries, and sponsoring lots of children.
Another idea we have been playing with is using cheap USB key fobs either as system drives or to persist config data, etc. - much faster boot times, very low power consumption, and great MTTF.