
GTC 2013 Sessions: GPGPU in Film Production at Pixar

Source: Tom's Hardware US

GPU muscle for movies.

Pixar GPU processing

Laurence Emms of Pixar looks at how the animation studio uses GPGPU acceleration to speed up its film-production pipeline. Pixar has long used GPUs for real-time previews, but with current GPU technology it has extended them into other parts of the pipeline.

One example is LPics, Pixar's interactive relighting engine. A scene is first rendered in software with RenderMan, whose shaders cache the rendered scene data in a format that can then be loaded onto the GPU. The lighting computation then runs on the GPU, giving the user lighting results that are extremely close to the final rendered output. Emms stressed that one of the difficulties of using GPGPU in production has been making the output match the final render as closely as possible, because even minor differences make the tools much less effective at accelerating the workflow.
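The cache-then-relight idea can be sketched in a few lines of Python. This is a toy illustration, not Pixar's actual LPics code: the expensive render pass bakes per-pixel surface data once, and only the cheap shading pass is re-run when a light moves, which is the part LPics offloads to the GPU.

```python
# Toy sketch of cached relighting (illustrative only, not Pixar's code).
# The slow render caches per-pixel surface data once; moving a light
# only re-runs the cheap shading pass over the cache.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bake_surface_cache(pixels):
    """Stand-in for the slow RenderMan pass: cache normal and albedo per pixel."""
    return [{"normal": n, "albedo": alb} for n, alb in pixels]

def relight(cache, light_dir):
    """Cheap per-pixel Lambertian shading over the cached data.
    Each pixel is independent, so this maps directly onto GPU threads."""
    return [max(0.0, dot(p["normal"], light_dir)) * p["albedo"] for p in cache]

# Two pixels: one surface facing up, one facing sideways.
cache = bake_surface_cache([((0.0, 0.0, 1.0), 0.8), ((1.0, 0.0, 0.0), 0.5)])
print(relight(cache, (0.0, 0.0, 1.0)))  # relight with the light overhead
```

Because each pixel's shading is independent of every other pixel's, the `relight` loop is exactly the kind of data-parallel work a GPU handles well; only the light direction changes between interactive updates, so the expensive cache never needs to be rebuilt.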

Other portions of their workflow where GPU acceleration may be helpful include vegetation, hair, and physics.

Most of the rest of the presentation covers physics simulation under CUDA and where Pixar's work is headed in the future.
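Physics simulation of the kind the talk describes is typically data-parallel: each particle (or hair, or leaf) is updated independently each timestep. A minimal explicit-Euler particle integrator, sketched here in Python as an assumption about the general approach rather than Pixar's method, shows the structure that maps onto one-thread-per-particle CUDA kernels:

```python
# Minimal data-parallel physics sketch: explicit Euler integration of
# independent particles under gravity. In a CUDA kernel each loop
# iteration would be one GPU thread; a plain loop stands in here.

GRAVITY = -9.8  # m/s^2, applied to the y component

def step(positions, velocities, dt):
    """Advance every particle by one timestep (one 'kernel launch')."""
    for i in range(len(positions)):  # each i is independent -> one GPU thread
        vx, vy = velocities[i]
        vy += GRAVITY * dt                         # integrate velocity
        x, y = positions[i]
        positions[i] = (x + vx * dt, y + vy * dt)  # integrate position
        velocities[i] = (vx, vy)

pos = [(0.0, 10.0), (1.0, 5.0)]
vel = [(1.0, 0.0), (0.0, 0.0)]
step(pos, vel, 0.1)
```

Since no particle reads another particle's state inside a step, the update parallelizes trivially; collisions and constraints are where real production solvers get complicated.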


Discuss
  • slomo4sho, April 7, 2013 9:53 AM
    The advancements in animation have been profound over the last decade; let's see where the future takes us.
  • A Bad Day, April 7, 2013 10:24 AM
    slomo4sho wrote: "The advancements in animation have been profound over the last decade; let's see where the future takes us."

    One of my teachers has a DVD containing all of Pixar's early experimental animated short films, done back in the 1970s and '80s. At least one of them was rendered on a Cray supercomputer.
  • anxiousinfusion, April 7, 2013 6:18 PM
    A Bad Day wrote: "One of my teachers has a DVD containing all of Pixar's early experimental animated short films, done back in the 1970s and '80s. At least one of them was rendered on a Cray supercomputer."

    And if you watch some of Pixar's early shorts ( http://www.youtube.com/watch?v=iZJymTKzGu4 ), you'll see that most of them only look about as good as modern video game cutscenes, which are rendered in real time! Digital technology is fascinating.
  • bit_user, April 7, 2013 7:38 PM
    Am I the only one here seeing the irony of accelerating graphics rendering by using a graphics rendering accelerator as a general-purpose parallel computer?

    I know all about the hows and whys of this (I've been following 3D graphics for 20 years and GPGPU for the past 10), but it just strikes me as funny, on some level.

    Anyway, it would be interesting to know how close we are to running their backend in real time. Maybe the clip answers this question, but it literally crashed my PC when I tried to play it. I blame some bug in the Flash HW accel code or my GPU's driver. Again, kinda ironic that I can't watch a clip about GPGPU due to a bug in some GPU code. (Yes, I will retry with HW accel in Flash disabled.)
  • bit_user, April 7, 2013 7:45 PM
    BTW, thanks for posting this.

    I find it pretty shocking to hear that Pixar is planning on CUDA, for any future work. I'd expect anyone making such a big, long-term investment in GPU computing to go with a vendor-independent platform, like OpenCL.
  • somebodyspecial, April 8, 2013 1:46 AM
    Not shocking when you consider CUDA has seven years of labor from NV behind it and every major content-creation app uses it. When you see OpenCL accelerating stuff FASTER than the CUDA versions, then you have a good argument for going independent. But when CUDA is already done, performing orders of magnitude faster than the CPU or OpenCL, you are saving a ton of money going CUDA. Going vendor-independent at this point is just a financial loss in rendering time, etc.

    It's really sad that Toms/Anandtech etc. act like CUDA isn't in anything and extol the virtues of OpenCL in every video review, even though they have to resort to pointless benchmarks (Folding@home, bitcoin mining, etc. ... nothing anyone makes a lot of money from professionally, that is) to show how great AMD is at compute. How about firing up an Adobe CS6 app and testing CUDA (NV) vs. OpenGL (AMD), or now maybe OpenCL? Whatever is fastest on each, tell us the numbers. There's no point in running OpenCL on NV when CUDA is there, though; NV doesn't care to accelerate OpenCL if CUDA is an option (well, duh, why?). You can download TRIAL software and pull this off in a TON of software cases.

    If going independent just slows me down (time is money), why would I do it? There's no MONEY pushing OpenCL, and it will take years for it to develop to CUDA's level even if AMD etc. had the money. I think it's really too late for AMD in this case, as they don't have the money to fund the OpenCL app optimization that NV has already spent seven years fostering, while AMD lost $5B+ over the last 10 years, stopping any money from being spent on a CUDA competitor.

    It's like trying to start a new business and take out Amazon with it. Their ecosystem will make it next to impossible to kill them (shipping deals with suppliers, cost structure, content they already have deals with, etc.). For the first time in ~20 years Microsoft is vulnerable, but again, only due to their own mistakes. Nobody could have had a prayer of dethroning their stranglehold on apps/games without them shooting themselves in both feet relentlessly: Win8, an always-on console (and no used games), Win8.1 looking like the same Win8 again, etc. There is now an opening for the next few years for Linux/Android to take over gaming (Valve, Google, and NV helping, all from different angles, and even Intel making x86 Android). It seems dumb to say this, but most of this wouldn't have been possible if they'd included the stupid Start button and allowed boot to desktop... LOL. Amazon would have to shoot themselves in the foot for you to have a shot at taking them on today.

    As Kepler desktop meets SOC next year, you may start seeing CUDA-optimized games, which will further push us to an even more closed world (unless consoles sell in magical numbers at Christmas, helping AMD get games made more often for their hardware). The money NV has spent for seven years is just beginning to show real value and will increase as we move forward. Unless Apple/MS change their game plans soon (specifically on mobile/gaming), they're going to become largely irrelevant, much like Nokia, RIM, etc. Let me know when OpenCL is taught in 500 universities in 26 countries :) Until then, if I were spending money going forward, it would be wasted on anything BUT CUDA as a content developer. Profits don't come from HOPING one day OpenCL will be fast. Profits come from rendering or doing work super fast TODAY. Clocking rendering 12x faster on CUDA than, say, 6-8 Intel CPU cores is why you go CUDA today. Is an open world/platform better? Of course, as it creates a level playing field for all (but that doesn't mean a faster world). But if going closed makes me a ton more money because of the massive foundation already in place that's already FAST, I'd be a fool to back open stuff, right? :) CUDA is an all-in-one package solution and more easily implemented than OpenCL because of it. NV certainly isn't in a hurry to help foster OpenCL either; they'll drag their drivers for as long as possible until AMD bleeds to death. It's just good business practice. So you'll be waiting on IBM/Intel/ARM to get OpenCL up to snuff (AMD has no money).

    It's tough to take over someone once they get fully entrenched without some kind of major disruptor (like Amazon/Google driving margins to nothing on devices, causing the likes of Nokia, Motorola, etc. to wither and die). With AMD being so weak, I see no disruptor on the horizon vs. CUDA. This changes if, say, Apple/MS buys them... LOL. I will be selling my NV stock then, or soon after, probably... ROFL. Apple could buy AMD and put $5B behind app optimization for OpenCL (among other things they could do with AMD), which would make up a lot of CUDA's head start quickly. I'm guessing they could pick up AMD for under $5B; a joke to them. Currently I'd buy them and IMG.L (a ~$2B purchase, with a market cap of $1.1B or so last I checked) if I were Apple, as they seem to use them exclusively anyway, and why not block everyone else? Nobody would get either company's chips but my devices then :)

    But I'd have already built two fabs by now also (what's that, $10-15B?) and would be dependent on nobody for my stuff for years to come, quite possibly putting other fabs out of business, as Apple's profits could allow them to outspend EVERYONE, including Intel. Apple makes in a quarter what Intel makes in a year, basically. They could fund a new high-tech fab once a year for the next five years until everyone is dead... LOL. Pick up a memory company and an SSD company and fab all your own parts. This is how Samsung makes $8B a quarter now; ~65% of the components in their devices come from themselves, fabbed by themselves. Apple could literally throw $10B at a fab each year until Intel had no advantage at all and everybody died off (I'm guessing $10B in a single fab makes yours the best; most are around $5-7B, I think, disregarding re-investing in them for upgrades). They should have started this the second they hit $100B cash. Now the iPhone 6 is delayed 3-6 months due to the Samsung switch to TSMC. In January Apple had $137B. Start a fab for $10B (14nm, 450mm wafers, and do it again every year with better stuff each rev), buy AMD/IMG.L for $6-7B, put $10B into games over 5-10 years and another $5B into apps/OpenCL supporting your video cards/APUs. If profits keep going up, buy Corning or some glass company for screens, etc. (I mean, if you spend this $35B and rack up another $40B again next year, I buy Corning for $30-40B cash and laugh as I block everyone else in another industry).

    That's a lot more about Apple than I needed to say, but that's one way OpenCL could catch CUDA: it needs a big financial backer to bleed NV's CUDA to death. It's funny that Apple owns the trademark for OpenCL, but what have they done for it lately? They started it all and handed it to the Khronos Group. I'm shocked Apple hasn't bought AMD yet. With the best fabs money could buy, plus AMD, they could put some hurt on Intel easily (and everyone else fabbing chips). They'd immediately have console chops to go with their TV, coming soon, etc. It seems a no-brainer to pick up them and IMG.L while they're dirt cheap; they both go hand in hand with Apple's future.
  • renz496, April 8, 2013 4:19 AM
    bit_user wrote: "BTW, thanks for posting this. I find it pretty shocking to hear that Pixar is planning on CUDA for any future work. I'd expect anyone making such a big, long-term investment in GPU computing to go with a vendor-independent platform, like OpenCL."

    Honestly, there is nothing to be shocked about. Most likely Pixar had been working with CUDA for quite some time before this. If they switched to OpenCL right now, all the R&D spent on CUDA would be wasted. Also, when it comes to a corporation, what's important is that the tool does its job, regardless of whether it is open or closed source. They are not going to scrap technology they have developed just to support an open-source effort.
  • bit_user, April 8, 2013 11:03 PM
    somebodyspecial wrote: "When you see OpenCL accelerating stuff FASTER than the CUDA versions, then you have a good argument for going independent."
    I disagree. It's not about raw performance, but rather about performance per dollar (TCO, that is), especially in something as scalable as graphics rendering. And when you're locked into one vendor, which is the case if you're using their APIs, languages, etc., then they have you over a barrel, and they will generally not do you any great favors on pricing.

    Look at AMD's recently announced Sky line of server GPU boards! Certainly in up-front costs they come in way below Tesla on a per-TFLOPS basis (or any other metric you care to choose). Also, look at Xeon Phi: its pricing is on par with Tesla products, as is its performance. In fact, certain code will run much faster on Xeon Phi than on Kepler or GCN, and it's easier to optimize for.

    I'm just saying that businesses generally tend to avoid vendor lock-in like the plague. Only in some extreme monopoly situation, like with MS' back office solutions, would they tend to tolerate it. Otherwise, they'll go so far as to opt for a less cost-effective short-term solution that minimizes long-term risk.

    It sounds like you have a vested interest in CUDA, so I don't expect to win this debate. I've shared my perspective, which is based on much real-world experience. I've worked with many parallel HW & SW platforms, and I've developed both OpenCL code and a few toy programs in CUDA, so I have some clue about the specifics. In fact, I even helped port a RenderMan-like renderer to a specialized array processor back in 1996-97, and I've followed developments in graphics hardware & software since 1990.
  • bit_user, April 8, 2013 11:28 PM
    somebodyspecial wrote: "Clocking rendering 12x faster on CUDA than, say, 6-8 Intel CPU cores is why you go CUDA today."
    I think it's funny that we're talking about CUDA vs. OpenCL (presumably both on GPUs or similar architectures) and yet you quote performance numbers from a GPU vs. a multi-core CPU.

    If you want to compare apples and oranges, how about comparing a GPU implementation of something like crypto vs. an OpenCL implementation running on an FPGA? Yes, Altera released support for OpenCL over two years ago. Given an amenable workload (like bitcoin mining), it will run circles around any GPU in both performance per dollar and performance per watt.


    somebodyspecial wrote: "that's one way OpenCL could catch CUDA: it needs a big financial backer to bleed NV's CUDA to death."
    No, it doesn't. It just needs to remain viable, and the industry will tend to prefer it over vendor lock-in. If NVIDIA's customers demand that its OpenCL implementation be competitive, NV will either put the resources behind it or lose business. Don't forget that Intel now supports OpenCL on Xeon Phi, so it's no longer a two-horse race in the server space.

    I'm really disappointed that Google came down with a case of not-invented-here syndrome and developed RenderScript instead of getting behind OpenCL. But note that they certainly didn't jump on the CUDA bandwagon, because they couldn't afford to tie Android to Tegra, even if they'd wanted to.
  • bit_user, April 8, 2013 11:40 PM
    renz496 wrote: "If they switched to OpenCL right now, all the R&D spent on CUDA would be wasted."
    That's known as a "sunk cost". In business school, I'm told they pound it into your head that sunk costs have no value. I once heard that in the entire natural world, human adults are the only creatures that value sunk costs. When a hungry wolf realizes it's not going to catch its prey, regardless of how long it pursued the animal, it immediately cuts its losses and looks for another food source. Good business people are taught to do the same.

    renz496 wrote: "Also, when it comes to a corporation, what's important is that the tool does its job, regardless of whether it is open or closed source. They are not going to scrap technology they have developed just to support an open-source effort."
    First of all, there's a difference between tactical decisions and strategy. In the short run, CUDA obviously gets the job done. The question is about whether it's a wise strategy to tie yourself ever closer to a single supplier, when viable alternatives exist.

    Secondly, who said anything about open source? OpenCL is an open standard. I'm not aware of any open source implementations, though they may exist. I never said Pixar should do this out of altruism. I was talking purely about their own self-interest.

    And even if they're publicly saying they're continuing to invest in CUDA, I'd be surprised if they're not at least thinking about how to hedge their bets by making their code easy to port to OpenCL, or even using OpenCL for brand new efforts.