GTC 2013 Sessions: GPGPU in Film Production at Pixar
GPU muscle for movies.
Laurence Emms of Pixar looks at how the animation studio uses GPGPU acceleration to speed up its film-production pipeline. Pixar has long used GPUs for real-time previews, but with current GPU technology it has extended them into other parts of the pipeline.
One example is LPics, their interactive relighting engine. A scene is first rendered in software with RenderMan, whose shaders cache the rendered scene data in a format that is then loaded onto the GPU. The lighting computation then runs on the GPU, giving the user results that are extremely close to the final rendered output. Mr. Emms stressed that one of their difficulties with production GPGPU has been making the output match the final render as closely as possible, because even minor differences make the tools much less effective at accelerating the workflow.
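The talk doesn't show code, but the idea behind a cached-relighting engine like LPics can be sketched roughly: the expensive render pass bakes per-pixel shading inputs (normals, albedo, and so on) into a deep framebuffer, after which relighting becomes a cheap per-pixel evaluation that maps naturally onto a GPU. A minimal NumPy stand-in for that split, where the function names and the simple Lambertian model are my own assumptions and not Pixar's actual implementation:

```python
import numpy as np

def bake_deep_framebuffer(h, w, rng):
    """Stand-in for the expensive RenderMan pass. In LPics, shaders
    cache data like this during a full software render."""
    normals = rng.normal(size=(h, w, 3))
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    albedo = rng.uniform(0.0, 1.0, size=(h, w, 3))
    return normals, albedo

def relight(normals, albedo, light_dir, light_rgb):
    """Cheap per-pixel relighting over the cached buffers; this is
    the part that runs interactively on the GPU in the real system
    (toy Lambertian model assumed here)."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(normals @ l, 0.0, None)       # (h, w) cosine term
    return albedo * ndotl[..., None] * light_rgb  # (h, w, 3) image

rng = np.random.default_rng(0)
normals, albedo = bake_deep_framebuffer(4, 4, rng)
img = relight(normals, albedo, light_dir=(0, 0, 1), light_rgb=(1.0, 0.9, 0.8))
```

The point of the cache is that moving a light only reruns `relight`; the slow bake never has to happen again until the geometry or shading changes.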
Other portions of their workflow where GPU acceleration may be helpful include vegetation, hair, and physics.
Most of the rest of the presentation covers physics simulation under CUDA and where their work is headed in the future.
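Physics simulation is a natural CUDA fit because each particle's update is independent, so the work parallelizes as one thread per particle. A toy vectorized sketch of that kind of kernel (my own example, not Pixar's solver):

```python
import numpy as np

def step_particles(pos, vel, dt, gravity=(0.0, -9.8, 0.0)):
    """One explicit-Euler step over all particles at once. Each row is
    an independent particle, so on a GPU this whole function would be
    a single data-parallel kernel launch."""
    g = np.asarray(gravity)
    vel = vel + g * dt
    pos = pos + vel * dt
    # Crude ground-plane collision: clamp to y = 0 and kill any
    # remaining downward velocity.
    below = pos[:, 1] < 0.0
    pos[below, 1] = 0.0
    vel[below, 1] = np.maximum(vel[below, 1], 0.0)
    return pos, vel

pos = np.zeros((1000, 3))
vel = np.zeros((1000, 3))
pos, vel = step_particles(pos, vel, dt=0.01)
```

Production solvers are far more sophisticated (constraints, collisions between objects, implicit integration), but the per-element independence that makes this loop trivial to vectorize is the same property that makes it a good GPU workload.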
One of my teachers has a DVD containing all of Pixar's early experimental animated short films, made back in the 1970s and '80s. At least one of them was rendered on a Cray supercomputer.
And if you watch some of Pixar's early shorts ( http://www.youtube.com/watch?v=iZJymTKzGu4 ), you'll see that most of them only look about as good as modern video game cutscenes, which are rendered in real time! Digital technology is fascinating.
I know all about the hows and whys of this (I've been following 3D graphics for 20 years and GPGPU for the past 10), but it just strikes me as funny, on some level.
Anyway, it would be interesting to know how close we are to running their backend in real time. Maybe the clip answers this question, but it literally crashed my PC when I tried to play it. I blame some bug in the Flash HW accel code or my GPU's driver. Again, kind of ironic that I can't watch a clip about GPGPU due to a bug in some GPU code. (Yes, I will retry with HW accel in Flash disabled.)
I find it pretty shocking to hear that Pixar is planning on CUDA for any future work. I'd expect anyone making such a big, long-term investment in GPU computing to go with a vendor-independent platform, like OpenCL.
If going vendor-independent just slows me down (time is money), why would I do it? There's no MONEY pushing OpenCL, and it will take years for it to develop to CUDA's level even if AMD et al. had the money. I think it's really too late for AMD in this case: they don't have the money to fund the kind of OpenCL app optimization that NV has already spent 7 yrs fostering, having lost $5B+ over the last 10 yrs, which stops any money being spent on a CUDA competitor.
It's like trying to start a new business and take out Amazon with it. Their ecosystem will make it next to impossible to kill them (shipping deals with suppliers, cost structure, content deals they already have, etc.). For the first time in ~20 yrs Microsoft is vulnerable, but again, only due to their own mistakes. Nobody could have had a prayer of dethroning their stranglehold on apps/games without them shooting themselves in both feet relentlessly: Win8, an always-on console (and no used games), Win8.1 looking like the same Win8 again, etc. There is now an opening for the next few years for Linux/Android to take over gaming (Valve, Google, and NV all helping from different angles, and even Intel making x86 Android). It seems dumb to say this, but most of this wouldn't have been possible if they'd included the stupid Start button and allowed boot to desktop... LOL. Amazon would have to shoot themselves in the foot for you to have a shot at taking them on today.
As desktop Kepler meets SoC next year, you may start seeing CUDA-optimized games, which will further push us toward an even more closed world (unless consoles sell in magical numbers at Xmas, helping AMD get games made for their hardware more often). The money NV has spent over 7 yrs is just beginning to show real value and will increase as we move forward. Unless Apple/MS change their game plans soon (specifically on mobile/gaming), they're going to become largely irrelevant, much like Nokia, RIM, etc. Let me know when OpenCL is taught in 500 universities in 26 countries.
It's tough to overtake someone once they're fully entrenched without some kind of major disruptor (like Amazon/Google driving device margins to nothing, causing the likes of Nokia, Motorola, etc. to wither and die). With AMD being so weak, I see no disruptor on the horizon vs. CUDA. This changes if, say, Apple/MS buys them... LOL. I will be selling my NV stock then, or soon after, probably... ROFL. Apple could buy AMD and put $5B behind app optimization for OpenCL (among other things they could do with AMD), which would make up a lot of CUDA's head start quickly. I'm guessing they could pick up AMD for under $5B. A joke to them. Currently I'd buy them and IMG.L (~$2B purchase? with a market cap of $1.1B or so last I checked) if I were Apple, as they seem to use them exclusively anyway, and why not block everyone else? Nobody would get either company's chips but my devices then.
That's a lot more about Apple than I needed to say, but that's one way OpenCL could catch CUDA: it needs a big financial backer to bleed NV's CUDA to death. It's funny that Apple owns the trademark for OpenCL, but what have they done for it lately? They started it all and handed it to the Khronos Group. I'm shocked Apple hasn't bought AMD yet. With the best fabs money can buy, plus AMD, they could put some hurt on Intel easily (and everyone else fabbing chips). They'd immediately have console chops to go with their TV coming soon, etc. It seems a no-brainer to pick up them and IMG.L while they're dirt cheap; both go hand in hand with Apple's future.
Honestly, there is nothing to be shocked about. Most likely Pixar had been working with CUDA for quite some time before this; if they switch to OpenCL right now, all the R&D spent on CUDA will be wasted. Also, when it comes to a corporation, what's important to them is for the tool to do its job, regardless of whether it is open or closed source. They are not going to scrap technology they have developed just to support an open-source effort.
Look at AMD's recently announced SKY line of server GPU boards! Certainly in up-front costs they come in way below Tesla on a per-TFLOPS basis (or any other metric you care to choose). Also, look at Xeon Phi: its pricing is on par with Tesla products, as is its performance. In fact, certain code will run much faster on Xeon Phi than on Kepler or GCN, and it's easier to optimize for.
I'm just saying that businesses generally tend to avoid vendor lock-in like the plague. Only in some extreme monopoly situation, like with MS' back office solutions, would they tend to tolerate it. Otherwise, they'll go so far as to opt for a less cost-effective short-term solution that minimizes long-term risk.
It sounds like you have a vested interest in CUDA, so I don't expect to win this debate. I've shared my perspective, which is based on a lot of real-world experience. I've worked with many parallel HW & SW platforms and have developed both OpenCL code and a few toy programs in CUDA, so I have some clue about the specifics. In fact, I even helped port a RenderMan-like renderer to a specialized array processor back in 1996-97, and I've followed developments in graphics hardware & software since 1990.
If you want to compare apples and oranges, how about comparing a GPU implementation of something like crypto vs. an OpenCL implementation running on an FPGA? Yes, Altera released support for OpenCL over 2 years ago. Given an amenable workload (like Bitcoin mining), it will run circles around any GPU in both performance per dollar and performance per watt.
I'm really disappointed that Google came down with a case of not-invented-here syndrome and developed RenderScript, instead of getting behind OpenCL. But note that they certainly didn't jump on the CUDA bandwagon, because they couldn't afford to tie Android to Tegra, even if they'd wanted to.
Secondly, who said anything about open source? OpenCL is an open standard. I'm not aware of any open source implementations, though they may exist. I never said Pixar should do this out of altruism. I was talking purely about their own self-interest.
And even if they're publicly saying they're continuing to invest in CUDA, I'd be surprised if they're not at least thinking about how to hedge their bets by making their code easy to port to OpenCL, or even using OpenCL for brand new efforts.