Angelini: The first thing that really caught my eye when I saw Mark’s email (from Nvidia) about what you’re doing was, uh, some of the hardware you use during filming is really familiar to the Tom’s Hardware audience. Our guys know graphics and hardware. But a lot of it also comes from an entirely different world, where we don’t mess with cameras and a lot of the post-production tools. Can you describe how those two segments come together? Are companies coming to you with, “Check this card out, it’ll speed up decompression of high-def footage?” Or are you looking for capabilities that aren’t available and asking for them to be enabled?
Rosenberg: It’s a bit of a healthy mix of both. I would say most of the needs we have on a regular basis get fulfilled within some window of time, and that window probably tracks Moore’s law. So, usually, the things we’re anticipating eventually come onto our plate.
I mean, the Canon 5D Mark II—we were having conversations in 2008 about DSLR cameras that could shoot 24 frames per second at 1080p. Nikon came out with its camera in 2008, but it only shot 720p. Then Canon shipped the 5D Mark II in November of 2008, and that shot 1080p. Most of the stuff we’ve wanted has found this really cosmic alignment with the projects we were doing, and that worked out.
In terms of hardware specifically, there are two big shining lights we felt really good about. One of them is Nvidia’s CUDA technology and GPU acceleration hooked into Adobe’s software platform. I have always been a very loyal Adobe user, and that comes from the fact that my mentor’s brother worked at Adobe; I got a job as a consultant for Adobe fresh out of college, and I subsidized my film-making passion by being a senior consultant for Adobe’s video department. So I was always personally motivated to use Adobe as much as possible in our workflows. Around the time CS 4 came out, which was when we were starting Act of Valor, you had a lot of this CUDA GPU acceleration, and that really came to fruition with CS 5 and CS 5.5 and the Mercury Playback Engine. At that point, it wasn’t a matter of asking Nvidia for anything, because Nvidia’s job was to make its cards faster. We were really at this unique moment: I had been inundated with technology for so long in my career, starting at Adobe in 1995 and building relationships, and Nvidia was already a company whose cards we had plenty of. But it wasn’t until we saw the full potential of what they were doing that I thought to myself, “We need to be partners with these guys in a way where we’re giving them feedback. We want to be a lighthouse account for them and help them, because we’ve made a big investment in their cards, and there are needs we have that we can feed back to them.” So, with Nvidia specifically, it was really the alignment of using Adobe and Nvidia’s hooks into Adobe.
And then the other component of that was Hewlett-Packard. We found ourselves again using these 8400 and 8600 systems to run our Avid and Premiere systems, and right in the middle of our production they came out with the Z800. The other thing they came out with was the DreamColor monitor, which was the result of their collaboration with DreamWorks. So, these were two companies giving us solutions within the budgets we were comfortable working in. We’re not a big studio, and we don’t see ourselves as a big studio. We see ourselves as an independent studio, trying to get things done on limited budgets and putting as much money on screen as possible. So our back-end technical processes all hinge on finding economical solutions. That was one of the things our technology director, Mike McCarthy, was so good at: trolling the Web for new technology breakthroughs, finding these products, and getting the stuff to work well in-house.
Angelini: That leads to a multi-part question I have. I know you have a long history with Adobe, and I know Nvidia put a lot of engineering time into Mercury Playback. So where, specifically, does GPU-based acceleration play a role in your workflow, and how was life different before you had a graphics card to do that? We used to associate this stuff with gaming.
Rosenberg: For a video editor, real-time interaction with the material is essential. Traditionally, real-time performance and playback were always linked to a video capture card. Matrox, for example, had its real-time engine accelerating playback, effects, and all of that. And around the time CS 4 came out, the dynamics of video capture hardware on the PC were changing; nobody wanted to have to build a $10,000 workstation just to play material back in real time.
And so, everyone needs a graphics card in their computer. That’s just a basic fact of assembling a PC or Mac. The moment that the graphics card can serve your display needs, and then accelerate the playback needs, it starts to become even more integral for the video editor. For us, for example, we would have a dual-output Nvidia graphics card that had the DreamColor display on one connector, which is calibrated and color-correct, and then our standard monitor as our primary display. And then we’re using the GPU acceleration that’s on the card to play back multiple digital camera formats in real-time. That, for us, was the watershed moment where it was like, the faster these cards, the better that performance is going to be. And then when you have digital acquisition companies investing in 2K, 4K, 5K video formats, you need more processing to play those back in real-time, and that’s where the GPU makes a really big difference.
Now, the paradigm has shifted: the GPU is an absolutely essential component of building any of those systems, because it serves that dual purpose. And the secondary purpose, accelerated editing and decompression, is almost becoming the primary reason for the graphics card, because of the strength of the card.