Tom's Hardware Q&A With MotionDSP
In what seems like forever ago, this author had his first brush with MotionDSP on Tom’s Guide while trying to find out whether the photo/video analysis done in movies is actually possible. Can someone take blurry, small footage and process it into a clear, recognizable face? MotionDSP’s vReveal is the consumer version of its Ikena software, which caters to vertical markets, including government agencies. The company’s algorithms can provide some rather amazing results (although there are limits), but this sort of post-processing takes a heavy hardware toll. What should you expect when getting into this sort of processing? We talked to MotionDSP CTO Nik Bozinovic and CEO Sean Varah to find out.
TH: How did you guys get started with hardware-based acceleration?
SV: When we were starting five years ago, there wasn’t an easy way to program GPUs. Basically, as we were working on our algorithms as early as 2008, we realized that as video was moving from sub-SD to standard-def to HD, there would be an increasing need for high-performance computing. Only with that can we process video in real time, which is absolutely mission-critical for our professional-grade product. So the need to use heterogeneous computing was obvious long before it became a reality, before vendors like AMD and Nvidia started supporting it. It’s been on the road map for years, but it’s only in the last 12 to 24 months that we’re finally seeing that promise of supercomputer-like performance from a GPU become a reality. Honestly, it’s making a big impact on our bottom line.
TH: What are you doing with heterogeneous computing capabilities in your software?
SV: A couple things. One is...there are several simultaneous things that are wrong with video at any one time. It’s very rare that you have the perfect camera in perfect conditions, especially with consumers. There could be problems with resolution, noise, lighting, stabilization. So, in vReveal, we’ve packaged a series of different video filters that together can address these problems, and we’ve made the process incredibly simple by putting them all into an automatic, one-click fix operation. The reason we can make that a snappy experience for consumers is because we utilize heterogeneous computing.
NB: In addition to stabilization, we have noise reduction, or noise removal, which we call a cleaning filter. We have auto light balance, sharpening, contrast improvement, and, as Sean mentioned, all of this can happen automatically, with the complexity hidden from the user. But this is where heterogeneous computing comes in extremely handy. We have advanced video processing and video analysis tools that all take advantage of heterogeneous computing. I’ll give you a couple of examples. In vReveal, we can, in near-real-time, create panoramas out of panning videos. You can take any panning shot, click a button in vReveal, and mere seconds later end up with a beautiful stitched panorama. In our pro-grade software, called Ikena, we have a similar feature where stitching happens on the fly, and you can create massive mosaics used for different things. That wouldn’t be possible without using the GPU. Sure, we started as a video enhancement company, but now, using these GPU capabilities, we’re way beyond that.
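The one-click idea Bozinovic describes can be sketched as a fixed chain of simple filters. This is a minimal, hypothetical illustration in Python; the filter names mirror the interview, but the implementations are toy stand-ins operating on a 1-D list of pixel values, not MotionDSP's algorithms:

```python
# Illustrative sketch (not MotionDSP's code): a one-click "fix" that chains
# several simple filters over a frame, hiding the individual steps from the user.

def denoise(frame):
    # crude cleaning filter: 3-tap moving average along the pixel list
    out = frame[:]
    for i in range(1, len(frame) - 1):
        out[i] = (frame[i - 1] + frame[i] + frame[i + 1]) / 3.0
    return out

def auto_light_balance(frame):
    # stretch values so they span the full 0..255 range
    lo, hi = min(frame), max(frame)
    if hi == lo:
        return frame[:]
    return [(v - lo) * 255.0 / (hi - lo) for v in frame]

def sharpen(frame):
    # unsharp mask: boost each pixel's difference from a local average
    blurred = denoise(frame)
    return [v + 0.5 * (v - b) for v, b in zip(frame, blurred)]

def one_click_fix(frame):
    # the pipeline applied in a fixed order, invisible to the user
    for f in (denoise, auto_light_balance, sharpen):
        frame = f(frame)
    return frame

noisy = [10, 200, 12, 11, 198, 13, 12, 201, 10]
fixed = one_click_fix(noisy)
print(len(fixed) == len(noisy))  # frame size is preserved through the chain
```

The point of the sketch is the structure, not the filters themselves: each stage is independent, so the stages that dominate runtime can be offloaded to the GPU without changing the one-click interface.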
TH: Specifically, what aspects of your software wouldn’t be possible without GPU-based acceleration?
SV: Well, doing it in real time wouldn’t be possible without the GPU.
NB: It’s especially true with higher-resolution media because, aside from the sheer compute bottleneck that you solve by going to the GPU—or heterogeneous computing in general, as opposed to running things only on the CPU—you are also solving a bandwidth bottleneck. What I mean is this: for our software to work and create the desired results, we have to look at a number of frames at the same time—from a couple up to 30, 40, or 50 frames. That makes it a memory- or bandwidth-intensive problem to an even larger degree than it is a compute-bound problem. You get a double win when you execute something like that on a GPU because simply copying a large number of massive, uncompressed, high-definition frames is something you can do on a GPU but is impractical on a CPU. There’s almost an order of magnitude difference in memory bandwidth between the two devices.
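The bandwidth argument can be made concrete with some back-of-the-envelope arithmetic. The frame size, window length, and frame rate below are illustrative assumptions, not MotionDSP's figures:

```python
# Back-of-the-envelope sketch of the bandwidth problem described above.
# All figures are illustrative assumptions, not MotionDSP's measurements.

BYTES_PER_PIXEL = 3            # uncompressed 8-bit RGB
WIDTH, HEIGHT = 1920, 1080     # one full-HD frame
FRAMES_IN_WINDOW = 50          # upper end of the 30-50 frame window mentioned
FPS = 30                       # real-time output target

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
window_bytes = frame_bytes * FRAMES_IN_WINDOW

# If producing every output frame means re-reading the whole window,
# the sustained read rate the memory system must deliver is:
read_rate_gb_s = window_bytes * FPS / 1e9

print(f"one frame: {frame_bytes / 1e6:.1f} MB")                # 6.2 MB
print(f"50-frame window: {window_bytes / 1e6:.0f} MB")         # 311 MB
print(f"required read bandwidth: {read_rate_gb_s:.1f} GB/s")   # 9.3 GB/s
```

Even this simplified estimate lands near 10 GB/s of sustained reads before any computation happens, which is a large fraction of a typical CPU's memory bandwidth but a small fraction of a discrete GPU's, illustrating the roughly order-of-magnitude gap Bozinovic refers to.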