Is Performance The Only Variable In Play?
An Intel employee once told me, “video transcoded using CUDA looks like shit.” Honestly, I shrugged him off at the time. Companies say stuff like that on an almost daily basis; you tend to see things with a blue/red/green tint after a while, depending on the organization you’re representing. And as an advocate for the Tom's Hardware audience, I've trained myself to take everything I hear with a dash of skepticism. But I had a number of readers ask for quality comparisons in the comments section of my Sandy Bridge review, so I dusted off a couple of high-def clips and started saving the outputs from CyberLink’s Fusion-optimized version of MediaEspresso to see if there was any credence to those claims.
The comparison here is really very basic. I have four test beds: the E-350-powered E350M1, the Athlon II X2 240e-driven 880GITX-A-E, the IONITX-P-E with Intel’s Celeron SU2300, and the Atom-powered IONITX-L-E. The two boards with Nvidia’s Ion chipset deliver the same output once hardware-accelerated encode and decode come into play. The other two aren’t powerful enough to even enable hardware-based encode support. So, although we’re technically reviewing ASRock’s E350M1 motherboard here, our quality comparison becomes CUDA versus software versus two AMD platforms that apply hardware-accelerated decode support.
If you download the full-sized (720p) versions of all three of these software-based images and tab through them, you’ll see the sort of quality variation that’d require you to diff each shot—and that’s in a still frame. For all intents and purposes, they’re the same.
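If you’d rather put a number on “the same” than trust your eyes, a quick pixel diff makes the point. The snippet below is a minimal sketch, not part of our test procedure: the file names are placeholders, and it assumes Pillow and NumPy are installed. It reports the mean and peak per-pixel difference between two screenshots.

```python
# Minimal sketch: quantify how close two screenshots are.
# File names are placeholders; assumes Pillow and NumPy are installed.
import numpy as np
from PIL import Image

def frame_diff(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        raise ValueError("Screenshots must share the same resolution")
    diff = np.abs(a - b)
    return diff.mean(), diff.max()

# Hypothetical file names for two of the software-encoded grabs.
mean_err, max_err = frame_diff("software_e350.png", "software_ion.png")
print(f"Mean per-pixel difference: {mean_err:.3f} (max {max_err})")
```

Near-zero numbers across the software-encoded grabs back up what tabbing through them already suggests.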
The same goes for all three boards with hardware-accelerated decoding applied. We can clearly see that what comes out of the decoder and is then operated on by the CPU during the encode stage is largely identical. Alternatively, you can tab between the software-only shot and the corresponding hardware decode version shown above to see that the decoded content is the same, whether the process happens on the CPU or GPU.
And then we let Ion’s CUDA cores handle the encode stage, and things quickly get ugly. The examples I’m using here aren’t even the best shots from our 2:30-long trailer. But if you download the screen shot and compare it to any of the six above, the difference is oh-so-evident.
An even better way to tell would be to download the actual video clips. I threw three examples up on MediaFire: CUDA-based encoding, the Ion platform with hardware-accelerated encode and decode turned off, and AMD’s E-350 with hardware-accelerated decoding enabled. Watch them back to back and see for yourself. The issues are most apparent in scenes with lots of motion; the best way to describe them would be latent blocking or pixelation that distorts the output quality.
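One way to quantify that blocking, rather than just eyeballing it, is to compute per-frame PSNR of an encode against a reference clip. The sketch below is only illustrative: the clip names are placeholders, and it assumes OpenCV and NumPy are available and that both files share resolution and frame count so frames line up one-to-one.

```python
# Minimal sketch: per-frame PSNR of one encode against a reference clip.
# Clip names are placeholders; assumes OpenCV (cv2) and NumPy, and that both
# files share resolution and frame count so frames pair up one-to-one.
import cv2
import numpy as np

def psnr(frame_a, frame_b):
    mse = np.mean((frame_a.astype(np.float64) - frame_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10((255.0 ** 2) / mse)

cap_ref = cv2.VideoCapture("reference_trailer.mp4")   # hypothetical reference
cap_enc = cv2.VideoCapture("cuda_encode.mp4")         # hypothetical encode

scores = []
while True:
    ok_ref, frame_ref = cap_ref.read()
    ok_enc, frame_enc = cap_enc.read()
    if not (ok_ref and ok_enc):
        break
    scores.append(psnr(frame_ref, frame_enc))

cap_ref.release()
cap_enc.release()

# The lowest-scoring frames tend to cluster in high-motion scenes,
# which is where the blocking is easiest to spot by eye.
print(f"Average PSNR: {sum(scores) / len(scores):.2f} dB, worst frame: {min(scores):.2f} dB")
```

The worst-frame figure is the one to watch; an average can hide a handful of badly blocked frames in fast-moving scenes.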
Verdict
As far as I’m concerned, sacrificing quality for speed is never alright.
There’s a completely different story for another day here, since we can now turn around and run performance/quality tests on Sandy Bridge, AMD’s discrete graphics, and Nvidia’s add-in cards—all of which offer accelerated encoding. And since there are multiple software apps optimized for all three paths, we can really dig deeper in the days to come.
In the context of media-oriented nettops, though, I’d rather forgo Ion’s CUDA-based encode acceleration and get a better picture. That takes away much of the platform’s advantage over AMD’s E-350. Hopefully, the OpenCL-based encoders expected to emerge later this year, utilizing Llano, are written with our quality-oriented concerns in mind.