- Image Quality: Examined
- Intel, AMD, And Nvidia: Decode And Encode Support
- Transcoding Quality Revisited: CUDA Problems?
- Test Setup
- Hardware Decoder Quality: Examined
- Software Decoding: All CPU, All the Time
- Full Blu-ray Transcoding Speed: APP Versus CUDA Versus Quick Sync
- Small Clip Transcoding Speed: APP Versus CUDA Versus Quick Sync
- Transcoding Quality: APP Versus CUDA Versus Quick Sync
- Transcoding Quality: Rated By Software Title
- Playing Devil's Advocate: "There is No Spoon"
- Inside The Black Box: GPGPU Encoding
- Final Words
Playing Devil's Advocate: "There is No Spoon"
We spent many hours talking to industry experts, including professionals from CyberLink, Arcsoft, Elemental, Nvidia, AMD, Microsoft, and Intel, about the issue of testing transcoding quality. As a result, we feel it's necessary to clear the air a bit. You could easily walk away from this article and only take from it that "AMD and Arcsoft yield hazy video" and "Nvidia and MediaEspresso together look blocky." But that would only be a small part of the story, and it doesn't consider the bigger picture.
Our original intention was to see if there was a way to definitively conclude whether Nvidia's CUDA, AMD's APP, or Intel's Quick Sync yielded the best transcoded video output quality. As it turns out, this may be a question without a clear answer.
Why? Let's start with the software. First off, all three of the applications we used for this article expose different settings for encoding an iPad video (among many other profiles beyond just that one). For example, the default bitrate in MediaEspresso is 3 Mb/s, MediaConverter uses 4 Mb/s, and Badaboom defaults to 2.5 Mb/s. Now, we can normalize these settings to 3 Mb/s and match all other settings, but the comparison still wouldn't be quite right. In order to set the same bitrate in MediaConverter, we had to create a custom H.264 MP4 profile and manually select the bitrate, along with other settings. That very act changes the dynamic a bit: when you select a profile, there are encoding parameters not exposed in the user interface that affect final output. Since MediaConverter no longer uses the iPad profile, it is already at a disadvantage.
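To see why those default bitrates matter, here is a minimal sketch of the arithmetic: output size scales linearly with bitrate, so the same clip transcoded at each application's default comes out meaningfully different in size (and, typically, quality). The duration and the size formula are illustrative; audio and container overhead are ignored.

```python
# Approximate video payload size at each application's default bitrate
# (defaults taken from the article; audio/container overhead ignored).

DEFAULT_BITRATES_MBPS = {
    "MediaEspresso": 3.0,
    "MediaConverter": 4.0,
    "Badaboom": 2.5,
}

def video_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """Video-only payload in megabytes: bits per second x seconds / 8."""
    return bitrate_mbps * 1_000_000 * duration_s / 8 / 1_000_000

# A two-minute clip comes out noticeably different per tool:
for app, mbps in DEFAULT_BITRATES_MBPS.items():
    print(f"{app}: {video_size_mb(mbps, 120):.1f} MB")
```

At 120 seconds, the spread between Badaboom's 2.5 Mb/s default and MediaConverter's 4 Mb/s default is already more than 20 MB of encoded data, which is why we normalized bitrates before comparing output.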
| H.264 | Badaboom | MediaConverter | MediaEspresso (AMD & Nvidia) | MediaEspresso (Intel) |
|---|---|---|---|---|
| Hardware / GPGPU Encoder | MediaSDK (Intel), proprietary (Nvidia) | MediaSDK (Intel), APP Reference Library (AMD), CUDA Reference Library (Nvidia) | APP Reference Library (AMD), CUDA Reference Library (Nvidia) | MediaSDK |
| Hardware Decoder | Proprietary with NVCUVID (Nvidia) | Proprietary with DXVA pathway | Proprietary with DXVA pathway | MediaSDK |
Second, we need to talk about decoders and encoders. It seems nearly impossible to make a definitive statement about a single hardware encoder with such disparate results. For example, if you are using HD Graphics 2000 or 3000 in MediaEspresso, Badaboom, or MediaConverter, you are always employing the encoder and decoder from Intel's MediaSDK library. CyberLink only uses its proprietary decoder on AMD- and Nvidia-based hardware. Badaboom doesn't support APP-based encoding at all, but its CUDA encoder was developed completely in-house. Meanwhile, Arcsoft and CyberLink both use Nvidia's reference library to transcode video on Nvidia GPUs. If you downloaded the videos, then you know that using the same reference library doesn't guarantee consistency.
Even if you set aside the problems of isolating a specific encoder, comparing different hardware in the same application raises just as many issues. For encoding, the rate control, mode selection (Inter/Intra; 4x4, 8x8, or 16x16 blocks), and encoding options (such as B-frames) all affect transcoded video quality. One software programmer raised the point that the encoding parameters used by the different library implementations may not even be the same. For example, it is possible that MediaEspresso uses a 4x4 block size in APP and 8x8 in CUDA.
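The programmer's point can be sketched in a few lines. This is purely hypothetical pseudocode, not CyberLink's actual implementation: the same user-facing "iPad" profile could map to different low-level parameters depending on which GPGPU backend is active, and every value below is invented for illustration.

```python
# Hypothetical illustration (NOT actual MediaEspresso internals): one
# user-facing profile, two backends, two different parameter sets.

IPAD_PROFILE_BY_BACKEND = {
    # All values below are invented for the sake of the example.
    "APP":  {"rate_control": "VBR", "block_size": "4x4", "b_frames": 0},
    "CUDA": {"rate_control": "VBR", "block_size": "8x8", "b_frames": 2},
}

def params_match(backend_a: str, backend_b: str) -> bool:
    """True only if both backends would encode with identical settings."""
    return IPAD_PROFILE_BY_BACKEND[backend_a] == IPAD_PROFILE_BY_BACKEND[backend_b]

# Same profile name, different encodes -- quality can diverge even
# within a single application:
print(params_match("APP", "CUDA"))  # False
```

If this is what happens under the hood, an "APP versus CUDA" comparison inside one application is partly a comparison of two different parameter sets, not just two pieces of hardware.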
So, what makes for a bad video? One of the software vendors told us it uses Elecard's StreamEye Studio to analyze transcoded video. But what happens when you need to call the source material into question? When you transcode video, you are passing it through a decoder and then an encoder. Afterwards, the very act of pressing play on your transcoded video forces the video data through another decoder and a specific renderer. This means you are viewing your data through four lenses. If there is an error, where did it come from? The scientific method calls for us to isolate as many variables as possible. That means we don't change the output resolution, frame rate, or CABAC settings. Only by testing variables one at a time can we isolate factors. Yet the fact remains that we are still examining video through multiple lenses, even if we use something as well-coded as StreamEye Studio. At that point, you are still using an Elecard decoder and a specific renderer to capture a frame for analysis.
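The one-variable-at-a-time process described above can be sketched as a simple sweep over playback combinations. The player and renderer names are just examples, and `artifact_visible` stands in for a human checking frames: if every combination shows the flaw, suspect the encode; if only one does, suspect that decoder/renderer pair.

```python
from itertools import product

# Sketch of isolating the playback "lenses": try every decoder/renderer
# pair and record where an artifact actually reproduces.

decoders = ["VLC", "PowerDVD", "WMP12"]   # example playback decoders
renderers = ["EVR", "VMR9"]               # example renderers

def artifact_visible(decoder: str, renderer: str) -> bool:
    # Stand-in for inspecting a captured frame for corruption.
    # Here we pretend the flaw only reproduces in one combination.
    return (decoder, renderer) == ("WMP12", "EVR")

failures = [(d, r) for d, r in product(decoders, renderers)
            if artifact_visible(d, r)]

# A single failing pair points at the playback path, not the encode.
print(failures)
```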
Short of every software company giving us access to its proprietary encoder so that we can pull raw frames from the frame buffer, there isn't even a way for us to definitively look at images without introducing the variable of playback (adding a decoder and renderer).
At the codec level, industry tools call for PSNR (peak signal-to-noise ratio) analysis, but like sound, it isn't a precise science. There is an overarching method, but few industry standards, and different tools calculate PSNR differently. One researcher even told us that Tektronix once sold a $50,000 machine for image analysis, but it forced you to use the company's reference image. So what do you do when the very math you use for analysis can be scrutinized? In our talks with AMD, we were told that its engineers only do PSNR measurements after ensuring they are using the same protocol and reference point.
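To make the "different tools calculate PSNR differently" point concrete, here is a minimal NumPy sketch showing two common conventions. Real analysis tools vary further still (some measure luma only, some weight channels); the implementations below are illustrative, not any vendor's method.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Global PSNR in dB, computed over all samples at once."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise to measure
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_mean_per_channel(reference: np.ndarray, test: np.ndarray) -> float:
    """A second convention: average the per-channel PSNR values.
    This generally does NOT equal the global figure, which is one way
    two tools can report different numbers for the same frame pair."""
    return float(np.mean([psnr(reference[..., c], test[..., c])
                          for c in range(reference.shape[-1])]))
```

Because PSNR averages a logarithm differently depending on where you draw the channel boundaries, a frame with error concentrated in one channel scores differently under the two conventions, so quoting "the" PSNR of a transcode without naming the protocol is exactly the ambiguity AMD's engineers guard against.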
Someone commented that H.264 analysis is easier than MPEG-2. Indeed, the H.264 standard is much tighter for decoders, and less sloppiness is accepted (in fact, it is bit-exact). One method is to examine the pure bitstream to see if it is compliant, but this alone doesn't tell you if it is a good or bad video. A compliant bitstream can look bad, and a non-compliant bitstream can look good. We are told this may be what happened in some of our videos that had tracking errors.
To make things even more complicated, a good decoder (hardware or software) can make up for a bad encoding job. This means a visual artifact may appear in VLC, but you won't see it in PowerDVD or WMP12. And there is no universal codec that always does better in everything, so you will see visual errors on a case-by-case basis, depending on the transcoding job and the decoder/renderer used for playback. So, even if you have narrowed the problem down to the encoder and the decoder/renderer used for playback, how can you tell bad video from good video? Hardware decoders like UVD and PureVideo correct errors at the firmware level on the fly (like ECC memory), so even if you were the programmer writing the encoding software, it is still hard to know for sure where an error originates.
Sure, you can tell a 360p video from 1080p. But can you tell one bad low-bitrate 480p transcoding job from a good one? When you have fewer reference points to evaluate video, it becomes very difficult. What does that mean for your mom, grandfather, or little brother, though? How do they know that the video they feed into this easy-to-use transcoding machine is going to come out the other end looking like something they'll want to watch?
According to the experts, that is the million-dollar question plaguing our industry. If you don't have a lot of dough and a lot of time on your hands, the short answer is that you can only tell if it is an obvious mistake. "You'll know it when you see it." This was the case with our CUDA-based transcoding in our recent Brazos coverage. It was bad enough to warrant an "I wouldn't watch a full movie with this sort of visual corruption."
Now, it is easy to brush this off as an infrequent occurrence and accuse us of nitpicking for the sake of a story. But if you transcode a lot of video, then you know that the industry scrutinizes codecs intensely. There are case studies on decoders and encoders for which you need to pay thousands of dollars.
Within our own tests, I would say about 3-5% of our transcoded videos have obvious errors. And we have some examples to share. In the screenshots above, we have visual artifacts that we would describe as tearing in WMP12. Now, usually, you can fault the renderer for this, but that isn't always the case. For this particular video, it turns out to be the fault of the software decoder, because the artifact only appears during playback in WMP12 with HD Graphics 3000.
In another case, we had a smaller error from our Up! trailer that was part of a poor transcoding job. This artifact appeared on all video players, regardless of our hardware configuration.