
Playing Devil's Advocate: "There is No Spoon"

Video Transcoding Examined: AMD, Intel, And Nvidia In-Depth
By Andrew Ku

We spent many hours talking to industry experts, including professionals from CyberLink, Arcsoft, Elemental, Nvidia, AMD, Microsoft, and Intel, about the issue of testing transcoding quality. As a result, we feel it's necessary to clear the air a bit. You could easily walk away from this article and only take from it that "AMD and Arcsoft yield hazy video" and "Nvidia and MediaEspresso together look blocky." But that would only be a small part of the story, and it doesn't consider the bigger picture.

Our original intention was to see if there was a way to definitively conclude whether Nvidia's CUDA, AMD's APP, or Intel's Quick Sync yielded the best transcoded video output quality. As it turns out, this may be a question without a clear answer.

Why? Let's start with the software. First off, all three of the applications we used for this article have different settings for encoding an iPad video (and many other profiles beyond just that one). For example, the default bitrate in MediaEspresso is 3 Mb/s, MediaConverter uses 4 Mb/s, and Badaboom defaults to 2.5 Mb/s. Now, we can normalize these settings to 3 Mb/s and make all other settings the same, but the comparison still wouldn't be quite right. In order to set the same bitrate in MediaConverter, we had to create a custom H.264 MP4 profile and manually select the bitrate, along with other settings. That very act changes the dynamic a bit. When you select a profile, there are encoding parameters not exposed in the user interface that affect the final output. Since MediaConverter no longer uses the iPad profile, it already starts at a disadvantage.

| H.264 | Badaboom | MediaConverter | MediaEspresso (AMD & Nvidia) | MediaEspresso (Intel) |
|---|---|---|---|---|
| Software Decoder (CPU only) | MediaSDK | Proprietary | Proprietary | MediaSDK |
| Software Encoder (CPU only) | MediaSDK | Proprietary | Proprietary | MediaSDK |
| Hardware / GPGPU Encoder | MediaSDK (Intel), Proprietary (CUDA) | APP Reference Library (AMD), CUDA Reference Library (Nvidia), MediaSDK (Intel) | APP Reference Library (AMD), CUDA Reference Library (Nvidia) | MediaSDK |
| Hardware Decoder | Proprietary with NVCUVID (Nvidia), MediaSDK (Intel) | Proprietary with DXVA pathway | Proprietary with DXVA pathway | MediaSDK |


Second, we need to talk about decoders and encoders. It seems nearly impossible to make a definitive statement about a single hardware encoder with such disparate results. For example, if you are using HD Graphics 2000 or 3000 in MediaEspresso, Badaboom, or MediaConverter, you are always employing the encoder and decoder from Intel's MediaSDK library. CyberLink only uses its proprietary decoder and encoder on AMD- and Nvidia-based hardware. Badaboom doesn't support APP-based encoding at all, but its CUDA encoder was completely developed in-house. Meanwhile, Arcsoft and CyberLink both use Nvidia's reference library to transcode video on Nvidia GPUs. If you downloaded the videos, then you know that using the same reference library doesn't guarantee consistency.

Even if you ignore some of the problems isolating a specific encoder, comparing different hardware in the same application can raise just as many issues. For encoding, rate control, mode selection (Inter/Intra; 4x4, 8x8, or 16x16 blocks), and encoding options such as B-frames all affect transcoded video quality. One software programmer raised the point that the encoding parameters used across the different libraries' implementations may not even be the same. For example, it is possible that MediaEspresso uses a 4x4 macroblock in APP and an 8x8 macroblock in CUDA.
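
To make that concrete, here is a minimal sketch (not the pipeline any of these applications actually uses) showing how two transcodes can target the exact same bitrate while still using very different encoding tools. It shells out to ffmpeg's libx264 encoder; the file names are placeholders.

```python
# Sketch: two transcodes at the same target bitrate but different encoder
# parameters. This uses ffmpeg/libx264 purely as an illustration; it is not
# the pipeline Badaboom, MediaConverter, or MediaEspresso actually uses.
import subprocess

SOURCE = "source_clip.mp4"   # placeholder input

def transcode(output, extra_args):
    """Encode SOURCE to H.264 at 3 Mb/s, 720p, with extra encoder options."""
    cmd = [
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "libx264",
        "-b:v", "3M",            # same nominal bitrate in both runs
        "-s", "1280x720",        # hold resolution constant
        "-c:a", "copy",
    ] + extra_args + [output]
    subprocess.run(cmd, check=True)

# Run A: B-frames and multiple reference frames allowed (libx264 defaults).
transcode("run_a.mp4", [])

# Run B: no B-frames, a single reference frame, baseline profile.
# Same bitrate setting, but the encoder has fewer tools to spend those bits.
transcode("run_b.mp4", ["-bf", "0", "-refs", "1", "-profile:v", "baseline"])
```

Both outputs will report roughly 3 Mb/s, yet run B has no B-frames and only one reference frame, so the two files are not equivalent in quality even though the headline setting matches.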

So, what makes for a bad video? One of the software vendors told us it uses Elecard's StreamEye Studio to analyze transcoded video. But what happens when you need to call the source material into question? When you transcode video, you are passing it through a decoder and then an encoder. Afterwards, the very act of pressing play on your transcoded video forces the video data through another decoder and a specific renderer. This means you are viewing your data through four lenses. If there is an error, where did it come from? The scientific method calls for us to isolate as many variables as possible. That means we don't change the resolution we transcode to, nor do we change frame rates or CABAC settings. Only by testing one variable at a time can we isolate factors. Yet, the fact remains that we are still examining video through multiple lenses, even if we use something as well-coded as StreamEye Studio. At that point, you are still using an Elecard decoder and a specific renderer to capture a frame for analysis.

Short of every software company giving us access to their proprietary encoder so that we can pull RAW frames from the frame buffer, there isn't even a way for us to definitively look at images without introducing the variable of playback (adding a decoder and renderer).
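
One partial workaround, and only a sketch under the assumption that you accept a single reference decoder as your "lens," is to dump the same frame from each transcode with the same software decoder, so at least the playback-side variables are identical for every file under comparison. Continuing the hypothetical run_a/run_b files from the earlier sketch (the frame index is arbitrary):

```python
# Sketch: decode the same frame index from two transcodes with one fixed
# software decoder (ffmpeg), so the playback-side decoder is the same for
# both files. This does not remove the decoder "lens"; it only makes it
# the same lens for every file under comparison.
import subprocess

FRAME_INDEX = 300  # placeholder: pick a frame that shows the artifact

def dump_frame(video, png_out, frame_index=FRAME_INDEX):
    """Write a single decoded frame as a lossless PNG."""
    subprocess.run([
        "ffmpeg", "-y", "-i", video,
        "-vf", f"select=eq(n\\,{frame_index})",  # select the Nth decoded frame
        "-frames:v", "1",
        png_out,
    ], check=True)

dump_frame("run_a.mp4", "run_a_frame300.png")
dump_frame("run_b.mp4", "run_b_frame300.png")
```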

At the codec level, industry tools call for PSNR (peak signal-to-noise ratio) analysis, but like sound, it isn't a precise science. There is an overarching method, but few industry standards. Different tools calculate PSNR differently. One researcher even told us that Tektronix once sold a $50,000 machine for image analysis, but it forced you to use the company's reference image. So what do you do when the very math you use for analysis can be scrutinized? In our talks with AMD, we were told that its engineers only do PSNR measurements to ensure they are using the same protocol and reference point.
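
For reference, here is a hedged sketch of one common PSNR convention (8-bit data, a peak of 255, grayscale as a stand-in for luma); it is not the formula any particular vendor or analyzer uses. Even small choices here, such as luma-only versus all planes, or per-frame versus whole-clip averaging, shift the final number, which is exactly the inconsistency described above. The file names continue the hypothetical frame dumps from the earlier sketch.

```python
# Sketch: one common PSNR convention, computed on 8-bit data with a peak of
# 255 and MSE averaged over the whole frame. Other tools average per-frame
# PSNR over a clip, restrict it to the luma plane, or use a different peak,
# which is why numbers from different tools rarely line up.
import numpy as np
from PIL import Image

def psnr(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.float64)  # grayscale as a luma stand-in
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("frames must be the same resolution")
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((255.0 ** 2) / mse)

# Compare each transcode's frame against the same frame decoded from the source
# (source_frame300.png is a placeholder dumped the same way as run_a/run_b).
print(psnr("source_frame300.png", "run_a_frame300.png"))
print(psnr("source_frame300.png", "run_b_frame300.png"))
```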

Someone commented that H.264 analysis is easier than MPEG-2. Indeed, the H.264 standard is much tighter for decoders, and there is less sloppiness accepted (in fact, decoding is bit-exact). One method is to examine the pure bitstream to see if it is compliant, but that alone doesn't tell you whether it is a good or bad video. A compliant bitstream can look bad, and a non-compliant bitstream can look good. We are told this may be what happened in some of our videos that had tracking errors.
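
As a rough illustration of the "examine the bitstream" approach, the sketch below asks ffprobe what the stream declares about itself (codec, profile, level). This is only a sanity check, not a true conformance test, and, per the point above, a stream that reports sensible values can still look bad. The file name is a placeholder.

```python
# Sketch: a quick look at what an H.264 stream *declares* about itself
# (codec, profile, level, dimensions) via ffprobe. This is only a sanity
# check, not a conformance analysis, and it says nothing about whether
# the video actually looks good.
import json
import subprocess

def describe_stream(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,level,width,height",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

print(describe_stream("run_a.mp4"))  # placeholder file from the earlier sketch
```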

To make things even more complicated, good decoders (hardware or software) can make up for a bad encoding job. This means a visual artifact may appear in VLC, but you won't see it in PowerDVD or WMP12. And there is no universal codec that will always do better in everything, so you will see visual errors on a case-by-case basis depending on the transcoding job and the decoder/renderer being used for playback. So, even if you have narrowed the problem down to the encoder and the decoder/renderer used for playback, how can you tell bad video from good video? Hardware decoders like UVD and PureVideo correct errors at the firmware level on the fly (like ECC memory), so even if you were the programmer writing the encoding software, it would still be hard to know for sure where an error originates.

Sure, you can tell a 360p video from 1080p. But can you tell one bad low-bitrate 480p transcoding job from a good one? When you have fewer reference points to evaluate video, it becomes very difficult. What does that mean for your mom, grandfather, or little brother, though? How do they know that the video they feed into this easy-to-use transcoding machine is going to come out the other end looking like something they'll want to watch?

According to the experts, that is the million-dollar question plaguing our industry. If you don't have a lot of dough and a lot of time on your hands, the short answer is that you can only tell if it is an obvious mistake. "You'll know it when you see it." This was the case with the CUDA-based transcoding in our recent Brazos coverage. It was bad enough to warrant an "I wouldn't watch a full movie with this sort of visual corruption."

Now, it is easy to brush this off as an infrequent occurrence and accuse us of nitpicking for the sake of a story. But if you transcode a lot of video, then you know that the industry scrutinizes codecs intensely. There are case studies on decoders and encoders for which you need to pay thousands of dollars.

[Screen shots: WMP12 HW Decode (MediaFoundation) | PowerDVD SW Decode | PowerDVD HW Decode]

Within our own tests, I would say about 3-5% of our transcoded videos have obvious errors. And we have some examples to share. In the screen shots above, we have visual artifacts that we would describe as tearing in WMP12. Usually, you can fault the renderer for this, but that isn't always the case. For this particular video, it turns out that it's the fault of the software decoder, because the artifact only appears during playback in WMP12 with HD Graphics 3000.

Encoding Error

In another case, we had a smaller error from our Up! trailer that was part of a poor transcoding job. This artifact appeared on all video players, regardless of our hardware configuration.

Comments
  • 28 Hide
    spoiled1 , February 7, 2011 3:39 AM
    Tom,
    You have been around for over a decade, and you still haven't figured out the basics of web interfaces.

    When I want to open an image in a new tab using Ctrl+Click, that's what I want to do, I do not want to move away from my current page.

    Please fix your links.
    Thanks
  • 19 Hide
    spammit , February 7, 2011 4:11 AM
    omgf, ^^^this^^^.

    I signed up just to agree with this. I've been reading this site for over 5 years and I have hoped and hoped that this site would change to accommodate the user, but, clearly, that's not going to happen. Not to mention all the spelling and grammar mistakes in the recent year. (Don't know about this article, didn't read it all).

    I didn't even finish reading the article and looking at the comparisons because of the problem sploiled1 mentioned. I don't want to click on a single image 4 times to see it fullsize, and I certainly don't want to do it 4 times (mind you, you'd have to open the article 4 separate times) in order to compare the images side by side (alt-tab, etc).

    Just abysmal.
  • 17 Hide
    cpy , February 7, 2011 4:30 AM
    THW have worst image presentation ever, you can't even load multiple images so you can compare them in different tabs, could you do direct links to images instead of this bad design?
  • 4 Hide
    ProDigit10 , February 7, 2011 4:53 AM
    I would say not long from here we'll see encoders doing video parallel encoding by loading pieces between keyframes. keyframes are tiny jpegs inserted in a movie preferably when a scenery change happens that is greater than what a motion codec would be able to morph the existing screen into.
    The data between keyframes can easily be encoded in a parallel pipeline or thread of a cpu or gpu.
    Even on mobile platforms integrated graphics have more than 4 shader units, so I suspect even on mobile graphics cards you could run as much as 8 or more threads on encoding (depending on the gpu, between 400 and 800 Mhz), that would be equal to encoding a single thread video at the speed of a cpu encoding with speed of 1,6-6,4GHz, not to mention the laptop or mobile device still has at least one extra thread on the CPU to run the program, and operating system, as well as arrange the threads and be responsible for the reading and writing of data, while the other thread(s) of a CPU could help out the gpu in encoding video.

    The only issue here would be B-frames, but for fast encoding video you could give up 5-15MB video on a 700MB file due to no B-frame support, if it could save you time by processing threads in parallel.
  • 7 Hide
    intelx , February 7, 2011 6:04 AM
    first thanks for the article i been looking for this, but your gallery really sucks, i mean it takes me good 5 mins just to get 3 pics next to each other to compare , the gallery should be updated to something else for fast viewing.
  • 7 Hide
    _Pez_ , February 7, 2011 6:09 AM
    Ups ! for tom's hardware's web page :p , Fix your links. :)  !. And I agree with them; spoiled1 and spammit.
  • 8 Hide
    AppleBlowsDonkeyBalls , February 7, 2011 6:12 AM
    I agree. Tom's needs to figure out how to properly make images accessible to the readers.
  • 7 Hide
    kikireeki , February 7, 2011 9:49 AM
    spoiled1: Tom, You have been around for over a decade, and you still haven't figured out the basics of web interfaces. When I want to open an image in a new tab using Ctrl+Click, that's what I want to do, I do not want to move away from my current page. Please fix your links. Thanks


    and to make things even worse, the new page will show you the picture with the same thumbnail size and you have to click on it again to see the full image size, brilliant!
  • 6 Hide
    acku , February 7, 2011 10:31 AM
    Apologies to all. There are things I can control in the presentation of an article and things that I cannot, but everyone here has given fair criticism. I agree that right-clicking and opening images in a new window is an important feature for articles on image quality. I'll make sure Chris continues to push the subject with the right people.

    Web dev is a separate department, so we have no ability to influence the speed at which a feature is implemented. In the meantime, I've uploaded all the pictures to ZumoDrive. It's packed as a single download. http://www.zumodrive.com/share/anjfN2YwMW

    Remember to view pictures in the native resolution to avoid scalers.

    Cheers
    Andrew Ku
    TomsHardware.com
  • 4 Hide
    Reynod , February 7, 2011 10:41 AM
    An excellent read though Andrew.

    Please give us an update in a few months to see if there has been any noticeable improvements ... keep your base files for reference.

    I would imagine Quicksynch is now a major plus for those interested in rendering ... and AMD and NVidia have some work to do.

    I appreciate the time and effort you put into the research and the depth of the article.

    Thanks,

    :) 
  • -1 Hide
    acku , February 7, 2011 10:54 AM
    Quote:
    An excellent read though Andrew.

    Please give us an update in a few months to see if there has been any noticeable improvements ... keep your base files for reference.

    I would imagine Quicksynch is now a major plus for those interested in rendering ... and AMD and NVidia have some work to do.

    I appreciate the time and effort you put into the research and the depth of the article.

    Thanks,

    :) 


    Will do, but I think this article sums up everything in a way that will stay relevant for months to come. (Well, it's my hope it does, anyway.) "In a worst-case scenario, hardware acceleration gives you 75% of the quality and a minor speed up versus processor-only transcoding. In a best-case scenario, you are getting 99% of the quality, and running up to 400% faster than a processor working on its own." The difference is that in a few months, the worst case will likely be up to 80%, 90%, or even 99%.

    There is always going to be some sort of trade off, and for the majority of us, 99% quality preservation at 4x the speed is well worth the benefit. The problem is that there is virtually no way to compare transcoding software or even GPGPU hardware (or software) without introducing new variables to testing. You need to accept all the variables and treat the problem like a puzzle grid.

    I would add that there is so much more to image quality than what we talked about. We didn't even discuss LCD hardware or colorspace. I think this article changes the game a bit. I think we have gotten so used to seeing tearing, blocking, or some other video artifact that we simply blame the video encoder without a second thought.

    If you read many of the Sandy Bridge articles on the web, people were simply saying "that video looks fuzzy" in very specific cases and then labeled Quick Sync or CUDA poor at transcoding as a result. While the video they saw was fuzzy, that doesn't automatically make it a transcoding error. It could have been a renderer or decoder problem. For example, if the bitrate dropped off suddenly, it's possible that a specific decoder wasn't capable of keeping up. This was a major point we were trying to make. Those automatic claims are invalid if they didn't cross-check the problem to isolate decoders and renderers.

    Hell, you can't even rely on the same transcode path. If you rerun a transcode, the randomness (due to parallelism) can cause a visible error you didn't see in the first transcode, even if you use the same hardware and software config.

    Cheers,
    Andrew Ku
    TomsHardware.com
  • -4 Hide
    Miharu , February 7, 2011 11:34 AM
    Hi Toms,
    Before you write this article I had never hear about all of 3 softwares you talking about.
    I figure out you talk about new software supporting iPhone.

    New softwares... who they're probably no optimized for all solution.
    So I just imagine you didn't thinked about this before write this article.

    Comeback with x264 and MediaConcept H.264 analyst and benchmark. Perhaps I'll read you this time.
  • 1 Hide
    acku , February 7, 2011 11:43 AM
    Quote:
    Hi Toms,
    Before you write this article I had never hear about all of 3 softwares you talking about.
    I figure out you talk about new software supporting iPhone.

    New softwares... who they're probably no optimized for all solution.
    So I just imagine you didn't thinked about this before write this article.

    Comeback with x264 and MediaConcept H.264 analyst and benchmark. Perhaps I'll read you this time.


    When it comes to GPGPU transcoding, these are the three software titles that are at the forefront. MainConcept only recently finished a CUDA encoder in August. Elemental coded its own back in 2008. They were the first, and they are just as valid as MainConcept. If you follow insider industry news (like streamingmedia.com, read by people that create video for the masses, like Hulu's Eric Feng), then you know that Elemental's software is used by ABC, Big Ten Network, CBS Interactive, National Geographic, and PBS. Hell, MainConcept's Quick Sync encoder is still in beta as of this month. http://www.mainconcept.com/press/single-view/article/updated-mainconceptTM-h264avc-encoder-sdk-for-intelR-quick-sync-video.html Arcsoft and CyberLink were Intel's launch partners to demo Quick Sync; read any of the Sandy Bridge reviews.

    Cheers,
    Andrew Ku
    TomsHardware

  • 0 Hide
    Anonymous , February 7, 2011 12:07 PM
    Thanks for the work put into the article, since I'm very new to all this however, I think it may have gone over my head :) 

    I am in the market for a new 'budget pc' and leaning toward an intel i5-2500k with an nVidia gts450 gfx card, the system should be aimed at producing great video quality at reasonable speed.

    I'm not sure if I interpretted the results correctly, but it seems I would not need to get the nvidia card after all since software encoding produces better results and the HD 3000 would suffice? any advice would be greatly appreciated.

    Thanks!
    Amien
  • 0 Hide
    acku , February 7, 2011 12:21 PM
    Quote:
    Thanks for the work put into the article, since I'm very new to all this however, I think it may have gone over my head :) 

    I am in the market for a new 'budget pc' and leaning toward an intel i5-2500k with an nVidia gts450 gfx card, the system should be aimed at producing great video quality at reasonable speed.

    I'm not sure if I interpretted the results correctly, but it seems I would not need to get the nvidia card after all since software encoding produces better results and the HD 3000 would suffice? any advice would be greatly appreciated.

    Thanks!
    Amien


    Quick Sync is basically GPGPU; it's just done fixed-function style. I would say if you aren't crazy about image quality, and I mean at the extreme end (using SpectraCal to calibrate your HDTV, only watching TV reruns on Blu-ray, etc.), don't worry about software encoding. If you are willing to give up that 1% (best-case scenario) or ~25% (worst case), Quick Sync on the new Sandies will give you up to a 4x speed bump. Remember that we used a GTX 580. It has 512 CUDA cores. The GTS 450 only has 192. If you bought that graphics card, you wouldn't see the same transcoding performance as we did with the 580. Plus, transcoding using CUDA or APP uses the GPU for processing. That is going to burn into your power bill. Quick Sync uses fixed-function hardware, so it's always going to be the most power efficient, even more so than a pure software route.

    As I see it, forget the Nvidia card (unless you are gaming). The i5-2500K will still give you two options: Quick Sync or full software encoding. Remember that you need software that actually uses Quick Sync to transcode, though. It isn't an automatic feature of every transcoding application.

    Good luck on your build. I'd ping Don (who does our best CPU and graphics for the $ guides) if you have more questions on specific components.

    Cheers,
    Andrew Ku
    TomsHardware.com
  • 0 Hide
    Miharu , February 7, 2011 12:56 PM
    Andrew, did there are any avantage using Intel 3000 with ATI or Intel 3000 with Nvidia chipset as GPGPU ?
    I don't think "drivers" currently support that kind of thing... or any encode softwares?

    What did you think?

    Thank you
  • 1 Hide
    acku , February 7, 2011 12:59 PM
    Quote:
    Andrew, did there are any avantage using Intel 3000 with ATI or Intel 3000 with Nvidia chipset as GPGPU ?
    I don't think "drivers" currently support that kind of thing... or any encode softwares?

    What did you think?

    Thank you


    You can only choose one encoder. It is only going to be one of the following: Quick Sync, APP, or CUDA. You can't do combos. Remember that Intel HD Graphics 3000 is the graphics side; Quick Sync is a separate logic circuit, even though it's on the same die. I'll add that Quick Sync is disabled if you use a discrete graphics card.
  • 0 Hide
    cknobman , February 7, 2011 1:13 PM
    I have always just been happy using handbrake for all my video encoding needs and have never been disatisfied.

    I usually dont get in that big of a hurry and have never noticed anything terrible when watching the output but then .......

    Im not a videophile
  • 0 Hide
    amien , February 7, 2011 2:03 PM
    Thanks very much for that info, I'll be using Premiere Pro cs5, so i'm not sure if that supports Quick Sync?

    Out of interest, what card was used (if any) in the cpu benchmarks at
  • 0 Hide
    amien , February 7, 2011 2:04 PM