It seems pretty clear from benchmark results and my own experience that video encoding tends to benefit from multiple cores but audio encoding does not. Why is this? Is it simply that LAME and iTunes are not multi-threaded while the main video codecs tend to be? Or is it something inherent in the encoding algorithms that makes video easier to parallelize? If the former, are there multi-threaded audio encoding apps out there?
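To make the "inherent in the algorithms" idea concrete, here's a toy sketch of the data-dependency difference (nothing here is real codec code; the functions and numbers are made-up placeholders). Video can be cut into self-contained GOPs and farmed out to a worker pool, while an MP3-style bit reservoir carries state from one audio frame to the next, so each frame has to wait for the previous one:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_gop(gop):
    # Hypothetical per-GOP "encode": it needs nothing outside its own
    # frames, so every GOP can be handed to a different worker.
    return sum(gop) % 251  # placeholder for compressed output

def encode_audio(frames):
    # MP3-style bit reservoir: leftover bits from frame N are spent on
    # frame N+1, so frame N+1 cannot start before frame N finishes.
    reservoir, out = 0, []
    for f in frames:
        used = (f + reservoir) % 7        # placeholder: spend some bits
        reservoir = max(0, reservoir + 3 - used)
        out.append(used)
    return out

gops = [list(range(i, i + 12)) for i in range(0, 120, 12)]
with ThreadPoolExecutor() as pool:        # real codecs use native threads
    video_out = list(pool.map(encode_gop, gops))  # parallel-friendly
audio_out = encode_audio(list(range(120)))        # inherently serial
```

The point of the toy reservoir is that encoding the same audio frame gives a different result depending on what came before it, which is exactly what stops you from splitting the stream across cores.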
The TMPGEnc benchmarks show a noticeable multi-core improvement when encoding DivX but not Xvid. And in the Tom's Hardware reviews it's not uncommon to find sentences like: "...the DivX codec results show a notable nod to threading, while the Xvid codec does not."
It's evidence like this that makes me think there's more to it than just the main program's ability to multi-thread.
That's true. But if it were only a matter of whether the application is multi-threaded, how do you explain that the same multi-threaded application (TMPGEnc) scales well across multiple cores with one codec (DivX) but not with the other (Xvid)?
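One way to picture that: the host application's side of the call can be identical for both codecs, and whether extra cores get used depends entirely on whether the codec splits the work internally. A minimal sketch, using hypothetical codec classes rather than TMPGEnc's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def compress(frame):
    return frame * 2 % 255  # placeholder per-frame work

class ThreadedCodec:
    """Splits the frame list across its own internal worker pool."""
    def encode(self, frames, threads=4):
        with ThreadPoolExecutor(max_workers=threads) as pool:
            return list(pool.map(compress, frames))

class SerialCodec:
    """Same external API, but does everything on one thread."""
    def encode(self, frames, threads=4):
        return [compress(f) for f in frames]  # 'threads' hint ignored

# The host app is the same either way -- it just calls encode().
def host_app(codec, frames):
    return codec.encode(frames, threads=4)
```

Both codecs produce identical output through the identical host-app call; only the threaded one would spread the work across cores, which would explain DivX scaling where Xvid doesn't.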