Introduction
In part one of our Avivo vs. PureVideo comparison, we concentrated on DVD playback. In our second article in this series, we focused on high-definition video formats—HD DVD in particular—and we dug into what it takes to get them to work on the PC. In the third article, we evaluated the end-user experience with low-end video accelerators.
In this article, we will dig deep into the performance of the first integrated high-def video accelerators—the Radeon HD 3200 (used in AMD’s 780G chipset) and the GeForce 8200 (used in Nvidia’s MCP78S chipset). Both chipsets are for AMD socket AM2+ CPUs, so we can perform a real apples-to-apples comparison here.
Will these solutions perform similarly, or will a true leader emerge? How well will they work with slower, single-core budget CPUs? How much memory needs to be allocated to the on-board graphics for Blu-ray playback? How do they compare to discrete video cards? We will answer all of these questions and more.
The Competitors
Let’s begin by having a closer look at the technical differences between the integrated Radeon HD 3200 and GeForce 8200 graphics processors.
| Specification | Radeon HD 3200 |
|:---|:---|
| Codename | 780G |
| Process | 55 nm |
| Universal Shaders | 40 |
| Texture Units | 4 |
| ROPs | 4 |
| Memory Bus | 64-bit |
| Core Speed | 500 MHz |
| Memory Speed | 400 MHz (800 MHz effective) |
| DirectX / Shader Model | DX 10 / SM 4.0 |
In terms of hardware, the integrated Radeon HD 3200 is a carbon copy of the discrete Radeon HD 3450 (and the previous 2400 series as well), with the same number of shaders, texture units, and ROPs. I can’t recall another instance where a contemporary discrete desktop video card shared exact specifications with an integrated part; usually integrated motherboard graphics are a generation behind even the low-end discrete cards. In this respect, the 780G is special. Of course, it still suffers from the plague of sharing system memory over a 64-bit bus, but out of the gate, the 3200 shows promise.
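To put that shared-memory handicap in perspective, here is a minimal back-of-the-envelope sketch, using only the 64-bit bus width and 800 MT/s effective transfer rate from the table above. It shows the theoretical peak bandwidth an integrated part like this has to work with, and keep in mind that even this ceiling is shared with the CPU and the rest of the system.

```python
# Rough theoretical memory bandwidth for an integrated GPU, using the figures
# from the spec table (64-bit bus, 400 MHz DDR = 800 MT/s effective).
# This is a theoretical ceiling, not a measured number, and it is shared with
# the CPU since the graphics core has no dedicated memory of its own.

def peak_bandwidth_gbps(bus_width_bits: int, effective_mtps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) x (transfers per second)."""
    return (bus_width_bits / 8) * effective_mtps * 1e6 / 1e9

if __name__ == "__main__":
    bw = peak_bandwidth_gbps(bus_width_bits=64, effective_mtps=800)
    print(f"Peak shared-memory bandwidth: {bw:.1f} GB/s")  # ~6.4 GB/s
```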
| Specification | GeForce 8200 |
|:---|:---|
| Codename | MCP78S |
| Process | 80 nm |
| Universal Shaders | 16 |
| Texture Units | 4 |
| ROPs | 4 |
| Memory Bus | 64-bit |
| Core Speed | 500 MHz |
| Memory Speed | 400 MHz (800 MHz effective) |
| DirectX / Shader Model | DX 10 / SM 4.0 |
The GeForce 8200 is a very close relative of the discrete 8400 GS, built on the older 80 nm process and sporting the same 16 universal shaders. However, the number of texture units and ROPs has been cut in half compared to the 8400.
The specs look remarkably close between these two competitors, with identical clock speeds, texture units, and ROPs, but the GeForce 8200's 16 universal shaders look a bit weak next to the Radeon HD 3200's 40. Consider, however, that the GeForce and Radeon architectures are vastly different, so raw shader counts are not an accurate measure of relative performance: Nvidia's shaders are scalar units that run in a separate, much faster clock domain, while AMD's 40 shaders are grouped into eight five-wide superscalar units that are harder to keep fully occupied. GeForce cards have done more with fewer shaders ever since the GeForce 8 series, so this is still a good match-up.
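To illustrate the point, here is a rough sketch comparing peak programmable-shader throughput on paper. The per-shader math is deliberately simplified, and the GeForce 8200's shader clock is an assumption on my part (it is not listed in the tables above), so treat the output as a ballpark illustration rather than a spec.

```python
# A rough illustration of why raw shader counts do not translate directly into
# performance. The per-shader math is simplified (a multiply-add counted as
# 2 FLOPs per shader per clock), and the GeForce 8200's shader clock is an
# assumed value, since only the 500 MHz core clock appears in the spec tables.

def peak_mad_gflops(shaders: int, flops_per_shader_per_clock: int, clock_ghz: float) -> float:
    """Peak multiply-add throughput in GFLOPS."""
    return shaders * flops_per_shader_per_clock * clock_ghz

# Radeon HD 3200: 40 superscalar stream processors running at the 500 MHz core clock.
radeon = peak_mad_gflops(shaders=40, flops_per_shader_per_clock=2, clock_ghz=0.5)

# GeForce 8200: 16 scalar shaders running in a separate, faster clock domain
# (assumed here to be roughly 1.2 GHz).
geforce = peak_mad_gflops(shaders=16, flops_per_shader_per_clock=2, clock_ghz=1.2)

print(f"Radeon HD 3200 peak MAD throughput: ~{radeon:.0f} GFLOPS")  # ~40
print(f"GeForce 8200 peak MAD throughput:   ~{geforce:.0f} GFLOPS")  # ~38
```

Under those assumptions the two parts land in the same ballpark on paper, which is why the 16-versus-40 shader count alone is not a reason to write off the GeForce 8200.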