Imagine if you told everybody you were going to throw this awesome, mind-altering, uberlicious party. But the day of the party, the first people in the door discovered that the plumbing was backed up, and everybody left, which was fine because the live band had been killed by a freak tornado while en route. Five months later, you try to throw the same party again. The difference is that now you have a Fisher boom box instead of a live band, and, thanks to some duct tape, the plumbing works. Meanwhile, another guy down the block has already started throwing his own party. The invitations look a lot like yours. He’s serving the same drinks. You’re throwing in a free party favor, but no one seems to care, in part because the people who might care are already bustin’ moves down the block. Several people have RSVPed for your soiree, but only two or three have shown up so far.
You’re AMD, and the name of your party is “ATI Stream.”
If you caught our recent coverage of Nvidia’s CUDA platform, then you’re up to speed on the state of GPGPU processing, or GPU computing, or whatever you want to call it these days, and you know that ATI Stream stands alongside CUDA as one of the two most prevalent GPU computing platforms available today. The idea with GPU computing is to take highly parallelized tasks typically run in the CPU and offload them to the GPU, where they can run more quickly and efficiently. Programmable shaders are exceptionally well-suited for floating point-intensive tasks. Each shader operates as its own sort of processor core, so instead of having four or eight threads crunching on a parallelized task in the CPU, you could have 64 or 320 or however many stream processors tackling the same work in the GPU. Naturally, the program has to be coded to take advantage of this architecture, and the operations need to involve a relatively heavy amount of arithmetic per memory fetch in order to see decent results.
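The data-parallel model described above can be sketched on the CPU side: one big array, many workers each running the same arithmetic-heavy kernel on their own slice. On a GPU, each "worker" would be a stream processor running the kernel on one element; the function names below are illustrative only, not an actual Stream or CUDA API.

```python
# Conceptual sketch of data-parallel GPU-style computing, run on CPU
# threads. Names (kernel, parallel_map) are illustrative, not a real API.
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    # Arithmetic-heavy per-element work: several math ops per memory
    # fetch, which is the ratio that makes GPU offload worthwhile.
    return (x * x + 3.0 * x + 1.0) * 0.5

def parallel_map(data, workers=8):
    """Split the array into chunks and apply the same kernel to each chunk."""
    chunk = (len(data) + workers - 1) // workers
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda s: [kernel(x) for x in s], slices)
    # Re-join the per-worker results in order.
    return [y for part in results for y in part]

print(parallel_map([1.0, 2.0, 3.0, 4.0]))  # -> [2.5, 5.5, 9.5, 14.5]
```

The key point the sketch makes is that every element gets identical, independent work — exactly the shape of task that maps well onto dozens or hundreds of stream processors, provided the per-element arithmetic outweighs the memory traffic.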
When Stream launched last December, AMD had only enabled it to accelerate encoding into MPEG-2 and H.264 formats. The acceleration part was fine. What AMD hadn’t counted on was that it would be deluged with criticisms over its encoding quality. With May’s Catalyst 9.5 driver update, though, we finally have bug fixes for the quality issues and a fuller acceleration pipeline that now includes MPEG-2 and H.264 decoding, as well as resolution scaling. You can see this represented in the high-level illustration shown here.
The burning question, of course, is how does Stream stack up? Was it worth the wait? We’ve got some preliminary answers and more besides, but first, let’s step back for some perspective...
- Introduction
- Underground Stream: A GPGPU History
- Whither Avivo Video Converter?
- The “Balanced Platform”
- CyberLink Serves Up Espresso
- Let’s Pull Some Shots!
- Add Shots
- Mixed Messages
- Heavier Lifting
- Sim(ply Not Ready)HD
- In AMD’s Words
- Why Only These Codecs?
- Putting The “General” In GPGPU
- AMD On Stream’s Prospects

Virtually no one will bother using CUDA or Stream after OpenCL's out - why limit yourself to one hardware base, after all? It'd be like writing Windows software that only ran on AMD processors and not Intel. Developers will not bother writing for both when they can just use one language that can run on both hardware platforms.
Well, it's good to see more than just one app that supports it.
http://forums.nvidia.com/index.php?showtopic=96665&st=0&start=0
13 pages of people having different problems with that driver.
Isn't that the same graph from the following "Heavier Lifting" page, shown instead of the graph for the 298MB VOB file that should be there?
Yeah...that's just what I want from a GPU: Folding@Home. I find video transcoding to be a more useful way of using your GPU.
Nice article. Haven't seen one in a long time.
I agree. The last three cards that I bought were Nvidia cards, based solely on their folding performance. When gaming, I prefer an ATI card. Oh yeah, I have four computers, three using Nvidia cards for folding and one with an ATI card for gaming. I think it would be great if the reviews included Folding@Home performance. It might also encourage ATI to make cards that did better for folding.
It actually IS a tie. You awarded Nvidia a point for not offering an option for WMV encoding, even though performance showed a very slight, but measurable, increase with Stream enabled. You didn't give credit where credit was due. Do it right the next time.
1. I would have loved to test with the Folding@home app. I actually tried to when doing the earlier CUDA-on-a-budget article. However, I quickly discovered that the results were meaningless because the workloads varied too much. NVIDIA helped solve this problem by creating a series of batch files for SETI@home that used a common workload, and that's what you see in the article. However, there is no such tool that I know of for Folding@home, and AMD/ATI has not released an equivalent set of testing tools for SETI@home.
2. I count seven charts -- 4 to 3. I did give NVIDIA the point for better encoding on page 7. NVIDIA has zero points on page 6 and two points by the end of page 7. :-)
3. The side-by-side captures you see in the later article pages show samples of Stream vs. CUDA output. These are taken from GPU-accelerated output files. To my eye, they look almost identical, but I offer them up for you to make your own judgments. I would say that the output quality issues that plagued Stream's initial launch have been remedied.
4. Yes, I agree that, ultimately, OpenCL and DirectX 11 will lay the entire Stream/CUDA issue to rest. But that's someday. For now, this article's purpose was to take a look at today's technology.
5. I tested with an HD 4890, not a 4870. Apologies if there are any typos to the contrary.
6. There is no behind-the-scenes money changing hands that resulted in my page detailing CyberLink Espresso. I developed that page for two reasons. First, as I mentioned, Espresso is the ONLY application today with support for both Stream and CUDA, so it made sense to me that many people might want to buy it because of its agnostic support -- and it's a great tool. Second, in part because of this agnosticism, CyberLink has been immensely helpful to me in writing this article in a fair, even-minded, and accurate manner. The company helped me through many nights, often maintaining email dialogues well past midnight. So forgive me for being enthusiastic about the product. If CyberLink's customer support is even half as good as its press support, I think you'll be pleased.