- Fast Action Behind Still Photos
- Q&A: Under The Hood With AMD
- Q&A: Under The Hood With AMD, Cont.
- Test Platforms
- Applications: GIMP, AfterShot Pro, And Musemage
- Applications: Adobe Photoshop CS6
- Q&A: Under The Hood With Adobe
- Q&A: Under The Hood With Adobe, Cont.
- Q&A: Under The Hood With Adobe, Cont.
- Benchmark Results: GIMP
- Benchmark Results: AfterShot Pro
- Benchmark Results: Musemage
- Benchmark Results: Photoshop CS6
- The Picture Is Changing
Q&A: Under The Hood With Adobe, Cont.
Tom's Hardware: Are we anywhere close to saturating 16 lanes of second-gen PCIe for image editing operations?
Russell Williams: I don't have numbers off the top of my head, but think of a 16-megapixel DSLR image. Say you want to do something, like modifying the tilt of the blur plane in the blur gallery, and you want to get feedback in real-time—30 to 60 FPS. Then you have to composite the result with 50 other layers, and that compositing needs to be done back on the CPU, because the entire compositing engine isn't done on the GPU. So copying data back at 60 FPS, you're copying the full image that's being processed two or three times per frame. Suddenly, that PCIe doesn't look as fast as you originally thought.
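Williams' scenario can be checked with back-of-the-envelope arithmetic. The figures below are assumptions, not from the interview: a 16-megapixel image stored as four channels at 16 bits per channel, and roughly 8 GB/s of usable bandwidth for PCIe 2.0 x16.

```python
# Rough bandwidth check for the compositing round trip Williams describes.
# Pixel format and bus throughput are assumed values, not Adobe's internals.

MEGAPIXELS = 16
BYTES_PER_PIXEL = 4 * 2          # RGBA at 16 bits per channel (assumption)
FPS = 60                         # "30 to 60 FPS" interactive feedback
COPIES_PER_FRAME = 3             # "two or three times per frame"
PCIE2_X16_GBPS = 8.0             # ~8 GB/s usable on PCIe 2.0 x16 (assumption)

image_bytes = MEGAPIXELS * 1_000_000 * BYTES_PER_PIXEL
traffic_gbps = image_bytes * FPS * COPIES_PER_FRAME / 1e9

print(f"Per-copy image size: {image_bytes / 1e6:.0f} MB")    # 128 MB
print(f"Bus traffic needed:  {traffic_gbps:.1f} GB/s")       # ~23 GB/s
print("Saturated" if traffic_gbps > PCIE2_X16_GBPS else "Fits in budget")
```

Under these assumptions the round trip alone wants roughly three times what the bus can deliver, which is the sense in which "PCIe doesn't look as fast as you originally thought."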
Or look at it from a different point of view. Regardless of whether PCIe is fast enough, what matters is how fast it is compared to how fast the computation out on the card is. If the on-card computation takes half as long as before, the trip across the bus can mean that you only sped up the entire thing by 10% or so. I have a pithy metaphor: it's like driving to New York to make a sandwich.
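The "sped up the entire thing by 10% or so" claim is standard Amdahl-style arithmetic. The time units below are hypothetical, chosen only to reproduce a bus-dominated case:

```python
# Hypothetical timings: the GPU kernel gets 2x faster, but the PCIe
# round trip is a fixed cost that dominates the end-to-end time.

kernel_before, kernel_after = 1.0, 0.5   # on-card compute time (assumed units)
bus_trip = 4.0                           # fixed transfer cost (assumed units)

total_before = kernel_before + bus_trip  # 5.0
total_after = kernel_after + bus_trip    # 4.5
speedup_pct = (total_before / total_after - 1) * 100

print(f"End-to-end speedup: {speedup_pct:.0f}%")  # ~11%, despite a 2x faster kernel
```

A 2x faster kernel yields only about an 11% overall gain here, because the trip across the bus never got any shorter.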
Tom's Hardware: Say what?
Russell Williams: If you want to make a sandwich, and you invent a machine that can make your sandwich in two seconds, it still doesn't make sense to drive to New York to use the machine when you live in California. The shorter latency of the APU empowers us to use the GPU in all sorts of ways that don't make sense for discrete graphics. Really, the APU is a new kind of compute device. In the future, it's likely our code will have quite a few cases where it says "if discrete GPU, use discrete" but quite a few more that say "if APU, use APU."
Tom's Hardware: What about the future of shaders in a time of OpenCL and similar APIs? Adobe has taken a proprietary approach with Pixel Bender, but do you see this continuing as the market shifts to open standards?
Russell Williams: Shaders have a very solid future. Graphics APIs like OpenGL and DirectX are not going anywhere. OpenGL with custom shaders still provides the best solution for problems that are similar to 3D rendering, like 3D rendering in Photoshop or the Liquify filter. Now, I can’t speak for Adobe on this, but my own opinion is that GPGPU programming has come a long way since Adobe started Pixel Bender, and now that there's an industry standard—OpenCL—that addresses this area, we're adding more emphasis to that. We're members of Khronos, and we'll be contributing the experience we gained designing and building Pixel Bender to help improve future versions of OpenCL.
Tom's Hardware: My own impression is that many people still view CPUs with integrated graphics—APUs—as a budget solution. Maybe it’s just a habit from so many years of suffering with graphics-equipped Intel northbridges, I don’t know. But today...has the market shifted? Are APUs and heterogeneous architectures really a game-changer?
Russell Williams: There are different sources of compute power in the box. It used to be there was just one—the CPU—and you wrote in C to use that resource. Now, a great deal of power is in the GPU, but it’s only suited for some problems. And a great deal of the CPU's power is in multiple cores and compute units, like vector units, which are only good at certain problems. In order to get the full performance of the machine, you have to use all of those different kinds of units and resources. You have to "light up" all these things at the same time, with the CPU, GPU, vector units, and so on all doing the things they're best at. We're trying to use them all at once to give the user the most responsive experience. We're trying to move away from “fill out a dialog box, click OK, and watch the progress bar” to a more game-like, cinematic FPS experience, where you modify the image directly and get immediate feedback. The only way to do that is to utilize all the compute resources.
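The "light up everything at once" idea can be sketched schematically. This is a toy illustration, not Photoshop's scheduler: `gpu_filter` and `cpu_composite` are hypothetical stand-ins for work that would really go to the GPU and to the CPU's cores and vector units, and Python threads stand in for the asynchronous dispatch a real engine would use.

```python
# Toy sketch of overlapping different kinds of work instead of serializing
# them behind a progress bar. The task names are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def gpu_filter(tile):
    # stand-in for a kernel dispatched to the GPU (e.g. a blur)
    return [px * 2 for px in tile]

def cpu_composite(tile):
    # stand-in for layer compositing on the CPU's vector units
    return sum(tile)

tiles = [[1, 2, 3], [4, 5, 6]]

with ThreadPoolExecutor() as pool:
    # issue the filter work for all tiles concurrently, then composite
    filtered = list(pool.map(gpu_filter, tiles))
    composited = list(pool.map(cpu_composite, filtered))

print(composited)  # [12, 30]
```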
The significance of having integrated performance plus highly capable graphics is that it moves this capability into more platforms, including many that don't have the space, cost, or power budget for discrete graphics. APU-based solutions give you a tremendous potential performance boost in those environments. The other critical impact of the APU is performance. We have a fixed power budget, and we don't know how to make a CPU go significantly faster within that budget. We've seen the last of the 50% per year performance boosts on the CPU side. And we're not going to just keep scaling cores; it’s too difficult to make use of them. The number of programs that could really take advantage of a 24-core single-socket CPU is near zero. So the GPU is essentially the path to bringing that transistor budget to users in a way that can actually be used.
I think that GPGPU and APUs are just beginning to deliver on the promise that many people have seen in them for many years. We'll see a lot more advantage taken of that, not just in Photoshop, but in other Adobe apps over the next couple of versions.