Finally, in spite of what we said earlier about this not being a horserace, we couldn’t resist the temptation of running the program on an 8800 GTX, which proved to be three times as fast as the mobile 8600, regardless of block size. You might expect a factor of four or more based on the respective architectures – 128 ALUs compared to 32, and a higher shader clock (1.35 GHz compared to 950 MHz) – but in practice that wasn’t the case. Here again, the most likely hypothesis is that we were limited by memory accesses. To be more precise, the initial image is accessed as a CUDA multidimensional array – a rather grand term for what’s really nothing more than a texture. This has several advantages:
- accesses get the benefit of the texture cache;
- we get a wrap addressing mode, which spares us from managing the edges of the image explicitly, unlike the CPU version.
We could also have taken advantage of free filtering with normalized addressing, where coordinates lie in [0, 1] instead of [0, width] and [0, height], but that wasn’t useful in our case. As you know, faithful reader, the 8600 has 16 texture units compared to 32 for the 8800 GTX, so there’s only a two-to-one ratio between the two architectures. Factor in the difference in core clock and we get a ratio of (32 × 0.575 GHz) / (16 × 0.475 GHz) ≈ 2.4 – in the neighborhood of the three-to-one we actually observed. That theory also has the advantage of explaining why block size changes so little on the G80: the ALUs are limited by the texture units anyway.
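To make the texture setup concrete, here is a minimal sketch of reading an image through a 2D texture with wrap addressing. It uses the modern texture-object API rather than the CUDA 2.0-era texture references the article worked with, and all names (`h_img`, `d_out`, `W`, `H`, the `sample` kernel) are illustrative, not taken from the article’s program. Note that CUDA only allows wrap mode together with normalized coordinates, which is why the kernel addresses in [0, 1]:

```cuda
// Sketch only: binding an 8-bit image as a 2D texture with wrap addressing.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void sample(cudaTextureObject_t tex, unsigned char *out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Wrap mode requires normalized coordinates in [0, 1]; out-of-range
    // reads wrap around, so the kernel never special-cases the edges.
    float u = (x + 0.5f) / w, v = (y + 0.5f) / h;
    out[y * w + x] = tex2D<unsigned char>(tex, u - 1.0f, v);  // wraps, cached
}

int main()
{
    const int W = 256, H = 256;
    unsigned char h_img[W * H];
    for (int i = 0; i < W * H; ++i) h_img[i] = i & 0xff;

    // Copy the image into a CUDA array (the "multidimensional array").
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<unsigned char>();
    cudaArray_t arr;
    cudaMallocArray(&arr, &desc, W, H);
    cudaMemcpy2DToArray(arr, 0, 0, h_img, W, W, H, cudaMemcpyHostToDevice);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;

    cudaTextureDesc tdesc = {};
    tdesc.addressMode[0] = cudaAddressModeWrap;  // edges handled for free
    tdesc.addressMode[1] = cudaAddressModeWrap;
    tdesc.filterMode = cudaFilterModePoint;      // no interpolation needed here
    tdesc.readMode = cudaReadModeElementType;
    tdesc.normalizedCoords = 1;                  // required by wrap mode

    cudaTextureObject_t tex;
    cudaCreateTextureObject(&tex, &res, &tdesc, nullptr);

    unsigned char *d_out;
    cudaMalloc(&d_out, W * H);
    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    sample<<<grid, block>>>(tex, d_out, W, H);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(tex);
    cudaFreeArray(arr);
    cudaFree(d_out);
    return 0;
}
```

Fetches through `tex` go through the texture cache, and the wrap mode means a convolution kernel can read neighbors past the image border without any boundary tests, which is exactly the convenience described above.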
Beyond the encouraging results, our first steps with CUDA went very well, considering the unfavorable conditions we’d chosen. Developing on a Vista laptop means being forced to use CUDA SDK 2.0, still in beta, with the 174.55 driver, also in beta. Despite all that, we have no unpleasant surprises to report – just a little scare when the first run of our still very buggy program tried to address memory beyond the allocated space.
The monitor flickered frantically, then went black … until Vista launched the video driver recovery service and all was well. Still, you have to admit it’s surprising when you’re used to an ordinary segmentation fault from standard programs in cases like that. Finally, one (very small) criticism of Nvidia: in all the documentation available for CUDA, it’s a shame not to find a little tutorial explaining step by step how to set up the development environment in Visual Studio. That’s not too big a problem, since the SDK is full of example programs you can explore to work out the skeleton of a minimal CUDA project, but for beginners a tutorial would have been much more convenient.