
The Theory: CUDA from the Hardware Point of View

Nvidia's CUDA: The End of the CPU?

If you’re a faithful reader of Tom’s Hardware, the architecture of the latest Nvidia GPUs won’t be unfamiliar to you. If not, we advise you to do a little homework. With CUDA, Nvidia presents its architecture in a slightly different way and exposes certain details that hadn’t been revealed before now.

[Image: Nvidia CUDA GPU architecture diagram]

As you can see above, Nvidia's Shader Core is made up of several clusters that Nvidia calls Texture Processor Clusters. An 8800 GTX, for example, has eight clusters, an 8800 GTS six, and so on. Each cluster is made up of a texture unit and two streaming multiprocessors. These processors consist of a front end that reads/decodes and launches instructions, and a back end made up of a group of eight calculating units and two SFUs (Special Function Units), where instructions are executed in SIMD fashion: the same instruction is applied to all the threads in the warp (a group of 32 threads the hardware schedules together). Nvidia calls this mode of execution SIMT (single instruction, multiple threads).

It's important to point out that the back end operates at double the frequency of the front end. In practice, then, the part that executes the instructions appears twice as "wide" as it actually is (that is, as a 16-way SIMD unit instead of an eight-way one).

The streaming multiprocessors operate as follows: at each cycle, the front end selects a warp that is ready for execution and launches an instruction. Applying that instruction to all 32 threads in the warp takes the back end four of its own cycles, which amounts to only two cycles from the front end's point of view, since the back end runs at twice its frequency. So, to avoid leaving the front end idle every other cycle and to maximize use of the hardware, the ideal is to alternate instruction types from one cycle to the next: a classic instruction on one cycle and an SFU instruction on the other.
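To make the SIMT model concrete, here is a minimal CUDA kernel sketch (the kernel name and parameters are our own illustration, not taken from Nvidia's material). Every thread runs the same code; the hardware groups threads into warps of 32 and issues each instruction once per warp.

```cuda
// SIMT in practice: one instruction stream, executed by every thread.
// The hardware issues each instruction once per warp of 32 threads;
// the eight back-end units, clocked at twice the front end, cover the
// warp in four back-end cycles (two front-end cycles).
__global__ void scale(float *data, float factor, int n)
{
    // Each thread derives its own index from built-in coordinates.
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < n)
        data[i] *= factor;   // the same multiply, applied warp-wide
}
```

A launch such as `scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);` carves the work into blocks of 256 threads, which each multiprocessor in turn schedules as eight warps.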

Each multiprocessor also has a certain amount of resources that should be understood in order to make the best use of them. Each has a small memory area called Shared Memory, 16 KB per multiprocessor. This is not a cache memory: the programmer has a free hand in its management. In that respect, it's like the Local Store of the SPUs on the Cell processor. This detail is particularly interesting, and demonstrates that CUDA is indeed a set of software and hardware technologies. This memory area is not used for pixel shaders; as Nvidia says, tongue in cheek, "We dislike pixels talking to each other."
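As a sketch of how a programmer manages this area explicitly (the kernel, its names, and the tile size are hypothetical, not from Nvidia's documentation): data is first staged from global memory into Shared Memory, the threads of the block synchronize, and each thread then reads a value written by a neighbor, exactly the kind of inter-thread communication that pixel shaders never get.

```cuda
#define TILE 128   // assumed block size; the kernel is launched with TILE threads per block

// Hypothetical kernel: each block stages a tile of its input into
// Shared Memory, synchronizes, then reads a neighbor's element.
// Unlike a cache, nothing lands here unless the code puts it here.
__global__ void shift_left(const float *in, float *out, int n)
{
    __shared__ float tile[TILE];   // carved out of the 16 KB per-multiprocessor area

    int i = blockIdx.x * TILE + threadIdx.x;
    if (i < n)
        tile[threadIdx.x] = in[i];

    __syncthreads();               // make every thread's write visible to the whole block

    // Read a value written by a *different* thread: the "pixels talking
    // to each other" that Nvidia jokes pixel shaders are denied.
    // (In this sketch, block-boundary elements are simply left untouched.)
    if (i + 1 < n && threadIdx.x + 1 < TILE)
        out[i] = tile[threadIdx.x + 1];
}
```

The `__syncthreads()` barrier is the price of the free hand: since nothing is placed or evicted automatically, the programmer, not the hardware, is responsible for making sure data is actually there before anyone reads it.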
