Vertex Shaders and Pixel Shaders

Primitive Processing

Primitive processing is a simple way to describe what the majority of a 3D game engine does. In one sentence: it takes game-specific data and data structures and produces vertices for the vertex processor to consume.

Typically, these vertices will describe triangles, although they may also describe lines, points or point sprites (new in DX8). Currently, the game engine carries out the majority of the primitive processing, but in the future you can expect at least some of it to move into the API domain.
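
To make this concrete, here is a minimal C++ sketch of the kind of output primitive processing hands to the vertex processor: an array of untransformed vertices plus index triples grouping them into triangles. The structure names and field layout are illustrative assumptions, not any particular engine's or Direct3D's vertex (FVF) format.

// Illustrative only: a typical untransformed vertex as a game engine might
// store it, plus an index triple describing one triangle.
struct Vertex
{
    float px, py, pz;   // position in model space
    float nx, ny, nz;   // normal, used later for per-vertex lighting
    float u, v;         // texture coordinates
};

struct Triangle
{
    unsigned short i0, i1, i2;   // indices into the vertex array
};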

With DirectX 8, some primitive processing has been introduced to the API, notably higher-order primitives, but so far there has been limited uptake from game developers. This may partly be due to the fairly limited implementations available in DX8, but more likely it's because there is currently little hardware that accelerates this functionality. ATI's new series of cards promises to open up higher-order primitives to the games programmer under the name of TRUFORM, but Nvidia seems unimpressed, so for now the jury's out.

One thing is certain: this area of the API is going to grow in size and importance. Displacement maps, anybody?

Vertex Processing

Vertex processing can be subdivided into a few major blocks: transformation, lighting, and, to a more limited extent, texture coordinate transformation.

Transformation involves taking position data as it's stored in a vertex structure and transforming it into a 'screenspace' position. 'Screenspace' refers to the 2D plane that represents the viewer's window onto the world: in the real world it's the front of the monitor; in the game world it's the plane sitting at the viewer's position. For traditional vertex processing, the transformation block does one of two things: either it passes the position data from the untransformed vertex to the transformed vertex untouched, or it runs the position through a world matrix, followed by a view matrix, and finally a projection matrix.
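
A minimal sketch of that second path, assuming Direct3D-style row vectors (so a point is transformed as p x World x View x Projection). The types and function names below are illustrative stand-ins, not part of the D3D API.

// Row-major 4x4 matrix; vectors are treated as row vectors, so a point is
// transformed as p' = p * M, matching the Direct3D convention.
struct Matrix4 { float m[4][4]; };
struct Vec4 { float x, y, z, w; };

Matrix4 Multiply(const Matrix4& a, const Matrix4& b)
{
    Matrix4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Vec4 Transform(const Vec4& p, const Matrix4& m)
{
    return Vec4{
        p.x * m.m[0][0] + p.y * m.m[1][0] + p.z * m.m[2][0] + p.w * m.m[3][0],
        p.x * m.m[0][1] + p.y * m.m[1][1] + p.z * m.m[2][1] + p.w * m.m[3][1],
        p.x * m.m[0][2] + p.y * m.m[1][2] + p.z * m.m[2][2] + p.w * m.m[3][2],
        p.x * m.m[0][3] + p.y * m.m[1][3] + p.z * m.m[2][3] + p.w * m.m[3][3] };
}

// The transformation block in a nutshell: concatenate world, view and
// projection once per object, then run every vertex position through it.
Vec4 ToClipSpace(const Vec4& modelPos,
                 const Matrix4& world, const Matrix4& view, const Matrix4& proj)
{
    Matrix4 wvp = Multiply(Multiply(world, view), proj);
    return Transform(modelPos, wvp);  // divide by w later to reach screenspace
}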

Lighting has become an increasingly important part of the vertex processing step. As polygon density increases in game engines, it becomes more and more viable to carry out lighting as a vertex processing step. Traditionally, the most impressive lighting comes from static lighting calculated as a pre-process and stored as a texture (lightmaps). As polygon counts increase, dynamic lighting is starting to be carried out per-vertex. In the future, dynamic lighting will increasingly move into the pixel processing layer, although the vertex processor will still need to prepare data for the pixel processor.
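
As a concrete example of the work the vertex processor does here, this is a sketch of the classic diffuse term for a single directional light, evaluated once per vertex. The function names are illustrative, and the inputs are assumed to be normalized and expressed in the same space (for example, world space).

#include <algorithm>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-vertex diffuse term for one directional light: the classic N.L
// calculation, clamped so surfaces facing away from the light get nothing.
float DiffuseIntensity(const Vec3& normal, const Vec3& toLight)
{
    return std::max(0.0f, Dot(normal, toLight));
}

// The final vertex colour would typically be
//   ambient + lightColour * materialColour * DiffuseIntensity(N, L)
// evaluated per vertex and interpolated across the triangle.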

Texture transformation involves passing the texture coordinates within the vertex formats on to the pixel rasterization block. Most of the time, this block does no more than pass the texture coordinates straight through. However, for animated textures, it's possible to apply a matrix transform to the texture coordinates, allowing texture animation without modifying the vertices themselves (which would mean altering the vertex buffer and re-uploading it across the AGP bus, an expensive operation). The texture transformation block can also be used to calculate texture coordinates for spherical and cubic reflection mapping.
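
Here is a sketch of how such a texture coordinate transform might look, assuming a simple 2D affine matrix (rotation/scale in the upper 2x2, translation in the last row). The types are hypothetical stand-ins for whatever texture matrix the API exposes.

struct TexCoord { float u, v; };

// 2D affine texture matrix, applied as uv' = uv * M.
struct TexMatrix
{
    float m[3][2];
};

TexCoord TransformUV(const TexCoord& t, const TexMatrix& tm)
{
    return TexCoord{
        t.u * tm.m[0][0] + t.v * tm.m[1][0] + tm.m[2][0],
        t.u * tm.m[0][1] + t.v * tm.m[1][1] + tm.m[2][1] };
}

// Scrolling water or lava: advance the translation each frame instead of
// rewriting every vertex's UVs in the vertex buffer.
TexMatrix ScrollingTexture(float time, float uSpeed, float vSpeed)
{
    TexMatrix tm = { { {1.0f, 0.0f}, {0.0f, 1.0f}, {time * uSpeed, time * vSpeed} } };
    return tm;
}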