Vertex Shaders and Pixel Shaders

Introduction

It was nearly a year ago that Microsoft released their latest version of DirectX. At the time, they said that the changes they'd made to the architecture were probably the biggest since DX5. Like a lot of people, I just assumed that this was typical marketing talk, but as it turns out, the new architectures introduced in DX8 have substantially changed the way that programmers should work with the graphics hardware available to them.

In this article I'll discuss the pros and cons of the shader architecture for developers, and give you my opinions on how games programmers are going to have to adapt to accommodate it.

Some of the discussion is going to be a little dry, as architecture very often is, so I'll warn you now that this article is split into a few sections. For those who are new to this topic, I'll go through a brief introduction to the overall graphics pipeline and what each element does. The next section will talk about some of the specific differences between the shader architecture introduced in DX8 and the traditional approaches available in DX7. Finally, I'll discuss some of the changes that are going to be needed so that graphics engines can take advantage of these new technologies, along with some examples of their typical usage.

The Graphics Pipeline

The use of graphics engines in games is still a relatively young field. Only ten years ago, cutting-edge graphics engines were written as a hodgepodge of quick hacks and assembler inner loops. It is only since the advent of graphics cards (and APIs that try to take advantage of them) that people have started to look at ways of breaking the rendering process down into a set of logical blocks. At the moment, the graphics engine can be described simply as three distinct processing blocks:

The typical processing blocks of a graphics pipeline.
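To make the idea of discrete processing blocks concrete, here is a minimal C++ sketch of a pipeline organized that way. The block names I use (vertex processing, rasterization, pixel processing) are my own assumption about the typical breakdown, and the function bodies are deliberately trivial placeholders; the figure above shows the actual blocks this article works with.

#include <cstdio>
#include <vector>

struct Vertex   { float x, y, z; };          // object-space position
struct Fragment { int x, y; float depth; };  // screen-space sample

// Block 1: per-vertex work (transform and lighting in the fixed-function
// pipeline; a vertex shader under DX8).
std::vector<Vertex> processVertices(std::vector<Vertex> verts)
{
    for (size_t i = 0; i < verts.size(); ++i)
        verts[i].z += 1.0f;  // stand-in for a world/view/projection transform
    return verts;
}

// Block 2: rasterization - turning transformed triangles into fragments.
std::vector<Fragment> rasterize(const std::vector<Vertex>& verts)
{
    std::vector<Fragment> frags;
    for (size_t i = 0; i < verts.size(); ++i)
    {
        Fragment f = { int(verts[i].x * 100.0f), int(verts[i].y * 100.0f), verts[i].z };
        frags.push_back(f);
    }
    return frags;
}

// Block 3: per-pixel work (texturing and blending in the fixed-function
// pipeline; a pixel shader under DX8).
void shadePixels(const std::vector<Fragment>& frags)
{
    for (size_t i = 0; i < frags.size(); ++i)
        std::printf("pixel (%d, %d) at depth %.2f\n", frags[i].x, frags[i].y, frags[i].depth);
}

int main()
{
    std::vector<Vertex> triangle;
    Vertex a = { 0.0f, 0.0f, 0.0f };
    Vertex b = { 1.0f, 0.0f, 0.0f };
    Vertex c = { 0.0f, 1.0f, 0.0f };
    triangle.push_back(a);
    triangle.push_back(b);
    triangle.push_back(c);

    shadePixels(rasterize(processVertices(triangle)));
    return 0;
}

Feeding the output of each block straight into the next mirrors the way data flows one way down the real pipeline, which is the property that makes it possible to replace individual blocks without disturbing the rest.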