HLSLs, Cg and the RenderMonkey

Introduction

Well, it's all happening at the moment. For the first time in a couple of years, it looks like nVidia's stranglehold on the PC graphics market might be starting to slip. ATI has brought out a graphics card which is faster and more full-featured than nVidia's best offering, and nVidia is still a number of months away from releasing its next bit of kit.

At the same time, ATI and nVidia have both made attempts to grasp control of the future of graphics software development with a suite of developer products. ATI has RenderMonkey, a truly fantastically named product, and nVidia has brought us Cg, or "C for Graphics." There are actually many differences between nVidia's and ATI's efforts, the most fundamental being that RenderMonkey was written as a development environment for new materials, while Cg is an entirely new programming language.

So, for this article, I'm going to talk about HLSLs, or "high level shading languages": what they are, why they're good, and why graphics programmers are going to become very familiar with them. Then I'll have a short discussion about the concept of materials and whether or not RenderMonkey is more than just a fantastic name.

HLSLs

Computer graphics APIs have been changing fairly rapidly in the recent past. It used to be that the graphics card had a set of capabilities, and the graphics programmer was tasked with making the most of them. More recently, we've been seeing the start of the transition to programmable graphics hardware.

The current programmable graphics cards include nVidia's Geforce 3 and 4 classes of hardware (but not the GF4MX series), the XBox console (GF3.5), the Matrox Parhelia, 3Dlabs' P10, and from ATI, the Radeon 8500, 9000 and 9700 graphics cards. However, the programmability of these graphics cards has been fairly tightly specified and is currently accessed through vertex shader (VS) and pixel shader (PS) interfaces in DirectX 8 (DX8).

The more general of the two interfaces is the vertex shader; this allows the programmer to specify up to 128 instructions for the graphics card to carry out on each vertex. If the graphics card itself doesn't support vertex shaders, they can still be processed on the CPU, although with older TnL cards (GF1/2, Radeon, etc.), it's often possible to carry out simple processing much quicker by using the fixed function vertex pipe (i.e. without using vertex shaders).

The pixel shader interface is far more confusing, and has many restrictions placed upon it. Most importantly, pixel shaders can only be processed on graphics hardware that supports them; so to cover older TnL hardware, the developer is forced to build both pixel shader and non-pixel shader versions of their rendering technology. In practice, this has meant that games writers have seen pixel shaders as an extra set of features, which don't necessarily get as much time on the schedule as core graphics programming.

Up to now, the development tools have also been rather basic, with most programmers sitting in front of a text editor to develop their graphical effects.

So, a couple of things need to happen. First, we need development environments that allow graphics programmers to see what they're creating; and second, we need some programming languages that are going to make graphics programming a little easier for the average human.