The next stage in the 3D-pipeline is the transformation. I have explained it in previous articles, which is why I will keep this short. A frame of a 3D-scene, as displayed on your monitor, consists of several different objects in certain places, lit by one or several different kinds of light source, and seen from a certain viewpoint. This explanation is about as basic, but also as complete, as it gets. Every object, be it a player, a wall, the floor or whatnot, is made of a certain number of triangles.

'Vertices' (sing. 'vertex') are the corners of the triangles that each 3D-object is made of. In fact, the vertices are the very 'virtual matter' that makes up a 3D-object. As the game engine transfers an object of a scene to the graphics processor, it actually sends over all the vertices of this object. Each vertex carries a lot of information. First of all there are the coordinates x, y, z and w (the fourth, homogeneous coordinate). Then there is the color, often specified in the form of a diffuse as well as a specular color. This color data is coded in the common 'RGBA' format, for 'red, green, blue and alpha'. The vertex also needs to carry its normal, the vector that points perpendicularly away from its surface. Then there are the texture coordinates s, t, r and q, which determine which part of a texture is applied at the vertex. A vertex can of course have several sets of texture coordinates in case more than one texture is supposed to be applied to it. Additionally there might be fog as well as point-size information, and even more. You can see that a vertex, the smallest unit in a 3D-scene, carries a huge amount of description.

### Transform

You saw the example of the kettle above. It represents the definition of a 3D-object as it is sent to the transform engine, using the 'general' or 'basic' 3D-coordinates supplied by the game (model space/world space). Now, from your viewport (the screen), you might see the kettle from a different angle, from a different direction, or in a different location (view space). Thus the coordinates of the vertices need to be altered, with the result that each triangle that makes up the kettle might have to be rotated, enlarged or reduced, or shifted up, down, left or right. This is what transforming does: it changes the coordinates of the vertices that make up a 3D-object from the coordinates supplied by the 3D-game to coordinates that accord with your point of view.
