We have known it since the release of NVIDIA's GeForce 256 last October: hardware transform and lighting lets the 3D chip do the power-hungry floating-point calculations that turn a 3D world into a scene ('transforming'), remove the parts that lie outside the viewing range ('clipping'), and, after evaluating the scene and its light sources, give each vertex a light vector ('lighting').
Most people who come to hardware websites are fairly sure they have an idea of what T&L actually does, but I still think that the usual descriptions are far too abstract to convey what is really happening inside the chip. Thus I'll try to give an analogy that is easier to understand.
The first data that the platform/software unit sends to the 3D chip can be compared with an architect's plan. The architect gives you the map of a room in a house that he wants to build, and you, as the 3D chip, have to make a picture of this room as seen from, say, the doorway looking inside. This is pretty much what the transform unit does: it creates a scene of a room viewed from a particular spot after receiving the layout of the room from the CPU. The layout is supplied by the platform/software in the form of 'vertices', which are essentially all the 'corners' of the room. These vertices have coordinates that are based on the 3D world, and they never change. The scene created by the transform unit is again made up of vertices, but those vertices have coordinates that are relative to the viewer. These coordinates change every time the location of the viewer changes.
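The architect analogy can be sketched in a few lines of code. This is a deliberately minimal illustration, not the Radeon's actual implementation: a real transform unit multiplies each vertex by a 4x4 matrix that also rotates into the camera's orientation and applies perspective, while this sketch shows only the core idea of re-expressing fixed world coordinates relative to the viewer. The room corners and viewer position are hypothetical values.

```python
def transform(vertices, eye):
    """Re-express world-space vertices relative to the viewer ('eye').

    The world coordinates never change; the result depends on where the
    viewer stands, so it must be recomputed whenever the viewer moves.
    A full transform would also rotate into the camera's orientation and
    project to the screen; this sketch shows only the translation step.
    """
    ex, ey, ez = eye
    return [(x - ex, y - ey, z - ez) for (x, y, z) in vertices]

# Hypothetical 'corners' of a room, in fixed world coordinates:
room = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (4.0, 3.0, 0.0)]
viewer = (2.0, 1.5, -5.0)  # standing in the doorway

print(transform(room, viewer))
```

Move the viewer and the same world data yields a different set of view-space vertices, which is exactly why the transform step has to run every frame.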
In this picture you see the transformed vertices that represent a Porsche Boxster. You can see the right rear and front wheels, which are included in the transformation although they will be covered by the body of the car.
Now, depending on where you stand in this room, there are parts of it that you cannot see because they are outside of your field of view. 'Clipping' removes those parts, so that the next steps in the 3D pipeline don't have to bother with them. Clipping DOES NOT remove objects that are within your field of view but hidden behind other objects in front!
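A rough sketch of that visibility test, under simplifying assumptions of my own (not ATI's hardware): a view-space vertex is visible when it lies in front of the viewer and inside a symmetric viewing cone. A real clipper cuts triangles precisely at the frustum planes rather than dropping whole vertices, but the sketch shows what "outside the field of view" means.

```python
import math

def inside_fov(v, fov_degrees=90.0):
    """Return True if a view-space vertex lies inside the field of view.

    'Inside' here means: in front of the viewer (z > 0) and within a
    symmetric cone whose width is set by fov_degrees. Note that this test
    says nothing about occlusion -- a vertex hidden behind another object
    still passes, just as clipping keeps covered objects.
    """
    x, y, z = v
    half = math.tan(math.radians(fov_degrees) / 2.0)
    return z > 0 and abs(x) <= z * half and abs(y) <= z * half

print(inside_fov((1.0, 0.5, 2.0)))   # in front, inside the cone
print(inside_fov((0.0, 0.0, -1.0)))  # behind the viewer: clipped away
```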
'Lighting' is easier to understand. The platform/software unit tells the 3D chip where the light sources in this room are. Based on those light sources, the 'lighting' unit calculates a special light vector for each vertex.
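Per-vertex lighting can be sketched as follows. This is a minimal illustration assuming simple Lambert diffuse shading with a single point light, not a description of the Radeon's lighting unit: the light vector points from the vertex toward the light source, and its dot product with the vertex normal gives the brightness.

```python
import math

def light_vertex(vertex, normal, light_pos):
    """Compute a per-vertex light vector and a diffuse brightness.

    The light vector points from the vertex toward the light source and is
    normalized to unit length. Brightness follows Lambert's cosine law
    (dot product of light vector and vertex normal), clamped to zero so
    that surfaces facing away from the light stay dark.
    """
    lx, ly, lz = (light_pos[i] - vertex[i] for i in range(3))
    length = math.sqrt(lx * lx + ly * ly + lz * lz)
    l = (lx / length, ly / length, lz / length)  # normalized light vector
    brightness = max(0.0, sum(l[i] * normal[i] for i in range(3)))
    return l, brightness

# Hypothetical example: light directly above a vertex whose normal points up.
lv, b = light_vertex((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 2.0, 0.0))
print(lv, b)
```

With a single white light, that per-vertex brightness alone would already produce the gray-shaded, untextured look described below.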
What you have now, after those three steps are finished, is a room with all the objects that are in your field of view, including those hidden behind objects in front of them. Those objects have no textures yet, though. If the light sources supplied only plain white light, you would see all objects with the same plain surface in different shades of gray.
This is the same Boxster 'coated' with a solid skin, after it has been transformed and lit. The textures are still missing.
- ATi's Business Strategy
- Hardware T&L
- Hardware T&L, Continued
- Vertex Skinning
- Excurse - The Next Step Of The 3D Pipeline After T&L, The Triangle Setup
- Fill Rate, Rendering Pipelines And Triangle Size
- Wasted Energy - The Rendering Of Hidden Surfaces
- B - Radeon's HyperZ
- Fill Rate And Memory Bandwidth - They Belong Together!
- Fill Rate And Memory Bandwidth - They Belong Together! Continued
- C - The Pixel Tapestry Architecture
- The Pixel Tapestry Architecture, Continued
- 3D - Textures
- Range Based Fog
- Card Details
- Driver Interface
- Driver Interface, Continued
- Test Setup
- Benchmark Expectations
- Benchmark Results - Quake 3 Arena Demo001
- Benchmark Results - Quake 3 Arena Demo001 FSAA
- Benchmark Results - Expendable Demo
- Benchmark Results - Expendable Demo FSAA
- Benchmark Results - Dagoth Moor Zoological Gardens
- Benchmark Results - Evolva Rolling Demo
- Benchmark Results - Evolva Rolling Demo Bump Mapped
- Benchmark Results - MDK2 Demo