Contents:
- OpenGL 2.0: Programmable, Scalable, And Extensible
- An Answer Appears To Be At Hand, And The News Is Encouraging
- Functionality
- Several Additional Functions And Capabilities Are Being Added
Some of the big points of the new API include:
- Shading Language, a hardware-independent shading language for OpenGL 2.0 that is closely integrated with OpenGL 1.3. The existing state machine is augmented with programmable units that enable incremental replacement of OpenGL 1.3 fixed functionality. The new shaders will automatically track existing OpenGL state (e.g., simple lighting changes can be made without rewriting parameter management). The language will be C-based, with comprehensive vector and matrix types, and will also integrate some RenderMan features. It will virtualize pipeline resources so that programmers, for the most part, won't need to be concerned with resource management. The same language will be used for vertex shaders and fragment shaders, each with some specialized built-in functions and data qualifiers.
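To make the shading model concrete, here is a small CPU-side reference in C of the kind of per-vertex diffuse lighting term such a shader would express in a single line (roughly max(dot(N, L), 0) in a C-like shading language). The type and function names are illustrative, not from the specification.

```c
/* CPU reference for a clamped Lambertian diffuse factor, the sort of
 * computation a programmable vertex shader replaces fixed-function
 * lighting with. Names here are illustrative only. */
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Diffuse factor for a unit normal and unit light direction,
 * clamped to zero for surfaces facing away from the light. */
float diffuse_factor(vec3 normal, vec3 light_dir) {
    float d = dot3(normal, light_dir);
    return d > 0.0f ? d : 0.0f;
}
```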
- Vertex Processor capabilities for lighting, material and geometry flexibility. Vertex programs will replace parts of the OpenGL pipeline, such as: vertex transformation; normal transformation; normalization and rescaling; lighting; color material application; clamping of colors; texture coordinate generation; and texture coordinate transformation. However, the vertex shader does not replace the following: perspective projection and viewport mapping; frustum and user clipping; backface culling; primitive assembly; two-sided lighting selection; polymode processing; polygon offset; or polygon mode.
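Vertex transformation, the first item on that list, is straightforward to sketch on the CPU: a vertex program that takes over this stage must at minimum multiply each position by the modelview-projection matrix. A minimal C sketch, using OpenGL's column-major matrix layout (the struct names are assumptions for illustration):

```c
/* Fixed-function vertex transformation: position multiplied by a
 * 4x4 matrix, stored column-major as OpenGL does (m[col*4 + row]).
 * This is one of the pipeline stages a vertex program replaces. */
typedef struct { float v[4]; } vec4;
typedef struct { float m[16]; } mat4;

vec4 transform(const mat4 *m, vec4 p) {
    vec4 out;
    for (int row = 0; row < 4; ++row) {
        out.v[row] = m->m[0 * 4 + row] * p.v[0]
                   + m->m[1 * 4 + row] * p.v[1]
                   + m->m[2 * 4 + row] * p.v[2]
                   + m->m[3 * 4 + row] * p.v[3];
    }
    return out;
}
```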
- Fragment Processor capabilities for texture access, interpolation, and pixel operation flexibility. OpenGL 2.0 has added fragment processor capabilities, which will replace the following parts of the OpenGL pipeline: operations on interpolated vertex data; pixel zoom; texture access, scale and bias; texture application; color table lookup; fog; convolution; and the color matrix. However, the fragment shader does not replace the following: OpenGL's shading model; histogram; coverage; minmax; pixel ownership test; pixel packing and unpacking; scissor; stipple; alpha test; depth test; stencil test; alpha blending; logical ops; dithering; or plane masking.
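Fog is a good example of a fixed-function fragment stage a fragment program can take over. A minimal C sketch of the classic linear fog factor (1.0 means no fog, 0.0 means fully fogged); the function name is illustrative:

```c
/* Linear fog factor: f = (end - z) / (end - start), clamped to [0,1].
 * A fragment program replacing fixed-function fog would compute this
 * per fragment and blend toward the fog color accordingly. */
float fog_factor_linear(float z, float start, float end) {
    float f = (end - z) / (end - start);
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f;
}
```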
- Pack/unpack operation. The goal of the pack/unpack operation is to convert "application pixels" to a coherent stream of pixel groups. Unpack storage modes are applied before data is presented to the unpack processor. The unpack processor is involved in application-to-OpenGL transfers and the pack processor is involved in OpenGL-to-application transfers; neither is involved in copy operations. Copies within the graphics subsystem use only the fragment processor.
OpenGL's existing "pixel transfer" operations are supported by the fragment processor, not the pack/unpack processors. The fragment processor has the capabilities needed for scale, bias, lookup, convolution, etc. And since the ARB doesn't want to require redundant hardware capabilities, the pack/unpack processors do not need the floating point horsepower of the other programmable units. The primary operations are shift, mask, and convert to/from float: the kind of operations involved in application-to-OpenGL transfers of pixel data. Programs in the pack and unpack processors must be compatible with the current fragment shader and work in conjunction with the fragment processor in order to implement the OpenGL pixel pipeline.
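The shift/mask/convert role of the unpack processor can be sketched in a few lines of C: turning one packed 32-bit RGBA application pixel into four normalized floats for the pipeline. This is an illustrative reference, not code from the proposal.

```c
#include <stdint.h>

/* Unpack one packed 32-bit RGBA pixel (R in the high byte) into four
 * normalized floats: exactly the shift, mask, and convert-to-float
 * work the unpack processor is meant to do. */
void unpack_rgba8(uint32_t pixel, float out[4]) {
    out[0] = (float)((pixel >> 24) & 0xFFu) / 255.0f; /* R */
    out[1] = (float)((pixel >> 16) & 0xFFu) / 255.0f; /* G */
    out[2] = (float)((pixel >>  8) & 0xFFu) / 255.0f; /* B */
    out[3] = (float)( pixel        & 0xFFu) / 255.0f; /* A */
}
```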
- Data Movement and Memory Management. To enhance performance, data movement must be minimized. The primary types of data in visual processing are vertex data (color, normal, position, user-defined, etc.) and image data (textures, images, pixel buffers). OpenGL objects of every kind (vertex arrays, images, textures, shaders, display lists, and pixel buffers) will be created and managed through one general mechanism: locate an object, bind it, and manage it through the same interface.
Currently, OpenGL memory management is a black box; i.e., everything is done automatically. As a result, applications don't know when an operation will happen, how long it takes, or how much backing store is allocated, and they have no control over where objects are stored. Nor can applications control when objects are copied, moved, deleted, or packed (defragmented), and they have no visibility into the virtualization of memory resources. The end result is that OpenGL currently has a very limited ability to 'query for space,' and can only do so for proxy textures. The following diagram illustrates the organization of the current OpenGL memory management.
OpenGL 2.0 will offer better memory management and will give applications control over the movement of data, providing better vertex manipulation, more efficient methods of getting data into OpenGL, and direct access to OpenGL objects. In addition, the memory management features can eliminate copies of the data, facilitating data streaming and greatly enhancing performance.
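The locate/bind/manage pattern described above can be sketched as a toy name table in C: integer names identify objects, one object is "bound" at a time, and subsequent operations implicitly target the bound object. Everything here (names, table size, the size-only payload) is illustrative, not the proposed OpenGL 2.0 API.

```c
#include <stddef.h>

/* Toy model of the uniform object interface: allocate a name (locate),
 * select it (bind), then operate on whatever is bound (manage). */
#define MAX_OBJECTS 16

typedef struct { int in_use; size_t size; } object_t;

static object_t table[MAX_OBJECTS];
static unsigned bound = 0; /* 0 means nothing is bound */

unsigned obj_create(void) {                /* locate: reserve a name */
    for (unsigned i = 1; i < MAX_OBJECTS; ++i) {
        if (!table[i].in_use) { table[i].in_use = 1; return i; }
    }
    return 0; /* table full */
}

void obj_bind(unsigned name) { bound = name; }   /* bind */

void obj_set_size(size_t sz) {             /* manage the bound object */
    if (bound) table[bound].size = sz;
}

size_t obj_get_size(void) { return bound ? table[bound].size : 0; }
```

The same three verbs cover every object type in the list above, which is the point of the unified mechanism: one interface instead of per-type entry points.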