The New Graphics

Fry'em Up Dan

What intrigues us all about the new standard is the hardware that will be running it. Once again, what we see coming down the pipe comes from the two major camps: Markham, Ontario (ATI) and Santa Clara, California (Nvidia). In fact, as you read this very article, we are being officially briefed on the contestant in the green trunks. The G80 part (GeForce 8800) has many rumors surrounding it, but we will not know the facts until our time with Nvidia is through.

From what we hear, the new card might not adopt Microsoft's graphics standard of a true Direct3D 10 engine with a unified shader architecture. A unified shader unit can be switched on command from a vertex shader to a geometry or pixel shader as the need arises, which lets the graphics processor put more horsepower wherever it needs it. We would not put it past Nvidia's engineers to keep a fixed pipeline structure. Why not? They have kept the traditional pattern for all of their cards. It was ATI that deviated and fractured the "pipeline" view of rendering; the Radeon X1000 series introduced a threaded view of instructions and a higher concentration of highly programmable pixel shaders to accomplish tasks beyond the "traditional" approach to image rendering.
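
To picture the difference, here is a minimal C sketch of the scheduling idea under our own toy assumptions - the unit counts and queue depths are made up, and no real GPU works quite like this. In the fixed model each stage owns its own units whether they are busy or not; in the unified model one pool of generic units gets handed to whichever stage has the most work piling up.

```c
#include <stdio.h>

/* Illustrative only: a toy model of fixed vs. unified shader scheduling,
 * not a description of any vendor's actual hardware. */

enum work { VERTEX, GEOMETRY, PIXEL, WORK_TYPES };

/* Fixed pipeline: each stage owns its own units, busy or idle. */
static int fixed_units[WORK_TYPES] = { 8, 4, 16 };   /* hypothetical split */

/* Unified pool: every unit can run any stage's program. */
#define UNIFIED_UNITS 28

/* Divide the pool in proportion to how much work each stage has queued.
 * Integer rounding may leave a unit or two idle in this toy version. */
static void schedule_unified(const int queued[WORK_TYPES], int assigned[WORK_TYPES])
{
    int total = 0, i;
    for (i = 0; i < WORK_TYPES; i++)
        total += queued[i];
    for (i = 0; i < WORK_TYPES; i++)
        assigned[i] = total ? UNIFIED_UNITS * queued[i] / total : 0;
}

int main(void)
{
    /* A pixel-heavy frame: lots of shading work, relatively few vertices. */
    int queued[WORK_TYPES] = { 200, 50, 4000 };
    int assigned[WORK_TYPES];

    schedule_unified(queued, assigned);

    printf("fixed:   vertex=%d geometry=%d pixel=%d\n",
           fixed_units[VERTEX], fixed_units[GEOMETRY], fixed_units[PIXEL]);
    printf("unified: vertex=%d geometry=%d pixel=%d\n",
           assigned[VERTEX], assigned[GEOMETRY], assigned[PIXEL]);
    return 0;
}
```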

One thing is for sure: ATI is keeping the concept of the fragmented pipeline and should ship with unified, highly programmable shaders. We have heard about large cards - 12" long boards that will require new system chassis designs to hold them - and the massive power requirements to make them run.

Why shouldn't new cards built on ATI's R600 require 200-250 watts apiece, and the equivalent of a 500 W power supply just to run a CrossFire pair - or a pair of G80s in SLI? We are talking about adding more hardware to handle even greater tasks and functions, like geometry shaders that can take existing data and reuse it for subsequent frames. More instructions and functions mean more demand for dedicated silicon. We should assume there will need to be a whole set of linkages and caches designed to hold data from previous frames, as well as additional memory interfaces. Why wouldn't we assume that this would require more silicon?
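
As a back-of-the-envelope check on those figures, here is a quick C sketch that adds up a dual-card rig using the 250 W per-card number quoted above; the CPU, motherboard, and drive draws are our own rough guesses, not measurements.

```c
#include <stdio.h>

int main(void)
{
    /* Figure from the article: 200-250 W per next-generation card. */
    const int card_watts      = 250;   /* worst case per card   */
    const int cards           = 2;     /* CrossFire / SLI pair  */

    /* Rough guesses for the rest of the box (illustrative, not measured). */
    const int cpu_watts       = 120;
    const int board_ram_watts = 60;
    const int drives_watts    = 40;

    int total = card_watts * cards + cpu_watts + board_ram_watts + drives_watts;

    printf("Graphics alone: %d W\n", card_watts * cards);
    printf("Whole system:   %d W at full tilt\n", total);
    /* The cards by themselves eat what a whole 500 W supply delivers,
     * which is why power supplies in a new class come up at all. */
    return 0;
}
```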

Although we are not in favor of pushing more power to our graphics processors and the massive amounts of memory on the card, we are excited about the prospect of getting more out of our games. You can already run Folding@Home on your graphics processor, and soon we will be able to do effects physics on it too. (Although Ageia is gaining ground, Havok FX and gameplay physics on a GPU have yet to be seen.)
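
The reason work like Folding@Home and effects physics maps so well onto a graphics processor is that it is massively parallel: the same tiny program runs independently on thousands of elements. The C sketch below shows that shape of work on the CPU - on a GPU, each loop iteration would become its own shader thread. The particle structure and numbers are ours, purely for illustration.

```c
#include <stdio.h>

#define PARTICLES 10000

/* Illustrative effects-physics step: every particle is updated by the same
 * small program with no dependence on its neighbors - exactly the kind of
 * work a GPU's many shader units can spread out and chew through. */
struct particle { float x, y, z, vx, vy, vz; };

static struct particle p[PARTICLES];

static void step(float dt)
{
    /* On a GPU, each iteration would run as an independent thread. */
    for (int i = 0; i < PARTICLES; i++) {
        p[i].vy -= 9.81f * dt;          /* gravity              */
        p[i].x  += p[i].vx * dt;        /* integrate position   */
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
        if (p[i].y < 0.0f) {            /* bounce off the floor */
            p[i].y  = 0.0f;
            p[i].vy = -p[i].vy * 0.5f;
        }
    }
}

int main(void)
{
    for (int i = 0; i < PARTICLES; i++)
        p[i] = (struct particle){ 0.0f, 10.0f, 0.0f, 1.0f, 0.0f, 0.0f };

    for (int frame = 0; frame < 60; frame++)   /* simulate one second */
        step(1.0f / 60.0f);

    printf("particle 0 after one second: x=%.2f y=%.2f\n", p[0].x, p[0].y);
    return 0;
}
```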

You can take it to the bank that graphics is still moving forward - with or without Windows Vista. The new hardware is already being put onto silicon. Only time will tell what shape it will actually take, or how it will evolve, as the standard gains a foothold. Until then, we will keep drooling over the possibilities; we can't wait to show you what it can do.
