Tegra 4’s vertex processing architecture is essentially the same as NV40’s—the GPU that powered GeForce 6800 and its derivatives back in 2004.
Geometry data from the vertex processing stage is fed into the triangle setup engine, which can set up one visible triangle every five clock cycles.
From there, triangles are turned into pixels. Tegra 4 performs rasterization and early-Z rejection at a rate of eight pixels per clock, discarding pixels that won't be visible early in the pipeline and saving the engine from shading occluded pixels unnecessarily. It should come as no surprise (given Nvidia's background) that this approach borrows heavily from the desktop GPU space.
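The idea behind early-Z rejection can be sketched in a few lines of Python (a simplified simulation of the concept, not Nvidia's hardware logic): each fragment is depth-tested against the Z-buffer before any shading work, so occluded pixels never reach the fragment pipe.

```python
# Simplified early-Z rejection: depth-test fragments before shading,
# so occluded pixels never consume fragment-pipe cycles.

def early_z_pass(fragments, z_buffer):
    """fragments: list of (x, y, depth); z_buffer: dict keyed by (x, y).
    Returns only the fragments that survive the depth test."""
    visible = []
    for x, y, depth in fragments:
        if depth < z_buffer.get((x, y), float("inf")):  # closer than stored depth
            z_buffer[(x, y)] = depth   # update the depth buffer
            visible.append((x, y, depth))
        # else: fragment is occluded and discarded before shading
    return visible

fragments = [(0, 0, 0.9), (0, 0, 0.3), (1, 0, 0.5), (0, 0, 0.6)]
survivors = early_z_pass(fragments, {})
# (0, 0, 0.6) never reaches shading because 0.3 is already stored at (0, 0).
```

Note that submission order matters in an immediate-mode renderer: early-Z only rejects a fragment when something closer has already been drawn, which is why the technique saves work rather than guaranteeing zero overdraw.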
How does Nvidia’s immediate-mode renderer differ from the tile-based deferred method employed by Imagination Technologies’ PowerVR IP, found in Apple's A-series and Intel's Atom SoCs? In a TBDR architecture, each frame is cut up into tiles prior to rasterization, and the resulting geometry data is written to a memory buffer, where occluded pixels are resolved. Nvidia contends that this binning and hidden surface removal process scales poorly as the geometric complexity of a scene increases.
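To make the contrast concrete, here is a rough Python sketch of the binning step a tile-based renderer performs before rasterization (the 16x16 tile size and data layout are illustrative assumptions, not PowerVR specifics): each triangle's screen-space bounding box is tested against the tile grid, so a triangle spanning many tiles is recorded in every one of them, which is the per-geometry overhead Nvidia's argument hinges on.

```python
# Rough sketch of TBDR-style binning: each triangle is recorded in every
# screen tile its bounding box overlaps. Tile size here is illustrative.

TILE = 16

def bin_triangles(triangles, width, height):
    """triangles: list of three (x, y) vertex tuples per triangle.
    Returns {(tile_x, tile_y): [triangle indices]}."""
    bins = {}
    for idx, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen, then walk the overlapped tiles.
        for ty in range(max(0, min(ys)) // TILE, min(height - 1, max(ys)) // TILE + 1):
            for tx in range(max(0, min(xs)) // TILE, min(width - 1, max(xs)) // TILE + 1):
                bins.setdefault((tx, ty), []).append(idx)
    return bins

bins = bin_triangles([[(2, 2), (10, 2), (2, 10)],     # small: fits in tile (0, 0)
                      [(0, 0), (60, 0), (0, 60)]],    # large: spans a 4x4 tile block
                     64, 64)
```

The small triangle costs one bin entry; the large one costs sixteen. More (and larger) triangles mean more binning traffic before a single pixel is shaded.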
Back to Tegra 4: color and Z data are compressed using a lossless algorithm. This is especially beneficial for enabling anti-aliasing without huge memory bandwidth costs (more on this shortly), since the values contained entirely within a primitive tend to be the same and therefore compress away nicely, yielding high compression ratios. A lossless approach means data is only compressed when it can be, so you still have to allocate full-size buffers in memory; there are no capacity savings. But a lot of bandwidth can be conserved.
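A toy illustration of why framebuffer data compresses so well (run-length encoding stands in here for whatever lossless scheme Nvidia actually uses, which the company hasn't detailed): pixels inside a primitive often share one value, so a block collapses to a handful of (value, count) pairs, cutting the bytes that cross the bus even though the full-size buffer must still be allocated.

```python
# Toy run-length encoding of a framebuffer tile: a stand-in for Tegra 4's
# undisclosed lossless scheme, showing why uniform blocks compress so well.

def rle(values):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

tile = [0xFF0000] * 64          # a tile covered entirely by one red primitive
runs = rle(tile)                # collapses to a single [value, count] pair
ratio = len(tile) / len(runs)   # 64:1 for this best-case uniform tile
# Memory for the full 64-entry tile is still allocated (lossless output size
# isn't guaranteed), but only the runs need to cross the memory bus.
```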
The raster stage feeds Tegra 4’s fragment pipe, which can process four pixels per clock. As mentioned, each pixel pipe has three ALUs with four multiply-add units each, plus one multi-function unit, enabling a number of VLIW instruction combinations (normalize-and-combine operations, blends, traditional lighting calculations, and so on). Tegra 4 exposes 24 FP20 registers per pixel, up from Tegra 3’s 16, allowing more threads in flight at any given time.
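Those per-clock figures multiply out as follows (a back-of-the-envelope tally based on the counts above, using the common convention that a multiply-add counts as two floating-point operations):

```python
# Per-clock pixel-shader math implied by the figures above.
pipes = 4          # fragment pipes, four pixels per clock
alus_per_pipe = 3  # VLIW ALUs per pipe
mads_per_alu = 4   # multiply-add units per ALU

mads_per_clock = pipes * alus_per_pipe * mads_per_alu   # 48 MADs every cycle
flops_per_clock = mads_per_clock * 2                    # each MAD = 2 FLOPs
```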
The four pipes have their own read- and write-capable L1 caches, and are serviced by a shared L2 texture cache, a feature new to Tegra 4. Naturally, you get better locality for texture filtering, again saving memory bandwidth. According to Nvidia, the cache is also well-optimized for 2D imaging-style operations, which plays into the company’s work with computational photography.
Even as Nvidia’s engineers emphasize bandwidth savings across the GPU, balancing a significant increase in texture rate requires a memory subsystem able to keep the SoC’s resources fed. Tegra 3 got by with a single 32-bit channel. Tegra 4 uses two 32-bit channels, along with LPDDR3 memory at up to 1,866 MT/s, to push more than 3x the throughput available from LPDDR2-1066.
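The throughput claim checks out with simple arithmetic on peak theoretical numbers (real-world efficiency will be lower):

```python
# Peak theoretical memory bandwidth: channels * bus width * transfer rate.
def bandwidth_gbps(channels, bus_bits, mt_per_s):
    return channels * (bus_bits / 8) * mt_per_s * 1e6 / 1e9  # GB/s

tegra3 = bandwidth_gbps(1, 32, 1066)   # single-channel LPDDR2-1066 -> ~4.3 GB/s
tegra4 = bandwidth_gbps(2, 32, 1866)   # dual-channel LPDDR3-1866  -> ~14.9 GB/s
# tegra4 / tegra3 is roughly 3.5x, matching the "more than 3x" figure.
```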