I think one of the factors is that a lot of that bandwidth is used internally, on the graphics card itself, so its transfer rate to the rest of the system is never touched.
For example, when you're using full-screen anti-aliasing, those calculations are applied by the graphics card itself to the scene information; it's all done internally.
The information the card has to send out over the AGP bus is limited to the finished frames you see on screen, which is constant for a given resolution and frame rate.
e.g.:
one 1024*768 frame at 32-bit colour (4 bytes per pixel) = about 3 MB
* 60 frames per second
= a mere 180 MB/second of output bandwidth
This example is an oversimplification, but it demonstrates the principle.
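If you want to play with the numbers, here's a minimal sketch of that arithmetic in C. The 32-bit colour depth (4 bytes per pixel) and the lack of any compression are my assumptions for the sake of the example, not something the hardware guarantees:

```c
#include <stdio.h>

int main(void) {
    /* Assumptions: 32-bit colour (4 bytes per pixel), no compression. */
    const long width           = 1024;
    const long height          = 768;
    const long bytes_per_pixel = 4;
    const long fps             = 60;

    long   frame_bytes = width * height * bytes_per_pixel;  /* 3,145,728 bytes */
    double frame_mb    = frame_bytes / (1024.0 * 1024.0);   /* exactly 3 MB    */
    double mb_per_sec  = frame_mb * fps;                    /* 180 MB/s        */

    printf("Frame size: %.1f MB\n", frame_mb);
    printf("Output bandwidth at %ld fps: %.1f MB/s\n", fps, mb_per_sec);
    return 0;
}
```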
Remember also that the CPU and GPU have different jobs; they don't duplicate each other's work, they hand each other the pieces they need. The CPU calculates AI and other program logic, and hands the GPU tons of graphical work to do. The raw mathematics describing a 3D scene can be very small in terms of storage and bandwidth (e.g. the coordinates of the vertices, which are simply numbers), but once the GPU has processed that information along with the textures, it adds levels of complexity the CPU never sees. There's a rough size comparison sketched below.
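As a toy illustration of that size difference in C: the bare 12-byte vertex and the 10,000-vertex mesh here are made-up numbers, just to show the scale of raw geometry versus the frame it becomes.

```c
#include <stdio.h>

/* A bare vertex: just an x/y/z position, three 32-bit floats (12 bytes).
   Real vertex formats add normals, texture coordinates, etc., but stay small. */
struct Vertex {
    float x, y, z;
};

int main(void) {
    const long vertex_count = 10000;  /* a fairly detailed mesh (assumed) */
    long scene_bytes = vertex_count * sizeof(struct Vertex);

    /* The 1024x768, 32-bit frame the GPU renders from that geometry. */
    long frame_bytes = 1024L * 768L * 4L;

    printf("Vertex data handed to the GPU: %ld KB\n", scene_bytes / 1024);
    printf("Rendered frame it becomes:     %ld KB\n", frame_bytes / 1024);
    return 0;
}
```

The geometry comes out at roughly 117 KB against a 3 MB frame, which is the point: what crosses the bus to describe the scene is tiny compared to what the GPU builds from it.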
To be honest, I'm not certain this is the case, as I've never seen it stated this plainly, but it's the impression I've got from following the video card industry over the past 10 years. If I'm wrong I'm sure someone will correct me.