Vive le GeForce FX!
The two worlds remained separate for a long time: CPUs handled office and Internet applications, while GPUs were good only for drawing pretty pictures faster. But a single development would change all that: the appearance of programmability in GPUs. At first, CPUs had nothing to fear. The first so-called programmable GPUs (the NV20 and R200) were far from a threat: programs were limited to around ten instructions, and they worked on exotic data types such as 9- or 12-bit fixed-point numbers.
But Moore’s Law reared its head once again. The growing transistor budget not only made it possible to add more calculating units, but also increased their flexibility. The appearance of the NV30 was therefore significant for several reasons. While gamers may not induct the NV30 into their hall of fame, it introduced two features that were decisive in changing the mindset that saw GPUs as nothing more than graphics accelerators:
- support for single-precision floating-point calculations (even if it did not comply with the IEEE 754 standard);
- support for programs of more than a thousand instructions.
At this point, all the conditions were in place to attract a few curious researchers on the lookout for new sources of processing power.