I thought about this after reading about Intel's planned 32-core chip.
Regular software has a long way to go before it exploits the power of so many cores, but scientific / engineering codes already written for multiprocessing systems can take advantage of this right away.
Currently, systems with 32 compute cores are prohibitively expensive for individuals (think mid-range server), but this would bring 50-200 GFlops of computing to a desktop. Of course, the system would have to be souped up with more RAM and more hard-disk space than a regular desktop, but it would still be way cheaper in $$ / GFlop. 100-200 GFlops is good enough for lots of serious work, and universities / small research labs will definitely benefit.
Right off the top of my head, I can think of CAE software (ABAQUS), computational fluid dynamics packages (FLUENT),
most of the 3D electromagnetic field codes, and linear optimization (CPLEX), all of which will benefit enormously. Raytracing is trivially parallelized; just think about POV-Ray running on a 32-core CPU :twisted:
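To see why raytracing parallelizes so easily: each pixel's colour depends only on that pixel's ray, so the image can be split across cores with no communication between them. Here's a minimal sketch in Python using the standard `multiprocessing` module; the `shade` function is a made-up stand-in for a real per-pixel raytracing computation, not actual POV-Ray code.

```python
from multiprocessing import Pool

def shade(pixel):
    # Hypothetical stand-in for tracing one ray: the result depends
    # only on this pixel's own coordinates, so pixels are independent.
    x, y = pixel
    return (x * 31 + y * 17) % 256

def render(width, height, workers):
    # Every (x, y) pixel is an independent task.
    pixels = [(x, y) for y in range(height) for x in range(width)]
    # Pool.map splits the pixel list across worker processes, so on a
    # 32-core CPU you would simply set workers=32.
    with Pool(processes=workers) as pool:
        return pool.map(shade, pixels)

if __name__ == "__main__":
    image = render(64, 48, workers=4)
    print(len(image))  # one shaded value per pixel
```

With no shared state between pixels, the speedup is limited mainly by the number of cores, which is exactly why a 32-core desktop is so attractive for this class of workload.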
More about: multicores make high-end computing cheaper