An announcement Nvidia made at the recent SC11 conference apparently got lost in the wave of supercomputing news.
Together with Cray, PGI and CAPS, Nvidia announced a new parallel programming standard called OpenACC. OpenACC has been published as an API consisting of compiler directives for C, C++ and Fortran that mark regions of code to be offloaded to accelerators, speeding up processing in highly data-parallel workloads. OpenACC is designed to apply to parallel processors in general and can be used on GPUs as well as CPUs.
Cray, PGI and CAPS plan to deliver initial compiler support for OpenACC in Q1 2012. There was no word on when Nvidia might provide a dedicated OpenACC compiler, but the company said that its CUDA architecture is "fully compatible and interoperable" with OpenACC. Nvidia gave no indication that it would drop CUDA anytime soon, but its participation in this group of companies suggests that it may be planning for a time beyond CUDA.
While CUDA code also runs on multi-core processors, the underlying architecture is proprietary technology. The advantage of OpenACC is support for a much broader range of devices. There is no indication that OpenACC will have a major impact on consumer applications initially; instead, the group of OpenACC backers is hoping for interest from developers of supercomputing applications in industry and academia, covering fields such as chemistry, biology, physics, data analytics, weather and climate, and intelligence, among many others.