There's devs in the way of this: C++ may be the bee's knees, but if it's just more work for devs, then even if it's overall easier, it's still more work.
LRB, coming in, has to do what nVidia is already doing. nVidia has CUDA and is already working in those areas, while LRB has its own LRBl, and it'll get used as well.
nVidia has the "lead"; Intel has the more tried-and-true approach, except when it comes to HW.
If the HW fails in the gaming sector, it'll put a lot of strain on LRB overall; gaming is the key to its success. Once Haswell and Bobcat/Bulldozer implement it at the chip level, it'll no longer matter, but for discrete cards LRB is still up in the air IMHO, since it doesn't have a lot of time to stretch its legs before it's on-chip alongside the competition. That leaves mainly an upper-mid/high-end market for gfx-only discrete solutions, where we may yet be surprised by both nVidia and ATI.