Intel's 'Larrabee' to Shake Up AMD, Nvidia
Few advancements in computer technology interest enthusiasts as much as Intel's upcoming Larrabee architecture. Little is still known about final product details, yet the basic premise of the device has been well established at this point. Naturally, where there is mystery and interest, there will be curiosity and speculation, and Larrabee is no exception. Will it succeed or fail?
One key aspect of Larrabee's success will be how well it performs as a gaming graphics solution. Though not exactly a GPU or a CPU, but rather a hybrid of the two, Larrabee is clearly aimed at gamers. One thing AMD has taught us is that to survive, you don't need the fastest product; you just need something mainstream that's priced right. If Intel fails here, Larrabee will stand little chance, especially since Intel's graphics history hasn't been, well, pretty.
Performance-wise, rumor has it that Larrabee is expected no earlier than 2009 and will only match today's current generation of GPUs at release. According to a recent paper from Intel, a simulated Larrabee with 25 cores, each running at 1 GHz, could run both F.E.A.R. and Gears of War at 60 FPS. If Larrabee actually ships with 32 cores running at over 2 GHz each, it could end up faster than rumored.
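The back-of-the-envelope reasoning behind that speculation can be sketched in a few lines. This assumes performance scales linearly with core count and clock speed, a simplification that ignores memory bandwidth and other real-world bottlenecks:

```python
# Naive scaling estimate from Intel's simulated figures to a
# speculated retail part. Linear scaling is an assumption, not
# anything Intel has claimed.

def scaled_fps(base_fps, base_cores, base_ghz, cores, ghz):
    """Project frame rate from a simulated baseline to a hypothetical part."""
    return base_fps * (cores / base_cores) * (ghz / base_ghz)

# Intel's simulation: 25 cores at 1 GHz hit 60 FPS.
# Speculated release part: 32 cores at over 2 GHz.
estimate = scaled_fps(60, 25, 1.0, 32, 2.0)
print(round(estimate, 1))  # 153.6
```

Even if real scaling falls well short of linear, the rough math shows why a 32-core, 2 GHz part could comfortably beat the "only as fast as today's GPUs" rumor.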
Beyond performance, another concern for Larrabee is its drivers and industry support. Even excellent hardware can be tainted by poorly written drivers and a lack of software support. Intel's past driver quality has drawn plenty of criticism, adding to the concern, but the company still has time to address the matter properly. Larrabee's highly flexible, programmable design ultimately depends on its drivers and supplied compilers to be useful.
Other factors could also harm Larrabee's success, such as fabrication problems resulting in low yields, release-date delays, and high prices. Another clear unknown is the competition Larrabee will face in its release time frame. Performance aside, current GPUs are quickly becoming more programmable, now offering general-purpose capabilities similar to Larrabee's. Folding@Home, PhysX physics acceleration, medical image processing, and CUDA applications already benefit from these abilities.
A unique benefit of Larrabee's flexible design is that new features, such as Pixel Shader 5.0 support, could be added with just a software update. Unlike with traditional video cards, where new features often require buying a new generation of card, the only reason to upgrade Larrabee may be for increased performance. Will Larrabee's more flexible x86 programmability offer enough to compete successfully with future GPUs?
Larrabee does offer some other strong capabilities compared to current GPUs, though, and a popular one to point out is its strength in ray-tracing. Ray-tracing is a rendering technique that produces greater photorealism, but at a high computational cost. While it is used in animated films, where real-time performance is not necessary, traditional video cards have avoided ray-tracing in computer games due to its performance demands. AMD, Nvidia, and Intel have all recently demonstrated astonishing real-time ray-tracing, however, and some believe it could be the industry's next big graphical push.
So how do they compare? It's hard to say, really. Intel's Quake 4 demonstration was mightily impressive, showing that Larrabee really could be capable of powerful gaming. AMD's demonstration was also spectacular, though it was a real-time rendering showcase rather than a playable game. On the bright side, it ran on current Radeon HD 4800 hardware, meaning future hardware could be far more capable. Lastly, Nvidia's ray-tracing demonstration on its high-end hardware was promising, yet it's still at least a year away from being viable in the mainstream. What we can take away from all this, however, is that Larrabee won't be the only product capable of ray-tracing in the years to come.
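For a sense of why ray-tracing is so computationally demanding, consider the core operation a ray tracer repeats for every pixel of every frame (plus every reflection bounce): a ray-object intersection test. A minimal ray-sphere version, with a purely illustrative scene, might look like this:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None on a miss.

    `direction` must be a normalized 3-vector. Solves the quadratic
    |origin + t*direction - center|^2 = radius^2 for t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray from the origin straight down the z-axis toward a unit
# sphere centered 5 units away: the nearest hit is 4 units out.
print(ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Multiply that test by millions of pixels, multiple rays per pixel, and every object in the scene, sixty times a second, and the appeal of a many-core, fully programmable chip like Larrabee for this workload becomes obvious.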
Larrabee is definitely an interesting and innovative idea, yet it remains to be seen whether the risk will pay off. Even if Larrabee doesn't succeed, however, the trend of GPUs taking on general-purpose work usually reserved for CPUs looks unavoidable. The CPU of the future may well resemble a hybrid of the Nehalem architecture and Larrabee, with a few large cores surrounded by dozens of smaller, simpler ones, but only time will tell.