Intel's 'Larrabee' to Shake Up AMD, Nvidia

Few advancements in computer technology interest enthusiasts as much as Intel's upcoming Larrabee architecture. Little is known about the final product, yet the basic premise of the device is well established at this point. Naturally, where there is mystery and interest, there will be curiosity and speculation, and Larrabee is no exception. Will it succeed, or will it fail?

One key to Larrabee's success will be how good a gaming graphics solution it is. Though not exactly a GPU or a CPU, but rather a hybrid of both, Larrabee is clearly aimed squarely at gamers. One thing AMD has taught us is that to survive, you don't need the fastest product; you just need something mainstream that's priced right. If Intel fails here, there would be little chance for Larrabee, especially since Intel's graphics track record hasn't been, well, pretty.

Performance-wise, rumor has it that Larrabee is expected no earlier than 2009 and will only be as fast as today's generation of GPUs upon release. According to a recent paper from Intel, simulated Larrabee performance suggests that 25 cores, each running at 1 GHz, could run both F.E.A.R. and Gears of War at 60 FPS. If Larrabee ships with 32 cores running at over 2 GHz each, as some speculate, it could actually be faster than rumored.
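
As a rough back-of-the-envelope illustration of that speculation (our own arithmetic, not an Intel figure), if frame rate scaled perfectly with core count and clock speed, the simulated 25-core, 1 GHz data point would extrapolate as in the sketch below; real workloads never scale this cleanly, so treat it as an upper bound.

    # Hypothetical scaling sketch: assumes frame rate scales linearly with
    # (core count x clock speed), which real games never quite achieve.
    def naive_fps_estimate(baseline_fps, baseline_cores, baseline_ghz,
                           target_cores, target_ghz):
        """Scale a simulated frame rate by the ratio of cores * clock."""
        return baseline_fps * (target_cores * target_ghz) / (baseline_cores * baseline_ghz)

    # Intel's simulated data point: 25 cores at 1 GHz -> ~60 FPS.
    # Speculated retail part: 32 cores at over 2 GHz.
    print(naive_fps_estimate(60, 25, 1.0, 32, 2.0))  # ~153.6 FPS, best case

Even with substantial overhead eating into that ceiling, the arithmetic leaves room for a shipping part to land above the rumored "only as fast as today's GPUs" mark.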

Beyond performance, another concern for Larrabee is drivers and software support. Even excellent hardware can be undermined by poorly written drivers and a lack of industry support. Intel's past driver quality has drawn plenty of criticism, adding to the concern, but Intel still has time to address the matter properly. Larrabee's highly flexible, programmable design ultimately depends on its drivers and supplied compilers to be useful.

Other factors could also harm Larrabee's success, such as fabrication problems resulting in low yields, release delays, and high prices. Another clear unknown is the competition Larrabee will face in the time frame it is released. Performance aside, current GPUs are quickly becoming more programmable, offering general-purpose capabilities similar to Larrabee's. Applications such as Folding@Home, PhysX, medical image processing, and CUDA workloads already benefit from what current GPUs can do.

A unique benefit of Larrabee's flexible design is that new features, such as Pixel Shader 5.0 support, could be added with just a software update. Unlike traditional video cards, where new features often require buying a new generation of card, the only reason to upgrade Larrabee may be for more performance. Will Larrabee's more flexible x86 programmability be enough to compete successfully with future GPUs?

Larrabee does offer some strong capabilities compared to current GPUs, though, and a popular one to point out is its strength in ray tracing. Ray tracing is a rendering technique that produces greater photorealism, but at a high computational cost. While it is used in animated films, where real-time performance is unnecessary, video cards have avoided ray tracing in games because of the performance hit. AMD, Nvidia, and Intel have all recently demonstrated impressive real-time ray tracing, however, and some believe it could be the next big graphical push in the industry.
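
To give a sense of why the technique is so expensive, here is a minimal, purely illustrative sketch of the core idea in Python (our own toy example, not Intel's, AMD's, or Nvidia's actual renderer): every pixel casts at least one ray that must be tested against the scene geometry, so the work grows with resolution and scene complexity, and that is before shadows, reflections, and refraction multiply the ray count.

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for t.
        # The direction is assumed to be unit length, so the quadratic's a == 1.
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    def render(width, height):
        # Cast one primary ray per pixel toward a single sphere and mark
        # hits with '#' and misses with '.'.
        rows = []
        for y in range(height):
            row = []
            for x in range(width):
                u = (x + 0.5) / width * 2.0 - 1.0    # view-plane x in [-1, 1]
                v = 1.0 - (y + 0.5) / height * 2.0   # view-plane y in [-1, 1]
                length = math.sqrt(u * u + v * v + 1.0)
                direction = (u / length, v / length, -1.0 / length)
                hit = ray_sphere_hit((0.0, 0.0, 0.0), direction,
                                     (0.0, 0.0, -3.0), 1.0)
                row.append('#' if hit is not None else '.')
            rows.append(''.join(row))
        return '\n'.join(rows)

    print(render(48, 24))  # 48 x 24 pixels -> 1,152 primary rays for one frame

Scale that to a 1920x1200 frame 60 times a second, with several rays per pixel, and it is clear why dozens of simple, wide-vector x86 cores start to look attractive for this workload.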

So how do they compare? It's hard to say, really. Intel's ray-traced Quake 4 demonstration was mightily impressive, showing that Larrabee really could be capable of powerful gaming. AMD's demonstration was also spectacular, though it was a real-time rendering showcase rather than a playable game. On the bright side, it ran on current Radeon HD 4800 hardware, meaning future hardware could be far more capable. Lastly, Nvidia's ray-tracing demonstration on its high-end hardware was promising, yet it is still at least a year away from being feasible for the mainstream. What we can take away from all this, however, is that Larrabee won't be the only hardware capable of ray tracing in the years to come.

Larrabee is definitely an interesting and innovative idea, yet it remains to be seen whether the risk will pay off. Even if Larrabee doesn't succeed, however, the trend of GPUs taking on general-purpose work usually reserved for CPUs is unavoidable. The CPU of the future may well look like a hybrid of the Nehalem architecture and Larrabee, with a few large cores surrounded by dozens of smaller, simpler ones, but only time will tell.

  • reann
    I stopped reading the article. I know Intel can deliver Larrabee. The question here is, IS INTEL CAPABLE OF DELIVERING GOOD DRIVERS TO SUPPORT THEIR HARDWARE? Methinks: well, GMA X3500, GMA X4500, Larrabee... oh wait, driver fails, never mind -_-"
  • JonathanDeane
    Intel makes some decent drivers when they really want to... (they just need to get motivated).
    I suspect the driver team was probably not too motivated to work miracles on any of Intel's past hardware. I mean, think about it: if you had to look at what ATI or Nvidia was doing and then you had the job of writing drivers for some integrated crap, you would die a little inside every day lol
  • jaragon13
    Larrabee sounds horridly inefficient compared to GPUs.

    25 cores... and still not as fast as a regular GPU? Sounds like I'm gonna be adding "400 more watts and a custom cooling system" if it's going to live up to my expectations, which are ridiculous.
  • eklipz330
    lets not jump to conclusions jaragon... ima wait until more information is released before i come to a decision, even if its 50 cores, as long as it gets the job done, and doesn't require large amounts of power
  • unatommer
    sounds like a jack-of-all-trades product (which I guess the CPU is now, 'cept it sucks at graphics, where Larrabee might be decent, I suppose).

    However, Intel has yet to prove that they can make a graphics chip that moves away from a rating of "total suck"... so I'm not exactly holding my breath.
  • jaragon13
    eklipz330: "lets not jump to conclusions jaragon... ima wait until more information is released before i come to a decision, even if its 50 cores, as long as it gets the job done, and doesn't require large amounts of power"
    How can you not? If Intel says exactly what their conclusions are, I'm pretty much right. It's not going to be a really good GPU at all. Unless that includes 24x anti-aliasing, 16x anisotropic filtering, 2560x1600 resolution, and maxed-out graphics with 2048x2048 textures, it doesn't sound great at all.

    I myself want to run my system on a graphics card, not all on a CPU.
  • exiled scotsman
    Sounds like jaragon doesn't want any possible innovation in the GPU sector. I welcome it. I hope Larrabee blows AMD and Nvidia on their ass.

    Larrabee + Lucid chip sounds like a killer combination.
  • iocedmyself
    God, I'm sick of the half-assed or flat-out wrong assessments given in these THG articles.

    Larrabee's performance was described in classic misleading Intel fashion: it will be able to achieve UP TO 16 FLOPs per clock on each core. No one seems to pay attention to the fact that they can't even guarantee 16; it could be anything between 2 and 16... They have already said that the first Larrabee they launch will have 8 cores, so the best it could do, running at full performance, is...

    1 GHz x 16 FLOPs/clock = 16 GFLOPS per core x 8 cores = 128 GFLOPS
    2 GHz x 16 FLOPs/clock = 32 GFLOPS per core x 8 cores = 256 GFLOPS
    3 GHz x 16 FLOPs/clock = 48 GFLOPS per core x 8 cores = 384 GFLOPS

    The $270 ATI 4870 already does more than 1 teraflop on a single GPU. By the time Larrabee launches in another year or more, those $270 cards will be under $100. Intel has continually badmouthed AMD for putting all of its cores on one piece of silicon along with an IMC and the caches, and hasn't launched a native quad-core chip for that very reason, but they won't have any yield issues on 8-, 16-, 24-, 32-, 40-, or 48-core Larrabee chips? On top of the fact that they notoriously suck at any and all things GPU-related.

    Intel has said they want a 32-core Larrabee out by the end of 2010, so in another 16 months they will supposedly be capable of producing a graphics card that does:
    1 GHz x 16 FLOPs/clock = 16 GFLOPS per core x 32 cores = 512 GFLOPS
    2 GHz x 16 FLOPs/clock = 32 GFLOPS per core x 32 cores = 1024 GFLOPS
    3 GHz x 16 FLOPs/clock = 48 GFLOPS per core x 32 cores = 1536 GFLOPS

    Wow, those numbers look great, huh? They'll be able to do 336 GFLOPS more than a single $270 ATI 4870... within the next 16 months. IF they can actually get the clock speeds up to 3 GHz, which will result in a 250-300 watt card.

    The X850 XT PE is not quite 4 years old and did around 225 GFLOPS.
    The X1950 XTX in 2006 did 400-450 GFLOPS.
    The 3870 in 2007 did 550-600 GFLOPS.
    And the 4870, released 7 months later, does 1000-1200 GFLOPS.

    In 3.5 years the processing power of a single GPU has increased five-fold. The RV870 is slated to launch in 2009, before Larrabee, which will probably mean a jump to around 1500 GFLOPS, on a 40nm core.

    The fact that Intel is also leaving Larrabee without a defined API makes for a triple gamble: trying to compete with AMD/ATI and Nvidia out of the gate, a huge leap in multi-core density, and relying on software that doesn't exist yet to get to a place where it can even compete. None of these are Intel's strong suits. As it stands now, games would have to be coded specifically to run on Larrabee (didn't that work out well for Ageia with PhysX?).

    At a time when GPUs are becoming more and more programmable and capable of offloading work from the CPU, this is the complete opposite of what the industry needs in order to make progress. Running Quake 4 in some bastard widescreen resolution of 1280x890 or whatever is hardly a breakthrough, whether it's ray-traced or not. AMD demoed real-time ray tracing on a 4800-series GPU several months before.

    In the past two months, news on Larrabee has gone from "it's gonna provide 1.2 to 1.5 times the performance of current Nvidia and ATI cards," to "it will perform as well as or slightly slower than Nvidia and ATI cards, but it will be low power and easier for programmers to code games for," to "it won't target any specific graphics API, so it needs special coding to be used, and to see high performance it will draw a lot of power and produce A LOT of heat, and oh yes, in another year or so it should be performing on par with what's available today." So really, by the time it's released, the cards that are out today will cost 50-75% less... but hey, they're only proven to work great already, and why would you buy something that old when you can pay the Intel preferred-customer price that's 5-10 times higher than the competition's performance equal?

    New and different doesn't mean innovative; innovative implies major performance increases, lower power consumption, and so on. This is more like what I would expect to see from a group of speed freaks with good financing.

    Kind of like someone who has the really great idea of shaving their hair off and making it into a wig, so their hair... will be a hat too! And it'll be easier to wash, and they won't have to brush it or get it cut either. True? Sure. Practical? Well, you'd think so if you were hopped up on speed.
  • dagger
    Keep in mind that the term "core" used for Larrabee is more analogous to, say, a stream processor in a traditional GPU than to a GPU as a unit. In other words, it's more accurate to compare Larrabee's 16 to an 8800 GTS's 128.
  • In terms of raw GPU horsepower this thing will definitely suck. There is no doubt about it. Looking at iocedmyself's calculations it is already bad, but it gets even worse... being the jack of all trades that it is will hurt performance and bring the results even lower.

    Furthermore, if the clock speeds are that high, the power consumption will go through the roof. For example, to double the clock speed you roughly need to double the voltage, which means doubling the current, which means quadrupling the wattage. By that logic, taking a 600 MHz card and clocking it to 3000 MHz (by raising the voltage by a factor of 5) would have the following effects:

    5 times the performance
    25 times the power consumption (squared relationship)

    At that rate they could build a 1000-watt card at 45nm and it still couldn't keep up with the HD 4850.

    Also, with that little horsepower, support for the newest technologies will be useless. I own an Nvidia 7600 GS, and even though it supports HDR lighting, I never use it, to help my poor FPS. BTW, I am getting a new comp in a month with an HD 4850!!!

    Regardless, dedicated graphics will always have an advantage over integrated graphics.

    On the plus side, Larrabee could improve the CPU side of things. But that will likely take a long time, as most programs cannot make use of more than one core.

    It will cost Intel money in the short run, but with several attempts they may get it right. Only time will tell.