
Larrabee Versus GPU

Larrabee: Intel's New GPU

As we saw earlier, Larrabee doesn’t look much like a GPU at all, with the exception of its texture units. There’s no sign of the other fixed-function units you usually find, like the setup engine, which converts triangles into pixels and interpolates the vertex attributes across them, or the ROPs, which write pixels to the frame buffer, resolve anti-aliasing, and perform any necessary blending. In Larrabee’s case, all of these operations are performed directly by the cores. The advantage of this approach is greater flexibility. With blending, for example, it becomes possible to handle transparency independently of the order in which primitives are submitted, something that remains especially complicated for today’s GPUs. The downside is that Intel will have to give its GPU more raw processing power than its competitors need, since their dedicated units handle these tasks and leave the programmable units free to concentrate on shading.
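
To make this concrete, here is a minimal C++ sketch of a software “ROP” of the kind Larrabee’s cores would run. All of the types and names are invented for the illustration, and this is not Intel’s code; it simply shows why doing blending in software allows order-independent transparency, which a fixed-function ROP, blending strictly in submission order, cannot offer.

    #include <algorithm>
    #include <vector>

    // Hypothetical sketch, not Intel's code: a software "ROP" on a
    // Larrabee-style core. Each pixel keeps a list of the fragments that
    // landed on it; once all primitives are in, the list is sorted by depth
    // and blended back-to-front, so the result no longer depends on the
    // order in which the primitives were submitted.

    struct Fragment {
        float depth;     // distance from the camera
        float rgba[4];   // premultiplied color plus alpha
    };

    void resolve_pixel(std::vector<Fragment>& frags, float out[4]) {
        // Sort far-to-near; a fixed-function ROP cannot do this, since it
        // blends strictly in submission order.
        std::sort(frags.begin(), frags.end(),
                  [](const Fragment& a, const Fragment& b) {
                      return a.depth > b.depth;
                  });
        for (int c = 0; c < 4; ++c)
            out[c] = 0.0f;
        for (const Fragment& f : frags)
            for (int c = 0; c < 4; ++c)   // "over" operator, premultiplied
                out[c] = f.rgba[c] + out[c] * (1.0f - f.rgba[3]);
    }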

Although GPUs have become very flexible over successive versions of Direct3D, they’re still far from the flexibility Larrabee can offer. One of the main limitations of GPUs, even through APIs like CUDA, is memory access. As with the Cell, memory management is quite constraining: to get optimum performance, register usage has to be minimized and data has to be staged through small local memory zones of a few kilobytes.
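
As a rough illustration of the contrast, here is that staging pattern sketched in plain C++. The 16 KB figure matches the per-block shared memory of GPUs of the period; the function and constant names are our own, and where the staged array appears, a real CUDA kernel would use a __shared__ array.

    #include <cstddef>

    // Illustrative sketch of the staging pattern GPU programming imposes:
    // data is copied through a small on-chip scratchpad (16 KB per block on
    // the GPUs of this era) before being worked on.

    const std::size_t kSharedBytes = 16 * 1024;
    const std::size_t kTile = kSharedBytes / sizeof(float);

    void scale_gpu_style(const float* in, float* out, std::size_t n) {
        float staged[kTile];                         // stands in for __shared__
        for (std::size_t base = 0; base < n; base += kTile) {
            std::size_t count = (n - base < kTile) ? n - base : kTile;
            for (std::size_t i = 0; i < count; ++i)  // global -> scratchpad
                staged[i] = in[base + i];
            for (std::size_t i = 0; i < count; ++i)  // compute in scratchpad
                staged[i] *= 2.0f;
            for (std::size_t i = 0; i < count; ++i)  // scratchpad -> global
                out[base + i] = staged[i];
        }
    }

    // On Larrabee, ordinary loads and stores go through a conventional
    // cache hierarchy, so the same operation is just the loop itself:
    void scale_larrabee_style(const float* in, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = 2.0f * in[i];
    }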

What’s more, despite the flexibility GPUs have gained, their functionality remains heavily oriented toward raw computation. There’s no question, for example, of performing I/O operations from a GPU. Larrabee, conversely, is entirely capable of that: code running on it can call printf or handle files directly. It’s also possible to use recursive and virtual functions, both of which are impossible on a GPU.
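
A trivial C++ sketch of such code follows; the scene-graph types are invented for the example. It leans on printf, file handling, recursion, and virtual dispatch, all ordinary on Larrabee’s x86 cores and all unavailable to a shader or CUDA kernel of this period.

    #include <cstdio>

    // Invented example types: a tiny scene graph traversed with recursion
    // and virtual dispatch, logging to a file as it goes.

    struct Node {
        Node* child[2];
        Node() { child[0] = child[1] = 0; }
        virtual ~Node() {}
        virtual const char* name() const = 0;   // virtual dispatch
    };

    struct Mesh  : Node { const char* name() const { return "mesh"; } };
    struct Light : Node { const char* name() const { return "light"; } };

    // Recursive traversal: needs a real call stack, which shaders lack.
    void dump(const Node* n, std::FILE* log, int depth) {
        if (!n) return;
        std::fprintf(log, "%*s%s\n", depth * 2, "", n->name());
        dump(n->child[0], log, depth + 1);
        dump(n->child[1], log, depth + 1);
    }

    int main() {
        Mesh root; Light a; Mesh b;
        root.child[0] = &a;
        root.child[1] = &b;

        std::FILE* log = std::fopen("scene.txt", "w");  // file I/O
        if (log) {
            dump(&root, log, 0);
            std::fclose(log);
        }
        std::printf("scene dumped\n");                  // console I/O
        return 0;
    }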

Obviously, all of this functionality doesn’t come for free, and there is necessarily an impact on a program’s efficiency, since these features go against the parallel programming paradigm. But that remains acceptable for code that isn’t performance-critical, and making such code possible without involving the host CPU opens up some interesting possibilities. For example, modern GPUs include a just-in-time (JIT) compiler in their drivers to adapt shaders on the fly to the specific details of their architecture. Larrabee is no exception to the rule, but instead of this compiler running on the host processor as part of the driver, it runs directly on Larrabee.

Another interesting possibility is that code can be debugged directly on Larrabee, whereas to debug CUDA code it’s generally necessary to fall back on emulation via the CPU. In that case the CPU emulates the GPU but doesn’t simulate its behavior precisely, and certain bugs can be difficult to pin down.

Comments
  • thepinkpanther, March 23, 2009 6:35 AM
    Very interesting. I know Nvidia can't settle for being second best. As always, it's good for the consumer.
  • IzzyCraft, March 23, 2009 6:49 AM
    Yes, interesting, but Intel already makes like 50% of every GPU; I'd rather not see them take more market share and push Nvidia and AMD out, although I doubt that will happen unless they can make a real performer. I have no doubt they can on paper, but with drivers etc. I doubt it.
  • Anonymous, March 23, 2009 6:50 AM
    I wonder if their aim is to compete for the gamer market and run high-end games?
  • Alien_959, March 23, 2009 8:12 AM
    Very interesting; finally some more information about Intel's upcoming "GPU".
    But as I said before, if the drivers aren't good, even the best hardware design is for nothing. I hope Intel invests more on the software side of things; it will be nice to have a third player.
  • crisisavatar, March 23, 2009 8:28 AM
    Cool, I'll wait for Windows 7 for my next build and hope to see some DirectX 11 and OpenGL 3 support by then.
  • Stardude82, March 23, 2009 8:32 AM
    Maybe there is more than a little commonality with the Atom CPUs: in-order execution, hyper-threading, low power/small footprint.

    Does the dual-core NV330 have the same sort of ring architecture?
  • liemfukliang, March 23, 2009 10:27 AM
    Drivers. If Intel makes drivers as bad as the Intel Extreme ones, then even if Intel can make a faster and cheaper GPU it will be useless.
  • IzzyCraft, March 23, 2009 10:44 AM
    Hope for an Omega Drivers equivalent lol?
  • phantom93, March 23, 2009 11:16 AM
    Damn, hoped there would be some pictures :( . Looks interesting; I didn't read the full article, but I hope it's cheaper so some of my friends with regular desktops can join in some Original Hardcore PC Gaming XD.
  • Slobogob, March 23, 2009 11:51 AM
    I was quite surprised by the quality of this article and am quite eager to see the follow-up.
  • JeanLuc, March 23, 2009 12:26 PM
    Well, I am looking forward to Larrabee, but I'll keep my optimism under wraps until I start seeing some screenshots of Larrabee in action playing real games, i.e. not Intel demos.

    I wonder just how compatible Larrabee is going to be with older games?
  • tipoo, March 23, 2009 12:46 PM
    Great article! Keep ones like this coming!
  • tipoo, March 23, 2009 12:48 PM
    IzzyCraft: "Hope for an Omega Drivers equivalent lol?"

    That would be FANTASTIC! Maybe the same people who make the Omega drivers could make alternate Larrabee drivers? We all know Intel sucks balls at drivers.
  • armistitiu, March 23, 2009 12:49 PM
    So this is Intel's approach to a GPU... we put lots of simple x86 cores in it, add SMT and vector operations, and hope they'll do the job of a GPU. IMHO Larrabee will be a complete failure as a GPU, but as a highly parallel x86 CPU this thing could screw AMD's FireStream and NVIDIA's CUDA (OpenCL too), because it's x86 and programming for this kind of architecture is pretty popular.
  • wicko, March 23, 2009 1:18 PM
    IzzyCraft: "Yes, interesting, but Intel already makes like 50% of every GPU..."

    Yeah, but that 50% includes all the integrated cards that no consumer even realizes they're buying most of the time, not discrete cards. I'd like to see a bit more competition on the discrete side.
  • B-Unit, March 23, 2009 1:26 PM
    wtfnl: "'Simultaneous Multithreading (SMT). This technology has just made a comeback in Intel architectures with the Core i7, and is built into the Larrabee processors.' Just thought I'd point out that with the current AMD vs Intel fight... if Intel takes away the x86 license, AMD will take its multithreading and HT tech back, leaving Intel without a CPU and a useless GPU."

    Umm, what makes you think that AMD pioneered multi-threading? And Intel doesn't use HyperTransport, so they can't take it away.
  • justaguy, March 23, 2009 2:02 PM
    Now we know what they're trying to do with it. There's still no indication of whether it will work or not.

    I really don't see the 1st gen being successful; it's not like AMD and Nvidia are goofing around waiting for Intel to join up and show them a real GPU. Although I haven't seen any numbers on this, I'm thinking Larry's going to have a pretty big die to fit all those mini-cores, so it had better perform, because it will cost a decent sum.
  • crockdaddy, March 23, 2009 2:09 PM
    I would mention "but will it play Crysis", but I am not sure how funny that is anymore.
  • Pei-chen, March 23, 2009 2:12 PM
    Can't wait for Larrabee; hopefully a single Larrabee can have the performance of a GTX 295. Nvidia and ATI are slacking, as they know they can fix prices and stop coming out with better GPUs, just more cards with the same old GPUs.