Larrabee: Intel's New GPU

Larrabee Versus GPU

As we saw earlier, Larrabee doesn’t look much like a GPU at all, with the exception of its texture units. There’s no sign of the other fixed-function units you usually find, such as the setup engine, which converts triangles into pixels and interpolates the attributes of the vertices, or the ROPs, which write pixels to the frame buffer, resolve anti-aliasing, and perform any necessary blending. On Larrabee, all of these operations are performed directly by the cores. The advantage of this approach is greater flexibility: with blending, for example, transparency can be handled independently of the order in which primitives are submitted, something that is especially complicated on today’s GPUs. The downside is that Intel will have to give its GPU more processing power than its competitors, which can leave this kind of task to dedicated units and let the programmable units concentrate on shading.
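To illustrate what "blending performed by the cores" means in practice, here is a minimal, hypothetical C++ sketch (the types and names are invented for the example): a classic "src over dst" alpha blend done in ordinary code, the kind of per-pixel operation a fixed-function ROP normally handles.

```cpp
#include <cassert>

// Hypothetical sketch: with no fixed-function ROPs, a Larrabee core performs
// blending itself in software. Below, the classic "src over dst" alpha blend,
// applied per channel to a framebuffer texel the core has read back.
struct Color { float r, g, b, a; };

Color blend_over(const Color& src, const Color& dst) {
    float ia = 1.0f - src.a;              // remaining destination weight
    return { src.r * src.a + dst.r * ia,
             src.g * src.a + dst.g * ia,
             src.b * src.a + dst.b * ia,
             src.a         + dst.a * ia };
}
```

Because the blend is ordinary code, the core could just as well gather and sort the fragments covering a pixel before blending them back-to-front, which is what makes order-independent transparency tractable on this design.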

Although GPUs have become very flexible since the arrival of Direct3D, they’re still far from the flexibility Larrabee can offer. One of the main limitations of GPUs, even through APIs like CUDA, is memory access. As with the Cell, memory management is fairly constraining: to get optimum performance, the number of registers used has to be minimized, and data has to be staged through small memory zones of a few kilobytes.
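The constraint described above can be sketched as follows. This is a hypothetical C++ illustration (function and constant names invented; the 16 KB size is an assumption typical of the on-chip shared memory of GPUs of that era): data has to be staged tile by tile through a small scratchpad rather than accessed freely.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch of the GPU-style constraint: fast memory is a small
// on-chip zone of a few kilobytes, so data is staged through it tile by tile.
constexpr std::size_t kTileBytes  = 16 * 1024;            // assumed scratchpad size
constexpr std::size_t kTileFloats = kTileBytes / sizeof(float);

float sum_tiled(const std::vector<float>& data) {
    float scratch[kTileFloats];   // stand-in for the on-chip scratchpad
    float total = 0.0f;
    for (std::size_t base = 0; base < data.size(); base += kTileFloats) {
        std::size_t n = std::min(kTileFloats, data.size() - base);
        std::memcpy(scratch, data.data() + base, n * sizeof(float)); // stage in
        for (std::size_t i = 0; i < n; ++i)
            total += scratch[i];                                     // compute
    }
    return total;
}
```

On Larrabee, by contrast, the same reduction could simply walk the array through the normal x86 cache hierarchy, with no explicit staging step.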

What’s more, despite the flexibility GPUs have gained, their functionality remains heavily oriented towards raw computation. Performing I/O operations from a GPU, for example, is out of the question. Larrabee, conversely, is entirely capable of it: code running on Larrabee can call printf or handle files directly. It can also use recursive and virtual functions, which is impossible on a GPU.
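A short, hypothetical C++ sketch of the three capabilities just listed (the names are invented for the example). Code of this shape could run directly on Larrabee’s x86 cores, whereas the shading languages of contemporary GPUs allowed none of it:

```cpp
#include <cstdio>

// Recursion, unavailable in period shader code:
unsigned long fact(unsigned n) { return n < 2 ? 1UL : n * fact(n - 1); }

// Virtual functions, likewise impossible on a GPU:
struct Shader {
    virtual float shade() const = 0;
    virtual ~Shader() = default;
};
struct FlatShader : Shader {
    float shade() const override { return 0.5f; }  // constant gray
};

// Direct I/O from "GPU" code: printf works because the core is a normal CPU.
void report(const Shader& s) {
    std::printf("shade = %.2f, fact(5) = %lu\n", s.shade(), fact(5));
}
```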

Obviously, all of this functionality doesn’t come without cost: there is necessarily an impact on the program’s efficiency, since these features go against the paradigm of parallel programming. But that remains acceptable for code that isn’t performance-sensitive, and making this kind of code possible without involving the CPU opens up some interesting possibilities. For example, modern GPUs include a just-in-time (JIT) compiler in their drivers to adapt shaders to the specifics of their architecture on the fly. Larrabee is no exception to the rule, but instead of the compiler running on the host CPU as part of the driver, it runs directly on Larrabee.

Another interesting possibility is that code can be debugged directly on Larrabee, whereas debugging CUDA code generally requires emulation on the CPU. In that mode, the CPU emulates the GPU but doesn’t simulate its behavior precisely, so certain bugs can be difficult to identify.

  • thepinkpanther
    very interesting, i know nvidia cant settle for being the second best. As always its good for the consumer.
  • IzzyCraft
Yes interesting, but intel already makes like 50% of every gpu. I'd rather not see them take more market share and push nvidia and amd out, although I doubt it unless they can make a real performer, which I have no doubt on paper they can, but with drivers etc I doubt it.
  • I wonder if their aim is to compete to appeal to the gamer market to run high end games?
  • Alien_959
Very interesting, finally some more information about Intel's upcoming "GPU".
    But as I said before here, if the drivers aren't good, even the best hardware design is for nothing. I hope Intel invests more on the software side of things, and it will be nice to have a third player.
  • crisisavatar
    cool ill wait for windows 7 for my next build and hope to see some directx 11 and openGL3 support by then.
  • Stardude82
    Maybe there is more than a little commonality with the Atom CPUs: in-order execution, hyper threading, low power/small foot print.

    Does the duo-core NV330 have the same sort of ring architecture?
  • "Simultaneous Multithreading (SMT). This technology has just made a comeback in Intel architectures with the Core i7, and is built into the Larrabee processors."

    just thought i'd point out that with the current amd vs intel fight..if intel takes away the x86 licence amd will take its multithreading and ht tech back leaving intel without a cpu and a useless gpu
  • liemfukliang
Driver. If Intel makes drivers as bad as the Intel Extreme ones, then even if Intel can make a faster and cheaper GPU it will be useless.
  • IzzyCraft
    Hope for an Omega Drivers equivalent lol?
  • phantom93
Damn, hoped there would be some pictures :(. Looks interesting; I didn't read the full article but I hope it is cheaper so some of my friends with reg desktops can join in some Original Hardcore PC Gaming XD.