Xeon Phi: Intel's Larrabee-Derived Card In TACC's Supercomputer

Back To Larrabee: Starting The Many-Core Revolution

Larrabee is the code name for a now-infamous project whereby Intel planned to build a graphics card based on a many-core processor and go toe-to-toe with AMD and Nvidia. Why not use x86 for everything, the company asked, making GPU-specific changes to the hardware and leaning on software-based optimizations? Given Intel's huge investment in the x86 ISA, its interest in leveraging existing technology to solve tomorrow's performance problems is easy to understand.

The idea of Larrabee was intriguing. We even published our own analysis back in 2009 (Larrabee: Intel's New GPU). Unfortunately, later that same year, Intel announced that Larrabee would not be a retail part. Then, in 2010, we received word that not only was the project shelved, but that Intel was taking a derivative of Larrabee into the HPC space.

Fast forward to now. Not only is there a shipping product based on the last eight years of work, but it's also part of a 10 petaFLOPS-class supercomputer called Stampede, which we mentioned on the prior page. Both Intel and TACC are quick to point out that the hardware composing Stampede is pre-production, although it's purportedly fairly similar to the Xeon Phi 5110P and 3100 series coprocessors.

The competition is also very active in this space. Nvidia has a longer history of GPU-based computing than Intel, and it recently disclosed that the Titan supercomputer, developed by Cray for the Oak Ridge National Laboratory, employs Kepler-based Tesla K20 cards to help push performance as high as 20 petaFLOPS. 

AMD is similarly working to drum up excitement about its FirePro cards, particularly in light of the exceptional compute performance enabled by the Graphics Core Next architecture. In the meantime, we also see the company enjoying success with its Opteron processors. The same Titan supercomputer populated with Nvidia GPUs also leverages 18,688 Opteron 6274 CPUs, each with eight Bulldozer modules.

Bottom line: although Intel is a long-time proponent of using multiple cores in parallel, its approach until now has largely involved general-purpose x86 CPUs operating in concert. Meanwhile, companies like AMD and Nvidia compete with graphics-oriented architectures that just so happen to handle floating-point math deftly. By jumping on board now, Intel is late to the game. But it's banking on the ubiquity of x86 to make life easier for software developers, many of whom are still trying to get their heads around programming in CUDA or OpenCL.
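To make that contrast concrete, here's a minimal sketch (our own illustration, not Intel or TACC sample code) of the kind of source Intel has in mind: an ordinary OpenMP loop. On a Tesla or FirePro board, this work would have to be re-expressed as a CUDA or OpenCL kernel; the Xeon Phi pitch is that the same C code can simply be recompiled for the coprocessor and fanned out across its many threads.

    /* saxpy.c: y = a*x + y, parallelized with OpenMP.
     * Illustrative only; build with any OpenMP-capable compiler,
     * e.g. "gcc -fopenmp saxpy.c -o saxpy". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    static void saxpy(int n, float a, const float *x, float *y)
    {
        /* One pragma spreads the iterations across however many
         * hardware threads exist: eight on a desktop CPU, or a
         * couple hundred on a Xeon Phi. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(n, 3.0f, x, y);

        printf("y[0] = %.1f, max threads = %d\n",
               y[0], omp_get_max_threads());
        free(x);
        free(y);
        return 0;
    }

Nothing in that listing is Phi-specific, and that's the point. Porting it to CUDA would mean rewriting saxpy() as a device kernel and adding explicit memory transfers, whereas Intel's argument is that a rebuild with its own compiler, which can target the coprocessor directly, puts the same loop to work across 60-odd cores.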

  • esrever
    meh
  • tacoslave
    i wonder if they can mod this to run games...
  • mocchan
    Articles like these are what make me more and more interested in servers and supercomputers... Time to read up and learn more!
  • wannabepro
    Highly interesting.
    Great article.

    I do hope they get these into the hands of students like myself though.
  • ddpruitt
    Intriguing idea....

    These x86 cores have the oomph to run something a little more complex than a GPGPU can. But is it worth it, and what kind of effort does it require? I'd have to disagree with Intel's assertion that you can get used to it by programming for an "i3". Anyone with a relatively modern graphics card can learn to program OpenCL or CUDA on their own system. But jumping from eight cores (optimistically) to programming 60 or more efficiently doesn't seem reasonable. And how much is one of these cards going to run? You might get more by stringing a few GPUs together for the same cost.

    I wonder if this is going to turn into the same type of niche product that Intel's old math coprocessors did.
  • CaedenV
    man, I love these articles! Just the sheer amount of stuff that goes into them... measuring RAM in hundreds of TBs... HDD space in PBs... it is hard to wrap one's brain around!

    I wonder what AMD is going to do... on the CPU side they have the cheaper (much cheaper) compute power for servers, but it is not slowing Intel's sales down any. Then on the compute side, Intel is making a big name for itself with its new (but pricey) cards, and Nvidia already has a handle on the 'budget' compute cards, while AMD does not have a product out yet to compete with Phi or Tesla.
    On the processor side, AMD really needs to look out for Nvidia and its ARM chip prowess, which, if Nvidia focuses on it, could very well eat into the 'affordable' end of AMD's professional server chip market... It just seems like all the cards are stacked against AMD... rough times.

    And then there is IBM, a company with so much data center IP that they could stay comfortably afloat without making a single product. But the fact is that they have their own compelling products for this market, and when they get a client that needs Intel or Nvidia parts, they do not hesitate to build it for them. In some ways it amazes me that they are still around, because you never hear about them... but they really are still the 'big boy' of the server world.
  • A Bad Day
    esrever wrote: meh
    *Looks at the current selection of desktops, laptops and tablets, including custom built PCs*

    *Looks at the major data server or massively complex physics tasks that need to be accomplished*

    *Runs such tasks on baby computers, including ones with an i7 clocked to 6 GHz and quad SLI/CF, then watches them crash or lock up*

    ENTIRE SELECTION IS BABIES!

    tacoslave wrote: i wonder if they can mod this to run games...
    A four-core game that mainly relies on one or two cores, running on a thousand-core server. What are you thinking?
  • ThatsMyNameDude
    Holy shit. Someone tell me if this will work. Maybe, if we pair this thing up with enough Xeons and enough Quadros and Teslas, we can connect it to a gaming system and use the Xeons to render low-load games like CoD MW3 and TF2 and feed the output to the gaming system.
  • mayankleoboy1
    Main advantage of LRB over Tesla and AMD's FirePro S10000:

    A simple recompile is all that's needed to use Phi. Tesla/AMD needs a complete code rewrite, which is very, very expensive.
    I see LRB being highly successful.
  • PudgyChicken
    It'd be pretty neat to use a supercomputer like this to play a game like Metro 2033 at 4K, fully ray-traced.

    I'm having nerdgasms just thinking about it.