Xeon Phi: Intel's Larrabee-Derived Card In TACC's Supercomputer

Intel Xeon Phi Architecture

Intel has a vast portfolio of in-house technology, and Xeon Phi unquestionably taps some of it. However, the Many Integrated Core (MIC) architecture is notably more than a bunch of modified Pentium processors manufactured at 22 nm. Some of its more significant attributes include:

  • An in-order, dual-issue x86 design with 64-bit support
  • Four threads per core, and up to 61 cores per coprocessor
  • 512-bit SIMD for wider vectors
  • 512 KB of L2 cache per core (up to 30.5 MB per Xeon Phi)
  • 22 nm tri-gate transistors
  • Red Hat Enterprise Linux 6.x or SuSE Linux 12+ support
  • 6 or 8 GB of GDDR5 per card
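The aggregate figures in that list fall straight out of the per-core numbers; a quick sanity check, assuming the top-end 61-core part:

```python
# Sanity-check the aggregate Xeon Phi figures from the per-core specs.
cores = 61            # top-end coprocessor
threads_per_core = 4
l2_per_core_kb = 512

hw_threads = cores * threads_per_core
total_l2_mb = cores * l2_per_core_kb / 1024

print(hw_threads)   # 244 hardware threads visible to the OS
print(total_l2_mb)  # 30.5 MB of aggregate L2
```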

You'll notice that even the highest-end Xeon Phi wields far fewer cores than a typical GPU. However, you cannot compare a MIC core to a CUDA core, for example, on a 1:1 basis. A single Phi core runs four hardware threads and drives a 512-bit SIMD unit. A fair comparison requires getting past marketing's definition of a "core."
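The back-of-the-envelope math shows why the raw core count undersells the chip. The 1.1 GHz clock below is an assumed round number for illustration, not an official specification:

```python
# Rough peak-throughput estimate, treating each Phi core's 512-bit
# SIMD unit as the real unit of work.
cores = 61
vector_bits = 512
dp_lanes = vector_bits // 64    # 8 double-precision lanes per core
flops_per_cycle = dp_lanes * 2  # fused multiply-add counts as 2 FLOPs
clock_ghz = 1.1                 # assumption, not a published spec

peak_gflops = cores * flops_per_cycle * clock_ghz
print(dp_lanes)            # 8
print(round(peak_gflops))  # ~1074 GFLOPS double precision
```

At an assumed clock near 1 GHz, the wide vector units alone put the card in the neighborhood of a teraflop of double-precision throughput, which is why counting "cores" against a GPU's shader count misleads.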

It's also interesting that the card runs Linux. This probably isn't a solution you'd want to run a LAMP stack on, but I'd guess someone will try it anyway. You can SSH into the Xeon Phi card, though, to find out more about the hardware. We were advised that the following screenshot came from a pre-production board.

In the following diagram of a MIC architecture core, Intel claims that less than two percent of the core and cache die area is x86-specific logic. For perspective, the Xeon E5-2680 CPUs also found in the Stampede supercomputer comprise 2.27 billion transistors each, yet x86's lineage traces back to the 8086, a processor built from 20,000 to 30,000 transistors.

Of course, even today's desktop CPUs are incredibly complex, which underscores the importance of moving data to and from where it needs to go as expediently as possible. Like the Sandy Bridge- and Ivy Bridge-based CPUs, the prototype product code-named Knights Corner employs a ring bus interconnect to maximize throughput without squandering die area. And by giving each core a generous slice of cache, the processor avoids the performance hit it would take if every core had to be fed constantly from the GDDR5 memory controller stops on that ring.
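The payoff of keeping a working set resident in each core's 512 KB L2 is the same one behind classic cache blocking. The sketch below is a generic illustration of the technique, not code from Intel or TACC:

```python
# Generic cache-blocking sketch: multiply matrices in tiles small enough
# to stay resident in a core's L2, so each tile is fetched from main
# memory once instead of being re-streamed for every row of output.
def blocked_matmul(a, b, tile=2):
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for bi in range(0, n, tile):
        for bj in range(0, n, tile):
            for bk in range(0, n, tile):
                # The a and b tiles are reused across this inner loop
                # nest; on real hardware they would be served from L2
                # rather than from the GDDR5 controllers on the ring.
                for i in range(bi, min(bi + tile, n)):
                    for k in range(bk, min(bk + tile, n)):
                        aik = a[i][k]
                        for j in range(bj, min(bj + tile, n)):
                            c[i][j] += aik * b[k][j]
    return c

a = [[1, 2], [3, 4]]
print(blocked_matmul(a, a))  # [[7, 10], [15, 22]]
```

The tile size stands in for "whatever fits in 512 KB"; the structural point is that the traffic to memory scales with the number of tiles rather than with the full reuse count.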

  • tacoslave
    i wonder if they can mod this to run games...
  • mocchan
    Articles like these are what make me more and more interested in servers and supercomputers... Time to read up and learn more!
  • wannabepro
    Highly interesting.
    Great article.

    I do hope they get these into the hands of students like myself though.
  • ddpruitt
    Intriguing idea....

    These x86 cores have the oomph to run something a little more complex than a GPGPU can. But is it worth it, and what kind of effort does it require? I'd have to disagree with Intel's assertion that you can get used to it by programming for an "i3". Anyone with a relatively modern graphics card can learn to program OpenCL or CUDA on their own system. But learning how to program 60 cores (or more) efficiently from an 8-core machine (optimistically) doesn't seem reasonable. And how much is one of these cards going to run? You might get more by stringing a few GPUs together for the same cost.

    I wonder if this is going to turn into the same kind of niche product that Intel's old math coprocessors did.
  • CaedenV
    man, I love these articles! Just the sheer amount of stuff that goes into them... measuring RAM in hundreds of TBs... HDD space in PBs... it is hard to wrap one's brain around!

    I wonder what AMD is going to do... on the CPU side they have the cheaper (much cheaper) compute power for servers, but it is not slowing Intel's sales down any. Then on the compute side, Intel is making a big name for itself with its new (but pricey) cards, and Nvidia already has a handle on the 'budget' compute cards, while AMD does not have a product out yet to compete with Phi or Tesla.
    On the processor side, AMD really needs to look out for Nvidia and its ARM chip prowess, which, if focused on, could very well eat into AMD's server chip market at the 'affordable' end of this professional market... It just seems like all the cards are stacked against AMD... rough times.

    And then there is IBM: the company that has so much data center IP that it could stay comfortably afloat without making a single product. But the fact is that they have their own compelling products for this market, and when they get a client that needs Intel or Nvidia parts, they do not hesitate to build it for them. In some ways it amazes me that they are still around, because you never hear about them... but they really are still the 'big boy' of the server world.
  • A Bad Day
    *Looks at the current selection of desktops, laptops and tablets, including custom built PCs*

    *Looks at the major data server or massively complex physics tasks that need to be accomplished*

    *Runs such tasks on baby computers, including ones with an i7 clocked to 6 GHz and quad SLI/CF, then watches them crash or lock up*


    tacoslave: i wonder if they can mod this to run games...
    A four-core game that mainly relies on one or two cores, running on a thousand-core server. What are you thinking?
  • ThatsMyNameDude
    Holy shit. Someone tell me if this will work. Maybe, if we pair this thing up with enough Xeons and enough Quadros and Teslas, we can connect it to a gaming system and use the Xeons to render low-load games like CoD: MW3 and TF2 and feed the output to the gaming system.
  • mayankleoboy1
    Main advantage of LRB over Tesla and the AMD FirePro S10000:

    A simple recompile is all that's needed to use Phi. Tesla/AMD need a complete code rewrite, which is very, very expensive.
    I see LRB being highly successful.
  • PudgyChicken
    It'd be pretty neat to use a supercomputer like this to play a game like Metro 2033 at 4K, fully ray-traced.

    I'm having nerdgasms just thinking about it.