TACC's Stampede Supercomputer: Xeon Phi In The Field
Intel wanted to demonstrate that Xeon Phi isn't just a face-saving announcement meant to explain away the last eight years of research, an effort that seemed to hit a wall with Larrabee. Rather than simply claiming that customers had cards and were working with them, the company invited us to an event at the Texas Advanced Computing Center, which is currently building a supercomputer based on Xeon Phi. When we visited, more than 2,000 cards were already installed.
During the installation process, every card is placed into a PCIe riser chassis, and then dropped into a Dell server. Each PowerEdge C8220X "Zeus" node contains two Xeon E5-2680 processors and 32 GB of system memory. Here is what the server looks like:
That mezzanine card in the back is for InfiniBand. The two LGA 2011 sockets are covered by passive heat sinks and flanked by four DIMMs each; at 32 GB per node, that works out to eight 4 GB ECC-capable DIMMs per server. On the right, you see room for 2.5" storage; Stampede uses conventional hard drives. The supercomputer is not set up to be a Hadoop cluster; it's focused on compute performance.
We were told that the blue lights you see inside some of the nodes are Xeon Phi cards already installed. Several of these Dell servers were then placed in racks and flanked by APC cooling and the necessary power conduits.
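The article doesn't cover how Stampede's applications actually put those cards to work, but for context, here is a minimal, hypothetical sketch (not TACC code) of Intel's offload programming model for the MIC architecture: the host Xeon E5s run the program and push marked regions across PCI Express to the co-processor, where they spread across its many cores.

```c
#include <stdio.h>

#define N 1000000

/* File-scope data touched by the co-processor must be tagged for it. */
__attribute__((target(mic))) static float a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* Ship this loop to the first Xeon Phi card in the node; the in/out
       clauses describe what gets copied across PCI Express. */
    #pragma offload target(mic:0) in(a, b) out(c)
    {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```

Built with Intel's compiler (for example, icc -openmp), the marked region executes on the card while everything else stays on the host Xeons.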
Xeon Phi co-processors make up about seven petaFLOPS of the supercomputer's 10 petaFLOPS of capacity.
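As a rough sanity check on that split, and assuming around 1 teraFLOPS of peak double-precision throughput per card (in line with Intel's launch-era claims, not a figure quoted by TACC), the co-processor contribution implies several thousand cards once the system is fully built out:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures -- only the 7 and 10 PFLOPS numbers come from the article. */
    const double tflops_per_card  = 1.0;   /* approx. peak DP per Xeon Phi card */
    const double phi_pflops       = 7.0;   /* Xeon Phi share of Stampede */
    const double system_pflops    = 10.0;  /* total quoted capacity */

    double cards = (phi_pflops * 1000.0) / tflops_per_card;
    double share = phi_pflops / system_pflops * 100.0;

    printf("Implied Xeon Phi cards at full build-out: ~%.0f\n", cards);
    printf("Xeon Phi share of peak capacity: %.0f%%\n", share);
    return 0;
}
```

That ballpark also squares with the build-out still being under way during our visit, when just over 2,000 cards were in place.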
But Stampede is not just thousands of Xeon E5 CPUs and Xeon Phi co-processors. It also features 128 Nvidia Tesla K20s for remote visualization, along with 16 servers, each with 1 TB of memory and two GPUs, for large data analysis. Clearly, there's a lot more that goes into a supercomputer than raw compute potential.