In any system with multiple nodes, enabling effective communication between them is a challenge. TACC's Stampede relies on Mellanox's 56 Gb/s FDR InfiniBand interconnects, which offer remote direct memory access (RDMA) and low latency. Fiber cables from the servers converge on an integrated switch in each rack.
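As a rough sketch of what that 56 Gb/s figure means in practice: FDR InfiniBand signals at 14.0625 Gb/s per lane over four lanes with 64b/66b line coding, so the usable data rate is a bit below the marketed number. The per-lane rate and encoding overhead here are standard FDR parameters, not figures taken from TACC.

```python
# Back-of-the-envelope: effective data rate of a 4x FDR InfiniBand link.
# Assumptions (standard FDR specs, not from the article): 14.0625 Gb/s
# per lane, 4 lanes, 64b/66b line coding.
LANE_RATE_GBPS = 14.0625   # raw signaling rate per lane
LANES = 4                  # 4x link width
ENCODING = 64 / 66         # 64b/66b coding: 64 data bits per 66 line bits

raw = LANE_RATE_GBPS * LANES   # ~56.25 Gb/s, marketed as "56 Gb/s"
effective = raw * ENCODING     # usable payload rate after encoding

print(f"raw: {raw:.2f} Gb/s, effective: {effective:.2f} Gb/s")
```

So each link carries roughly 54.5 Gb/s of actual data, before any protocol overhead above the physical layer.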
Core switches tie everything together in the supercomputer. Here we see a Mellanox switch still being populated. Remember, Stampede is still under construction.
In case you're wondering, that large-radius bend keeps the fiber optic cable from kinking. With 75 miles' worth of cabling, troubleshooting a problematic connection is no simple proposition.
Fully populated, the switches look like this:
Stampede Data Storage
Those 2.5" drives found in each node don't have the capacity to hold the huge data sets a supercomputer works on, so dedicated storage nodes are folded into the mix.
I was expecting to see rows of hot-swap drive trays, and was taken aback by the view above (at least until I was shown how the drives are configured).
Conventional 3.5" drives are arranged two-wide and eight-deep to provide more than 14 PB of Lustre storage to go along with the 270 TB of RAM. These shelves can be pulled out for drive swaps without disconnecting cables. It is actually a very elegant solution.
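To put that 14 PB in perspective, a quick back-of-the-envelope estimate shows why the shelves are so densely packed. The 3 TB drive capacity below is an assumption on my part (typical for conventional 3.5" drives at the time), not a figure from TACC; only the two-wide, eight-deep shelf layout comes from the article.

```python
# Hedged estimate: drives and shelves needed for ~14 PB of Lustre storage.
# The 3 TB drive size is an assumption (common for 3.5" drives circa 2012).
TOTAL_PB = 14
DRIVE_TB = 3
DRIVES_PER_SHELF = 2 * 8           # two-wide by eight-deep, per the article

total_tb = TOTAL_PB * 1000         # decimal units: 1 PB = 1000 TB
drives = -(-total_tb // DRIVE_TB)  # ceiling division
shelves = -(-drives // DRIVES_PER_SHELF)

print(f"~{drives} drives across ~{shelves} shelves")
```

Under those assumptions, the system needs on the order of 4,700 drives spread across roughly 300 shelves, which is why pull-out shelves that swap drives without disconnecting cables matter so much.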
Thanks to TACC and the University of Texas at Austin for graciously hosting the event and letting the press run all over Stampede.
- Introducing Intel Xeon Phi
- Back To Larrabee: Starting The Many Core Revolution
- Intel Xeon Phi Architecture
- Intel Xeon Phi Hardware
- Intel Xeon Phi Performance
- The Value Proposition Of Xeon Phi: Optimization
- TACC's Stampede Supercomputer: Xeon Phi In The Field
- TACC's Stampede Supercomputer: Xeon Phi In The Field, Continued
- A Look Into The Competition