Intel's Xeon Phi launch needs to be taken into context. There are many trends that are converging on the big data analytics space.
Nvidia is going in a couple of interesting directions pertinent to our discussion today. First up is its Tegra 3 (and, by extension, upcoming versions of Tegra). One of the platform's biggest attractions is its four (plus one) ARM cores and GPU in a low-power package; consumption stays minimal even on a no-longer-current 40 nm process. When we look at examples like the Barcelona Supercomputing Center's Mont-Blanc Project, it becomes clear that there are opportunities to create powerful clusters using power-optimized hardware. Nvidia will, however, need to enable ECC support on those cores.
Of course, as we see from ORNL's Titan supercomputer, Nvidia's push for CUDA adoption yields significant results, particularly when any extra development cost is outweighed by performance gains. Should Nvidia continue demonstrating the benefits of its architecture and platform, it'll have an easier time convincing ISVs to jump on board.
AMD's portfolio of HPC-capable components is perhaps the most interesting, if only because of its diversity.
To begin, there's Socket G34, well known for enabling low-cost quad-processor configurations. Cray uses Socket G34 heavily, and in fact deployed it in the Titan supercomputer. Of course, Titan employs Opteron 6200-series CPUs, so there's yet another upgrade path should Opteron 6300-series chips show enough promise.
There's also a compelling GPU-oriented compute story. On the desktop, Fusion-based APUs already demonstrate the potential of x86 cores and graphics processing resources on the same die. Given the modular design of AMD's Bulldozer architecture, it's not hard to imagine the company replacing each module's shared floating-point unit with shader cores for OpenCL-optimized applications.
Moving in the other direction, AMD countered some of Intel's cloud computing-specific advantages with the acquisition of SeaMicro, and will be embedding its Freedom Fabric into an upcoming generation of server processors with 64-bit ARM cores.
HP and Dell are very large players in the HPC space. Stampede is largely made up of Dell servers, for instance. Both HP and Dell have relationships with many vendors, and therefore access to many technologies. Both are exploring ARM for cloud computing (and HPC), much like AMD.
In the case of Stampede, Dell's design was very much based on Intel's hardware. With that said, both Dell and HP have AMD and ARM in parts of their portfolios, and enjoy huge sales networks to use whichever company's technologies they want.
No discussion of supercomputing is complete without mentioning IBM. After all, Sequoia took the top spot on the Top500 list, achieving 16.32 petaFLOPS on IBM's PowerPC architecture.
In developing the Cell architecture, made famous by its use in the PlayStation 3 but also key to another number-one finish on the Top500 list back in 2008, IBM embraced the idea of using smaller, specialized cores to increase compute efficiency.
Facing PowerPC, 64-bit ARM, and graphics-oriented hardware, Intel's advocacy of x86 makes sense (at least for its own business purposes). Xeon Phi's value is that it allows developers to spend less time learning new languages and tools. Instead, there's a many-core x86-based solution that works in much the same way as Intel's CPUs, at least from a programming standpoint.
- Introducing Intel Xeon Phi
- Back To Larrabee: Starting The Many Core Revolution
- Intel Xeon Phi Architecture
- Intel Xeon Phi Hardware
- Intel Xeon Phi Performance
- The Value Proposition Of Xeon Phi: Optimization
- TACC's Stampede Supercomputer: Xeon Phi In The Field
- TACC's Stampede Supercomputer: Xeon Phi In The Field, Continued
- A Look Into The Competition