Xilinx Pairs AMD's EPYC With Four FPGAs At Supercomputing 2017

AMD made quite the splash at Supercomputing 2017, with a range of EPYC server solutions on display both at its own booth and at several third-party vendors'. We found one of the most interesting demos at Xilinx's booth, where the company paired a single-socket EPYC server platform with four of its beefy FPGAs.

The demonstration featured four Xilinx cards, each delivering up to 21 TOPS of 8-bit integer throughput. Each card features 64GB of onboard DDR4 memory and a VU9P Virtex UltraScale+ FPGA with close to 2.5 million configurable logic cells. Xilinx claims a single dual-slot, full-length, full-height card can deliver 10-100x the performance of a standard CPU while drawing 225W. Cram four of them into a 2U server chassis and you have a processing beast.
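The chassis-level numbers fall out of simple arithmetic on the quoted card specs. A quick sketch (the per-card figures are from the article; the totals and efficiency figure are derived, not vendor-published):

```python
# Back-of-the-envelope math on the quoted card specs; derived values
# (chassis total, TOPS/W) are arithmetic, not vendor-published figures.
TOPS_PER_CARD = 21     # peak INT8 tera-operations per second (quoted)
WATTS_PER_CARD = 225   # board power (quoted)
CARDS = 4              # cards in the 2U demo chassis

total_tops = TOPS_PER_CARD * CARDS
tops_per_watt = TOPS_PER_CARD / WATTS_PER_CARD

print(f"Chassis INT8 peak: {total_tops} TOPS")             # 84 TOPS
print(f"Efficiency per card: {tops_per_watt:.3f} TOPS/W")  # 0.093 TOPS/W
```

That roughly 0.09 TOPS/W figure is the kind of performance-per-watt number that gets compared against GPU accelerators in data-center planning.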

The single-socket server space is pining for an alternative that delivers copious connectivity, and EPYC's 128 PCIe 3.0 lanes pair nicely with platforms that feature multiple PCIe-connected accelerators, be they FPGAs or GPUs. Each accelerator type has its own advantages and disadvantages. FPGAs can be reprogrammed on the fly and strip out many of the functions found on GPUs that they don't need, which reduces latency and boosts performance. GPUs still tend to handle the heavy lifting for many workloads, however, because FPGAs are limited to integer inference or deliver lower floating-point performance than GPUs.

But FPGAs also tend to deliver superior performance-per-watt, which is a crucial consideration in large deployments like data centers. These systems are designed for machine learning, data analytics, genomics, and live video transcoding workloads.

Scalability is a key requirement for many applications, so networking becomes another important consideration. AMD's EPYC provides enough lanes to host these four PCIe 3.0 x16 FPGAs and still leave an additional 64 PCIe lanes available for other devices, such as network adapters.
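The lane budget works out cleanly. A minimal sketch (the 128-lane total and four x16 slots come from the article; the slot layout is illustrative):

```python
# EPYC single-socket PCIe lane budget as described in the article;
# the slot list is an illustrative layout, the totals come from the text.
TOTAL_LANES = 128              # EPYC single-socket PCIe 3.0 lanes
fpga_slots = [16, 16, 16, 16]  # four x16 FPGA cards

consumed = sum(fpga_slots)
remaining = TOTAL_LANES - consumed

print(f"Lanes consumed by FPGAs: {consumed}")       # 64
print(f"Lanes left for NICs/storage: {remaining}")  # 64
```

Sixty-four leftover lanes is enough for several high-speed NICs plus NVMe storage, which is the point of the single-socket pitch.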

Memory capacity and performance are also limiting factors for many workloads, so EPYC's 145GBps of memory bandwidth, eight DDR4 channels, and 2TB of memory capacity in a single-socket server are a good fit for many diverse workloads. The company also undercuts Intel's equivalents with similar core counts on price, and those Intel parts can't deliver the same amount of connectivity in a single-socket solution. Up to 32 Zen cores and 64 threads also provide plenty of CPU horsepower for parallelized workloads.
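The 2TB figure follows from EPYC's memory topology. A sketch of the arithmetic, assuming the commonly published SP3 configuration of two DIMMs per channel with 128GB modules (the article quotes only the channel count and the total):

```python
# How the 2TB capacity falls out of the memory topology. The eight-channel
# figure is from the article; two DIMMs per channel and 128GB LRDIMMs are
# assumptions based on the commonly published SP3 platform configuration.
CHANNELS = 8            # DDR4 channels per socket (quoted)
DIMMS_PER_CHANNEL = 2   # assumption
DIMM_CAPACITY_GB = 128  # assumption: 128GB LRDIMMs

total_tb = CHANNELS * DIMMS_PER_CHANNEL * DIMM_CAPACITY_GB / 1024
print(f"Max capacity per socket: {total_tb:.0f}TB")  # 2TB
```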

  • redgarl
AMD knew that PCIe lanes were a big thing for data centers using GPUs or other compute processors. I wonder if AMD could take a good part of the market for these systems from Intel.
  • 0VERL0RD
    Should be 32 Zen cores, 64 threads!
  • berezini.2013
Redgarl, Intel solved the networking issue and has a multitude of solutions on that front that differ from AMD's lane tactics.