Today, AMD unveiled the world's first and fastest 7nm data center GPUs at its Next Horizon event. The Radeon Instinct MI60 and MI50 accelerators were built to tackle the most demanding workloads, such as deep learning, high-performance computing, cloud computing and rendering.
AMD Radeon Instinct MI60 and MI50 Specs
| Specification | Instinct MI60 | Instinct MI50 |
|---|---|---|
| Compute Units | 64 | 60 |
| Stream Processors | 4,096 | 3,840 |
| Peak Half Precision (FP16) Performance | 29.5 TFLOPS | 26.8 TFLOPS |
| Peak Single Precision (FP32) Performance | 14.7 TFLOPS | 13.4 TFLOPS |
| Peak Double Precision (FP64) Performance | 7.4 TFLOPS | 6.7 TFLOPS |
| Peak INT8 Performance | 58.9 TOPS | 53.6 TOPS |
| Memory Size | 32GB | 16GB |
| Memory Type | HBM2 | HBM2 |
| Memory Bandwidth | 1,024GB/s | 1,024GB/s |
The Radeon Instinct MI60 and MI50 accelerators are based on AMD's Vega 20 GPU architecture and produced on TSMC's 7nm FinFET manufacturing process. In addition to being, by AMD's account, the first 7nm GPUs on the market, the MI60 and MI50 are also the first to exploit PCI-SIG's latest PCIe 4.0 x16 interface, which delivers up to 31.51GB/s of bandwidth per direction.
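The 31.51GB/s figure falls straight out of the PCIe 4.0 link parameters. A minimal sketch, assuming the standard 16 GT/s per-lane transfer rate and 128b/130b line encoding of PCIe 4.0:

```python
# Per-direction bandwidth of a PCIe 4.0 x16 link.
# Each lane runs at 16 GT/s; 128b/130b encoding means 128 of every
# 130 transferred bits are payload.
lanes = 16
transfer_rate_gts = 16      # GT/s per lane (PCIe 4.0)
encoding = 128 / 130        # 128b/130b encoding efficiency

bandwidth_gbps = lanes * transfer_rate_gts * encoding / 8  # bits -> bytes
print(f"{bandwidth_gbps:.2f} GB/s")  # prints "31.51 GB/s"
```

For comparison, PCIe 3.0 runs at 8 GT/s per lane, so an x16 link tops out at roughly half this figure.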
From the outside, the Instinct MI60 and MI50 share an identical design. The accelerators measure 267mm in length and occupy two PCI slots. They rely on a passive cooling solution and don't have any cooling fans.
The MI60 is equipped with 64 supercharged compute units and 4,096 stream processors. It features a 1,800MHz peak engine clock and 32GB of HBM2 ECC (error-correcting code) memory clocked at 1GHz across a 4,096-bit memory interface. The GPU has a TDP (thermal design power) rated at 300W and draws power from one 6-pin and one 8-pin PCIe power connector.
The MI50 is no slacker either. It sports 60 compute units and 3,840 stream processors. The MI50 clocks in at 1,747MHz and comes with 16GB of HBM2 ECC memory operating at 1GHz across the same 4,096-bit memory bus as the MI60. Like the MI60, the MI50 also has a 300W TDP and relies on a combination of one 6-pin and one 8-pin PCIe power connector.
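These specs line up with the quoted peak throughput and bandwidth numbers. A back-of-the-envelope sketch, assuming the usual 2 FLOPS per stream processor per clock (one fused multiply-add), double-rate FP16 and half-rate FP64 on Vega 20, and double-data-rate HBM2:

```python
# Peak throughput = stream processors * clock * FLOPS per clock.
def peak_tflops(stream_processors, clock_mhz, flops_per_clock=2):
    return stream_processors * clock_mhz * 1e6 * flops_per_clock / 1e12

mi60_fp32 = peak_tflops(4096, 1800)   # ~14.7 TFLOPS, matches the table
mi60_fp16 = mi60_fp32 * 2             # ~29.5 TFLOPS (double-rate FP16)
mi60_fp64 = mi60_fp32 / 2             # ~7.4 TFLOPS (half-rate FP64)
mi50_fp32 = peak_tflops(3840, 1747)   # ~13.4 TFLOPS

# Memory bandwidth: 4,096-bit bus * 2 transfers per clock (DDR) at 1GHz.
bandwidth_gbs = 4096 * 2 * 1e9 / 8 / 1e9  # = 1024 GB/s
```

The same formula explains why the MI50 trails the MI60 by roughly 9 percent: it has 6 percent fewer stream processors and a 3 percent lower peak clock.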
Both GPUs are outfitted with two Infinity Fabric Links. AMD's Infinity Fabric Link technology allows enterprise customers to connect up to two clusters of four GPUs in a single server to achieve peer-to-peer GPU communication speeds of around 200GB/s, which is up to six times faster than a single PCIe 3.0 interface.
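The "six times faster" claim is plausible on paper. A rough sanity check, assuming AMD is comparing against the bidirectional bandwidth of a single PCIe 3.0 x16 link (8 GT/s per lane, 128b/130b encoding):

```python
# PCIe 3.0 x16: 16 lanes at 8 GT/s with 128b/130b encoding.
pcie3_per_direction = 16 * 8 * (128 / 130) / 8   # ~15.75 GB/s
pcie3_bidirectional = pcie3_per_direction * 2    # ~31.5 GB/s

# Infinity Fabric Link peer-to-peer speed vs. a PCIe 3.0 x16 link.
speedup = 200 / pcie3_bidirectional              # ~6.3x
```

That works out to roughly a 6x advantage, consistent with AMD's marketing figure.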
AMD will launch the Instinct MI60 and MI50 accelerators on November 18. The chipmaker has yet to disclose the pricing for either model.