AMD announced a range of new products today at its Data Center and AI Technology Premiere event in San Francisco, California. The company finally shared more details about its 5nm EPYC Bergamo processors for cloud native applications, and the chips are shipping to customers now.
AMD also announced its Instinct MI300 processors, which feature 3D-stacked CPU and GPU cores on the same package with HBM, along with a new GPU-only MI300X model; eight of those accelerators can be combined onto a single platform that wields an incredible 1.5TB of HBM3 memory. AMD also announced its EPYC Genoa-X processors with up to 1.1GB of L3 cache. All three of these products are available now, but AMD also has its EPYC Sienna processors for telco and the edge coming in the second half of 2023.
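As a back-of-the-envelope check on that headline figure, dividing the platform's pooled HBM3 capacity across its eight accelerators implies the per-GPU memory capacity. A minimal sketch using only the numbers above:

```python
# Sanity check: 1.5TB of HBM3 pooled across eight MI300X
# accelerators implies the per-accelerator capacity.
TOTAL_HBM3_GB = 1.5 * 1024  # 1.5TB platform total, in GB
ACCELERATORS = 8

per_gpu_gb = TOTAL_HBM3_GB / ACCELERATORS
print(per_gpu_gb)  # 192.0 -> 192GB of HBM3 per accelerator
```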
AMD EPYC Bergamo
AMD's 128-core EPYC Bergamo processors are the industry's first x86 cloud native CPUs, designed for the highest core density with an optimized Zen 4c core that halves the area needed for each core. These chips will compete with Intel's 144-core Sierra Forest chips, which mark the debut of Intel's Efficiency cores (E-cores) in its Xeon data center lineup, and Ampere's 192-core AmpereOne processors, not to mention the custom silicon being developed or employed by Google and Microsoft.
All of these offerings are designed to maximize power efficiency for highly-parallel and latency-tolerant workloads. Examples include high-density VM deployments, data analytics, and front-end web services. The chips offer higher core counts than standard data center solutions, with a lower frequency and power envelope.
AMD's Bergamo has 128 cores and drops into server platforms that utilize the same SP5 socket as the standard 96-core EPYC Genoa processors. Like its standard counterpart, Bergamo supports 12-channel DDR5-4800 memory. AMD forges the chips by pairing compute chiplets built on Zen 4c cores with the company's existing 'Floyd' central I/O die, thus tying the compute chiplets to a memory and I/O chiplet based on an older process node.
| Model | Cores / Max Threads | Base / Boost (GHz) | Default TDP | L3 Cache |
|---|---|---|---|---|
| EPYC 9754 | 128 / 256 | 2.25 / 3.1 | 360W | 256 MB |
| EPYC 9754S | 128 / 128 | 2.25 / 3.1 | 360W | 256 MB |
| EPYC 9734 | 112 / 224 | 2.2 / 3.0 | 320W | 256 MB |
For now, AMD has announced the three Bergamo processors above: the EPYC 9754 with 128 cores and 256 threads, the EPYC 9754S with SMT disabled (128 cores and 128 threads), and the EPYC 9734 with 112 cores and 224 threads. The latter has two cores per CCD disabled. Most of the remaining specs are the same, so the 9734 still has the full 16MB of L3 cache per CCX and 256MB of L3 cache total. AMD claims a 2.7X increase in energy efficiency with the Bergamo chips.
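The core and cache totals fall out of the chiplet arithmetic: each Zen 4c CCD carries 16 cores split into two 8-core CCXs, each CCX with 16MB of L3, and Bergamo uses eight CCDs. A quick sketch using those figures:

```python
# Bergamo chiplet math: eight Zen 4c CCDs, each with 16 cores
# arranged as two 8-core CCXs, each CCX carrying 16MB of L3.
CCDS = 8
CORES_PER_CCD = 16
CCX_PER_CCD = 2
L3_PER_CCX_MB = 16

total_cores = CCDS * CORES_PER_CCD                 # 128 cores (EPYC 9754)
total_l3_mb = CCDS * CCX_PER_CCD * L3_PER_CCX_MB   # 256 MB of L3

# The EPYC 9734 disables two cores per CCD but keeps all the L3.
epyc_9734_cores = CCDS * (CORES_PER_CCD - 2)       # 112 cores

print(total_cores, total_l3_mb, epyc_9734_cores)
```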
AMD shared a few broad strokes about the Bergamo architecture, including that it has a core + L3 cache area of 2.48mm², which is 35% smaller than the 3.84mm² that it achieved on the same process node with the standard Zen 4 cores. AMD employs eight 16-core CCDs to reach the peak core count of 128 cores.
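AMD's 35% figure checks out against the two quoted areas. A minimal sketch of the arithmetic, using only the numbers AMD shared:

```python
# Comparing AMD's quoted core + L3 footprints on the same process node.
ZEN4C_MM2 = 2.48  # Zen 4c core + L3 slice
ZEN4_MM2 = 3.84   # standard Zen 4 core + L3 slice

reduction = 1 - ZEN4C_MM2 / ZEN4_MM2
print(f"{reduction:.1%}")  # 35.4%, matching AMD's ~35% claim
```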
It's also interesting to note that, at present, AMD pairs just eight Zen 4c chiplets with the central I/O chiplet, whereas the standard EPYC chips use up to twelve Zen 4 chiplets. Could we see a future Zen 4c design with twelve chiplets and 192 cores? Perhaps, though AMD hasn't announced such a design yet, so we'll have to wait and see.
We're learning more deep-dive architectural details of the chips today; stay tuned for further coverage.