At ISSCC 2023 earlier this week, AMD discussed the future of computing over the next decade. CEO Dr. Lisa Su was the lead presenter and showed that AMD has delivered strong gains in supercomputer, server, and GPU performance over the last few decades. However, probably more interesting are the well-crafted plans showing how AMD aims to keep the pedal to the metal and use advanced technologies to counteract the tapering off of semiconductor process shrink benefits.
In the above performance slides, AMD claims it has successfully doubled mainstream server performance every 2.4 years since 2009. However, it doesn't share any projections for this market. AMD is confident enough to peer further into the future with its GPU performance trends (slide 2 in the gallery above). Here you can see that it claims to have doubled GPU performance every 2.2 years since 2006. The chart shows this trend continuing until at least 2025.
AMD's supercomputer performance shows the most impressive advances over time, with the last chart above showing that, since the late 90s, AMD processors have been instrumental in doubling supercomputer performance every 1.2 years. Moreover, AMD predicts we will reach zettascale supercomputer performance in approximately a decade. AMD also took time to highlight efficiency gains, and the intense battle to keep Moore's Law alive as logic density scaling tapers off.
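A quick back-of-the-envelope check shows how AMD's zettascale estimate follows from its claimed doubling rate. This sketch assumes exascale (reached by Frontier in 2022) as the starting point, which is our assumption, not something stated in the presentation:

```python
import math

# AMD's claim: supercomputer performance doubles every 1.2 years.
doubling_period_years = 1.2

# ZettaFLOPS is 1,000x exaFLOPS (our assumed 2022 starting point).
speedup_needed = 1000

doublings = math.log2(speedup_needed)      # ~10 doublings for a 1,000x gain
years = doublings * doubling_period_years  # ~12 years at the claimed rate

print(f"{doublings:.1f} doublings, ~{years:.0f} years")
```

Roughly ten doublings at 1.2 years each lands around 12 years out, broadly consistent with AMD's "approximately a decade" projection.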
Generously, AMD laid out some of the key plans that will help it drive forward with efficiency and performance gains over the coming decade. Advanced packaging is going to be a strong driver of both performance and efficiency, according to AMD. We have already seen some of the results of AMD traveling down this path with its use of chiplets and 3D V-Cache, and that work will continue.
Some advanced packaging avenues to be explored include the integration of 3D CPU and GPU silicon "for next-level efficiency." Additionally, AMD reckons that "even tighter integration of compute and memory" will result in higher bandwidth at lower power. AMD will also target processing in memory. A slide shared at ISSCC showed a processor with an HBM module stacked on top. AMD says that if key algorithmic kernels can be executed in memory, it takes significant burden and latency out of the system.
Another big target for efficiency savings, and thus potential performance boosts, is chip I/O and communications. Specifically, optical communications tightly integrated with the compute die are expected to provide a worthwhile efficiency boost.
AMD also took some time to boast about the AI performance gains its processor portfolio has delivered over the last decade. The presentation discussed some use cases for AI computing and highlighted the potential performance gains AI can deliver for simulations.
Of course, AMD isn't alone in looking at the benefits of advanced packaging, chiplets, die stacking, in-memory computing, optical computing, and AI acceleration. It is good, though, to see that it has solid plans to compete fiercely with its rivals and to produce ever faster and more efficient chips for PC enthusiasts.