

AMD Advancing AI Event Live Blog: Instinct MI300 Launch, Ryzen 8000 "Hawk Point" Expected

AMD takes the Nvidia bull by the horns.

AMD
(Image: © AMD)

Also check out our extended deep-dive coverage:

AMD unveils Instinct MI300X GPU and MI300A APU, claims up to 1.6X lead over Nvidia’s competing GPUs

The refresh that wasn’t - AMD announces ‘Hawk Point’ Ryzen 8040 Series, teases Strix Point

AMD CEO Lisa Su will take to the stage here in San Jose, California, to share the company's latest progress on enabling AI from the cloud to the edge and PCs. The show begins today, December 6, at 10am PT, and we're here to provide live event coverage. 

AMD says it will reveal its Instinct MI300 accelerators at the event. All signs point to these coming as both a GPU and a blended CPU+GPU product (APU), both designed to challenge Nvidia's dominance in the AI market. 

Make no mistake: the Instinct MI300 is a game-changing design - the data center APU blends a total of 13 chiplets, many of them 3D-stacked, to create a chip with twenty-four Zen 4 CPU cores fused with a CDNA 3 graphics engine and eight stacks of HBM3. Overall, the chip weighs in at 146 billion transistors, making it the largest chip AMD has pressed into production. 

If you're more interested in the latest PC technology, AMD is also expected to unveil its "Hawk Point" Ryzen 8000 mobile series of chips. Rumors point to these chips sharing many characteristics with their predecessors, but with targeted enhancements that deliver more performance. They are the follow-on to AMD's Ryzen 7040 series, the first PC chips to launch with a dedicated NPU for AI processing, so we think there's a good chance these enhanced models will debut at the show.

Pull up a seat; the show starts shortly. 

LIVE: Latest Updates


AMD has begun displaying its cautionary statements on the screen, so the show is about to start. 

AMD CEO Lisa Su has taken the stage. She opened the presentation by reminiscing about the launch of ChatGPT just one year ago and the explosive impact it has had on the world. 

AMD Advancing AI event

(Image credit: AMD)

AMD Advancing AI event

(Image credit: AMD)

Generative AI will require significant investments to meet the needs of training and inference workloads. One year ago, AMD predicted a $150 billion TAM for AI workloads by 2027. Now AMD has revised that estimate upward to $400 billion by 2027. 

AMD Advancing AI event

(Image credit: AMD)

AMD Advancing AI event

(Image credit: AMD)

AMD is currently focusing on tearing down the barriers to AI adoption and cooperating with its partners to develop new solutions. 

AMD Advancing AI event

(Image credit: AMD)

Lisa Su said that the availability of GPU hardware is the biggest barrier to adoption, and the company is helping address that with the launch of its Instinct MI300 accelerators. The new CDNA 3 architecture delivers huge performance gains on multiple fronts. 

AMD Advancing AI event

(Image credit: AMD)

The MI300X packs 153 billion transistors, 128 channels of HBM3, fourth-gen Infinity Fabric, and eight CDNA 3 GPU chiplets. 

AMD Advancing AI event

(Image credit: AMD)

As a recap, the data center APU version of MI300 blends a total of 13 chiplets, many of them 3D-stacked, pairing twenty-four Zen 4 CPU cores with a CDNA 3 graphics engine and eight stacks of HBM3 for a total of 146 billion transistors, the largest chip AMD has pressed into production.

AMD Advancing AI event

(Image credit: AMD)

AMD claims up to 1.3X more performance than Nvidia's H100 GPUs in certain workloads. The slide above outlines the claimed performance advantages. 

AMD Advancing AI event

(Image credit: AMD)

AMD Advancing AI event

(Image credit: AMD)

Scalability is incredibly important -- performance needs to increase linearly as more GPUs are employed. Here, AMD shows its eight-GPU MI300X platform matching Nvidia's eight-GPU H100 HGX system. 
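
If you want a feel for what that kind of multi-GPU scaling work looks like from the software side, here's a rough sketch of an eight-GPU all-reduce timing check in PyTorch. It assumes a ROCm build of PyTorch (where the "nccl" backend name routes to AMD's RCCL library and the accelerators appear as ordinary devices); it is not AMD's own benchmark code.

```python
# Rough multi-GPU communication timing sketch. Launch with:
#   torchrun --nproc_per_node=8 scaling_check.py
# Assumes a ROCm build of PyTorch, where backend="nccl" maps to RCCL.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # All-reduce 1 GiB of float32 across every GPU; re-running with more GPUs
    # gives a crude view of how close collective scaling is to linear.
    x = torch.ones(256 * 1024 * 1024, device="cuda")
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    dist.all_reduce(x)
    end.record()
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"all-reduce across {dist.get_world_size()} GPUs: "
              f"{start.elapsed_time(end):.1f} ms")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```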

AMD Advancing AI event

(Image credit: AMD)

The MI300X delivers performance parity with Nvidia in training but exhibits its strongest advantages in inference, where AMD highlights a 1.6X lead. 

AMD Advancing AI event

(Image credit: AMD)

Microsoft CTO Kevin Scott has come to the stage to talk with Lisa Su about the challenges of building out AI infrastructure. 

AMD

(Image credit: AMD)

While they talk, here are some more details about MI300. 

AMD Advancing AI event

(Image credit: AMD)

Microsoft will have MI300X cloud instances available in preview today. 

AMD Advancing AI event

(Image credit: AMD)

Lisa Su displayed the AMD Instinct MI300X platform. 

AMD Advancing AI event

(Image credit: AMD)

AMD Advancing AI event

(Image credit: AMD)

AMD CTO Victor Peng has come to the stage to talk about the latest advances in ROCm, AMD's open-source competitor to Nvidia's CUDA.

AMD Advancing AI event

(Image credit: AMD)

AMD Advancing AI event

(Image credit: AMD)

Peng talked about the advantages of the open ROCm ecosystem, as opposed to Nvidia's proprietary approach. 

AMD Advancing AI event

(Image credit: AMD)

AMD's next-gen ROCm 6 is launching later this month. Radeon GPU support continues, and it adds new optimizations for MI300. 
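
For anyone wondering what that support looks like from the developer's seat: on a ROCm build of PyTorch, AMD GPUs show up through the same torch.cuda interface CUDA users already know. A minimal sketch, assuming such a build is installed (this isn't AMD's demo code):

```python
# Minimal check that a ROCm build of PyTorch sees the AMD GPU.
# On ROCm builds, torch.version.hip is set and the GPU is addressed
# through the familiar "cuda" device alias.
import torch

print(torch.version.hip)              # ROCm/HIP version string (None on CUDA builds)
print(torch.cuda.is_available())      # True if a supported AMD GPU is visible
print(torch.cuda.get_device_name(0))  # e.g. the Instinct or Radeon part in the box

x = torch.randn(4096, 4096, device="cuda")  # lands on the AMD GPU under ROCm
y = x @ x.T
print(y.shape)
```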

AMD Advancing AI event

(Image credit: AMD)

ROCm 6 provides up to a 2.6X improvement in vLLM, among other optimizations that AMD says add up to an 8X improvement for MI300X on ROCm 6 versus MI250X on ROCm 5 (not an apples-to-apples comparison). 
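
For context, vLLM is an open-source LLM serving engine, and its offline API is only a few lines. The sketch below is purely illustrative (it assumes a ROCm-enabled vLLM build, and the model name is just a small example), not the benchmark behind AMD's 2.6X figure.

```python
# Illustrative vLLM offline-inference sketch (assumes a ROCm-enabled build
# of vLLM; facebook/opt-125m is just a small example model).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The biggest bottleneck in LLM inference is"], params)
for out in outputs:
    print(out.outputs[0].text)
```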

AMD Advancing AI event

(Image credit: AMD)

AMD Advancing AI event

(Image credit: AMD)

AMD continues to work with industry stalwarts like Hugging Face and PyTorch to expand the open source ecosystem. 
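
In practice, that ecosystem work is what lets a stock Hugging Face pipeline run on an AMD GPU without code changes. A rough sketch, assuming a ROCm build of PyTorch underneath and using a small public model purely as an example:

```python
# Hugging Face transformers pipeline on an AMD GPU (assumes a ROCm build
# of PyTorch; device=0 selects the first GPU, exposed as "cuda:0" under ROCm).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2", device=0)
result = generator("Open software ecosystems matter because", max_new_tokens=30)
print(result[0]["generated_text"])
```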

AMD Advancing AI event

(Image credit: AMD)

AMD GPUs, including the MI300, will be supported in the standard Triton distribution starting with version 3.0. 
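
Triton kernels are written once in Python and compiled per backend, which is why upstream AMD support matters. Here's the canonical vector-add kernel as a sketch; the same source should compile for an AMD GPU through the ROCm backend once that support lands, per AMD's statement above.

```python
# Canonical Triton vector-add kernel. The point of upstream AMD support is
# that this same Python source targets AMD GPUs with no CUDA-specific changes.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.rand(1 << 20, device="cuda")  # "cuda" maps to the AMD GPU on ROCm builds
y = torch.rand(1 << 20, device="cuda")
assert torch.allclose(add(x, y), x + y)
```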

AMD Advancing AI event

(Image credit: AMD)

Peng is now talking with leaders from Databricks, Essential AI, and Lamini. 

AMD Advancing AI event

(Image credit: AMD)

The talk has turned to different forms of AI, and possible evolutionary updates in the future. 

AMD

(Image credit: AMD)

Here are some of the specifications of AMD's new Instinct MI300X platform. It packs eight MI300X accelerators into a single system, supports 400 GbE networking, and has a monstrous 1.5TB of total HBM3 capacity. 
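
The 1.5TB figure checks out against the per-card spec: each MI300X carries 192GB of HBM3, and eight of them land at 1,536GB.

```python
# Sanity check on the platform's memory spec: eight MI300X cards at 192 GB
# of HBM3 each (AMD's per-card figure) total the quoted ~1.5 TB.
gpus, hbm3_gb_per_gpu = 8, 192
total_gb = gpus * hbm3_gb_per_gpu
print(total_gb, "GB =", total_gb / 1024, "TB")  # 1536 GB = 1.5 TB
```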

AMD Advancing AI event

(Image credit: AMD)

62,000 AI models run on the Instinct lineup today, and many more will run on the MI300X. Peng says the arrival of ROCm 6 heralds the inflection point for the broader adoption of AMD's software. 

AMD Advancing AI event

(Image credit: AMD)

Lisa Su has returned to the stage and invited Ajit Mathews, Senior Director of Engineering at Meta, to join her. 

AMD Advancing AI event

(Image credit: AMD)

Meta feels that an open source approach to AI is the best path forward for the industry. 

Meta has been benchmarking ROCm and working to build its support in PyTorch for several years. Meta will deploy Instinct MI300X GPUs in its data centers. 

AMD Advancing AI event

(Image credit: AMD)

AMD is working to bring integrated AI solutions to market for enterprises, a lucrative portion of the market. 

Arthur Lewis, President of Dell's Core Business Operations, Global Infrastructure Solutions Group, has come to the stage to talk about the company's partnership with AMD. 

AMD Advancing AI event

(Image credit: AMD)

Dell has added AMD's MI300X to its portfolio, offering PowerEdge servers with eight of the GPUs inside.