AMD released instructions for running DeepSeek on Ryzen AI CPUs and Radeon GPUs
DeepSeek R1 can now be run on AMD's latest consumer-based hardware.

AMD has published instructions for running DeepSeek’s R1 AI model on its AI-accelerated Ryzen AI and Radeon products, making it easy for users to run the new chain-of-thought model locally on their PCs. Several distilled R1 models are compatible with RX 7000-series desktop GPUs and select Ryzen CPUs with XDNA NPUs, though they require the optional Adrenalin 25.1.1 driver to run.
The guide covers everything AMD users need to get DeepSeek R1 running on a supported machine. LM Studio offers a one-click installer tailor-made for Ryzen AI, which is the installation method AMD recommends. AMD also shows how the application should be tuned for its hardware, including a list of the maximum supported LLM parameter counts.
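Once a distilled model is loaded, LM Studio (and Ollama, which commenters below also use) can expose an OpenAI-compatible local server that any script can query. Below is a minimal sketch using only the Python standard library; the port, endpoint path, and model tag are assumptions that depend on your local configuration, not details from AMD's guide:

```python
import json
import urllib.request

# Assumed defaults: LM Studio's local server listens on port 1234, and
# the model tag below is illustrative; it depends on which distill you loaded.
def build_payload(prompt, model="deepseek-r1-distill-qwen-14b"):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_r1(prompt, url="http://localhost:1234/v1/chat/completions"):
    """Send the prompt to the local server and return the reply text."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the server runs entirely on the local machine, nothing in the prompt or response leaves the PC.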
DeepSeek reportedly distilled R1 into "highly capable" smaller models only recently, making them small enough to run on consumer hardware. For context, the full DeepSeek-V3 model was originally trained on a cluster of 2,048 Nvidia H800 GPUs.
The maximum supported LLM parameters are based on memory capacity. The RX 7600 XT, 7700 XT, 7800 XT, 7900 GRE, and 7900 XT all support up to “DeepSeek-R1-Distill-Qwen-14B”. The flagship RX 7900 XTX supports up to “DeepSeek-R1-Distill-Qwen-32B”. The RX 7600, with its 8GB of VRAM, supports up to “DeepSeek-R1-Distill-Llama-8B”.
Similarly, Ryzen 8040- and 7040-series mobile APUs equipped with 32GB of RAM, along with the Ryzen AI HX 370 and 365 with 24GB or 32GB, support up to “DeepSeek-R1-Distill-Llama-14B”. The Ryzen AI Max+ 395 supports up to “DeepSeek-R1-Distill-Llama-70B”, but only in its 64GB and 128GB memory configurations; the 32GB configuration supports up to “DeepSeek-R1-Distill-Qwen-32B”.
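These pairings track a simple back-of-the-envelope calculation: parameter count times bytes per weight, plus headroom for the KV cache and activations. The sketch below assumes ~4-bit quantization (about 0.5 bytes per weight) and ~20% overhead; neither figure comes from AMD's guide, but the resulting estimates line up with the capacities above:

```python
# Rough memory estimate for a quantized LLM.
# Assumptions (not from AMD's guide): ~0.5 bytes/weight (4-bit quant)
# plus ~20% overhead for KV cache and activations.
def est_mem_gb(params_billion, bytes_per_weight=0.5, overhead=1.2):
    return params_billion * 1e9 * bytes_per_weight * overhead / 2**30

# 8B  -> ~4.5 GB  (fits the 8GB RX 7600)
# 14B -> ~7.8 GB  (fits the 12GB RX 7700 XT)
# 32B -> ~17.9 GB (fits the 24GB RX 7900 XTX)
# 70B -> ~39.1 GB (needs the 64GB or 128GB Ryzen AI Max+ 395)
for b in (8, 14, 32, 70):
    print(f"{b}B: ~{est_mem_gb(b):.1f} GB")
```

Heavier quantization or a longer context window shifts these numbers, which is why the supported-model lists are conservative cutoffs rather than hard limits.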
DeepSeek’s new AI model has taken the world by storm, with a computing cost reportedly 11 times lower than that of leading-edge models. Two days ago, it was largely responsible for Nvidia’s record-breaking $589 billion market-cap loss. The DeepSeek R1 model achieves that claimed 11X efficiency uplift through extreme optimization, including the use of Nvidia’s assembly-like Parallel Thread Execution (PTX) programming for much of the performance gain.
Nvidia and AMD GPUs aren’t the only GPUs that can run R1; Huawei has already implemented DeepSeek support into its Ascend AI GPUs, enabling performant AI execution on homegrown Chinese hardware.
Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.
-
whatisit3069
I'm running it on a desktop and a mini PC. The desktop has a 7700X, 64GB of RAM, and a 7800 XT. The mini PC has an 8845HS, 64GB of RAM, and 780M integrated graphics. Both dual-boot and run Ollama; I get slightly better inference performance on Ubuntu. -
systemBuilder_49
ezst036 said:
AMD taking advantage of Nvidia's moment of weakness.
Nvidia is in serious trouble when it comes to AI model execution. Both Apple and AMD are offering compute platforms with up to 128GB of RAM that can execute VERY LARGE AI models. Nvidia cannot touch the price/performance of these machines, and apparently it has no plans to create a competing product anytime soon. It's for this reason that I bought my son a 48GB MacBook Pro M4 Pro laptop: the ability to run larger AI models.
This weakness in Nvidia hardware is also causing Mac Mini sales to skyrocket, because you can put 64GB of RAM into an M4 Pro model and run 64GB models for $2,699 that the 5090 will NEVER run. -
heffeque
systemBuilder_49 said:
Nvidia is in serious trouble when it comes to AI Model execution. Both Apple & AMD are offering compute platforms with up to 128GB of RAM that can execute VERY LARGE AI models. ...
I wanted to buy a 128 GB Strix Halo mini-PC in the coming months... but I'm afraid that with DeepSeek coming out, all of those Strix Halo units will end up in the hands of AI people. -
USAFRet
Good luck with running that:
https://www.tomsguide.com/computing/online-security/deepseek-ai-is-collects-your-keystrokes-and-may-never-delete-them -
DashMad
Admin said:
AMD has provided instructions on how to run DeepSeek R1 on its latest consumer-based Ryzen AI and RX 7000 series CPUs and GPUs.
"Running on consumer grade card"? Oh great, another GPU scarcity on the horizon, just like the mining fad. Prepare for gaming GPUs at double or triple the price. -
wujj123456
USAFRet said:
Good luck with running that:
https://www.tomsguide.com/computing/online-security/deepseek-ai-is-collects-your-keystrokes-and-may-never-delete-them
You got it backwards, or perhaps didn't really understand the article. The privacy issues apply to their apps, website, and other products that link to the privacy policy. Honestly, every AI company collects a similar load of information; it just isn't sent to China, if that matters to you.
However, they distributed their code and weights under the MIT license. What we have here is a local setup that can run entirely offline, which truly eliminates the problem. If privacy is your concern, running open models locally is the only way to go, and that's what this article is about. -
void555
systemBuilder_49 said:
Nvidia is in serious trouble when it comes to AI Model execution. ... apparently they have no plans to create a competing product anytime soon. ...
https://www.tomshardware.com/pc-components/gpus/nvidia-and-mediatek-collaborate-on-3nm-ai-pc-cpu-chip-reportedly-ready-for-tape-out-this-month -
DS426
ezst036 said:
AMD taking advantage of Nvidia's moment of weakness.
Rare... maybe they can keep the momentum going rather than fumble, lol.