OpenClaw-fueled ordering frenzy creates Apple Mac shortage — delivery for high Unified Memory units now ranges from 6 days to 6 weeks
AI is coming for high-end Mac Studios and Mac minis, too.
Some Apple customers have recently been surprised by long order lead times on several Mac models with upgraded Unified Memory, delays that appear to be driven in large part by the immense popularity of OpenClaw, the locally run open-source AI agent that is taking the internet by storm and sending users scrambling for Macs capable of running it.
While base configurations of the MacBook Air, iMac, M4 Mac mini, and other entry-level machines are still available the same day, upgrading the memory can now stretch delivery times by up to three weeks. Opting for the highest memory capacity on high-end models pushes the wait out even further: the M3 Ultra Mac Studio with 512GB of Unified Memory now takes five to six weeks to be delivered.
Alex Finn, founder and CEO of Creator Buddy, linked the shortage in an X post to demand driven by "the world's first true AI agent," referring to OpenClaw (previously Clawdbot/Moltbot).
"Something big is happening. First Mac minis. Now Mac Studios. Completely sold out. When I bought 2 Mac Studios a month ago my wait was 14 days. Now the wait is 54 days. The world has changed more in the last month than in the previous 100 years combined. The world's first… https://t.co/GMDgLeQzQu" (February 13, 2026)
While data centers are hungry for AI GPUs and some startups train models on multi-GPU rigs built from gaming cards, those setups are not ideal for a personal agentic AI run locally. This is especially true if your agent uses a large 70-billion-parameter model in FP16, which requires around 140GB of memory for the weights alone, according to AI investor Ben Pouladian. That won't fit inside a single RTX 5090 with 32GB of VRAM, and even if you manage to connect five such cards for a combined 160GB, you are still bound by the PCIe bottleneck.
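The arithmetic behind that 140GB figure is straightforward: each FP16 weight occupies 2 bytes. A quick sketch (the parameter counts and byte sizes below are illustrative, not tied to any particular model release):

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed for model weights alone, in decimal GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 70-billion-parameter model in FP16 (2 bytes per weight):
print(weight_memory_gb(70, 2))    # 140.0 -> far beyond a 32GB RTX 5090
# The same model quantized to 4 bits (0.5 bytes per weight):
print(weight_memory_gb(70, 0.5))  # 35.0 -> still a tight fit on consumer cards
```

Note that this counts weights only; the KV cache and activations add further memory overhead during inference.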
Apple's Unified Memory architecture sidesteps that problem. HBM is still considerably faster than the LPDDR used in Macs and MacBooks, but because all the processing units (CPU, GPU, and NPU) share the same pool of memory, there is no PCIe bottleneck to deal with and no need for an interconnect like NVLink, which is typically found only on data-center-class graphics cards.
"The world is just catching up on what we've been doing since 2024. For the last 2 years, at Eternal AI, we've been running clusters and clusters of Mac Studios. These Mac clusters are perfect for long-running agentic tasks and local private LLMs. Welcome home, @openclaw 🦞 https://t.co/LBMEeD5Cwi pic.twitter.com/EIPn7B6rqR" (February 13, 2026)
Because of this, more and more people who want to run their own local AI agent are purchasing high-memory Mac models. This isn’t limited to M3 Ultra Mac Studio units with 512GB of memory. Even Mac minis and MacBook Pros with upgraded memory now have a waiting time of two to three weeks.
We cannot definitively say that these delays are caused entirely by people buying these machines to run their own AI models; Apple CEO Tim Cook has acknowledged that the company is chasing memory supply to meet high customer demand. Still, additional pressure from the consumer side will not help a memory chip shortage that, at the moment, is primarily driven by AI hyperscalers and institutional buyers.

Jowi Morales is a tech enthusiast with years of experience working in the industry. He has written for several tech publications since 2021, covering tech hardware and consumer electronics.
ezst036:
It is good to see that people are getting Macs instead of Winspyware boxes. Still, OpenClaw runs just fine on Linux as well.
hwertz:
I'm not a Mac fan, but this is a good way to get a high-memory system for GPU compute. Intel GPUs support large shared memory too (at least in Linux), but while the modern Intel GPUs are a lot better than the old ones, I don't think they're anywhere near as fast as the GPUs in the M series.
Daniel15, replying to ezst036 ("Still, OpenClaw runs just fine on Linux as well"):
It does, but it's a lot harder to get 128GB+ of VRAM (or unified memory) on a Linux system, and you'll need at least that much for a powerful local AI model. In theory you could get something like the Framework Desktop, which uses those Ryzen AI chips with unified memory, but I'm not sure how well AI models work on AMD.
ThisIsMe:
The rationale here is a bit of a stretch. More likely this is about general supply availability, since getting fast, higher-capacity RAM is difficult for all OEMs at the moment. I'm not saying the speculation presented here is completely wrong; I'm sure it plays a small part in this minor issue. Hopefully that's all it is: innocent conjecture, and not some weird fallout from somebody's sunk-cost fallacy, where they try to convince others that something is a great deal and they urgently need one before they're all gone. Even though prices are stupidly sky-high, and nobody should be buying any of it right now for exactly that reason, to force a market correction and give the power back to the buyers.
...but I digress.
m3city:
Hi, an honest question: what are the use cases for home-trained AI? Are these LLMs or something else?
ezst036, replying to Daniel15:
Which Macs have 128+GB UMA? I genuinely do not know; the Mac world isn't where I live. Also, what's that going to cost?
Daniel15, replying to ezst036 ("Which Macs have 128+GB UMA?"):
The Mac mini goes up to 64GB and the Mac Studio goes up to 512GB of RAM. A 128GB Mac Studio is currently $3,500 new in the USA, but I wouldn't be surprised if the price goes up soon due to the memory shortages. The 512GB model is around $10,000. I'm usually not a Mac person, but honestly I don't know of any cheaper way to get that much unified memory in one system.
dada_dave, replying to ezst036 ("Which Macs have 128+GB UMA?"):
M4 Max machines can be equipped with up to 128GB, while the M3 Ultra goes up to 512GB. Minimum cost for an M4 Max with 128GB is $3,500. Minimum cost for a binned/full M3 Ultra with 96GB of RAM is $4,000/$5,500. A binned/full Ultra with 256GB of unified memory is $5,600/$7,100. Only the full Ultra comes with 512GB, which costs a minimum of $9,499. Memory bandwidth is the same on binned and full Ultra models. It's unclear whether Apple will keep its pricing the same when the M5 variants come (the Ultra is thought to be skipping the M4).
hwertz, replying to ezst036 and Daniel15:
To be honest, years back I ran Linux on a DEC Alpha, a PA-RISC, a PowerMac, and an IBM POWER; I think I threw it onto a 68K system one time... I just assumed they were talking about Asahi on an M-series machine (but realize now they probably weren't).
It's not THAT hard to find PCs with 128GB RAM. That said, Macs are not cheap, but neither are these systems. The likes of Dell have always marked up their top-shelf systems anyway (you won't get the models that can take 128GB without also paying for a 16- or 18-inch screen and probably a 1TB or larger SSD, for instance), and I'm sure with RAMpocalypse as an excuse: on Dell's site I see an already $2,700 system (saving $75 by getting Ubuntu instead of Windows...) where they charge, with a straight face, an additional $3,100 to bump it from 16GB to 128GB of RAM.
Daniel15, replying to hwertz ("It's not THAT hard to find PCs with 128GB RAM"):
It's not 128GB of RAM that's the hard part; it's 128GB of VRAM. The Mac Studio's unified memory means that the RAM and VRAM are one pool, so you can use a small(-ish) amount for the system and give the majority to AI models on the GPU/NPU.
Some PCs with soldered RAM can do this too (like the Framework Desktop), but otherwise you need several expensive Nvidia GPUs to load large AI models.
This is also why the Framework Desktop uses soldered RAM. DIMMs/SODIMMs just aren't fast enough at the moment, nor do they support such a wide bus (256 bits).
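The bus-width point can be made concrete with back-of-the-envelope arithmetic: peak memory bandwidth is the transfer rate multiplied by the bytes moved per transfer. A quick sketch (the transfer rates below are illustrative assumptions, not official specs for any particular machine):

```python
def bandwidth_gbps(transfers_mt_s: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return transfers_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# Dual-channel DDR5-5600 DIMMs on a typical desktop (128-bit bus):
print(bandwidth_gbps(5600, 128))  # 89.6 GB/s
# Soldered LPDDR5X-8000 on a 256-bit bus (Framework Desktop-class):
print(bandwidth_gbps(8000, 256))  # 256.0 GB/s
```

The wider soldered-memory bus roughly triples the bandwidth available to a GPU reading model weights, which is why socketed DIMMs fall short for this workload.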