Anthropic forms new security council to help secure AI's place in government
Is compute access a matter of national security?

On Aug. 27, Anthropic, the company behind Claude, unveiled what it calls its "National Security and Public Sector Advisory Council," an 11-member council that includes a former U.S. senator and intelligence chief, assembled to guide how its models are deployed in U.S. defense and government applications.
Partnering with the Pentagon
This might look like yet another Beltway advisory board, but it actually appears to be Anthropic's way of locking in its place in the compute-hungry, deep-pocketed U.S. national security sector.
Anthropic has already launched Claude Gov, a version of its AI tuned to "refuse less" when handling sensitive or classified queries. It has also secured a $200 million prototype contract with the Pentagon's Chief Digital and Artificial Intelligence Office alongside Google, OpenAI, and xAI. Claude Gov is live at Lawrence Livermore National Laboratory and is being offered to federal agencies for a symbolic $1 to spur adoption.
This push toward the public sector matters because training frontier models is now all about infrastructure. Anthropic’s next-gen Claude models will run on “Rainier,” a monster AWS supercluster powered by hundreds of thousands of Trainium 2 chips. Amazon has poured $8 billion into Anthropic and has positioned it as the flagship tenant for its custom silicon. Meanwhile, Anthropic is hedging with Google Cloud, where it taps TPU accelerators and offers Claude on the FedRAMP-compliant Vertex AI platform.
By contrast, OpenAI still relies heavily on Nvidia GPUs via Microsoft Azure, though it has started renting Google TPUs. Elon Musk's xAI, for its part, fell back on Nvidia and AMD hardware after the custom Dojo wafer-level processor initiative, housed at Tesla, was scrapped. Google DeepMind remains anchored to Google's in-house TPU pipeline but has kept a lower profile in defense. None of them, however, has assembled anything like Anthropic's new council.
GPUs, geopolitics, and government
Anthropic’s council can also be seen as a sign that access to compute is becoming a national security priority. The Center for a New American Security has already acknowledged that securing and extending the government’s access to compute will play a “decisive role in whether the United States leads the world in AI or cedes its leadership to competitors.”
Nvidia Blackwell GPUs are sold out through most of 2025, export controls are unpredictable, and U.S. agencies are scrambling to secure reliable training capacity. By recruiting insiders from the Department of Energy and the intelligence community, Anthropic is aiming to secure both the hardware and policy headroom it needs to stay competitive.
This strategy is risky: Tying the Claude brand to the Pentagon may alienate some users and could saddle Anthropic with political baggage. But there are also clear rewards, including steady contracts, priority access to chips, and a direct role in shaping public sector AI standards. It's a calculated bet, and Anthropic's leadership is clearly hoping it pays off.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.