OpenAI strikes deal with Pentagon following Claude blacklisting — Anthropic to challenge supply chain risk designation in court
It’s understood that the DoD has agreed to OpenAI’s “red lines” on mass surveillance and autonomous weapons.
OpenAI CEO Sam Altman announced late Friday night that the company had reached an agreement with the U.S. Department of Defense (“rebranded” as the Department of War under the current administration) to deploy its AI models on the Pentagon's classified network. The deal includes the same two safety conditions Anthropic was effectively blacklisted for insisting on: no domestic mass surveillance, and human oversight of decisions involving lethal force and autonomous weapons.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote in a post on X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman’s announcement came not long after President Trump “ordered” every federal agency to immediately stop using Anthropic's technology, following weeks of tense negotiations between Anthropic and Pentagon officials that ultimately collapsed. The DoD had labeled Anthropic a supply chain risk and demanded that it drop restrictions on its Claude model, requiring the model to be available for "all lawful purposes." Anthropic refused. Hours later, the Pentagon accepted functionally identical conditions from OpenAI.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of…" — Sam Altman, post on X, February 28, 2026
It’s understood that no formal contract between OpenAI and the Pentagon has been signed yet, and that the agreement also limits OpenAI's deployment to cloud environments, not edge systems such as aircraft or drones.
Anthropic argued that the law hasn't kept pace with what AI can do, particularly in aggregating publicly available data for surveillance purposes. Altman appeared to agree, stating in an internal memo to OpenAI staff that the company shares Anthropic's "red lines" and wanted to help "de-escalate" the situation.
By Friday afternoon, however, he held a company all-hands meeting, telling employees the deal was taking shape. Around 70 OpenAI employees have separately signed an open letter titled "We Will Not Be Divided" expressing solidarity with Anthropic.
Anthropic was the first AI lab to deploy its models on the Pentagon's classified networks, through a partnership with Palantir. OpenAI had previously held a $200 million DoD contract for non-classified use cases. Anthropic said Friday it will challenge the supply chain risk designation in court, stating that "no amount of intimidation or punishment from the Department of War will change our position."

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
wyldpea: It's all about money. There's no more ethics. You may agree with Altman, but when they turn their sights on the American people, we'll see how well that goes! Skynet isn't so farfetched anymore.
CelicaGT (replying to wyldpea): Ethics in business died many decades ago. What you see here is a symptom, not the disease.
Notton: These "red lines" from OpenAI seem to be the same thing as "Guardrails" from Anthropic. I assume Altman got the deal because he's easier to ply and bend over.
Jabberwocky79: This smacks of a difficult client insisting on a contractor doing something a certain way even though they are told it isn't possible. The client gets mad, thinking they are being lied to, and goes and finds someone else, who ends up telling them the exact same thing. The client has to concede the first guy was right but is too proud to go back and admit it, so the client just goes with the second guy even though nothing has changed.