Leading artificial intelligence developer Anthropic PBC filed a lawsuit in federal court in San Francisco on Monday against the U.S. Department of War and 16 other federal agencies over its designation as a “supply chain risk,” a category that threatens its government contracts and customer relationships.
The company has asked the court to issue orders promptly to preserve the status quo pending resolution of the litigation.
Anthropic, headquartered in San Francisco, is the creator of the popular chatbot Claude and the developer of various “frontier” AI models used by customers around the world. The company has worked with the Department of War and more than a dozen other federal agencies to help them incorporate its technology and products into their workflows.
Anthropic is a public benefit corporation and prides itself on its stated commitment to develop AI models that are “safe.” Dario Amodei, Anthropic’s CEO, split from OpenAI in 2020 over concerns about OpenAI’s commitment to the long-term safety of its advancing technology.

According to its court filings, Anthropic provides its products to all clients subject to the terms of a Usage Policy that contains limitations or guardrails on how the customer may use the technology. The company says that for commercial and civilian users, the policy “prohibits engaging in surveillance, compromising computer networks, and designing weapons or other systems to cause harm or loss of human life.”
In situations where the government is the client, the filings say that Anthropic adds an addendum to the policy that allows the government more freedom of usage than a non-governmental user. However, two key restrictions remain: the addendum “forbid[s] the use of Anthropic’s models for lethal autonomous warfare or for mass surveillance of Americans.”
In requesting an injunction, Anthropic said these limitations have been in place since it began contracting with the U.S. government in November 2024 and have not caused any problems.
Shifting political landscape
The company alleges that it is the leading AI supplier to the government and “Claude is reportedly the Department’s most widely deployed frontier AI model.” It adds that after a rigorous 18-month security review, Anthropic was granted a “Top Secret facility security clearance.”
However, beginning in September 2025, the Department of War demanded that Anthropic scrap the usage policy and addendum and allow the department, as well as its contractors and subcontractors, to use Claude “for all lawful usages.” The parties allegedly negotiated into the new year, with Anthropic agreeing to some changes in the policy but not wavering on the requirement that its technology not be used for lethal autonomous warfare or mass surveillance of the American public.
The complaint alleges that those two conditions were based on the company’s commitment to AI safety and aligned with its view that at this stage in its development, the AI technology cannot be safely used for those purposes.
On Feb. 24, four days before the United States and Israel began a bombing campaign against Tehran that killed Iran’s Supreme Leader Ayatollah Ali Khamenei, Secretary of War Pete Hegseth allegedly gave Anthropic an ultimatum: drop the restrictions by 5 p.m. on Feb. 27, or the company would either be deemed a supply chain risk or have Claude “commandeered” as an asset essential to U.S. security.
Amodei issued a public statement explaining the company’s position and Anthropic tried to keep negotiations alive. However, on Feb. 27, President Donald Trump posted a statement on social media “directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” The directive called Anthropic a “Radical Left AI company” with employees who were “Leftwing nut jobs.”
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic… pic.twitter.com/aIEx92nnyx — The White House (@WhiteHouse) February 27, 2026
That same evening, Hegseth issued a “final decision” that directed the department to designate Anthropic as a supply chain risk to national security, and then declared that “[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
He followed that with what Anthropic argues is a directly contradictory statement: Anthropic will continue to provide services to the department for up to six months. On one hand, then, the company is a grave security risk that must be cut off; on the other, it is so essential to national security that its services must continue.
A retaliatory strike
Anthropic’s court filings go on to detail the many ways in which the government’s actions have severely damaged its business, not only through the loss of government contracts but also by threatening its relationships with customers.
Anthropic’s lawyers contend that the government acted in retaliation for Anthropic and Amodei’s public position on AI safety, in violation of the First Amendment’s protection of freedom of speech. They also argue that the government’s directive violated the requirements of the relevant statutes as well as the constitutional prohibition against depriving a person of property without due process of law.
The case has been assigned to U.S. District Judge Rita Lin of the Northern District of California. A number of entities, including a group of Anthropic and OpenAI employees, have requested that Lin allow them to file amicus or “friend of the court” briefs in support of Anthropic.
Because of the profound impact on the company, Anthropic is seeking an order to prevent the government from implementing the directive until the litigation is resolved and is pressing the court to hold a hearing on its request as soon as Friday.
