A federal judge heard arguments Tuesday on a motion by Anthropic PBC, a leading San Francisco artificial intelligence developer, to block the federal government from designating it a “supply chain risk” and blacklisting it from government contracts.  

After a nearly two-hour hearing, U.S. District Judge Rita Lin said she would reserve her decision on Anthropic’s motion, but her comments during the hearing indicated skepticism about the government’s position. 

The lawsuit arises from the U.S. Department of Defense’s widely publicized Feb. 27 designation of Anthropic as a “supply chain risk” and an accompanying directive from Secretary Pete Hegseth that said “[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” 

Anthropic said that the action caused it “irreparable injury,” including stigmatization of its good name, deprivation of government contracts, and serious damage to its relationships with business partners. As a result, Anthropic argued, the government “has put millions, possibly billions, of dollars at risk.”  

Anthropic asked the court to enter a preliminary injunction against the department and the other agency defendants named in the complaint. A preliminary injunction is a court order that is designed to preserve the status quo while a lawsuit is being litigated.  

Lin recognized that the case implicated important issues. At a scheduling conference on March 10, she said that the matter was “consequential for both sides” and accelerated the Tuesday hearing after a U.S. Department of Justice lawyer declined to provide assurances that the government would not take additional adverse action against Anthropic before a later hearing date. 

Pages from the Anthropic website and the company’s logos are displayed on a computer screen on Thursday, Feb. 26, 2026. (AP Photo/Patrick Sison)

At the core of the dispute is Anthropic’s refusal to drop two provisions in the usage policy governing its AI technology, which is already in wide use by the department and many other federal agencies. The first says Anthropic’s technology cannot be used for mass surveillance of American citizens; the second, that it cannot be used for lethal autonomous weaponry.  

Anthropic alleged that Hegseth acted in retaliation for Anthropic’s public position on AI safety, in violation of the company’s rights under the First Amendment. The company also argued that the government’s directive violated the requirements of the relevant federal statutes as well as the constitutional prohibition against depriving a person of property without due process of law. 

Anthropic was supported by more than a dozen groups and organizations given permission by Lin to file “amicus” or “friend of the court” briefs. 

‘An attempt to cripple Anthropic’

In her preliminary remarks, Lin observed that the government’s actions against Anthropic did not appear to be reasonably tailored to respond to a disagreement about contractual provisions but rather to be “an attempt to cripple Anthropic” as punishment for taking a position that the department did not like.  

The department argued that the government has wide discretion in choosing the contractors it works with and the terms under which the work is performed. The department said that when it proposed to use Anthropic’s technology for all lawful purposes, Anthropic pushed back and tried to impose limitations. In its view, that amounted to an unacceptable risk to national security.  

The government’s position was supported by a “declaration” — a sworn statement — submitted by Emil Michael, identified as the Under Secretary of Defense for Research and Engineering and responsible for “spearheading” the department’s efforts “to ensure U.S. military technological superiority.”  

Michael alleged that Anthropic had become a supply chain risk because of a cluster of factors.  

First, the “opaque” nature of the large language models that Anthropic supplies to the department creates a “baseline risk.” Second, Anthropic retains an “unusual degree of control” over the products after their employment. Third, Anthropic has taken an “adversarial posture” toward the department’s mission and the way it is conducted. 

Michael acknowledged that such opacity is true of other AI systems but claimed the risk was “significantly elevated” by the actions of Anthropic’s leadership. Michael specifically called out Anthropic’s leadership for demonstrating “bad faith by sharing with the press unclassified but sensitive details of private conversations with [Department of War] leadership in order to exert public pressure on DoW to concede to Anthropic’s demands.” 

Competing claims over trust, control

Eric Hamilton, a DOJ lawyer representing the defendants, told Lin that the case was about trust. Given the sensitivity of the department’s function, it needs to trust its contractors, and given Anthropic’s actions, the department no longer trusted the company.  

According to Hamilton, Anthropic wanted to dictate how the department could use Anthropic’s technology, raising a serious risk that Anthropic would try to subvert or disable its technology in the future based on a disagreement with the department’s mission or objectives.  

When Anthropic’s counsel responded that once the model is employed, Anthropic has no ability to change it, Hamilton said it could be done via updates to system software, though he conceded that the department would have the right to evaluate updates before they were placed into service.  

Anthropic’s lawyer asked the court to recognize that the company is not seeking to prevent the government from terminating its contract; it seeks only to enjoin the campaign of retribution undertaken by the department and its secretary and to ensure that the department follows the law in its future course of action. 

At the close of the hearing, Lin said she would take the matter under advisement and issue her decision in a few days. 

Joe Dworetzky is a second-career journalist. He practiced law in Philadelphia for more than 35 years, representing private and governmental clients in commercial litigation and insolvency proceedings. Joe served as City Solicitor for the City of Philadelphia under Mayor Ed Rendell and from 2009 to 2013 was one of five members of the Philadelphia School Reform Commission, with responsibility for managing the city’s 250 public schools. He moved to San Francisco in 2011 and began writing fiction and pursuing a lifelong interest in editorial cartooning. Joe earned a master’s in journalism from Stanford University in 2020. He covers legal affairs and writes long-form investigative stories. His occasional cartooning can be seen in Bay Area Sketchbook. Joe encourages readers to email him story ideas and leads at joe.dworetzky@baycitynews.com.