Federal Judge Blocks Pentagon From Labeling Anthropic a National Security Threat

Coinpedia

This past week, a federal judge in San Francisco blocked the Pentagon and the Trump administration from enforcing a national security designation against Anthropic, the artificial intelligence (AI) company that refused to remove safety restrictions from its Claude models.

Court Halts Trump Administration’s Ban on Anthropic’s Claude AI for Federal Agencies

U.S. District Judge Rita F. Lin issued the preliminary injunction on March 26, finding that the government’s actions against Anthropic likely violated the First Amendment, denied the company due process, and exceeded statutory authority under the Administrative Procedure Act. The ruling is stayed for seven days, giving the administration until approximately April 2 to file an emergency appeal with the Ninth Circuit.

The dispute began when the Department of Defense (DoD) sought unrestricted access to Claude for federal use. Anthropic had long maintained two exceptions in its acceptable use policy: Claude would not be used for mass domestic surveillance of American citizens or for lethal autonomous weapons systems operating without meaningful human oversight. The DoD demanded that those guardrails be removed. Anthropic refused.

Negotiations broke down in late 2025. The conflict became public through CEO Dario Amodei’s written statements and an essay outlining the company’s position on AI safety. DoD officials viewed the restrictions as Anthropic attempting to dictate government policy.

On Feb. 27, 2026, President Trump posted on Truth Social, directing all federal agencies to immediately halt use of Anthropic technology, with a six-month phase-out period. Defense Secretary Pete Hegseth announced a supply chain risk designation under 10 U.S.C. § 3252 — a statute previously applied to foreign adversaries — labeling Anthropic a potential risk of “sabotage” and “subversion.”

Several federal contractors paused or terminated deals with the company following the designation. Anthropic filed suit on March 9 in the Northern District of California, alleging retaliation, due process violations, and APA breaches. A related action was filed in the D.C. Circuit.

In a 43-page order, Judge Lin enjoined the DoD, 17 other federal agencies, and Secretary Hegseth from implementing or enforcing any of the challenged actions. She ordered restoration of the status quo, allowing Anthropic to continue existing federal contracts and partnerships.

Lin wrote that the government’s conduct represented “classic illegal First Amendment retaliation.” She noted that the timing of the actions, along with internal government communications referencing Anthropic’s “rhetoric,” “arrogance,” and “strong-arming,” pointed directly to punitive intent tied to the company’s public statements on AI safety.

On due process, the court found the government had stripped Anthropic of liberty interests in its reputation and business operations without providing pre-deprivation notice or a hearing. Lin also found that the statutory designation had never before been applied to an American company under these circumstances, and that prior government vetting of Anthropic, including Top Secret clearances, FedRAMP authorization, and contracts worth up to $200 million, showed no genuine security concern. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.

The court found potential financial harm to Anthropic in the hundreds of millions to billions of dollars, along with reputational damage that monetary relief could not fully repair. Amicus briefs from military leaders and AI researchers cited risks to defense readiness and implications for the broader AI safety debate.

Anthropic said it was grateful for the court’s speed and that it planned to keep working with the federal government. The company stated its goal remained to ensure Americans have access to safe and reliable AI.

The injunction does not resolve the underlying contract dispute. No final merits decision has been issued. A separate challenge in the D.C. Circuit remains pending, and the administration retains the option to appeal.

FAQ 🔎

  • What did the federal judge rule regarding Anthropic? U.S. District Judge Rita F. Lin issued a preliminary injunction on March 26, blocking the Pentagon and Trump administration from enforcing a national security designation and federal ban against Anthropic and its Claude AI models.
  • Why did the Pentagon designate Anthropic a supply chain risk? The DoD sought unrestricted use of Claude AI, including for mass surveillance and autonomous weapons, and labeled Anthropic a supply chain risk after the company refused to remove those safety restrictions.
  • Is the injunction currently in effect? The injunction is stayed for seven days from March 26 to allow the government to file an emergency appeal, meaning it does not take effect until approximately April 2, 2026.
  • What happens next in the Anthropic vs. Pentagon case? The case continues on its merits, a related action remains pending in the D.C. Circuit, and the Trump administration may seek emergency relief from the Ninth Circuit before the stay expires.