Read: My Thoughts on Anthropic v. The United States Government
I. What Was Filed
Anthropic filed two lawsuits on Monday, March 9th.
The first is a 48-page civil complaint in the U.S. District Court for the Northern District of California. It names the Department of Defense, the Treasury Department, the State Department, the Commerce Department, the General Services Administration, and their top officials as defendants. This is the broader suit. It argues that the government's actions constitute an unconstitutional campaign of retaliation against Anthropic for exercising its First Amendment rights, specifically for its public statements about AI safety and its refusal to drop two contractual restrictions on how the Pentagon uses Claude.
The second suit is a narrower petition filed in the U.S. Court of Appeals for the D.C. Circuit. Anthropic filed there because the Federal Acquisition Supply Chain Security Act (FASCSA) gives that specific court jurisdiction over challenges to supply chain risk designations. The arguments in both suits overlap substantially, but the two different legal authorities the government invoked required Anthropic to challenge them in two different courts.
The core claim in both filings is the same: the supply chain risk designation and the government-wide ban on Anthropic's technology are unlawful, exceed the statutory authority Congress granted to the Pentagon, violate the procedures Congress required, and amount to government retaliation for protected speech.
II. The Statutes at Issue
To understand why legal experts have been skeptical of the government's position, it helps to understand the two statutes the Pentagon appears to be relying on. Neither of them was designed for this situation.
The first is 10 U.S.C. § 3252. This gives the Secretary of Defense the authority to exclude companies from competing for contracts or subcontracts involving certain sensitive military information systems, such as intelligence systems, command and control, and weapons platforms. The law exists to prevent foreign adversaries from infiltrating the defense supply chain, and it has never before been used against an American company. The statute allows the Secretary to act without prior notice to the company and with very limited judicial review, which is why Congress designed it narrowly. It was meant for situations where, say, a Chinese firm had embedded itself in a defense contractor's software stack. It was not meant for a contract disagreement between the Pentagon and a domestic AI company over terms of service.
The second statute is the Federal Acquisition Supply Chain Security Act of 2018 (FASCSA, 41 U.S.C. §§ 1321-1328 and 4713). FASCSA gives the government broader authority to issue exclusion and removal orders against products that present a supply chain risk. But it requires the government to go through a formal process. There is supposed to be a risk assessment by a federal council, consideration of less restrictive alternatives, notice to the targeted company with an opportunity to respond, a written national security determination, and notification to Congress. Legal analysts at both Lawfare and Just Security have pointed out that the public record contains no indication any of these procedural steps were followed before the designation was issued.
Both statutes require a finding that the exclusion is necessary to protect national security and that less intrusive measures were not reasonably available. This is a mandatory legal requirement, not a formality.
III. Why the Designation Looks Legally Vulnerable
Several legal analyses have identified specific problems with the government's position. The Lawfare analysis by Michael Endrias and Alan Z. Rozenshtein is probably the most thorough, and its authors are not reflexively sympathetic to Anthropic; they are national security law scholars. Their conclusion is that the designation has serious legal problems on multiple fronts.
The first problem is operational history. Anthropic's two restrictions, no mass surveillance and no fully autonomous weapons, have not interfered with a single government mission to date. Claude has been used extensively across the Department of Defense for intelligence analysis, operational planning, cyber operations, and other applications. The Wall Street Journal reported that Claude was used in military operations including the arrest of Venezuelan leader Nicolás Maduro and intelligence assessments during the U.S. conflict with Iran. If the model has been functioning in classified military settings without the restrictions causing any operational issues, it is hard to argue that the restrictions create a supply chain risk that requires the most extreme remedy in the procurement toolkit.
The second problem is the less-intrusive-measures requirement. Under both statutory tracks, the government must demonstrate that it considered and rejected less restrictive alternatives before reaching for a supply chain designation. The most obvious alternative was to simply not renew the contract and switch to a competitor. That is a routine procurement decision available to any buyer who doesn't like a vendor's terms. It doesn't require a supply chain designation, a secondary boycott of defense contractors, or a government-wide ban. The fact that the government skipped this option and went directly to the most aggressive tool available is itself evidence that the designation is being used for something other than managing supply chain risk.
The third problem, and this may be the most damaging one for the government, is the public statements made by Pentagon officials. In Anthropic's court filing, a Pentagon official is quoted as saying the government intended to "make sure they pay a price" for the company's refusal. Secretary Hegseth posted on X that Anthropic's stance was "fundamentally incompatible with American principles." President Trump posted on Truth Social calling Anthropic a "radical left, woke company" and ordered every federal agency to "immediately cease" all use of its technology. The top Pentagon official for research and technology used his official account to call Amodei "a liar" with "a God-complex."
This matters legally because Anthropic's lawsuit rests partly on a First Amendment retaliation claim. For that claim to succeed, Anthropic needs to show that the government took adverse action because of Anthropic's protected speech. Normally this is hard to prove. But when the President and the Defense Secretary publicly state that the action was taken because Anthropic expressed views they disagreed with, they have essentially handed Anthropic the evidence on a silver platter. As the Lawfare analysis put it, Hegseth's own public statements "may have doomed the government's litigation posture before it even begins."
IV. The Scope Problem
There is also a significant dispute about what the supply chain risk designation actually does.
When Hegseth announced the designation, he stated that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Read literally, that would mean any company with a Pentagon contract would be prohibited from using Claude for anything, even completely unrelated commercial work. That interpretation would be devastating for Anthropic and would sweep in a large share of major American corporations.
But the statutes don't actually grant that authority. Section 3252 applies specifically to contracts and subcontracts for certain enumerated sensitive information systems. FASCSA orders apply to federal work, not to all commercial activity. Multiple law firms and major technology companies have reached the same conclusion independently. Microsoft's legal team reviewed the designation and confirmed that Claude would remain available to their customers through M365, GitHub, and Azure AI Foundry for non-defense work. Google and Amazon issued similar statements.
Anthropic's CEO has said the formal designation letter the company received is narrower than Hegseth's public statements suggested. It applies to the use of Claude in work directly related to Pentagon contracts, not to all commercial relationships. But the gap between Hegseth's public statements and the actual legal scope of the designation has already caused damage. At least ten defense-focused portfolio companies at a single venture capital firm have preemptively stopped using Claude entirely rather than try to draw the line between defense and non-defense work. In defense contracting, compliance caution is the norm, and ambiguity from the Secretary of Defense is effectively an instruction even when it has no legal basis.
V. The Amicus Brief
Within hours of Anthropic filing its lawsuits, 37 researchers and engineers from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's position. They signed as individuals, not as representatives of their companies, but the names on the filing carry weight. The most notable signatory is Jeff Dean, Google's chief scientist and the person who leads the Gemini AI program. Other signatories include researchers like Leo Gao and Pamela Mishkin from OpenAI and Alexander Matt Turner from Google DeepMind.
The brief makes three arguments. First, that Anthropic's two red lines address legitimate technical concerns. On surveillance, the researchers explain that AI could combine fragmented datasets, such as facial recognition, geolocation, financial transactions, and social media, into a unified real-time surveillance system operating across millions of people simultaneously. On autonomous weapons, they note that AI systems behave unpredictably in environments that differ from their training data, making reliable target identification difficult. Second, that the supply chain risk designation was an improper use of government authority. The brief points out that if the Pentagon didn't like Anthropic's terms, the straightforward response was to cancel the contract and use a competitor's product. Third, that the designation chills debate across the entire AI industry about the risks and appropriate uses of frontier systems.
This last point is worth paying attention to. Many of the same employees who signed the brief also signed open letters in the preceding weeks urging the Pentagon to withdraw the designation and calling on the leadership of their own companies to support Anthropic and refuse to allow unilateral military use of their AI systems. The amicus brief is unusual because it involves employees of Anthropic's direct competitors publicly arguing that what happened to Anthropic threatens all of them. It's one thing for a company's allies to file supporting documents. It is something else entirely when the people who build the competing products say the government's actions endanger the whole field.
VI. The OpenAI Complication
There is an awkward subplot to all of this.
Within hours of the Trump administration's order against Anthropic on February 27th, OpenAI announced it had reached its own deal with the Pentagon. The timing was not subtle. The deal appeared to give the Pentagon the unrestricted access it had been demanding from Anthropic, and it drew immediate backlash. Users began uninstalling ChatGPT. Claude briefly topped the iPhone App Store charts. At least one OpenAI executive resigned over concerns that the announcement was rushed without adequate safeguards.
OpenAI later published a FAQ claiming it had negotiated three red lines of its own: no mass domestic surveillance, no directing autonomous weapons systems, and no high-stakes automated decisions. These are substantially similar to the restrictions Anthropic had been insisting on. But buried in the FAQ was a question asking what would happen if the government violated the contract terms. OpenAI's answer was that they could terminate the contract, but that they didn't expect that to happen.
Amodei responded in an internal memo, later reported by The Information, calling OpenAI's staff "gullible" and accusing the company's leadership of "straight-up lies." He described OpenAI's approach as "safety theater." Altman then appeared to respond indirectly, saying it was "bad for society" when companies abandon democratic norms because they dislike who is in power. Relations between the two companies have not recovered.
The CFR analysis of this situation is pointed. It notes that OpenAI's contract essentially assumes the government will honor the terms voluntarily, while Anthropic's position was that contractual terms mean nothing without enforcement mechanisms. Anthropic negotiated for restrictions that the government couldn't override unilaterally. OpenAI negotiated for restrictions that the government agreed to but could presumably walk back at any time. Whether those are meaningfully different protections or just different ways of arriving at the same vulnerability is something the courts will not address, but it is probably the right question.
VII. What Happens Next
The case is in its early stages. Anthropic has asked the Northern District of California for a temporary restraining order to prevent the supply chain designation from being enforced while the litigation proceeds. If granted, this would allow Anthropic to continue working with defense contractors during the lawsuit.
The government will need to justify the designation in court. Given the procedural requirements of both statutes and the public record of statements by Trump and Hegseth, legal observers have generally been skeptical that the designation will survive judicial review. The Lawfare analysis concluded that the designation "won't survive first contact with the legal system." The Just Security analysis called it "highly unlikely" that the Secretary could meet the statutory requirements. A Mayer Brown analysis noted that both parties acknowledge the negotiations broke down over terms of use, not over adversarial risks to DoD systems, which is what the supply chain risk statutes were designed to address.
There is also the possibility of a negotiated resolution. The Financial Times reported on March 4th that Anthropic had reopened talks with the Pentagon. At the Morgan Stanley conference on March 4th, Amodei said Anthropic was trying to "deescalate the situation and come to some agreement that works for us and works for them." Pentagon undersecretary Emil Michael told Pirate Wires that he would be "open-minded" about a resolution. CBS reported that in the five days after Trump canceled Anthropic's contracts, company executives expressed regret to Pentagon officials over what they characterized as a misunderstanding.
So the situation is still moving. The lawsuits establish Anthropic's legal position and create leverage for negotiation, but both sides appear to prefer a deal over prolonged litigation, especially with ongoing military operations in Iran that reportedly depend in part on Claude's intelligence analysis capabilities.
Whether they reach one is a different question. The administration has not shown much interest in compromise so far, and the supply chain risk designation is not easy to quietly walk back once it's been issued with this much publicity. But the legal problems with the designation are real, and courts may not give the government the deference it expects when the public record so clearly suggests retaliation rather than genuine security concern.
We'll see.