According to Sentinel Beating monitoring, Anthropic has filed documents with the federal appeals court in Washington arguing that once its AI model is deployed in the Pentagon's environment, the company has neither visibility into the model nor any technical means to control or shut it down; there is no "kill switch." Anthropic also notes that the Pentagon had the opportunity to test the model before deployment.
The filing is the latest development in the dispute between Anthropic and the Pentagon over the "supply chain risk" designation. In March of this year, the Pentagon designated Anthropic a supply chain risk, citing the company's improper interference in how its technology is used in sensitive military operations. At the heart of the controversy is Anthropic's usage policy, which prohibits the use of Claude for autonomous weapons or mass surveillance, terms the Pentagon dismisses as "smoke and mirrors."
The lawsuit has now produced a split between two courts: the Washington court rejected Anthropic's request to suspend the supply chain risk label, while the California court granted it. The practical effect is that Anthropic cannot participate in new Pentagon contracts but can continue serving other government agencies. Meanwhile, the Trump administration is actively pushing to deploy Anthropic's new model, Mythos, across federal agencies, and agency heads are currently exploring how to use Mythos to defend against cyberattacks, a stance that contradicts the Pentagon's position that Anthropic poses a national security risk. The next hearing is scheduled for May 19.
