According to 1M AI News monitoring, the Pentagon's decision to classify Anthropic as a "supply chain risk" is now disrupting a key channel of its government business. Over the past year, Anthropic has provided AI services to U.S. defense and intelligence agencies through Palantir Technologies: the Pentagon uses the Claude model, hosted on AWS, together with Palantir software to identify patterns and support decision-making across large volumes of classified data. If the classification takes effect, Palantir will have to stop using Claude in its military operations. Palantir derives approximately 42% of its revenue (nearly half of its roughly $4.5 billion annual total) from U.S. government contracts. Part of Palantir's software is tailored specifically to Claude, and switching to another model vendor is expected to take about two weeks. Insiders say Palantir should still earn roughly the same revenue from these contracts after the switch. For Anthropic, which expects revenue as high as $18 billion this year, this business is relatively small.
Palantir CEO Alex Karp, speaking on Tuesday at the Washington Defense Technology Summit hosted by Andreessen Horowitz, indirectly criticized Anthropic, warning Silicon Valley not to make an enemy of the U.S. military: "If Silicon Valley thinks it can take away everyone's white-collar jobs and then screw the military... if you don't think that eventually leads to our technology being nationalized, you're retarded. That's the end of that road."
Amodei's internal memo, meanwhile, criticized Palantir from a different angle. He revealed that during the Pentagon negotiations, Palantir pitched Anthropic a set of "classifiers," claiming a machine-learning system could enforce red lines. Amodei called the approach "about 20% real and 80% safety theater," because the model cannot assess the broader context in which it operates: it does not know whether a weapon system keeps humans in the loop (the autonomous-weapons issue), or whether the data it analyzes comes from overseas or from U.S. citizens, was obtained with user consent or purchased through gray channels (the mass-surveillance issue). On top of that, jailbreak attacks remain frequent and easy to mount.
Amodei further argued that the security layer Palantir provides is "almost entirely security theater," and that Palantir's read of Anthropic's stance amounts to: "You have some unhappy employees, you need to give them something to appease them, or make what's happening invisible to them; that's the service we provide." He said that everyone involved, including the Pentagon, Palantir, and Anthropic's own political advisors, assumed the problem Anthropic needed to solve was merely employee-morale management. Notably, OpenAI did not participate in Pentagon-related work through Palantir.
