According to 1M AI News monitoring, the Amodei memo revealed key details of the breakdown in negotiations between Anthropic and the Pentagon. In the final stages, the Pentagon proposed a compromise: it would accept all of Anthropic's contract terms on the condition that a clause restricting "analysis of bulk acquired data" be removed. This was precisely the scenario Anthropic was most concerned about: the Pentagon could legally purchase GPS location and other private data of U.S. citizens from third-party vendors (data often collected through buried terms-of-service agreements), analyze it at scale with AI, build citizen profiles, and track movement patterns. Anthropic considered the Pentagon's singling out of this one clause for removal "highly suspicious" and rejected the proposal.
In the memo, Amodei deconstructed the security commitments of the OpenAI contract point by point. OpenAI stated that its contract allowed the Pentagon to use AI for "all lawful purposes" while a "safety stack" was in place to prevent misuse. Amodei argued that this mechanism is largely ineffective: the model itself cannot determine whether a human is in the loop of a weapon system, nor identify the source and nature of the data it is analyzing, and adversarial attacks are frequent; often all it takes to bypass protections is feeding the model a misleading description of the data. He also noted that Anthropic had internally discussed OpenAI's practice of stationing forward-deployed engineers to oversee deployments a few months earlier and had concluded that it was "viable only in very few cases and should not be relied upon as a safeguard."
The memo further exposed a double standard. Anthropic had attempted to include in its contract some of the same security provisions as OpenAI (as a supplement to an acceptable use policy) but was rebuffed by the Pentagon. Amodei claimed to have negotiation email threads proving that the statement "OpenAI's terms were offered to us and rejected by us" was false.
Regarding the definition of "lawful purposes," Amodei pointed out two key loopholes. First, while the Pentagon does have lawful domestic surveillance authorities, their impact was limited before the AI era; with AI, the same authorities carry very different implications. Second, the Pentagon's claim that "human in the loop is a legal requirement" is inaccurate: it is merely an internal policy adopted during the Biden administration, which Defense Secretary Hegseth could unilaterally change. Under public pressure, OpenAI updated its contract language to add restrictions on domestic surveillance, but legal experts criticized the new language for prohibiting only "intentional" and "deliberate" surveillance, calling it too narrow and open to interpretation.
